SBBD

Paper Registration

English Information

Authors
# Name
1 Alexander Feitosa (alexander.feitosa@cefet-rj.br)
2 Érica da Silva (ericacqueiroz@gmail.com)
3 Gustavo Guedes (gustavo.guedes@cefet-rj.br)

References
# Reference
1 Bryman, A. and Cramer, D. Quantitative data analysis with SPSS for Windows: A guide for social scientists, 1997.
2 Azevedo, G. d., Pettine, G., Feder, F., Portugal, G., Schocair Mendes, C. O., Castaneda Ribeiro, R., Mauro, R. C., Paschoal Júnior, F., and Guedes, G. Nat: Towards an emotional agent. In 2021 16th Iberian Conference on Information Systems and Technologies (CISTI). IEEE, Chaves, Portugal, pp. 1–4, 2021.
3 Blodgett, S. L., Barocas, S., Daumé III, H., and Wallach, H. Language (technology) is power: A critical survey of "bias" in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, pp. 5454–5476, 2020.
4 Davani, A., Díaz, M., and Prabhakaran, V. Dealing with disagreements: Looking beyond the majority vote in subjective annotations. Transactions of the Association for Computational Linguistics vol. 10, pp. 92–110, 2022.
5 Geva, M., Goldberg, Y., and Berant, J. Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, Hong Kong, China, pp. 1161–1166, 2019.
6 Havens, V. and Hedges, M. Uncertainty and inclusivity in gender bias annotation. In Proceedings of the 1st Workshop on Gender Bias in Natural Language Processing. Association for Computational Linguistics, Abu Dhabi, UAE, pp. 25–31, 2022.
7 Kowsari, K., Meimandi, K. J., Heidarysafa, M., Mendu, S., Barnes, L., and Brown, D. Text classification algorithms: A survey. Information 10 (4): 150, 2019.
8 Landis, J. R. and Koch, G. G. The measurement of observer agreement for categorical data. Biometrics 33 (1): 159–174, 1977.
9 Lim, S. S., Udomcharoenchaikit, C., Limkonchotiwat, P., Chuangsuwanich, E., and Nutanong, S. Identifying and mitigating annotation bias in natural language understanding using causal mediation analysis. In Findings of the Association for Computational Linguistics: ACL 2024. Association for Computational Linguistics, Bangkok, Thailand, pp. 11548–11563, 2024.
10 Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., and Galstyan, A. A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR) 54 (6): 1–35, 2021.
11 Minatel, D., da Silva, A. C. M., dos Santos, N. R., Curi, M., Marcacini, R. M., and de Andrade Lopes, A. Data stratification analysis on the propagation of discriminatory effects in binary classification. In Anais do 11º Symposium on Knowledge Discovery, Mining and Learning (KDMiLe). Sociedade Brasileira de Computação, Belo Horizonte, MG, pp. 73–80, 2023.
12 Paullada, A., Raji, I. D., Bender, E. M., Denton, E., and Hanna, A. Data and its (dis)contents: A survey of dataset development and use in machine learning research. Patterns 2 (11): 100336, 2021.
13 Raji, I. D., Bender, E. M., Paullada, A., Denton, E., and Hanna, A. AI and the everything in the whole wide world benchmark. In Proceedings of the NeurIPS 2021 Datasets and Benchmarks Track. NeurIPS, Virtual Conference, pp. 1–10, 2021.
14 Schwindt, L. C. Predizibilidade da marcação de gênero em substantivos no português brasileiro. Gênero e língua (gem): formas e usos vol. 1, pp. 279–294, 2020.
15 Shah, D. S., Schwartz, H. A., and Hovy, D. Predictive biases in natural language processing models: A conceptual framework and overview. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, pp. 5248–5264, 2020.
16 Silva, M. O. and Moro, M. M. NLP pipeline for gender bias detection in Portuguese literature. In Anais do Seminário Integrado de Software e Hardware (SEMISH). Sociedade Brasileira de Computação, Brasília, Brazil, pp. 1–10, 2024.
17 Stańczak, K. and Augenstein, I. A survey on gender bias in natural language processing, 2021.
18 Sun, T., Gaut, A., Tang, S., Huang, Y., Sap, M., Clark, E., Friedman, D., Choi, Y., Smith, N. A., Zettlemoyer, L., et al. Mitigating gender bias in natural language processing: Literature review. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Florence, Italy, pp. 1630–1640, 2019.
19 Zhao, J., Wang, T., Yatskar, M., Ordonez, V., and Chang, K.-W. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, pp. 2979–2989, 2017.