Albladi, A. et al. (2025). Hate Speech Detection Using Large Language Models: A Comprehensive Review. IEEE Access, 13:20871–20892.
Aluru, S. S., Mathew, B., Saha, P., and Mukherjee, A. (2020). Deep learning models for multilingual hate speech detection.
Anagnostidis, S. and Bulian, J. (2024). How susceptible are LLMs to influence in prompts?
Assis, G. et al. (2024a). Explorando técnicas de aprendizado em modelos de linguagem para classificação de discurso de ódio e ofensivo em português. Linguamática, 16(2):91–113.
Assis, G. et al. (2024b). Exploring Portuguese Hate Speech Detection in Low-Resource Settings: Lightly Tuning Encoder Models or In-context Learning of Large Models? In Proc. of the 16th PROPOR, pages 301–311, Santiago de Compostela. ACL.
Brown, T. B. et al. (2020). Language Models are Few-Shot Learners.
Chiu, K.-L., Collins, A., and Alexander, R. (2022). Detecting Hate Speech with GPT-3.
Choi, H. K. and Li, Y. (2024). PICLe: Eliciting Diverse Behaviors from Large Language Models with Persona In-Context Learning. In Proc. of the 41st ICML, volume 235 of Proceedings of Machine Learning Research, pages 8722–8739. PMLR.
Cohere (2025). Command A: An Enterprise-Ready Large Language Model.
DeepSeek-AI (2025). DeepSeek-V3 Technical Report.
Dong, Q. et al. (2024a). A Survey on In-context Learning. In Proc. of the 2024 EMNLP, pages 1107–1128, Miami, Florida, USA. ACL.
Guo, K., Hu, A., Mu, J., Shi, Z., Zhao, Z., Vishwamitra, N., and Hu, H. (2024). An Investigation of Large Language Models for Real-World Hate Speech Detection.
Kim, J. W., Guess, A., Nyhan, B., and Reifler, J. (2021). The distorting prism of social media: How self-selection and exposure to incivility fuel online comment toxicity. Journal of Communication, 71(6):922–946.
Lester, B. et al. (2021). The power of scale for parameter-efficient prompt tuning. In Proc. of the 2021 EMNLP, pages 3045–3059, Punta Cana. ACL.
Li, L., Fan, L., Atreja, S., and Hemphill, L. (2024). “HOT” ChatGPT: The Promise of ChatGPT in Detecting and Discriminating Hateful, Offensive, and Toxic Comments on Social Media. ACM Trans. Web, 18(2).
Li, X. L. and Liang, P. (2021). Prefix-tuning: Optimizing continuous prompts for generation. In Proc. of the 59th ACL and the 11th IJCNLP (Volume 1: Long Papers), pages 4582–4597, Online. ACL.
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., and Neubig, G. (2023). Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. ACM Comput. Surv., 55(9).
Nguyen, N. D., Truong, N., Dao, P. Q., and Nguyen, H. H. (2025). Can online behaviors be linked to mental health? Active versus passive social network usage on depression via envy and self-esteem. Comput. Hum. Behav., 162:108455.
Oliveira, A. et al. (2023). How good is ChatGPT for detecting Hate Speech in Portuguese? In Anais do XIV STIL, pages 94–103, Porto Alegre, RS, Brasil. SBC.
Oliveira, A. et al. (2024a). Toxic Speech Detection in Portuguese: A Comparative Study of Large Language Models. In Proc. of the 16th PROPOR, pages 108–116, Santiago de Compostela. ACL.
Oliveira, A. et al. (2024b). Toxic Text Classification in Portuguese: Is LLaMA 3.1 8B All You Need? In Anais do XV STIL, pages 57–66, Porto Alegre, RS, Brasil. SBC.
OpenAI (2024). GPT-4o System Card.
Paes, A., Vianna, D., and Rodrigues, J. (2024). Modelos de linguagem. In Caseli, H. M. and Nunes, M. G. V., editors, Processamento de Linguagem Natural: Conceitos, Técnicas e Aplicações em Português, chapter 17. BPLN, 3rd edition.
Pérez, J. M. et al. (2025). Exploring Large Language Models for Hate Speech Detection in Rioplatense Spanish. In Proc. of NAACL 2025, pages 7174–7187, Albuquerque, New Mexico. ACL.
Saraiva, G. D. et al. (2021). A semi-supervised approach to detect toxic comments. In Proc. of the RANLP 2021, pages 1261–1267, Online. INCOMA Ltd.
Touvron, H. et al. (2023). LLaMA: Open and Efficient Foundation Language Models.
Vargas, F. et al. (2021). Contextual-lexicon approach for abusive language detection. In Proc. of the RANLP 2021, pages 1438–1447, Online. INCOMA Ltd.
Vaswani, A. et al. (2017). Attention is All you Need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
Wei, J. et al. (2022). Chain-of-thought prompting elicits reasoning in large language models. In Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS ’22, Red Hook, NY, USA. Curran Associates Inc.
Zhao, W. X. et al. (2023). A survey of large language models. arXiv preprint arXiv:2303.18223.
Zhou, Y., Muresanu, A. I., Han, Z., Paster, K., Pitis, S., Chan, H., and Ba, J. (2023). Large language models are human-level prompt engineers.