SBBD

Paper Registration

Steps: (1) Select Book, (2) Select Paper, (3) Fill in paper information, (4) Congratulations

Fill in your paper information

English Information


Authors
1 Ayrton Herculano (ayrton.herculano@academico.ifpb.edu.br)
2 Laerty da Silva (laerty.santos@academico.ifpb.edu.br)
3 Damires Souza (damires@ifpb.edu.br)
4 Alex Rego (alex@ifpb.edu.br)


References
1 American Psychiatric Association (2023). Manual diagnóstico e estatístico de transtornos mentais: DSM-5-TR. Artmed, Porto Alegre, 1st edition. Translation of the original work: Diagnostic and Statistical Manual of Mental Disorders: DSM-5-TR, 2022.
2 Caseli, H. M. and Nunes, M. G. V., editors (2024). Processamento de Linguagem Natural: Conceitos, Técnicas e Aplicações em Português. BPLN, 3rd edition.
3 Costa, P. B., Pavan, M. C., Santos, W. R., Silva, S. C., and Paraboni, I. (2023). BERTabaporu: Assessing a genre-specific language model for Portuguese NLP. In Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing, pages 217–223.
4 Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186. NAACL.
5 Feijó, D. D. V. and Moreira, V. P. (2020). Mono vs multilingual transformer-based models: A comparison across several language tasks. arXiv preprint arXiv:2007.09757.
6 Feng, F., Yang, Y., Cer, D., Arivazhagan, N., and Wang, W. (2022). Language-agnostic BERT sentence embedding. In Muresan, S., Nakov, P., and Villavicencio, A., editors, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 878–891, Dublin, Ireland. Association for Computational Linguistics.
7 Gorenstein, C. and Andrade, L. (1998). Inventário de Depressão de Beck: propriedades psicométricas da versão em português. Revista de Psiquiatria Clínica, 25(5):245–250.
8 Guo, Z., Wang, P., Wang, Y., and Yu, S. (2023). Dr. LLaMA: Improving small language models on PubMedQA via generative data augmentation. arXiv preprint arXiv:2305.07804.
9 Herculano, A., de Paula, T.-H., Fernandes, D., and Rego, A. (2024a). DepreRedditBR: Um conjunto de dados textuais com postagens depressivas no idioma português brasileiro. In Anais do VI Dataset Showcase Workshop, pages 77–90, Porto Alegre, RS, Brasil. SBC.
10 Herculano, A., Souza, D., and Rego, A. (2024b). DepreBERTBR: Um modelo de linguagem pré-treinado para o domínio da depressão no idioma português brasileiro. In Anais do XXXIX Simpósio Brasileiro de Bancos de Dados, pages 181–194, Porto Alegre, RS, Brasil. SBC.
11 Ji, S., Zhang, T., Ansari, L., Fu, J., Tiwari, P., and Cambria, E. (2022). MentalBERT: Publicly available pretrained language models for mental healthcare. In Calzolari, N., Béchet, F., Blache, P., Choukri, K., Cieri, C., Declerck, T., Goggi, S., Isahara, H., Maegaard, B., Mariani, J., Mazo, H., Odijk, J., and Piperidis, S., editors, Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 7184–7190, Marseille, France. European Language Resources Association.
12 Kang, A., Chen, J. Y., Lee-Youngzie, Z., and Fu, S. (2024). Synthetic data generation with LLM for improved depression prediction. arXiv preprint arXiv:2411.17672.
13 Li, Z., Zhu, H., Lu, Z., and Yin, M. (2023). Synthetic data generation with large language models for text classification: Potential and limitations. In Bouamor, H., Pino, J., and Bali, K., editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10443–10461, Singapore. Association for Computational Linguistics.
14 Naseem, U., Dunn, A. G., Kim, J., and Khushi, M. (2022). Early identification of depression severity levels on Reddit using ordinal classification. In Proceedings of the ACM Web Conference 2022, pages 2563–2572.
15 OMS (2023). Organização Mundial de Saúde (OMS): Desordem depressiva (depressão). https://www.who.int/news-room/fact-sheets/detail/depression. Accessed April 23, 2025.
16 OpenAI (2023). GPT-4 technical report. Technical report, OpenAI. Accessed April 23, 2025.
17 Pavlopoulos, A., Rachiotis, T., and Maglogiannis, I. (2024). An overview of tools and technologies for anxiety and depression management using AI. Applied Sciences, 14(19).
18 Poświata, R. and Perełkiewicz, M. (2022). OPI@LT-EDI-ACL2022: Detecting signs of depression from social media text using RoBERTa pre-trained language models. In Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion, pages 276–282.
19 Sampath, K. and Durairaj, T. (2022). Data set creation and empirical analysis for detecting signs of depression from social media postings. In International Conference on Computational Intelligence in Data Science, pages 136–151. Springer.
20 Santos, W. R. d., de Oliveira, R. L., and Paraboni, I. (2023). SetembroBR: A social media corpus for depression and anxiety disorder prediction. Language Resources and Evaluation, pages 1–28.
21 Silva, M., Oliveira, G., Costa, L., and Pappa, G. (2024). Evaluating domain-adapted language models for governmental text classification tasks in portuguese. In Anais do XXXIX Simpósio Brasileiro de Bancos de Dados, pages 247–259, Porto Alegre, RS, Brasil. SBC.
22 Skianis, K., Dogruoz, A. S., and Pavlopoulos, J. (2024). Leveraging LLMs for translating and classifying mental health data. In Saleva, J. and Owodunni, A., editors, Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024), pages 236–241, Miami, Florida, USA. Association for Computational Linguistics.
23 Souza, F., Nogueira, R., and Lotufo, R. (2020). BERTimbau: Pretrained BERT models for Brazilian Portuguese. In Intelligent Systems: 9th Brazilian Conference, BRACIS 2020, Rio Grande, Brazil, October 20–23, 2020, Proceedings, Part I 9, pages 403–417. Springer.
24 Uban, A.-S., Chulvi, B., and Rosso, P. (2021). An emotion and cognitive based analysis of mental health disorders from social media data. Future Generation Computer Systems, 124:480–494.
25 Wagner Filho, J. A., Wilkens, R., Idiart, M., and Villavicencio, A. (2018). The brWaC corpus: A new open resource for Brazilian Portuguese. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
26 Xia, P., Zhang, L., and Li, F. (2015). Learning similarity with cosine similarity ensemble. Information Sciences, 307:39–52.
27 Zhang, T., Kishore, V., Wu, F., Weinberger, K. Q., and Artzi, Y. (2019). BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.