SBBD

Paper Registration

Steps: (1) Select Book, (2) Select Paper, (3) Fill in paper information, (4) Congratulations

Fill in your paper information

English Information

(*) To change the order, drag the item to the new position.

Authors
# Name
1 Eduardo Campos (educampos@usp.br)
2 Rafael Conrado (rafaelconrado@usp.br)
3 Caetano Traina Jr. (caetano@icmc.usp.br)
4 Agma Traina (agma@icmc.usp.br)
5 Mirela Cazzolato (mtcazzolato@gmail.com)

(*) To change the order, drag the item to the new position.

Reference
# Reference
1 Barbalho, I. M. P., Fernandes, F., Barros, D. M. S., Paiva, J. C., Henriques, J., Morais, A. H. F., Coutinho, K. D., Coelho Neto, G. C., Chioro, A., and Valentim, R. A. M. (2022). Electronic health records in Brazil: Prospects and technological challenges. Frontiers in Public Health, 10. DOI: http://dx.doi.org/10.3389/fpubh.2022.963841.
2 Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901. DOI: http://dx.doi.org/10.5555/3495724.3495883.
3 Costa, L., Figênio, M., Santanchè, A., and Gomes-Jr, L. (2024). LLM-MRI Python module: a brain scanner for LLMs. In Anais Estendidos do XXXIX Simpósio Brasileiro de Bancos de Dados, pages 125–130, Porto Alegre, RS, Brasil. SBC. DOI: http://dx.doi.org/10.5753/sbbd_estendido.2024.243136.
4 FAPESP (2020). FAPESP COVID-19 Data Sharing/BR. Technical report, FAPESP. URL: https://repositoriodatasharingfapesp.uspdigital.usp.br.
5 Hadi, M. U., Qureshi, R., Shah, A., Irfan, M., Zafar, A., Shaikh, M. B., Akhtar, N., Wu, J., Mirjalili, S., et al. (2023). A survey on large language models: Applications, challenges, limitations, and practical usage. Authorea Preprints.
6 Jiang, J., Qi, K., Bai, G., and Schulman, K. (2023). Pre-pandemic assessment: a decade of progress in electronic health record adoption among us hospitals. Health Affairs Scholar, 1(5):qxad056.
7 Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., and Neubig, G. (2023). Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9):1–35. DOI: https://doi.org/10.1145/3560815.
8 Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. URL: https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.
9 Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q. V., Zhou, D., et al. (2022). Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837.