SBBD

Paper Registration

Steps: 1. Select Book → 2. Select Paper → 3. Fill in paper information → 4. Congratulations

Fill in your paper information

English Information

(*) To change the order, drag the item to the new position.

Authors
# Name
1 Laura Petrola (laupetrola@gmail.com)
2 Angelo Brayner (brayner@dc.ufc.br)
3 Wellington Franco (wellington@crateus.ufc.br)

(*) To change the order, drag the item to the new position.

Reference
# Reference
1 [de Araujo et al. 2013] de Araujo, A. H. M., Monteiro, J. M., de Macedo, J. A. F., and Brayner, A. (2013). On using an automatic, autonomous and non-intrusive approach for rewriting SQL queries. Journal of Information and Data Management, 3(3):1–15.
2 [DeepSeek 2025] DeepSeek (2025). DeepSeek. https://www.deepseek.com/.
3 [Fan et al. 2020] Fan, A., Urbanek, J., Ringshia, P., Dinan, E., Qian, E., Karamcheti, S., Prabhumoye, S., Kiela, D., Rocktäschel, T., Szlam, A., et al. (2020). Generating interactive worlds with text. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 1693–1700.
4 [Faroult and L'Hermite 2008] Faroult, S. and L'Hermite, P. (2008). Refactoring SQL Applications. O'Reilly Media, Sebastopol, CA.
5 [Garcia-Molina et al. 2000] Garcia-Molina, H., Ullman, J. D., and Widom, J. (2000). Database System Implementation. Prentice Hall, New Jersey, USA.
6 [Hadi et al. 2023] Hadi, M. U., Qureshi, R., Shah, A., Irfan, M., Zafar, A., Shaikh, M. B., Akhtar, N., Wu, J., Mirjalili, S., et al. (2023). Large language models: a comprehensive survey of its applications, challenges, limitations, and future prospects. Authorea Preprints, 1:1–26.
7 [Hong et al. 2024] Hong, Z., Yuan, Z., Zhang, Q., Chen, H., Dong, J., Huang, F., and Huang, X. (2024). Next-generation database interfaces: A survey of LLM-based text-to-SQL. arXiv preprint arXiv:2406.08426.
8 [Minaee et al. 2024] Minaee, S., Mikolov, T., Nikzad, N., Chenaghlu, M., Socher, R., Amatriain, X., and Gao, J. (2024). Large language models: A survey. arXiv preprint arXiv:2402.06196.
9 [Ministério da Cultura 2025] Ministério da Cultura (2025). Mapas Culturais - Funarte. https://mapas.cultura.gov.br/.
10 [Nascimento 2024] Nascimento, E. R. S. (2024). Querying databases with natural language: The use of large language models for text-to-SQL tasks. Master's thesis, Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rio de Janeiro, Brazil. Advisor: Marco Antonio Casanova.
11 [OpenAI 2025] OpenAI (2025). ChatGPT.
12 [Ozdemir 2023] Ozdemir, S. (2023). Quick start guide to large language models: strategies and best practices for using ChatGPT and other LLMs. Addison-Wesley Professional.
13 [Pedroso et al. 2025] Pedroso, B. C., Pereira, M. R., and Pereira, D. A. (2025). Performance evaluation of LLMs in the text-to-SQL task in Portuguese. In Proceedings of SBSI 2025, Recife, PE.
14 [Ramakrishnan and Gehrke 2002] Ramakrishnan, R. and Gehrke, J. (2002). Database Management Systems. McGraw-Hill, 3rd edition.
15 [Sala et al. 2024] Sala, L., Sullutrone, G., and Bergamaschi, S. (2024). Text-to-SQL with large language models: Exploring the promise and pitfalls. In Proceedings of the 32nd Symposium on Advanced Database Systems (SEBD 2024). CEUR Workshop Proceedings.
16 [Shasha and Bonnet 2003] Shasha, D. and Bonnet, P. (2003). Database Tuning: Principles, Experiments, and Troubleshooting Techniques. Morgan Kaufmann.
17 [Yang et al. 2024] Yang, J., Jin, H., Tang, R., Han, X., Feng, Q., Jiang, H., Zhong, S., Yin, B., and Hu, X. (2024). Harnessing the power of LLMs in practice: A survey on ChatGPT and beyond. ACM Transactions on Knowledge Discovery from Data, 18(6):1–32.
18 [Zhang et al. 2024] Zhang, Y., Jin, H., Meng, D., Wang, J., and Tan, J. (2024). A comprehensive survey on process-oriented automatic text summarization with exploration of LLM-based methods. arXiv preprint arXiv:2403.02901.
19 [Zhu et al. 2024] Zhu, X., Li, Q., Cui, L., and Liu, Y. (2024). Large language model enhanced text-to-SQL generation: A survey. arXiv preprint arXiv:2410.06011.