Dong, X., Zhang, C., Ge, Y., Mao, Y., Gao, Y., Chen, I., Lin, J., and Lou, D. (2023). C3: Zero-shot text-to-SQL with ChatGPT. ArXiv.
Hong, Z., Yuan, Z., Zhang, Q., Chen, H., Dong, J., Huang, F., and Huang, X. (2024). Next-generation database interfaces: A survey of LLM-based text-to-SQL. ArXiv.
Katsogiannis-Meimarakis, G. and Koutrika, G. (2023). A survey on deep learning approaches for text-to-SQL. The VLDB Journal, 32:905–936.
Lewis, P., Perez, E., Piktus, A., Petroni, F., and Karpukhin, V. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS '20, Red Hook, NY, USA. Curran Associates Inc.
Li, J., Hui, B., Cheng, R., Qin, B., and Ma, C. (2023). Graphix-T5: Mixing pre-trained transformers with graph-aware layers for text-to-SQL parsing. In Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence. AAAI Press.
Li, J., Hui, B., Qu, G., Yang, J., Li, B., Li, B., Wang, B., Qin, B., Geng, R., Huo, N., et al. (2024). Can LLM already serve as a database interface? A big bench for large-scale database grounded text-to-SQLs. Advances in Neural Information Processing Systems, 36.
Nascimento, E. and Casanova, M. A. (2024). Querying databases with natural language: The use of large language models for text-to-SQL tasks. In Anais Estendidos do XXXIX Simpósio Brasileiro de Bancos de Dados, pages 196–201, Porto Alegre, RS, Brasil. SBC.
OpenAI (2025). GPT-4o. Available at: https://platform.openai.com/docs/models/gpt-4o. Accessed on 21 April 2025.
Pourreza, M. and Rafiei, D. (2023). DIN-SQL: Decomposed in-context learning of text-to-SQL with self-correction. In Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Red Hook, NY, USA. Curran Associates Inc.
Talaei, S., Pourreza, M., Chang, Y., Mirhoseini, A., and Saberi, A. (2024). CHESS: Contextual harnessing for efficient SQL synthesis. ArXiv.
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E. H., Le, Q. V., and Zhou, D. (2022). Chain-of-thought prompting elicits reasoning in large language models. In Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22, Red Hook, NY, USA. Curran Associates Inc.
Yin, P., Neubig, G., Yih, W., and Riedel, S. (2020). TaBERT: Pretraining for joint understanding of textual and tabular data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
Yu, T., Zhang, R., Yang, K., Yasunaga, M., and Wang, D. (2018). Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Riloff, E., editor, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911–3921, Brussels, Belgium. Association for Computational Linguistics.