1. Anderson, P. W. (1972). More is different. Science, 177(4047):393–396.

2. Arslan, M., Ghanem, H., Munawar, S., and Cruz, C. (2024). A survey on RAG with LLMs. Procedia Computer Science, 246:3781–3790. 28th International Conference on Knowledge-Based and Intelligent Information and Engineering Systems (KES 2024).

3. Bengio, Y., Ducharme, R., Vincent, P., and Jauvin, C. (2003). A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155.

4. Bezerra, E. (2016). Introdução à aprendizagem profunda. In Ogasawara, V., editor, Tópicos em Gerenciamento de Dados e Informações, chapter 3, pages 57–86. SBC, Porto Alegre, Brazil, 1st edition.

5. Deng, N., Chen, Y., and Zhang, Y. (2022). Recent advances in text-to-SQL: A survey of what we have and what we expect. In Proceedings of COLING 2022, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.

6. Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Burstein, J., Doran, C., and Solorio, T., editors, Proceedings of NAACL-HLT 2019, pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

7. Erdogan, L. E., Furuta, H., Kim, S., et al. (2025). Plan-and-act: Improving planning of agents for long-horizon tasks. In ICML 2025.

8. Fan, W., Ding, Y., Ning, L., Wang, S., Li, H., Yin, D., Chua, T.-S., and Li, Q. (2024). A survey on RAG meeting LLMs: Towards retrieval-augmented large language models. In KDD '24, pages 6491–6501, New York, NY, USA. Association for Computing Machinery.

9. Hu, S., Kim, S. R., Zhang, Z., et al. (2025). Pre-act: Multi-step planning and reasoning improves acting in LLM agents. arXiv preprint arXiv:2505.09970.

10. Jennings, N. R. and Wooldridge, M. J., editors (1998). Agent Technology: Foundations, Applications, and Markets. Springer, Berlin, Heidelberg.

11. Kayhan, V., Levine, S., Nanda, N., Schaeffer, R., Natarajan, A., Chughtai, B., et al. (2023). Scaling laws and emergent capabilities of large language models. arXiv preprint arXiv:2309.00071.

12. LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature, 521(7553):436–444.

13. Liu, X., Shen, S., Li, B., Ma, P., Jiang, R., Zhang, Y., Fan, J., Li, G., Tang, N., and Luo, Y. (2025). A survey of text-to-SQL in the era of LLMs: Where are we, and where are we going? IEEE Transactions on Knowledge and Data Engineering, pages 1–20.

14. Mikolov, T., Karafiát, M., Burget, L., Černocký, J., and Khudanpur, S. (2010). Recurrent neural network based language model. In INTERSPEECH, pages 1045–1048.

15. Newell, A., Shaw, J., and Simon, H. A. (1956). The logic theory machine–a complex information processing system. IRE Transactions on Information Theory, 2(3):61–79.

16. Nilsson, N. J. (1984). Shakey the Robot. SRI International, Menlo Park, CA.

17. Rawat, M., Gupta, A., et al. (2025). Pre-act: Multi-step planning and reasoning improves acting in LLM agents. arXiv preprint arXiv:2505.09970.

18. Rosenfeld, R. (2000). Two decades of statistical language modeling: where do we go from here? Proceedings of the IEEE, 88(8):1270–1278.

|
19 |
Russell, S. and Norvig, P. (2021). Artificial Intelligence:
A Modern Approach. Pearson, 4th edition.
|
|
20 |
Sapkota, R., Roumeliotis, K. I., and Karkee, M. (2025). AI agents vs. agentic ai: A conceptual taxonomy, applications and challenges.
|
|
21 |
Shi, L., Tang, Z., Zhang, N., Zhang, X., and Yang, Z. (2025). A
survey on employing large language models for text-to-sql tasks. ACM Comput.
Surv.
|
|
22 |
Shorten, C., Pierse, C., Smith, T. B., D’Oosterlinck, K., Celik, T., Cardenas, E., Monigatti, L., Hasan, M. S., Schmuhl, E., Williams, D., Kesiraju, A., and van Luijt, B. (2025). Querying databases with function calling.
|
|
23. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems (NeurIPS), pages 5998–6008.

24. Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D., Chi, E. H., Hashimoto, T. B., Vinyals, O., Liang, P., Dean, J., and Fedus, W. (2022a). Emergent abilities of large language models. arXiv preprint arXiv:2206.07682.

25. Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E. H., Le, Q. V., and Zhou, D. (2022b). Chain-of-thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903.

26. Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., and Narasimhan, K. (2023). Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601.

27. Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., and Cao, Y. (2022). ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629.