1. Alammar, J. and Grootendorst, M. (2024). Hands-On Large Language Models. O'Reilly Media.
2. Berryman, J. and Ziegler, A. (2024). Prompt Engineering for LLMs: The Art and Science of Building Large Language Model-Based Applications. O'Reilly Media, Inc.
3. The Joanna Briggs Institute (2015). The Joanna Briggs Institute Reviewers' Manual 2015: Methodology for JBI Scoping Reviews. Adelaide, Australia.
4. Kejriwal, M., Knoblock, C. A., and Szekely, P. (2021). Knowledge Graphs: Fundamentals, Techniques, and Applications. MIT Press.
5. Li, Z., Xu, T., Wang, W., Wu, H., Xiong, F., Chen, E., Lyu, Y., Niu, S., Liu, H., and Tang, B. (2024). CRUD-RAG: A comprehensive Chinese benchmark for retrieval-augmented generation of large language models. ACM Transactions on Information Systems, 43:1–32.
6. Ma, Y., Chen, Z., and Church, K. (2021). Emerging trends: A gentle introduction to fine-tuning. Natural Language Engineering, 27:763–778.
7. Muscolino, H., Machado, A., Vesset, D., and Rydning, J. (2023). Untapped value: What every executive needs to know about unstructured data. White paper, IDC, Framingham, MA. Sponsored by Box.
8. Negro, A., Kus, V., Futia, G., and Montagna, F. (2025). Knowledge Graphs and LLMs in Action. Manning.
9. Nandigam, J., Patil, R., Boit, S., and Gudivada, V. (2023). A survey of text representation and embedding techniques in NLP. IEEE Access, 11:36120–36146.
10. Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., et al. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372.
11. Zhong, L., Wu, J., Li, Q., Peng, H., and Wu, X. (2023). A comprehensive survey on automatic knowledge graph construction. ACM Computing Surveys, 56(4):1–62.