1. Almazrouei, E. et al. (2023). The Falcon Series of Open Language Models. arXiv, cs.CL, 2311.16867.
2. Bao, K. et al. (2023). TALLRec: An Effective and Efficient Tuning Framework to Align Large Language Model with Recommendation. In Proc. of the 17th ACM Conf. on Recommender Systems (RecSys), p. 1007–1014.
3. Brown, T. B. et al. (2020). Language Models are Few-Shot Learners. In Proc. of the 34th Conf. on Neural Information Processing Systems (NeurIPS), p. 1877–1901.
4. Dai, S. et al. (2023). Uncovering ChatGPT’s Capabilities in Recommender Systems. In Proc. of the 17th ACM Conf. on Recommender Systems (RecSys), p. 1126–1132.
5. Fan, W. et al. (2023). Recommender Systems in the Era of Large Language Models (LLMs). arXiv, cs.IR, 2307.02046.
6. Harper, F. M. and Konstan, J. A. (2015). The MovieLens Datasets: History and Context. ACM Trans. Interact. Intell. Syst., vol. 5, no. 4, p. 1–19.
7. Hou, Y. et al. (2024a). Bridging Language and Items for Retrieval and Recommendation. arXiv, cs.IR, 2403.03952.
8. Hou, Y. et al. (2024b). Large Language Models are Zero-Shot Rankers for Recommender Systems. In Proc. of the 46th European Conf. on Information Retrieval (ECIR), p. 364–381.
9. Houlsby, N. et al. (2019). Parameter-Efficient Transfer Learning for NLP. In Proc. of the 36th Intl. Conf. on Machine Learning (ICML), p. 2790–2799.
10. Hu, E. J. et al. (2022). LoRA: Low-Rank Adaptation of Large Language Models. In Proc. of the 10th Intl. Conf. on Learning Representations (ICLR), p. 1–13.
11. Jiang, A. Q. et al. (2023). Mistral 7B. arXiv, cs.CL, 2310.06825.
12. Liu, J. et al. (2023). Is ChatGPT a Good Recommender? A Preliminary Study. arXiv, cs.IR, 2304.10149.
13. Liu, Q. et al. (2024). ONCE: Boosting Content-based Recommendation with Both Open- and Closed-source Large Language Models. In Proc. of the 17th ACM Intl. Conf. on Web Search and Data Mining (WSDM), p. 452–461.
14. Lyu, H. et al. (2023). LLM-Rec: Personalized Recommendation via Prompting Large Language Models. arXiv, cs.CL, 2307.15780.
15. Rajput, S. et al. (2023). Recommender Systems with Generative Retrieval. In Proc. of the 37th Conf. on Neural Information Processing Systems (NeurIPS), p. 1–17.
16. Sanner, S. et al. (2023). Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences. In Proc. of the 17th ACM Conf. on Recommender Systems (RecSys), p. 890–896.
17. Shao, B., Li, X., and Bian, G. (2021). A Survey of Research Hotspots and Frontier Trends of Recommendation Systems from the Perspective of Knowledge Graph. Expert Systems with Applications, vol. 165, p. 113764.
18. Touvron, H. et al. (2023). Llama 2: Open Foundation and Fine-Tuned Chat Models. arXiv, cs.CL, 2307.09288.
19. Wang, L. and Lim, E.-P. (2023). Zero-Shot Next-Item Recommendation using Large Pretrained Language Models. arXiv, cs.IR, 2304.03153.
20. Wu, F. et al. (2020). MIND: A Large-scale Dataset for News Recommendation. In Proc. of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), p. 3597–3606.
21. Xu, S. et al. (2023). OpenP5: An Open-Source Platform for Developing, Training, and Evaluating LLM-based Recommender Systems. arXiv, cs.IR, 2306.11134.
22. Zhang, J. et al. (2023). Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach. arXiv, cs.IR, 2305.07001.
23. Radford, A. et al. (2018). Improving Language Understanding by Generative Pre-Training. Preprint, OpenAI.
24. HAI - Human-Centered Artificial Intelligence (2024). AI Index Report 2024. Stanford University.