[Blondel et al. 2020] Blondel, M., Teboul, O., Berthet, Q., and Djolonga, J. (2020). Fast differentiable sorting and ranking. In ICML ’20, Virtual Event.
[Bruch 2021] Bruch, S. (2021). An alternative cross entropy loss for learning-to-rank. In Proceedings of the Web Conference 2021, WWW ’21, New York, USA.
[de Sá et al. 2016] de Sá, C. C., Gonçalves, M. A., Sousa, D. X., and Salles, T. (2016). Generalized BROOF-L2R: A general framework for learning to rank based on boosting and random forests. In SIGIR ’16.
[Dinçer et al. 2016] Dinçer, B. T., Macdonald, C., and Ounis, I. (2016). Risk-sensitive evaluation and learning to rank using multiple baselines. In SIGIR ’16, New York, USA.
[Fu et al. 2020] Fu, Z., Xian, Y., Gao, R., Zhao, J., Huang, Q., Ge, Y., Xu, S., Geng, S., Shah, C., Zhang, Y., and de Melo, G. (2020). Fairness-aware explainable recommendation over knowledge graphs.
[Inoue 2019] Inoue, H. (2019). Multi-sample dropout for accelerated training and better generalization. Computing Research Repository (CoRR).
[Knijnenburg et al. 2012] Knijnenburg, B. P., Willemsen, M. C., Gantner, Z., Soncu, H., and Newell, C. (2012). Explaining the user experience of recommender systems. User Modeling and User-Adapted Interaction.
[Li 2011] Li, H. (2011). Learning to rank for information retrieval and natural language processing. Morgan & Claypool Publishers.
[Liu 2009] Liu, T.-Y. (2009). Learning to rank for information retrieval. Foundations and Trends® in Information Retrieval, 3(3):225–331.
[Liu 2011] Liu, T.-Y. (2011). Learning to rank for information retrieval. Springer, New York, USA.
[Pobrotyn et al. 2020] Pobrotyn, P., Bartczak, T., Synowiec, M., Bialobrzeski, R., and Bojar, J. (2020). Context-aware learning to rank with self-attention. Computing Research Repository (CoRR).
[Qin et al. 2010] Qin, T., Liu, T.-Y., and Li, H. (2010). A general approximation framework for direct optimization of information retrieval measures. Information Retrieval, 13.
[Qin et al. 2021] Qin, Z., Yan, L., Zhuang, H., Tay, Y., Pasumarthi, R. K., Wang, X., and Najork, M. (2021). Are neural rankers still outperformed by gradient boosted decision trees? In ICLR ’21.
[Rashed et al. 2021] Rashed, A., Grabocka, J., and Schmidt-Thieme, L. (2021). A guided learning approach for item recommendation via surrogate loss learning. In SIGIR ’21.
[Shi et al. 2010] Shi, Y., Larson, M., and Hanjalic, A. (2010). List-wise learning to rank with matrix factorization for collaborative filtering. In RecSys ’10.
[Silva Rodrigues et al. 2022] Silva Rodrigues, P. H., Xavier Sousa, D., Couto Rosa, T., and Gonçalves, M. A. (2022). Risk-sensitive deep neural learning to rank. In SIGIR ’22, Madrid, Spain.
[Silva Rodrigues et al. 2024] Silva Rodrigues, P. H., Xavier Sousa, D., Couto Rosa, T., and Gonçalves, M. A. (2024). Risk-sensitive optimization of neural deep learning ranking models with applications in ad-hoc retrieval and recommender systems. Information Processing & Management.
[Sousa et al. 2019] Sousa, D. X., Canuto, S., Gonçalves, M. A., Rosa, T. C., and Martins, W. S. (2019). Risk-sensitive learning to rank with evolutionary multi-objective feature selection. ACM TOIS, 37(2).
[Srivastava et al. 2014] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15(56).
[Wang et al. 2018] Wang, X., Li, C., Golbandi, N., Bendersky, M., and Najork, M. (2018). The LambdaLoss framework for ranking metric optimization. In CIKM ’18, New York, USA.
[Zhang et al. 2019] Zhang, S., Yao, L., Sun, A., and Tay, Y. (2019). Deep learning based recommender system: A survey and new perspectives. ACM Comput. Surv.
[Zhong et al. 2023] Zhong, J., Hu, X., Alghamdi, O., Elattar, S., and Sulaie, S. A. (2023). XGBoost with Q-learning for complex data processing in business logistics management. Information Processing & Management, 60(5).