1. Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

2. Griffiths, T. L., Jordan, M. I., Tenenbaum, J. B., and Blei, D. M. (2004). Hierarchical topic models and the nested Chinese restaurant process. In Advances in Neural Information Processing Systems, pages 17–24.

3. Grootendorst, M. (2022). BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv preprint arXiv:2203.05794.

4. Hamilton, W. L., Clark, K., Leskovec, J., and Jurafsky, D. (2016). Inducing domain-specific sentiment lexicons from unlabeled corpora. CoRR, abs/1606.02820.

5. Hutto, C. J. and Gilbert, E. (2014). VADER: A parsimonious rule-based model for sentiment analysis of social media text. In Eighth International AAAI Conference on Weblogs and Social Media.

6. Li, C., Duan, Y., Wang, H., Zhang, Z., Sun, A., and Ma, Z. (2017). Enhancing topic modeling for short texts with auxiliary word embeddings. ACM Transactions on Information Systems (TOIS).

7. Sachan, D. S., Zaheer, M., and Salakhutdinov, R. (2019). Revisiting LSTM networks for semi-supervised text classification via mixed objective function. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):6940–6948.

8. Shi, T., Kang, K., Choo, J., and Reddy, C. K. (2018). Short-text topic modeling via non-negative matrix factorization enriched with local word-context correlations. In WWW '18, pages 1105–1114.

9. Yang, J., Jin, H., Tang, R., Han, X., Feng, Q., Jiang, H., Zhong, S., Yin, B., and Hu, X. (2024). Harnessing the power of LLMs in practice: A survey on ChatGPT and beyond. ACM Trans. Knowl. Discov. Data, 18(6).