SBBD

Paper Registration

Fill in your paper information

English Information


Authors
# Name
1 Washington Cunha (washingtoncunha@dcc.ufmg.br)
2 Sergio Canuto (sergiodaniel@dcc.ufmg.br)
3 Celso França (celsofranca@dcc.ufmg.br)
4 Leonardo Rocha (lcrocha@ufsj.edu.br)
5 Marcos Gonçalves (mgoncalv@dcc.ufmg.br)
6 Welton Santos (weltonsantos@dcc.ufmg.br)
7 Guilherme Nascimento (guilhermefonseca8426@aluno.ufsj.edu.br)

