1. Breiman, L. (1996). Bagging predictors. Machine Learning, 24(2):123–140.
2. Breiman, L. (2001). Random forests. Machine Learning, 45(1):5–32.
3. Fernández-Delgado, M., Cernadas, E., Barro, S., and Amorim, D. (2014). Do we need hundreds of classifiers to solve real world classification problems? Journal of Machine Learning Research, 15(1):3133–3181.
4. Freund, Y. and Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139.
5. Geurts, P., Ernst, D., and Wehenkel, L. (2006). Extremely randomized trees. Machine Learning, 63(1):3–42.
6. Hastie, T., Tibshirani, R., and Friedman, J. H. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2nd edition.
7. Salles, T., Gonçalves, M., Rodrigues, V., and Rocha, L. (2015). BROOF: Exploiting out-of-bag errors, boosting and random forests for effective automated classification. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 353–362.
8. Segal, M. R. (2004). Machine learning benchmarks and random forest regression. Technical report, Center for Bioinformatics and Molecular Biostatistics, University of California, San Francisco.