[1] M. Marjani, F. Nasaruddin, A. Gani, A. Karim, I. A. T. Hashem, A. Siddiqa, and I. Yaqoob, “Big IoT data analytics: Architecture, opportunities, and open research challenges,” IEEE Access, vol. 5, pp. 5247–5261, 2017.
[2] H. M. Gomes, A. Bifet, J. Read, J. P. Barddal, F. Enembreck, B. Pfahringer, G. Holmes, and T. Abdessalem, “Adaptive random forests for evolving data stream classification,” Machine Learning, vol. 106, no. 9, pp. 1469–1495, Oct. 2017. [Online]. Available: https://doi.org/10.1007/s10994-017-5642-8
[3] K. K. Wankhade, S. S. Dongre, and K. C. Jondhale, “Data stream classification: a review,” Iran Journal of Computer Science, vol. 3, no. 4, pp. 239–260, Dec. 2020. [Online]. Available: https://doi.org/10.1007/s42044-020-00061-3
[4] H. M. Gomes, J. P. Barddal, L. Boiko Ferreira, and A. Bifet, “Adaptive random forests for data stream regression,” Apr. 2018.
[5] R. D. Baruah, P. Angelov, and D. Baruah, “Dynamically evolving fuzzy classifier for real-time classification of data streams,” in 2014 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2014, pp. 383–389.
[6] H. Guo, H. Li, Q. Ren, and W. Wang, “Concept drift type identification based on multi-sliding windows,” Information Sciences, vol. 585, pp. 1–23, 2022. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0020025521011439
[7] H. Binder, O. Gefeller, M. Schmid, and A. Mayr, “The evolution of boosting algorithms,” Methods of Information in Medicine, vol. 53, no. 6, pp. 419–427, 2014. [Online]. Available: https://doi.org/10.3414/me13-01-0122
[8] H. M. Gomes, J. P. Barddal, F. Enembreck, and A. Bifet, “A survey on ensemble learning for data stream classification,” ACM Comput. Surv., vol. 50, no. 2, Mar. 2017. [Online]. Available: https://doi.org/10.1145/3054925
[9] T. Chen and C. Guestrin, “XGBoost: A scalable tree boosting system,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’16. New York, NY, USA: ACM, 2016, pp. 785–794. [Online]. Available: http://doi.acm.org/10.1145/2939672.2939785
[10] R. Santhanam, S. Raman, N. Uzir, and S. Banerjee, “Experimenting XGBoost algorithm for prediction and classification of different datasets,” International Journal of Control Theory and Applications, vol. 9, no. 40, 2016.
[11] P. B. Dongre and L. G. Malik, “A review on real time data stream classification and adapting to various concept drift scenarios,” in 2014 IEEE International Advance Computing Conference (IACC), 2014, pp. 533–537.
[12] M. Datar and R. Motwani, The Sliding-Window Computation Model and Results. Boston, MA: Springer US, 2007, pp. 149–167. [Online]. Available: https://doi.org/10.1007/978-0-387-47534-9_8
[13] G. Hulten, L. Spencer, and P. Domingos, “Mining time-changing data streams,” in Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’01. New York, NY, USA: Association for Computing Machinery, 2001, pp. 97–106. [Online]. Available: https://doi.org/10.1145/502512.502529
[14] A. Bifet and R. Gavaldà, “Adaptive learning from evolving data streams,” in Advances in Intelligent Data Analysis VIII, N. M. Adams, C. Robardet, A. Siebes, and J.-F. Boulicaut, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009, pp. 249–260.
[15] A. Bifet and R. Gavaldà, “Learning from time-changing data with adaptive windowing,” in Proceedings of the 2007 SIAM International Conference on Data Mining (SDM), 2007, pp. 443–448. [Online]. Available: https://epubs.siam.org/doi/abs/10.1137/1.9781611972771.42
[16] J. Montiel, R. Mitchell, E. Frank, B. Pfahringer, T. Abdessalem, and A. Bifet, “Adaptive XGBoost for evolving data streams,” in 2020 International Joint Conference on Neural Networks (IJCNN), 2020, pp. 1–8.
[17] X. Wu, P. Li, and X. Hu, “Learning from concept drifting data streams with unlabeled data,” Neurocomputing, vol. 92, pp. 145–155, 2012, Data Mining Applications and Case Study. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0925231212001270
[18] B. Calvo and G. Santafé Rodrigo, “scmamp: Statistical comparison of multiple algorithms in multiple problems,” The R Journal, vol. 8, no. 1, Aug. 2016.
[19] J. Demšar, “Statistical comparisons of classifiers over multiple data sets,” The Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.