1 |
Box, G. and Jenkins, G. M. (1976). Time Series Analysis: Forecasting and Control. Holden-Day
|
|
2 |
Crankshaw, D., Wang, X., Zhou, G., Franklin, M. J., Gonzalez, J. E., and Stoica, I. (2017). Clipper: A low-latency online prediction serving system. In NSDI’17 - USENIX, pages 613–627, Boston, MA. USENIX Association.
|
|
3 |
Du, S. S., Wang, Y., Zhai, X., Balakrishnan, S., Salakhutdinov, R. R., and Singh, A. (2018). How many samples are needed to estimate a convolutional neural network? In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc.
|
|
4 |
Ghanta, S., Subramanian, S., Khermosh, L., Sundararaman, S., Shah, H., Goldberg, Y., Roselli, D., and Talagala, N. (2019). ML health monitor: taking the pulse of machine learning algorithms in production. In Applications of Machine Learning, volume 11139, pages 191 – 202. International Society for Optics and Photonics, SPIE.
|
|
5 |
Hassani, H. and Silva, E. S. (2015). Forecasting with big data: A review. Annals of Data Science, 2(1):5–19.
|
|
6 |
Hastie, T., Tibshirani, R., and Friedman, J. H. (2009). The elements of statistical learning: data mining, inference, and prediction, 2nd Edition. Springer.
|
|
7 |
Hyndman, R. J. and Khandakar, Y. (2008). Automatic time series forecasting: The forecast package for r. Journal of Statistical Software, Articles, 27(3):1–22.
|
|
8 |
Hyndman, R. J. and Koehler, A. B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting, 22(4):679 – 688.
|
|
9 |
I. Fawaz, H., Forestier, G., Weber, J., Idoumghar, L., and Muller, P.-A. (2019). Deep learning for time series classification: A review. Data Min. Knowl. Discov., 33(4):917–963.
|
|
10 |
Izakian, H., Pedrycz, W., and Jamal, I. (2015). Fuzzy clustering of time series data using dynamic time warping distance. Engineering Applications of Artificial Intelligence, 39:235–244.
|
|
11 |
Liao, T. W. (2005). Clustering of time series data: A survey. Pattern Recognition, 38(11):1857 – 1874.
|
|
12 |
Mirzasoleiman, B. (2021). Efficient machine learning from massive datasets.
|
|
13 |
Murat, M., Malinowska, I., Gos, M., and Krzyszczak, J. (2018). Forecasting daily meteorological time series using arima and regression models. International Agrophysics, 32(2):253–264.
|
|
14 |
Oregi, I., Perez, A., Del Ser, J., and Lozano, J. A. (2017). On-line dynamic time warp-´ ing for streaming time series. In Machine Learning and Knowledge Discovery in Databases, pages 591–605, Cham. Springer International Publishing.
|
|
15 |
Pereira, R., Souto, Y., Chaves, A., Zorrilla, R., Tsan, B., Rusu, F., Ogasawara, E., Ziviani, A., and Porto, F. (2021). DJEnsemble: A Cost-Based Selection and Allocation of a Disjoint Ensemble of Spatio-Temporal Models, page 226–231. ACM, NY, USA.
|
|
16 |
Polyzotis, N., Roy, S., Whang, S. E., and Zinkevich, M. (2018). Data lifecycle challenges in production machine learning: A survey. SIGMOD Rec., 47(2):17–28.
|
|
17 |
Rousseeuw, P. J. (1987). Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20:53 – 65.
|
|
18 |
Saha, S., Moorthi, S., Wu, X., Wang, J., Nadiga, S., and Becker, E. (2011). Ncep climate forecast system version 2 (cfsv2) selected hourly time-series products.
|
|
19 |
Sakoe, H. and Chiba, S. (1978). Dynamic programming algorithm optimization for spoken word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, 26(1):43–49.
|
|
20 |
Souto, Y. M., Porto, F., de Carvalho Moura, A. M., and Bezerra, E. (2018). A spatiotemporal ensemble approach to rainfall forecasting. In IJCNN, 2018, pages 1–8.
|
|
21 |
Wang, W., Gao, J., Zhang, M., Wang, S., Chen, G., Ng, T. K., Ooi, B. C., Shao, J., and Reyad, M. (2018). Rafiki: Machine learning as an analytics service system. Proc. VLDB Endow., 12(2):128–140.
|
|
22 |
Xu, G., Ren, T., Chen, Y., and Che, W. (2020). A one-dimensional cnn-lstm model for epileptic seizure recognition using eeg signal analysis. Frontiers in Neuroscience, 14:1253.
|
|