Are more complex models always better?
Part of the curriculum of the Econometrics & Mathematical Economics master's degree at VU University Amsterdam is the course Time Series Econometrics. In this course, students learn to analyze time series with the aid of 'state-space models', under the assumption that observations over time (such as the annual flow of the Nile, for example) are driven by unobserved factors. Among other things, state-space modeling makes it possible to derive statistical information about these factors. The analyses I carried out during this course convinced me that, in any given time series, there is a great deal of relevant information that cannot be directly observed. The challenge is to extract this information from the data as effectively as possible. But what is the best way to do that?
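To make this concrete, here is a minimal sketch of a local level state-space model filtered with the Kalman recursions: the observed series y_t is an unobserved level mu_t plus noise, and the level itself follows a random walk. The noise variances and the simulated series below are illustrative assumptions of mine, not values from the course.

```python
import random

def local_level_filter(y, sigma2_eps, sigma2_eta, a0=0.0, p0=1e7):
    """Kalman filter for y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t.

    Returns the filtered estimates of the unobserved level mu_t.
    """
    a, p = a0, p0  # prior mean and variance of the state
    filtered = []
    for obs in y:
        # prediction step: the level follows a random walk
        p_pred = p + sigma2_eta
        # update step: weigh the new observation by the Kalman gain
        f = p_pred + sigma2_eps       # variance of the one-step forecast error
        k = p_pred / f                # Kalman gain
        a = a + k * (obs - a)
        p = (1 - k) * p_pred
        filtered.append(a)
    return filtered

# simulate a noisy random-walk level and recover it from the observations
random.seed(0)
level, levels, y = 0.0, [], []
for _ in range(200):
    level += random.gauss(0, 0.5)             # eta_t: level innovation
    levels.append(level)
    y.append(level + random.gauss(0, 2.0))    # eps_t: observation noise
mu_hat = local_level_filter(y, sigma2_eps=4.0, sigma2_eta=0.25)
```

The filtered estimates track the hidden level far more closely than the raw observations do, which is exactly the sense in which the model extracts information that cannot be directly observed.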
In econometric time series analysis, we assume that the values we observe over time are a sample from a stochastic process that has existed, and will continue to exist, indefinitely. Each realization in this sample is drawn from a particular distribution, and by assuming that the parameters of these distributions have a lot in common, we are able to model the time series. A distribution is postulated for the realizations, and its parameters are estimated using methods such as 'Maximum Likelihood' and 'Bayesian Estimation'. With the aid of these parameters we can look ahead in the stochastic process; in other words, we can make a prediction for the time series.

It is also possible to carry out a time series analysis using machine learning (ML), for example with artificial neural networks. Inspired by the neural networks we know from biology, the assumption is that the data can be modeled with an input layer, an output layer and one or more hidden layers. Algorithms are then trained on a subset of the data to establish the relationships (weights) between these layers as well as possible. These relationships are then used to predict the output layer for a given input layer. But which of these two methods is preferable for time series analysis?
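The statistical route can be sketched with a toy example. Assume an AR(1) process y_t = phi * y_{t-1} + e_t with Gaussian errors; under that distributional assumption, the conditional maximum-likelihood estimate of phi coincides with least squares, and the estimated parameter lets us look one step ahead. The true phi and noise level below are illustrative assumptions.

```python
import random

def ar1_mle(y):
    """Conditional MLE of phi for a Gaussian AR(1) without intercept.

    Under Gaussian errors this reduces to a least-squares ratio.
    """
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

# simulate an AR(1) series and estimate phi from the data
random.seed(42)
phi_true, y = 0.8, [0.0]
for _ in range(5000):
    y.append(phi_true * y[-1] + random.gauss(0, 1))

phi_hat = ar1_mle(y)
one_step_forecast = phi_hat * y[-1]  # prediction for the next observation
```

With a long enough sample, phi_hat lands close to the true value of 0.8, and the same fitted parameter immediately yields the one-step-ahead forecast.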
In the article 'The Accuracy of Machine Learning Forecasting Methods versus Statistical Ones: Extending the Results of the M3 Competition', Spyros Makridakis et al. compare econometric and ML methods for time series analysis. They do so by forecasting 1,045 monthly time series over 18 horizons, using both ML and statistical methods. Based on the 'Mean Absolute Percentage Error' and 'Mean Absolute Relative Error' benchmarks, they conclude that the statistical methods predict better than the ML methods at every horizon.
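For readers unfamiliar with the first of those benchmarks, here is a quick sketch of the Mean Absolute Percentage Error and how it ranks two competing forecasts; the toy numbers and the labels 'statistical' and 'ML' are purely illustrative, not data from the article.

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error in percent; assumes no zero actuals."""
    errors = [abs((a - f) / a) for a, f in zip(actual, forecast)]
    return 100 * sum(errors) / len(errors)

actual        = [100.0, 110.0, 120.0, 130.0]
stat_forecast = [ 98.0, 108.0, 123.0, 128.0]  # hypothetical statistical model
ml_forecast   = [ 90.0, 118.0, 112.0, 140.0]  # hypothetical ML model

# the forecast with the lower MAPE is the more accurate one
stat_mape = mape(actual, stat_forecast)
ml_mape = mape(actual, ml_forecast)
```

A lower MAPE means smaller forecast errors relative to the actual values, so comparing the two numbers per horizon is all the ranking in the article amounts to.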
Based on this article, I conclude that despite the current popularity of ML methods, they are not necessarily better than statistical methods at time series analysis. Before deciding to implement an ML model, we would therefore be well advised to first compare it with a statistical model, to establish whether the results of the ML model really are better. A more popular model is not always better than a less popular one, because popularity does not guarantee a better in- and out-of-sample fit. The same is true of complexity: a more complex model is not necessarily better than a simpler one. And even when a more complex model does perform better, we must ask whether the gain in performance sufficiently compensates for the loss of simplicity.
Above all, let’s not underestimate the value of implementing a model that can be understood by many people in the organization and whose results can be clearly interpreted.