The trend...

It was actually only 25 years ago, when I studied math and statistics, but it feels more like looking back on the Middle Ages. Although the practical side was often overshadowed by formulae, it was a great way of showing how things worked - even if you often ended up with rather vague statements about confidence levels and significance. You first collected data that you suspected could be related, subjected it to a well-defined test and, with a bit of luck, you could establish with some degree of certainty whether such a relationship was actually present - in other words, whether it was ‘statistically significant’.

On that basis, it was not only possible to explain what had happened in the past; within certain limits you could also predict what would happen in the future. The correlation between frosty pavements and the likelihood of elderly people suffering a hip fracture wouldn’t suddenly change. The same could be said of the financial world: share prices were at least partly attributable to economic variables, such as trends in prosperity and unemployment.
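
To make that classic recipe concrete - collect paired observations, apply a well-defined test, and only then use the fitted relationship to predict - here is a minimal sketch in Python. The original names no library or data, so SciPy is my choice, the frost and fracture figures are invented for illustration, and the 5% threshold is simply the conventional one.

```python
# A minimal sketch of the classic workflow: test whether two series are
# related, and only then use the relationship to predict. Data invented.
from scipy import stats

# Hypothetical yearly counts: days with frosty pavements, and hip
# fractures among the elderly, for the same ten winters.
frost_days = [12, 18, 9, 25, 30, 14, 21, 7, 28, 16]
fractures = [40, 52, 35, 66, 75, 44, 58, 30, 70, 48]

# Step 1: a well-defined test - Pearson correlation with a p-value.
r, p_value = stats.pearsonr(frost_days, fractures)
print(f"r = {r:.2f}, p-value = {p_value:.4f}")

# Step 2: if the relationship is 'statistically significant' at the
# conventional 5% level, fit a line and extrapolate with due caution.
if p_value < 0.05:
    fit = stats.linregress(frost_days, fractures)
    predicted = fit.intercept + fit.slope * 20  # a winter with 20 frost days
    print(f"expected fractures: {predicted:.0f}")
```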

There were also drawbacks to modelling. More often than not, a spanner would be thrown in the works. Once you started modelling financial data across time, you would find that there was insufficient data, or that it was outdated, unworkable or even worthless. For example, how can you predict price trends for Belize government bonds if they are not even traded in the first place? And what is there to say about a one-in-a-thousand-year scenario for the euro interest rate if the euro has only been in place for a couple of decades? But no need to panic: during a pseudo-scientific training in technical analysis I learned that ‘if all else fails’ you could always fall back on ‘your only friend’, the trend. In the real world, trend breaks were pretty rare, and correctly predicting one would probably be more luck than judgment anyway.

Several years and a few crises later, much has changed. Driven largely by technological advances - faster computers, huge amounts of data and better modelling techniques - many more and much better models are available. Behavioral analysis is now let loose on customer data, machine-learning techniques ensure that models automatically improve themselves, and our daily activities are subjected to robotic process automation. Nowadays, hordes of smart young people busily set up models in Python and R, and do it infinitely better than I was ever able to in the stone age with Pascal, Matlab and EViews. If I embrace agile and holacracy terminology, steering the process becomes almost redundant. And difficult too, given that so many people wear headphones nowadays and are quite indifferent to old-fashioned means of communication anyway.

It wasn’t that long ago that, as the database administrator, you’d be hard pushed to call yourself the coolest member of the project team; nowadays it’s a discipline that has claimed a crucial position in nearly every project. Data, I suppose, is now hot. On the other hand, as long as membership of the “Flat Earth Movement” continues to grow, doubt about the authenticity of the moon landing continues to increase and, flying in the face of statistics, childhood vaccination rates continue to fall, science still has some work to do.

Now that Zanders has been on this earth for 25 years, and I for double that, you’d be forgiven for wondering what the next 25 years will bring - and how we will be using models by then. Will they be self-improving? Will they still be needed at all, and will we be able to model the world without the simplifying assumptions that currently define a model as a model? And what role can a company like Zanders play in all this? The possibilities are infinite, and I am convinced that the demand for intelligent, advanced but understandable models will only increase. Assuming, of course, that we stay on good terms with that trend...

Enjoy reading this issue of Zanders Magazine.