M4 Forecasting Conference — Some Thoughts

Last week I attended the M4 Conference in New York City. This was a niche event focused on advances in the field of forecasting and the culmination of the 4th M-Competition, organized by renowned forecasting expert Professor Spyros Makridakis of the University of Nicosia.

My main interests in this area are those of a practitioner. I do not think enough time is spent educating businesses about the value that quantitative models can add to the forecasting process. I believe participating in this type of conference is a great way to expand knowledge and gain a better understanding of what is (and is not) possible. If we, as practitioners, remain actively curious in this area, our ability to creatively build powerful forecasting systems that enrich our businesses will be greatly improved.

Here are some of the big ideas from the conference and a few thoughts I’ve had since it ended.

Note: I’m not going to spend time reviewing the content of all the talks and panels. Most sessions were quite technical and if you’re interested, Ronald Richman has a great, detailed writeup here.

The Semantics (Stat vs ML)

One recurring discussion throughout the two days was around semantics: is a particular method “statistical” or “machine learning”? Classical statistical methods have been studied and applied in practice for decades. They are powerful and time tested. Machine learning methods, on the other hand, launched onto the scene in a big way over the past decade and have allowed researchers to explore entirely different approaches to time series forecasting. There are many technical differences between the two families, but I think the most generalizable way to differentiate them is that statistical methods tend to focus on a single time series (local), while machine learning methods have the ability to learn across many time series (global).
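To make the local-versus-global distinction concrete, here is a minimal toy sketch (my own illustration with made-up numbers, not a method from the competition). The “local” approach fits a separate trend to each series; the “global” approach pools lagged observations from all series and learns a single set of shared weights.

```python
import numpy as np

# Toy panel of three short series (hypothetical values, for illustration only).
series = {
    "sku_a": np.array([10, 12, 13, 15, 16, 18, 20, 21], dtype=float),
    "sku_b": np.array([5, 5, 6, 7, 7, 8, 9, 9], dtype=float),
    "sku_c": np.array([30, 29, 31, 33, 34, 36, 37, 39], dtype=float),
}

# Local (statistical-style): one model per series.
# A simple linear trend stands in for an ARIMA/ETS-type fit.
local_forecasts = {}
for name, y in series.items():
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)               # fit this series only
    local_forecasts[name] = intercept + slope * len(y)   # one step ahead

# Global (ML-style): pool lagged windows from every series and
# learn a single set of weights shared across all of them.
lags = 3
X, targets = [], []
for y in series.values():
    for i in range(lags, len(y)):
        X.append(y[i - lags:i])
        targets.append(y[i])
coef, *_ = np.linalg.lstsq(np.array(X), np.array(targets), rcond=None)

global_forecasts = {name: float(y[-lags:] @ coef) for name, y in series.items()}

print(local_forecasts)
print(global_forecasts)
```

The point is not the models themselves (both here are deliberately trivial) but where the information comes from: the local fit sees only its own history, while the global fit can borrow strength across series.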

The amount of time spent debating this topic was somewhat surprising to me. Classifying methods is clearly important to some research and academic crowds, but from a practical perspective, I’m not sure the debate is worth the time spent. Microsoft’s Jocelyn Barker hinted at the discussion’s limited value a few times, and got a laugh from the crowd with her comment that, “Most of what we talk about with machine learning are things like ‘maximum likelihood estimation’…which certainly sounds like a statistical problem to me.” Professor Tao Hong of UNC Charlotte followed suit, encouraging the audience not to be overly concerned with methods and semantics, but instead to focus on systems.

Panel Discussion: ML vs. Statistical Forecasting Methods. Jocelyn Barker (Microsoft), Chris Fry (Google), Yael Grushka-Cockayne (HBS), Michael Gilliland (SAS)

From a practical point of view, I’m not convinced the method type matters. There seems to be compelling evidence that both machine learning and statistical models will play a large role in applied quantitative forecasting in the foreseeable future.

Forecastability

“The bad news for Machine Learning is that if you are in a fat-tailed domain, you still have no hope.”

Professor Scott Armstrong of Wharton and scholar/philosopher Nassim Taleb gave the first two talks of the conference. Ironically, both urged a “proceed with caution” approach to quantitative forecasting.

Armstrong’s talk was titled “Data Models vs. Knowledge Models,” and he stressed the importance of thinking about the forecasting process as much as the model. This process- and systems-based approach to forecasting is hard to find fault with. Taleb’s criticism was narrower, focused largely on the perils of forecasting financial markets and the economy. Accurately forecasting these complex, dynamic systems is extremely difficult, and the stakes can be VERY high. He repeated stern warnings to any forecasters operating in fat-tailed domains: “The bad news for Machine Learning is that if you are in a fat-tailed domain, you still have no hope.”

This early level-setting was helpful in forcing the remaining discussion to be carried out with a healthy dose of skepticism. Understanding when forecasts can add value, and when they become a liability, is critical. Relying on quantitative models to predict financial markets is much more difficult than forecasting retail product demand, where the convexity of outcomes is (usually) much more muted.
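As a rough numerical illustration of the fat-tail problem (my own toy simulation, not something presented at the conference), compare how unstable a simple sample average becomes when the underlying distribution has heavy tails:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1_000, 200

# Thin-tailed world: normal draws. Fat-tailed world: Student-t with 2 degrees of freedom.
normal_means = [rng.normal(size=n).mean() for _ in range(trials)]
fat_means = [rng.standard_t(df=2, size=n).mean() for _ in range(trials)]

print("spread of sample means, thin-tailed:", round(float(np.std(normal_means)), 4))
print("spread of sample means, fat-tailed :", round(float(np.std(fat_means)), 4))
# The fat-tailed spread is several times larger: a handful of extreme
# observations dominate the estimate, so even a large sample tells you
# relatively little about what comes next.
```

Retail demand usually lives closer to the thin-tailed world; financial returns often do not, which is why the same forecasting toolkit can be valuable in one setting and a liability in the other.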

Industry Application

I could be wrong, but I didn’t get the sense that the M4 Competition or conference drew much interest from forecasting practitioners. To be honest, the depth and topic of most discussions would likely have appealed only to the geekiest of business planners. But despite the academic nature of the event, there was much to learn from a practical perspective. The creative ways that machine learning could be used to improve forecast outcomes were fascinating.

Despite the richness of forecasting research, I still sense a general hesitancy within many organizations to leverage even the most basic statistical techniques for forecasting. At the conference, there was no shortage of discussion around improving forecast model accuracy, but very little discussion on how to improve the accuracy of the forecasts practitioners are actually willing to submit! The question then becomes: what can be done to increase the willingness of practitioners to leverage the research that comes out of events like the M4 Competition? This is something I’ve spent a lot of time thinking about recently, and a topic worthy of a blog post of its own. There is no easy answer.

Conclusion

Personally, I thought the M4 Conference was incredibly interesting. The willingness of everybody in attendance to engage in discussion and thought-sharing made it an invaluable experience.

The involvement of companies like Google, Amazon, and Uber as sponsors of the M4 event will undoubtedly garner additional interest and increased visibility for quantitative forecasting in the years to come. To keep that research from falling short of its potential, I think it is important for the community to think deeply about how to effectively bridge the gap between research advancements and business applications. When implemented thoughtfully within a system, quantitative forecasts have the potential to add huge amounts of value to a business.

Kudos to the entire forecasting community for coming together for this event. At the end of the conference the M5 Competition was announced. Details were limited, but for the first time the competition will have a focus on causality. Exciting stuff! I’m looking forward to following along!
