Why Are You So Terrible at Predicting the Future?

Ideas for Improving Trust in Forecasting and Analytics

In his book, “The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t,” Nate Silver makes a fascinating observation: local weather forecasters deliberately overpredict rain.

Here’s how it works: If the weather forecaster tells you that there is a 10 percent chance of rain today, will you pack an umbrella? Probably not. How about 30 percent – or 50 percent? If you are like most people, it will take some number greater than 30 percent to get you rummaging through your closet for protection from the elements. But what happens on that one day in ten when it rains on the 10 percent day? We get angry and demand an apology from the weather forecaster: “You told us it wasn’t going to rain!” The weather forecaster sheepishly apologizes on the next telecast for the “miss.” The lesson, it seems, is that we did not really want to know the probability of rain – we wanted to know whether it was going to rain. We will trust our forecaster when predicted rain narrowly misses us, and distrust our forecaster when it rains and we were not expecting it. So, he lies to us just a little bit... and we like it that way.

This scenario plays out every day in our lives: we crave simple yes/no answers when the world is really giving us probabilities. Was my account hacked? Who is going to win the election? Do I have enough money to retire? In each of these cases, there are models and algorithms working to provide us guidance, but we don’t really want guidance. We want certainty. We find it annoying when our bank calls us with concerns about fraud because we are traveling to an unusual city, and we rail against the inaccuracy of the 2016 U.S. presidential polls, even though they were historically accurate according to this Pew Research piece. In fact, Nate Silver’s final prediction of the outcome of the 2016 U.S. presidential election on the fivethirtyeight.com website was that Hillary Clinton had a 71.4 percent chance of winning. That means that Donald Trump had nearly a 1 in 3 chance of victory. That’s not wrong. That’s statistics.
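To see why, consider calibration: a forecaster who says “10 percent” should see rain on roughly one in ten of those days, so being “wrong” on individual days is built into a correct forecast. Here is a minimal sketch in Python, using simulated rain data (the numbers are illustrative assumptions, not real forecasts):

```python
import random

random.seed(42)

# Simulate 1,000 days on which a well-calibrated forecaster says
# "10 percent chance of rain" and rain genuinely falls with p = 0.10.
forecast_p = 0.10
days = 1000
rainy_days = sum(random.random() < forecast_p for _ in range(days))

print(f"Forecast: {forecast_p:.0%} chance of rain")
print(f"Observed: rain on {rainy_days} of {days} days ({rainy_days / days:.0%})")
# The forecaster "missed" on roughly 100 individual days, yet the
# forecast itself was exactly right: 10 percent means rain about
# one day in ten, not "no rain."
```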

Let’s contrast this with models we use in our everyday lives that do not seem to irritate us when they are not quite right. Does anyone write angry letters to Jeff Bezos when Amazon recommends products that we don’t want? Does anyone throw anything at the television when Netflix recommends a movie that we are not interested in? I submit that the reason is anchored in the difference between what an authority promises will happen and what someone thinks we might want to happen. Consider credit card solicitations, university recruitment mailers or pop-up ads. They compete for our attention. They put us at the center of the prediction and give us the power of choice. This feels good. Contrast this with predictions about things beyond our control, from weather to politics to the economy. They make us feel out of control, and we crave expertise to re-establish that control.

But this control is just an illusion, and we often allow people to exploit our perceptions in the same way the weather forecaster does. In his seminal book, “Expert Political Judgment: How Good Is It? How Can We Know?,” Philip Tetlock, one of the world’s authorities on prediction, explains how we generally put our faith in the predictions of famous people, even though they do not predict with any more accuracy than anyone else, and how we rarely hold them accountable when they are wrong. In fact, he found a counterintuitive correlation in his study of predictions from 1984 to 2003: the more confident people were in their predictions, the more likely they were to be wrong. Let that sink in for a moment. We trust confidence, and we trust “expertise,” but the very confidence that comes with expertise is actually less successful at predicting the future. (For more on this fascinating insight, I recommend this New Yorker review of his book, the More or Less podcast episode in which Tim Harford interviews Philip Tetlock on this topic, or this Data Brilliant podcast, in which I interview Tim Harford on the psychology of prediction.)

So, it would seem that we have a trust gap, anchored in an inflated sense of our own expertise and that of famous talking heads, and compounded by the feeling that we have been let down when predictions don’t “come true.” Certainly, our long-term goal should be to dramatically improve data literacy so as to avoid these pitfalls, but until then, I recommend one simple question that should anchor how we deliver analytics and build trust: From the customer’s point of view, is this analytic here for me, or is it being done to me? Here are three principles for delivering analytics that provide value and reinforce trust:

1. Frame Analytics in the Context of Decisions, not Statistics

What Google, Amazon and Netflix have discovered is that AI can make people more productive by offering up easily consumable, multiple-choice options. Although it’s easy to think that this applies only to consumer retail scenarios, that is far from the case. Algorithmic trading enables high-impact decisions at scale and speed. The weather forecaster is prompting decisions about your commuting time, your clothing selection and, perhaps, even your physical safety. It’s important to frame other predictions in this context. Presidential election predictions? An opportunity to make political contributions, start a rally or speak to your children about politics. Sporting event predictions? An opportunity to change your fantasy line-up or make a wager in Las Vegas. Stock market guidance? You get the picture.
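As a minimal sketch of this reframing, here is how a raw probability might be turned into a consumable choice (the thresholds and wording are illustrative assumptions, not anyone’s production logic):

```python
# Turn a raw probability into a decision-framed recommendation.
# Thresholds and messages are illustrative assumptions only.
def frame_as_decision(rain_probability: float) -> str:
    if rain_probability >= 0.60:
        return "Take an umbrella and allow extra commute time."
    if rain_probability >= 0.30:
        return "Consider an umbrella; check the radar before leaving."
    return "No umbrella needed today."

for p in (0.10, 0.40, 0.80):
    print(f"{p:.0%} chance of rain -> {frame_as_decision(p)}")
```

The statistic is the same either way; what changes is that the consumer is handed a choice to act on rather than a number to interpret.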

What seems to be happening of late is that an increasing number of analytics are presented that are not action oriented. If you watch an NFL game, you may see an Amazon AWS commercial that outlines what a receiver’s catch probability was on a particular play. Even to the analytics literate, it seems like an odd show-off maneuver that produces little value. Similarly, politically oriented economic prognostications and grand pronouncements of certain impending doom or bonanza are disconnected from people’s everyday experiences. They are beyond people’s ability to affect, and they slowly sap the energy of the population and eat away at our trust. Not only should we resist the urge to offer these types of pronouncements, but we should look for ways to provide meaningful alternatives.

2. Don’t Say What WILL Happen. Explain Potential Outcomes and Impacts

One thing that has proven effective in the forecasting of natural disasters is a focus on the potential damage and the uncertainty. We buy insurance largely because of what might happen, not because of what is promised. We may similarly seek shelter when a cyclone or hurricane is predicted, based on the chance that it might strike. Does it matter whether the risk of a hurricane hitting my area is 30 percent or 60 percent? No: both warrant evacuation based on the impact of the event. Effective risk modeling techniques have always balanced the likelihood of an event against its impact. This careful balance creates trust. Even something as trivial as a next-best-product offer allows users to imagine both having and not having a particular product. They can see it. They can act on it. They are thankful for having been given the option.
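A minimal sketch of that likelihood-versus-impact balance, using a toy expected-impact calculation (every number here is an illustrative assumption, not actuarial guidance):

```python
# A toy risk score: likelihood of the event times the cost if it occurs.
def expected_impact(probability: float, impact_cost: float) -> float:
    return probability * impact_cost

evacuation_cost = 2_000      # hypothetical cost of evacuating
catastrophe_cost = 250_000   # hypothetical loss if you stay and it hits

# Whether the hurricane risk is 30% or 60%, the expected impact
# dwarfs the cost of evacuating, so the decision is the same.
for p in (0.30, 0.60):
    risk = expected_impact(p, catastrophe_cost)
    decision = "evacuate" if risk > evacuation_cost else "stay"
    print(f"P(hit) = {p:.0%}: expected impact ${risk:,.0f} -> {decision}")
```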

3. Rather Than Using Mathematical Probabilities, Explain Factors That Affect the Outcome

Think of the weather forecaster explaining the lines of the fronts and how they affect the rain, or of political models explaining the impact of the economy or incumbency. These help the lay person understand what is influencing the forecast and build trust through transparency. The data scientist would call them features of the model, but the lay person simply sees them as a way for the analyst to show their work and demonstrate why they believe what they believe. Amazon uses the deliciously psychological phrase “people who have bought this have also bought…” to convey a simple (if oversimplified) explanation of the why behind the analytic. There is a reason that people are concerned about “black box” AI and ethical AI. They don’t understand. They don’t trust. Providing clarity on how models are built can go a long way toward addressing this issue.
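To make that concrete, here is a minimal sketch of “showing your work” by translating a model’s internals into plain-language factors. It uses scikit-learn’s feature importances on made-up data; the feature names and the data are assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical weather features influencing a rain forecast.
feature_names = ["front_distance_km", "humidity_pct", "pressure_hpa"]
X = rng.normal(size=(500, 3))
# Synthetic target: mostly driven by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Translate model internals into the lay person's "why."
for name, importance in sorted(
        zip(feature_names, model.feature_importances_),
        key=lambda pair: -pair[1]):
    print(f"{name}: drives about {importance:.0%} of the forecast")
```

Feature importances do not fully explain an individual prediction, but even this rough “which factors matter most” view is the analytic equivalent of the forecaster pointing at the front lines on the map.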

Until people come to understand data and analytics more deeply, they simply want to maintain a degree of control. They fear the robots taking over. By giving the power of choice, the rationale behind a decision and a clear understanding of the risk and impact of an event, analytics modelers can build trust and, ultimately, refine analytics more effectively to drive increasing value for the organization.

