When Predictions Fail: Crash Course Statistics #43

CrashCourse
2 Jan 2019 · 10:39

TL;DR: The video discusses the challenges of making accurate predictions in three areas: financial markets, earthquakes, and elections. It examines why experts failed to predict the 2008 financial crisis and the outcome of the 2016 US election, attributing the misses to flawed models that left out key factors. Accurate earthquake prediction remains elusive because of the complexity involved and the lack of long-term data. The video stresses that even high-quality prediction models come with uncertainty, and that low-probability events will still occasionally occur. It concludes that recognizing the limits of prediction is as vital as striving to improve it.

Takeaways
  • Prediction is useful for understanding the world and planning for the future, but predictions are still just educated guesses based on available information.
  • The 2008 financial crisis was worsened by investors overestimating the independence of loan failures, and most economists failed to predict the crisis.
  • Accurately predicting an earthquake requires knowing its location, magnitude, and timing, which is currently very difficult.
  • Earthquake forecasting focuses on estimated probabilities over time rather than exact predictions.
  • The 2016 US presidential election surprised many because predictions had put Trump's chances very low.
  • Low probability does not equal impossibility - unlikely events can and do happen (see the simulation sketch after this list).
  • Election prediction errors may have stemmed from biases in polling data and overweighting some respondents.
  • Prediction requires large amounts of accurate, unbiased data and models that account for the important variables.
  • We should keep trying to improve predictions but also recognize what we cannot accurately predict.
  • Knowing the limits of prediction can be as valuable as making accurate predictions.
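A minimal simulation sketch of the "low probability is not impossible" takeaway (not from the video; the 10,000-trial count and the fixed seed are arbitrary choices for illustration):

```python
import random

random.seed(42)

TRIALS = 10_000
P_EVENT = 0.01  # a "1 in 100" event

# Count how often the unlikely event actually occurs across many trials.
hits = sum(1 for _ in range(TRIALS) if random.random() < P_EVENT)

print(f"{hits} occurrences in {TRIALS} trials "
      f"(about {TRIALS * P_EVENT:.0f} expected)")
```

On a typical run the event turns up roughly a hundred times, which is what "unlikely but not impossible" looks like in practice.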
Q & A
  • What three things are needed to accurately predict an earthquake?

    -To accurately predict an earthquake you would need three pieces of information: its location, magnitude, and time.

  • Why was it hard for economists to predict the 2008 financial crisis?

    -Economists struggled to predict the 2008 financial crisis for several reasons: investors overestimated the independence of loan failures, models did not account for irrational human behavior, and prevailing models underplayed the role of banks.

  • What is the difference between earthquake forecasting and earthquake prediction?

    -Earthquake forecasting focuses on the probabilities of earthquakes over longer periods of time and can help predict likely effects and damage. Earthquake prediction tries to specify the exact location, magnitude and timing of an earthquake.

  • What poll biases may have contributed to underestimating Donald Trump's chances in 2016?

    -Biases like over-representing college-educated voters, who tended to support Clinton, and under-representing less educated voters, who tended to support Trump, may have contributed to underestimating his chances.

  • What does it mean when a prediction says there is a 1 in 100 chance of something happening?

    -A 1 in 100 chance means that, statistically, the event is expected to happen 1 out of every 100 times. So it is unlikely but not impossible.

  • How did the incorrect assumption about loan failure independence contribute to the financial crisis?

    -Banks assumed that the failure of one mortgage loan would be independent of other loans failing. But when the housing market declined, many loans started failing at the same time (a sketch contrasting independent and correlated defaults follows this Q&A list).

  • Why can unlikely events still happen even with good prediction models?

    -Even good predictions have margins of error and communicate the chance of various outcomes. So a low probability does not make something impossible - unlikely events can still occasionally occur.

  • What two things are needed to make accurate predictions?

    -To make accurate predictions, you need good, unbiased data in large quantities, and an accurate model that accounts for the most important variables.

  • What value is there in knowing what we cannot accurately predict?

    -There is value in recognizing our limitations, so we don't rely too heavily on inaccurate predictions. Knowing what we cannot predict well helps us focus our efforts.

  • What percentage chance did Nate Silver's model give Trump of winning the 2016 election?

    -Nate Silver's FiveThirtyEight model gave Donald Trump roughly a 3-in-10 (about 30%) chance of winning the 2016 election, much higher than some other models did.
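The independence sketch referenced above, with made-up default probabilities (an illustration of the statistical point, not the banks' actual risk models): when one housing downturn raises every loan's default risk at once, the worst year looks far worse than the independent model suggests.

```python
import random

random.seed(0)

N_LOANS = 1_000
BASE_P = 0.02       # per-loan default probability in normal times (made up)
DOWNTURN_P = 0.20   # per-loan default probability during a housing downturn (made up)
P_DOWNTURN = 0.05   # chance a downturn hits in a given year (made up)

def defaults_in_year(shared_shock: bool) -> int:
    """Simulate one year and return the number of loans that default."""
    if shared_shock and random.random() < P_DOWNTURN:
        p = DOWNTURN_P   # one shared event raises every loan's risk together
    else:
        p = BASE_P       # otherwise each loan defaults independently at the base rate
    return sum(1 for _ in range(N_LOANS) if random.random() < p)

def worst_year(shared_shock: bool, years: int = 200) -> int:
    return max(defaults_in_year(shared_shock) for _ in range(years))

print("Worst year, fully independent defaults:", worst_year(shared_shock=False))
print("Worst year, with shared housing shock :", worst_year(shared_shock=True))
```

With independence, even the worst of 200 simulated years sees only a few dozen defaults; with the shared shock, occasional years see roughly 200, the kind of tail risk the independence assumption hides.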

Outlines
00:00
Intro to Predictions and Models

This opening segment introduces the video's focus on using statistics to make predictions about the future. It mentions prediction examples like sports outcomes and stock performance, and notes that examining past failed predictions can provide insight into how to improve models.

05:00
Banks and the 2008 Financial Crisis

This segment examines two issues related to predictions and the 2008 financial crisis: 1) overestimating the independence of loan failures, and 2) economists not foreseeing the crisis. It provides background on risky lending practices and the mistaken assumption that mortgage defaults would be independent events, then discusses how prevailing economic models failed to predict the crisis and the recession that followed.

Keywords
prediction
The video focuses heavily on using statistics to make predictions about future events, including financial markets, earthquakes, and election results. Accurately predicting these events is difficult and complex. As the video states, 'Prediction isn't easy. Well making bad predictions is easy.' Good predictions require unbiased, high-quality data and models that account for the most important variables.
margins of error
When making percentage predictions, margins of error communicate the uncertainty. For example, if a candidate is predicted to get 54% +/- 2% of the vote, experts expect them to get 54% but wouldn't be surprised by anything from 52% to 56%. This helps quantify the uncertainty in the prediction (a small worked example follows this keyword list).
low probability events
Just because an event has a low predicted probability, like 1 in 100, doesn't make it impossible. Unlikely events do happen. As the video states about Trump's election chances, 'low probabilities don't equal impossible events.'
non-response bias
Biases in data collection can skew predictions. Non-response bias arises when certain groups are less likely to answer polls; in the 2016 election, less educated voters, who tended to support Trump, were under-represented, while college-educated voters, who favored Clinton, were over-sampled, which skewed the predictions.
correlation vs causation
Predictions rely on correlating certain variables with outcomes. But correlation does not prove causation, so predictions can fail if they rest on false causal assumptions.
outliers
Extreme outlier events can also make predictions fail. With limited historical data, rare or unprecedented events may not be accounted for in prediction models.
black swan events
The video discusses events like the 2008 financial crisis and Trump's election as surprising 'black swans' that models failed to predict. Black swans are outlier events beyond regular expectations.
confirmation bias
People often selectively focus on data that confirms their existing beliefs. This confirmation bias can hurt predictions if it leads to poor model assumptions.
unknown unknowns
No matter how much data we have or how good our models are, there may always be 'unknown unknowns': important variables that were unforeseen and not accounted for in predictions.
hindsight bias
After surprising events occur, it's easy to see them as inevitable in hindsight. But predicting events beforehand is much harder, and hindsight bias makes past predictions look worse than they were.
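This is the small worked example referenced under the margins of error keyword above, using the standard normal-approximation formula for a proportion. The 54% share and the sample size of 2,400 are hypothetical numbers chosen to reproduce the +/- 2% figure:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

p_hat = 0.54   # candidate's share in the poll (hypothetical)
n = 2_400      # number of respondents (hypothetical)

moe = margin_of_error(p_hat, n)
print(f"{p_hat:.0%} +/- {moe:.1%} -> interval {p_hat - moe:.1%} to {p_hat + moe:.1%}")
```

This prints roughly 54% +/- 2.0%, i.e. an interval of about 52% to 56%, matching the keyword's example.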
Highlights

Prediction is a huge part of how modern statistical analysis is used, and it's helped us make improvements to our lives. Big AND small.

Many investors overestimated the independence of loan failures. They didn't take into account that if the then-overvalued housing market and subsequently the economy began to crumble, the probability of loans going unpaid would shoot way up.

There was a global recession that most economists' models hadn't predicted.

Prediction depends a lot on whether or not you have enough data available. But it also depends on what your model deems as "important."

We're not bad at earthquake forecasting even if we struggle with accurate earthquake prediction.

To predict a magnitude 9 earthquake, we'd need to look at data on other similar earthquakes. But there just isn't that much out there.
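With so little magnitude-9 data, earthquake forecasts lean on simple probabilistic models. A minimal sketch, assuming a Poisson process and a made-up average recurrence interval of 500 years (neither assumption comes from the video):

```python
import math

def chance_of_quake(window_years: float, recurrence_years: float) -> float:
    """Probability of at least one event in the window, assuming a Poisson process."""
    return 1 - math.exp(-window_years / recurrence_years)

# Hypothetical fault: one magnitude-9 quake every ~500 years on average.
print(f"Chance within 30 years:  {chance_of_quake(30, 500):.1%}")
print(f"Chance within 100 years: {chance_of_quake(100, 500):.1%}")
```

This is the shape of a forecast rather than a prediction: a probability over a window of time, not a date.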

It's possible for predictions to fail without models being bad.

If a meticulously curated prediction gives a 1 in 100 chance for a candidate to win, and that candidate wins, it doesn't mean that the prediction was wrong.

Many who have done post-mortems on the 2016 election polls and predictions attribute some blame to biases in the polls themselves.
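One standard remedy for the kind of education-level bias described above is to reweight respondents so the sample matches the electorate's known demographics. A minimal sketch with made-up shares (not the 2016 pollsters' actual adjustments):

```python
# group: (share of poll respondents, share of electorate, support for candidate A)
# All numbers below are hypothetical, chosen only to show the mechanics.
sample = {
    "college":    (0.60, 0.40, 0.55),
    "no_college": (0.40, 0.60, 0.42),
}

# Unweighted estimate: each respondent counts equally, so the over-sampled
# college-educated group pulls the estimate toward its preference.
raw = sum(poll_share * support for poll_share, _, support in sample.values())

# Weighted estimate: scale each group by its true share of the electorate.
weighted = sum(elect_share * support for _, elect_share, support in sample.values())

print(f"Unweighted estimate of candidate A support: {raw:.1%}")
print(f"Education-weighted estimate:                {weighted:.1%}")
```

The unweighted figure comes out near 49.8% while the reweighted one drops to about 47.2%, showing how a skewed sample can overstate a candidate's support.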

While we shouldn't stop trying to make good predictions, there's wisdom in recognizing that we won't always be able to get it right.

First, we need good, accurate, and unbiased data. And lots of it.

And second, we need a good model. One that takes into account all the important variables.

There's great value in knowing what we can and can't predict.

Knowing what we can't accurately predict may be just as important as making accurate predictions.

Making good predictions is hard. And even good predictions can be hard to interpret.
