The polls missed Donald Trump's election. Individual polls missed, at the state level and nationally (though national polls weren't far off). So did aggregated polls. So did poll-based forecasts such as ours. And so did exit polls.
The miss wasn't unprecedented or even, these days, all that unusual. Polls have missed recent elections in the U.S. and abroad by margins at least as big. Every poll, and every prediction based on it, is probabilistic in nature: There's always a chance the leader loses. And Clinton probably didn't even lose the national popular vote; she just didn't win it by as much as polls suggested. But Tuesday's miss was an important one because Clinton appeared to lead by a margin small enough that it might just have been polling error. That turned out to be mostly true: true enough for her to lose in the Electoral College, and for Democrats to fall far short of taking control of the Senate.
It will take a while to figure out exactly why the polls missed. Reviews by pollsters and their professional organizations can take months. "The polls were largely bad, including mine," Patrick Murray, director of the Monmouth University Polling Institute, wrote us in an email. "But if anyone thinks they have the answer right now, they are just guessing."
We wrote before the election that a polling error of 2 to 3 percentage points is normal these days. Before we dive into the details of what happened Tuesday, and why it happened, let's step back and look at the different kinds of polling error. All of them are important because all of them were present in this result.
Every poll has error, some from statistical noise and some from factors more difficult to quantify, such as nonresponse bias.
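The statistical-noise component, at least, is easy to quantify. As a rough illustration (a textbook formula for a simple random sample, not any pollster's actual methodology), the sampling margin of error can be sketched as:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% sampling margin of error for a proportion p with sample size n.

    Textbook simple-random-sample formula; real polls use weighting and
    design effects, so their true error is larger than this.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person poll showing a candidate at 50% has roughly a 3-point
# margin of error from sampling alone:
print(round(100 * margin_of_error(0.5, 1000), 1))  # prints 3.1
```

Nonresponse bias, by contrast, doesn't shrink as the sample grows, which is why quadrupling a poll's sample size only halves the noise while leaving the harder-to-quantify error untouched.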
Poll-based forecasts such as ours attempt to reduce error by combining many different polls and accounting for their quality and lean. In states with a weaker batch of polls (not enough of them, not recent enough, or not by good-enough pollsters), we'll see more error. But polls in all states are vulnerable to systematic errors: underestimating the proportion of voters who are white, say, or failing to get supporters of one candidate to respond with the same enthusiasm as supporters of his opponent. It's possible for polls to be wrong in many states but not in the same direction. These errors could then cancel each other out, or not matter at all if they're smaller than a candidate's winning margin.
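A minimal sketch of that kind of adjusted averaging, with made-up numbers and stand-in parameters (not FiveThirtyEight's actual model), might look like:

```python
# Toy poll aggregation -- NOT FiveThirtyEight's actual model. Each poll gets
# a weight (a stand-in for pollster quality and recency) and a house-effect
# adjustment (a stand-in for the pollster's estimated lean), which is
# subtracted from its reported margin before averaging.

def adjusted_average(polls):
    """polls: list of dicts with 'margin', 'weight' and 'lean' keys,
    all in percentage points (margin is the leader's edge)."""
    total_weight = sum(p["weight"] for p in polls)
    return sum((p["margin"] - p["lean"]) * p["weight"] for p in polls) / total_weight

polls = [
    {"margin": 4.0, "weight": 1.0, "lean": 1.0},   # high-quality poll, leans +1 toward the leader
    {"margin": 1.0, "weight": 0.5, "lean": 0.0},   # older poll, down-weighted, no estimated lean
    {"margin": 3.0, "weight": 0.8, "lean": -0.5},  # poll that leans against the leader
]
print(round(adjusted_average(polls), 2))  # prints 2.74
```

The crucial point from the paragraph above is that no amount of this averaging helps when every input poll is biased in the same direction, as happened Tuesday.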
But, more often, state polls and the forecasts based on them miss in the same direction. That's a more systematic polling error, indicating that pollsters were struggling with the same challenges no matter where they were polling or their particular methodology. That also shows up in the plentiful national polls, which we use to adjust our state polls.
Errors of all of those types added up to Tuesday's result. Individual polls were wrong. Aggregated, they missed in individual states, including in many swing states. National polls were off in the same direction: Polls overstated Clinton's lead over Trump. And her true lead wasn't enough to overcome her weak position in the Electoral College.
While the errors were nationwide, they were spread unevenly. The larger a state's share of whites without college degrees, the more Trump outperformed his FiveThirtyEight polls-only adjusted polling average, suggesting that the polls underestimated his support among that group. And the bigger the lead we forecast for Trump, the more he outperformed his polls. In states won by Trump, the polls missed by an average of 7.4 percentage points (in either direction); in Clinton states, they missed by an average of 3.7 points. It's typical for polls to miss in states that aren't close, though. The most important concentration of polling errors was regional: Polls understated Trump's margin by 4 points or more in a group of Midwestern states that he was expected to mostly lose but mostly won: Iowa, Ohio, Pennsylvania, Michigan, Wisconsin and Minnesota.
Trump mostly outperformed his swing state polls

FiveThirtyEight polls-only model adjusted polling average in states to watch, in percentage points; positive margins favor Trump. Election results as of Nov. 9 at 1:45 p.m. EST.

| STATE | AVG. POLLING MARGIN | ELECTION RESULT | OVERPERFORMANCE |
|---|---|---|---|
| Utah | +9.9 | +18.4 | +8.5 |
| Ohio | +2.0 | +8.6 | +6.6 |
| Wisconsin | -5.4 | +1.0 | +6.4 |
| Iowa | +3.4 | +9.6 | +6.2 |
| Pennsylvania | -3.7 | +1.2 | +4.9 |
| Minnesota | -5.9 | -1.4 | +4.5 |
| North Carolina | -0.7 | +3.8 | +4.5 |
| Michigan | -4.0 | +0.3 | +4.3 |
| Maine | -6.9 | -3.0 | +3.9 |
| New Hampshire | -3.5 | -0.2 | +3.3 |
| Arizona | +2.4 | +4.4 | +2.0 |
| Florida | -0.6 | +1.3 | +1.9 |
| Colorado | -3.8 | -2.1 | +1.7 |
| Georgia | +4.0 | +5.7 | +1.7 |
| Virginia | -5.4 | -4.7 | +0.7 |
| Nevada | -0.7 | -2.4 | -1.7 |
| New Mexico | -5.3 | -8.3 | -3.0 |

Source: Associated Press
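The overperformance column is simply the election-result margin minus the average polling margin, with positive numbers favoring Trump. A quick sketch checking a few rows (figures copied from the table above):

```python
# Verify the table's arithmetic for a handful of states: overperformance
# equals the result margin minus the average polling margin (all figures
# in percentage points, positive numbers favoring Trump).
rows = [
    ("Utah", 9.9, 18.4), ("Ohio", 2.0, 8.6), ("Wisconsin", -5.4, 1.0),
    ("Iowa", 3.4, 9.6), ("Pennsylvania", -3.7, 1.2), ("Minnesota", -5.9, -1.4),
]
for state, polled, result in rows:
    print(f"{state}: {result - polled:+.1f}")
```

Note that Trump overperformed even in states he lost (Minnesota, Maine, New Hampshire): the sign of the error was the same almost everywhere, which is what makes it systematic.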
There's a lot we still don't know. We don't yet have final margins in many states; we're using ones from Wednesday afternoon. We also don't know yet whether this miss was really due to systematic problems among pollsters, as opposed to shifts toward Trump after their last polls ended (though polls showed Clinton gaining in the final days, not Trump).
Pollsters will need weeks or months to sort through what happened, and some had bigger misses than others. Some also were finished polling before the full effects of FBI Director James Comey's letter to Congress could be felt.
We emailed dozens of pollsters (the same group we've polled regularly since 2014 about their work) early Wednesday for their first impressions. Nearly 20 got back to us by early afternoon.
"We may be looking at a 4-point or so national miss, which as noted in the past by FiveThirtyEight is not an insane level of error, but it is real error and the public's right to question polls is justified," said Nick Gourevitch of Global Strategy Group.
Several pollsters rejected the idea that Trump voters were too shy to tell pollsters whom they were supporting. But James Lee of Susquehanna Polling & Research Inc. said his firm combined live-interview and automated-dialer calls, and Trump did better when voters were sharing their voting intention with a recorded voice rather than a live one.
Women who voted for Trump might have been especially reluctant to tell pollsters, said David Paleologos of Suffolk University. The USC Dornsife/Los Angeles Times poll corroborated that: Women who said they backed Trump were particularly unlikely to say they would be comfortable talking to a pollster about their vote.
Gourevitch offered a theory for why polls underestimated Trump's support: that some percentage of the Trump vote is "distrustful of institutions and distrustful of poll calls."
Pollsters also cited lower-than-expected turnout, particularly in the Midwest. "Democrats had a turnout problem," Gourevitch said, and therefore so did pollsters. "The turnout models appear to have been badly off in many states," said Matt Towery of Opinion Savvy.
It also looks as if Trump pulled late support from many Republican voters who had been undecided or were supporting a third-party candidate. Libertarian candidate Gary Johnson's recent decline coincided with Trump's gains in the polls.
Pollsters, and the media companies whose dwindling budgets have left them commissioning fewer polls, have to decide where to go from here. "Traditional methods are not in crisis, just expensive," said Barbara Carvalho of Marist College, whose final poll of the race showed Clinton leading by 1 point, in line with her current lead. "Few want to pay for scientific polling."
Berwood Yost of Franklin & Marshall College said he wants to see polling get more comfortable with uncertainty. "The incentives now favor offering a single number that looks similar to other polls instead of really trying to report on the many possible campaign elements that could affect the outcome," Yost said. "Certainty is rewarded, it seems."