Did Survey Software Fail in the Recent Election?

Posted on: October 30, 2016

Election polls are conducted to help predict who will win an election. Did survey software fail us this year?

As we all know by now, the large majority of national polls predicted a Clinton win. While everyone knows that Trump won the election, many don't realize that the national polls did in fact closely predict the thing they were designed to measure: the national vote totals. The final polls just before the election tended to show Clinton with about a 3-point lead, and the popular vote totals show Clinton actually received about 1.7 percentage points more of the popular vote than Trump. So the real difference between the averages of the major polls and the vote totals was only about 1.3 points. That is actually a very good prediction.
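
To make that arithmetic concrete, here is a minimal sketch in Python; the individual poll leads below are hypothetical round numbers chosen to average about 3 points, not any organization's actual figures:

    # Illustrative arithmetic only: these "polls" are hypothetical round
    # numbers, not figures from any real polling organization.
    final_polls = [3.0, 4.0, 2.0, 3.0]  # hypothetical final-week Clinton leads, in points

    poll_average = sum(final_polls) / len(final_polls)  # about 3.0 points
    actual_margin = 1.7                                 # Clinton's popular-vote lead

    polling_error = poll_average - actual_margin
    print(f"Poll average: {poll_average:+.1f}  Actual margin: {actual_margin:+.1f}  "
          f"Error: {polling_error:.1f} points")         # Error: 1.3 points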

That sort of small discrepancy definitely does not indicate a problem with any of the survey software used by those polling organizations. It is well within the margin of error expected for the sample sizes used in these polls. For example, the plus-or-minus figure that should be attached to a poll of 1,000 people is about 3 percentage points, while the figure for a poll of 1,500 people is about 2.5. These are common sample sizes in national polls. In fact, some media outlets use smaller samples, which results in an even larger margin of error.
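
For readers who want the formula behind those plus-or-minus figures: the standard 95% margin of error for a simple random sample of size n is z * sqrt(p * (1 - p) / n). A minimal sketch, assuming the worst case p = 0.5 and ignoring design effects real pollsters adjust for:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """Approximate 95% margin of error for a simple random sample of size n.

        Uses the worst-case p = 0.5; treat this as a rough figure, since
        real polls also account for weighting and design effects.
        """
        return z * math.sqrt(p * (1 - p) / n)

    for n in (600, 1000, 1500):
        print(f"n = {n:5d}: +/- {margin_of_error(n) * 100:.1f} points")
    # n =   600: +/- 4.0 points
    # n =  1000: +/- 3.1 points
    # n =  1500: +/- 2.5 points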

While most people only pay attention to the national poll results, we all know that the national vote totals don't determine the winner. That is decided on a state-by-state basis, and the polls in some states were off by bigger margins than the national polls. Sometimes this was because of smaller sample sizes. In most cases, however, the problem was not in the basic methodology or in the programs used; it was in determining who would actually vote. When interviewing someone before an election, you cannot be sure whether they will actually vote, even if they say they will. So most polling organizations use likely-voter models to estimate a person's likelihood of voting based on demographic and other criteria. They also weight the results so that the relative percentages of different kinds of people in their sample reflect what they believe will be the percentages of those groups in the actual vote count. In some cases, the weights chosen in advance did not reflect the actual percentages of groups turning out. This was likely the biggest source of error in those polls.
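
As a rough sketch of why those turnout weights matter, here is a toy example; the groups, support levels, and turnout shares are all invented for illustration and do not describe any real poll:

    # A minimal sketch of turnout weighting, with made-up groups, support
    # levels, and turnout shares purely for illustration.
    assumed_turnout = {"urban": 0.40, "suburban": 0.35, "rural": 0.25}  # weights chosen in advance
    actual_turnout  = {"urban": 0.36, "suburban": 0.34, "rural": 0.30}  # what actually happened

    # Hypothetical support for Candidate A within each group.
    support = {"urban": 0.60, "suburban": 0.50, "rural": 0.35}

    def weighted_estimate(turnout_shares):
        # Each group's support is weighted by its share of the electorate.
        return sum(turnout_shares[g] * support[g] for g in support)

    print(f"With assumed turnout: {weighted_estimate(assumed_turnout):.1%}")  # 50.2%
    print(f"With actual turnout:  {weighted_estimate(actual_turnout):.1%}")   # 49.1%
    # A few points' shift in which groups turn out moves the estimate by
    # more than a point, with no flaw in the survey software itself.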

And lastly, one must remember that all surveys are snapshots in time. Some people change their minds or make up their minds at the last minute, and if they do so after answering a poll, the poll will not reflect their new choice. There is evidence that people who decided in the last few days broke for Trump at a higher rate than people who decided earlier. This means a poll that might have been accurate a week before the election was no longer accurate on Election Day.