Improvement in Reporting of Polls

Reporting Partisan Polls as News

Pollsters who work for candidates and parties are usually committed to helping their clients win. Their first loyalties are to their clients, and rightly so.

Any decision to release data from these private, partisan polls is based on the assumption that their release and publication will help the candidate, not that it will inform the media or enlighten the public. All too often the release of data from partisan polls misleads rather than informs. Sometimes journalists report partisan polls without identifying them as such.

Occasionally, inaccurate or even fictional data are involved. There have been several credible reports of American media reporting alleged findings from partisan polls that simply did not exist.

But data can be both accurate and misleading. In any survey there may be 20 findings which make the pollster's candidate look bad and two which make him look good. By releasing only the two good results and not the other 20 (sometimes called "cherry picking" the data), real poll data can be used to give a wholly misleading impression of a candidate's standing and strength.

Journalists should generally not report poll data leaked to them by partisan pollsters. If they feel they must do so, they should label it clearly as partisan and caution readers against accepting it at face value.

Reporting Polls as Predictions When They Are Not

Some pollsters say that no polls, except perhaps exit polls, should ever be described as predictions because they only measure voting intentions some time before people vote. A survey may be completely accurate at the time it is conducted but still get the election wrong - because intentions change and do not translate into actual voting behavior. This makes good sense.

However, most pollsters think of their final pre-election polls as predictions and are willing to be judged by the closeness of their final polls to the results. The only reason pollsters are so bold is that, historically, the real margins of error between their final polls and the results, especially for presidential elections, have been remarkably small. An analysis of all presidential elections since 1948 shows that the polls' average error on the major candidates has been 1.9 percentage points.
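
To make the measure concrete: the "average error on the major candidates" is the average absolute gap between each major candidate's share in a final poll and that candidate's actual share of the vote. A minimal Python sketch, using made-up illustrative numbers rather than any actual poll or election figures:

    # Hypothetical final-poll shares and election results, in percentage
    # points (illustrative only -- NOT actual NCPP data).
    polls   = {"Candidate A": 47.0, "Candidate B": 45.0}
    results = {"Candidate A": 49.1, "Candidate B": 46.2}

    # Candidate error: average absolute poll-vs-result gap per major candidate.
    errors = [abs(polls[c] - results[c]) for c in polls]
    print(round(sum(errors) / len(errors), 2))  # 1.65 points in this example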

Even if we accept, as we usually do, that our final polls are predictions, it is essential for journalists to avoid describing any polls other than the final pre-election polls as predictions. Polls taken before and during an election campaign are no more predictions of the result than are photographs of the leading horses taken at various stages of a horse race. The only photographs of a horse race that always get the winner right are taken at the finish line.

Unfortunately, the media have often described polls as "predicting" a particular result, when they were only intended to show which candidates were ahead well before the finish.

Grandiose Conclusions Based on Small Samples

To many journalists and editors, the newsworthiness of a poll finding is greatly increased if it is unexpected. If it is inaccurate, it may well be more surprising, i.e., newsworthy, than if it is accurate. Unsurprisingly, some of the most unexpected, i.e., newsworthy, results come from polls based on very small samples. The sampling error on a sample of 200 - that is only the possible error due to random sampling, not all the other substantial causes of error - is ±7 percentage points at the 95% confidence level. So the possibility that a poll of this size will be seriously misleading is substantial.
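
The ±7 figure comes from the standard simple-random-sampling formula, z · sqrt(p(1 − p)/n), evaluated at the worst case p = 0.5. A minimal Python sketch (the function name is ours):

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # 95% sampling error, in percentage points, for a simple random
        # sample of size n; p = 0.5 is the worst (largest-error) case.
        return z * math.sqrt(p * (1 - p) / n) * 100

    print(round(margin_of_error(200), 1))  # 6.9 -- about +/-7 points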

The dangers of using polls based on small samples are even greater when two different percentages in one survey, or the results of two different surveys, are compared. With small samples the possibility that small differences in the polls reflect real differences is reduced, and the likelihood that they are just random sampling error is increased.
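
The reason is that the sampling errors of the two numbers combine: for two independent surveys the variances add, so the error on the difference is noticeably larger than the error on either percentage alone. A sketch under those assumptions (function names are ours):

    import math

    def moe(p, n, z=1.96):
        # 95% sampling error on a single percentage, in points
        return z * math.sqrt(p * (1 - p) / n) * 100

    def moe_diff(p1, n1, p2, n2, z=1.96):
        # 95% sampling error on the difference between percentages
        # from two independent surveys: the variances add.
        return z * math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2) * 100

    # Two polls of 200 each put a candidate at 48% and then 52%.
    print(round(moe(0.48, 200), 1))                  # 6.9 points on each poll
    print(round(moe_diff(0.48, 200, 0.52, 200), 1))  # 9.8 points on the 4-point "shift"

A 4-point movement between two such polls sits well inside a ±9.8-point error band, so it says nothing reliable about a real change in opinion.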

Journalists should normally not report the results of such polls because of the high risk that they are misleading. If they decide to report them anyway, they should emphasize the possibility that the errors may be very substantial.

When is a Lead Really a Lead?

It has become common practice in news stories to call small differences between candidates "statistical dead heats." This happens when a candidate's lead is less than the error due to sampling. We want to say right up front that it is encouraging to see reporters aware of sampling error, which we discuss in a separate report. Here we want to discuss candidate leads.

"Statistical dead heat" just does not do justice to what is known. In a poll, a lead is a real lead if the difference between two candidates is bigger than the error due to rounding. We said bigger than "rounding," not sampling error. Simply put, any difference greater than one percentage point is a lead. However, all leads are not equal. A big lead is still more certain than a small lead. There are leads that we will bet the ranch on and leads that are only worth an even money bet. Personally, we would be happy to take any leading candidate over the trailing candidate in an even money bet. The leading candidate's chances of winning are better than the other candidate's. Maybe not much better if the lead is small, but still it is better.

A small lead is any difference between candidates that is smaller than the sampling error on the difference. A better choice than "statistical dead heat" would be: "Candidate A is leading at this time, but victory over Candidate B is less than certain." Or: "Candidate A's small (2 point) lead does not assure him/her of victory on Election Day." The points to be made are (1) candidate A has a lead; (2) it is a small lead; (3) the outcome is not assured; (4) the results apply to the days of the poll, not Election Day.

Decimal Mania

For some reason that is not clear to us, some pollsters like to report their poll results to the nearest tenth of a percentage point. We think this gives a misleading appearance of accuracy.

Polls are approximations of what we would get if everyone in a city, state, or the nation had been asked the same questions. A poll of 400 or 1,500 respondents, with an error due to sampling of 3 to 5 percentage points, hardly merits reporting results that look, for example, like 42.6 percent. Rounding the number to 43 percent is close enough.
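
Those sample sizes and error bands follow from the same worst-case formula used earlier; a quick check in Python:

    import math

    # Worst-case (p = 0.5) 95% sampling error, in percentage points.
    for n in (400, 1500):
        print(n, round(1.96 * 50 / math.sqrt(n), 1))
    # 400 -> 4.9 points; 1500 -> 2.5 points. Against errors that size,
    # 42.6 percent versus 43 percent is a distinction without a difference.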

Perhaps, given the size of the error due to sampling, we should round our numbers to the nearest 5 percent. Then we would not be tempted to over-interpret our results.

For more information about this and other polling issues, contact the NCPP Polling Review Board Members.
