Polls and surveys can be an important resource for understanding how people feel about hot-button issues. But not all polls are trustworthy, and some have an outright bias. Here is an example of a “poll” that commits every survey design sin. Other times, especially during elections, the problem is the reporting: journalists treat the poll numbers as if they are news by themselves. The AP just issued guidelines telling journalists to avoid this.
Here’s a checklist for assessing the quality of a poll, courtesy of Dr. Rebecca Goldin, whom you might remember from our Navigator on scientific studies. She’s the director of STATS at Sense About Science USA, an organization dedicated to helping journalists understand statistics, and a professor at George Mason University.
The information in this checklist is also worth sharing with your audience, so that they have the context to make their own assessment of the polling.
1. Find out what organization conducted the poll
Professional pollsters like Pew and Gallup think a lot about survey design and reducing bias, Dr. Goldin says. If the organization behind a poll has an agenda, the survey could be biased, intentionally or unintentionally. That on its own doesn’t mean you shouldn’t report on the survey or poll, but be sure to assess its quality and credibility. Regardless of the organization behind it, be transparent with your audience about who conducted the poll.
2. Look at the poll questions to assess their phrasing and order
The way a question is phrased and the order in which questions are asked can influence respondents’ answers. If a poll first asks a question that is phrased in an inflammatory or leading way, and then asks a broader question about support for a particular position, it can bias the respondent’s answer.
Dr. Goldin gives an example of asking college students about their attitudes toward allowing hate speech. If the first question a poll asks is about hate speech that advocates violence, then the students’ answers about their attitude toward hate speech as a whole are likely to be biased, because they’ve been primed to associate hate speech with inciting violence.
You can read about a real-life example of biased question order and how it affected responses in the 2009 polling that tracked support for the Affordable Care Act. Also assess whether respondents understood the question the way you understand it. It is entirely possible for respondents to misunderstand a question, or not understand it at all, and answer anyway.
3. Determine whether the sample is representative of the whole group
At its core, a survey is asking: Does what the sample believes hold true for the whole group?
The group a survey covers can be as broad as “the American people” or as specific as “college students who are registered Republicans.” Since you can’t ask every single person, you need to ask a representative sample. But “you can very easily get an extraordinarily biased sample,” Dr. Goldin says.
A survey might not reach a representative sample because of its methodology. For example, if the poll is phone-based and the calls are made in the middle of the afternoon, the pollster will mostly hear from older people and people who don’t have full-time jobs, and that may not be a representative population.
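To make that concrete, here is a minimal sketch in Python of how a skewed contact method can distort a poll result. The population split, the share of people reachable in the afternoon, and the support rates are all invented for illustration; nothing here comes from an actual poll.

```python
import random

random.seed(42)

# Invented population: 30% of people are home in the afternoon, 70% are not.
# Assume 60% of the at-home group supports a policy versus 40% of everyone
# else, so true support is 0.3 * 0.60 + 0.7 * 0.40 = 0.46.
population = (
    [{"home": True, "supports": random.random() < 0.60} for _ in range(30_000)]
    + [{"home": False, "supports": random.random() < 0.40} for _ in range(70_000)]
)

def poll(people, n):
    """Estimate support from a simple random sample of n respondents."""
    sample = random.sample(people, n)
    return sum(p["supports"] for p in sample) / n

# Well-designed poll: sample from the whole population.
print(f"random sample:    {poll(population, 1000):.2f}")  # ~0.46

# Afternoon phone poll: only reaches people who are home.
at_home = [p for p in population if p["home"]]
print(f"afternoon sample: {poll(at_home, 1000):.2f}")     # ~0.60
```

The afternoon poll overstates support by roughly 14 points even though every respondent answered honestly; the bias comes entirely from who could be reached.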
Another issue is that even if the organization tries to get a representative sample, the full range of viewpoints may still not be represented.
Dr. Goldin says she thinks supporters of President Trump may have been undercounted during the presidential election because they were unwilling to participate in polls; they had heard candidate Trump say not to trust the polls.
4. Understand what the margin of error tells you
The margin of error tells you how confident you can be that the poll result is representative of the whole population you’re trying to measure (presuming all of the other aspects of the survey design are sound). It is given as a +/- value, such as a margin of error of +/- 5 percentage points.
If your poll finds that 52 percent of respondents say they prefer vanilla over chocolate ice cream, with a +/- 5 point margin of error, the true share of people who prefer vanilla could fall anywhere between 47 and 57 percent. That is a pretty wide range. A larger sample size will decrease the margin of error: if you sample 1,000 people across the country about whether they like vanilla or chocolate ice cream, you can be more confident that the result holds for the nation as a whole than if you sampled only 100 people.
And that matters, Dr. Goldin says, because let’s say there is an election to decide whether to make vanilla or chocolate the national ice cream flavor. A range of 47 to 57 percent doesn’t confidently tell you whether vanilla is in the lead.
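For readers who want to check the arithmetic, here is a minimal sketch in Python of the standard 95 percent margin-of-error formula for a proportion estimated from a simple random sample. Real pollsters adjust for survey design and weighting, so treat this as a back-of-the-envelope approximation, not how any particular poll computes its margin.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.52  # 52 percent of respondents prefer vanilla

for n in (100, 1_000):
    moe = margin_of_error(p, n)
    low, high = p - moe, p + moe
    print(f"n={n:5d}: +/- {moe:.1%}  ({low:.0%} to {high:.0%})")

# n=  100: +/- 9.8%  (42% to 62%)
# n= 1000: +/- 3.1%  (49% to 55%)
```

Under this formula, a +/- 5 point margin like the one in the ice cream example corresponds to a sample of roughly 400 people.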
Pew has these tips on understanding the margin of error in election polls, which we’ll be seeing a lot of as the U.S. midterm elections approach.