Wednesday, April 30, 2014

Breaking down the Suffolk poll

Minnesota's political press extensively covered the statewide poll, released yesterday by Massachusetts-based Suffolk University, testing the state's gubernatorial and senatorial races.

There was Michael Brodkorb.
A new poll from Suffolk University has good news for both Governor Mark Dayton and U.S. Senator Al Franken as they face re-election later this year. But the poll does show opportunities for the Republican candidates running against both Dayton & Franken.
And Rachel Stassen-Berger of the Strib.
DFL Gov. Mark Dayton and Democratic U.S. Sen. Al Franken have double digit leads against Republican rivals but a significant number of Minnesotans remain undecided in both of this year's premier races, according to a new Suffolk University poll.
And the St. Cloud Times, KSTP, the Weekly Standard, and plenty of others.

While I always appreciate a good poll, reporting poll toplines at face value is like failing to fact-check your sources. If methodological problems lurk under the surface of a poll, then reporting its toplines could give your readers the wrong impression of public opinion and the state of a given race.

Unfortunately, I have several issues with this poll.

1. Hundredths of a percent?

This one is more of a quibble than a major problem. Reading a poll that breaks its results down to the hundredth of a percent sounds sexy. It suggests the poll is precise enough to zero in on really subtle shifts in voter preference. But that impression is misleading. Unless you have a sophisticated tracking operation (think Obama '08/'12's impressive phonebank operation, which provided constant feedback with much larger samples than a scientific survey), you simply don't have a large enough sample to drill into the electorate with that much precision. So while there is nothing technically wrong with reporting to a hundredth of a percent, those extra digits tell the reader nothing meaningful when you have a margin of error and a relatively small number of respondents. Better to be honest about the limitations of your sample.
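To see why hundredth-of-a-percent precision is illusory, here is a quick back-of-the-envelope sketch using the standard 95% margin-of-error formula for a simple random sample (the 800 figure is Suffolk's own sample size; the rest is textbook arithmetic, not anything from the poll's crosstabs):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Suffolk's full sample of 800 likely voters
moe = margin_of_error(800)
print(f"MOE at n=800: +/-{moe:.1%}")  # roughly +/-3.5%
```

A reported figure like 30.01% really means "somewhere in a roughly seven-point window," so everything after the decimal point is noise.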

2. High number of undecideds.

Everybody is uncritically accepting the large number of undecideds in the electorate. But did anyone dig into the pollster's track record and find out this is a recurring pattern for Suffolk? In February, a Massachusetts gubernatorial poll found 25% undecided between the best-known potential nominees; in March, a New Hampshire gubernatorial poll found 19% undecided; and in April, an Iowa gubernatorial poll found 24% undecided. These atypically high numbers of undecideds do not make the polls worthless, but they do mean the pollster needs to do a better job pushing respondents rather than making races appear more fluid than they will actually be on election day.

3. Small samples. 

Yes, the 800-likely-voter sample is solid, and that's why you see such a small 3.5% margin of error (many robopollsters minimize cost by going with 400-500 respondents, which produces something like a 5-6% margin of error, all else equal). But dig a little further and things get pretty fuzzy. There are only 198 likely voters in the GOP primary sample, and it is from this tiny group that we draw striking conclusions about who is leading each primary. The presidential caucus numbers are worse. We can already tell they are probably anomalous when one of the big winners is Texas Governor Rick Perry, who, despite a very public 2012 implosion, should not be especially well known in Minnesota. Yet our tiny sample of 87 Republican caucus-goers happens to include 14 Rick Perry supporters, which means he suddenly ties Jeb Bush for the 2016 lead.
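Running the same 95% margin-of-error arithmetic on the subsamples (a sketch, assuming simple random sampling) shows how quickly the uncertainty balloons as the sample shrinks:

```python
import math

def moe(n, p=0.5, z=1.96):
    # 95% margin of error for a proportion from a sample of size n
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=800 (full sample):      +/-{moe(800):.1%}")  # ~3.5%
print(f"n=198 (GOP primary):      +/-{moe(198):.1%}")  # ~7.0%
print(f"n=87  (GOP caucus-goers): +/-{moe(87):.1%}")   # ~10.5%

# Perry's 14 of 87 caucus respondents:
perry = 14 / 87
print(f"Perry: {perry:.0%} +/-{moe(87, p=perry):.1%}")  # a very wide interval
```

At n=87, Perry's headline-grabbing share is compatible with anything from single digits to the high twenties.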

I'm a bit skeptical.


4. Reading trends into the noise

This one is not a problem with Suffolk's poll but with the interpretation of it. I'm going to single out a tweet from Brodkorb's Politics.MN here.
Suffolk Uni poll shows Ortman's lead over McFadden in primary has dropped - 8% in Feb/March - now 2.52%
But this isn't really accurate, for a few reasons. For one, the previous poll was actually commissioned by Citizens United, which later endorsed Ortman. That poll functioned as an internal poll, and internals are often fudged when publicly released through various tricks (this one included only a single topline, for example). But at least its sample was 400 respondents; Suffolk had half that for its primary heat. When you further consider the small number of respondents separating a 3% lead from an 8% lead, and the reduced confidence a high margin of error gives us in each sample, talking conclusively about a trend is misleading at best.
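A rough significance check makes the point concrete. This is only a sketch: I'm using the conservative approximation that the margin of error on a *lead* is about twice that of a single proportion, and treating the two polls as independent samples.

```python
import math

def lead_se(n, p=0.5):
    # Conservative standard error of a lead (the gap between two candidates'
    # shares), approximated as twice a single proportion's standard error.
    return 2 * math.sqrt(p * (1 - p) / n)

cu_lead, cu_n = 0.08, 400      # Citizens United poll: Ortman +8, n=400
suf_lead, suf_n = 0.0252, 198  # Suffolk poll: Ortman +2.52, n=198

# z-score for the change in the lead between the two polls
change = cu_lead - suf_lead
se_change = math.sqrt(lead_se(cu_n) ** 2 + lead_se(suf_n) ** 2)
z = change / se_change
print(f"change = {change:.1%}, z = {z:.2f}")  # z well under 1.96: not significant
```

The apparent "drop" is comfortably inside the combined sampling noise of the two polls, so there is no statistical basis for calling it a trend.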

5. Burying the lede

Normally, when you conduct a survey, you get to the horse race as quickly as possible. When RRH pegged the FL-13 special election, we had a very short survey. First question: are you going to vote? Second: who will you vote for?

In contrast, this Suffolk poll buries the questions that have garnered the focus of the media.

Each question read before the horse race can potentially 'prime' a respondent to answer differently than they otherwise would have. Ask them whether they are a Republican or a Democrat, and you may reinforce their partisan allegiances before they are asked about real candidates. Ask which issues are important to them, and you may make those issues suddenly salient when discussing the gubernatorial or senatorial race. Either way, our sample may no longer be representative of how the real electorate would behave. Further, waiting to get to the important questions may cause some respondents to drop off before finishing. If certain types of voters drop off more than others, our sample is again not representative. This poll asks something like 30 or 40 questions before getting to the important ones, and that is a huge problem.
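The dropout problem is easy to illustrate with toy numbers. The completion rates below are purely hypothetical, chosen only to show the mechanism: if one candidate's supporters abandon a long survey at a higher rate, the completed sample skews even when the underlying electorate is perfectly even.

```python
# Hypothetical: electorate split 50/50, but supporters of candidate A
# abandon a 40-question survey more often than supporters of B.
true_share_a = 0.50
completion_a = 0.70  # 30% of A supporters hang up before the horse race
completion_b = 0.90  # only 10% of B supporters hang up

# Share of A supporters among respondents who actually finish
observed_a = (true_share_a * completion_a) / (
    true_share_a * completion_a + (1 - true_share_a) * completion_b
)
print(f"A's observed share: {observed_a:.1%}")  # 43.8% -- a 6-point skew from 50%
```

No amount of weighting after the fact fixes this if the pollster doesn't know which kinds of voters dropped off.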

In Sum

If you are going to report on polls, do your due diligence. Just as you don't accept everything a human source says at face value, you shouldn't accept a poll just because it provides a bunch of sexy numbers.
