All errors should be reported to DonSurber@gmail.com

Sunday, August 13, 2017

CNN dumps pollster. It should dump polling

After more than a decade of having ORC International conduct CNN's political polls, the Fake News network hired SRSS to try to gauge public opinion using numbers instead of actually talking to people.

Sigh.


Politico explained that ORC screwed up.
The final CNN/ORC International state polls in last year’s presidential election were, in some cases, off from the eventual results. In Nevada, the final CNN/ORC International poll — conducted about a week before Election Day — showed Trump leading Hillary Clinton among likely voters by 6 points, but Clinton prevailed there by 2.4 points. In Pennsylvania, Clinton led the poll by 4 points, but Trump won by 0.7 points. Its poll in Florida was closer and well within the margin of sampling error — Clinton led by 2 points, but Trump won by 1.2 points — but still missed the eventual winner.
What a whiff.

Republicans flipping Pennsylvania -- the Keystone State -- was the big muff. The last Democrat to win without Pennsylvania was Harry Truman. He still holds that record.

CNN said its poll had a margin of error of 3 points either way; in other words, the poll said Hillary would win Pennsylvania by anywhere from 1 to 7 points.
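The arithmetic behind that band is simple. Here is a minimal sketch — the 3-point margin and the Pennsylvania numbers are from the post; the 95% confidence level and the sample size of roughly 1,000 are illustrative assumptions, since the post does not give them:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of ~1,000 likely voters yields roughly a 3-point margin either way.
moe = margin_of_error(0.5, 1000)
print(f"margin of error: +/- {moe * 100:.1f} points")

# CNN's final Pennsylvania poll: Clinton +4, margin of error 3 points,
# so the poll's claimed band was Clinton +1 to Clinton +7.
# Trump won the state by 0.7 points -- outside the band entirely.
lead, band = 4.0, 3.0
print(f"poll band: Clinton +{lead - band:.0f} to Clinton +{lead + band:.0f}")
```

Note that the stated margin applies to each candidate's share separately, not to the gap between them, which is one more reason the bands pollsters publish understate how wrong a poll can be.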

But the polling industry -- much like the news industry -- is in denial about its failure.

The American Association for Public Opinion Research in April brushed off criticism of the mass breakdown by people who interrupt mealtime across America.

From their report:
There are a number of reasons as to why polls under-estimated support for Trump. The explanations for which we found the most evidence are:
  • Real change in vote preference during the final week or so of the campaign. 
  • Adjusting for over-representation of college graduates was critical, but many polls did not do it.
  • Some Trump voters who participated in pre-election polls did not reveal themselves as Trump voters until after the election, and they outnumbered late-revealing Clinton voters.

Pollsters make billions off gullible politicians and the news media.

The pollsters are scrambling to protect a reputation that self-destructed last fall.
A spotty year for election polls is not an indictment of all survey research or even all polling.
The performance of election polls is not a good indicator of the quality of surveys in general for several reasons. Election polls are unique among surveys in that they not only have to field a representative sample of the public but they also have to correctly identify likely voters.
The second task presents substantial challenges that most other surveys simply do not confront. A typical non-election poll has the luxury of being able to be adjusted to very accurate benchmarks for the demographic profile of the U.S. population. Election polls, by contrast, require educated estimates about the profile of the voting electorate.
It is, therefore, a mistake to observe errors in an election such as 2016 that featured late movement and a somewhat unusual turnout pattern, and conclude that all polls are broken.
Well-designed and rigorously executed surveys are still able to produce valuable, accurate information about the attitudes and experiences of the U.S. public. 
Yes, do not judge their performance by the results, even though they blew the election. The patient may have died, but the operation was a success.

Breitbart reported:
The McClatchy-Marist Poll conducted in March found that only seven percent of Americans said they had a “great deal” of faith in polls, and 29 percent said they put a “good amount” of trust in polls. The majority of respondents, however, said they do not trust polls to at least a certain degree.
Thirty-nine percent of people said they do not trust polls much, and 22 percent of respondents said they do not trust polls at all.
Of course, that is itself a poll, so after adjusting for the margin of error, it shows that between 0 and 100 percent of the public trust polls.

***

Please enjoy my books on how the press bungled the 2016 election.




Caution: Readers occasionally may laugh out loud at the media as they read this account of Trump's election.

It is available on Kindle, and in paperback.



Caution: Readers occasionally may laugh out loud at the media as they read this account of Trump's nomination.

It is available on Kindle, and in paperback.

Autographed copies of both books are available by writing me at DonSurber@GMail.com

Please follow me on Twitter.

Friend me on Facebook.

7 comments:

  1. There are a number of reasons as to why polls under-estimated support for Trump. The explanations for which we found the most evidence are: Better known as a ten-year-old's excuses for breaking the window.

    Real change in vote preference during the final week or so of the campaign.
    Because we all know that people vacillate back and forth between communism and capitalism.
    Adjusting for over-representation of college graduates was critical, but many polls did not do it.
    They just assumed everyone would be dumb and believe their 24-hour continuous stream of BS. Campaign rally attendance numbers alone should have told them something.
    Some Trump voters who participated in pre-election polls did not reveal themselves as Trump voters until after the election, and they outnumbered late-revealing Clinton voters.
    They are saying the people they polled lied? Seriously? I have never lied to a pollster and I doubt anyone else does either. Hang up on them? You betcha, especially when they form their questions to achieve a desired result.

    ReplyDelete
  2. Well-designed and rigorously executed surveys are still able to produce valuable, accurate information about the attitudes and experiences of the U.S. public, but polling organizations don't create anything resembling a well-designed and well-executed poll.

    ReplyDelete
  3. How accurate was SRSS?

    oh, and SRSS backwards is SSRs. They are obviously a satellite Russian polling firm.

    ReplyDelete
  4. Dyed-in-the-Wool Con Artists.

    "A spotty year for election polls is not an indictment of all survey research or even all polling."

    It sure as hell is. One poll, and one poll only, came close -- the LA Times.

    What this means is that their methodology is completely and utterly confounded from a statistical standpoint. So muddled, in fact, that it blows away the mathematical assumptions underlying the statistics.

    If you're going to be in the polling business you're committing to INFERENTIAL statistical methods. That means inferring a population mean from a sample. They deny this by claiming it wasn't their fault.

    What it really means is that their alpha error is too weak for the concept, methodology, and conclusion. That is to say, most of the time they are willing to make a mistake 5 times out of 100 tries at estimating the population mean (p=.05). It's one thing to measure bolt tolerances, and it's another thing to extrapolate a precise mean that determines a binary decision outcome (win/lose)--That should tell them that their alpha needs to be completely re-examined. Otherwise, you're betting a paycheck from the start that you'll be wrong.

    They did, and they were.

    But, let's face it. As Don mentioned that's not the purpose of the polling results. It's social conformity theory. They think everyone wants to be a member of the ingroup. How wrong they were, and how little they have learned.

    ReplyDelete
  5. I've actually been called by Gallup...last week, in fact...that's what showed up on caller ID, anyway. I didn't pick up.

    ReplyDelete
  6. The fact that they tried to answer the question as to whether polling is useful by taking another poll can indicate two things which may both be true.

    One is that those considering the question have painted themselves into a corner, intellectually.

    The other is that by trying to answer the question this way they are implying that polling is an indispensable tool for finding out the truth and that another poll is the only way to answer the question as to its utility. This is of course circular thinking, which makes perfect sense. They use circular reasoning to push the ideas they want to advance in the way they frame their questions, so it makes perfect sense for them to use circular reasoning in an attempt to determine whether what they do for a living is justified.

    In a way this is actually much more common than you would expect. Medical researchers have this unspoken conversation with themselves every day. Most big advances in medicine are based on intuition and deduction, and most research is useless. Yet we continue to pour billions into grants for studies in circular reasoning. And if there is a question as to whether the studies are useful, well, I'm sure the attempt to answer that question will be with another study.

    ReplyDelete
  7. As a registered Nonpartisan, I used to get polls almost every day during election time, until I got one that only dealt with trying to determine my politics. Eventually the caller said, "You really are non-partisan!" I replied, "Yes, both parties are corrupt."
    That was the last poll I received, going on ten years now.
    That, and the nature of the push polling, has led me to conclude that polling is for the benefit of making fraud believable.
    Reuben J

    ReplyDelete