5 thoughts on “Don’t Believe The Exit Polls”

  1. The networks are going to be frantic, trying to be the first to call the victory for Obama and launch the associated fireworks extravaganza. Regardless of the actual outcome of the election, it is going to be a nauseating night. At least we can be assured that the MSM will totally embarrass themselves again.

  2. Rand has a Hypothesis:

    “McCain voters will be significantly less likely to answer
    polls on Election night than Obama supporters.”

    That is a statistically testable supposition: if the response rate
    R(McCain) is lower than R(Obama), then the exit-poll ratio
    Poll_Count(Obama) / Poll_Count(McCain) should exceed the actual vote
    ratio by more than the margin of error.

    The way to test the exit polls is to look at clear-victory
    states (Texas, Utah, Alabama) for McCain and (IL, NY, CA)
    for Obama, and then compare these against battleground
    states (PA, FL, IN) and look to see whether the
    results are statistically significant.

    If we see the same results in blue, red, and toss-up states,
    then Rand’s hypothesis is supported;

    if we don’t, then we may have some other problem in the polling.
    (A rough sketch of how that comparison might be run is at the end
    of this comment.)

    Consider the 2004 election.

    The DECIDEDLY RIGHT-LEANING Real Clear Politics average of polls before the 2004 election: Bush +1.5%
    Actual Final Results in 2004: Bush +2.5% (off by 1.0%)

    And then this…

    RCP Average MN the day before the election: Kerry +3.2%
    ACTUAL RESULTS for MN: Kerry +3.5% (off by .3%)

    And…

    RCP Average PA the day before the election: Kerry +.9%
    ACTUAL RESULTS for PA: Kerry +2.3% (off by 1.4%)

    And…

    RCP Average IA the day before the election: Bush +.3%
    ACTUAL RESULTS for IA: Bush +.9% (off by .6%)

    And…

    RCP Average WI the day before the election: Bush +.9%
    ACTUAL RESULTS for WI: Kerry +.4% (off by 1.3%)

    And while we’re at it…

    RCP Average OH the day before the election: Bush +2.1%
    ACTUAL RESULTS for OH: Bush +2.5% (off by .4%)

    Now, these were polls taken right before the election, not
    exit polls, but statistical polling is a science with
    some art to it.

    And in 2004, we saw those tracking polls hit the actual results
    within the margin of error.
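
    A rough sketch of how that state-group comparison might be run,
    assuming exit-poll and certified-result margins are in hand. The
    groupings follow the states named above; every number below is a
    made-up placeholder, not real data:

    ```python
    # Compare how much the exit poll overstates Obama's margin in
    # safe-red, safe-blue, and battleground states.
    # All figures are invented placeholders; substitute real exit-poll
    # and certified-result margins to run the actual comparison.

    # (exit_poll_margin, actual_margin), as Obama-minus-McCain points
    safe_red     = {"TX": (-15.0, -20.0), "UT": (-25.0, -30.0), "AL": (-18.0, -22.0)}
    safe_blue    = {"IL": (28.0, 25.0),   "NY": (30.0, 27.0),   "CA": (26.0, 24.0)}
    battleground = {"PA": (12.0, 10.0),   "FL": (4.0, 2.0),     "IN": (2.0, 1.0)}

    def overstatement(group):
        """Average amount by which the exit poll overstates Obama's margin."""
        errors = [poll - actual for poll, actual in group.values()]
        return sum(errors) / len(errors)

    for name, group in [("safe red", safe_red), ("safe blue", safe_blue),
                        ("battleground", battleground)]:
        print(f"{name:12s}: exit poll overstates Obama by {overstatement(group):+.1f} pts")

    # If the overstatement is about the same size in all three groups (and
    # larger than the margin of error), that is consistent with Rand's
    # differential-response hypothesis; if it shows up only in some groups,
    # the problem is probably something else in the polling.
    ```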

  3. This may come as a complete surprise to some, but a lead that falls within double the margin of error is statistically meaningless (raise one candidate’s share by the MoE and lower the other’s, and the lead can vanish or even reverse). That is, no conclusion may be drawn from it. This is hardly a virtue when comparing polls against each other, never mind comparing them against hard news stories. (A toy numerical illustration is at the end of this comment.)

    I don’t know about anyone else, but when breathless reporters talk up polls that are within one multiple of the MoE, it makes me want to change to something with a bit more substance. Nonsubscribed “blue-screened out” channels, for instance.

    For the history fans out there: in 2004, the exit polls that came under fire for gross inaccuracies were in the Eastern coastal states, particularly the Southern ones, taken in the morning and early afternoon. Not the Midwest states, which closed their polls after the discrepancies were already being discussed by the chatterati on the cable news nets.
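
    A toy numerical illustration of the double-MoE point (the shares and
    margin of error below are invented; only the arithmetic matters):

    ```python
    # Toy example: why a lead smaller than twice the margin of error
    # doesn't tell you who is ahead. All numbers are made up.

    a_share, b_share = 51.0, 49.0   # reported poll shares (%)
    moe = 3.0                       # margin of error (+/- percentage points)

    reported_lead = a_share - b_share
    worst_case_lead = (a_share - moe) - (b_share + moe)  # shift both by the MoE

    print(f"reported lead:   {reported_lead:+.1f} pts")
    print(f"worst-case lead: {worst_case_lead:+.1f} pts")

    # A 2-point reported lead with a 3-point MoE is compatible with actually
    # trailing by 4 points, so the poll alone says nothing about who leads
    # unless the gap exceeds roughly twice the margin of error.
    ```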

  4. “The way to test the exit polls is to look at clear victory
    states”

    I think that is a bad testing methodology. I can tell you from experience, having gone back and forth between a red state (Texas) and a blue state (New Mexico): Democrats can air their opinions around Republicans and get a respectful nod. A Republican airing their opinions with Democrats around is like walking into a shit storm without an umbrella.
