About Exit Polls

by Steve Freeman, Election Integrity

Why should we care about exit poll results?
When properly conducted, exit polls should predict election results with a high degree of reliability. Unlike telephone opinion polls that ask people which candidate they intend to vote for several days before the election, exit polls are surveys of voters conducted after they have cast their votes at their polling places. In other words, rather than a prediction of a hypothetical future action, they constitute a record of an action that was just completed. Around the world, exit polls have been used to verify the integrity of elections. The United States has funded exit polls in Eastern Europe to detect fraud. Discrepancies between exit polls and the official vote count have been used to successfully overturn election results in Ukraine, Serbia, and Georgia.

Are exit poll data better than other polling data? Exit polls, properly conducted, can remove most sources of polling error. Unlike telephone polls, an exit poll will not be skewed by the fact that some groups of people tend not to be home in the evening or don’t own a landline telephone. Exit polls are not confounded by speculation about who will actually show up to vote, or by voters who change their minds in the final moments. Rather, they identify the entire voting population in representative precincts and survey respondents about their votes immediately upon leaving the polling place. Moreover, exit polls can obtain very large samples cost-effectively, providing even greater reliability.

    The difference between conducting a pre-election telephone poll and conducting an Election Day exit poll is like the difference between predicting snowfall in a region several days in advance of a snowstorm and estimating the region’s overall snowfall based on observed measures taken at representative sites. In the first case, you’re forced to predict future performance from present indicators, to rely on ambiguous historical data, and to make many assumptions about what may happen. In the latter, you simply need to choose your representative sites well. So long as your methodology is good and you read your measures correctly, your results will be highly accurate.

How do exit polls work? There are two basic stages of an exit poll. The exit pollster begins by choosing precincts that serve the purpose of the poll. For example, if a pollster wants to cost-effectively project a winner, he or she may select “barometer” precincts that have reliably predicted past election winners.

     The second stage involves the surveys within precincts. On Election Day, one or two interviewers report to each sampled precinct. From the time the polls open in the morning until shortly before the polls close at night, the interviewers select exiting voters at spaced intervals (for example, every third or fifth voter). Voters are either asked a series of questions in face-to-face interviews, or, more commonly, given a confidential written questionnaire to complete. When a voter refuses to participate, the interviewer records the voter’s gender, race, and approximate age. These data allow the exit pollsters to do statistical corrections for any bias in gender, race, and age that might result from refusals to participate. For example, if more men refuse to participate than women, each man’s response will be given proportionally more weight.
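
A small, hypothetical calculation shows how this non-response correction works. The numbers below are invented for illustration; they are not NEP's actual weighting scheme, which also adjusts for race and age.

    # Non-response weighting by gender, with invented numbers.  If men refuse more
    # often than women, each male respondent is weighted up so the weighted sample
    # matches the gender mix of all voters the interviewer observed.
    approached = {"men": 100, "women": 100}   # voters selected at the precinct (assumed)
    responded  = {"men": 40,  "women": 60}    # voters who completed the questionnaire (assumed)

    weights = {g: approached[g] / responded[g] for g in approached}
    print(weights)                            # {'men': 2.5, 'women': 1.67}: each man's response counts 2.5x

    # After weighting, the sample again reflects the observed 50/50 split among exiting voters.
    print({g: responded[g] * weights[g] for g in responded})   # {'men': 100.0, 'women': 100.0}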

      Voting preferences of absentee and early voters can be accounted for with telephone polls. 

The 2004 US Presidential Election Exit Poll Discrepancy

What were the results of the 2004 US Presidential election exit poll? The exit polls indicated a seven percentage point Kerry victory. According to the official count, Bush won by 3,000,000 votes. Had the votes been counted as voters leaving the polling place said they had cast them, Kerry would have won by 6,000,000 votes nationwide and would have had a decisive electoral victory.

Why was the exit poll surprising? Seven percentage points doesn't sound like that much. With over 100,000 respondents nationwide, the poll's margin of error was a fraction of one percent. Now, it's true that there are other potential sources of polling error, so if the polls were off by 1 or even 2 percentage points, it would not be a major source of concern. But it's in the nature of the bell-shaped curve that the probability falls very steeply beyond this point. It would be eye-popping if the discrepancy between the survey results and the official count were four times the margin of error. But the discrepancy in the 2004 US Presidential Election was far more than that. It's out in never-never land. The probability of this happening by a statistical fluke is astronomically small.
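
As a rough illustration, here is a sketch of the sampling arithmetic using the round numbers quoted above; it is not the pollsters' own calculation. It assumes a simple random sample, whereas real exit polls use cluster sampling, which inflates the error by a "design effect" but by nothing close to the factor needed to explain the gap.

    import math

    n = 100_000                                # illustrative national sample size
    p = 0.5                                    # a share near 50% maximizes the standard error

    se_share = math.sqrt(p * (1 - p) / n)      # standard error of one candidate's share
    moe_margin = 2 * 1.96 * se_share           # 95% margin of error of the Kerry-Bush margin
    print(f"95% margin of error on the margin: +/- {moe_margin:.2%}")   # about +/- 0.6%

    # A 7-point discrepancy in the margin is a 3.5-point shift in one candidate's share.
    z = 0.035 / se_share
    p_two_sided = math.erfc(z / math.sqrt(2))  # two-sided tail probability
    print(f"z = {z:.1f}, two-sided p ~ {p_two_sided:.1e}")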

    The exit poll figures reported by the pollsters who conducted the polls are different from yours. Why? Because their numbers have been "corrected" to conform to the official count, on the assumption that the count is, by definition, correct.

    Several analyses of the exit polls have been conducted by the pollsters, by me, and by many others, using different data and assumptions. But in order to understand the discrepancy between the exit-poll survey results and the official count, the best measure is the simplest rendering of the discrepancy within the precinct itself. Within Precinct Disparity (WPD) is the difference between the way people said they voted as they exited the polling place and the official count in those same precincts. This is the simplest rendering of the data. As such, it differs from the many more complex analyses that we and others have performed, and it is not how Mitofsky and Lenski analyze the data. But in its simplicity, it is revealing and powerful.
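
To make the measure concrete, here is a small, purely hypothetical illustration of how a WPD figure is computed for a single precinct. The respondent and vote counts are invented for the example; they are not taken from the 2004 data.

    # Within Precinct Disparity (WPD): the exit-poll margin minus the official-count
    # margin in the same precinct, in percentage points.  All numbers are hypothetical.
    def wpd(poll_kerry, poll_bush, count_kerry, count_bush):
        poll_margin = (poll_kerry - poll_bush) / (poll_kerry + poll_bush)
        count_margin = (count_kerry - count_bush) / (count_kerry + count_bush)
        return 100 * (poll_margin - count_margin)

    # Hypothetical precinct: 120 questionnaires collected, 900 ballots officially counted.
    print(f"WPD = {wpd(poll_kerry=66, poll_bush=54, count_kerry=430, count_bush=470):+.1f} points")
    # Poll margin: Kerry +10 points; official margin: Kerry -4.4 points; so WPD = +14.4 points.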

Election Integrity Research and Analysis

Could the discrepancy between the exit poll results and the official count have been due to chance or random error? No, the discrepancy could not have occurred by chance or random error. My initial report, sent out one week after the election, indicated that the dramatic differences between the official count and the exit-poll projections could not have occurred by chance. This means that there must be an explanation for these irrefutable differences between the vote count and the exit polls. By now, everyone who has studied the question accepts this as fact.

Are we saying that the discrepancy itself means that Kerry must have really won the election? No, the evidence that casts doubt on the election results comes from diverse sources. The exit polls have never been cited as primary evidence of fraud, but only as a reason to take that primary evidence to heart. US Representative John Conyers, the ranking member of the House Judiciary Committee and author of the foreword to our book, says the discrepancy is "but one indicia or warning that something may have gone wrong -- either with the polling or with the election." The discrepancy is an undisputed fact. The question is "What caused it?"

     There are only two possible explanations for the discrepancy: 1) far more Kerry voters than Bush voters agreed to fill out the questionnaires offered by pollsters, or 2) the votes were not counted as cast. In our book, we examine these two possible scenarios as thoroughly as possible.

The official NEP explanation that more Kerry voters than Bush voters agreed to fill out the questionnaires seems plausible. Why question this conclusion? It is not a conclusion, but rather a presumption. The pollsters merely asserted that this must be true, without evidence or even a theory as to why it might be the case. The limited data that the pollsters present not only fail to substantiate the presumption but undermine it entirely.

     All independent indicators of poll participation suggest not lower but higher response rates among Bush voters. One of these is that response rates were higher, not lower, in precincts where Bush voters predominated than in precincts where Kerry voters predominated. In precincts where Bush got 80 percent or more of the vote, an average of 56 percent of the people who were approached agreed to take part in the poll, while in precincts where Kerry got 80 percent or more of the vote, a lower average of 53 percent were willing to be surveyed.

How, then, do the exit polls indicate fraud? There are more than a dozen indicators. I’ll mention just two of them. First, there is no polling reason why exit polls should be more or less accurate in key states, but key states are a key corruption variable: if you are going to steal an election, you go after votes most vigorously where they are most needed. The discrepancy is significantly higher in the 11 swing states than in other states, and significantly higher yet in the three critical battleground states of Ohio, Florida, and Pennsylvania.

      Second, in light of the charges that the 2000 election was not legitimate, the Bush/Cheney campaign would have wanted to prevail in the popular vote. If fraud was afoot, it would make sense that the president's men would steal votes in their strongholds, where the likelihood of detection is small. Lo and behold, the report provides data that strongly bolster this theory. In those precincts that went at least 80 percent for Bush, the average within-precinct error (WPE, the numerical difference between the exit poll margin and the official-count margin) was a whopping 10.0 percentage points. That means that in Bush strongholds, Kerry, on average, received only about two-thirds of the votes that the exit polls predicted. In contrast, in Kerry strongholds, the exit polls matched the official count almost exactly (an average WPE of 0.3).
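
A hypothetical precinct, with shares assumed purely for illustration rather than taken from the report, shows how a 10-point WPE squares with Kerry receiving roughly two-thirds of his predicted vote:

    # Hypothetical 80%+ Bush precinct.  Shares are assumed for illustration only.
    poll_kerry, poll_bush = 15.0, 85.0        # exit-poll predicted shares (%)
    count_kerry, count_bush = 10.0, 90.0      # official-count shares (%)

    wpe = (poll_kerry - poll_bush) - (count_kerry - count_bush)
    print(f"WPE = {wpe:+.1f} points")                                            # +10.0
    print(f"counted / predicted Kerry share = {count_kerry / poll_kerry:.2f}")   # ~0.67, i.e., two-thirds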

Criticism and Validation of Our Research

Have your papers been peer reviewed? Yes. There is no formal mechanism for papers like this (nor is there any good forum in which to publish them), but when I leave a "t" uncrossed in these papers, people write to the dean and demand my dismissal (actually, they do that anyway). The conclusions of the initial paper, in fact, have been accepted, and the "debate" has moved on.

     The US Count Votes paper which I co-authored with 11 mathematicians, statisticians, and other social scientists was extensively peer reviewed.

Has evidence come to light since the publication of these pieces that would explain the exit poll discrepancy? No such evidence has come to light. All indications are that if the primary exit poll data were made available, they would conclusively show count corruption and identify where it occurred. Unless there is great public pressure or successful legal action, none of this primary exit poll data will be released.

Have there been any rebuttals to your analyses? There are many "rebuttals." They come from every angle you can think of, and many you could never think of. They are easy to find on the web. Here are two examples:

http://www.counterpunch.org/landes03032005.html. Intended to sow confusion? Counterpunch is supposedly one of the leading "alternative" media forums.

http://elections.ssrc.org/research/ExitPollReport031005.pdf. This is a report by reputable academics at top universities, sponsored by a reputable foundation. Its purpose seems to be to justify the positions that (1) exit poll results should never be released until they have been "corrected" to the vote count, and (2) the raw, uncorrected data should never be released at all, for methodological reasons that are not themselves sound methodology. (Many of us are appalled by the (lack of) election reporting, but academic commentary has been no better.)

What do the pollsters say? Incredibly, Warren Mitofsky, the lead exit pollster, justified ignoring the vast preponderance of publicly available evidence that we have presented by claiming that data which they refuse to share “kill the fraud argument.”

The retort is a triple outrage. First, there is the dismissal of public data in favor of secret data. Second, this supposedly conclusive analysis is the work of an entrepreneur and a doctoral student hired by Mitofsky. Third, no independent researchers or serious scholars have ever seen the data or the methods by which they reached this conclusion; the data remain secret.

If there were indications of fraud, wouldn’t the pollsters be the first to say so? Wouldn't they want to defend their methods? No, the last thing that Edison Media Research and Mitofsky International want to do is to imply fraud. By minimizing the discrepancy and attributing it to polling factors, they were re-awarded one of the most prestigious and lucrative contracts in the polling world. The incentive of Warren Mitofsky was, in his own words, to "make this thing go away."

Lack of Transparency in the National Election Pool Exit Poll

Have you been able to obtain the "uncorrected" data from the polling consortium? The data needed to fully investigate the integrity of the election have never been made available to independent researchers. Rather, they remain the property of the NEP consortium that commissioned the exit polls, which says they cannot be released. Data have been made available, but not the data that could be used to verify the validity of the election. In the future, it's unlikely that any media poll will even let us know about an exit poll discrepancy. (For this reason and more, we have undertaken to develop an independent exit poll.)

Why won’t they release this data? NEP pollsters claim that release could violate confidentiality agreements, i.e., that under some extreme circumstances one conceivably might be able to figure out how one unusual individual in an unusually homogeneous precinct may have said he or she voted.

The pollsters say they are protecting respondent anonymity – what’s wrong with that? Protecting respondent anonymity is, of course, proper and ethical. It is highly improper and unethical to use it as a pretext for failing to comply with the more fundamental ethical considerations of open data and protecting democracy. The NEP claim of protecting respondent anonymity is a crock, for at least six reasons:

1. It’s unclear that such identification would, in fact, be a realistic possibility.

2. Why would any researcher ever go to the trouble of doing this? Certainly, it’s clear that our intention is to detect fraud, not to determine how a lone, obscure voter might have said he or she voted.

3. Even in the extremely remote circumstance that someone might think he or she could identify a voter, what harm could it cause? Yet NEP would have us accept that a small, extremely hypothetical risk that a few individuals’ confidentiality might be compromised, causing no apparent harm, outweighs the importance of an independent check on our nation’s voting procedures and, very likely, evidence of a stolen election.

     Even if this doesn’t persuade you, consider that:

4. Confidentiality could not be a concern in the vast majority of precincts that have even minimal demographic diversity. Why not release precinct identification for these data?

5. In those few precincts where some individual identification might conceivably be possible, NEP could simply have blurred the demographic data. Indeed, given the choice between precinct identifiers (critical to the investigation of fraud) and demographic data, not only is the relative importance plain as day, but demographic data make no sense at all. After all, what is the point of trying to explain why voters purportedly voted as they did, when we cannot even say how they voted?

6. Finally, consider that NEP denied this data to highly qualified and experienced independent academics from the nation’s leading research institutions, many of whom have experience working with sensitive and national-security data, and who offered to work only onsite and to reimburse NEP for any additional costs incurred. Yet NEP has given it to two individuals whose only qualification seems to be an ability to promote the Mitofsky perspective:

•  Elizabeth Liddle, a British doctoral student in an unrelated field, who has argued ubiquitously (4,000 posts, many of them very long, in one year on Democratic Underground, and similar numbers on other sites) and extensively that the data, which she cannot share, indicate no fraud.

•  Steve Hertzberg, a man with no record at all of either research or maintaining confidentiality, whose qualifications include no background in research, polling, or political science, but rather a background in direct marketing.

 

It is clear that NEP’s primary concern is not respondent confidentiality, but rather control over the findings.

What can be done about all this?

Help our efforts to conduct an independent election verification exit poll for the 2008 US Presidential Election. See the Election Integrity website.

 

 