It’s looking more and more as if the official Iranian election returns were at least partially fictional. I wrote last week about one unconvincing statistical argument for fraud; now a short paper by Bernd Beber and Alexandra Scacco offers more numbers and makes a stronger case.

Keeping in mind that I like their paper a lot, let me say something about a part of it where I thought a bit more justification was needed.

Consider the following three scenarios for generating 116 digits that are supposed to be random:

1. Digits produced by 116 spins of a spinner labeled 0,1,…,9.
2. Final digits of vote totals from 116 Iranian provinces.
3. Final digits of vote totals from U.S. counties.

Now consider the following possible outcomes:

- **A.** Each digit appears either 11 or 12 times.
- **B.** 0 appears only 4% of the time, and the other digits appear roughly 10% of the time.
- **C.** 7 appears 17% of the time, 5 appears only 4% of the time, and the other digits appear roughly 10% of the time.

Which outcome should make you doubt that the digits are truly random?

In scenario 1, I think **B** and **C** are suspicious; that level of deviation from the mean is more than you’d expect from random spins. Outcome **B** would make you suspect the spinner was biased against landing on 0, and **C** would make you think the spinner was biased towards 7 and against 5.

But of course, outcome **A** is much more improbable (or so my mental calculation tells me) than either **B** or **C**. So why doesn’t it arouse suspicion? Because there’s no apparent mechanism by which a spinner could be biased to produce near-exactly uniformly distributed results like this. Your prior degree of belief that the spinner is “fixed” to produce this behavior is thus really low, and so even after observing **A** your belief in the spinner’s fairness is left essentially unchanged.
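For what it’s worth, that mental calculation can be checked exactly. Since 116 = 6·12 + 4·11, outcome **A** forces exactly six digits to appear 12 times and four to appear 11 times. The sketch below (my own reading of outcome **A**, not a computation from the paper) sums the multinomial probability over all such count vectors:

```python
from math import comb, exp, lgamma, log

N, D = 116, 10  # 116 digits drawn uniformly from 0-9

def log_multinomial(counts):
    """Log of the multinomial coefficient N! / (c_1! ... c_k!)."""
    return lgamma(sum(counts) + 1) - sum(lgamma(c + 1) for c in counts)

# Outcome A: every digit appears 11 or 12 times. Since 116 = 6*12 + 4*11,
# exactly six digits must appear 12 times and four must appear 11 times.
counts = [12] * 6 + [11] * 4

# Choose which six of the ten digits get count 12, times the probability
# of any one such count vector under the uniform model.
log_p = log(comb(D, 6)) + log_multinomial(counts) - N * log(D)
p = exp(log_p)
print(f"P(outcome A | uniform random digits) = {p:.2e}")
```

The answer is on the order of a few in a million, far smaller than the tail probabilities for **B** or **C** — which is exactly the point: sheer improbability isn’t suspicious without a mechanism that could produce it.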

In scenario 3, I don’t think any of the three outcomes should raise too much suspicion. Yes, the probability of seeing deviations from uniformity as large as those in **C** in random digits is under 5%. But we have a strong prior belief that U.S. elections aren’t crooked — in this case, I think it’s fair to say that outcomes **A**, **B**, and **C** are all *evidence* that the digits are being faked, but not enough evidence to raise the very small prior to a substantial probability of fraud.

Scenario 2, the one Beber and Scacco consider, is the most interesting. Outcome **C** is the one they found. In order to estimate the probability of fraud in a Bayesian way, given outcome **C**, you need three numbers:

- The probability of seeing outcome **C** from random digits;
- The probability of seeing outcome **C** from digits made up from whole cloth at the ministry;
- The probability — *prior* to any knowledge of the election results — that the Iranian government would release false numbers.

The third question isn’t a mathematical one, but let’s stipulate that the answer is substantial — much larger than the analogous probability in the United States.
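For concreteness, here is how the three numbers would combine under Bayes’ rule. The values below are placeholders I made up for illustration — none of them are estimates from the paper:

```python
# All three inputs are hypothetical, for illustration only.
p_C_given_random = 0.05  # chance of outcome C from authentic random digits
p_C_given_fake = 0.25    # chance of outcome C from digits invented by hand
prior_fraud = 0.30       # prior probability of released numbers being fake

# Bayes' rule: P(fraud | C) = P(C | fraud) P(fraud) / P(C)
p_C = p_C_given_fake * prior_fraud + p_C_given_random * (1 - prior_fraud)
posterior_fraud = p_C_given_fake * prior_fraud / p_C
print(f"P(fraud | outcome C) = {posterior_fraud:.2f}")
```

Note that the posterior moves a lot here only because the prior was stipulated to be substantial; with a U.S.-sized prior, the same likelihood ratio would barely budge it.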

The first question is the one Beber and Scacco assess in their paper; they get an answer of less than 5%. That sounds pretty damning — deviations like the “extra 7s” seen in the returns would arise less than 1 in 20 times from authentic election numbers. In fact, outcomes **A**, **B**, and **C** are all pretty unlikely to arise from random digits.
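You can get a rough feel for numbers of this kind by simulation. The criterion below — some digit at 20 or more occurrences (about 17%) together with some digit at 5 or fewer (about 4%) — is my stand-in for “deviations like outcome **C**,” not Beber and Scacco’s actual test statistic:

```python
import random

random.seed(0)  # fixed seed so the estimate is reproducible

def extreme_like_C(n=116, trials=20_000):
    """Fraction of trials in which n uniform random digits include some
    digit appearing >= 20 times (~17%) and some digit appearing
    <= 5 times (~4%), echoing outcome C."""
    hits = 0
    for _ in range(trials):
        counts = [0] * 10
        for _ in range(n):
            counts[random.randrange(10)] += 1
        if max(counts) >= 20 and min(counts) <= 5:
            hits += 1
    return hits / trials

p_extreme = extreme_like_C()
print(f"estimated probability: {p_extreme:.3f}")
```

This crude version also comes out small, in the same neighborhood as the paper’s under-5% figure.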

But outcome **C** is evidence for fraud only if it’s *more* likely to arise from fake numbers than real ones. And here we have an interesting question. Beber and Scacco observe that, in practice, people are bad at choosing random digits; when they try, they tend to pick some numbers more frequently than chance would dictate, and some less. (Their cites for this include the interesting paper by Philip J. Boland and Kevin Hutchinson, Student selection of random digits, Statistician, 49(4): 519-529, 2000.)

So on these grounds it seems outcome **C** is indeed good evidence for faked data. But note that the Boland-Hutchinson data doesn’t just say people are bad at picking random digits — it says they are bad *in predictable ways* at picking random digits. Indeed, in each of their four trial groups, participants chose “0” — which just doesn’t “feel random” — between 6.5% and 7.5% of the time, substantially less than the 10% you’d get from a random spinner.

So outcome **B**, I think, would clearly be evidence for fraud. But outcome **C** is a little less cut-and-dried. Just as it’s not clear what mechanism would make a fixed spinner prone to outcome **A**, it’s not clear whether it’s reasonable to expect a person trying to pick random numbers to choose lots of numbers ending in “7”. In Boland and Hutchinson’s study, that digit came up just about exactly 10% of the time.

Here’s one way to get a little more info; let’s say we believe that people trying to imitate random numbers choose 0 less often than they should. If the Iranian election digits had an overpopulation of 0, you might take this to be evidence *against* the made-up number hypothesis.

So I checked — and in fact, only 9 out of the 116 digits from the provincial returns, or 7.7%, are 0. Point, Beber and Scacco.
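Here’s one way to quantify how much that observation favors the made-up-number hypothesis: compare the binomial likelihood of seeing 9 zeros in 116 digits under the uniform rate of 10% against a human-picker rate of roughly 7% — my stand-in for the Boland-Hutchinson range of 6.5%–7.5%:

```python
from math import comb

n, k = 116, 9  # 9 zeros observed among the 116 final digits

def binom_pmf(n, k, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

L_random = binom_pmf(n, k, 0.10)  # truly random digits
L_human = binom_pmf(n, k, 0.07)   # human-invented digits (assumed rate)

ratio = L_human / L_random
print(f"likelihood ratio (human : random) = {ratio:.2f}")
```

The ratio comes out modestly above 1: the observed zero deficit fits the human-picker story a bit better than the random one — weak evidence on its own, but a point in the same direction.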

In the end, it’ll take people with better knowledge of Iranian domestic politics — that is, people with more reliable priors — to determine what portion of the election numbers are fake. But Beber and Scacco have convinced me, at least, that the provincial returns they studied are more consistent with made-up numbers than with real ones.

Here’s a post from Andrew Gelman’s blog in which Beber and Scacco explain what their tests reveal about the county-level election data.

**Update:** A more skeptical take on Beber and Scacco from Zach at Alchemy Today, who also makes the point that in order to get this question right it’s a good idea to think about the *way* in which people’s attempts to choose random numbers deviate from chance. I think his description of Beber and Scacco’s reasoning as “bogus” is too strong, but his observation that the penultimate digits of the state totals for Obama and McCain are as badly distributed as the final digits of the Iran numbers is a good reminder to be cautious.

**Re-update:** Beber remarks on Zach’s criticisms here.

In the first sentence of paragraph -5, (“So outcome C, I think, would clearly be…”) you’ve switched outcomes B and C. Either that or I’m very confused…

Crap. Think I fixed this now.

I would be cautious about treating biases among American university students as equivalent to those of Iranian voters: although the former are biased against 0s, etc., that is only weak evidence that the latter are too.

Also note that Beber & Scacco present a second test, which combined with the first, greatly reduces the probability. Their calculations, though, contain a minor error: the probability of the two results occurring by chance is not the stated 0.5%, but rather 0.14%. I pointed that out to the authors and they concurred; see the report online at Discovery Magazine:

http://blogs.discovermagazine.com/discoblog/2009/06/22/update-irans-numbers-even-fishier-than-previously-reported/

I agree that “bogus” is strong, but when the authors are arrogantly proclaiming that their analysis “leaves little room for reasonable doubt” and that they “systematically show” what likely happened, a strong response is required. Frankly, it’s dishonest. Go read their work on Nigerian elections and see what they’re omitting when it comes to expected numbers — they conveniently pick the evidence from cognitive psychology that fits their observations and selectively summarize that which doesn’t. Lastly, mentioning the 0.005 probability (ignoring that it’s wrong and uncorrected by the Post thus far) is specious when there is a much, much higher probability that an equivalent event would occur in a random sequence.

I don’t have the time or skill to prove this, but I suspect that it’s more likely than not that an article could be written following precisely the same logic for any random sequence of 116 two-digit numbers.

More here btw – http://alchemytoday.com/2009/06/25/more-on-that-devil/

[…] We’ve talked about attempts to prove election fraud by mathematical means before. This time the election in question is in Russia, where angry protesters marched in the streets with placards displaying the normal distribution. Why? Because the turnout figures look really weird. The higher the proportion of the vote Vladimir Putin’s party received in a district, the higher the turnout; almost as if a more ordinary-looking distribution were being overlaid with a thick coating of Putin votes… Mikhail Simkin in (extremely worth reading pop-stats magazine) Significance argues there’s no statistical reason to doubt that the election results are legit. Andrew Gelman is not reassured. […]