Prosecutor’s fallacy — now with less fallaciousness!

I gave a talk at East HS yesterday about “not getting fooled by probability,” in which I talked about the Diana Sylvester murder case, discussed previously on the blog as an example of the prosecutor’s fallacy.  While getting ready for the talk I came across this post about the case by Penn State law professor David Kaye, which explains how, in this case, the calculation proposed by the defense was wrong in its own way.

Here’s the setup.  You’ve got DNA found at the scene of the crime, with enough intact markers that the chance of a random person matching the DNA is 1 in 1.1 million.  You’ve got a database of 300,000 convicted violent criminals whose DNA you have on file.  Out of these 300,000, you find one guy — otherwise unconnected with the crime — who matches on all the markers.  This guy gets arrested and goes to trial.  What should you tell the jury about the probability that the defendant is guilty?

The prosecution in the case stressed the “1 in 1.1 million” figure.  But this is not the probability that the defendant was innocent.  The defense brought forward the fact that the expected number of false positives in the database was about 1/3; but this isn’t the probability of innocence either.  (The judge blocked the jury from hearing the defense’s number, but allowed the prosecution’s, whence the statistical controversy over the case.)

As Kaye points out, the missing piece of information is the prior probability that the guilty party is somewhere in the DNA database.  If that probability is 0, then the defendant is innocent, DNA or no DNA.  If that probability is 1, then the defendant is guilty, because everyone else in the database has been positively ruled out.  Let x be the prior probability that someone in the database is guilty; let p be the probability of a false positive on the test; and let N be the size of the database.  Then a quick Bayes shows that the probability of the defendant’s guilt is

x / (x + (1-x)Np).

If Np is about 1/3, as in the Sylvester case, then everything depends on x.  But it’s very hard to imagine that a good value for x is any more than 1/2, especially given the existence of a pretty good suspect in the case who died in 1978 and isn’t in the database.  Then the defendant has at most a 3/4 probability of guilt, not nearly enough to convict.  The prosecution and the defense both presented wrong numbers to the judge; but the defense numbers were less wrong.
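
For concreteness, here is a minimal sketch of that calculation in Python (taking Np = 1/3, the defense’s expected-false-positive figure; the function name is just for illustration):

    def posterior_guilt(x, Np=1/3):
        """P(defendant guilty | lone database hit).
        x  = prior probability that the guilty party is in the database
        Np = expected number of innocent matches in the database
        """
        return x / (x + (1 - x) * Np)

    for x in (0.1, 0.25, 0.5, 0.9):
        print(f"x = {x:.2f} -> P(guilt | match) = {posterior_guilt(x):.3f}")
    # x = 0.50 gives 0.750, the "at most 3/4" figure above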


13 thoughts on “Prosecutor’s fallacy — now with less fallaciousness!”

  1. Frank says:

    So have people tried to put a quantitative, percent value on “beyond a reasonable doubt”?

  2. Dan Greaney says:

    Hi Jordan,

    So I read your post and I have to say that I was not left feeling as though I understood what was going on inside your calculation–what your calculation was actually doing, logically. If you want to reach non-mathematically trained readers you may need to break it down a bit further. Also, does your approach account for the possibility that each population of 1.1 million will contain a true match? Assuming a US population of 300 million, that means there could be 299 innocent true matches, and that a random group of 300,000 has a .1% chance of having one of them in it and then a .33% chance of that one being the right one? Or something?
    -Dan

  3. Jeff says:

    Frank, that’s an interesting question. I have no idea if there is any discussion about that. Personally, I would probably say that a greater than 10% chance of innocence is certainly a reasonable doubt. I don’t know what my lower bound would be.

    I feel like a 1% chance of innocence is not a reasonable doubt, but on the other hand if you told me that 1% of all inmates were actually innocent I would think that something needed to be changed. That would mean 23,000 innocent people in jail out of 2.3 million inmates.

  4. fetgal says:

    I wonder whether the judge would have allowed "there are 300m people in the US so there are 300 people who match that 1-in-a-million DNA. The police picked the defendant just because he was in their database"? Seems like a much less academic argument, and maybe more admissible to the judge and accessible to the jury.

  5. James Martin says:

    If I were putting a case for the defense to the jury (or to a mathematically unsophisticated judge) I wouldn’t start with the probability of a false positive in the database, which is already a bit confusing.

    Instead I’d start with: the test produces a match on 1 person in a million, and there are 300 million people in the country. So there are about 300 people in the country who match the sample. This guy is one of those 300. What makes you think that the guilty man is this guy rather than one of the other 299?

    I think that’s much easier to get one’s head around.

  6. Nemo says:

    Was the accused living in San Francisco in 1972?

    Does the answer to that question, and similar questions, impact the analysis at all?

  7. DNA Birthday says:

    The fallacies in the second half of the Bobelian story were very amusing.
    Kaye gets that part right, including the factor of {13 \choose 9} = 715 by which the probability is enhanced when one searches for a match on any nine of thirteen markers, not specified in advance. The numbers of matching pairs reported (33 in a database of 30k, 144 in a database of 65k, and 903 in a database of 233k) are then roughly as expected (actually the latter is, if anything, slightly lower than expected).

    The problem is a simple variant of the Birthday Problem, and the criticism roughly analogous to: “Almost every time I pick a group of 50 people, there are at least two with coincident birthdays — there must be something wrong with the estimate of 1/365 probability for any two birthdays to match.”
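
    A quick numerical check of that birthday-problem estimate (a sketch only: q, the per-pair probability of a nine-of-thirteen match, is not given in the thread, so the value below is an assumed stand-in calibrated to the first reported count):

        from math import comb

        def expected_pairs(n, q):
            # expected number of partially matching pairs among n profiles,
            # assuming independence between pairs; grows roughly like n^2
            return comb(n, 2) * q

        q = 7.3e-8  # assumed per-pair 9-of-13 match probability (illustrative)
        for n, reported in [(30_000, 33), (65_000, 144), (233_000, 903)]:
            print(f"n = {n:>7,}: expected ~ {expected_pairs(n, q):7.0f}, reported {reported}")

    (With q pinned to the first count, the 233k database comes out at around 2000 expected pairs, so the reported 903 is on the low side, consistent with the parenthetical above.)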

  8. J. Polchinski says:

    Reading the article, it appears that the prosecutor actually presented to the jury what is probably the most intuitive way to understand the statistics:
    “Merin brought the figure into sharp relief with a simple calculation: the year Sylvester was murdered, he noted, California had eighteen million residents, about half of them men; given the rarity of the crime scene DNA profile, he argued that there were only eight or nine people living in the state who could have done it—and Puckett was one of them.” The reason the jury was convinced that Puckett was the guilty one of these eight was that the details of the crime matched his earlier convictions. It’s an interesting question: to pose it as a murder mystery, if there is a murder and the murderer is one of eight suspects at the crime scene, do you convict purely based on previous convictions with the same MO (sexual violence with a sharp weapon)? I have a high standard for reasonable doubt, but if you fold in the additional probability that the individual identified by DNA also has this history, I believe that the probability of guilt is quite high.

  9. J. Polchinski says:

    To amplify a bit, there are two relevant pieces of data, the DNA match and the conviction for a similar crime. The probability that a random person has been convicted of a similar crime is around 10^{-4}. The probability that Sylvester’s murderer has been convicted of a similar crime is likely to be around 0.5. You can quibble with these numbers, but not by a large factor, I think. So the prior probability that Puckett is the murderer, as compared to a random individual, is around 5000 times greater. Thus, if we estimate that there are 9 DNA matches in California, the probability of Puckett’s guilt is not 1/9 but 5000/(5000+8). Remarkably, based on the article you cite, this is essentially the argument presented to the jury, without the numbers of course. Other factors you discuss, like the size of the DNA database, are, I think, not relevant to the calculation.
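
    In code, that back-of-the-envelope Bayes update looks like this (a sketch using the comment’s own estimates, which are assumptions rather than case data):

        p_conv_given_guilty = 0.5    # P(similar prior conviction | actually the murderer)
        p_conv_given_random = 1e-4   # P(similar prior conviction | random person)
        n_matches = 9                # estimated DNA matches among California males

        prior = 1 / n_matches        # each match equally likely a priori
        posterior = (p_conv_given_guilty * prior) / (
            p_conv_given_guilty * prior + p_conv_given_random * (1 - prior)
        )
        print(posterior)             # about 5000 / 5008, i.e. roughly 0.998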

  10. DNA Birthday says:

    What this really brings into sharp relief, unfortunately, is just how much difficulty even a mathematically sophisticated jury would have resolving this. Suppose such a jury happened to contain a U Wisc math prof and a UCSB physics prof.

    The math prof argues compellingly that the probability of guilt is between 0 and 1, depending entirely on how a priori likely it’s assumed that the murderer is in the database. Asked to flesh out his argument, he requests a whiteboard for the jury room, and writes

    Let g = the condition that the person is guilty,
    let m = the condition that the DNA matches someone in the database,
    let db = the condition that the person is in the database. Then

    p(g,db|m) = p(m|g,db)p(g,db) / p(m) = p(m|g,db)p(g,db) / (p(m|g,db)p(g,db) + p(m|g,not db)p(g,not db))

    Then he estimates p(m|g, not db) = 1-(1-p)^N (close enough to pN when pN is small; when only p is small, a better approximation is 1-exp(-pN)), where p=10^{-6} and N=338000 (the size of the database), so about .3 (close enough to the 1/3 above).
    He then notes that p(m|g,db)=1, and lets x=p(db), giving
    p(g,db|m) = x/(x+.3(1-x))
    which ranges from 0 to 1 as x ranges from 0 to 1.

    Now the physics prof effectively argues that the prior x in the above should be set very high (x=.995), given that the single match found corresponds to someone convicted of a similar crime. Because the conclusion is so much stronger, he’s asked to flesh out his argument as well, and he writes the additional condition
    c = person has been convicted of a similar crime,
    so that

    p(g|c,m) = p(g,c,m)/p(c,m) = p(c|g,m)p(g,m) / (p(c,g,m) + p(c,not g,m))
    = p(c|g,m)p(g|m) / (p(c|g,m)p(g|m) + p(c|not g,m)p(not g|m))
    = .5 (1/9) / (.5 (1/9) + 10^{-4} 8/9) = 5000 / (5000 + 8)

    where he has estimated p(c|g,m) = .5 and p(c|not g,m) = 10^{-4}, and used p(g|m) = 1/(pN) with pN = 9 (N now being the full 9 million males in the state).
    As he points out, the probability is high as long as p(c|not g,m) << p(c|g,m), and the prior p(g|m) is of order unity.

    Now whom should we believe? Some members of the jury suspect that mathematicians are too theoretical for their own good, so prefer to trust the more practical physicist. But the instructions to the jury permit assessing fellow jurors via internet search, and a computer scientist on the jury discovers that the physicist not only works with p-branes but believes in the anthropic principle, and hence has significantly compromised credibility.

    The computer scientist goes on to point out that the world-renowned physicist has used a potentially unprincipled methodology for combining the two pieces of evidence p1= p(g|c) and p2=p(g|m). An alternative methodology known as "naive Bayes" assumes that the pieces of evidence are statistically independent, together with a flat prior (p(g)=1/2, though that is easily relaxed), and results in
    p(g|m,c) = p1 p2 / (p1 p2 + (1-p1)(1-p2)) .
    Since p2 is a fraction of order 1/9, this suggests to the remainder of the jury that the physicist’s argument depends implicitly on assuming an overwhelming probability of guilt based on a prior conviction for a similar offense.

    The point is not to argue that anyone is wrong, only that such probability problems frequently require a variety of assumptions before they are well-specified.
    The point is really to wonder what a less mathematically sophisticated jury should make of this, given that more mathematically-minded onlookers can have a reasoned disagreement on a seemingly simple issue.
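
    A sketch of the "naive Bayes" combination described above, with p2 = 1/9 from the DNA match and a range of assumed values for p1 = p(g|c), which the thread leaves unspecified:

        def combine(p1, p2):
            # naive-Bayes combination of two pieces of evidence under a flat prior p(g) = 1/2
            return p1 * p2 / (p1 * p2 + (1 - p1) * (1 - p2))

        p2 = 1 / 9
        for p1 in (0.5, 0.9, 0.99, 0.999):
            print(f"p1 = {p1}: combined = {combine(p1, p2):.4f}")
        # 0.1111, 0.5294, 0.9252, 0.9921: a high combined number requires p1 to be
        # very close to 1, i.e. near-certain guilt from the prior conviction alone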

  11. Refine Prior says:

    Well, if I were on the jury with the mathematician, the computer scientist, and the p-brane anthropicist, I would request to know the percentage of people in the CA felony database who have been convicted of a similar crime (p(c|db) in the above notation), and also an estimate of the probability of having committed a similar crime but never having been caught.
    That data would help refine some of the unknown priors, in particular the weight to be given to a prior conviction for a similar crime.

  12. Dave says:

    To understand and use Bayesian inference, it helps a bit to use the *odds formulation*. But the problem remains with the priors… those flaming priors. You would hope that, no matter what prior is chosen, the posterior would tend to converge to the same answer.

    I might say with respect to the whole DNA thing that it bothers me that the probabilities change with the size of the database. Fifteen years ago, Joe Serial Killer, who we all know is guilty, was far less likely to be matched because the database was so much smaller than it is today.

  13. [...] to have played some small part in building their book — I was the one who told Leila about the murder of Diana Sylvester, which turned into a whole chapter of Math on Trial; very satisfying to see the case treated with [...]
