Base Rate Neglect

by: Abassos • January 27, 2010

Suppose an expert testifies that Veronica Victim fits the SAP profile for someone who has been sexually abused. Suppose also that the SAP is a highly valid and reliable test which was normed on the exact same population as Veronica. That is, the test is a good one. In fact, the test is so good that when a kid has been abused, it correctly flags that kid 90% of the time. And when a kid has not been abused, it raises a false alarm only 10% of the time: the kid shows all the characteristic markers of abuse, but for reasons having nothing to do with abuse.

What is the probability that Veronica Victim has been abused?

Most people would say 90%. The actual answer is that we need more information. We need to know how common sex abuse is in the general population. That is, we need to know the base rate for sex abuse. Without knowing that information we have no idea how many false alarms there are.
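To see why the base rate matters, it helps to write the question out as Bayes' theorem (standard notation, added here purely for illustration; it is not part of the original example). The probability of abuse given a positive test depends on the prior probability of abuse, not just on how accurate the test is:

\[
P(\text{abused} \mid \text{test says abused}) =
  \frac{P(\text{test says abused} \mid \text{abused}) \, P(\text{abused})}
       {P(\text{test says abused} \mid \text{abused}) \, P(\text{abused})
        + P(\text{test says abused} \mid \text{not abused}) \, P(\text{not abused})}
\]

The term P(abused) in that formula is the base rate, and without it the fraction cannot be computed.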

Assume that the base rate for sexual abuse is 4% and that there are 50 million kids in the nation. That means that 2 million kids have been abused and 48 million have not.

If you gave this test to every kid in the nation, it would accurately spot 90% of the 2 million abused kids. But it would also think that 10% of the 48 million unabused kids were abused. 90% of 2 million is 1.8 million kids. 10% of 48 million is 4.8 million. That means there will be way more false alarms or "false positives" than there will be kids who are accurately identified.

To find the actual probability that a particular person the test identifies as abused really was abused, you divide the number of people accurately identified by the total number of people the test identifies (accurately and falsely). Stated another way, you divide the true positives by the true positives plus the false positives. Here, we'd divide 1.8 million (the true positives) by 6.6 million (1.8 million true positives plus 4.8 million false positives). 1.8 divided by 6.6 is about .27. That is, there's only about a 27% chance that Veronica Victim has been accurately identified by the test. Not 90%. Much worse than a coin flip.
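For anyone who wants to check that arithmetic, here is a minimal sketch in Python using the hypothetical numbers from this example (a 4% base rate, 50 million kids, a 90% detection rate, and a 10% false alarm rate). None of these figures are real statistics; they are the assumptions stated above.

# Hypothetical numbers from the example above -- not real statistics.
population = 50_000_000       # kids in the nation
base_rate = 0.04              # assumed rate of actual abuse
detection_rate = 0.90         # test flags 90% of abused kids
false_alarm_rate = 0.10       # test wrongly flags 10% of non-abused kids

abused = population * base_rate                     # 2,000,000
not_abused = population - abused                    # 48,000,000

true_positives = abused * detection_rate            # 1,800,000
false_positives = not_abused * false_alarm_rate     # 4,800,000

# Chance that a kid flagged by the test was actually abused
probability = true_positives / (true_positives + false_positives)
print(round(probability, 2))                        # 0.27 -- about 27%, not 90%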

Now think about all the times that police officers take the stand and say that, based on their training and experience, they know this person is a drug dealer/pimp/prostitute/etc. because of behavior and details they've noticed. One problem is that officers only notice the people they actually arrest as pimps, not the people they don't arrest. That alone is reason to be dubious of such testimony in any case.

But the bigger problem is that there are serious base rate issues here. Even if the cop's unscientific profile is 80% accurate, the base rate for, say, pimps in the general population is very, very low. One in 10,000, maybe. So the 20% the profile gets wrong will capture far more people than the 80% it gets right. Assuming those numbers, in a population of 1 million people the officer's profile would accurately spot 80 of the 100 pimps. But it would also misidentify 199,980 average citizens as pimps. That works out to a very, very bad probability: about a 0.04% chance (80 out of roughly 200,000) that a particular person flagged by the officer's profile is actually a pimp.
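The same back-of-the-envelope arithmetic can be sketched the same way, again with the hypothetical numbers assumed above (a base rate of one in 10,000, an 80% accurate profile, a population of one million):

# Hypothetical numbers -- assumed for illustration, not measured.
population = 1_000_000
base_rate = 1 / 10_000        # one pimp per 10,000 people
hit_rate = 0.80               # profile spots 80% of actual pimps
false_alarm_rate = 0.20       # and wrongly flags 20% of everyone else

pimps = population * base_rate                      # 100
non_pimps = population - pimps                      # 999,900

true_positives = pimps * hit_rate                   # 80
false_positives = non_pimps * false_alarm_rate      # 199,980

probability = true_positives / (true_positives + false_positives)
print(round(probability, 4))                        # 0.0004 -- about a 0.04% chance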

My point is that there are a lot of statements out there that we should be challenging as either irrelevant or as inadmissible scientific evidence. If there aren't numbers because it's not a real test, put some numbers to it so that it becomes obvious we're looking at a really badly done study rather than the sort of experience and training we should trust.