Here’s a paragraph from a recent Scientific American article on the interpretation of statistics:
…only about one out of every 10 women who test positive in screening actually has breast cancer. The other nine are falsely alarmed. Prior to training, most (60 percent) of the gynecologists answered 90 percent or 81 percent [chance that the woman actually has cancer], thus grossly overestimating the probability of cancer. Only 21 percent of physicians picked the best answer—one out of 10.
The context here is false positives: the chance that the test will indicate a problem even when there is no cancer. The false positive rate is small, but since mammograms are recommended as a routine screening, vastly more healthy patients get them than patients with cancer. The result is that even with a positive test, there’s only about a 1 in 10 chance that the patient has cancer. (Which is, of course, why further tests are done at that point.) The same reasoning applies to any screening procedure.
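The arithmetic behind that 1-in-10 figure is just Bayes’ rule. Here’s a minimal sketch; the article gives only the 1% prevalence and the 1-in-10 conclusion, so the 90% sensitivity and 9% false positive rate below are assumed illustrative figures that reproduce it:

```python
# Bayes' rule for the screening question: given a positive mammogram,
# what is the probability of cancer?
prevalence = 0.01       # P(cancer): 1% of women screened (from the article)
sensitivity = 0.90      # P(positive | cancer) -- assumed for illustration
false_positive = 0.09   # P(positive | no cancer) -- assumed for illustration

# Total probability of a positive test, summed over both groups.
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# Bayes' rule: the share of positives that are true positives.
p_cancer_given_positive = sensitivity * prevalence / p_positive

print(f"P(cancer | positive test) = {p_cancer_given_positive:.3f}")
```

With these numbers the result is about 0.09, i.e. roughly 1 in 10: the healthy population is so much larger that its 9% of false alarms swamps the true positives.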
Now for the scary part: read that paragraph again. It’s not just that only 21% of physicians could pick out the right answer; these were gynecologists being asked about one of the most common tests they perform. That statistics are widely misunderstood is nothing new, but professionals misunderstanding one of the central statistics of their own discipline is both surprising and horrifying.
The authors of the Scientific American article advocate presenting the statistics as natural frequencies: instead of saying “1% of women have breast cancer”, they would recommend “10 out of every 1000 women have breast cancer”. This apparently had good results.
After learning to translate conditional probabilities into natural frequencies, 87 percent of the gynecologists understood that one in 10 is the best answer.
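The natural-frequency version of the same calculation starts from a concrete population and counts cases instead of chaining conditional probabilities. As before, the 90% sensitivity and 9% false positive rate are illustrative assumptions; only the 1% prevalence comes from the article:

```python
# Natural frequencies: count people in a population of 1,000
# rather than multiplying conditional probabilities.
women = 1000
with_cancer = women * 0.01                # 10 women have cancer
true_positives = with_cancer * 0.90       # ~9 of them test positive (assumed rate)
without_cancer = women - with_cancer      # 990 women are healthy
false_positives = without_cancer * 0.09   # ~89 are falsely alarmed (assumed rate)

# Of all the women who test positive, what share actually have cancer?
share = true_positives / (true_positives + false_positives)
print(f"{true_positives:.0f} of {true_positives + false_positives:.0f} "
      f"positives actually have cancer")
```

About 9 of the roughly 98 positives are real, which is the “one out of 10” answer; phrased as counts, the swamping effect of the 990 healthy women is hard to miss.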
I’m not sure if I should be happy about this, or incredibly sad that 13% still couldn’t.