Monday, November 14, 2011

Parkinson's and trichloroethylene

The BBC reports on a study in Annals of Neurology claiming an estimated 6-fold increase in Parkinson's risk in those exposed to trichloroethylene, albeit with a lag time of 40 years.

Two things jump out at me: the long lag time and the low statistics (99 twin pairs, each with one twin who has Parkinson's and one who doesn't). Two other solvents seemed to be associated with somewhat higher risk: perchloroethylene and carbon tetrachloride. The report states that "No statistical link was found with the other three solvents examined in the study - toluene, xylene and n-hexane."

I don't trust that 6x number. Some increase in risk is plausible, but ...

The sample size is small enough that the uncertainty on the ratio is going to be a substantial fraction of that number 6. I can't get at the original article, but be generous and assume that almost all the participants were able to accurately self-report (warning! uncertainties here!) exposure to the chemicals: say 94 of the pairs, with 12 reporting exposure and 82 reporting none. An ordinary 1-sigma fluctuation in counts that small can drag the ratio of 6 down below 5 or push it above 10. The article undoubtedly reports the error estimates, but your typical reporter is statistically illiterate and omits them. Systematic errors add to the uncertainty--remember that this is self-reported exposure.
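
To make that concrete, here's a quick back-of-the-envelope sketch (Python), using only my made-up 12/82 split and plain sqrt(N) counting errors--not the study's actual matched-pair analysis:

    # Back-of-the-envelope only: made-up 12/82 split and sqrt(N) counting
    # errors, not a reanalysis of the twin study.
    import math

    exposed, unexposed = 12, 82      # my guessed split of the 94 pairs
    ratio = 6.0                      # the headline risk ratio

    # fractional Poisson errors on the two counts, combined in quadrature
    rel_err = math.sqrt(1 / exposed + 1 / unexposed)   # about 0.31
    print(f"fractional counting error: ~{rel_err:.0%}")
    print(f"illustrative 1-sigma swing on a ratio of 6: "
          f"{ratio * (1 - rel_err):.1f} to {ratio * (1 + rel_err):.1f}")

Any ratio built from counts this small inherits a fractional error at least that big, and the real error bars on a matched-pair odds ratio are typically asymmetric and wider still.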

Another factor to consider is surprising-result bias. Suppose for the sake of argument (PLEASE DON'T USE THESE NUMBERS! I JUST MADE THEM UP.) that half the solvents gave a 50% increase in the risk of developing the disease. If the report had found that 50% increase in risk, it would have been an important finding, but it would not have gotten the attention of the BBC. An accidental fluctuation that gives one of the ratios a value of 6 rather than 1.5 would be dramatic and make news. An accidental fluctuation that gives one of the ratios a value of 0.1 (reduced risk) would also make the news, and no doubt lead to people trying to drink xylene to treat Parkinson's. An accidental fluctuation that takes the 1.5 down to 1 (no change in risk) would not get any attention at all, and might not even be published--which is pretty scary, when you think about it.
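
Here's a toy simulation of that effect. Every number in it is invented (including the 1.5, echoing the made-up figure above); it's only meant to show how often tiny counts turn a modest true effect into a "dramatic" ratio purely by luck:

    # Toy Monte Carlo: modest true effect (ratio 1.5), tiny expected counts.
    # All numbers invented for illustration; this is not a model of the study.
    import numpy as np

    rng = np.random.default_rng(0)
    mu_exposed, mu_unexposed = 6.0, 4.0    # true ratio = 1.5, counts kept small
    n_trials = 100_000

    dramatic = 0
    for _ in range(n_trials):
        e = rng.poisson(mu_exposed)        # observed cases among the exposed
        u = rng.poisson(mu_unexposed)      # observed cases among the unexposed
        obs = e / u if u > 0 else float("inf")
        if obs >= 4.0 or obs <= 0.5:       # looks newsworthy in either direction
            dramatic += 1

    print(f"chance a single chemical looks 'dramatic' by luck: {dramatic / n_trials:.0%}")

Run it and the "dramatic" fraction is nowhere near negligible--and that's for one chemical, before you start shopping among six.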

The thing to keep in mind here is that if there's a 5% chance that testing one chemical on a sample this small gives a crazy result, then looking at 6 different chemicals (with uncorrelated reported exposures) gives you about a 26% chance that at least one of the comparisons will be crazy. That's why you want large sample sizes, and why you repeat experiments. And why scientists, as opposed to reporters and politicians, report not just the result but also its uncertainty.
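
For the record, that 26% is just the look-elsewhere arithmetic--six independent shots at a 5% fluke:

    # Six independent chances at a 5% fluke
    p_single = 0.05
    p_at_least_one = 1 - (1 - p_single) ** 6
    print(f"{p_at_least_one:.0%}")   # about 26%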
