There's a world of difference between looking at a histogram, seeing a medium-sized bump at point A, and saying "I wonder what's there?", and puzzling through the theory to figure out that there might be a bump at point B, and then looking at the histogram and finding a small bump there.
Why?
In a few hundred random histograms, there will typically be a 3-σ bump--that is to say, a bump that should occur by chance only about once in 300 tries. Random stuff will sometimes look like a real signal of some kind; that's life. So if you have a sky map of where neutrinos came from, and one 5-degree-wide spot looks like it has extra neutrinos coming from it, that doesn't mean much of anything. Looking through the sky catalogs for some sort of match is likely to be a fool's errand. You'll probably find something: there'll be dozens of possible candidates. The chances that any match is just a coincidence are high.
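To make the trial-factor point concrete, here is a toy Monte Carlo of my own (the bin count and background level are arbitrary assumptions, not anything from a real analysis): generate a few hundred background-only histograms and count how many contain at least one bin fluctuating 3 σ above its expectation.

```python
import numpy as np

rng = np.random.default_rng(42)

n_histograms = 300   # "a few hundred random histograms"
n_bins = 50          # bins per histogram (arbitrary assumption)
mu = 100.0           # expected background counts per bin (arbitrary assumption)

found = 0
for _ in range(n_histograms):
    counts = rng.poisson(mu, size=n_bins)
    # call a bin a "3-sigma bump" if it exceeds the expectation by 3*sqrt(mu)
    if np.any(counts > mu + 3.0 * np.sqrt(mu)):
        found += 1

print(f"{found} of {n_histograms} background-only histograms "
      f"contain at least one >3-sigma bin")
```

Even though there is nothing but noise in them, a noticeable fraction of those histograms will show a "bump."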
If you look along a tide-washed beach for charred wood, and find a few extra sticks together, it might be the site of a fire or it might merely be where an eddy deposited some debris washed from a mile away.
But if you know you saw a light on the left side of the bay last night and this morning you find the charred wood there--but not much anywhere else--you can surmise with some confidence that the light was from the campfire. If theory says there should be a bump at point B and you see a smallish sort of bump there, you start to feel confident that you are seeing something real.
To keep people from "bump hunting", and to keep one analysis from costing the others significance through trial factors once it reveals something signal-like in region X, the experiments try to coordinate analyses. (This also makes sure there are enough thesis topics to go around.)
The first step is to figure out a question and then "back of the envelope" it to see if your experiment's data has a snowball's chance in hell of answering it. Usually it doesn't.
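To give a flavor of what that back-of-the-envelope step looks like, here is a toy version; every number in it is an invented placeholder, and a real estimate would fold in energy spectra, effective areas, and backgrounds far more carefully.

```python
# Every number below is a made-up placeholder, not a real detector parameter.
flux     = 1e-12    # assumed signal flux at the detector [events / cm^2 / s]
eff_area = 1e6      # assumed effective area              [cm^2]
livetime = 3.15e7   # one year of livetime                [s]
expected_background = 10.0   # assumed background events in the search region

expected_signal = flux * eff_area * livetime
rough_significance = expected_signal / expected_background ** 0.5  # S / sqrt(B)

print(f"expected signal events : {expected_signal:.1f}")
print(f"rough significance     : {rough_significance:.1f} sigma")
```

If that rough significance comes out hopelessly tiny, the question goes back in the drawer.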
If it does, the next step is to build as accurate a model as you can, use the data simulation system to create signal and background data, and try to refine an analysis that can tell the difference between them. You then have to estimate two things: the "discovery potential" and the "limit potential" of your analysis. BTW, you have to give reports on this in your working group, and learn from the comments and suggestions. "Discovery potential" is usually described by a set of curves: if the signal exceeds the 5-σ curve you can announce a discovery; if it exceeds 3-σ you have "evidence"; if it falls below 1-σ you grump and see if you can publish a limit. "Limit potential" is similar, but in the opposite direction--how strongly you can rule out a signal.
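As a sketch of what a discovery-potential curve means, here is a toy counting experiment of my own (the background level is an assumed placeholder, and real analyses use far more careful statistical machinery): scan over injected signal strengths and ask where the background-only p-value crosses the 3-σ and 5-σ thresholds.

```python
import numpy as np
from scipy import stats

background = 20.0                               # assumed expected background counts
signal_strengths = np.linspace(0.0, 40.0, 401)  # injected signal events to scan over

# p-value for seeing at least background+signal counts if only background exists,
# converted to one-sided Gaussian sigmas.
observed = background + signal_strengths
p_values = stats.poisson.sf(observed - 1, background)   # P(N >= observed | bkg only)
sigmas = stats.norm.isf(p_values)

for threshold, label in [(3.0, "evidence"), (5.0, "discovery")]:
    needed = signal_strengths[np.argmax(sigmas >= threshold)]
    print(f"{label:>9}: need roughly {needed:.1f} signal events on top of "
          f"{background:.0f} background")
```

The analogous exercise in the other direction--how small a signal you could still exclude--gives the limit potential.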
So far so good, but simulated data is never quite the same as the real thing. There are several things one can do at this stage. You can scramble the data, mixing bits from this event and that one together, and use that as a new, more realistic simulation of the background. Or you can run your analysis on a small part of the dataset and tune it to address the problems you find in the real noise rates. This is the "burn sample". Typically you are allowed to look only at certain quantities that measure data quality, and not at those that would display a signal.
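Here is a rough sketch of what scrambling and a burn sample can look like; the event fields and numbers are invented for illustration, not the experiment's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "events": an arrival time and a reconstructed direction (all invented)
n_events = 1000
times = np.sort(rng.uniform(0.0, 3.15e7, n_events))   # seconds over one year
right_ascension = rng.uniform(0.0, 360.0, n_events)   # degrees

def scramble(times, directions, rng):
    """Pair each direction with a randomly chosen event's time, destroying any
    real time/direction correlation while keeping realistic per-event properties."""
    return rng.permutation(times), directions

pseudo_times, pseudo_directions = scramble(times, right_ascension, rng)

# A "burn sample", by contrast, is just a small agreed-upon slice of the real
# data, e.g. the first tenth, used for tuning before the rest is ever opened.
burn_times = times[: n_events // 10]
```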
Then you present your results again, and the collaborators argue, and eventually give you the go-ahead to run on the whole dataset.
Then you present your results from that, and they argue a while about what you should do instead. At some point they agree to the "unblinding" and the results you really wanted to see are finally produced and presented.
Usually by this time there's some general idea of whether this is going to be a discovery or a limit, but there are sometimes surprises. Bert and Ernie were surprises--two neutrinos at such high energy in only one year's running aren't easy to explain with existing models (in fact a talk this morning ruled out the top 5 models, with nothing left).
Now comes a lot of argument over what you really should have done to make the analysis better. And over the interpretation, and how it should be presented to the world. Sometimes personalities clash. (Funny, that.)
In a big group there is usually a team to help shepherd the paper and guide the analysis, especially if the researcher is a grad student. Anybody can make a typo in their code; it is good to have extra eyes.
If the working group approves it, and the physics coordinator approves it, and the collaboration approves it, off it goes to arXiv and the journals. Usually it gets past the reviewers with only minor changes, and several months later it is printed. And nobody reads it, because if they were really interested they already read it on arXiv. Or it was leaked at a conference. Scientists are generally lousy at keeping secrets.
1 comment:
This was actually quite fascinating. I knew fragments of this, but it's nice to see a fuller version set down neatly.
Your mention that the new data are impossible is what most of us think about physics since Schrödinger anyway, so we're not shocked that the top 5 theories are now toast. Some of them will be back.