Whereas the control subjects showed activity in the part of the brain associated with self-representation, the subjects with autism did not. This means that the autistic individuals envisioned the words and actions presented to them (actually they read the words on a screen) without placing themselves in the scenario as a participant, while the control group saw themselves being hugged, complimented, kicked, and insulted when thinking about these concepts.
With only 17 neurotypical participants, there is no good estimate of the false-positive rate, and that matters because autism occurs at roughly the 1% level in the population: with a base rate that low, even a small false-positive rate would swamp the true positives in any screening use. And quite a few participants had to be removed from the study:
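To see why the base rate matters, here is a back-of-the-envelope calculation. The sensitivity and specificity figures are invented for illustration, not taken from the study:

```python
# Base-rate illustration: with ~1% prevalence, even a good classifier
# flags mostly false positives. The 0.97 figures below are assumptions.
prevalence = 0.01
sensitivity = 0.97   # assumed: fraction of autistic cases correctly flagged
specificity = 0.97   # assumed: fraction of controls correctly passed

population = 100_000
true_pos = prevalence * population * sensitivity                 # 970
false_pos = (1 - prevalence) * population * (1 - specificity)    # 2970
ppv = true_pos / (true_pos + false_pos)
print(f"positive predictive value: {ppv:.0%}")
```

Even with these optimistic assumed numbers, only about a quarter of the people flagged would actually be autistic.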
The data from the 25 excluded participants (12 with autism and 13 controls) had been affected by either excessive (above 3.5 mm) head motion (6 with autism and 3 controls) or lack of attention to the stimulus in a substantial number of trials (6 with autism and 10 controls). Participants in such studies comment that occasionally their mind wanders when processing some items, and we have previously found such inattention to be characterized by an abnormal occipital activation time course.
The effect is certainly dramatic, and it may be real, but this needs a lot more study before we can talk about "diagnosis", especially diagnosis of children young enough that intervention might help. Can you imagine a 2-year-old inside a noisy machine lying quietly and listening to Mommy say "hug!"?
The data they are dealing with is somewhat fuzzy, so they use what we call a "neural net" to try to analyze the results. It works something like this: for a given event (a particular patient i, for example), you have several different measurements on channels A, B, C: values A_i, B_i, C_i. If you know in advance that the first 10 events are from bald Martians and the rest are from hairy Venusians, and if you "notice" that in the first 10 cases A_i=B_i but in the rest of them A_i=-B_i, you can generate a "bald Martian" formula (A_i+B_i). When this is 0 you probably have a hairy Venusian, otherwise you have a bald Martian.
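The Martian/Venusian toy can be written out directly. Everything here (the data, the names, the tolerance) is invented purely to illustrate the idea:

```python
# Toy version of the separation described above: bald Martians have A == B,
# hairy Venusians have A == -B. All values are made up for illustration.
martians = [(1.0, 1.0), (2.0, 2.0), (0.5, 0.5)]
venusians = [(1.0, -1.0), (2.0, -2.0), (0.5, -0.5)]

def classify(a, b, tol=1e-9):
    # The "bald Martian" formula: A + B is ~0 for Venusians
    # (since A == -B) and nonzero (2A) for Martians.
    return "martian" if abs(a + b) > tol else "venusian"

assert all(classify(a, b) == "martian" for a, b in martians)
assert all(classify(a, b) == "venusian" for a, b in venusians)
```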
The "noticing" is the secret to making the process work. You can use algorithms to combine the data with weights, adjusting them based on the a priori known type of each event, and after a few iterations get a formula (generally linear in the variables) that gives you a kind of probability that an event is one type or the other. There's typically a spectrum, but everybody hopes there will be a nice sharp peak at 1 or 0.
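A minimal sketch of that iterative weighting, using a perceptron-style update on the toy Martian/Venusian data. The data and the learning rate are invented, and the real study's classifier is more elaborate; this only shows the shape of the procedure:

```python
# Learn a linear formula w_a*A + w_b*B + bias whose sign separates the
# two types. Toy data: Martians (A == B) labeled +1, Venusians (A == -B)
# labeled -1, all with positive A so a linear separator exists.
data = [(1.0, 1.0, +1), (2.0, 2.0, +1), (0.5, 0.5, +1),
        (1.0, -1.0, -1), (2.0, -2.0, -1), (0.5, -0.5, -1)]

def train(rows, epochs=20, lr=0.1):
    w_a = w_b = bias = 0.0
    for _ in range(epochs):
        for a, b, label in rows:
            score = w_a * a + w_b * b + bias
            if (1 if score > 0 else -1) != label:   # misclassified: nudge weights
                w_a += lr * label * a
                w_b += lr * label * b
                bias += lr * label
    return w_a, w_b, bias

w_a, w_b, bias = train(data)
# the learned linear formula now classifies every training event correctly
assert all((1 if w_a*a + w_b*b + bias > 0 else -1) == lab for a, b, lab in data)
```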
With enough variables you can easily "overtrain", and the usual procedure is to train on either simulations (risky) or part of the dataset, and then apply the formula to the rest of the data. The experimenters here did something like the latter. If I read the article correctly, they did 34 different neural net training exercises, each time omitting one event and then trying out the resulting formula on the omitted event. They got accurate predictions 33 times out of 34, which is quite good.
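That train-on-all-but-one procedure ("leave-one-out cross-validation") can be sketched on the same toy data. Again, the data and the simple perceptron trainer are invented stand-ins for the study's actual classifier:

```python
# Leave-one-out: for each event, train on all the others, then test the
# resulting formula on the held-out event and count the hits.
data = [(1.0, 1.0, +1), (2.0, 2.0, +1), (0.5, 0.5, +1),
        (1.0, -1.0, -1), (2.0, -2.0, -1), (0.5, -0.5, -1)]

def train(rows, epochs=20, lr=0.1):
    w_a = w_b = bias = 0.0
    for _ in range(epochs):
        for a, b, label in rows:
            if (1 if w_a*a + w_b*b + bias > 0 else -1) != label:
                w_a, w_b, bias = w_a + lr*label*a, w_b + lr*label*b, bias + lr*label
    return w_a, w_b, bias

hits = 0
for i, (a, b, label) in enumerate(data):
    w_a, w_b, bias = train(data[:i] + data[i+1:])   # omit event i
    if (1 if w_a*a + w_b*b + bias > 0 else -1) == label:
        hits += 1
print(f"{hits} out of {len(data)} held-out events classified correctly")
```

Because the held-out event never influences the weights it is tested against, a high hit count here is much stronger evidence than accuracy on the training set itself.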