You could perhaps explain it if the shifts appear because of interactions with other languages, but not so easily if they are supposed to be spontaneous or universal.
At any rate, the BBC reported on a new computer model for reconstructing ancient languages. Unfortunately, the original paper assumes extensive knowledge of the literature, and about all I came away with is that they have a giant optimization problem and that they used Monte Carlo techniques to solve it rather than futilely trying an analytical approach or even a simplex method. They had too many language variables; in cases like that, the systematic approaches require impractical amounts of computer memory and take roughly forever, so you often get a better answer by randomly throwing sets of values through the "phase space" of the variables.
To be specific, suppose you have a million parameters in your problem. You can generate ten thousand sets of the million parameters with random values, calculate whatever you're trying to optimize for each of those ten thousand sets, and look for the minimum. Then do it again in a neighborhood of that minimum. And again in a tighter neighborhood, and after a while you'll be pretty close. Unless the function is pathological.
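The scheme above can be sketched in a few lines of Python. This is just an illustration of the general idea (iterated random search with a shrinking neighborhood), not the method from the paper; the function names, the toy objective, and the parameter counts (five parameters instead of a million, to keep it quick) are all my own inventions.

```python
import random

def random_search(f, dim, radius=3.0, samples=10_000, rounds=5,
                  shrink=0.5, seed=0):
    """Iterated random search: draw many random parameter sets in a
    neighborhood, keep the best one found, then repeat in a tighter
    neighborhood around that best point."""
    rng = random.Random(seed)
    best_x = [0.0] * dim          # starting guess: the origin
    best_val = f(best_x)
    for _ in range(rounds):
        for _ in range(samples):
            # Perturb every coordinate of the current best point.
            x = [c + rng.uniform(-radius, radius) for c in best_x]
            val = f(x)
            if val < best_val:
                best_x, best_val = x, val
        radius *= shrink          # tighten the neighborhood each round
    return best_x, best_val

# Toy objective: squared distance from a hidden optimum (made up here).
target = [0.3, -1.2, 0.8, 2.0, -0.5]
def objective(x):
    return sum((a - b) ** 2 for a, b in zip(x, target))

x, val = random_search(objective, dim=5)
```

With a well-behaved objective like this one, a few rounds get you very close to the optimum; with a pathological function (lots of narrow, isolated minima), the shrinking neighborhood can lock onto the wrong basin early and never escape.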
OK, fine. The optimization is only as reliable as the assumptions that go into it, of course, and I'm not able to judge those.
They looked at Proto-Austronesian (the ancestor of the Polynesian languages, among many others).
Eldest Son asked "Why didn't they try it on the Romance languages and see if they could reconstruct Latin?" Good question. That sort of exercise has been done by hand, but it would be a nice calibration tool for the automated system. Assuming the algorithm could work smoothly with only a handful of dialects in each language...