They are used in face recognition. The idea is that eigenfaces represent patterns you can build an image of a face out of: 0.15 FaceType_1 + 0.36 FaceType_4 + 0.06 FaceType_17 + ... gives a pretty good approximation of the original face.
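A rough sketch in NumPy of what that weighted-sum idea looks like. Toy sizes and random data, obviously — the names and numbers here are made up for illustration, not the real FaceType patterns:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_bases = 64, 3                 # toy sizes, not 100x100 images

# A made-up "mean face" and a few orthonormal "basis face" patterns
# (columns), here just random vectors orthonormalized via QR.
mean_face = rng.standard_normal(n_pixels)
basis, _ = np.linalg.qr(rng.standard_normal((n_pixels, n_bases)))

face = rng.standard_normal(n_pixels)

# Weights like the 0.15, 0.36, 0.06 above: project the face onto each pattern.
weights = basis.T @ (face - mean_face)

# Reconstruction: mean face plus the weighted sum of basis faces.
approx = mean_face + basis @ weights
```

Since `approx` is an orthogonal projection, it can never be a worse fit than the mean face alone, and adding more basis faces only improves it.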
Some patterns have asymmetries of one kind or another. Look at the lower right face below.
You need that sort of thing for real faces. Abe Lincoln was kicked by a horse and his face was never quite symmetric afterwards.
The "Eigen" in eigenface comes from a German word meaning self, own, or characteristic. Transformations that stretch and twist will generally turn a vector pointing in one direction into one of a different size pointing in a different direction. Some vectors are left pointing the same way, though, and they're obviously special. For linear transformations these vectors are called eigenvectors. For each such untwisted vector, the ratio by which its size changes is the eigenvalue.
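You can see this directly with a toy 2x2 transformation in NumPy (just an illustration, any matrix would do):

```python
import numpy as np

# A 2x2 transformation that stretches and shears.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

for lam, v in zip(eigenvalues, eigenvectors.T):
    # A leaves v pointing the same way, just rescaled by its eigenvalue.
    assert np.allclose(A @ v, lam * v)
```

Most vectors get twisted by `A`; only the eigenvector directions survive with just a rescaling, and the rescaling factors are the eigenvalues.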
If you are using tiny images (100x100 pixels), you have "only" 10,000 pixels to compare. When you are looking for patterns among these, the number of combinations is pretty large: 10,000 x 10,000 for starters. Manipulating vectors 10,000 elements long and a 10,000 x 10,000 matrix (100,000,000 entries) isn't entirely trivial. The two teams mentioned in the article came up with some clever ways to make the problem tractable.
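A sketch of the kind of trick involved, as I understand it (I believe this is the Turk and Pentland approach): with M training images you can solve a tiny M x M eigenproblem instead of the 10,000 x 10,000 one, because eigenvectors of the small matrix map straight to eigenvectors of the big one. Random stand-in data here, not real faces:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_images = 10_000, 20          # 100x100 images, 20 of them
A = rng.standard_normal((n_pixels, n_images))
A -= A.mean(axis=1, keepdims=True)       # mean-center each pixel across images

# Naive route: eigenvectors of the 10,000 x 10,000 covariance A @ A.T.
# Trick: if (A.T @ A) v = lam * v, then (A @ A.T)(A @ v) = lam * (A @ v),
# so the tiny 20 x 20 problem hands you the big eigenvectors for free.
small_vals, small_vecs = np.linalg.eigh(A.T @ A)   # 20 x 20 eigenproblem
eigenfaces = A @ small_vecs                        # lift back to pixel space
eigenfaces /= np.linalg.norm(eigenfaces, axis=0)   # normalize columns

# Check the top eigenface without ever forming the 100,000,000-entry matrix.
u, lam = eigenfaces[:, -1], small_vals[-1]
assert np.allclose(A @ (A.T @ u), lam * u)
```

The payoff: a 20 x 20 eigenproblem instead of a 10,000 x 10,000 one, and you never materialize the 100,000,000-entry covariance matrix at all.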
And it turns out that many faces can be described with only a few "basis faces". Wikipedia links to the FaceMachine Java applet, which generates faces with only a few (I think 6) bases, but unfortunately the Java is old enough that my browser won't let it run. YMMV.
My office mate took linear algebra over a decade after I did; this was one of the things developed in the meantime, so his class learned it and mine didn't.