However, it's a limited achievement. I was able to get passwordless SSH working from my Mac, off campus, connecting through the VPN to a Windows machine. When I try to do the same thing from a Windows machine in the same room, I fail. Go figure: remove three major layers of complexity, get a worse result.
Ironically, I think the issue arises from connecting from a Windows machine. SSH apparently works by having the remote machine check a file of authorized public keys against the private key the connecting client offers; if no matching key is found, it falls back to asking for a password. I think the problem lies in where the client is looking for its key. On a *nix machine (like my Mac), it looks in the ~/.ssh directory by default. The closest analogue to that on a Windows machine (where I've installed COPSSH, which bundles OpenSSH and Cygwin) is buried in "Program Files/ICW/home/user/.ssh." It seems that when I try to connect from the Windows machine, even using the BASH terminal emulator, it's not looking in that directory for the key. I've also tried adding that directory to the %PATH% variable (which lets me launch SSH from a CMD prompt), but %PATH% only tells Windows where to find executables, not where SSH should find keys, so I still get asked for a password.
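One thing that might be worth trying: you can tell SSH explicitly which key file to use, instead of relying on it guessing the right home directory. This is just a sketch; the host name, username, and exact key path below are placeholders, and the /cygdrive path assumes the Cygwin-style view of the COPSSH install directory.

```
# Client-side ~/.ssh/config entry (under COPSSH/Cygwin this file would
# live somewhere like C:\Program Files\ICW\home\user\.ssh\config)
Host labbox
    HostName labbox.example.edu    # hypothetical remote machine
    User myuser                    # hypothetical username
    IdentityFile "/cygdrive/c/Program Files/ICW/home/user/.ssh/id_rsa"
```

Or as a one-off from the BASH prompt:

```
ssh -i "/cygdrive/c/Program Files/ICW/home/user/.ssh/id_rsa" myuser@labbox.example.edu
```

If that works, it would confirm the key itself is fine and the problem is purely the lookup path; Cygwin's ssh derives ~ from the HOME environment variable, so another angle is making sure HOME points at the ICW home directory when launching from CMD.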
I'm quite sure there's a way to make it work; I just haven't found it yet. The consultants in the lab office have all become quite amused at my exasperation... I've tried to recruit a few of the savvier folks into helping me, but so far I haven't had an enthusiastic response. I guess supercomputing isn't attractive to everyone, eh? How do I reach these kiiids?
On the other hand, a few of the people I've talked with about running problems on it have been enthusiastic. One guy I know has a friend who's working on a protein folding simulation in Matlab (which netWorkSpace also supports), which I think would be an ideal sort of problem to run on such a setup. His description of it sounded incredibly computationally intensive, and the problem has the attributes that make it viable for distributed computing: many independent sub-problems arising from a small amount of input data.
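That "many independent sub-problems" shape is the key property. Here's a minimal sketch of the pattern in Python using the standard library's multiprocessing module; it fans out over cores on one machine rather than over a network, but the structure (a small input set mapped through an expensive, independent function) is the same one a tool like netWorkSpace would distribute across machines. The scoring function is a made-up stand-in, not anything from the real simulation.

```python
from multiprocessing import Pool

def score_conformation(seed):
    """Stand-in for one expensive, independent sub-problem
    (e.g. evaluating a single candidate protein conformation)."""
    total = 0
    for i in range(1, 1000):
        total += (seed * i) % 7
    return total

if __name__ == "__main__":
    seeds = range(20)  # the "small amount of data": one seed per sub-problem
    with Pool() as pool:
        # Each call is independent, so they can run anywhere, in any order.
        results = pool.map(score_conformation, seeds)
    print(len(results))
```

Because no sub-problem depends on another's result, the only communication needed is shipping a seed out and a number back, which is exactly what makes this kind of workload cheap to distribute.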
I've also found some lectures on the subject from Google, which is exciting. http://code.google.com/edu/submissions/mapreduce-minilecture/listing.html
Apparently they teach this series of classes to their summer interns. I'm gratified that my post-graduate studies have enabled me to follow along and understand, at least through the parts I've seen so far. I learned, among other more technical things, that it's distributed computing I'm trying to achieve, not parallel computing. The latter refers to tasks split between processing cores in a single machine; the former, to tasks split between multiple machines. Same spirit; different challenges.
Added: Dad mentioned that the "before" picture "actually doesn't look that bad..." It's just the light. It was bad. But of course, the most significant parts are the ones you can't see; all the internals that didn't work now (mostly) do.