<p>What Ben said isn’t what I meant, but it is also true. By flexible, I mean that after you finish the sophomore-year courses, the specific requirements literally read: 63 units of CS numbered 114 and above, 36 units in Ma, ACM, or CS, and 18 units in Ma or E&AS. Most majors have at least one or two specific courses you’re required to take during your junior and senior years, or they try to make sure you take at least one course from each of several different fields (in CS this might look something like forcing you to take a languages course, a course on networking or distributed computing, a course cross-listed with EE, and a course cross-listed with Ma). </p>
<p>Hehe… I always think your words sound better than mine, so don’t worry about it. Plus I’ve answered plenty of questions aimed at you in the past.</p>
<p>“Yes. Grad school rankings can be a tolerable proxy for the quality of the undergrad experience, but sometimes this doesn’t work at all – consider, for example, the UC’s.”</p>
<p>Why not use the undergraduate rankings in the first place?</p>
<p>Also, off the top of my head I know that for undergraduate engineering Berkeley is ranked 3rd and UCSD is ranked 12th by USNWR’s ranking system. For graduate engineering, they are ranked 3rd and 11th, respectively. The disparity doesn’t seem large at all. Is general science wildly different?</p>
<p>Well, sometimes a really excellent undergraduate department doesn’t show up at all on the graduate rankings (Harvey Mudd surely offers a good physics education)… the same kind of effect, though less extreme, is observed for other schools too.</p>
<p>It seems odd to simultaneously use a system to promote Caltech, and then discredit it because the UCs have a high rank. Harvey Mudd doesn’t offer doctorate degrees and is therefore classified in a different category by USNWR for undergrad schools. Using undergraduate rankings would result in the same problem.</p>
<p>i’m not trying to push any point as far as i can tell… i.e. it is quite possible that i am casting doubt on a system in which caltech does well.</p>
<p>I’ve spent many years worrying about this problem so perhaps I can help. We ask rankings to do too many things so it’s not surprising we disagree so often.</p>
<p>But the best way to think of it is to ask “Which variables matter (or should matter)?” and “How should they be weighted?” For the purposes of discussing university quality in research output, the three most important are total amount of quality research, distribution of quality by department, and average quality per researcher (per capita, if you like). Caltech does really well on the first, pretty well on the second, and spectacularly on the third. You decide on the relative weights. Look at the Shanghai Jiao Tong rankings and see how Caltech’s position bounces around as they flip the weights on per-capita citations, Nobels, etc.</p>
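<p>To make the weight-flipping point concrete, here is a minimal sketch (Python, with entirely made-up school names and scores, used only for illustration) of how the same inputs reorder as the weight shifts from total output to per-capita quality:</p>
<pre><code># Hypothetical illustration (made-up scores): shifting the weight between
# total research output and per-capita quality reorders the same schools.
schools = {
    # (total quality research, per-capita quality) on an arbitrary 0-100 scale
    "Big State U": (95, 60),
    "Caltech":     (80, 98),
    "Mid U":       (70, 70),
}

def rank(weight_total: float) -> list[str]:
    """Rank schools by a weighted sum; the per-capita weight is 1 - weight_total."""
    w_pc = 1.0 - weight_total
    score = lambda s: weight_total * schools[s][0] + w_pc * schools[s][1]
    return sorted(schools, key=score, reverse=True)

print(rank(0.8))  # emphasize total output    -> Big State U comes out first
print(rank(0.2))  # emphasize per-capita term -> Caltech comes out first
</code></pre>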
<p>Throwing undergrad into the mix, you have quality of undergrads, faculty-student ratio, quality of teaching and training, and spillovers from research/reputation quality to undergrads. Quality of undergrads is itself complicated with regard to the number of top-quality students vs. the average quality of all students. Looking at the best favors large state schools. Looking at the average brings Harvey Mudd much nearer the top.</p>
<p>The latter two categories are the hardest to judge. Is teaching quality determined by how much the students like their classes, or by how well the teachers make sure the students learn even if every one of their students hates their guts? The spillovers-from-research question gets at the Big Research vs. LAC problem: the smaller a weight you put on spillovers, the better LACs look, and vice versa.</p>
<p>Caltech is in a unique situation because it IS a mega-research palace, but it is also smaller than many/most LACs. That makes any comparisons with standard universities difficult.</p>
<p>Then there is USNWR, which survives because of heavy marketing. It has good variables and ridiculous variables (yield, selectivity measured as percentage accepted, and alumni giving are pointless to me, plus selectivity needs to be adjusted for the quality of the applicant pool), and totally arbitrary weights which vary from year to year.</p>
<p>The true puzzle is why other big-name publications, such as the Economist, don’t come up with a high-profile but consistent ranking that at least keeps the weights the same from year to year, and then lets people decide which variables they favor.</p>
<p>hrm, Princeton Review has Caltech with THE worst professors…
can anyone explain? (it says that most of the score is based on student surveys)</p>