<p>Not to belittle UMich or anything, but why is their grad program ranked so highly on USNews and NRC for aerospace engineering (#3 in the former and #2 in the latter)?
Is the program really that good?</p>
<p>I was looking through their faculty profiles and couldn’t really find any faculty who seem to be producing exceptional research output. The department also has fewer NAE members than some schools ranked lower.</p>
<p>I am also wondering why a school like Princeton is only ranked #9. Ivy League name aside, the MAE department at Princeton has had, and still has, quite a few world-class faculty (current faculty who come to mind are Smits, Stone, Law, and Miles) who have produced some ridiculous research output.</p>
<p>I also find MIT at #1 for aero strange. I feel that ranking is a result of their excellence in the other engineering departments and a derivative of their reputation as the overall best school for engineering.</p>
<p>1) Why are you more qualified to judge the quality of research output than those surveyed by the US News?</p>
<p>2) The Ivy League brand doesn’t go very far in engineering circles. Princeton is a good aerospace program, but that has nothing to do with being in the Ivy League (it is one of only a couple of exceptions in that group of schools, actually).</p>
<p>3) Just because one program has several faculty with whom you are familiar in your area of research doesn’t mean it is necessarily better than another program whose prolific faculty are in a different area.</p>
<p>4) The number of NAE members says a lot about how well-known the top scholars at a school are, but in the end, you have to be nominated to get in, so it is still at least somewhat of a popularity contest. It also generally only goes to older faculty. It therefore doesn’t say a whole lot about the actual research output of a program, other than that a particular group of individuals has a high research output and has been internationally recognized for it.</p>
<p>5) MIT always seemed controversial on that list to me, too, but they do a ton of high-powered research in areas outside my area of expertise, so I can’t really judge. I don’t know jack about controls or sensing systems, which are two areas in which they are allegedly really awesome. This sort of illustrates my point number 4.</p>
<p>6) The NRC rankings are designed to rank the desirability of programs to faculty members, not students. They have a group of surveyed faculty rank a sample of 20 programs in their field and then use statistical analysis to determine the relative weights of a bunch of measurable factors (grant money, total citations, etc.) that makes the model most closely match the survey. Then they go out and survey a ton of faculty members about what factors are most important to them and use that to rank all the programs based on those weights. That is probably a more valid approach than scrolling through their faculty pages.</p>
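<p>As a very loose sketch of that fitting step (this is purely illustrative with made-up numbers and a made-up set of factors, not the NRC’s actual statistical model), the idea is: solve for the factor weights that make a weighted sum of measurable quantities best match the survey-derived scores for the sampled programs, then apply those weights to every program:</p>

```python
import numpy as np

# Hypothetical, invented data: 20 surveyed programs, 3 measurable
# factors each (say, grant money, citations per faculty, awards),
# all normalized to [0, 1]. None of this is real NRC data.
rng = np.random.default_rng(0)
factors = rng.random((20, 3))
# Pretend the survey scores follow hidden weights (0.5, 0.3, 0.2)
# plus a little noise -- the weights the fit should recover.
survey_score = factors @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.01, 20)

# Step 1: least-squares fit of weights so the weighted factor sum
# best matches the survey-based scores of the sampled programs.
weights, *_ = np.linalg.lstsq(factors, survey_score, rcond=None)

# Step 2: apply those weights to programs that were never directly
# surveyed (here, 5 invented ones) to score and rank them all.
unsurveyed = rng.random((5, 3))
scores = unsurveyed @ weights
ranking = np.argsort(-scores)  # indices, highest score first
```

<p>The point of the two-step design is that only a sample of programs needs to be ranked by humans; everything else is scored from measurable data.</p>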
<p>7) The US News rankings take into account a lot more than just research output (though that is part of it). They also weight things like industry reputation, and Michigan excels there.</p>
<p>Actually, the USNews graduate program rankings are heavily weighted toward the two surveys that they use: the survey of department chairs and graduate program directors (25%) and the survey of recruiters (15%). Larger schools often have more recognition in these kinds of surveys. The NRC data give a range of rankings for various categories, and in this case, the aerospace programs of Michigan and MIT are more or less the same.</p>
<p>Most universities know that these rankings are flawed and that there are really not great differences between highly selective programs. Particularly at the graduate level, the strength of a department is much more nuanced. You can have a world-leading researcher at a school that is not ranked highly.</p>
<p>I’ve always found it interesting that the US News rankings for undergraduate programs are based solely on peer evaluation, while the graduate rankings are based on what they claim is an objective set of indicators. In reality, it would be much easier, in my opinion, to rank undergraduate programs that way since they are much more standardized. With graduate programs, each program is strong in its own area or set of areas and usually weak in others. MIT does almost exactly nothing in my area of research, for example, despite its high ranking. It is a fool’s errand to try to rank graduate programs on some kind of absolute scale.</p>
<p>I have spent a lot of time looking at the undergraduate rankings data that USNews releases for my own university, and they actually weight peer evaluation about as heavily as the engineering rankings I mentioned above. The largest contribution is peer evaluation by presidents and provosts (25%), and about 10% or so comes from randomly selected high school counselors. The rest of the ranking is based on retention rates, admission statistics, and faculty resources (class sizes and such).</p>
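<p>To make the arithmetic concrete (using only the rough fractions mentioned above; the remaining split and all the component scores are invented, not USNews’s actual formula), a composite ranking score is just a weighted sum of normalized component scores:</p>

```python
# Toy weighted-composite calculation. The 25% and 10% weights are
# the rough figures mentioned above; the split of the remaining 65%
# and the per-component scores are made up for illustration.
weights = {
    "peer_assessment": 0.25,    # presidents and provosts
    "counselor_rating": 0.10,   # high school counselors
    "retention": 0.22,          # invented split of the remainder
    "admissions": 0.21,
    "faculty_resources": 0.22,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights sum to 100%

school = {  # normalized 0-100 component scores (invented)
    "peer_assessment": 88,
    "counselor_rating": 90,
    "retention": 95,
    "admissions": 80,
    "faculty_resources": 70,
}

composite = sum(weights[k] * school[k] for k in weights)
```

<p>With 35% of that sum coming from opinion surveys, a school that does well on name recognition can absorb a fairly large deficit in the measurable components.</p>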
<p>For the sciences, however, the USNews rankings for graduate programs are SOLELY based on the peer survey. So it seems that the undergraduate and graduate engineering rankings are based on somewhat more objective criteria, even though 35% is still a large fraction for peer rankings. We just got the physics rankings survey a couple of months ago, and we had to give a score of 1-5 to each university with a graduate program. It is a ridiculously impossible task that usually results in random values being assigned to places where you don’t know anyone professionally. Imagine the same kind of survey being done by presidents and provosts, as well as high school counselors, for the undergraduate rankings.</p>
<p>If you look at the schools from which Presidents and Provosts graduated, you will see that most are from the usual suspects in the top 20-40 of the rankings. A perfectly self-perpetuating system. Having spent a lot of time staring at these numbers, I could go on forever on the subject but it is really not worth it. Suffice it to say that rankings are a good business for those who put them out but they are all flawed.</p>
<p>@bone3ad - You are correct about the undergraduate engineering rankings. I was referring to the overall undergraduate rankings and the graduate Engineering rankings. I really haven’t looked at the undergraduate Engineering rankings at all, basically because they are only peer assessment and virtually useless.</p>
<p>A while ago I came up with a list of “wouldn’t it be cool if…” ideas. One of them was tiny deep space cluster/cubesat probes (as part of an idea to piggyback off larger payload launches). Michigan’s aerospace program gets my peer review props for coming to a similar conclusion :-)</p>