Research university ranking based only on university quality

^It stands to reason that peers within a discipline have a better take on what’s happening in that discipline than, say, provosts have on other universities as a whole. Department heads, graduate advisers, and leading faculty are often keenly aware of who is gaining or losing ground in their disciplines. I do not believe that any ranking is exact, but I publish the departmental rankings (and compare public and private university departments) because parents and prospective students should see the data in the most usable format, i.e., comparative.

I also think that both overall university academic reputation and peer assessments of disciplines are important to honors students (whom I write about, and for), because even if a highly rated discipline is in a large public research university, honors students can avoid many of the negatives of such universities by taking advantage of smaller honors class sizes and research/mentoring opportunities.

If one is looking for academic reputation/quality, there is nothing better than the NRC ranking of doctoral programs.

The NRC rankings are difficult to sort out, and they are now a decade old.

Brown is in a 5-way tie for 8th place - so it is ranked 8-12, not 8. The move looks dramatic because the scores for the schools in that range are so close to each other. It would be hard to argue that it is statistically significant.

The peer assessment is best thought of as an attempt to measure a school’s “reputation within academia,” which may or may not correlate with its “reputation within industry,” its “reputation within government,” or its reputation within any other sector that a student may choose to enter after finishing their education. This means that one should consider its potential relevance as well as its potential accuracy.

It is really hard to argue that “reputation within academia” is the same as “academic quality” at the undergraduate level, as academia appears to use different criteria for determining the “reputation” of LACs and research universities. If undergraduate academic quality equals the ranking of doctoral programs, then the “academic quality” of Williams equals zero.

The more interesting experiment would be to take the peer assessment out of USNWR and see what happens…

@Mastodon, it really depends on the field and industry. I know that for CS, the academic reputation of a school correlates very well with its reputation in industry. In other fields, that’s not so true.

@Mastodon, I can run the numbers with academic rep taken out of the mix, but given the US News emphasis on financial resources, selectivity, grad rates, alumni donations, and metrics related to financial resources (class size, ratios of students to faculty), the rankings would tilt further toward private elites, and not-so-elites. Many rankings acknowledge the quality of public university faculties, and they are not based only on peer assessments. One should keep in mind that I am writing for prospective honors students, most of whom could do well in any university. Again, for them, the admitted negatives of large public research universities, especially large classes, can be reduced or eliminated.
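
For anyone curious what “running the numbers” with academic rep removed would actually involve, here is a rough sketch in Python. The component names, weights, and scores below are invented for illustration only (they are not the actual US News weights or data); the point is simply that you drop the reputation component and re-normalize the remaining weights before recomputing the composite.

```python
# Toy illustration: recompute a composite score after dropping one component.
# NOTE: weights, component names, and scores are hypothetical, not US News data.

weights = {
    "academic_reputation": 0.20,
    "graduation_rates": 0.30,
    "financial_resources": 0.10,
    "class_size_faculty_ratio": 0.20,
    "selectivity": 0.10,
    "alumni_giving": 0.10,
}

schools = {
    "School A (private)": {"academic_reputation": 90, "graduation_rates": 97,
                           "financial_resources": 95, "class_size_faculty_ratio": 92,
                           "selectivity": 96, "alumni_giving": 90},
    "School B (public)":  {"academic_reputation": 88, "graduation_rates": 85,
                           "financial_resources": 70, "class_size_faculty_ratio": 65,
                           "selectivity": 80, "alumni_giving": 55},
}

def composite(scores, weights, drop=None):
    """Weighted average of component scores, optionally dropping one component
    and re-normalizing the remaining weights so they still sum to 1."""
    kept = {k: w for k, w in weights.items() if k != drop}
    total = sum(kept.values())
    return sum(scores[k] * w / total for k, w in kept.items())

for name, scores in schools.items():
    with_pa = composite(scores, weights)
    without_pa = composite(scores, weights, drop="academic_reputation")
    print(f"{name}: with reputation {with_pa:.1f}, without {without_pa:.1f}")
```

As you would expect, the school that is strongest on the finance-related components gains ground once reputation is removed, which is the “tilt toward private elites” I mentioned above.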

@Mastodon, sorry, but I failed to respond to the comment about the difference between the U of Chicago and Brown being statistically insignificant. In the US News rankings, Chicago receives 95 points out of a possible 100, while Brown receives 84.


[QUOTE=""]
…strong, and U.S. News assigns a significant weight to academic reputation; but its rankings of academic departments, mostly on the graduate level, are not a part of the widely-read “Best Colleges” report each year. The departmental rankings, while still subjective, are probably a better measure.
See more at: http://publicuniversityhonors.com/rankings-academic-departments-private-elites-vs-publics/
[/QUOTE]

What is subjective is the definition of better measure.

The PA (peer assessment), as it stands, is the opportunity for US News to level the playing field with intangibles. Considering the abysmal process and nebulous instructions of the PA survey, respondents make no distinction between a school being “distinguished” at the undergraduate level and at the graduate level.

What is done here is simply overloading on the most criticism-worthy part of the PA. It is not a better or worse measure; it simply is not germane to a ranking that should interest potential undergraduates. It is a misleading hodgepodge of mostly irrelevant metrics that suffers from the same cronyism and lack of knowledge among the respondents.

I believe that’s been done and reported before on College Confidential. Yes, I think the result would be as Uniwatcher describes in #125 (“the rankings would tilt further toward private elites”). You can observe a similar effect in a ranking based on SAT scores alone:
http://www.stateuniversity.com/rank/sat_75pctl_rank.html

The mere fact that a ranking without the peer assessment would “tilt” more toward private elites doesn’t necessarily make it wrong. Removing the peer assessment would be wrong if the USNWR peer assessment truly captures important insights about undergraduate institutional quality that are missed by all the other metrics combined. If that is the case, then it ought to be possible to isolate at least one or two objective metrics that corroborate the “correcting” tilt of the peer assessments.

What are those metrics? What objective measurements clearly show the same “correcting” effect (the same counter-tilt)? Averaging the USNWR department rankings would not be a very good example (since those rankings are based entirely on subjective peer assessments, and usually are focused on graduate programs.)
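
One rough way to run that test is sketched below, with entirely hypothetical numbers and metric names: compute, for each school, the gap between its standardized peer assessment and its standardized score on the other metrics combined, then check whether a candidate objective metric correlates with that gap.

```python
# Sketch: does any objective metric corroborate the peer-assessment "tilt"?
# All data below is hypothetical; the candidate metric is illustrative only.
from statistics import correlation  # requires Python 3.10+

# For each school: peer assessment, composite of the other USNWR metrics,
# and one candidate objective metric (e.g., some per-capita research output).
peer_assessment  = [4.9, 4.7, 4.5, 4.1, 3.8, 3.5, 3.2]
other_composite  = [96,  90,  85,  88,  80,  74,  70]
candidate_metric = [12.1, 10.4, 9.8, 7.2, 6.5, 5.1, 4.0]

def zscores(xs):
    """Standardize a series so the 'gap' below is on a comparable scale."""
    mean = sum(xs) / len(xs)
    sd = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - mean) / sd for x in xs]

# Positive gap = the peer assessment "boosts" a school relative to the other metrics.
gap = [pa - oc for pa, oc in zip(zscores(peer_assessment), zscores(other_composite))]

# If the peer assessment captures something real that the other metrics miss,
# some objective measurement should line up with this gap.
print(f"correlation(gap, candidate metric) = {correlation(gap, candidate_metric):.2f}")
```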

@purpletitan, note that I said “may or may not” correlate.

At its most simple level, academia tends to value the ability to create and disseminate knowledge most highly, while industry tends to value the ability to apply knowledge most highly. Somewhat different goals, but it makes for a good symbiotic relationship.

Computer Science (in contrast to some fields of engineering, for example) makes for an interesting case.

On the software side of the world of computers, the medium you are working in is “virtual” rather than “real”, so the difference between knowledge built upon studying theory vs. the knowledge built upon “real-world” experience is less pronounced.

I would caution that even on the software side of the world of computers there are many application segments. Since an industry segment tends to be more granular than an academic department, a university that has a single strong program that happens to align with a company’s application segment can be regarded as highly as (or even more highly than) another university that is ranked higher by academia, simply due to the effects of averaging. The potential for divergence becomes even greater as the granularity of the rating system decreases.

I would also caution that there are many types of jobs within a segment. Different jobs require different mixes of breadth of knowledge vs. depth of knowledge as well as different mixes of analytical skills vs. people skills. Colleges tend to have reputations in this dimension as well.

Personally, I feel that USNews rankings are overvalued (and many of the criteria they use are kind of bunk), not to mention gameable, so I don’t believe that taking out the reputation ranking improves it.

^^^ I believe that the departmental rankings, while largely anecdotal, are less subjective than the academic reputation rankings. Faculty within disciplines have specific knowledge of departments at other universities, gained by keeping up with publications, learning about moves or retirements of star faculty, and interviewing new PhDs for tenure-track jobs. Provosts and deans are much more removed from the academic life of universities other than their own, although they may “keep up” through professional networks and some publications.

Yet the academic reputation metric in US News correlates much more strongly with the overall US News rankings than the departmental rankings do, and doubtless has more influence with prospective students. Please note that in our “alternative rankings” we use academic reputation, grad rates, and class sizes only to illustrate that filtering out the finance metrics used by US News yields a very different ranking. If we were to come up with a true alternative ranking of our own based on the best metrics we could find, we would use departmental rankings instead of academic reputation.

Uniwatcher: “I believe that the departmental rankings, while largely anecdotal, are less subjective than the academic reputation rankings” [emphasis added].

It just struck me as odd to see an admission of subjectivity in claiming something is non-subjective.

All of this analysis is completely and utterly flawed (including US News and all the other commercial rankings). Simply put, none of these analyses consider errors in the weights and inputs. Should quality X be given 10% weight, 15% weight, or 24.78654% weight? Who knows?!

NRC tried to do an honest ranking of PhD programs, taking into account ranges of weights and errors… and what they ended up with frustrated the public who wanted a simple ranking. The NRC rankings gave a range of rankings. For example, a school might have program X ranked in the range 4-28. To the public, that seems absurd that a program might be ranked #4 or #28… but the reality is, it really is hard to tell the difference between similarly ranked programs (well, if you are being honest).
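
Here is a toy simulation of why a range like 4-28 is not absurd (the program names, criteria, and scores below are invented, not NRC data): re-rank a set of closely bunched programs under many plausible random weightings, and each program’s rank wanders over a wide interval.

```python
# Toy illustration of why NRC reported rank *ranges*: re-rank programs under
# many randomly perturbed weightings and record each program's best/worst rank.
# All scores and weights below are invented, not NRC data.
import random

random.seed(0)

# Ten hypothetical programs with closely bunched scores on three criteria.
programs = {f"Program {chr(65 + i)}": [random.uniform(75, 85) for _ in range(3)]
            for i in range(10)}

rank_ranges = {name: [len(programs), 1] for name in programs}  # [best, worst]

for _ in range(1000):
    # Draw a random weighting that sums to 1 (the "true" weights are unknown).
    raw = [random.random() for _ in range(3)]
    w = [r / sum(raw) for r in raw]
    ordering = sorted(programs,
                      key=lambda p: -sum(wi * s for wi, s in zip(w, programs[p])))
    for rank, name in enumerate(ordering, start=1):
        best, worst = rank_ranges[name]
        rank_ranges[name] = [min(best, rank), max(worst, rank)]

for name, (best, worst) in sorted(rank_ranges.items(), key=lambda kv: kv[1][0]):
    print(f"{name}: ranked {best}-{worst} depending on the weighting")
```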

So carry on… but just know that you are fooling yourself if you think you can measure academic programs in an objective, precise way. It can’t be done. The best you can hope for is very approximate groupings of schools of roughly similar quality. The rest is just measuring how many angels can dance on the head of a pin.

^^How can anyone claim that the US News measures of either reputation or departments are not subjective? Just stating a fact… about what is subjective.

^Very approximate groupings are in fact the best we can do. Should we not do “the best we can do”? I laud the NRC for at least avoiding ordinal rankings. But in the end, they did not reach a more definitive assessment, although they did add confirmation that HYPS/MIT/UCB and a few other schools widely recognized for being excellent are (probably…in fact) excellent. As for the rest, the NRC rankings don’t add much clarity, and they are far too out of date now.

The most recent NRC rankings came out in 2010. That is not “far too out of date.” The quality of academic programs does not change appreciably on a year-to-year basis… more like a decade-to-decade basis.

Well, no—it’s that you offered an explicitly subjective judgment of your preferred rankings as less subjective. I would have thought that a claim of non- (or at least minimal) subjectivity would be based in something more objective.

While it is true, as I noted previously, that some universities and most of their academic departments will hold up over time and across various “measures,” it is also true that a lot of the NRC rankings are based on snapshots of research, and that research is itself only a partial sample. Please see http://leiterreports.typepad.com/blog/2010/09/a-quick-guide-to-the-new-national-research-council-rankings.html and note that the author refers to the USC philosophy department making rapid improvement over a decade. Well, the NRC data is a decade old, even if the report itself wasn’t issued until four years after the data was crunched.

The NRC rankings came out in 2010, but they collected the data in 2005-2006.

It literally took a team of people about 5 years and $4 million to create the ranking, yet they were still not completely satisfied with the result.

I have attached a summary article so you can draw your own conclusions, but I am of the opinion that a single, definitive ranking is neither possible nor interesting - even at the department level, never mind at the university level.

I do find the NRC database useful though (as long as it is used with care, due to its age) and I like the concept of the user interface program that Phds.org provides. The interface gives the user the ability to change the weighting scheme to match their own priorities as well as probe the database and perform sensitivity analyses. It would be nice to have such an interface for the CDC database.

Here are some examples, revealed by the high level of granularity of the NRC database, of good, specialized PhD programs that are not widely known outside of Massachusetts.

Clark is a really tiny university (better thought of as a LAC with a small number of small PhD programs), but it has a world-class geography program that few know about.

http://phds.org/rankings/geography

Tufts is a small university that has a world-class nutrition program that few know about, because nutrition is traditionally aggregated with the agricultural sciences rather than with the health/life sciences.

http://phds.org/rankings/nutrition

https://www.insidehighered.com/news/2010/09/29/rankings

http://phds.org/