Research university rankings based only on university quality

The flaws of USN&WR’s peer assessment system are well discussed, but if you have access to the paywalled parts of the Chronicle of Higher Education’s archives, a simple search will turn up lots and lots more about them, including statisticians demonstrating how useless the assessments are, college administrators admitting to simply making up numbers for colleges they know little or nothing about, and so on.

That’s what particularly bothers me about this refigured ranking system—it takes what is arguably the most problematic part of USN&WR’s formula, and makes it bigger. Not a recipe for meaningful success, I’m afraid.

xiggi, I appreciate your longstanding interest in the rankings and your perspectives on peer assessments of reputation. Having read many articles about the PAs over the last several years, I am sure that the peer assessments are frequently subjective. Again, my point is this: if you carve out the magnifying effects of the “wealth” metrics used by US News, you can see how those metrics distort their rankings.

You are correct that after removing the wealth metrics, those that remain are magnified; however, in the alternative rankings, retention rates, grad rates, and the percentage of classes with fewer than 20 students are magnified as well as reputation, and carry more combined weight than reputation. I would argue that if some leading public research universities get a boost from the academic reputation weight, many are “penalized” to an equivalent extent by the increased weight of grad rates and smaller class sizes.

Uniwatcher, I understand the theory behind removing the impact of wealth. In the same vein, nothing precludes anyone from maximizing or minimizing the separate categories (and this despite their hidden links) to arrive at a better “ranking.” In fact, the older versions of the USNews allowed such reordering with just a bit of work.

The main problem I have, as you might have guessed, is that placing a higher weight on the PA exacerbates the weakest link of Bob Morse’s work. US News has realized over time that the survey (which asks school officials to rate between 200 and 300 schools on a simplistic “distinguished” scale) is a blunt instrument. The ranking organization also realizes that anyone who invested even 30 seconds “thinking” about how each ONE of its peers ranks would need several hours to complete the survey with a modicum of integrity. And this does not address the fact that the respondents might, at best, know about a dozen schools at the undergraduate level. Add the fact that response rates are in the 40-60 percent range, and the final product is far different from what it is purported to be!

Over the years, we have had numerous discussions. Here’s just one of them: http://talk.collegeconfidential.com/college-search-selection/1551708-us-news-rankings-what-would-they-look-like-without-peer-assessment-score-p1.html
Probably missing are the links to the type of survey filled out by the “bosses” at Clemson, Wisconsin, or the University of Florida that showed the kind of orchestrated manipulation that … has paid dividends for those schools. But such stories have mostly triggered denials or yawns. In fact, not too many people care about the integrity of that survey. And that is why the final product is what it is!

However, back to the impact of removing wealth! Is it really serving the aspiring undergraduate to suggest that a ranking should pay closer attention to the reputation of a school than to the potential “amenities” provided by wealth? Are the resources available to a student not directly important to his or her time at a school? Is the research reputation of a faculty the students might never have any contact with more important than the size of a library or the size of … lecture halls? You did use the under-20 classes as a criterion, but how about ascertaining who also teaches those lecture-cum-section systems? The reputed faculty, or an army of GSIs or TAs with qualifications ranging from the superb to the abysmal, courtesy of FOB recruiting and book balancing?

In the end, there are a number of rankings that present divergent methodologies, ranging from the grossly UG-irrelevant ARWU-type graduate school metrics to the socially minded Mother Teresa rankings that herald an academic wasteland such as UT El Paso as a peer to Harvard! It is a free-for-all market, and one can decide what matters to him or her the most!

The main point is that none of the rankings that attempt to steal the thunder of US News have provided BETTER alternatives. And this for the simplest of reasons: in the end, almost everything reverts to the simplest of metrics: the wealth, the age, and the … selectivity of the school. And, unfortunately, none of the rankings have sufficient integrity to clearly measure the education and well-being of undergraduates. And this includes US News … via its reluctance to revamp its PA into a useful and honest tool.

xiggi, I agree with much of what you say. I do think US News is useful to parents and prospective students who want to know about class sizes, admissions stats, retention and grad rates, etc. It is likely that no other ranking can do a better job of obtaining and generating these data, because no one else has the clout of US News. In addition, I agree that the ARWU rankings are not very useful to undergrads.

If US News used only the (probably excellent) output data they receive, along with a better assessment of academic reputation, they would, IMO, find the sweet spot. Yes, wealth lies BEHIND some of the most useful metrics (class size, in particular), but counting it in its several forms (alumni giving, faculty resources, etc.) serves to magnify an input rather than strictly considering the outputs. Similarly, the selectivity input is behind the output of graduation rate, and to a lesser extent, of the freshman retention rate. Selectivity data are certainly important and should be listed by US News; but adding them as a metric again has a magnifying effect. We do not use them in our ratings of honors colleges and programs for that reason, although we do list them.

Here’s a study that identifies the main mover of the PA index:

http://dev.opb.msu.edu/institution/cuc/documents/SweitzerAIR2009Paper-ChangingReputation.pdf

On the other hand, a metric such as the expected graduation rate can be “downgraded” by a higher selectivity index. Famously, Harvey Mudd, with its combination of stratospherically high SAT scores and a tough-as-nails graduation policy, was listed dead last in that category. Meanwhile, a school such as Smith, with middling SAT scores and a high admission rate, benefited from a really high ranking in that precise category.

Graduation rates have their own demons in the form of the devil being in the details!

The only ranking that should matter for a given student is one based on the characteristics that are desirable for the student.

Of course, not all students know what they want, not all students know what they should want (a common mistake is for students to pay insufficient attention to their cost constraints), and not all characteristics are ones for which good information is easily available, particularly to a student who has not attended any college before.

In the case of “quality of education”, people commonly use class sizes, faculty reputation, post-graduation outcomes (employment, PhDs, professional degrees), and other proxy measures. But these measures may not say much about actual course content and rigor or about available course offerings and research opportunities (note that determining these may require the assistance of someone with knowledge of the subjects of interest, who may not necessarily be available to a high school student). Both the proxy measures and the actual content, rigor, offerings, and research opportunities can vary considerably between different departments within a school.

I wholeheartedly agree with xiggi that the wealth of a school should be considered. I have taught at two schools, one that is very wealthy and one that is relatively “poor.” The rich school is able to provide well-outfitted laboratories, faculty who have the money to hire undergrads as research assistants in liberal arts fields as well as STEM, extensive tutoring services for struggling students, terrific guest speakers, special programs, etc., that have an impact on undergraduate education. The poor school has a bit of that, but to a far, far lesser degree.

green678, all the things you mention are valuable to students and certainly have a relation to the wealth of a school. Again, we were focused on the US News ranking categories and factors. Those do not specifically include state-of-the-art labs, tutoring, guest speakers, etc., but the “faculty resources” category in US News counts for 20% of the ranking. That category gives credit for lower class sizes, low student-to-faculty ratios, and high faculty salaries. The separate “financial resources” category has a greater direct relationship to the factors you list in your post. That category accounts for 10% of the total ranking, and specifically rewards high spending per student and on student services.

So while agreeing with what you say entirely, I would suggest that the faculty resources category magnifies the effect of wealth (because low class sizes and student to faculty ratios receive credit separately as it is) while conceding that the 10% for financial resources does cover factors not otherwise given weight.
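
To put the arithmetic of that point in one place, here is a minimal sketch. The 20% and 10% weights are the US News category weights cited just above; which categories count as “wealth-linked” is my own illustrative assumption, not a US News designation.

```python
# Rough arithmetic for the magnification argument above.
categories = {
    "faculty resources":   {"weight": 0.20, "wealth_linked": True},
    "financial resources": {"weight": 0.10, "wealth_linked": True},
    # everything else (reputation, graduation/retention, selectivity, ...)
    "other categories":    {"weight": 0.70, "wealth_linked": False},
}

wealth_share = sum(c["weight"] for c in categories.values() if c["wealth_linked"])
print(f"Direct weight on wealth-related categories: {wealth_share:.0%}")
# Prints 30% -- and that is before the indirect effect of wealth on class
# sizes, graduation rates, and reputation discussed in this thread.
```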

Is there a more satisfactory alternative to the USNWR Peer Assessment, one that focuses on overall research strengths but is more objective? I’m thinking of the publication/citation measurements baked into some of the international rankings, or maybe the “Research” part of the Washington Monthly rankings.

Combine that (or continue to use the PA, if you insist) with a second criterion for “resources”. It probably should cover more than just class sizes.

Then use graduation rates as your third criterion. Whether to use 4 year or 6 year rates is up for debate. Consider adding a cost factor, such as debt at graduation. So your 3 main criteria would be: scholarship, undergraduate resources, and cost-effective performance (or something like that).

This would be a focused variation of the “what are the best colleges?” question.
It would identify research universities that do the best job of this:
making excellent scholarship available to undergraduates, such that most of them graduate on time without breaking the bank.
The result would eliminate the double-counting Uniwatcher sees. It would eliminate some of the more questionable parts of the US News ranking (like alumni giving and the GC ratings). It would be simpler and easier to understand than the current mix of US News criteria.
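
A minimal sketch of how such a three-criterion composite could be computed; the criterion names, weights, and scores below are placeholders made up for illustration, not anyone’s actual methodology.

```python
# Hypothetical normalized scores (0-100) for the three criteria proposed
# above: scholarship (e.g., a publication/citation measure), undergraduate
# resources (e.g., class sizes), and cost-effective performance (e.g.,
# graduation rate adjusted for debt at graduation). All numbers are made up.
schools = {
    "School A": {"scholarship": 92, "resources": 71, "performance": 85},
    "School B": {"scholarship": 78, "resources": 88, "performance": 90},
    "School C": {"scholarship": 85, "resources": 80, "performance": 76},
}

# Illustrative weights; any split among the three could be argued for.
weights = {"scholarship": 0.4, "resources": 0.3, "performance": 0.3}

def composite(scores, weights):
    """Weighted average of the normalized criterion scores."""
    return sum(scores[name] * w for name, w in weights.items())

ranking = sorted(schools, key=lambda s: composite(schools[s], weights), reverse=True)
for rank, name in enumerate(ranking, start=1):
    print(f"{rank}. {name}: {composite(schools[name], weights):.1f}")
```

A real version would of course need to put each criterion on a comparable scale (e.g., standardize it) before weighting, since the raw measures come in very different units.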

tk21769, yours is an excellent idea. I might try this for the 2016 alternative US News rankings. The alternative rankings this year were aimed at showing what the result would be using US News’ own principal categories, minus the wealth-related metrics. Plugging in a non-US News measure of reputation could be just the thing.

BTW, to clarify our view of the wealth metrics in US News and as an additional reply to green678, we certainly recognize the positive impact that the wealth of a school can have; but where that impact (e.g., smaller class sizes, more faculty) AND the extent of the wealth itself are both given weight in the rankings, the results are problematic.

@tk21769, look at the ARWU rankings if you want an objective research ranking, though they include all research, including at med schools (which, who knows, may be useful to pre-meds).

^ ARWU (“Shanghai”) might work, for Uniwatcher’s purposes, as a substitute for the PA scores.
It’s fairly complex in the number of sources it aggregates and the weights it assigns. So you’d have to consider many factors to decide if it’s really capturing what you want to capture.

Of course, remember that all of these proxy measures may not be accurate for some situations, or can be gamed by schools trying to raise their rankings.

For example, even graduation rates can be gamed. A school which decreases course rigor (generally, or in specific majors) to allow students having academic trouble to pass courses and graduate may raise its graduation rates, but may end up being an unsatisfying school (generally, or in specific majors) for a student finding the rigor level to be below expectations.

Objective must be a term full of … individual subjectivity if one declares ARWU to be objective. Moving the goalposts further and further will yield a field that is simply more irrelevant to the context of this forum, which is purportedly one debating undergraduate education. Despite its claims to the contrary, the universe of the universities listed is much narrower than the choices available to students in the United States, and so are the rubrics of the citations deemed relevant.

This entire exercise might be reduced to its essence by simply relisting the public universities after removing most of the schools ranked between 1 and 25 by US News.

For what it’s worth, I have to mention that the focus of Uniwatcher’s research on the Honors divisions of public universities is worthy of a deeper look. I happen to think that as much as the “school within the school” model is a nefarious one, the opposite is true at the college level. Programs such as Plan II and BHP at the University of Texas at Austin are remarkable creations that deserve more ink on these shores.

I also believe that the biggest “hole” in the information reported by the universities remains the ratio of “upper” faculty dedicated to the teaching of undergraduates, expressed in hours per student. But asking for such information will remain the realm of uber-utopia, as it might expose the saddest of truths about what our universities have become.

“This entire exercise might be reduced to its essence by simply relisting the public universities after removing most of the schools ranked between 1 and 25 by US News.”

Finally, you are catching on!

Except…don’t UC Berkeley, UVA, and Michigan, especially, warrant some adjustment that reflects their value relative to other schools in the US News top 25?

Xiggi, thanks for the link to the Sweitzer paper. The best argument for selectivity as a metric that I’ve seen.

“Except…don’t UC Berkeley, UVA, and Michigan, especially, warrant some adjustment that reflects their value relative to other schools in the US News top 25?”

Especially Michigan, since it is not ranked in the top 25 at USNWR.

According to Sweitzer, selectivity has a big impact on PA scores. If this is true, it may seem counter-intuitive that schools like Berkeley and Michigan would have higher PA scores than some schools with higher average SAT scores and lower admit rates. However, according to Sweitzer, the most significant component of “selectivity” (with respect to changing PA scores) is the percentage of freshmen in the top 25 percent of their high school class. At some top state schools, that number approaches 100%.

At least among professional academics, enrolling 25K very good students may be making approximately as strong a PA impression as enrolling 5K “amazing” students. If thousands of good students are enrolling in your state flagship, you notice. If some smaller college in a distant state is increasing its SAT numbers from excellent to more excellent, maybe you don’t.

Xiggi has made much out of alleged shenanigans in the PA scoring. I don’t know much about that one way or the other. I’d like to believe that intentional manipulation is fairly rare and washes out over a large number of reports. To me, the Sweitzer study is interesting for what it suggests is driving the assessments of well-intentioned, cooperating “peers”.

“At least among professional academics, enrolling 25K very good students may be making approximately as strong a PA impression as enrolling 5K “amazing” students.”

There are thousands of students who attend Michigan who have other demonstrated talents that are not necessarily reflected by the highest GPAs or test scores. Many students who enroll in Art and Design, Kinesiology, and Music, Theater, and Dance, for example, as well as many recruited athletes, would never be allowed to matriculate if their backgrounds were not conducive to excelling in their areas of strength. Conversely, of course, you could have the highest GPA and test scores and never be accepted to any of these programs. These occurrences happen at all schools, of course, but not to the magnitude of a school like Michigan. I honestly cannot think of one school in this country, save perhaps Stanford, that is so consistently strong in almost all of its offerings across a plethora of disciplines as Michigan. The reason I mention this is that I feel Michigan does not get the respect it deserves from some of those here on CC in general and from USNWR in particular.

The thing with the Sweitzer study is that it mostly concerns schools far lower down the totem pole. Only 16 research universities experienced a big change in PA ratings over the time period he looked at, according to him.