Research university ranking based on just university quality

The “penalty” for large class sizes is one reason, but a major point of the article is that U.S. News assigns credit to schools with a lot of funding AND assigns credit for the effects of that funding, one of which is smaller class sizes. Our view is that U.S. News should only count the effects of funding and not magnify the impact of funding by counting it as an additional metric.

@rjkofnovi, thanks for pointing that out; the response was specific to the issue of Chicago’s four-year rate vs. Stanford and others.

Another interesting thing is that, even when you look at just the publics, this ranking matches up much better with tiers by per capita alumni achievements than USNews rankings.

I have
Ivy-equivalent: Cal
Near-Ivies: UMich & UVa
Good schools: UCLA, UNC, W&M, UW-Madison, UT-Austin (which I have as equivalent to USC/Vandy/WashU/Rochester/CWRU). Also UIUC and GTech, which (like NYU) are top-notch (Ivy-equivalent/near-Ivy) in some fields while decidedly average in others.

USNews has UCLA up with Cal & UVa (above UMich) and has UW-Madison much lower than UCLA/UNC/USC/Rochester. USNews also has UCD, UCSB, & UCI above or equal to UW-Madison, UT-Austin, and UIUC, which simply isn’t the case when you look at per capita alumni achievements. This alternative ranking has UW-Madison & UIUC higher than UCSB and UCI (and UIUC the same as UCD).

Really interesting that Cal, UMich, UVa, and UW-Madison got a boost from the re-ranking while UCLA and UNC really didn’t. It’s notable that by prestige, people seem to put UCLA and UNC on a tier with Cal, UMich, & UVa (they’re listed in “CC Top Universities”) but when it comes to alumni achievements, they’re more like UW-Madison and UT-Austin.

That might be due to acceptance rate. It seems that using selectivity as a criterion is inherently flawed, and you’d get “more accurate” rankings (in terms of matching up with what alums achieve) simply by using these 3 criteria.
This makes some sense, actually. All the UCs besides Cal get an “artificial” selectivity boost because it’s so easy for CA residents to apply to all the UCs, but that doesn’t mean that UCD, UCSB, & UCI actually have a more accomplished student body than UW-Madison, UT-Austin, & UIUC. Likewise, UNC has a lower acceptance rate than UMich. Both have about a 50% acceptance rate for in-state kids, but while UMich is about half OOS, only 15% of UNC’s student body can be OOS, so UNC’s OOS acceptance rate is very low while 85% of its student body comes from higher-yielding in-state applicants. This leads to UNC having a lower acceptance rate but a less accomplished student body than UMich (since UMich’s OOS students would have to be more impressive than most of UNC’s students).
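To make the arithmetic concrete, here is a minimal sketch with entirely made-up application and admit counts (not actual UMich or UNC numbers): with the same in-state admit rate, a cap on OOS enrollment forces a very low OOS admit rate, which pulls the blended acceptance rate down without implying a stronger admitted class.

```python
# Illustration only: made-up counts showing how an OOS enrollment cap can lower
# a school's overall acceptance rate. These are NOT actual UMich/UNC figures.
def overall_rate(in_state_apps, in_state_admits, oos_apps, oos_admits):
    """Blended acceptance rate across in-state and out-of-state pools."""
    return (in_state_admits + oos_admits) / (in_state_apps + oos_apps)

# "UMich-like": ~50% in-state admit rate, and roughly half the class is OOS,
# so it admits fairly generously from a large OOS applicant pool.
umich_like = overall_rate(in_state_apps=20_000, in_state_admits=10_000,
                          oos_apps=30_000, oos_admits=12_000)

# "UNC-like": same ~50% in-state admit rate, but an OOS cap means only a small
# number of OOS admits despite a sizable OOS applicant pool.
unc_like = overall_rate(in_state_apps=20_000, in_state_admits=10_000,
                        oos_apps=15_000, oos_admits=2_000)

print(f"UMich-like blended rate: {umich_like:.0%}")  # 44%
print(f"UNC-like blended rate:   {unc_like:.0%}")    # 34%
```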

Of course, alumni achievements, like graduation rates, have some component due to selection effect. Separating that out from treatment effect (i.e. is the same student likely to achieve more after graduating from school A versus school B?) may be difficult.

Note that selection effect is not just limited to entering students’ academic credentials. Entering students’ SES can also matter, since higher-SES students are less likely to run out of money before graduating and likely benefit more from connections or family support after graduation when making an extended search for a career-track job (as opposed to a low-SES student needing to take the first job offered, even if it is not the optimal one for future career development).

@ucbalumnus, for sure, but all else being equal, unless I’m pre-med, I’d prefer to go to school with people who are more likely to do great things in the future as well.

@PurpleTitan, thanks for your posts. Can you describe what constitutes per capita alumni achievement?

Hi @Uniwatcher, I detail it in the link I posted in the first post of this thread, but essentially, I take 3 of Forbes’ subrankings:

  1. “American Leaders” (not strictly per capita, but since the WAS LACs do very well in it, it can’t be penalizing schools for small size), which is kind of a “who’s who” in business, government, the arts, and sciences.
  2. Prestigious student awards won (per capita).
  3. PhDs produced (per capita).
  plus a (per capita) WSJ ranking of undergraduate feeders to elite medical, law, and business grad schools.
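For anyone curious how subrankings like these could be rolled up into one list, here is a minimal sketch with hypothetical school names, ranks, and equal weights (the actual combination used in the linked post may well differ):

```python
# Minimal sketch: combine several per-capita subrankings into one composite.
# School names and rank values below are made up for illustration only,
# and equal weighting is an assumption, not the linked post's method.
from statistics import mean

# Each value is a school's position (1 = best) in a given subranking.
subranks = {
    "School A": {"leaders": 5,  "student_awards": 3,  "phds_per_capita": 8,  "grad_school_feeder": 4},
    "School B": {"leaders": 12, "student_awards": 15, "phds_per_capita": 6,  "grad_school_feeder": 10},
    "School C": {"leaders": 7,  "student_awards": 9,  "phds_per_capita": 20, "grad_school_feeder": 6},
}

# Equal-weight composite: average the subranking positions, lower is better.
for school, ranks in sorted(subranks.items(), key=lambda kv: mean(kv[1].values())):
    print(f"{school}: average rank {mean(ranks.values()):.1f}")
```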

I’ve already posted a variant of that ranking (from the lead post) in a thread discussing merits of public honors programs.

Here, again, is the earlier post by public honors:
http://publicuniversityhonors.com/2013/11/20/surprise-public-universities-have-the-best-academic-departments/

This is worthy, imho, of wider dissemination. And I’ve edited my post to note that the current table does not use the same metrics. Unfortunately, the rampant media saturation of the U.S. News rankings only entrenches its criteria. At least they came out with the global rankings table, which tempers the original criteria (which do seem rather unstable on a year-to-year basis).

I still think we need rankings which focus on the quality of the actual schools rather than the students attending said schools. Selectivity, endowment, graduation rates, giving rates, etc.: schools have little to no control over these factors. They are stuck with whatever applicant pool they receive in a given cycle, which ultimately (and unfairly) affects their ranking and, in turn, demand for the school.

On the other hand, a school does have complete control over the quality of its campus, the academics offered, and its general administration. Ultimately, these are the things that matter to most families sending their kids off to college.

Even better, get rid of the “rankings” altogether, and focus on “ratings” - a report card style assessment.

The alternate ranking shifts emphasis to the subjective peer assessments, which is the part of the USNWR ranking that most favors certain large public universities.

Think of it this way. The students with the best stats are the students with the greatest freedom to choose colleges. In a free market with good information flow, collectively they will tend to choose the best colleges.

USNWR has to keep the PA assessments in its computations. Without it, they would lose ALL credibility.

@tk21769, the colleges they choose would be what teenagers deem “best”. Many kids want Columbia, NYU, and USC because they are in NYC/LA. That may not have much to do with the quality of the education or even opportunities.

Furthermore, being in the Ivy League seems to boost applications beyond what opportunities at certain Ivies would dictate. I would say that high-stats teenagers and their parents are, on average, more star-struck by the Ivy brand than people doing hiring or academics evaluating grad school applications (the academics’ views tend to track the academic peer assessment rankings).

As another example, UChicago went from being a school with a relatively high admit rate and a reputation as the place where fun goes to die to a school with a very low admit rate (in the span of about a decade) after it launched a massive marketing campaign, yet I daresay the quality of the education and the opportunities from there have not changed one iota.

In short, your assumption that teenagers are not susceptible to marketing and have perfect information is, IMO, a flawed one.

As an analogy, car A may become the “hot” car among the teenage crowd (usually with the help of a marketing campaign), but car A may not be “better” than car B by any measurable metric even if it is more popular.

They tend to choose the highest ranked colleges, not necessarily the best colleges - there is a difference IMO. I would argue that rankings play a pivotal role in the college selection process for many high stat students, and the go-to ranking source tends to be USNWR, which is very student-biased.

When you have a ranking system that heavily weights student factors, it essentially lets students drive the rankings, which means colleges have little control over improving their ranking position. That’s really the problem I see with the current rankings… it stifles healthy competition among colleges (e.g. hiring good professors, improving campus quality, adding more labs) and promotes corrupt competition (e.g. fudging SAT scores, paying students to retake SAT exams, making unqualified students apply in order to lower the acceptance rate, etc.).

Bravo to the two preceding posts!

In addition to excluding the wealth-related metrics, which are not “outputs” at all, we also excluded test scores and class rankings, which likewise are not “outputs.” While academic reputation is subjective, it is a form of output, and UCLA’s Higher Education Research Institute has found that it remains key to student choice of a school. That students and families value reputation is surely one reason that U.S. News still makes it so prominent in its metrics.

On the other hand, U.S. News enhances the reputation of many private schools by adding the wealth and selectivity metrics; those same metrics work to reduce the reputation of many strong publics. As many have noted, the selectivity metric also drives schools to generate more applications so they have more students to reject.

We included the three most prominent metrics in the U.S. News methodology, including the percentage of classes with fewer than 20 students. Please note that this metric does not favor publics at all.

I just find it amusing to read the title of the article (Alternative U.S. News Rankings: Lots of Surprises) and find the same HYPSMCC (Chicago & Columbia) in the top 7 spots.

Interestingly enough, Chicago’s move from 4th place to 7th was described as a “significant drop,” while Princeton’s drop of 2 spots from 1st to 3rd was not. LOL, frankly, neither drop is significant enough to be worth a mention, to me.

@theluckystar, well, the author of that list revised it with more accurate data.

More significant drops lower down (by Wake and Miami as well as Case and Lehigh). Also WashU goes from 14th to 19th.
Brown and many big publics rise significantly.

Granted, if you argue that large moves at lower rankings don’t matter so much if the distribution is normal (so you need really big moves to matter lower down), then the only big movers are Brown, Cal, UMich, UW-Madison, and maybe UVa moving up, and maybe WashU & Case moving down.
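A quick sketch of that normal-distribution point (assuming underlying school quality is roughly normally distributed across, say, 200 ranked schools, which is my assumption, not anything from the article): converting rank positions to z-scores shows that the same number of rank spots covers a much bigger quality gap near the top than in the middle of the pack.

```python
# Sketch: if "quality" is roughly normal across ~200 ranked schools, a 5-spot
# move near the top spans a bigger quality gap than a 5-spot move mid-pack.
from statistics import NormalDist

N = 200  # assumed number of ranked schools

def z_for_rank(rank: int) -> float:
    """Quality z-score implied by a rank (1 = best), using the mid-rank percentile."""
    percentile = 1 - (rank - 0.5) / N
    return NormalDist().inv_cdf(percentile)

print(z_for_rank(2) - z_for_rank(7))    # ~0.59 standard deviations
print(z_for_rank(50) - z_for_rank(55))  # ~0.08 standard deviations
```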

@PurpleTitan, my comment was meant to refer specifically to the paragraph below from the article, with a bit of tongue in cheek. A 3-spot drop for Chicago should not be considered a “significant” change in comparison to Princeton’s 2-spot drop; that’s all.

“As for private universities, there are few significant changes among the top seven in the adjusted rankings—except that the University of Chicago drops from 4 to 7. The reason: six-year graduation rates that do not match schools such as Yale, Harvard, Stanford, Columbia, Princeton, and several other public and private institutions.”

IMO, peer assessment ratings act as a faculty quality rating…albeit quality of the kind that can be easily measured, such as research, publications, and research-related academic awards.

If USNWR dropped peer assessment, they need to add some objective faculty quality ratings.

While I am at it, Chicago’s six-year graduation rate is 92%, Caltech’s 92%, MIT’s 93%, Stanford’s 95%, Amherst’s 95%, Williams’s 95%, Princeton’s 96%, etc. There is only one public university, UVa, with a higher rate (93%) than Chicago’s. No other public school has a higher rate on this metric. So there are not “several other public institutions” with a higher rate, just one. Sorry to be nitpicky.