To state the obvious, the US News rankings are not really going to be very useful for you. And to be fair, some of those factors would be very hard to reliably measure in any sort of consistent and comparable way on an annually updated basis. Which doesn’t mean you can’t try to investigate those issues for colleges of interest, but it does make it hard for me to imagine a US News-style ranking ever doing that for you.
For what it is worth, as someone who is also very focused on academic and educational quality at the undergraduate level for my own kids, the US News data I am usually most interested in are the “peer” surveys, both the general reputation version and the one they do specifically for undergraduate teaching.
It isn’t a substitute for what you are describing, but over time it has been useful to me: I hadn’t realized some colleges had as strong a reputation in certain ways as these surveys reported. In that sense it helped me broaden my notion of which colleges could be good in the ways I cared about for my kids.
Including various NESCAC colleges. I will admit I had a much stronger impression of some of those colleges than others until recently (somewhat randomly, a lot depended on whether I was likely to have attended a debate tournament they hosted back in my own college days). And while the overall US News rankings still don’t mean much to me, how they did in these peer surveys has been one of many sources of information I have been considering when taking a look around those colleges.
A few comments about the peer review process (which is 20% of the rankings):
The response rate isn’t terrible:
U.S. News collected the most recent data by administering peer assessment surveys to schools in spring and summer 2024. Of the 4,665 academics who were sent questionnaires on the overall rankings in 2024, 30.7% responded compared with 30.8% in 2023. In 2024, the peer assessment response rate for the National Universities category was 41% and the National Liberal Arts category was 47%.
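Just to put those percentages in absolute terms, here is a quick back-of-envelope sketch (plain Python, using only the figures quoted above; the per-category denominators aren’t given in the quote, so those counts can’t be reconstructed from it):

```python
# Rough arithmetic on the quoted US News peer-survey response rates.
surveys_sent = 4_665   # questionnaires sent in 2024 (from the quote above)
rate_2024 = 0.307      # overall response rate, 2024
rate_2023 = 0.308      # overall response rate, 2023

responders_2024 = round(surveys_sent * rate_2024)
print(f"2024: ~{responders_2024} of {surveys_sent} academics responded ({rate_2024:.1%})")
# The 41% (National Universities) and 47% (National Liberal Arts) figures are
# category-level rates; their denominators aren't quoted, so per-category
# responder counts can't be derived here.
```

So the overall rate works out to roughly 1,400-plus actual responders, which is the pool these reputation scores rest on.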
Surveys are sent to presidents, provosts, deans of admission and the like. Obviously, most of those recipients don’t complete the survey.
Among the recipients who do respond, many hand the survey off to a lower-level staffer to complete, say an entry-level AO (an example I’ve often heard).
I’m generally of the mindset that many survey respondents have little knowledge of the current mission of the schools they are ranking, how those schools are fulfilling their missions, or their outcomes data. I expect that quite often, perceptions of a given school are tied to its past performance, not its current one.
Can one draw conclusions from the peer review data? I suppose so, but to me it seems ripe for inaccuracies and/or out-of-date information. I would support getting rid of the peer review piece entirely and replacing it with greater weighting of outcomes.
I wish people wouldn’t pay attention to USNWR rankings, but I know many do and the current way the peer review data is collected leaves much to be desired.
So I have zero doubt there are all sorts of “errors” possible with any given response. To me the question then becomes whether, when all those individual errors are put together, they add up to some sort of consistent bias.
And I think you are right there could be a significant trailing bias, although I also tend to think it is good, not bad, if these measures are hard to move quickly.
In the end, though, people should understand this is no more than what it is: a survey of opinions, not an actual prediction of your kid’s lived reality.
“some sort of proxy” is probably qualified enough for my purposes.
I do think the survey results are an imperfect measure of reputation in higher education circles: possibly with some biases, definitely with some noise, but probably still a reputational measure of some sort.
And then whether reputation is actually a good measure of quality is often a complex question. But there is likely some sort of relationship, so . . . “some sort of proxy” works for me.
How can one access the peer surveys? I did a quick Google search and a search on the US News site and didn’t come up with anything, but I’m sure the data is out there.
Dang. I thought my women’s golf posts stood a chance at getting some traction; but now that we have US News, forget it!
My support of the US News rankings has officially declined by three levels, to “Moderately Supportive.”
It seems that Middlebury has joined an illustrious group of LACs that US News feels free to move around a lot from year to year. From recollection alone (I didn’t go back through several prior ranking years), that group includes Vassar, Wesleyan, Smith, Haverford, Harvey Mudd, Barnard, and W&L. I’m sure there are others, but those, again off the top of my head, seem to have bounced up and down the US News top 20 or 25.
Let’s face it, the inclusion of the service academies is the only thing really messing up the LAC rankings. Imagine the gnashing of teeth if the National Universities had to make room for the Air Force, Navy, and West Point!
Malcolm Gladwell’s Revisionist History podcast did a two-parter on the USNWR rankings a few years back, and talked specifically about the bogus nature of the “peer” surveys - basically making the point that the administrators filling these out are much, much too busy to have an informed opinion of each of the n-hundred schools they’ve been asked to report on, and so they’re really just basing it on the same signals everyone else has. Here’s a good writeup on it, and I remember listening to the Dillard University episode - it was great.
Unrelated:
…raise your hand if you’re stunned that @NiceUnparticularMan was a regular on the APDA circuit some years back. I’m assuming there are a fair number of you out there…
Beyond the service academies, other additions from other categories that have impacted the positions of some of the NESCACs include Harvey Mudd and Richmond.
The surprise here isn’t that Mudd is included, but that it came in below eight other LACs (and, yeah, I have problems with Richmond, but they would probably hijack the thread).
I’ve been waiting for this to see how demographics played out in the post-affirmative action world, especially after Amherst released data showing a big drop in the percentage of Black students. But at Wes the answer seems to be that they stayed largely the same, with Black/African American student share ticking up to a five-year high of 12%.
One other item of note: percentage of ED-admitted students in the incoming class is also at a five-year high (61%).
I love how transparent Wesleyan is with this info, including admitted-student data, which not many schools share (but I wish they would).
These are a few of the numbers I look at each year (again, for admitted students):
HS Preparation:
85% have taken calculus
79% have taken biology, chemistry and physics
77% have a fourth year (or equivalent) of one foreign language
Wesleyan’s peer schools don’t divulge this info, so in the absence of that data I assume they are looking for something similar in terms of HS prep. That assumption could be wrong, of course.
The inherent problem with specialist institutions like Mudd is they could plausibly be #1 for certain kids, and not on a list at all for other kids. A university like, say, Caltech presents the same problem for the National Universities list. So there is basically no ranking that will get them “right”, unless perhaps it is a ranking of just colleges with that special focus.
Of course by this way of thinking, the service academies present the same problem, and in some sense so do women’s colleges. And it is easy enough to basically print out the rankings and use a black marker to strike all the colleges that do not interest you at all (or which would not allow you to enroll). But still, even if you are the sort of kid who might well, say, consider all of Williams, Wellesley, and Mudd, exactly how you rank them is very much going to depend on some very personal decisions.
For what it is worth, I tend to think this is a good assumption for at least other general admissions and broad coverage institutions like Wesleyan. I know Caltech suggests different priorities, and there may be other specialist institutions, or perhaps direct-admit specialist schools or majors, which would have different priorities. But outside of those cases I am also comfortable using Wesleyan for guidance.
Agreed. But for Wesleyan’s peer schools I’ll make the leap. That’s why I post those Wesleyan admitted-student stats on many threads where the student says “I only have two years of FL,” or “no physics,” or blah blah blah.
Also, a family I know visited Wesleyan recently, and an AO stated that if an applicant doesn’t have four years of each of the five core subject areas, it’s highly unlikely to result in an admission. Maybe they will ultimately post that stat…but that’s not an uncommon way to separate the wheat from the chaff when looking at a high volume of apps.
But this isn’t a matter of popularity. US News has ostensibly eliminated selectivity/rejectivity as one of its key components. And Caltech ranks pretty highly in the National Universities ranking. Harvey Mudd is underperforming vis-à-vis the National LACs for some reason I can’t quite put my finger on.