I have been a frequent critic of USNWR rankings on these discussion boards. At some point one should consider the source of these rankings. USNWR was a failing national news magazine before it latched onto its rankings gimmick to keep itself afloat. The fact that it changes its ranking criteria annually is an admission that its past rankings were flawed. Its rankings have to change every year just to keep the gullible public reading/buying. Has anyone here bothered to check out the credentials of the USNWR ranking guru, Robert Morse? Would any credible institution hire this person to rank the nation’s colleges and universities? His only area of expertise is selling a magazine that already failed at reporting news.
Guessing another reason for the shots being fired may be resentment on Northwestern’s part that its local competitor, UChicago, has risen so much in the US News ranking in recent years (from being tied with Columbia at #8 in 2010 to being tied with Yale at #3 in 2017), in part by being very aware of the ranking criteria and playing to them aggressively (as has been discussed exhaustively elsewhere on CC). I would guess UChicago is winning many more cross-admits from NU than formerly.
^^ I would agree with @DeepBlue86 - this is something that NW has been answering questions about for years as UofC has climbed the rankings. If someone cared to dig in, I would look closely at UofC, as my guess is a lot of the comments are targeted at them (about numerator, denominator, wait list…). US News, being the most popular ranking, was the vehicle for NW to deliver the message.
Northwestern and Chicago don’t really target the same applicants, even though there’s probably some overlap. They’re very different campuses with different vibes: one touts intellectualism and learning for its own sake, the other a more social and athletic atmosphere that accompanies academics.
“But for a college president to spend his/her time on one of the many college rankings, done by a private institution, is not the best way to go about it.”
Every president or dean spends time on the rankings, figuring out how to improve, and the lower-ranked ones have goals to achieve for the school (crack the top 100, for example). “Obsess” is maybe too strong a word, but let’s say they’re very aware. This also happens at grad and professional schools.
NW president’s point is that data submitted to USNWR by many schools are suspect. He never questioned its ranking criteria.
The single largest component of the US News ranking is the peer reputation score, which is determined through a survey of university administrators. It would be very interesting – and potentially embarrassing – if USNWR released more detail about the survey responses that they get.
Several years ago, for example, the magazine “Inside Higher Ed” obtained the actual USNWR peer reputation survey response submitted by the University of Wisconsin-Madison, through a public records request. Here is how Wisconsin rated hundreds of “National Universities”, on a scale from 5 (distinguished) to 1 (marginal):
5 (distinguished) - Wisconsin, New School
4 - none
3 - none
2 (adequate) - 260 schools, including all Ivies, all Big Ten schools other than Wisconsin, Stanford, MIT, and all UCs
1 (marginal) - Arizona State
Turns out that the administrator who filled out the form had a kid at the New School. He refused to elaborate on his rating of Arizona State.
Yes, this is a true story: https://www.insidehighered.com/news/2009/08/19/rankings
@Muad_dib,
The NU president wrote the piece; his opinion doesn’t necessarily represent the administration’s and certainly has nothing to do with NU alumni. Please don’t use this as an excuse to attack the school and the alums. How do you know he wasn’t just trying to raise public awareness that the rankings should be taken with a huge grain of salt and that differences of, say, 10 spots aren’t meaningful?
For what it’s worth, I agree with President Schapiro. US News has interesting components worth examining, such as the Engineering and Business rankings, or the Peer Assessment score. But the Alumni Donation Rate, the Financial Resources Rank and the Faculty Resources Rank are completely unreliable and/or irrelevant, either because they aren’t telling or because the way the data is tabulated is inconsistent and inaccurate. President Gerhard Casper (Stanford University) wrote a similar letter to the US News editors 20 years ago.
https://web.stanford.edu/dept/pres-provost/president/speeches/961206gcfallow.html
Corbett, Wisconsin’s administrator’s ratings of Wisconsin, the New School and ASU were most likely not included in the Peer Assessment score, because they would have been detected as outliers. If you read the fine print, you will see that the Peer Assessment score deletes outliers. Also, the Peer Assessment rating now takes two-year averages to reduce year-to-year volatility. I would not dismiss the Peer Assessment score. We are talking about the informed opinion of hundreds of university presidents, provosts and deans of admissions. There will be biases and outliers, but those are, for the most part, left out of the rating. And even if they weren’t, too many people are surveyed for a few outliers to materially change the outcome. Unlike the other criteria used in the US News ranking, the Peer Assessment rating cannot be manipulated by a single university, since it is based on the responses of hundreds of presidents, provosts and deans of admissions from dozens of peer institutions. All the other criteria are derived directly from data provided by the individual universities and colleges.
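To illustrate the mechanics I’m describing (a sketch of the general idea only, not US News’s actual procedure, and all counts/scores below are made up), here is how outlier trimming plus two-year averaging would neutralize a single eccentric response like Wisconsin’s:

```python
# Sketch of outlier trimming + two-year averaging for a peer-assessment score.
# Illustrative only: the response counts and scores are hypothetical.

def trimmed_mean(scores, trim=2):
    """Average after dropping the `trim` highest and `trim` lowest scores."""
    s = sorted(scores)
    kept = s[trim:len(s) - trim]
    return sum(kept) / len(kept)

# ~300 raters score a school around 4, plus one strategic/eccentric "1":
this_year = [4.0] * 300 + [1.0]
last_year = [4.0] * 300

score = (trimmed_mean(this_year) + trimmed_mean(last_year)) / 2
print(score)  # 4.0 -- the lone outlier is trimmed away entirely
```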
The US News ranking has tons of problems, but the trouble is that the other rankings are even worse.
Among all the gaming techniques, the one that I dislike the most is enrolling a “small” cohort of freshmen and then opening the floodgates to internationals/transfers in the second year with little quality control. The magnitude of this gaming can be very large. For example, Northeastern in Boston enrolls about 2,700 freshmen, but it has a total of about 18,000 undergraduate students. In other words, the sophomore class is almost double the freshman class: increasing from 2,700 to about 5,000. A big part of the 2,300 increment is international students, who require no SAT. Good luck with student bonding for four years!
@pro2dad
Uh, no. Can you cite the source of your claim that Northeastern has 5,000 sophomores? Northeastern enrolls 2,800 fall freshmen, 600 in January through NU.in, and 500 transfers, which includes sophomore and later years. Also, most NU students spend five years there due to the co-op program, so your arithmetic is way off.
@prof2dad - As @TomSrOfBoston points out (Tom… missing the f in prof2dad)… five-year math makes NU freshmen a smaller percentage of the school.
Given that Northeastern is a five-year school for the majority (six for some programs, like PharmD or any BS/MS program with three co-ops), the classes are actually not nearly the size you suggested, which also aligns with the transfer admission statistics. If you simply account for five classes at a time, that gap drops from 2,300 to 900 (see the quick arithmetic sketch below). Northeastern already doesn’t look at / report international test scores for freshmen (certainly gaming), so there’s no need to loosen standards after freshman year. For comparison, BU accepts a similar number of transfers.
The transfer acceptance rate is 39%, while the freshman acceptance rate is 28%. It’s hard to say whether the applicant pool quality changes, etc., but it doesn’t appear that some huge loosening is happening. The numbers also don’t add up: the CDS shows that fewer than 600 transfer students enroll per year, so that still leaves 300 unaccounted for, even with the number adjusted for the five-year program.
Edit: Looks like NU.in likely accounts for the rest.
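To put rough numbers on the five-year point (a back-of-the-envelope sketch using only the approximate figures already quoted in this thread, not official Northeastern data):

```python
# Back-of-the-envelope cohort math for Northeastern, using the thread's
# rough figures (not official data): ~18,000 undergrads, ~2,700 fall freshmen.
total_undergrads = 18_000
fall_freshmen = 2_700

# Four-year framing (the original post put sophomores at ~5,000):
gap_4yr = 5_000 - fall_freshmen            # ~2,300 "extra" students

# Five-year framing (most students co-op, so spread over 5 cohorts):
avg_cohort_5yr = total_undergrads / 5      # 3,600 per class year
gap_5yr = avg_cohort_5yr - fall_freshmen   # ~900

print(f"4-year framing: gap ~{gap_4yr}")
print(f"5-year framing: avg cohort {avg_cohort_5yr:.0f}, gap ~{gap_5yr:.0f}")
```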
@TomSrOfBoston My bad. My earlier estimate ignored fifth years, but my main thesis remains: the class size still increases substantially, from around 2,800 to almost 4,000. Most international students who enroll later on do not have an SAT/ACT.
As mentioned above, neither do the freshmen who enroll originally as international applicants, and there’s nothing to suggest that transfers are any higher a percentage of international students. It’s also worth noting that after the first semester, the freshman class size is effectively 3,400, and 3,700 at the start of sophomore year (nearly full size with four years left). You’re certainly right that there’s gaming, but it’s not in transfer admissions. It’s mostly NU.in there.
@prof2dad @PengsPhils Many colleges “manage to the ranking criteria”, aka gaming. They offer January admission to September applicants, require the fall semester to be spent overseas, and offer the very common guaranteed sophomore transfer option. Schools that do this include USC, NYU, BU, Brandeis and even Ivy League Cornell. While the excuse that “everyone’s doing it” never got my kids off the hook when they were young (or me off the hook when I was a kid), the fact is that these alternative entry routes are increasingly common at top-ranked colleges. Yet most seem to zero in on Northeastern when citing gaming, perhaps due to that Boston Magazine article which, despite the clickbait headline, was very positive for the school.
It’s important to keep in mind that much of the gamesmanship (or dishonesty) has almost no impact. Hamilton is a school that has January admits. I believe they are included in the stats for the class they are joining (i.e., Hamilton accepted all of the January admits in the regular cycle, which is the class of 2021 for the new class that started a few weeks ago).
Even if they’re not including the January admits as a way to improve their stats… it’s pointless. The class of 2020 at Hamilton received 5,678 applications. If they excluded 50 January admits, the impact on the acceptance rate would be 50/5,678 ≈ 0.0088, or about 0.88 percentage points. If that impact is then weighted into the rankings (where acceptance rate represents… say… 10%), the effect is essentially zero.
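Working that out explicitly (the 50-admit count and the 10% weight are this post’s illustrative assumptions, not published numbers):

```python
# Rough check of the Hamilton January-admit arithmetic.
# 50 admits and a 10% ranking weight are illustrative assumptions only.
applications = 5_678     # Hamilton class of 2020 applications
january_admits = 50      # hypothetical count of excluded January admits

shift = january_admits / applications        # ~0.0088
print(f"acceptance-rate shift: {shift:.4f} (~{shift * 100:.2f} points)")

ranking_weight = 0.10                        # assumed weight of acceptance rate
print(f"ranking impact: ~{shift * ranking_weight * 100:.2f} points")
# ~0.09 points -- lost in the noise of the overall score
```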
I agree that the rankings are viewed as too important and that schools do silly things to drive rankings… but in these types of rankings, close is good enough. When all of the math is fuzzy, nobody should hold up the results as gospel.
OK, I read the fine print. Here’s what it said:
- For the 2017 rankings, USNWR sent out 4,635 questionnaires for the academics survey
- They reported a response rate of 39%, which means about 1,808 responses
- "To reduce the impact of strategic voting by respondents, U.S. News eliminated the two highest and two lowest scores each school received before calculating the average score."
https://www.usnews.com/education/best-colleges/articles/how-us-news-calculated-the-rankings
USNWR won’t tell us how many numerical scores each school got. It would be fewer than 1,808, because it is possible to give a “don’t know” response, which doesn’t count. Some of the more honest academics answer “don’t know” for the majority of the hundreds of schools on the list (Morton Schapiro himself, back when he was president of Williams, said that he used this approach).
But let’s suppose that each school gets “hundreds” of numerical scores, as you suggest. If “strategic voting” is at all common, then removing a mere four scores – as USNWR says it does – is not going to eliminate its effects.
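A quick simulation makes the point concrete (all counts and scores here are hypothetical, since USNWR doesn’t publish per-school response data):

```python
# Hypothetical illustration: trimming 2 high + 2 low scores does little
# when "strategic voting" is widespread rather than a lone outlier.

def trimmed_mean(scores, trim=2):
    """Average after dropping the `trim` highest and `trim` lowest scores."""
    s = sorted(scores)
    kept = s[trim:len(s) - trim]
    return sum(kept) / len(kept)

honest = [3.5] * 300      # ~300 honest raters scoring a school 3.5
strategic = [2.0] * 30    # 10% of raters lowball a rival to 2.0

print(trimmed_mean(honest + strategic))   # ~3.37: still dragged down ~0.13
# The trim removed only 2 of the 30 strategic votes; the other 28 all count.
```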
You may place more value on the “informed opinion” of university presidents than the presidents do themselves. The quotes below are from two different university presidents:
https://www.insidehighered.com/news/2007/05/07/usnews
https://www.insidehighered.com/news/2009/08/19/rankings
I know a lot of people like to fault the “reputation survey” used by USnews, but I personally feel that it is quite relevant as long as you know what you are getting.
I recently heard a talk show host say something on her show that can be used as an analogy here. She said she was reluctant to book a certain guest, even though she wanted to have a good conversation with him about some controversial topics, because he had a reputation for being “a misogynist”. She felt that inviting him on the show would taint her reputation by association, and since she did not have the time to read all his writings and opinions and research him in great detail, she did not want to risk the blowback in case something she had overlooked surfaced after the interview. The risk was too high, and she did not think the potential reward was worth it. So he was not going to get on her show, because of his “public reputation”, even though she had no idea whether he deserved that label or not.
That is what the reputation rating of a university gives you, in the other direction. If the “rumor mill” generally says that a certain university is an “outstanding institution”, then that is good to know for somebody who values “perceived reputation” and is paying top dollar for a private school. It really doesn’t matter whether the reputation is deserved or not. The student who attends is betting that he/she will get a halo effect through association, and that is the “signaling effect” the student is hoping for.
Should this be the way students choose their undergraduate institution? I am sure some would argue no, and there are many legitimate reasons for such an argument, but for others “perceived reputation” may be the most important metric of all.
This is the primary reason you don’t find “West Idaho State University” showing up as a top-10 school in the USnews ranking. USnews is an “exclusivity ranking”, and at $250K, private education may very well be a “Veblen good” for some who consume it; for those consumers, an institution’s “perceived reputation”, as well as USnews’ methodology for measuring it, makes perfect sense.