Gaming the USNWR rankings

@tk21769 I think you summed it up perfectly. It really doesn’t matter what a bunch of yahoos on a public message board think about a topic on which they are, in most cases, armchair quarterbacks; what matters is the experts who do the peer rankings and the students who can choose to go anywhere, and where they pick. One kind of validates the other.

How about majors? Does it help to conceal the difference between education and computer science?

@BrianBoiler
I’ve visited boards where many posters do sound like a bunch of yahoos. IMO, this isn’t one of them.

@tk21769
It was an attempt at self-deprecating humor; I include myself with the rest of my post. I’m not a college admissions expert; all I write is from my view as a former student and current parent of students going through the process. Sorry, text doesn’t translate as well as spoken communication. The point is that the people who do these rankings are experts in the field who commit their lives to their work. Most on this board look at this issue from only one stakeholder’s perspective.

^ Eh. The people who do these rankings do it to sell magazines and ads.

Everyone should find schools that make the most sense for them.

^ The people who pay the people who do the work sell magazines and ads. The people who actually sit down and do the work spend their careers talking to all the stakeholders and come up with the process they think is best. And other than some disappointed parents, the rest of the stakeholders tend to agree with how they go about it.

^ I daresay most magazines don’t have people whose full-time job is coming up with college rankings. Which makes sense, considering that most of us can come up with rankings in our spare time.

^ I’m betting, and I’d bet a bunch, that USNWR has full-time people who do nothing but rank colleges.

^ They don’t do a lot of vetting even if they do.

^ “Everyone should find schools that make the most sense for them.”

I’d be really disappointed if a college ranking were the sole data point used in any decision to pick a school. It may be a heavily weighted reason not to choose a school, but it should never be the sole reason.

However, to quote Radar O’Reilly, “You can’t swing a dead cat around here without hitting…” a college. There are so many options that using an outside source to help narrow your selection is, I think, a great idea. Additionally, if you are about to invest $75k/year in a university, having some sort of independent validation that the investment is worthwhile is a good thing.

^ That can be done with a little research (into grad outcomes, surveys, the incoming student body, etc.).

Yes, sure, but if someone has already done some summary ranking for you, and that ranking is widely accepted as being pretty close, why redo the work?

Are the rankings perfect? Absolutely not. There is no single correct algorithm, because everyone has different criteria for what is important. But I’m never going to suggest somebody start with Alabama, work through to Xavier, collect all that information, and come up with an algorithm that is perfect for their own personal criteria. Use the ranking for what it was intended: to give you one data point.

It is interesting to note how well it works to “game” the rankings.
The most remarkable case is Northeastern.
Most people will never know or care about the details of their dramatic rise in the rankings.
But there it is…and it is getting more applications than ever…selects the best ones, thus raising test scores…
Many families around here live and die by those rankings and just assume the sacred validity of the methodology.
Is Northeastern actually substantially “better” than it was ten years ago? It might be now…
It would be interesting to fake a rise in the rankings with a somewhat random school over a few years and watch the results.

Northeastern went to incredible lengths to game the USNWR rankings; they did it openly, and they reaped the benefits. In 20 years of concerted effort to move the needle, they went from #162 to #40.

To read the full article: https://www.bostonmagazine.com/news/2014/08/26/how-northeastern-gamed-the-college-rankings/

Such a claim requires that (a) students—and, presumably, their parents—have good, solid knowledge about what constitutes the best college, (b) the affordability of all the options is equivalent, and (c) there’s a single definition of “best”.

In order:
(a) The fact that we’re having this discussion in this thread—and the folks here are pretty well-informed on this sort of topic!—as well as the arguments about college quality elsewhere on CC, makes for some pretty good anecdotal evidence that nobody really knows what constitutes the best (however defined) colleges. Where information about the market is incomplete, distortions in market choices abound.

(b) Different colleges come (especially after financial aid of all sorts) at different price points. To the extent that affordability is an issue for any of the highest-ranked students, that introduces a distortion in the market independent of quality of institution (however defined), and thus means that you can’t really rely on student choice to distinguish college quality.

(c) This is, arguably, the big one. For my D17, no college over about 4,000 students could fit her definition of “best”; for my D19, bigger is better. For my D19, a college can’t be “best” if it doesn’t have a student orchestra that non-majors can audition for; that wasn’t even an issue for my D17. (And for both of them, those aren’t issues of fun or somesuch—they show, to them, dedication to aspects of educational quality.) For some students, there has to be a really amazing musical theater program, or an engineering program, or a strong pre-med track. For some, a location in a “college town” is a big deal, or a huge city, or the middle of nowhere. And so on.

Not to mention that the gap between perceived quality and actual quality can be a pretty big avenue for distortion as well.

I find the Parchment head to head matchups interesting. I’m not sure I trust Parchment, but a reliable comparison of yield for schools that admit the same kids would be another data point. I’m sure it would be easy to come up with a computer-generated ranking of desirability based on these matches.

https://www.parchment.com/c/college/tools/college-cross-admit-comparison.php?compare=Princeton+University&with=Yale+University
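A computer-generated desirability ranking from head-to-head matchups, as suggested above, could be sketched with an Elo-style rating update. This is not Parchment’s actual method, and the matchup data below is entirely made up; it just shows how pairwise cross-admit “wins” could be turned into a single ranking:

```python
from collections import defaultdict

def elo_rank(matchups, k=32, base=1500.0):
    """Rate schools from head-to-head outcomes.

    Each matchup is (chosen, passed_over): one student admitted to both
    schools who enrolled at the first. Standard Elo update with factor k.
    """
    ratings = defaultdict(lambda: base)
    for chosen, passed_over in matchups:
        ra, rb = ratings[chosen], ratings[passed_over]
        # Expected probability that `chosen` wins, given current ratings.
        expected = 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))
        ratings[chosen] = ra + k * (1.0 - expected)
        ratings[passed_over] = rb - k * (1.0 - expected)
    return dict(ratings)

# Hypothetical data: students choose Princeton over Yale 60% of the time,
# interleaved so the rating settles rather than swinging on long streaks.
pattern = [("Princeton", "Yale")] * 3 + [("Yale", "Princeton")] * 2
ratings = elo_rank(pattern * 20)
```

With a 60% head-to-head win rate in this toy data, the school chosen more often ends up with the higher rating; a real version would need the sample-size safeguards discussed later in the thread.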

My ideal ranking would be one with 50 or more factors which the user could weight any way they wanted, resulting in personalized results. I’ve seen this attempted but whenever I’ve tried it I didn’t get results that made sense. You’d probably have to do some pre-weighting of the factors, e.g., academic quality mattering more than good food even if a student said both were of highest importance. Of course this wouldn’t sell magazines.

You can probably trust Parchment, but not the ‘data’, since it is anything but data. (As they teach in AP Stats, self-reported anecdotes may be ‘anecdata’, but they’re not real data you can use for forecasting.)

@bluebayou, that’s true to a large extent. GIGO :wink: You phrased it better than I did. I should have said “Parchment as a data source.” I also wonder whether Parchment’s standards for statistical confidence are high enough, specifically on sample size. Playing around with Parchment’s comparison tool, I found a matchup in which they predict that 67% of students accepted to both schools will choose Central Michigan University over the University of Chicago. I have no doubt some students have made that choice, but I can’t imagine that it’s a common matchup or that the average student admitted to both would choose CMU over UChicago.
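The sample-size worry can be made concrete. Assuming (hypothetically) that a figure like 67% came from only a handful of cross-admits, a 95% Wilson score interval shows how little such a percentage pins down:

```python
import math

def wilson_interval(wins, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = wins / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (center - half, center + half)

# Hypothetical: "67%" from just 2 of 3 cross-admitted students.
low, high = wilson_interval(2, 3)
# The interval spans from roughly 20% to over 90% -- consistent with
# almost any true preference rate, i.e., the 67% is nearly meaningless.
```

With n = 3 the interval covers most of the range, which is exactly the “anecdata” problem: a headline percentage without a sample size behind it forecasts nothing.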

^ Go, Chips!

Parchment is definitely garbage. There are people who go on there and knowingly post false numbers just to skew the data, and there is no quality control on it.