Gaming the USNWR rankings

Cornell’s 2017 CDS says that in the fall of 2017 they enrolled 3,349 first-time freshmen and 614 transfers, so transfers represented 15.5% of the new undergrads on campus that fall. That’s a substantial percentage, certainly enough to move the needle on SAT/ACT medians. It’s not clear how many of those were guaranteed transfers, but the effect would be the same either way: they aren’t counted in the SAT/ACT medians.

In contrast, Harvard enrolled 1,687 first-time freshmen and 12 transfers, with transfers representing 0.7% of the new undergrads.
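Those percentages fall straight out of the CDS figures. As a quick check of the arithmetic (using the numbers quoted above):

```python
# Transfer share of new undergrads = transfers / (first-time freshmen + transfers),
# using the fall 2017 CDS figures quoted above.
def transfer_share(freshmen, transfers):
    return transfers / (freshmen + transfers)

cornell = transfer_share(3349, 614)
harvard = transfer_share(1687, 12)

print(f"Cornell: {cornell:.1%}")   # -> Cornell: 15.5%
print(f"Harvard: {harvard:.1%}")   # -> Harvard: 0.7%
```

Roughly one in six new Cornell undergrads that fall arrived outside the freshman SAT/ACT pool, versus fewer than one in a hundred at Harvard.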

Students can try to game the rankings, too. Make some of the easier-to-get-into colleges their targets. Be selective about it. Look for colleges that are much better than their acceptance rates might imply, or that have an upward trajectory in the rankings. They may be so-called “hidden gems.” Or they may have a long-established record of producing future PhDs, i.e., they are great ACADEMIC institutions that many prestige-seeking applicants (or their parents) may undervalue.

My son did that when he put the University of Chicago on his list years ago. He was a National Merit semifinalist (later a winner). Chicago wasn’t yet playing the game of maximizing selectivity by spending a ton of money selling the school to high school students. Their national ranking was then around #10 or #12. Not shabby by any means. Admissions director Ted O’Neill was reluctant to do what other universities were doing to maximize applications and thus increase their selectivity and prestige buzz.

Well, that was then. This is now. Chicago moved on up in applications and prestige rankings. My son graduated from what is a top-5 college today: #3 in US News. Did the college really improve? Would he be admitted today?

If US News and other rankings are using flawed data and arbitrary methods, then they should be producing very different results. In fact, many different rankings and metrics year after year all seem to expose a rather similar set of 25/50/75 top schools out of 3,000+ four-year institutions. Most of those top schools must be doing a few things right besides gaming the US News numbers.

The major college rankings depend on massive amounts of data. Certainly, there must be many errors in that much data, ranging from simple transcription errors to different (sometimes self-serving) interpretations of reporting instructions. Now and then, we see cases of outright fraud. However, IMO, the major stakeholders (colleges, standard-setting bodies, the ranking services themselves) all have (or should have) an interest in compiling accurate metrics. Most ranking formulas are complex enough that it would be difficult through gaming alone to move the needle very far (let alone to move the needle in the same direction equally far across multiple rankings, year after year).

So the biggest issue for consumers probably isn’t the margin of error in the data. Even with perfect data integrity, the rankings would still depend heavily on averages (of test scores, class sizes, faculty salaries, alumni earnings, etc.). The rankings (and the averages they use) obscure variance. Within any given university, there is likely to be much more variation among its own departments/professors/courses/students than the overall measurement differences between it and other, closely ranked schools.

And that’s the (mathematical) point.

Yes, and no. A GTO comes directly from the application for Undergrad Admission, while a regular transfer shows up with an AA/AS…

Cornell has articulation agreements (see below for an example) with in-state community colleges. It has been transfer-friendly for a long time. I’m not sure it’s fair to ding Cornell for gaming (for such transfers) when it may just be fulfilling its in-state mission (and responding to political pressure). It may be no different from the transfer programs of every highly-ranked public flagship.

https://www.oswego.edu/admissions/articulation-agreements

^ @bluebayou, you make a fair point about Cornell, which is an interesting hybrid: part purely private, part quasi-public. There can be many reasons for a school to accept large numbers of transfers, and accommodating students who start out in the state’s community college system is one of them. Another is simply to make up for any shortfall in freshman enrollment caused by incorrect yield projections, though the freshman waitlist can serve this function as well. But because at many schools, including Cornell, transfer decisions are made after the deadline for admitted freshmen to accept or decline offers of admission, transfers can be another convenient way to fill empty seats.

It is not the case, however, that all transfers are either GTO or arrive with an Associate’s degree. Cornell’s transfer admission website makes clear that they’ll accept transfers at any stage, for any reason. In fact, Cornell’s website currently shows that of 619 newly enrolled transfers, only 118 transferred from 2-year institutions. Presumably the other 501 came from 4-year institutions, with or without a prior guarantee of admission. And only 218 are from New York State, suggesting most transfers are not coming through articulation agreements with New York community colleges.

It’s not my purpose to “ding” Cornell in particular, and I don’t know what all their motivations are for having, as they describe it, a policy that “welcomes transfers to a degree unmatched in the Ivy League.” But it certainly is possible for schools to use transfers to game the US News ranking. And gaming or no, the fact that there’s such wide variation in transfer policies means that US News and those who rely on it are not comparing apples-to-apples in using enrolled freshman SAT/ACT scores to judge the overall strength of the undergraduate student body.

@bluebayou the guaranteed sophomore transfers at Cornell I am referring to are not the ones from NYS community colleges.

In theory, transfer students can be accounted for in student selectivity measures.

In practice, transfer student selectivity data (primarily prior college GPA) may not be as readily available to USNWR, especially in a standardized recalculation (USNWR’s emphasis on SAT/ACT scores for frosh student selectivity is probably due to the difficulty of getting a common recalculation of high school GPA that can be compared). So that may be why they just punt on transfer student selectivity.

However, it may be less difficult to include deferred admits (e.g. spring admission for fall applicants, guaranteed or preferred transfer that is offered only to some frosh applicants (not other transfers), etc.) in frosh admission stats for ranking purposes.

I question this. Most college and university administrators actually hate the rankings, unless they’re at the very top of the heap year after year. But many feel they have no choice but to “play the game,” because a less-than-stellar ranking can keep top students from applying, or from accepting an offer of admission if the student is also offered admission at a higher-ranked school. It can adversely affect faculty recruitment and retention. And it can even cause complications with the trustees and alumni donors who might rely in part on the rankings to assess how well the school is being run.

For most, playing the game isn’t a question of outright fraud, but of “teaching to the test”—tweaking policies that will improve their ranking, or at least prevent their decline relative to peer institutions that are themselves playing the game aggressively. And there are plenty of ways to do that, short of outright cheating. Many of those things don’t actually improve the quality of undergraduate education, but they make the school look better given a well-defined and predictable set of metrics. And no, it’s not that hard. It won’t get you all the way to the top, but some schools have moved pretty dramatically in the rankings as a result of all-out effort.

As for US News, they do tweak their metrics from time to time, but many believe they’re constantly rejiggering the metrics to produce the result they want and their readers expect, namely HYPSM at the very pinnacle and all the Ivies in the top 15 or thereabouts. And notice that ranking systems other than US News, using their own metrics, do not always parallel US News’ results.

Understand, Tom, and that’s why I inquired upthread whether anyone knew the actual number who accept the GTO: 5, 50, or 500? There are plenty of kids who enjoy their frosh experience at a school that accepted them in the senior spring and choose to forgo the GTO and stay put.

“If US News and other rankings are using flawed data and arbitrary methods, then they should be producing very different results. In fact, many different rankings and metrics year after year all seem to expose a rather similar set of 25/50/75 top schools out of 3,000+ four-year institutions.”

Actually, that’s one of the criticisms of it, and people have already mentioned it, I think: once a certain school achieves a ranking (say 50), the GCs and university admissions officials who fill out the survey, and who previously ranked that school lower or higher, now rank it at 50. So you’re in a self-fulfilling kind of loop where it would be hard to break out of that 50 rank unless you use the methods mentioned (ED, spring admits, test optional).

“Most ranking formulas are complex enough that it would be difficult through gaming alone to move the needle very far”

The US News formula would be straightforward to model, and it wouldn’t be a stretch for a college to grab some math or comp sci major to model it. All the info is either publicly available or can be purchased from US News, so the input data, the weightings, and the outputs are all available. Now, the raw data is not available, I agree, but you don’t really need that, since US News has already taken that data and provided a score for it, as they do with the GC ranking.
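As a sketch of what such a model might look like, here’s a toy weighted-sum score in Python. The category names and weights are placeholders I made up for illustration, not USNWR’s actual published methodology:

```python
# Toy model of a weighted-sum ranking score. Category names and weights
# below are hypothetical placeholders, NOT USNWR's actual weights.
WEIGHTS = {
    "peer_assessment": 0.20,
    "graduation_rate": 0.35,
    "faculty_resources": 0.20,
    "selectivity": 0.10,
    "financial_resources": 0.10,
    "alumni_giving": 0.05,
}

def overall_score(subscores):
    """Each subscore is assumed already normalized to 0-100, as in published tables."""
    return sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)

# With all subscores visible, a college can ask "what if" questions,
# e.g. how much would a 5-point gain in selectivity move the overall score?
base = {k: 80 for k in WEIGHTS}
bumped = dict(base, selectivity=85)
print(overall_score(bumped) - overall_score(base))  # ~0.5 points
```

The point is just that once the weights and subscores are known, the sensitivity of the overall score to any single input is a one-line calculation, which is exactly what makes the formula gameable.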

One of the problems with these rankings, especially the USNWR ranking, is that some of the measures (such as selectivity and reputation) these rankings are based on are influenced by the rankings themselves, creating a feedback loop. For example, higher selectivity will result in higher ranking and higher ranking will in turn cause higher selectivity. As a result, colleges would be hard-pressed to advance their rankings without playing games with the rankings.
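To illustrate that feedback loop, here’s a toy simulation. The dynamics (selectivity drifting toward the school’s established ranking score each year, and the score in turn partially absorbing selectivity) are invented purely for illustration, not an empirical model:

```python
# Toy illustration of the rank <-> selectivity feedback loop.
# All dynamics here are invented for illustration, not empirical.
def selectivity_shock(rank_score, shock, years=10, inertia=0.7):
    """Track the gap between selectivity and ranking score after a
    one-time selectivity boost of `shock` points."""
    selectivity = rank_score + shock
    score = float(rank_score)
    gaps = []
    for _ in range(years):
        # applicants chase last year's rank, pulling selectivity back toward it
        selectivity = inertia * selectivity + (1 - inertia) * score
        # selectivity feeds partially into the next ranking score
        score = inertia * score + (1 - inertia) * selectivity
        gaps.append(selectivity - score)
    return gaps

gaps = selectivity_shock(rank_score=70, shock=10)
# The gap decays geometrically: most of a one-time selectivity gain
# is pulled back toward the established rank within a few years.
print([round(g, 2) for g in gaps[:4]])  # -> [4.9, 2.4, 1.18, 0.58]
```

In this toy version, a one-time boost mostly evaporates while the underlying score barely moves, which is the “hard-pressed to advance” dynamic in miniature.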

The person who designed this ranking for USNWR was on NPR today. They built it to be interesting and to sell some subscriptions; it was never intended to be used as it is today. It is not predictive of excellence for any individual student. And the top ten are pretty much hard-wired into the mix based on the formula, a.k.a. the feedback loop. He said some of the major-specific listings are more useful. But in the end it was designed to be interesting, not a statement of fact.

Despite all the above criticism, the very same people, if constructing a list of the top 25, would include all the same schools in a very similar order. No system is perfect, but there is widespread consensus about the top schools, with only small variation.

People who think rankings do more harm than good won’t build rankings; that’s kind of the idea.

There may be widespread consensus on this forum, but many groups outside of this forum have different opinions. For example, when USNWR first started ranking colleges, the rankings were based entirely on assessments of top colleges by college presidents, and the rankings looked very different. Stanford was first. Berkeley, Michigan, UIUC, and UNC-CH all frequently appeared in the top 10. It wasn’t just a shuffling of the order of HYPSMC… In the year USNWR changed the ranking methodology to include more than just peer rankings, the list had huge changes. Public colleges no longer appeared in the top 10, and HYPSMC became the top 6. A similar pattern occurs among peer rankings today, with some flagships being near the top.

Employers also often have very different perceptions. In this year’s QS Employer rankings survey, employers ranked Berkeley, UCLA, and NYU in the top 10. Purdue and Penn State made the top 25, but some of the Ivies did not. If you ask employers in specific industries or specific locations, the rankings of top colleges will look even more different from USNWR.

Data, no public university is remotely competitive on a student-body basis with the top privates. This is just a fact. Does it really matter? Who can say. But if you took 100 random kids from the dorms at UCB and 100 kids from H/S, there would be very little crossover. The grad schools are a different matter, and Mich/UCB among others have premier programs on par with the privates, though still not quite equal to the very top private schools. For instance, all SCOTUS justices come from H/Y, and every president of the past 30 years has a degree from H/Y except Trump (Penn).

Not everyone believes that college rankings should be primarily based on the selectivity of the student body or the rate of producing SCOTUS justices/presidents in previous decades. It seems that college presidents and employers are in this group, whose rankings are largely influenced by other factors.

That said, the publics listed above often have a larger number of high-scoring students than top privates. For example, Berkeley’s CDS suggests they get about 1,600 kids per year who have an ACT math/English of 35+ or an SAT math/CR of 780/750. Harvard’s CDS suggests they get only ~600 kids with similar stats, a small fraction of the total at Berkeley.

Data, the reason why the admissions process is so very cutthroat is precisely because elite admissions is about far more than objective stats. As everyone knows, S/H/Y/P could fill their entire classes with students with perfect grades and scores. Everyone is welcome to their opinion, but across the country there is a broad consensus that H/S/Y/P are special and offer their graduates unique opportunities other schools do not. In a few select fields this is clearly true, especially if you include graduate degrees. Keep in mind that it’s quite likely that not a single enrolled student at UCB was also admitted to these schools, since the FA is usually far better at the privates, though there are always exceptions.

Which is better- Marriage of Figaro or the Mona Lisa? What should be ranked higher- Frank Lloyd Wright’s Fallingwater or the choreography from West Side Story?

I think most people can hold two ideas in their heads simultaneously-

1- The ranking system is quite imperfect, since in many instances it’s like comparing architecture to dance or opera to a painting-- i.e. many ways to argue that the basis for comparison is either wrong, or weighted incorrectly, or just plain impossible to do with any validity.

2- That being said, it is helpful (sometimes, in some circumstances, for some people, in some situations) to try to assign some measurements to various categories in order to figure out, all things being equal, where various institutions stand relative to other institutions. Is Williams better than U Michigan? Is Pomona better than Middlebury? I don’t think informed people believe that the rankings answer these questions. But it is helpful, in some situations, to have a rough idea of various groupings. A guidance counselor suggests Stonehill or Wheaton or Providence College for your kid, and you live in St. Louis and have never heard of these colleges, and have never met anyone who went there. The rankings can help you suss out “oh, Stonehill is the XYZ institution for people in New England, now I get it.”

Selectivity is also a double edged sword, and I think most people who are well informed on the subject of colleges and higher ed and admissions understand both sides of that sword. To wit- there are kids majoring in beer pong at Stanford and Harvard who will coast, try not to work too hard, have a fabulous social life, and graduate with whatever GPA they can eke out with minimal intellectual engagement. And there are kids at Southern CT State College who are burning the midnight oil, grabbing hold of every academic and intellectual opportunity they can find, and generally working at peak capacity to learn, grow, expand their analytical skills and ability to think and write. Well informed people know that JUST going to Stanford doesn’t make you a genius, or JUST going to Southern doesn’t make you stupid and that there are people who coast and people who work hard at every institution.

BUT- all things being equal, Stanford is often considered a “better” institution on many dimensions (whether you think those dimensions are important or not is another question) than Southern. And if you want to know why, there are many “ranking” systems which will be happy to explain to you that the size of the endowment matters. And the number of full professors vs. adjuncts matters. And mean SAT scores matter. And on and on.

You are free to reject the findings and rankings and do your own explanation- and I’m sure once in a generation, a kid gets admitted to both Southern and Stanford and rejects Stanford. So good- no ranking system is perfect.

That’s no doubt true at highly selective, holistic colleges. However, objective stats present an easier way of comparing different schools than trying to measure how well their respective student bodies excel at more subjective, holistic criteria. It’s also beside the point. My point was that not everyone believes the best college is the one that is most selective, not that Berkeley is more selective than Harvard.

In the Harvard lawsuit’s example year, they could only have admitted ~half the class with perfect-scoring applicants. Yield tends to be lower among perfect-scoring applicants, so I expect they’d be lucky to fill 1/3 of the class in the example year, less if perfect stats + perfect GPA… certainly not “their entire class.”

That is only true for a small minority of fields, such as elite investment banking. In certain other fields, Berkeley is likely to have the advantage. For example, Berkeley grads have a far higher rate than HYP grads of working after graduation at the most desired SV tech companies, such as Google, Apple, and Facebook. I’d expect there is better recruiting, more special connections, a stronger alumni network, etc. While location is a key factor, it’s not the only reason. In the recent USNWR survey, engineering deans and faculty ranked Berkeley among the top 3 colleges in undergraduate engineering; H and Y were both ranked ~30th.

Most students who favor UCB over HYPS do not apply to HYPS as a backup, so they do not show up in cross-admit results. That said, all cross-admit surveys I am aware of found there were more than 0 who chose UCB. Among students entering HYP, ~half do not claim FA, which is associated with an income near $250k… CA families with incomes this high would have a lower cost at Berkeley, suggesting that Berkeley would be less expensive for the majority of CA residents who attend HYP.