How will test-optional policies and the termination of standardized testing impact college rankings?

Bowdoin currently reports a composite SAT middle-range profile, which doesn’t appear much different from that reached by combining section scores:

Composite: 1340–1512
Combined: 1330–1520

Bowdoin started listing an SAT composite in 2019-20, so it is not available for the reference years being discussed. It’s also not entirely clear how the composite was computed; it at least involves a superscore. What is clearer is the ACT percentiles for the reference years, which predate any kind of ACT superscoring by Bowdoin. The reported composite and section scores were as follows; how significant these differences are is debatable.

2015-16 (only test submitters)
ACT Composite: 31-34
ACT Math: 30-34
ACT English: 32-35

2016-17 (both test submitters and test optional applicants)
ACT Composite: 30-34
ACT Math: 28-33
ACT English: 31-35
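To make the composite-vs-combined distinction above concrete, here is a minimal sketch. All numbers in it are hypothetical: the SAT section percentiles are chosen only so their sums match the published combined range of 1330–1520, and the per-student scores are made up.

```python
import statistics

# Hypothetical 25th/75th percentiles for each SAT section (chosen only so the
# sums match the published "combined" range of 1330-1520).
ebrw  = {"p25": 660, "p75": 750}
math_ = {"p25": 670, "p75": 770}

# "Combined": simply add the section 25th percentiles and the 75th percentiles.
combined = (ebrw["p25"] + math_["p25"], ebrw["p75"] + math_["p75"])
print(combined)  # (1330, 1520)

# A true composite is computed per student first, and the percentiles are then
# taken over those totals, so it can differ from the summed section figures
# (a student strong in one section is not always strong in the other).
students = [(700, 680), (650, 700), (720, 760), (680, 690),
            (740, 780), (660, 670), (750, 770), (690, 720)]  # hypothetical (EBRW, Math)
totals = [e + m for e, m in students]
q1, _, q3 = statistics.quantiles(totals, n=4)
print(q1, q3)
```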

@Data10, #39

My mistake concerning Duke. Here are the percentages of reporting for the last four CDSs which have been produced:

2015-16: 123%
2016-17: 112%
2017-18: 101%
2018-19: 125% (the % you reported from IPEDS => accurate reporting)

I have no idea what Duke is doing with its wildly different percentages.

For USC (assuming University of Southern California), these are the percentages for the last five years, inclusive of 2019-20:

2015-16: 116%
2016-17: 116%
2017-18: 116%
2018-19: 113%*
2019-20: 110%

  • For 2018-19, the 113% from the University’s CDS differs from your listed 99% from IPEDS. However, USC had the following freshman admits:

Degree-Seeking, 1st-Time: 1,678 male / 1,721 female / 3,399 total
Other First-Year: 332 male / 310 female / 642 total
Total: 2,010 male / 2,031 female / 4,041 total

1. Recompute the percentages taking the SAT and ACT from the 2018-19 CDS for Degree-Seeking, 1st-Time Students.

The reported percentages and numbers of students taking either or both tests were:

SAT: 61% and 2,059 students
ACT: 52% and 1,778 students

(a) The 2,059 who took the SAT, out of the 3,399 Degree-Seeking, 1st-Time students = 60.6%

(b) The 1,778 who took the ACT, out of the same 3,399 = 52.3%

(c) The sum of the SAT and ACT percentages = 112.9%, which matches the University’s reporting on its CDS.

2. Recompute the percentages taking the SAT and ACT from the 2018-19 CDS, including Other First-Year students.

(a) 2,059/4,041 = 51.0%

(b) 1,778/4,041 = 44.0%

(c) Total of percentages = 95.0%

Per your numbers, IPEDS reports 99% for USC, while the University reports 113% on its CDS; if the additional students are included, the figure would be 95%. I believe the additional students could be spring admits.
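For anyone following along, here is a minimal sketch of the arithmetic above, using the 2018-19 CDS counts quoted in this post:

```python
# Recompute the percent-submitting figures from the 2018-19 USC CDS numbers
# quoted above, using the two different denominators.
sat_takers = 2059
act_takers = 1778

first_time = 3399                     # degree-seeking, 1st-time students
all_first_year = first_time + 642     # + "Other First Year" = 4041

def pct(n, d):
    return round(100 * n / d, 1)

# Denominator = degree-seeking, 1st-time only (matches the CDS's ~113%)
sat_p, act_p = pct(sat_takers, first_time), pct(act_takers, first_time)
print(sat_p, act_p, round(sat_p + act_p, 1))   # 60.6 52.3 112.9

# Denominator = all first-year students (drops the sum below 100%)
sat_p, act_p = pct(sat_takers, all_first_year), pct(act_takers, all_first_year)
print(sat_p, act_p, round(sat_p + act_p, 1))   # 51.0 44.0 95.0
```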

As for what I was saying earlier, I don’t think that Harvard would need to boost its numbers by culling scores to make the upper and lower medians seem higher. I think that it would be the other schools – not named MIT – on the East Coast that might do this. I don’t think that Caltech would care either, because its scores are the highest in the US.

IPEDS and the CDS should match since they both use federal reporting. I misread University of South Carolina earlier from the IPEDS database as USC, rather than University of Southern California. South Carolina = 99% in IPEDS. Southern California = 113% in IPEDS.

The colleges with sums near 100% often have a statement on their website about choosing whichever test is better when a student submits both SAT and ACT (Penn’s website has such an example). Colleges with totals well over 100% usually do not have such a statement. This implies that some of the colleges could have a difference in how the scores are listed internally for admission decisions, rather than just in how they are reported externally.
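To make the two conventions concrete, here is a minimal sketch; the class below is hypothetical, and the point is only how dual-submitters are counted:

```python
# Hypothetical admitted class: each student may have an SAT score, an ACT
# score, or both. The two reporting conventions differ only in how students
# who submitted both tests are counted.
students = [
    {"sat": 1520, "act": None},
    {"sat": None, "act": 34},
    {"sat": 1490, "act": 33},   # submitted both
    {"sat": 1560, "act": 35},   # submitted both
]
n = len(students)

# Convention 1: report both scores -- a student with both tests lands in both
# pools, so the two percentages can sum to well over 100%.
sat_pct = sum(s["sat"] is not None for s in students) / n
act_pct = sum(s["act"] is not None for s in students) / n
print(sat_pct + act_pct)   # 1.5, i.e., 150%

# Convention 2: choose the better test -- each student is counted once, under
# whichever test the college treats as stronger, so the sum is ~100%.
def better_test(s):
    if s["sat"] is None:
        return "act"
    if s["act"] is None:
        return "sat"
    return "sat"   # placeholder tie-break; a real college might use a concordance table

counts = {"sat": 0, "act": 0}
for s in students:
    counts[better_test(s)] += 1
print(counts["sat"] / n + counts["act"] / n)   # 1.0, i.e., 100%
```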

Among highly selective colleges, the reporting convention appears to have only a loose correlation with selectivity or with test scores. For example, the table below shows how the rate of colleges with % submitting SAT + % submitting ACT ≈ 100% compares to ACT scores. In all listed score ranges except for the top 3 colleges, the rate was very similar. In all 4 listed groups, ~25% of colleges used the choose-the-best-score format with ~100% submitted, and the majority used the submit-both-scores format with >100% submitted. I did not list a group for the 3 highest-scoring colleges with >34 due to the small sample; all 3 submitted both scores with the >100% format. I only included colleges for which ACT information was not blank in IPEDS, which excludes many test-optional colleges.

Colleges reporting ~100% submitters, by average of 25th/75th ACT Composite
ACT 33.5 to 34 – 4/17 = 24% of colleges reported ~100% of students submitting tests (Hopkins, Rice, ND, Penn)

ACT 32.5 to 33 – 6/22 = 27% of colleges reported ~100% of students submitting tests (Dartmouth, Webb, Colgate, Emory, Tufts, W&L)

ACT 31.5 to 32 – 6/22 = 27% of colleges reported ~100% of students submitting tests (Grinnell, NYU, Stevens, Tulane, Richmond, Villanova)

ACT 30.5 to 31 – 5/19 = 26% of colleges reported ~100% of students submitting tests (Brandeis, Lehigh, RPI, College Park, USAFA)

Dropping down several tiers to the colleges with the lowest reported ACT scores, there was little difference in the rate… I am not familiar with any of these colleges and did not review them. Some may be 2-year colleges or have incorrect reporting.

ACT <17 – 6/25 = 24% of colleges reported ~100% of students submitting tests (Edgar Waters, Tampa Bay Med Prep, SUNO, Le Moyne, Louisburg, Gupton)

@Data10, #43

I previously gave an example regarding the bolded portion: the University of California, at the height (literally, I guess) of its campuses’ reporting a combined percentage of ACT & SAT scores, wanted to see an upward trend of improvement, regardless of whether the highest score was considered particularly outstanding – most likely it was not, and this applied mostly to the subset of students who attended underfunded high schools. Now, of course, we’ll have to see what the campuses do in going test-blind in 2023 and 2024, and then in 2025, to see which standardized test(s), if any, will be required.

But if the trend of reporting on the colleges’ CDS forms is any indication – e.g., grade presentation – then I would expect some campuses to continue “to game” the numbers for the highest and best appearance with respect to admission standards. Some campuses report weighted grades when the CDS asks for unweighted, and some just bypass the presentation completely.

I added some things in my square brackets, just as self-explanatory notes. If they’re not in line with your thinking, feel free to edit my thoughts.

So you’re saying that no matter what level of admissions selectivity the colleges reside at, ~25% +/- 2% of them will still present a one-student, one-score percentage, and ~75% will present >110%. And you used the ACT as your measure of selectivity instead of the SAT.

I did state something to the effect that higher-tiered colleges are going to effectively “game” their CDS (and therefore their IPEDS) numbers to appear even more selective (and also stated later that Harvard and Caltech probably wouldn’t care to game theirs). I also stated that I thought the ~100% presentation of SAT and ACT percentages would seemingly be the correct way. But one of the ways to present a higher SAT would be a “sweet spot” of ~110% (though it can be higher).

This way, when students present both scores, say a 35 and a 1,590, a college can include both. Without looking, the ~100% presenters would choose the 1,590 SAT. But since both scores are above the medians of all colleges, keeping both scores would be at least tempting.
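Here is a minimal sketch of that temptation (all scores below are hypothetical): if a dual-submitter’s scores are kept in both pools, both reported distributions can move up, whereas choosing one test leaves the other pool untouched.

```python
import statistics

# Hypothetical score pools before a dual-submitter with ACT 35 / SAT 1590 is added.
sat_pool = [1450, 1480, 1500, 1530]
act_pool = [32, 33, 34, 34]
dual = {"sat": 1590, "act": 35}

# "Choose one" convention: the dual-submitter is placed in only one pool
# (here, the SAT pool), so the ACT distribution is untouched.
print(statistics.median(sat_pool + [dual["sat"]]),  # 1500
      statistics.median(act_pool))                  # 33.5

# "Keep both" convention: the same student raises the ACT median as well.
print(statistics.median(sat_pool + [dual["sat"]]),  # 1500
      statistics.median(act_pool + [dual["act"]]))  # 34
```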

Again, colleges game the CDS GPA presentation all the time. There are numerous colleges that present weighted GPAs, skewing the newly added metric of the percent of students with a 4.0 GPA, as seen in CDS item C11. The college with which you are most familiar presents 95.4%, even though the instructions for C11 call for unweighted GPAs on a 4.0 scale.

UCLA presents 47.7%, which is high, but this is because UC counts an “A-” grade in a class as a 4.0, in addition to UC only “marking” 10th and 11th grades (though it obviously looks at all grades).
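A minimal sketch of how much the grade-scale convention alone can matter for the C11 figure (the transcript below is hypothetical): the same grades yield a 4.0 under an A- = 4.0 convention but not under a conventional unweighted scale.

```python
# Hypothetical 10th/11th-grade transcript: letter grades for 12 courses.
grades = ["A", "A", "A-", "A", "A-", "A", "A", "A-", "A", "A", "A-", "A"]

conventional = {"A": 4.0, "A-": 3.7}   # a typical unweighted scale
uc_style     = {"A": 4.0, "A-": 4.0}   # UC counts an A- the same as an A

def gpa(grades, scale):
    return sum(scale[g] for g in grades) / len(grades)

print(round(gpa(grades, conventional), 2))  # 3.9 -> not a "4.0" student for C11
print(round(gpa(grades, uc_style), 2))      # 4.0 -> counted as a 4.0 in C11
```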

Most other colleges don’t even fill out the percentages in the various tiers and leave the average GPA blank. So I think there is sufficient evidence that colleges – elites, semi-elites, etc. – try to game the CDS and IPEDS.

@firmament2x, @Data10: Speaking of Duke and USC, for some reason, these two private institutions have obviously been up-and-coming in recent years. The other one, in my opinion, is NYU.

Due to marketing efforts? Expansion through M&A? That was the case for NYU: the university “completed its merger with the Polytechnic Institute of NYU on January 1st, 2014, officially bringing the discipline of engineering back to the University for the first time in four decades.”

Checking the newest issue of US News, I found that there’s no significant change in the 2021 college rankings (i.e., the national universities category).

In other words, there’s no impact on college rankings whatsoever from test-optional policies and the termination of standardized testing.

@CalCUStanford Most rankings that use test scores in their calculations take the data from IPEDS (basically, Common Data Set data). The test score data for the current seniors, high school class of 2021, will not be available at IPEDS until June 2022. That is a long way away.

With many rankings publishing in Sept, I don’t know whether they typically use the new IPEDS data from the recent summer or whether it’s from the year prior, i.e. even older. It’s possible that any impact on rankings - if the ranking people don’t change their formulas before then - would not occur until Sept 2022 or maybe even Sept 2023.

Test optional is nothing new and is not expected to have much impact on rankings. I think the more interesting question is how well-ranked test-blind colleges, such as Caltech, UC Berkeley, and UCLA, will be handled under the new USNWR ranking methodology.

If the future USNWR rankings treat Caltech as having the lowest test scores of any ranked college on the “National Universities” list, it is likely to hurt Caltech’s ranking, even with the small 5% weighting. Seeing a notable drop in the rankings of Caltech, UCB, UCLA, and other test-blind schools would cause some readers to doubt the reliability of the rankings, so I suspect that USNWR will modify this methodology in the first year for which Caltech provides no score information.

For the most part, the rankings are based on relative prestige (i.e., HYPSM-caliber schools will forever remain at the top, given the public’s perception of them). As @Data10 points out, US News will modify its ranking methodology to continue to rank the Ivies/Ivy+ schools at the top, b/c otherwise people (and probably the schools themselves) would see the rankings for the utterly useless popularity lists they are, while still benefiting those schools that play their games.

For prospective applicants reading this thread, what does this mean for you? It means that the rankings are utterly useless for anything but the college search process: looking for schools that offer your intended major and are the right combination of location and student-body size. There’s not going to be a noticeable difference in the quality of education between a school ranked 35 and one ranked 40; if there is, it’s likely because of other factors, such as school resources, student-body size, etc.

That’s a pretty serious ethics charge against USNews. Any evidence to support this?

@RichInPitt See: https://www.bostonmagazine.com/news/2014/08/26/how-northeastern-gamed-the-college-rankings/ (I have nothing against Northeastern, it’s a great school, but like many others, it did focus on maximizing its stats to rise in the rankings.)

““You can love us or hate us, but we’re not going away,” says U.S. News editor Brian Kelly. “University officials realized we’re much more valuable to them than not.” He deflects criticism, saying, “It’s not up to us to solve problems. We’re just putting data out there.” He does, however, admit that the rankings system can be gamed.”

US News Rankings: https://www.usnews.com/education/best-colleges/articles/how-us-news-calculated-the-rankings

“20% of the ranking is undergraduate academic reputation by a peer assessment survey” – i.e., no wonder the same schools are always ranked the highest. While I’m not disputing that the schools ranked at the top are good when it comes to academics (HYPSM definitely deserves its T5 rank, b/c the caliber of the academics there just TENDS to be that much higher than at the rest of the schools), the ranking list’s changes within the T20, or outside of it (a range most schools don’t move into or out of), IMO seem to be more for show than actual proof that school x is REALLY better than school y in any given year.

Just my 2 cents.

Grades/class rank are important and are now becoming more so. At the most selective schools, absent a hook, you pretty much have to be in the top 5, or even top 3, in your class. And I mean students, not %. The GC LOR is checked for key phrases like “best in class,” “best student in 10 years,” “our top student,” etc. Also, ECs and interests are taken into account.

So, yeah, maybe a kid who doesn’t test well but is still val/sal or 3rd in his class, taking the most difficult course load the school offers and then some, also gets rave LORs from the GC and teachers saying this student is the top of the top, has a great essay and an interest in an intellectual field, and shows academic curiosity outside the norm – I don’t think too many such students would be low scorers on tests. And if they are, these schools got some prizes there that they would ordinarily have missed out on.

When the USNWR rankings first came out in the 80s, there were no weightings for different categories. The rankings were 100% based on the survey given to academics in which they rate colleges on a scale of 1 (marginal) to 5 (distinguished). The top 7 on this survey were:

  1. Stanford
  2. Harvard
  3. Yale
  4. Princeton
  5. Berkeley
  6. Chicago
  7. Michigan

The results of the marginal/distinguished academic survey still look similar today. HYPS and Berkeley are often among the top 6, and Chicago and Michigan also do well. It wasn’t until 1989 that USNWR started adding in weightings, which caused Berkeley to plummet from #5 to #24 and Michigan from #7 to #25. The additional weighting categories are well correlated with endowment/spending and selectivity, which hurts the publics more than HYPS.

The specific weightings used to generate the rankings are arbitrary. There is nothing scientific about the “best” colleges emerging only if you choose a 10% weighting on financial resources per student, 7% on faculty salary, 3% on alumni giving, … I very much doubt that USNWR chooses these weightings purely because they are ideal for identifying the best college. Instead, I expect one of the primary goals of the weighting selection is to generate profit – originally through selling magazines, now more so through selling College Compass. The USNWR college rankings have become far bigger and more profitable since the original rankings back in the 80s. As such, I expect that 2 contributing factors in the weighting assignments are choosing weightings that result in the largely familiar and expected names of colleges at the top, so readers are more likely to trust the rankings as accurate, and having minor differences from year to year, which discourages readers from just reusing the previous year’s list.
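To illustrate the arbitrariness, here is a toy sketch; all college names, category scores, and weights below are made up and are not USNWR’s actual inputs. The same underlying data produces a different #1 depending on which weights you happen to pick.

```python
# Hypothetical normalized category scores (0-100) for two made-up colleges.
colleges = {
    "Reputation U": {"reputation": 95, "resources": 60, "selectivity": 70},
    "Resources U":  {"reputation": 75, "resources": 95, "selectivity": 90},
}

def rank(weights):
    scores = {name: sum(weights[cat] * val for cat, val in cats.items())
              for name, cats in colleges.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Two equally "reasonable" weighting schemes...
print(rank({"reputation": 0.6, "resources": 0.2, "selectivity": 0.2}))
# ['Reputation U', 'Resources U']
print(rank({"reputation": 0.2, "resources": 0.4, "selectivity": 0.4}))
# ['Resources U', 'Reputation U']
# ...flip the order. Neither weight set is uniquely correct, which is the
# sense in which the published weightings are arbitrary.
```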

For example, USNWR recently added a small 5% weighting for “social mobility.” If you truly wanted to measure “social mobility,” you might emphasize the portion of students who are lower SES and look at a good metric of their success. If a college has a large portion of lower-SES students and those many lower-SES students are successful, that college would rank well in “social mobility.” However, this would hurt the rankings of the familiar HYPSM names since they have few lower-income students. Instead of ranking based on the percent who are lower income, USNWR ranks on how the (possibly few) lower-SES kids do after being admitted. HYPSM… and other colleges can and do rank well in “social mobility,” even if they have very few lower-income kids.
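A minimal sketch of the difference between those two approaches (all numbers are hypothetical; “Pell share” is just a stand-in for the portion of lower-SES students):

```python
# Two hypothetical colleges: share of enrolled students who are lower income
# (Pell share used as a stand-in), and the graduation rate of those students.
colleges = {
    "Elite College":  {"pell_share": 0.12, "pell_grad_rate": 0.95},
    "Access College": {"pell_share": 0.45, "pell_grad_rate": 0.80},
}

for name, c in colleges.items():
    # Outcome-only score (roughly the approach criticized above): looks only
    # at how the lower-income students who are enrolled end up doing.
    outcome_only = c["pell_grad_rate"]
    # Enrollment-weighted score: also rewards actually enrolling many
    # lower-income students.
    weighted = c["pell_share"] * c["pell_grad_rate"]
    print(name, outcome_only, round(weighted, 2))

# Outcome-only favors Elite College (0.95 vs 0.80); the enrollment-weighted
# version favors Access College (0.11 vs 0.36).
```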

It’s a similar story with the new small 5% weighting on graduate indebtedness this year. The top-ranked schools in the new category are HYPSMC… Had USNWR considered the portion of kids who were lower income rather than just the portion who took out federal loans, and penalized institutional loans and not just federal loans, then you’d see some different colleges ranked near the top. However, as they have it now, the indebtedness ranking is similar to the overall USNWR ranking, so it has little impact.

Along the same lines, I doubt that USNWR is going to let Caltech have a large drop in the rankings because it is test blind. Instead, I expect USNWR will change its methodology for test-blind colleges.

A related segment from Adam Ruins Everything about USNWR is at https://www.youtube.com/watch?v=EtQyO93DO-Q .

Few admitted students submit class rank at “the most selective schools.” Some example numbers from the 2019-20 CDS are below. The HSs with the largest numbers of admits rarely submit rank. Even if they did, rank may be based on goofy HS weighting systems that require doing unusual things, beyond just getting all A’s while taking the most rigorous core classes, to be val/sal. HSs also vary widely in selectivity and student qualifications. For example, the top x students at a magnet like TJHSST are not the same as the top x students at a random non-selective HS, so it’s not reasonable to set an arbitrary cutoff like “must be in the top x.”

WUSTL – 19% submitted rank
Cornell – 22% submitted rank
Brown – 24% submitted rank
Northwestern – 24% submitted rank
Stanford – 25% submitted rank

The Harvard reader guidelines do give examples of rating well partially due to phrases like “best in career” or “best in many years”, but do not suggest “best” means highest rank.

Thanks @Data10 for the extremely informative post about USNWR, and for linking to the Adam Ruins Everything video—I had forgotten about that show! :smile: