You should definitely let them know how wrong they are.
Both can be true.
My fact-free hypothesis is that the SAT-M score matters more at a school like Cornell, which has a large STEM program. Quant ability can also be important in some social sciences, such as Econ, which is becoming more quant-heavy at the highly selective schools.
If we return to the question of whether the war on the SAT is misguided, I'd say the case for the "yes" supporters continues to grow.
Even if we take the reasonable stance that the SAT matters more for some schools than others, the maligning of the test itself is just overblown. It has nothing to do with the test if ECs, rec letters, and general preparation favor the privileged, yet the test is the focal point for all this class-war ire. It seems more and more that, at least at the Ivy+ level (and others, including the likes of Conn College), there is strong value in testing, and hence this ire for testing in and of itself is misguided. I do feel it's about time to stop using the test as a punching bag and get to the real work of fixing K-12.
Yep. Anytime another school reverses the test optional experiment, a bunch of people come out and twist themselves into knots trying to dissect the decision and find some "fault".
It's dealing with a very strong component of the human condition, though: admitting when you're wrong. People hate that.
This is not odd at all. The data were collected from a new student survey described in the report, copied below:
"While Cornell did not receive test scores from most applicants, data from the Fall 2022 administration of Cornell's New Student Survey indicate that 91% of matriculating first-years took either or both the SAT and/or the ACT. Indeed, 70% had taken the SAT (or the ACT) test multiple times."
"This survey of first-year students had a 79% response rate and asked students how many times, if any, they had taken the SAT and the ACT. It also asked students to self-report their scores. The comparison of self-reported test scores with official test scores when possible indicates that the score self-reports are accurate."
No voluntary survey with self-reported data is ever perfect, but this one seems credible.
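For what it's worth, the validation step the report describes (comparing self-reports against official scores where both exist) is simple to sketch. Below is a minimal illustration in Python; the data is entirely made up and the variable names (`official`, `self_reported`) are hypothetical, since Cornell's actual dataset and method are not public.

```python
import numpy as np

# Hypothetical paired data: official vs. self-reported SAT scores for the
# subset of respondents whose official scores are on file.
rng = np.random.default_rng(0)
official = rng.integers(120, 161, size=500) * 10   # scores 1200-1600, steps of 10
# Simulate mostly accurate self-reports, with occasional 10-point inflation.
self_reported = official + rng.choice([0, 0, 0, 10], size=500)

# Simple agreement checks one might report.
corr = np.corrcoef(official, self_reported)[0, 1]
mad = np.mean(np.abs(official - self_reported))
exact = np.mean(official == self_reported)

print(f"correlation:              {corr:.3f}")
print(f"mean absolute difference: {mad:.1f} points")
print(f"exact matches:            {exact:.0%}")
```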
The report and others like it fail to take into account multiple factors: 1) During the test optional period, what criteria were admissions staff using to evaluate applicants, and did anyone think to consider "flaws" in the approach taken? 2) What impact did "online" learning have on these students during the test optional period? Did those who submitted and prepared for SATs perhaps HAVE MORE resources available to them than others during the covid period?
The joke is to say test optional didn't help diversity WHEN the admissions people were the ones picking the students and chose the student profile that they got.
These studies all gloss over underlying variables at play. The real assessment of test optional would be to track the students who were not in grades 10 to 12 during the covid period and chart their performance. Again, you don't run a test optional policy for only two years and then reverse course.
While all students were impacted by online learning during covid, I am sure some were impacted more than others. I am sure some students, regarding both their coursework and their SAT prep, had better resources than others.
Still, while every effort should be made to make standardized tests as fair as possible across the board, there is a level of granularity at which the data becomes impractical, if not impossible, to obtain. I don't think it is fair to expect Cornell to look at every possible impact of a once-a-century pandemic on every student who applies.
An argument can be made that, in the interest of fairness during a unique situation like the pandemic, ONLY GPAs and standardized test scores should be considered. Who was participating in impressive ECs, getting great letters of recommendation, and getting essay tutoring while remaining at home during the pandemic? The wealthy.
Right, it makes sense to make policy decisions on a sample of "self reported" scores without any context. If they actually submitted that study for peer review based on the data they used, they would be laughed at. Why not just admit that the admissions office was overwhelmed and unprepared to deal with the increase in applications during the test optional period, and that these were the results?
They specifically reference context. If you think they are intentionally being dishonest, then nothing will convince you, regardless of what they say. I would wager you feel the same dishonesty is coming from the other highly selective schools that have reinstituted a requirement (in some way) for standardized testing.
The best thing to do for someone from your position of belief is to vote with your dollars and to not attend (or let a child attend) one of these institutions. There are plenty of other options.
Basically, changing a major policy based on a survey. They can do whatever they want. It is amusing to watch them try to rationalize a decision they were going to make anyway.
Just want to remind everyone that four Cornell schools (CALS, Arch, Dyson, Nolan) have been test blind/free (not test optional) since Covid… so applicants did not have the choice to send test scores. Obviously this impacts the proportion of applicants who submitted scores. These four schools will remain test free for the Class of 2025.
CAS, Engineering, HumEc, Brooks, and ILR have been TO since Covid and are test recommended for the Class of 2025.
It's like any other research: when the data clearly indicates a pharmaceutical is dangerous, the study ends early. Same with cancer drugs found early in the study period to be far and away better than placebo: they often get offered to people on the placebo track, with the understanding that they are not fully vetted and could have side effects.
Reversing a policy early sends more canary-in-the-coal-mine vibes than anything else: the data is likely clear, and potentially concerning, so why keep a bad policy?
Unfortunately there is also a jump-on-the-bandwagon vibe to all of this, both when the top schools (minus MIT and Georgetown) were quick to rabidly endorse and extend TO after the real covid closures were over, AND now, popping right off the policy just because peer schools did.
The way colleges have handled all of this over the last few years is poor. Some clearly game their ranges. Some have let whispers out to high schools that they really prefer scores (from the wealthy private high school kids).
That is where any war should be, not on the SAT.
Before declaring Cornell is right, and the 2000+ test optional colleges are wrong, you should read the specific statements in the context of the report and consider whether they are consistent with previously published studies or draw any new or unexpected conclusions.
For example, studies previously linked in this thread have found that the SAT adds significantly to prediction of first-year GPA beyond HS GPA in isolation, but adds little to prediction of cumulative GPA beyond HS GPA plus a good measure of course rigor or strength of schedule. The quoted statement and the preceding sentence about "first year GPAs" imply Cornell is looking at something similar to the former: first-year GPA at Cornell after controlling for HS GPA, without a control for HS course rigor or strength of schedule.
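To make the distinction concrete, here is a minimal sketch of the two model comparisons, in Python with synthetic data (none of the underlying datasets are public, and the coefficients below are invented purely to show the mechanics):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000

# Synthetic students: course rigor drives both HS GPA and SAT, and all
# three feed into first-year college GPA.
rigor = rng.normal(size=n)                       # strength of HS schedule
hs_gpa = 0.5 * rigor + rng.normal(size=n)
sat = 0.6 * rigor + 0.3 * hs_gpa + rng.normal(size=n)
fy_gpa = 0.4 * hs_gpa + 0.4 * rigor + 0.1 * sat + rng.normal(size=n)

def r2(y, *predictors):
    """R-squared of an OLS fit of y on the given predictors."""
    X = sm.add_constant(np.column_stack(predictors))
    return sm.OLS(y, X).fit().rsquared

# Incremental R^2 of SAT beyond HS GPA alone (the Cornell-style analysis):
print(r2(fy_gpa, hs_gpa, sat) - r2(fy_gpa, hs_gpa))
# Incremental R^2 of SAT once course rigor is also controlled for:
print(r2(fy_gpa, hs_gpa, rigor, sat) - r2(fy_gpa, hs_gpa, rigor))
```

Because the SAT proxies for rigor in the first specification, its incremental R² shrinks once rigor enters the model, which is exactly the pattern the linked studies describe.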
The report provides a lot of specific numbers, with detailed tables and graphs. For example, there is a table with dozens of entries showing the specific test optional rate by college and year. There are graphs showing the specific rate of test optional applicants by SAT score. There are graphs showing detailed information about the demographic changes, with specific percentages.
However, when it comes to the GPA difference between test submitters and test optional students discussed above, there are no specific numbers. Instead the report says both groups performed well, but test optional students averaged "somewhat weaker GPAs" than test submitters. What does "somewhat weaker" mean? Why do you think this partial report, publicly published on Cornell's website as the explanation for their switch to test required, chooses not to list any specific numbers about the performance differences in first year?
I do not find it surprising that an analysis that seemingly does not control for course rigor / strength of schedule found "somewhat weaker" first-year GPAs.
There were a lot of "fluff" adjectives used in the report, which was odd and set off my alarm bells.
BTW, in April 2020 researchers at the University of Chicago published a study which concluded that GPA is more indicative of college "success" than the SAT. The study even addressed "grade inflation". I will try to find and post it. GPA has become a boogeyman for some schools… I mean, why let years of real school work get in the way of someone's bias?
That study was about just the ACT and one school district. I agree that GPA is great when comparing students in the same school district. It is when comparing students from different districts and states that things become tricky.
According to their report, Cornell did control for students' high school GPAs as well as other personal and high school attributes. It is not clear what personal and high school attributes they looked at, but they did account for more than just high school GPAs.
Only one school, as far as I know, has published hard numbers on this. UT-Austin stated that the average GPA of test optional students was 0.86 points lower than that of test submitters, which is quite remarkable.
It's not so remarkable when one realizes 75% of the UT Austin class is in-state auto admits. They still have to accept the top 6% of students from the same under-resourced HSs, regardless of how low their test scores are (same as they did prior to being TO). I hope they publish the GPA difference between the top 6% from well-resourced vs under-resourced schools going forward.
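That composition argument is easy to illustrate. Here is a toy Python simulation (every number is invented; this is not UT's data) showing how a submitter vs. non-submitter GPA gap can emerge purely from which kinds of schools the two groups draw from:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented college-GPA distributions for two pools of auto-admitted students.
well_resourced = rng.normal(3.4, 0.3, size=6000)
under_resourced = rng.normal(2.8, 0.3, size=4000)

# Assume score submission tracks school resources, not performance:
# most well-resourced students submit, most under-resourced students do not.
submits_w = rng.random(6000) < 0.8
submits_u = rng.random(4000) < 0.2

submitters = np.concatenate([well_resourced[submits_w],
                             under_resourced[submits_u]])
non_submitters = np.concatenate([well_resourced[~submits_w],
                                 under_resourced[~submits_u]])

print(f"submitters mean GPA:     {submitters.mean():.2f}")
print(f"non-submitters mean GPA: {non_submitters.mean():.2f}")
print(f"gap:                     {submitters.mean() - non_submitters.mean():.2f}")
```

The gap here comes entirely from group composition, which is why the well-resourced vs. under-resourced breakdown suggested above would be far more informative than the headline 0.86 figure.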
So they knew their conclusion was based on "tainted" data.
Dozens of schools have published hard numbers, maybe hundreds. However, I think you are only considering the small handful of schools that have switched to test required in the past few months.
For example, the Bates test optional study linked above found the following hard numbers for cumulative GPA. Bates chose to focus on cumulative GPA, rather than first-year GPA, because they wanted the difference to look as small as possible.
SAT Submitters: 3.16
SAT Non-submitters: 3.13
That is a difference of just 0.03 GPA points.