The Misguided War on the SAT

I’m reopening on slow mode. I’ve cleaned up some of the more egregious ToS violations. There are several that are a bit questionable, but I’m going to err on the side of keeping the conversation moving.

As I’m still verklempt over the demise of the Golden Bachelor marriage, I will not be entertaining questions on why I deleted your post but not so-and-so’s. If you see a problematic post, flag it.

With that said, I’ll further remind members of the forum rules: “Our forum is expected to be a friendly and welcoming place, and one in which members can post without their motives, intelligence, or other personal characteristics being questioned by others.”

Examples of phrases to avoid (and this list is not exhaustive):

  • Let me explain it to you in words you can understand

  • You clearly don’t understand

  • That shows how little you know

http://talk.collegeconfidential.com/guidelines

The conversation tends to work better when one does not attempt to put words in another user’s mouth.

I’d further suggest that users familiarize themselves with Godwin’s law; bringing the word “nazi” into a conversation unrelated to early/mid 20th century Germany generally won’t end well.

5 Likes

“Hundreds of millions” of data points in statistics prove you wrong. Stats is a discipline of meaningful correlations. Undoubtedly, if you ran a multivariate analysis on any factors that correlate with privilege (brand of car mommy drives, square footage of the parents’ house, parents’ zip code, SAT scores), you’d find a small albeit meaningless improvement in the correlation with your endpoint.

Maybe we should require those factors too. Girls score lower than boys in math on the SAT, and the gap widens at higher levels of math. Should we include SAT scores, gender, height, and hair length in the assessment of math students? That would definitely add your version of “statistical power”. It’d also be meaningless.

1 Like

It’s not “my” statistical power; it’s that of the University of California, which has some of the top researchers in the world. If they can’t conduct a decent study with millions of data points over decades, no one can. But note that their conclusion – the SAT adds value in addition to GPA – is the opposite of what they hoped to find. Thus, we can be assured that their findings are sound.

Not to mention, why else would the world-class researchers at D, Y & H have concluded something similar? As richly endowed schools, they could afford to require any application item that they thought best for their institutions.

5 Likes

Just to clarify, your strong evidence from such illustrious scientists ended up with the SAT being rejected as an admissions criterion, right?

Must have been awesomely impactful and convincing. They didn’t reject grades or extracurriculars, right? Before you say they’re completely against so-called “meritocracy” in the service of DEI or whatever you call “fairness”…

If you are talking about UC, this was settled in court. It’s not completely clear what UC would have done in the absence of the court case.

2 Likes

Again, not my evidence; it was the evidence of the University of California’s admissions over decades. (UC began requiring test scores back in the dark ages and kept them until COVID.)

But UC made a (political?) decision to ignore (or “reject,” if you prefer) its own data findings: 1) the extra value, even though statistically significant, was not worth it to their holistic admissions given the cost/time of the test to the state’s residents (the “juice is not worth the squeeze,” in today’s lingo); and 2) UC claims that testing hinders accessibility.

The UC research reports and decisions are all available online, and have been posted upthread.

Schools make different management decisions with the same data based on their individual needs. (MIT, Dartmouth, Yale, Harvard, Caltech… chose differently.)

Disagree. If UC truly believed in something different, they would not have settled the case. They have plenty of lawyers on staff to appeal.

I’m happy to agree to disagree on this point. I simply don’t think we can know this for sure.

I’m in agreement with your other point about the value of UC research on the utility of tests in admissions, though!

1 Like

It is fascinating how data about the SAT are being interpreted to validate the narrative that schools made a mistake abandoning the test, leading to underqualified applicants, and that they are now realizing their mistake.

First, skepticism of middle- and upper-middle-class privilege has been increasing at elite colleges for decades – during which time SATs were very much in play. We’re just not going back to the time when a random upper-middle-class suburban high school gets 1/3 of its kids into Ivy League schools. If anything, the admissions stance and the departments themselves have become more diverse and more focused on inclusion.

Second, the SAT was made optional by schools to protect the number of underprivileged applicants they could attract during COVID. There are reams of data showing that underprivileged students’ access and education were disproportionately affected by COVID-19, and schools specifically cite this rationale in their press releases. At our committee meetings locally, we talk about how, in our decidedly privileged area, no one was admitted without standardized test scores during the so-called test-optional period. Zero. That was always an accommodation for the underprivileged, not an excuse for grade-inflated privileged kids who didn’t “test well”.

Now these same schools are very publicly announcing that standardized testing is again required. Same scenario. It’s not about you, privileged people. It’s about announcing to schools and teachers in underprivileged contexts that this is a path to opportunity. Those are the people that schools are really focused on attracting. Ever hear of “likely letters”? That’s the idea.

And about that false narrative of students being so underprepared after SAT-optional (or DEI): Harvard’s undergraduate dean just spoke last week about grade inflation, and their internal review showed that a significant factor in grade inflation was that current students are simply better prepared and perform at a higher gradable level than students 10-20 years ago. So, in part, they deserve better grades. Still, due to “grade aggregation” (too many people receiving an “A”), some efforts are being made to lower grades overall.

Test optional / SAT optional was never about the privileged. And going back to looking at test scores is also not about the privileged. And it’s cringey to see people say “see, we knew we were better all along” about a test that so strongly correlates with money.

1 Like

Nobody has claimed that the SAT does not add value beyond HS GPA. No study has found this, and UC did not expect to find that the SAT adds no value. Instead, the relevant question is how much value it adds, and that answer differs depending on the specifics of the study. For example, the amount of value the SAT adds decreases if you consider more than just HS GPA in isolation and also consider things like rigor of HS courses and which HS courses were taken, which HS courses received lower grades and how relevant they are to the planned major, degree of grade inflation and grade distribution at the particular HS, etc.

For example, the Ithaca study found the following result: the SAT did improve the degree of variance explained in cumulative GPA, but only by 1 percentage point beyond the other reviewed factors – 44% of variance explained with SAT and 43% without. Had they just looked at HS GPA in isolation, I’m sure they would have found the SAT added a much greater degree of value.

Variance in Cum GPA at Ithaca Explained By…

  • First Gen + URM + Gender – explains 8% of variance in Cum GPA

  • Demographics + SAT Score – explains 25% of variance in Cum GPA

  • Demographics + GPA + HS Course Rigor + AP Count – explains 43% of variance

  • Demographics + GPA + HS Rigor + AP Count + SAT – explains 44% of variance

Regarding the UC study, they found the SAT in isolation explained ~20% of variance in college freshman GPA, which was greater than HS GPA in isolation. They also found the combination of SAT + HS GPA explained ~10% more variance in college freshman GPA than HS GPA in isolation. The SAT in isolation also explained ~5% of variance in graduation rate and 5-12% of variance in GPA in specific UC courses, depending on the course type. Note that none of these findings conflict with the Ithaca study above. Like the UC study, the Ithaca study found that the SAT in isolation explained a notable portion of variance in college GPA. However, Ithaca arrived at a different conclusion about added value when they added controls for measures of HS course rigor and strength of schedule.
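For anyone who wants to see the mechanics, the “variance explained with vs. without SAT” comparison in these studies is just incremental R² between nested regression models. Here is a minimal sketch on synthetic data (not the Ithaca or UC datasets; the variable names and the single latent “preparation” factor are invented purely for illustration) showing why the SAT’s increment shrinks once correlated controls like course rigor enter the model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: one latent "preparation" factor drives HS GPA,
# course rigor, SAT, and college GPA alike (an assumption, not a finding).
prep = rng.normal(size=n)
hs_gpa  = prep + rng.normal(scale=1.0, size=n)
rigor   = prep + rng.normal(scale=1.0, size=n)
sat     = prep + rng.normal(scale=1.0, size=n)
col_gpa = prep + rng.normal(scale=1.0, size=n)

def r2(y, *predictors):
    """R^2 of an OLS fit of y on the given predictors plus an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_gpa_only = r2(col_gpa, hs_gpa)              # HS GPA in isolation
r2_controls = r2(col_gpa, hs_gpa, rigor)       # HS GPA + rigor controls
r2_with_sat = r2(col_gpa, hs_gpa, rigor, sat)  # controls + SAT

# SAT's incremental R^2 over GPA alone is larger than its increment
# once the rigor control is already in the model.
print(r2_gpa_only, r2_controls, r2_with_sat)
```

Both studies can therefore be “right” at once: a predictor can explain a lot on its own yet add only a percentage point or two after correlated controls are included.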

2 Likes

It’s commonly implied on these forums that the UC task force study recommended keeping the SAT. It did not. The UC task force authors recommended dropping the SAT and replacing it with a different assessment system. The UC study authors disagreed about whether UC should go test blind or not during the period before the new assessment system was available. A quote is below.

Members of the Task Force differed on the question of whether to recommend that UC cease consideration of standardized test scores sooner — in all likelihood before availability of the replacement suite of assessments.

3 Likes

Right, but it’s helpful to understand the context. The Task Force was charged with addressing five questions (paraphrased, since I can’t cut and paste):

  • Does standardized testing assess student readiness?
  • How well does testing predict success in conjunction with holistic admissions?
  • Should testing be improved, changed, or eliminated?
  • Does testing promote diversity and opportunity for students?
  • Does testing enhance or detract from eligibility to UC?

And the answers were:

  • Yes;
  • Small statistical addition;
  • Improved;
  • No; and
  • No (detract)

So the latter two Noes led to the (unserious) idea that UC could find something else (the improvement under #3), and, when that failed, to the conclusion of no tests at all.

1 Like

For those worried about grade inflation, I can share a different perspective based on the system in the province of Quebec (Canada). Students graduate after grade 11 and then do CEGEP (usually 2 years), which is a mix of Grade 12 and university, in essence a pre-university program.

Once in CEGEP, students come under the R-Score system. This was a system designed in France. Basically, it computes a student’s score with a mathematical formula based on the class average, the student’s high school grades relative to overall high school averages, and course difficulty. The maximum R-Score is 40. When these students apply to university in Quebec, their R-Score is the prime driver of whether they get in or not. Universities post what their minimum R-Score range is for specific programs.

This R-Score system receives a lot of criticism from parents who (besides not understanding it) also realize that grade inflation is adjusted for under the system. So if Johnny gets 90 in math and the class average is 93, his R-Score will be below his peer group’s, etc. Two of my kids graduated through this system, and I found it a very good approach for ensuring comparability of kids’ grades in CEGEP, taking into account where they attended high school.
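The core idea is a z-score within the class, shifted and rescaled. The sketch below is NOT the official Quebec formula (the real one weights group strength and high school standing in a specific published way); the scaling constants and the `group_strength` parameter here are simplified assumptions, purely to illustrate why a 90 in a strong class can rate below a 75 in a weak one:

```python
from statistics import mean, pstdev

def r_score(grade, class_grades, group_strength=0.0):
    """Rough R-Score-style rating (NOT the official Quebec formula):
    a z-score within the class, shifted by a group-strength indicator,
    rescaled so a class-average grade lands at 25."""
    mu = mean(class_grades)
    sigma = pstdev(class_grades)
    z = (grade - mu) / sigma if sigma else 0.0
    return (z + group_strength + 5) * 5

# Johnny's 90 in a class averaging 93 rates below a 75 in a weak class:
weak_class   = [60, 65, 70, 75, 80]   # mean 70
strong_class = [88, 91, 93, 95, 98]   # mean 93
print(r_score(75, weak_class))    # above the class mean -> above 25
print(r_score(90, strong_class))  # below the class mean -> below 25
```

In the real system a group-strength term pulls scores in strong classes back up, so strong peer groups are not purely a penalty; the sketch omits that nuance.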

Taking a step back: Quebec high school students have to write “ministerial” exams in certain subjects, which helps the government evaluate individual high schools.

I think it’s murkier than that. Consider this quote from page 85:

Likely Impacts of Dropping Admission Tests:
The average student admitted would have a lower first-year GPA, a lower probability of persisting to year 2, a lower probability of graduating within seven years, and a lower GPA upon graduation. The reason for this is that UC would no longer be able to use admissions tests to identify, within socioeconomic groups, the students most likely to succeed.

The numbers of disadvantaged students who would lose guaranteed admission if UC dropped SAT tests is surprisingly large. In 2018, about one quarter of low-income, first-generation and underrepresented minorities who were guaranteed admission to UC earned this guarantee solely by virtue of their SAT scores. African-American and Native American students would be especially hurt by dropping the SAT: among the students guaranteed admission to UC, 40% of African-Americans and 47% of Native Americans won their guarantee because of their SAT scores. Figure 3B-1 and surrounding text explains these surprising facts.

Meanwhile, the STTF was basically given two options by the BOARS, and chose neither (page 89):

The Standardized Testing Task Force (STTF) evaluated two possible reforms to the University’s admissions process that have received public attention: 1) adoption of the Smarter Balanced (SBAC) Assessment Consortium’s high school assessments in place of the SAT and ACT, and 2) giving applicants to the University of California the option of whether to take the SAT or ACT rather than requiring the tests. We do not at this time recommend either.

In the substantive recommendations (pages 99-116), they do recommend, as “long term reform,” the creation of a new test that would be better than the SAT/ACT. This was never going to happen and the BOARS has not advanced it at all.

So while it is true that the STTF did not recommend keeping the SAT/ACT, they found that the SAT/ACT — as actually used by the UC system — 1) improved predictive validity for all and 2) advanced access for low-income and URM groups relative to test-blind or test-optional, and proposed a future test that would somehow (through further study and innovation) be better. They further model (on page 101) that going test blind/optional before that new test would (unexpectedly) reduce access to and diversity in the system.

In other words: the STTF — in my view, anyway — functionally recommended keeping the SAT/ACT until a new test is developed, without saying that specifically, but illustrating it clearly because all of the interim outcomes of going optional/blind would be worse.

9 Likes

The comment is referring to a specific segment of admissions – the statewide eligibility index. If 1/4 earned their guaranteed admission via SAT scores, that means a much larger 3/4 who were guaranteed admission did not earn their guarantee via SAT scores. It does not mean the SAT advanced access for low-income, first-gen, and URM groups overall. Their answer to the question of how the SAT/ACT impacts diversity is more nuanced. Some quotes are below:

In sum, mean differences in standardized test scores between different demographic groups are often very large, and many of the ways these tests could be used in admissions would certainly produce strong disparate impacts between groups. However, UC weights test scores less strongly than GPA, and comprehensive review appears to help compensate for group differences in test scores. The distributions of test scores among applicants are very different by group, but the distributions of test scores among admitted students are also very different by group, and in almost exactly the identical way.

Yet this is not to conclude that consideration of test scores does not adversely affect URM applicants. If standardized test scores must be compensated in order to achieve the entering class sought by UC, that is reason to question whether it is necessary to use the tests at all, and/or whether it is possible to design an alternative instrument that does not require such compensation.

We can also look at the actual demographic changes that occurred while UC has been test blind. Specific numbers are below for entering freshmen classes, as listed in IPEDS. IPEDS does not include mixed race as Black/Hispanic, so I am excluding mixed from URM % below.

Pre-COVID (2019-20)
Berkeley – 17% URM
UCLA – 19.5% URM

Median of 3 Most Available Post-COVID Years
Berkeley – 23% URM
UCLA – 23% URM

The report says the task force was asked the question, “Should UC testing practices be improved, changed or eliminated?” My interpretation is that a valid answer to this question is no, the UC testing practices should not be changed. That was not the task force’s answer. Instead, the report lists 6 actions that the task force recommends UC take to improve/change its testing practices, and it lists 2 actions that UC should not take – the 2 alternatives that you highlighted (SBAC and test optional). The quote you listed says they “evaluated” those 2 options that they did not recommend, not that they were given a choice of only those 2 options, or that those were the only 2 options they evaluated.

One of the task force’s 6 recommendations for improving/changing testing practices was replacing the SAT/ACT with a different and original assessment system. This was not just a trivial and “unserious” (quoting a different poster) recommendation. The report dedicates ~10 pages to describing the details of the new assessment system, its advantages, and the steps of implementation with a timeline in months/years, and it has a section arguing feasibility by listing other types of assessment systems that have been successfully implemented.

in my view, anyway — functionally recommended keeping the SAT/ACT until a new test is developed, without saying that specifically, but illustrating it clearly because all of the interim outcomes of going optional/blind would be worse.

As listed in my earlier post, the report states that the task force members had different opinions about whether UC should become test blind prior to a new assessment system becoming available. Some of the task force members thought test blind was desirable without a replacement system, and some did not.

1 Like

This is a compelling statement. Now, I’m going to find & read the whole thing.

1 Like

The whole issue of “student readiness” is also a bit of a cop-out, because there is another variable at work: the professor. You cannot argue that students are less ready because of test-optional selection when you don’t know two important things: 1) what scores the students who didn’t submit would have received if they had known they had to submit them, and 2) what role the actual professor plays.

Year 2 retention rate for 2022 at UCLA = 97%.
Year 2 retention rate for 2022 at UCB = 96%.

The important detail is how much. Is it a 1 percentage point difference, like the Ithaca study? Or a huge 20 percentage point difference? I’ll try to evaluate what actually happened at Berkeley. I chose Berkeley because it is the flagship and the UC I see most often on these forums, rather than cherry-picking. The pattern so far appears to be little change.

  • First Year GPA. Unfortunately only course-by-course GPA is available, so I will compare mean course GPA for selected lower-division courses. I chose lower-numbered courses that have an especially large enrollment. I am comparing fall 2019 to the most recent available year unless otherwise noted.
  • Bio 1A: 2.78 → 2.97
  • Chem 1A: 3.07 → 3.04
  • Math 1A: 2.97 → 2.71
  • Math 10A: 2.93 → 3.01
  • Math 16A: 2.85 → 2.99
  • Physics 8A: 2.92 → 2.88
  • Psych 1: 3.11 → 3.13
  • First Year Retention: 96-97% (pre-COVID) → 96-97% (post-COVID)

  • Graduation Rate in 7 Years – TBD

  • Graduation GPA – TBD, but GPA so far suggests little change

2 Likes

Yes, and/but, one of the things I found interesting in my research is that even the people who are really access-driven have some kind of standard of “ready for college.” It’s not often articulated, but if you look, e.g., at the Kidder and Gandara review of race-neutral alternatives, they have whole sections on academic enrichment programs that have turned out to serve some goals, but not to help more students be ready for college. I really don’t know how you measure it…