UC slams the door on standardized admissions tests, nixing any SAT alternative

If UCLA’s merit-based admission standard, in which the SAT and ACT played a big role, has produced the best outcomes of the last quarter century, why change the practice now?


The UCs emphasized SAT/ACT less, relative to HS GPA, than most other highly selective colleges. A comparison between UCLA and USC shows that UCLA admits had higher HS GPAs, while USC admits had higher SAT/ACT scores.


That might work if we didn’t have high schools graduating students with a third-grade reading level. These universities are going to get a glut of applications with half-baked 4.0 GPAs to sift through. The only way for a selective school to get around that is to rank high schools, which would effectively shut out lower-class students. That’s the reason the SAT exists in the first place…to PREVENT discrimination!


Food for thought: Georgia Tech requires scores this year and just reported a 12% increase in EA apps, including a 52% increase in first-gen applicants.

I suppose this could merely be a shift from RD to EA. And perhaps tests may have been more available in GA than in CA.


First, as @ucbalumnus mentioned, the school already deemphasizes test scores in admissions decisions.
Second, neither of the two programs I specifically mentioned (top 9% local eligibility and CC access) is dependent on test scores, and these two account for a substantial portion of the student body.
Third, while UCLA is much better than most elite schools at providing access to qualified students regardless of income, there is plenty of room for improvement.

Somehow I doubt that these schools will end up accepting too many kids who are reading at a third grade level.

GPA and rigor are already by far the most important factors in admissions, so schools are already having to “sift through.” This is particularly true at the UCs, given their top 9% programs and their emphasis on GPA and rigor. It can be done.


The admission system has changed significantly several times over the last quarter century. The report at https://academic-senate.berkeley.edu/sites/default/files/hout_report_2005.pdf summarizes the system in the early 2000s. At that time, a regression analysis found the following to be approximately the most influential variables for freshman admission. The coefficients do not appear to be standardized, so I am listing magnitude/SE. Note that grades appeared to be tremendously more influential for freshman admission than anything else, including scores; there is no comparison. The SAT I appears to have had less influence than ratings of some personal/character qualities after controls, although SAT II Writing was more influential. I expect the UC’s extreme emphasis on grades and lack of emphasis on SAT I scores contributes to why UCLA had the income distribution that you reference. Transfer admission, which showed even less influence from scores, is also quite relevant for the SES distribution.

UCB L&S Admission Read Score Regression Mag/SE (Estimated Influence in Admission)
Grades – 26
Perfect 4.0 GPA – 10
Eligible in Local Context (ELC) – 8
SAT II Writing – 6 (Overall = 6, >750 = 6)
Number of AP 5 Scores – 6
Difficulty of Senior Year – 5
Grades Trend Down – 5
College Prep Coursework – 4
“Contributes” – 4 (personal/character quality relating to contributing to campus)
“Active” – 4 (personal/character quality)
“Limits to Achievement” – 3 (personal/character quality relating to overcoming obstacles)
“Good Job” – 3 (employed at job unrelated to academics)
Number of AP 4 Scores – 3
SAT Verbal > 750 – 3 (overall = 1)
SAT Math > 750 – 2 (overall = 1)
SAT II Math – 2 (overall = 2, >750 = 1)
Low Income – 2
Disability – 2
Gender – 2
Asian – 2

Optional Subject Test – 0
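For readers unfamiliar with the mag/SE figures above: they are essentially t-statistics, i.e. |coefficient| / standard error of the fitted regression coefficients. A minimal sketch on synthetic data (the variable names and effect sizes are invented for illustration, not taken from the report):

```python
# Hedged sketch: mag/SE = |coefficient| / standard error, computed from an
# OLS fit. The data below are synthetic; only the mechanics are real.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Invented applicant pool with standardized predictors; by construction,
# grades influence the read score far more than SAT does.
gpa = rng.normal(size=n)
sat = rng.normal(size=n)
read_score = 2.0 * gpa + 0.3 * sat + rng.normal(size=n)

X = np.column_stack([np.ones(n), gpa, sat])
beta, *_ = np.linalg.lstsq(X, read_score, rcond=None)

# Standard errors from the residual variance and (X'X)^-1 diagonal
resid = read_score - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))

mag_over_se = np.abs(beta) / se
# With these invented effect sizes, GPA's mag/SE dwarfs SAT's, mirroring
# the qualitative pattern in the table above.
```

The absolute mag/SE values depend on sample size as well as effect size, which is one reason they are only a rough guide to relative influence.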


All of these data that supposedly showed SAT/ACT could not have played much of a role were collected during a time when the SAT/ACT was required. In other words, if a kid with a low score thought his chance of success in admission and in college was low, he may not have applied to UCLA to begin with. And with the 100k applications UCLA had to go through in a short time, it wouldn’t surprise me if there was a cutoff score below which the whole application wasn’t even read. All of these selection biases could have skewed the data and led to the conclusion that the SAT/ACT did not have much impact on the outcomes.


I think that we have discussed the issue of whether standardized tests are objective. I do not think that there is agreement here on CC, despite the fact that the research weighs more heavily on the side of “not objective” than “objective”. However, these studies do not seem to convince many smart and reasonable people here on CC (and elsewhere), so I will set that discussion aside and assume, for the sake of argument, that the SAT and ACT are, in some manner, objective measures of college preparedness.

However, even if these tests are objective measures of college preparedness, how much “preparedness” is required? How does one translate the numbers from the SAT into “preparedness”?

If we look at HS GPA, we can figure out that a student who is getting As knows how to do classwork, homework, and tests. High school work may (or may not) be easier than college work, but it is essentially the same type of work. So if a student was good at it in high school, it stands to reason that they will be good at it in college.

But the SAT? How does one translate the ability to remember enough of the material to select the correct answer from four possible answers into the ability to write essays and essay exams, to participate in class discussions, or to do undergrad-level research?

So if we are selecting a student with a higher SAT score, all other things being equal, are we selecting a student who will do better in college?

I do not think so, because I do not think that it is possible to extrapolate clearly from one to the other (test score to ability to do college work), but I am willing to be convinced otherwise.


Both SAT in isolation and HS GPA in isolation are correlated with various metrics associated with college success, such as college GPA. While a correlation exists, they still often only explain a small fraction of variance in college success. Both metrics are especially bad at predicting chance of graduating.

You touched on some of the reasons why SAT score in isolation is often a poor predictor. Others have touched on reasons why GPA in isolation is often a poor predictor, including that it is not standardized. A 3.7 HS GPA while taking a highly rigorous schedule of advanced classes is not the same as a 3.7 HS GPA while taking less rigorous classes at a HS with extreme grade inflation. One of the reasons why SAT + GPA is better than GPA alone is that the SAT score can serve as a standardization across varying levels of rigor and grading standards. The SAT is far from the only metric that helps standardize varying levels of rigor and grading standards; things like the level of rigor, LORs, and external achievements outside the classroom (awards/ECs) can also serve this purpose.

Using some specific numbers, the Ithaca study referenced earlier (https://web.archive.org/web/20181012020332/https://www.ithaca.edu/ir/docs/testoptionalpaper.pdf ) shows the following R^2 for different combinations of GPA, SAT, and Rigor metrics:

R^2 for Cumulative GPA at Graduation
URM Alone – 1% of variance explained
Gender Alone – 7% of variance explained
Gender + Race – 8% of variance explained
SAT Math Alone – 8% of variance explained
SAT Verbal Alone – 14% of variance explained
Num AP Credits Alone – 14% of variance explained
Strength of Schedule Alone – 17% of variance explained
SAT Writing Alone – 18% of variance explained
Scores + Gender + Race – 25% of variance explained
HS GPA Alone – 38% of variance explained
HS GPA + AP Hrs + Strength of Schedule + Gender + Race – 44% of variance explained
Everything including GPA + Scores + Rigor + Demographics – 46% of variance explained

Note that scores + demographics explained 25% of variance in cumulative GPA, yet scores only increased the variance explained from 44% to 46% above GPA + rigor + demographics. This occurs because the information gained from the scores was largely duplicated by other analyzed criteria. The part coefficients give some clues about what those criteria are, as summarized below.
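The overlap effect just described is easy to reproduce on synthetic data. A hedged sketch (the correlations below are invented; this illustrates the mechanism, not Ithaca’s actual numbers): a predictor can explain substantial variance alone yet add almost nothing once a correlated predictor is already in the model.

```python
# Hedged illustration with synthetic data: SAT and HS GPA both proxy an
# unobserved "preparation" factor, so SAT explains variance alone but adds
# little on top of GPA. All effect sizes are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 5000

prep = rng.normal(size=n)                 # unobserved preparation factor
hs_gpa = prep + 0.5 * rng.normal(size=n)
sat = prep + 0.5 * rng.normal(size=n)     # largely duplicates GPA's info
college_gpa = prep + rng.normal(size=n)

def r_squared(y, *predictors):
    """OLS R^2 for y regressed on an intercept plus the given predictors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_sat = r_squared(college_gpa, sat)
r2_gpa = r_squared(college_gpa, hs_gpa)
r2_both = r_squared(college_gpa, hs_gpa, sat)
# SAT alone explains substantial variance, but most of it overlaps with
# HS GPA, so r2_both sits only modestly above r2_gpa.
```

The same nesting logic is why "scores + demographics = 25%" can coexist with "scores add only 2 points above GPA + rigor + demographics."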

The part coefficient for gender had little change between test-required and test-optional, suggesting little overlap with scores. Whatever contributes to women tending to have higher college GPAs at Ithaca seems to be largely different from what contributes to kids with higher SAT scores tending to have higher college GPAs at Ithaca. However, the part coefficient for num AP hours nearly doubles when scores are removed, suggesting a larger degree of overlap with scores. Whatever contributes to kids who take more AP classes tending to have higher college GPAs appears to have a good deal of overlap with whatever contributes to kids with higher SAT scores tending to have higher college GPAs. These underlying factors that overlap between num AP hours and SAT scores could include things like both being correlated with wealth and the quality of the HS attended, in addition to the more direct and obvious factors.

Part Coefficients For Everything Model (Test Required)
GPA = 0.36
Gender = 0.15
SATW = 0.075
AP Hrs = 0.07
Strength of Schedule = 0.06
SATM = 0.06
SATV = <0.01

Part Coefficients for Everything Except Scores (Test Optional)
GPA = 0.40
Gender = 0.15
AP Hrs = 0.13
Strength of Schedule = 0.075

As noted earlier in the thread, Ithaca had a high >70% admit rate at the time of the study, so matriculating students had a good range of all the stat input metrics (HS GPA, scores, AP hrs, …), as well as a good range on the cumulative GPA output. Highly selective colleges with a more restrictive range of inputs and outputs tend to show a much weaker predictive ability. This restricted range contributes to why highly selective colleges tend to place more focus on non-stat criteria than less selective colleges.
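The restricted-range effect can be illustrated with a quick simulation (synthetic data; the effect size is invented): the same underlying relationship looks much weaker when only the top slice of applicants is observed.

```python
# Hedged sketch of restriction of range: identical data-generating process,
# but correlation computed only among the top 10% of HS GPA looks far
# weaker than in the full pool. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 20000

hs_gpa = rng.normal(size=n)
college_gpa = 0.6 * hs_gpa + 0.8 * rng.normal(size=n)

full_r = np.corrcoef(hs_gpa, college_gpa)[0, 1]

# Keep only "admits" in the top 10% of HS GPA, a stand-in for the
# restricted input range at a highly selective college.
top = hs_gpa > np.quantile(hs_gpa, 0.90)
restricted_r = np.corrcoef(hs_gpa[top], college_gpa[top])[0, 1]
# restricted_r comes out far smaller than full_r even though both groups
# follow the exact same underlying relationship.
```

This is one reason a >70% admit rate school like Ithaca can show much stronger predictive validity than a highly selective UC campus.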

UC has an especially restrictive range on input GPA for reasons discussed earlier. The UC Faculty Senate report found that both HS GPA and scores in isolation explained only ~17% of variance in graduating GPA, and the combination ~23%. The combination explained only 7% of variance in 4-year graduation rate (it didn’t look at 6-year). While the combination of scores + GPA was better than GPA in isolation, it still wasn’t a particularly good predictor of college success. This contributes to why the faculty senate report recommended developing a new “assessment system” to replace the existing SAT/ACT, and mentions wanting the new assessment system to be better at predicting graduation in particular. I expect that there would have been different results with different controls, such as better controls for level of rigor, more like the Ithaca study, or controls for more of the non-GPA/score factors used in the existing UC admission system, such as the ones listed in the Hout report linked above.


I don’t think teacher unions are the cause of or the solution to this problem. What unions are supposed to do is prevent poor working conditions, and I can only think that having happier teachers would improve teaching. If it weren’t for unions, you’d still have child labor.
Someone above mentioned choosing a surgeon based on school/test results. Who does that? If you are lucky, you have managed to get a doctor within your insurer’s network.


That Ithaca study is 8 years old, and much of the data was accumulated in an entirely different era, before the current wave of GPA inflation. It might as well be from the 19th century.

The anti-test people conveniently ignore the massive grade inflation occurring nationwide at high schools when arguing that GPA is all that is needed to make admissions decisions. It is very difficult to effectively assess differences along a 4.0 scale when so much of grading is subjective and when increasingly large percentages of schools’ students are within the top 0.25 of that 4.0 scale.

There are studies and data referenced in both of those articles.


I don’t think anyone has argued that SAT/ACT by themselves are sufficient to demonstrate college academic “preparedness” (on the contrary, plenty of posters have argued that HS grades by themselves are sufficient for that purpose).

I can’t speak for other people, but my objection to eliminating standardized testing for most (but not all) colleges is primarily based on the following two reasons:

  1. HS grades vary significantly from region to region and from school to school. Having a simplistic composite HS profile without much granularity is far from sufficient to determine a student’s readiness for college, especially for majors that require more robust quantitative and/or reading comprehension skills. A student who couldn’t even do well on the really basic math portions of the SAT/ACT, regardless of her/his HS grades in math classes, isn’t prepared to major in a more quantitative field. Similarly, a student who can’t comprehend basic English paragraphs on these tests isn’t ready for a college where others are much more prepared (Has anyone ever objected to requiring international students to take tests to demonstrate their basic English language skills?). Throwing a student with significant gaps in math and/or language skills into a group of much more prepared students isn’t a recipe for success. It’s likely to be costly to everyone, particularly the student who isn’t ready (in both time and money). Some posters would probably argue that HS GPAs are correlated with standardized test scores. They are, but the correlation is far from perfect. Because of grade inflation and uneven grading, there are a large number of students around the country with high HS GPAs who don’t possess the basic quantitative and/or language skills.

  2. Elimination of standardized testing will lead to further HS grade inflation. Without these tests, HS grades will obviously become more critical in college admissions. Most, if not all, HSs in the country will be interested in sending their students to better colleges. They’ll likely inflate grades to make their students look better to the colleges (and even if they don’t want to, they’ll be pressured by parents to do so). Even if one believes that HS grades are sufficient to evaluate students’ academic potential today, they won’t be for long, as grades will, gradually but surely, be further compressed into the top band of each school’s grading scale. In other words, grades will lose whatever discriminating quality they may still have today. What do the colleges do then?


I suggested reduced power, not elimination. It’s almost impossible to fire certified staff or a teacher, even the child molesters. L.A. teacher suspected of lewd conduct keeps benefits | L.A. NOW | Los Angeles Times. Los Angeles Teachers Union Wields Power in Fight to Reopen Schools - WSJ

Also, imho, they are wasting a lot of money because it’s illegal in California to hire contractors to clean and perform other duties, and all of the certified staff have pensions, etc.

Someone asked above about equity grading. Since the UC requires a C or better in A-G courses, it’s relevant here. Faced With Soaring Ds and Fs, Schools Are Ditching the Old Way of Grading | The Daily Chronicle

Apologies for the off topic response.


You are 1000% correct on this statement.

High school grades have a lot of problems as the sole criterion for acceptance, many of which have been covered already. One area that has not been covered: by putting so much weight on HS grades in a grade-inflation environment, there is actually a massive penalty for students who take a class outside of their comfort area or one with a tougher teacher, even if that teacher is better.

Some of my best teachers in high school were also the toughest. Everyone knew which teachers graded harder and which graded easier, but 30 years ago the consequence of taking a couple of tough teachers was not community college. In a world where the average GPA at prep schools and in more affluent communities is over 3.5, one C+ can be fatal to a transcript, not just at a T10 school but even at a T30. Why take the risk if you are a kid, even if the toughest teacher in the department prepares kids the best for college in that subject?

I thought that high school was about educating kids and preparing them for the next level of life, whether college or the real world. I don’t know why anyone would advocate for a system that rewards the kids for learning less and taking easier teachers and classes.

As you point out, no one is saying to just use tests, but tests should be an important part of a truly holistic admissions process.


Why not use only tests? Why is holistic admissions the holy grail? I’m not being facetious, truly curious.


Except no one has argued that “grades by themselves are sufficient,” or that grades should be the “sole criteria for acceptance.”

Relative context matters: rigor matters, class rank matters, courses matter, course offerings matter, recommendations matter, APs matter, honors courses matter, academically related ECs matter, letters matter, leadership matters, non-academic ECs and experience matter, the offerings of the high school matter, etc.


I was only talking about “for academic purposes,” if you hadn’t been selectively quoting. For most students, only their grades and test scores are measurable for that purpose. If you do away with test scores, you’re left with only high school grades. On the other hand, no one who thinks that test scores should be kept has argued for the elimination (or lesser consideration) of HS grades in college admissions.

How do you measure the rigor of an AP course if it isn’t accompanied by a score on the standardized test known as the AP exam? Plenty of students received As in an AP course but did poorly on the corresponding AP exam. Perhaps you would advocate doing away with the AP exams too? Class ranks? I thought you didn’t like ranking students. Many high schools don’t rank anyway. Recommendation letters? You mustn’t be living in the reality of most public high schools.


I am glad that @mtmind has pointed out that the UCs have not admitted solely on GPA and test scores. They use 13 criteria when reviewing applicants (14 before going test blind).


Yes, UC GPA is very important in the application process, but the other criteria listed also factor into admission decisions, and each UC campus can determine how it weights all these factors.

Based on my kids and relatives who have gone through the admission process and attended UC schools, their HS GPA was a better indicator of their college success than their test scores. My 2 cents, from a very limited 5 data points.

Good or bad, the UCs have made their decision for now. The UCs are only 9 universities out of thousands, so for anyone not happy with their decision, there are plenty of wonderful colleges to which a student can apply that will consider test scores.


Out of the 14 criteria, the only standardized one allowing an apples-to-apples comparison was the SAT/ACT. Throwing that out, even though it’s just one out of fourteen, means admissions loses its one and only across-the-board reference point.


If you were designing a standardized test or set of standardized tests for college admissions, and did not have to compete with incumbent tests (as the SAT and ACT are now), would you design something like the SAT or ACT, or something else?