The Misguided War on the SAT

According to their findings, the SAT is a stronger predictor of success than GPA.

“SAT scores have significant predictive value for academic achievement over and above other measures such as high school GPA”

1 Like

The specific admit rates for affluent kids are below. These admit rates were roughly the same for affluent kids who submitted scores and for those who did not (i.e., who applied with a score but requested it not be considered):
1400 – 2%
1450 – 3%
1500 – 6%
1550 – 11%

Consider why the admit rate gets so much higher as scores go up among kids who did not submit a score. This is not inconsistent with scores being correlated with first-year GPA at Dartmouth.

Actually, the analysis did say something similar. It found that, among the analyzed variables, the name of the HS attended was by far the strongest predictor of academic success in college.

SAT + HS GPA + Race + Gender + Income + … – Explains 23% of variance in college GPA
All of above + name of HS attended – Explains 65% of variance in college GPA

This raises the question of why the name of the HS attended is so predictive. SAT score seems to capture only a small portion of what makes it so predictive. I’d expect things like HS course rigor to be important. However, the influence of course rigor, and whether the student took advanced courses that were good preparation for the prospective major, were not considered as controls in the study. Instead they just looked at average HS GPA in isolation, which does not capture such differences.

3 Likes

I’m assuming this is based on students who submitted test scores but asked that they not be considered? Is this based on the “we encourage students to submit” language? Are there students who didn’t submit at all?

The analysis states, “In the 2021 and 2022 cohorts, respectively, 2,416 and 2,544 applicants reported a College Board score that they later requested not be considered by Admissions.” That’s roughly 9% of applicants. One can only speculate why this 9% initially submitted a score and then requested that it not be considered.

This does create a bias. It’s not all applicants with a particular SAT score; it’s only the small minority who initially submitted a score and then seemingly changed their minds. The ones who did not change their minds may show different patterns. There are also biases at other steps, in who chooses to take the test and who chooses to apply. The analyses and conclusions leave a lot to be desired.

There are also another ~10,000 students who didn’t submit at all, some of whom didn’t take the test. The analysis doesn’t emphasize this group because their scores are unknown (to Dartmouth), and the author wanted to draw conclusions about applicants with a particular SAT score.

1 Like

So if “qualified” (not sure how to define that) test-optional students’ admit rates were between 6–11% (do we even know what the test-optional admit rate is after factoring out complete outlier applications on the low end?), and 1400s SAT applicants (submitted, whether considered or not) were admitted at somewhere around 3–4%, then taking the SAT and scoring low hurt you, but scoring between 1500 and 1550 didn’t help you if test-optional students were admitted at the same rate.

I don’t know what the “qualified” test-optional admit rates are, but if they’re around 6%, then anything below 1500 could hurt you.

The analysis found that among affluent 1400s SAT kids, the admit rate was roughly the same for kids who submitted a score and kids who did not (requested scores not be considered). It also found that among affluent 1500s SAT kids, the admit rate was roughly the same for kids who submitted a score and kids who did not.

Why do you conclude that this means the score influences the application outcome for 1400s SAT kids, but not for 1500s SAT kids?

An alternative explanation is that there are a lot of factors that are considered beyond just score, and the combination of those factors is correlated with scores.

Affluent kids have an upper hand in almost everything. Why aren’t you also calling for GPA, ECs, and essay optional or blind (as silly as it may sound), to further assist lower-income kids?

Is it because these other factors may be viewed in context? If so, standardized tests may be as well, no? Dartmouth, Yale, and Brown appear to have said as much, although the impact remains to be seen.

Or is it because going all-optional/blind is too extreme? Sure, but why pick on, and only on, standardized tests? You commented earlier in the thread that these tests are flawed. But they are no more flawed than GPAs (grade inflation, different amounts of outside help, different course rigor, non-uniform grading), ECs (opportunities through parents, pay to play), and essays (hired guns, fake stories, ChatGPT).

3 Likes

Having read some of the 1900+ messages on this thread, it would seem that the largely political points have been fleshed out - “privilege vs. achievement”, “the true societal role of college”, “how colleges should use data fairly”.

Another on-topic conversation that I would find interesting and helpful would be for the engaged writers here to comment on the new SAT (its questions, methodology, and scoring) and relate that to its “validity” as something to wage a “War” or “Misguided War” against.

Some observations:

1 or 2 questions incorrect on the entire language/reading multiple-choice section of the digital SAT can lead to a score of 750. This is from my own taking of multiple digital SAT practice tests on Bluebook. Understood that many have questioned whether the “actual test” is harder. Still, unless one questions the grading rubric, this very steep curve exists. Is this fair when many questions are arguably interpretive and potentially vague (again sidestepping the obvious criticism that the College Board writes bad questions)? When SAT scores are so often listed (tens of times in this thread alone) as a sum of math + English, doesn’t this curve itself suggest a push to change students’ studying and focus behaviors for test prep?

The math section is graded much more linearly, with a more gradual curve: a single question wrong leads to about a 10–15 point deduction. Is this thought to be a reflection of the “adaptive” nature of the test? I missed a single question on “bar charts” (a seemingly basic question) but still ended up with a 780. Missing 3 questions on a subsequent test led to a score of 760.

Questions are also quite clearly aimed at a small subset of academic skills (in the English/language section: comprehension, logical reasoning, data interpretation, and syntax). On the math side (at least assuming the questions I was presented are typical), it is heavily focused on deriving formulas for parabolas given initial conditions, and very heavily on Algebra II. To me, these observations also figure into the question of the societal validity of the test, in terms of what it is really testing and how profound the benefit of privilege and preparation can be.

Tests are first and foremost a tool for differentiation, and in this most recent test, the College Board appears, to me, to have taken a strategy of choosing a narrow band of topics and differentiating skill within those topics.

Would be interested to hear from others who have actually sat for the Bluebook exam. Taking a complete test took me about 1.5 hours, so an investment of some time, but not unreasonable to get educated on the topic of this thread.

1 Like

From what I have read, the curve is very steep for reading/language on the practice tests but not as steep on the actual exam. Also, some questions are experimental, and your score will not decrease if you get those wrong.

1 Like

You would likely benefit from understanding more about the digital SAT, the breakdown of questions by topic, and the scoring. One change is that with the digital SAT, no two students ever take the exact same test.

The test is not only adaptive, but different questions are worth different points (CB is using item response theory to scale tests). To take one example, a student might not do well enough in the first math module to move into the harder math module for the second part of the math section. In that case, this student’s math score would be capped somewhere in the low 600s. They could in theory get every question right in the second module, but would still have the low 600s cap because of their performance in the first module.
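To make the mechanics described above concrete, here is a minimal sketch assuming a generic two-parameter (2PL) IRT item model and a made-up routing rule. The College Board’s actual item parameters, routing thresholds, and score cap are not public, so every number below (the 0.6 routing cutoff, the 620 ceiling, the 200–800 scaling) is hypothetical.

```python
import math

def p_correct(theta, a, b):
    """2PL item response model: probability that a student of ability
    theta answers an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def scaled_math_score(module1_frac_correct, module2_frac_correct):
    """Hypothetical two-stage routing: a weak first module routes the
    student to the easier second module, which caps the scaled score.
    The 0.6 threshold and 620 cap are illustrative, not CB's values."""
    routed_to_hard = module1_frac_correct >= 0.6
    raw = (module1_frac_correct + module2_frac_correct) / 2
    score = 200 + round(600 * raw)
    if not routed_to_hard:
        # Low-600s ceiling from landing in the easier second module.
        score = min(score, 620)
    return score

# A strong student who stumbles in module 1 hits the cap even with a
# perfect second module:
print(scaled_math_score(0.5, 1.0))  # capped at 620
print(scaled_math_score(0.9, 1.0))  # uncapped: 770
```

The design point this illustrates: in a two-stage adaptive test, first-module performance bounds the final score no matter how well the second module goes.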

There are many resources on the web that discuss all these things. Here are two good ones:

I guess your points were: 1) the test is adaptive (I think everyone here already knows that), and 2) the test questions are not all weighted equally (also not surprising).

Not sure how that really addresses my question, which was, to summarize it for you: it is observed that a single incorrect question on the language/English part of the SAT can be worth 50 points. My son (12yo) got 5 questions incorrect (resulting in a 650), then 3 questions incorrect (resulting in a 710). Interestingly, my son’s errors were in reading comp/induction and mine were in syntax. Still, the observation of a very steep curve remains.

I guess you are sure that we are always getting the “easiest” English questions wrong, resulting in a large deduction. In any valid test with variable question weighting and difficulty, the more difficult questions are weighted more finely to narrowly distinguish those at the top. Or is it possible that the English section simply has a steeper curve, at least at the top? That’s the possibility you’re missing, and the one I wanted to bring to the community’s attention for discussion.

A single incorrect question on the math section results in about 10–15 points deducted. My son and I took different tests; I generally made careless algebraic mistakes in the “fill in the blank” section, and he made errors because he hasn’t learned stats in school yet. Still, the flatter math curve observation stands. As a side note, knowing a little calculus can save a lot of time on the “hard” parabola questions.

I guess, judging from your articles, you’re saying that on English, my son and I always miss the easy questions, and on math, we always miss the hard ones. That is your explanation for our observations. Okay. Sure. Or maybe the curves look a little different.

Questions were weighted equally on the non-digital SAT (when calculating the raw score).

For the rest of your post, I wasn’t trying to answer your questions, nor did I make any judgments on your or your son’s scores. I was just giving you some resources that you might find helpful. Note that the SAT isn’t curved in the common sense of being curved relative to other test takers.

1 Like

Why are you bothering with the SAT for a 12-year-old? There is zero need to start prepping at that age. That is especially true for the digital test, which is brand new: by the time he needs to take the SAT, there will be many more resources to decode it than exist today.

Because we all agree that these are context-specific. However, the SAT keeps being analyzed as though it were an unbiased measure of academic excellence. Many are still claiming that SATs are a measure of “intelligence”.
If there were general agreement that test scores are:
A. a rough measure of mastery of the material, rather than some measure of innate ability, and
B. something that needs to be considered in the context of the student’s high school and SES,
and if:
C. accommodations were available to lower-income families at the same rate as to the wealthiest,
D. tests were available free to lower-income families, and
E. the entire format were revamped to be more straightforward AND intuitive,*

I would accept test scores as a requirement.

* I mean that students wouldn’t need to learn how to take the tests specifically, so that their level of mastery of the material would translate directly into the score, and the tests wouldn’t be recycling the same set of questions and answers. Basically, they would be built so that the only way to get a good score would be to understand the material in the manner it was taught. That means test prep would be useless, and classes that “teach to the test” would be useless.

1 Like

To evaluate a child on one exam that is heavily geared to a specific skill set makes no logical sense. And yes, the SAT is very skewed toward a skill set and doesn’t take into account the “creative” and “imaginative” aspects of what a student brings to the table. Logic and logical reasoning are great but, heck, an algorithm can do that too. This is just me, but wouldn’t we want those with that creative/imaginative “mind” in our universities? Einstein would never have gotten into any elite school if he were applying today. Just saying.

1 Like

Really wish people would stop repeating this myth. Einstein did well enough on the entrance exam to enroll at ETH Zurich for math/physics at age 17.
How he would fare in US “holistic admissions” is more debatable. He was rather singularly focused.

7 Likes

Most colleges and universities accept most applicants. There is no shortage of higher-education options for creative and imaginative applicants who don’t test well.

Many, many, many times, in this thread and others, strange assumptions are made about bright students who test well. For some reason, people think that our brightest students are just drones. I have a feeling this is going to come as a complete shock to you, but many high-achieving students who test well are also creative and imaginative.

10 Likes

Einstein was not interested enough in math to be in the top 5% of his class in math. That would not translate to an SAT score in the 1500 range. That is the point. And some students are singularly focused on math, and the SAT is skewed in their favor.

1 Like

The point is that there are creative students who don’t do well in math or logical reasoning. The SAT is a barrier for them as a result. The SAT is focused on specific “skill sets” and those who try to correlate it with intelligence do a disservice to those who don’t score well on SATs.

2 Likes

The poster didn’t indicate otherwise. But there are also some creative, imaginative, and highly intelligent students who don’t test well on the SAT.

2 Likes