The Misguided War on the SAT

A distinction without a difference: an NMSF designation is a high score on the PSAT. If UC doesn’t want to see the SAT score, why would they want to see something that represents a (high) PSAT score?

I don’t have data, but I’ve been “on the ground” teaching engineering at a state flagship for the past 15+ years. The bottom 10-30% of my and my colleagues’ second/third-year classes either withdrew, failed too many times and got dismissed, or realized they couldn’t hack it and voluntarily switched to “easier” majors. They wasted two years and thousands of dollars, as many of the credits they earned won’t count toward their new, non-STEM majors. Some struggled with simple algebraic manipulation that is well within the scope of SAT math, such as solving two equations with two unknowns or working with exponents. The worst ones couldn’t do simple multiplication/division without a calculator. And they were accepted to engineering. I had to cover less material, make exam problems very similar to past years’, and curve significantly so as not to fail too many.

10 Likes

20 of 50 states requiring ACT/SAT has no bearing on a claim that only 25% in the lowest quintile take the ACT/SAT?

If we look at the table you present, from the study, and scan across the bottom row for all deciles, it looks like the table claims that non-test-takers are more than half across ALL income deciles, including 59.5% for 40th-50th percentile income and 53.1% for 50th-60th, so roughly 55% non-takers and 45% test-takers across those two middle deciles.

Yet even AFTER California (the most populous state) very publicly repudiated the ACT/SAT, and other schools went test-optional in the wake of COVID, there are 1.9M SAT takers (not the ACT, JUST the SAT!) out of ~3.2M HS grads, or about 59% SAT-takers (versus the ~45% implied for BOTH the ACT and SAT from your chart).

And all this is against my less strongly held position (that >25% of HS grads in the bottom decile take the ACT/SAT, pre-COVID disruption). My more strongly held position (you’re correct, I don’t have data) is that would-be high-scoring (~1300+ SAT), low-income kids take the test at much higher rates.

(There are some other ways to potentially interpret the income-versus-test-taking chart, but it’s a dense chart and an even denser study, and if mtmind wants to defend the broad claim or interpret the somewhat narrower chart (and its underlying dataset), I’ll let him do so.)

1 Like

This is the concern. Getting in is not as important as thriving. I thought I read some report on how well TO applicants thrive (and I don’t necessarily want to use grad rates as a proxy for thriving, given what you point out). I recall another article discussing demographics in certain majors for first-years and then as graduates. It’s kinda depressing in terms of diversity goals. I’ll see if I can dig them up.

Sort of related, but this article talks about a Stanford Graduate School of Education study…

1 Like

I think some of the willingness of all sides to battle over admissions standards for elite universities is due to too much focus on outputs versus inputs. I don’t fully buy into the Dale-Krueger arguments (that college selectivity is nearly meaningless - it’s all about the strength of the incoming freshmen), but I’m at least partly on board with that idea.

Admitting a kid who is at the 25th or 10th or 2nd percentile of Harvard/MIT standards, academically, won’t get that kid 50th percentile outcomes, long-term.

But I do think there are advantages to clustering high performing students, and that a noisy/leaky admissions process waters down the class, hindering basically all parties involved - the rest of the class, the faculty, and, at least in some cases, the low-quality admits (academically far below their peers).

The (quite possibly Asian) kid who is the highest ranked, non-admitted applicant on an academically-oriented stack rank system would likely benefit more from being in that class than the admitted kids who fall far short on such systems.

3 Likes

But we don’t have data showing this is happening, at least at the highly rejective schools, including MIT. If schools have this data, I would hope they share it, and if they don’t, shame on them.

I don’t need to defend anything. You asked for more info on the source of the data mentioned in the quote, so I found it for you. While I don’t agree with all of the conclusions drawn from the data, I have no reason to doubt the validity of the sample. If you want to disbelieve it based on I-don’t-know-what, then knock yourself out. I’m done with this particular exchange, which seems to me to be pretty irrelevant, so feel free to have the last word.

Well, I’m thinking somewhat hypothetically. We can’t readily create an RCT where Harvard #1 admits under the current system and Harvard #2 shifts to a more test+GPA+rigor-driven system. In theory, we might be able to see some interesting results, years down the line, from the likes of Berkeley and the other UCs about the before/after periods, but a lot else has been changing over the last 5-10 years, so disentangling the testing-related effects would be hard.

I WOULD love to see outcome-based analytics that the colleges do (presumably) have (across many dimensions), but doubt they’d be too eager to make the data public.

Thanks for sharing. I have so many questions… Is the proportion of non-performers higher than it had been? Or are more students switching out of engineering than historically? Were the non-performers test-optional applicants? Hopefully your school is collecting the data. I assume this is a highly selective school with ample applicants to choose from?

One other point I don’t think I’ve seen mentioned in this thread.

Using college GPA (first-year or full-spectrum) as an outcome measure when studying HS GPA, the ACT/SAT, etc., has some issues of its own.

Just as D1 athletes with questionable academic skills may be pushed into easier majors like P.E., sports management, sociology, or the like (with presumably softer grading standards), even non-athletes likely sort themselves, within a college, to some degree based on their own academic ability. The STEM fields likely get a higher share of high-test-scoring kids, but I’d venture that the classes within those disciplines often grade more stiffly, particularly on an ability/effort-adjusted basis. For example, the 1350 SAT kid may get through an Ivy with a 3.6 GPA, but more likely in a humanities major, skipping organic chem and diffEq.

A stronger analytic method would adjust college GPAs by class (combining a per-class curve with, perhaps, an Elo-type system) to arrive at a measure of college grades adjusted for class difficulty. A college could probably do this internally, but my guess is that in many/most cases the output would not be politically palatable, and thus I’m not holding my breath waiting to see such studies…
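Just to make the idea concrete, here is a minimal sketch (Python, with made-up toy grades; the student and course names are invented, and this is not any college’s actual method): nudge each course’s grade-point “handicap” toward the average gap between its students’ overall adjusted performance and the grades they earned in it, then report each student’s difficulty-adjusted GPA.

```python
from collections import defaultdict

# Toy records: (student, course, grade points on a 4.0 scale). Purely illustrative.
records = [
    ("alice", "MATH53",  3.3), ("alice", "HIST207", 4.0),
    ("bob",   "MATH53",  2.7), ("bob",   "ENGL45",  3.7),
    ("cara",  "HIST207", 3.7), ("cara",  "ENGL45",  4.0),
]

def difficulty_adjusted_gpas(records, rounds=50, k=0.3):
    difficulty = defaultdict(float)  # course -> grade-point "handicap"

    for _ in range(rounds):
        # 1) Each student's strength = mean of their difficulty-adjusted grades.
        sums, counts = defaultdict(float), defaultdict(int)
        for s, c, g in records:
            sums[s] += g + difficulty[c]
            counts[s] += 1
        strength = {s: sums[s] / counts[s] for s in sums}

        # 2) Nudge each course's handicap toward the average gap between its
        #    students' strengths and the (adjusted) grades they earned in it.
        gap_sum, gap_n = defaultdict(float), defaultdict(int)
        for s, c, g in records:
            gap_sum[c] += strength[s] - (g + difficulty[c])
            gap_n[c] += 1
        for c in gap_sum:
            difficulty[c] += k * gap_sum[c] / gap_n[c]

        # 3) Center the handicaps so the adjusted scale stays GPA-like overall.
        mean_d = sum(difficulty.values()) / len(difficulty)
        for c in difficulty:
            difficulty[c] -= mean_d

    return strength, dict(difficulty)

strengths, handicaps = difficulty_adjusted_gpas(records)
print(handicaps)   # MATH53 should end up with a positive handicap in this toy data
print(strengths)   # each student's difficulty-adjusted "GPA"
```

(An Elo purist would update after each head-to-head comparison rather than off a class-average gap, but the averaged version is easier to reason about and gets at the same idea.)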

4 Likes

Some schools have made the data public after they went test optional, including Ithaca, Bates, and DePaul, as I think data10 mentioned above. Generally no meaningful or practical differences in outcomes between the groups.

I was thinking of a lot more than just performance outcomes between test-reporters and non-reporters at, for example, Bates.

FWIW, while Bates’ info is interesting, I think we should be careful about drawing too strong of an inference from that data versus someplace like Cal-Berkeley or the Ivies going test-blind/optional.

Yes, I believe in fit :slight_smile: An acquaintance was a late bloomer who kind of sauntered through HS and went to a not-too-“prestigious” LAC. He found himself there and excelled, all the way to a Stanford PhD. His story, and others like it, make me support fit over prestige, especially if there is a hint of being underprepared. There was a PBS documentary about inner-city kids that aired over a decade (or two!) ago. One boy endured being picked on for being bookish, all the way to great grades and a spot at MIT… where he floundered due to being underprepared.

I still have trouble thinking about his story. I still think the solutions lie in the lead-up to college apps and testing (i.e., K-12 education), not in the application process or the acceptance to a “top college”.

2 Likes

Class difficulty is not the same for each student. There could be some math majors who would find an English literature class to be more difficult than a math class, for example.

2 Likes

Indeed. But I think an Elo-style analysis would be helpful. How do the students of HIST207 perform relative to each other, and how strong can we assess the fall 2023 cohort of HIST207 to be, based on how those students have performed in their other classes?
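A rough sketch of what that could look like (again toy data and invented names, with a hypothetical HIST207 fall-2023 roster, not anything real): take everyone enrolled in the target course, average what each of them earned in all their other classes, then average across the cohort.

```python
from collections import defaultdict

# Toy records: (student, course, grade points). All made up for illustration.
records = [
    ("dee", "HIST207", 3.7), ("dee", "MATH53", 3.3),
    ("eli", "HIST207", 3.0), ("eli", "ENGL45", 3.3),
    ("fay", "MATH53",  4.0), ("fay", "ENGL45", 3.7),
]

def cohort_strength(records, target_course):
    """Average, over students enrolled in target_course, of their mean grade
    in every *other* course: a crude read on how strong that roster is."""
    enrolled = {s for s, c, _ in records if c == target_course}
    elsewhere = defaultdict(list)
    for s, c, g in records:
        if s in enrolled and c != target_course:
            elsewhere[s].append(g)
    per_student = [sum(gs) / len(gs) for gs in elsewhere.values()]
    return sum(per_student) / len(per_student) if per_student else None

print(cohort_strength(records, "HIST207"))  # strength of the hypothetical HIST207 roster
```

A real version would use the difficulty-adjusted grades from the earlier sketch rather than raw grades, plus term filtering, but the idea is the same.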

But yeah, varying performance by subject would dampen the accuracy somewhat (Johnny, who rocks term papers in his writing-oriented classes but bombs the math stuff, versus Bobby, who does the reverse).

Anyways, I suspect a more accurate (IMO) measure of college GPA, along these lines, would strengthen the after-the-fact measured accuracy of both HS GPA and ACT/SAT in predicting college grades (more so for the ACT/SAT would be my hunch).

I was reading his comment as being about required classes for a major, e.g., biochem majors at UCB taking the Physics 8 series (is it still called that?) vs. physics majors taking Physics 7. I wouldn’t treat their respective grades as being on equal footing.

(Eh, MWDadOf3 clarified, but yeah, it’s interesting to me to see how well different cohorts do in the exact same classes. Most studies, though, fall back on just GPAs or grad rates across all majors. Even within MCB, whether one takes the molecular bio or the biochem emphasis alters the difficulty for most MCB students.)

Well, we can look at some data. Let’s take a look at UC Berkeley, before and after test-blind admissions. Here’s a math class - Abstract Linear Algebra. The graph shows the average grade over all semesters; then I selected Fall 2019 (prior to test blind) and Fall 2022 (test blind). Yes, there are some uncontrolled variables here (such as different instructors), but I want to look at the assertion that grades are plummeting and students are failing now that test-blind admissions have been in full swing for a few years. From this chart, I don’t really see that.

Let’s try an engineering course. Here’s Introduction to Computer Programming for Scientists and Engineers. Still not seeing a dramatic rise in failures after test-blind admissions compared to before.

Well, unless you want to believe that, along with test-blind admissions, UC Berkeley decided to begin inflating grades and passing along students with little mastery of the material. I suppose that is possible; grades do appear to be going up. However, should we conclude that part and parcel of this grade inflation is passing and graduating students who have failed to grasp key concepts and are wholly incompetent in their fields? That would be some really irresponsible grade inflation. I would imagine the more likely inflation - if there is any - is that a B gets bumped to an A, and that may be so. But that is still different from accepting and passing along students incapable of doing the work. I don’t really see evidence of that here.

3 Likes

Agreed. It seems like the easy solution would be to encourage scores, but not report admitted or enrolled ranges. This would allow schools to readily admit students below the 25th percentile, but those admissions would not affect their metrics.

There is nothing inherent that I am aware of that would require them to report their admitted score ranges, other than leaving blanks on the CDS.

Is the proportion of non-performers higher than it had been? It’s anecdotal, but yes, some faculty have complained about it. College-level administrators saw the numbers and asked us about it. COVID disruption appears to be a plausible explanation, because the students who were in high school during COVID are now in our classes.

Are more students switching out of engineering than historically? I don’t know. I know some faculty who failed a few more students than they used to/wanted to, and some who lowered the letter-grade cutoffs to maintain a target average class GPA (I did this). Since students can repeat the same class a certain number of times, the switch-out rate may not increase noticeably.

Were the non-performers test-optional applicants? My school may have the data, but because test optional started at the same time as COVID, even with data it’s hard to say how much of the underperformance was due to test optional and how much was due to COVID. We might have to wait a few years (after the impact of COVID washes out) to get a clearer read.

I assume this is a highly selective school with ample applicants to choose from? No, it’s your typical, non-selective, big-sports state flagship, one whose 75th percentile SAT is in the same ballpark as UC Berkeley’s 25th percentile before it went test blind. (So @worriedmomucb, I did not write a separate reply to your post because we aren’t talking about student bodies of the same caliber.)

I believe that with both the SAT math score and GPA, AOs at a school like mine can make a more informed assessment of whether a STEM applicant has the minimum math proficiency to succeed (e.g., 650 or above in math? You’re good to go. Less than 550? Chances are you will struggle, so you will need to take some remedial classes if we admit you). A high GPA alone is too risky to place complete trust in.
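For what it’s worth, the kind of triage I’m describing is nothing fancier than a couple of cutoffs. A toy version (the 650/550 numbers are just my examples above, not any school’s actual policy):

```python
def stem_math_triage(sat_math):
    """Toy screening rule using the example cutoffs above; not any school's policy."""
    if sat_math is None:
        return "no score: rely on GPA and course rigor alone, with more uncertainty"
    if sat_math >= 650:
        return "good to go for the standard engineering math sequence"
    if sat_math < 550:
        return "likely to struggle: plan on remedial math if admitted"
    return "borderline: weigh GPA, rigor, and placement results"

print(stem_math_triage(680))
print(stem_math_triage(530))
```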

5 Likes