The Misguided War on the SAT

Yeah, that’s my son.
Get one more math question right and he scores 720 instead of 690. That was the curve on his specific test. I can only imagine the number of careless errors on the ACT, where speed and efficiency were measured (for him) more than knowledge.

So Students A and B write the exam on the same day. Both get scored, but each will (possibly) have written a different exam. How do the scores ever really become “comparable”?

1 Like

At a more macro level, as the SAT goes “adaptive”: who or what firms are involved in writing the algorithm, and what security features are in place to ensure no one can game the system? A reminder that the SAT has long been plagued with security breaches in its administration of the exam. How do we know for certain that someone “selling” practice exams or a test-prep service doesn’t have some access to the inner workings of the exam itself? Someone has to write the questions, don’t they?

Students A and B apply to the same college, but with different readers for their application. Using your logic, since each reader will look at an application differently, how could holistic admissions ever be fair?

4 Likes

I am not sure what you are getting at.

Cheating is much less likely on the digital SAT. There is no paper test that can be procured before test day, which has historically been a common cheating method, especially overseas.

Every digital SAT that is generated is different, drawing questions from the large pool of existing questions (tens of thousands of possible questions to choose from, if not hundreds of thousands). Each test taker gets different questions in each module, and the second module is the ‘easy’ or ‘hard’ version, again with different questions.

The digital SAT also randomizes the order of each question’s answers each time the question is used. So…there is no way to anticipate what questions one will see. One also can’t cheat by looking at their neighbor’s answer sheet because the neighbor isn’t answering the same questions and even if they were, the answers would likely be in a different order. Students will still be able to cheat by providing fake IDs/hiring someone else to take the test for them.
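
To make the “every test is different” point concrete, here is a toy sketch of how a form could be assembled by sampling from a question pool and shuffling each question’s answer order. This is purely illustrative and not College Board’s actual code; the pool size, module length, and question format are made-up assumptions.

```python
import random

# Hypothetical item format: (prompt, list_of_choices, index_of_correct_choice).
# The real pool reportedly holds tens of thousands of questions; 10,000 is a stand-in.
QUESTION_POOL = [
    (f"Question {i}", ["A", "B", "C", "D"], random.randrange(4))
    for i in range(10_000)
]

def build_module(pool, size, rng):
    """Sample a set of distinct questions and shuffle each one's answer order."""
    module = []
    for prompt, choices, correct in rng.sample(pool, size):
        order = list(range(len(choices)))
        rng.shuffle(order)                            # randomize answer positions
        shuffled = [choices[i] for i in order]
        module.append((prompt, shuffled, order.index(correct)))  # track where the key moved
    return module

# Two students testing on the same day get different questions, in different orders.
student_a = build_module(QUESTION_POOL, size=27, rng=random.Random())
student_b = build_module(QUESTION_POOL, size=27, rng=random.Random())
```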

I am not much of a CB fan, but the digital SAT is in many ways a vast improvement over the paper and pencil test. Students like it better too because it’s nearly an hour shorter.

Here are some resources for you that discuss how questions are created and how test validity is established, digital SAT research, and Compass Prep’s summary. Happy reading.

2 Likes

So how is scoring compared between two students? Students A and B, in theory, start at the same point, but if A struggles with the first part, A now gets a different set of questions and a lower ceiling to work with. So basically, the first part of an already shorter exam is make or break for a student.

The scoring is the same as it was for the paper test (200-800 per section; the digital SAT has been validated against the paper test). https://research.collegeboard.org/media/pdf/Digital_SAT_Score_Relationships_with_Other_Educational_Measures.pdf

One difference in the scoring for the digital test is that questions carry different weights, whereas on the pen-and-paper test all questions were worth the same. If you read the various studies and validity info I linked above, that should help you understand the test more. I am not an expert on testing.

Right. And I see no problem with that because they would have had the same/highly similar score on the paper SAT test.

You can think of the first part as containing questions with large variance in difficulty—several very hard ones, several very easy ones, and the rest somewhere in between. If a student gets almost all very hard questions wrong, almost all very easy questions right, and a mixed bag for those in between, the second part will contain moderately difficult questions aimed at fine tuning the assessment of the student’s skill level. If the student gets nearly every question right in the first part, the second part will contain additional very hard questions aimed at fine tuning the assessment at a higher skill level.

It’s like how optometrists perform eye exams: if you couldn’t read the tiny letters during the initial screening, there’s no point in asking you to keep trying. Near the end of an eye exam, when your optometrist asks “is this clearer, or that?” while placing slightly different lenses in front of your eyes, that’s the second part, intended to get an accurate prescription after a rough estimate from the initial screening. Hence the term adaptive. There is no magic.
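
For anyone who wants the routing spelled out, here is a minimal sketch of the two-stage idea under made-up assumptions: the 70% threshold, the module names, and the raw-count scoring are invented for illustration, and the real test uses item-response-theory scoring rather than a simple cutoff.

```python
def route_second_module(answers, key, threshold=0.7):
    """Toy two-stage routing: a strong first module sends the student to the
    harder second module, which carries the higher score ceiling."""
    n_correct = sum(a == k for a, k in zip(answers, key))
    fraction = n_correct / len(key)
    # Like the eye-exam analogy: a rough first estimate decides which set of
    # "lenses" (question difficulties) is worth trying in the second stage.
    return "hard module 2" if fraction >= threshold else "easy module 2"

# A student who got 20 of 27 first-module questions right (about 74%) is routed
# to the harder second module for fine-tuning at the higher skill level.
print(route_second_module(["A"] * 20 + ["B"] * 7, ["A"] * 27))
```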

3 Likes

That is one of the problems - that it’s a snapshot. Colleges don’t need to know how well a student does on one test, which they may have focused on for years. They need to know how well a student does in the weekly readings and homework, in the midterms and finals and quizzes.

It’s like testing how well people will run a marathon by having them do a 20 yard sprint.

I will reiterate, though, that my issue with the SATs has always been that:
A. I don’t think the scores demonstrate what the test’s proponents claim they are demonstrating, and
B. At “elite” colleges, they are assigning way too much importance to differences that really don’t mean anything.

The SATs are a fairly crude indicator of mastery of the material, not an accurate predictor of a person’s college performance. I agree that a person who has a 4.0 GPA but cannot get over 1000 despite repeated tries either lacks more than average mastery of the material, or has an undiagnosed reading or test-taking disability that needs to be addressed before they can consider starting a more challenging college/major.

However, you cannot actually assign differences in mastery of the material between two applicants with high GPAs and SAT scores of 1300 and 1400. Moreover, the higher the scores are, the higher the chances that any differences in SAT score are the result of day-to-day variation in a person, or of luck (which questions were not included by the College Board in the final SAT calculation). The narrow distribution of SAT scores among admitted students at “elite” colleges, and the fact that they are all crowded in the 1500+ range (prior to the pandemic), indicate that these colleges were considering differences of 20 points to be actionable for admissions.

PS. The issue with testing methods that I think will actually measure mastery of material is that the conditions that would allow this are also conditions that would make it practically impossible to eliminate cheating, and would be labor intensive to score. Things like short written answers, questions that require understanding of the material, allowing more time to think, allowing applicants to explain their answers, etc.

1 Like

Every test, quiz, and assessment is a snapshot in time. The SAT is a standardized snapshot common to all students. The grades they received in their HS (weekly readings and homework, in the midterms and finals and quizzes) are snapshots as well - but they differ in rigor from school to school.

The more data points - the more better :stuck_out_tongue_winking_eye:

4 Likes

Right, a 5-year snapshot is the same as a 2-hour one. :roll_eyes:

Managing a full course workload with ECs is way different than practicing for a particular test. I love the sports analogy of drafting a basketball player based on his free throw percentage on a given day. The SAT is like taking 30 free throws and being evaluated on that. Whereas a GPA (and courses taken) represents game film.

1 Like

How about both? Plus, is a 5-year snapshot where kids can get help from tutors/parents/friends/ChatGPT, can post assignments on Chegg and wait for solutions in hours (sometimes minutes), can get numerous second chances by turning work in late or redoing it after asking teachers, where an Algebra II test can be drastically different at different high schools, and where most everyone who tried gets an A at the end, really better than a 2-hour sampling under controlled conditions?

2 Likes

No one is saying use the SAT alone to draft players. We’re saying use SAT along with everything else.

Well, today is the NFL draft, so why do teams look at players’ game films AND their combine stats, especially their 40 time for skill position players? You want as much data as you can get.

1 Like

So you’re saying that colleges shouldn’t accept GPA at all?

Then again, if that is how kids get their grades in high school, they would also likely get their grades in college in the exact same manner. That would invalidate any correlation between, I don’t know, first year college GPA and, say, SAT scores.

You really cannot have it both ways. You cannot claim that grades in classes are the result of “help from tutors/parents/friends/ChatGPT” and of posting assignments on Chegg and waiting for solutions in hours, sometimes minutes (all of which they can do in college as well), and still use GPA, college or high school, as “proof” of anything.

Moreover, in many colleges, students can “get numerous second chances for turning in late or redoing after asking teachers” (professors, in this case); grading is vastly different between colleges, even between the so-called “elite” colleges; and grade inflation is a bigger issue in colleges, especially “elite” colleges, than it is in high school.

So first year college GPA says little, and therefore the fact that there is often a correlation between first year college GPA and SAT scores doesn’t tell us much either.

So a student who gets a good grade by doing all of those things has, in fact, nailed the requirements for getting good grades in college, and that is really what “being prepared for college” is all about, no?

Moreover, as the “Varsity Blues” scandal and the huge difference in the percentage of affluent kids who have test accommodations have demonstrated, those “controlled conditions” are a lot less controlled if the family has money.

1 Like

I said “how about both” in the very first sentence.

1 Like

But you also are making a case that GPA isn’t reflective of a student’s actual mastery of the material.

I said “why not both” and proceeded to ask a rhetorical question to make the point that GPA is as flawed a measure as the SAT, like all other measures. However, they complement each other, so taking as many of them as possible into account reduces the level of uncertainty when making admission decisions. There is no reason to single out GPA as flawed, just as there is no reason to single out the SAT.

2 Likes

GPA is not like game film. It is like stats with no context as to whether the points per game were earned against teams with 6’6” future D1 players or 5’9” people who can barely dribble.

Actually, that’s too generous. GPA is the equivalent of the coach giving the player a grade for each game and then averaging those grades.

2 Likes