The Misguided War on the SAT

Interesting. I wonder if this is a change to address the advantages of “prep,” where there are various techniques for dealing with long passages, like scanning the questions first. Although this may discount the real skill of reading a complex passage and answering a question about one part in the context of the whole.

Currently waiting for posts decrying the level of the cap - whatever it is…

1 Like

Looks awful. It seems to me that, once again, higher-ups somewhere are trying to incorporate the “newest thing”, based on theories that are unsupported by evidence, and on the belief of some “consultants” in the infallibility of “science”, which, in their minds, means code in a computer.

One of the biggest issues with the use of AI, after the sheer magnitude of its carbon footprint, is that people in administrative positions have this weird belief that “algorithms” are some sort of scientific facts or objective natural processes, and that, despite reams of evidence to the contrary, AI is somehow “impartial” and “fair”.

Algorithms for AI are written by humans, and they are tested on specific databases. If those people are biased and/or the databases are biased, the results the AI spits out will be biased. Since the people who write algorithms are just as biased as anybody else, and the databases are extremely biased, I don’t blindly trust the College Board when it claims that everything is determined by “an algorithm”, as though that solves all problems.
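To make this concrete, here is a toy sketch (all numbers invented, nothing to do with the College Board’s actual system) of how a perfectly “neutral” rule inherits the skew of its training data:

import random

random.seed(0)

# The "algorithm" here is just a cutoff learned from training data. If one
# group makes up 95% of that data, the learned cutoff reflects that group's
# distribution, not the whole population's.
group_a = [random.gauss(600, 80) for _ in range(950)]  # over-represented group
group_b = [random.gauss(520, 80) for _ in range(50)]   # under-represented group

cutoff = sorted(group_a + group_b)[500]  # "objective" median cutoff

# The rule contains no group labels, yet it flags most of group B anyway.
share_b_below = sum(s < cutoff for s in group_b) / len(group_b)
print(f"learned cutoff: {cutoff:.0f}; share of group B below it: {share_b_below:.0%}")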

So long as that algorithm hasn’t been tested, or at least the College Board isn’t telling us how it was tested for bias, I do not see how the results can be trusted.

Once again, people (in this case the people at the College Board) keep on believing that the newest technology is a magical solution for all that ails their industry. The less they understand the technology, the more magical they believe that the technology is.

THIS is an issue as well:

If practicing taking these particular tests increases the score that one gets on the test, that means the test increasingly measures one’s skill at taking these particular tests, rather than mastery of the material the test covers.

Well, some people seem to believe that the fact that Harvard is reinstating the SAT requirement is proof that the use of the SAT is Objectively Good for low-income applicants. That seems to be proof that some people believe Harvard is making decisions for the good of low-income students, no?

I’m waiting for posts decrying the fact that there is an “easy” version: “Kids today are soft, with easy versions and caps. When I was young, we had to do our SATs walking uphill in the snow, and still had to do all the most difficult questions!!”

Not necessarily. If one practices the types of math and the structure of questions that are covered on the SAT, for example, they may get a higher score (plenty of research shows this to be true).

My guess (I don’t know for sure) is that CB specifically says that being familiar with the content and practicing the test and its pace/structure can impact one’s score because for years they maintained that was not the case :rofl: Which of course was ridiculous, even when CB called the SAT an ‘aptitude’ test :rofl:

As for “AI”… I don’t necessarily see the digital SAT as AI; it’s just adaptive, and adaptive (and digital) tests have been around for a very long time (the GRE went adaptive in 1993, the GMAT in 1997).

Adaptive testing is new to the SAT but has been a staple of admissions testing for decades. In 1993, the GRE introduced a computer-adaptive format, making it mandatory in 1997, the same year the GMAT shifted exclusively to computer-adaptive testing.

The reality is that many HSs, states, colleges, scholarship providers, and employers value SAT/ACT testing, and no amount of anti-testing sentiment is going to change that.

1 Like

True, but the use of algorithms that are untested for bias still carries the same risks. All one has to do is look at the dismal story of COMPAS.

That is true, and that, along with the fact that SATs make things simpler for AOs, will drive the decisions regarding the use of SATs for admissions.

I’m kind of surprised that they say the use of computers in GREs was already prevalent in 1993. It wasn’t really an option for me in 1994, though I wasn’t taking it in the USA. I know that GREs were being used for quals in many departments until the end of the 1990s, and those were the paper versions, so I guess the move to computers kind of passed us by. Of course, the use of GREs for admission to PhD programs is really ridiculous, since they don’t test for what is actually needed in a PhD program.

Not much of an algorithm here either. Sounds like a glorified If-Then-Else statement, as in the sketch below: if you score above a certain cutoff on the first part, you get steered to the more difficult second part; if not, you take the “easier” path.
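For what it’s worth, the whole routing “algorithm”, as described, fits in a couple of lines. A minimal sketch (the function name and cutoff value are made up for illustration, not the College Board’s actual numbers):

def second_module(part1_score, cutoff=15):
    # Above the cutoff: steered to the more difficult second part.
    # Below it: the "easier" path.
    return "harder" if part1_score >= cutoff else "easier"

print(second_module(20))  # harder
print(second_module(10))  # easier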

Of course posts may decry many things that are not true (it wouldn’t be the first time). But to be clear, there is no easy version of the test. It is designed to yield the same scores as the old test, but take less time because it is adaptive. Whether that actually happens, we won’t know until more results come in.

1 Like

I will still hold my opinion until they demonstrate that it works.

BTW, when I write “bias”, I mean any bias. Computer algorithms can result in some weird biases. Actually, human algorithms also regularly result in weird biases, but we are all very aware of those. We simply assume that computers are impartial and objective in their “decisions”.

GET OFF MY LAWN!!!

Hmmm…. Here is what I don’t understand about this statement: if my kid studies and does more practice problems for his AP Physics exam, he will also do better. The same is true for Honors Pre-Calc or whatever. Are you suggesting the test is trash because he did better than the classmate who never did the HW assignments?

Asking @MWolf not @Mwfan1921

5 Likes

Excuse me as I adjust my tin foil hat, but if all forms of bias are your major concern, there are many more admissions points where bias is more likely. AOs, for instance - they are surely biased. LORs from teachers? …

1 Like

AI is created by humans, and “AI” is just a super-fancy marketing way to hide the fact that it is totally subject to the underlying biases of whoever created it. And we should all fear it because a human created it!

In a world that craves greater transparency, this new adaptive SAT has less transparency than ever before. Think of it this way: a student going in will know that if they are struggling in the first phase, they will get an easier second phase, which will only “shaft” their score anyway.

When someone tells you “adaptive testing is not something to be feared”…guess what!

They are specifically writing about “practice SATs”, not “studying for the different subjects on which the SATs are based”. For AP tests, you study the material for that particular subject. As far as I understand, the AP tests use the same exam format as the courses do, but I may be wrong. However, it is well documented that the SAT format does not match the formats of the exams for the various subjects the SAT covers. If practice with the SAT’s format is a strong determinant of the SAT score, that reduces the ability of the SAT to test for mastery of the topics it includes.

Of course, AP tests have their own issues, and I have heard from many faculty members that students who use AP test scores to place out of prerequisites are not as well prepared as students who take the prerequisite courses that the college offers. But we are talking here about SAT tests, and AP tests are another story entirely.

A billion dollar industry. And who is trying to save it suddenly?

Pretty dated Covid material.

Is the argument that, since the SAT reveals shortcomings in our K-12 education system, it should not be used… to place students into rigorous, and for some extremely rigorous, academic environments?

So you’re saying “admissions are already biased, so what’s a little more bias between buddies?”

As for bias in algorithms: I am married to a nationally recognized expert in CS, Data Science, and AI, and she speaks extensively about the problem of bias in computer algorithms and how it affects our day-to-day lives. But I guess that heeding warnings from experts is just being paranoid, based on your personal opinion, right? After all, the people who actually produce those algorithms, who analyze the results, and who actually understand CS, math, and AI have absolutely NO idea about whether we should worry.

1 Like

I think you are looking at this the wrong way.

If there are multiple elements of bias in the admissions process, aren’t you doing more harm than good if you attack (and remove) the least-biased aspect?

6 Likes

Actually, I think that I have enough depth in those areas to engage in a discussion. :stuck_out_tongue_winking_eye:

I maintain that a simple adaptive test like the SAT has little chance of SW-induced bias. If there are other AI or algorithmic issues involved with the SAT, please let me know. We are talking about the adaptive SAT, right?

I have no doubt that this version of the SAT will be derided by the same people that were opposed to the paper and pencil versions - and for the same reasons. I think it gives a good snapshot of students and helps rather than hinders good admission decisions.

2 Likes

It seems people look at the scoring results and, for those who score poorly, claim bias.

If, for example, instead of multiple-choice questions, students were asked to solve math equations, and certain people didn’t do well, then people would try to scrap that and claim there’s bias in that test because people didn’t have the same math teachers in 3rd grade.

If everything is biased, are there any Math or English (or any academic) problems that are unbiased?

The earlier link makes it sound like the “AI” in the new SAT essentially uses the following logic: if the student does poorly on part 1, they are directed to an easier part 2; if the student does well on part 1, they are directed to a harder part 2. If so, calling this “AI” is purely for marketing reasons. This isn’t AI. It is a trivial if/else decision.

# Pseudocode: examA is the easier second module, examB the harder one
exam2 = examA if exam1score < threshold else examB

In earlier posts in this thread, I’ve mentioned that one of the problems with the SAT is a low ceiling, particularly on the math section. This can result in large drops in score for careless errors if the student happens to get an easy exam (some SAT forms are easier or harder than others). For example, in another post I looked up the raw-score conversion for an easier math SAT year. On that test, making 3 careless errors dropped the score from 800 to 720 – less than the 25th percentile for typical Ivy+ colleges (MIT’s 25th percentile was 790 pre-COVID), and low enough that some admissions officers might question how well the student is qualified for a math-heavy major.
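As a toy illustration of how compressed the top of the scale is (only the 800-to-720 drop for 3 missed questions comes from that conversion table; the intermediate values here are invented):

# Hypothetical raw-to-scaled lookup for an "easy" 58-question math form
raw_to_scaled = {58: 800, 57: 770, 56: 740, 55: 720}

raw = 58 - 3  # three careless errors
print(raw_to_scaled[raw])  # 720 - below many Ivy+ 25th percentiles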

If implemented well, an adaptive SAT has the potential to greatly improve this problem. However, I am quite skeptical about it being implemented well for this purpose, and there are potential issues with students not being directed to the examA/examB option that would result in their maximum score (try a few games on Lumosity for an example). I think a test with a higher maximum ceiling would be more useful, which is not what the digital test does.

The impact on biases is unclear. A new computerized format with an adaptive decision threshold would probably favor students who have more practice with the new computer exam format and more instruction about how to increase the chance of getting the better second-part exam for their score. How significant that effect is remains unclear.

1 Like