The Misguided War on the SAT

Again, evaluating transcripts/rigor/high school profile without test scores is not difficult. Can it be difficult to ‘shape’ a class? Absolutely.

Again, I don’t understand why some posters think they know better than enrollment leaders at each school. It’s obvious they don’t, especially as most have zero experience reading college apps.

Signed, someone who reads apps all day.

6 Likes

I don’t really know better than Stu, and Stu says he needs the scores. :man_shrugging:

Yep, that’s for MIT and does not apply to other schools. Seems simple.

5 Likes

I was replying to a comment specifically about MIT.

Then not sure why you tagged me, I didn’t mention MIT.

I don’t believe I have; I was responding to bluebayou’s comment above.

The world is not so simple as to say that “tests are always useful” or that “tests are never useful”. And that’s not what Vigdor’s model is saying either.

Backing up a bit, people who have worked with data extensively understand that it’s usually messy. You rarely get absolute truths that apply across an entire population. Instead, you often get really good insights on a few subsets, and from that you try to come up with reasonable rules to be applied to a larger segment of the population.

One of the key things that Vigdor’s model said is that the school attended matters a great deal in predicting success in college. I have long said something similar in lay terms, in that the tests are not very useful for students already excelling in a rigorous school, as the transcript tells the AOs what they need to know.

We also know from his model that while the explanatory power of the SAT decreased, it decreased to about the same level as HSGPA. But consider what that actually means: for the students who excelled in a rigorous school, that explanatory power is probably close to zero. Given that, the SAT must have had more explanatory power for students who did not excel in a rigorous school. Colleges like MIT have found that one such category is the Diamond in the Rough.

I believe colleges like Bowdoin when they say that tests have no explanatory power for their students. But my question, because I want to better understand, is why that’s the case. Because again, the more pockets you find where the test has no explanatory power, the more pockets there must be where it has increased explanatory power. How do we identify those pockets?
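
A minimal sketch of that pooling effect (all numbers invented, not any school’s data): two hypothetical subgroups, one where the SAT adds nothing beyond the transcript and one where it adds a lot. The pooled incremental R² lands in between and hides the difference between the pockets.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # students per hypothetical subgroup

def r2(X, y):
    """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1 - np.var(y - X1 @ beta) / np.var(y)

def make_group(sat_weight):
    """College GPA = transcript signal + sat_weight * (the part of the SAT
    the transcript doesn't already capture) + noise. Numbers are made up."""
    hsgpa = rng.normal(0, 1, n)
    sat = 0.7 * hsgpa + rng.normal(0, 0.7, n)  # SAT correlates with transcript
    extra = sat - 0.7 * hsgpa                  # SAT info not in the transcript
    col_gpa = hsgpa + sat_weight * extra + rng.normal(0, 1, n)
    return hsgpa, sat, col_gpa

# Pocket A: SAT adds nothing beyond the transcript. Pocket B: it adds a lot.
groups = {
    "A (excelling at a rigorous HS)": make_group(0.0),
    "B (diamond in the rough)": make_group(1.0),
}
groups["pooled"] = tuple(np.concatenate(cols) for cols in zip(*groups.values()))

for label, (hsgpa, sat, col) in groups.items():
    gain = r2(np.column_stack([hsgpa, sat]), col) - r2(hsgpa[:, None], col)
    print(f"{label}: incremental R^2 from adding the SAT = {gain:.2f}")
```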

1 Like

It’s true that MIT chooses to require scores, but MIT is the exception rather than the rule. The overwhelming majority of other US colleges with a rigorous, math-heavy curriculum choose not to require scores. Caltech takes the opposite approach and chooses to go fully test blind, citing internal research finding that testing had little to no predictive power in evaluating success in Caltech’s core curriculum. Upon going test blind, the portion of Pell grant kids at Caltech nearly doubled. While that isn’t the same thing as lesser known high schools, it certainly doesn’t suggest that kids from lesser known HSs can no longer get into Caltech now that it is test blind.

Unlike Caltech, MIT was never test blind. In a test optional system, kids from lesser known high schools who have high scores are likely to submit those high scores. The college sees their high scores in both a test optional system and a test required system. However, I doubt those high SAT scores alone are a good indication of being qualified for either MIT or Caltech.

For MIT or Caltech level students, I’d expect the math SAT is essentially a series of short, simple, multiple choice algebra/trig type questions that have little direct relationship to the types of questions they’ll see in their math classes, which will generally be complex, not multiple choice, and emphasize math above the algebra/trig level, such as calculus and beyond. A high score alone on the math SAT does not indicate a student is well prepared for MIT/Caltech, but a low score could raise questions about whether the student is well prepared. For test optional/blind, it’s important to ask whether anything else in the application is likely to flag the kid who has not mastered algebra/trig type math, or whether only the relatively low math SAT score would do so.

Regarding what specific things an admission officer might look for besides SAT score, there are many options. I was accepted to MIT from a HS that I’d expect to be unknown to MIT. While I did get a high score on the math SAT, I doubt that’s the key area MIT noticed or what highlighted my math interest and ability. Instead it was things like continuing to take math at a nearby college after I had exhausted the math offered at my HS, and having a glowing LOR from my professor there, as well as one from my HS GC describing my background and the effort he and my teachers made to accommodate me. Or participating in Math League and interschool math competitions; my HS’s team didn’t win anything of significance, but it showed interest and ability. Or a similar pattern in math-adjacent fields, like CS and science, showing interest and ability, often both in and out of the classroom. Or the discussion about interests and motivations during the interview, which, as I understand it, is typically longer and more influential at MIT than at the vast majority of other colleges. There are many things a college can review besides just the test score or average HS GPA.

1 Like

Again with the same red herring. No one is claiming it is black and white. I’ve made this clear to you repeatedly. The question is whether the small gain in predictive value is worth it.

I think it’s more important to discuss why MIT’s and Caltech’s data are such that we reach conflicting conclusions. It is unclear why both sides think one disproves the other. Realistically, it appears we’re left with the slightly unsatisfying “it depends on the school” as the answer.

Which is why JonB (whom I used to follow and tried very hard to understand) falls short. He is too obstinate and dismissive, insisting that “only grades matter” or that SATs don’t matter… everywhere.

Similarly, if it depends on the school, and MIT, Yale, Brown, and Dartmouth have opinions on the utility of test scores, why are some so dismissive of the possibility that this could occur at other places? The data presented on the UCs are interesting (albeit requiring some level of scrubbing). That’s the stuff we’re looking for here. Sadly, the above institutions have not all disclosed their data. But it is apparently convincing enough for them to announce to the world: “the SAT or the ACT is the single best predictor of a student’s academic performance at Yale”. For those that do this for a living, please consider how convincing their data must be for them to say this, and with this conviction.

If we allow these results to convince us that indeed “it depends”, then an important question arises: what institutional characteristics imply that scores are useful?

Another point: even if the utility of testing drops off dramatically outside of the above schools, they are (and I believe represent a class of schools that are) amongst the most popular, sought-after colleges out there. Being dismissive of “one school” or “just a few schools” is turning a blind eye to an important phenomenon that many are invested in.

Finally, this does feel like a power struggle. Countries that have national tests really hand the “shaping of the class” to the applicants, not a committee. Decades ago, UC Berkeley was similar, if only for half the incoming class. Now applicants with 1600/4.0+ are left scratching their heads over their UC Berkeley admissions results.

Nowadays, while committees are very, very intentional in what they are doing, more and more applicants feel the process is quite “random” or reminiscent of a lottery. We have swung fully away from the national test model and strongly away from UC Berkeley’s policies in the 80s. Why? Institutions have chosen (or some have possibly felt obligated) to have more control over the execution of their societal role. It is their prerogative, just like it is their prerogative to burden some with helping improve access financially for others. Those opposed to all this also have a choice not to attend or support these particular schools.
But they still apply… again and again.

This is not an obsession with testing. It’s an obsession with these schools. I believe this, not testing, should be our foe. Again: the founders of Google did undergrad at (great) state schools. Stop giving Stanford credit for their development. Former Stanford prez Hennessy is a Villanova/Stony Brook product. If you’ve met the man, you’d be proud to say you went to the same schools he did. But no such academic praise for ’Nova or Stony Brook.

7 Likes

Caltech’s lowest math course is calculus with proofs (closer to real analysis), while MIT’s lowest math course is like regular calculus, but accelerated (one semester instead of the more typical one year). This means that the minimum level of academic strength at Caltech is higher, to the point that the SAT is not relevant in determining whether an applicant meets it (other indicators are necessary and sufficient for this purpose). The minimum level of academic strength for MIT is at a level where the SAT can be relevant.

That does not mean that peers have similar opinions. Indeed, back in 2009, Harvard’s admission director said that the SAT was less predictive than AP or IB, SAT subject tests, or HS GPA.

That was also a time when UCs were generally much less selective.

Also, countries with national tests tend to have those more tied to school curricula and with higher ceilings than the SAT.

While it’s always interesting to dig into the data and hear why specific schools do or don’t find value in test scores, I think my larger concern is captured by @Vulcan’s post, which has been rolling around in my mind all day. The devaluing of the SAT is just one way that we as a society are moving away from objective evaluation and measurement of subject mastery (grade inflation and resistance to standardized tests in K-12 more broadly are also examples of this).

Increasingly, when we don’t like the results—who is scoring high/low on tests, which types of students are being admitted to college, what the results reveal about the state of education in our country—we blame and discredit the instrument or the process and claim that the results themselves are thus invalid.

The statement from Vulcan that “…the gap persists downstream” seems to get lost in the shuffle. Denying the results of tests doesn’t change the outcome any more than refusing to take a pregnancy test means you aren’t pregnant. It just makes denial easier and doesn’t empower us as a society to address the problem.

It’s much easier to disparage the SAT and the value of the scores than it is to address the deeply complicated issue of why some groups consistently outperform others and what to do about it.

I don’t think most people are claiming that SATs should be the only factor or even the most important factor for students. Regardless of their use in college admissions, by dismissing the results as “SATs just measure wealth” and “rich kids prep so they do better”, we risk missing the critical discussions about education and what the differences in scores (SATs and other standardized tests) are telling us about the state of education and student readiness for college and job performance in a service-based economy.

12 Likes

I agree, but most schools would fall on MIT’s side of the fence here, including Bowdoin and Bates. Something more is needed to differentiate when tests matter or not.

That does not mean that peers have similar opinions. Indeed, back in 2009, Harvard’s admission director said that the SAT was less predictive than AP or IB, SAT subject tests, or HS GPA.

It also doesn’t preclude them from having similar opinions, nor does it preclude them from changing their stance. In this latest era of grade inflation, it may be time to reassess.

That was also a time when UCs were generally much less selective.

Also, countries with national tests tend to have those more tied to school curricula and with higher ceilings than the SAT.

Agree with both. I would say most schools were generally much less selective back then compared to now. And yet UC Berkeley was arguably considered above certain schools that are “ranked” higher than it now. It certainly wasn’t lacking in very well regarded/ranked departments then.

Wrt national tests: yes… and to set aside any spite for the SAT itself vs. testing in general… would a different test, like a national test, allow anyone here to support using testing? And the SAT’s ceiling was lowered, so would raising the ceiling on it change anyone’s mind?

1 Like

In context I don’t think that quote is saying what you suggest. From the same paper, slightly above:

Most models show a slight gain in predictive power with tests.

A slight gain in predictive power. That’s exactly where we are after this nytimes article. A big nothing.

Go back and look at what I was addressing. It is consistent with what JonB has written with regard to the predictive power of the tests.

1 Like

Similarly, I have no idea why you feel the need to characterize JonB differently from his own words. Does “meaningless practically” truly read as holding the SAT’s utility to be practically anything other than zero? If we’re arguing over that, then I’m going to bin this as being in the weeds. Sorry if my commentary interrupted your exchange with hebebebe.

Wrt the nytimes article, I’d suggest that a few schools disagree with JonB. At this stage, I’m not convinced he has the goods over them, at least for their schools.

(Hm, you edited your post before I finished my reply…. It feels a little disjointed, but I’m just leaving it.)

Slight gain in predictive power.

Goods over them? You seem to be mischaracterizing his position and MIT’s :roll_eyes:

1 Like

Yale and Brown (and Dartmouth too). I think I’m gonna let this exchange with you be. I don’t discount JonB because of his background. Just his behavior and arguments.

Shall I list all the schools that are TO and/or Test Blind? It’s a longer list. :roll_eyes:

1 Like

It’s not as simple as black and white – either tests matter or tests don’t matter. The more relevant question for test optional is what SAT/ACT adds beyond the combination of alternative measures that would be used to evaluate test optional applicants for admission.

Suppose one study says SAT in isolation is the strongest available single predictor of first year GPA, stronger than HS GPA in isolation. Another says SAT adds little to the prediction of college GPA beyond the rest of the application combined. These statements do not conflict with one another. Both statements can be true for the same college, and both statements are not far from what the Ithaca study found in their analysis.
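
A toy simulation of how both statements can hold at once (all numbers invented, not the Ithaca study’s data): every measure is a noisy read on the same underlying preparation, the SAT happens to be the least noisy single read, and yet it adds little once the rest of the application is already in the model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000  # hypothetical applicants

# One underlying "preparation" factor drives everything; each measure observes
# it with its own noise, and the SAT happens to be the least noisy measure.
prep   = rng.normal(0, 1, n)
sat    = prep + rng.normal(0, 0.6, n)
hsgpa  = prep + rng.normal(0, 0.9, n)
rigor  = prep + rng.normal(0, 1.0, n)   # AP/IB course rigor
lor    = prep + rng.normal(0, 1.1, n)   # LORs, essays, ECs, etc.
fy_gpa = prep + rng.normal(0, 1.0, n)   # first-year college GPA

def r2(X, y):
    """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1 - np.var(y - X1 @ beta) / np.var(y)

# Statement 1: SAT is the strongest *single* predictor of first-year GPA.
for name, x in [("SAT", sat), ("HS GPA", hsgpa), ("rigor", rigor), ("LOR", lor)]:
    print(f"{name} alone: R^2 = {r2(x[:, None], fy_gpa):.2f}")

# Statement 2: SAT adds little beyond the rest of the application combined.
rest = np.column_stack([hsgpa, rigor, lor])
print(f"rest of application: R^2 = {r2(rest, fy_gpa):.2f}")
print(f"rest + SAT:          R^2 = {r2(np.column_stack([rest, sat]), fy_gpa):.2f}")
```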

This also makes the importance of SAT in a test optional system dependent on what evaluation system is used for test optional applicants, when SAT is not available. Is it primarily based on GPA? Is the college doing a holistic review with course rigor (AP/IB), LORs, ECs/awards, essays, …?

It also depends on how you are defining “matter” – adds at least ??? to predicting first year GPA? final GPA? whether the student switches out of planned major? whether the student graduates? grad school, income, or other post college outcome?

The influence of these measures will also vary from student to student, so it depends on which groups of students the college chooses to admit test optional, which varies from college to college.

It also depends on the application pool, including things like self-selection and restricted range. It also depends on what the college does to support students who come from varying HS backgrounds, with varying degrees of preparation.
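
Restricted range alone can do a lot of the work here. A quick hypothetical sketch (made-up numbers): if only the top slice of scorers ends up enrolling, the correlation the college observes among its own students is far weaker than in the full applicant pool, even though nothing about the test changed.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50000  # hypothetical applicant pool

sat = rng.normal(0, 1, n)
fy_gpa = 0.5 * sat + rng.normal(0, 1, n)   # outcome if everyone enrolled

admitted = sat > np.quantile(sat, 0.90)    # only the top decile of scorers enroll

full_r = np.corrcoef(sat, fy_gpa)[0, 1]
restricted_r = np.corrcoef(sat[admitted], fy_gpa[admitted])[0, 1]

print(f"SAT vs GPA correlation, full applicant pool: {full_r:.2f}")
print(f"SAT vs GPA correlation, enrolled students:   {restricted_r:.2f}")
```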

I could continue, but it’s easy to see how different colleges could give different statements about the value of the SAT/ACT. More relevant is to review what specific comparison or evaluation is being made and its result, rather than a simple grouping of either “matters” or “doesn’t matter”.

3 Likes

Very fair. I might have agreed to it above. I will say that if the weighting coefficient for testing were 0.01% over the rest of the application components, Yale probably would not have made their statement public unless they desired to create consternation at the levels seen at times in this thread.

(The black/white point might be an artificial/judgment trip point where an admissions department feels it is beneficial to encourage/recommend score submission? Just throwing it out there. It would be a digital wrapper around an analog value.)

Although the thread is about the SAT/ACT, I wish it were more about testing. I’d like to separate out the sentiments towards the tests, the College Board, etc. and focus on what kind of testing would work… if any. I’d wager most of the SAT/ACT supporters would support some kind of testing. A test of content vs. (something else) would be fine, as would one with a sufficiently high ceiling. I have a feeling there is a camp that dislikes any testing at all, but it would be interesting to see if some of the “centrists” would go one way or the other if we changed things vs. removed them.