The Misguided War on the SAT

…and different coaches (teachers) grade differently.

SAT/ACT is the same opponent for every player. Its grading is objective. With so much subjectivity in GPAs, the standardized test is a way to compare otherwise equal applicants (and applications).

“otherwise equal applicants”!!! :shushing_face:

College admissions offices can see GPA, courses taken etc… People who work in college admissions and who have handled “test optional” for years don’t seem to think there is a problem with it. It seems only the “elitist”, um, pardon, “selective” schools have an issue with it.

Categorize them however you want, but yes, schools with low admit rates would logically require more info with which to evaluate applicants than schools with higher admit rates.

1 Like

Colby and Williams are test optional with an acceptance rate south of 10% this year.

In the latest IPEDS year, there were more than 30 four-year academic-type colleges with <10% admit rate. ~80% of these low admit rate colleges are test optional/blind and ~20% of them are test required and/or have stated they will be in the future.

However, this doesn’t tell you much of anything relevant to whether the SAT should be required or not, aside from showing there is no universal consensus, regardless of the college’s admit rate. As has been discussed in other posts, there are a variety of reasons why different colleges may come to different conclusions about standardized testing requirements.

1 Like

Only since Covid. The claim was that schools had no problems being test optional for years. That may wind up being the case, but it is too early to know. Schools have just started reversing policy.

Bowdoin would be a highly selective school that has been test optional for a very long time. But there are not many like that.

Weirdly enough, I tend to agree with you, though I still think that true TO is better than an SAT requirement.

Colby had a <10% admit rate in 2019 (pre-COVID), but Colby was also test optional pre-COVID. While the majority of highly selective private colleges were test required pre-COVID, a significant minority were test optional, including Chicago, NYU (test flexible), Bowdoin, Colby, Wake Forest, Bates, Wesleyan, Pitzer, Brandeis, etc.

Pre-COVID, test optional was the minority position among highly selective colleges, much as post-COVID, test required is the minority. Pre-COVID it was big news on this sub when a selective college went test optional, much like post-COVID it is big news when a selective college goes test required.

1 Like

You keep saying this.

The universities that have gone back to standardized tests (and their own thorough analysis) say otherwise. I have yet to see evidence these universities are being deliberately deceptive.

Just because you personally want something to be true doesn’t make it true.

2 Likes

“SAT and ACT scores are highly predictive of academic performance at Dartmouth. This is consistent with previous research (e.g., Comeaux and Sánchez, 2020; Bettinger et al., 2013; Saboe and Terrizzi, 2019). It is also a key finding of the White Paper and a result we have reproduced in the data provided.”

1 Like

Why would you not want the one piece of data that measures something common across all students? Schools that were test required were already “contextualizing” scores (not a new thing). They could give scores whatever weight or place they felt was appropriate – I suspect scores were part of an initial hurdle decision along with GPA, rigor, and maybe a few other factors. Harvard certainly listed some score targets for its various Academic ratings.

The answer, it seems to me, is that they felt they could identify their desired students without that piece and rely on other indicia, but more importantly, TO gave them a competitive advantage: a wider pool of applicants and a lower admit rate, making them look more selective.

1 Like

It helps to use specific language rather than vague language like “not an accurate predictor of a person’s college performance” or “highly predictive of academic performance.” For example, how much does the SAT predict which specific measure of college performance?

In the Dartmouth example above, the metric they are evaluating is first-year college GPA. SAT in isolation explained 22% of variance in first-year college GPA. 22% is certainly significant, and you could argue that 22% qualifies as “highly predictive,” although that’s not the wording I would choose. However, “college performance” includes more than just first-year GPA. All similar studies I am aware of that evaluated other metrics of college performance found SAT in isolation is most predictive of first year (particularly first semester), notably less predictive of cumulative 4+ year GPA, and far less predictive of graduation rate. Some found SAT explaining <5% of variation in graduation rate in isolation.

It’s also important to emphasize that the numbers above are for SAT in isolation. The relevant question for test optional is not how much SAT in isolation predicts first-year GPA. It is instead how much SAT adds beyond the combination of other metrics used to evaluate test-optional applicants. At highly selective colleges, that combination is not just HS GPA in isolation; it also considers course rigor, classes taken, grade distribution at the particular HS, etc. How much SAT adds beyond those other metrics is far less than how much it predicts in isolation, and the specific numbers depend on the criteria a particular college uses to evaluate test-optional applicants. SAT/ACT can add negligible additional information (see the previously linked Ithaca study, where 25% of variation explained in isolation dropped to 1% additional information beyond the combination of other metrics) or can add significant additional information.
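To make the "in isolation vs. incremental" distinction concrete, here is a toy simulation, not the Dartmouth or Ithaca data. All numbers and variable names are invented; it just shows that when two predictors both proxy the same underlying ability, the second predictor's incremental variance explained (R²) is much smaller than its standalone R²:

```python
import numpy as np

def r_squared(X, y):
    # Fit ordinary least squares (with intercept) and return R^2,
    # the fraction of variance in y explained by the predictors X.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 5000
# Hypothetical latent "ability" drives both predictors and the outcome.
ability = rng.normal(size=n)
hs_record = ability + rng.normal(scale=1.0, size=n)  # GPA, rigor, etc.
sat       = ability + rng.normal(scale=1.0, size=n)  # test score
fygpa     = ability + rng.normal(scale=1.0, size=n)  # first-year college GPA

r2_sat_alone = r_squared(sat.reshape(-1, 1), fygpa)
r2_other     = r_squared(hs_record.reshape(-1, 1), fygpa)
r2_both      = r_squared(np.column_stack([hs_record, sat]), fygpa)

print(f"SAT alone:             R^2 = {r2_sat_alone:.2f}")
print(f"Other metrics alone:   R^2 = {r2_other:.2f}")
print(f"Incremental from SAT:  {r2_both - r2_other:.2f}")
```

Under these assumptions the SAT's standalone R² lands near 0.25, but its incremental contribution on top of the correlated HS record is only a fraction of that, the same pattern the Ithaca numbers describe.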

1 Like

A very rudimentary analysis. Interestingly, none of the authors actually work or have worked in college admissions. They used two data points, 2017–18 and 2020–21 (a COVID year), to make an assertion – why not more years? Also, they assume that “less privileged” kids with 1400+ SAT scores have a better chance of being accepted because those who submitted got in at a higher rate, WITHOUT considering that you may not be able to extrapolate that data. You also have to consider how many in that underprivileged category submitted scores because they had to for NCAA purposes and made it in less for their scores and more for other reasons. Again, a nice, overly simplified study that seems very quickly put together to support a predetermined outcome.

I realize that there will always be folks on one extreme or the other of this issue, though I would like Data10 to comment on some of the numbers we’ve seen.

Previously, Data10 presented some numbers that said 0.1% of those SAT takers in the lowest income quintile scored 1400+. Perhaps Data10 can estimate how many students that would be - how many total SAT takers were there from that year and what percentage were lowest income quintile.

In recent years, I think there have been about 2 million SAT takers. Even if students from every level of income took the SAT in equal numbers, that would be 400,000 low income SAT takers. Then if 0.1% of those 400,000 got 1400+, that is 400 students. We know that it is fewer than 400, though I’d like to understand the actual number.

All of the speculation here of the low income kid who gets a 1490, but could have had a 1550 if they didn’t have to take care of siblings while the parents worked two jobs - I’m not saying that kid doesn’t exist, because he probably does, but just that a kid like that is a unicorn.

Looking at the Dartmouth study, which I admit I did not read in full, they define “less-advantaged” as 1) first gen, or 2) from a neighborhood with median income below the 50th percentile, or 3) attended a HS in the top 20% of College Board’s challenge index. They estimate that there may be 1,000 less-advantaged students in a given year who score 1400+ but do not submit their scores.

I was previously confused as to how the schools say that there were lots of these disadvantaged students with 1400+ scores that went TO, when we knew that there were no more than 400 low income students with 1400+ scores. I think we may have assumed that less advantaged students were a much smaller group than the Dartmouth definition.

I am still in the camp of requiring the SAT, as I think it helps colleges that receive a lot of applications to quickly identify those who truly have no chance of admission. However, TO is here to stay at quite a few schools, so I would propose keeping TO, but defining TO to mean that the student opted not to take any tests. Some colleges used to require that you submit every test attempt, and that is what I would suggest. Realistically, kids can easily know - and most probably do - in what range they will score. There are plenty of practice tests available. You will not get an 1100 on practice tests and then magically score 1400 on the real thing.

Also - maybe I am imagining this because I can’t find my kids’ old SAT score reports - but I seem to recall that on the detailed student score report, College Board reported how you scored compared to others in your HS. Since College Board has this information, why not put it on the official score report to colleges, so there is no need for your HS counselor to provide it?

1 Like

That stat is from the Chetty study. It’s my understanding it refers to the 2011, 2013, and 2015 test years. The threshold is 1400 for SAT or 32 for ACT. The stat is based on the total population, rather than just those who took the test(s), so you can multiply by the number of students in the sample years. A crude estimate might be 3.5 million students * 20% per quintile * 0.1% meeting the score threshold = 700 lowest-quintile income kids with 1400+/32+.
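Spelling out that back-of-envelope multiplication (all inputs are the approximations stated above, not exact Chetty figures):

```python
# Crude estimate of lowest-quintile kids scoring 1400+ SAT / 32+ ACT.
cohort = 3_500_000       # rough students per sample year (assumption above)
quintile_share = 0.20    # by definition of an income quintile
rate_1400_plus = 0.001   # 0.1% of lowest-quintile students reach the threshold

estimate = cohort * quintile_share * rate_1400_plus
print(round(estimate))   # → 700
```

This is why the ~700 figure is higher than the earlier ~400 guess: the 0.1% applies to the whole quintile of the cohort, not just to students who sat for the test.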

Chetty notes that a relatively large portion of those estimated ~700 kids do attend Ivy+ colleges. Among kids above the 1400/32 threshold, 14% of lowest-quintile income kids attended Ivy+ colleges, compared to 10% of middle-quintile income kids, 42% of top 1% income kids, and an overwhelming majority of top 0.1% kids. While some Ivy+ colleges do give a boost for estimated lower income, I suspect much of this relates to other selection effects beyond just those at the college level. For example, the 0.1% of lowest-quintile income kids who score 1400+ on SAT/ACT may be more likely to attend selective HSs such as public magnets, where kids are encouraged to apply to Ivy+ colleges.

The specific stats are below:

Rate of 1400+ SAT / 32+ ACT scores:
Lowest Quintile Income – 0.1% of students
2nd Lowest Quintile – 0.3% of students
Middle Quintile Income – 0.55% of students
2nd Highest Quintile – 1.4% of students

98th Percentile Income – 11.2% of students
99.0 to 99.9th Percentile – 13.2% of students
Top 0.1% Income – 18.4% of students

If one wants to challenge a user on a past statement they made, please link the quote. Because if such a statement gets flagged, I’m deleting without researching.

It’s called equating. This is no different than kids taking the October test today and other students taking a different test in January. Completely different test forms, but still scored similarly.
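For intuition only, here is a toy sketch of the simplest equating method (mean-sigma linear equating). The College Board’s actual procedure is more sophisticated, and every score below is invented; the point is just that the same relative performance maps to comparable scores across two forms of different difficulty:

```python
import statistics

def linear_equate(x, form_x_scores, form_y_scores):
    """Map a raw score x on Form X onto Form Y's scale so that a score
    the same number of SDs above Form X's mean lands the same number of
    SDs above Form Y's mean (mean-sigma linear equating)."""
    mx, sx = statistics.mean(form_x_scores), statistics.stdev(form_x_scores)
    my, sy = statistics.mean(form_y_scores), statistics.stdev(form_y_scores)
    return my + (sy / sx) * (x - mx)

# Hypothetical raw scores: the January form ran slightly harder,
# so the same performance yields a lower raw score on it.
october = [30, 35, 40, 45, 50]
january = [28, 33, 38, 43, 48]
print(linear_equate(40, october, january))  # 40 on October ≈ 38 on January's scale
```

An average performance on the October form (raw 40) equates to an average performance on the harder January form (raw 38), which is the sense in which completely different test forms are "still scored similarly."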

To be clear on concept, I do not mean provide a quote and then put words in the quoter’s mouth. One can request a clarification without violating ToS. I deleted several posts that did just that.

Additionally, I am temporarily putting the thread on slow mode to allow users the time to compose meaningful posts in compliance with the rules.

1 Like

That would make sense, since the College Board makes tests for college admissions. Who better to have on the Board of Trustees?

It is not as if Board members are compensated. Some of the executives are well compensated, but members of the College Board’s Board of Trustees are not, as is typical of not-for-profits.