Stanford, Harvard, Dartmouth, Yale, Penn, Brown, Caltech, JHU, and UT-Austin to Require Standardized Testing for Admissions

We also know they can qualify for generous aid at these sorts of colleges.

Like, Yale publishes a handy chart:

https://admissions.yale.edu/affordability-details

In the $150-200K bracket, 94% qualified for aid, and median net cost was $30,500.

For a household with the adults in their 40s, that is roughly in the 70th-80th percentile range.

If you define actual middle income as around the 50th percentile for households in their 40s, you are looking at around $90K. Doing a little interpolation, it looks like the median net cost at Yale would be around $4000 at that income.

3 Likes

Thanks for posting the data. That is the conclusion I am reaching as well.

Agree, the hypothetical is meaningless; they just have lower admission standards for candidates from what they deem “less advantaged” backgrounds but who are in the top cohorts in their high schools. Asking everyone to submit scores just makes it easier to rank those less advantaged candidates within their subgroups.

This depends on the college. The Harvard lawsuit analysis showed that Harvard flagged as “disadvantaged” students whom the application reader believed to be financially not high SES, and those students were given a relatively small boost in chance of admission. This “disadvantaged” flag was loosely correlated with having below-median income. I’d expect other Ivies, including Yale and Dartmouth, to do something similar. In the document linked earlier, Dartmouth defines “less-advantaged” students as “those who are any of: U.S. first-generation college going, from a neighborhood with median income below the 50th percentile for the U.S., or attended a U.S. high school in the top 20% of the College Board’s index for challenge.”

However, at many colleges the reverse is also true. Full-pay kids are favored over high-need kids, as full-pay kids are important to the budget. This is particularly true at need-aware colleges, but can occur at others as well. When you don’t have endowment gains averaging >$100k per student per year and a student body that is naturally highly concentrated with full-pay kids paying $90k/year, you may need to take further steps to improve the financial balance.

There are also less direct ways that more or less advantaged students may be favored. For example, legacy status is associated with high income, so favoring legacies tends to favor higher-income students. More relevant to this thread, having a high (1400+/1500+) test score is also well correlated with income, so requiring test scores tends to favor higher-income students overall, even though, as Dartmouth and Yale note, there are some low-income students whose relatively lower scores would have helped them had they chosen to submit those scores instead of applying test optional.

Regarding whether your (what sounds like) upper-middle income would help or hurt you: it depends on the college, but it would probably have little direct effect. If you are high enough to be full pay, it might help in a need-aware system, but the Ivies (the subject of this thread) are not need aware and often give substantial FA to families with incomes as high as $250k. Upper-middle income might have indirect effects on things like being more or less academically prepared, more or less knowledgeable about the college admission process, etc.

1 Like

As they would put it, their review of academic qualifications is contextual. And really it was contextual even under test optional, but these two colleges think it will work better with test information from everyone.

2 Likes

You’ve changed the premise. Strawman argument.

They look at zip codes.

High SAT test scores are also highly correlated with doing well in college which seems like a good reason to admit kids who do well on SAT tests.

4 Likes

Just because it is handy, I note the data Dartmouth published found an R-squared of 0.222 between SAT and first-year GPA. I deal with a lot of modeling, and that is not what I would call a “high” correlation. Indeed, as a rough rule of thumb, usually we would call anything below an R-squared of 0.4 a “low” correlation.
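For what it’s worth, with a single predictor the R-squared is just the square of the Pearson correlation, so Dartmouth’s 0.222 implies a correlation of about 0.47. A quick sketch with synthetic (made-up) data, not real admissions data:

```python
import numpy as np

# Dartmouth reported R^2 = 0.222 between SAT and first-year GPA.
# For a one-variable linear fit, R^2 is the square of the Pearson
# correlation r, so the implied correlation is:
r = np.sqrt(0.222)
print(f"implied correlation r = {r:.2f}")  # about 0.47

# Sanity check on synthetic data: for a simple linear fit, R^2 from
# least squares equals the squared correlation coefficient.
rng = np.random.default_rng(0)
x = rng.normal(size=50_000)
y = 0.47 * x + np.sqrt(1 - 0.47**2) * rng.normal(size=50_000)

r_emp = np.corrcoef(x, y)[0, 1]
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
r2_fit = 1 - resid.var() / y.var()
print(f"squared correlation = {r_emp**2:.3f}, fit R^2 = {r2_fit:.3f}")
```

A correlation of ~0.47 is real but leaves most of the variance in first-year GPA unexplained, which is the point about “high” vs “low” above.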

But that is really beside the point. Multivariate models are tricky business: variables can be only weakly correlated and still get included in a multivariate model, and sometimes more strongly correlated variables get excluded once all the interactions are accounted for.

So Yale and Dartmouth don’t necessarily need to believe the SAT is a strong predictor on its own to want to include it in a multivariate admissions model. And on its own, it does not appear to be one.

3 Likes

If the tests are not a good predictor of success in these colleges why are they going back to requiring them?

1 Like

That was the point of me bringing up multivariate models. I deal with these quite a bit.

Conceptually, you might look at, say, 50 different variables you think might help predict something due to positive, or indeed negative, correlations (you can always put a negative sign on a variable, so negative correlations are just as useful).

After running your regressions, you might end up including 25 of them in a multivariate model. What you are finding is that the 25 you excluded, while individually correlated, actually reduce the accuracy of your model. This can happen when the 25 you are including already contain almost all the predictive information in the remaining 25; since those remaining 25 also carry noise, including them would add more noise and very little useful information.

OK, then someone proposes a new variable for the model, so you test what happens when you include it. Maybe you will find it helps the model to add it. Maybe you will find it hurts. Maybe it helps AND it knocks out some other variables. Maybe it helps AND it knocks out some other variables AND now some of the 25 you originally excluded are back in. And so on.

These things are very tricky and different people can approach the same sort of question and come up with very different models just based on relatively subtle differences in data, what they are seeking to predict, what other variables they consider, and so on. So, one model might include a variable, and another model might exclude it.
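The “weakly correlated on its own, still worth including” phenomenon is easy to demonstrate with a toy example. This is entirely synthetic data of my own invention, not anything from Dartmouth or Yale:

```python
import numpy as np

# Synthetic illustration: x2 is only weakly correlated with the
# outcome y on its own, yet adding it to a model that already
# contains x1 still measurably improves the fit.
rng = np.random.default_rng(42)
n = 100_000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 * x1 + 0.5 * x2 + rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on the columns of X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print(f"x2 alone:  R^2 = {r_squared(x2, y):.3f}")  # 'low' by the rule of thumb
print(f"x1 alone:  R^2 = {r_squared(x1, y):.3f}")
print(f"x1 + x2:   R^2 = {r_squared(np.column_stack([x1, x2]), y):.3f}")
```

On its own x2 explains under 5% of the variance, which most people would dismiss as a weak predictor, yet the two-variable model is clearly better than x1 alone. That is the sense in which a college can rationally require a test without believing it is a strong standalone predictor.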

OK, so SAT on its own is weakly correlated with first year GPA at Dartmouth. Just to begin with, other colleges might not think that first year GPA is what is important to predict. And then of course they have different internal data, they may consider different other variables, and so on.

But then Dartmouth finds that including SAT on a long list of variables in a multivariate model helps its model, so it includes it. Yale finds that any of SAT, ACT, IB, or AP helps its model, but that not all are necessary, so it goes test flexible. Some other college finds SAT does not help its model, so it excludes it. And so on.

I know this is a bit long already, but I think a lot of confusion in these conversations arises from not really understanding how complicated, and sometimes counterintuitive, the choice of which variables to include in a multivariate model can end up being. And it is not at all a surprise to me different colleges would come to different answers.

5 Likes

That is an interesting question: is this SAT vs college performance in the realm of “soft” or “hard” science (as that helps determine “acceptable” r^2 values). On the one hand we have numbers as input and output…

On the other not everyone who is 6’7” and makes a college team averages the same points per game….

Without utilizing “context” the schools lose an important knob, one which allows them to set the level that they favor “unrecognized promise” vs “preparedness under better conditions” within their filtering process.

I don’t think it needs to be pegged maximally in either direction. And I’m sure the schools really value that ability to influence the filtering.

Your earlier post described an admission system that only looked at GPA and SAT. I expect your comment about being “highly correlated” with doing well in college is also based on looking at SAT score in isolation.

Again, that is not how college admissions work at the colleges discussed in this thread. They don’t just look at GPA and SAT score in isolation. This makes the question more about how much SAT score adds to the material they would use to admit students if the score were not available. That would not be described as “highly correlated” in any analysis I’ve seen. Instead, analyses for selective colleges typically show little difference in either cumulative GPA or graduation rate between submitters and non-submitters.

As an example, some specific numbers are below, from the 25 years of test optional Bates study. Measures of success in college were correlated with SAT score in isolation at Bates, yet there was no significant difference between cumulative GPA or graduation rate between admits who submitted scores and those who did not.

Bates Test Optional Analysis
Mean SAT Score: Submitters = 1240, Non-Submitters = 1070
Financial Aid: Submitters = 34% received FA, Non-Submitters = 42% received FA
Mean Graduation Rate: Submitters = 89%, Non-Submitters = 89%
Mean College GPA: Submitters = 3.16, Non-Submitters = 3.12

1 Like

Slightly tangential question but in a post-Chetty world, should we continue to leverage Bates/Bowdoin’s results for discussing Ivy+ schools?

Does the Chetty study have similar information to Bates/Bowdoin comparisons of outcomes between test submitter admits and test optional admits? If so, I haven’t seen it.

The Chetty study does provide some more detail about the portion of students scoring 1400+/1500+ on the SAT by income. Some specific numbers from the study are below. I’m guessing the percentages are even more unbalanced today, with a larger portion of high-achieving median/low-income students choosing not to take the test, particularly in states like CA.

Portion of Kids Scoring 1400+ on SAT by Parents Income
99.9th Percentile Income – 19%
99th Percentile Income – 14%
98th Percentile Income – 11%
97th Percentile Income – 10%
96th Percentile Income – 8%
95th Percentile Income – 7%
90-95th Percentile – 5%
80-90th Percentile – 3%
70-80th Percentile – 2%
60-70th Percentile – 1%
50-60th Percentile – 0.7%
40-50th Percentile – 0.4%
20-40th Percentile – 0.3%
0-20th Percentile – 0.1%

Portion of Kids Scoring 1500+ on SAT by Parents Income
99.9th Percentile Income – 7%
99th Percentile Income – 5%
98th Percentile Income – 4%
96-97th Percentile Income – 3%
90-95th Percentile Income – 2%

Median Income – 0.2%
Low Income – 0.0%
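One back-of-envelope thing the 1400+ table lets you do is estimate where the 1400+ scorers come from overall, by weighting each row by the width of its income bracket. The bracket widths here are my own reading of the labels (e.g. “99.9” as the top 0.1%, “80-90th” as 10% of households), not anything stated in the study:

```python
# Rough composition of the 1400+ pool, weighting each bracket's rate
# by the share of households in that bracket (assumed widths).
brackets = [
    # (bracket width in percentile points, % scoring 1400+)
    (0.1, 19), (0.9, 14), (1, 11), (1, 10), (1, 8), (1, 7),  # top 5%
    (5, 5), (10, 3),                                          # 80th-95th
    (10, 2), (10, 1), (10, 0.7), (10, 0.4), (20, 0.3), (20, 0.1),
]
contrib = [w * r for w, r in brackets]
total = sum(contrib)
top20 = sum(contrib[:8]) / total      # brackets covering the 80th-100th pctile
bottom40 = sum(contrib[-2:]) / total  # brackets covering the 0-40th pctile
print(f"share of 1400+ scorers from top income quintile:  {top20:.0%}")
print(f"share of 1400+ scorers from bottom two quintiles: {bottom40:.0%}")
```

Under that reading, roughly two thirds of 1400+ scorers come from the top income quintile and only about 5% from the bottom two quintiles, which puts the “well correlated with income” point in concrete terms.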

6 Likes

For now, only a few colleges have chosen to go back to requiring tests. The vast majority of four year colleges, including most highly rejectives, are test optional or test blind.

1 Like

Yes, it is a bit long…I have a graduate degree with a data analysis focus so I’m quite familiar with multivariate statistical models.

I have to say, I am impressed by your efforts to argue against the simple, logical point that standardized scholastic aptitude tests are the best and easiest predictor of - you guessed it - scholastic aptitude! Look at Fig 1 on page 9 of the Dartmouth study…seriously, what does that suggest other than a strong linear relationship between SAT scores and first-year GPA? Clearly, Occam’s razor applies here.

4 Likes

It might have been more in the domain of secondary observations, but I was looking for the data at one point and didn’t find a source. Perhaps others know more. What both Friedman and Deming said indicates they are strongly in the “testing results are symptoms, not the cause of disparity” camp, and find value in testing for the Ivy+ arena:

Harvard Gazette, Nov 2023:
GAZETTE: Some experts say the SAT test has become a sort of “wealth test.” What’s your take on this?

DEMING: I think that’s a little bit misleading. And the reason is that everything that matters in college admissions is related to wealth, including the SATs. I think when people call it a wealth test, they mean to delegitimize it as a measure of who can succeed in school. And the reality is that the SAT test does predict success in college. The SAT does capture something about whether you’re ready to do college level work.

I would urge us to create conditions under which there are more low- and middle-income students who can do well on the test, not to get rid of the test. Getting rid of the test doesn’t make the disparity go away. It just makes it invisible in the eyes of the public. For me, that’s the wrong direction.

Also, if you get rid of the SAT, as many colleges have done, what you have left is things that are also related to wealth, probably even more so. Whether you can write a persuasive college essay, whether you can have the kinds of experiences that give you high ratings for extracurricular activities and leadership; those things are incredibly related to wealth.

My worry is that if we get rid of the SAT, you’re getting rid of the only way that a low-income student who’s academically talented has to distinguish themselves. Getting rid of the SAT means those people don’t have the opportunity to be noticed. I don’t think the SAT is perfect, but I think the problem isn’t the test. The problem is everything that happens before the test.

IHE, Jan 2024

Friedman:
“I think since the pandemic we’ve learned a lot more about how the test-optional policies are operating in practice, with data to support, instead of just anecdotally,” said John Friedman, […] “And what we’re learning is, without the test scores, there’s a tremendous amount of uncertainty about whether that student is really at the level that [highly selective colleges] require.”

Plus just as importantly:

Friedman said his research is not meant to be extrapolated to a broad swath of colleges and universities—only to the highly selective ones. In those cases, he said, test scores do add a significant measure of aptitude.

“The data is certainly not dispositive for more open-access institutions. In many cases, at those colleges grades are a better predictor of success than test scores,” he said. “But at a certain level of extreme selectivity, getting a 4.0 doesn’t mean much.”

1 Like

I think it’s kinda ironic that the non-test crowd frequently mentions the correlation of SAT scores with income, and touts Bates’ history as TO, but ignores the fact that Bates is not very diverse economically: <10% Pell Grants (nearly half the rate at Y or D), nearly the highest tuition cost in the country, 76% from the highest income quintile… The latter number, btw, is higher than Dartmouth and Yale.

So yeah, Bates does really well graduating its TO students as they start out advantaged.

That said, I’m not convinced that grad rate is the proper measure.

11 Likes

What do you think: Incoming SAT, intended major, and grad rate and gpa in that major? As well as percentage that switched out? I agree that the measures used are not as satisfying as one might like.

2 Likes

The post emphasized the relative difference in measures of performance and income between submitters and non-submitters – not that the grad rate or GPA was high in an absolute sense. Bates non-submitters had the same graduation rate and average cumulative college GPA as submitters, even though a larger portion of test optional admits received FA, a larger portion were URMs, a larger portion were first gen, … than test submitter admits.

Bates was an arbitrary example, chosen because they provided more detail than most other colleges. I’m not aware of a selective college that did not find similar grad rates and average cumulative GPAs between submitters and non-submitters. I’m also not aware of a selective college that did not find that test submitter admits averaged higher income than non-submitter admits.

1 Like