Stanford, Harvard, Dartmouth, Yale, Penn, Brown, CalTech, JHU, and UT-Austin to Require Standardized Testing for Admissions

Yes, they want a high number of applications to keep admission percentages low. If students take the pre-SAT or pre-ACT in 8th or 9th grade, they start to get mail. The schools do not provide the info; the students are filling it out themselves.

From one of the other comments, yes, they also look at the poverty levels and zip codes of high schools and make strategic decisions on where to visit and send additional info. They contact school counselors about special fly-in weekends and programs. They look at schools recognized for a high percentage of minorities or underserved populations taking AP classes. Even smaller schools with lower-performing data points overall can get attention with a few outlier high scores.

I do think having the scores as part of the application submission is helpful. It is not the only factor in the process, but I also think it will provide better data for deciding where to apply. Right now, so many students are not submitting what are really great scores, because the scores are not close to perfect, that it is throwing the reported numbers way off and pushing them further up.

I’m sorry, but your posts repeatedly imply that SAT scores encode a vast amount of information about applicants that is unavailable from the rest of the application. In fact, your entire analogy is based on this assumption.

Except that all studies demonstrate that this isn’t true. Even College Board doesn’t claim that it’s true.

It has been well proven that income provides a long list of benefits that improve test outcomes. Yes, as a general trend, students who do well on the SAT are MORE LIKELY to have more mastery of the material the SAT covers. However, these are trends, not the single line people like to draw. It is a vast cloud of data points that tilts up towards higher SAT scores on the Y axis, with other indications of mastery on the X axis. Within that cloud, though, there are hundreds of points, AKA individuals who took the test, whose SAT scores are lower than those of other individuals who have less mastery.

That is because other factors are responsible for the variation in SAT scores for any level of preparedness. Yes, mastery of the material helps determine SAT scores. So does having breakfast that morning or sleeping the night before.
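To make that "cloud" concrete, here is a small simulation sketch. Every number in it (the means, the spreads, the even split between mastery and everything else) is invented for illustration; the only point is that a real overall correlation can coexist with lots of individual reversals.

```python
# Illustrative simulation with made-up numbers: scores track mastery on
# average, yet plenty of individual pairs come out in the "wrong" order.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
mastery = rng.normal(0, 1, n)              # underlying preparedness
other = rng.normal(0, 1, n)                # sleep, breakfast, stress, prep...
sat = 1050 + 120 * mastery + 120 * other   # score pulled by both

print(f"overall correlation: {np.corrcoef(mastery, sat)[0, 1]:.2f}")

# In how many random pairs does the person with MORE mastery score LOWER?
i = rng.integers(0, n, 100_000)
j = rng.integers(0, n, 100_000)
reversals = ((mastery[i] > mastery[j]) & (sat[i] < sat[j])).mean()
print(f"share of random pairs where the ordering flips: {reversals:.0%}")
```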

So what is the SAT score telling you about an individual student? Was that 1200 the result of sleeping only 4 hours because you are sharing a room with a much younger sibling who wakes up in the middle of the night and again at 4 am? Is it the result of not having breakfast because there is nothing in the refrigerator? Is it the result of having to deal with a sick parent or grandparent and thinking about them? Or is it the result of this being the level of mastery that you have achieved in the material? Is it the result of an hour of dedicated SAT practice every day since freshman year?

The wealthier a kid is, the less likely it is that SES factors will negatively affect their SAT score, and the more likely it is that those factors will have a positive effect. However, even there, you can have huge variation. A kid from a high income family living in a relatively large house with their own room may still be kept awake until 2 AM by their parents fighting or by worries about a sick family member. Even high income families can be neglectful or abusive, and just because a family can afford space and time for a kid to study doesn’t mean that they will provide it, and just because they can afford breakfast for the kid doesn’t mean that they will provide that either.

Because multiple factors affect SAT scores, you cannot actually know what the score is telling you about the preparedness of the student without the context of the rest of those factors. However, once you know those factors, the SAT score doesn’t add a huge amount of information that is not already encoded in the GPA and other parts of the application.

Between two kids with family lives that are conducive to studying and the resources for preparing for tests, the kid with the higher score probably has better mastery. However, that kid likely has a higher GPA as well. So what has that SAT score told you about the difference between those two kids, that you do not already know from the rest of their application?

If, on the other hand, one of these kids has a family life that is conducive to studying, good sleep, and regular meals, and the other kid does not, how much of the difference in their SAT scores is the result of their differences in SES and family life, versus their preparedness for college?

Essentially, the SATs are meaningless without the rest of the application, and are a useful, but not critical, addition to it. At least, that is, if a college wants to use them as part of an in-depth look at every student.

All that being said, for popular colleges, the admission patterns during the pandemic demonstrated that the deck is stacked so heavily against low income students in admissions to “elite” private (and public) colleges that requiring the SAT or not will make little difference one way or another.

No, despite the claims by wealthier parents, adding the SAT back will not make applications more fair for low income families. It will, in fact, make them slightly less fair. However, as we saw during the pandemic, colleges which went test optional did not see a huge increase in the number of high achieving low income kids. As the Chetty articles have demonstrated, the entire application process at popular private colleges is heavily biased against low income kids.

In fact, the entire education system in the USA is heavily biased against low income kids: from the fact that schools are funded by property taxes, so the quality of education depends on how much the parents in the school district can pay; to the fact that wealthy families can gerrymander school districts so that their school will only serve wealthy families; to the fact that a substantial number of people in this country believe that education is a luxury and therefore not something that low income families deserve; to the heavy homework loads which are far more difficult for low income kids to manage.

Changing SAT policies is a bandaid, at best.

Just because I think that the SATs are biased against low income applicants doesn’t mean that I believe this is the most important issue in higher education admissions. I am just tired of hearing that SATs are, somehow, a remedy for the lack of fairness in admissions.

1 Like

If that’s what the person is implying, they’d be correct - IMHO.

From Harvard:

Dean of the Faculty of Arts and Sciences Hopi E. Hoekstra wrote in a statement that “standardized tests are a means for all students, regardless of their background and life experience, to provide information that is predictive of success in college and beyond.”

“More information, especially such strongly predictive information, is valuable for identifying talent from across the socioeconomic range,” she added. “With this change, we hope to strengthen our ability to identify these promising students.”

1 Like

Hopi is a brilliant scientist and a great mammalogist and I like her, but she should know better. She should know about the correlations between wealth and SAT scores. Maybe she believes that, for low income students, a relatively high SAT can help them be admitted. More likely she’s toeing the party line, one of the prices you pay when you become an administrator. If I were still attending the meetings of the ASM (American Society of Mammalogists) I would have tried to talk to her about it (ASM meetings are like that).

She should also stop and take a look at how white and upper income the students in her own field are. Ecology and evolution are blindingly white. I once participated in two conferences back-to-back: the first was the Ecosystems Service Partnership, which includes people from a wide range of disciplines and industries, followed, in the exact same location, by the annual meeting of the Ecological Society of America. The transition from ethnically, geographically, and career-wise diverse to upper-middle-class White American suburban was pretty jarring.

It wouldn’t be the first time that a very smart but privileged academic was unaware of how their privilege affects how they think about the barriers facing less privileged people.

Damn, Hopi is now the Dean of FAS at Harvard. I remember when she was a newly minted PhD. Actually, I may have met her when she was still a grad student, but it would have been not long before she defended. I really feel old now.

2 Likes

Many agree with her and many agree with you.

We all have different “takes” on this, of course, and we’ll never all agree, but in the end, the schools will do as they do.

I suspect, but don’t know, that apps will go down but perhaps yield will go up (and that’s hard to do at a school with such high yield).

Guess we’ll know more over the next few years.

What the SAT fails to measure is the amount of effort it took to get the score. Student A with little prep scores a 1300. Student B with little prep is at the 1150 level. But Student B spends lots of money on preparation and many hours to get to 1350.

Now put these same two students in college. Student B will need many more extra hours and resources to get better results than Student A. However, Student A is likely to be more proficient at managing a course load of 5 courses plus investing in ECs, etc. In college, a student who had to spend a lot of hours prepping for one “test,” the SAT, will not be in the same environment when they have to prepare for 5 tests.

Which of these two students is better prepared to handle university?

That’s why GPA is also relevant.

Given all of the grade inflation at many American high schools, GPA is largely irrelevant. That’s a big reason the SAT is again deemed necessary by an increasing number of universities.

6 Likes

The example of going from 1150 to 1350 is not typical or frequent and is not very useful. Even if it happens occasionally, the discipline of putting in that kind of effort is a virtue that will pay off in life.

Getting a high GPA or higher EC achievement also takes effort—for some more than others. Do you want some sort of “effort” measure for all of these aspects of an application?

Student B doesn’t have to spend any money on prep. Plenty of excellent free resources are available. Or purchase a few prep books online for $50.

1 Like

One can always point to a specific example, or construct one, to show the shortcomings of a standardized test.
Statistics don’t lie, though. Higher SAT scores are positively correlated with higher academic achievement. The correlation is also statistically significant, and a common-sense explanation offers a causal mechanism.
What counts as a good outcome is up to each college to decide, provided it complies with the laws and the constitution, so many other factors are also considered, making the evaluation multivariable. The standardized test score is not perfect, and should not be the be-all and end-all consideration, but it is the single best indicator of academic achievement across the board.

10 Likes

Again, no, it isn’t. The only “achievement” which consistently correlates with SAT scores is first year GPA. While a GPA of 4.0 in your freshman year of college is an achievement in its own right, it is not an important one.

As for the rest? Once you control for income, for the fact that many employers use SAT scores for hiring, and for the college that an applicant attends, SAT scores do not have a significant correlation with later outcomes. Or, if the correlation is significant, it is not important.

There is a HUGE difference between “significant” and “important”. If you took a million kids who sat the SAT and found that, on average, kids who scored over 1500 make $100 more a year than kids who scored 1100, that likely would be statistically significant. It would, however, be meaningless. The same goes if it were demonstrated that, for every 5 points on the SAT, a graduate would be making $2 more: it could be statistically significant, but again, it would be meaningless.

The numbers that are used in the analysis of relationships between SAT scores and various outcomes are enormous, and one of the results of that is that even very small effects can be statistically significant.
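A minimal sketch of that point, with invented earnings numbers: give two groups of a million graduates each a $100 difference in average pay, and the gap comes out overwhelmingly "significant" even though it is tiny relative to the spread in pay.

```python
# Hypothetical illustration: a $100 average earnings gap across a million
# graduates per group is statistically significant but practically trivial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000
high_sat = rng.normal(60_100, 10_000, n)   # invented mean earnings, in dollars
low_sat = rng.normal(60_000, 10_000, n)

t_stat, p_value = stats.ttest_ind(high_sat, low_sat)
gap = high_sat.mean() - low_sat.mean()
print(f"p-value: {p_value:.1e}")                               # far below 0.05
print(f"gap in means: ${gap:,.0f}")                            # roughly $100
print(f"gap as a fraction of the spread: {gap / 10_000:.3f}")  # ~0.01 of one SD
```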

The people who are pushing their opinions tend to show beautiful graphs, with all of the variation removed, giving the false impression that the relationships are far stronger than they are in reality.

So again, the repeated claim that SAT scores have some sort of powerful predictive value on their own is provably false. They have some limited predictive power, but they only marginally improve on what you can tell from the rest of a student’s background.

Not really irrelevant across all high school students, or all college-bound high school students, but grade inflation has resulted in compression at the ceiling, so it is difficult to distinguish, from high school records and grades / GPA, an academic superstar from an ordinary excellent student. That can be a problem if a highly selective college is actually trying to find the academic superstars among the masses of ordinary excellent applicants. Of course, a similar compression problem affects US standardized testing.

I.e. this compression at the ceiling (for both GPA and test scores) is mainly a problem for highly selective colleges.

2 Likes

The SAT has percentiles running from 1% to 99+%, right? I’ve seen graduation books at well-regarded schools where 30% to 40% of the students have 4.0 GPAs or higher. When that many students have straight As, grades are an almost worthless indicator for highly selective institutions.

1 Like

Higher than 4.0 typically indicates using some weighting, so many of those students may not have straight A grades. This is especially true if the weighting is heavy, like in the South Carolina weighting system.

It’s not the single best indicator, even at highly selective colleges for which GPA is compressed at near 4.0. For example, the Chetty sample found:

Variance in First Year GPA at Ivy+ Colleges Explained By…

  • SAT – 19% of variance explained
  • SAT + GPA + Demographics + … – 24% of variance explained
  • All of above + Name of HS – 65% of variance explained (SAT coefficient weighting gets much weaker)

Name of HS was tremendously more predictive of first year GPA than SAT score, HS GPA, or both combined. It also appears that much of the SAT’s predictive ability overlapped with name of HS. That is, SAT score was okay at predicting name of HS, and name of HS was good at predicting first year GPA.

This raises the question of why name of HS is so good at predicting first year GPA. I expect much of it relates to name of HS being well correlated with course rigor and strength of schedule. Some HSs are more rigorous than Ivy+ colleges and have course content that largely overlaps with first year college classes, while other HSs offer few or no AP classes and have classes that are far less rigorous than those at typical Ivy+ colleges. As such, I’d expect the transcript to also be more predictive than SAT, if you consider things like rigor of classes taken, the distribution of grades at the particular HS, which classes had higher or lower grades and how relevant they are to the intended major, and so on, rather than just looking at HS GPA in isolation.

Of course Ivy+ colleges don’t admit based on name of HS. And they also don’t admit based on only SAT score, only GPA in isolation (without considering rigor/transcript), or any other single stat. For test optional/required, the question shouldn’t be whether it is better to admit based on SAT alone or on HS GPA alone. The question should instead be: what does the SAT add beyond the metrics used to evaluate test optional applicants? Colleges that have tried to evaluate this often come to a different conclusion than you suggest.

For example, the Ithaca study found the following. SAT score may or may not have been the single best indicator of first year GPA, but removing SAT only reduced variance explained by 1 percentage point. 44% of variance in college GPA explained with SAT and 43% of variance explained without SAT. Based partially on this study, Ithaca concluded that little academic predictive power would be lost by going test optional.

Variance in Cumulative GPA at Ithaca Explained By…

  • First Gen + URM + Gender – Explains 8% of Variance in Cum GPA
  • Demographics + SAT Score – Explains 25% of Variance in Cum GPA
  • Demographics + GPA + HS Course Rigor + AP Count – Explains 43% of Variance
  • Demographics + GPA + HS Rigor + AP Count + SAT – Explains 44% of Variance
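A minimal sketch of the kind of nested-model comparison behind those numbers, assuming a hypothetical per-student dataset with the column names shown (this is not the actual Ithaca or Chetty data or code): fit the model without the SAT, fit it again with the SAT added, and compare how much R² moves.

```python
# Hedged sketch of a nested R^2 comparison; the file and column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("students.csv")  # hypothetical: one row per enrolled student

without_sat = smf.ols(
    "cum_gpa ~ first_gen + urm + gender + hs_gpa + hs_rigor + ap_count",
    data=df,
).fit()
with_sat = smf.ols(
    "cum_gpa ~ first_gen + urm + gender + hs_gpa + hs_rigor + ap_count + sat",
    data=df,
).fit()

print(f"R^2 without SAT: {without_sat.rsquared:.3f}")
print(f"R^2 with SAT:    {with_sat.rsquared:.3f}")
print(f"variance explained added by SAT: {with_sat.rsquared - without_sat.rsquared:.3f}")
```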
2 Likes

It’s a very interesting claim that HS name predicts first year GPA. Let me try to unpack it.
Say the sample size is large, with numerous high schools from which a college admits students.
Since HS name cannot be quantified directly, the only way to include it in a statistical analysis is to assign a value to it. Pick any high school and assign it a value of 1; all other high schools get a value of 0.
Run a regression analysis and you will probably find that, indeed, the first year GPAs of students from that particular school are much more consistent than those of the rest of the pool, which consists of numerous other high schools with different locations, demographics, classes offered, etc. The standard deviation for students from that high school is much smaller, for the apparent reason that they come from the same environment.
The problem is, what does it tell us? Nothing. Pick any school and you’d get the same result. A school name alone, whether it’s ABC or XYZ, cannot be meaningfully used to predict the first year GPA of a student randomly picked from that school. Predictive power also means that, everything else being the same, changing the school name will get you a different result. Without historic data to backfit, no prediction can be made for a student from a new school.
The data you cited always have HS name in combination with other metrics. They don’t validate your assertion that HS name in itself is a better, or even a valid, indicator of first year GPA in college.
If you can share the raw data and its analysis, we can look into the methodology and the R-squared, t-stats, p-values, etc., to get a better understanding of the research.
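For what it’s worth, a single 1-versus-0 flag isn’t the only way to put a school name into a regression. A more standard approach is to give every school its own indicator (a school fixed effect), as in this hedged sketch with hypothetical file and column names:

```python
# One common way to include a categorical "name of HS" variable: a full set of
# school dummies (fixed effects), not a single 0/1 flag for one chosen school.
# The file and column names here are hypothetical, not the data discussed above.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("first_year.csv")

# C(hs_name) expands the school name into one indicator column per school,
# so each school gets its own intercept, and the model can be compared
# with and without that term.
sat_only = smf.ols("first_year_gpa ~ sat", data=df).fit()
with_school = smf.ols("first_year_gpa ~ sat + C(hs_name)", data=df).fit()

print(f"R^2, SAT only:         {sat_only.rsquared:.3f}")
print(f"R^2, SAT + HS dummies: {with_school.rsquared:.3f}")
```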

I was oversimplifying the definition. They created the variable by regressing differences in Ivy+ admit rates across different HSs after controlling for SAT score, demographics, and income. They limited it to HSs with at least 40 non-legacy, non-athlete applicants to Ivy+ colleges. Some particular HSs had higher or lower admit rates than their model would expect. It’s a scale based on how much the HS appears to influence admit rate in their model, rather than a simple 1 or 0.
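A rough sketch of the kind of construction being described, with every file and column name invented (this is not the Chetty team’s actual code or specification): predict each applicant’s chance of admission from SAT, demographics, and income, then score each high school by how far its actual admit rate sits above or below what that model expects.

```python
# Hedged sketch: a per-school "admit rate residual" after controlling for
# SAT, demographics, and income. All names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

apps = pd.read_csv("applicants.csv")   # hypothetical: one row per applicant

# Keep only high schools with at least 40 applicants, as described above.
counts = apps.groupby("hs_name")["admitted"].transform("count")
apps = apps[counts >= 40]

# Admit-probability model with no school information in it.
fit = smf.logit("admitted ~ sat + income + C(race) + C(gender)", data=apps).fit()
apps = apps.assign(expected=fit.predict(apps))

# School score: actual admit rate minus model-expected admit rate.
by_school = apps.groupby("hs_name")[["admitted", "expected"]].mean()
hs_effect = by_school["admitted"] - by_school["expected"]
print(hs_effect.sort_values(ascending=False).head())
```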

1 Like

In statistics, a p-value less than 0.05 is generally considered statistically “significant”. That’s the convention. It means that, if the null hypothesis were true (no real relationship), there would be less than a 5% chance of seeing a correlation at least as strong as the one the analysis revealed.
Importance is a subjective value judgement based on preference. That’s a different topic.

Okay. Essentially they created a college admission performance score for each high school, and say that, everything else (SAT, demographics, income, etc.) being equal, students from a better-performing school have higher first year GPAs than students from a lower-performing school.
That’s plausible. But how are colleges supposed to use this data? By recruiting more from the better-performing schools? That would perpetuate the inequality. For most students, which school they attend is not really something they have control over. On the other hand, a student can work harder to get a better GPA or SAT score.