I think it's clear that Mr. Leonhardt believes the SAT to be a predictor of the achievements listed:
These students, in turn, can produce cutting-edge scientific research that will cure diseases and accelerate the world's transition to clean energy. The students can found nonprofit groups and companies that benefit all of society.
Administrators at elite colleges have justified their decision to stop requiring test scores by claiming that the tests do not help them identify such promising students, a claim that is inconsistent with the evidence.
Where is the evidence that a person who scores high on the SAT or gets good grades in college is more likely to change the world for the better in the above ways?
The data in the article shows that high scorers are more likely to be "Attending a prestigious graduate school" or "Working at a prestigious firm." I don't think people who are "Working at a prestigious firm" are more likely to start nonprofits than those whose goals do not include being employed by a prestigious firm.
I hope higher education has many goals. Academic excellence is a worthy goal, but so much more than success in standardized tests/grades goes into being (for example) a groundbreaking immunologist or globally influential alternative energy entrepreneur.
This is largely a misinterpretation of the Chetty study. The Chetty study defines working at a prestigious firm as working at a firm that employs a disproportionately large number of Ivy+ grads. SAT score in isolation explained <2% of the variance in employment at Ivy+-heavy firms, compared to 0.2% for GPA. Neither stat explains much about which students work at those firms, nor does the combination of SAT + GPA. Instead, what had by far the most impact was high school fixed effects. Specific numbers are below:
Chance of Ivy+ Grad Working at a Firm That Employs a Disproportionate Portion of Ivy+ Grads
HS GPA: 0% of variance explained
SAT: <2% of variance explained
SAT + GPA: <2% of variance explained
Above + Legacy + Race + Gender + Athlete + Parents' Income: 2% of variance explained
Above + cross variables: 5% of variance explained
Above + high school fixed effects: 31% of variance explained
Chance of Ivy+ Grad Attending an Ivy+ (or 1 of 4 publics) for Grad School
HS GPA: 0% of variance explained
SAT: 1% of variance explained
SAT + GPA: 1% of variance explained
Above + Legacy + Race + Gender + Athlete + Parents' Income: 3% of variance explained
Above + cross variables: 5% of variance explained
Above + high school fixed effects: 27% of variance explained
Looking at the stats above, I don't think the big takeaway is that the SAT's 1-2% of variance explained makes it a more powerful predictor than GPA's <1%. I think the much more notable finding is the strength of high school fixed effects, which are a far stronger predictor than anything else analyzed, including both SAT and GPA.
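For anyone who wants to see what "variance explained" means mechanically, it is just the R^2 from a series of nested regressions. Here is a minimal sketch in Python with synthetic data (not the Chetty et al. dataset; the coefficients are made up and tuned only so the R^2 values land near the figures above):

```python
# Minimal sketch with SYNTHETIC data -- not the Chetty et al. dataset.
# "% of variance explained" is just R^2 from nested OLS regressions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

sat = rng.normal(size=n)  # standardized SAT score
gpa = rng.normal(size=n)  # standardized HS GPA

# Outcome built so SAT and GPA each carry only a sliver of signal,
# roughly mimicking the ~0-2% figures quoted above.
y = 0.12 * sat + 0.05 * gpa + rng.normal(size=n)

def r2(*cols):
    X = np.column_stack(cols)
    return LinearRegression().fit(X, y).score(X, y)

print(f"GPA alone: R^2 = {r2(gpa):.3f}")       # ~0.003
print(f"SAT alone: R^2 = {r2(sat):.3f}")       # ~0.014
print(f"SAT + GPA: R^2 = {r2(sat, gpa):.3f}")  # ~0.017
```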
The author implies that high school fixed effects can be interpreted as the difference in outcomes between students who have the same GPA/SAT stats, the same demographics, and the same parents' income, but attend different high schools. Some high schools have a far higher rate of kids working at Ivy+ firms or attending Ivy+ grad schools than others.
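And "high school fixed effects" sounds fancier than it is: mechanically it just means giving each high school its own intercept, i.e., one dummy column per school. Extending the synthetic sketch above (again, made-up numbers chosen only to mimic the pattern in the tables):

```python
# Continuing the SYNTHETIC sketch: high school fixed effects are just
# one dummy (indicator) column per school, so each school gets its own
# intercept in the regression.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n, n_schools = 10_000, 200

school = rng.integers(0, n_schools, size=n)          # which HS each kid attends
school_effect = rng.normal(0, 0.65, size=n_schools)  # per-school shift

sat = rng.normal(size=n)
y = 0.12 * sat + school_effect[school] + rng.normal(size=n)

X_sat = sat.reshape(-1, 1)
X_fe = np.column_stack([sat, np.eye(n_schools)[school]])  # SAT + school dummies

score = lambda X: LinearRegression().fit(X, y).score(X, y)
print(f"SAT only:             R^2 = {score(X_sat):.2f}")  # ~0.01
print(f"SAT + school dummies: R^2 = {score(X_fe):.2f}")   # ~0.30
```

The point is just that which high school a student comes from can soak up far more of the variance than the test score does, which is exactly the pattern in the tables above.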
The analysis implies that if a college wanted to admit kids with prestigious-firm or prestigious-grad-school aspirations, it might favor students from particular high schools. Perhaps the types of students who choose prestigious high schools are also the type who are likely to seek prestigious outcomes after college. Or colleges might look more into why those high schools have such high rates of students with prestigious aspirations and figure out which non-stat parts of the application are likely to reflect that, such as the combination of a student's major and future plans, essays, interviews, and life background. With SAT scores explaining <2% of the variance, going test optional vs. test required is unlikely to have a large impact through the scores themselves, though it might have indirect effects by admitting more kids from different high schools or with different aspirations.
One thing that I don't quite follow is why it is such a big deal that some schools require test scores and others do not. Why is it a problem that a school like MIT wants to require the SAT/ACT but other schools do not follow suit?
Each college/university has its own institutional goals. MIT has a system that works for them. Caltech has a different system that works really well for them. Both institutions produce stellar scientists and engineers.
I do think that Mr. Leonhardt is suggesting in his article that people who excel at taking standardized tests at age 17 are more likely to benefit society.
As far as eschewing any objective measures, I don't think colleges should be forced to be test blind, but that's not happening outside of California, right?
I don't believe there is much disagreement on that particular point. In fact, you could measure it a whole decade earlier.
"The American Psychological Association's report Intelligence: Knowns and Unknowns states that wherever it has been studied, children with high scores on tests of intelligence tend to learn more of what is taught in school than their lower-scoring peers.
…
While IQ is more strongly correlated with reasoning and less so with motor function, IQ-test scores predict performance ratings in all occupations."
Lots of other interesting correlations:
I am sure some will say that not much of anything is happening outside of California.
I think every college/university should do what is right for them, and personally, I avoid making prescriptions here. But our position (I speak both collectively and personally) has been that test-optional is sort of the maximally inequitable policy. As Stu said in this piece: What Does an SAT Score Mean in a Test-Optional World?
Amid the confusion, high-school counselors are struggling to find new signposts to guide their students. One problem, counselors told me, is that the information they get from colleges isn't standardized. Some report the percentage of applicants who applied without test scores but not the percentage accepted; others report the inverse. And while colleges report the SAT and ACT score ranges of enrolled students, they don't indicate which percentage of the incoming class submitted those scores, leaving students to wonder how much the figures have been inflated by those with high scores who bothered to submit them. Two years in, counselors have no idea: What is a good score? Do I submit a score or not? And if so, should all colleges on my list get my score? Schmill tells me he gets those same questions from friends whose children are applying to other colleges. "I never had a good answer," he said. "Like, I have no idea."
If test-blind works for Caltech, then more power to them, especially for not hedging by going TO.