Here is an interesting Twitter thread by a University of Washington professor addressing some of the NYTimes findings. It begins . . .
Much respect for @DLeonhardt's journalism, but this thread is going on my quant syllabus as an example of how to mislead (not lie! but mislead) with statistics. SAT/ACT are way less of a deal than is implied here.
I think you may be taking those kinds of statements too broadly. You're right, across the board there are scads of excellent applicants. But they're not saying that across the board. They're saying, pretty specifically as I've seen/heard it, that scores help find and validate the potential excellence of kids they'd not otherwise find. The hypothetical kid who scores 200+ points over the average score at their under-resourced HS, even if the nominal score is at the (very) low end of typical for the college. They may want that kid. But because they're from a HS the college may not be super familiar with, and admissions is not as confident in exactly what the report card is saying, the test score clarifies and validates. So those are the kids they otherwise encounter a shortage of. As I read/hear them, anyway.
A friend of mine just posted on her story today recommending a certain test prep with her added note "highly recommend! A did it last year and increased his score 270 points."
They live in an upscale MA neighborhood. It's hard to believe a kid without that kind of training would gain that many points on their own. I'm sure it happens, but training does improve scores substantially; otherwise these companies would be out of business.
Regarding scores, I think the NYT article is great and am not at all surprised to see the swing back (maybe) toward scores being required. Our private HS has been saying our kids need scores since the end of the 21-22 application season. TO (test optional) was really TO only the year before, in their opinion.
Regarding writing: per my kids (at T10s that value liberal arts and writing, even for STEM majors), the writing in their HS was significant.
Our top private HS still does APs, and writing 3-4 page papers is for middle school. In HS, papers of 5, 7, or 9 pages in English and History are common beginning in 9th/10th grade, as well as detailed lab write-ups in AP Chem graded strictly for correct scientific writing and grammar. The emphasis the private school placed on writing has given both kids a leg up in their college launch: they both have commented, as have peers from their HS, that the writing prep in HS is a big factor in success, along with entering college with the ability to read and analyze primary sources. Now, when these schools have an average of a B+ or A- in required writing courses, one would have to enter way behind peers to risk a C. The vast majority handle it fine, even without the same HS background.
My son did the SAT. Did well. But he was actually disillusioned because he realized it could be gamed if he had the time. People whose kids do well like it because it "confirms" their own bias. The world has changed dramatically over the past ten years. The SAT no longer reflects that reality.
These are among the biggest reasons why our kids are at the HS they attend. My partner and I experienced a very unpleasant "hitting the wall" at our selective colleges (despite straight As in HS) owing to these issues, and we didn't want a repeat of that for our kids.
Grade inflation/compression could certainly diminish the marginal GPA benefit, but tbh it's not so much about that for us, because we know the kids will still have to write a lot when they get to college, and I know they'll have a far better time of it because they're extremely well prepared.
I'd liken it to ordering at a restaurant in, e.g., Italy. My Italian is passable enough to do so, but my DD's near fluency makes the process a whole lot more fun.
Having read the tweets, I agree that the professor is mathematically correct. With a lot of work, he found a model that removes most of the explanatory power of the SAT.
But a key thing many people don't understand is that just because a model has reasonable explanatory power for the entire population doesn't mean it is a valid model for a subset of the population. And selective colleges, by definition, are selecting from a subset of the population.
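To make that concrete, here is a minimal sketch (made-up coefficients and an invented selection rule, not real admissions data) of how a relationship that is strong in the full applicant pool can nearly vanish inside a selectively admitted subset:

```python
# Illustrative simulation only: college GPA depends on a standardized SAT
# score and HS GPA plus noise. The SAT/GPA correlation measured on the full
# pool differs sharply from the correlation inside a selective subset.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
sat = rng.normal(0, 1, n)                  # standardized SAT
hs_gpa = 0.5 * sat + rng.normal(0, 1, n)   # HS GPA, correlated with SAT
college_gpa = 0.4 * sat + 0.4 * hs_gpa + rng.normal(0, 1, n)

def r(x, y):
    return np.corrcoef(x, y)[0, 1]

print("full pool:     r(SAT, college GPA) =", round(r(sat, college_gpa), 2))

# "Selective college": admit only the top ~5% on a composite of SAT + HS GPA.
composite = sat + hs_gpa
admitted = composite > np.quantile(composite, 0.95)
print("admitted only: r(SAT, college GPA) =",
      round(r(sat[admitted], college_gpa[admitted]), 2))
```

The shrinkage is the classic restriction-of-range effect: selecting on a composite that includes the SAT removes most of the SAT variation among admits, so a model estimated on everyone need not describe the admitted subset.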
Colleges could, in theory, create a model that is valid for the subset of the population applying to their college. But that would be a lot of work, and because you would need the data for all applicants before estimating each regression variable's coefficients, you could not use the model for the current year's applicants.
In practice, this means that no college would ever do this except in a retrospective analysis, such as those carried out in the University of California Academic Senate's report (linked earlier), or at MIT and Brown. Analyzing the subset can also be the reason why the SAT is still found to have predictive value at MIT and Brown, even after applying controls similar to those the professor recommended.
Instead, many colleges use time-tested heuristics that, through trial and error, have proven to work for their particular college. The professor even gives an example of one, saying that even after creating his detailed model, there is still value in a student who has a much higher SAT score than their school's average.
@hebegebe, I'm taking from your comment that you concede Vigdor has seriously damaged the proposition that the SAT is the almighty prophesier of success that Leonhardt had asserted. It's back to being one among many metrics. Call it a tarnished bullet rather than an immaculate silver one.
While everyone is focused on the test-optional issue, the larger issue is affordability. What's the point of a 1500 SAT if you can't afford what it gets you?
What he is saying is not new. I believe @data10 said something similar in the past. With enough analysis, you can find something that removes most of the predictive power of the SAT for the general population. That doesn't mean it is applicable to a particular college's subset, or that the additional work is feasible for many colleges.
Not to mention that affordability at Yale is liable to be less of an issue than getting in to begin with (they are, like other schools of their ilk, very generous with FA).
The expectation isn't that disadvantaged students would have better SATs than wealthy kids; it's that each student's score would be evaluated against their socioeconomic peer group. At least, that is what MIT and Georgetown describe as their practice.
Grades alone? Which of the top colleges are basing admissions on grades alone? Seems like this whole line of reasoning is a red herring.
As for whether test scores are an "excellent predictor of college performance," I guess that depends on what one means by excellent. I'm not a statistician, so I'll ask you for clarification.
According to the @OppInsights analysis, standardized test scores explain less than 1/5 of the variation in college GPA. So over 80% of the variation in college grades exists among students with similar test scores. Does that sound like strong prediction to you?
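For reference, the arithmetic behind those figures (taking the cited "less than 1/5" as an R² of roughly 0.2; the exact Opportunity Insights value may differ):

```python
# R^2 is the squared correlation between predictor and outcome, so an R^2
# near 0.2 corresponds to r ~ 0.45, leaving ~80% of GPA variation unexplained.
# (0.2 here is an assumed stand-in for the "less than 1/5" figure above.)
import math

r_squared = 0.2
print(f"implied correlation r = {math.sqrt(r_squared):.2f}")      # ~0.45
print(f"share of variation unexplained = {1 - r_squared:.0%}")    # 80%
```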
Also, is it true that if we include other factors, such as the high school attended (which has much greater predictive value than the SAT score), the predictive value of test scores is further diminished?
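Directionally, yes. Here is a small sketch (entirely synthetic numbers, chosen only for illustration) of how an SAT coefficient shrinks once a correlated control, here a made-up high-school quality index, enters the regression:

```python
# Illustrative only: SAT tracks high-school quality, so in a regression of
# college GPA on SAT alone, the SAT coefficient absorbs the school effect.
# Adding the school index as a control shrinks the SAT coefficient.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
school = rng.normal(0, 1, n)                 # high-school quality index
sat = 0.8 * school + rng.normal(0, 0.6, n)   # SAT correlated with school
gpa = 0.5 * school + 0.2 * sat + rng.normal(0, 1, n)

def ols(features, y):
    # ordinary least squares with an intercept column
    X = np.column_stack([np.ones(len(y))] + features)
    return np.linalg.lstsq(X, y, rcond=None)[0]

print("SAT alone:    SAT coef =", round(ols([sat], gpa)[1], 2))           # ~0.6
print("SAT + school: SAT coef =", round(ols([sat, school], gpa)[1], 2))   # ~0.2
```

Whether that shrinkage makes the score useless is exactly the population-vs-subset question raised earlier: a coefficient estimated on everyone need not describe a selective college's applicant pool.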
Isn't this only the case if those students apply? If they don't apply because they (rightfully or wrongfully) view the tests as a barrier to admission, then what good are the test scores?
Statistically speaking, under which scenario are underrepresented "diamond-in-the-rough" minorities with less-than-stellar test scores more likely to apply to top schools: test required or no test required?
Have there been any selective test-optional colleges for which this is true? That is, where the GPA of attending students who submit SAT scores is notably higher than the GPA of attending students who do not?
The graph you posted compares SAT score to average freshman GPA in college. That's not the same thing. The students with lower SAT scores tend to be weaker applicants on multiple dimensions. They do not tend to be students with poor SAT scores but a decent everything else on the application, who would be admitted under a test-optional policy. It's also not the same thing as comparing the predictive power of SAT scores to average HS GPA in isolation.
Instead, the types of selective colleges frequently discussed on this forum generally consider a wide variety of application criteria when scores are not available. The question is not whether scores add predictive ability to average HS GPA in isolation. It's more whether scores add significantly to the criteria used in admissions for test-optional applicants, which may include a combination of course rigor, consideration of which classes had higher/lower grades and how relevant they are to the prospective major, upward/downward grade trends, LORs, awards/ECs, essays, etc.
I am not aware of any selective college using such an admission system that found notable differences in either cumulative GPA or graduation rate between test submitters and test-optional matriculating students. As an example, some stats from the Bates 25+ years of test-optional analysis are below:
Test submitters: 3.16 mean GPA, 89% graduation rate
Non-submitters: 3.13 mean GPA, 88% graduation rate
While there was little difference in college GPA or graduation rate, there were more notable differences in post-college outcomes, particularly in the rate of entering fields for which testing can be a barrier.
For example, ~90% of MDs were test submitters compared to 62% for all students (68% for men and 57% for women). Students pursuing careers closely linked to the specific subjects of the SAT (math and English) were also far more likely to be submitters. For example, >80% of students becoming writers and editors were submitters.
"Our research has shown that, in most cases, we cannot reliably predict students will do well at MIT unless we consider standardized test results alongside grades, coursework, and other factors. These findings are statistically robust and stable over time, and hold when you control for socioeconomic factors and look across demographic groups. And the math component of the testing turns out to be most important."
My post asked about differences in cumulative GPA, grad rate, or similar stats between test-optional matriculating students and test submitters. MIT has stated that they were not test optional. They instead admitted an extremely limited number of students who were not able to take the test due to COVID issues (everyone who could safely take the test was supposed to take and submit it). They do not have a good sample of students admitted under a test-optional system to compare against. They also don't state that those extremely few students admitted without tests had subpar performance compared to test submitters. I would not be surprised if the reverse were true, as it sounds like MIT may have held students without tests to higher admission standards than typical.
Thanks for taking a look at the tweets. I didn't see your response before my post asking you questions.
If the part in bold is accurate, then it is less than a tarnished bullet. It is an extremely loud gun firing blanks. We are in the same place we were before this supposedly earth-shaking article.