I had to step away so I could catch a couple of HS games, followed by some NFL games (go Niners!), with some gutter cleaning tossed in here and there. And the server was down. Overall a good weekend.
But I also had some time to read the “Yale Admissions Director Favors Submitting Scores” thread, which I did for the first time even though I had listened to the Admissions Beat podcast episode much earlier. I realized that that thread pretty much covers most of the topics in this thread other than the NYT article itself, and that the article was really just claiming that the scope of schools where testing provides sufficient utility should be expanded. It seems to me that the opinions and realities of this thread can be usefully organized around this “scope of schools” idea.
At an institutional level of granularity, the admissions offices of Brown, Dartmouth, Yale, and MIT (plus a few other very selective schools) have made it known that they find sufficient utility in the tests. Most made it clear that the decision applies to their particular institution. Noteworthy here is Caltech, whose test-blind choice was well explained in this thread by ucbalumnus (and perhaps others) as stemming from Caltech’s need for criteria with a higher ceiling than the SAT or ACT provides. The podcast linked by beebee3 was very useful in highlighting how they achieve this, part of which is leveraging the input and participation of professors in the admissions process. They really aren’t on the other side of the spectrum from MIT, but rather further along the same side, beyond where the SAT/ACT can help.
At the “Ivy-plus” level of granularity, the various Opportunity Insights researchers have made their findings clear. Deming and Friedman (and perhaps Chetty as well) have made it abundantly clear that the scope of their research is the Ivy-plus schools (the Ivies alongside Stanford, MIT, Duke, and UChicago), and that it did indeed indicate that testing is a better predictor for those schools. An IHE article quotes Friedman: “I think since the pandemic we’ve learned a lot more about how the test-optional policies are operating in practice, with data to support, instead of just anecdotally,” said John Friedman, co-founder of Opportunity Insights and the lead researcher on the study, which links test scores to academic success at selective institutions. “And what we’re learning is, without the test scores, there’s a tremendous amount of uncertainty about whether that student is really at the level that [highly selective colleges] require.” (IHE article 1/17/2024)
However, Friedman does also point out that “in many cases” GPA is a better predictor in “more open-access institutions.” Deming echoes these views.
Using an even wider scope of schools, we see the landscape change. Excluding the Ivy-plus schools (and some other very selectives), the situation shifts more toward what pilate noted in his posts: his data points to the SAT/ACT as less predictive (unless he restricts the data). The third person on the Admissions Beat podcast episode, the VP for undergraduate admissions at Clark, Emily Roper-Doten, articulately emphasized this point as well (see below). And of course there is the important work done by Bowdoin, Bates, DePaul/Ithaca, etc. (why the utility of testing changes across this spectrum is interesting to me, but alas not the topic at hand).
Now, there is an interesting dependency here on majors, more specifically STEM majors. Yikkblue has noted personal and anecdotal reports of students who are less prepared in math… and my understanding is that this is not a rare phenomenon (though perhaps not consistently prevalent)… this is corroborated (but not proven) by the relatively higher call for scores from technically inclined institutions, as well as Yale’s statement regarding persistence in STEM majors. I think it’s not beyond reason to say that this should be explored further, even at this wide scope, so as to better understand why. (Ithaca’s report did speculate that the more experiential teaching they do is less limiting to those who might not have scored as highly on the math section. Emily Roper-Doten said something similar of Olin of all places - where she previously worked - again due to the style of teaching and learning there. Perhaps that’s a place to start exploring why this variation exists.)
Viewed this way, the assertion that each school must decide for itself makes the most sense, as each knows best where it resides along this spectrum, and its data will reflect that. (MITChris: “I think every college/university should do what is right for them”). To assert that either boundary case should be true for all institutions seems counter to this bulk of opinions/conclusions (conclusions that are based on local and specific research). At this point, it feels less interesting, at least to me, to keep trying to drive toward the boundary cases… too much work, too much vested interest headed in other directions, etc., etc.
So I think this long discussion re: the NYT article is really a tiff over this “scope”. Akil Bello put it nicely. From the IHE article: “Bello said his primary frustration with Leonhardt’s piece has been the tendency to generalize its conclusions and apply them to all of higher education. ‘In reality, places like The New York Times are almost myopically focused on the Ivies and the most highly rejective institutions,’ he said. ‘It’s like they’ve got horse blinders on. But families are reading it and saying, ‘This applies to everyone!’’” If Leonhardt overstepped, it’s in overstating how wide the scope of schools is for which the SAT/ACT’s utility outweighs its cost. But in his defense, he didn’t say every school either: “The SAT debate really comes down to dozens of elite colleges, like Harvard, M.I.T., Williams, Carleton, U.C.L.A. and the University of Michigan.” (NYT)
In fact, Lee Coffin also opined along those lines, a fact that the YCBK podcast noted in ep. 382: “he [Coffin] speculated that more schools are gonna be going back to test required as a result of the research.”
So I wanted to end this (long) post with an account of the famous minutes from Lee Coffin’s podcast episode. A careful reading does evoke a certain wisdom that illuminates the situation. And again, Emily does a great job representin’:
LC: Lee Coffin (Dartmouth)
JQ: Jeremiah Quinlan (Yale)
ERD: Emily Roper-Doten (Clark)
JQ:
So at Yale, we were looking into this question before the pandemic just to understand how important standardized testing was in predicting how well a student would do at Yale, and it turns out actually that the SAT or the ACT is the single best predictor of a student’s academic performance at Yale. Um, and particularly the math SAT, in persistence in some of our science majors. Um, this is a bit counter to the national research, which suggests that GPA is a bit more predictive than standardized testing. But at an institution at Yale [sic], um, we find that the standardized testing is the single biggest predictor.
LC:
Yeah, I would just add to you that we’re studying the same thing and that’s the emerging storyline here [Dartmouth] as well.
JQ:
And that’s a valu… so that means it’s an incredibly valuable part of our process, um, I… the other thing I’d like to say about this is that we ground everything we do in context.
[…explains context in calibrating test scores and transcripts…]
But, in the admissions committee room we will often pull up the transcript for the 5 person committee to look at and to examine to help us understand the story of a student’s journey; we’ll never look at the testing beyond just the preliminary glance at the start of the application file, because you know I’ve never been in a committee room where someone said oh my god that collection of SAT scores is so compelling I wanna vote to admit this student. That’s just not how it works.
LC:
Same, and it’s, it’s… I think people are surprised by that. Emily, you’re starting to laugh.
[laughter]
ERD:
Well, I’d… I’d… you know… if I may, I’d love to say something for the non-hyperselective set, you know on this point because I think for a lot of us the transcript is the part of the profile that is up for the lead actor’s spot, if I harken back to my theatrical roots right? Um, where that is sort of the most prominent, and the testing, if it’s there, plays a supporting role for that. You know, I think, really it is there as an opportunity to… to buoy, but not to sink. At least for many of us. Um and I love that you brought us back to the idea of context and talking a bit more about what context means for the student. I think it’s also important for families to understand that there is institutional context. Right, we’ve talked about, at our individual institutions, how testing plays a role. How rigor plays a role. And so I… it is important to understand that there isn’t… one categorical answer for many of these things and it’s really our responsibility and I… I hope listeners take away from all three of us that we’re all doing the work on our campuses to understand these things, so that our policies are aligned with who we are as an institution, how we teach what we value…