If you’re talking about MIT, then that’s not going to matter, because, as we said in our posts on the issue, the students we admitted without SAT/ACT had something else we could “lean” on, like other standardized scores (e.g. a BC 5) or strong college-level coursework. We could run our admissions SAT/ACT-blind by relying on those instead, but that would be even more exclusive/regressive than the SAT/ACT.
The other factor is that, from all available evidence, I would anticipate worse outcomes at every college because of the substantive disruptions of the pandemic. That’s not me signaling anything about MIT (all my advisees have been doing well), but just signposting that we did have a material disruption to education, one you can see in every available indicator and which will take a long time to resolve. As someone who graduated college in 2009, I think about those studies showing that people who graduate into recessions, with comparable grades/majors/colleges, never catch up in wealth to similar cohorts on either side of the recession. The ghost of world history haunts us all…
I do think that Mr. Leonhardt is suggesting in his article that people who excel at taking standardized tests at age 17 are more likely to benefit society.
You could of course take it the other way (that people who work at “prestigious firms” or attend “elite schools” are making society worse, and standardized tests predict that).
I am mostly joking…
But that was my least favorite part of the article, because I don’t think those are good indicators of social outcomes to anyone who isn’t elitism-pilled, and in any case, they are the things most influenced by other forms of social and economic privilege. The linked study on college outcomes, controlling for socioeconomic factors, is much more persuasive as a reason one might require testing.
Not that I’m aware of. We did have some folks propose creating an MIT Admissions-specific math test, sort of like the math diagnostic used to place students into appropriate first-year coursework. When we pointed out that this would be more accurate, but also way more regressive (because who would have the time and resources to study for a unique MIT math test?), we returned to the SAT/ACT.
This is also why we went blind on the subject tests in 2020, btw, a decision we made before Covid: what they added at the margin in predictive utility was not worth their exclusionary impact, in terms of how few students were taking the subject tests.
100% and it’s very frustrating to me, personally, that we get used as a club to bludgeon other schools. The MIT education is super weird and specific and essentially begins with a series of high stakes math tests in the GIRs, so of course math tests have predictive utility. I can imagine that for some other schools this may not be true; indeed I’d be surprised if it weren’t at least somewhat different elsewhere!
Great post. A lot of this depends on what happens in the admissions read + committee room. We work pretty hard to train our readers out of the unconscious habit of selecting for the reproduction of privilege qua privilege, absent substance. It’s very easy to be enchanted by the interesting and dynamic lives of privileged teenagers with many opportunities, opportunities some small part of your psychology may even envy; it’s harder to recalibrate yourself and ask: okay, but what exactly are we trying to do here? What is the goal of our process and our education? The discipline to tie yourself to the mast of your own institution’s education and mission, and not be wowed, cowed, or otherwise dazzled by a shiny application, has to be developed in people like any other form of discipline, and tailored to any given institution’s desired outcomes.
I’m going to bow out of the discussion. There’s so much sarcasm here that it’s hard to tell if people are being serious or not. Thanks to the OP for sharing the article link. It’s an interesting topic.
Not sure why you want to turn this thread into a deep dive into my (apparently ineffective) use of sarcasm, but since it seems important to you:
The first post was an attempt to sarcastically emphasize something similar to what @MITChris expressed . . .
It was addressed to him because he had just said something I agreed with that I felt was worth highlighting. I’m glad he said it again as it means more coming from him than me.
If you didn’t get the point of my second post, I am sorry for that, but it would take us too far afield to explain, so I’ll leave it at that.
Actually, you should both assume that the suggestion to move on is not optional.
And since I’m forced to intervene, might I also “suggest” to everyone to refrain from sarcasm and from calling people names, even if directed at yourself. The former rarely translates well in written form, and the latter violates ToS. Thank you.
A lot of this discussion has centered on the Harvards and MITs of the world. But there are so many great schools out there. Moving away from test-optional basically means kids have to go to great lengths and cost to get their SAT scores to that “sacred” level. And then if a child gets, heaven forbid, a 1400, is that a failure? Of course not. Is this what we truly want for our kids? Test-optional forces the admissions office to work a little harder and dig a little deeper. Remember what the SAT fails to test; it is a lot more than what it actually tests.
And how do you suppose that deeper dig is occurring at schools that receive 100k applications? Even hiring more reviewers, at minimum wage, doesn’t solve the problem.
If UCLA, which receives the most apps of any college, can guarantee to read every app twice, so can every other school. Reading apps and pulling out relevant details (as defined by each school) is not difficult (that’s why these jobs don’t pay much, whether for an AO or just a reader).
And we have anecdotal evidence from UC professors that the admissions officers are worse than ever at this, and that more unqualified and unprepared students are enrolled than before. See the NYT comments from professors.
Yes, I agree. But that’s a consequence of a combination of factors, such as poor K-12 education, Covid and the academic interruption many students experienced (Cali might have been among the worst, just to stick w/ the UC example), and grade inflation…it’s not (generally) a consequence of poor/lacking (whatever word one wants to use) admission reading. Admissions folks don’t have complete info from any school.
Not all students were equally impacted by Covid (charter, private, and Catholic schools remained open, as did schools in some states). Enrolling and then failing students isn’t helpful.
ETA: some students made serious efforts to keep up despite adverse circumstances. Given grade compression, it is really not possible to identify those kids or their superior subject knowledge without test scores.
And again, not saying what works for one school works for another. Schools can read apps and focus on whatever factors they want to.
ETA: I get that some profs might be seeing lower academic prep from some students, but I am not sure that’s showing up in any data yet, like lower average grades, dropout rates, etc. If there is data, I would love to see it! And I do hope that over time colleges publish said data.
But if the tests scores are going to be used “in context” (as @hebegebe put it in the first post) then requiring the tests doesn’t change this one bit. In fact it may make it even more important that colleges fully understand each applicant’s individual circumstance.
If, on the other hand, the tests are being used as a substitute for other factors, then all the problems with the test come to a head.
I think the reality is that there will be extreme pressure to not use the tests in context. Parents and kids who scored 1550 will expect to be admitted over students with 1350. They will argue that it is unfair not to reward their kids for their great accomplishment.
Or, more likely, the kid with the 1350 will self-select out of the process, not fully understanding the complicated factors that might make their 1350 stellar.