Or they could get familiar with Caltech’s own internal analysis, which found similar performance in first-year Caltech math/science core classes between students who submitted tests and students who did not, prior to Caltech becoming test blind (as stated in the podcast). A quote from Caltech’s website is below.
The study, conducted by members of the Caltech faculty supported by professional staff, indicates that standardized test scores have little to no power in predicting students’ performance in the first-term mathematics and physics classes that first-year students must take as part of Caltech’s core curriculum. Further, the predictive power of standardized test scores appears to dissipate as students progress through the first-year core curriculum.
A non-zero portion of faculty at Caltech having a differing view does not prove that Caltech’s internal study is wrong.
“Out of ~300 professorial faculty members at Caltech, the petition was signed by over 140, as of the Tech’s January 26th interview with Dabiri. While there is clear support for the petition, it is unclear how many of the remaining faculty oppose vs. how many simply did not respond to the petition.”
My comment was in response to your quote that “They need to get familiar with the work of CC analysts.” The point was that it’s not just CC posters. Caltech’s own internal analysis found similar performance between kids who submit scores and kids who do not, as did many other colleges’ internal analyses, some of which have been linked earlier in this thread. Every such study I am aware of at a selective college found similar average cumulative 4-year GPA and graduation rates between students who submit tests and those who do not. The Caltech internal study also found that “standardized test scores have little to no power in predicting students’ performance.”
The article says that the petition “requested a shift from being test-blind to test-optional” and that “While the petition presented anecdotal evidence for a decline in student preparedness, it did not offer statistical evidence.”
None of this proves that Caltech’s study is wrong. There is a reason why such studies generally involve a regression analysis with numerous controls and/or compare the performance of test submitters to non-submitters, rather than just sharing some anecdotal examples of students being worse than in previous years. Suppose students as a whole were indeed worse than in previous years. How would you determine whether it was related to being test blind instead of test optional? Related to COVID? Related to remote classwork? Related to grade inflation? Related to a change in admission goals, with less focus on academic curriculum and more focus on diversity? Or related to something else? You need to do an analysis to evaluate this, which is what was already underway prior to the petition.
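To make the confounding point concrete, here is a toy simulation (all numbers invented, not from any Caltech data) showing why a raw year-over-year comparison can’t attribute cause. The true “test-blind effect” is set to zero, yet grades still drop because of a simulated pandemic disruption:

```python
# Illustrative only: invented numbers, no real admissions data.
# GPA = baseline - disruption drag - hypothetical policy effect + noise
import random

random.seed(0)

def cohort_gpa(n, disruption, policy_effect):
    return [3.5 - 0.3 * disruption - policy_effect + random.gauss(0, 0.2)
            for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

# Pre-COVID, test-required cohort: no disruption
before = cohort_gpa(500, disruption=0.0, policy_effect=0.0)
# COVID-era, test-blind cohort: heavy disruption, but zero true policy effect
after = cohort_gpa(500, disruption=1.0, policy_effect=0.0)

print(round(mean(before) - mean(after), 2))  # drop of roughly 0.3, even though policy_effect is 0
```

A naive observer sees the drop and blames the test policy, but by construction the policy did nothing here. Controlling for the disruption covariate (what a regression with controls does) is what separates the two explanations.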
Caltech’s previous study doesn’t mean “case closed,” nor does a portion of faculty signing the petition.
In their podcast they explicitly say you must submit all of your AP scores, with the implication being that if you don’t, your application will be rejected or considered incomplete. Further, if you submit SAT or ACT scores as your qualifier for this criterion, and your transcript shows that you took AP or IB courses, then they expect to see those scores as well. If you don’t provide those AP or IB scores (even though you submitted an SAT or ACT score), they will likely draw a negative inference about the non-provided scores.
So students who are taking the most advanced math courses available are unprepared because they didn’t do a math test which barely covers pre-calc? I find it hard to believe that any of the tiny number of students who are accepted to Caltech have not demonstrated their math abilities in far more ways than the SATs can do. If Caltech relied primarily on the SATs to tell them whether a student is ready for engineering-level math, they would be in trouble.
They require good grades in AP Calc, AP Physics, and AP Chem.
They also had 99.1% and 98.1% retention rates from freshman to sophomore year for 2020 to 2021 and 2021 to 2022. The year that saw the real dip in retention was 2019 to 2020, the year of the pandemic, and those students were required to submit SAT scores.
It’s almost as though the issues with students being prepared had to do with a global pandemic and multiple disruptions.
How many faculty was that? How many faculty signed the petition to begin with? You are writing as though the petition was signed by all Caltech faculty, and they all demanded to reinstate the SAT requirement.
In fact, none of them demanded that Caltech return to requiring SAT scores, only that they move to Test Optional.
I have known 2 past Yale admissions chiefs: Quinlan’s predecessor Jeff Brenzel, and Inky Clark back in the 70s. Inky Clark led the way beyond the traditional prep school student. The diamonds in the “rough” were highly sought after: URM, rural/small town, FGLI, etc. Test scores were viewed as a critical tool to sort the diamonds from the trinkets. Yale was one of the first to go need blind in its pursuit of the diamonds in the rough. It was unashamedly elite and knew that an insular applicant pool would actually be fatal to its reputation for excellence in the long run. Quinlan is continuing in that tradition. This is not about social justice primarily. It’s about finding the diamonds.
So, what, a quarter think that the students are unprepared? That many faculty are always complaining that “students these days are unprepared, unlike when we were their age”. Sometimes they’re even right.
That Caltech faculty behind the petition are apparently not up to CC intellectual standards - unlike the Caltech students that are struggling in their classes.
I shared that link as news, not as vindication. I don’t know what the Caltech study will find, and to be honest, as I said in the other thread, I’m always wary of faculty thinking the kids aren’t good these days.
Last year I had a conversation with a tenured MIT faculty member. He’s now elsewhere. He told me the kids weren’t as smart these days and admissions had to do something about it. I asked him as deferentially as I could what evidence he had of that, because all of the empirics we had, in both inputs and outcomes, showed the students were better than ever. He indignantly and heatedly insisted they were not and that I should listen to a faculty member. I pointed out that if you look at issues of The Tech going back decades, you see that every 10 years or so the faculty arrange a big meeting complaining about how the kids aren’t smart anymore, but that if he had any evidence (since MIT is a data-driven institution) we would absolutely look into it, and that I would be open to quantitative or qualitative evidence. He got red in the face and again basically insisted on faculty privilege.
To be clear: we are here to serve the faculty on their terms. But it’s hard when you can’t actually get to evidence in any form. There’s always a bit of “back in my day” going on in these conversations, as well as the extremely real issue of substantive learning loss experienced during the pandemic, which is not a diminution of intelligence but an inevitable set of chips in conceptual foundations and noncognitive skills. That’s not necessarily going to be picked up by an SAT or ACT, much less remediated by its reintroduction. But those chips get attributed to the SAT/ACT suspension, and people have decided that’s a valid causal force.
I’m not sure what the Caltech study will find and I could imagine lots of solutions that would not bring back the tests but still articulate a formal academic standard appropriate to the specific Caltech education. But we’ll see! I hope they do what is right for them.
Which is very unfortunate, because the best tool colleges had for diversity in recruiting was (for example) “I will ask the College Board for every [URM] student with a PSAT math score above a given level who is interested in STEM and send them something in the mail so they know about our outreach programs.” That kind of diversity-oriented recruiting is permitted by the SFFA decision — but foreclosed by the well-intentioned, substantively bad (IMHO) lawsuits against Search, which had been running in one form or another for like 50 years before it became the hobgoblin of NYT/WSJ articles and subsequently state AGs looking for a punching bag.
People who care about diversity in college admissions scored a wild own goal by knocking out Search, IMHO.
I appreciate your perspective, and agree that diversity-based recruiting does not run afoul of the SCOTUS ruling.
But it’s difficult (for me) to support CB selling and profiting from the information of minors (well, most are minors at least at the time of initial CB account registration). It’s not clear though from the NY settlement the actual impact on this part of CB’s biz model…do you have more details?
Schools are going to have to step up their target market reach and penetration in other ways, such as increasing outreach efforts and/or increasing partnerships with college access orgs to maintain/increase the funnel of the diverse and/or disadvantaged applicants they are seeking. There are hundreds of potential college access partners out there, beyond Questbridge and Posse. Just my two cents from having spent five years or so supporting low income/URM students in their college admissions process.
Because of all the lawsuits CB has effectively stopped doing Search as a product. It’s replaced with Big Future, an app meant to connect you to colleges. Not clear to me why anyone would use it, and you can’t target by performance, and in any case, the less socially bad solution is…targeted ads on mobile devices?
I can understand being against “selling information on minors” on principle. Who would be for that? Except that it’s always been opt in and it’s been this way since the 1950s when the College Board’s National Merit program first developed a truly national student market for colleges.
Sure, colleges can and should develop partnerships with CBOs (and I appreciate your perspective/experience). But if you don’t know test performance — and if you are at a school where you think tests have more predictive validity than say sophomore year GPA — you still don’t know who to recruit. You may end up just recruiting to reject. Many of our partnerships with CBOs have gone wrong at this point — we would recruit their best students who would apply with inadmissible academics and we would reject them, and that would be demoralizing and depress future applications to below the baseline pre-partnership.
The most effective way to recruit under-represented and under-served students demonstrating unusual potential who might be a good academic fit for your school was Search. It’s not clear any partnerships can replace it; I’m not a betting man but if I had to lay odds, they would be strongly against any replacement. Now if you think Search is such an ontological evil that it’s indefensible, then the consequences don’t matter. But my guess is that the long run consequences of Search tend towards less diversity among students and more regionalization among colleges outside the few dozen true national brands that will coast on NYT hand-wringing op/eds, ie the status quo before the 1950s when it was invented (as Charles Petersen, no relation, demonstrates in his dissertation).
edit: another possibility is that regionalization of college admissions would be good, you’d see more students staying nearby for colleges that need them, less regional diversity within colleges but more local diversity, and the T20 will be fine on brand. Maybe so, and that might not be the worst outcome (it might even be better than today). It will be wildly different though, and from where I sit (a rare and non-representative seat to be sure!), I would like to see more national searches for “talent,” rather than fewer.
Can’t a school’s outreach efforts replace Search, for example the two FTEs that Yale has added to focus on finding these students? Or orgs like Stars Network (where I know MIT is a member)? Or…?
I do appreciate your insight that some CBO partnerships haven’t worked as intended. It doesn’t seem that all potential students who are identified and apply would indeed go on to be accepted though (single digit acceptance rates and all!)…that seems to be addressable through education and setting of expectations with CBOs? But still, I am sure easier said than done.
Yeah, no one has figured out the new Big Future, sadly.
The thing is that minors may not understand what they are ‘opting in’ for…and I mean this in a larger way, beyond the Search check box on their test registration forms.
It can be difficult to communicate this type of nuance to a high schooler…opting in for X might be ok, but not for Y (noting that most of my work was with disadvantaged student-athletes, where sadly there are plenty of people trying to take advantage of these students, and ‘opting in’ to something might hurt their recruiting eligibility, or impact NIL deals, etc.) My priority was always to ‘protect’ my students.
My short answer is no, I don’t think anything — including significant resources for CBO partnerships, where we have also added FTEs — can replace Search. Of course, without Search — since it’s gone — we have to do what you have to do. It’s just all much less efficient and effective which, when coupled with the SFFA ruling, deals a one-two punch that I suspect will have significant negative consequences at least for some schools (and students) in the long term.
On the other hand, I want to again acknowledge that MIT’s institutional problems are not the most important in the world. We have resources and a brand name. And if an on-balance-good-for-students (stipulated for purposes of argument since I don’t actually agree with that) policy change makes our lives harder, so be it. Certainly hear you on the student athlete thing.
It is less clear to me how this will impact the great majority of schools and scholarship programs. It’s just been driving me to slam my head against the wall to not see anyone discussing this in concert with SFFA, and how both the recruiting and selection sides of the ecosystem are being affected simultaneously.
School studies have a way of reaching the conclusion they want to reach. Hey, let’s have a study linking “school success” (defined by a grade) with test scores (another kind of grade). Some schools are just trying to find a way to justify an endpoint they have already decided they like. A student who does not submit an SAT score may be someone who took it, got an 1100, and then chose not to pay for a tutor or do tons of practice exams. The ones who submitted are also far more likely to have done more prep and taken more exams.
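The self-selection point above can be sketched with a toy simulation (all numbers invented): if every student has a score but only those above some personal “worth submitting” bar actually submit, the submitted pool is not a random sample, and its average sits well above the full pool’s average.

```python
# Illustrative only: invented score distribution and cutoff, no real data.
import random

random.seed(1)

# Everyone takes the test; scores clamped to the SAT's 400-1600 range.
scores = [min(1600, max(400, random.gauss(1200, 150))) for _ in range(10_000)]

# Hypothetical behavior: students only submit scores above 1300.
submitted = [s for s in scores if s > 1300]

pool_mean = sum(scores) / len(scores)
sub_mean = sum(submitted) / len(submitted)
print(round(pool_mean), round(sub_mean))  # submitted mean sits well above the pool mean
```

So any comparison of submitters to non-submitters is comparing systematically different groups, which is exactly why the internal studies lean on controls rather than raw group averages.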