Stanford, Harvard, Dartmouth, Yale, Penn, Brown, Caltech, JHU, and UT-Austin to Require Standardized Testing for Admissions

Funny. Blame the student. Not the teacher. Maybe, just maybe, it is time for the profs to look in the mirror. The letter completely oversimplifies the impact of COVID. Is it possible the professors failed to recalibrate?

Caltech isn't in the business of recalibrating to make sure that every kid in the world can get admitted and graduate. That's not their mission. And recalibrating isn't in their vocabulary.

It is a very narrow and specialized university. If a kid isn't interested in what they provide, don't apply. Should West Point drop the physical fitness requirement, or expand the BMI threshold, so that every kid who wants to be a soldier can compete?

20 Likes

The letter has the smell of whining: "Oh, we can't win a competition. We are so embarrassed." You're right, they can do whatever they want. It would be a good topic for a separate debate: should profs take some accountability for the academic performance of their students?

It's a very specific university; this is basically its ethos. Like MIT, the approach is "drink from a firehose because you have the math brainpower to rise to the challenge," but Caltech goes further. Teaching and learning have a different meaning there: autonomy, self-learning, and contributing to research are what's most valued. Needing help from professors would be puzzling. Their message here is unsurprising.
@aunt_bea

5 Likes

The letter is anonymous, so does the faculty really share the same view or not? Imagine being a parent of a student at the school reading this. I know it is Caltech and they can do whatever they want.

If I were a parent of a Caltech student I'd be thinking, "Wow, the faculty is really concerned with teaching. Everyone says that all they care about is their research grants and their latest publication, but they seem quite engaged with the actual learning and teaching of the undergrads. Maybe my kid needs to work harder to keep up."

5 Likes

Yeah. Blame the student. The letter is complaining about Caltech's performance versus MIT going back to 2010. So they had a bad crop of students over the last 13 years?

A student's performance can be measured and a student can be held accountable. Maybe it is time to lift up the carpet and see what is going on in the classrooms, labs, etc., and start holding professors accountable.

Accountable for what?

Your premise is hilarious. And methinks you have a bone to pick that has nothing to do with Caltech…

1 Like

Sorry, but if you are going to look at root cause, you need to consider all the variables in play: the quality of the student population and the quality of the professors. You can't pick and choose. And I will always advocate for the student, so yeah, that is my bias.

Do you truly feel you have a better grip on evaluating the mathematical aptitude of their applicants than the professors at Caltech?

6 Likes

An anonymous letter? I can make the argument that, based on the poor performance relative to MIT since 2010, the quality of teaching has deteriorated. You can't dismiss all the other variables just to manipulate a point.

Caltech students have to be taught? No wonder the quality has gone down.

2 Likes

Note the link skips to the bottom of the story. There is a more detailed discussion above the link, including limitations.

It is worthwhile to note that this is not a controlled study; it is a series of observations. For example, the letter mentions a significant decline in EE 55 midterm scores between 2021 and 2023. EE 55 is a fairly small class, with 17 students in both 2021 and 2023, so the EE 55 midterm may not be a good representation of all Caltech classes.

While the two referenced electrical engineering classes did show a decline in performance, there is no controlled analysis to explain why the decline occurred. Being test blind is certainly one possible contributing factor, assuming the students in the 2021 EE 55 class were mostly upperclassmen who were admitted before 2020, when tests were required. However, it could also relate to things like remote teaching during COVID; perhaps some students who took remote classes for mathematical fundamentals at Caltech and/or their HS didn't build as solid a foundation as students did in previous years. Or it could relate to a change in what criteria admissions emphasizes; perhaps Caltech has been making a greater effort to emphasize diversity in admissions, including diversity among the schools from which it admits students. Or it could relate to a change in student expectations about grading and the amount of effort required to achieve a particular grade (grade inflation). Or it could simply be a small-sample issue, with only 17 students in the class. You need a controlled analysis with a larger sample to determine the relative contribution of factors like these, rather than assuming it must be because of x. The author of the letter acknowledges this limitation in the comments above the letter and makes it clear that he is not assuming it must be due to being test blind; it is instead one of several possible contributing factors.
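To make the small-sample point concrete, here is a minimal, purely illustrative Python simulation (the score distribution is made up, not Caltech data) of how much a 17-student class average can swing from year to year even when the underlying student population hasn't changed at all:

```python
import random
import statistics

random.seed(0)

# Hypothetical midterm score distribution: mean 75, sd 12 (made-up numbers,
# not Caltech data).
POP_MEAN, POP_SD = 75, 12
CLASS_SIZE = 17          # roughly the EE 55 enrollment mentioned in the letter
N_SIMULATED_YEARS = 1000

# Draw many "years" of a 17-student class from the *same* population and
# record each year's class average.
class_means = []
for _ in range(N_SIMULATED_YEARS):
    scores = [random.gauss(POP_MEAN, POP_SD) for _ in range(CLASS_SIZE)]
    class_means.append(statistics.mean(scores))

spread = statistics.stdev(class_means)
print(f"sd of a 17-student class average: {spread:.1f} points")
print(f"typical gap between two such classes: ~{spread * 2 ** 0.5:.1f} points")
```

With made-up numbers like these, two classes drawn from an identical population can easily differ by several points on the midterm average, which is exactly why the letter's own call for a controlled analysis with a larger sample matters.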

The evidence for test blind contributing seems to be based mostly on reports recently published by other Ivy+ colleges that switched to test required, namely the MIT report and the OI report.
I expect it's not a coincidence that this letter appeared shortly after certain Ivy+ colleges chose to go test required and the OI report was published. The author notes that Caltech's post-COVID analysis reached the opposite conclusion from the referenced Ivy+ schools, with their internal report saying the SAT had little predictive power for performance in Caltech's core math/science classes. However, the author notes that this Caltech report was not made public, so specific details are unknown, and he/she is skeptical of the conclusion. My suspicion is that the outcomes would be similar for similar measurements, but the two analyses are not using the same measurement.

Evidence for other factors includes the letter mentioning that Putnam performance has been falling for the past 15 years, rather than just post-COVID. I suspect Putnam and similar performance has little direct connection to SAT requirements, though it may have an indirect influence. For kids at the AMO level, it's often more a question of which Ivy+ colleges they will choose to apply to, and which ones they will choose to attend if admitted to multiple, rather than an issue of Caltech having no way to identify AMO-level kids without knowing their math SAT score. Would AMO-level kids choose MIT, Caltech, or somewhere else? Being test blind probably isn't a big selling point for Caltech over MIT among top math test award kids. Some may see it as a negative, and/or may have seen Caltech previously having the highest reported 25th-percentile SAT among US colleges as a positive. It could have a similar type of impact on AMC/AIME-level kids who may choose to participate and help make a Putnam team more solid, even if they are not likely to be a top individual scorer.

However, if Putnam performance has been declining for many years before COVID and test blind, then it may be part of a larger pattern in how top HS math test kids perceive Caltech vs MIT. It could also relate to the increased relative popularity of CS/tech vs math research in recent years. If math contest kids are more likely to choose CS/tech and less likely to pursue a math PhD than in the past, that might be one reason for favoring MIT over Caltech. There are many other similar types of possible explanations. Again, a controlled analysis would be helpful, which is what the letter recommends.

1 Like

This seems to be missing the point. Yes, a student could be less prepared for the rigorous STEM education at Caltech for a whole host of reasons, including but not limited to the ones you mention; this was true even before COVID, though COVID likely exacerbated the lack of preparation in a multitude of ways for a far larger number of students. However, the question is not why students may lack the necessary preparation, but rather how one is to identify whether a student can handle the rigorous STEM coursework they will be thrown into at Caltech. You mention grade inflation as one of the possible issues that may be leading to the inability to identify student preparation, so grades provided in a local context cannot be the answer. Thus, how can one identify a student's math preparation without a standard of some sort that each student must meet?

There is a quote from Caltech admissions that many have brought up which suggests that standardized tests do not predict first-term grades, but does anyone know when the data behind this quote was gathered? If it was before the test-blind policy, then the quote is useless. Before the test-blind policy, it seemed the requirement on the SAT math portion was a minimum of 700, but the vast majority were on the high end, with the math 25th percentile at 790. It's unlikely that with such uniformly high SAT scores there would be any predictive power for first-term grades.
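The restriction-of-range concern can be illustrated with a toy simulation (all numbers are hypothetical; this is not Caltech's data or model). Even if SAT math predicts grades reasonably well across a full applicant pool, the correlation largely evaporates once you only look at admits clustered near the top of the scale:

```python
import random
import statistics

random.seed(0)

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    n = len(xs)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)

# Hypothetical model: a latent "math preparation" level drives both SAT math
# and first-term grades (effect sizes are made up).
applicants = []
for _ in range(50_000):
    ability = random.gauss(0, 1)
    sat = min(800, max(200, 600 + 80 * ability + random.gauss(0, 40)))
    grade = 2.5 + 0.5 * ability + random.gauss(0, 0.4)
    applicants.append((sat, grade))

admits = [(s, g) for s, g in applicants if s >= 760]  # ~pre-test-blind admit profile

sats, grades = zip(*applicants)
a_sats, a_grades = zip(*admits)
print("correlation in full applicant pool:", round(corr(sats, grades), 2))
print("correlation among 760+ admits:     ", round(corr(a_sats, a_grades), 2))
```

On a run like this, the full-pool correlation comes out around 0.7 while the 760+ subset is a fraction of that, which is the usual reason "the SAT didn't predict grades among our admits" says little when nearly all admits scored 760-800.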

Caltech's extremely high SAT scores have given rise to the argument that SAT scores are not needed, as they would be uninformative. However, does the range of SAT scores among students who apply to Caltech differ from those who are accepted? Moreover, Caltech has had a huge increase in the number of applications since going test blind. Is it at all reasonable to assume that this huge increase includes only students in the same SAT range as the previous test-required admitted cohorts? If that were the case, then why did the number of applications increase significantly after the introduction of the test-blind policy?

Another argument seems to be that the math portion of the SAT does not cover calculus. However, without the math skills necessary to do well on the math portion of the SAT, a student does not have the background necessary to even tackle calculus. The math on the SAT is a necessary but not sufficient indicator of math preparedness. Some faculty are suggesting that students have been less prepared for a Caltech education since Caltech went test blind. The test-blind policy may or may not be a contributing factor, but that is irrelevant: what the test might be able to do is help identify those students who are ill-prepared for a STEM education at Caltech.

2 Likes

Considering that the letter requests an analysis of performance, and requests forming a committee to make recommendations about how to improve the admission process, it’s absolutely relevant whether there is a decline in performance and why that decline occurred. Without knowing why, it’s unclear how to improve the admission process.

For Caltech-level students, the math SAT is essentially a test of being able to answer quick, simple algebra/trig-type multiple-choice questions rapidly, without making any careless errors and without getting tripped up on things like the diagram not matching the words. A high SAT score alone is by no means a good indication of being prepared for Caltech rigor. Caltech emphasizes math beyond trig, and it emphasizes complex math problems without time pressure rather than rapidly answering simple multiple-choice questions.

This was true long before going test blind, and Caltech has long had an admission system that looks at other factors to evaluate preparation. For example, Caltech continues to have a unique admission system in which faculty evaluate applications, and all admitted students receive a "vote of confidence" from a faculty member. Faculty are in a better position to evaluate things like whether research is truly meaningful or just a bunch of buzzwords. Caltech of course looks at other high-level academic ECs/awards beyond just research. Caltech is one of the only colleges in the United States that requires all students to take calculus (proof of being knowledgeable in calculus can substitute in some cases; an SAT score cannot). I could continue.

I doubt that the SAT's key use would be to identify who is prepared for Caltech. However, as touched on in the letter, a theoretical use could be flagging students who are not prepared. For example, the average math SAT score in the US is ~500. A typical 500 math SAT kid who hasn't mastered basic algebra/trig probably is not going to be successful at Caltech. Then it becomes a question of whether that kid with the 500 math SAT who hasn't mastered algebra/trig would be admitted under a test-blind system, or would be flagged in some other part of the application, such as not having high-level academic ECs/awards, not having taken and done well in calculus, … as discussed above. Another potential benefit could be in self-selection. Caltech's application pool has traditionally been extremely self-selective, perhaps more so than at any other Ivy+ college. However, the degree of self-selection changed when going test blind, as can be seen in the huge increase in applications and in Caltech having a lower admit rate than all other Ivy+ colleges. It's a complex situation that warrants a legitimate analysis, like the letter requests… not simply a question of "how can you evaluate math preparation without knowing the math SAT score?"

In the podcast, they mention the study found kids with and without scores had similar performance in Caltech classes, suggesting it included a time when scores were not required.

One need not know the reason why something occurred in order to employ remedies that can help prevent it from occurring again. While studying the issue, some faculty seem to want SAT scores to help identify students ill-prepared for Caltech, regardless of why they are ill-prepared.

Yes, which is why I stated that a high math SAT score is likely a necessary but NOT SUFFICIENT measure of math preparation. Students who can tackle the kind of math education Caltech expects of all of its students should have no issue with the SAT math portion, with plenty of time to spare. There are, on occasion, students with incredible math talent but slow processing speed; apart from accommodations, there are letters of recommendation that should be able to contextualize such a student. In fact, this type of student may have difficulty achieving an A on tests for the same reason they would struggle on the SAT, so contextualizing is likely needed regardless of test policies.

You can continue, but note that for all the checks and balances you outline, some faculty signed a petition suggesting a few students are ill-prepared. Ergo, all these extra checks that Caltech undertakes were still insufficient to identify the students who are ill-prepared for a Caltech education. I can speculate as to why such checks were insufficient (for instance, some students do not undertake research, some students do research that does not require math beyond simple algebra, etc.), and the faculty can then investigate where such checks went wrong, but that would be irrelevant for the purposes of identifying students who are currently ill-prepared for a Caltech education.

Not sure why you are suggesting a minimum math score of 500, which I believe is the average math score in America, when the petition actually suggested that before going test blind, the minimum requirement to be successful at Caltech was a math score of 700. I expect the faculty would not assume that an average student would be well equipped to handle the rigors of a Caltech education without significant remedial education, which Caltech does not offer. As for the calculus requirement, not every student takes the AP exam, but that could be another means by which Caltech might be able to assess preparedness for its rigor.

It is not a matter of judging how well prepared a student is for the math they will undertake, but rather whether a student is ill-prepared for the curriculum. It seems that students without a 700 on math will perhaps struggle and be more likely to drop out, as suggested in the petition.

That may very well be, but this is an assumption, as there might have been students without scores during the test-required era for some reason. I did find the wording a bit odd, as it was akin to saying it makes no difference, but then any difference became indistinguishable in subsequent terms, which suggests there was some difference. Also, I believe the first term at Caltech does not feature grades, just pass/fail, so was this the measure, or was it shadow grades?

2 Likes

Your original post suggested that reviewing why was missing the point. If you don't know why something occurred, you are often making guesses about the remedy and the effectiveness of any proposed remedy. A review was requested, and conducting that review is not missing the point.

The quote stated why I used a 500 SAT as "an example" ("example" does not mean I am suggesting Caltech change their admission policies to this minimum threshold): because it was the approximate average SAT score in the US. I expected the point would be more obvious and less ambiguous when using "a typical 500 math SAT kid" than it would be with a high-scoring kid near Caltech's previous admission score bands.

My point was that you need an analysis to show that this is the cause. It's not clear that Caltech is admitting any significant portion of its student body with less than a 700 math SAT score. For example, prior to COVID, when nearly all students took the SAT, Bowdoin used to request SAT scores from test-optional kids in the summer before matriculating. This allowed Bowdoin to publish SAT scores for the full class, both test-optional and test-submitter. While test optional, the overwhelming majority of kids at Bowdoin had a 700+ math SAT, in spite of Bowdoin being a LAC that does not emphasize math/science/tech anywhere near to the degree Caltech does and has a far smaller portion of students pursuing math-heavy majors. As such, I'd expect a tremendously higher percentage of 700+ kids at Caltech, due to very different self-selection, a much higher rate of math-heavy majors, an admission system placing a greater focus on being well qualified in math, and generally being a more selective college.

You mentioned a higher rate of dropping out. One of the few stats we do have is first-year retention (the flip side of the drop-out rate). Retention by period is below. Slightly fewer students have been dropping out since switching to test blind, not more.

Test Required 2014-2019: Median = 97.9% retention
Post Test Blind: Median = 98.6% retention
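
A minimal sketch of that comparison, with hypothetical per-year retention values chosen only to illustrate the calculation (the thread quotes only the two medians, not the yearly figures, and the class size below is an assumption):

```python
import statistics

# Hypothetical per-year first-year retention rates (percent). Placeholder
# values only: the thread quotes the medians, not the yearly figures.
test_required_2014_2019 = [98.1, 97.7, 97.9, 97.6, 97.9, 98.3]
post_test_blind = [98.4, 98.6, 98.8]

med_required = statistics.median(test_required_2014_2019)
med_blind = statistics.median(post_test_blind)
print(f"test-required median:   {med_required:.1f}%")
print(f"post-test-blind median: {med_blind:.1f}%")

# Assumed entering class size (an assumption, not from the thread), just to
# show the scale of a 0.7-point retention gap in actual students.
CLASS_SIZE = 230
extra_retained = (med_blind - med_required) / 100 * CLASS_SIZE
print(f"~{extra_retained:.1f} additional retained students per class of {CLASS_SIZE}")
```

Under that assumed class size, the quoted 0.7-point gap works out to roughly one to two students per entering class.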

1 Like

I said you were missing the point in that you suggested that test blind might be the reason for the student body being less prepared, but there could be a host of other reasons, such as online learning, etc. I suggested that it doesn't matter whether test blind was the cause or not; rather, the SAT math score might be relevant anyway in helping identify those who are ill-prepared for a Caltech education.

Actually, you may not need any such causal analysis in order to decide that a standardized test is needed to help identify students ill-prepared for a Caltech education. For instance, let's say that the applicants to the school are basically the same as they were in 2019, but that around the country there has been a deterioration in education overall. While in 2019 going test blind wouldn't have mattered if the admits had the same test scores as the applicants, it might matter in 2024 due to the general education deterioration, which means that some students who would have been above the 700 threshold now fall below it. The test-blind policy is not the cause of the situation at all, but requiring the test can alleviate the problem introduced by general learning loss.

Note that in the above situation a higher percentage of students today would not have met the 700 score at Bowdoin than in 2019. However, the numbers do not matter much, because even if a few or even many students are below this threshold at Bowdoin, it's unlikely to have much impact, since Bowdoin does not have the same steep entry to STEM that all students at Caltech must face. Also, there are other options apart from STEM, which is less true at Caltech. While there may have been only a few at Bowdoin, even a tiny number at Caltech could be problematic, as there are far fewer options for them than students have at Bowdoin. Yes, there is self-selection, so I expect the median and even the 25th percentile are higher at Caltech than at Bowdoin, but you don't need more than a few ill-prepared students in a class as small as Caltech's for there to be an impact.

No, I said that a score below 700 was linked to a higher rate of dropping out, as stated in the faculty petition. The faculty petition also said that some professors had to adjust their grading for the current students, which suggests that the decrease in the drop-out rate may be due to more lenient grading. If the drop-out rate were an indication of preparedness, I do not think this faculty petition would even have been penned.

1 Like

Regardless, the petition has a section titled "Potential causes…" and suggests evaluating those and other potential causes. It's not missing the point.

Or you may find that Caltech has not been admitting any significant number of kids with a <700 score while test blind, so using a 700 threshold has no significant impact on anything besides the self-selection of who chooses to apply. Or you may find that SAT scores have little correlation with Caltech grades among kids with similar non-score admission criteria (similar levels of HS course rigor, a similar degree of academic ECs/awards, …), as the previous post-COVID Caltech analysis implies. Or you may find that the SAT has a significant correlation with first-year performance but less so for later years, and with Caltech being pass/fail for the first year, scores have little impact on overall Caltech GPA or success in later years. Or you may find that the 17 electrical engineering students mentioned in the report were not a good representation of the overall student body at Caltech and there is little decline. Or you may find countless other outcomes. The result is that "you are often making guesses about the remedy and effectiveness of proposed remedy," as stated in my earlier post. Sure, it's possible that having a 700 SAT threshold and/or using the SAT in other ways will fix everything. And it's also possible that it will not.
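
For what a "controlled" look might involve, here is a rough sketch in Python with entirely made-up data (the variable names and effect sizes are assumptions, not anything from Caltech's internal report): compare the raw SAT-grade correlation with the partial correlation after adjusting for a non-score admission index.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical admitted-student data (made up, not Caltech's): a latent
# preparation level drives math SAT, a "non-score" index (HS rigor, academic
# ECs/awards), and core math/science grades.
prep = rng.normal(size=n)
sat = np.clip(700 + 40 * prep + rng.normal(scale=30, size=n), 200, 800)
non_score = prep + rng.normal(scale=0.7, size=n)
grade = 3.0 + 0.4 * prep + rng.normal(scale=0.35, size=n)

def residualize(y, x):
    """Residuals of y after a least-squares fit on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

raw_r = np.corrcoef(sat, grade)[0, 1]
partial_r = np.corrcoef(residualize(sat, non_score),
                        residualize(grade, non_score))[0, 1]

print(f"raw SAT-grade correlation:                  {raw_r:.2f}")
print(f"partial correlation, controlling non-score: {partial_r:.2f}")
```

Whether the real partial relationship is near zero, as the non-public Caltech analysis reportedly implies, or still substantial, as the MIT/OI reports suggest, is exactly the kind of question the requested analysis would have to settle.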

That is linked to a non-public report from the 1990s, likely based on classes from 30+ years ago. It was a different SAT test, a different Caltech admission system, different Caltech classes, different … It's also not clear what, if any, controls were used. Why are you quick to agree with this analysis from 30 years ago and choose the same SAT thresholds listed in this old report, but quick to dismiss the more recent post-COVID Caltech analysis, from when scores were not required?

Your argument was that test blind may not be the reason for the current issue, and yet that argument misses the point that, whether test blind is causative or not, requiring a minimum math SAT score may help identify students who are ill-prepared for a Caltech education, which seems to be the issue that the Caltech petition outlines.

No one said it would be a significant number of students. In fact, I suggested that even a small number could have an impact. Also, I see you mention the class of 17 but not the larger class which also saw a decline, so it is interesting which data you choose to focus on. Frankly, I don't imagine this petition would have had such a large number of signatures if one class with 17 students were the sole evidence of a decline.

Nowhere did I suggest that having an SAT baseline of 700 would fix everything. Rather, I suggested that using the baseline that existed prior to the school going test blind might help identify ill-prepared students. It might not, but it seems sensible to use the criteria that were in place before the decline took place to see if they act as a remedy for the current issue. If they do not, that in itself is helpful in knowing what won't work.

The study may have been done 30 years ago, but the threshold was in place until COVID. Ergo, the threshold used is temporally far more relevant than it would be if it were something discovered and then abandoned decades ago. I am not quick to choose or dismiss either study, but if the latter study were so accurate (and again, do we have any details on this study, such as when it actually took place and how many students and classes were included?), this petition would probably never have been penned.