Well, when that person attended community college in CA, tuition was only $4/unit, so it was pretty cheap. And sure, somebody should have offered advice and support, but there really wasn’t any. And the public school-provided ‘alternative’ high school that this student attended for three years mostly dealt with students who could barely read and could only handle 4th grade math… they didn’t even know what to do with a kid who knew more than that. Community college was a good place for a student like that to start… in AZ, there’s no way he would have been accepted at any of the in-state public universities (he had no science classes except in 9th grade, no foreign language either).
My point in bringing up that person’s SAT score is that if he had listened to the stupid notion that it’s a predictor of whether you’ll finish college and get a degree, then based on that concept, he should have quit before he even started.
If a student wants it bad enough, then they’ll plug away at it, they’ll keep trying and they’ll put in the gritty work in order to finish.
My child knows two people from his high school who attended CalTech and several who attended MIT. Everyone accepted to either school participated in math competitions (that seems to be a prerequisite for acceptance from our school), so they all know each other well and are still in touch. Their consensus was that CalTech academics are at a completely different level than MIT’s. I have no reason to disagree.
That’s not entirely true. There are kids who aren’t ready for regular Ma 1a.
There will be a special section or sections of Ma 1 a for those students who, because of their background, require more calculus than is provided in the regular Ma 1 a sequence. These students will not learn series in Ma 1 a and will be required to take Ma 1 d.
Caltech’s lowest level math course Ma 1a is calculus with proofs, like real analysis. MIT’s lowest level math course 18.01 is more like regular calculus, but accelerated (over one semester instead of the more typical one year).
The minimum academic strength to handle these courses is significantly different. Hence the likely reason Caltech finds the SAT or ACT math irrelevant but MIT finds it significantly relevant.
This. 100%. I am an MIT alum (and someone who took 18.01) and S19 is a recent Caltech grad (class of 2023). There is no comparison between MIT’s 18.01 and Caltech’s Math 1a. 18.01 is like an accelerated high school calc class, Math 1a is way more difficult, theoretical, and at a completely different level. Ditto for the MIT frosh Physics and Chem requirements as compared to Caltech’s.
And FWIW, S19 says he knew ZERO people at Caltech who got under a 780 in the math section of the SAT and the SAT Math 2 Subject Test (both were required when he applied in Fall 2018). In fact, he said almost everyone he knew got 800’s on both.
3 or 300, I still prefer to get my info from CalTech, and CalTech says that not everyone takes the same Ma 1. Those with less preparation take a different, less accelerated track. Presumably this is how some of the less prepared kids are able to catch up.
During the summer before the first year, entering first-year students are asked to take a diagnostic exam in basic calculus that will determine which students will be placed in a special section of Ma 1 a for those with less complete preparation, and later take Ma 1 d; …
Students in need of additional problem-solving practice may be advised to take Ma 8 (in addition to Ma 1 a) in the first quarter.
Looking at what CalTech says, the idea that every student who enters CalTech needs to be far and away more prepared than the MIT students seems to be a bit more legend than fact.
But even if it were true? So what? CalTech had enough variance to study the value of the tests and whether they are predicting performance, and found that the test isn’t useful for CalTech, especially with regard to Math and Physics. That can’t just be dismissed. Just as it isn’t useful for CalTech for math and physics, it may not be useful for a host of other schools and for different disciplines.
Why would we expect a single test to be useful in every circumstance?
The most remarkable thing to me about this statement is that CalTech students apparently are as infatuated with discussing their scores as are some of the parents on CC. Maybe it is a tech thing.
Not necessarily. Often competition is self-perpetuating. You hear about some kids who “started nonprofits” and got into T20s, and now all kids think they need to start nonprofits to be competitive. You hear about classmates publishing research, and then that becomes the new benchmark for competitiveness. It’s keeping up with the Joneses, college admissions style.
Looking at what CalTech says, the idea that every student who enters CalTech needs to be far and away more prepared than the MIT students seems to be a bit more legend than fact.
I don’t think students need to be “far and away” more prepared to attend Caltech over MIT. Caltech is just more difficult, period. Even if a student attends the special section of Math 1a, they still need to take Math 1b and 1c.
(And PLEASE… it’s “Caltech,” not “CalTech.” The uppercase T makes Techers cringe!)
Could some of us (not beebee3) dial it back? The sniping is getting a little out of hand.
In other news: here is something interesting to consider alongside the UCB grades data that showed math grades with and without test-blind admissions.
And no, I wasn’t digging for this, my friend just sent it to me. It’s yet another complicating variable that was mentioned in previous posts but not quantified:
It seems that Cal, like every other school these days, isn’t immune to grade inflation. There doesn’t appear to have been any drop-off in GPAs - regardless of major - since test-blind policies were adopted. In most majors (including STEM majors), average GPAs are up from their pre-test-blind days.
Just saying grade inflation is there… there was a question of how much, and this gives a feel for how much. With the previous data on class grades alone, we had to assume no grade inflation when evaluating grades.
2016-2020 seems relatively flat. Since the 19-20 and 20-21 windows overlap for three years… it’s not beyond reason to believe a jump occurs around 20-21.
(for clarity, we were looking at grades from various classes - e.g. calc - from UCB and comparing across 2018 vs 2021 or such. It is possible we may need to adjust the 2021 grades down a bit to account for grade inflation)
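To make that adjustment concrete, here is a minimal sketch. All the GPA figures below are hypothetical placeholders, not the actual UCB numbers; the idea is simply to subtract the campus-wide inflation drift before comparing cohorts.

```python
# Hypothetical figures for illustration only; substitute the real UCB averages.
campus_avg_2018 = 3.30   # assumed pre-test-blind campus-wide average GPA
campus_avg_2021 = 3.45   # assumed post-test-blind campus-wide average GPA
inflation_drift = campus_avg_2021 - campus_avg_2018

calc_avg_2021 = 3.60     # assumed 2021 average grade in a calculus course

# Crude correction: subtract the campus-wide drift before comparing to 2018.
calc_avg_2021_adjusted = calc_avg_2021 - inflation_drift
print(f"adjusted 2021 calc average: {calc_avg_2021_adjusted:.2f}")
```

This treats inflation as uniform across courses, which is obviously a simplification, but it gives a first-order way to put the 2018 and 2021 class grades on the same footing.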
I guess I’m not sure what distinction you’re making. Competition may be self-perpetuating, but at the end of the day, if kids know they don’t need to submit scores to apply to a T20 and they can still be competitive with their 4.0, that doesn’t seem like keeping up with the Joneses, just a clear-eyed assessment that they have a shot at getting in because their scores won’t undermine their GPA.
My point is that I don’t think the “fierce competition” over placement has anything to do with TO…Is the situation really so different in this respect at test required schools?
I think it’s been well established that the admissions game has changed radically since TO became the norm. Criteria are more opaque and results are harder to predict. Thorsmom66 used the phrase “fierce competition” in the initial post. Obviously highly selective colleges have always been fiercely competitive, but I’m not as focused on the Ivy+ or even the T20 as many others on this topic. By definition, most students who apply to schools with a <5-15% admit rate get rejected regardless of their grades and scores. The T20-T75ish are becoming more of a crapshoot, and that’s the difference now that TO is the norm.
As I’m sure you know, there are very few test required schools and many test optional colleges. I don’t see how you can reasonably argue that TO isn’t part of the equation when it comes to the rise in applications at many colleges and the competitive landscape that students are now facing for “mid-tier” colleges.
Regarding Caltech vs MIT. Both had an extremely restricted range on test scores prior to COVID, particularly math. 2019 25th to 75th percentile stats are below. I’d expect either college to be limited in how well it could review the influence of the math SAT on performance in past classes, given this degree of range restriction. Neither college lists any specific numbers, reports, or much detail about the internal test-benefit analyses that came to seemingly opposite conclusions, which leaves a lot open to speculation. Without any specific numbers and little specific detail, I wouldn’t assume either review is meaningful or extends to other schools.
Caltech – 790 to 800 Math, 740 to 760 EBRW, 99.5% score 1400+
MIT – 780 to 800 Math, 730 to 770 EBRW, 99.2% score 1400+
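The range-restriction point can be illustrated with a quick simulation. This is a sketch with made-up numbers, not real admissions data: it assumes a moderate full-population correlation of about 0.5 between math score and course performance, then shows how the measured correlation collapses once you condition on a narrow admitted band like the ones above.

```python
import random
import statistics

random.seed(0)

# Illustrative population (assumed, not real): scores ~ N(700, 60),
# performance = 0.5 * standardized score + noise, giving r ≈ 0.5 overall.
n = 100_000
scores = [random.gauss(700, 60) for _ in range(n)]
perf = [0.5 * (s - 700) / 60 + random.gauss(0, 0.866) for s in scores]

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    m = len(xs)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (m * sx * sy)

full_r = corr(scores, perf)

# Restrict to a narrow admitted band (780-800, mimicking the ranges above)
# and recompute the correlation within that band.
kept = [(s, p) for s, p in zip(scores, perf) if 780 <= s <= 800]
restricted_r = corr([s for s, _ in kept], [p for _, p in kept])

print(f"full-range r = {full_r:.2f}, restricted-range r = {restricted_r:.2f}")
```

The restricted-range correlation comes out near zero even though the score is genuinely predictive in the full population, which is why an internal study of enrolled students at either school can easily find "no predictive value."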
I’d expect that rather than distinguishing between a 790 and an 800 in Math, the potential value of tests would be in flagging kids who score below the usual high range and did not appear in previous classes. This cannot be reviewed well by looking at recent past classes. For example, MIT’s admission stats for 2019 show that a small minority of applicants had relatively lower scores. Specific numbers are below. Presumably more students from this stat range applied when tests were not explicitly required (MIT stated that students who could safely take tests should submit them, rather than saying they were test optional). And presumably more from this range applied to Caltech when it was test blind.
MIT Applicants in 2019
2% of applicants scored <600
2% of applicants scored 600 to 640
5% of applicants scored 650 to 690
11% of applicants scored 700 to 740
Estimated 25th to 75th percentile range = 760 to 800.
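For what it’s worth, here is one way to back out percentiles from binned shares like those above. The bin edges follow the list above; the floor score of 400 and the uniform-scores-within-each-bin assumption are mine. Because real applicant scores cluster near the top of the last bin, this crude interpolation lands somewhat below the estimated 760 to 800 range.

```python
# Cumulative share of MIT 2019 applicants at or below each score bin's
# upper edge, per the list above: 2% <600, +2% to 640, +5% to 690, +11% to 740.
bins = [(600, 0.02), (640, 0.04), (690, 0.09), (740, 0.20), (800, 1.00)]

def percentile(p, bins, floor=400):
    """Estimate the score at cumulative share p, assuming scores are
    uniformly distributed within each bin. 'floor' is an assumed minimum."""
    prev_hi, prev_cum = floor, 0.0
    for hi, cum in bins:
        if p <= cum:
            frac = (p - prev_cum) / (cum - prev_cum)
            return prev_hi + frac * (hi - prev_hi)
        prev_hi, prev_cum = hi, cum
    return bins[-1][0]

p25 = percentile(0.25, bins)
p75 = percentile(0.75, bins)
print(f"estimated 25th/75th percentiles: {p25:.0f} / {p75:.0f}")
```

With 80% of the mass in the 740-800 bin, both quartiles fall inside it; the uniform assumption is what pulls the estimates below the true (top-heavy) values.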
This makes the more relevant question whether less academically prepared applicants would only be flagged by SAT scores or whether they would be flagged in other aspects of the holistic admission process. Given the restricted range noted above, I expect both MIT and Caltech did not primarily admit based on scores prior to COVID. Instead I expect they focused on other criteria that may coincide with high scores. They might take note of someone who had successes in academic ECs/awards outside of the classroom, or had abnormal successes inside the classroom given the available opportunities. They’d still continue to emphasize such non-stat factors when test optional/blind. Would many kids with these types of non-stat factors have relatively lower test scores, and what would be their expected outcome at MIT/Caltech?
The specific process is different for different schools, including holistic ones. For example, Caltech has mentioned that faculty are well involved in the application process, including having faculty from the relevant department review applications. This type of review may not be practical at colleges that have a larger number of applicants relative to faculty, which could relate to why Caltech came to different decisions than other colleges. Some holistic colleges may also like to use the SAT as a quick screen before performing a more detailed holistic review, or may prefer to have the test score as an extra confirming point. Others may not. There are countless reasons for different decisions about testing at different colleges.
There are other threads where this was discussed but to be clear Harvard was not found liable for discrimination against Asian applicants. They won that issue twice in court and it was not argued in the appeal to the SC. The appeal argued that any use of race violated the equal protection clause and this is what was ruled upon.
Why is it necessary for the AOs to identify applicants who are “underprepared and at risk for failure” if literally no one is failing out of these schools?
Isn’t that a big enough reason why certain schools can afford to be TO because the standardized tests (SAT/ACT) don’t serve a real purpose for them?
Unfortunately, Caltech moved the Ma 1a course material behind a login wall. Back when they were publicly viewable, it was obvious that students in that course (special or regular section) needed to do substantially more proofs than in other calculus courses, including MIT 18.01. Which comes back to the point that the minimum level of academic strength needed to succeed in Caltech is higher than the minimum in MIT, to the point that the SAT is not relevant in Caltech admissions, even though it is relevant in MIT admissions.
The MIT situation may be more applicable to schools which have (regular, not proof-focused) calculus and multivariable calculus as general education requirements, or are focused on majors (like engineering majors) that require those courses, or are divisions focused on such majors (such other schools and majors need not be anywhere near as selective as MIT).
However, for a more general school, with a range of majors including those with relatively little math (up to calculus-for-business-majors and AP-level statistics) and relatively low level general education requirements in math (AP-level statistics is a common way to fulfill it at many colleges), the predictive value of SAT math section falls.
But, enough about the math sections of the SAT. What about the English sections? While testing math to a specified level is relatively straightforward, testing English may be less so. Old SATs were heavy on vocabulary, but that was gamed by test prep books of “1000 SAT words” or some such. The writing sample section of the three part SAT was effective for a few years, until test prep companies figured out how to game it.