My point is that context matters. Clearly a 940 did not predict my failure in UG, just as an LSAT of 158 did not predict my performance at UCLA Law. I did above average and passed the CA Bar that year, just like 97% of my classmates did (for reference, the 25th–75th percentile LSAT range among my UCLA Law classmates was 170–178).
You may want to say my result is “atypical” but I beg to differ. Someone saw some potential in the rest of my application that year and thought I deserved a chance.
That’s a reductionist strawman. Who is saying that?
And of course, another issue is that the data doesn't demonstrate that there hasn't been a performance decline. It's a crude measure with too many variables that could affect it, especially in the past few years, for reasons I briefly outlined.
Examples of the types of posts I am referring to are below. There are many other examples. I am not stating these posts are incorrect, more that they seem to be guesses without a clear explanation of how the conclusion was reached. I'd prefer to consider all relevant information that may help evaluate whether such posts are accurate, even if many of the sources are not perfect. The comment was not intended to be a literal dichotomy, and I agree it is not helpful to the discussion. I will remove it.
“there are a lot of students enrolled in T15 schools without test scores that are not prepared at all, and are being forced to reckon with that truth the hard way”
OTOH, we might see more poor grades in Intro STEM courses. But after a couple of C minuses or worse, most such students will self-select out of STEM and find a major that works better for them.
These students will still contribute positively toward their school’s retention and graduation rates and count as successes by studies/rankings that consider these two metrics, obscuring their lack of preparation.
the CSU push for increased graduation rates = pressure for easier grading
This has been observed at other colleges as well. Test optional admits are consistently more likely to be female, and test submitter admits are consistently more likely to be male. For example, the Bates 25-year review found 60% of test optional admits were women, compared to 48% of test submitters. The 2022 Common App report found 48% of men submitted scores via Common App in 2022 vs 40% of women.
How much it influences the gender balance of the overall student body depends on how much effort the college makes to maintain an approximately 50:50 gender balance. A selective college has the option to have admission decisions that create a particular gender balance, regardless of testing policies.
Comparing the ~80 most selective colleges' fall 2019 freshmen and fall 2021 freshmen (most recent IPEDS year), the median increased from 51% women in 2019 to 53% women in 2021. Some of the colleges with larger changes include:
Reed: 53% Women → 64% Women
Northeastern: 51% Women → 61% Women
Duke: 49% Women → 55% Women
Yale: 50% Women → 55% Women
Northwestern: 51% Women → 56% Women
The colleges with the largest change in favor of men were
Cooper Union: 51% Men → 54% Men
RPI: 68% Men → 70% Men
@Data10 You used what I wrote as an example of "guesses without a clear explanation of how the conclusion was reached." As I try to be careful when I write, can you point out which parts of the text below are guesses rather than facts?
A student who enters as major A and graduates within 6 years, whether as major A or major B, will count toward retention and graduation rates, according to Federal Student Aid. This is factual. They will also count as successes by studies/rankings that consider these two metrics, which is also factual.
As for the phrase “obscuring their lack of preparation,” the premise is the student obtained a couple of C minuses or worse early on as major A and decided to switch to major B that worked out better for them. Since the student was able to cruise through major B, getting poor grades early on as major A can only be attributed to their lack of preparation for major A before entering college. There are no guesses here. Lastly, the fact that changing majors does not affect retention and graduation rates means their lack of preparation for major A has been obscured, or hidden, from the stat sheet.
I was referring to the discussion about test optional students switching majors due to not being academically prepared, rather than to whether students who switch majors count towards graduation rates. Yes, it was taken out of context. Your post was more of an extension of a discussion from another reply.
It may be true that a good portion of test optional kids are switching majors due to being unprepared and getting low grades, but I haven't seen any evidence of this posted in the thread, such as a comparison of rates of major switching between test optional and test submitter students. The previously linked Bates review does show a different major distribution among test optional graduates and test submitter graduates, but this isn't the same thing as major switching due to unpreparedness, as it does not discuss the major distribution of entering test optional and test submitter students. It's not clear how the distribution changed between entry and graduation.
I fully recognize the value elite schools provide, but as a Wall Street guy I don't think the sample you picked proves the point. Elite financial firms hire almost exclusively from the Ivies, M&S, and other big brand-name schools. So this is a biased sample, because there is a treatment effect that you're not accounting for.
Re: elite banking firms hiring from elite colleges - SO WHAT?
It’s a topic that comes up in a lot of threads….the whole argument of “but if you want to be a Wall Street banker, then you have to go to Elite U.” Ok, that’s fine.
Not everybody desires to be a Wall Street banker though. That is also fine.
And given the TO policies at many an “Elite U,” one technically could get admitted with no ACT or SAT.
And why's that? These guys are all about performance. Prestige as such doesn't matter to them, as they have few to no external customers. Why does it make sense for them to hire that way?
The firms in my sample shouldn't be confused with banks. These are completely different animals, with dramatically different hiring processes and employee composition. And they are interesting not only in and of themselves, but also as a yardstick.
You can think of it as a standardized test that measures well beyond the range where the SAT tops out.
That’s a great discussion to have, but on another thread. (As an FYI, I know two senior people at two of the firms you cited).
But you missed my point. I wasn’t challenging the quality of these grads (I’ve hired and known a ton of them). What I was saying is, if you’re trying to prove the assertion “schools X have the best and brightest students” by pointing to elite financial firms that only hire from schools X - that’s not the best way to prove your point.
My contribution of the above statistics was in the way of illustrating a point that is a bit more nuanced:
While I will be the first to cite Plutarch's famous maxim that "it is not the places that grace men, but men the places" (and I have brought my kids up that way, with the starkest example in the world very near and dear to me; I won't say more publicly), it is possible at the same time to acknowledge that an 80% yield rate at HYPSM is not a mere artifact of irrational decision-making by otherwise bright students (and their families) who didn't recognize they'd be better off taking a full scholarship at a lesser school.
It was posted in the "who needs those places anyway" subthread. I agree it is somewhat tangential, but I didn't start that conversation; I merely contributed what I think are some pretty interesting (and easily obtainable) stats.
I think we are in what I once heard aptly called “violent agreement”.
Would be curious to hear your thoughts on the underlying causes of these hiring practices. Perhaps an opportunity will present itself in another thread one day.