If I had the data, I’d start by reviewing first-year grades in the intended major, particularly for STEM coursework, track transfers to other majors, and compare test submitters vs. non-submitters. If a math-wannabe ends up graduating with a mean GPA in a Studies major, is that an accomplishment?
Bates may have been an arbitrary name used in your post, but it is certainly not arbitrary with FairTest and others.
I did too, day in and day out. And in my business, those would be considered very good numbers.
The reason I approach it this way is that there is a difference between models that help you understand behavior and models that are defensible for making decisions. For example, if you were trying to understand student performance, you might include things like the quality of the high school, race, and gender. You would no doubt get an extremely high R^2 by doing so.
But none of those three things are defensible for making decisions on who to admit, and therefore you end up with lower R^2 values in models that are appropriate for decision making.
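To make that concrete, here’s a toy simulation (every number in it is invented for illustration, not real admissions data): if first-year GPA depends partly on a background variable like high school quality, a model that includes it will explain more variance than a model restricted to the test score alone, even though only the latter may be defensible for decision making.

```python
# Hypothetical simulation: R^2 of a SAT-only model vs. a model that also
# includes a (non-defensible) background variable. All values are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

sat = rng.normal(0.0, 1.0, n)          # standardized test score (z-scored)
hs_quality = rng.normal(0.0, 1.0, n)   # hypothetical high-school-quality index
# Simulated first-year GPA driven by both factors plus noise
gpa = 0.4 * sat + 0.5 * hs_quality + rng.normal(0.0, 1.0, n)

def r_squared(columns, y):
    """OLS R^2 for a model with an intercept plus the given predictor columns."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_sat_only = r_squared([sat], gpa)
r2_full = r_squared([sat, hs_quality], gpa)
print(f"SAT only:   R^2 = {r2_sat_only:.2f}")
print(f"full model: R^2 = {r2_full:.2f}")
```

The full model wins on R^2 by construction; the point is that a lower R^2 isn’t evidence the decision-appropriate model is wrong, only that it deliberately leaves information out.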
This is true, and to a certain extent it’s more art than science. Linear regression is often the default tool choice because it’s easy and computationally cheap. But the linearity assumptions are often not true, so other approaches are necessary to tease out the actual truth.
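A quick sketch of why the linearity assumption matters (again with made-up data): when the true relationship is curved, a straight-line fit can report a near-zero R^2 even though the predictor is strongly related to the outcome, and a model allowing curvature recovers the signal.

```python
# Hypothetical illustration: a linear fit underperforms on a curved relationship.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2.0, 2.0, 500)
y = x**2 + rng.normal(0.0, 0.5, 500)   # true relationship is quadratic, not linear

def fit_r2(design, y):
    """R^2 of an OLS fit for the given design matrix."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

linear = np.column_stack([np.ones_like(x), x])          # intercept + x
quadratic = np.column_stack([np.ones_like(x), x, x**2]) # intercept + x + x^2
r2_linear = fit_r2(linear, y)
r2_quadratic = fit_r2(quadratic, y)
print(f"linear fit:      R^2 = {r2_linear:.2f}")
print(f"with x^2 term:   R^2 = {r2_quadratic:.2f}")
```

Over a symmetric range, x and x^2 are nearly uncorrelated, so the linear fit captures almost nothing; adding the squared term changes the picture entirely.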
Sorry, I didn’t get a chance to review all 1100 posts… the fact that the data is binned is irrelevant - it still shows a strong linear relationship between standardized test scores and first-year GPA outcomes. It makes logical sense as well - the main determinants of high test scores are perseverance, a cultural affinity for working hard to achieve something, and a good education in middle and high school. Those factors drive college GPAs as well.
Not interested in continuing this conversation with you. Thanks.
I note they can, and apparently do, use the school report and Landscape data about your school as part of their academic evaluations. The Admissions Dean of Dartmouth recently said they are using models with dozens of variables to do their initial academic assessment for screening purposes. All that is what they call contextual review.
So the R-squared for SAT alone is undoubtedly much lower than the R-squared of their actual full admissions model (although again it may not even be predicting first year grades). But that actual full admissions model is proprietary, so we don’t know any details.
Also, the SAT averages for both groups are much lower than the averages at Yale or Dartmouth. The rigor of the courses may be different at these schools as well, so what works for Bates may not work at the Ivies.
Or so they say. Judging from the Dartmouth podcast last year, it sounded like they and Yale weren’t really test optional even when their official policy said they were.
“Starting with next year’s application cycle (effective for the Class of 2029), Brown will reinstate the requirement that applicants for first-year admission submit standardized test scores (the SAT or ACT, except in the rare circumstance when these tests are not available to a student). This will accompany enhanced communications to students and school counselors emphasizing that test scores are interpreted in the context of a student’s background and educational opportunities.”
Haven’t read through this entire thread so please excuse if my question has already been asked and answered. Is there a belief that these three institutions may have favored applicants who did submit scores in this year’s admissions process?
Belief? Yes. Data? Not for these schools, but some schools released acceptance rates showing that submitters did better. Our two more competitive local high schools encouraged test submission for all unhooked candidates in 2022 (i.e., after the real COVID issue of no testing was in the past). They said it was becoming very clear that test optional may be hindering unhooked upper-middle-class private and public magnet kids at the most selective schools.
It seems that these schools felt that the lack of scores created a hole in their review. It seems that applicants with scores didn’t have that hole and as such were given, at the very least, a more informed review. There has always been the feeling that if there is a choice of two candidates of roughly similar qualifications and of similar educational backgrounds, a high standardized test score may be the helpful tipping point.
Maybe so, and it’s a reasonable assumption, given that the absence of standardized test scores needs to be made up somewhere else. But several years ago GT ran an experiment in which the admissions staff looked at candidates separately, both with and without their standardized test scores, and still made the same decisions.
For the Ivy watchers among us, this leaves five schools as test-optional for 2025 or beyond (Columbia, Cornell, Harvard, Penn, Princeton) and three test-required (Brown, Dartmouth, Yale).
(Edited three hours after original posting when the Penn news came down.)
Not sure how it will be worded in the official policy, but what they said in the Executive Summary Report about that exception was
“Permit exceptions to requiring standardized test scores in rare cases in which a student applying for first-year admission is unable to take the test (when the International Baccalaureate or a national exam may be substituted)”
Does this read to you like they’ll still require a test in those circumstances but allow for AP/IB as a substitute? That’s how I’m reading it…
If so, this would be somewhat similar to Yale… except that Yale allows the AP/IB substitution even when the student was able to take an SAT/ACT and also Yale seems to want the AP scores - all the AP scores - even if the student submits the SAT/ACT.