Yep, preaching to the choir over here. I’m not the one who needs convincing.
How you all are choosing your doctor or mechanic has no relevance to this thread. Please move the conversation forward.
Just do the 2x2 matrix analysis on this… your answer makes sense. Likewise, on the topic at hand, academic potential/promise/preparedness etc. is the foundational variable.
@Canuckdad
We’re creeping back into the previously trodden ground of whether good test takers have the same distribution of creativity/niceness as the general population…
As noted above, most if not all medical doctors are at least decently good, if not quite good, at testing. If creativity and kindness were thus already filtered out, we’d be in a much different place than we are.
You can present the stats in different ways to support your desired conclusion.
In general, if you want to support test-required, then you’ll get the strongest result if you compare the relative predictive power of SAT in isolation to HS GPA (or a similar metric) in isolation, without considering differences in course rigor/selection between students, differences in harshness of grading and grade distributions across different HSs, etc. The published stat should emphasize that SAT in isolation is better than HS GPA in isolation, rather than give specific details about how much variance in college GPA is explained by SAT. The metric of evaluation should be based on college GPA, particularly first-year GPA. An example is saying that SAT is better at predicting first-year college GPA than HS GPA, class rank, and the like.
And if you want to support test optional, then you’ll get the strongest result if you compare how much SAT adds to the prediction beyond other measures used to evaluate test optional applicants, and include a measure of course rigor or strength of HS schedule in the measures used to evaluate test optional applicants. And/or compare the relative performance between test optional and test submitter admits. The metric should emphasize a point later than first year, such as cumulative GPA or graduation rate. An example is saying test optional and test submitter admits had similar graduation rate and cumulative GPA at graduation.
This can lead to seemingly opposite conclusions, even when supported by the same underlying analysis. It can also lead to different colleges seemingly stating conflicting conclusions. For example, the UC study found SAT in isolation was a stronger predictor than GPA in isolation, yet the UC study also found that SAT in isolation only explained 5-12% of variation in grades within specific courses. The Ithaca study found that SAT in isolation explained ~25% of variation in grades, which is larger than the UC study, yet it also found that SAT only explained 1% more variance in college grades than the combination of non-SAT measures they evaluated (which included a measure of course rigor). Or Yale said the SAT is the best single predictor of a student’s future Yale grades, while the Bates 25-year test optional study found no significant difference in cumulative GPA or graduation rate between test submitter admits and test optional admits.
There are also differences from one college to the next for a variety of reasons. For example, a college that considers a larger number of holistic factors beyond just GPA and SAT in isolation (particularly reviewing full transcript, including evaluating a measure of course rigor or strength of schedule) may see a lesser added benefit to both GPA and SAT. And a college that has a greater degree of compression at maximum (in either grades or scores) may see less benefit in the metric that is being compressed.
I agree with this part of your post the most. I think how much the rest of the post applies to a particular institution varies (e.g. I think both Dartmouth and Yale do indeed believe that their data reflects longer term “success” for certain ranges of SAT scores and have said as much, despite not necessarily revealing all related data).
It seems to me that Deming/Friedman/Chetty would also vote for a suite of admission criteria but one that includes testing. When one adds “testing in context” to that, it would appear that it is a reasonable (albeit not perfect) compromise. (I’m not a huge fan of testing in context myself, but from my vantage it’s better than no testing while perhaps helping the low SES student better than testing w/o context)
Overall, it seems we sussed out the lower app rates, the (hopefully) culling of some non-competitive apps, and we need to wait to see how the chips fall.
A student who participates outside of the classroom at the university level, whether it be sports, clubs, Greek life, or whatever, is also going to be more challenged to get a higher GPA than someone who just focuses on academics. These studies of academic success measured by “university GPA” don’t take into account the “variables” at play. Of course someone who scores “high” on an SAT is more likely to get a higher GPA, in part because of where they are spending their time.
End of the day, each university must decide what they want their “student” body profile to be. For example, an MIT student profile is going to be dramatically different than other schools’.
And a high GPA correlating to “career opportunities” is very dependent on the field someone is going into. Many career paths are dependent on “interpersonal skills,” which makes the SAT or other quantifiable testing less useful. No one I have ever known got promoted because they had a 1500 SAT or 4.0 GPA back in the day.
At the most selective colleges, just having excellent GPA and SAT scores (relative to context) is not sufficient to get admitted. That’s just the baseline for further consideration.
For example, while many colleges allow lower GPA and SAT scores for some athletes in some sports (particularly basketball and football), they do not for other sports which may take similar amounts of time. According to the following link, recruited fencers were expected to have an Academic Index similar to that of the overall student body.
Likewise, students excelling in non-recruited activities like violin or debate are often among the strongest students despite their time commitments.
It just keeps happening: this ‘belief’ that high-achieving, very bright students who test well aren’t athletic or artistic or creative, that they are just robots. It isn’t true.
There are so few spots at the most highly selective US universities that they have no problem finding students with high scores on standardized tests (and lofty high school & university GPAs) who also have amazing achievements in athletics & the arts (and these are high character individuals).
I find it interesting that this viewpoint is consistently allowed to be propagated in this thread and on this website.
You want a study that adjusts for GPA based upon how much time someone spends partying at a fraternity?
I agree. I don’t know how someone who has actually met students attending these colleges can stick to that belief.
That’s one way to look at GPA “in context”, I suppose.
I am pointing out a flaw in studies that don’t take into account variables that are relevant. And I mentioned “Greek life” as one example of such activities.
Nowhere did I say student athletes can’t get 4.0 GPAs. BUT, as with any activity that takes time away from studying, it is more challenging to get that 4.0 GPA compared to someone who focuses 100% on studying.
So you think that SATs are a flawed measure because they do not take into account how much someone parties?
What is the obsession with “partying”? My point is that the correlation between SAT and GPA in university is flawed because it doesn’t take into account all the variables at play, and measuring university success by GPA is a very “narrow” view of what university success is.
I don’t think you know a single student who is currently attending a highly selective institution in the US. I don’t think you would keep banging this drum if you did. These colleges & universities have no problem finding excellent students with top tier test scores & GPAs who are heavily involved in time-consuming athletic and artistic pursuits. In fact, they find too many, and can’t admit them all.
I would be VERY interested in knowing why you feel the need to push this agenda.
Don’t agree with this, and I don’t understand where you are going with this as far as the validity of considering test scores in college admissions. There are clearly diminishing, even negative, returns at some point if a student only studies. Breaks from study, whether organized or informal, involving physical activity, music, art, service, social activities (yes, even partying), whatever is enjoyable, are necessary for students to optimize their college experience, including their GPA. Of course, some level of study is necessary. But to equate higher GPA to “100% studying” is perpetuating the “robot” stereotype that so many of us object to.
And the fact that you consider time spent at fraternities to be a measure of university success (and hence something to be considered for admission) underscores why many people advocate for standardized tests to play a more prominent role in the application process.
Out of the excuses I’ve heard for an average test score, e.g. the ubiquitous “I’m a bad test taker”, the best is the rare “I was hungover that Saturday morning”. That’s legit.
The thing is, it is very rarely the person focused 100% on studying who gets in. Just because someone isn’t an athlete, it does not mean they don’t have a significant commitment.
I am the first to admit my athlete had to put a little more effort into juggling his commitments because he missed so much school to compete. The first year at this level was a little more challenging. He learned, he figured it out, he grew. He is far from a unicorn. If anything, he is a better student and more prepared for college for it.
Well, it’s just like “everyone” “knows” the ONLY way to perform well on a standardized test is to be wealthy and have high-priced tutors and lots of expensive test prep (whatever that means).