How do top scorers on tests fail to gain admission to top schools?

<p>On kids with great grades and low test scores: Some of these kids have test anxiety and just don’t test well. However, some of them are “overachievers” who may have difficulty at college, just as those with high test scores and low grades may.</p>

<p>Mythmom: Since my S is at Harvard, Harvard must not mind too much about his lack of ECs! S was the epitome of the lopsided kid.</p>

<p>This is because admission decisions are made by humans, not by machines.
At one end of the spectrum, say at Caltech, the faculty’s Freshman Admission Committee reviews the application files. They also encourage applicants to send in additional material. Far more information goes into the process, beyond scores, to reach an intelligent admission decision.</p>

<p>OK, but I don’t understand the numbers. :)</p>

<p>Does the chart on the ACT mean that 1M kids took the test and only 2000 or so scored 35 and above? </p>

<p>Can you tell I am really bad at math? My SATs back in the day had a 200 point differential between math and verbal - hehe.</p>
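
<p>For what it’s worth, here is a back-of-the-envelope check of that reading of the chart, in Python. The figures are the ones cited in the question above (roughly a million test takers, roughly 2,000 scoring 35 or above), not official ACT numbers:</p>

<pre><code>
# Back-of-the-envelope check of that reading of the ACT chart.
# Assumed figures (from the post above, not official ACT data):
takers = 1_000_000      # roughly 1M test takers
top_scorers = 2_000     # roughly 2,000 scoring 35 or above

fraction = top_scorers / takers
print(f"Share scoring 35+: {fraction:.2%}")        # -> 0.20%
print(f"Percentile: {(1 - fraction) * 100:.1f}th") # -> 99.8th
</code></pre>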

<p>Re: the fact that kids are implored to be themselves, but that might come back & bite them if they don’t have what it takes to gain admission to the top schools — I guess my belief is that if they ARE being themselves & it ISN’T what a “top school” wants, then they are truly better off at another school. That’s not a terrible thing. Even ivy grads realize there are other fine schools out there! My S is a high test score/high gpa kid without the traditional EC’s. I don’t worry about it, because he is doing what makes him happy. In the end, I am confident he will be accepted to a wonderful school that is a good fit for him. So, if the kid’s preferred leisure activities don’t look good to Harvard or some other school … oh, well. It’s just really not the end of the world. And in defense of those who have the difficult task of trying to figure out which of the too-many stellar candidates they will offer a place in their school, they are doing the best they can. They are constantly trying to figure out how to improve the process, but it’s never going to be perfect. That’s the nature of the beast.</p>

<p>Marite: How wonderful. Oops. Don’t know where I misunderstood. Too late to edit post. Thanks for your correction.</p>

<p>With regard to post 79: Admissions measurement tools may be inadequate for prediction partly because of the continued cognitive development of the learner in college, combined with the quality of the teaching, combined with the degree of receptivity & application to the process. There are so many variables that cannot be measured prior to matriculation – not to mention any psychological factors that can both enhance & interfere with the learning, & which may not appear until college – given psychological development, new environment, new stage in social development, personal situations, etc. Some people take quite a while to mature emotionally, intellectually, and/or socially; some people do so rapidly. Not only are admissions committees not machines (post 83), neither are students. That’s why employing a mechanical means (such as a score) to assess readiness, productivity, future gpa, etc., is an inappropriate model. For the elastic human being that the learner is, a more elastic process may be appropriate.</p>

<p>Mythmom:</p>

<p>Keep in mind that colleges seek to build a well-rounded class. There is room for students who are very lopsided and do not show a lot of ECs, just as there is room for students who shine in their ECs. But actually, now that he is in college, S is involved in more non-academic ECs than when he was in high school, as his schedule is more flexible, and the time he spends in class has been halved (although there is a lot more homework; but he can decide when to do it–usually late at night).</p>

<p>Calmom: It’s interesting, and worth the time, to go back to the original UC study, which is accessible online: <a href="http://www.ucop.edu/sas/research/researchandplanning/pdf/sat_study.pdf">http://www.ucop.edu/sas/research/researchandplanning/pdf/sat_study.pdf</a>
I hadn’t read it for a couple of years. What I find interesting is that while the predictive power of any single measure varies from year to year, the combination of the two is best. But if you have to pick one or the other, tests are actually the better predictor of freshman success. I would not have guessed that. In 1999, SAT I scores alone were better predictors of freshman success than GPA. In 1998 and 1999, SAT IIs were better than grades. Combining the two varieties of test scores didn’t help much (unsurprisingly), but the combination of either SAT I or SAT II with GPA is a better predictor than tests alone or grades alone (all together now: duh). The conclusion that SAT I scores have no positive correlation with freshman success is a bit of statistical sleight of hand, achieved by ignoring the difference between grades alone and grades + SAT I, and comparing only grades + SAT I with grades + SAT II + SAT I. As a basic test of “grades vs. test scores,” it actually comes down on the side of test scores as the better overall predictor of freshman success once you lump SAT IIs and SAT Is together.</p>
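
<p>A toy illustration of why the combination beats either measure alone: if HS GPA and test scores are both noisy measures of the same underlying ability, adding the second measure recovers a bit more signal. This is simulated data with invented noise levels, not the UC numbers:</p>

<pre><code>
# Toy simulation (not the UC data): HS GPA and SAT as noisy measures of
# one underlying ability, with freshman GPA as the outcome.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
ability = rng.normal(size=n)
hs_gpa = ability + rng.normal(scale=1.0, size=n)  # noisier measure
sat    = ability + rng.normal(scale=0.8, size=n)  # slightly cleaner measure
frosh  = ability + rng.normal(scale=1.0, size=n)  # freshman outcome

def r_squared(y, *predictors):
    # Ordinary least squares with an intercept; return R^2.
    X = np.column_stack([np.ones_like(y)] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

print("GPA alone: ", round(r_squared(frosh, hs_gpa), 3))
print("SAT alone: ", round(r_squared(frosh, sat), 3))
print("GPA + SAT: ", round(r_squared(frosh, hs_gpa, sat), 3))
</code></pre>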

<p>What I also find interesting is that while the correlation between grades and freshman success declined steadily (and significantly) over the four-year period from 1996 to 1999, the correlation between SAT I scores (as well as SAT IIs) and freshman success remained remarkably stable throughout. It kind of makes you wonder what a study continued forward into more recent years would show…</p>

<p>^^ Terrific. The SAT I still has limited value with regard to soph., jr, sr. years. Hmmm: I wonder if it has anything to do with the fact that as the learning becomes more complex, the class demands/requirements more advanced, the Life of the Mind looks less and less like a scantron.</p>

<p>Hmmm. Curiouser and curiouser. Here’s the second study, comparing four-year college GPA to various admissions factors (it appears under Publications at the Center for Studies in Higher Education). Guess what? The results are the same. Tests are actually a better predictor of four-year college GPA than high school GPA is. Again, SAT IIs have an edge on SAT Is; the combination of tests and grades is best overall, with no significant difference depending on which test is used.</p>

<p>How did they get the touted results cited by Calmom? They “adjusted” the scores for socioeconomic factors. High-SES students tend to score higher on tests than low-SES students, so they increased the GPA numbers, not based on actual results comparing students of similar backgrounds, but by means of a demographic average. The raw data indicate that relatively high test scorers perform better than relatively high HS GPA applicants, even over four years, without that adjustment.</p>
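
<p>A minimal sketch of what such an SES “correction” amounts to in regression terms: include SES as a covariate and watch the SAT coefficient shrink. The data and effect sizes below are simulated for illustration; the study’s actual adjustment was more elaborate:</p>

<pre><code>
# Simulated sketch of an SES adjustment: SES drives both test scores and
# (through ability) outcomes, so including it as a covariate shrinks the
# SAT coefficient. Effect sizes are invented, not the study's.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
ses     = rng.normal(size=n)
ability = ses + rng.normal(size=n)                 # ability correlates with SES
sat     = ability + rng.normal(scale=0.8, size=n)
college = ability + rng.normal(size=n)

def first_coef(y, *predictors):
    # OLS with intercept; return the coefficient on the first predictor.
    X = np.column_stack([np.ones_like(y)] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print("SAT coefficient, raw:         ", round(first_coef(college, sat), 3))
print("SAT coefficient, SES-adjusted:", round(first_coef(college, sat, ses), 3))
</code></pre>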

<p>I would add that Harvard, as well as other schools, often looks for students who will be committed to ECs while there. They don’t necessarily have to be the same ECs that the student participated in while in high school. Someone who was very active in, say, Model Congress, and a national prize winner as well, may choose to immerse himself or herself in something else while in college, such as writing for and/or editing a school’s political magazine. Oftentimes colleges and universities offer ECs that high school students have never heard of. Yet the adcom can spot someone who is a “team player” and exhibits passion for what he or she has done in the past. Those attributes can be readily transferable and will still enhance the school community.</p>

<p>Kluge, that adjustment is necessary in order to control for the SES variables; without the adjustment, all you have is data showing that high-SES students do better than low-SES students, since we know that there is a roughly linear correlation between average SAT scores and SES. </p>

<p>That’s the basic problem with the tests: they are a biased measure that favors high-SES students and, in effect, measures SES. Colleges might just as well abandon the SAT and make admission decisions based on the FAFSA, if SES factors are going to be taken into account. </p>

<p>The question is: once you sort out the SES data, what’s left? What they have apparently found is that the impact of the GPA is SES-neutral; that is, the predictive value of a 3.8 GPA is the same for a high-SES student as for a low-SES student. That’s not true of the SAT data: if you tease out the SES factors, you get very mixed results.</p>
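
<p>One way to make “SES-neutral predictive value” concrete is an interaction term: if the GPA-by-SES interaction coefficient is near zero, a given GPA predicts the same outcome at every SES level. A simulated sketch; the study’s own modeling may differ:</p>

<pre><code>
# "SES-neutral" as an interaction test: regress the outcome on GPA, SES,
# and GPA x SES. A near-zero interaction coefficient means a given GPA
# predicts the same outcome at any SES level. Simulated data only.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
ses = rng.normal(size=n)
gpa = rng.normal(size=n)
y_neutral = 0.5 * gpa + 0.2 * ses + rng.normal(size=n)               # slope constant
y_varying = (0.5 + 0.3 * ses) * gpa + 0.2 * ses + rng.normal(size=n) # slope depends on SES

def interaction_coef(y, a, b):
    X = np.column_stack([np.ones_like(y), a, b, a * b])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[3]  # coefficient on the a*b term

print("neutral case:", round(interaction_coef(y_neutral, gpa, ses), 3))  # ~0.0
print("varying case:", round(interaction_coef(y_varying, gpa, ses), 3))  # ~0.3
</code></pre>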

<p>Calmom:</p>

<p>I’m not sure I follow. The impact of GPA is SES-neutral. But SES does not have a neutral (zero) impact on GPA, right? So why control for SAT but not for GPA?
If the SAT is biased, isn’t the whole curriculum biased as well? There are so many more opportunities for a student to be confronted with cultural biases throughout the school year.</p>

<p>According to Appendix 2 of the study, without considering the SES factors, SAT is a stronger predictor than GPA (the SAT-II writing score by itself is as good a predictor of college outcomes as GPA). That’s the baseline for the rest of the analysis.</p>

<p>Including SES is an attempt to improve the predictive status of GPA, but it backfires. The authors have apparently shown that SES-sensitive admissions requires even higher weighting of SAT over GPA than SES-blind admission. </p>

<p>The authors calculated that in a hypothetical admissions model that rewards students for higher SES, the weight and incremental predictive value of the SAT drop below those of GPA. However, real-world admissions works in the opposite direction: it rewards disadvantage, which, in terms of the regression models used in the study, means putting negative coefficients on SES. That would increase, not depress, the weight and relative predictive power of the SAT compared to GPA.</p>

<p>Actually, after a little more review, it’s a fascinating study, viewed in light of the perspective from which it was created. There are lots of levels of interest. To begin with, there’s the macro/micro element. The “correction” applied to test scores for socioeconomic status makes sense from the point of view of a state university seeking to eliminate unfair advantages for some residents, but it glosses over the fact that the high test scorers do, in fact, perform more successfully in college, by the standards of the study, than do lower scorers. The reason for the correlation between test scores and SES has been mooted here on CC many times, but it’s not true that everyone with high SES does well on tests, simply that the prevalence of high test scorers is greater among high-SES groups. “Correcting” the test scores to adjust for SES may achieve social equity, but it also degrades the correlation between college success and the factors used for admission. </p>

<p>Further, the authors have put a thumb on the scales in a couple of different, and subtle, ways. First, they posed the question as “What additional predictive value do test scores have when added to grades?” They do not pose the question “What additional predictive value do grades have when added to test scores?” The answer to both questions appears to be the same: a little, but not a lot (as you would expect considering the high degree of overlap between the two factors; that is, high GPA students tend to be high SAT students as well.) But by asking only the first question they denigrate the significance of test scores as compared to HS GPA. If they asked the second question I suspect that they’d reach the alternative conclusion they don’t want to state: if you relied on a full panel of tests (SAT1’s + SAT2’s, or ACT) you could eliminate GPA as an admissions factor without degrading the quality of your selection process as much as you would by relying on HS GPA alone.</p>
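
<p>Kluge’s symmetry point can be made concrete with incremental R-squared, which can be computed in either direction. In this simulation the two predictors deliberately overlap, as high-GPA students tend to be high-SAT students; the numbers are illustrative only:</p>

<pre><code>
# Incremental R^2 runs both ways. With two overlapping predictors of the
# same ability, each adds only a little on top of the other.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
ability = rng.normal(size=n)
grades = ability + rng.normal(size=n)
tests  = ability + rng.normal(size=n)  # stand-in for a full panel of scores
y      = ability + rng.normal(size=n)

def r_squared(y, *predictors):
    X = np.column_stack([np.ones_like(y)] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

both = r_squared(y, grades, tests)
print("tests added to grades:", round(both - r_squared(y, grades), 3))  # the asked question
print("grades added to tests:", round(both - r_squared(y, tests), 3))   # the unasked one
</code></pre>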

<p>The next thumb is the consistent comparison of grades against a sub-set of test scores. That is, grades vs. SAT1’s, or grades vs. a single SATII. When grades alone are compared to the combined impact of the full set of scores, grades come in second - at all levels.</p>

<p>A very interesting factor which is unexplained (unless I missed it) is that math test scores consistently show up with a negative correlation to college success. There’s a lot of discussion about why various other things seem to be demonstrated by the data, but this is ignored. It makes me wonder…</p>

<p>Kluge:</p>

<p>Very interesting. Considering that, besides the UCs and about 60 other schools, no colleges ask for SAT IIs, I wonder about the relative utility of SAT II scores vs. GPAs. One thing to consider is that SAT scores, for all their alleged biases, level the playing field in terms of weighted and unweighted grades.
As for math test scores: another interesting tidbit. In a given liberal arts school, about 1/3 of the student body majors in math and sciences, leaving 2/3 as majors in writing-heavy disciplines. Could that be it?</p>
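
<p>Marite’s composition story is a classic Simpson’s-paradox setup, and it is easy to sketch: if high math scorers cluster in harder-grading math/science majors, math scores can correlate positively with GPA within each group of majors yet negatively in the pooled data. All parameters below are invented:</p>

<pre><code>
# Toy Simpson's-paradox check of the majors story: high math scorers
# cluster in harder-grading math/science majors.
import numpy as np

rng = np.random.default_rng(4)
n = 9_000
stem = rng.integers(0, 3, size=n) == 0                      # ~1/3 in math/science
math_score = rng.normal(size=n) + np.where(stem, 1.5, 0.0)  # STEM students score higher
gpa = 0.15 * math_score - np.where(stem, 1.0, 0.0) + rng.normal(scale=0.5, size=n)

pooled = np.corrcoef(math_score, gpa)[0, 1]
in_stem = np.corrcoef(math_score[stem], gpa[stem])[0, 1]
in_rest = np.corrcoef(math_score[~stem], gpa[~stem])[0, 1]
print(f"pooled: {pooled:+.2f}, within STEM: {in_stem:+.2f}, within others: {in_rest:+.2f}")
</code></pre>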

<p>

</p>

<p>Kluge, what about the regression results?</p>

<p>newmassdad: See the discussions at pages 10 and 13 - they repeatedly discuss the impact on the analysis of using only GPA and dropping the test scores - but never address the impact of using all test scores and dropping GPA. I may be misinterpreting the narrative - and my math is way too rusty to check the calculations - but nowhere do I see an analysis of what the impact of dropping grades from the assessment and using only test scores would be. Nor is there a regression analysis using the combined test results as a unitary factor, as opposed to using each test as a separate factor, and then using that in an analysis compared to GPA. You can only get a sense of that by comparing model 6 to model 1 in Table 4, and then backing out the SES “correction” to get the relative weight of those two basic factors as predictive elements. It’s more about what is not presented than what is presented.</p>
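
<p>For the curious, here is roughly what treating the test results as a unitary factor would look like: z-score each test, average the columns into one composite, and regress on that alone. The score columns here are simulated stand-ins, not the study’s variables:</p>

<pre><code>
# Hypothetical "unitary" test factor: standardize each test column,
# average into one composite, and regress on the composite alone.
import numpy as np

rng = np.random.default_rng(5)
n = 5_000
ability = rng.normal(size=n)
# Three noisy test columns (think SAT I verbal/math plus an SAT II).
scores = np.column_stack([ability + rng.normal(scale=s, size=n)
                          for s in (0.9, 0.9, 0.7)])

z = (scores - scores.mean(axis=0)) / scores.std(axis=0)  # standardize columns
composite = z.mean(axis=1)                               # one unitary factor

y = ability + rng.normal(size=n)
X = np.column_stack([np.ones(n), composite])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - (y - X @ beta).var() / y.var()
print("R^2, composite alone:", round(r2, 3))
</code></pre>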

<p>They do stress that even consideration of all of the admissions factors combined yields a low level of predictive significance - less than 30%.</p>

<p>Kluge, I suppose they could have combined all tests as you suggest, but given the regression results they got, it seems pointless, since their goal was to look at the predictive value of each component. </p>

<p>Regarding the low predictive ability, look at footnote 8 of the UCOP report. It discusses “restriction of range.”</p>
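
<p>Restriction of range is easy to see in a simulation: if you compute the correlation only among students who were already selected on the predictor, the observed correlation understates the full-population value. The thresholds and effect sizes below are arbitrary:</p>

<pre><code>
# Restriction of range in miniature: the predictor-outcome correlation
# computed only among the already-selected top slice is much weaker than
# in the full population.
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
predictor = rng.normal(size=n)                       # e.g. an admissions index
outcome = 0.6 * predictor + rng.normal(scale=0.8, size=n)

full = np.corrcoef(predictor, outcome)[0, 1]
admitted = predictor > np.quantile(predictor, 0.90)  # keep only the top 10%
restricted = np.corrcoef(predictor[admitted], outcome[admitted])[0, 1]
print(f"full population: {full:.2f}, admitted only: {restricted:.2f}")
</code></pre>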