<p>I have wanted for a while to offer a few comments about the ways in which elite university standards for admission create problems.</p>
<p>I have decided to post this to the Harvard board for several reasons. First, Harvard was one of the schools where I was educated. Second, Harvard sets the standards for everyone else.</p>
<p>I. Introduction</p>
<p>On CC and elsewhere there is much discussion of how important very high achievements are. However, less is said about whether these standards of achievement are accurate or useful.</p>
<p>I have been particularly concerned when I see how extremely high standards make students and parents seek near perfection on so many measures of academic achievement. As we all should know, perfectionism can have dramatic negative effects. It leads to countless individuals devoting incredible amounts of time and money to do better and better on tests and grades. Sometimes this means missing out on life and even real learning.</p>
<p>II. Evidence of the overall inadequacy of current objective measures</p>
<p>The objective measures used in admissions are justified by their ability to predict freshman GPA (FGPA). The main predictors are high school GPA (HSGPA), the SAT I, and the SAT II.</p>
<p>Unfortunately, none of these has proven very reliable. The best study is probably the one done by the UC system; I quote its results below. For more information see <a href="http://www.fairtest.org/facts/satvalidity.html">http://www.fairtest.org/facts/satvalidity.html</a></p>
<p>After a three-year validity study analyzing the power of the SAT I, SAT II, and high school grades to predict success at the state’s eight public universities, University of California (UC) President Richard Atkinson presented a proposal in February 2001 to drop the SAT I requirement for UC applicants. The results from the UC validity study, which tracked 80,000 students from 1996-1999, highlighted the weak predictive power of the SAT I, with the test accounting for only 12.8% of the variation in FGPA. SAT II’s and HSGPA explained 15.3% and 14.5% of the variation, respectively. After taking SAT II and HSGPA into account, SAT I scores improved the prediction rate by a negligible 0.1% (from 21.0% to 21.1%), making it a virtually worthless additional piece of information. Furthermore, SAT I scores proved to be more susceptible to the influence of the socioeconomic status of an applicant than either the SAT II or HSGPA.</p>
<p>Another way of saying this is that when a test like the SAT I explains 12.8% of the variation in freshman grades, 87.2% remains unaccounted for. Even using SAT I, SAT II, and HSGPA together, which explains 21% of the variation, leaves 79% unaccounted for.</p>
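<p>The arithmetic above can be checked directly. The sketch below (Python, purely illustrative) simply subtracts each explained-variance figure from 100%:</p>

```python
# Illustrative arithmetic only: R^2 is the percentage of freshman-GPA
# variation a predictor explains; whatever remains is unexplained.

def unexplained(r_squared_pct: float) -> float:
    """Percentage of variation left unexplained by a predictor."""
    return round(100.0 - r_squared_pct, 1)

print(unexplained(12.8))  # SAT I alone -> 87.2
print(unexplained(21.0))  # SAT I + SAT II + HSGPA -> 79.0
```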
<p>III. More problems with the SAT</p>
<p>It is well known that each SAT section has a standard error of measurement of around 30 points. But few really think about the implications. On CC, a score of 750 is often seen as high enough to apply to the most elite colleges. This certainly applies to Harvard, where a 750 is sometimes treated on CC as almost a minimum.</p>
<p>But what a score of 750 actually means is that a person's true score is probably between 720 and 780. Since this applies to each of the three SAT sections, a person who got 750 on all three parts, for a total of 2250, could actually be scoring anywhere between 2160 and 2340. For a Harvard applicant, 2160 probably sounds terrible while 2340 sounds much better.</p>
<p>Stop and think about this. Do you think of 780 as WAY higher than 720? If so, you are making a mistake. Then consider how it factors into the total SAT score. There is no statistically meaningful difference between 2160 (720 times three) and 2340 (780 times three): both totals are consistent with the same true score. But is that how you think? Is that how the admissions officers think?</p>
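<p>To make the margin-of-error point concrete, here is a small illustrative sketch (assuming, as above, a standard error of about 30 points per section): two section scores are statistically indistinguishable when their error bands overlap, and the bands for 720 and 780 meet exactly at 750.</p>

```python
# Assumes a standard error of measurement (SEM) of roughly 30 points
# per SAT section, as discussed above. Purely illustrative.

SEM = 30

def band(score, sem=SEM):
    """Approximate range likely to contain the test-taker's true score."""
    return (score - sem, score + sem)

def indistinguishable(a, b, sem=SEM):
    """True if the error bands of two scores overlap."""
    lo_a, hi_a = band(a, sem)
    lo_b, hi_b = band(b, sem)
    return lo_a <= hi_b and lo_b <= hi_a

print(band(750))                    # (720, 780)
print(indistinguishable(720, 780))  # bands meet at 750 -> True
print(indistinguishable(700, 800))  # too far apart -> False
```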
<p>IV. Conclusion</p>
<p>Elite universities such as Harvard have become used to making decisions based on measures that are not very reliable but which are seen as very important. People can actually come to think of themselves as a 780 SAT person or a 720 SAT person. The fact that elite universities go along with this legitimizes this sort of error.</p>
<p>I expect one response will be that no better measures are available. Even if this were the case, it is not a good enough answer. First, it doesn't excuse using the measures as if they were more reliable than they actually are (e.g., differentiating candidates on the basis of SAT differences that fall within the margin of error and are therefore not significant). Second, it should lead to efforts either to improve the measures or to find different ways of making decisions that are more consistent with reality.</p>
<p>What do other CC members think?</p>