<p>[quote=Hunt]The Espenshade study is evidence for this, although we don’t know what specific schools were included. But that study doesn’t tell us what the difference was in stats, if any, between unhooked whites and unhooked Asians, as far as I can recall.[/quote]</p>
<p>The point of Espenshade et al.’s studies was to try to control for “hooks” by recording the Legacy and Recruited Athlete status of each applicant as additional variables in the model. However, their results are largely dictated by the choice of methodology. Nearly every design choice they made – which variables to omit, how to process the variables they had, what type of statistical model to use, and how to report the results they got – generates phony “Asian SAT penalties”.</p>
<p>I have given many examples by now in this and the earlier discussions. Here are a few more.</p>
<p>-inflating the Asian effect from 50 SAT points to 140 points (a much higher number that led media commentators to say “this is shocking”) simply by including more academic predictor variables. The “penalty in points” is just the race coefficient divided by the SAT coefficient. Adding SAT-II, AP, quality of high school, and other items, as in the second Espenshade study (2009), makes the coefficient of the SAT (alone) much weaker, so a higher number such as 140 points is needed to express the same race effect that 50 points expressed in the earlier regressions.</p>
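<p>A toy simulation of the conversion (made-up numbers, not Espenshade’s data; the 0.01-per-point weights and the fixed −0.5 race coefficient are assumptions for illustration):</p>
<pre><code># A minimal sketch: the "points" figure is b_race / b_sat.  Adding a
# correlated second test halves b_sat, so the same race coefficient
# converts to roughly twice as many "SAT points".
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50_000
sat  = rng.normal(1300, 150, n)            # hypothetical SAT-I scores
sat2 = sat + rng.normal(0, 80, n)          # SAT-II, correlated with SAT-I
asian = rng.binomial(1, 0.2, n)

# Assumed admissions rule: both tests matter, race coefficient fixed at -0.5.
logit = 0.01 * (sat - 1300) + 0.01 * (sat2 - 1300) - 0.5 * asian - 2.0
admit = rng.binomial(1, 1 / (1 + np.exp(-logit)))

for cols, label in [((sat, asian), "SAT only    "),
                    ((sat, sat2, asian), "SAT + SAT-II")]:
    X = sm.add_constant(np.column_stack(cols))
    fit = sm.Logit(admit, X).fit(disp=0)
    print(label, "points-equivalent of race coef:",
          round(fit.params[-1] / fit.params[1], 1))
# the "penalty" roughly doubles from ~25 to ~50 points; nothing about the
# race effect itself changed, only the denominator
</code></pre>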
<p>-excluding Early Admission status of applicants from the analysis. The Early Admissions Game study by Avery, Zeckhauser, and their collaborators found that the early pool was disproportionately white and that applying EA/ED was roughly equivalent to a large SAT advantage (100+ points for ED). In Espenshade’s data, it is very likely that a higher fraction of whites than Asians applied EA/ED. The effect of early versus regular application would then be detected in his regressions as an “Asian SAT penalty” even if the admissions were race-blind, because whites would be more likely than Asians to carry the secret sauce.</p>
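<p>The same omitted-variable mechanism in simulated form (the ED rates and the 1.0-logit boost, worth about 100 points at a 0.01-per-point slope, are assumptions):</p>
<pre><code># A minimal sketch: admissions here are strictly race-blind, but early
# applicants get a boost and whites apply early more often.  Omitting the
# Early flag makes the boost load onto the race dummy.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50_000
asian = rng.binomial(1, 0.25, n)
sat = rng.normal(1350, 130, n)
early = rng.binomial(1, np.where(asian == 1, 0.10, 0.35))  # assumed ED rates

logit = 0.01 * (sat - 1350) + 1.0 * early - 1.5            # race-blind rule
admit = rng.binomial(1, 1 / (1 + np.exp(-logit)))

for cols, label in [((sat, asian), "without ED control"),
                    ((sat, asian, early), "with ED control   ")]:
    X = sm.add_constant(np.column_stack(cols))
    fit = sm.Logit(admit, X).fit(disp=0)
    print(label, "Asian coef:", round(fit.params[2], 3))
# without the ED flag the Asian coefficient is clearly negative (a phantom
# penalty of roughly 25 SAT points); with it, the coefficient is ~0
</code></pre>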
<p>-using only recruited-athlete status rather than measuring athletic “credentials” more broadly. Never mind that using athletics is ridiculous as an admissions procedure; the question here is only whether the ridiculous procedure was applied equally to whites and Asians. Because whites cluster less by type of sport and play more of the sports preferred in admissions, white non-recruited athletes would likely fare better (under race-blind but not athletics-blind admission) than Asian non-recruited athletes. This would appear as another Asian SAT penalty, because Espenshade controlled only for athletic recruitment, not for athletics in general.</p>
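<p>This is a subtler omission than the ED one: controlling for the dichotomized tail (recruited yes/no) of a continuous credential leaves the rest of the credential uncontrolled. A sketch, with an assumed white advantage in the admissions-relevant athletic rating:</p>
<pre><code># A minimal sketch: admissions reward a continuous athletic rating, but
# the model controls only for the recruited flag (rating above a cutoff).
# The sub-recruitment athletic advantage still loads onto race.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 80_000
asian = rng.binomial(1, 0.25, n)
sat = rng.normal(1350, 130, n)
athletic = rng.normal(np.where(asian == 1, -0.4, 0.0), 1.0)  # assumed gap
recruit = (athletic > 2.0).astype(int)                       # tail only

logit = 0.01 * (sat - 1350) + 0.5 * athletic - 1.5           # race-blind
admit = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([sat, asian, recruit]))
fit = sm.Logit(admit, X).fit(disp=0)
print("Asian coef, controlling only for recruitment:",
      round(fit.params[2], 3))                               # negative
</code></pre>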
<p>-mixing data from different years. SAT-sensitivity of admissions was lowest in the years when more Asians applied (1997, after the 1994 dumbing down and the 1995 recentering). This will appear as an Asian SAT penalty under Espenshade’s methods.</p>
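<p>A sketch of the pooling effect (the slopes, Asian shares, and score gap are assumptions; the same logic applies to mixing schools, as in a later item):</p>
<pre><code># A minimal sketch: two years with different SAT slopes, race-blind within
# each year.  Asians are over-represented in the low-slope year and score
# higher, so a pooled model with one slope (plus a year dummy) pushes the
# mismatch into the race coefficient.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
rows = []
for year, slope, p_asian in [(0, 0.014, 0.10),   # high-sensitivity year
                             (1, 0.005, 0.35)]:  # low-sensitivity year
    n = 40_000
    asian = rng.binomial(1, p_asian, n)
    sat = rng.normal(1320 + 80 * asian, 120, n)  # assumed Asian score edge
    logit = slope * (sat - 1350) - 1.2           # race-blind within the year
    admit = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    rows.append(np.column_stack([sat, asian, np.full(n, year), admit]))

data = np.vstack(rows)
X = sm.add_constant(data[:, :3])                 # sat, asian, year dummy
fit = sm.Logit(data[:, 3], X).fit(disp=0)
print("pooled Asian coefficient:", round(fit.params[2], 3))  # negative
</code></pre>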
<p>-mixing verbal and math SAT. A higher weighting of the verbal section, although a race-neutral admissions policy, would appear as an Asian SAT penalty when only the combined score is entered in the model. I explained the math in that calculation puzzle that Fabrizio refuses to solve.</p>
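<p>A sketch of that arithmetic (the 2:1 verbal-to-math weighting and the score tilts are assumptions for illustration):</p>
<pre><code># A minimal sketch: the admissions rule is race-blind but weights verbal
# twice as heavily as math.  Regressing admission on the combined score
# hides the weighting, and the math-tilted Asian profiles absorb it as a
# race penalty.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 60_000
asian = rng.binomial(1, 0.25, n)
verbal = rng.normal(650 - 30 * asian, 80, n)     # assumed verbal tilt
math   = rng.normal(650 + 30 * asian, 80, n)     # assumed math tilt

logit = 0.02 * (verbal - 650) + 0.01 * (math - 650) - 1.0    # race-blind
admit = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([verbal + math, asian]))
fit = sm.Logit(admit, X).fit(disp=0)
print("Asian coef on combined SAT:", round(fit.params[2], 3))  # negative
# at a fixed combined score Asians hold less of the heavily weighted
# verbal, so a race-blind rule reads as a race penalty
</code></pre>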
<p>-mixing ACT and SAT. Proportionally more whites than Asians submitted the ACT, which was dominant in Midwestern states. If “geographic representation” worked to their advantage, along with National Merit semifinalist cutoffs that were lower than in the SAT states, this would appear as an Asian SAT penalty even if the standards for evaluating ACT and SAT submissions were race-blind.</p>
<p>-mixing data from different schools. SAT-sensitivity is lowest at the schools where Asians are likeliest to apply, which would generate an “Asian penalty” after mixing the data and running Espenshade’s analyses, especially the one from 2009.</p>
<p>-using blocks of SAT scores (such as 1500–1600) rather than SAT percentile within each school’s set of applicants as the predictor variable. This conceals the rarity, and the huge admissions effect, of high verbal scores (740+) on the old, pre-1994 SAT that appears in two thirds of Espenshade’s data. If that effect is not accounted for, whites appear to carry a secret advantage, and yet another Asian penalty shows up in Espenshade’s regression results as the product of race-blind processes.</p>
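<p>A sketch of the coarsening effect (the 1.5-logit bonus for 740+ verbal and the score tilts are assumptions):</p>
<pre><code># A minimal sketch: a race-blind rule gives a large bonus for rare 740+
# verbal scores.  Coding the predictor as coarse combined-score blocks
# hides who holds those scores, and the hidden edge resurfaces as a race
# coefficient.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 80_000
asian = rng.binomial(1, 0.25, n)
verbal = rng.normal(640 - 30 * asian, 70, n)
math   = rng.normal(640 + 30 * asian, 70, n)

logit = 0.005 * (verbal + math - 1280) + 1.5 * (verbal >= 740) - 1.5
admit = rng.binomial(1, 1 / (1 + np.exp(-logit)))

block = ((verbal + math) // 100).astype(int)   # 100-point combined blocks
X = sm.add_constant(np.column_stack([block, asian]))
fit = sm.Logit(admit, X).fit(disp=0)
print("Asian coef with combined-score blocks:", round(fit.params[2], 3))
# negative, even though the rule never looked at race
</code></pre>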
<p>-exaggerating results (blowup of regression coefficients) by fitting a series of nested logistic regression models, as Espenshade does, to binary outcomes such as Accept/Reject. Logistic coefficients are “non-collapsible”: adding predictors inflates the remaining coefficients even when there is no confounding at all, as the sketch below shows.</p>
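<p>This blowup is a textbook property of logistic regression, separate from any data issue. A sketch with two independent predictors (all numbers assumed):</p>
<pre><code># A minimal sketch of non-collapsibility: adding an independent predictor
# inflates the coefficient on the first one, with zero confounding by
# construction, so coefficients in a series of nested logit models grow
# mechanically.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 200_000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)          # independent of x1 by construction
logit = 1.0 * x1 + 2.0 * x2
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

for cols in ([x1], [x1, x2]):
    X = sm.add_constant(np.column_stack(cols))
    b1 = sm.Logit(y, X).fit(disp=0).params[1]
    print(len(cols), "predictor(s): coef on x1 =", round(b1, 2))
# coef on x1 grows from about 0.65 to 1.00 as x2 enters the model
</code></pre>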
<p>There’s a more basic issue. Even if these studies were perfect, it is not correct to interpret the effects they find as reflecting admissions processes. Espenshade’s 2009 study found that in some ranges, increasing the SAT or ACT scores or the number of AP exams reduced the chances of admission. This is not a statistical error; it is a correct description of his data, and a real effect. But the source of the effect is applicant behavior, not the admissions offices. Applicants with better test scores and more AP exams apply to schools with lower admission rates, and Espenshade mixed more and less selective schools in his data set. (This was pointed out to Fabrizio over a year ago. He didn’t get it and posted nonsense about regression analysis that might have kept him out of grad school had the professors seen it. Lucky that CC is anonymous!)</p>
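<p>The self-selection effect is easy to reproduce (the admit rates and the sorting rule are assumptions):</p>
<pre><code># A minimal sketch: stronger applicants sort into more selective schools.
# Within each tier more APs never hurt, yet in the pooled data more APs
# predict rejection.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
aps = rng.integers(0, 11, n)                 # number of AP exams, 0..10
selective = rng.binomial(1, aps / 12)        # high-AP students apply there
base = np.where(selective == 1, 0.08, 0.45)  # admit rates by tier
admit = rng.binomial(1, base + 0.01 * aps)   # APs help a bit within tier

for tier in (0, 1):
    m = selective == tier
    print("tier", tier, "slope per AP:",
          round(np.polyfit(aps[m], admit[m], 1)[0], 3))   # positive
print("pooled slope per AP:",
      round(np.polyfit(aps, admit, 1)[0], 3))             # negative
</code></pre>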