Brown is too easy

<p>I had the same question, but from the confidence interval they reported it looks like the sample was 25-35 direct cross-admit battles between Brown and Columbia. The total sample size is larger than the Revealed Preference study’s, but a little thinner for the top schools and with data of lower quality. The site should definitely report the number of battles for each pair of schools, in addition to the confidence interval.</p>

<p>It’s the same idea as College Confidential: self-reported data from thousands of users who submit stats profiles and admissions results, but with an Elo-style cross-admit ranking similar to the Revealed Preference study’s, and a display of the raw cross-admit rates (with confidence intervals). I think the site is run by a medical student with a computer/stats hobby, not an admissions counselor.</p>
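<p>For what it’s worth, here is a minimal sketch of how an Elo-style cross-admit update could work. The site does not publish its formula, so the K-factor, starting rating, and function names below are my own illustration, not their implementation:</p>

<pre><code>
# Minimal Elo-style update for one cross-admit "battle" between two schools.
# K_FACTOR and START_RATING are illustrative guesses, not the site's values.
K_FACTOR = 32
START_RATING = 1500

def expected_score(rating_a: float, rating_b: float) -> float:
    """Modeled probability that school A wins a cross-admit battle against B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float, a_won: bool) -> tuple[float, float]:
    """New ratings after a student admitted to both enrolls at A (a_won) or B."""
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + K_FACTOR * (score_a - exp_a)
    new_b = rating_b + K_FACTOR * ((1.0 - score_a) - (1.0 - exp_a))
    return new_a, new_b

# Example: a student admitted to both Brown and Columbia enrolls at Brown.
brown, columbia = update(START_RATING, START_RATING, a_won=True)
print(round(brown), round(columbia))  # 1516 1484
</code></pre>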

<p>That’s exactly what the “confidence probabilities” table in the article claims to quantify. They assign 79 percent confidence (in the sense of a confidence interval) to the Brown > Columbia ranking output. This is not a 79 percent cross-admit victory rate. It means that they performed thousands of simulations in which the rankings were re-computed, and in 79 percent of those Brown ranked higher than Columbia. They claim that these simulations are, under some assumptions, samples from the distribution they are trying to estimate, and if so, they really do represent confidence in the usual statistical sense (the likelihood of the result appearing by chance, given their model).</p>
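<p>Here is a hedged sketch of how such a “confidence probability” could be produced, assuming they resample the observed battles (a bootstrap) and re-derive the pairwise outcome in each trial. The battle counts and the use of a simple majority of resampled wins as a stand-in for “ranks higher” are my assumptions; their actual procedure re-computes the full ranking over all schools at once.</p>

<pre><code>
import random

def ranking_confidence(wins: int, n: int, trials: int = 10_000) -> float:
    """Fraction of bootstrap resamples in which school A comes out ahead of B.

    Each trial redraws n battles with replacement from the observed win rate
    and checks whether A wins a majority, standing in for "A ranks above B".
    """
    p_hat = wins / n
    ahead = 0
    for _ in range(trials):
        resampled_wins = sum(random.random() < p_hat for _ in range(n))
        if resampled_wins > n - resampled_wins:
            ahead += 1
    return ahead / trials

# Hypothetical record: Brown chosen 18 times in 30 Brown-vs-Columbia battles.
# These counts are made up for illustration; the site does not publish them.
print(f"P(Brown ranks above Columbia) = {ranking_confidence(18, 30):.2f}")
</code></pre>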

<p>Brown’s applicant pool is less self-selected than Caltech’s or MIT’s but more so than the other Ivy League schools’, so yes, in the RP model this should bias its results upward as an indicator of what all students want (as opposed to what the students who chose to apply to Brown and other schools want).</p>
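<p>A toy simulation of that self-selection effect, with entirely made-up preference numbers: if the students who apply to Brown tend to be those who already lean toward it, the observed cross-admit win rate overstates how the full population of students would choose.</p>

<pre><code>
import random

random.seed(0)
N = 100_000  # simulated students admitted to both Brown and a peer school

population_prefers_brown = 0
applicants = 0
applicants_prefer_brown = 0

for _ in range(N):
    pref = random.random()  # latent preference for Brown, uniform on [0, 1)
    if pref > 0.5:
        population_prefers_brown += 1
    # Self-selection: students mostly apply to Brown only if they already lean
    # toward it (the threshold and leak rate are arbitrary illustration values).
    applies = pref > 0.4 or random.random() < 0.2
    if applies:
        applicants += 1
        if pref > 0.5:
            applicants_prefer_brown += 1

print(f"Fraction of all students preferring Brown: {population_prefers_brown / N:.2f}")
print(f"Observed win rate among self-selected applicants: {applicants_prefer_brown / applicants:.2f}")
</code></pre>

<p>With these arbitrary numbers the applicant pool’s observed win rate lands well above the population’s 50/50 split, which is the direction of the bias described above.</p>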

<p>You described it perfectly as far as I can tell. “Being unique” (Brown, Wellesley, Notre Dame, Caltech, …) complicates the ranking if it is understood as a desirability rating: on the one hand, uniqueness is a plus that attracts many students; on the other hand, it leads to the upward self-selection bias.</p>

<p>This is part of what’s screwed up in the RP method. Such an applicant expresses a lower preference for HYPS and a higher preference for Brown, but hurts Brown’s rating (and helps HYPS’s) compared to a more lukewarm applicant who does apply to those other schools and then turns them down for Brown.</p>
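<p>To make that concrete, here is a toy tally with made-up battles: an enthusiast who never applies to Harvard contributes no Brown-over-Harvard win to the data an RP-style model sees, while a lukewarm applicant who applies and then declines does.</p>

<pre><code>
from collections import Counter

# Each battle is (winner, loser): a student admitted to both chose the winner.
# Scenario A: a lukewarm student also applies to Harvard, gets in, picks Brown.
battles_lukewarm = [("Brown", "Harvard")]

# Scenario B: an enthusiast applies only to Brown, so no battle is recorded,
# even though their preference for Brown over Harvard is the stronger one.
battles_enthusiast = []

def head_to_head(battles, a, b):
    """Observed wins for a and b against each other - the only input the model sees."""
    wins = Counter()
    for winner, loser in battles:
        if {winner, loser} == {a, b}:
            wins[winner] += 1
    return wins[a], wins[b]

print(head_to_head(battles_lukewarm, "Brown", "Harvard"))    # (1, 0): Brown gains a win
print(head_to_head(battles_enthusiast, "Brown", "Harvard"))  # (0, 0): nothing recorded
</code></pre>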

<p>It is intellectually interesting to dissect the RP ratings, but they should certainly not be viewed as in any way reliable or authoritative, even as measurements of their own 1999-2000 data.</p>