<p>Actually, I think USNWR does play favorites. It designed its metrics and methodology around values that its makers or readers hold, and currently those metrics favor a handful of private schools (or publics that may as well be private, given how they are run) at the very top. In addition, if they drop schools as punishment for lying, I think that should dent their credibility more, not less. That’s like an 8-year-old child throwing a hissy fit because their parents told them there is no tooth fairy. If USNWR were mature, then in the Iona case and any other school’s case, it would simply have plugged the corrected data into its formula, generated a number, and seen where the school landed in the rankings. Also, Emory did a kind of data “switch”: it had two sets of data, both accurate, and the issue was that it sent ranking agencies the set it was not supposed to use. At least the numbers were not completely made up. Regardless, Emory was wrong here; it was so pointless and unnecessary.</p>
<p>On a side note, I question the metric as a whole: “selectivity” should describe how hard it is to get in, not who ends up enrolling, and admitted-student data is what captures the former. If they wanted enrolled-student data, they should have a separate metric called “caliber” (perhaps they do), but rules are rules.</p>