<p>
[quote]
New research raises additional questions about the "reputational" survey that is worth 25 percent (more than any other factor) on the U.S. News & World Report rankings of colleges.
[/quote]
</p>
<p>What the research found is that the reputational scores don't correlate with changes in factors such as resources or graduation rates, but correlate with the previous year's rankings. In other words, the way you get a good reputational score -- and in turn a good ranking -- is to already have a good ranking.</p>
<p>Alexandre, I could not disagree more with that statement. The Peer Assessment is the WORST part of the USNews report. The Peer Assessment is an exercise in futility and abject cronyism. It is nothing but an absolute fraud perpetrated annually on hordes of unsuspecting buyers of the magazine.</p>
<p>Before assigning any value to the PA, have you asked yourself HOW the surveys are completed and by WHOM? The USNews would like us to believe that it is a very scientific poll and that we are benefiting from "intelligent" answers. The reality is that the surveys that are returned are probably filled out by a young secretary who assigns 3s, 4s, and 5s between sips of Diet Coke, basing her answers on ... last year's numbers, or some whimsical notion of what other schools are. Do YOU really believe that the Provost of Oral Roberts University (if he ever saw the survey) has ANY idea how Harvard stacks up against Arizona State? Don't you think that members of the famous Seven Sisters are making sure to assign perfect scores to their "buddies"? How else could one explain the amazingly high scores that have no correlation WHATSOEVER with their selectivity numbers?</p>
<p>^ Proven, perhaps not, but it is indeed a big "duh". I prefer to think of it as a negative feedback system that helps keep the rankings from jumping too much from year to year.</p>
<p>Are you talking about the Peer Assessment survey? That is one of the most useful factors in the US News rankings. About 90% of the Peer Assessment rating can be accounted for by hard data. There is a sound basis for the Peer Assessment ratings.</p>
<p>Not true. It's superfluous only if the same data appears in the predictor twice. So this would prove PA superfluous only if you included both the previous year's ranking AND PA. What this says instead is that PA is measuring something which, correlated as it may be with other elements of the ranking, is mostly correlated with itself. This is a good thing -- it means PA is capturing something which is not found in the other numbers. It also means PA is somewhat static, which is perfectly reasonable and expected, since reputation is slower to change and is largely a lagging indicator relative to changing quality. </p>
<p>Seriously, this is not news, and it can quite easily and reasonably be spun to sound positive for PA -- and I'm no big fan of the PA or its methodology.</p>
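<p>To see the lagging-indicator point concretely, here is a minimal Python simulation (every number below is invented; this is an illustration of the mechanism, not actual US News data). A "reputation" score that absorbs only a fraction of each year's quality signal ends up correlating almost perfectly with its own previous value, and only weakly with year-to-year quality changes:</p>
[code]
# Toy simulation: reputation as an exponentially smoothed, lagging
# version of underlying quality. Invented numbers, illustration only.
import random
import statistics

random.seed(1)
quality = [50.0]
for _ in range(199):
    quality.append(quality[-1] + random.gauss(0, 2))  # quality drifts slowly

alpha = 0.2  # reputation absorbs only 20% of new information per year
reputation = [quality[0]]
for q in quality[1:]:
    reputation.append((1 - alpha) * reputation[-1] + alpha * q)

dq = [b - a for a, b in zip(quality, quality[1:])]
# statistics.correlation requires Python 3.10+
print(statistics.correlation(reputation[1:], reputation[:-1]))  # ~1: tracks its own past
print(statistics.correlation(reputation[1:], dq))               # weak: yearly changes barely register
[/code]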
<p>Certainly you would expect PA to remain relatively constant. Why would you expect it to change much? It's not as though colleges change dramatically year to year. They barely change decade to decade.</p>
<p>The correlation between PA and the SAT 25th percentile is very high, about +.74 among the top 100 universities in US News that report SAT. You can calculate it yourself in an Excel spreadsheet: just enter the data from US News and compute a Pearson correlation between the two columns. Seeing is believing; you don't have to rely on anyone else's research. Another way to say this is that selectivity accounts for roughly 55% of the variance in the PA rating (.74 squared).</p>
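<p>For anyone who would rather not fire up Excel, the same two-column calculation takes a few lines of Python. The columns below are placeholders, not the actual US News figures -- paste in the real data:</p>
[code]
# Pearson r between PA score and SAT 25th percentile, then r squared.
# Placeholder columns -- substitute the real US News data.
import statistics

pa_scores = [4.9, 4.8, 4.6, 4.1, 3.8, 3.5, 3.2]        # hypothetical PA column
sat_25th = [1480, 1460, 1420, 1330, 1280, 1210, 1150]  # hypothetical SAT column

r = statistics.correlation(pa_scores, sat_25th)
print(f"r = {r:.2f}, r^2 = {r * r:.2f}")  # r^2 = share of PA variance tracked by selectivity
[/code]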
<p>The research showed that if you want to know what next year's PA values will be, just look at this year's final overall ranking:
[quote]
But the theory behind the study was that if these are key measures of quality in the magazine's view, institutions that change in these categories should also experience reputational changes over time. But they didn't -- while the correlation that was clear was reputation with the previous year's rankings.
[/quote]
The key is over time, rather than looking at one static year's values.</p>
<p>Caveat: I admit to being influenced by scientific-method peer-reviewed research.</p>
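<p>As I read it, the study's test looks something like the following sketch. The rows and column names are made up purely to show the mechanics: correlate year-over-year changes in a hard metric against changes in PA, then separately correlate this year's PA against last year's overall rank.</p>
[code]
# Sketch of the study's design with fabricated toy rows, one per school:
# (change in grad rate, change in PA, last year's overall rank, this year's PA)
import statistics

rows = [
    (+2.0, +0.10, 1, 4.9),
    (-1.0, -0.10, 2, 4.8),
    (+3.0, 0.00, 3, 4.6),
    (+0.5, +0.05, 10, 4.1),
    (-2.0, +0.10, 25, 3.6),
    (+1.0, -0.05, 40, 3.3),
]
d_grad = [r[0] for r in rows]
d_pa = [r[1] for r in rows]
prior_rank = [r[2] for r in rows]
pa_now = [r[3] for r in rows]

# The study reports the first correlation as near zero on real data,
# and the second as the strong one (lower rank number = higher PA).
print(statistics.correlation(d_grad, d_pa))
print(statistics.correlation(prior_rank, pa_now))
[/code]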
<p>Correlation is not causation. It's somewhat obvious that better students tend to select what are thought to be better colleges, which you would expect to have a higher PA. But there are exceptions too.</p>
<p>It is a good thing that PA does not change dramatically with random fluctuations in a few statistics. A good analogy is a college student's overall cumulative gpa. It doesn't change much as a result of one grade or one semester. It captures the big picture over time. The more history, the more it resists change. This is the way it should be. But a long stretch of poor performance WILL cause it to change.</p>
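<p>The GPA analogy is easy to put in numbers (a toy record, obviously): the longer the history, the less one bad semester can move the cumulative average.</p>
[code]
# Toy cumulative-GPA arithmetic: one bad semester barely dents a long record.
semester_gpas = [3.8, 3.7, 3.9, 3.8, 3.6, 3.8, 3.7, 2.0]  # hypothetical record

cum, total = [], 0.0
for i, g in enumerate(semester_gpas, start=1):
    total += g
    cum.append(round(total / i, 2))

print(cum)
# The 2.0 in semester 8 moves the cumulative GPA from 3.76 to 3.54;
# the same 2.0 in semester 2 would have dragged it down to 2.90.
[/code]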
<p>So what is causing the identified changes in PA? It is not changes in the key measures of quality, since there is no correlation there. Since the correlation is with the previous year's ranking, the reader is left to decide whether that is a reasonable cause. The subject of the next research? ;)</p>