<p>The Peer Assessment amounts to a check on the rankings by objective factors, and helps to stabilize them against year to year manipulations and other noise.</p>
<p>Most PA-free rankings correlate with US News, so it would hardly be earth-shaking to drop PA or change its weight.</p>
<p>Are you nuts? What could possibly fluctuate more than response rates to a questionnaire? Secondly, I think what you meant to say was, “a check on the rankings by <em>subjective</em> factors,” since there is nothing objective about responses to the Peer Assessment questionnaire. If you rephrase it that way, then I think the paradox becomes clearer: a data-driven survey that gets compromised every year by a less-than-objective ingredient. If the editors are so convinced it won’t make a difference, then why not try it without it?</p>
<p>A subjective factor that serves as a check on the ranking-by-objective-factors, i.e. on that part of the US News rating derived from numeric data.</p>
<p>The numerics fluctuate and can be manipulated year to year. Opinion as measured in PA assimilates those changes slowly and is less susceptible to manipulation. For instance, let’s say that US News were to use one number such as percentage of National Merit semifinalists in its rating, but did not directly use some other number such as the cost per student or financial aid budget. A school could then manipulate its ranking by purchasing National Merit semifinalists, but this would not fool the Peer Assessment. If, over time, the large population of National Merit students affected the character of a school, this would affect its reputation and the Peer Assessment would change in due course.</p>
<p>In other threads, I have used the term “mush” for the Peer Assessment, and taken on its own I consider it superfluous. However, given that US News gathers the PA data, its role in the rankings is along the lines described above, and I think it does increase the reliability of the ratings. One can argue whether 25 percent is a good weighting of this factor.</p>
<p>Siserune, the correlation of the PA and the ranking is circular, since the PA itself weighs so heavily in the final rankings.</p>
<p>As far as changes that would result from reducing its weight or eliminating it altogether, it would be earth-shaking for a number of colleges, starting with the so-called Seven Sisters. The non-coed schools would tumble down! Rankings without the “balancing check” would present a different image, as there would be quite a shuffle in the top 20-30 schools.</p>
<p>The use of the PA to support public schools was clearly explained by Morse. Its use to maintain old favorites in the LAC rankings is not hard to see.</p>
<p>I don’t see how you can provide a meaningful ranking system without a peer assessment component. I would argue that it is the single most meaningful parameter in the USNews ranking system. It provides a general measure on the quality of the faculty as well as the learning environment. These parameters are hard to quantify, and I would argue that discussions on improving rather than doing away with the assessment will be more productive.</p>
<p>PAdad, even the most vocal supporters of the peer assessment find it hard to explain how the PA measures QUALITY of education. A few months ago, there was a “hilarious” thread in the College Search forum. The issue was about the difference between the “quality of education” versus the “perception of the quality of education” and about the inclusion of “dedication to teaching.”</p>
<p>As far as producing meaningful rankings without the PA, why not produce BOTH and let the consumers pick the one closer to their heart? The PA fanboys could continue to cherish the results of their manipulated rankings; others could enjoy sanitized rankings. </p>
<p>Of course, this will NEVER happen since the correction (read manipulation) of the data is so important for Morse and his staff. The production of two rankings would simply be too revealing!</p>
<p>I’m not saying a beauty contest isn’t fun and useful. I am saying that combining it with a data-driven survey results in a hybrid that only emphasizes the weaknesses of both. The pseudo-science of the other bells and whistles in the USNews survey takes all the fun and simplicity out of the beauty contest, and the beauty contest robs the scientific survey of all its supposed objectivity. </p>
<p>And, Xiggi is right, 1% of “mush” is still mush. Siserune’s reasoning is a defense of self-fulfilling prophecy, over the long-run, in perennial ranking systems. Hardly a strong reed upon which to lean, IMO. </p>
<p>I would agree that the PA survey is superfluous but for the fact that it actually does cost the colleges something in work-hours spent assembling the answers to the surveys every year. That’s YOUR tuition money being spent to prop up a commercial venture that, unlike other cooperative ventures with the business world, doesn’t even result in royalties to the college.</p>
<p>xiggi said–
The non-coed schools would tumble down! Rankings without the “balancing check” would present a different image as there would be quite a shuffle in the top 20-30 schools.</p>
<p>xiggi, why do you say this?</p>
<p>All-</p>
<p>Out of curiosity, if USNR removed the PA component, how would the ratings come out?</p>
<p>Is this something that can be modeled say, with the current 2007 data? I’m tempted to add a new thread on this question only in case there is some kind of great numbers person who has access to usnr data who might do this report.</p>
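<p>For anyone curious, here is a rough sketch of how such a model could work: strip out the PA points and renormalize the remainder to a 100-point scale. The schools and scores below are invented for illustration, since the actual component-level data is not public.</p>

```python
# Hypothetical example: re-rank schools after dropping the PA component.
# Overall scores and PA points here are invented, not actual US News data.

schools = {
    # name: (overall score out of 100, PA points out of 25)
    "School A": (100.0, 25.0),
    "School B": (95.0, 20.0),
    "School C": (93.0, 24.0),
}

def score_without_pa(overall, pa_points, pa_weight=0.25):
    """Remove the PA points, then renormalize to a 100-point scale."""
    return (overall - pa_points) / (1.0 - pa_weight)

reranked = sorted(
    ((score_without_pa(o, p), name) for name, (o, p) in schools.items()),
    reverse=True,
)
for s, name in reranked:
    print(f"{name}: {s:.1f}")
```

<p>Note how a school with a strong PA but weaker numbers drops once the PA is removed, which is exactly the kind of shuffle being debated here.</p>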
<p>I posted the above paragraph before I saw the immediately preceding posts.</p>
<p>quote–
That’s YOUR tuition money being spent to prop up a commercial venture that unlike other cooperative ventures with the business world, doesn’t even result in royalties to the college.</p>
<p>YES! LET’s ALL SAY THE EMPEROR HAS NO CLOTHES, please.</p>
<p>USNW IS A GIANT COMMERCIAL FIRSTLY</p>
<p>commercial for USNR and a commercial for the schools. that tuition money going to usnr is really out of the schools’ marketing budgets.</p>
<p>Idic, you’ll need to check the rankings to understand that point. With your magazine in hand, check the rankings for Smith versus Harvey Mudd or Washington and Lee. Check Wellesley versus Pomona. Check Bryn Mawr … and go down the list cherry-picking schools with a PA that is higher than their overall ranking would indicate.</p>
<p>PS Hawkette tried to do what you just proposed. Unfortunately, it’s nothing more than speculation since we don’t know how the weighting works at USNews.</p>
<p>Just eliminate the one-size-doesn’t-fit-all rankings and the notion of “best.” Present the data alphabetically as is done for the third- and fourth-tier schools. For each column, add color coding, e.g., blue for the top quintile, green, yellow, orange, red for the bottom. Readers can then scan the columns important to them, e.g., “% of classes under 20” and the blue boxes will jump out. PA can be included, and is just another datum.</p>
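<p>For what it’s worth, the quintile color coding described above is trivial to compute. A minimal sketch, with made-up numbers for a hypothetical “% of classes under 20” column:</p>

```python
# Sketch of the alphabetical, color-coded presentation suggested above.
# The data values are made up for illustration.

def quintile_colors(values):
    """Map each value to a color by quintile rank (blue = top 20%)."""
    colors = ["blue", "green", "yellow", "orange", "red"]
    ranked = sorted(values, reverse=True)  # assume higher is better
    out = []
    for v in values:
        frac_above = ranked.index(v) / len(values)  # share of schools ahead
        out.append(colors[min(int(frac_above * 5), 4)])
    return out

pct_classes_under_20 = [70, 55, 62, 48, 80]  # five hypothetical schools
print(quintile_colors(pct_classes_under_20))
```

<p>Readers scanning a column would then just look for the blue cells, with no composite “best” score needed.</p>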
<p>Siserune, do you always assume people do not know what they are talking about? </p>
<p>Of course, most everyone who has looked at the USNews with a bit of scrutiny KNOWS that the PA represents 25% of the total score. What we do not know, however, are the details. For instance, we do not know the <strong>score</strong> of a 4.9 versus a 4.3 or a 4.0.</p>
<p>Does a 5.0 carry a 25/25, or is it something else? Does a 4.0 “earn” 20/25, or is it something else? </p>
<p>Fwiw, since the top school starts with a score of 100, we know that a perfect score of 25 on the PA is NOT directly related to a perfect (and impossible) 5.0. Further, the same logic applies to all the remaining ranked data: we do not know the NUMERICAL DIFFERENCE for a school ranked first in the over/under performance on expected graduation. For instance, do we know how MANY points are lost by Harvey Mudd when USNews ranks it dead last in that category? How many points does a school such as Smith “earn,” and what advantage does it derive from receiving a much lower expected graduation rate because of its less than competitive selectivity?</p>
<p>Yes, we know the different weights that compose the total 100%, but we don’t know how the reported data is COMPUTED WITHIN each category.</p>
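<p>To illustrate why this matters, here is one plausible, purely hypothetical within-category scheme: linear rescaling between the best and worst observed values in a column. USNews has never published its actual computation; this only shows why a 4.9 PA need not translate to a fixed number of points.</p>

```python
# One plausible (purely hypothetical) within-category scoring scheme:
# rescale each raw value against the best and worst values in the column,
# then multiply by the category weight. US News has never disclosed its
# actual method; this only illustrates the ambiguity discussed above.

def category_points(value, best, worst, weight):
    """Linear rescale of a raw value to its share of the category weight."""
    return weight * (value - worst) / (best - worst)

# Under this scheme the best observed PA (say 4.9) earns the full 25 points,
# while a 4.0 earns proportionally less, measured against the worst score.
print(category_points(4.9, best=4.9, worst=1.0, weight=25))
print(category_points(4.0, best=4.9, worst=1.0, weight=25))
```

<p>Change the assumed worst score and the gap between a 4.9 and a 4.0 changes too, which is exactly the detail we cannot see from the published table.</p>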
<p>Xiggi, if you disagree with what I said, please use my exact words, which were not the “QUALITY of education.” My statement on the quality of faculty and learning environment certainly does not equate to the quality of education. If I knew how to assess the quality of education, I would certainly bottle it. I do, however, have an informed opinion on the quality of faculty at specific institutions and on learning environments. </p>
<p>I disagree strongly with your statements regarding female-only schools. Wellesley, Smith and Bryn Mawr are among the best colleges. If they go co-ed today, their ranking will only go up.</p>
<p>As far as changes that would result from reducing its weight or eliminating it altogether, it would be earth-shaking for a number of colleges, starting with the so-called Seven Sisters. The non-coed schools would tumble down! Rankings without the “balancing check” would present a different image, as there would be quite a shuffle in the top 20-30 schools.</p>
<p>xiggi was talking about reducing/eliminating the PA in this context, not removing the single-sex attribute of the schools.</p>
<p>One naive question: why doesn’t USNR publicly disclose its criteria and numerical formula? After all, if they are truly interested in journalistic reporting, full transparency would seem natural. </p>
<p>Your rebuttal simply reinforces what was contained in the thread I labeled as hilarious because of all the “semantics” games played to define the Peer Assessment. Obviously, the PA is whatever anyone wants it to be … from the readers to the people who fill out the surveys. Too bad it does not always match the directions of USNews, doesn’t it? </p>
<p>Regarding your statement, “I disagree strongly with your statements regarding female-only schools. Wellesley, Smith and Bryn Mawr are among the best colleges. If they go co-ed today, their ranking will only go up,” may I suggest you read MY words again and see how they were related to the context of this discussion on PEER ASSESSMENT. </p>
<p>My words were: “As far as changes that would result from reducing its weight or eliminating it altogether, it would be earth-shaking for a number of colleges, starting with the so-called Seven Sisters. The non-coed schools would tumble down!”</p>
<p>HTH</p>
<p>PS And, if you care to know. I happen to believe that Smith and Wellesley ARE among the best colleges in this country. I also believe that many non-coed schools demonstrate that the best colleges are not necessarily the most selective and highest ranking schools. I do, however, believe that they are and have been misranked for a long time by US News.</p>
<p>Xiggi, My apology as I confess that I didn’t read the whole thread. I am sorry that you left out Bryn Mawr in your rebuttal. It deserves to be mentioned together with the other two colleges. </p>
<p>PS, And if you care to know, I have never paid a subscription to the USNews ranking and really don’t know the details. Thus, it is sort of silly of me to even participate in this thread.</p>
<p>Padad, there is absolutely no need to apologize. We are simply exchanging opinions and thoughts. Also, you do not need to have a subscription to the USNews to express your thoughts about it. </p>
<p>You’re correct about Bryn Mawr; I simply did not retype the name in my list. Actually, the list should be extended to include Columbia’s Barnard or Mt Holyoke … and the ones I forgot to mention. The problem with any type of list is that it’s easy to forget one or two.</p>
<p>xiggi: the “scores” as you call them (i.e. the coefficients of the various columns in the US News table) can be inferred from the table itself. That US News publishes the relative weights of the different factors is decisive in being able to do this; it increases the accuracy available from the somewhat incomplete data to the point where a reliable result can come out. It is not as simple as running a regression to extract the coefficients, which may be why the other poster’s attempt didn’t work. I can advise if anyone here wants to reverse engineer the US News ratings; my to-do list is too long to code it myself before the next ranking comes out.</p>
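<p>For anyone tempted to try the reverse engineering siserune describes, a baseline attempt would be to regress the published overall scores on the published column data. The data below is synthetic, and siserune notes that a plain regression like this was reportedly not sufficient on the real table, so treat it only as a starting point.</p>

```python
# Baseline sketch of reverse-engineering ranking coefficients:
# fit published overall scores as a weighted sum of the column data.
# Data here is random/synthetic; on the real (incomplete, rounded) table
# a plain least-squares fit is reportedly not enough by itself.

import numpy as np

rng = np.random.default_rng(0)
n_schools, n_factors = 50, 7
X = rng.random((n_schools, n_factors))  # normalized column data per school
true_w = np.array([0.25, 0.20, 0.20, 0.15, 0.10, 0.05, 0.05])
scores = X @ true_w                     # simulated published overall scores

# Ordinary least squares recovers the weights exactly on clean data.
w_hat, *_ = np.linalg.lstsq(X, scores, rcond=None)
print(np.allclose(w_hat, true_w))
```

<p>On real published data the rounding of both the scores and the columns adds noise, which is presumably where the extra machinery siserune alludes to comes in.</p>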