Peer Assessment Rank


<p>No, but there is always the fact that at most top schools, the researchers aren’t teaching, and thus it matters little to undergrads whether or not their school developed a new kind of velcro.</p>

<p><a href="http://www.marco-learningsystems.com/pages/kline/prof/profchap5.html">http://www.marco-learningsystems.com/pages/kline/prof/profchap5.html</a>
<a href="http://www.getuponline.org/casualization/casualization_chronicle.htm">http://www.getuponline.org/casualization/casualization_chronicle.htm</a>
<a href="http://www.yalealumnimagazine.com/issues/99_07/GESO.html">http://www.yalealumnimagazine.com/issues/99_07/GESO.html</a></p>

<p>Penn: 40% of classes taught by full-time professors
Yale: 30% taught by full-time professors</p>

<p>Barrons,
You already know that I think that the PA is of very little value to most students and has no place in the college rankings system, but I want to give you a chance to defend your position. How would you suggest that a student looking for a college use the PA scores if he/she is not interested in the technical fields and/or performing research while in college?</p>

<p>The top schools on the peer assessment list look pretty good (this is probably how the world would rank the US schools):</p>

<ol>
<li>Harvard University 4.9</li>
<li>Massachusetts Institute of Technology 4.9</li>
<li>Princeton University 4.9</li>
<li>Stanford University 4.9</li>
<li>Yale University 4.9</li>
<li>California Institute of Technology 4.7</li>
<li>University of California-Berkeley 4.7</li>
<li>University of Chicago 4.7</li>
<li>Columbia University 4.6</li>
<li>Cornell University 4.6</li>
<li>Johns Hopkins University 4.6</li>
<li>Duke University 4.5</li>
<li>University of Michigan-Ann Arbor 4.5</li>
<li>University of Pennsylvania 4.5</li>
</ol>

<p>Norcalguy,</p>

<p>Who is “the world?”</p>


<p>Nope, Barrons! One needs a modicum of Critical Reading aptitude to be able to recognize the “right” horse or the … use of sarcasm.</p>


<p>Norcalguy,</p>

<p>“this is probably how the world would rank the US schools”</p>

<p>and being unable to locate half of them on a map of the US. Asking the “world” to rank the best US undergraduate business schools would probably yield the same exact list. :)</p>



<p>It’s much more complicated than to say the PA score is really only a measure of grad school prestige or faculty quality. I agree it’s subjective, but it appears to be a subjective meshing of research/faculty prestige PLUS undergraduate quality. The proof? The scores themselves don’t quite make sense if it were truly only a faculty quality measure.</p>

<p>The two obvious standouts are Berkeley and Michigan. Berkeley, especially, would be in the 4.9 group if it were truly only a grad school ranking. As it is, it appears to get an arbitrary “punishment” ding for undergrad to account for the 4.7. Other examples are Wisconsin and UT-Austin. Wisconsin is rated at only 4.2, and UT-Austin is in the same 4.1 group as Georgetown, Rice, Vanderbilt, etc., schools they HANDILY beat in terms of graduate programs and faculty strength. Comparing Rice vs. UT-Austin, UT is ranked higher in just about EVERY academic program they both share, and by quite a high margin in most cases. Not to mention, it has many more highly ranked programs to begin with, and across a much broader academic spectrum. So this is an example of Wisconsin and UT-Austin getting some sort of subjective nudge down due to their admittedly less selective undergraduate programs (in the case of UT, it’s required by state law to be at least 90+% in-state at the undergrad level).</p>

<p>There are also examples of schools getting a boost BECAUSE of their undergrad strength: UVA, Brown, and Dartmouth should not be ranked where they are by this measure (or certainly not over UT, Wisconsin, and UCLA!!) if it is truly a “research” measure only. So, while it may be true the PA is indeed subjectively biased toward strong research/grad schools, there are clearly corrections made for the quality of the undergraduate college.</p>

<p>Xiggi, perhaps I did not explain myself properly. I never said the Peer Assessment score measures quality of undergraduate education. I have in fact always said that such a thing cannot be measured. Education, particularly at the university level, is a highly personal undertaking, and it varies from individual to individual. I do, however, believe that the Peer Assessment score measures perceived quality of undergraduate institutions (not education) based on the strengths of their academic departments, the quality of their faculties and facilities, ties to academe, research and industry, and the wealth of resources. How good an education one gets, on the other hand, depends almost entirely on that person and how much effort they put into their education.</p>

<p>Xiggi, I think you’ve gone overboard in defending your position. What the “peer assessment” measures is really quite clear: it’s “reputation.” Simple as that. Dedication to teaching, offered at one point as an example of what might factor into a school’s reputation, is not part of the definition of that term; it is merely (and expressly) one factor which might affect a school’s reputation.</p>

<p>Like it or not, reputation is important to people. And I agree with Norcalguy (being a “norcal” guy myself): the list of top PA schools strikes me as a pretty accurate read of how the generally knowledgeable (but not CC-obsessed) public would rank these schools. Those probably are the schools with the top academic reputations in the country, or near enough.</p>

<p>^^^Right. The peer assessment itself is not a measure of undergraduate quality or quality of teaching but simply reputation, one measure that might go into one’s consideration when choosing an undergrad just like SAT scores or alumni giving rate or graduation rate.</p>


<p>Kluge, rather than worrying about the definition of the peer assessment, why not spend some time making up your mind! Why did you bother writing an entire paragraph about the elusive “quality of education” if the key to understanding the value of the peer assessment was confined to its measurement of “reputation?” Funny how the “key” word was not even mentioned! </p>

<p>Lapsus linguae or lapsus calami? </p>



<p>Alexandre, I can’t disagree with your points, especially about the term “perceived quality.” This allows the perception and the reality of the strengths of their academic departments, the quality of their faculties and facilities, ties to academe, research and industry, and the wealth of resources not to have to be the same thing! </p>

<p>The results of the PA are an unmistakable sign that perception is in the eye of the beholder. A perception that could be corrected with a bit of attention to the data that starts after the second column of the rankings. But that is obviously not the objective of the surveyor or the surveyees. Didn’t Morse recognize that the objective of the PA is simply … to level the playing field and boost the rankings of the large public research schools?</p>

<p>But what do I know? Since presidents of schools who are asked to complete the PA survey do seem to know better, why not listen to this voice: “Moravian College, founded in 1742, one of America’s oldest and most respected liberal arts colleges, feels the use of this highly subjective and highly manipulated instrument undermines the college selection process and does not contribute to the common good,” said Christopher M. Thomforde, president. “We agree with the criticisms that this survey provides inaccurate information and distorts perceptions of the quality of instruction found at America’s colleges and universities.”</p>

<p>Joshua, </p>

<p>Only 58% of these presidents and deans respond to the survey. And they themselves have said they have no idea about other schools, especially their undergrads (at least a few have, as posted on this thread). And of course, they are from over 100 schools, not just the top schools in America. A dean from a 100th-ranked school has as much weight as a dean from a top-10 school. </p>

<p>Don’t make bold claims without following the thread and paying attention to the facts.</p>

<p>Xiggi, what kind of “reputation” did you think I was talking about? A reputation for fine architecture? Water quality? Football prowess? Their students’ good looks? The “reputation” of these academic institutions in the context of “peer assessment” is their reputation for academic excellence. You can argue methodology, significance, accuracy, even deep dark conspiracies if you like, but the intended focus of this factor, a university’s reputation for academic excellence, isn’t really a tough question to figure out. This is simple, so simple I shouldn’t even have to write this. You’re really stretching to make your point, for no reason I can understand.</p>

<p>Though I’ve posted this on previous threads discussing the Peer Assessment, I think it bears repeating. In sum, the problems with the Peer Assessment are at least twofold:</p>

<p>1) An inherent bias with such a survey
2) The impossibility of being able to accurately “grade” every university out there</p>

<p>The closest analogy I’ve proposed in the past is the NCAA College Football Coaches’ Poll, which has similar weaknesses (here are some of my previous posts):</p>



<p>joshua007,
It appears to this reader that, somewhere along the way, your education did not teach you original, critical thinking. I suggest you stop deferring to everyone else’s posts and the opaque opinions of unnamed academic responders in the PA survey. It appears that you are not even attempting to understand the problems associated with this measure, but blindly accept it because it reinforces your personal view about the research-intensive colleges that you and your UK science/engineering colleagues know about. </p>

<p>Forget the names of the colleges being ranked and think about what the PA is supposed to measure, e.g.:</p>

<ol>
<li>Can Peer Assessment be defined in a way that all agree on?</li>
<li>How is PA supposed to be compiled? Is there any standard that different responders are to use in assigning their grades?</li>
<li>Who is doing the grading? Over 1,300 colleges get the survey and only 58% respond, which means that over 546 colleges did NOT respond. In addition, we don’t know who the responders are.</li>
<li>What is the legitimacy of the relative grades? E.g., is there any potentially nefarious grading going on: is the University of Maryland marking down the University of Miami in an attempt to end the statistical tie between these two schools?</li>
</ol>

<p>Frankly, it is hard to find the good in Peer Assessment scoring unless you are someone who is interested in a career in academia and want to know the colleges that have the highest profiles in the technical research areas. In your opinion, what else is useful about PA?</p>

<p>I suggest you read “The Wisdom of Crowds”. Even the individually partially informed make good estimates of the facts when in a moderately large, diverse group.</p>

<p>Penguin Random House</p>
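<p>The book’s central claim can be illustrated with a quick simulation; this is purely a sketch, and the numbers below (a “true” score of 4.7, noise of one full rating point per rater) are hypothetical, not from the survey. If each of 4,000 raters reports a school’s true quality plus independent personal noise, the crowd average lands far closer to the truth than a typical individual does:</p>

```python
import random

random.seed(42)

TRUE_VALUE = 4.7   # hypothetical "true" quality score for one school
N_RATERS = 4000    # roughly the number of individuals US News surveys

# Each rater reports the truth plus independent Gaussian noise.
ratings = [TRUE_VALUE + random.gauss(0, 1.0) for _ in range(N_RATERS)]

crowd_estimate = sum(ratings) / len(ratings)
typical_individual_error = sum(abs(r - TRUE_VALUE) for r in ratings) / len(ratings)

print(round(crowd_estimate, 2))            # very close to 4.7
print(round(typical_individual_error, 2))  # around 0.8: a lone rater is far off
```

<p>Of course, the crowd’s advantage here depends on the errors being independent; a shared bias (e.g., everyone over-rating famous research schools) does not average out, which is exactly the objection raised elsewhere in this thread.</p>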


<p>Response rate is measured at the individual level, not the institutional level. The response rate does not reveal how many colleges responded. USNews said that over 4,000 INDIVIDUALS were surveyed. See page 78 in the 2007 ranking volume, or post #6 on this thread.</p>

<p>hoedown,
Thanks for the clarification. So if 4,000+ individuals were surveyed, that means that at least 2,320 responded and at least 1,680 did not. It sure would be nice to know which people (and schools) are in each group. Not to mention how nice it would be to know what they said and how they graded.</p>
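<p>That arithmetic checks out, taking “over 4,000” as exactly 4,000 for the lower bound:</p>

```python
surveyed = 4000        # lower bound on individuals surveyed, per US News
response_rate = 0.58   # reported response rate

responded = round(surveyed * response_rate)
did_not_respond = surveyed - responded

print(responded)        # 2320
print(did_not_respond)  # 1680
```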