USNEWS 2010 Rankings

<p>^ agreed, Bjo. Academically, this is where the schools fall in terms of opportunities, rigor, faculty quality, strengths, etc.</p>

<p>Also, I’ve got to admit that I’m tired of people on CC trying to hold up liberal arts colleges like Amherst, Williams, etc. and LAC-esque universities like Brown and Dartmouth as superior in undergraduate academics to well-known research universities like Columbia, Johns Hopkins, Cornell, Harvard, etc. While you bring up a valid point that LACs afford undergrads greater “attention to grades, focus on advising, etc.,” you make these points under the assumption that such an education is perfect for EVERYONE. True, hand-holding might be desirable for some people, but others look forward to being taught by researchers and leaders at the forefront of their fields, who are in much greater abundance at places like Columbia, Harvard, JHU, etc. than at places like Amherst, Dartmouth, etc. Hand-holding is not everyone’s ideal undergraduate education.</p>

<p>So please, in the future, stop trying to use such reasons to explain why _________ LAC-like college is greater than ____________ research university.</p>

<p>“True, hand-holding might be desirable for some people, but others look forward to being taught by researchers and leaders at the forefront of their fields, who are in much greater abundance at places like Columbia, Harvard, JHU, etc. than at places like Amherst, Dartmouth, etc. Hand-holding is not everyone’s ideal undergraduate education.”</p>

<p>Right. I couldn’t have said it better myself.</p>


<p>I disagree. The PA is first and foremost a roughly accurate reflection of the strength of the faculty, as perceived by other academic professionals. It has nothing to do with the strength of graduate programs, except insofar as the schools with the strongest and deepest faculty tend to attract the top graduate students and develop into the best graduate programs; but you’ve got the causation arrow reversed if you think the college and university administrators who fill out the PA survey are basing their answers on the strength of the graduate program, as that’s not what they’re asked and it’s not what they’re thinking about when they complete the survey. They don’t know what goes on in the classroom, either at the undergrad or grad school level. They do know who’s on the faculty at other schools; they keep a close scorecard, and they have a pretty good idea about which schools are attracting and retaining the most coveted faculty, and which aren’t. Believe me, as much as schools compete for the best students, they compete far more fiercely for the top academic talent, they know where their own institution stacks up in the pecking order, and they know who’s on top and who’s not, who’s rising and who’s sinking. That’s what the PA rating measures. </p>

<p>If you have any doubt about that, consider this: how is it that US News comes up with a PA rating for LACs, most of which don’t even have graduate programs? It’s because LACs keep an eagle eye on each other’s faculty, and know whose services are in demand, and whose aren’t. </p>

<p>That said, the PA rating is a crude indicator because it’s based on an overall assessment of faculty strength across all disciplines—something even the most attentive college and university administrators are not well positioned to assess, and the larger the school the less well-informed they are likely to be. Much better and more fine-grained is the NRC assessment of faculty strength, discipline-by-discipline, as measured by other faculty members in the same discipline. Unfortunately, the latest NRC rating currently available is now almost 15 years old, and it’s confined to schools that have graduate programs. So for the moment, the PA rating is unfortunately the best peer assessment we’ve got.</p>

<p>It never ceases to amaze me that people who in any other field would consider peer assessments a perfectly valid indirect indicator of quality (though perhaps only one among several) somehow come to the position that if conducted among academics, peer assessments are utterly worthless. I think it reflects a kind of anti-intellectualism, a belief that if academics think it, it must be wrong. Which, if you think about it, is a rather bizarre way to evaluate academic institutions, which, after all, are run by academics and intellectuals.</p>

<p>Your comment does not merit a response. First of all, this is not a “marketer’s survey,” and second, a 46% response rate invalidates the statistical significance of ANY study.</p>


<p>Good point. According to students, the revealed preference study shows the following:</p>

<p>Harvard 1
Yale 2
Stanford 3
Cal Tech 4
MIT 5
Princeton 6
Brown 7
Columbia 8
Amherst 9
Dartmouth 10
Wellesley 11
U Penn 12
U Notre Dame 13
Swarthmore 14
Cornell 15
Georgetown 16
Rice 17
Williams 18
Duke 19
U Virginia 20</p>

<p>Clearly, students’ perception of “prestige” is very different, isn’t it?</p>

<p>hope2getrice:</p>

<p>Because the PA is a subjective score, you are in no position to predict what all these individuals would do. You keep missing the point. You can only tell us what YOU would do and how you would do it. The rest is poor speculation. Get it?</p>


<p>Dude, a 46% response rate is EXTREMELY good and statistically significant; 46% is considered a very good response rate for a mail survey.</p>

<p>Consider that 41% of America voted in the 2007 election and 53% voted in the 2008 election… a 46% response rate on a survey is DEFINITELY NOT statistically insignificant… Where are you getting your information from?</p>

<p>A high response rate is the key to legitimizing a survey’s results. When a survey elicits responses from a large percentage of its target population, the findings are seen as more accurate. Low response rates, on the other hand, can damage the credibility of a survey’s results, because the sample is less likely to represent the overall target population.</p>
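<p>For what it’s worth, the statistical point in dispute here can be made concrete. Sampling error depends mostly on the raw number of respondents, not the response rate itself; what a low response rate mainly threatens is representativeness (nonresponse bias). A minimal sketch, assuming an invented figure for how many surveys went out:</p>

```python
import math

def margin_of_error(n_respondents, p=0.5, z=1.96):
    """Approximate 95% margin of error for an estimated proportion,
    using the worst case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n_respondents)

# Hypothetical numbers: suppose 4,000 surveys went out and 46% replied.
surveys_sent = 4000
responses = int(surveys_sent * 0.46)  # 1,840 respondents
moe = margin_of_error(responses)
print(f"{responses} responses -> about ±{moe:.1%} margin of error")
```

<p>Even with those invented numbers, ~1,800 respondents gives a sampling margin of error around two percentage points; whether the 46% who answered resemble the 54% who didn’t is a separate question that no sample-size formula settles.</p>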


<p>PA is a subjective score. The subjective tendencies tend to be weeded out when you survey a vast number of academicians… if you look at the macro scale of things… Harvard is no. 1 because many academicians feel Harvard is no. 1, just as many high school students already know… but things get murkier as you move lower down to second-tier schools. Only academicians are qualified to speak on those schools… we students really can’t. So we have to rely on the PA scores, outside of the obvious HYP, to gauge schools we are not qualified to speak about…</p>

<p>Who are you going to believe is qualified to speak on the perceived quality of education at XYZ institution: the president of a competitor school down the street, or an average joe CC forum member?</p>

<p>I would trust the subjective opinion of a person who has been in higher education for 25+ years and is widely respected by faculty members who hold PhDs… over some 18-year-old freshman college student who <em>thinks</em> they know more about colleges than they actually do.</p>

<p>PA scores are subjective, but so are the opinions of us members… When you get a statistically significant response from respected members of the higher education community… the larger sample size will legitimize the results… If a vast number of academics subjectively think Princeton is no. 1… then Princeton probably is no. 1! Simple as that.</p>

<p>A Brown alumnus who later became the president of an Ivy League university would naturally put Brown at a 5.0… Brown, as the fine institution it already is, did manage to produce a leader in the field of higher education (whether or not that was down to the individual is a whole other topic). You can’t get better than university president these days in higher education. The point is… a single 5.0 is an outlier… take a large sample size and it will negate a 5.0, or a 1.0 from a jealous university president who didn’t get the Dartmouth job or whatever.</p>

<p>A 46% response rate is high… the collective sum of opinions elicited from a large number of academicians… should neutralize the 5.0 and adjust it to where it appropriately belongs… It’s called taking a large sample size to get an accurate reading of the collective opinion of the target audience…</p>
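<p>The outlier-averaging claim is easy to sanity-check numerically. A rough sketch with entirely invented ratings, showing that one loyal alumnus’s 5.0 among a hundred responses barely moves the average:</p>

```python
import random

random.seed(0)

# Hypothetical ratings: 99 raters score a school somewhere around 3.8,
# plus one loyal alumnus who gives it a flat 5.0.
honest = [round(random.uniform(3.5, 4.1), 1) for _ in range(99)]
ratings = honest + [5.0]

mean_without = sum(honest) / len(honest)
mean_with = sum(ratings) / len(ratings)

# With 100 raters, a single 5.0 shifts the average by roughly a hundredth:
print(f"without outlier: {mean_without:.2f}, with outlier: {mean_with:.2f}")
```

<p>The shift works out to (5.0 minus the honest mean) divided by the number of raters, so with hundreds of survey respondents a lone homer or grudge vote changes the published tenths-place score almost never.</p>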

<p>I’m sure the USNews editors of a massive journalism conglomerate are well aware of the potential flaws of the PA score system… and have addressed them. They’d better, because they’ve been sending out surveys since 1987… many years of experience under their belt. US News has skyrocketed toward success and has literally become the gold standard outside the NRC rankings… I’m pretty sure they have it under control. The methodology has been tweaked many times before this.</p>


<p>Um, I think it’s a fact that UIllinois, UWisconsin, etc. have better graduate programs than Brown and Dartmouth in both quantity and quality. If you don’t believe me, check the NRC rankings. For this reason alone, Brown and Dartmouth would have had to get a 3.0 or so on the PA score, well below Wisconsin and Illinois, if they were truly being evaluated as overall institutions, grad programs included, but they were not.</p>

<p>It’s NOT so hard to understand. The Brown and Dartmouth scores are good as they stand and definitely put them within the top 15 schools, which is certainly NOT where they would be if this were a graduate school ranking. If it were, I would say Brown and Dartmouth could maybe hope for a top 50 ranking.</p>

<p>Well, just because they are professional academics doesn’t mean they aren’t slanted toward selling their own brand and pumping their own alma mater or employer. They are human beings and subject to subjective opinions. PA is the only CURRENT methodology we have to evaluate faculty. I often tell students to examine faculty credentials when evaluating a program’s depth and strength at their schools. But that is also problematic, because the best credentials don’t mean diddly squat when it comes to teaching ability. How many times have you had (or heard anecdotal reports of) amazing professors whose credentials may be deemed (or sneered at by credentialists as) inferior? Many of my undergrad professors (living and dead) were simply amazing teachers… and it had nothing to do with whether or not they went to Harvard or Yale.</p>

<p>My point is that a cursory, preliminary (and admittedly superficial) way for prospective students to evaluate a school and its programs is by faculty credentials, but they need to ask around or meet some of these professors if they can, to determine who is a great teacher and who is not. Sometimes the hardest professor is the best professor, and too often kids give high marks on “ratemyprofessor.com” to ones who give easy “A’s” and poor marks to ones who rarely give out “A’s.” The NRC rankings need to be updated, to be certain.</p>

<p>If I could design a perfect world, I wouldn’t differentiate with minutiae. I would put schools in relative peer groups in alphabetical order and leave it at that.</p>

<p>Finally, as I stated before, this whole “branding of colleges,” as if going to the highest-ranking school you can find, just for its ranking, will instantly make you a success in life or a superior human being to everyone “beneath you,” is really a testament to one’s own character flaws, if you ask me.</p>

<p>There is no question that the super-elite colleges provide superb educational and career opportunities. But they are not the only path to success in life, as success is defined not just by money but by many other factors. The NYT published an article just last night online about kids from Wharton having to make new career choices because job offers are evaporating, even for those who interned at Goldman Sachs last summer. And frankly, from a sociological point of view, maybe that isn’t half bad: it makes people think about something other than investment banking and perhaps do something more beneficial for society than becoming greedmeisters.</p>

<p>One more point: in the USNWR rankings they state their methodology includes a 20-25 percent factor for retention: students returning. </p>

<p>Now we can argue about this until the cows come back from over yonder. What exactly does that really mean? How many kids don’t come back (at any school) because of financial problems? Ivy League kids have the advantage of generous financial aid for any family earning less than $120k (AGI) per year. Most second- and third-tier colleges can’t match that commitment. Further, some kids don’t come back because of academic stress or immaturity, which may or may not have anything to do with their SAT scores.</p>

<p>My point is that it’s easy to just say, “oh, they don’t return to that school, it must be a bad school.” But the truth may be something different.</p>

<p>Here is one college president’s comment on the uselessness of peer assessment: <a href="http://www.theatlantic.com/doc/200511/shunning-college-rankings">Is There Life After Rankings? - The Atlantic (November 2005)</a></p>

<p>When people make disrespectful remarks like this…</p>

<p>Ugh, just makes me sigh in anger that so many people think they have a clue about what goes on in an environment they’ve never experienced.</p>

<p>If you think that collaborative work environments and a lack of competition amongst peers is a bad thing, that’s fine, but that does not amount to hand holding. Focus on advising is not universal amongst these schools, and at Brown, a good advising experience is totally dependent upon a student utilizing connections they’ve made properly. We make it impossible to not forge a relationship with the people who can help you, but students still complain all the time because they didn’t take the extra step to then utilize that connection as a resource.</p>

<p>I have to laugh when people act as though surviving a course designed to weed students out is like earning a medal for war service. In the meantime, I learned the same amount of material, moved on to a graduate-level course on the subject, met a lifelong mentor, and learned how to work through complex material with groups of people. Congratulations: you couldn’t even understand your professor’s accent, the test material didn’t reflect the level you were taught at in class, and you slaved over the homework alone because your peers had no interest in hurting their own curve. I’ll stick with my learning environment, where, for me, a lot more was accomplished.</p>

<p>Some people need that competition to motivate them, so go somewhere else. But honestly, cut the **** that makes one sound better than the other, because quite clearly we can work that game from either perspective and it’s not really helpful.</p>

<p>Some people in this thread remind me of UCBChemEGrad coming into a Brown thread and, in three posts, going from questioning whether we even offer Sc.B.s in ChemE or engineering in general to saying, “The ChemE faculty at Brown actually looks pretty strong…” Great example of the utility of asking administrators to assess the level of scholars in hundreds of fields across hundreds of colleges.</p>

<p>The problem with peer assessment is that it cannot be done well, because no one really has enough information to figure this stuff out, no one uses the same criteria, etc. PA is just a score used to confirm prevailing wisdom, and for that reason the scores will rarely shift. It penalizes any school that operates in a unique way, and it lags significantly behind changes in faculty. (For instance, 40% of Brown’s faculty are new in the last 6 years. How is it possible that our PA has not gone up or down when we’ve replaced or added so many people? It is essentially impossible that the quality of our faculty stayed exactly the same, to the tenths place, through that much turnover.)</p>

<p>I’m not saying that prestige is not a factor that can and should come into play; I’m just saying there are some really big problems with using such a flawed measure as such a huge factor, let alone an even bigger one.</p>

<p>I have an interesting idea for resources: what about the percent of the yearly operating budget that comes from the endowment? This is a good measure of the stability of the institution, because using endowment earnings for a huge portion of your year-to-year budget makes your institution rather susceptible to market forces. For instance, Princeton has to borrow $1bn to keep running because almost half its yearly operating budget came from the endowment. Brown has a smaller endowment; we’re looking at 90 million in cuts over 5 years, while Princeton is looking at 82 million less next year alone. While the size of the endowment does speak to financial stability, especially for private schools (and with rising importance for publics as public support diminishes), this other measure, percentage of yearly budget, tells a different story about stability. Places like Brown, and I bet UCB and the other publics as well, are going to see fewer changes in their operations due to the financial crisis. Our budget cuts, 90 million over 5 years, are actually a reduction of our projected budget: we’re going to spend the same amount we did in 2008-2009 every year for the next five instead of increasing our spending. While we’ll lose some to inflation, really, we’re not talking about drastic changes, just the slowdown of planned improvements. Other places may actually see more substantial changes.</p>
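<p>To make the proposed metric concrete, here’s a minimal sketch, with entirely invented payout and budget figures, of how “percent of operating budget from endowment” would be computed and compared across schools:</p>

```python
# Hypothetical figures to illustrate the proposed stability metric:
# the share of the annual operating budget funded by endowment payout.
# All numbers are invented for the sake of the example.
schools = {
    #  name:                       (endowment payout $M, operating budget $M)
    "School A (endowment-heavy)": (650, 1400),
    "School B (tuition-heavy)":   (120, 800),
}

for name, (payout, budget) in schools.items():
    dependence = payout / budget
    print(f"{name}: {dependence:.0%} of budget funded by endowment")
```

<p>Under this metric, the endowment-heavy school (about 46% dependence in the invented numbers) is far more exposed to a market crash than the tuition-heavy one (about 15%), even if its endowment is much larger in absolute terms.</p>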

<p>Someone posted this site a while back: <a href="http://www.rankmyway.com/rank.php">Rank My Way</a>. It doesn’t have a ton of features, but it is just in a beta phase. There are some problems with the calculations too (e.g. it automatically takes the in-state tuition of public colleges, when I think either (a) the user should choose or (b) it should take the average of the two based on in-state/out-of-state ratios).</p>

<p>It was kind of cool to play around with, and applies to the current discussion.</p>

<p>vossron, that was a great quote! PAs can’t be very useful outside the sphere of colleges one is familiar with, at least not with any accuracy or significance. I think sports programs may have a significant and unfair influence on them too.</p>

<p>This may be an OK criterion for private colleges, but it would, imo, unfairly and negatively affect public schools. How much does Berkeley rely on its endowment?</p>

<p>The other comment I’d make is that tiers work a lot better than ordered lists.</p>

<p>For instance, on RML’s list, I would be hard-pressed to find a difference in quality within, say, 1-8, 9-18, 19-27, etc. (grabbing where I think the “borders” are, subjectively).</p>


<p>Hi. After doing some careful but not-too-extensive research on the web about Tulane, I have found that USNews has underrated this institution. I think Tulane deserves to be somewhere in the top 33 to top 45. Its current rank of 51st is an insult to the institution. But this is just my personal opinion.</p>

<p>I’m also surprised to not see Case Western.</p>

<p>Hey melody. I just reshuffled and rearranged the existing top 50 schools from USNews before grouping them. Tulane was ranked 51st last time, which is why it wasn’t in my list. But Case Western (abbreviated CWRU) is listed at rank #33.</p>

<p>I haven’t looked into USC yet, but I cannot rank it in the top 25, as I strongly think there are at least 25 schools out there that are superior to USC at the moment. I do know, however, that it has vastly improved over the last 10 years or so. If that upward trend continues, USC might crack the top 25 pretty soon.</p>

<p>Haha. Ok, ok… so I made a mistake. I corrected it.</p>

<p>One question though, modest. If peer assessment is supposed to be measuring distinguished academic programs, and indirectly faculty quality, why should it be dropped from the ranking protocol? All the other factors in the ranking measure student quality, class size, financial resources, etc., so why not have a factor that ranks faculty quality? Faculty is just as critical a component of a college or university as students and resources.</p>

<p>And if USNWR broke out individual department rankings for every academic offering, as it currently does for engineering and business, the aggregate results would look pretty close to the current PA score.</p>

<p>^^^^^Certainly UCB would look like it was way underranked if you did that!</p>