Why are international college rankings drastically different from the US News rankings?


<p>The quasi-scientific basis for a data model is in verifying that its features and weights give approximately the same results as a “Ground Truth” ranking (which might in fact be something like the Peer Assessment scores in this case). The data modeler doesn’t make normative judgements about whether “alumni giving” really, truly has a “5%” contribution to college quality. He makes a hypothesis that this feature might improve the model; he keeps it in the model if it does turn out to improve the results; he assigns it the weighting that gives the biggest improvement. The hypothesis might come from the modeler’s own intuitions about features related to college quality. However, weighting them is (or ought to be) an experimental process.</p>
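<p>To make the fitting idea concrete, here is a toy sketch of that process. Everything in it is made up for illustration (the schools, the feature values, the features themselves), and it is emphatically not USNWR’s actual method: it just shows what “assign the weighting that gives the biggest improvement against a ground-truth ranking” looks like as code, using a grid search over weights and Spearman rank correlation as the agreement measure.</p>

```python
# Illustrative sketch (NOT USNWR's actual method): choose feature weights so
# that a composite score best reproduces a reference ("ground truth") ranking.
# All schools, features, and numbers below are hypothetical.
from itertools import product

# Hypothetical normalized feature scores (0-1) per school.
features = {
    "School A": {"selectivity": 0.95, "grad_rate": 0.97, "giving": 0.80},
    "School B": {"selectivity": 0.90, "grad_rate": 0.93, "giving": 0.60},
    "School C": {"selectivity": 0.70, "grad_rate": 0.85, "giving": 0.90},
    "School D": {"selectivity": 0.60, "grad_rate": 0.75, "giving": 0.40},
}
# The ranking we have decided in advance is "right" (e.g. peer assessment).
ground_truth = ["School A", "School B", "School C", "School D"]

def ranking(weights):
    """Rank schools by the weighted sum of their feature scores."""
    score = lambda s: sum(w * features[s][f] for f, w in weights.items())
    return sorted(features, key=score, reverse=True)

def agreement(order):
    """Spearman rank correlation between a ranking and the ground truth."""
    n = len(ground_truth)
    d2 = sum((ground_truth.index(s) - order.index(s)) ** 2 for s in order)
    return 1 - 6 * d2 / (n * (n * n - 1))

# Grid-search all weight combinations (in steps of 0.1) that sum to 1,
# keeping whichever best reproduces the predetermined ranking.
best = max(
    ({"selectivity": a / 10, "grad_rate": b / 10, "giving": (10 - a - b) / 10}
     for a, b in product(range(11), repeat=2) if a + b <= 10),
    key=lambda w: agreement(ranking(w)),
)
print("best weights:", best)
print("agreement with ground truth:", agreement(ranking(best)))
```

Note how this makes the circularity complained about below explicit: the weights are selected precisely because they reproduce the ranking we started from, so the "top" schools come out on top by construction.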

<p>(USNWR doesn’t publish detailed descriptions of their process, other than a description of criteria and weights. So I’m surmising that it must be similar to other modeling processes.)</p>

<p>Which is all a fancy way of saying “We know what the result should be, so we will rig the things we choose to use as data and their weighting to get the result we think is right”. Brilliant!! I always thought modeling meant making things fit to achieve an objective observation, like something in nature. To pick subjective factors and give them subjective weights to match what was a subjective observation (which in a case like this is all “ground truth” is) seems like the height of lunacy. I mean if Harvard, Yale, Princeton, and 3 or 4 others are predetermined to be the top schools and the model then forces them to be the top schools, how will they ever not be the top schools? How useful is a result like that?</p>


<p>Yes, you could view it that way. </p>


<p>Once you develop the model, there is no assurance that the results will continue to be the same (unless you are continually putting a thumb on the scale). In my opinion, the usefulness isn’t in repeatedly re-assessing where Harvard, Yale, and Princeton stand relative to one another. The usefulness is in applying the model to a couple hundred other colleges.</p>

<p>Let’s say we have near-consensus on what the set of top N colleges are (if not the exact order). In fact, many different measurements (not just the ones US News uses) do point to approximately the same set. Then we want to know how close all the other colleges are in quality to those top N schools, based on measurements of the features identified in the model (admission selectivity, graduation rates, class size, etc.) … since not every student can be admitted to the “top” schools.</p>

<p>And there is a key flaw. When you can convince me that you can use one model to compare Penn State University to Yeshiva University and have the result actually address anything meaningful about which is “better” than the other, then I will start listening again. The entire enterprise starts from a flawed premise.</p>

<p>But another major flaw is reputation itself. Barring major scandal or something similar, reputations often long outlive the reality. By placing such heavy weight on peer assessment (was that the way they got the model to fit the “ground truth”?) it nearly ensures those schools never move out of the top spots, and being in the top spots ensures their peer assessment stays high, because USNWR told us it was so. Terrible, terrible methodology.</p>

<p>When we add the top ten from both lists, we wind up with “top fifteen” (after the overlap) or so. Can we agree that those top fifteen are all very good universities?
The rest is petty quibbling. Actually, the “top fifty” are very good universities, but different departments shine more brightly in each.
And what is often ignored is that a good school is as good as a student makes it. You can major in beer at Harvard, too. But Harvard is on the whole a superb university.</p>

<p>There is little doubt that the difference in the rankings reflects the emphasis on academics (world rankings) vs. the inclusion of sports and a few other factors (US rankings), along with the inclusion of universities outside the U.S.A.
But none of this is really <em>that</em> important, except for the administrators at the schools.</p>

<p>@205mom - I agree with your sentiment, and I truly wish that the rankings were as petty and unimportant as you say. But when you have seen as many cases as I have of parents and students chasing prestige largely based on the USNWR rankings, you might think differently. Sure, there were always people that thought Ivy or nothing, but the rankings have made this far, far worse. Besides, there is some merit in opposing something on principle alone, even if it is quixotic.</p>

<p>Of course these are all great schools. I think the most interesting point you made is that so are the top 50, or I would say maybe even 75. Although there are a few schools in there whose students are not much better than average when you look at test scores. But I digress. The tragedy is that due to the rankings, far too many people think that anything below 15 is just a podunk school not worthy of even considering. That is sad.</p>

<p>@fallenchemist it is very sad-but really don’t you think this is pretty limited? Maybe the top 20 schools would be worthy? ;)</p>

<p>@Pepper03 - Even with the wink, I am not sure what you mean, lol. Worthy of what?</p>

<p>@fallenchemist it was a response to your observation that the tragedy is these rankings render schools not in the top 15 worthless.</p>

<p>Ah, I see. Yes, 15 was arbitrary. I have actually seen cases where top 10 was all that was acceptable to the parents, to the point where they wouldn’t pay for anything else. I have also seen cases where parents pressured their kid to attend a top 25 and laid on the potential guilt pretty thick if they chose otherwise, but would accept another choice. So it really varies a lot. I kind of took the middle ground of what I usually have seen to make a point.</p>

<p>And to be clear, the significant majority of parents are not this way, and even the students with the very top stats that clearly have a chance at the Ivies are not usually so narrow in their outlook. But as we have all seen, it happens enough to be alarming, or at least disturbing, and it really ramped up there for a while. I think, maybe, it is leveling off and there might even be a bit of a backlash against the rankings to some degree. Very hard to say, because if that is happening it is a small movement. But I think there can be no question that the situation pre-ranking (1982 I think?) and post-ranking is dramatically different. Once you give people something to latch onto that allows them to validate their sense of self-worth and bragging rights, there is no stopping it. Human nature for a sizable minority.</p>

<p>There really is no way to rank the quality of undergraduate education objectively if you are going to use multiple factors, because somebody has to decide how to weight those factors. You could use a single factor–and it’s my impression that if you used SAT scores of matriculating students, you’d get something close enough to USNWR’s list that the same schools would be in the same ranges, especially at the upper part of the list. Personally, I think quality of student body is the best single measure, because it essentially shows you how the relevant market is evaluating the quality of the product. Since the international lists don’t use this factor at all, I think they are pretty unhelpful in evaluating universities in terms of undergraduate education.</p>

<p>By the way, I always like to take the opportunity to tout the most scientific rating system of them all: <a href="http://talk.collegeconfidential.com/college-search-selection/978040-ranking-colleges-by-prestigiosity-p1.html">Ranking Colleges by Prestigiosity - College Search & Selection - College Confidential Forums</a></p>

<p>Personally, I think that graduate outcomes are the best measure, regardless of incoming student quality, since, if I spend money, I care about the opportunities I am getting. Of course, outcomes depend on your goal, so industry and geography specific rankings and tiers make sense. Trying to rank CalTech along with Yeshiva or even Brandeis makes little sense.</p>

<p>Also, if you go blindly by rankings, you could even go down a path that makes it <em>harder</em> to reach your goals. For instance, a kid who goes solely by rankings would choose WashU’s Olin over IU’s Kelley (better overall rankings and better undergrad b-school by BusinessWeek rankings). However, if his goal was becoming an IBanker, IU would have been the better choice. WashU places very few kids at BB IBanks (and those kids who get in essentially have to do it by themselves). IU has a stellar IB workshop which places virtually every kid in that program into a bulge bracket IB.</p>

<p>PurpleTitan, you’re certainly right. I think quality of student body is a sensible way to compare similar programs, however.</p>

<p>@Hunt @PurpleTitan - Very valid points, IMO, and there are several others like them that just go to show there really is no ranking that is meaningful unless you want to focus on a single factor, and even then things can be misleading. But since the vast majority of students are trying to balance multiple factors for their own personal situation, it is actually fit, which is a subjective judgment, that would seem to be the best guiding principle. Hence no ranking system could possibly be meaningful.</p>

<p>Because when you get right down to it, we are trying to say (well, they are trying to say) there is some absolute idea of “best” out there, when this is really about trying to find the best college for an individual. In other words, what difference does it make if Harvard or Princeton are ranked as “best” if a student isn’t even qualified to get into those schools? And even if the student is qualified, it still doesn’t mean that any one of those schools ranked in the top 10 or whatever is best for that student. That is another issue with the entire notion of ranking: it perverts the approach many take to finding the right school for them.</p>

<p>To your point of using SAT/ACT scores, @Hunt, I am sure you are right that among the top few schools it would mirror the rankings. But it does fall apart, possibly, as you go down the list a little further. I know because the school I follow most closely, Tulane, would be 15 or so spaces higher if that were the only criterion. It easily passes several UC schools other than UCLA and Berkeley, and it is slightly better than Miami, which is ranked a few spots ahead. In fact, just looking at SAT and ACT, Tulane has slightly better scores than UNC-CH, which is ranked #30 compared to Tulane at #52. You get the idea. What to make of all that, I can’t say, except that it might show how subjective the more heavily weighted factors are. Now to be fair, I know that to some extent Tulane still has factors like 6-year graduation rates that are only now wringing the Katrina effect out of the stats, because of the methodology USNWR uses. There are other aspects of that I could get into, but that is too far off track.</p>

<p>As far as outcome based, that is pretty delicate sometimes as well. Again, Tulane would probably suffer unfairly if that wasn’t done just so, because Tulane has, as you all probably know, put a very heavy emphasis on community and service learning into its overall philosophy and right into its graduation requirements, possibly still being the only research institution to do so. Because of that, Tulane has one of the highest rates of students that join organizations such as the Peace Corps, Teach for America, and similar organizations right out of school. It would be easy for someone like USNWR or Forbes or Fortune to try and do an outcome based ranking and completely overlook that kind of issue. Just saying. Interesting that it is Tulane that is probably the biggest outlier for both of your suggestions, and in both cases it largely goes back to Katrina. So sure, you could say that just because one school is an exception doesn’t mean it won’t work for everyone else. And it might, but who knows. If there is that exception there could be many more.</p>

<p>But like I said, why rank at all? If students just looked at the factors that were important to them and came to a conclusion as to which school was best for them, not being prejudiced by the rankings from the start, that would be the best of all, to my way of thinking. I just don’t think the rankings serve any useful purpose, and certainly not enough of one to outweigh the harm they have done.</p>


<p>In an ideal world, everyone doing that on their own would be best, but it’s a lot more difficult to do than you seem to realize. Where would they get such information? Would it be okay if someone just published all the relevant stats, but didn’t rank? </p>

<p>You seem to think that everyone else is a fool for looking at the rankings and mindlessly pursuing what everyone was pursuing in the first place. Consumers of anything aren’t irrational, they are simply using the lists to efficiently pare down what is otherwise an overwhelming array of choices. Those who wish to find diamonds off the list are welcome to do so, and maybe they’ll get a great bargain in the process, but either is a valid choice - time is a cost as well, and it is in short supply.</p>

<p>Let’s look at it this way. Suppose you wanted to maximize your enjoyment of your leisure time by watching a television show. You could watch a random show out of thousands that are available, and the chances that you’d have a good time are pretty low, as most shows aren’t going to be enjoyable to the random person. Or, you could ask a friend whose taste in television shows you share if they have any recommendations. Your chances are now much higher. But your friend only has so much time as well and doesn’t see all the shows, so perhaps you ask a professional, whose job it is to look at lots of shows, and see what they recommend. If they’ve recommended something good in the past, you will tend to trust them for future recommendations. Unfortunately, college tends to be a one-shot deal, so you have no past history to go on. You have to go with everyone else saying they had a good experience with Reviewer X, and since you trust your friends and family, you go with Reviewer X over picking a random subset of 3000 colleges. That’s all US News and the other rankers are doing - taking their collective wisdom and publishing it. People obviously find it valuable. You are free to ignore it if you wish. But don’t think that those who follow the advice are being irrational - ignoring it may be the more irrational path.</p>

<p>I think the smartest consumers use the data USNews collects without focusing too much on the overall ratings.</p>

<p>^Exactly! I know that once we found one interesting college, we used US News and other resources to track down similar colleges. Since we were looking at a region and colleges we were completely unfamiliar with, the lists and reviews proved invaluable as opposed to sorting through innumerable unsuitable choices. Did we miss one or two? I’m sure we did, but that’s not the point - we got to 95% satisfaction in a relatively small amount of time, but even then we had to go back, revise, and reevaluate after college visits and test scores came in. To this day, less than two months before D leaves, I occasionally question her choice - not because I don’t believe the school is great, but maybe there was something equally good that she could have gotten a full ride at. But then I realize I’m being rather irrational and go back to being happy with the pick. At some point you just have to be done.</p>

<p>Just looking at overall rankings without doing research on your own <em>is</em> irrational. You have Fiske and Barron’s for reviews of colleges. You have this site. You have Wall Street Oasis for Street jobs. You can find a list of where start-up founders went to school. You can find out which schools are best at producing PhDs. You can look at the inputs into the rankings. You have forums for almost every profession out there now. Obviously, you know where each school is located. You can visit colleges. You can look at placement reports of undergrad b-schools. Obviously you know where you qualify for in-state tuition, and this site plus the NYTimes list of who offers merit aid gives you a pretty good idea of how much you have to pay for certain schools.</p>

<p>If you have some ideas on potential future careers and locations and do some legwork, you can easily come up with a list that suits you without caring about overall rankings at all.</p>


<p>True. Several single factors all point to approximately the same set of top ~50 schools.<br>
Top 500 Ranked Colleges - Highest SAT 75th Percentile Scores<br>
<a href="http://www.reachhighscholars.org/college_endowments.html">College Endowments</a><br>
<a href="http://www.parchment.com/c/college/college-rankings.php?page=1&perPage=25&thisYear=2013">Parchment college rankings</a><br>
<a href="http://talk.collegeconfidential.com/college-search-selection/708190-avg-class-size-4.html">Average class size</a><br>
<a href="http://www.usnews.com/education/best-colleges/paying-for-college/articles/2013/09/18/colleges-that-claim-to-meet-full-financial-need-2014">Colleges that claim to meet full financial need</a></p>

<p>The Forbes rankings (which emphasize outcomes) and the USNWR rankings both point to approximately the same set of top schools. </p>

<p>In other words, these measurements are mutually corroborating. When different measurements all arrive at similar conclusions, the burden of proof falls on the skeptic to show that some common factor is confounding all the results. If I were playing devil’s advocate, I’d point to “money”. </p>

<p>However, the outliers are interesting. For example, WashU and JHU, two of the three elite privates that (as I have noted) do not seem to give their grads much of a push in those few professions where brand seems to matter most, underperform in Forbes compared to USN (Emory is the other one, but they were excluded by Forbes because they lied).</p>