<p>The guy I know from Minnesota told me, I’m pretty sure after he visited both campuses, that he preferred the feel of South Bend, IN to that of Cambridge, MA. I have very close family ties to rural areas (my mother grew up in farm country, with the nearest neighbor a quarter-mile away). But I like urban environments, and having visited both of those towns myself, my preference would run the other way: I’d take Cambridge over South Bend.</p>
<p>There are lots and lots of Catholics at our high school, but over the last few years only four applied to Notre Dame and none got in. Fordham is probably the most popular of the well-known Catholic colleges that kids from our neck of the woods apply to. Villanova and Boston College also get quite a few applications, as do, at considerably lower rankings, the College of New Rochelle and Iona College.</p>
<blockquote><p><strong>tokenadult</strong> wrote: Well, if you have any disagreements with the revealed preferences study, why don’t you meet the authors and discuss how to improve it?</p></blockquote>
<p>The question asked of you (or anyone else impressed by the study) was what makes the RP ranking better than instantly available large-scale data, such as the ratio I defined above, or numbers extractable from the annual National Merit report. Glib referral to the authors sends a clear signal that the study is being trumpeted by people who don’t understand its content. We see in this thread, for example, people who actually think that a 75 percent victory rate for Harvard over MIT was a finding of the study as published in the New York Times, when in fact it is a complicated summary statistic with no direct interpretation in terms of cross-admit battles. </p>
<p>Since you were in touch with the authors, did you ask them for the data? Why isn’t it available, when any layman can immediately understand and interpret the “revealed preferences” directly from the cross-admit tables, something that isn’t true of the rankings?</p>
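<p>To make the “directly interpretable” point concrete, here is a minimal Python sketch of what reading a cross-admit table amounts to. The counts and matchups below are entirely hypothetical (the paper’s underlying table was not published); the only point is that a head-to-head percentage can be read straight off such a table.</p>
<pre>
# Minimal sketch: reading a cross-admit table directly.
# All counts are hypothetical and for illustration only; the underlying
# data of the revealed-preferences paper was not published.

# cross_admits[(A, B)] = number of students admitted to both A and B
#                        who enrolled at A
cross_admits = {
    ("Harvard", "MIT"): 60, ("MIT", "Harvard"): 40,
    ("Harvard", "Yale"): 65, ("Yale", "Harvard"): 35,
    ("Caltech", "MIT"): 25, ("MIT", "Caltech"): 20,
}

def head_to_head(a, b):
    """Share of common admits who chose a over b: a count anyone can read off."""
    wins, losses = cross_admits[(a, b)], cross_admits[(b, a)]
    return wins / (wins + losses)

for a, b in [("Harvard", "MIT"), ("Harvard", "Yale"), ("Caltech", "MIT")]:
    total = cross_admits[(a, b)] + cross_admits[(b, a)]
    print(f"{a} over {b}: {head_to_head(a, b):.0%} of {total} common admits")
</pre>
<p>A number produced by the paper’s fitted model, by contrast, cannot be read back as any such head-to-head result, which is exactly the confusion over the “75 percent” figure above.</p>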
<p>The study does not do any of the above – have you read it? It does not disclose the data it is based on, i.e., the table of cross-admit results. It does not address known methodological problems (instability) in its model caused by the lack of cross-admit data as one goes down the rank list. It does not indicate why it is better (be it less manipulable, easier to interpret, more informative, etc.) than simple metrics such as the one I suggested, or a simple tabulation of the cross-admit results. That’s for starters. </p>
<p>Sounds like bluffing. Is it too much to ask that before posting another dozen links to this study, the Harvard cheerleaders bother to read and understand what it says?</p>
<p>That’s right, though I think the special-interest schools like BYU, ND, BC get ranked high primarily because of indirect victories (they “beat the school that beat the school”, as in boxing rankings) and only secondarily because of self-selection per se. There is a huge anomaly when they rank Caltech second in the country based on between 7 and 12 matriculation victories; Caltech is a great school but nobody believes that it is generally preferred to Yale and Princeton or that it is more popular among engineers than MIT. </p>
<p>The false positive comes about because Caltech has the strongest applicants, whose other options are better than those of applicants to any other school. When they select Caltech, it is in preference to MIT, Harvard, and Stanford, which carries more weight than beating a no-name school, and the minority of battles that Caltech loses also count against it less, because those losses aren’t to low-ranked schools. The latter benefit is partly a matter of sample size: a larger number of Caltech battles might have revealed losses to Carnegie Mellon, RPI, Georgia Tech, Cooper Union, and full-tuition honors programs at low-ranked state schools.</p>
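<p>As a rough illustration of the mechanism, and nothing more: below is a plain Bradley-Terry-style fit on invented matchup counts (it is not the paper’s actual model, which is Bayesian and adjusts for student characteristics). It shows how a school with only a handful of matriculation battles, all of them against top opponents, can come out on top of the inferred ranking even though nobody would read the raw table that way.</p>
<pre>
# Rough illustration of the "beat the school that beat the school" effect.
# This is NOT the paper's model (which is Bayesian and adjusts for student
# characteristics); it is a plain Bradley-Terry fit on invented matchup
# counts, showing how a school with only a few battles, all against top
# opponents, can land at the top of the inferred ranking.

# wins[(A, B)] = hypothetical number of shared admits choosing A over B
wins = {
    ("Harvard", "Yale"): 60,   ("Yale", "Harvard"): 40,
    ("Harvard", "StateU"): 95, ("StateU", "Harvard"): 5,
    ("Yale", "StateU"): 90,    ("StateU", "Yale"): 10,
    ("Caltech", "Harvard"): 4, ("Harvard", "Caltech"): 3,  # tiny sample,
    ("Caltech", "Yale"): 3,    ("Yale", "Caltech"): 2,     # top opponents only
}

schools = sorted({s for pair in wins for s in pair})
strength = {s: 1.0 for s in schools}

# Standard minorization-maximization updates for Bradley-Terry strengths.
for _ in range(500):
    new = {}
    for i in schools:
        total_wins = sum(w for (a, b), w in wins.items() if a == i)
        denom = 0.0
        for j in schools:
            if j == i:
                continue
            n_ij = wins.get((i, j), 0) + wins.get((j, i), 0)
            if n_ij:
                denom += n_ij / (strength[i] + strength[j])
        new[i] = total_wins / denom if denom else strength[i]
    total = sum(new.values())
    strength = {s: v / total for s, v in new.items()}

for s in sorted(schools, key=strength.get, reverse=True):
    print(f"{s:8s} {strength[s]:.3f}")
</pre>
<p>In this toy output, “Caltech” ranks first on the strength of a 7-5 record against only the two strongest schools, while the schools with hundreds of battles, including battles against weak opponents, sit below it; that is the indirect-victory and small-sample effect in miniature.</p>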
<p>The Top American Research Universities according to The Center for Measuring University Performance. </p>
<p>The Center for Measuring University Performance was founded by its co-editor John Lombardi, Chancellor of the University of Massachusetts Amherst.</p>
<p>It’s interesting how many approaches there are to this “who’s number one” question, which has resulted in lively participation in this thread. I look at this issue mostly through the lens of the young people I know through a statewide parents association, and consider what colleges those young people might find most suitable. There certainly is diversity of opinion on the issue of how colleges differ from one another. </p>
<p>Do you know, off-hand, of any journals that publish articles about these issues? What learned back-and-forth have you seen about the many issues raised in this thread in formal scholarly publications?</p>
<p>For those so inclined, here is a link to the abstract “Measuring Quality: A Comparison of U.S. News Rankings and NSSE Benchmarks.” There are almost four pages of cited references, which include, of course, works by Pascarella and Kuh.</p>
<p>There is no dearth of material on this subject. From the journal Higher Education in Europe (Vol. XXVII, No. 4, 2002), “Some Guidelines for Academic Quality Rankings” by Marguerite Clarke is also worth perusing.</p>
<p>Thanks for all of the great links! Here’s one more. It lists a lot of resources on the assessment and outcome-based measurement side of the ledger. </p>
<p>Because there is a lot of additional information in the application patterns that would correct the picture given by matriculation battles alone. For one thing, the anomalous advantage to niche schools would be diluted or would disappear.</p>
<blockquote><p><strong>marite</strong> wrote: Are the authors claiming that revealed preferences are a superior way of ranking colleges?</p></blockquote>
<p>Yes. They claim that their method is a superior way of producing the preference rating used as an input to USNews-style rankings. They say that use of their method or one like it would alleviate pressure on colleges to manipulate rankings. They also claim (despite a disclaimer at the end of the paper) that such rankings reflect market-aggregated information about the actual quality of schools, not just a popularity contest. At several points in the article they “sell” their approach as an improvement over the status quo.</p>
<p>The empirical part of the study is a gold mine of data that I hope they will publish. The graphs of admission rate by SAT score are priceless. </p>
<p>You may be thinking of comments made on CC about the study, because no such qualifications are made in the paper itself. It would destroy the rationale for their method if it were valid mainly for the handful of need-based schools of similar reputation. In fact, their method relies on indirect comparisons of dissimilar schools (chains of “A beats B, who beats C”) precisely because there is a lack of direct cross-admit data for most pairs of schools.</p>
<p>A more than interesting update: according to Inside Higher Ed, the numbers are growing. Apart from Moravian, other colleges signing on include the College of the Southwest; Colorado, Eckerd, McDaniel, Northwestern (Minn.), Philander Smith, Shimer, Unity, and Washington & Jefferson Colleges; and Denison, Furman, Missouri Baptist, Naropa, and Ohio Wesleyan Universities. </p>
<p>As I wrote in several previous posts, it is also significant that at this point there is nothing close to a consensus among those university officials who haven’t joined the boycott yet. Many are clearly sympathetic to the protest against the PA survey but are opting to take a wait-and-see position: these IHEs want to see viable alternative measures developed, giving students and parents a replacement for the rankings, before they make a move against USNWR. Pressure is mounting nonetheless as the boycott movement makes headway, and there is also the question of just how many colleges will decide to sign on in the wake of the upcoming Annapolis meeting.</p>
<p>Even if you’ve already read the article linked in the post above, it’s worth a return trip, as there have been forthright comments added by Ed Hershey (one assumes the Ed Hershey who has been the PR guy at Cornell, Colby, and Reed) and Chris Nelson, the president of St. John’s College in Maryland.</p>
<p>With all due respect to Philander Smith College, could someone explain the extent of the “unfairness” of USNews towards this Arkansas school? Isn’t this school helped by merely being listed in the first place, even if it is in the category of “least selective schools?”</p>
<p>How does US News truly interfere with the mission of P. Smith and its president? Does USNews influence the 80% admission rate, the average ACT score of 15, and the 4% four-year graduation rate? </p>
<p>Forgive me for being cynical, but the joining of P. Smith is not significant and hardly relevant. It is about one thing: the same issue that compelled schools such as Reed and Drew to join the battle, and that is called FREE PUBLICITY.</p>
<p>I agree with xiggi about the free publicity. I had never heard of Reed before finding CC. It’s really not on the radar in my area, despite what is by all accounts a fine reputation. I only know of Drew because it’s 20 miles from my home & is known as a good school where merit aid is awarded. It certainly has no national name. And no disrespect intended, but how many people have heard of Philander Smith?</p>