<p>Well, the results ARE a popularity contest. The authors make no claim as to the quality of education at any of the schools; they only study the PREFERENCES of students who have been admitted to more than one school. The reasons for choosing are not part of the study. It may well be that some students are deterred from applying to or matriculating at Chicago because of its reputation; but the same reputation may be taken as a sign of educational rigor. The authors are agnostic on this point.</p>
<p>I understand that, but I point this out as a weakness of the “student preference” method, which the authors suggest could be an alternative to the USNews rankings and one that is not so subject to manipulation. It strikes me that if this method were to become the preferred ranking system, it would only mean that new types of manipulation would occur: colleges would increase the number of “perks,” not necessarily related to academics, that make a college attractive to students - better housing, better food, more social events, etc. Not that this is a bad thing - somewhere earlier in this thread someone criticized “benchmarking” as leading to higher costs. Heaven forbid that a college should operate like a business, see what the best competitors are doing, and try to meet the competition!</p>
<p>It also strikes me that USNews leaves itself room for “subjective” ratings for a couple of reasons: first, to prevent embarrassing results that run contrary to common-sense judgments like ranking U. Chicago above Brigham Young; and second, so that they can move schools up and down a few spots every year to give the impression of a “horse race.” If they put the same schools in the same order every year for 30 years with only rare changes, it would get boring to readers and they might not buy this year’s edition.</p>
<p>I was struck by the difference that the contrasting presentations of the data make between the alphabetical NY Times table (post 237) and the original, rank-ordered revealed preferences study (post 239). The alphabetical table encourages users to ferret out useful information, much like the alphabetized lists in tiers 3 and 4 of the USNews rankings. The rank-ordered revealed preferences table encourages users to see winners/losers, better/worse, #1 vs. #4. (One gets the false impression, for example, that any school that ranks above another has a higher “win rate” with cross-admits than any school below it.)</p>
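<p>A toy calculation makes that parenthetical point concrete. Below is a minimal sketch (Python, with entirely invented cross-admit counts for three hypothetical schools A, B, and C; the actual study fits a richer statistical model, but it belongs to the same family of paired-comparison models) showing that a Bradley-Terry-style rank order can place one school above another that beats it head-to-head:</p>

```python
# A minimal sketch with made-up cross-admit counts for hypothetical
# schools A, B, C. Plain Bradley-Terry stands in for the study's model.

import itertools

# wins[(x, y)] = number of cross-admits who chose x over y (invented data)
wins = {
    ("A", "B"): 45, ("B", "A"): 55,   # B edges A head-to-head
    ("A", "C"): 90, ("C", "A"): 10,   # A crushes C
    ("B", "C"): 55, ("C", "B"): 45,   # B edges C
}
schools = ["A", "B", "C"]

# Fit Bradley-Terry strengths by iterative scaling (Zermelo's algorithm):
# each strength becomes (total wins) divided by the sum, over opponents,
# of (matchups against that opponent) / (own strength + opponent strength).
strength = {s: 1.0 for s in schools}
for _ in range(500):
    for s in schools:
        total_wins = sum(wins[(s, t)] for t in schools if t != s)
        denom = sum((wins[(s, t)] + wins[(t, s)]) / (strength[s] + strength[t])
                    for t in schools if t != s)
        strength[s] = total_wins / denom
    scale = sum(strength.values())          # normalize to avoid drift
    strength = {s: v / scale for s, v in strength.items()}

print("Model rank order:", sorted(schools, key=strength.get, reverse=True))
for x, y in itertools.combinations(schools, 2):
    n = wins[(x, y)] + wins[(y, x)]
    print(f"{x} vs {y}: {x} wins {wins[(x, y)] / n:.0%} of cross-admits")
```

<p>Under these made-up counts, the fitted rank order is A, B, C even though B wins the A-B matchup 55/45; A’s lopsided record against C pulls its fitted strength above B’s. So a published rank order and the pairwise win rates really can disagree.</p>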
<p>I suspect that many of the presidents would gladly live with the methodological quirks of some of the metrics in the USNews rankings in exchange for free advertising and the large amount of accurate information that USNews makes available–if only USNews didn’t insist on crunching it all down into one ranking number that influences alumni, trustees, parents, and prospective students so strongly.</p>
<p>Are the authors claiming that revealed preferences are a superior way of ranking colleges or a more reliable guide to choosing colleges? That was not the impression I got.
The one piece of information that stuck in my mind was the alleged strategy of Princeton of lowering the rate of admission of students scoring between the 94th and 98th percentiles on the SAT.
The authors also acknowledged the limitations of their methodology: the schools had to be of similar reputation and all need-based, so that there was more likelihood of students comparing apples to apples.</p>
<p>Exactly right: they propose that their method, if carried out on a large scale, could be a more sophisticated replacement metric for just one or two of USNews’s categories, specifically admissions rates and yield (now no longer used), which they consider “crude.” In fact, their concluding sentence says that the revealed preferences do not necessarily correspond to educational quality.</p>
<p>But the way they present their data, complete with a column titled “rank order,” and their rhetoric about providing an objective measure of desirability don’t exactly encourage users to read the data in that nuanced way. And somehow the fact that this was a small-scale study, designed to show the feasibility of employing the same methodology with a much larger sample, gets lost in the shuffle, too.</p>
<p>Well, that’s the readers’ problem, non?<br>
It never ceases to amaze me that people blame others for their own failures, whether it’s colleges blaming USN&WR for their failure to live up to their own missions.</p>
<p>The revealed preference methodology always seems to me to be, in large part, a reflection of the USNews rankings. I think that especially at the top-tier schools (because they are more likely to be high-profile institutions), high school kids and their parents are more inclined to choose the higher-ranked school over a lower-ranked school (assuming other factors, e.g. financial aid, aren’t an issue) for no better reason than that “it wouldn’t be a mistake to go to the higher-ranked school over the lower-ranked school.”</p>
<p>I somehow doubt that people choose Harvard over Princeton or Yale because it’s number one. In fact, it is not always number one. Princeton often takes that spot.
Now, if it were choosing Harvard over Duke, that might be a different story. But Harvard over Princeton or Yale because of USN&WR? Don’t buy it.
And if they choose Harvard because it’s number one, then it’s their problem, not the fault of USN&WR or the Revealed Preference study. Are we a nation of lemmings?</p>
<p>Actually, I’ve learned a lot from reading USN&WR. Among the top 50 universities and LACs were many I’d never heard of before. It made me realize how limited my knowledge of higher education was: based on word of mouth and heavily skewed toward the Northeast.</p>
<p>Odessey, I think this is going too far - you make it sound like USNWR created the whole concept of preference and that there is no objective reality, or even a perception of reality, independent of their rankings. They have certainly had an influence in that area, but if USNWR were to cease publication and nobody took their place, 100 years from now people would still prefer Harvard to Brigham Young the way they did 100 years before today (putting aside the fact that for some (Mormon) kids BYU is the better “fit”). If “revealed preference” ever got going beyond this one small study, to the point where the database was large enough to be trustworthy and there were year-over-year rankings, I think that (unlike USNWR, where schools jump around several ranks each year) you’d find the rank order, especially near the top, to be quite consistent over decades - a “top five” list from 30 years ago would be little different from a current list. Lower down, I think the list breaks down, because even now there is less than perfect information available to college shoppers, especially for the less famous schools. Any person on the street can name HYP and maybe throw in MIT and Stanford (they might miss Caltech), but ask them to compare Wellesley with Wesleyan and they’d come up blank most of the time.</p>
<p>Percy Skivins wrote:</p>
<p>Lesser, as in <em>size</em>, in scale of operations? No. Less of a fit for some, sure.</p>
<p>I agree with Percy’s last comment. Since I no longer live in the NE, most of the adults I know have little feel for colleges past the top 20. They do know that Wellesley is a girls’ school, though. No one at S’s HS had heard of Caltech, not even his principal. </p>
<p>As long term CC readers may recall, S decided to apply as a junior, 2 days before winter vacation. Had we had time to research, his application list would have been quite different. The reports would give a starting point.</p>
<p>Yes, the authors of the revealed preference study end their paper with the words, “We close by reminding readers that measures of revealed preference are just that: measures of desirability based on students and families making college choices. They do not necessarily correspond to educational quality.” </p>
<p>So what the study possibly reflects is more likely to be perceived “fit” than necessarily “objective” educational quality. But, as I wrote a few pages ago in this same thread, a high school student who is actually admitted to more than one college is highly motivated to find out which college is most fitting, that year for that student, and aggregating those kinds of individual choices seems to me to be helpful–even if not conclusive–information. </p>
<p>There is a saying, “If wishes were horses, then beggars would ride.” What I particularly like about the revealed preferences approach is that each matriculation tournament that enters into the overall calculations involves only students who were actually admitted to each college in question. Some students “skip over” higher-ranked colleges to apply to a particular college, and colleges don’t admit applicants on a consistent rank-order basis as against one another either, but in large part this approach compares the choices of students who are in a position to choose among colleges, not students who were never admitted, who respond primarily to vague reputational statements about a college and express only an uninformed opinion. The student has to choose which colleges to apply to, fill out the applications, wait for results reflecting the colleges’ choices among many applicants, and then is highly motivated to gather additional information to make a final matriculation decision that avoids opportunity cost. That’s not the last word about which college fits any of my own kids (I can already imagine, for instance, my oldest son applying to the U of Chicago and not to several of the colleges with a higher revealed preference ranking), but it is helpful information. The study systematically oversampled young people aspiring to the most selective colleges in the country. A broader data set would reveal more information about region-specific preferences and preferences among somewhat less selective colleges (where the distinctions are probably not as sharp as among the most selective colleges).</p>
<p>I know a young man who last year turned down an offer of admission to Harvard to attend Notre Dame. Several other young people I know turned down Harvard for MIT. It’s still a free country, and students can still make their own choices among the colleges that admit them. </p>
<p>The College Board state reports on SAT I and SAT II testing include a table “Institutions That Received the Most SAT Program Score Reports from Your Students.” So we can get a reality check on regional preference patterns (in terms of where students apply). The report from Minnesota </p>
<p>[Link: College Board - SAT, AP, College Search and Admission Tools]</p>
<p>(Table 28) </p>
<p>includes some different colleges from those found in the report from Texas </p>
<p>[Link: College Board - SAT, AP, College Search and Admission Tools]</p>
<p>(Table 28) </p>
<p>but a few show up as colleges with a national draw, at least in attracting score reports from students out of region.</p>
<p>Johnwesley - I meant “lesser” in terms of overall “educational quality.” “Fit” may be more important than “quality,” but quality, even if it’s hard to measure, still exists and is important. How do I know that U. Chicago is of higher overall “quality” than BYU? It’s like what the Supreme Court justice said about the difference between “art” and “pornography”: I can’t put it into words, but I know it when I see it. So a ranking that puts BYU ahead of U. Chicago is “wrong” in terms of quality - and I’m not even sure it is right in terms of popularity, given the sample size of the study.</p>
<p>Tokenadult - there will always be people turning down “higher” ranked schools for reasons of “fit” or whatever. Even Harvard does not win 100% of the time against every single school - 1% of applicants choose Tufts over H. And maybe YOU are that one, or should be. Personally, I can say that I liked the “vibe” on the Tufts campus better than H’s (though if it had come down to actual acceptances, I’m not sure I would have had the nerve to turn down H). But the fact that the preference is 99/1, and not 1/99 or even 60/40, does tell you something. Given the limitations of the study, I would be suspicious about small differences - for example, they give Penn-Dartmouth as 46/54. Shifting this number, given the sample size, might involve only one or two cross-admits, and even if the number were statistically significant, what would it mean, other than that the schools are roughly matched? At that point you would really have to rely on personal fit and not be swayed by others’ preferences. But in the cases where there is a major gap, you really have to think twice if your preferences are so different from those of your peers - do they know something you don’t know? You could buy a really cheap, shoddy coat that “fits” you really well, and it will always be a cheap, shoddy coat; or you could buy one that did not “fit” you perfectly but was of a recognizable brand name and impeccable quality - in the latter case you could always have the coat tailored somewhat (or gain/lose weight), and others who saw you might respect the label and not care about the fit.</p>
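<p>To put a rough number on the “small differences” worry: here is a minimal sketch (Python) of an exact two-sided binomial test. The per-pair cross-admit count is my assumption, since the study’s table doesn’t report one; suppose only 50 students held both Penn and Dartmouth admits.</p>

```python
# A minimal sketch of the sample-size point. The cross-admit count (50)
# is a hypothetical; the paper does not report a per-pair figure.

from math import comb

def binom_two_sided_p(n, k, p=0.5):
    """Exact two-sided binomial test of H0: the choice is a 50/50 coin flip."""
    mean = n * p
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n + 1)
               if abs(i - mean) >= abs(k - mean))

n = 50                # hypothetical number of Penn-Dartmouth cross-admits
k = round(0.46 * n)   # 23 chose Penn, 27 chose Dartmouth (the 46/54 split)
print(f"{k}/{n} chose Penn; p-value vs. a coin flip: {binom_two_sided_p(n, k):.2f}")
# -> roughly 0.67: on 50 cross-admits, a 46/54 split is statistically
#    indistinguishable from 50/50, and moving two students makes it even.
```

<p>So under that assumed sample size, the Penn-Dartmouth gap carries essentially no information beyond “roughly matched,” which is exactly the point about relying on personal fit for close calls.</p>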
<p>“Several other young people I know turned down Harvard for MIT.”</p>
<p>This one especially does not surprise me at all. The study gives the split as 73/27 (in favor of H), and frankly I’m surprised it’s that skewed in H’s favor (though only Yale, at 35%, has more wins against H, so MIT does pretty well against the clear #1 “brand name” in colleges). For someone who wants to study the kind of thing that most kids at MIT major in - personally, after visiting both, I thought that MIT was the “no-brainer” winner in undergraduate education. Harvard looked to me like it was suffering from the “Hertz” syndrome: “we’re #1, so we don’t have to try harder.” Someone was talking earlier about other schools having to spend money because they do “benchmarking,” where they compare themselves to other places and try to upgrade their facilities/faculty, etc., in order to meet or exceed the competition’s benchmarks. I had the feeling that H’s approach was “we don’t have to do no stinking benchmarking. We ARE the benchmark.” Maybe they are, but usually that kind of arrogance does not pay off in the long run.</p>
<p>Percy, if I thought it were merely about knowing the difference between BYU and Chicago, I would agree with you. Just as I think I can spot the difference between Miller Lite and Yuengling (at least I hope so!), I think I can also spot an institution that takes intellectual fervor seriously and has the material resources to sustain it, as opposed to one that is simply going through the motions. But beyond those general principles of uncertainty, as far as I’m concerned, subjectivity is just a fancy word for “fit.”</p>
<p>Regarding Revealed Preference, you can decide for yourselves whether or not to “drink the kool-aid.” When I looked, I noticed several anomalies, such as the prominence of BYU & Notre Dame. I believe the methodology is flawed because non-applicants have themselves expressed a relative dis-preference for a school by not applying to it, and this is not captured.</p>
<p>I believe that in each case the sample of applicants to a school is not representative of the underlying population, which includes non-applicants. All the people who like BYU applied there, and they would and did prefer it; their preferences were revealed. But the underlying population includes a far higher proportion of people who would not apply to BYU and would not attend it if admitted, versus anyplace else. The preference of these non-applicants was not adequately revealed, in my opinion, judging from the results, because the sample of applicants does not reflect the behavior of non-applicants. Whatever the equations say, this nuance was not captured appropriately in the results, which imply a preference for BYU in the underlying population of college applicants that is miles ahead of where, IMO, BYU really stands with the mass of applicants at large.</p>
<p>Non-applicants do not feel the same way about the schools that were applied to as the people who applied to them do. The non-applicants like those schools less; otherwise they would have applied themselves. A ranking built from those who did like the schools enough to apply may well not track the preferences of those who didn’t, as per my BYU example.</p>
<p>IF you think everything’s just peachy then you think BYU really has this status in the population at large. Ditto Notre Dame. I don’t.</p>
<p>There may be other applications in the social sciences where the sample can be deemed reasonably representative of the underlying population, in which case this methodology might be expected to yield more reasonable results. This is not the case here; non-applicants have themselves expressed a preference by not applying.</p>
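<p>For what it’s worth, the selection effect described above is easy to simulate. Here is a minimal sketch (Python, with invented parameters: a hypothetical “NicheU” that 5% of students love, that non-fans almost never apply to, and that non-fans would pick over a rival only 2% of the time):</p>

```python
# A minimal sketch of the self-selection point; all parameters are invented.

import random

random.seed(0)  # reproducible made-up data

POPULATION = 10_000
FAN_SHARE = 0.05  # hypothetical: 5% of students are NicheU fans

wins = trials = pop_pref = 0
for _ in range(POPULATION):
    fan = random.random() < FAN_SHARE
    # Would this student pick NicheU over a comparable rival, if forced to choose?
    prefers = random.random() < (0.90 if fan else 0.02)
    pop_pref += prefers
    # Self-selection: essentially only the fans bother to apply to NicheU.
    applies = fan or random.random() < 0.01
    if applies:
        trials += 1
        wins += prefers

print(f"Cross-admit win rate (applicants only): {wins / trials:.0%}")
print(f"Preference in the whole population:     {pop_pref / POPULATION:.0%}")
```

<p>Under these assumed numbers, the applicant-only win rate comes out near 75%, while the population-wide preference is about 6%; the dis-preference of the non-applicants never enters the sample.</p>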
<p>I think the study is valid if it is clear that the preferences that are revealed are only those of students who: 1. applied to the schools that are included in the study; and 2. were admitted to them and were thus in a position to choose between them.</p>
<p>Granted, people who do not apply to a school may not prefer that school; but they could also refrain from applying because they do not think they would get in - e.g., someone with an 1100 SAT is less likely to apply to HYP than someone with a 1500 SAT. There are some anomalies, such as BYU and ND, but I believe the authors acknowledge them.</p>
<p>The authors do not propose that students and their families use the results of their study to guide their college choices. They are trying to address the issue of yield, which, as many have noted, has been manipulated by colleges. Harvard’s #1 position on this list does not make it the best school; only the one most often chosen by students admitted both to Harvard and to some other school.</p>
<p>Umm…yeah. So why does this make the methodology flawed?</p>
<p>Because the vast majority of college applicants have no interest in Harvard and do not apply, does that mean it isn’t a fine institution?
Out of the kids who DO apply and get accepted to H & other schools, most choose Harvard. That is very significant. I don’t see how BYU or ND are anomalies. When given the very real choice (acceptances in hand), kids often choose those schools over other schools.</p>
<p>Yes, monydad, I like your point, and it explains why my son’s decisions would never be part of a matriculation tournament including Notre Dame or BYU, but Notre Dame seriously IS competition for Harvard: I already related that I know of a case in which Harvard lost that competition. I think Marite’s point is important too, that the reason some students don’t apply to some colleges is that they don’t want to make “lottery ticket” applications to colleges that they desire in the abstract but have little chance of being admitted to. </p>
<p>It’s enough for the method that young people all up and down the range of applicant characteristics apply to multiple schools, and that the schools admit students who were also admitted to some other school. Then the students who have to make a choice about where to matriculate choose on the basis of whatever is important to them. The authors of the study appear to believe that if students were identified by religious preference, they would fall into distinct college-preference categories, and being part of a family that is neither Mormon nor Catholic, I find that plausible. But both Notre Dame, especially, </p>
<p><a href=“http://www.■■■■■■■■■■■■■■■■■■/search1b.aspx?InstitutionID=152080”>http://www.■■■■■■■■■■■■■■■■■■/search1b.aspx?InstitutionID=152080</a></p>
<p>and </p>
<p>Brigham Young </p>
<p><a href=“http://www.■■■■■■■■■■■■■■■■■■/search1b.aspx?InstitutionID=230038”>http://www.■■■■■■■■■■■■■■■■■■/search1b.aspx?InstitutionID=230038</a></p>
<p>have reasonably impressive groups of “most similar” institutions on other criteria, so being the most preferred university among students of a religious preference that is common in the national population apparently is a reasonably good strategy for drawing in students with high SAT scores and other desirable characteristics. </p>
<p>You might find it interesting to email one of the study authors and see if he or she has any thoughts about how to refine the analysis to take your point into account. The study authors seem to be quite approachable. </p>
<p>After edit: relating monydad’s interesting point to the thread-opening post and follow-up posts by asteriskea, this is what kills the itty-bitty liberal arts colleges that are calling for a boycott of the U.S. News peer assessment surveys. A lot of high school students never even consider applying to such schools. They may be fine schools, but cost-conscious students pass them over for state universities, research-eager students pass them over for research universities, and students who like urban environments pass them over for schools in big cities. Some of the colleges calling for the boycott are in a very nasty competitive situation already, whether anyone notices admitted applicant preferences systematically or not.</p>