^ 92% with no engineering program. Sorry to be nitpicky…
My apologies if my previous post (#41) offended @ucbchemegrad or other public universities.
That’s not the intent. I was merely pointing out a slight inaccuracy in that sentence from the article, i.e. “The reason: six-year graduation rates that do not match schools such as Yale, Harvard, Stanford, Columbia, Princeton, and several other public and private institutions.”
Berkeley, by the way, has a high 91% six-year graduation rate, with some of the best engineering programs to boot.
^ No offense taken. I was just teasing.
Here are the top 10+ universities by 75th percentile SAT M+CR scores alone:
Caltech, Harvard, Princeton, Yale
Columbia, Chicago, Dartmouth, Harvey Mudd, MIT
Pomona, Stanford, Vanderbilt
What’s the common denominator?
Resort locations? Big successful D1 sports teams?
Hot chicks? I don’t think so. What these schools have in common is academic excellence.
Yes, it is teenagers who make these choices. However, they are influenced by parents, teachers, GCs, and a wealth of public information. Collectively, the people involved in these decisions aren’t stupid.
Here are the top 10+ universities by USNWR peer assessment scores:
Harvard, Stanford, MIT
Princeton, Yale
Berkeley
Caltech, Chicago, Columbia, Cornell, JHU
This list is a little different from the above list. Not too terribly different, but a little different.
Is it a better list? Does it reflect more mature, better-informed judgements about undergraduate academic quality?
I doubt it. I suspect it better reflects the interests of professional academics.
However, if you value what professional academics (“peers”) really think about undergraduate academic quality, don’t just look at what they say. Also look at what they do. Look at where they send their kids to college.
http://www.cbsnews.com/news/where-professors-send-their-children-to-college/
@tk2176, that link counts as a non sequitur because we are comparing research universities here, not LACs. For sure, the LACs do very well in the two “academic” alumni achievement ranks I look at, while even some Ivies (like UPenn) don’t do all that well in producing PhDs or prestigious student award winners per capita. However, the more pertinent question is whether professors prefer Georgetown/Vandy/WashU over UMich/UVa/UNC/UCLA for their children or not.
Great! Since we can hear the justifications from the horse’s mouth, may we ask why you used the “most prominent” metrics and accepted their relative importance without … scrutiny? Would you please explain WHY the peer assessment ought to be overrepresented in your alternative ranking? Explaining, as in going beyond the mere statement that if USNews uses it, anyone ought to accept it at face value.
A good start might be to share what criteria are used by USNews to determine the PA numbers. I assume that you must be sufficiently familiar with the composition of the PA to explain why you deemed it important.
Fair enough?
Is the survey known as the Peer Assessment a metric that affects undergraduate students directly? Seriously?
May I ask you if you ever read the actual survey and examples of the responses?
“While I am at it, Chicago’s six-year graduation rate is 92%, Caltech’s 92%, MIT’s 93%, Stanford’s 95%, Amherst’s 95%, Williams’s 95%, Princeton’s 96%, etc. There is only one public university, UVa at 93%, with a higher percentage than Chicago’s. No other public school has a higher rate on this metric. So there are not “several other public institutions,” just one, with a higher rate on this metric. Sorry to be nitpicky.”
Michigan has a 91% six-year graduation rate, and that includes a highly rated engineering program that accounts for about 25% of all undergraduates. That is almost the same graduation rate as Chicago’s. Not bad for a school with over 28,000 undergraduate students. Sorry to be so nitpicky as well.
@xiggi, I do believe that academic rep is important, for reasons already stated in a previous post, namely, the UCLA HERI survey that recognizes the continuing importance of academic rep, among other reasons. You probably know as well as I do that U.S. News surveys the “peers” to get the data for that metric.
Here is my main point, and please try to focus on it: U.S. News magnifies the impact of wealth. Remove the wealth-related metrics, and our alternative rankings are what you get. They show, in many instances (Brown among them), how the wealth metrics magnify or undermine the standing of particular schools. The alternative rankings reinforce the preeminent place of HYPS, Columbia, MIT, etc., but further down the line the impact of wealth, I believe, results in distortions.
Do I believe the alternative rankings are definitive? I don’t believe any “rankings” are definitive, and I only offered the alternative rankings to show the unwarranted impact of the wealth metrics in the U.S. News rankings. If you read our blog, you will know that we (because in the book there is a “we”) have scrapped rankings in favor of ratings, referred to in a previous post as “report card” assessments. U.S. News creates distinctions where there are none, or very few, by using ordinal rankings, and their use of wealth-related metrics compounds the problem.
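To make the “remove the wealth metrics and re-normalize” step concrete, here is a minimal sketch of the arithmetic. The category names and weights below are hypothetical placeholders (roughly in the spirit of weights U.S. News has published in past editions); this is only an illustration of the idea, not the actual methodology from the book:

[CODE]
# Hypothetical illustration of "drop the wealth metrics and re-normalize."
# Categories and weights are placeholders, not U.S. News's exact weights
# and not the book's actual method.
weights = {
    "peer_assessment": 0.225,
    "graduation_and_retention": 0.225,
    "faculty_resources": 0.20,
    "student_selectivity": 0.125,
    "financial_resources": 0.10,   # wealth-related
    "graduation_rate_performance": 0.075,
    "alumni_giving": 0.05,         # wealth-related
}

wealth_related = {"financial_resources", "alumni_giving"}

# Keep the non-wealth categories and rescale so the weights sum to 1 again.
kept = {k: w for k, w in weights.items() if k not in wealth_related}
scale = 1.0 / sum(kept.values())
alt_weights = {k: w * scale for k, w in kept.items()}

def composite_score(category_scores, w):
    """Weighted composite from per-category scores on a common scale."""
    return sum(w[k] * category_scores[k] for k in w)
[/CODE]

Note that because peer assessment keeps its original weight while the total shrinks, its relative share automatically grows (0.225/0.85, or roughly 26%, in this hypothetical), which is exactly the “overrepresented” effect being debated here.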
Correction to my above statement. The number of engineers at the undergraduate level at Michigan is approximately 22%.
@rjkofnovi, UMich is a fine school. I have a lot of respect for their graduates. Based on a couple of sources, Michigan’s six-year graduation rate is 90% (not 91%), its four-year graduation rate is 76%, and it has an impressive 97% first-year student retention rate.
Don’t worry about whether 22% or 25% of Michigan’s undergraduates are engineers; it’s an impressive percentage either way, and I have no nit to pick with Michigan anyway.
http://collegeapps.about.com/od/collegeprofiles/p/UMich_profile.htm
[QUOTE=""]
@xiggi, I do believe that academic rep is important, for reasons already stated in a previous post, namely, the UCLA HERI survey that recognizes the continuing importance of academic rep, among other reasons. You probably know as well as I do that U.S. News surveys the "peers" to get the data for that metric.
[/QUOTE]
I don’t doubt you think the rep is important. I shall now assume you did not apply much scrutiny to the “data” and did not realize the utter lack of integrity of the exercise and the absolute vagueness of the survey. The PA is a popularity contest without any foundation.
@xiggi: Popularity contest, yes. Doesn’t mean it’s without foundation.
Lots of popularity contests matter a lot. For instance, who becomes President of this country, etc.
Of course, lots of things devolve into popularity contests:
- College admissions selectivity is based on a popularity contest among high school seniors applying to colleges. (similar for graduate and professional schools among college seniors)
- College yield is based on a popularity contest among those admitted to the college.
- Hiring of college faculty is based on a popularity contest among colleges for faculty candidates, and of offered candidates for colleges. (similar for any kind of job or employer)
- Recruiting at college career centers is based on a popularity contest among employers for colleges.
@xiggi, I believe your position is simply this: you believe that academic reputation, however assessed, is meaningless. Very well.
So what is the right way to account for the impact of wealth? Money, after all, buys many good things (including not only small classes, but also better facilities, better need-based aid, etc.). If we reassigned the “financial resources” weight to one or a few of those specific things, would that address the “double counting” issue? Would it change the results?
I don’t believe academic reputation is meaningless. It’s one of the most heavily weighted factors in the US News rankings. In the beginning, that’s all US News used. At some point, it occurred to the editors to ask whether objective measures would corroborate the opinions of the “peers”. Many of these measurements (not only US News but also Forbes, stateuniversity.com, Kiplinger measurements) do not, exactly. Measurements for selectivity, class size, graduation rates, PhD production, etc., tend to push down a bit on the rankings of top public schools relative to top private schools. About the only ones that don’t are the international graduate program rankings, which emphasize faculty research productivity.
By shifting more emphasis back to the peer assessments, you may be magnifying the impact of research production. I’m assuming that strongly influences the peer assessments… but I really don’t know. Even if I’m right, it’s not necessarily a bad thing, if that’s what you value. I just don’t think you’ve thereby presented a less distorted window into pure university quality. Not for undergraduates, anyway.
From a public policy perspective, I think it is desirable to expose any gap between top public and top private universities. People should know if their state universities are under-resourced, resulting in bigger classes, lower graduation rates, or less competitive faculty salaries.
MITTTTTTTTTTTTTTT
“Based on a couple of sources, Michigan’s six year graduation rate is 90% (not 91%)”
Well, I guess I am going to have to be the nitpicky one here:
http://obp.umich.edu/wp-content/uploads/pubdata/cds/cds_2014-2015_umaa.pdf
“Six-year graduation rate for 2008 cohort (question B10 divided by question B6): 91%”
To be fair to theluckystar, this is very new information which he/she obviously hasn’t seen…until now.
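For anyone who hasn’t dug into a Common Data Set before: B6 is the adjusted first-time, full-time entering cohort and B10 is the number from that cohort who completed a bachelor’s degree within six years, so the published rate is simply B10 divided by B6. The counts below are made up purely to illustrate the rounding; they are not Michigan’s actual 2008 cohort numbers:

[CODE]
# Hypothetical cohort counts, only to show how the CDS six-year rate is formed.
b6_adjusted_cohort = 6000    # CDS question B6: adjusted entering cohort
b10_completers = 5472        # CDS question B10: bachelor's degrees within six years

six_year_rate = b10_completers / b6_adjusted_cohort
print(f"Six-year graduation rate: {six_year_rate:.0%}")   # -> 91%
[/CODE]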
@tk21769, I believe you are correct that the academic reputation metric reflects peer recognition of research, and that, overall, this favors some public universities vs some private schools (e.g., Wake Forest). Whether or not a large research university is good for undergrads varies, though students in honors programs often have better pathways to undergrad research and mentoring. Our inclusion of classes with fewer than 20 students, on the other hand, clearly favors private schools.
[QUOTE=""]
@xiggi, I believe your position is simply this: you believe that academic reputation, however assessed, is meaningless. Very well.
[/QUOTE]
I do not think you are in a position to judge what I believe. I have posted on this issue for years. I questioned how much time you spent scrutinizing the underlying elements of the USNews PA, and I continue to think that the answer ought to be … very little.
The issue is that a survey of academic reputation COULD be important if done properly. To do this, it should be expanded with categories and, more importantly, designed in a way that forces the responders to use a modicum of integrity. As it stands today, the survey is simply a joke.
Perhaps you should make an effort to actually learn a bit about the details of the survey. And then, we might discuss the merits of overweighting this questionable metric in your own rankings.