Research university rankings based on university quality alone

Shenanigans in the PA are just one egregious part of the PA survey. The problems with its methodology and process run MUCH deeper than the works of a few crooks at Clemson or Wisconsin who were exposed. The basic issues are lack of knowledge about hundreds of peers, lack of adequate definition of the questions, and total lack of accountability for the responders.

^^^^^As opposed to universities like Emory, for one example, which simply manipulated their SAT scores so their “objective” numbers would look more competitive. There are shenanigans all over the place in the USNWR ratings. To constantly condemn the number one area that favors top public universities without mentioning the others is disingenuous at best.

The Sweitzer study focused on what drives changes in the US News PA rankings, and high school class rank was prominent, along with test scores to a lesser extent. Selectivity as a valid separate metric (as opposed to augmenting the scores for retention and grad rates) interests me more now not because it drives the PA rankings for a relatively small number of publics but because of the positive “peer effects” that Sweitzer mentions. I believe his finding that faculty productivity does not appear to drive the PA rankings is somewhat misleading. By focusing only on the universities that showed significant changes in the PA rankings during the time frame of his study, his work doesn’t reveal the importance of faculty productivity for the prestige of leading publics. The fact that such productivity, at least as recognized by the peer evaluators, did not change dramatically during a given time frame certainly doesn’t mean that those schools would retain their prestige following a mass exodus of top faculty.

You can spend the next ten years harping on the impact of manipulating SAT scores, and this will not change one iota of the FACT that the PA is an utterly incomplete weapon. Is there a difference between the shenanigans used to boost a SAT score (if you knew much about it, you might know it barely moves the needle or might backfire entirely) and the shenanigans at the PA level? Take a look at the percentages represented in the final scores to gain a modicum of understanding of how the USNews really works.

Here’s the reality: Bob Morse knows very well how limited his survey of school officials truly is. And he freely admits that the use of this INTANGIBLE serves solely to level the playing field, which in plain English means to boost the scores of the public schools.

xiggi, I realize he made the “level the playing field” statement several years back, except that the word “solely” was not a part of his statement. As previously noted, parents and prospective students are very interested in academic reputation, and, yes, several public schools do have strong reputations despite having larger classes, lower per-student funding, and much less per capita support from alumni. There are better ways to assess that reputation, however, than that used by US News. Absent that metric, in whatever form, what we have left is Forbes.

Yes, if you don’t like the USNWR rankings, you can look elsewhere.
However (just to focus on this one tired example) USNWR isn’t too unusual in the ranking it gives to the University of Michigan. Forbes arrives at nearly the same ranking (24th in research universities). So does stateuniversity.com (36th overall, 17th among research universities.) So does Parchment (38th overall, 24th among research universities). Kiplinger’s ranks it 6th among public universities for in-state value (compared to 4th among public universities by USNWR). Five undergraduate rankings, all using rather different criteria, assign roughly similar positions to the University of Michigan.

So Michigan’s PA score (tied for 13th) is an outlier compared to all these other rankings. Since the peer assessment is so loosely defined, it’s hard to know for sure what the “peers” are picking up that other measurements miss. I agree with Uniwatcher (#82) that research production may have a big impact. Even if Sweitzer couldn’t find evidence that changes in PA scores are driven by changes in research production (there simply aren’t that many big swings in PA scores among schools with high research production), it’s still possible that high-and-stable PA scores are strongly linked to high-and-stable research production. But we’re just speculating, really. That’s why I don’t like over-weighting the PA score. I don’t know if the peers are judging every school by the same criteria.
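To make the outlier claim concrete, here is a minimal Python sketch using the rank figures quoted above (the 8-place threshold is an arbitrary choice of mine, not anything any of these publishers use) that flags a ranking as an outlier when it sits well away from the median of the group:

```python
from statistics import median

# Michigan's position in each ranking cited above (research universities only)
ranks = {
    "USNWR overall": 29,
    "USNWR PA alone": 13,
    "Forbes": 24,
    "stateuniversity.com": 17,
    "Parchment": 24,
}

med = median(ranks.values())
# Flag any ranking more than 8 places from the median (threshold is arbitrary)
outliers = {name: r for name, r in ranks.items() if abs(r - med) > 8}
print(f"median rank: {med}, outliers: {outliers}")
```

Run against these five figures, the median lands at 24 and only the PA-alone rank of 13 clears the threshold, which is the sense in which it is the odd one out here.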

http://www.collegeadmissionspartners.com/college-admissions-counseling/forbes-college-rankings-garbage-garbage/

You’re really trying too hard, tk, to justify Michigan’s rather low ranking at USNWR.

"So does stateuniversity.com (36th overall, 17th among research universities.) "

That includes LACs; so what you are saying is that Michigan is #17. That is a far cry from #29, isn’t it?

^ No, I’m not saying that. Stateuniversity.com is saying that. Stateuniversity.com also says that some of the other top national universities rank as follows:

4 Duke

7 Cornell

8 Brown

9 CMU

10 ND

11 Harvard

14 JHU

15 Chicago

16 Yale

Is this more believable than the US News order?
Regardless, I don’t think #17 is such a far cry from #29. There is a lot of slop in all these rankings. The ones we’re citing place Michigan between 13th (USN PA in isolation) and 29th (USN overall), if you exclude LACs. Split the difference, and you’d have Michigan neck-and-neck with Berkeley … which would be plausible enough, I guess.

Another ranking, by the way, is College Factual’s. It ranks Michigan 60th. After removing 27 LACs and specialized schools ahead of it (like Babson and Olin), Michigan would be 33rd. On the other hand, the Washington Monthly national university ranking places Michigan exactly where the PA score places it, 13th (behind TAMU and UT-El Paso, among others).

These are all rather high rankings, as far as I’m concerned (considering how many universities we have in America). If you especially value the opinions of ~1000 professional academics, then fine. Go with that one.

And think what such a ranking for an academic wasteland such as UT El Paso really means!

" No, I’m not saying that. Stateuniversity.com is saying that. Stateuniversity.com also says that some of the other top national universities rank as follows:

4 Duke

7 Cornell

8 Brown

9 CMU

10 ND

11 Harvard

14 JHU

15 Chicago

16 Yale"

Ok. I can agree with this ranking list since it would be plausible enough, I guess.

I am working now on a comparison of selective public honors colleges and programs with private elites. The comparison will use midpoint admissions test scores and six-year graduation rates, as that is the only overlapping data that I have. As for the Forbes rankings, they are somewhat improved over the initial attempts, but still reflect the biases of both Forbes and the Center for College Affordability and Productivity, which cranks them out. The director of the CCAP is notoriously hostile to most public universities, reflecting his devotion to the Austrian school of economics, his defense of the Koch brothers’ attempts to pay for like-minded professors, and his interest in pushing state universities toward privatization.

Uni, I am no fan of Vedder, but I do not think you can label him as “hostile” to the public universities as much as you can declare him to be hostile to … what public universities have turned into. Here are a few of his indictments:

The privatization of the universities is a loudly quacking canard, unless one realizes that public universities are in dire need of solutions to balance the books and to curb their appetite to continue financing the lifestyles of the few chosen at the top of the pyramid. In a way, perhaps more people should support an Austrian model as opposed to the abomination that the US version of the German university has become.

It does not seem that farfetched to think that schools should be more interested in the well-being of their REAL customers than in the well-being of the service providers and mostly the insiders. Yet that is what education is all about, across the entire K-16 spectrum. The customer is hardly the … king. He just happens to pay the bills!

xiggi, thanks, but we’ll have to agree to disagree on the CCAP. They would like to see public universities go one of two ways: (1) be privatized (UVA, William & Mary, Michigan, and a few others), or (2) be made more “efficient” by requiring professors in the publics to teach more classes and do less research, unless the research is lucrative. Inevitably, this will lower the quality and academic reputations of the publics even more, relative to private colleges, and make most publics into production-oriented vocational schools. The Great Recession gave the CCAP and their compatriots a wonderful opportunity to make their case and to support even more spending cuts to public higher ed.

Over time, the current quality spectrum that includes a fair number of publics would be limited to a very few of them along with the best private schools. So with far fewer options for the most talented students, the private elites would have to become even more selective, and would likely cost even more than they do now. I don’t see how this maximizes opportunity for the best students, and of course it would restrict mobility for many others.

The horror! The horror! Imagine the impact of a university requiring its divas to teach more classes and abandon their superior dedication to research and to directing a few graduate students who inherit most of the teaching duties!

I guess we will have to disagree on the value placed on the “reputation” brought by the huge focus placed on research that invariably is read by a few peers and published in journals that only live through the wasteful benevolence of academia.

In the end, I still do not understand how the older model benefits the overall community of undergraduates who are robbed of the education they were promised.

You are a big fan of the honors departments. Aren’t those shining examples of how professors dedicated to … teaching impact their students? Or is it based on the volume of publications in obscure journals amassed by the university in general?

xiggi, I do believe that the very best publics, even when considering the entire universities and not just the honors programs they may have, greatly expand the opportunities for the best students, often at less cost than at a private elite that offers full, need-based financial aid. I also believe that many other public honors programs within lower-ranked universities provide an education equivalent to some public elites and many private schools.

Most honors programs have instructors of two types: (1) tenured professors (yes, with publishing cred), who are good teachers, and whose departments are willing to take a hit from the administration on faculty “production” requirements and allow professors to teach smaller honors classes rather than show how many students they can teach per faculty member; and (2) assistant professors, adjuncts, instructors, lecturers, or part-time faculty/administrators who are selected because they are very good teachers, and who often have some honors administrative or advising responsibilities. There are also some honors programs that use mostly tenured professors and few adjuncts, etc. Many honors programs also allow transfers from within the university, which accommodates students who have proven their ability without having begun in honors. The honors option therefore provides both an initial alternative to a private elite school and an opportunity for students within the public university as a whole to join the honors program.

As for “obscure journals”–almost all academic journals are obscure to the public, and the same can be said of medical journals and technical publications. The publish or perish model can sometimes appear to be trivial, and Vedder is very good at pointing out the most seemingly obscure examples. But how else is a professor to prove his or her cred, outside of teaching ability? Can teaching ability be divorced from academic research? Is that the case at Harvard, Yale, or even Swarthmore and Pomona?

How do faculty members gain admission to prestigious academies and win significant awards? These laurels have a great deal to do with the academic reputation of a school. The CCAP makes assumptions that this level of quality should be restricted, mostly to those who can pay for it. Quality should not be rationed when there is a surplus of students who can benefit from it.

Okidoki, we can safely assume that we may have divergent views on the value of the type of research that is generated at research universities. In fact, I do NOT have many issues with the existence of the model. My problem starts when this model is used as a criterion to establish the reputation of the faculty AND used in rankings that purport to establish the best COLLEGES (a proxy for UG education.) In simpler terms, despite thinking that the ARWU and its peers are not worth the paper the results are printed on, I am happy to let the results stand on their own, as long as people understand the utter irrelevance of those results to the life of your typical student who has to pick a school.

What is, however, NOT irrelevant is the dedication to teaching exhibited by the universities – an element that Vedder seems to correctly address. I note your spirited defense of the honors colleges … and I do agree that the presence of the faculty might be higher there. However, this does NOT hide the masquerading that takes place at the rest of the school, where the “divas” are only there in name and the instruction is provided not only by the occasional adjunct but by an army of ill-equipped, uninspired, and poorly trained students who receive nice titles such as TA, GSI, or RA. At many schools, this is purely and simply a total disgrace. This disgrace even takes the form of having certain undergraduates “take care” of providing parts of the instruction or even grading! Nobody would tolerate such nonsense at the K-12 level, but it is fair game in tertiary education. Go figure!

There are schools where this parody of an education is limited. And guess what, those are the schools that happen to be excluded by rankings such as ARWU and not given much attention by the rest of the pseudo-scientists who concoct new devices to level the playing field and boost the type of universities that are the worst abusers of the failed model. You do mention Pomona and Swarthmore. How do those schools fare in the ARWU boondoggle?

At the end of the day, it’s a matter of ROI for the students/customers, and the current situation is horrible in terms of value per dollar spent. And I believe that there IS a place for a ranking to highlight that honors colleges might offer a better ROI than the overall general programs, especially when those general programs admit almost everyone with a pulse and a wallet – which often takes the form of public borrowing.

As far as final cost, I have to disagree about the “lesser” cost at public universities than at privates with generous help. The nature of need-based financial aid makes that proposition difficult to verify. For most of the people I know, attending the local public university is more expensive. The biggest issue remains earning an acceptance.

“the impact of funding is magnified when the magazine also assigns points simply for having a lot of money”

USNWR simply does not do this. “Having a lot of money” is synonymous with endowment. USNWR does not use endowment as a metric, nor is it certain that they even tabulate for their own purposes the endowments of the schools they rank. “Alternative U.S. News Rankings” appears, then, to be based on a poor understanding of that which it is an “alternative” to. In that sense, it is a misuse of a ranking – which then, in turn, can be misused.

However, most high school seniors do not have the credentials that will result in admission to those other schools with good financial aid, or to schools which will offer them top-end merit scholarships. Of course, whether the local public is cheap for the student does depend on the tuition and financial aid policies of the state of residency, and of the school itself to the extent it has local control over those policies.

@merc81, we did not consider the endowment separately as having anything to do with the alternative rankings. Our view is that the metric of financial resources, which is used by US News, magnifies or replicates to an extent the related metrics for class size (two sub-categories), and adds metrics for faculty salaries and alumni giving that are not output measures. Of course endowments can have a major impact on the financial resources ranking; but, again, we did not use endowments as a metric. We did use US News’ data for academic reputation, retention rates, grad rates, and classes <20 students. The point was to show that using these actual US News metrics (which are outputs) without the magnifying effect of the input metrics (especially financial resources, faculty salaries, and alumni giving), would yield some significant changes. We do discuss the possible impact of endowments in the lengthy post on the alternative rankings, especially with respect to the ranking for Brown University. The phrase “having a lot of money” referred to the specific US News metrics that emphasize “money.”
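As a rough illustration of that kind of recalculation (every weight and score below is an invented placeholder of mine, not the actual US News data or weighting), a composite built from the output metrics alone might be sketched like this:

```python
# Hypothetical sketch: re-rank schools using only "output" metrics
# (peer assessment, retention, grad rate, % of classes under 20 students),
# dropping input metrics like financial resources, faculty salaries,
# and alumni giving. All numbers are invented placeholders.

output_weights = {
    "peer_assessment": 0.35,  # hypothetical weights, normalized to sum to 1.0
    "retention": 0.20,
    "grad_rate": 0.30,
    "small_classes": 0.15,
}

schools = {
    "School A": {"peer_assessment": 4.5, "retention": 0.97,
                 "grad_rate": 0.95, "small_classes": 0.70},
    "School B": {"peer_assessment": 3.9, "retention": 0.96,
                 "grad_rate": 0.91, "small_classes": 0.55},
}

def composite(metrics):
    # Normalize peer assessment (1-5 scale) to 0-1, then take the weighted sum
    scores = dict(metrics, peer_assessment=metrics["peer_assessment"] / 5.0)
    return sum(output_weights[k] * scores[k] for k in output_weights)

# Rank schools by the output-only composite, best first
ranked = sorted(schools, key=lambda s: composite(schools[s]), reverse=True)
print(ranked)
```

The point of the exercise mirrors the post above: schools whose standing rests heavily on the input metrics would shift once those metrics are removed, while schools strong on the outputs would hold or gain.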