“In offering a critique of the new Forbes College Rankings, I am not biting the hand that feeds me, because I am an independent Contributor, not a Staff Writer; my opinions are my own. I appreciate the effort by Forbes to examine outputs rather than inputs, as there is undue emphasis in so many other rankings on the metrics that measure how hard it is to gain access, rather than what—if any—value is added during the course of one’s college years. That said, the Forbes criteria exacerbate some problematic aspects of the debate over the value of college.” …
Um, isn’t Forbes’ methodology explicitly designed to measure output?
“I would urge Forbes to work with institutions that participate in the NSSE to obtain and use that data on what students actually experience while on campus with regard to the meaningful tasks of learning.”
Prestigiosity is covered elsewhere in this forum. PayScale matters. For every bright mind at Google, they have a hundred gerbils cranking code. Coding has replaced our nation’s steel mills.
Universities are diverse enough bodies that there is no one-size-fits-all formula that will tell you which school is best. We like to think these things can be quantified and put into a ranking system, but they can’t be. That is the real problem with these rankings.
This seems rather simple, at least to me. None of the stated outcomes they are trying to measure, be it the “best” college, the “value added” by a college, or any of these other supposed outputs, has a firm definition, much less a measure. Further, they are not only non-scientific terms, they are so far from it as to be completely useless as a basis for discussion. Harvard (or Princeton or Yale or Stanford) is not the “best” school for millions of students. And isn’t the main point of these lists to be useful for some kind of future decision making? So obviously it makes no sense to put together a bunch of data points (some easy to measure precisely, others very ambiguous and artificial), add them up using completely arbitrary weightings, and then claim the resulting number precisely represents anything, let alone a subjective concept.
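To make that point about arbitrary weightings concrete, here is a quick sketch; every school name, metric value, and weight below is invented purely for illustration, not taken from any real ranking:

```python
# Hypothetical metrics on a 0-100 scale; none of these numbers are real.
schools = {
    "College A": {"selectivity": 95, "prof_ratings": 70, "grad_salary": 80},
    "College B": {"selectivity": 80, "prof_ratings": 90, "grad_salary": 75},
    "College C": {"selectivity": 70, "prof_ratings": 85, "grad_salary": 95},
}

def rank(weights):
    """Score each school as a weighted sum of its metrics, highest first."""
    scores = {
        name: sum(weights[m] * value for m, value in metrics.items())
        for name, metrics in schools.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Two equally "reasonable" (i.e., equally arbitrary) weightings:
print(rank({"selectivity": 0.5, "prof_ratings": 0.3, "grad_salary": 0.2}))
# -> ['College A', 'College B', 'College C']
print(rank({"selectivity": 0.2, "prof_ratings": 0.3, "grad_salary": 0.5}))
# -> ['College C', 'College B', 'College A']
```

Same hypothetical inputs, opposite orderings; the “ranking” is mostly a ranking of whatever weights the list-maker happened to pick.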
It seems to me the real discussion should be why people pay any attention to these ridiculous rankings at all. I know why the magazines do it: they want to sell their product, and they know there are tons of suckers who will swallow this swill. They understand that people, and perhaps Americans especially, have a compulsive need for easy answers to complex concepts. Why do we need to be able to say with some pseudo-authority that College X is better than College Y, instead of being satisfied with the idea that College X is the right size, right price, right location, right academic fit, and overall just the most right for me (or my child)? I just don’t understand why it is so difficult for some otherwise intelligent, college-educated people to understand that there is nothing to this ranking nonsense.
If the organizations want to simply create lists of the most competitive admissions (although even that is hard), or the highest average test scores of accepted students, or anything else highly specific and measurable, then great. Maybe that will mean something to people and maybe it won’t, but at least it isn’t a farce like these ranking lists that purport to measure the impossible. It used to work just fine for people to take one of those giant college guides like Barron’s and narrow down the choices for applications based on some criteria important to them. It might have started out with actually thumbing through the entire book in order to get ideas for how to narrow the choices. Maybe it was as simple as the first big list being schools whose average SAT scores matched your SAT scores. Or maybe first it was schools in a certain part of the country, then test scores. Now we can do that same kind of selection narrowing by computer and it is much faster. Better? Well, that is a different discussion, but certainly doable. And far, far better than having the first cut be only schools that are in the top 20 of some completely artificial list.
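For what it’s worth, the computerized version of that Barron’s-style narrowing is trivial. Here is a rough sketch; the school records, the region, and the score cutoff are all made up for illustration, and a real shortlist would start from the Common Data Sets or a guidebook rather than three invented entries:

```python
# Hypothetical records; none of these schools or numbers are real.
schools = [
    {"name": "College A", "region": "Northeast", "avg_sat": 1450},
    {"name": "College B", "region": "Midwest",   "avg_sat": 1300},
    {"name": "College C", "region": "Northeast", "avg_sat": 1280},
]

my_sat = 1320

shortlist = [
    s for s in schools
    if s["region"] == "Northeast"          # first cut: geography
    and abs(s["avg_sat"] - my_sat) <= 100  # second cut: scores in my range
]
print([s["name"] for s in shortlist])  # -> ['College C']
```

The criteria and their order are up to the student, which is exactly the point: the first cut reflects what matters to you, not someone else’s top-20 list.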
Interestingly, none of the criticisms Chris Teare levels against the Forbes poll focuses on whether the outputs are accurate; they merely critique the weight and importance placed on such things as teacher evaluations, future income, etc. My response is that, if people were that reflective, they wouldn’t be paying attention to magazine polls in the first place. I mean, short-term, meaningless gratification kind of comes with the territory, doesn’t it?
Not much of a defense of specific factors in that article. I continue to believe that using Rate My Professors as a proxy for student learning is fairly useless (for example, professors get higher marks if they are easy graders; I can see why students would like such a professor, but is that something that really belongs in a discussion of good teaching?). And PayScale is based on self-reported salaries: using money as a metric of success is not terrible, but it’s certainly not the only measure, and all self-reported data has to be viewed with some suspicion.
With regard to the defense of the rankings by Vedder, I have rarely seen such drivel. The pinnacle of idiocy was achieved when he compared grading students in a course to ranking colleges. Where would I even start?
Almost as good was his saying that it is only “likely” that a student wanting to study engineering would choose MIT over Pomona (his highest-ranked school in the country), even though the latter doesn’t offer engineering. To even suggest there is an iota of logic to the idea that one should choose a school because it is ranked higher by Forbes, even though it doesn’t offer what you want to study for your career, leaves me beyond flabbergasted.
It almost proves how useless these rankings are when someone like that gets to have the title “Rankings Guru”. I suspect he has never come within 100 miles of what it means to measure something scientifically or even within the parameters of the social sciences. It is riotous and frightening at the same time.
The extent to which a particular list is a good one for a particular student (or critic) depends largely on whether the rubric used aligns with the variables most important to that person. Looking at a particular school across lists may be helpful. Most helpful would be a list of the primary sources that students and parents should look at for each school they are considering. The information is available in places like CC for people with time to comb the threads, but a cheat sheet would be helpful. The cheat sheet would have links to all the primary sources generally used by any of these lists (not a link for each school, but a link to a collection of Common Data Sets, PayScale, etc., with a little explanation of what each site intends to do). I recall seeing a site that lists the value of jobs, for instance (maybe it was NSSE, I'm not sure). But anyone compiling that list would be doing students and their parents a great service.
Colleges should probably be ranked in broad tiers based on factors such as a high graduation rate in four or five years, depending on what is typical for that college (e.g., Northeastern and its co-ops), student satisfaction, quality of the professors/research at the university, retention rate, and selectivity.
They could also then categorize the rankings further by individual fields (natural sciences, social sciences, humanities, arts, engineering/CS/math, pre-professional programs) and just distinguish between a top-tier, second-tier, and third-tier university.
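A minimal sketch of that kind of tiering might look like the following; the thresholds, scores, and field labels here are invented for illustration, not a proposal for actual cutoffs:

```python
# Hypothetical composite scores per (school, field), 0-100; tiering just bins them.
def tier(score):
    if score >= 85:
        return "Tier 1"
    if score >= 70:
        return "Tier 2"
    return "Tier 3"

scores = {
    ("College A", "engineering/CS/math"): 92,
    ("College B", "engineering/CS/math"): 78,
    ("College A", "humanities"):          74,
    ("College B", "humanities"):          88,
}

for (school, field), score in scores.items():
    print(f"{school:10s} {field:20s} {tier(score)}")
```

Binning throws away the false precision of an ordered list while still telling you roughly which group a school sits in for a given field.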
Does it really matter if you go to Wharton or Notre Dame’s business school? They’re essentially the same quality, and distinguishing between the two is pretty pointless. Is someone really going to be better off because they went to Stanford instead of MIT for computer science? I think the differences in the quality of education among the top 30 universities are probably negligible, and it’s dumb that we try to create an ordered ranking. Who cares if Harvard is first, second, third, or fourth? It’s not like Harvard is any better (or worse) if it’s ranked lower than Yale or Princeton. They’re essentially all the same caliber of school, and believing that going to a school that’s one or two rankings higher means anything is pretty dumb. The top 10 schools could probably be placed in any random order and no one would be shocked; same with the next 10 or 20 schools after that.
I would be interested to see a chart showing, by major, the percentage of grads working in their field within 12 months of graduation, so outcomes could be compared between universities (engineers against engineers, theatre majors against theatre majors). It would also be informative to see the average starting salaries of those grads, by major, in their field. In terms of outcomes, I’d love to see once and for all whether there is a significant difference between the various levels and ranks of schools we talk about so often here when it comes to employability, opportunities, and pay rates. I’m not suggesting this should be the only metric that counts, but it would offer some transparency on the employment advantages (and their extent) of paying full price for traditionally prestigious schools versus taking big merit awards at other schools.
Forbes will essentially continue to be a “viable” source for many, because a good number of people care only about the prestige of a college/university; it’s a shame, really.
Averages don’t really tell the full story either. Lower-ranked schools may simply be there because they take lower-quality students, which would certainly drag down their averages, even if their best are just as good as those from top schools.
Besides, the quality of education can really depend on the person. For example, a school like MIT, which is with good reason universally acknowledged as a top school for technical majors (especially for academic/research fields), has a competitive atmosphere that is not for everyone. Even brilliant, high-achieving students who go on to make important scientific discoveries may want to be in a place that won’t specifically try to make them feel like they aren’t anything special (i.e., one talented mind among many). That kind of push will motivate some people to prove themselves, but it will simply push others away because they feel they don’t get the proper respect for the work they put in (I know people who chose not to go to MIT because they hated this, and others who went there specifically because that attitude pushed them to greater heights). The same is true for any other school, prestigious or otherwise.
In short: you can assess individual, objective qualities of schools easily enough, but it is folly to think that there is some real measure of overall quality of a school. As fallenchemist put it, people want easy answers to complex concepts. No such answers exist and we should stop trying to pretend otherwise.
I just thought this should be extracted out and highlighted. It really is quite bizarre (to me, at least) that the method above worked so well for today’s college parents—and their parents, too—but we imagine that our kids need some sort of multivariate witch’s brew nonsense ranking to make the same choices we did.
Perhaps we just want to be able to say that we went to a school that was “objectively” better than some other one, rather than merely a better fit for us specifically. Do we really care how they rank the schools, or just that a “reputable” source boils it all down to a single number?