Math Rankings: is this accurate?

<p>Wrong. You are making things up now. I have seen the report, and it fails. I could say the same about you; you seem quick to accept it as valid despite highly cogent criticisms. You still have not addressed that it is over 13 years old, either. You also chose to ignore the two things I quoted above regarding the organization of the institution and the age of the institution. These are so clearly ridiculous that I am amazed anyone takes the report seriously.</p>

<p>You know, collegehelp, the measure of a man is his willingness to admit when he is wrong. But respond however you wish; you can have the last word unless you say something truly outrageous. I think anyone with a brain can read the evidence above and come to a rational conclusion.</p>

<p>He doesn’t think he’s wrong, so you can keep his manhood measurements out of this.</p>

<p>LOL. What genius came up with that?</p>

<p>^Well, the Gourman Report does assign precise numbers and it does assess the quality of programs. This is simply a statement of the obvious.</p>

<p>The most obvious evidence that the Gourman Rankings have validity is that they roughly agree with other ranking systems such as US News, NRC, Baccalaureate Origins, and Revealed Preference. Furthermore, the opinions expressed in posts on CC generally substantiate the Gourman Rankings. What’s more, the Gourman Report correctly ranks several programs high even though they are at schools where you would not expect it, such as U Delaware in Chemistry and Claremont McKenna in Econ.</p>

<p>Bottom line is “are the rankings accurate?” If the rankings are reasonable, then the method must be correct.</p>

<p>The rankings are 13 years old but they still generally hold true. Universities don’t change that quickly.</p>

<p>The one criticism of the Gourman Rankings that I think is valid is that it underestimates the strength of LACs.</p>

<p>Regarding ties…under Spanish, for example, Georgetown and Vanderbilt have the same score.</p>

<p>Why, Dr. Gourman, I assume. Although I have to say that, personally, I thought factoring in the age of an institution and the age of a department was far more laughable, because it is so obviously irrelevant.</p>

<p>As to collegehelp’s last post, that is just ridiculous as well. Does repeating the same mistake over and over make it right? The NRC is for grad schools; that has already been stated and is true. It is called the National Research Council, after all. USNWR is also for grad programs. Revealed Preference, as far as I know, is for the entire university, not individual departments. So none of this has a thing to do with supporting a flawed undergrad ranking program. And you still refuse to address the clear flaws I pointed out, because obviously you cannot.</p>

<p>You are joking, right? How do you know they are reasonable unless you are basing it on your opinion, which is unscientific and therefore meaningless, or on the ranking results themselves, which is as circular an argument as they come? Finally:</p>

<p>You apparently don’t know the meanings of words in context. When one assigns measured values, performs some sort of calculation, and produces a ranking based on those, that is a quantitative assessment, which in this case, if it had any validity, would say that the top-ranked program is of the highest quality. That does not make it a qualitative assessment, which implies judgments rather than measurements.</p>

<p>You can keep trying, collegehelp, but that was too outrageous to leave alone. Try making some relevant arguments. Oh, that’s right, there aren’t any.</p>

<p>Age of the institution and age of the program are probably not major factors but I think they ARE relevant. I have heard professors and Deans remark about the “maturity” of programs at various universities. I think they are referring to the fact that start-up programs still have to work out the kinks and that programs tend to improve with time based on years of experience.</p>

<p>Now you just sound desperate. You have no idea how major a factor it was, because his methodology was never published. I am sure professors and deans talk about the maturity of programs all the time (sarcasm). But I will give you a concrete counterexample nonetheless. When Texas was swimming in oil money, it started up various programs at its universities that it could not previously afford, and used its wealth to get professors to move there from MIT, Harvard, Caltech, Stanford, Berkeley, and other top programs. These departments were instantly recognized as first-rate. Yet the Gourman methodology would not recognize this, since the departments were still new. Fail. Besides, how much more mature is a department that is 150 years old than one that is 100 years old? I think they have both probably worked things out by then.</p>

<p>If legitimate sources agree with Gourman, why not just read the legitimate sources and skip the middleman?</p>

<p>There are no legitimate sources, especially regarding undergrad. The sources above are either for grad programs or for comparing one school to another in its entirety, not on a departmental basis. No one else tries to rank undergrad departments because it is an impossible thing to do.</p>

<p>Let’s take this back to first principles. Think about what makes for a good undergrad department. Do they offer the courses one needs to be well versed in that major, so that one can enter the work force or get into a top grad school? Virtually all of the top 100 USNWR schools do in mathematics, a foundation subject, and the same can be said for biology, chemistry, archeology, history, English, and on and on. Are the professors good at teaching it? There is no way to accurately know and certainly no way to quantitatively measure it; it is all anecdotal. Some people might use class size as a criterion. OK, that is potentially measurable, but no one has measured it on a per-department basis, and it changes year to year. One can keep going down this path, but it will always be the same. The material that needs to be taught to undergrads is quite similar at each school, and so trying to rank departments at the undergrad level is a fool’s errand. It is truly as simple as that.</p>

<p>I too am not a fan of Gourman, with no published methodology (other than broad-brush), etc., but I also have no huge problem when collegehelp trots it out when someone asks about undergrad department rankings, so long as it is accompanied by the appropriate caveats.</p>

<p>There have been a few times when particular subjects came up that not every school has, and in those cases the departments that I’ve read about as being “good” in those areas have also been high on Gourman’s ranking of them. So at least it might be one more source for someone seeking such information to look at. Obviously I would not claim it is “accurate”; the only “accurate” ranking would be one I performed based on my own criteria. Some rankings involve weightings that I do not subscribe to; in Gourman’s case, who knows what he’s doing.</p>

<p>But at least I think (though I certainly cannot prove it; it’s just a matter of faith, if you will) that some reasonably smart guy with some insight has made an attempt to put various data items together to come up with something he thinks is reasonable, based on his criteria. I think (but cannot verify) that he has some sort of methodology he’s using; I don’t think he is coming out with these (ridiculously) exact numerical rankings by using a random number generator, since some results I’ve noticed, particularly in the less generic fields, have made some sense.</p>

<p>“Accuracy” cannot be claimed, and clearly the guy seems to have personal weighting preferences that many do not subscribe to. It seems to me as if he puts great weight on things like highly regarded faculty, # of faculty, and # of courses offered in a subject as pertinent to undergrad education in a particular field, and gives less weight to things like the academic capabilities of one’s fellow students and class sizes, which others (but not necessarily everyone) may believe are more important. But that’s just my guess, because, as has been noted, who really knows what he’s doing.</p>

<p>Rugg’s also gets trotted out, and it is not really a ranking at all. It purportedly is a survey of students at a school about which departments are considered strong at that school. There is no data on how the surveys are done, their level of statistical significance, or how frequently they are done. And there is no norming between schools. So one has to take it for what it’s worth and no more. But I do think it’s something one can look at.</p>

<p>There really isn’t much out there. It’s true that much of what makes for a good undergrad department is difficult to quantify, or hasn’t been quantified by anyone, and people may also have different opinions about it.</p>

<p>But if someone is asking about strength of departments, these are about all that’s out there, so if someone wants to throw them out, with appropriate caveats, I for one have no problem. They <em>are</em> out there, after all.</p>

<p>Smart people can decide for themselves how much, if any, weight to give these and other tidbits of information. And they all may not agree.</p>

<p>Couldn’t disagree more. A bad system is better than none? I hardly think so, and in this case it is especially pernicious because it leads to the mistaken (IMO) belief that someone should pick an undergrad school based on some highly flawed ranking of a department. Anyone can publish a list of schools, which is about all that Gourman really is.</p>

<p>I guess if I have to say it a million times, I will just do that. There are no measurable parameters that determine the quality of an undergraduate department. In an undergraduate education there is no significant research component; there is no magic material or textbook that one school uses that the others cannot; there is nothing so significant about most departments for most majors that one can rank them at the undergraduate level. If you can tell me what parameters make the math (or history, or French) department at school X better than the one at school Y, and if those parameters can be measured, then I will believe that there can be a ranking system. Short of that, it is just a list.</p>

<p>Some people would censor information and “ban the books” based on their own beliefs; I think intelligent people can draw their own conclusions, which may or may not accord with mine. YMMV.</p>

<p>What makes a dept better: depth in many areas, top people as measured by their standing and respect in the discipline, good facilities, and a record of success by graduates.</p>

<p>monydad - please don’t put words in my mouth. I never said a word about censoring or burning anything. That is kind of a vile thing to say. People can certainly draw their own conclusions, but only when presented with all the facts, and often also when presented with opinions to which others have given a great deal of thought or which are based on extensive experience.</p>

<p>barrons - I rather agree with some of what you are saying in spirit, and I think there are other factors. But how do you measure “depth in many areas”? Or “standing and respect in the discipline”, apart from a professor’s reputation as a researcher? This last point is important, because there are numerous cases of Nobel Prize winners and other major award winners who either were no longer teaching undergrads or were famous for being horrible teachers. Of course, some are great teachers. I have no idea how you separate that out in a systematic fashion so that it pertains only to the undergrad experience. “Record of success”? I am interested in how that gets measured as well. Facilities certainly matter more in some disciplines than others. For straight math, history, English, and many others, it is rather hard to see how there could be a huge difference that would impact the undergrad experience.</p>

<p>You can look at employer recruitment / job placement as well as grad school placement in the sciences (the NSF has some data on that).</p>

<p>fallenchemist, sure, all universities can use the same book to teach a subject, but don’t you think that the university that not only uses the book but also has the professor who WROTE THE BOOK teaching the undergraduate students would be preferred by undergraduate students?</p>


<p>There are plenty of non-prize-winning math (and other) teachers who are not great teachers either. I have not found greatness to hamper teaching ability at all, and at least they are well known and can do more to help get one into a good grad school than some nonentity.</p>

<p>JohnAdams - There may be, occasionally, one prof within a department who wrote a book that is in wide use for undergraduate teaching. But in any case, would I not go to UCLA as a chem major because no professor there has written a book used in the classroom? Or pick a school I think is overall a “lesser” school because someone there wrote a book that became a hit in the classroom? I definitely don’t get the reasoning there.</p>

<p>noimagination - that is fine for the sciences as far as it goes, but it says nothing about long-term success. Bigger and more prestigious schools have more recruiters for sure, no doubt about it. But having spent a career in this area, I can tell you there are tons of successful people from schools of all stripes, and I have a person from Milliken who is supervising a more experienced person from Wisconsin. Anecdotal, of course, but I have seen enormously successful people from Hope, Missouri-St. Louis, Loyola New Orleans, etc. I am not convinced how good a measure that is five years out.</p>

<p>As for barrons’ comment, of course there are non-prize-winning teachers who are both good and bad. I didn’t think I needed to spell out every example. You are lucky if the renowned profs you have had were all good teachers. I have had one Nobel Prize winner and one Amer. Chem. Soc. medal winner as teachers, and they were both roundly condemned as awful, which I found to be true as well. I had one very well-known prof, widely considered a candidate for the Nobel, who was wonderful. But I think your theory fails on a couple of grounds, not the least of which is that under that theory no one who wants to go to a good grad school should go to an LAC. Whereas the actual results are that LACs send more students to top grad programs in the sciences (and possibly in other areas, I don’t know) on a per capita basis than the research universities do. Yet most LAC teachers are not well known or prestigious in their fields. And in any case, neither this nor the book-writing aspect is generally quantifiable or statistically compiled anywhere, so again it is useless for undergraduate rankings as a practical matter. I still wouldn’t do it if they were compiled somewhere, but they are not.</p>

<p>None of this has anything to do with the Gourman Report, which is the real subject. My posing of the other circumstances was really meant to be more rhetorical; if that is a discussion people want to have, there ought to be another thread.</p>

<p>There is a larger point to all this as far as my opinion goes. While I detest rankings, and feel this particular ranking attempt is the worst case, the real issue is that it should be moot, because one shouldn’t choose a college based on a single department unless the major is quite unusual. I mean, sure, if you are fairly certain you want to learn Vietnamese and the history of the country, etc., yet the school under consideration just doesn’t offer it, then eliminate it. But for the vast majority of majors and the majority of schools, basing one’s decision on trying to judge the reputation of a single department strikes me as quite misguided and even somewhat risky. There are a few exceptions where the vast majority of courses are in that discipline, such as Architecture or Biomedical Engineering, but not for most majors.</p>

<p>Eh, obviously it’s possible to do well from any school. It’s possible to do well after dropping out. It’s possible to do well without going to college at all. The question is whether your odds can be improved by attending one school or another. Sometimes the answer may be no. Sometimes it may be yes. I think recruitment info is relevant when trying to decide whether a more expensive school yields an improved ROI.</p>

<p>I think barrons’ point is that since teaching quality is impossible to predict based on faculty caliber, you might as well go with well-known faculty.</p>

<p>I think the criticisms of the Gourman Report originated from an article written by a librarian in the late 1980s and have been parroted by many since then. All I did was read the actual Gourman Report and compare its rankings with corroborating sources. That is how I gained confidence in the Gourman Report.</p>

<p>Gourman explains his method in general terms but does not give his exact weights. He used a somewhat different weighting system for each major. It is a very complicated system. I’d like to know the exact weights, but I am satisfied with his general explanation of his method because his results seem valid. By valid, I mean that his results are corroborated by independent sources.</p>

<p>Take math, for example. For the top 23 universities in the US News graduate rankings, the correlation with Gourman is very high (+.8). The correlation between Gourman and NRC rankings is almost perfect (+.9).</p>

<p>for math
US News grad rank, Gourman rank, NRC rank, school</p>

<p>4 1 3 MIT
3 2 4 Harvard
2 1 1 Princeton
2 6 6 Stanford
2 2 2 Berkeley
6 5 5 Chicago
7 16 11 Caltech
8 14 12 UCLA
8 11 9 Michigan
10 10 10 Columbia
10 7 8 NYU
10 8 7 Yale
13 13 15 Cornell
14 12 16 Brown
14 29 23 Texas
16 28 27 Northwestern
16 9 13 Wisconsin
18 17 14 Minnesota
18 18 22 Penn
20 23 19 Rutgers
20 27 17 UCSD
20 15 21 Illinois
20 25 18 Maryland</p>
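
<p>For anyone who would rather recompute those correlations than take them on faith, here is a minimal sketch in Python. To be clear, this is my own illustration, not collegehelp’s actual calculation: I am assuming a plain Pearson correlation on the rank numbers (which amounts to Spearman’s rho when ties are few), and the helper name pearson is mine.</p>

<pre><code># Recompute the rank correlations claimed above from the table data.
# Assumption: plain Pearson correlation on the ranks, which is close to
# Spearman's rho here since ties are few. An illustration only, not
# Gourman's (unpublished) method.
from math import sqrt

# (US News grad rank, Gourman rank, NRC rank, school), copied from the table
rows = [
    (4, 1, 3, "MIT"), (3, 2, 4, "Harvard"), (2, 1, 1, "Princeton"),
    (2, 6, 6, "Stanford"), (2, 2, 2, "Berkeley"), (6, 5, 5, "Chicago"),
    (7, 16, 11, "Caltech"), (8, 14, 12, "UCLA"), (8, 11, 9, "Michigan"),
    (10, 10, 10, "Columbia"), (10, 7, 8, "NYU"), (10, 8, 7, "Yale"),
    (13, 13, 15, "Cornell"), (14, 12, 16, "Brown"), (14, 29, 23, "Texas"),
    (16, 28, 27, "Northwestern"), (16, 9, 13, "Wisconsin"),
    (18, 17, 14, "Minnesota"), (18, 18, 22, "Penn"), (20, 23, 19, "Rutgers"),
    (20, 27, 17, "UCSD"), (20, 15, 21, "Illinois"), (20, 25, 18, "Maryland"),
]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

usnews = [r[0] for r in rows]
gourman = [r[1] for r in rows]
nrc = [r[2] for r in rows]

print(f"US News vs Gourman: {pearson(usnews, gourman):+.2f}")
print(f"Gourman vs NRC:     {pearson(gourman, nrc):+.2f}")
</code></pre>

<p>If the two printed coefficients land near +.8 and +.9, the figures quoted above check out, at least for this list of 23 schools.</p>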