Quality is defined as academic reputation, graduation/retention rates, and the percentage of classes with fewer than 20 students.
The author specifically excludes endowments/financial resources, which he considers double counting, since U.S. News uses that metric as well.
As the author explains:
“It is one thing to assign points for the effects of ample funding (smaller class size, better faculty), but the impact of funding is magnified when the magazine also assigns points simply for having a lot of money. What the magazine does is analogous to a well-heeled college applicant with a high SAT not only receiving credit for his high score but also for the money behind it.”
I really like this list. I think it focuses more on the important metrics. Initially he had the top 65; I asked if he could expand it to the top 100, which he nicely did. I’ve forwarded this list to my DD and told her that this is the list she should use as one of her main tools.
Thanks MDdad2012. I am the editor of publicuniversityhonors.com. If anyone has any questions, I’ll share what I have learned over the last 3.5 years of researching these programs.
I question the usefulness of this metric. Many classes can be taught quite effectively auditorium style (or even with a pre-recorded lecture), followed by small breakout sessions for one-on-one attention.
The proposed grading system will simply reward rich schools, giving short shrift to schools that might actually be effective at teaching but lack the fat endowments required to support low student-to-professor ratios.
@GMTplus7, that is true, though it’s also the case that some schools are more nurturing while others are more sink-or-swim. And profs can pay more attention to each student in a seminar-type class. Hard to capture that without some numerical threshold, however.
@GMTplus7, we use the smaller class sizes as a metric because honors programs cite these as being important; and, of course, U.S. News does too. In recognition that many larger lecture classes can have great profs, we do not count the U.S. News metric that penalizes schools for having a relatively high percentage of classes with more than 50 students.
In my experience, large classes are usually not taught this way. Large lecture-style classes in the sciences will usually have laboratory sessions and/or recitations, but those sessions are usually for the purpose of applied/skill-based knowledge in the field and are usually taught by graduate students (who can be great, but are inexperienced at teaching in general). Large classes in other fields sometimes have a discussion section attached but, in my experience, most often do not.
Besides, I think the question is not whether the class can be taught effectively but whether a small class is better than a larger class. I am willing to bet that it is. There is a lot of research at the K-12 level showing that smaller class sizes benefit students and are tied to measurable learning outcomes, but I’m not aware of comparable research at the tertiary level.
Obviously, it’s individual preference, but I really liked an undergraduate teaching environment with small classes, with no TAs (often a byproduct of large lecture classes), and with every instructor a full-fledged faculty member with an earned PhD. Real communication with the faculty was thereby ensured (and those enduring connections proved valuable in non-academic arenas, as well as scholastic ones).
A recent Gallup/Purdue study shows the importance of faculty mentoring and support for long-term success, in life and in careers. This kind of faculty contact is more likely in smaller classes. Having said this, it is also true that for the imparting of knowledge and for the enjoyment of a great lecturer, many large classes have great value.
One thing I like about my daughter’s honors program at the University of Maryland is the seminar classes. She can take interesting classes on a wide variety of subjects, limited to students in the Honors College; some of the classes have as few as a dozen students. USNWR doesn’t give as much benefit to the honors programs in the public universities.
Thinking back on my college experience, the quality of the professor was much more important than the size of the class. Since I am not one who enjoyed speaking up in class, I preferred a terrific lecturer to a class where the prof expected the students to lead the discussion.
Why are those the two most important categories?
Because they’re the most heavily weighted? So we’re going to weigh them even more heavily by discarding other factors?
I’m not sure I buy the “double counting” idea, either. Assigning 10% to “financial resources” is “double counting” … but assigning 22.5% to “undergraduate academic reputation” is not? Is reputation not influenced by factors that already are counted (like selectivity)?
Huh? According to USNWR, Chicago’s average freshman retention rate is 99% (1 point higher than Princeton’s or Stanford’s). Its four-year graduation rate is 88% (same as Princeton’s and 12 points higher than Stanford’s). What public institution has retention and graduation rates that high? Not Berkeley (97% RR, 72% GR). Not Michigan (97% RR, 76% GR).
At any rate, the results aren’t all that different. There’s a little bump up for a few universities, and a little bump down for a few others. In my opinion, the relative position of Brown, Chicago, Duke, etc., is a matter of perspective.
Perhaps more useful measures of retention and graduation rates would be the rates relative to expected retention and graduation rates for the school’s characteristics (primarily admission selectivity, but also such things as mix of majors). A super-selective school having higher retention and graduation rates than a moderately selective school is no surprise, but that has less to do with the school than the students (selection effect). But whether a school tends to retain and graduate at a higher or lower than expected rate (treatment effect) may be of more interest to prospective students, particularly students with top-end academic credentials choosing between highly selective and moderately selective schools.
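The selection-vs.-treatment distinction above can be sketched with a toy calculation: fit a simple regression of graduation rate on selectivity, then look at each school's residual (actual minus predicted). All school names and numbers below are made up for illustration; a real model would use more predictors (mix of majors, income, etc.).

```python
# Sketch: "expected" graduation rates from a simple least-squares fit
# on a selectivity proxy (median SAT). Hypothetical data only.

def expected_vs_actual(data):
    """data: {school: (median_sat, actual_grad_rate)} -> residuals."""
    xs = [sat for sat, _ in data.values()]
    ys = [rate for _, rate in data.values()]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    # Residual > 0: the school graduates students at a higher rate than
    # its inputs predict (a rough "treatment effect"); < 0: lower.
    return {name: rate - (intercept + slope * sat)
            for name, (sat, rate) in data.items()}

schools = {"A": (1500, 0.96), "B": (1400, 0.85),
           "C": (1300, 0.82), "D": (1200, 0.70)}
residuals = expected_vs_actual(schools)
```

On these made-up numbers, school C "over-performs" its selectivity (positive residual) even though its raw graduation rate is lower than A's or B's, which is exactly the comparison a top student weighing a less selective school would care about.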
US News does use selectivity (and other factors that presumably are reflected in the peer assessments). Why not eliminate the subjective peer assessment scores and spread its weight across the other factors?
An argument for keeping the PA scores would be that the peer assessors have insights into college quality that the modelers cannot easily capture from a few objective measures alone. A similar argument could be made for keeping “financial resources”. It assigns extra weight to all the other good things that money can buy (facilities, etc.)
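The "spread its weight across the other factors" idea is just proportional renormalization. A minimal sketch, using hypothetical weights loosely based on the percentages mentioned in this thread (22.5% reputation, 10% financial resources); the actual U.S. News weights and category names vary by year:

```python
# Hypothetical ranking weights; drop a factor and rescale the rest to 1.

def renormalize(weights, drop):
    """Remove the given factors and scale the remainder back to a total of 1."""
    kept = {k: v for k, v in weights.items() if k not in drop}
    total = sum(kept.values())
    return {k: v / total for k, v in kept.items()}

usnews_like = {
    "reputation": 0.225,
    "graduation_retention": 0.300,
    "faculty_resources": 0.200,
    "selectivity": 0.125,
    "financial_resources": 0.100,
    "alumni_giving": 0.050,
}
reweighted = renormalize(usnews_like, drop={"financial_resources"})
# reputation's share rises from 0.225 to 0.225 / 0.9 = 0.25
```

Note the side effect the earlier "double counting" objection points at: dropping one factor mechanically boosts every remaining factor, including reputation, by the same proportional amount.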
@tk21769, you are correct about Chicago’s retention rate. We have corrected the error, and thanks for bringing it up. So Chicago drops from 4 to 7 now, and Brown rises from 16 to 8, rather than 16 to 7. There are other minor changes for rankings 7 through 13 only. We also changed the word “duplicate” to “magnify.”
The reason we say that reputation and graduation rates are the most important is that we had to begin our comparison with metrics that U.S. News employs, and in terms of those metrics, academic reputation and grad rates are the most prominent.
You are correct in saying that Chicago’s four-year rate is better than Stanford’s, but Stanford has a 15% enrollment in engineering, while Chicago has no engineering majors. UC Berkeley has 12%. One reason most rankings use the six-year rate is that the four-year rate is extremely biased against schools such as Ga Tech, MIT, Stanford, Michigan, and Berkeley, etc., all of which have a lot of engineering majors.
Finally, most of the changes are fairly minor, but some are extremely large. Thanks again for the correction.
I find it interesting that most of the high-risers (ones that moved up 6 or more slots) were large publics. Is this due to not penalizing for large class sizes? Or was there another factor that played into it?
Yes, engineering and other high-requirement majors may have greater chance of extra semesters of school needed.
Measuring graduation rates in years also penalizes schools where many students take time off but still (for example) complete their degrees over 8 semesters spread across more than 4 years. Schools big into co-op programs, or which enroll many non-traditional or lower-income students who take time off to work to earn money to pay for school, may be the ones most penalized here. Additionally reporting the graduation rate in 8/10/12 semesters (or 12/15/18 quarters) may provide useful information here.
“One reason most rankings use the six-year rate is that the four-year rate is extremely biased against schools such as Ga Tech, MIT, Stanford, Michigan, and Berkeley, etc., all of which have a lot of engineering majors.”
The above fact has been pointed out many, many times to posters here on CC.