Does the relative size of a department indicate quality?

<p>On graduate study: at the top schools that receive most of the attention on CC, most students go on to some sort of advanced degree. Many of those may not be doctoral programs in their fields of study, but medical school for bio majors or business school for economics majors certainly should count.</p>

<p>The real problem with this method is that it uses unproven proxies for indices of quality. Assume that students are attracted to top departments, so give the departments credit for having large numbers of majors. Assume that having lots of professors is good, so give the departments credit for that. Assume that the distribution of SAT scores across the departments is uniform, so give a department credit for high average scores at the school. What about a place like CMU, where the overall SAT average is lower than at some other top universities, but the average in the SCS is astronomical? Do you penalize the SCS for the lower scores of the artists, or do you assume that the college of fine arts is even better than it is because of the computer geeks? Is the SAT score even remotely meaningful for the quality of the drama department?</p>

<p>The real tests of department quality would be outcome measures- how much do the students learn in their fields? Since there are few direct tests of this, how about outcome-based proxies? For many fields, PhD production would be relevant for comparing one college to another. A strong record of PhD production means that the students are talented enough to make it through a doctoral program, and that their college experience both prepares them for graduate school and makes them want to take that route. If one added professional school attendance for those fields in which this is relevant, then the result would be an outcome measure that did reflect the experience of most students at top colleges. Of course, this would not be useful at all for fields in which further education is rare and of little importance- again, the art, drama, and music areas come up, for people who will be performers, not academics. For these fields one would need other evidence of career success- are they working in their fields, tenured in top orchestras, winning acting awards, having major art exhibitions? Very hard to get data.</p>

<p>To focus on top students at top colleges, look at the number who win prestigious academic awards- Putnam competition, NSF fellowships, etc. Checking how many undergrads publish research papers would be helpful, but this data is almost impossible to obtain.</p>

<p>The NSSE is entirely proxies. The things it asks sound like they should be important, but do people who have these sorts of engagement with the faculty and their studies really end up better off for it? Is there proof? This also would penalize the technical schools, and those broader universities with large numbers of technical majors, since engineering tends to score low on these measures, but places like MIT turn out critical thinkers in huge numbers.</p>


<p>Collegehelp already addressed this…but in some cases a small department (in a large university) can be due to the selective nature of that department. I think it is a bit of an overstatement to say that small departments are due to “lack of interest” in that department. Another thing to consider when looking at size is attrition. A department may be large simply because a large number of people declare that as a major or enroll in its courses. The true test would be how many people graduate with degrees in that program. Also, some programs are larger simply because ALL students are required to take courses within that department (e.g. English…all freshmen at most colleges have at least some kind of English requirement).</p>

<p>Collegehelp, roughly 60% of Michigan students take the SAT and about 70% take the ACT (30% take both). </p>

<p>Of the 60% who take the SAT, the mid 50% is 1240-1400 and of the 70% who take the ACT, the mid 50% is 26-31 (closer to 32 actually).</p>

<p>But at Michigan, like at most elite flagship state universities, SATs are often misleading. Most state universities only count the best score in one sitting, and most students attending state schools never take any prep courses and seldom prepare for the SAT/ACT. </p>

<p>If students attending state universities took as much time to prepare for standardized tests (including prep courses) as students in their private counterparts and if state universities reported the average of the highest score per section rather than the highest score in one sitting, average SAT scores at state schools would be significantly higher. </p>

<p>So statistics are not always very telling.</p>

<p>thumper-
I agree that the number of majors is the most meaningful number. If a department is small because of its selective nature, then that should be accounted for by factoring in the SAT scores of the students in the department (instead of what I originally proposed…the overall SAT). Yes, in the case of English the number of faculty might be large because the English department services the entire college. Number of faculty is not a good substitute for the actual number of majors.</p>

<p>afan-
I think I understand what you are saying…you make some good points. SAT is probably not relevant in the drama department or in any program based on special talent in art, music, drama, and so on. So, my idea falls apart in those cases. I don’t know of any “score” for talent and creativity similar to SAT scores. </p>

<p>And, I concede that, for the non-arts, the SAT in the particular department is more appropriate than the overall SAT because, as you point out, SAT scores are not uniform across departments. The School of Comp Sci at CMU should be assessed by its own SAT scores.</p>

<p>Regarding outcome measures: I think learning is the outcome that is most immediately relevant to the purpose and function of higher education. GRE exams in specialty fields measure that for some majors but not all. And, not everybody takes the GREs. PhD production, career success, grad school admission, job placement, winning awards…they are probably all related to the production of learning, which is the fundamental goal and function of the college. The outcomes you mentioned are indirect and many other things come into play between the education and those outcomes. They are good proxies for learning, though, and I can’t think of better ones except possibly:
(1) the graduation rate in a particular department. For example, engineering is one of the stronger majors at Cornell and its graduation rate is higher than that of the overall university.
(2) the average final cumulative GPA in the department…but then grading standards vary
(3) the GRE subject scores…but not everybody takes GREs and GRE subject tests are only given in certain fields</p>

<p>Moreover, all this information is difficult or impossible to obtain. I was looking for something simple but valid that everybody could use.</p>

<p>You say the indicators I proposed are unproven. But, they could be proven (or refuted). I know how they could be proven but I don’t have the time to do it. </p>

<p>For example, I could make the phone calls or do the digging to come up with the number of majors in economics, the total enrollment in Arts and Sciences, and the SAT 75th percentile for the majors in economics. Then, calculate the “quality score” for about 20 or 30 colleges. I could then tell whether the “quality scores” have “face validity”- that is, whether they make sense. I respect common sense. I could also correlate the “quality scores” with the outcomes you proposed (PhD production, career success, grad school admission, and so on) and with GRE scores, departmental grad rates, and departmental GPA. I could also see if the scores agree with published rankings like US News and Gourman.</p>
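
<p>A minimal sketch of what that check might look like, assuming the “quality scores” and one outcome measure (say, PhDs produced per 100 graduates) were already in hand; the college names and figures below are placeholders, not real data.</p>

```python
# Hypothetical check: does the proposed "quality score" track an outcome
# measure such as PhD production? All figures below are made up for
# illustration; they are not real data.
from scipy.stats import spearmanr

quality_score = {"College A": 55, "College B": 41, "College C": 33, "College D": 24}
phd_per_100   = {"College A": 8.2, "College B": 9.1, "College C": 4.0, "College D": 3.5}

names = sorted(quality_score)                 # fixed ordering of colleges
scores = [quality_score[n] for n in names]
outcomes = [phd_per_100[n] for n in names]

# Rank correlation: a value near +1 would support the indicator,
# a value near 0 would refute it.
rho, p = spearmanr(scores, outcomes)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```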

<p>The size and SAT indicators haven’t been proven, but they could be proven. The relative size idea is like a theory that should be tested. It won’t be perfect but imperfect things can still be useful (thank God).</p>

<p>Alexandre-
Thank you for the correction and clarification. It would be important to obtain the right numbers before taking the results seriously.</p>

<p>You can get the number of degrees conferred by major from IPEDS or from Common Data Sets, but many universities may not even know the SAT scores by department. They could figure it out if they wanted to, but they may not have bothered. </p>

<p>If you use department size (even if you use number of majors, not number of faculty members, to correct for the English department phenomenon) you would have to use it only across comparable institutions. You could compare one large state university to another, but not to a private university or an LAC. The state universities have large engineering departments because there is a public policy mandate to do so. They get lots of top students, not because their engineering departments are better than the top privates- maybe they are, maybe they are not- but because the cost to the student is unbeatable. Not many LACs have engineering departments, and those that do are small. This does not mean they are bad departments, but engineering-oriented students rarely go to LACs. </p>

<p>The bigger problem is the underlying assumption that it is meaningful to produce a formula that reports the “best” departments. For some students, the availability of advanced courses and large research groups is critical. For other students, a small environment, small classes, and close faculty contact are much more important. For two such students the lists of “best” departments may be entirely different. As soon as you choose weights for student/faculty ratio, class size, number of courses, or number of faculty, you decide whether the formula will favor the large university model or the small college model. If you could derive your weights from outcome measures, then they would be based not on face validity- which is often wrong- but on demonstrated effects. The problem is that, as illustrated in this discussion, it is hard to define the optimum outcome measures, let alone to get the data.</p>

<p>For example, many top graduate schools consider the subject GREs to be of very little value. Perhaps they help weed out those with terrible scores, but they do little to distinguish good applicants from great ones. So these admissions committees do not consider the GREs to be a particularly useful outcome measure. Some say the questions are too trivial, some that they do not test depth of thinking or breadth of knowledge, some say the content itself is wrong for assessing potential scholars.</p>

<p>I revised my method somewhat based on the posts in this thread. I went to the IPEDS COOL website and got degrees-awarded data in one subject…English. I used degrees awarded as a substitute for the number of majors. I limited the comparison to similar types of schools…LACs. I chose LACs that were similar in overall selectivity to see if the “quality score” could distinguish among schools that look alike by that measure.</p>
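
<p>For concreteness, a score of the form (degrees awarded ÷ total enrollment) × SAT 75th percentile lands on roughly the scale of the numbers below, so here is a short sketch of that calculation; the formula should be read as an assumption, and the inputs are placeholders rather than the actual IPEDS figures.</p>

```python
# A plausible reconstruction of the revised score: relative department size
# (degrees awarded / total enrollment) weighted by selectivity (SAT 75th
# percentile). Both the formula and the input figures are assumptions made
# for illustration; they are not the actual IPEDS or Common Data Set numbers.
colleges = {
    # name: (English degrees awarded, total enrollment, SAT 75th percentile)
    "LAC A": (75, 1600, 1420),
    "LAC B": (60, 1750, 1450),
    "LAC C": (30, 2400, 1440),
}

def quality_score(degrees, enrollment, sat75):
    """Relative size of the department, weighted by selectivity."""
    return degrees / enrollment * sat75

ranking = sorted(colleges, key=lambda n: quality_score(*colleges[n]), reverse=True)
for name in ranking:
    print(f"{name:10s} {quality_score(*colleges[name]):.0f}")
```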

<p>Here are the results. Do they agree with what you know about the English programs at these LACs?</p>

<p>college score</p>

<hr>

<p>Kenyon 66
Davidson 55
Haverford 51
Colby 47
Middlebury 42
Barnard 41
Oberlin 40
Vassar 36
Colgate 34
Wellesley 33
Bowdoin 33
Whitman 24
Washington and Lee 19
Macalester 15
Claremont McKenna 13</p>

<p>I know Davidson’s program is a darned good one, especially with the Patricia Cornwell scholarship and similar opportunities. I also know Kenyon, Oberlin, Whitman, and Vassar have great English programs. I don’t know about the others.</p>


<p>DS is a music major. To be honest, their SAT scores would not give much of an indication of the quality of the program. What would give an indication is the types of programs these kids do in the summers (e.g. Aspen, EMF, Music Academy of the West, etc), and what they do after graduation. Kids in competitive music programs are accepted based on their auditions, not their SAT scores.</p>

<p>“Here are the results. Do they agree with what you know about the English programs at these LACs?”</p>

<p>FWIW, Oberlin has a huge conservatory of music. None of these other schools do. Are Oberlin’s Conservatory of Music students/degrees included in its count, thereby diluting the percentage of its English majors? I know English is considered a very strong major there; no idea how these places would compare.</p>

<p>Who knows what special programs may distort the numbers for some of these other schools.</p>

<p>I still do not accept the basic premise that this is an appropriate way to compare departments between schools.</p>

<p>Let’s say, e.g., Wellesley has an unusually large econ department, and as a result its other departments are relatively a smaller portion of Wellesley’s total. Does that make Wellesley’s English program worse than, say, Barnard’s because in this hypothetical Barnard doesn’t have an unusually huge econ department and so has a higher percentage of English majors? No, it doesn’t. The whole concept is flawed, IMO.</p>

<p>SATs are an “input”, not an “output”. They don’t tell you anything about what happens there - there is no measure of “value-added”. And, in these schools, as the CollegeBoard notes, at this level, in aggregate, they are simply a measure of family income. A “1400” is simply a “1200” plus $100,000 in family income. So what you end up doing is measuring the quality of a department by the average income of the families of the students who spend time in it.</p>

<p>Since the vast majority of students - everywhere - with virtually no exceptions - do not continue to Ph.D.s in the subjects in which they majored - the most reliable way to find the quality of a department is to learn something about the experience of the average student studying within it. Otherwise you get wonderful anomalies, like the music department at Williams having a higher rate of med. school acceptances than the biology department. So if, for example, med school admissions rates are the measure, is the “quality” of Williams music department higher than, say, Northwestern’s biology department?</p>

<p>“The NSSE is entirely proxies. The things it asks sound like they should be important, but do people who have these sorts of engagement with the faculty and their studies really end up better off for it?”</p>

<p>It’s proof in itself - after all, students are the consumers. The consumer gets to decide whether heading to graduate school, or becoming a manager at McDonald’s is a valid career goal, and whether the school helped or hindered her in attaining it. And whether the quality of education itself was worth its salt.</p>

<p>Econ is a particularly difficult one to measure, I think. It’s the largest or one of the largest majors at most places, and unlike something like Egyptology, it gets students bound for PhDs, students bound straight for business, students bound for MBAs, students applying to law school, and even some headed for engineering. Is the same type of program the “best” for all these different types of students?</p>

<p>For students heading to PhDs in econ, the best programs would be math heavy, which implies that there should also be good math or applied math departments. I’d also look at where the faculty got their PhDs, what the research opportunities are for undergrads, and the size of the department compared to the number of students in the major. And, I’d look at the teaching reviews which some universities make available online to the public.</p>

<p>When you’re done with all that, these measures are not static. As an example: Columbia’s econ department has been overwhelmed by the huge increase in econ majors in recent years. This year they hired EIGHT new faculty, in addition to some hires last year. Most of these were the department’s first pick – which is an indication that people in econ know that Columbia is a department now on the way up. But it will probably be years before that kind of upward momentum is registered in rankings. For example, the NRC rankings of graduate departments, I believe, may only be done once every ten years.</p>

<p>So in the meantime, would you downgrade Columbia’s econ department because of inexperienced teachers, or teachers unfamiliar with the way Columbia goes about teaching economics? (I wouldn’t, but it would seem to be a valid line of reasoning.)</p>

<p>No – because they actually hired more senior faculty than junior. That in itself is an indication that the department is capable of drawing some “names” because it is seen as in the ascendancy. However, that raises the question of whether the faculty can teach or not – which is why I’d look at reviews wherever possible. At Columbia, the student-run review site is public and extensive. But, also, the econ department posts some info from student evaluations filled out at the end of each semester. It’s one of the few departments that puts this info out there for everyone to see.</p>

<p>I think that in econ – as opposed to Egyptology – there are lots and lots of places where the education is adequate to good, by the way. Because students study it with so many different goals in mind, I think it comes back down to that elusive “fit” rather than department size.</p>

<p>Yup. Know about “names”. Taught (as a TA) at a U with lots of Nobel Prize winners, who gave their yearly lecture, did have one “open house” with undergraduates, and then weren’t seen for the rest of the year (except by selected graduate students). Some students loved the experience knowing that those folks were around; others were dreadfully disappointed that they weren’t to be seen. Which is why I think students are in the best position to decide.</p>

<p>My d. majors and minors in two subjects - music composition and Italian - where there are wild differences in “quality” (including “quantity” of faculty, and offerings) among the LACs, even among what are normally considered to be the very best ones. The differences would have nothing at all to do with entering SAT scores, little to do with future Ph.Ds, and very, very little to do with where faculty got their Ph.Ds (and, in the case of composers, even whether they actually had any.) There are top 10 LACs which, in each of these areas, wouldn’t break the top 50 in “departmental quality”, regardless of metrics considered above.</p>

<p>In the case of economics, among the top 50 LACs where there is no graduate study, the faculty all come from the same pool, the same schools. Whether one ends up teaching at one place rather than another might have more to do with whether there happened to be an opening in the particular year the potential faculty member was applying, and whether they stayed might be a matter of departmental politics (including whether the chair graduated from the same school the younger faculty member did, and whether their wives, husbands, or lovers happened to get along.) There are many stories…</p>

<p>Another problem with ranking departments by how many students choose to major in them: </p>

<p>Think about how students actually end up majoring in a given department at a given college. First they choose the college- a combination of where they applied, where they were admitted, and where they decided to go. These choices are complicated, and are driven by many considerations other than the top department in their area of interest. Many high school seniors know that they do not know their future majors, and do not worry about it that much in college choice. Others think they know their majors, only to change, perhaps several times, before they graduate. So students end up at a given college for many different reasons. Once there, few transfer to a different college due to even large differences in quality of majors. Very few would transfer based on small differences. So consider, for most students, the choice of college to be fixed about 2-3 years before they make a final decision about major. The decision about major is then made from the options available, and the quality of the department as compared to other institutions is a very minor consideration.</p>

<p>Suppose the student is at, say, Oberlin. Does she really say to herself “I really want to major in English, but Oberlin’s English is only 2/3 as good as Kenyon’s, so I will major in physics instead”? </p>

<p>In reality students say to themselves “I now know that, considering interest, ability of this major to help me build the kind of career I would like, availability of courses, opportunity to study abroad, fit with my extracurricular interests, etc, etc, etc, of the choices available at my college, I want to major in X”. X might well be physics, but not because Oberlin has the “best” physics department. Had she known that she wanted to major in physics when selecting colleges, this student might not even have applied to Oberlin. She might have had an entirely different list of colleges. She might have gone to, say, Harvey Mudd, and ended up NOT majoring in physics because the first two years of college would have been totally different. This would not mean that physics at Oberlin is better than physics at Harvey Mudd.</p>

<p>For a given student, the fact that a department has a good reputation among the other undergrads probably is one of many factors. But “good reputation” will depend on what these undergrads value. Does it mean small classes, big labs, lots of students get investment banking jobs, lots of students end up teaching poetry in high school? </p>

<p>To respond to your list. For reasons we have been discussing, I don’t think it is meaningful to rank the English departments. However, I do agree that places like Kenyon and Middlebury market themselves to future English majors as great places for writing and thinking about literature, while someplace like Claremont McKenna does not. So English major types are more attracted to some colleges than others, and some of them end up actually majoring in English (and some in physics). </p>

<p>Out of curiosity, what happened to Amherst, Swarthmore, and Williams? Did they not make the cut as English departments, or are their admissions too selective to fit in with these other places?</p>

<p>repeating myself, but to flog the horse once more:</p>

<p>You’re unduly rewarding departments that are contained in one-dimensional, non-diverse schools.</p>

<p>here’s a hypothetical:
School A has a good English department, period. The other departments are not distinguished.
School B has an at least equal English department, actually, but has a lot more going for it: great music department, great sciences, etc.</p>

<p>Students come to school A for English, period, because that’s all it’s got going for it.
Students come to school B for its excellence in a much broader range of fields. So the proportion of English majors there is lower than at School A.</p>

<p>You conclude, test scores being similar, that school A is better in English than school B.</p>

<p>I say all it shows is that school A is a lopsided, one-trick pony. It may be a relatively strong department at that school, but that in and of itself does not make it better than the department at another school that has other things going on besides English.</p>

<p>Hypothetically speaking of course.</p>

<p>thumper-
Regarding your observation that SATs are probably not relevant to music majors. You may be right, although I think music majors tend to do well on SATs. SAT may not be as relevant when assessing the quality of a music program (or any of the arts, for that matter). </p>

<p>monydad-
You point out that Oberlin has a large conservatory, which makes its English department enrollment proportionally smaller. My above calculation for Oberlin’s English department might therefore be invalid. If I subtract 500 students from Oberlin’s total enrollment, the “quality score” for Oberlin’s English department increases from 40 to 49, raising its rank from 7th to 4th in the above list. It was easy to correct. This isn’t a flaw in the concept. It’s important to identify the appropriate denominator by learning about the colleges on your list. The same is true of universities. When rating the English department in a university, it would be important to exclude conservatories, engineering schools, communication schools, and so on from the denominator.</p>
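
<p>Since the score scales inversely with the enrollment used in the denominator, the correction is a simple rescaling. A quick sketch, assuming the same score form as the earlier sketch and a total enrollment of about 2,700 including the conservatory (the exact figure isn’t given here):</p>

```python
# Denominator correction: excluding the conservatory's ~500 students from
# Oberlin's total enrollment rescales the score upward. The 2,700 total is
# an assumed figure for illustration, not the actual enrollment.
original_score = 40
total_enrollment = 2700          # assumed, including the conservatory
conservatory_students = 500

adjusted = original_score * total_enrollment / (total_enrollment - conservatory_students)
print(f"adjusted score: {adjusted:.0f}")   # about 49
```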

<p>You also point out that a school with an exceptionally large economics department would make an English department proportionally smaller. Say there is a college with 2000 students and 200 English majors (10%). If the econ department were cut in half from, say, 200 to 100 students, the proportion of English majors would only rise by about half a percentage point (200/1900 ≈ 10.5%). I think that is negligible.</p>

<p>mini-
SAT scores do not measure income. They measure how smart and hard-working students are. If SAT scores are associated with income, it is because the parents may have been smart and hard-working too. Parents pass those qualities on to their children. In fact, I think smart, hard-working parents sometimes have lower incomes because they don’t value money and material possessions as much. There may be an inverse relationship between income and ability among the intelligent, in my opinion, although they are not likely to be poor. </p>

<p>You say SAT scores do not tell you what happens there [in the department]. You say they are an input, not an output. But SAT scores DO tell you something about what goes on in the department. Better colleges and departments attract students with higher SAT scores. Educational quality causes higher SAT scores (over time). My premise is that better departments attract more and better students, relative to the size of the college.</p>

<p>sac-
All good suggestions…good ways to evaluate the quality of a department.</p>

<p>afan-
I excluded Amherst, Swarthmore, and Williams because they were so selective. I initially chose colleges in the 1430-1460 range (there were quite a few of them) and then added Kenyon (1420) and Middlebury (1500) because I knew they had good reputations in English and I wanted to see what would happen. Kenyon…Kenyon Review. Middlebury…Breadloaf School of English. I did not know much about English at Davidson, Haverford, or Colby. I chose colleges similar in selectivity to find out whether this method could discriminate among schools that looked similar overall.</p>

<p>True, students change majors. I have two comments about that. If the destination department is bad, some students will change colleges rather than change majors within the college. Moreover, changing majors happens at every college…it is more or less equalized among colleges. Changing majors does not explain why College A has 20% English majors and College B has 2% English majors.</p>

<p>Bottom line: the method seemed to work pretty well this time. Kenyon and Davidson floated to the top.</p>

<p>“You say SAT scores do not tell you what happens there [in the department]. You say they are an input, not an output. But SAT scores DO tell you something about what goes on in the department. Better colleges and departments attract students with higher SAT scores. Educational quality causes higher SAT scores (over time). My premise is that better departments attract more and better students, relative to the size of the college.”</p>

<p>In the range of schools you are talking about, not in the least. </p>

<p>“SAT scores do not measure income. They measure how smart and hard-working students are. If SAT scores are associated with income, it is because the parents may have been smart and hard-working too.”</p>

<p>O…kayyy…so now you are judging the relative quality of the department by how smart and hard-working the parents of the students attending are. Hmmm. Anyhow, the measure of income, according to the CollegeBoard, is quite clear. All else being equal, they can predict SAT scores (in this upper region) based on family income. Now while that can’t be denied, a better argument might be that the quality of the department can be predicted based on the income of students’ families. It’s an interesting hypothesis. Never seen it tested, though.</p>

<p>The real test of this relative size theory is whether it works. I think it worked pretty well for the LAC English departments.</p>

<p>I have also done the calculation for Biology departments.</p>

<p>BIOLOGY
Haverford 56
Whitman 47
Colby 38
Davidson 37
Oberlin 33 (excludes conservatory from enrollment)
Bowdoin 26
Claremont McKenna 25
Macalester 23
Wellesley 20
Washington and Lee 19
Colgate 19
Kenyon 17
Barnard 14
Vassar 12
Middlebury 11</p>

<p>The ranking is very different from the ranking for English. So, it discriminates among departments at similar colleges. I think Haverford is known for a strong Biology department so that is perhaps some validation. Does the Biology ranking agree with what you know about these biology departments?</p>

<p>Here is the English ranking again, excluding the conservatory at Oberlin from its enrollment.</p>

<p>ENGLISH
Kenyon 66
Davidson 55
Haverford 51
Oberlin 49 (excludes conservatory from enrollment)
Colby 47
Middlebury 42
Barnard 41
Vassar 36
Colgate 34
Wellesley 33
Bowdoin 33
Whitman 24
Washington and Lee 19
Macalester 15
Claremont McKenna 13</p>

<p>How would your earlier rankings for Economics compare to your revised one for the same major at LAC’s?</p>

<p>For the English and Biology majors, the absence of Amherst, Williams, Swarthmore, and Pomona distorts the validity of the rankings. Also, for Biology, where does Harvey Mudd rank? </p>

<p>BTW, your criteria for selectivity are not timely.</p>