Undergrad question

Um, my statement was regarding the med-school rankings. How exactly do the med-school rankings differentiate GPAs across various undergrad programs? For example, the US News med-school selectivity indicator uses no information whatsoever about the undergrad programs the students attended: all it uses are the median GPA, median MCAT, and acceptance rate. Admitting students with high GPAs (who also have high MCAT scores) from easy undergrad programs while rejecting students with lower GPAs from difficult undergrad programs will therefore boost a med-school’s US News ranking.

https://www.usnews.com/education/best-graduate-schools/articles/medical-schools-methodology

I’m not sure what that proves: Cal State premeds will have lower median MCAT scores than UC premeds. Hence, a med-school looking to boost or maintain its ranking will likely admit relatively few Cal State students.

However, the relatively few Cal State students who do have high MCAT scores also enjoy the advantage of easier grade curves and therefore higher GPAs.

But the fundamental problem is that the rankings don’t factor in any such ‘need’ for an interesting background. Perhaps they should. Nevertheless, the bottom line is that med-schools - or law schools, for that matter - that admit too many ‘interesting’ candidates with relatively low GPAs and standardized test scores will suffer lowered rankings under the extant ranking methodologies.

Which only reinforces my central point: rankings influence adcom behavior. Perhaps they shouldn’t. But they do. As you observed, (some) law-school ranking methodologies do indeed incorporate bar passage rates and full-time legal employment rates; law-school adcoms are therefore well advised to admit students they think are likely to pass the bar and obtain full-time legal employment, in order to preserve the school’s ranking. But let’s be honest: if the law-school rankings stopped incorporating bar passage and employment rates into their methodology, then law-school adcoms would not be quite so keen to admit such students.

The upshot is simple: we shouldn’t be so naive as to believe that universities are entirely immune to the influence of ranking methodologies. They are influenced. And we should be willing to admit that. Indeed, academia is replete with stories of schools deliberately calibrating their programs - even employing sophisticated ‘Moneyball’-style statistical analytics - to engineer a higher ranking. For example, Northeastern has enjoyed a meteoric rise in the US News rankings over the last 20 years, from #162 to #39 today. That rise was engineered via tactics such as:

(1) No longer requiring that international applicants take the SAT, so that more of them can apply and hence more can be rejected, thereby ‘improving’ selectivity. International students also tend to have lower SAT scores, which means it is better for Northeastern that they not take the SAT at all than that they take it, score low, and thereby hurt Northeastern’s selectivity score.

(2) Accepting the Common Application so that, again, more students can apply, more can be rejected, and the selectivity score improves.

(3) Increasing the proportion of classes with enrollment capped at exactly 19 students. A key input to the US News ‘Faculty Resources’ metric is the proportion of classes with fewer than 20 students. So by offering more classes capped at 19, Northeastern boosts its Faculty Resources score.

Granted, offering classes capped at 19 students to the entire student body would be immensely expensive, for the university would need to hire more faculty and build more (small) classrooms to accommodate all of the students. However, it should be noted that US News treats all classes with 50 or more students equally (that is, a class of 50 students counts the same as a lecture-hall class of 500). In other words, instead of offering 10 classes of 50 students, you consolidate them into a single class of 500, which US News counts as just one large class. A university is therefore well advised to consolidate its classes of 50+ students into a few large lecture-hall classes, thereby reducing the proportion of its classes that have 50+ students.

A university can therefore optimize its US News Faculty Resources score by offering a high proportion of classes of at most 19 students that serve a fraction of the student body, with the remaining students served by a relatively small number of consolidated lecture-hall classes. One can then run an elementary optimization to find the mix of small and large classes that maximizes the score while still seating every student.
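The optimization described above can be sketched as a tiny brute-force search. To be clear, every number here (total seat-slots, the staffing cap on sections, the lecture size) is made up for illustration, and the ‘score’ is simply the share of sections with fewer than 20 students, a crude stand-in for the actual Faculty Resources formula:

```python
# Brute-force sketch of the small-class / lecture-hall mix described above.
# All parameters are hypothetical; the "score" is just the share of
# sections under 20 students, standing in for the real US News metric.

SEATS_NEEDED = 10_000   # total student seat-slots to fill (made-up number)
MAX_SECTIONS = 400      # staffing limit on total sections (made-up number)
SMALL_CAP = 19          # just under the US News <20-student threshold
LECTURE_CAP = 500       # a 500-seat lecture counts the same as a 50-seat class

best = None
for small in range(MAX_SECTIONS + 1):
    seats_left = SEATS_NEEDED - small * SMALL_CAP
    # ceiling-divide the remaining seats into giant consolidated lectures
    lectures = -(-seats_left // LECTURE_CAP) if seats_left > 0 else 0
    total = small + lectures
    if total == 0 or total > MAX_SECTIONS:
        continue  # infeasible under the staffing cap
    share_small = small / total
    if best is None or share_small > best[0]:
        best = (share_small, small, lectures)

share, n_small, n_lectures = best
print(f"{n_small} sections of {SMALL_CAP} seats + {n_lectures} big lectures")
print(f"share of sections under 20 students: {share:.1%}")
```

With these made-up inputs, the search lands on hundreds of 19-seat sections plus a handful of giant lectures - exactly the pattern described above: nearly every *class* is ‘small’ even though a large fraction of *seats* are in lecture halls.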

(4) Introducing the N.U.in program, whereby incoming students with relatively low GPAs and SAT scores spend their first fall semester abroad and become formal Northeastern students the following spring. Since spring matriculants are not included in the US News methodology, their low GPAs and SAT scores do not hurt Northeastern’s selectivity score.

(5) Large-scale networking with (basically, butt-kissing of) the Presidents, Deans, and Provosts of other universities, along with the school counselors at the nation’s US News gold/silver/bronze-medal-winning high schools: the very people who happen to fill out the surveys that determine a school’s US News Peer Assessment score.

I could go on and on. The upshot is that med-schools and law-schools, like undergraduate institutions, respond to the incentives created by ranking methodologies. Insofar as med-school rankings continue to compute selectivity from applicant GPAs without regard for the difficulty of the applicants’ undergrad programs, med-school adcoms have less incentive to consider that difficulty.