USNWR definitely wants to punish those it catches “cheating.” The rankings are a huge cash cow for USNWR, and anything that takes away from or otherwise damages the goods is not something USNWR wants.
“But in Part I it claims a S/F ratio of 6 to 1, based on 9,866 students and 1,689 faculty.”
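For what it’s worth, the quoted figures do work out to roughly 6 to 1 once rounded; a quick arithmetic check:

```python
# Checking the quoted figures: 9,866 students and 1,689 faculty.
students, faculty = 9866, 1689
ratio = students / faculty
print(round(ratio, 2))           # the actual ratio, about 5.84
print(f"{round(ratio)} to 1")    # what gets reported after rounding
```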
These are deceitful things colleges are doing, no doubt, but do you know who gets hurt by this the most? The students, who think that being in class with five other kids is going to be the norm, and it won’t be.
Anyone interested in a very detailed account of a college “massaging” its data to improve its ratings should read Chapter Two of Mitchell Stevens’ book “Creating a Class”, where an elite school (not named, but outed as Hamilton) does this. For example, by changing the cutoff grade they use for “Students in top 10% of class” from 90 to 89.5, they raised the reported percentage of students from 53% to 55%.
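To make the cutoff trick concrete, here is a minimal sketch with made-up grade data (the book doesn’t publish the underlying distribution) showing how nudging the threshold moves the reported percentage:

```python
# Hypothetical grade averages for a class of 20 students (invented data).
grades = [92, 91, 90.5, 90, 90, 89.7, 89.6, 89.5, 88, 87,
          86, 85, 84, 83, 82, 81, 80, 79, 78, 77]

def pct_top(grades, cutoff):
    """Percentage of students at or above the cutoff used for 'top 10% of class'."""
    return 100 * sum(g >= cutoff for g in grades) / len(grades)

print(pct_top(grades, 90.0))   # stricter cutoff
print(pct_top(grades, 89.5))   # relaxed cutoff: more students now qualify
```

Nothing about the underlying class changes; only the definition does, and the headline number goes up.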
(Not making judgments just providing reference. I still love Hamilton and yeah I think they all do it)
That’s what I was alluding to in post #145.
It also seems to me that many universities (including some public universities) provide dubious average GPAs. For 2017-18, UCSD reports an average entering GPA of 4.08. The CDS instructions state that the GPA is to be reported “using 4.0 scale”. To me, that means that the highest possible GPA is 4.0. It also means that a 4.0 average for an entire entering class is implausible, but that is the average GPA reported by UC Davis in 2014-15 and 2015-16. For 2017-18, UMCP reports its average HS GPA as 4.26. Whether these schools are intentionally ignoring the instructions to inflate their rankings, I don’t know; they may simply be passing on a weighted average calculated for internal reporting purposes that they can’t be bothered to recompute, or maybe Davis is just rounding down from a higher weighted average. Granted, some state universities do heavily emphasize GPA in admissions, so their true GPA numbers may indeed be in the same ballpark as some of the Ivies etc. (the ones that report any GPA at all, that is). For 2017-18, Princeton reported 3.91.
Me too. But,
Miami has been using weighted GPAs for years. I raised that fact a long time ago and was told on CC that they are still using a 4.0 scale… think about it.
The bigger issue with UC is the % of frosh in top deciles of class. Even UC admits that the % number reported to the CDS is an “estimate”, since they do not have that data. Most California high schools do not rank, and UC does not ask for it (so they could not calculate it even if they wanted to).
Correct, the UCs do not have or track actual class rank determined by the high school. Most likely, they are estimating based on the data they use for ELC.
Yes, GPA using “4.0 scale” reporting in the CDS is inconsistent across colleges and universities. It would be better if the CDS explicitly stated “unweighted GPA using 4.0 scale” to make the numbers more comparable. (But then it is entirely possible that some colleges and universities do not have this information, if they take weighted GPAs off of transcripts at face value, recalculate only weighted GPAs, or just look at transcripts holistically.)
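As a sketch of why the reported numbers diverge, here is a toy recomputation (transcript and weighting scheme invented for illustration) showing how the same grades yield an unweighted GPA capped at 4.0 but a weighted GPA well above it:

```python
# One hypothetical transcript: (letter grade, honors/AP flag).
transcript = [("A", True), ("A", True), ("B+", False), ("A-", False), ("A", True)]

POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0}

def gpa(transcript, weighted=False):
    # A common weighting scheme adds 1.0 for honors/AP courses,
    # so the weighted average can exceed the nominal 4.0 ceiling.
    total = sum(POINTS[grade] + (1.0 if weighted and ap else 0.0)
                for grade, ap in transcript)
    return round(total / len(transcript), 2)

print(gpa(transcript))                 # unweighted: never above 4.0
print(gpa(transcript, weighted=True))  # weighted: can exceed 4.0
```

Two colleges reporting “GPA on a 4.0 scale” from the same applicant pool could thus publish very different averages depending on which of these they pass along.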
The inconsistencies and gaps in reporting of high school record based measures (GPA and class rank) in college admissions stats probably lead both USNWR and many posters on these forums to rely too much on SAT/ACT scores to assess students’ admission competitiveness, even though SAT/ACT scores are commonly less important than high school record based measures and only rarely more important. The inconsistency that can occur with SAT/ACT scores (e.g., with respect to handling multiple test scores) is much less than with high school record based measures.
What a racket! The university president at our local State U. has a hefty bonus connected to the USNWR ranking - - PLEASE!
counterpoint…
I think the USNWR is a good thing and provides a scorecard that external stakeholders can use to judge the performance of institutions. What do you have without it? A few trustees who are fed data from the inside on how well they are doing.
Is the algorithm without faults? No. But they do have a calibration (Harvard, Princeton, and/or Yale have to be near the top). It gives potential grant award decision makers, external donors, potential new profs, and potential new students a data point that is vetted by an external organization (and one that theoretically is unbiased, other than the calibration check) that tells you which schools are doing things well and which ones have room for improvement.
As far as @glido 's statement, trustees are usually pretty bright folks and the question of compensation is never taken lightly. I don’t know what state U. you are talking about, but as far as the health of the institution goes, you could do far worse than picking an unbiased arbiter to help decide your success.
As long as football coaches are being paid millions and millions of dollars I’m not going to get excited about university president compensation.
There was a kerfuffle in my local paper a few years back about how much the Dean of the Medical School at our state flagship makes. It died out after the letters to the editor all pointed out that the basketball coach, assistant coach, and various athletic “officials” (some with a HS education) were all out-earning the doctor by significant multiples.
^That’s the kind of thing that makes me crazy about the USNWR rankings. How does the fact that Nick Saban made 11 million dollars in a year (an amount equivalent to the average annual salary of well over 100 UA assistant professors) make Alabama a better school?
I think a better alternative is the old Princeton Review and Barron’s method of having tiers, coupled with the CDS and school scorecards that measure performance on specific measures. Throwing it all into a big vat and coming out with a measurement claiming that school X, which scored 87, is better than school Y because it scored 86 is what drives this impetus to game the rankings.
When my kids were applying to colleges, we relied largely on Princeton Review. I found their metrics much more usable and informative, and liked the fact that the book listed colleges in alphabetical order. And definitely much more useful for getting a sense of fit.
You won’t find conflicts between most ranking systems. Will #3 in one be #9 in another? Sure, but generally speaking they almost always agree on the groupings. The top 25-30 are usually the same suspects. I think when you see an outlier, that would require additional investigation, but in general a school that is good in one is almost always good in another.
I don’t think GPA is used in undergrad USNWR rankings, so I doubt the intention is to inflate USNWR rankings. I suspect it’s more a matter of listing the GPA that they use internally, which varies from college to college. For example, does an A+ = 4.3 or 4.0? Does a 99% indicate 4.4 or 4.0? In UCD’s case, I expect they are using the UC GPA, which is the same GPA listed for all UCs at http://admission.universityofcalifornia.edu/freshman/profiles/index.html . I expect students are more likely to see this UC GPA listed on the UC website than the varied-methodology CDS GPA.
It’s not just a public vs private thing. For example, in the 2014 CDS, Stanford reported their GPA as “4.16 weighted.” In the following year, the GPA decreased to 3.95. Harvard’s current CDS indicates a GPA of 4.18. Based on the lawsuit data, I’m guessing this means a converted GPA of 78.8 out of 80.
Correct. https://www.usnews.com/education/best-colleges/articles/ranking-criteria-and-weights indicates that direct measures of selectivity are 12.5% of the ranking. Of that 12.5%, 65% is SAT/ACT scores, 25% is class rank, and 10% is admission rate. Graduation and retention rates, which strongly correlate to admission selectivity, are another 22.5% of the ranking.
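Multiplying out the cited weights gives the effective share of the overall score for each selectivity input; a quick sketch of that arithmetic:

```python
# Effective share of the overall USNWR score for each selectivity input,
# per the weights cited above: 12.5% for selectivity, split 65/25/10.
selectivity = 0.125
components = {"SAT/ACT scores": 0.65, "class rank": 0.25, "admission rate": 0.10}

for name, share in components.items():
    print(f"{name}: {selectivity * share:.4%} of total ranking")
```

So SAT/ACT scores carry about 8.1% of the total score on their own, while the admission rate carries only about 1.25%.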
I do have to say that I give credit to USNWR for weighting the acceptance rate so lightly, as it’s one of the most gameable ingredients in the ranking stew.
“Yes H/Y are relatively new to engineering, but with the amount of money they can put into and the quality of students they get, how long do you think it will be before they are on par with Stanford.”
Harvard, maybe in half a century. Yale, never. Remember that Stanford has a ton of money too, and they’re not standing still or neglecting engineering.
As for law school admissions, maybe only YLS is small enough and can get enough high LSAT scores to afford to be holistic. Though HLS recently stopped requiring the LSAT. Still, get a 173 on the LSAT and even if you go to Podunk U, your chances at YHS (and definitely at CCN, along with massive scholarships to other T14s) are quite good.
This is true despite what the snobs may want to believe.
(Neither Harvard nor Yale is “relatively new to engineering.” Both of them established engineering and practical science schools in the 1840s. Yale awarded the first American PhD in engineering in the 1860s. Harvard closed its school around 1900, in part because the creation of the Massachusetts Institute of Technology had made it unnecessary, and Yale slowly merged its school’s curriculum with that of Yale College throughout the first half of the 20th Century, finally terminating its separate existence shortly after WWII. Unlike Harvard, Yale continuously maintained an undergraduate School of Engineering within Yale College from the 1930s on. It just didn’t receive a lot of investment between 1970 and 1990 (a bad time to disinvest in engineering).)
Yale had ABET accreditation for chemical, electrical, and mechanical engineering from 1936 until 1965. It regained ABET accreditation in those programs in 1982, 1973, and 1985, respectively, and has maintained them to the present:
http://main.abet.org/aps/AccreditedProgramsDetails.aspx?OrganizationID=163
Harvard has had ABET accreditation in engineering science since 1962, and added electrical and mechanical engineering in 2013:
http://main.abet.org/aps/AccreditedProgramsDetails.aspx?OrganizationID=22
I’m a little surprised that no one has mentioned this. In addition to all the actions @Sue22 mentioned in post #10 (which included delaying admissions), taking in a lot of transfers (USC takes in roughly 3K traditional freshmen in the fall each year yet graduates roughly 5K undergrads each year), maintaining a Z-list (Harvard and the U of C definitely do that), and using ED and SCEA to boost yield (almost every elite private), you could also maintain a non-trad division that is the same as the traditional undergrad divisions in pretty much every way (including cost) and roughly 1/3 the size of the traditional undergrad student body. That would be Columbia with GS. You get an extra 2K tuition-paying students who pay the traditional rate (Columbia GS fin aid tends to be poor) to support your faculty without having to count their stats for USNews. Neat trick.
BTW, Harvard, WashU, NU, among others also have non-trad divisions but those divisions graduate a tiny fraction of the overall undergraduate totals and the tuition for those programs is also a fraction of that of the traditional undergrad program (Penn also has LPS but I don’t know how big that is). Plus, unlike Columbia GS, students in those non-trad divisions aren’t allowed to take many of the traditional undergrad classes.
When you come down to it, there are only 2 elite privates that don’t play games with admissions in order to bolster their USNews rankings:
MIT and Caltech.
To be fair, I believe Columbia GS is a nonpareil among standalone non-trad schools (with the price tag to match). Certainly a terrific opportunity for non-trad students (if you can afford it). But it surely helps Columbia in USNews to have essentially 8K full-time undergrads (all paying the traditional undergrad rate) while only having to report stats (and admit from high school) for 6K of them.