@PurpleTitan Since most of the tougher courses are in the first and second years, I can see that. It would be more of an apples-to-apples comparison to compare CC transfers vs. 4-yr students just for the years they were actually both there.
I was an EECS major, and grading was definitely easier in upper division classes (especially senior year courses). Very similar to graduate level classes (in that anything less than a B is interpreted as failing).
The UCs have deliberately been using a loophole to get around the law and in the process have substantially lowered the standards of UCB and UCLA. Today many of the graduates who transferred in later had HS SATs of 1600-1700 and simply aren’t up to the standard that used to represent these schools. Most people are unaware of how the once-high standards of UCB and UCLA have been so diluted.
I had one very good friend who attended Columbia GS. Had a very rough start in life, alcoholic father who died young, high poverty, scratch dirt western Pennsylvania, local crap high school. He went to GS after 4 years in the Army and a GED.
Brilliant guy. Ultimately went to Stanford Law School after Columbia, and what most impressed me was that he lived in NYC for part of his second year of law school and his entire third year. He earned approximately $70K in his third year working “part time” at one of the top 3 law firms in NYC, which was actually respectable money 25 years ago… He literally spent no more than two or three weeks in Palo Alto - and I should know (we shared an office - I also worked full time while living in Queens and attending a tippy top law school well outside of NYC).
Intelligence always finds a way. And don’t knock Columbia GS!
Since they have enough college records by the time of transfer, their SAT scores are considered irrelevant. After all, how well someone has done in college so far is a better predictor of how well they will do in college later than anything high-school-based like SAT scores or high school grades.
The California Master Plan for Higher Education that defines the “business plan” for California’s public colleges and universities, including the heavy use of the transfer pathway from CCCs to CSUs and UCs, is from 1960, predating the first use of the term “affirmative action” in 1961, although the first use of that term in reference to actions that may be considered as racial preferences (in recruiting and hiring practices of government contractors) was in 1965. Such measures apparently did not start in colleges until the late 1960s (and even then, most colleges were much less selective than now, at least in terms of academic credentials).
Also, the racial/ethnic group that is the most transfer heavy at UCs is… white.
Upper division courses commonly have higher grades, but that is because (a) the weakest students have dropped or flunked out of college by that time, and (b) the students have sorted themselves into majors in subjects that they are best at (e.g. the student who struggled to earn C grades in math and physics but earned A grades in something else may decide to change from an engineering major to that other subject).
So Columbia has a school of general studies.
Georgetown has a school of foreign service with entering stats higher than stats for some of its other undergraduate schools. Most state flagships have honors colleges; some of them also have distinct undergraduate business schools, ag schools, and engineering schools, all with more or less different student profiles. Many of these universities (public and private) have legions of graduate students who vie with undergrads for the attention of top faculty.
The CDS is a simplifying statistical abstraction that masks these complexities. The US News national university ranking simplifies the simplification, in order to force an apples-to-apples comparison of complex institutions.
Arguably, outcome-oriented rankings side-step some of these issues and are less vulnerable to gaming.
With respect to criteria, weights, and data sources, they aren’t immune to controversy, either. What counts as a significant outcome? Does Donald J. Trump represent a good outcome for the University of Pennsylvania? Is Ted Kaczynski a bad outcome for Harvard College? IMO colleges are first and foremost knowledge factories (not leadership training camps); the most important outcomes, for purposes of college academic quality assessment, are academic outcomes. So I’d consider per capita PhD production a better indicator of undergraduate program quality than the “American Leaders” data. I’m sure some people here would disagree.
Still, as far as I can tell, any plausible rearrangement of criteria, weights, and data sources is likely to result in a relatively minor re-shuffling within the set of top 50/75/100 colleges. The lion’s share of resources (top faculty, top students, new facilities, etc.) goes disproportionately to the richest schools. Most academic programs at most of these places look rather similar (at least until you fly below the radar of the overall rankings).
TK, do the outcome statistics differentiate between actual doctorate degrees (a PhD in Art History from U Michigan) vs. the low-residency doctorates in counseling, educational administration, or criminal justice? I have seen a significant uptick (just in real life, not in a statistically controlled environment) of adults “getting a PhD,” which, for some, means a ten-year, leisurely stroll through some online courses, a “dissertation” which even they admit is a cut-and-paste summary of tertiary and secondary sources, and an online “defense” of said dissertation. The motivation is typically to qualify for higher pay for the last few years of their career as a public employee, since the formula for their pension is heavily weighted toward their last year’s comp and there is a big bump in some job categories for having a doctorate.
I don’t think these “degrees” are necessarily the kind of academic output you want clouding the stats. If it’s a onesie, twosie phenomenon, then sure, it’s noise, and ignore it. But I know dozens of folks in their 50’s and 60’s who are completing PhD’s in order to boost their pay for those last few years in the workforce. And online universities make it very, very easy.
I know someone getting a PhD in Human Resources. I’ve been in corporate HR for over 30 years and have NEVER met or worked with someone with a doctorate in HR (a PhD in Clinical Psychology, yes). I can’t imagine what the “rigor” of those programs must look like!
The NSF/WebCASPAR data base tracks a massive amount of information about PhDs earned over many years in many fields. You can search by “broad” or by “detailed” field. With a little effort you can, for example, compare the number of econ, physics, or math PhDs earned by alumni from specific colleges, or from colleges with various Carnegie codes.
Here and there on the Internet you can find summarizing rankings of earned doctorates in a few fields. CC posters often cite the ones posted by Reed College. https://www.reed.edu/ir/phd.html
Sometimes these citations provoke debates about how the results should be normalized (e.g. by institution size or by program size).
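To make the normalization debate concrete, here is a minimal sketch of why it matters, with entirely made-up college names and numbers (not actual NSF/WebCASPAR data): a per-capita rate can reverse the ordering that raw doctorate counts suggest.

```python
# Hypothetical illustration of normalizing earned-doctorate counts.
# All names and figures below are invented for the example.

colleges = {
    # name: (PhDs later earned by alumni, total bachelor's degrees awarded)
    "College A": (120, 2000),    # small college, fewer PhDs in absolute terms
    "College B": (300, 12000),   # large university, more PhDs in absolute terms
}

def phd_rate(phds, grads):
    """PhDs per 100 bachelor's graduates."""
    return 100 * phds / grads

rates = {name: phd_rate(p, g) for name, (p, g) in colleges.items()}

# Raw counts favor College B (300 vs. 120), but the normalized rates
# favor College A (6.0 vs. 2.5 PhDs per 100 graduates).
print(rates)
```

Whether to divide by institution size, program size, or something else is exactly what those debates are about; the arithmetic itself is trivial, but the choice of denominator changes the ranking.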
Sue22, your argument about the UC simply defies reality. I am the first to admit that 50-100 points on an SAT is of little meaning, but the difference between groups of students who score 1650 and 2300 is massive, with virtually no overlap. What you are really saying is that if you went down to the Greyhound station you could find a number of drivers who could become jet pilots. This just isn’t true, and there are reams of outcome studies proving this. The UC system is precisely a way to admit kids who could never gain admission based on normal measures of performance. Do you really think Harvard Law School or Stanford Medical School would treat a GPA from a CC as equal in value to one from MIT or Caltech? Yet that is precisely what the UC does. The UC does this because it’s a legal way to cheat and get around the law: this rule virtually guarantees all the transfers will come from the CCs, since it’s so easy to get a 4.0 or a 3.9 and standardized tests are ignored. When you transfer to a top private school, your HS grades and scores are required. Sue, if it walks like a duck and quacks like a duck _____________.
Re: “outcome-oriented rankings” – I don’t think there should be a ranking at all, at least not a hierarchical numerical ranking. I think it would be much more valuable to be able to tie the rating to specific metrics, chosen by the user – and to have a rating system that is a numerical score or star rating rather than a ranking. Because a student who is planning to study engineering has different needs and concerns than a student planning to study journalism. And on many metrics, multiple schools are functional equivalents – so it makes no sense to rank them.
That’s why I found the Princeton Review approach so much more useful – but their underlying methodology is largely survey-based, so it carries no particular indicia of reliability – and it isn’t really based on outcome metrics either.
But the point is: it’s not particularly useful information to anyone that Northeastern U. now has a US News ranking of #40 and is a whole lot harder to get into than back in the day when it was my daughter’s safety – what is important is that the school has unique offerings with its co-op program that may or may not be advantageous to its students. And Northeastern is a very different educational environment than Tulane – which shares the #40 spot in the rankings – although the two schools also share some commonalities. It’s a lot more important to provide information of value on what the respective schools actually offer than some sort of arbitrary determination that they are somehow better than RPI but not as good as Case Western, because US News is going to put all the universities on a rank-ordered list (but consign the LACs to a different list, because Dartmouth is somehow a completely different sort of place than Williams).