This was posted on another CC forum. Oberlin looks fairly tough. Tougher than Stanford, Columbia and Yale. Anyone know if this Boalt Hall evaluation was ever updated?
In 1997, UC Berkeley's Boalt Hall School of Law did a ranking of the toughest schools to get an "A". Are they still ranking the schools accordingly?
The L.A. Times ran an article on 7/16/97, "Grading the Grades: All A's Are Not Created Equal," on how the admissions dept. at UC Berkeley's Boalt Hall re-formulated law school applicants' G.P.A.s. The formula ranked each college according to how its students performed on the standardized law board exam, the LSAT, and how common a given G.P.A. is at that school.
The following is UC Berkeley's ranking of the toughest schools to get an "A":
J. Hopkins 87.5
Wm & Mary 84.5
Bryn Mawr 83.0
U. Pennsylvania 83.0
Clrmt. McK. 82.5
Notre Dame 81.5
Wash. U. 81.0
U. North Carolina 79.5
Whitman C. 79.5
UC Berkeley 78.5
UC San Diego 78.5
SUNY Bing 78.0
Trinity U. 77.5
Boston College 77.0
UC S. Barbara 77.0
U. Washington 76.5
Santa Clara 76.0
Geo. Wash. 75.5
UC Davis 75.5
Michigan State 75.0
Boston University 74.5
Cal Poly SLO 74.5
Penn State 74.0
SUNY Albany 73.5
Ohio State 73.0
UC Irvine 73.0
SUNY Buff 72.0
SUNY Stony 72.0
Loyola Mary. 71.0
Arizona St. 69.5
CS San Diego 69.5
Catholic U. 69.5
UC Riverside 68.5
CS Chico 68.5
New Mexico 68.0
San Diego 68.0
CS Northridge 67.0
CS San Fran. 66.0
CS Sacramento 65.0
CS Fullerton 63.0
CS Hayward 63.0
CS Long Beach 63.0
CS San Jose 63.0
CS Fresno 62.5
St. Mary's 61.5
CS LA 58.5
San Francisco 57.5
Or actually, maybe the interest group filed a lawsuit, or threatened to.
But it wasn't literally or necessarily "toughest A", IIRC. They analyzed the performance of Boalt students from the various colleges, and these are the factors that seemed to best predict their subsequent performance at Boalt. But other factors could have led to the specific results they found in some cases; for example, different majors at the same school may have different grading characteristics.
It's interesting, though, that so many schools, including Oberlin, were considered tougher grade-wise than their own undergraduate university. They scored Oberlin at 83.0 and Berkeley at 78.5. That speaks well for the opinion (at that time, anyway) of the quality of an Oberlin education.
I'm going with the assessment of monydad. From my own anecdotal information this list looks suspect. Monydad seems to have concrete knowledge that the results are questionable. I won't use it to draw any conclusions.
"That speaks well for the opinions (at that time, anyway) of the quality of an Oberlin education."
... or poorly for Boalt Hall's analytics. Again, it was not intended as a blanket comment on a school's entire grading policies, just what Boalt witnessed, for their very specific purpose, at Boalt. It wasn't their "opinion"; it was the result of their perhaps questionable statistical analysis. They likely did not have enough students from many colleges in their program to generate results with a high level of statistical significance.
My D1 double majored at Oberlin, and had probably a full point difference in GPA between the courses in her two majors. Given that level of discrepancy, it seems evident to me that blanket statements about student GPAs for institutions as a whole can be highly misleading. Even more so when multi-college universities, with drastically different students and programs of study, are considered in the mix as well. How many Cal engineers applied to Boalt? (Answer: proportionally not that many.) And what would Cal's stats have looked like to Boalt if the engineers' (lower) GPAs had been thrown into the mix in equal proportions?
BTW I have no "concrete knowledge", I just recall reading something about this a long time ago.
Location: Brown '12 (Sc.B. Math-CS, Classics), University of Kansas '14 (M.A. - Classics)
I know that, of letter grades at Brown, roughly 67% are As. Thus, Brown's placement in this list makes me wonder if the data is rather suspect. Not to say that Brown is easy, but if the standard is "difficulty to get an A" rather than "amount people are learning," I don't think this is right.
Frankly, I wonder how many schools really have enforced, effective oversight policies over the grades their individual professors are giving out in aggregate. It seems more like individual professors do pretty much whatever they want, based on their personal standards and, where research is not king, a desire not to get poor teaching evaluations. My wife is an adjunct in a grad program here; her school sent out some grading guidelines, which she says are routinely disregarded. A friend of mine who's a prof at a different school has told me that he has no incentive whatsoever to enforce rigorous grading discipline, quite the contrary, given teacher evaluations.
Blanket comments about grades for institutions as a whole can be pretty meaningless if the institutions as a whole are not really effectively controlling those grades.
I don't disagree. What impresses me is that this was an analysis from the University of California, yet they didn't seem to inflate their ratings of UC campuses. I don't dispute the potential flaws in their analysis; I'm just impressed with what appears to be a lack of bias toward their own. That leads me to believe they made an honest attempt at an impartial analysis, even though some people obviously don't like the outcome.
One can also legitimately say the USNWR rankings were developed via a flawed process, certainly a highly subjective one. Yes, they used objective criteria, but the criteria selected were the result of subjective bias. That's why their rankings differ from those of other periodicals.
"What impresses me is that this was an analysis from U of Cal, but they didn't seem to inflate their ratings of U Cal campuses."
That may well have also impressed the California taxpayers who were filing suit.
Though IIRC it may have been an Asian or Hispanic anti-discrimination group who said that somehow this practice resulted in de facto bias against their constituency. Don't ask me how, or why; that's just what I think I recall.
I want to go back to the original study. Here is what the numbers beside each of the schools' names mean:
Thus, under Boalt's procedures, applicants' grades were put into a computer and adjusted according to a formula that assigned each college a "rank number" based on how its students performed on the LSAT.
Under this method, applicants from schools that were ranked 79 and above--such as Yale, Brandeis, Stanford and Harvard--were given extra points in the admissions process, while those from institutions ranked 71.9 and below--including most of the CSU campuses--had their GPAs lowered. Those schools with ranks in between neither gained nor lost points.
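In code form, the tiered adjustment described above might look something like the sketch below. The 79.0 and 71.9 cutoffs come straight from the article, but the article doesn't say how many points were actually added or subtracted, so the boost/penalty amounts here are made-up placeholders, not Boalt's real values:

```python
def adjust_gpa(gpa, school_rank, boost=0.2, penalty=0.2):
    """Sketch of Boalt's tiered GPA adjustment (1997, since abandoned).

    Thresholds (79.0 and 71.9) are from the L.A. Times description;
    the boost/penalty sizes were not reported and are placeholders.
    """
    if school_rank >= 79.0:
        return gpa + boost      # e.g. Yale, Brandeis, Stanford, Harvard
    elif school_rank <= 71.9:
        return gpa - penalty    # e.g. most of the CSU campuses
    else:
        return gpa              # middle band: GPA left unchanged
```

So a 3.5 from J. Hopkins (ranked 87.5) would be bumped up, a 3.5 from CS Long Beach (63.0) would be bumped down, and a 3.5 from Michigan State (75.0) would pass through untouched.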
The numbering system was an attempt to take into account the quality of the undergraduate institution. Don't everyone get mad at me now, accusing me of saying that some institutions are better than others. I'm not saying that; I'm just conveying the argument for the weighting idea Boalt had in place:
"We think if you didn't consider the quality of the undergraduate school, it would be a travesty," said Michael Rappaport, dean of admissions at UCLA's Law School. "Let's be honest--a 3.5 from MIT in math doesn't mean the same thing as a 3.5 from another school, because of the level of competition in the student body."
Yes, Boalt had to abandon its system under the threat of a lawsuit
"The adjustment of GPAs was not supported by a sound educational purpose," said Joseph Jaramillo, a staff attorney for the Mexican American Legal Defense and Educational Fund, which filed a federal complaint against Boalt and its admissions policies this year. "It had the effect of disadvantaging minority students who attended many of the schools that were adjusted downward."
MALDEF filed its complaint with the U.S. Department of Education's Office for Civil Rights in March, alleging in part that Boalt discriminated against women and minorities by adding weight to the GPAs of applicants from elite Eastern colleges with few minority students.
"Thus, under Boalt's procedures, applicants' grades were put into a computer and adjusted according to a formula that assigned each college a "rank number" based on how its students performed on the LSAT."
Thanks for that correction; it's been a long time since I read about this.
Again, though they mentioned "math major at MIT", this did not adjust for specific majors, or courses, within each institution, and was limited to those students at each institution who applied to Boalt. Not everyone at each institution takes the LSAT, or applies to Boalt. And the proportion of students who do will vary significantly among institutions.
Another thing it seems like Boalt wasn't doing was taking into account the grade inflation that already exists at some of these institutions. Harvard and Brown are notorious for grade inflation, I don't see how Boalt can justify giving applicants from these schools a further boost to their GPA.
Thanks for the clarification, Pea. I think the argument that the Boalt system discriminated against women is complete hogwash. Most of the "elite Eastern colleges" are predominantly female, which includes Oberlin. White female, but female nonetheless. Moreover, Bryn Mawr, Smith, and Wellesley are TOTALLY female, and they were all scored above 79. Not a single all-female college was scored below 71.9. So much for that argument. It's ridiculous on its face.
Now, when you look at the colleges below 71.9, there might be a prima facie case regarding racial minorities. There's only, what, four public schools scored above 79? And I would think the public schools have higher percentages of minority students.
If you're going to discriminate, rationally or irrationally, don't be stupid enough to put it down on paper, no matter how impressed you might be with your own genius. If it's not attorney work product, it will be discoverable. Anyway, I'm betting the Boalt Hall admissions committee and those of other top law schools do in their heads what they were stopped from doing on paper.