Now moving over to Social Mobility, a category that many on the board are not as enthusiastic about. The national best score was an 86.8.
HBCUs are well represented on this list, as are a number of regional colleges. Agnes Scott, Loyola New Orleans, Trevecca Nazarene, and Bob Jones were the only non-HBCU private schools to make my regional top 25, while LSU was the only state flagship to make it.
| School | State | Social Mobility Score |
| --- | --- | --- |
| Spelman | GA | 47.5 |
| Clayton State | GA | 45.8 |
| Tuskegee | AL | 43.4 |
| UNC-Greensboro | NC | 42.3 |
| North Carolina A&T | NC | 40.6 |
| Agnes Scott | GA | 39.4 |
| George Mason | VA | 38.0 |
| East Carolina | NC | 37.7 |
| UNC-Charlotte | NC | 36.3 |
| U. of Memphis | TN | 35.4 |
| Xavier U. of LA | LA | 35.0 |
| Valdosta State | GA | 34.5 |
| Georgia Southern | GA | 34.3 |
| North Carolina Central | NC | 32.9 |
| Dalton State | GA | 32.7 |
| U. of Louisiana-Monroe | LA | 31.4 |
| Kennesaw State | GA | 30.1 |
| Loyola New Orleans | LA | 28.9 |
| Old Dominion | VA | 27.6 |
| Louisiana State | LA | 27.5 |
| Trevecca Nazarene | TN | 27.4 |
| Western Carolina | NC | 27.3 |
| Bob Jones | SC | 27.1 |
| Virginia Commonwealth | VA | 25.8 |
| U. of Alabama-Birmingham | AL | 25.5 |
Here’s the methodology:
Social mobility salary-impact score (67%): This multiplies “Years to pay off net price” and “Salary impact” by a metric reflecting the proportion of students at the college who receive Pell Grants. “Years to pay off net price” and “Salary impact” are given equal weight within the social mobility salary-impact score.
“Years to pay off net price” combines two figures—the average net price of attending the college, and the value added to graduates’ median salary attributable to attending the college. The value added to graduates’ median salary by a college was estimated on the basis of the difference between the median earnings of the school’s graduates and the median earnings of high-school graduates in the state where the college is located and across the U.S. in proportion to the ratio of students who are in-state versus out-of-state. We then took the average annual net price of attending the college—including costs like tuition and fees, room and board, and books and supplies, taking into account any grants and scholarships, for students who received federal financial aid—and multiplied it by four to reflect an estimated cost of a four-year program. We then divided this overall net-price figure by the value added to a graduate’s salary, to provide an estimate of how quickly an education at the college pays for itself through the salary boost it provides. Our analysis for this metric used research on this topic by the policy-research think tank Third Way as a guide.
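To make that arithmetic concrete, here is a rough Python sketch of how “Years to pay off net price” appears to be computed, per the description above. The function name, the simple blending of state and national high-school earnings, and all the input figures are my own illustrations, not WSJ's actual data or code.

```python
# Rough sketch of the "Years to pay off net price" arithmetic described
# above. All inputs are hypothetical; WSJ does not publish exact figures.

def years_to_pay_off_net_price(
    avg_annual_net_price: float,   # tuition/fees/room/board/books, net of aid
    grad_median_salary: float,     # median earnings of the school's graduates
    hs_median_salary_state: float, # median earnings of HS grads in the state
    hs_median_salary_us: float,    # median earnings of HS grads nationally
    share_in_state: float,         # fraction of students who are in-state
) -> float:
    # Blend the high-school baseline by the in-state / out-of-state mix.
    hs_baseline = (share_in_state * hs_median_salary_state
                   + (1 - share_in_state) * hs_median_salary_us)

    # Value the college adds to a graduate's median salary.
    salary_value_added = grad_median_salary - hs_baseline

    # Estimated total cost of a four-year program.
    four_year_net_price = 4 * avg_annual_net_price

    # Years of salary boost needed to recoup the cost.
    return four_year_net_price / salary_value_added

# Toy example: $15k/yr net price, $55k grad salary vs. a ~$34k HS baseline.
print(years_to_pay_off_net_price(15_000, 55_000, 34_000, 36_000, 0.8))
# -> roughly 2.9 years
```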
“Salary impact” measures the extent to which a college boosts its graduates’ salaries beyond what they would be expected to earn regardless of which college they attended. We used statistical modeling to estimate what we would expect the median earnings of a college’s graduates to be on the basis of the exam results of its students prior to attending the college and the cost of living in the state in which the college is based. We then scored the college on its performance against that estimate. These scores were then combined with scores for raw graduate salaries to factor in absolute performance alongside performance relative to our estimates. Our analysis for this metric used research on this topic by the policy-research think tank the Brookings Institution as a guide.
We multiplied each of these scores separately by a metric reflecting the proportion of students who receive Pell Grants at each college, to reward colleges that both take in a high proportion of students from lower family incomes and do a great job of boosting their salaries, while minimizing costs.
Social mobility graduation-rate impact score (33%): This multiplies “Graduation rate impact” by a metric reflecting the proportion of the students who receive Pell Grants.
“Graduation rate impact” is a measure of a college’s performance in ensuring that its students graduate, beyond what would have been expected of the students regardless of which college they attended. We used statistical modeling to estimate what we would expect a college’s graduation rate to be on the basis of the exam results of its students prior to attending the college and the proportion of its students whose family income is $110,000 per year or higher. We then scored the college on its performance against that estimate. These scores were then combined with scores for raw graduation rates to factor in absolute performance alongside performance relative to our estimates.
We multiplied this by a metric reflecting the proportion of students who receive Pell Grants at each college, to reward colleges that both take in a high proportion of students from lower family incomes and do a great job of making sure they graduate.
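Putting those pieces together, the overall category score seems to reduce to something like the sketch below. The 67/33 split, the equal weighting inside the salary component, and the Pell-share multiplier are per the description; the 0-to-1 normalization of each subscore and the final 0-100 scale are my assumptions.

```python
# Sketch of how the overall Social Mobility score seems to be assembled
# from the methodology above. Normalization details are guesses.

def social_mobility_score(
    years_to_payoff_score: float,  # normalized 0-1, higher = faster payoff
    salary_impact_score: float,    # normalized 0-1, higher = bigger boost
    grad_rate_impact_score: float, # normalized 0-1
    pell_share_metric: float,      # metric reflecting Pell Grant share, 0-1
) -> float:
    # Salary-impact side: equal weights inside, then scaled by Pell share.
    salary_component = 0.5 * (years_to_payoff_score + salary_impact_score)
    salary_component *= pell_share_metric

    # Graduation-rate side, likewise scaled by Pell share.
    grad_component = grad_rate_impact_score * pell_share_metric

    # 67% / 33% split between the two sides, reported on a 0-100 scale.
    return 100 * (0.67 * salary_component + 0.33 * grad_component)
```

Note that because the Pell-share metric multiplies (rather than adds to) the outcome scores, a school can only score well by both enrolling a high share of Pell students and delivering strong outcomes, which matches the stated intent.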
Last category currently available: Best Value. The best score nationally was a 100.
This list will probably feel more comfortable for a number of regular CC readers, but there were still surprises here. Dalton State, George Mason, West Virginia, Clayton State, UNC-Charlotte & UNC-Greensboro, LSU, Ole Miss, and Georgia Southern are names that get relatively little playtime on the board, but they are amongst the best value options in the South, per WSJ.
| School | State | Score |
| --- | --- | --- |
| Georgia Tech | GA | 98.2 |
| UNC-Chapel Hill | NC | 96.1 |
| Vanderbilt | TN | 92.1 |
| Dalton State | GA | 91.3 |
| U. of Georgia | GA | 90.6 |
| Davidson | NC | 89.9 |
| George Mason | VA | 89.2 |
| Virginia Tech | VA | 88.7 |
| NC State | NC | 87.9 |
| U. of Virginia | VA | 86.4 |
| West Virginia | WV | 84.5 |
| Duke | NC | 84.3 |
| William & Mary | VA | 83.9 |
| Emory | GA | 83.7 |
| Washington & Lee | VA | 80.4 |
| Clayton State | GA | 79.5 |
| UNC-Charlotte | NC | 77.7 |
| UNC-Greensboro | NC | 77.4 |
| Clemson | SC | 76.0 |
| U. of Alabama-Huntsville | AL | 73.9 |
| Louisiana State | LA | 72.4 |
| U. of Mississippi | MS | 71.5 |
| U. of Richmond | VA | 70.1 |
| James Madison | VA | 68.7 |
| Georgia Southern | GA | 68.5 |
Here’s the methodology:
Our methodology for this ranking was developed and executed in collaboration with our research partners College Pulse and Statista. This ranking scores colleges based on “Years to pay off net price.” This measure combines two figures—the average net price of attending the college, and the value added to graduates’ median salary attributable to attending the college. The value added to graduates’ median salary by a college was estimated on the basis of the difference between the median earnings of the school’s graduates and the median earnings of high-school graduates in the state where the college is located and across the U.S. in proportion to the ratio of students who are in-state versus out-of-state. We then took the average annual net price of attending the college—including costs like tuition and fees, room and board, and books and supplies, taking into account any grants and scholarships, for students who received federal financial aid—and multiplied it by four to reflect an estimated cost of a four-year program. We then divided this overall net-price figure by the value added to a graduate’s salary, to provide an estimate of how quickly an education at the college pays for itself through the salary boost it provides. Our analysis for this metric used research on this topic by the policy-research think tank Third Way as a guide.
We also display the following figures to provide context. These are the components of “Years to pay off net price” as explained above:
Average net price: The average annual overall cost of attending the college, including tuition and fees, room and board, and books and supplies, taking into account any grants and scholarships, for students who received federal financial aid.
Value added to graduate salary: The value added to graduates’ median salary attributable to attending the college. Estimated on the basis of the difference between the median earnings of the school’s graduates and the median earnings of high-school graduates in the state where the college is located and across the U.S. in proportion to the ratio of students who are in-state versus out-of-state.
This category seems to have very little spread compared to the others. That suggests to me that most of the facilities and social life ratings could be fairly similar and diversity may be accounting for most of the differences between schools.
One of the key issues is not just the 33% weighting but whether the scores being weighted are scaled from 0 to 100 or are clustered in, say, a 60 to 80 band. The effective weighting of clustered scores is much diminished compared to scaled scores, and the methodology doesn't seem to indicate what the variability is for each individual component.
That is something that I thought about, but I’m not entirely sure. Looking at the schools from all 50 states, there are schools that are racially diverse that are not scoring well.
On page 24 of 25 of the national rankings in this category (highest to lowest), there are some schools that get frequent mentions on CC. One is 40% White, 16% Asian, 8% Hispanic, 5% Black, 4% two or more races, 24% nonresident, and 3% unknown. Another is 51% White, 21% Hispanic, 14% Asian, 8% two or more races, 3% unknown, 1% Black, and 1% nonresident. A third is 49% White, 13% Asian, 13% Hispanic, 9% nonresident, 8% Black, 5% two or more races, and 3% unknown. These schools are scoring in the 50-52 point range, so being racially diverse in and of itself is not sufficient.
Since the diversity score includes the quality and frequency of interactions between races that might be a factor at play (as in, perhaps some of the “racially diverse” schools aren’t necessarily having great interactions between the groups). Additionally/alternatively, maybe some of the schools that don’t get a lot of mentions on CC are doing something right with respect to the facilities and/or community and social life.
This category does seem much more compressed than others. The national high score was a 75.5 (out of 100) with the lowest score a 42.1, and unfortunately there does not appear to be more granular data available at this time (as in, what the subscores were for each component of the overall score) but I believe that all of their scores are out of 100.
I think I’m missing a key concept in your last two sentences though, so if you could explain it like I was a 5th grader that would be great.
If you have a composite score that is calculated as a 50/50 weighting of two components, and the first component is randomly distributed across a uniform range from 40 to 60 and the second is randomly distributed across a uniform range from 0 to 100, then the effective importance is not equal (50/50), because the total is much more heavily dependent on the second score.
In other words, being at the 25th percentile of the first component (45) and the 75th percentile of the second component (75), for an average of 60, is much better than being at the 75th percentile of the first component (55) and the 25th percentile of the second component (25), for an average of 40.
And being at the 10th percentile of the first component (42) and 60th percentile of the second component (60) for an average of 51 is actually better than being at the 90th percentile of the first component (58) and 40th percentile of the second component (40) for an average of 49.
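To put numbers on that intuition, here's a quick simulation using the toy ranges from the example above (one component uniform on 40-60, the other on 0-100, combined 50/50). The point it demonstrates: the wide-spread component ends up driving almost all of the variation in the composite, despite the nominally equal weights.

```python
# Quick simulation of the "effective weighting" point: in a nominal
# 50/50 composite, the wider-spread component dominates the ranking.
import random

random.seed(0)
n = 10_000
clustered = [random.uniform(40, 60) for _ in range(n)]   # narrow spread
scaled    = [random.uniform(0, 100) for _ in range(n)]   # full 0-100 spread
composite = [0.5 * a + 0.5 * b for a, b in zip(clustered, scaled)]

def corr(xs, ys):
    # Pearson correlation, computed from scratch to stay dependency-free.
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

print(f"corr(clustered, composite) = {corr(clustered, composite):.2f}")
print(f"corr(scaled,    composite) = {corr(scaled, composite):.2f}")
# Typical output: ~0.20 for the clustered component vs. ~0.98 for the
# scaled one, so the 50/50 label badly overstates the clustered
# component's real influence on the ranking.
```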
I suspect, without actually knowing, this is the basic explanation for how adding social mobility measures at relatively small weights seems to be having a fairly significant impact on the relative rankings of a variety of colleges in various popular rankings.
In this review, the top Social Mobility score was nearly twice the bottom in the studied set, and the national best was nearly twice that again. None of the other measures appear to come close to that sort of variability. Again, I suspect this might be systematically true of social mobility measures.
Quite possibly, especially as their “social mobility” score is an abstract concept which could well be computed on something like a 0 to 100 scale. Whereas if you count up survey responses, most will be in the 60-90% range (few want to admit the college they are paying for is terrible), and if you assess facilities based on, say, 5 points for a gym, 5 points for a swimming pool, and 5 points if all freshmen get housing, etc., you won't see the same differences.
There are plenty of Best Colleges lists out there telling parents that the Ivies, Ivies+, and the public Ivies are the “best colleges”.
Without getting into the weeds, the real value of this list, IMO, is that it might get parents to realize that there are lots of other colleges from which their kids can graduate and do very well. That alone is a significant contribution to the conversation by the WSJ.
Read their methodology and criteria. Given what they're measuring, it certainly makes sense.
Obviously you don’t have to agree with them. And it does shake things up. But when you understand what they’re saying, it can prove to be very useful in certain ways. Especially these days when many people are looking at ROI due to their concerns about the cost of college.
The WSJ rankings are so nonsensical as a ranking of ‘best colleges’ that they are laughable.
If one wanted to rank ‘colleges which help their attendees improve socioeconomically,’ this list might be a bit better suited, but it would still be weak. Some might phrase these rankings as an alternative view, but they're not.
They fit well into a pretty obvious formula of “How does one keep HYPSM at the top, because otherwise we'll be instantly discarded, while creating drama to sell clicks.” By that measure they have succeeded; by any other, they are useless. I would be curious how much they had to tweak their model to obtain these results.
I want to meet the person who, dollar for dollar, is going to Babson instead of Yale, Harvard, Stanford, MIT, UPenn, Cornell, Columbia, Duke, Vanderbilt, UChicago, etc. The reputation alone opens more doors, the financial aid is available, and the options for participating in research are incomparable. If you want a good deal, you can't compare Babson to going to your own state school. And last but not least, WSJ did not label its ranking loudly and clearly, as in: “We are leaving out prestige entirely.” While prestige may be superficial, in this case prestige is about getting name recognition by aligning yourself with a brand (school) that is full of opportunity. And that does work in the job market, particularly immediately after graduation or when applying for a master's or PhD, before one has had the opportunity to show a decline in achievement. Ranking in itself is for prestige, so if the WSJ is attempting to point out something else, it didn't need to bother with a ‘best of’ list at all, since everybody can get the best deal by staying in state in the first place.
For the excellence of the education. Babson is not strictly a business school in the way that most business schools are. There is a heavy emphasis on the liberal arts - almost like a small liberal arts college for business majors. There is also a heavy emphasis on learning by doing and real-world problem solving. I love the freshman course where each student is part of a team that is given the task of starting a business, along with a budget (dollars) to get started. I can see a lot of learning happening there. I also like the fact that learning in teams is not unique to that course; team learning is part of the four-year experience there. I know people who've gone to Babson. They loved their four years there and have been engaged in very successful careers since graduating.
Here are some of the non-business majors (concentrations) available at Babson:
Environmental Sustainability
Global & Regional Studies
Historical & Political Studies
Identity & Diversity
Justice, Citizenship, & Social Responsibility
Leadership, People, & Organizations
Legal Studies
Literary & Visual Arts
Social & Cultural Studies
These are not the kinds of majors which we'd expect to find at a business school. Babson is different.
Agree, great point. If a list like this makes students consider SJSU or UC Davis, that's probably a good thing. These students will also apply to UNC or Washington, depending on their state of residence. It's not like they're going to suddenly forget that UVA is a top public because of the WSJ ranking. Students go to college for different reasons, and for sure, many go to start a career, so having a ranking based on ROI is pretty reasonable.
The WSJ methodology is a false narrative, due in part to the fact that WSJ left what the list is based on out of its title, and that takes advantage of non-critical thinkers who wanted a T20 school and need to feel good about not getting in.
News Corp/WSJ came up with a brilliant methodology that allows them to gain attention and sales by promoting a top-college list that isn't entirely reliable: they buried the methodology further down in the description of the list and kept it out of the title.
Of course not everybody is easily groomed to believe such a list, but less-informed families who lack critical-thinking skills are taken advantage of.
The top private high schools across the country show an 80-100% acceptance rate to Babson College. The trickle-down effect of this to public schools means it is a highly reachable school for all. The list implies otherwise but lacks the substance to explain further.
The round tables that happened at the WSJ over this list were filled with skepticism and focused on methodological integrity vs. public perception, with News Corp insisting it did not want to lose either one and debating whether or not to place the methodology in the title of the list. They knew that if they placed the methodology in the title, the list would no longer be intriguing.
People who want a T20 school do not care as much about salary vs. investment years out; it is widely recognized that they want the brand, the achievement, and the prestige that comes with it. And while Babson is a perfectly fine college, neither brand nor prestige comes with the Babson name when compared to the T20. This leaves the believers of this list exposed as irrelevant. And in the world of branding and marketing, both things Babson should be teaching, irrelevancy is the kiss of death.
Without sharing with a complete stranger on a public platform what my personal connection to the news industry is, I can only say that in our industry we publish articles and attempt to create new perspectives through many means, including shock value, in order to generate buzz without losing credibility.
In this case the buzz was about best-college lists. The list needed to draw attention away from other lists. Shaking up the industry of best-college lists has created many, many new subscribers for this particular entity. This is a win. And this is why the list was created: to gain readership without losing credibility. It is a balancing act, and it takes a lot of effort to come up with something like this.
After many round tables it was decided to use Babson as something significantly different from the usual top T20 schools with proven track records.
The list needed to go against the norm and grab people’s curiosity. It accomplished this while being relatively simple to substantiate based on the fact that Babson’s focus is business.
Babson is not the type of school students and families are going to accept as an equal replacement for an Ivy. Hopefully people can see through the reasons why this list was written in the first place. It was for News Corp first, and by chance it created interest in Babson that the school would not otherwise have received. Babson is a good school. Nothing wrong with it.