Hundreds of Colleges Provide No Income Boost

And you just hit on the thing that annoys me most about CC—the apparent assumption that the top 100 in each USNWR category are the only schools in existence (and woe unto the teenager who decides to go to SUNY Buffalo or to Albion!*), which leads to all the stress about playing the college admissions game without realizing how much of college admissions is actually noncompetitive, and that most of even those noncompetitive schools are producing productive members of society.**

* Respectively, tied for 99 in national universities and 100 in liberal arts colleges, and both quite excellent.

** And not just productive members of society—look at the undergraduate degrees held by the engineers and doctors and successful business owners and such you have around you, and you'll find a lot of non- or minimally competitive schools there.

Of course different academic admissions criteria are correlated with one another. The study used regression analysis to separate those correlations and identify which ones were the driving force behind academic success. The whole point of the study was to understand why being black was correlated with a lesser degree of academic success. They found that when they controlled for the rest of the application, the regression coefficient for being black dropped very low, indicating that being black was not the driving force behind dropping out of “tough” majors. Instead, the effect could almost fully be explained by differences in the rest of the application between black and white students, which the authors attribute to affirmative action. An extremely similar effect occurred with test scores as with being black.

For example, attending a private high school is correlated with a greater degree of HS academic rigor and preparation than attending a public school. So one could try to predict whether a student is going to drop out of a “tough” major by looking at whether they attended private or public school, and see a significant correlation. However, if you compare public/private among students who had a similar degree of academic rigor and prep, then the regression coefficient of public/private may drop very low, indicating the driving force is whether students have good academic rigor/prep, not whether they attended public or private school. If this result were found, it would suggest that looking at whether students had good academic rigor/prep is a better way to predict dropping out of “tough” majors than looking at public vs. private school attendance. In the Duke study, they found this same effect for test scores. When they included all available admissions variables, they measured the following regression coefficients with staying in the “tough” major:

Female: -0.19 (0.05)
HS Course Rigor/Prep: -0.15 (0.05)
Harshness of Grading: -0.08 (0.025)
Application Essay: -0.07 (0.04)
Test Scores: -0.03 (0.03)
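
The mechanism being described (a predictor's coefficient collapsing once a correlated control enters the model) can be sketched with simulated data. This is a minimal illustration with made-up numbers, not the Duke data; `prep` and `score` are hypothetical variables constructed so that one drives the outcome while the other is merely correlated with it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical setup (not the Duke data): "prep" (HS rigor/preparation)
# drives the outcome, while "score" (test score) is merely correlated with prep.
prep = rng.normal(size=n)
score = 0.8 * prep + 0.6 * rng.normal(size=n)  # correlated with prep
outcome = 1.0 * prep + rng.normal(size=n)      # prep is the true driver

def ols_coefs(y, *cols):
    """Least-squares coefficients (after an intercept) of y on the given columns."""
    X = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

# Alone, the test score looks strongly predictive...
b_alone = ols_coefs(outcome, score)[0]
# ...but once the correlated control is included, its coefficient collapses.
b_score, b_prep = ols_coefs(outcome, score, prep)

print(f"score alone:      {b_alone:.2f}")
print(f"score, prep held: {b_score:.2f}")
print(f"prep, score held: {b_prep:.2f}")
```

With these made-up parameters, the standalone coefficient on the score is sizeable, while the coefficient after controlling for prep is near zero: the correlation with the score was entirely inherited from prep, exactly the pattern the coefficients above describe.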

Test scores were among the least influential factors for switching out of a “tough” major. I chose this Duke study because you suggested the study had a completely different conclusion, not because it had a unique conclusion that fit my views. Every single study I am aware of that controls for both a measure of GPA and course rigor came to a similar conclusion. Some more examples are below:

CUNY Study – http://www.aera.net/Newsroom/RecentAERAResearch/CollegeSelectivityandDegreeCompletion/tabid/15605/Default.aspx

“The ATT indicates that there could be a small SAT effect on graduation (2.2 percentage points for a standard deviation increase in college SAT), but this does not reach statistical significance. The ATU is much smaller in magnitude and is not significantly different from zero.”

The Bates test-optional study at http://www.bates.edu/news/2005/10/01/sat-study/ and the NACAC study at http://www.nacacnet.org/research/research-data/nacac-research/Documents/DefiningPromise.pdf found no notable difference in GPA or graduation rate between test score submitters and non-submitters at test-optional colleges, even though the non-submitters had significantly lower test scores than the submitters.

Even studies controlling for just HS GPA and SES (not course rigor) found SAT I added relatively little additional information. For example, the Geiser UC studies found that a prediction model using HS GPA, SES, and SAT I could explain only ~4% more variation in cumulative college GPA than a model using just HS GPA and SES. Had they included a control for curriculum, SAT would almost certainly have added far less than the measured 4%.
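
The "~4% more variation" figure is an incremental R²: how much extra variance a model explains once SAT I is added to HS GPA and SES. A toy computation of that quantity, using simulated numbers that are not the Geiser data (the coefficients are invented so that the SAT-like score is largely redundant with the other two predictors):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Made-up data (not the Geiser sample): college GPA driven by HS GPA and SES,
# with an SAT-like score that is largely redundant with those two.
hs_gpa = rng.normal(size=n)
ses = rng.normal(size=n)
sat = 0.7 * hs_gpa + 0.3 * ses + 0.5 * rng.normal(size=n)
college_gpa = 0.6 * hs_gpa + 0.2 * ses + 0.05 * sat + rng.normal(size=n)

def r_squared(y, *cols):
    """R^2 of an OLS fit (with intercept) of y on the given columns."""
    X = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_base = r_squared(college_gpa, hs_gpa, ses)
r2_full = r_squared(college_gpa, hs_gpa, ses, sat)
print(f"R^2 with HS GPA + SES:    {r2_base:.3f}")
print(f"R^2 after adding SAT:     {r2_full:.3f}")
print(f"incremental R^2 from SAT: {r2_full - r2_base:.4f}")
```

In this sketch the base model already captures most of the predictable variance, so adding the redundant score moves R² by only a fraction of a percent, which is the shape of result the studies above report.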

In the United States, use of IQ tests is not a significant political issue that I am aware of, but one does not need a unique political position to come to this conclusion. The authors of the studies I referenced were quite explicit about how to interpret their results, and that conclusion should not vary with political view.

Considering the number of references I’ve provided in this thread and others, by now you should have an excellent view of where the information is coming from.

The study you referenced compares the correlation between a cognitive ability test and an SAT-like achievement test. I don't doubt that such a correlation exists. My claim was that every study I have ever seen that includes a measure of HS GPA and HS course rigor/preparation found SAT I scores add little predictive power for academic success during college, including the earlier Duke study you referenced that triggered this tangent.

Blossom, how much of your particular experience is because your employer is a very large player, with little time to spare and, maybe, not much need for individualized creative contributions?

Nothing wrong with SUNY Buffalo. There is periodic interest in it on CC. And we certainly know (well, ok not everyone) that top notch professors and programs can be found in all sorts of corners. But some are simply not an intense overall experience that polishes. A moderate effort can be seen as cream. (Oh, now I’m reminded of that neighbor kid who works at the market. Graduated with top honors from another local uneven college. She’s not worldly in her knowledge or other skills. And the surprising number of young adults I know from the local Ivy, who are not on fire.)

So much of this, in the end, depends on the individual. And some luck. Blossom’s firm notwithstanding.

This is such a great true thing I can tell my students who get in trouble with alcohol or whatever and have to leave a “CC top college.” Sometimes that kid gets out of rehab and ends up at an OK flagship or local directional. But the kind of kid who got into Princeton is going to be a superstar at Whoville State. He’s still a Princeton personality, even without a brand name. He will probably get into the same grad school that he would have if he’d stayed at Princeton. 20 years from now, he’s likely to have the same kind of life as his classmates who graduated from Princeton. There are real perks that come with these elite colleges, but bright, ambitious people can get from Point A to Point B without those perks.

I am a SUNY Buffalo fan.

Just sayin’.

Anyway- I’m not sure what Lookingforward’s question is. My company needs tons of individualized creative contributions, but that doesn’t mean we need to send a recruiting team to U Montana or Western CT State to plow through a stack of resumes to find one or two kids we’d like to hire (even though we know there are talented kids at U Montana and Western CT State). It’s cheaper and a better use of corporate resources to send a team where we know we’ll find a dozen or more. Was that your question?

I have no doubt that there are very talented and ambitious kids at SUNY Buffalo.

Luck is not a strategy, even though if you ask any successful adult, luck has surely played its part. But kids and parents can do MUCH more than I see them doing to create and ensure a successful launch. Recent conversation with a young relative (whose parent asked me to find out if there had been any progress toward figuring out what happens next June after commencement):

Blossom- “have you met with a counselor in career development to start working on a resume, or to start figuring out what you’d like to do when you graduate?”
Kid- “but it’s only September. I won’t have a second to spare until Rush is over, and then it’s midterms and then my frat is planning a ski trip in early December and then it’s finals. So maybe January.”
Blossom- “Do you know that a lot of companies do first round interviews during first semester?”
Kid- “Well, I don’t want to work for those companies.”

If this were a trust fund baby this would be a fine plan. But he’s not. And even if he doesn’t want to work for any of the 50 companies which are coming to his campus this Fall (I checked the calendar which is online- at least 50), maybe having an interview under your belt is good experience for when he’s interviewing with a company or organization he IS interested in working for.

Sigh.

That was more of a question for @blossom , who in [reply #114](Hundreds of Colleges Provide No Income Boost - #115 by blossom - Parents Forum - College Confidential Forums) indicated that s/he only really considered graduates of either elite schools or graduates in majors whose skills are specifically needed for the type of job to be worth recruiting.

Indeed, a tiger parent who reads such posts may see them as confirmation of the worthiness of the goals of tiger parenting, in heavily pushing the kids to get into an elite school, or otherwise heavily pushing them into a marketable major. Most posters here (including me) do not agree that such tiger parenting is a good thing in general, but it is easy to see how a tiger parent can have his/her viewpoint confirmed by reading posts like reply #114.

Outside of the tiger parent context, perhaps it is not too surprising, in light of ideas like those in reply #114, that pre-professional majors become more common at less selective schools.

Major in bio if you’re interested in bio. It’s a great preparation for any of the allied health fields if in fact- that’s what a kid wants to do. Major in bio- even at a second tier, third tier, or fourth tier school- if you’re interested in bio but not sure what the next step might be. Pharmaceutical sales, PR for a device/life sciences company, fundraising/development for a large academic medical center, community relations for a managed care company, patient advocate at a nursing home/assisted living facility, etc.

Don’t major in bio if you dislike bio because Daddy claims that you’ll be a barista for the rest of your life if you don’t.

Not sure how I’m promoting Tiger Parenting here. I’m sure I didn’t express myself clearly if that’s your takeaway, for which I apologize.

And you continue to conflate my description of how we use adcoms to screen for us with a shorthand for “elite.” There are “schools generally considered elite” on CC where companies I’ve worked for DON’T hire from. For lots of reasons. And schools (I mentioned Missouri S&T as one) which many folks have never heard of that are fantastic.

How am I contributing to the elite frenzy?

A parent whose kid hates math is going to have a tough go of it getting that kid into and out of an engineering program. But that parent can help the math hating kid understand that even if math isn’t the most fun thing in the world- majoring in history but taking the standard micro/macro economics sequence in college is a very good thing to do for future employability. Or that a kid who is majoring in comparative literature will improve their job prospects by taking a semester of statistics.

I don’t think this is Tiger Parenting- this is helping an 18 year old understand reality. How is this elitist?

“but it’s only September. I won’t have a second to spare until Rush is over, and then it’s midterms and then my frat is planning a ski trip in early December and then it’s finals. So maybe January.”

You wouldn’t believe how often I heard thinking along those lines from students at a T14 law school.

However, in reply #114, your example schools like Swarthmore, Princeton, Rice, and Pomona are all elite schools, and you mention that you recruit there because their admission committees did much of the work for you, implying that you recruit at a subset of elite schools for jobs that are not major-specific (even though you do not recruit at all elite schools). Your mention of Missouri S&T is only in the context of engineering as an example of recruiting for specific majors for jobs that require those skills and knowledge.

For non-major-specific jobs, does your recruiting list include Missouri S&T or other schools of similar admission selectivity?

http://mjperry.blogspot.com/search?q=GRE+scores
This is my back-of-the-envelope approach. The lower down the list, the fuzzier it gets, to use Charles Murray’s expression. Notice, for example, that philosophy and economics are ranked higher than many of the STEM subjects. My conclusion is that the STEM vs. non-STEM dichotomy is a straw man at best.
I am quite confident with the data because they dovetail nicely with the Duke study, the Murray essay, the Bock interview, and the comments of a former recruiter for a major management consulting firm.

@Data10 As I said earlier, you are taking the data too “literally”. If the population is already pre-selected on the basis of a certain attribute, that attribute will no longer appear to be significant in the rank ordering of that population. No “second or third derivative” type of analysis would alter that reality.

Since I am starting to make predictions for the Rio games, a more visual presentation of the problem is readily available. Here is one of the world’s fastest sports, where reaction time is essential:
https://www.youtube.com/watch?v=qKUN1TQbXfI
If you were to rank order these professional athletes based on reaction time, you would find it pales in importance next to other variables such as level of competition, coaching, training time, etc., because they have already been pre-selected, directly or indirectly, on that basis. For an amateur player starting out, however, reaction time is a good predictor of his potential and future progress.
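
The attenuation being described here is the classic "range restriction" effect: selecting on a variable shrinks its remaining variance, and with it the correlation you can observe within the selected pool. A quick simulation with made-up numbers (the trait stands in for something like reaction time, sign-flipped so higher is better):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Made-up population: a trait that genuinely matters for performance.
trait = rng.normal(size=n)
performance = 0.6 * trait + 0.8 * rng.normal(size=n)

r_all = np.corrcoef(trait, performance)[0, 1]

# Pre-select the top 2% on the trait -- the "already professional" pool.
cut = np.quantile(trait, 0.98)
keep = trait > cut
r_pool = np.corrcoef(trait[keep], performance[keep])[0, 1]

print(f"correlation in full population:   {r_all:.2f}")
print(f"correlation in pre-selected pool: {r_pool:.2f}")
```

Within the pre-selected pool the correlation drops well below its population value even though the trait genuinely drives performance, which is the amateur-vs-professional contrast in the paragraph above.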

“This is my back-of-the-envelope approach. The lower down the list, the fuzzier it gets”

I don’t think this is so useful when the range within each field is so broad. No one who knows what they’re doing thinks that studying public administration at the Kennedy school is “fuzzy,” or that average candidates in that field get in.

I listed many studies, including studies without notable pre-selection. For example, the CUNY study starts with the Beginning Postsecondary Students (BPS) Longitudinal Study data, which is not pre-selected on any academic criteria. Then they look at which variables are good at predicting whether this non-pre-selected group will be academically successful in college. When comparing students with a similar background who did and did not attend a selective college, they found the effect of SAT score on graduation rate “is not significantly different from zero.” The NACAC review includes some less selective colleges whose test scores are similar to the national average and which accept the vast majority of applicants, and it came to a similar conclusion. The Geiser UC studies compare all UC campuses, with their varying degrees of selectivity, and all show a similar pattern. The studies without pre-selection and/or involving less selective colleges showed the same result as the studies focusing on more selective colleges. All of them found that some academic and other criteria had notable predictive value for academic success during college, but test scores added little to the prediction beyond those criteria.

I expect you’d be surprised at the ranking order if you did a regression analysis where you divide the regression coefficient by the SD to compensate for the lack of variation among the pro athletes. Yet if you do divide by the SD in the pre-selected Duke study (“pre-selected” with a noteworthy 141 point difference in average M+V SAT score between white and black students), SAT score still remains near the least predictive.

^^I don’t have any problem with your data; I just have a problem with your interpretation of them. We are dealing with conditional probabilities here, with very restricted conditions to boot. I think you are over-interpreting the data; the social sciences are different from the physical sciences because they are a lot fuzzier. Then there is the problem of practical significance, and a lurking variable that I was alluding to…
It is a bit silly for lay persons like us to argue over something in which neither of us can claim expertise. If you are so sure of your analysis, why not submit it to a psychological journal and have it judged by a panel of experts? At the very least, present it to the psychometricians at your alma mater or a local university and see what they think. I personally think you are wrong, but they may think differently. If you are right, a century’s worth of research in psychological measurement may soon be toppled. Think about that.

@Hanna An apples-to-apples comparison would be the Kennedy School vs. Harvard physics. Everything is relative, no? This brings up another great point concerning standardized testing: while English majors on average score lower than math majors, I cannot say that a given English major must necessarily score lower. I must go through the transcript, or simply check the test score. That is where the truth lies.

@Dave_N Here is what a former recruiter for an elite consulting firm has to say:

“If you don’t have at least 750 on the math SAT, you’re out. The most common score is 800. Math plus verbal scores should be well over 1500, and typically over 1550. GRE, GMAT, and other scores should be scaled similarly”.

I guess it doesn’t matter which test is presented.

@prospect1 Both the SAT and the ACT are highly g loaded, but I don’t know if it is to the same degree. Here is a study that partially answers your question:
www.sciencedirect.com/science/article/pii/S0160289608000603

“I am quite confident with the data because they dovetail nicely with the Duke study, the Murray essay, the Bock interview, and the comments of a former recruiter for a major management consulting firm.”

Oh - there’s your mistake. Recruiters for consulting firms aren’t gods – they’re merely skilled in finding the particular traits that are important for success in their particular field, just like recruiters for any industry are. I’m not unsophisticated enough to drool at the feet of McKinsey or BCG as if they recruit the best people in the world – they recruit the best people for their particular needs, which may or may not be the needs of other organizations. Their criteria are relevant only for them, and for no one else.

Your blunt analysis makes as little sense as saying that because physics majors score higher on GREs than psychology and communication majors, physics majors would be better choices for therapists, or for writing press releases, or for running social media campaigns. Different people are good at different things, and different industries and fields have different needs. The continued desire to rank from “smartest to least smart” tells me that you have just a very blunt data view of the world, much like Sheldon Cooper.

"Here is what a former recruiter for an elite consulting firm has to say:

"If you don’t have at least 750 on the math SAT, you’re out. The most common score is 800. "

Goody goody gumdrops for them. Why would anyone conclude that because this consulting firm likes to / prefers to look at such scores, that every industry or field should value them the same way?

Canuckguy - people with actual personalities are actually able to interview people and make conclusions about their suitability for a job, even without knowing (or without heavily weighting) their standardized testing scores. What is it that is so frightening about talking to people and evaluating them that you have to resort back to scores?

All studies I am aware of that apply the described filters reached a similar conclusion, so why would it be necessary to submit a study that repeats their conclusions? Why is it necessary to ignore the conclusions stated in the existing studies I have provided, including conclusions by experts in their fields that have been published in peer-reviewed journals?

For example, the CUNY study without pre-selection was published in the peer-reviewed American Educational Research Journal. The primary author was Dr. Scott Heil, whose doctorate is in sociology; he wrote my previously quoted statement:

"The ATT indicates that there could be a small SAT effect on graduation (2.2 percentage points for a standard deviation increase in college SAT), but this does not reach statistical significance. The ATU is much smaller in magnitude and is not significantly different from zero. "

No, nobody is toppling a century’s worth of measurement. Instead I expect it is you who are misinterpreting the research. It isn’t that complicated. If you look at test scores alone, there is a correlation with various measures of academic success during college. It isn’t a huge correlation, and it is lower than for other portions of the application, but it is a significant one.

For example, the referenced Geiser UC study found that SAT I scores + SES could explain 13.4% of the variance in 4th-year GPA among ~80,000 students within the UC system, a significantly lower percentage than HS GPA alone could explain. Explaining 13.4% of variance is not huge, but it is significant. However, the Geiser study, the other studies I listed, and all others I am aware of that looked into the issue found that the predictive value of SAT I drops tremendously if you also include a measure of HS GPA and HS curriculum/preparation/rigor. In short, the research found SAT I adds little predictive value for academic success beyond what is available in the other portions of the application, particularly HS GPA and HS course rigor.

This does not mean that all studies will show SAT I has little predictive value, or that a century’s worth of studies showing such correlations are wrong. You will see a strong correlation between SAT score and various other standardized test scores, and you will see a smaller correlation between SAT I alone and various non-test-score measures of academic success during college.

FWIW, according to LinkedIn, skills trump school name in college recruiting:

“Companies are looking for ways to extend their brand beyond the so-called list of target schools. And good for them, because data suggests skills (e.g. Python programming, creating infographics, using Excel) trump the school you went to any day of the week.”

http://talent.linkedin.com/blog/index.php/2014/06/3-takeaways-from-the-college-recruiting-bootcamp

@Data10 Let me give it one last go. Here is the study that started the whole debate:

http://public.econ.duke.edu/~psarcidi/grades_4.0.pdf

My reading is that the authors wanted to see if the weaker students really caught up to their stronger classmates, as claimed by universities, or whether something else was at work. The study was not designed to rank order the admission variables. I don’t see that stated in the title, the abstract, or the conclusion.

If they had intended to do the latter, the population chosen would have been a randomized selection of high school seniors across the nation applying to postsecondary institutions of all types. A Duke student body is hardly representative of that. I would also use Bayesian methods rather than standard statistics. Just me, I guess.

What you have done is take the data out of its native context, use it in a way the authors never intended, and then claim that as their conclusion. When questioned, you start to offer studies out of places like Bates for support. Have you ever heard of advocacy research? The problem is particularly acute in sociology and education.

Bates is one of the test-optional schools. These institutions use the strategy to game the rankings on one hand, and to pick up wealthy borderline applicants on the other. Hardly a source of unbiased research, if you ask me. Then you wonder why I think you are politically motivated?

This conversation brings up something else. Bock alluded to this when he talked about holistic thinkers and deep functional experts, and how combining a deep, structured thought process with creativity gives one great options because there are a lot fewer of those around. From my perspective, this is nothing but a reworking of Dr. Johnson’s Renaissance Man. I hope this YouTube video is a joke, or we have serious work to do:

https://www.youtube.com/watch?v=p0wk4qG2mIg

Canuckguy: If I am following you correctly (and perhaps I am completely mistaken), you are looking at what qualities the most competitive-to-get-hired-at consulting firms are screening for and then extrapolating to general hiring practices. Since kids I know have described the interview process at these hyper-competitive consulting firms, I think you are for the most part correct about those particular firms.

However, mainly a very self-selected group of “top” students wants to work at those firms, imho. Some of those who do deliberately create the profiles those firms want. Others may just end up with those profiles. Some of the second group may end up working for those firms by default. jmho.

Because I’m at a stage where I have been hearing about all sorts of tippy-top students going through the interview process at all kinds of places - I don’t really know if the hiring practices of these consulting firms are a norm or should be used as any kind of measure of anything.

Graduates can be highly intelligent, well educated and extremely creative. It seems to me this is what these firms try to measure. I am not really sure they have adequate tools to do that. I certainly wouldn’t see the SAT as a very good tool. Based on the completely anecdotal stories I’m hearing, companies are impressed by skills they didn’t even know they were looking for until the applicant demonstrates them.

This job seeking phase reminds me of the college application phase and looking for the “and” – and I’m not so sure this is something parents can really impact all that much. jmho.