The Most Regretted Majors All Had One Thing In Common

I think you pose an interesting question, especially within the context of the thread title. These are the types of questions I have thought about every day of my kids’ lives/educations. As a homeschooling parent, I am their primary educator through high school. I have spent over 25 years designing 8 kids’ educations around developing higher-order critical thinking skills. Bloom’s taxonomy, Socratic discussion, etc., influence my daily decisions around our educational choices.

In simple terms, yes, I believe there are more reasoning skills than just the logical, rational, and mathematical. Two that come to mind are visual-spatial and creative.

But what struck me from this conversation is the connection to majors. I don’t see the issue as related to majors so much as connected to how the US educational system teaches. Our system focuses on the lowest level of thinking, knowledge, vs training students’ minds to move through to higher-order skills. The older Jesuit and classical models (not so much their contemporary replicas) were focused on training minds to think vs simply instilling a body of knowledge. Unfortunately, today education seems to be defined by the latter vs the former. I think that is an unfortunate “progression.”

But college majors are self-selecting. Statistically, difficulty in succeeding is probably the main filter, but individuals are more complicated than statistics, and critical thinking skills should be the focus from early childhood, not college majors.

FWIW, my kids are mostly science/math-oriented (ChemE/physics). But my foreign-language-loving dd is as good at math as they are, just without that interest. She loves puzzles. (Literally…builds 2,000-piece puzzles as a way to relax.) Language and analyzing literature are things she loves. She follows allusions like a detective to understand inferences. Her siblings have nothing on her ability to think. Her major wasn’t an “ease” filter, but an interest one.

Canuck, you honestly don’t know any stellar math and physics students who pursue a path other than a doctorate???

Wow.

Just out of my kid’s MIT friend group, I could describe 10 different career paths that either involved one of your allegedly “easier” fields or no grad education at all. You are aware that there are smart people with “elite” math and physics degrees doing all sorts of things, no?

This whole “doctoral creep” reflects a strange bias on CC which I don’t think is reflected in the real world. In some fields, the strongest of the strong continue on to a PhD. In other fields, it’s “I don’t know what else to do, so I’m applying to grad school in the hopes that I figure it out.” And much of it is timing: if you graduated in 2009, when jobs for new grads were hard to come by, sticking around for a PhD (especially if you got funded) seemed like a no-brainer. By 2016, you had lots of very lucrative options… so you are describing a cohort responding to economic factors, NOT some built-in “only bad math/physics BS holders end up in industry.”

Nothing in the universe is really definitive. The reason the humanities and social sciences help develop critical thinking skills is that in those disciplines the answers aren’t definitive, unlike in STEM subjects at the high school or freshman college level, which may be more challenging for most students but do have definitive answers. However, once beyond the basic level, most STEM subjects require as much critical thinking, if not more. Science can no longer describe the world definitively. The best science can do is describe the world, and the events within it, probabilistically. More imagination becomes necessary in both the sciences and math.

@Canuckguy, I take your points, but you’re filtering out, just for starters, cultural effects. Your reliance on standardized test scores as if they’re definitive reflections of ability is problematic, to say the least.

@1NJParent, I think anyone who’s gotten deeply into STEM stuff at all can verify that once you get to the level of the interesting stuff, answers to STEM questions also aren’t definitive right-and-wrong sorts of things. I mean, at the more basic levels, yeah, that’s a difference between the humanities/social sciences and STEM fields (for the most part), but only at the very basic levels.

(And the fine arts, so often ignored in all these discussions, look on, amused by all this talk of “answers”.)

A larger portion of math students pursue grad school than law school. If you believe that the students who pursue grad school are the stronger students and those who pursue law school are the weaker ones, then maybe the GRE is the more appropriate test to use to compare majors, rather than the LSAT. A summary of GRE scores by major for the 2015-18 testing period (from the 2019 guide) is at https://www.ets.org/s/gre/pdf/gre_guide_table4.pdf . The sections have different scalings, so I expressed the averages as Z-scores. For example, +1.0 (0.3) would indicate that the major’s average score is 1.0 SDs above the overall test-taker average, with a tight distribution (SD of 0.3) in which almost all test takers in that major score above the overall average. The 3 GRE sections are VR = Verbal Reasoning, QR = Quantitative Reasoning, and AW = Analytical Writing.

2015-18 GRE Scores by Major (mean Z-score, SD in parentheses)

| Major | VR | QR | AW |
| --- | --- | --- | --- |
| English | +0.8 (0.8) | -0.4 (0.9) | +0.8 (0.9) |
| Humanities | +0.7 (0.9) | -0.3 (0.9) | +0.6 (0.9) |
| Social Sciences | +0.3 (0.9) | -0.2 (0.9) | +0.4 (0.9) |
| Mathematics | +0.3 (0.9) | +1.1 (0.6) | +0.1 (0.9) |
| Physical Sciences | +0.1 (1.1) | +0.6 (0.9) | -0.1 (0.9) |
| Life Sciences | +0.1 (1.1) | -0.2 (0.8) | +0.3 (0.8) |
| Engineering | -0.1 (1.1) | +0.6 (0.9) | -0.2 (0.9) |
| Computer Science (13k) | -0.3 (1.1) | +0.5 (0.9) | -0.3 (0.9) |
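For anyone who wants to check the conversion, here is a minimal sketch of the standardization arithmetic in Python. The overall means and SDs below are placeholders rather than actual ETS figures; substitute the values from the linked table.

```python
# Minimal sketch: converting a major's mean GRE section score into a Z-score
# relative to the overall test-taker population. The means/SDs here are
# PLACEHOLDERS, not actual ETS figures -- substitute values from the ETS table.

overall = {                       # overall test-taker population (hypothetical)
    "VR": (150.0, 8.5),           # (mean, standard deviation)
    "QR": (153.0, 9.5),
    "AW": (3.5, 0.9),
}

major_means = {"VR": 152.5, "QR": 163.5, "AW": 3.6}   # e.g. a math-like major

def z_score(section: str, major_mean: float) -> float:
    """Standardize a section mean against the overall population."""
    mu, sigma = overall[section]
    return (major_mean - mu) / sigma

for section, mean in major_means.items():
    print(f"{section}: Z = {z_score(section, mean):+.1f}")
```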

Of the major groupings listed above, English majors had the highest averages on the Verbal Reasoning and Analytical Writing sections, closely followed by humanities majors. Math majors had the highest average on the Quantitative Reasoning section by a significant margin over the other listed major categories, including significantly higher averages than the engineering and physical sciences groupings.

The combined total of the 3 sections was similar for English and math majors, but the sectional averages were very different. English majors were far more likely to be skewed towards the VR and AW sections, while math majors were far more likely to be skewed towards the QR section. Some of this skew pattern relates to self-selection in who chooses to be an English or math major, some to self-selection in who chooses to take the GRE, and some to differences in the major curricula. It’s also useful to note that there were many exceptions to this general trend, as can be seen in the percentages in the linked table. For example, while math majors generally did well on the QR section, a minority of math majors scored below the overall test-taker average on that section. Similarly, while English majors generally did well on the VR and AW sections, a minority of English majors scored below the overall average on each section.

It’s from the perspective of statistics. If only 29% of the variance is explained by the combination of all of the controls, then a large portion of the result depends on criteria beyond the listed controls. That’s the case with major-switching behavior. Yes, there is a correlation with scores, interests, application reader ratings, gender, and so on; but each of these is only one small piece of the puzzle. Major switching also depends on many criteria beyond the above. You will not see, “If students’ switching behavior is dictated by changes in interest, then I would expect the switching direction to be statistically random,” because switching depends on many factors beyond just changes in interest. Similarly, you will not see students’ switching behavior determined solely by what is “more difficult, associated with higher study times, and more harshly graded.” Each is one of many contributing factors.
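To make the “29% of variance explained” framing concrete, here is a minimal sketch on synthetic data: R^2 compares the residual variance left after the fit against the total variance of the outcome. The “controls” and coefficients are invented purely for illustration, roughly tuned so R^2 lands near 0.29.

```python
# Minimal sketch of "share of variance explained" on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
controls = rng.normal(size=(n, 5))     # stand-ins for scores, ratings, gender, ...
coefs = np.array([0.4, 0.3, 0.25, 0.2, 0.2])
noise = rng.normal(size=n)             # everything the controls don't capture
y = controls @ coefs + noise           # the "switching propensity" outcome

beta, *_ = np.linalg.lstsq(controls, y, rcond=None)
resid = y - controls @ beta
r_squared = 1 - resid.var() / y.var()
print(f"R^2 = {r_squared:.2f}; unexplained share = {1 - r_squared:.2f}")
```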

Also note that R^2 = 0.29 was not for test scores alone. It was for the combination of a large number of factors, including SAT subscores. Test scores alone would have far less explanatory power. With full controls, the Duke study found 5 variables that reached statistical significance at the 10% level for switching out of the listed STEM majors: listing an undecided major on the application, being female, harsh grading within specific Duke classes, a lower application-reader rating of HS academic curriculum, and not being Asian. SAT score did not reach this level of statistical significance with full controls.

Plenty of studies outside of the physical sciences go far above 29% of variance explained. For example, the author of the Duke study also analyzed the Harvard admission sample. One of his analyses is at http://samv91khoyt2i553a2t1s05i-wpengine.netdna-ssl.com/wp-content/uploads/2018/06/Doc-415-1-Arcidiacono-Expert-Report.pdf . His model 6 was able to explain 64.9% of the variance in Harvard admission decisions, even though he did not consider key parts of the application, like essays. Chin’s reference at https://www.researchgate.net/publication/311766005_The_Partial_Least_Squares_Approach_to_Structural_Equation_Modeling has over 16,000 citations and may be considered a good general standard for regression analysis. He calls an R^2 of 0.67 “substantial”, 0.33 “moderate”, and 0.21 “weak.” The “substantial” R^2 of 0.67 occurs in one of his social science example studies.
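If it helps to have Chin’s benchmarks in one place, here is a tiny helper that reads them as cutoffs; that interpretation (and the function itself) is mine, not Chin’s.

```python
# Hypothetical helper: labels an R^2 using Chin's rough benchmarks quoted above.
def chin_label(r2: float) -> str:
    if r2 >= 0.67:
        return "substantial"
    if r2 >= 0.33:
        return "moderate"
    if r2 >= 0.21:
        return "weak"
    return "below Chin's 'weak' benchmark"

for value in (0.29, 0.649, 0.67):
    print(f"R^2 = {value}: {chin_label(value)}")
```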

Perhaps a bit off-topic, but @Data10’s discussion of R² values (indirectly pointing out, quite usefully, that R² is an effect-size measure, not one of statistical significance) reminds me of an amusing anecdote: I was presenting research I had conducted in linguistics (specifically something well on the social science side of the field) at a conference, and showed results where I’d gotten the astonishingly high R² value of 0.89 for one factor—and I exclaimed, “I couldn’t even have made up data that fit so well!”

Got some shocked faces in reaction, but let’s all of us social science researchers be honest: that’s pretty much precisely what we’re thinking when something like that comes up.

A better distinction might be whether a hypothesis is testable. Is it possible to run an experiment that could disprove a hypothesis? You get that in STEM, at least with most of the stuff which has real world applications.

@Mom2aphysicsgeek What you are talking about are things such as creative thinking, reactive thinking, artistic thinking, etc. However, critical thinking they are not. What I have seen passed off as critical thinking is really social criticism in disguise.
Here is how a group of experts defines critical thinking. If you just want a definition, go straight to p. 27:

https://www.insightassessment.com/wp-content/uploads/ia/pdf/whatwhy.pdf

@blossom Of course I know people in physics and math who have gone into other areas, but they are not the best of the best. If you were to look at Nobel/Fields Medal winners in the recent past, few if any do not possess doctorates. Furthermore, few are not affiliated with universities or research institutions. My friends and acquaintances are brighter than I am, but still operating in that how-can-I-max-return-on-my-degree range of ability.

@dfbdfb It is not so much that I filter out culture or other environmental factors, but that empirical evidence suggests they are somewhat random. If a testable hypothesis cannot be formed, it may be wiser to keep it out of science as I understand it.

More later, got to go.

I wonder: if you took employability and income out of the equation, would most still regret their majors or not?

Based on this response, I am going to assume you did not understand my post and are unfamiliar with Bloom’s taxonomy and research on teaching higher order critical thinking skills in an educational environment. Pages 8 and 9 of the article you linked define critical thinking skills as interpretation, analysis, evaluation, inference, explanation, etc.

Bloom’s taxonomy represents an educational hierarchy for developing critical thinking skills based on essentially those exact same processes. https://www.open.edu/openlearncreate/pluginfile.php/5915/mod_resource/content/1/Bloom_s_Critical_Thinking_Across_the_Curriculum2.pdf

@Canuckguy, I’m kind of in the same boat as @Mom2aphysicsgeek here, but on a different topic: You seem not to have understood my post, and may be unfamiliar with the work that’s been done on the importance of social/cultural factors in recruitment to and persistence in STEM fields (pretty much all fields, for that matter—it isn’t just a STEM thing).

And this is the sort of thing where not only has there been a lot of research (some of it already posted to this thread!), but where that research has certainly involved tested hypotheses. To sum up, the research is pretty strong that test scores and such aren’t solely explanatory when it comes to things like major choice or persistence in a major or career field, nor are things like perceived ease.

Yes, sociocultural factors may be a bit “squishy”, but face it, things like standardized test scores are, too.

Even outside the realm of college, such influences are noticeable. People are more likely to go into the professions of their parents than one would expect from general chance, perhaps due to both positive influence (familiarity with parents’ profession from an early age, privileged entry paths or knowledge of navigating entry paths) and negative influence (parental or social pressure against going into something else).

@Data10 I looked at the GRE table you posted, and the first thing I noticed was the AW section. When you mentioned that the combined scores of the English and math majors are similar, I went back and looked at the old GRE:

http://mjperry.blogspot.com/search?q=GRE+scores

In total score, math majors rank #2 and English majors #15. So by taking out the Analytical section and putting in AW, they have succeeded in blunting the difference between the two majors. This is consistent with the trend in standardized testing. How many times have they reworked the SAT now? I still remember the days before they put in the writing portion. Who are they trying to help? Who are they trying to handicap?

Does anyone know of any study on the MAT and majors? I suspect the results would be similar to the LSAT, but I have been surprised before.

I think you are correct on the second point. When I was talking about R, I meant the model overall, not the 2 variables’ interaction. By introducing extraneous variables, one can easily push the R^2 way up. @dfbdfb, is that what you were doing? LOL Did you give them the adjusted R^2?
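For anyone curious what the adjusted-R^2 quip is getting at, here is a minimal sketch on synthetic data: plain R^2 creeps upward as purely random “extraneous” predictors are added, while adjusted R^2 = 1 − (1 − R^2)(n − 1)/(n − p − 1) penalizes the padding.

```python
# Minimal sketch (synthetic data): R^2 vs. adjusted R^2 as junk predictors pile up.
import numpy as np

def fit_r2(X, y):
    """Return (R^2, adjusted R^2) for an OLS fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    r2 = 1 - (y - X1 @ beta).var() / y.var()
    n, p = X.shape
    return r2, 1 - (1 - r2) * (n - 1) / (n - p - 1)

rng = np.random.default_rng(1)
n = 60
x_real = rng.normal(size=(n, 1))               # one genuinely related predictor
y = 0.5 * x_real[:, 0] + rng.normal(size=n)

for extra in (0, 10, 20):                      # junk predictors to pad the model
    X = np.column_stack([x_real, rng.normal(size=(n, extra))])
    r2, adj = fit_r2(X, y)
    print(f"{1 + extra:>2} predictors: R^2 = {r2:.2f}, adjusted R^2 = {adj:.2f}")
```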

The idea of a general rule for variance bothers me. It all depends, does it not? Just from looking at the alpha, I can say a small R can be significant and a larger R may not be. From where I stand, it depends… on the situation, the objective, and the dependent variable, no?

Btw, that chapter 4 is above my pay grade. I did not major or minor in stats. In fact, the last time I looked at stats was a decade ago, going through my kids’ school texts to see if I was getting good value for the money, after the fact. If my memory is still good, the R^2 tends to hover under 0.25 in commerce research.

Curious to know how many posters managed to get through the chapter with comprehension.

You know, this almost sounds conspiracy-ish, and I don’t think that’s what it was—it was actually that nobody* cared about the Analytical subsection. I mean, seriously, if you looked at graduate admissions standards before they took that out and added Analytical Writing, it was more common than not for programs to state outright that they were going to completely ignore your score on that section. So the reason for the change seems to me to not be some kind of bizarre attempt to make English majors look good, but rather simple market demand.

  *Well, **nearly** nobody—my own field (linguistics) was often one of the exceptions. But there aren't a lot of linguistics programs, and most of them are quite small.

p.s. The correlation I mentioned upthread was just two factors: Year of birth vs. the median value of one of the formants (I forget whether it was the first or second) for a specific vowel, among male speakers. (There was, as I recall, a different, more interaction-heavy thing going on with female speakers.)

The blog post references an old exam format. The GRE hasn’t had a non-writing analytical section in nearly 20 years.

Many reviews have found that the analytical writing section of the GRE and the former writing section of the SAT are the most predictive subsections of the respective tests. For example, the ETS validity report at https://onlinelibrary.wiley.com/doi/epdf/10.1002/ets2.12026 found that the analytical writing section was more correlated with graduate GPA than both the VR and QR sections overall. However, there was significant variation between grad programs. In most programs, AW score had the best correlation with GPA, but math graduate students were an exception. Among math grad students, the QR (quant) section had the best correlation, and AW was 2nd. The author states, “We note that the Analytical Writing section, introduced in October 2002, is often the strongest predictor of GGPA.”

Similarly, multiple studies have found the SAT’s old writing section was the one best correlated with college GPA overall. For example, the College Board’s validity study at https://secure-media.collegeboard.org/digitalServices/pdf/research/FYGPA_Validity_Summary_keyfindings.pdf states, “In fact, SAT writing alone has the same correlation with FYGPA as does SAT critical reading and SAT mathematics taken together.” The Geiser UC studies at https://escholarship.org/uc/item/7306z0zf had similar findings. The SAT writing section (at that time a subject test) was the SAT section best correlated with both cumulative 4th-year GPA and graduation rate, among tens of thousands of UC students.

Rather than assuming that including writing in standardized testing is an effort to help humanities majors and handicap math majors, the better question might be whether the format that includes writing is more predictive of whatever the test is trying to predict. Yes, analytical writing might not be the most predictive section for math majors, but it appears to be the most predictive section for the majority of other fields. What weighting should be given to the 3 sections depends on what you are trying to predict. The regression analysis in the earlier link doesn’t provide enough information to estimate the optimal weighting, but it doesn’t suggest a need to add more weight to the quant section. Among master’s seekers overall, the QR section was able to explain 2% more variance in graduate GPA than undergrad GPA alone, while VR + QR + AW together were able to explain 7% more variance than undergrad GPA alone.
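As a rough illustration of that incremental comparison, here is a minimal sketch on synthetic data: fit graduate GPA on undergrad GPA alone, then add GRE sections and look at the gain in R^2. All weights and numbers are invented, not taken from the ETS report.

```python
# Minimal sketch (invented data) of incremental variance explained.
import numpy as np

def r2(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])   # OLS with intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1 - (y - X1 @ beta).var() / y.var()

rng = np.random.default_rng(2)
n = 2000
ugpa = rng.normal(size=n)
vr, qr, aw = rng.normal(size=(3, n))
# invented weights: grad GPA driven mostly by undergrad GPA, plus each section
ggpa = 0.5 * ugpa + 0.1 * vr + 0.15 * qr + 0.2 * aw + rng.normal(size=n)

base = r2(ugpa[:, None], ggpa)
plus_qr = r2(np.column_stack([ugpa, qr]), ggpa)
plus_all = r2(np.column_stack([ugpa, vr, qr, aw]), ggpa)
print(f"UGPA alone:      {base:.3f}")
print(f"+ QR only:       {plus_qr:.3f}  (gain {plus_qr - base:.3f})")
print(f"+ VR + QR + AW:  {plus_all:.3f}  (gain {plus_all - base:.3f})")
```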

If anyone is interested, the specific majors with at least 1000 test takers that had the highest average scores on the analytical writing section were as follows. As previously discussed, there is a bias in who chooses to take the GRE.

Fields with Highest Average GRE AW Score

  1. English Literature – 4.3
  2. European History – 4.3
  3. Philosophy* – 4.3
  4. Public Policy Analysis – 4.3
  5. American History – 4.2
  6. Creative Writing – 4.2
  7. English Language – 4.2
  8. History: Other – 4.2
  9. International Relations – 4.2
  10. Neurosciences – 4.2
  11. Political Science – 4.2
  12. Religion & Theology – 4.2
    *Philosophy sub-grouping of philosophy broad category

I will have to scale back my involvement here because the market is getting exciting again. Here are a few quick responses:

@Mom2aphysicsgeek I was exposed to Bloom’s taxonomy in psychology courses in passing. You are right that I did not delve into it more. My original point is that if LA grads have better critical thinking skills than others, this should show up in superior life choices. I would expect lower rates of family problems, better money management, etc. I have not seen evidence that this is so.

@dfbdfb I am not an academic, so yes, I am not as up on this topic as I could be. The big problem with the social sciences, nonetheless, is replication. Until they can replicate research results over time like standardized testing, I can only tether my boat to that which is known. I am sure you are aware of the problems in social psychology. Education research is probably not much better.

The changes made to tests like the SAT are not intended to make English majors look better per se. Together with policies such as holistic admission, legacy preferences, test-optional admission, etc., they are intended to maintain the status quo. I grew up in boarding houses, so privilege is something I understand. See how the culture I grew up in influences my view of the world? It influences my view of human nature as well.
Of course culture is important. I just don’t see how I can call it science at this stage, that’s all. Psychometrics is still “squishy,” but it is the least “squishy” of them all. At least I can say it has decent validity and reliability.

@Data10 As you can tell, I am very skeptical of social science research. What I want to see are results that are duplicated so often and so consistently that researchers are tired of it, as mentioned by Kuncel in his TED talk. Failing that, I would look for results from various disciplines or angles that happen to dovetail over time, ensuring that there is little possibility of collusion. So yes, when I see the old GRE dovetail with the old SAT, LSAT, SSCQT, and the AGCT, that is where I think the weight of evidence is. When I look at this study with the CLA, over the 6 CLA tasks, I see natural science on top in 5 tasks and tech, engineering, and math on top in the remaining one. That just adds to the weight of evidence, and I feel the results are not at all surprising. (The authors are very careful, only mentioning that health, business, and ed do worse than the other majors. I feel they could be more granular than that.) The graph is on page 8 for those who care.

https://cae.org/images/uploads/pdf/Majors_Matter_Differential_Performance_on_a_Test_of_General_College_Outcomes.pdf

I was commenting on the fact that the issue is not narrowly defined by college majors. Our educational model focuses on transmitting knowledge as the primary objective, not on moving kids through higher-order levels of critical thinking. The concern should extend all the way down to early childhood, not be focused on college majors.

But, whoa, you made a leap in the second part I quoted. Are you saying that people who graduate with STEM degrees have superior “happy with their life” outcomes, more stable family relationships/less dysfunction, and better long-term fiscal outcomes that are unrelated to pay discrepancies based on field?

^No. I was saying that if LA teaches critical thinking better than other majors do, as is often implied on CC, I have not seen any evidence of it. I think another poster was expressing a similar doubt earlier in this thread, but I cannot remember who.

As far as teaching higher order critical thinking skills to children, I agree completely.

It’s my understanding that the SSCQT was used by the military during the period from 1951-1967, including for draft exemption. The GRE didn’t even start using the old pre-2002 analytical section until decades later. How do you know that it dovetails with the SSCQT better than other exam formats do? Given how old the format is, there is limited information, but intuitively I would not expect that giving only 1/3 weighting to the VR section, as the 1990s GRE did, would improve the correlation with other exam formats that give 1/2 weighting to the verbal section. Instead, I’d expect VR+QR to be better correlated with other exam formats that have 1/2 weighting on verbal and 1/2 on math/quant.

That said, the SAT and GRE have a specific and well-defined goal: to improve prediction of academic success during college (and/or to measure college preparedness). They are validated against that goal. This validation might include reviewing how well the SAT or GRE predicts GPA during college. It would not involve reviewing scores on CLA tasks by major or maximizing correlation with a military exam from the 1950s.

In isolation, the writing section appears to be the section that is most successful at the test’s stated goal of predicting academic success during college overall (specific subgroups may differ, such as among math majors or in predicting math course grades). And this result has been duplicated over and over. Some additional example validation studies are below. There are many others.

https://files.eric.ed.gov/fulltext/ED582459.pdf – SAT Writing is section with best correlation with FYGPA in isolation
https://files.eric.ed.gov/fulltext/ED563124.pdf – SAT Writing is section with best correlation to 2nd year GPA in all major groupings except for math & engineering.

If the major categories have a distribution similar to the national population, then the natural sciences category with the highest average CLA is majority biology majors, which did not have especially high averages in any of the SAT/GRE score combinations. It is not correct to assume that natural sciences is mostly physics majors. Assuming a nationally representative distribution, the “natural sciences” category with the highest CLA average would have included more agriculture majors than physical science majors.

This is not a conspiracy to avoid saying STEM majors are superior. The author clearly explains that he evaluated statistical significance using Tukey’s Honestly Significant Difference post hoc test. Health, business, and education had statistically significant differences from the rest. The differences among some of the other higher-scoring categories were not statistically significant.
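For reference, here is a minimal sketch of a Tukey HSD comparison of group means using statsmodels. The scores are invented; only the procedure matches what the author describes, not the actual CLA data.

```python
# Minimal sketch of Tukey's HSD post hoc test on invented group scores.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)
groups, scores = [], []
# invented means, loosely mimicking "some majors differ, some don't"
for major, mean in [("nat_sci", 1200), ("engineering", 1190), ("business", 1100)]:
    groups += [major] * 50
    scores += list(rng.normal(loc=mean, scale=60, size=50))

result = pairwise_tukeyhsd(np.array(scores), np.array(groups), alpha=0.05)
print(result)   # table of pairwise mean differences and whether each is significant
```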

@Canuckguy, your skepticism of the social sciences does not mean that you are entitled to simply throw out results you do not agree with while simultaneously accepting those you do.

That’s totally not legitimate inquiry; that’s deck-stacking.