I’m not sure I should jump in without having read all 24 pages, but I won’t have time to type later; I will, however, be able to fit in minutes here and there to finish reading the thread.
We are not parents who guide our kids toward careers. They develop their own interests and make their own decisions. Two of our older kids are in STEM fields (ChemE; physics grad student). Another is in an Allied Health field. Clear trajectories.
Our dd, a foreign language major (Russian and French), was just as strong a math student as her older brothers, but she would never have wanted to pursue a career path like theirs. Last semester she spent hours reading, researching, analyzing, and writing in multiple languages. No, her brothers do not possess the same skill set she does. Their science/engineering focus suits them and their interests. She would have been miserable following similar fields.
She has also been fully aware of the need to do more than pursue a “major.” She knows she has to cultivate marketable skills. She does face more stress than her siblings in terms of seeing a clear path from freshman yr to career. Does that mean she will have less satisfaction with her decision? She made her choice fully aware of what it means and that developing her choices into an employable skill set falls on her. She actively pursues that as an objective along with coursework. Am I worried she will be unemployed or severely underemployed? No. She has built an impressive CV and has multiple future options she is considering.
@Data10 I have no major dispute with your data or your opinion here. As long as the conclusion/opinion is logical and consistent with respect to the data, it is all good. Thinking about it, I have made very few comments about your posts in recent years for exactly that reason. The only one I can think of, and it was not directed at you as such, was my frustration with the popularity of variance. I always feel strongly that the natural unit is SD. Variance is convenient because it is additive, but it plays tricks with my size perception.
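To spell out the additivity point: for independent X and Y, Var(X + Y) = Var(X) + Var(Y), while SD(X + Y) = sqrt(SD(X)^2 + SD(Y)^2). Two independent sources of variation with SDs of 3 and 4 combine to an SD of 5, not 7; the bookkeeping is cleaner in variance units, but my intuition about sizes lives in SD units.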
My main peeve has always been grading. It bothers me that grading varies not only from school to school, which is bad enough, but also from department to department within the same institution, which I find disturbing. It has been explained to me that one must look at a university as an economic unit and that grades are used to adjust the supply-and-demand calculus to ensure the proper functioning of the entire unit. This may be fine and dandy for the school, but it creates havoc for the students.
One way to control the damage is standardized testing. Another is to put the average or median grade of the class right on the transcript. Still another is to have all students write a standardized exit exam. Personally, I don’t think any of them is going to fly; there is too much vested interest in the opposite direction. The privileged are going to be even more privileged. Do you have any interesting data or opinions on this issue?
This was then followed by a link that referenced, if not the survey in the OP, then at least a very similar one (also done by the same company, Payscale).
So let’s break this down, shall we?
Yeah, 248k respondents sounds impressive. But if you go to look at the report of findings, down at the bottom there’s a link to their methodology. Click through on that, and you get some tidbits that should send shudders down the spine of anyone with training in study design (lots of ellipses, but I solemnly swear I haven’t changed the meaning of anything):
Please note: that involved discussion of whether groups are represented in the survey, but representation really does appear to be defined specifically and only in terms of the number of respondents in each group, not the quality (i.e., randomness) of the subsamples. And as you may remember from stats classes, a large sample that’s non-representative is still a non-representative sample; it just looks more impressive.
So yeah, I’m still not convinced that this survey of most and least “regretted” majors tells us anything whatsoever.
@blossom My siblings and I were first-gen college students. One of them attended Toronto to study commerce. None of us knew at the time that the Rotman program is highly competitive. Potential candidates were put through their paces with prerequisites. One of them, Calculus and Linear Algebra for Commerce (?), was designed to limit enrollment to 200 in the second year.
By Christmas, she noticed Convocation Hall (where the class was held) was mostly empty. (It had been packed in September.) She was one of the survivors. For the rest of her undergrad years, she faced the other survivors on a daily basis, fighting for those precious A’s that were given out sparingly (20% of the class max). She completed the program without fanfare, with a GPA of 2.9x.
She enjoyed business law as an undergrad and did well in it, so naturally she looked into the possibility of studying law. She wrote the LSAT and scored at the 93rd percentile, but discovered that her undergrad GPA pretty well disqualified her from all but the lowest-ranking law schools in the country. In those days, almost all Canadian law schools admitted students using an equation in which 80% of the weighting was on the GPA and 20% on the LSAT. Had it been the reverse, the outcome would have been different.
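Just to illustrate the arithmetic (the actual index formulas varied by school, and the normalization here is made up purely for illustration): put GPA on a 0-100 scale (2.9/4.0 ≈ 73) and use the LSAT percentile (93). Then 0.8 × 73 + 0.2 × 93 ≈ 77, whereas the reversed weighting gives 0.2 × 73 + 0.8 × 93 ≈ 89, a very different place in the applicant pool.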
If she had had the fatherly advice of Anderson, she would have been told to choose between Rotman and underwater basket weaving at a lower-tier school. Either way, she would have known the consequences of her decision before playing, rather than having to learn the rules while the ball was in play. Rookie mistakes, or more accurately, first-gen mistakes. I know she could not have been the only victim; there have to be many more like her.
Quite a few colleges list contextual information about grades on transcripts. A summary of recent policies at highly selective colleges is below (not certain whether all summaries below are up to date).
Cornell – lists median grade of course on transcript
Columbia – lists % of students who earned A grade on transcript
Dartmouth – lists median grade of course on transcript
UNC-Chapel Hill – lists median grade of course on transcript
These policies have been reasonably successful, with the main negative being students using the grade reporting to choose leniently graded classes. What I think is not going to fly as well is a highly selective private college giving out few A’s, since the relatively lower GPAs reduce the rate of post-graduate success in some fields, even when the transcript spells out median grades as above. This is particularly true for professional school, as in your example above. Princeton tried to limit A’s to 35% of students in lower-level courses several years ago. It did not go well, to say the least, and was abandoned. The more common scenario is average GPAs gradually increasing over time, without corresponding changes in student quality. Any college that breaks the pattern puts its graduates at a disadvantage in post-graduate outcomes. I think more subjectively graded classes can be more prone to grade inflation because there is less objective control.
Whether a potential employer would use or care about this type of contextual grade information or available standardized testing is a different matter. Employer surveys suggest that many employers do use a 3.0-GPA-type resume screen, but employers generally do not focus on minor differences in GPA above the screen. Far more influential in hiring decisions are things like having key skills required for the job (which can limit hiring to particular majors), having relevant experience/internships, and interview performance. An applicant who excels in these criteria is likely to be hired, rather than hiring decisions following small differences in GPA or using standardized testing (it sounds like consulting is an exception to some extent). For example, someone who hopes to be a software engineer almost certainly has a better chance of reaching that goal if he sticks out the CS major in spite of getting many B’s, rather than switching to the major in which he thinks he could achieve the highest GPA.
Back when Cornell had public information about median GPA of courses, the regression analysis at https://digitalcommons.ilr.cornell.edu/cgi/viewcontent.cgi?article=1002&context=student was performed to review what criteria predicted median grades of courses. Some of the statistically significant predictors of a class having lower median grades were:
*Experienced professor teaches class, rather than grad student or lecturer
*Class is lower level (and has non-majors)
*Class has larger number of students
*Subject has many objectively graded problems and few subjectively graded papers. For example, Chemistry, Economics, Math, and Physics all had lower median grades, beyond the controls above.
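For anyone curious what this kind of analysis looks like mechanically, here is a minimal sketch in Python. The column names and toy numbers are mine, not the study’s; the actual paper uses the full Cornell median-grade data with additional controls.

import pandas as pd
import statsmodels.formula.api as smf

# Toy data: one row per course. Values and column names are hypothetical.
courses = pd.DataFrame({
    "median_grade":     [2.9, 3.6, 3.1, 3.7, 3.0, 3.5, 3.2, 3.8],
    "experienced_prof": [1,   0,   1,   0,   1,   0,   0,   1],   # 1 = professor, 0 = grad student/lecturer
    "lower_level":      [1,   0,   1,   1,   1,   0,   1,   0],   # 1 = 100/200-level course
    "enrollment":       [300, 25,  180, 60,  400, 20,  90,  15],
    "objective_field":  [1,   0,   1,   0,   1,   0,   0,   0],   # 1 = mostly problem-set/exam graded
})

# OLS regression of a course's median grade on course characteristics;
# negative coefficients mean the characteristic predicts a lower median grade.
model = smf.ols(
    "median_grade ~ experienced_prof + lower_level + enrollment + objective_field",
    data=courses,
).fit()
print(model.params)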
Princeton Median Grades in 2004 – Prior to Grade Deflation
Humanities = 3.4
Social Sciences & Engineering = 3.3
Natural Sciences = 3.2
100-200 (lower) Courses with Fewest A’s – Sciences, Psychology, Math
300-400 (higher) Courses with Fewest A’s – Bio (EEB), Econ, Physics
100-200 (lower) Courses with Most A’s – Languages, Music
300-400 (higher) Courses with Most A’s – Languages, Music
Princeton Median Grades in 2018 – A Few Years After Ending Grade Deflation
Humanities = 3.6
Social Sciences = 3.5
Engineering = 3.4
Natural Sciences = 3.3
@Data10 Good analysis. To be honest, I am not sure having median grades on the transcript solves the problem. Leaving aside the sophistication of employers for the moment, the median grade only tells me how well the student is doing within that class. I still don’t know how well he is doing within the universe of college students. Let us assume the student is admitted to Berkeley; that is already a second-derivative problem. Let us further assume that he is admitted to CS as well. Now we are looking at the third derivative, are we not? How can we compare the quality of a student on the right tail of the right tail with those who sit in the normal range of the curve? Grades as they are constituted now cannot really do the job.
Princeton is an interesting case, but I don’t think it represents well the problem of asymmetrical grade inflation. Since the students are almost always from the right tail in cognitive ability, grade compression in the top end is inevitable. Clearly a “range restriction” problem.
In one of the articles I posted, Tim Taylor talked about Kevin Rask at Wake Forest, who found that the “chemistry department gave the lowest grades over all, averaging 2.78 out of 4, followed by mathematics at 2.90. Education, language and English courses had the highest averages, ranging from 3.33 to 3.36”. I think this is a better representation of a normal college class, if such a thing even exists.
If people still don’t see that as a problem, I don’t know what would convince them.
PS: Elite firms such as Goldman Sachs, D.E. Shaw, etc. have asked for SAT scores from job candidates, even those with a decade or more of experience.
“Princeton is an interesting case, but I don’t think it represents well the problem of asymmetrical grade inflation. Since the students are almost always from the right tail in cognitive ability, grade compression in the top end is inevitable. Clearly a “range restriction” problem.”
This raises the question of how many colleges have both A+ and A grades, which could potentially loosen this range restriction (though without affecting GPA). My S’s college does; my D’s doesn’t.
Looking at some scholarships, I see many winners use the number of A+ grades as one indicator of superior performance, e.g., “Her transcript includes nine A+ grades”, “He has earned five A+ grades”, “She has earned 14 A+ grades”, “Her transcript includes 16 A+ grades”
(all from https://physics.osu.edu/sites/physics.osu.edu/files/2015-2016%20Churchill%20Scholars.pdf)
If we are talking about Berkeley CS, college course grades have at best a loose relationship with the skills and experience necessary to be successful at a particular SV software engineering type position. How well the classes correlate with the required skills will vary tremendously with both the particular position and the particular set of courses. College grades also don’t tell you which applicants are still experts on the course content years after the class, and which applicants forgot the course material a week after the final. The difference between a 3.7 GPA in CS and a 3.5 GPA in CS doesn’t tell you much about which applicant is more likely to be successful on the job, even if the grades were perfectly calibrated and standardized.
As such, SV CS employers typically require applicants to have a CS major or similar background and may use grades for a simple resume screen (for example, GPA must be above 3.0), but they generally do not focus on small differences in grades. Instead, SV CS employers typically have a series of complex interviews that involve answering a variety of technical questions, as well as some less technical interviews. They might ask some quick and easy tech questions in a phone screen, then pose some longer coding problems on site. Example on-site interview questions for SV software engineer positions, as listed on Glassdoor, are below. The applicant is expected to write code to solve the problems, so coding style is also evaluated. Algorithms and data structures seem to be a common theme, although there are many exceptions.
Google Interview Question
*“Given a sorted matrix where the number below and right of you will always be bigger, write an algorithm to find if a particular number exist in the matrix. What is the running time of your algorithm?”*
Facebook Interview Question
*“Display the sorted output of a merge of any number of sorted arrays. Then do it again, more efficiently”*
Apple Interview Question
*“Collapse a binary search tree into a sorted list.”*
The CS employers above give applicants their own test on site that more closely aligns with the skills required for the jobs. It’s a similar idea with the consulting PST test I mentioned earlier – again the employer gives their own test that’s more specific to the job.
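For the curious, one standard way to attack the Google question above is a “staircase” search from the top-right corner. The sketch below is mine (function name and all), not something from Glassdoor; in an interview you would also be expected to explain the O(rows + cols) running time, which follows because every step discards either a row or a column.

def contains(matrix, target):
    # Works when values increase left-to-right within a row and
    # top-to-bottom within a column, as in the question above.
    if not matrix or not matrix[0]:
        return False
    row, col = 0, len(matrix[0]) - 1   # start at the top-right corner
    while row < len(matrix) and col >= 0:
        value = matrix[row][col]
        if value == target:
            return True
        if value > target:
            col -= 1   # everything below in this column is even larger; drop the column
        else:
            row += 1   # everything to the left in this row is even smaller; drop the row
    return False

# Example: contains([[1, 4, 7], [2, 5, 8], [3, 6, 9]], 6) returns True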
That’s a reference to the study at https://digitalcommons.ilr.cornell.edu/cgi/viewcontent.cgi?article=1141&context=workingpapers . The study says the grades by major are from a “northeastern liberal arts college.” Wake Forest is located in NC and is not a LAC, so it’s clear the grade distribution is for a different college. I suspect Colgate University, since the grade distribution covers 1997-2007, ending the year Rask left Colgate.
All the examples I listed in my earlier post also had the lowest grades in math/science and higher grades in more subjectively graded fields, particularly languages and music, so the result is not surprising. Objectively graded fields tend to have lower median grades than more subjectively graded fields, particularly objectively graded fields that have a large number of non-majors taking lower-level classes.
“Elite” finance/consulting positions are the exception, not the rule. In nearly every other industry, it’s uncommon for applicants to be asked about SAT scores, especially years after graduating.
@Twoin18 Those students are impressive. They make me feel like an amoeba.
@Data10 I forgot you work in tech. Perhaps I should stick with underwater basket weaving and theoretical physics instead. My point is quite generic: an A in weaving may not be the same as an A in physics. They may be quantitatively and qualitatively different. Is that really true, though?
This forces me to look at the thorniest question posted earlier. Are some disciplines inherently more difficult than others? Maybe a better question still is whether students entering certain disciplines are stronger than others.
Let us assume that students in all faculties are comparable in ability. Then it is not unreasonable to assume that English majors should outperform math majors in the LSAT. After all, they have four years of intensive writing practice whereas the math majors probably do not.
But a quick look tells me my assumption is incorrect. Math majors are much stronger than English majors in the LSAT, surprisingly so. Does that mean the opposite must be true?
I am not one to be happy with a single data point. Someone mentioned major switching upthread. Perhaps that may give us additional clues. If students’ switching behavior were dictated by changes in interest, then I would expect the switching direction to be statistically random. Arcidiacono’s study at Duke showed that is simply not the case. He found that students at Duke switch out of natural sciences, engineering, and economics because those majors are considered to be “more difficult, associated with higher study times, and are more harshly graded".
My feeling, then, is that students do not choose their majors out of passion, but are looking for the sweet spot between what they can handle on the one hand and how well it is compensated on the other. I wonder if that is true.
A number of colleges give out A+ grades. How meaningful they are depends on how rarely these grades are given. The pluses (or the minuses) may or may not be included in GPA calculations. For example, Caltech and MIT both assign plus and minus grades (in both places, few A+'s are given, and only for exceptional work), but at Caltech the pluses and minuses are used in GPA calculations (A+ = 4.3, A = 4.0, A- = 3.7, etc.), while at MIT they aren’t. Earning an A+ at one of these schools is clearly highly meaningful either way.
@Canuckguy The LSAT involves logic. A brief is more akin to a proof than to an English essay. Law is a relatively intellectual field, some practice areas more so than others.
The LSAT has a logic puzzle section that most people seem to forget exists. It is probably no surprise that math (and philosophy) majors do well on the LSAT when this is considered.
Note also that the writing sample in the LSAT is unscored, though it can be used by law school admission readers. It is also focused on argumentative writing rather than literary analysis, so philosophy and rhetoric majors are likely to have had more practice in that than English majors.
A question, for which I figure the data must be out there but I don’t know where to look: what proportion of math majors vs. English majors sit for the LSAT? There’s a possible self-selection confound if a lower proportion of the one sits for it than of the other.
One also needs to consider the effects of self selection. Only a small portion of grads within a particular major choose to take the LSAT, and it’s not a random sample. For example, the average MCAT subscore by major is below. This is from several years ago when the sections were physical, biological, and verbal. In more recent years, the AAMC stopped publishing this level of detail of scores by major.
Average MCAT Scores by Major
Math Majors: Physical = 10.6, Biological = 10.4, Verbal = 9.3
English Majors: Physical = 9.6, Biological = 10.1, Verbal = 10.2
Chemistry Majors: Physical = 9.5, Biological = 10.0, Verbal = 9.0
Biology Majors: Physical = 9.0, Biological = 9.7, Verbal = 8.7
English majors had significantly better average scores than biology majors on every section, including the biological section. English majors also had higher scores than Chemistry majors on all sections, which was the most harshly graded field in the study linked above. Is this because English majors have better biological ability than biology majors? Or that the English major curriculum better prepares students for the biological exam than the biology curriculum? Or that the English major is tougher than the biology and chemistry majors, so the weaker kids switch out?
The far more likely explanation is self-selection in which English majors choose to be pre-med. Most pre-meds choose a life sciences major, partially due to the overlap with pre-med requirements. A similar principle applies to chemistry, with weaker effects. Pre-meds who choose English majors are rarer. There is little overlap between English major requirements and pre-med requirements, so the student has to do all the pre-med requirements on top of all the English major requirements. The English-major pre-meds may have more total courses required, and they need to be highly successful in courses from very different fields. The students who pursue this rare choice and get good enough grades to persist all the way to med school applications are a rare breed who tend to be outstanding students.
If you want to evaluate the strength of students in the major as a whole, then you need to use something resembling a random or balanced sample of students from that major, rather than just those who take the LSAT or MCAT.
There is no doubt that the average strength of students differs across particular majors. Many colleges have different degrees of admission selectivity by major. For example, SJSU lists their eligibility index cutoff for different majors at http://www.sjsu.edu/admissions/impaction/impactionresultsfreshmen/index.html . EI is based on a combination of grades and test scores. A partial summary is below. Obviously SJSU CS is far more selective than mathematics, so I’d expect a much higher concentration of stellar students in CS than in mathematics at SJSU.
SJSU EI Cutoffs by Major
Computer Science – 4675 (4.0 GPA + 1475 SAT)
Mathematics – 3000 (2.7 GPA + 840 SAT)
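For reference, those numbers are consistent with what I believe is the standard CSU eligibility index formula for the 1600-point SAT, EI = 800 × GPA + SAT total: 800 × 4.0 + 1475 = 4675, and 800 × 2.7 + 840 = 3000.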
Major switching is based on a combination of many different factors. Change in interest is one key factor; so is the major being “more difficult, associated with higher study times, and are more harshly graded"; and so are many other things. Each of these factors is correlated with switching behavior, so if you look at any one alone you will see a correlation with switching, but no combination of them will explain anything close to 100% of switch decisions. Arcidiacono doesn’t list R^2 in the referenced study, but in another study he did among UC students (less range restriction), he was able to explain only 29% of the variance in who switched out of STEM.
For example, the regression analysis in the Arcidiacono study you referenced found that by far the most statistically significant predictor of switching out of the STEM grouping was being female. After controlling for admission reader ratings, test scores, race, harshness of grading, and other factors, females were still far more likely to switch out than males. Other studies that have controlled for class grades found that women were more likely to switch out than men who received the same in-major grades/GPA.
Arcidiacono doesn’t offer any ideas about why this gender difference occurs besides saying, “The higher proportion of females relative to males leaving sciences is an empirical regularity that has been analyzed in Carrell et al. (2010). They show that professor gender affects female students’ propensity to persist in the sciences.” This difference in major switching between genders is often the primary focus of major attrition studies, as it was for the Kevin Rask study you linked above. The abstract states, “Results suggest that gender effects are important, both in terms of the influence of the absolute and relative grades received, and in some cases in terms of the peers in the course and the gender of the instructor.”
That said, in general students are more likely to switch out of majors in which they receive lower grades and switch into majors in which they receive higher grades. So if a major is more harshly graded than others at the college, the grading pattern is likely to increase the number of students who switch out. As stated earlier, objectively graded fields like math and sciences tend to be more harshly graded.
If you are going to call one of my sources (along with the peer reviewed scientific studies it references) “agenda driven garbage”, I would love to see some proof for making such a claim. Otherwise, you are just letting your (non-scientific) opinion mislead others reading this thread by wrongfully discrediting facts. If you have solid proof to accuse me of posting disinformation, I would love to hear it.
Also:
“Although, it does confirm my original point that at the time we are forcing women into STEM (late adolescence), there is a significant gender gap in spatial ability.”
If you read my post and the source, you would see that it is quite the opposite. It seems women, particularly as they move through adolescence, face societal pressure away from most STEM subjects.
Here is an additional source for you to digest, if you would like:
Wow lots of very informed posts here! Am thinking of my daughter’s career future and possible majors (she’s a HS senior) so very interesting to read.
In the LSAT realm, I can throw out that, in my experience, the LSAT (like the SAT) is pretty game-able. I was a college Eng/Psych major who took zero high level math in HS (which I did not care for).
I took a straightforward LSAT prep course and practiced, practiced, practiced (particularly the logic sections).
Ended up with something like the 95th percentile, but it had nothing to do with my college major (or math skills). Agree that logic is absolutely essential to law school and the practice of law, but that can also be honed through practice.
@evergreen5 I have read time and again on CC that LA trains critical thinking. To me, critical implies discriminating, and thinking implies judgement. Outside of logical and rational verbal reasoning, I can think of no other kind of critical thinking except mathematical reasoning; can you?
The number of physics/math students taking the LSAT is indeed small, but they are not the only small group. If population size is a concern, we can substitute the SAT for the LSAT, since their correlation is high (R = .89). The results are still the same. Wai found that regardless of which standardized test was used (he looked at four, including the SAT) and when it was administered (he looked at results from 1946 to the 2000s), STEM majors tend to come out on top.
@Data10 Just two points. Your analysis concerning English majors doing pre-med is spot on. Looking at the evidence I noted above, students who can transition from easy majors to hard majors (based on standardized test scores) are an exceptional bunch. These folks are likely to be at the right tail of the right tail in terms of cognitive ability.
Students transitioning from math to law are, in my opinion, moving from a hard major (the hardest next to physics, imho) to a softer one. I suspect they are the weaker students in the universe of math/physics majors. The strong ones are most likely to continue with math/physics/CS etc. in grad school. At least this has been my experience, anecdotal as it may be.
Your statement concerning the study in which only 29% of the variance in switching out of STEM can be accounted for needs further comment. In terms of SD units, that corresponds to a correlation of roughly sqrt(0.29) ≈ .54, give or take. From the perspective of the physical sciences, that is not very impressive. I get that.
From the perspective of the social sciences, this is just about as good as it gets. As I said earlier, the best we can do is in psychometrics (standardized testing), where R = .5 or a little above. This is the gold standard; anything more is very, very good. The problem, I suspect, has to do with confounding variables that we may not even be aware of. I doubt it will be resolved in my lifetime.