See my response above. Admission based on "stats and tests" is very different from admission based on one singular subject-specific test, blind to all other stats and grades. That was the proposal I was specifically addressing. I stand by my statement that that kind of admission standard will not yield good results. In fact, I think it would yield anti-intellectual ones.
I appreciated your comment for what it was.
In the same vein and with a bit of levity, our CFO (who is a CPA) often uses the joke: What is the difference between a computer and an accountant? The computer has a personality.
Someone upthread commented that those who enjoy the most success in life (and likely in college admissions) are those who excel in a wide number of activities, including academics, athletics, and other extracurriculars. I took your comment as being similar.
When those same STEM students take their college courses, do you feel that there should be final exams for those classes, or do you feel the professors should make a holistic assessment for their grades based on all of their personal qualities, hobbies and other interests?
Apples, oranges. Admissions and assessments are two very different kinds of evaluations – one predicting potential based on as much information as possible, and the other measuring mastery of a specific set of skills and content.
Is it really apples & oranges? How do tests go from irrelevant to meaningful?
If Harvard crafted their own math test for potential math majors, and they admitted the applicants with the top 40 scores, would that really be a guess?
The admission process is far from perfect, but I would wager that people who work in admissions at competitive schools have looked at the data on which students end up being successful.
They probably have greater insight than people on an internet board. There is a reason why it isn’t strictly test-based.
I see both ends. I have students who are determined that they should get into a school like UF when its 25th-percentile SAT score is 1320 and they barely have a 1200. They think their extracurriculars will boost them and that the A they got in a sociology elective should counteract the B- they got in honors math. The thing is, they don’t get in.
I also see very bright students who are exceptional test-takers
and do everything that is asked of them. Sometimes, they don’t do anything more. They are more likely to get in, but they don’t always.
Yes, it’s apples and oranges. Even admission to a major is not the same as admission to a college, and at Harvard, students don’t choose a major until the end of sophomore year, after they’ve had a chance to build a track record in that discipline. So even then, it would be an assessment of skills and content in a particular area.
But Harvard would also argue that college is not simply about the major – it’s about building a broad academic background and experience. Which is why admissions is more than a test in content knowledge.
Because their usefulness depends on what they’re being used for.
College isn’t, and has never been, to my knowledge, strictly about academics. It does neither the student body nor the school as a whole any good to admit students according to one single metric: a test score on a specific subject (which is what has been posited here). Even the most hardcore proponents of the SAT will tell you it’s only a predictor that a given student can handle the generalized courseload at a school. Apples.
A class in college or anywhere else is meant to teach students a certain subject. In that context, a test measuring whether the student has mastered the subject can be appropriate (though even there, of course, grades are also based on papers, assignments, etc.). Oranges.
There are a number of college classes which are already doing a holistic assessment.
Any of the engineering labs – the hands-on, build a solar oven that can be used in rural desert communities and costs less than $5 each to manufacture – classes of that ilk already grade holistically.
It is rarely the top exam scorer who rises to the top in these hands on classes.
One of my son’s MIT friends (an ME as it happens) was the first legit startup millionaire of his class (as far as anyone can tell). He had the lowest GPA of anyone in their fraternity. The second millionaire (also in the frat but a somewhat more respectable but still not high GPA) was definitely the “all around” type- literature, sports trivia, classical and pop music, arcane historical facts and minority Supreme Court opinions, art. I don’t know how much of this helped him conceive of his invention- but according to his friend-group, it was critical for getting VC funding and then being acquired by a multi-billion dollar public company.
This kid could talk to ANYONE about ANYTHING and rather than having the know-it-all personality, came off as someone genuinely interested in learning what the OTHER person wanted to talk about.
So yeah, in addition to the “highest scorer” grading, any of the hands-on “invent your own blank” courses give plenty of opportunities for the “not the highest scorer but I have other talents” to shine.
Already happening – and it has been happening for decades. I recently read about the “wearables” category of medical devices… I’ll guess that the team working on the cap to alleviate severe depression knows more than just biology and neuroscience…
Pretty sure they also had very high scores since they were at MIT…
For selfish reasons, I would love to know his standardized test scores.
Professors usually know a lot of other professors, often at a wide variety of schools. I suppose professors who have only been at elite institutions and don’t interact much with the riff-raff might be an exception here. But I gotta say, I haven’t really observed positive correlation between professor / teaching quality and the prestige of an institution. And I’ve heard this observation from lots of other professors.
I’m not saying there are not other important distinctions between “mediocre” schools and elite institutions. There clearly are. But professors in general, regardless of their pedigree (I hate that word) are big giant nerds who are obsessed with their subject matter and are committed to educating. Some of the best teaching in this country is done at community colleges.
This is why I’ve never been too concerned about the quality of teaching in the college search. I’ve seen behind the curtain and know that professors everywhere are mostly really committed to this and good at it (certain huge R1s are excepted but that varies a lot by school/dept).
My spouse teaches at a middling LAC, and our son’s friends at Purdue are always texting him for help on their chemistry (and me on their biology). This is NOT a sincere diss on Purdue, but he has joked that they should have attended his “mediocre” school if they wanted good teaching.
Teaching isn’t everything and there’s a lot more to college and choosing schools. But teaching isn’t nothing either.
No, that’s not at all how any of this works, and it is not my point at all.
I think that probably applies to most any US college or university – at least those with distribution requirements. All the engineers still have to take English/lit and a social science or two, and all the English majors have to take some math and science courses. The goal at the vast majority of American colleges is to provide a broad education. They are not just career centers.
Yes – my daughter has top grades, rigorous classes, excellent essays, and great ECs, and she has been rejected or deferred at schools where her stats are well above average and almost as high as they could be. We didn’t submit scores, though, so maybe that played a role – though we know others who didn’t, with lesser grades, who got in.
I know that undergraduate admissions differ from graduate admissions in important ways. This thread isn’t about graduate admissions and I don’t want to derail it. I do want to note some things about graduate programs in STEM that may be relevant. These programs have generally had wide latitude to tailor their admissions process however they want.
You would be surprised to discover how hard it is to predict who will perform well in their PhD. All the application components give us some useful info, but can you imagine how much it kerfuffles a bunch of STEM profs that they can’t reliably and consistently predict who will do well in their graduate work? It’s like you’re playing the lottery each time you take on a student. There are few things more frustrating (for everyone involved) than having an unsuccessful grad student in your lab.
After decades of obsessively trying to figure out how to predict PhD success for admissions, programs are increasingly dropping standardized testing as a requirement for a few reasons. The reason that is most germane to this thread is that it doesn’t seem to be predictive of PhD performance. The programs that are retaining the requirement are often using it as a binary wherein they just want to see a score above a pretty low threshold that a typical college grad would easily meet.
I don’t want to get into a debate about what qualities are associated with STEM PhD success, but I was really surprised to discover how bad standardized tests are at predicting it. And that has also colored the way I think about the information that standardized tests can provide for undergrad admissions.
I am actually surprised doctoral programs use any standardized tests at all – it should be pretty obvious that everyone applying meets the minimum bar, and there should be a body of research (maybe publications), recommendations, and extensive substantive interviews with a subject matter expert.
It’s not like there are 100k applications to sift through.
At a lot of universities, the GRE is used just to establish a floor, so to speak (below a certain threshold, a student is probably not qualified), and also for allocating funding. In my graduate program, the faculty decided whom to admit to their department, probably using lower-limit guidelines set by the grad school, but the grad school admissions department handed out funding, using grades and scores to do so. This was just initial funding for admitted students – additional funding later in the program was decided by the department. I agree that the GRE is really not a great predictor of how students will do in grad school, though I have no idea how well the LSAT and other tests for professional schools work for that purpose.
The tests still seem important for medical and law school, which are more like undergrad in their huge numbers of applicants, many coming directly from undergrad. Business schools have the advantage of applicants with several years of relevant experience plus interviews, though med schools interview too.