I wonder what would happen if coaches were required to recruit their athletes like this. You may only recruit runners by how they rank in their high school, and only look at PE grades beyond that. No timekeeping allowed - to level the playing field. Time-blind apps. After all, everyone who arrives at the finish line has shown they can run, right? And they have an A in PE, too, so what’s the problem?
It depends on the sport and the specific situation. For example, suppose a college made submitting 40-yard dash times optional for football recruiting. Potential recruits could either submit their 40-yard dash time or choose not to. Some potential recruits might never do a time-recorded 40-yard dash, so they would have nothing to submit. When a potential recruit did not have a 40-yard dash time, the coaches would instead need to rely on things like performance in games (ECs), external awards and external star rankings (awards), communication and comments from the high school coach (LORs), communication and comments from scouts/recruiters (interviews), a video highlight reel (optional additional material), etc. And of course what position the recruit played and how much the team needs a player in that position with his unique skillset (major and needs of the particular institution). I wouldn’t assume that a college using this recruiting system would be unsuccessful.
Test optional obviously has many differences from football recruiting, but the key is whether the combination of available material used to evaluate test optional applicants is a desirable way to evaluate applicants. This includes more than just HS GPA in isolation (without considering rigor, which courses were taken, harshness of grading at the particular HS, …).
I think the better analogy would be evaluating students without standardized tests is like recruiting football players without watching any game film. You have no idea how they do against their opponents in either case.
I think a sports analogy is valid here because admissions to highly selective universities is a competition. Standardized tests provide a rock-solid data point as to why someone’s child ‘lost,’ and that is upsetting to many people.
You labor under the misconception that college admissions is a prize to kids that get the “highest GPAs” and “the highest test scores”. It isn’t, so comparing that to athletes who are recruited specifically so that the college can win competitions against other colleges is not even close to valid.
However, if you want a sports analogy, here is one. You are, essentially, saying that, since most athletes can run better than non-athletes, colleges should recruit athletes based on their running abilities. More than that, you are saying that colleges should recruit all of their athletes based on their results in a 200 yard dash. It doesn’t matter if they are great at high jumping, hitting balls out of the park, or even running long distance. You have a single criterion that you know is correlated with athleticism, i.e., running, and then you make the claim that, since running is correlated with athleticism, the speed at which they run 200 yards will tell us whether or not they are good athletes, and that, for every athlete, their 200 yard speed should be a top criterion in deciding whether to recruit them, be it for sprinting, running long distance, playing football, or archery.
Every time somebody raises the fact that the speed at which a person runs 200 yards is not the best way to measure whether they will be a good linebacker, you pull out a study which shows that athletes run, on average, faster than non-athletes.
I presume the “opponents” are the student’s classmates in this analogy. It’s a cutthroat competition between classmates; the kids with the highest SAT scores will rise to the top of that competition, and you have no idea how the test optional kids will do? That’s not at all accurate. It’s been well discussed in other posts, so I won’t go into a lot of detail. Some relevant considerations are the limited predictive ability of the SAT for student performance, particularly for metrics beyond the first year, and the limited additive benefit of the SAT beyond the other considerations used to evaluate test optional applicants (more than just HS GPA in isolation).
Regarding the watching-game-film analogy, one benefit of the SAT is being a standardized metric. Game film isn’t standardized. Game film is also subject to different strengths of schedule across different leagues. For example, the HS team that plays in the state championship probably has game film against a different strength of schedule than a random HS team from a small town that was not particularly successful.
In contrast, 40 yard dash times are standardized. You can compare 40 yard dash times from players at different HSs who see very different strengths of schedule in their games. 40 yard dash times also have a significant correlation in isolation with both certain game performance metrics and draft position (the degree of correlation varies widely by position). However, 40 yard times having a positive correlation with certain game performance metrics in isolation does not mean that it’s impossible for a college team to recruit successfully by using a combination of other available evaluation methods, if submitting a 40 yard time is optional instead of required.
Every highly selective university that has gone back to standardized tests says its own analysis proves this claim to be incorrect. It is baseless and can’t be put forth in any honest discussion.
Ohh this is fun, I can do this all day!
While I would love to say with a straight face that, until this moment, I laboured under the misconception that athletes are prioritised in admissions because they have shown such superior dedication, teamworking skills and time management that they are clearly better at academics than the students who are actually better at academics (because if they had actually spent more time on academics, they would have been better at academics than the students who are better at academics, whatever else those students may have spent their time on), and that there is some mysterious reason, known only to the initiated, why all these qualities can only be shown in athletic pursuits that the college actually fields a team in, and that the student is good enough in to help that team win, having read hundreds of posts (no, I don’t think I’m exaggerating) to that effect…
I’m glad you’ve spelled it out: colleges recruit athletes so they can help their teams win!
Athletics have cultural importance in education in the US, and prioritising them in admissions has absolutely nothing to do with academics, and that’s why athletic associations have created the most elaborate restrictions on recruiting practices, among other things academic minimum standards which, if I remember correctly, usually require standardised test scores to make sure those student athletes are actually comparable.
Seriously, let’s play this through.
Note that I used “runners” in my example, as in track and field or cross country, not football players, and I claim that requiring coaches to recruit only on the basis of students’ PE grades and whether they have won intramural races, but not on running times, is exactly comparable to judging a student’s academic potential by looking at high school grades in academics, using past experience or educated guesses about the school’s grading standards in PE and coaching standards in track, and a student’s rank.
Now most colleges, certainly the CC-discussion-worthy ones, don’t openly recruit by subject area (= sport, i.e. football) or major (= position on the field, i.e. linebacker) but pretend that students, after 13 years of schooling, must be blank slates, ready for any liberal arts subject that may strike their fancy. However, their app has to tell a cohesive “narrative” which usually points very clearly in an academic direction, and colleges very carefully curate their intake in order to fill their schools and departments.
So imagine that on that admissions committee there are the “academic coaches”, boosting their recruits, all of whom, at the colleges CC discusses, will have very similar grades.
Some will have standardised times and results in fundamental athletic measures (running times, swimming times, ball throwing, weight lifting = SATs), some will have more comprehensive standardised results or stats in specific sports (AP scores), a lot won’t or will just have a transcript that says they were coached to a specific standard (AP classes). Some will have competition results, most won’t. Some will have game videos (portfolios, videos, published writing or research) most won’t. Some will come from high schools known for their great athletics, most won’t. Everyone will have a piece of creative writing about something, and I guess some wrote about their sports. All will have reviews from their coaches.
But the majority will just have their PE grades and maybe one of the other, more objective criteria to show, and, crucially, the coaches are not allowed to penalise them in their recruiting! How do they decide whom to boost?
This is news to no one. Colleges pay coaches to field winning teams. Coaches who don’t win are unlikely to keep their jobs for long because unlike professors, they can’t be tenured. (Yes, I understand most profs in the US aren’t tenured/on tt anymore.)
NCAA does not require potential student athletes to have a test score to be eligible to play NCAA sanctioned varsity sports. Obviously test required colleges do require scores. Some coaches at test optional schools require scores from potential recruits (generally based on direction from admissions.) Some colleges actually have a lower hurdle for academic admission for non-athletes than the NCAA required GPA for potential recruits (Mississippi State is one example where a non-athlete applicant needs a HS GPA of at least 2.0 for admission, but athletes at this D1 school would need to meet the NCAA minimum of 2.3 GPA in order to play their sport.)
Now back to this thread which continues despite the seeming agreement of most posters that colleges can choose the admission criteria that are best for them.
This is simply not true. All relevant published analyses show “limited predictive ability, particularly for metrics beyond first year.” Specific numbers from the related analyses have been posted numerous times in this thread. For example, in the analysis supporting Dartmouth’s switch to test required, the metric they evaluated is first year college GPA. The Dartmouth analysis found the SAT in isolation explained 22% of variance in first year college GPA.
This is consistent with other studies. Several other colleges have found ~20% of variance explained in first year GPA, including colleges that chose to switch from test required to test optional based on the results of their analysis, or found the SAT added little additive predictive value beyond the combination of other metrics that could be used to evaluate test optional applicants.
Analyses that want to show the maximum benefit from SAT emphasize first year GPA and do not emphasize other metrics beyond first year that often have lower correlations. For example, the UC study found SAT explained 4% of variance in graduation rate.
If the SAT in isolation explains ~20% of variance in first year GPA, that means a much larger ~80% is not explained by the SAT in isolation. If the SAT in isolation explains 4% of variance in graduation rate, that means a much larger ~96% is not explained by the SAT in isolation. How is that not “limited predictive ability”?
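To make the arithmetic behind “variance explained” concrete, here is a minimal sketch. Variance explained (R²) is the squared correlation, so the percentages above correspond to modest correlations. The correlation values below are illustrative assumptions chosen to reproduce the ballpark figures, not numbers taken from any of the studies discussed:

```python
# Minimal sketch: "variance explained" (R^2) is the squared correlation.
# The correlation values below are illustrative assumptions, not study data.
corr_first_year_gpa = 0.45   # hypothetical SAT vs first-year GPA correlation
corr_grad_rate = 0.20        # hypothetical SAT vs graduation-rate correlation

explained_gpa = corr_first_year_gpa ** 2    # ~0.20 -> ~20% explained
explained_grad = corr_grad_rate ** 2        # 0.04 -> 4% explained

print(f"First-year GPA: {explained_gpa:.0%} explained, "
      f"{1 - explained_gpa:.0%} unexplained")
print(f"Graduation rate: {explained_grad:.0%} explained, "
      f"{1 - explained_grad:.0%} unexplained")
```

In other words, a correlation of roughly 0.45 is what a “20% of variance explained” finding amounts to, and a correlation of roughly 0.2 is what the 4% graduation-rate figure amounts to.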
Depends on the school’s priorities: first year GPA or graduation rate, or both. High first year GPA tends to keep one on an academic track. Low first year GPA often results in “easier” major choices. You get less freshman angst with better first year GPAs. Plus a higher graduation rate as well.
My earlier point was that it doesn’t matter what metric or combination of metrics you choose – the SAT in isolation explains only a small minority of variance in that metric – “limited.” However, for test optional, the key question is not how much variance is explained in isolation. It is how much additive value the SAT has beyond the combination of metrics used to evaluate test optional applicants. This is typically much smaller. For example, the Ithaca study found a 1 percentage point difference in cumulative GPA variance explained by the combination of metrics without SAT vs with SAT (43% vs 44%).
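For readers unfamiliar with how “additive value beyond other metrics” is measured, here is a hedged sketch on synthetic data (all variables and numbers here are invented for illustration, not drawn from the Ithaca study or any other). It compares the R² of a regression on a GPA-like predictor alone against a regression that also includes an SAT-like predictor; because the two predictors measure overlapping things, the incremental R² from adding SAT is much smaller than SAT’s R² in isolation would be:

```python
import numpy as np

# Hedged sketch with synthetic data: the "additive value of SAT beyond
# other metrics" is the change in R^2 when SAT is added to the model.
# All quantities below are simulated assumptions for illustration only.
rng = np.random.default_rng(0)
n = 5000
ability = rng.normal(size=n)                    # latent academic ability
hs_gpa = ability + rng.normal(scale=1.0, size=n)   # noisy measure of ability
sat = ability + rng.normal(scale=1.0, size=n)      # overlaps heavily with GPA
college_gpa = ability + rng.normal(scale=1.0, size=n)

def r_squared(predictors, y):
    """R^2 of an OLS fit of y on the given list of predictor arrays."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_without = r_squared([hs_gpa], college_gpa)
r2_with = r_squared([hs_gpa, sat], college_gpa)
print(f"R^2 without SAT: {r2_without:.2f}, with SAT: {r2_with:.2f}, "
      f"incremental: {r2_with - r2_without:.2f}")
```

Under these assumptions the incremental R² from adding the SAT-like variable is only a fraction of what it would explain in isolation, which is the pattern the incremental-validity argument describes.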
Regarding switching to easier major choices, there have been far fewer studies that evaluate this. I am not aware of any studies that compare major switching behavior between test optional and test submitter students. There is a difference in major distribution, but it’s not clear how much of that difference occurred at admission vs switching out while a student. Duke published a study that evaluated predictors of switching out of math-heavy majors. With full controls, the strongest predictors were being female, the admissions reader’s HS curriculum rating, and harshness of grading in Duke classes. Being female was the strongest predictor, which may relate to things like negative experiences in classes or a lack of role models. Other schools may show different patterns for a variety of reasons.
The colleges that are the focus of this thread typically have special programs to better accommodate students from relatively weaker HS backgrounds, giving them a better chance to be successful in math-heavy majors in spite of HS background. For example, based on a placement test, goals, and discussion, a Harvard student might start at any of the following math levels – Math Ma,b; 1a,b; 19a,b; 20; 21a,b; 23a,b; 25a,b; and 55a,b. The lowest level (Ma) is a calc/pre-calc type class taught at half the normal speed, while Harvard’s website describes Math 55 as “probably the most difficult undergraduate math class in the country.” Harvard’s senior survey reports a median GPA of 3.9 out of 4.0, with only a very small portion of students who do not have A/A- averages. It’s not a throw-everyone-in-the-deep-end-and-see-who-sinks-or-swims type of atmosphere, as may occur at certain other colleges.
Others have already pointed out that recruited athletes have lower test scores, often substantially so, than “unhooked” admitted students, and the NCAA requirements are minimal. Notre Dame was “boasting” that 10 of their recruited football players had GPAs of “3.4 and higher”, and that the lowest GPA was a whopping 3.25. Unhooked applicants are not competitive with GPAs under 3.9.
NESCACs do have limitations, which are part of the “band” system, but they have a much higher percent of athletes, and they are DIII, so the athletic standards are lower than those of DI colleges, allowing them to raise the academic standards.
Where did you get this idea from? I have been in academia for decades, and I have never heard this. It is actually pretty ridiculous, because if colleges somehow believed that students were “blank slates” they would be teaching first-grade level courses, not expecting students to actually have a full 12 years of education.
At large public universities, students are also accepted to a College of Liberal Arts and Sciences. However, the Arts and Sciences college at, say, OSU has 16,000 students, versus Harvard’s 7,000 total, which includes its engineering students. It is a matter of cost in admissions, as well as how departments are funded and how tuition dollars are assigned.
There is no nefarious story or conspiracy. Admissions directly to a major are the result of large student bodies, funding that is based on enrolled majors, answering to the state government rather than to a board of trustees, etc.
The SATs are far more narrow than that. So let us accept your comparison and say that they are running, swimming, and jumping. But running is only a 100 yard sprint, jumping is only the high jump, and swimming is only the 50 yard breaststroke. And you need to do well on all three to play football.
To extend it further, these competitions are all held on the same day, but wealthier athletes can take them in their neighborhood, while low income athletes will have to wake up at 5 am and take a two hour bus ride to the same test. You are also allowing kids whose parents can pay a private doctor to have more time to complete that sprint, or to have a few inches added onto every height they jump.
In fact, the ACTUAL analog of this “general test” that you are demanding is the GPA. The speed that a student runs, the height that they jump, the distance that they throw a javelin, is recorded by the coach for that activity, and added to their “transcript”.
No colleges admit based on AP scores, and there is no place on the Common App to add them. A very small number did accept a few of these in lieu of SAT/ACT scores, but otherwise they don’t care. So they don’t fit in our analogy.
The PE grade is not the “athletic GPA”, because the PE grade is not for athletes, it’s for students who are not athletes. It’s more like the academic GPA for the athlete in our analogy - it shows that they actually attended classes and passed most of them, but, since it is not the reason that they are being recruited, it just has to be decent, not excellent.
So you really haven’t demonstrated that athletic recruiting needs a standardized test of a very narrow set of athletic abilities, which really provides a very limited amount of information for the recruiter.
This is not what is recorded at all by using a grade.
Instead, each teacher is writing the equivalent of “excellent”, “pretty good”, “fair”, and “poor” relative to other students in the class. Going back to the track analogy, somehow a college needs to be able to take the arbitrary grade by a PE teacher to find the next champion 100m sprinter.
Given this situation, what colleges will tend to do is fall back to high schools that they know produce great track stars, or academic students.
Some colleges (too many to mention) do use AP test scores as one factor in the admission process. And yes, the common app testing section includes an area for AP test scores…it’s been that way for a very long time.
The other factor with AP exams is that not all students are properly prepared for the AP exam even though they take the AP course. Some schools expect the student to “pay” more and “prep” for the AP final on their own time. Kids can take the AP course and not the AP exam.
The point is this again favors wealth. Lots of kids can’t pay for the AP exams or the prep needed to get a decent score. But, they are told to take them to pump up the GPA.