The Misguided War on the SAT

keeping a high GPA ended up being way more stressful for my kid than studying for the SATs

4 Likes

AOs do realize context is important, which is why they can adjust for these other factors: looking, for example, at rigor within the context of what the high school offers, and GPA/class rank within the context of a particular school, or (hopefully) valuing the McDonald’s experience and not necessarily the boondoggle EC.

But when it comes to test scores, parents and students do not believe that context is important. So they are outraged and confused when their kid with a 1500+ SAT gets rejected, while kids from different demographics with 1300+ SATs get accepted. Or they see the extraordinarily high 25-75 numbers and don’t bother to apply because they don’t think they have a chance. Or they assume a high score means more than it really does, and they do everything possible to get that score, only to be disappointed when it doesn’t mean what they think it should. Then they assume that because it didn’t mean what they thought it should, their child was wronged or otherwise discriminated against, and their high scorer deserved admission over the others from another demographic.

No matter how much schools say “these tests aren’t that important,” parents and families refuse to believe it! Some think the very act of conveying this truth must be full of discriminatory intent! Parental and familial expectations do not align with how the tests are actually used, and this itself creates barriers and complicates, muddies, and frustrates the process.

Given that the AOs can get much of the same information from other parts of the application, is it any wonder they have chosen to try to deemphasize these tests and break down these barriers?

The idea that the test results are “objective” conflicts with the notion that the results must be reviewed subject to the context of each student. Therein lies a large part of the disconnect.

3 Likes

We paid for zero test prep and my kids have done minimal on their own. Son is currently a junior; his SAT went up 90 points from Aug to Oct and 200 from his PSAT 10. Daughter’s SAT also went up 200 from her 10th grade PSAT, and her 2nd ACT was a perfect score; she did one practice test. No superscore - straight scores. It is possible to improve significantly without paying any tutors or killing yourself studying.

6 Likes

I totally agree with this. My kids go to a private school with a fair number of affluent kids. SAT scores are the least gameable (not sure if this is a word!) of all the factors. Grade inflation at our school is crazy now post pandemic - everyone seems to think they deserve an A and most teachers seem happy to comply. I feel bad for the few teachers that do have standards, as most kids and teachers complain about them and avoid their classes to protect their easy As. Very difficult to distinguish top academic students anymore at our school based on GPA (my kid has an AP class where the teacher gives so much extra credit that the kids who get 70s on their tests get an A in class just the same as kids who get 95s).

And then so many people have private college counselors that give these same kids substantial help on their essays and tell them what extras to do from 9th grade on to present well on their college applications.

I laugh at the number of non-profits and research opportunities these kids supposedly start or do, that are all just set up by their connected parents and their friends.

I will say that in most cases the one thing these kids can’t game is their SAT scores (even with private tutors) - but now they just go test optional, and they can look just as good as much more impressive middle class kids (whose SAT would distinguish and confirm their strong academics, and whose extras might not look as impressive because they were not all set up for them).

I don’t understand why they don’t keep standardized testing but make kids report all their scores (that way there is context - it’s more impressive for a kid to get a high score having taken the test only 1-2 times than for a kid who has taken it 3-4 times). This is what used to happen.

Test optional has further broken the college admissions system - just look at the huge jump in the number of applications schools get now versus 2019. Colleges probably love it, but it’s terrible for applicants and just adds so much more stress for everyone.

10 Likes

Personally, I think there is very little a tutor adds to a strong student beyond a couple of sessions. Those can be useful to help the student identify certain types of questions and the best strategy to find the answer.

We used a tutor once a week for an hour, which I would hardly call grinding it out. Basically I was paying so my D would actually make brain space for prepping; otherwise there was always a paper, a test, or a club meeting that took precedence. My other two were a little more disciplined, and the free Khan Academy practice was great.

As for the notion of prepping, it’s like any other school test. You review the materials beforehand. The idea that it’s unfair because some kids are preparing themselves for the test is absurd. Is everyone on equal footing? No. Are kids from the same school generally expected to meet a certain number? Yes.

5 Likes

Perhaps parents and students do believe context is important and thus are surprised by some of the admission results.

Reminder that discussion of race in college admission can only be discussed in the political forum. Posts edited to be in compliance.

I believe in context. A 1300 in a school from rural Texas where kids barely manage to graduate? Absolutely. I think it’s harder to understand when those 1300 and 1500 kids are from the same private HS.

7 Likes

Test blind is irrational. A test result is a datapoint which can be weighed in context in light of the college’s priorities. Why suppress knowledge of a datapoint? Are some things too dangerous to know?

7 Likes

I’ve read similar sentiments in other threads, and I don’t fully understand it. Why is a 1600 more meaningful if a student achieves it on their first try than on their second or their seventy-fifth? If the SAT is the objective measure that many believe, why does it matter when the score was achieved (short of the type of kids who get such scores at 11 or 14)? For the ordinary high school junior or senior, is the idea that getting a high score on the first sitting demonstrates that the student is “smarter” than a kid who gets it on their third sitting or who super-scores? Why? If they know the material on their third sitting but didn’t on their first, who cares? They know the material now, right? Isn’t that what matters? If colleges value the score as an objective measure, what makes it more meaningful the first time? The main reason, I would think, is one of money. I know that low income students can get a fee waiver for two sittings of the SAT, after which, I believe, they must pay the full cost of the test. So in terms of $$$, time spent studying (instead of on other activities), and perhaps tutoring resources, multiple sittings do advantage wealthier kids. But we know that affluent kids score higher on the test already, and I suspect that multiple sittings are only a small part of that difference.

I will admit that I sometimes wonder if a few wealthy families are getting unneeded accommodations, and I could see how a jump of hundreds of points between score 1 and score 2 could reflect that the student didn’t have extended time for the first sitting and did have it for the second. But I think that is pretty rare, and other than that circumstance of gaming a rule not meant for your own child, I don’t really see how the “context” of multiple sittings tells admissions officers anything useful about the qualifications of applicants. Furthermore, kids who received accommodations on standardized tests may very well get accommodations in college; therefore, I would think that their performance on the SAT with extended time would predict their performance on college assessments with extended time. However, I don’t know enough about the subject of accommodations and extended time to be sure.

But many posters seem to feel that if a student comes from an upper middle-class or affluent school, they must submit scores to have a shot at selective colleges, and that “test optional” is not really test optional for kids over a certain income line. That has not been my observation (that it is impossible for unhooked high income kids to get into such schools without submitting tests). However, I do think admission is more likely for such kids if they submit their scores and the scores are high. So if most wealthy kids are submitting scores (and they are the ones more likely to be accepted), then why is it a concern that some wealthy kids are not submitting them (and are more likely to be rejected)? Is it the transparency issue? The stress issue? That those TO but upper middle-class kids are wasting their time taking a shot at an application that is likely to be rejected? Is it that the public (as consumers of the colleges) just have a right to know who is getting in with what score and why?

From everything that I have read here and elsewhere, it seems to me that the primary disagreement about the “misguided war” is a disagreement about who benefits and who is harmed by test optional and test blind policies. There are some other concerns that come up, but that is the one most frequently mentioned. I get why the murkiness is frustrating. At the same time, my guess is that most of those stressed out students do find a home at a college that is a great fit for them. Also, I think there are much bigger issues of concern in college admissions and K-12 schooling than whether or not any individual college decides that it needs tests to make its admissions decisions.


3 Likes

No set of information is perfect. I was just thinking that if one concern is that the SAT favors wealthy kids in part because they have the time and resources to take the test more times, and thus a better chance of higher scores, just report all tests taken and admissions officers can use that context too.

I did admissions years ago, and we did see all the scores then; it was helpful in some cases. An aside about accommodations: at our private school, a number of wealthy families have their kids assessed by a private psychologist in 10th grade so they can get extra time on the SAT - and our school signs off on all of these. Kids who are getting all As suddenly have time and a half on the SAT. I blame the College Board for letting a bunch of private schools/kids get away with this. In the overall scheme of things this is a very small percentage of accommodated SATs, but it’s another case where privileged people find a way to turn something that is a necessity for some people into an unneeded advantage for themselves.


The reality is there is potential for gamesmanship, and some people are advantaged, in almost every metric. I’ve never understood the focus on eliminating this one metric - the SAT - when it is probably the most objective metric when used properly (i.e., taking into account context, as MIT does).

3 Likes

Looking ahead, as a practical matter, I wonder whether there will be a gradual shift from scores being a missing data point for test optional applicants to scores being assumed to be lower than the college’s published score range.

I do feel badly for students advised not to bother testing, as I think this does them a disservice. They should have safeties on their list where their scores would be fine to submit and might even help with merit possibilities.

Yale’s admission officer said exactly that (i.e. if scores are missing, they are assumed to be low).

3 Likes

There are fairness arguments that convinced a judge not to allow UC campuses to be test optional (i.e. test required or blind only).

Caltech mentioned something similar, but it is unusual in that the SAT and ACT are likely irrelevant as indicators of whether the applicant has the necessary academic strength for Caltech, which is likely the real reason.

CSUs admit by formula, and presumably did not want to have to come up with a formula to handle some applicants with SAT or ACT and some without.

What does that even mean though? I don’t know Yale’s Class of 2027 test range, but per the Class of 2026 ACT mid 50% was 33-35.

The Yale AOs know (or should know) vast swaths of kids are being told not to submit anything below a 33. So are they assuming a non-submitter has a 32?
And then is the connection that by assuming a non-submitter has a 32 that would hurt the student’s app? I find that to be a stretch.

If the readers are not assuming a 32, then what number is a given reader inferring or assuming? How could Yale ever control for that?

I find that entire podcast insufferable. You do you Yale, but don’t make the mistake that what you do should impact or does impact other schools and their policies.

For the record I have heard dozens of AOs say they don’t assume anything about a non-submitter’s score, nor do they even assume the applicant took the test.

2 Likes

In the case of Yale and others approaching test optional admissions this way, the problem with test optional policies, then, is the lack of clarity from the colleges on when to submit scores: students might accidentally disadvantage themselves by not submitting when their score would have helped support their academic preparedness, as the Yale AO mentioned.

1 Like

This lack of clarity is purposeful IMO. It drives up apps and test score ranges, and drives down acceptance rate. All important for the mystique and eliteness some of these schools revel in creating/maintaining.

Data show that students who have good scores in context either don’t send them or don’t even apply, which we’ve already talked about on this thread.

What we don’t know is how much test scores play into Yale’s admission decisions. Any student with an ACT of 28+ could succeed there, and some with much lower scores would succeed as well. One of my former students had an 18 and is excelling at Cornell (not in a STEM major).

1 Like

I guess your point is that the admissions process has never been a transparent process for very selective schools. There has also never been external objectivity, since only the AOs know what their exact institutional priorities are. Increasing transparency/objectivity is not a good thing for the institution, since it opens them up to criticism and potential litigation as their methodology is more easily reviewed and dissected.

Isn’t the current admissions process the best of all worlds for very selective schools as they now have an unfettered ability and opportunity to curate their annual class?

2 Likes

Probably so. Many don’t care how private schools build their classes, I know I don’t. The ‘very selective’ schools overall are successful at educating and graduating students of their choice, which seems ok to me.

2 Likes

There are many reasons why test blind may be favored over test optional, but a common one is to ease the fears about test optional that have been well discussed in this thread, such as the following. With test blind, there is no decision about when to submit or not submit, no groups that may need to submit scores even though the official policy is optional, and no risk of the admissions process being biased toward/against submitted scores.

  • Students don’t know when to submit scores and when not to submit scores. Less resourced students will decide wrong and be disadvantaged.
  • Colleges will assume your test scores are low, if you do not submit them.
  • It’s only test optional for hooked kids. Unhooked kids need to submit scores, to be accepted.
  • Colleges need scores for lower income kids. Lower income kids who do not submit scores won’t be accepted.

However, only a small handful of selective, private colleges choose test blind over test optional. This list includes Caltech, Cornell (Life Sciences, Architecture, and Business schools), Pitzer, Reed, and WPI. Each of these colleges has its own reasons for making this decision.

For example, the podcast mentions that Caltech favored test blind over test optional partially because of input from faculty. The faculty wanted to go all in with test blind, rather than take half measures. WPI switched to test blind based on seeing 10+ years of results and outcomes with test optional, giving them time to compare performance between submitters and non-submitters.

2 Likes