Again, my family is just starting our college search and is new to this.
I was recently in a group of strangers whose children were wrapping up their college searches.
The parents all had access to Scoir Scattergrams for their high schools, and they were clearly from multiple high schools. The parents seemed to share a consensus that the Scoir Scattergrams (which show admissions decisions by test score/GPA for YOUR high school) were wildly inaccurate.
The conversation moved on before I could ask why they thought this. I thought that I would ask here. Unless people are submitting bad data…theoretically this seems like an excellent data source. We were planning on using Scoir as a key tool to pick schools to apply to.
What do you all think? Are there data problems with Scoir Scattergrams? If so, what issues exist?
In the case of schools that admit by major or division, a scattergram for the entire school may obscure such differences or give seemingly contradictory data points.
For example, a scattergram for San Jose State University (which does not consider SAT/ACT scores) may show some admissions at 2.7 HS GPA, but some rejections at 3.7 HS GPA.
These data are only accurate and complete if the counseling staff input accurate and complete data (and if students are giving them accurate and complete data).
Some noted issues, apart from the accuracy/completeness of input, are: lack of visibility into which applicants were hooked; whether a given student applied with or without test scores; and the fact that test score data reflect the highest single test, not a superscore.
I think Scoir/Naviance scattergrams are only helpful for schools that admit primarily on stats (and keeping in mind ucbalumnus’s point about schools that admit by major).
For selective schools that use a large range of factors, these scattergrams are inaccurate because they don’t include all the factors these schools look at in making admission decisions. In these cases you’ll often see that the stats of those denied admission are pretty close to those granted admission.
I think if you are a reasonably cautious user of uncontrolled frequency data, there are some interesting insights to be gleaned from SCOIR scattergrams.
I think if you are instead relying on their prepopulated percentage lines to predict your individual chances, then yes, those could often be very inaccurate used that way.
Like @DadOfJerseyGirl, we found the Naviance scattergrams were only useful if a relatively large number of students from my kid’s HS had applied there within the time range that they cover. They were primarily useful for colleges that put stats at the top, but that included a great number of colleges with “holistic” admissions. As a rule, I would say they are good for almost all public universities, including many with pretty low admission rates that supposedly have holistic admissions.
You can identify colleges that primarily use stats, even though they have supposedly holistic admissions, by the fact that the top right corner of the scattergram (high scores and high GPA) is almost uniformly green, meaning all or almost all applicants with a test score/GPA combination above a certain threshold were accepted. This is really useful: if your kid’s stats fall within that mass of green check marks, the college is a safety or a likely.
It was less useful for colleges that have a mixture of accepted/rejected reaching throughout all ranges of GPA and SAT. However, the scattergram also provides the acceptance rate for your kid’s high school. You can compare that rate to the college’s overall acceptance rate, and then compare the distribution of test scores and GPAs of students accepted from your kid’s high school to the college’s mid-50% ranges.
These data will give you an indication of your kid’s high school’s reputation at the college: a higher acceptance rate for your kid’s high school indicates a good reputation, especially if the accepted students had stats within the college’s typical accepted range. It won’t be as informative if the college is more holistic, since you have no idea where your kid stands on other factors relative to their peers at their school or elsewhere. However, kids can compare their stats to the averages at specific colleges, as well as to the stats of students from their school in recent years. It doesn’t tell them their actual chances, but it tells them whether their stats are high enough, and roughly where they would stand relative to other applicants from similar schools with similar profiles.
Again, all with the caveat that if there are fewer than a total of 40 or so applicants to a college across the previous 30 years, very little information can be gleaned from that scattergram.
I think Scoir is moderately useful. For schools that admit primarily on stats and for less selective schools, the data can be very helpful in determining what might be a match or safety for your student. Scattergrams for the most selective schools aren’t that helpful - partly because admitted & rejected students often have similar stats and there is no way to identify which students might have had a hook (legacy, athlete etc).
Moderately or less selective schools are likely to have stat zones that are entirely or nearly entirely admit, stat zones that are entirely or nearly entirely reject, and perhaps a “maybe” zone in between where other factors come into play. However, the stat zones may be less distinct if the school admits by major or division, and selectivity varies greatly between different majors and divisions.
For the most selective schools, there would only be the entirely or nearly entirely reject zone, and the “maybe” zone in the upper right corner.
And I think that is potentially useful information–outside the maybe zone, unless you are hooked or otherwise a clear special case, maybe look elsewhere.
D24’s school uses Naviance, and we did not find the scattergrams terribly useful. Part of this may be that the school is very small.
One of our state’s big public flagships (NC State) admits by major, so those results will be all over the place due to the super popular majors and programs with tiny capped enrollments.
For UNC Chapel Hill, more kids than you might initially expect were accepted, but there is also a guaranteed transfer admission program (and it is common to graduate with an associate’s degree from her school).
I did find it interesting to see if ANY kids had been accepted (or even applied) from her school to some of the more out there colleges over the past several years.
My daughter’s school is a bit odd. Somewhat small, but also an athletic powerhouse. Lots of D1 athletes.
The smaller student N means lots of missing data for a lot of schools. But for bigger schools…it was quite shocking to see people get admitted to great schools who I am assuming are athletes and held to FAR lower academic admission standards.
We had similar volume problems for many specific colleges even being a sort of medium-sized HS (for a private).
I recall someone in our circles saying something like they thought this data was better for red flags than for green flags. Like it doesn’t take a lot of unsuccessful applicants in your range to at least warn you there is a potential issue. But prior applicants going like two for two in your range really isn’t useful for ruling out the possibility it could be two for three after you apply . . . .
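To put that “two for two” point in numbers, here is a quick sketch in plain Python using the Wilson score interval, a standard confidence interval for a binomial proportion. The applicant counts below are made up purely for illustration; the point is how wide the plausible range of the true acceptance rate stays when a scattergram only has a couple of data points, versus the roughly-40 applicants others have suggested as a minimum.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (max(0.0, center - margin), min(1.0, center + margin))

# Two admits out of two applicants: the interval is enormous,
# so "2 for 2" barely constrains the true acceptance rate.
lo, hi = wilson_interval(2, 2)
print(f"2/2 admits: true rate plausibly anywhere in {lo:.0%}-{hi:.0%}")

# Thirty-five admits out of forty: a much tighter picture.
lo, hi = wilson_interval(35, 40)
print(f"35/40 admits: true rate plausibly in {lo:.0%}-{hi:.0%}")
```

Running this shows the 2-for-2 record is consistent with a true acceptance rate anywhere from roughly a third up to 100%, while 35-for-40 narrows things to a genuinely useful band, which is why the sparse scattergrams are better at raising red flags than at confirming green ones.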
Not surprising, since a decently large number of applications and admissions are required to get a useful picture.
Depends. As I wrote, it is also pretty good for identifying safeties and likelies. My kid’s good Chicagoland public high school had a range of stats above which all applicants to UIUC had been accepted, and similar ranges for UMN, Wisconsin, UI, and a bunch of other moderately selective colleges. It’s interesting, because some of these publics, like UIUC, are more selective for engineering.
Yes, although I feel like you need a decent sample size for that. But for colleges where we have dozens of data points, it can be pretty clear in some cases.
Admittedly my kid goes to a reasonably large school (1600–1800 students), roughly 98% college bound, so for many schools there is lots of data. With the more common schools you can really rule some out as “beyond reaches” and confirm some as “likely.” Plenty of schools have no or extremely little data, though. Can you tell a low reach from a hard target? No. Personally I think it is useful as a gut check, as long as you know CS/engineering/nursing etc. are often harder, and some kids are athletes or legacies etc., and you mentally adjust that way. I think a lot of parents/kids say it is way off because their kid doesn’t get into a couple of targets despite being in range, which is statistically unsurprising.
Also, it confirmed to me that AOs must consider this school quite rigorous. Based on these boards you’d think you need a 3.98 UW and a number of APs to get into PSU or Syracuse (as random examples). That is clearly not the case with this school…
Larger variations in selectivity by major, and larger effects of hooks like recruited athlete and legacy, could have the effect of expanding the “maybe” zone, as the applicants to more selective majors show up as rejects in what is otherwise part of the “admit” zone, and the hooked applicants show up as admits in what is otherwise part of the “reject” zone.
The shape of the “admit” / “maybe” / “reject” zones may also differ depending on how the college considers GPA and test scores. For example, is “admit” (or “maybe” for the most selective colleges) a rectangle, everything except a rectangle, a zone delineated by a diagonal line, or something else?
Test optional and test blind admissions may also have effects on the shape of “admit” / “maybe” / “reject” zones.
Our high school uses Scoir. It’s useful to some degree. We keep an eye on the last 1-2 years of Scoir data for our kid’s top choice schools. Admissions rates have changed so much in the last 10 years, I wouldn’t look at data pre-COVID for example. And, it’s helpful to look at the data for Regular Decision vs Early Decision applicants to look for trends. The more applicants from your high school to a particular college the better the data.
There is so much not shown in Scoir: extracurriculars, hooked applicants, etc. Sometimes you’ll stumble on outliers for a school that don’t make sense based on GPA/SAT/ACT alone, but that’s college admissions; it’s subjective. Sometimes that alone is a helpful takeaway.
The “target/maybe” really being a reach if you’re applying for CS is part of what I meant by mentally adjusting for major; I definitely could/should have elaborated on that more. I am personally less concerned about something that looks like a reach but is really a hard no, since one should consider reaches as not that likely anyway, given the vagaries of schools with lower acceptance rates.
There are some schools in our scattergrams where there are virtually no rejections (and the few that do appear are at VERY low stats). I see no reason, for any major, not to consider such a school a likely for anyone.
And as others pointed out, many schools look at lots of things, so being “within range” at all is almost always a maybe, depending on ECs, recs, essays, institutional needs, etc.
I also, of course, pay more attention to a school where 2/3 of the kids apply each year vs. a school that has 8 data points over 5 years (which is not really helpful at all).