How do you think about acceptance rate?

Well, I provided my data sources.

But to be sure, all of this data is imperfect enough that none of it can be proven to a scientific standard. So if you don’t find it persuasive, you obviously don’t have to consider using it yourself.

Anecdotally, it did seem to work out well for my S24. But that is a sample of one, so again not exactly proof, just an illustration of one possible scenario.


Well, just to be clear, cross-admits (Parchment) are not the same as Targets vs. Reaches. But just going back to my thought experiment: “X is my dream school, but in case I don’t get in there, I will also apply to Y”, you have to admit there is a lot of sunlight, not to mention snow and black ice, between Wesleyan and St. Olaf; are you saying Connecticut College was your S24’s dream school?

Sure, but as I said from the outset:

If you don’t have a super strong locational preference (or can talk yourself into that state of affairs), you can often find otherwise comparable colleges in less generally popular locations where the admit rates are significantly higher.

So if you have a strong preference for the state of Connecticut over the state of Minnesota, and cannot talk yourself into a different attitude, then you obviously won’t be willing to consider St Olaf as an arbitrage opportunity with Connecticut College.

This was part of what I was referring to later when I said:

But if you feel like it, maybe you also add St Olaf as potentially a Likely

If you have a sufficiently strong location preference, you may not feel like it. And that is fine.

Correct, and I did not suggest otherwise.

The categorizations I was hypothesizing were based on the framework here:

https://support.collegekickstart.com/hc/en-us/articles/217485088-Differences-Between-Likely-Target-Reach-and-Unlikely-Schools

The first part of the hypothesis was that Wesleyan was a Reach. That is automatically satisfied if the college in question has less than a 25% admit rate, and Wesleyan does, so that one is easy.

The second part of the hypothesis was that Connecticut College was a Target. It first needs an admit rate over 25% (it does), and second, your academic profile has to put you in their middle 50% or higher.

The third part of the hypothesis was that St Olaf was potentially a Likely (depending on your qualifications). For that to be true, it needs an admit rate over 50% (which it does, albeit barely), and your academic profile would have to put you in their top quartile. That may or may not be true: we know by hypothesis your profile put you in Connecticut’s middle 50% or higher, and if it is high enough that would make St Olaf a Likely, but it might not be high enough, in which case St Olaf would likely be another Target instead (although there is a narrow slice where your qualifications would be in the middle 50% for Connecticut but not for St Olaf–see above).
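For what it’s worth, the three-part categorization above can be written down as a small rule set. This is just a rough sketch of the Likely/Target/Reach framework as described in this thread; the function name and tier labels are mine, and the example admit rates are placeholders for illustration, not actual figures for any school:

```python
# Sketch of the Likely/Target/Reach/Unlikely framework described above.
# Thresholds (25% and 50% admit rates, plus where the applicant's profile
# falls relative to the school's enrolled class) follow the post's summary
# of the College Kickstart categories; all names here are illustrative.

def categorize(admit_rate, profile_tier):
    """admit_rate: school's overall admit rate as a fraction (0.0-1.0).
    profile_tier: applicant's academic standing vs. the school's class:
    'top_quartile', 'middle_50_or_higher', or 'below_middle_50'."""
    if admit_rate < 0.25:
        return "Reach"      # under 25% admit rate is automatically a Reach
    if admit_rate > 0.50 and profile_tier == "top_quartile":
        return "Likely"     # over 50% admit rate AND top-quartile profile
    if profile_tier in ("top_quartile", "middle_50_or_higher"):
        return "Target"     # over 25% admit rate, profile middle 50% or higher
    return "Unlikely"       # profile below the school's middle 50%

# Hypothetical admit rates, loosely mirroring the scenario in this thread:
print(categorize(0.14, "middle_50_or_higher"))  # a Wesleyan-like Reach
print(categorize(0.38, "middle_50_or_higher"))  # a Conn College-like Target
print(categorize(0.52, "top_quartile"))         # a St Olaf-like Likely
```

Note the "narrow slice" falls out naturally: the same applicant can land in the middle 50% at one school but below it (or in the top quartile) at another, so the category depends on both the admit-rate threshold and the school-specific profile cutoffs.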

Oh no.

I discouraged him from having a “dream school”, but from the beginning his stated favorite was Yale, to which he applied SCEA, and then got deferred.

I was fine with him liking Yale (again, not a fan of calling that a dream school though), but both his college counselor and I encouraged him to think broadly about why he liked Yale, and what other colleges might make sense for him.

And initially he thought of some of the other “usual suspects” among the Ivies and NESCACs (both popular in his HS), but again both his counselor and I encouraged him to think more broadly than that. And once I adopted this working hypothesis about locational arbitrage, I encouraged him to think specifically about colleges he might like in other regions. Some regions he was not willing to consider, but he was willing to look down the East Coast to North Carolina, and out into the Midwest.

So we visited a bunch of colleges, some he liked, some he didn’t. And eventually he applied to colleges in all the regions he looked at.

And then he got into colleges in all the regions he looked at. But as it happens (and admittedly this is a bit too perfect for this story), he got into exactly none of his Ivy or NESCAC colleges. To be fair, his NESCACs were Amherst, Williams, and Middlebury, and he did get waitlisted by Amherst and Middlebury. So maybe if he had dug deeper into the NESCAC lineup one would have admitted him. Same with Ivies–he only applied to three (Yale, Penn, and Brown) so didn’t fully test what could have happened.

But in any event, outside of those colleges, he was admitted everywhere else he applied. I mentioned WashU, Carleton, Vassar, and St Andrews, and he was also admitted to Haverford, got offered the Monroe Scholarship at William & Mary, got merit offers from Rochester and Pitt, and got admitted to Wake (no merit, though).

Frankly that was overkill, but point being he had lots of great offers to consider in different areas. And WashU had really emerged as his favorite after a visit, and he revisited with an offer and that confirmed it, so he is at WashU.

OK, so my point when I said this “did seem to work out well for my S24” is that by not just sticking to his original preferred region, by being willing to consider, visit, and apply to other colleges outside that region, he ultimately got great offers and ended up at a college he really likes.

But again, this is just one anecdote. Still, I consider it a successful implementation of my locational arbitrage strategy.


Peer assessment, however, would seem to be a recursive method of evaluation. If a school were to sustain a substantial change in acceptance rate, its PA score would eventually adjust to a degree consistent with the extent of that recursion; and current PA scores may already reflect acceptance rates.

Schools receive a student’s info when the student adds the school to their My Colleges list on the Common App, so there’s that. So if the add happens on Dec 15, a school could see that as suspect, especially if there had been no other engagement with the school before then (mailing list registration, virtual or in-person visit, etc.).

I don’t know. All the college data Parchment has is student self-reported (and not verified). So kids can say anything, or adults can pose as kids. I will say the data sometimes seem reasonable, but I wouldn’t draw any conclusions from the head-to-head results. Garbage in, garbage out.


As examples like the ones I gave illustrate, there is likely some sort of positive correlation, but it is far from perfect, such that you can identify many cases of the kind I identified (colleges that rank similarly by peer survey do not rank similarly by acceptance rate).

This makes sense for various reasons. For a long time now, acceptance rates have not been an input into the US News rankings directly. And I think the people being surveyed by US News understand these issues as well as anyone–there are simply too many other factors that go into determining relative acceptance rates for them to assume lower acceptance rates automatically merit a higher score in terms of “overall academic quality” (the specific question they are being asked in the peer survey).

OK, so we know that what I described is happening: schools with durably similar peer survey results can have durably different acceptance rates. The question then is: what else determines acceptance rates besides whatever the peer surveys measure?

And again my working hypothesis having observed a lot of these cases is that location is at least one of the other big factors.

That makes sense and I have heard suggestions like that before. I note Middlebury does say it considers “Level of applicant’s interest” in its CDS, so I guess that is fair warning anything they have the ability to see could be taken into account.

Just to be clear, at least in this context I wasn’t suggesting people take the percentages they report too seriously. I was just using them for a much more limited purpose: seeing how many matchups they have at all.

To my knowledge they don’t report that directly, but when they generate a report it can come in three flavors.

If the numbers are red and green, it means they are at least claiming they have enough matchups in their data for a statistically significant result (“If the results are in color, then the difference is statistically significant at a 95% confidence level.”). I note even in those cases, they will report error ranges, and sometimes the ranges overlap, meaning you don’t actually know who should be red or green. But in any event, red/green numbers mean they are claiming to have a decent number of matchups in their data.

If the numbers are grey, it means they have some matchups, but not enough to claim their results are statistically significant. Again there will be error ranges, and as expected those tend to be very large for grey number results.

Finally, there may be no numbers at all, and they will then say, “No matchups yet. Please try another search.” So that presumably means no matchups at all in their data, which as I noted before doesn’t mean there were no cross-admits in the real world. There just weren’t any reported to Parchment.
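To make the overlap caveat concrete: even when two head-to-head percentages are color-coded, their reported error ranges can intersect, in which case you can’t really say which school “won” the matchup. A minimal sketch of that check (the interval numbers are made up for illustration, not real Parchment output):

```python
# Check whether two reported ranges (point estimate +/- margin) overlap.
# If they do, the apparent "winner" of the matchup is ambiguous even
# though one point estimate is higher than the other.

def ranges_overlap(lo_a, hi_a, lo_b, hi_b):
    """True if the intervals [lo_a, hi_a] and [lo_b, hi_b] intersect."""
    return lo_a <= hi_b and lo_b <= hi_a

# Hypothetical example: School A reported at 55% +/- 8, School B at 45% +/- 8.
a_lo, a_hi = 55 - 8, 55 + 8   # [47, 63]
b_lo, b_hi = 45 - 8, 45 + 8   # [37, 53]
print(ranges_overlap(a_lo, a_hi, b_lo, b_hi))  # True: ambiguous despite 55 > 45
```

With tighter margins (say +/- 3 on each side), the same point estimates would produce non-overlapping intervals, which is the situation where the red/green coloring actually tells you something.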

I want to emphasize I fully agree this is not very high quality data. Still, if a St Olaf and a Connecticut College have no matchups, I do think this is at least some confirmation of the plausible hypothesis that there are not a large number of kids applying to both of those specific colleges.

In contrast, if you run Cal and UCLA, not only are the numbers in color, the error bars are relatively small. Again, I would also advise taking those numbers with a grain of salt. But I do think this is some confirmation of the plausible hypothesis that a lot of kids apply to both of those specific colleges.

And so on.

Parchment was useful for us to establish match, reach and far reaches (flagship was an auto admit safety). This was before I stumbled on this site or even knew what a CDS was. Not sure if the tool still exists, but it was a good aggregator of where the student stood based on academic factors.

As to the original question, if we are looking for a quick shortcut to determine desirability, I would just look at the US News or other published ranking.

Agreed. Why are we even discussing acceptance rates when all of that went out the window along with the Cass & Birnbaum guidebooks about 40 years ago? :flushed:

I’ve yet to see a Chance Me thread where a 5% acceptance rate was even a factor in the OP’s self-assessment.

People were talking about Liberal Arts Colleges, and we should remember that around 1/3 of their students are varsity athletes, many or most of whom are recruited.

That means that these students have a very different admission process, and therefore admission rate, than other students. For starters, they are an EXTREMELY self-selected group. To even start the recruitment process, these students already have to be top-performing athletes, so the pool of such applicants goes through an extreme level of culling every year.

To a smaller extent, some of these LACs have students who are accepted through Questbridge or Posse who also have very rigorous selection processes which start long before the applications arrive at the colleges’ admission offices.


That’s a good point, although I think for some of the LACs the percentage is significantly lower. At Carleton I believe the percentage of varsity athletes is 19-20%, but that includes walk-ons, so the recruited figure would be lower.


According to most sources, 25% of Carleton students are varsity athletes. Lower than the typical 1/3 at, say, NESCAC colleges, but still a substantial chunk.


Carleton varsity athletes are about 20% of the student population, and not all of them were recruited. Equity in Athletics is the official source for these numbers; colleges are required to report them to the feds each year. It shows 402 total unduplicated athletes out of 2,007 total undergrads for the 2022-23 school year (the most recent available).

https://ope.ed.gov/athletics/#/institution/details


According to this website, Midd has 26% unduplicated athletes (706 of 2,736 students), which doesn’t make sense to me - that seems too few. Other NESCAC colleges have numbers which make more sense.

Carleton seems to have an inordinate number of multi-sport athletes, with 93 of their 402 athletes playing more than one sport. Also, since their track athletes are all grouped together, we’re talking about very different sports.

In any case, we’re getting off track. Back to admission rates.


On this thread, we’ve talked a lot about how high quality schools in “less desired” locations can have higher acceptance rates, making them good back-up schools if students are willing to be flexible about location (and if they actually prefer these “less desired” locations, even better).

Another factor that also serves to raise admission rates is a school being single gender. For example, Wellesley, Smith and Mount Holyoke all have admission rates about 10 points higher than I think they “should.” So if a student is willing to be flexible on the gender composition of a school, they can identify some great schools that are more likely (and again, if they actually prefer a single gender college, even better.)

Likewise, weird curricula. If it gets weird enough (e.g., St. John’s), you can still have a co-ed school in an East Coast setting with a relatively high admit rate.

And then there is a school like Reed that refused to play the USNWR rankings game and paid the price. Great education. Again an admit rate much higher than it “should” be.


Other demographic factors may also reduce the appeal of a college, reducing its admission selectivity. For example, if the students are predominantly one race/ethnicity, the college may be seen as unattractive by applicants of any other race/ethnicity, reducing the applicant pool and making the college less selective.


Some top schools, depending on your economic situation, may be more affordable than they look - if need-based aid is factored in.
