Thinking out loud about algorithms: if multiple top schools used the same one, they’d have the same results. Maybe that would simply mean that the same applicants would be screened out on a first pass (which may make sense, if the first pass is some sort of minimal level of academic qualification) but beyond a first pass, I would be skeptical.
Can we talk more about who should test and why among disadvantaged students? If state universities, even regional ones, award some scholarships by considering scores, shouldn’t most kids with grades above, say, 3.0 test? Perhaps assuming no learning disability issues. How do counselors decide whom they’d advise to test?
Adding, “discrepant” splitters (low GPA/high score) are typically around 15% of test takers. A score might be helpful information for them, not necessarily for admission to a highly selective school, but when they consider their education plans in general.
Everyone should test. Some may choose not to submit scores in TO cases, or not to apply to test-required schools, but everyone who can, should test. It is just one Saturday morning for many (or a school day for some), and it can yield great returns.
I think the case they were making in the paper was basically they instead admitted 20-25 more advantaged kids with high test scores and/or from trusted high schools and such.
I note while I believe Dartmouth remains the whitest Ivy, I also believe they finally slipped from majority white to just plurality white. So they can change at least somewhat over time, and they have very long-term strategies, so I could see how anything that might help a bit here or a bit there would still be something they would value to help keep things moving in the right direction.
The second more detailed discussion I supplied seemed to suggest Dartmouth did just that–in Dean Coffin’s terms, they “insourced it”.
As I understand what Coffin was describing, they seemed to be trying out different “algorithms” and “logic trees” and such for generating academic assessments, and then seeing how the output compared to human assessments of the same cases. Apparently by “algorithm 27” (which might not have been the actual number), they were getting happy with the results.
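To make the "try algorithms, compare to humans" process concrete, here is a minimal sketch. Everything in it is invented for illustration: the scoring functions, weights, and sample data are hypothetical, and nothing here is known about Dartmouth's actual tool; the only idea taken from the post is scoring the same cases with candidate algorithms and measuring agreement with human reviewers.

```python
# Hypothetical sketch: rate applicants with candidate "algorithms" and
# measure agreement with human reviewers' ratings on the same cases.
# All names, weights, and data below are invented for illustration.

def algo_simple(app):
    # naive: rescale GPA to a 1-5 rating, ignoring context
    return round(app["gpa"] / 4.0 * 5)

def algo_contextual(app):
    # blend GPA with a test score weighted by a school-rigor factor
    raw = 0.5 * (app["gpa"] / 4.0) + 0.5 * (app["sat"] / 1600) * app["rigor"]
    return max(1, min(5, round(raw * 5)))

def agreement(algo, applicants, human_ratings):
    """Fraction of cases where the algorithm matches the human rating."""
    hits = sum(algo(a) == h for a, h in zip(applicants, human_ratings))
    return hits / len(applicants)

applicants = [
    {"gpa": 3.9, "sat": 1520, "rigor": 1.0},
    {"gpa": 3.4, "sat": 1380, "rigor": 0.9},
    {"gpa": 3.9, "sat": 1100, "rigor": 0.5},
]
human = [5, 4, 3]  # hypothetical human reviewer ratings

for algo in (algo_simple, algo_contextual):
    print(algo.__name__, agreement(algo, applicants, human))
```

In this toy run the contextual variant agrees with the humans more often than the naive one; iterating toward "algorithm 27" would just mean repeating this loop with better candidate functions.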
And even then, the human reviewer can second-guess the algorithm–it apparently just wasn’t happening much once they got the tool to the place they wanted.
Coffin used the term AI at one point, I used the term Big Data, but these are not necessarily precisely defined terms. Whatever you want to call it, he seemed to be describing programs that turned large sets of data into academic assessments, and a process of humans evaluating those assessments until they were happy enough with the accuracy to use it as a tool during admissions.
By the way, this Holy Cross video might be useful context:
That video is now a few years old, but it shows some of the steps their reviewers were taking to process application files. It was pretty labor intensive, and I can see how if Dartmouth and the like can automate a lot of that–whatever you want to call that–then it will save a lot of human time.
Coffin seemed adamant that is all they were using it for, which I agree makes sense.
I was actually reminded a bit of the UCAS tariff point system in the UK. You can do that with just a spreadsheet in the UCAS system because things are so much more standardized there. But then when you have that system in place, you can do things like The Guardian publishing the average UCAS entry tariff for all the different courses at the different universities. That kind of thing is very helpful for UCAS applicants who need to get their list down to 5, as it gives them a much more realistic sense of whether they are truly competitive.
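For readers unfamiliar with how the tariff works, here is the "just a spreadsheet" version of it. The A-level point values below are the current published UCAS tariff (other qualification types have their own tables); the function itself is just an illustration of why standardization makes totals comparable across applicants.

```python
# UCAS tariff sketch: standardized grades map to fixed point values,
# so totals are directly comparable across applicants and courses.
# Current UCAS tariff values for A levels:
A_LEVEL_POINTS = {"A*": 56, "A": 48, "B": 40, "C": 32, "D": 24, "E": 16}

def tariff(grades):
    """Total UCAS tariff points for a list of A-level grades."""
    return sum(A_LEVEL_POINTS[g] for g in grades)

print(tariff(["A*", "A", "B"]))  # 56 + 48 + 40 = 144
```

It is exactly this kind of single comparable number that lets The Guardian publish average entry tariffs per course, something the fragmented US system can't currently support.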
To me, it sure seems like widespread adoption of a customizable program like this could serve a similar function in the US if it was then made public by each college, à la the NPC. It would not tell you if you were going to be admitted, but it could tell you whether your academic qualifications, evaluated in context the way a given college (and possibly college/major) likes to do it, would make you competitive.
So this highlights one of my pet peeves…how colleges report race. (Note I do recognize that there can be FGLI students in all race buckets.)
Colleges will generally include Asians in their statistics when reporting ‘students of color’. Fundamentally I disagree with that, so I rework the numbers to see what the reality is. Let’s dissect D’s numbers from the fresh off the presses 2023-24 CDS, section B2.
For class of 2027, we have to back out the internationals because we don’t know their race. So domestic class size is 1,032. Asians are 17.7% of the domestic class, Whites 52.2% (recognizing there are Asians and Whites in the 174 international group, but we don’t know how many).
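The back-out arithmetic above can be shown explicitly. The only inputs are the figures quoted from the post (1,032 domestic, 174 international, and the two percentages); the total class size is inferred from those two counts, and the rounded student counts are estimates, not CDS-reported numbers.

```python
# Rework the quoted CDS figures: back out internationals (race/ethnicity
# unreported) and compute shares of the domestic class only.
total_class = 1206       # inferred: 1,032 domestic + 174 international
international = 174      # race/ethnicity unknown for this group
domestic = total_class - international   # 1,032

asian_pct = 17.7         # percent of domestic class, per the post
white_pct = 52.2

asian_count = round(domestic * asian_pct / 100)   # estimated students
white_count = round(domestic * white_pct / 100)
other_domestic_pct = 100 - asian_pct - white_pct  # neither white nor Asian
print(asian_count, white_count, round(other_domestic_pct, 1))
```

On these numbers, roughly 30% of the domestic class is neither white nor Asian, which is the figure that gets obscured when Asians are folded into a single "students of color" statistic.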
I don’t know if D has increased the enrollment of non-whites and non-Asians over past years, and I do recognize change is slow. If they did improve this measure over the test optional years (the last 3 years), I would say they probably don’t need to require tests to improve on this metric.
Note as I mentioned above, we don’t know how many of these students are FGLI, which are also an institutional priority…so they could be doing great on that measure.
So, my point is, it’s important that people understand what many colleges are including when they say something like ‘55% of our incoming class is students of color’.
I agree but I think this also means tests should be available in every school during the normal school week. With the way they are heading in terms of format, I think this is a realistic goal.
I am definitely not disagreeing with the point you are making, but I do think different people care about different things when it comes to ethnic diversity. Like, I do think some people would be more comfortable at a college that was 35% white and 20% Asian than 55% white. And then some other people would be comfortable at neither.
In any event, I certainly didn’t mean to suggest everyone had to agree Dartmouth was all fine now in terms of ethnic diversity. I was really just trying to point out that THEY might see this as progress, and would be happy to be able to make more progress in the same direction even if it was just incremental.
fyi – standardized tests are not even offered on Saturdays by any of the high schools in our suburban district; only the local community college offers testing, but spots are extremely limited. Nearly all of our students have to travel to neighboring communities. And that was before the CA publics eliminated testing. With CA demand for testing, and therefore test sites, significantly lower than pre-COVID, students may have to travel across the county to find a site. Extremely difficult for low-income students.
back to the thread topic: Dartmouth accepts ~5 students per year from our County.
This is what many California counselors are saying. It’s a difficult sell/a non-starter for many schools to run Saturday testing (any SAT/ACT testing really) when all the UCs and CSUs are test blind (and all the other west coast schools either test blind or test optional).
Folks…running testing is a huge hassle for counselors. You may say it is part of their job, but remember in a state like California, many (most?) of the counselors at the public HSs are social-emotional counselors, and that is their priority. And now the digital-only SAT requires each student to have their own computer. If the school can’t provide one that will allow downloading Bluebook, CB will provide one, but again…hassle.
Yeah, I interpreted Coffin’s description of “AI” (especially in the context of “I was a humanities student”) as “a few folks in the admissions office know about this thing called ‘conditional formatting’ in Google Sheets and figured out how to import CSVs from our applicant tracking system”.
I’m old school enough to remember AI discussions from way, way before generative AI took off, indeed from before deep learning took off. I gather some people are now feeling the older stuff doesn’t even merit the name.
In any event, what is clear is Coffin is talking about automating most of the work of turning transcripts and test scores into comparable academic assessments using a variety of contextual data including school profiles and long-available demographic information.
Maybe it’s a populous county, but with >3k counties in the US, five sounds like a decent result.
Test center issues have long been a problem for CB, and I have to wonder whether the change to digital from paper will simply trade old problems for new ones.
If California parents find that situation acceptable, then their kids will live with the consequences, which may be fine with them. States far poorer than CA manage to hold the tests during school.
Given what I see at various types of high schools, if domestic Asians at Dartmouth represent just 17.7% of the class, then Dartmouth discriminates heavily against Asian applicants.
yes, 3+ million people, ~100 high schools
not sure that you can make that claim without seeing the number of Asian applicants to Dartmouth.
I note this was an allegation in the Harvard litigation, there was a huge “battle of the experts” over it, and Harvard won that part of the trial. And that part of the verdict was not disturbed by the Supreme Court’s decision.
Of course you don’t have to accept that conclusion, but my point is actually proving ethnic bias is very difficult given the complexity of holistic review admissions.