Do we know for sure? I don’t think so, but I can say a few things with reasonable confidence. I think admissions officers know kids are doing it, but I also think they don’t want to read stuff written by a robot.
-
It’s fairly easy to spot whole essays written by AI, especially supplemental essays. They are just too general.
-
AOs read a LOT of essays. I guarantee they can tell most of the time when a kid is relying mostly on AI and when they aren’t. I wouldn’t be one bit surprised if the kids NOT using AI soon find that advantageous.
-
I suspect that colleges are going to come out with guidelines for AOs about how to handle this issue. Frankly, when ChatGPT is doing the work, the essay loses its relevance. I would bet that AOs are already rejecting a lot more apps that show a student used AI to do all their writing.
Think about it this way: Johnny has put a lot of effort into writing unique and specific responses, maybe not perfectly punctuated or totally grammatically correct. Jimmy wrote generalized stuff with good grammar and correct punctuation. As an AO, I’m going to feel that Jimmy couldn’t be bothered to do a better job. I’m going to admit Johnny, provided all the other attributes are present in the application.
We should also remember that AOs aren’t grading essays on grammar and punctuation; that isn’t their purpose. I think there is a basic expectation that essays should be readable. For more selective schools, they need to be more than that, of course.