I agree. I’m a technologist who works with LLMs and this is an important point. GenAI doesn’t “understand” things the way humans do.
It’s trained to predict the next word in a sequence based on statistical patterns learned from huge amounts of text. Through that process it can seem nuanced or knowledgeable, but it isn’t aware of what it’s saying: it’s applying statistical patterns, not human-style understanding or lived experience. So subtle points can easily be missed.
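To make the “predicting the next word from patterns” point concrete, here’s a toy sketch. It’s a simple bigram counter, nowhere near a real LLM (which uses neural networks over vastly more context), but it shows the core idea: the output is driven by observed word statistics, not comprehension.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" (purely illustrative).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which words follow each word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the statistically most likely next word. No "understanding"
    # is involved, only counts of previously observed patterns.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat": it follows "the" most often here
```

The model will happily continue any sentence it has statistics for, and it has no way to notice when a continuation is subtly wrong, which is the failure mode described above.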