We had an interesting conversation a while ago in the "AI teaches itself to cheat" thread that touched on the disconnect between our expectations of AI and how it actually works.
When the AI “cheated” to win at a game, some were appalled, but I pointed out that “cheating” is a human concept, not something instilled in the “reasoning” of the AI model. In this particular case, no ethical guardrails were given, so the model was unconstrained: just another version of garbage in, garbage out. If we want to avoid unintended consequences, deep thought and appropriate rule sets need to go into training AI models.
Another poster noted in that thread that:
“60 Minutes” did a piece on AI, they asked a program to write a research paper. The journalist looked at the bibliography and discovered that several of the references were made up - they didn’t exist!
Again, the mistake was expecting accuracy when the model may simply have “reasoned” that it needed to produce something that looked like a list of citations, with no concern for their content: form over function.
At this stage in the AI game, the lesson is to critically evaluate whatever the AI tool you’re using produces and bring your own expertise to the interaction to refine the output. It’s a conversation, not a vending machine dispensing infallible answers.