The article @Mwfan1921 references focuses on AI's ability to deceive humans. False information that deceives people into believing untruths is nothing new; it was happening long before AI came on the scene. The issue here is that we now have a non-human agent with no inherent moral code or belief system that has "learned" how to deceive based on its (human) training:
…this behavior can be well explained in terms of promoting particular outcomes, often related to how an AI system was trained.
AI systems are trained to produce optimal outcomes ("winning," for example). "Deceit" is simply one way to achieve an outcome, just as "cheating" may be the best way to guarantee a win. Without decision-making guardrails and rulesets that mimic morality, AI will behave in perfectly sociopathic fashion, with no regard for laws, social norms, or the rights and feelings of others. How not? It's not human. So the problem before us is how to instill an artificial moral code into a machine such that it behaves only in ways we find acceptable, always producing outcomes that do not offend our sense of right and wrong. How do we train a machine to behave like a morally perfect human? How do we even define moral perfection? An impossible order.
This article complains about AI's ability to pursue outcomes other than "seeking the truth," but unless we somehow figure out how to teach "truth" to AI in an era when truth has become arbitrary, and how to make that truth always the most desirable outcome, AI will continue to behave in its own best interest. You know, like humans.