AI teaches itself to cheat

There’s a technique in AI called “prompt engineering”. AI does what it’s told, but it’s up to the requester to provide appropriate instructions via the prompt.

When the AI provides false information, that’s called a “hallucination” in AI lingo. You reduce those via the prompt by saying something like “use only these sources” and then listing your approved sources. You could also be more general, like saying “use only sources from websites ending in .edu and .gov”, along with “provide a summary only; do not draw your own conclusions”.
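Here’s a minimal sketch of what baking those restrictions into a prompt can look like, assuming the OpenAI Python client; the model name, the listed sources, and the question placeholder are just stand-ins for whatever you actually use.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical approved sources; substitute whatever .edu / .gov sites you trust.
approved_sources = [
    "https://www.nih.gov",
    "https://www.cdc.gov",
    "https://ocw.mit.edu",
]

# Build a prompt that restricts sources and forbids the model's own conclusions.
prompt = (
    "Use only these sources: " + ", ".join(approved_sources) + ". "
    "Provide a summary only; do not draw your own conclusions. "
    "If those sources do not cover the question, say so instead of guessing.\n\n"
    "Question: <your question here>"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```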

For AI chatbots on things like websites, you need to tell the AI something like, “You are a customer service agent. Be polite and act with empathy.” That will actually produce different results than if you left those instructions off.
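In code, that persona usually goes in as a “system” message that rides along with every request. A minimal sketch, again assuming the OpenAI Python client with a placeholder model name:

```python
from openai import OpenAI

client = OpenAI()

# The persona instructions from above, sent as the system message.
SYSTEM_PROMPT = "You are a customer service agent. Be polite and act with empathy."

def reply(user_message: str) -> str:
    """Send the user's message along with the system prompt and return the answer."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(reply("My order hasn't arrived yet."))
```

Drop the system message and the same user question tends to come back flatter and less helpful, which is the whole point of setting the persona up front.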

For all its power, AI is still a computer program that needs appropriate instructions. I think the basic mistake people make is assuming AI has common sense, which it definitely does not.
