The most important thing I’ll be conveying to my kids is how this reinforces the need for sharp critical thinking. I’m an engineer, and the overwhelming sentiment in my circle is that ChatGPT — while good (great, even!) at some things — produces confident, convincing language that is very often incorrect. As an example:
ChatGPT’s firm grasp of language makes it seem like an omniscient AI. It speaks English perfectly, it’s confident and assertive, and it appears able to answer any question you throw at it. But after spending some time grilling the chatbot on my area of expertise, it became clear that, like much of Silicon Valley, it revels in confidently stating false information. As a researcher by trade, I find it easy to check the AI on its frequent misinformation, but this is not something the average layperson is going to consider doing. In fact, I’ve run into multiple instances of researchers writing articles based on nonsensical ChatGPT output that evidently wasn’t fact-checked in any way, shape, or form. (source)
I don’t mean to dodge the core question of “how will AI tools impact career paths?” — there’s simply a lot we don’t yet know about how they will play out. One thing I am confident about, though, is that it will only become more important for our kids to have the tools to think critically when faced with confident-but-possibly-wrong information.