Will AI Automate Most White-Collar Jobs?

I appreciate all the thought-provoking and insightful comments that people have shared. And now, back to our regularly scheduled AI programming! :grinning: I’m about to go to an English department meeting at my regional working-class university. We’re being crushed by the huge temptation that ChatGPT represents for our underprepared students (not all of them are underprepared, but when you have to outwit and outlast half of your students in AI Land, it fundamentally changes the job). Large amounts of grading always came with the job, and that was fine for people like me with a grammar-and-structure fixation. However, playing robo-cop did not come with the job, back in the day. The main task now is to design your courses so that cheating will cause students to fail the assignment. Most of them recognize this when they see the assignment, but there are always one or two who cheat anyway. Quite a few of my colleagues have a 3-5 year exit plan, as do I. I’m not yet even near traditional retirement age, so I’m lucky my spouse has a good job, and I still plan to work after I leave this career.

8 Likes

This is a punitive approach. I hope the conversation at your meeting focuses, instead, on dispelling the notion that using AI is “cheating.” As I posted above, I don’t think that’s the way to go, as that horse has already left the barn. Students and workers can and will use Chat and other forms of AI to complete their work assignments and should not be discouraged from doing so. Rather, I think the onus is on academia to figure out how to integrate AI into the education process such that no policing is necessary and students still learn what they need to learn. Rather than “taking the calculator away” or limiting its use to addition only, we need to embrace what this technology does and use it to make education better, even if it means changing the entire teaching paradigm.

I’m an English major, so I understand that essays and other writing assignments are fundamental to the discipline, but perhaps the day of these types of assignments is over. Perhaps that’s the “old” way to go about it now. I don’t have the answer for what this new way should look like, but I think we need to start taking the approach that AI is here to stay and students should use it to its fullest, not be penalized as “cheaters.” Teaching needs to adapt to the tool such that no time or resources are wasted on detecting its use, no playing robo-cop. Perhaps what needs to change is our expectation of what students should learn and how they learn it.

I would like to see more of an adaptive approach than a policing or limiting approach.

3 Likes

Khan Academy, I thought, had some tips for teachers on how to use AI.

1 Like

I could get into the weeds with suggestions myself but perhaps we need another thread for that.

2 Likes

At the university level, a friend at MIT runs sessions for students on how to use Perplexity to write scientific papers. So, rather than fight it, he’s said, let’s embrace it and focus people on how best to use it as part of the process.

Per @snowball2’s post, the issue of HS English classes seems more complex. I recoiled at Humanities classes but was/am a good writer. In the required freshman literature course in college (we read a novel a week, including Dostoyevsky and Thomas Mann, among others), the professor told me I was an excellent writer, which fortunately got me past the hurdle of having nothing interesting to say about the books. When I was writing my PhD thesis in an applied math field, my advisor ripped apart my intro chapter about 8 times until I internalized his writing rules (never use the passive voice, always provide readers with everything they need to understand each sentence, etc.). By the end, I was a much better writer.

That said, I find that the process of writing what I am thinking exposes problems or issues that I could easily miss/hand wave if I were just talking or using PowerPoint. I think things through that way. Unlike most companies, Amazon requires folks needing a decision to write a brief memo. The decision-making group meets and the first activity is carefully reading the memo (compared to other companies where you email the pre-reading in advance of the meeting and it turns out that people have read the pre-reading at levels ranging from not at all to very carefully). They then use the rest of the session to make the decision. A former employee who has worked at Amazon and at other tech companies is of the opinion that this leads to better decisions on average. Given how my mind works (writing a complete argument ensures that I have missed a lot less), this seems highly likely.

Looping back to the subject of AI and English, if students never learn to write well because ChatGPT and Perplexity are doing the writing and IF my experience is not just idiosyncratic, will failing to learn to write harm their capacity to think things through?

2 Likes

Perhaps assignments could instruct students to insert an essay prompt into ChatGPT or similar, and then to evaluate the resulting essay’s strengths and weaknesses and what they would add or change to make it better. That might help students (particularly underprepared ones) start to see the shortcomings of AI, and also help them grow with respect to figuring out what should and shouldn’t be done in writing assignments.

6 Likes

@coolguy40, I’m not sure about your comment that AI will not be useful to the military. It is already in use: choosing targets, identifying threats, etc. Palantir has been a huge recipient of military funds. When there are hundreds of thousands of drones out there, people will not be able to respond to the threats fast enough, and we will be using AI to figure out which drones to shoot down, in what order, and using which technology (lasers, missiles, etc.). (I’m not in the military, so we may already be using AI for this last purpose.) But I would be surprised if AI is not a very significant part of military strategy going forward.

4 Likes

My passion in life is giving knowledge and social mobility to working-class and underprepared students. For those of you who haven’t taught at this level (even those of you who are college profs who teach the middle class and up), I can promise you that if we did not prevent most of OUR students from cheating with AI (which is perfectly possible; I do it all the time, through great effort in course design), they would never learn how to structure a basic argument and back it up with good evidence. That’s my primary mission, and it fits them for all sorts of basic professional jobs. It’s one thing for students who have had AP courses in high school, like my daughter. It’s quite another to purposely allow students not to advance their critical thinking when they have been taught to spit back information in a basic way and bubble in multiple-choice tests. My students don’t see me as punishing them; they are super-grateful that I don’t sell them out for my own ease. Some people who teach working-class students are selling them out, and making the even more pernicious argument that “students from diverse and underprivileged backgrounds need to be allowed to use AI more because they are not as capable.” They don’t say it quite like that, though.

9 Likes

IMO, yes. I moved my S to private school in middle school because it was clear that our school didn’t focus on proper writing skills. I believe that fundamentals must be learned first, before using things like AI (and yes, calculators), because it’s crucial to understanding what is being done by the AI (or calculator). I need to know enough about the underlying calculations to realize whether I might have punched a number into my calculator incorrectly. I need to know enough about the underlying concepts to know if the AI is correct … after all, AI will only be as good as the information being fed into it. Has it been manipulated to give a certain viewpoint? Has it been fed incorrect data or biased data (intentionally or unintentionally)? We can’t just cede our critical thinking to a machine. Remember, garbage in, garbage out.

We need to teach concepts first, then teach students (or employees) how to make things easier by using technology. The learning needs to come first.

7 Likes

Can confirm.

1 Like

At the university level, a friend at MIT runs sessions for students on how to use Perplexity to write scientific papers. So, rather than fight it, he’s said, let’s embrace it and focus people on how best to use it as part of the process.

My SIL is a professor at a major Mid Atlantic university. (Actually he’s the director of a center for quantum computing at this major university.)

After he writes his homework sets for his graduate computer science/theoretical mathematics/theoretical physics classes he then uses ChatGPT to solve the problems because he knows the students will do that instead of solving the problems themselves. He’s been surprised at just how good and creative the AI’s solutions are.

I asked him if he was going to require his students to hand over their phones and make them take the final in a Faraday cage. He said no; he’s eliminated all graded problem-based exams in favor of original projects.

SIL is a frequently cited name when it comes to quantum computing, and he says there are still plenty of major issues that need to be resolved before quantum computers will be fully usable. Hardware is still glitchy. Plus, there are 3 or 4 major competing methods for data storage arrays. Until one is proven more effective and reliable, there’s a risk of ending up with Betamax instead of VHS. Software still cannot fully compensate for the essential randomness of quantum mechanics. (And the mathematical methods for correcting that randomness depend on what type of hardware is being used.)

SIL said he used to think that lower level jobs (aka programmers) were going to be eliminated first by AI, but now he says that even pure theoretical mathematics isn’t a “safe” field anymore.

He joked that at least his wife will still be employed, since an AI doesn’t have hands and is really bad at detecting when people are lying.

5 Likes

Let’s not assume that AI will interfere with students of any ability or level of preparation learning how to structure a basic argument and back it up with good evidence, or learning how to write, just as the calculator did not impair anyone’s ability to learn math. What requires scrutiny and change is our thinking about how to teach these skills in an AI world. We need to let go of the assumption that AI impairs our ability to teach these skills or students’ ability to learn them.

I think a discussion of ways to accomplish this requires its own thread as it’s off-topic for this one (automating white collar jobs), but I’ve been accused of policing threads, so carry on.

1 Like

Kudos to you.

Every time I hear the argument (on CC but also IRL) that “Corporate America” is %^&*-ing a significant part of the workforce with its emphasis on certain colleges (“targets”), certain degrees, “elitist hiring practices,” etc., I grind my teeth.

My company is very aware of the “paper ceiling” and works hard and consistently to make sure that arbitrary degree requirements aren’t systematically keeping out employees whose educational backgrounds don’t follow the traditional trajectory.

But as Snowball understands (and many others with the best of intentions do not), it’s REALLY hard to teach a 35-year-old basic sentence structure, grammar, and core rhetorical tools. Yes, there are people who didn’t go to college at all, or who got an AA degree, or who went to a non-ranked open-admissions U, who are solid writers. There are elderly members of my own family, HS grads, who write beautifully.

But that is, sadly, not the norm. And before you push back that hundreds of job classifications don’t require the ability to write: yes and no. Most corporate-type jobs require the ability to summarize, synthesize, and edit, even if it’s only the invite to an all-hands meeting with a summary of “Why are we having this meeting?” The boss hands an admin a three-page “here’s why we are getting together” memo with the instruction “send out the invite.” Not the three-page, confidential memo, but a two-paragraph “Here’s why.”

And sadly, a LOT of people cannot craft a two-paragraph anything using Standard English. So yes, we train. Millions of dollars in the aggregate for training, development, and various learning modules and programs. There are things that most companies can teach well (look at all the “mini-MBA programs”; it turns out you don’t need four semesters to learn how to do a discounted cash flow analysis if you have strong math and analytical skills). But the K-12 basics are extremely difficult to teach to working adults. You can run workshops, provide incentives, and pay employees who take classes on the side, whether for a degree or not. But it is hard.

So hugs to anyone in the trenches who is working to remove this very substantial roadblock. I have friends who recruit for large hospital systems, and they are increasingly worried about workforce development issues. Do you want your prescription filled by a pharm tech who doesn’t know that the placement of a decimal point is significant? (50 mgs is good. 500 mgs will kill you). Do you want your phlebotomist getting confused about kilos and pounds? And when the new sterilization standards are released for the entire hospital system, do you want ANYONE who deals with body fluids, critical care, implanted devices, or infectious diseases who does not understand a four paragraph memo?

6 Likes

I totally agree with the need for all the skills required to succeed in today’s world, but let’s not assume that AI will hamper this acquisition, for anyone. Let’s use our creative energy to figure out how to use it to make sure it doesn’t. Let’s let go of any notion that because AI can generate content, students can’t be taught to write or think critically. We old dogs need to learn new tricks.

(Sorry. Broken record. I’ll stop.)

But with every new trick, there are underlying requirements. A sleight of hand magician must learn manual dexterity. We must always value learning underlying principles and concepts before learning to use artificial anything to assist in problem solving or everyday tasks.

2 Likes

No argument from me.

I’m sure there were folks convinced that Word Processing would destroy people’s ability to draft a coherent paragraph.

2 Likes

Actually, word processing required a lot of people I knew to learn a new trick: how to type (and edit their typing). Word processing led to the elimination of secretaries at my company and many others. Suddenly, people had to know how to type their own documents. While spell check and grammar check were helpful, it was clear that a lot of people weren’t particularly good at spelling or grammar.

“Let’s eat Grandma” vs. “Let’s eat, Grandma.” Punctuation matters!

5 Likes

Apparently this moral dilemma is already playing out as AI identifies targets and human judgement is incorporated into the ultimate decision making. In all likelihood generative AI will minimize the human input over time.

Humans allow emotions and biases into these decisions; for better or worse, machines don’t. While I certainly don’t sleep better envisioning this evolution toward automated warfare, it seems inevitable.

3 Likes

Total aside: This thread is great. I appreciate the insights, both discussing the issues and the willingness to consider positive changes. Refreshing!

4 Likes

There is a whole branch of the Army dedicated to this: Electronic Warfare. Our son’s BF is in this branch. EW, combined with Cyber and Futures Command, is working to ensure we minimize boots on the ground. Our son has received a commendation medal for a cyber deliverable that performed its “strike.” He can’t tell us exactly what it is, but no infantry were required or harmed.

4 Likes