Will AI Automate Most White-Collar Jobs?

The most disorienting thing about today’s A.I. industry is that the people closest to the technology — the employees and executives of the leading A.I. labs — tend to be the most worried about how fast it’s improving.

But today, the people with the best information about A.I. progress — the people building powerful A.I., who have access to more-advanced systems than the general public sees — are telling us that big change is near. The leading A.I. companies are actively preparing for A.G.I.’s arrival, and are studying potentially scary properties of their models, such as whether they’re capable of scheming and deception, in anticipation of their becoming more capable and autonomous.

As these tools improve, they are becoming useful for many kinds of white-collar knowledge work. My colleague Ezra Klein recently wrote that the outputs of ChatGPT’s Deep Research, a premium feature that produces complex analytical briefs, were “at least the median” of the human researchers he’d worked with.

1 Like

Multistep research tasks being completed by an AI bot.

“The product, Deep Research, is a new type of ‘agent’ that can go out on to the internet to take on complex tasks, from product comparison to carrying out financial analysis. It is capable of generating reports that might otherwise be carried out by highly paid consultants or City analysts.

“My vibes-based estimate is that it does about 5 per cent of all tasks in the economy today,” Altman said. “With this deep research launch, a lot of people have said, ‘This is my personal AGI moment — it’s doing real, economically valuable work and I didn’t think a system was going to do this’….
Altman has predicted that AGI could arrive within Donald Trump’s second term, which is due to end in 2028. Others including Dario Amodei, chief of OpenAI’s rival Anthropic, said the breakthrough could come as soon as next year given the breakneck rate of advances in the field.”

2 Likes

I worry very much about this. But my worry is that the companies themselves will build in scheming and deception on purpose.

5 Likes

Concur. AI in the hands of bad actors is the danger rather than AI itself. We have proven to be ineffective at controlling bad actors, so it follows…

6 Likes

I noticed a diamond on the left ring finger of a local news personality, so I searched to see whether she is engaged or married. The first result when I google is always AI-generated, and in this case it’s false. The AI is apparently confusing the personality with a social media person who has the same name, but because it includes correct information about the personality’s job and background, there’s no reason incorrect personal details about a husband, his name, and a supposed pregnancy should be attached to her.

I found the social media person with the same name, and the details that are incorrect for the news personality are only partially correct even for the YouTuber: she’s married; that’s it. The husband’s name given by the AI is actually the first name of the husband of another news person on the same channel, combined with that person’s maiden name (not the husband’s surname). That person does have a child.

In other words, if it mixes up something this simple, why on earth would I trust AI to give me important information?

4 Likes

I was talking with a colleague this morning about a book talk she attended about an AI tool (not ChatGPT) that lets the user specify or limit the types of sources the system draws on (such as restricting it to peer-reviewed journals) and then cites the sources it used to develop each part of its answer. If one is thinking about the automation of white-collar jobs, something like that could be much more dangerous than an AI that mixes up (or just makes up) information in response to a prompt, precisely because it would be reliable enough to actually depend on.

1 Like

I had that realization when I asked my AI app to add up a string of dimensions (like 5’-10 1/8" and 3’-5 3/16"). It got it wrong! By several feet!
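Incidentally, adding up a string of dimensions is exactly the kind of task where a few lines of deterministic code beat a language model, since exact fraction arithmetic can't "hallucinate." A minimal sketch in Python, assuming the feet-and-inches format from the example above:

```python
import re
from fractions import Fraction

def parse_dim(s: str) -> Fraction:
    """Parse a dimension like 5'-10 1/8" into total inches, exactly."""
    m = re.match(r"""(\d+)'-(\d+)(?:\s+(\d+)/(\d+))?"?""", s.strip())
    feet, inches = int(m.group(1)), int(m.group(2))
    # Optional fractional-inch part, kept as an exact Fraction.
    frac = Fraction(int(m.group(3)), int(m.group(4))) if m.group(3) else Fraction(0)
    return feet * 12 + inches + frac

total = parse_dim("5'-10 1/8\"") + parse_dim("3'-5 3/16\"")
print(total)              # 1781/16 (inches)
print(divmod(total, 12))  # (9, Fraction(53, 16)) -> 9'-3 5/16"
```

The two example dimensions from the post sum to 9'-3 5/16", with no rounding anywhere in the pipeline.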

3 Likes

In your example, AI is drawing conclusions from the hundreds of billions of pages Google has indexed, across the billions of Google searches run each day. Hundreds of billions times billions is a volume so large it is difficult to conceptualize, and it would be impractical for any person, or even any team of people, to replicate. One key benefit of AI is making high-volume tasks like this possible: high quantity, not necessarily high quality.

Regarding the quality, it depends on the specific AI and its parameters. In general, AI can do some things far better and far more accurately than any human, while it may struggle with some things humans find simple. AI is far from infallible, which often rules out full job replacement: employers, with good reason, often don’t trust that the AI will get everything right.

Where AI can deliver real benefit is not job replacement but job enhancement. For example, AI might be used to sort, analyze, and summarize volumes of material far too large for a human to review, and the human then works from the AI’s summary. This can open the door to new types of work that were previously not possible and generally enhance productivity.
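The sort-analyze-summarize workflow just described is often built as a simple map-reduce pipeline: summarize manageable batches of documents, then summarize the summaries for the human to review. Here is a hedged sketch in Python; the `summarize()` below is a toy stand-in (first sentence only), whereas a real pipeline would call an AI model at that step.

```python
# Sketch of the summarize-then-review pattern, as a map-reduce
# pipeline: summarize batches of documents, then summarize the
# partial summaries. summarize() is a toy first-sentence stand-in
# for what would, in practice, be a call to an AI model.

def summarize(text: str) -> str:
    """Toy summarizer: keep just the first sentence."""
    return text.split(". ")[0].strip().rstrip(".") + "."

def chunk(items, size=3):
    """Split a list into batches small enough to process at once."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def map_reduce_summary(documents):
    # Map step: one partial summary per batch of documents.
    partials = [summarize(" ".join(batch)) for batch in chunk(documents)]
    # Reduce step: summarize the partial summaries for human review.
    return summarize(" ".join(partials))
```

The design point is that no single step ever has to hold the whole collection at once, which is exactly why this scales to volumes no human reviewer could cover.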

One area where I’ve been impressed with AI is technical knowledge and coding. I’ve started using it sort of like a colleague to bounce ideas off of, which I find particularly helpful given that I work from home. For example, earlier this week I was seeing some unexpected (to me) results in how DC offset and IQ imbalance were influencing a timing loop. I explained what was happening and asked the AI why this result would occur. It gave several possibilities, and after I reviewed the code, I confirmed that the AI’s first thought was the correct reason. I probably also could have had the AI review the code for me.

For coding, it’s a similar principle to the quality vs quantity concepts discussed above. My experience is it can quickly review or write large volumes of code in a huge number of languages, and instantly make adjustments. It can implement countless different technical concepts in code, make suggestions about how to improve, or explain/debug why code doesn’t work. This includes many tasks or timelines that would be impractical for any human or team of humans. However, it also regularly makes basic errors that would be obvious to someone who is an expert on the material. AI has many uses as a tool to increase the productivity of someone writing and/or reviewing code, but in general, I wouldn’t trust it to fully replace the employee.

4 Likes

This is a good summary of where we are now. The concern is where we’re going if quantum computing dramatically increases the capacity of AI, pushing us toward the Singularity (a hypothetical point in the future where AI surpasses human intelligence, leading to rapid, uncontrollable technological and societal changes, potentially beyond human comprehension or control).

With the recent development, by Microsoft and Harvard scientists, of the topological qubit, which:

can power a quantum computer more reliably than previously developed quantum qubits and which they believe will speed development of ultrafast quantum computers capable of tackling the toughest computing challenges, far beyond the capability of even supercomputers built through conventional means.

we’re getting closer and closer to this potential. But more important to national security, this unchecked power is moving us closer to Q-Day, the day quantum computers finally crack encryption and become a universal picklock capable of taking down our entire system of security for banking, bitcoin wallets, electrical grids, private data, top-secret military information, and all the rest. When that happens, AI replacing white-collar jobs will be the least of our worries.

When Mosca and his colleagues surveyed cybersecurity experts last year, the forecast was sobering: a one-in-three chance that Q-Day happens before 2035. And the chances it has already happened in secret? Some people I spoke to estimated 15 percent.
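The Mosca survey comes with a simple planning rule, often called Mosca's theorem: if the time needed to migrate to quantum-safe encryption (x) plus the number of years your data must stay secret (y) exceeds the years until Q-Day (z), then data harvested today is already at risk. A toy illustration, with entirely made-up year values:

```python
def at_risk(x_migration_years: float, y_secrecy_years: float,
            z_years_to_qday: float) -> bool:
    """Mosca's inequality: trouble whenever x + y > z."""
    return x_migration_years + y_secrecy_years > z_years_to_qday

# Hypothetical numbers: a 5-year migration, records that must stay
# secret for 10 years, and Q-Day assumed to be 12 years away.
print(at_risk(5, 10, 12))  # True: harvest-now, decrypt-later exposure
```

This is why "Q-Day hasn't happened yet" is cold comfort: encrypted traffic captured today can simply be stored until it has.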

4 Likes

My opinion: the use of AI can be helpful, but folks should still know how to THINK, and to understand. Without those critical thinking skills, you can’t tell when an AI-generated response isn’t spot on.

4 Likes

I never really thought about how I might use AI until recently. I had to give a speech, something I had no experience doing or desire to do. I gave ChatGPT a list of the points I wanted to make, in no particular order. It structured the speech pretty well, and I was able to put it in my voice. It would have taken me days to put it together on my own; in the end I spent about two hours getting it to where I liked it. Most importantly, it was well received by my audience.

4 Likes

The non-profit board I’m on is getting its act together and organizing committees. The President of the board used ChatGPT to write draft charters for each committee. It did a really good job!

2 Likes

I saw there is an article on Nature’s website about specialized AI tools for science:

“As artificial intelligence (AI) continues to evolve, more tools are being built to meet the needs of students and scientists. Nature explores how to harness AI to streamline various parts of the research process, from sharpening your literature review to streamlining your statistics. Among these are tools such as Paperpal and Thesify, which check academic manuscripts against journal submission guidelines, and Elicit, which helps to summarize papers at speed.”

Nature | 10 min read

2 Likes

What do you think of AI in place of customer service for organizations that deal with questions not easily answered?

For example, some may think financial aid advisers can be replaced by AI. On the surface, it may seem like a good idea for answering simple questions. But even simple questions aren’t necessarily simple. I often found that students would ask one question, but I would probe them & determine that they weren’t actually asking the question in a way that would give them the correct answer.

As I think about the move toward replacing Social Security personnel with AI, I can’t help but imagine that a lot of questions asked in that realm are also deserving of probing questions to find out what the person actually wants to know.

You can give me AI for simple stuff, but knowing how those annoying chatbots never seem to actually answer my questions, I don’t want them to replace real, trained customer service representatives.

6 Likes

I agree with your post, @kelsmon, but not all automation is AI-based. Most chatbots are running an if-then-else script, which is why many of them can’t seem to get out of a non-helpful loop of responses (looking at you, Amazon). A true AI chatbot, based on a language model tailored for a particular application, will do a pretty good job and will connect you quickly to a real person if a) you request it or b) it determines it can’t give you the answer you need. If you can’t break out of the loop easily, you can be pretty sure the underlying tool is not AI (or is very poorly implemented).
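To make the distinction concrete, here is a minimal sketch of the scripted style of bot; all the intents and canned replies are hypothetical. Anything outside its keyword table falls through to the same fallback line, which is exactly the unhelpful loop being described:

```python
# Minimal sketch of a scripted, if-then-else chatbot. The intents
# and replies are hypothetical; real deployments are larger, but the
# failure mode is the same: anything outside the keyword table falls
# through to one canned fallback, over and over.

RULES = {
    "track order": "You can track your order on the Orders page.",
    "return item": "Start a return from Your Orders.",
    "agent": "ESCALATE",  # hand off to a human representative
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    # No keyword matched: a scripted bot has nothing else to say,
    # so it repeats itself -- this is the non-helpful loop.
    return "Sorry, I didn't understand. Can you rephrase?"
```

A language-model-based bot would instead classify the intent of free-form text, which is why it can recognize "I give up, get me a person" as an escalation request even without the literal word "agent."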

3 Likes

And to add to Choatiemom’s fine analysis- most companies readily admit that they’ve set their chatbots to deal with the 80/20 rule, which in this context means that human agents spend a huge amount of time answering questions which are readily available elsewhere (on their website, in the delivery email, on your monthly bill). So the cost savings for them in the chatbots is the online equivalent of “where’s the ladies room” (the most frequent question by about a factor of 100 when I worked at Macys in NYC). You go through the annoying cycle-- but in a large percentage of cases, your question will eventually get answered. And for the outliers- just push zero and get connected to a human!

Agree that Amazon is absolutely the worst.

2 Likes

This is an interesting use of AI to study astronomical data.

7 Likes

Will financial advisors use AI?
Do Vanguard or Charles Schwab, etc., see a role for AI?

“How you can use models like Quasar Alpha to create your own market-beating strategies

The awesome thing about this is that the methodology is not being gate-kept. You can try it yourself right now for 100% free.

To do so:

  1. Go to NexusTrade and create a free account
  2. Go to the AI chat page
  3. Literally just type what I typed (or create your own ideas and share them with the world)

The NexusTrade platform is as transparent as possible. You can audit the decision-making, see the exact trading rules, and even peek at the underlying JSON behind the strategies to make sure everything makes sense.

You don’t have to create your own trading platform to use AI to improve your decisions. You just have to create a trading strategy.

Implications of these results

The implications of this are quite literally mind-blowing for anybody who’s been paying attention. Using NexusTrade, you can quite literally click a button and subscribe to a portfolio that was created fully using AI.”

Subscribing to the portfolio costs $10/month, so there is a financial motivation for the person writing the article. I am extremely skeptical. The listed time period is too short to distinguish luck from skill, and there may be other factors at work, such as survivorship bias. I don’t dispute the possibility of AI being able to take advantage of market inefficiencies better than a human could. However, if this ability is public, or available to hedge funds investing billions, then such inefficiencies will not last long or remain exploitable by the typical armchair investor.
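The lucky-versus-skilled point is easy to demonstrate: simulate a pile of strategies with zero real edge over a short backtest window, and the best performer still looks brilliant. A quick illustration (all the numbers here are made up for the sketch):

```python
import random

# Back-of-envelope illustration of luck vs. skill: simulate 1,000
# strategies with ZERO real edge (zero-mean daily returns) over a
# short 60-day backtest, then look at the best performer.

random.seed(42)

def skill_free_return(days=60, daily_vol=0.02):
    """Compound 'days' random daily returns with zero expected edge."""
    total = 1.0
    for _ in range(days):
        total *= 1 + random.gauss(0, daily_vol)
    return total - 1

results = [skill_free_return() for _ in range(1000)]
best = max(results)
print(f"best of 1000 skill-free strategies: {best:+.1%}")
# The winner typically shows a large double-digit gain despite having
# no edge at all: a pure selection/survivorship effect.
```

If a platform only shows you its winners, you are seeing `best`, not a draw from the underlying distribution, and no short track record can tell the two apart.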

I’ve also asked AI about investing in the past, mostly out of curiosity. It made some basic errors, and when I pointed them out, it replied that I was “absolutely right” and made the corresponding corrections. I got the impression it was creating strategies from different things it found on the internet, many of which contradict each other and many of which contain objectively false statements. If I called out an error, it simply switched to a different source. The AI didn’t seem to have a good way to decide which of the many contradictory information sources on the web to trust and which to ignore.

1 Like

I think a lot of these “robo” advisers have incorporated AI or soon will.

If they can find the secret sauce, then great. But life changes every day, and I’m not sure there is a secret sauce other than diversification and time. Technically, the S&P 500, which we all use, isn’t true diversification, as it’s market-cap weighted.

One thing a robo does vs. a person is it removes emotion - so that’s a good thing.

1 Like