I Used AI to Do My Entire Job for a Week. Here's Who Should Be Worried.
Last Monday I decided to do something stupid. I told my boss I'd use nothing but AI tools for an entire work week — every email, every report, every presentation, every spreadsheet. No human thinking allowed. Pure AI output, copy-pasted straight into the real world.
By Wednesday I almost got fired. By Friday I understood exactly which jobs are actually dead — and which ones AI people keep lying about.
The numbers everyone's freaking out about
Anthropic — the company behind Claude — just dropped a research report that basically X-rayed the entire U.S. labor market. They didn't just guess which jobs AI could theoretically do. They measured which tasks AI is already doing in the real world.
The results are ugly:
- Computer programmers: 75% of tasks already coverable by AI. The highest of any profession.
- Customer service reps: 80% projected automation rate. 2.24 million out of 2.8 million U.S. jobs at risk.
- Data entry clerks: 95% automation risk. AI processes 1,000+ documents per hour with a 0.1% error rate. Humans? 2-5%.
- Computer and math occupations overall: 35.8% of work already being done by AI — the highest of any category.
And here's the part nobody talks about: the people getting hit hardest aren't who you think. The highest-exposure workers are older, more educated, female, and earn 47% more than their zero-exposure peers. This isn't coming for minimum wage jobs. It's coming for the comfortable ones.
What happened when I actually tried it
Monday — Email and communications: I fed every email into Claude and sent whatever it wrote back. Honestly? Nobody noticed. The emails were clearer and more professional, and I got through my inbox three times faster. This part of most office jobs is genuinely dead. If your job is "write emails all day," start learning something else now.
Tuesday — Data analysis and spreadsheets: I dumped three months of sales data into ChatGPT and asked for insights. It found patterns in 30 seconds that would've taken me two hours. But — and this is a big but — it also confidently told me our Q4 numbers were "concerning" when they were actually our best quarter ever. It had mixed up two columns. Nobody would've caught that except someone who actually knows the business.
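That swapped-column mistake is exactly the kind of error a thirty-second independent check catches. Here's a minimal sketch of the habit (hypothetical column names and made-up numbers, not my real sales data): before accepting any AI narrative about the quarter, recompute the headline figure yourself from the same raw file you handed the model.

```python
import csv
import io

# Hypothetical raw sales export; in practice, read the same CSV you gave the AI.
raw = """quarter,revenue
Q1,410000
Q2,455000
Q3,470000
Q4,520000
"""

# Recompute the headline number directly from the source data.
totals = {row["quarter"]: int(row["revenue"]) for row in csv.DictReader(io.StringIO(raw))}
best_quarter = max(totals, key=totals.get)

# If the AI calls Q4 "concerning" while Q4 is the best quarter on record,
# it has misread the data -- and this check is two lines.
assert best_quarter == "Q4", f"AI claim conflicts with raw data: best is {best_quarter}"
print(f"Best quarter by revenue: {best_quarter} (${totals[best_quarter]:,})")
```

The point isn't the code; it's that verification has to come from outside the model. The AI saw the same file and still got it backwards.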
Wednesday — The presentation disaster: I asked AI to build a client presentation from our brief. It made beautiful slides. Great structure. Compelling narrative. One problem: it hallucinated a case study that doesn't exist. I presented it. My client asked for the source. I had nothing. My boss pulled me aside after the call.
Thursday — Customer support: I used AI to draft responses to 50 customer tickets. It handled about 40 of them perfectly — the routine stuff. Password resets, billing questions, how-to guides. But the 10 it got wrong? Those were the ones where the customer was actually upset. The ones that needed empathy, not efficiency. AI wrote responses that were technically correct and emotionally tone-deaf.
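The 40/10 split suggests the sane setup isn't "AI answers everything" but triage: let AI draft the routine tickets and route anything that reads as upset to a human. A naive sketch (the keyword lists are hypothetical, and real systems would use something smarter than substring matching):

```python
# Hypothetical trigger phrases -- a real deployment would tune these
# or use a sentiment classifier instead of substring matching.
ROUTINE = {"password reset", "billing question", "how do i", "invoice copy"}
ESCALATE = {"furious", "cancel", "unacceptable", "third time", "lawyer"}

def route(ticket: str) -> str:
    text = ticket.lower()
    if any(k in text for k in ESCALATE):
        return "human"      # needs empathy, not efficiency
    if any(k in text for k in ROUTINE):
        return "ai_draft"   # the routine 80% AI handles well
    return "human"          # unknown: default to a person, not the model

print(route("How do I reset my password?"))        # -> ai_draft
print(route("This is the third time I'm asking"))  # -> human
```

Note the default: when you can't classify a ticket, it goes to a human. Thursday taught me the expensive failures are exactly the ones the metrics dashboard won't flag.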
Friday — Code: I asked AI to build a feature our team had estimated at two days of work. It produced working code in 20 minutes. Then I spent four hours debugging edge cases it missed, security vulnerabilities it introduced, and integration issues it couldn't possibly know about because it doesn't know our codebase. Net time saved: maybe two hours. Not zero. But not two days either.
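To make the Friday debugging concrete: one class of bug I mean is SQL built by string interpolation, a classic pattern in quickly generated code. This is an illustrative example, not my actual codebase, showing the vulnerable shape next to the parameterized fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'a@example.com')")

def find_user_unsafe(name: str):
    # The shape AI-generated code often ships: user input interpolated
    # straight into the query string -- injectable.
    return conn.execute(f"SELECT email FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # What the four-hour review produces: a parameterized query.
    return conn.execute("SELECT email FROM users WHERE name = ?", (name,)).fetchall()

# A classic injection payload leaks every row from the unsafe version
# and nothing from the safe one.
payload = "' OR '1'='1"
print(len(find_user_unsafe(payload)))  # 1 -- the whole table leaks
print(len(find_user_safe(payload)))    # 0
```

The code "works" in both versions, which is exactly the problem: nothing about the happy path tells you the first one is a breach waiting to happen. That's what the review hours buy.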
So who's actually screwed?
Based on my week and the Anthropic data, here's the honest breakdown:
Dead within 3 years:
- Data entry: the report's 95% automation risk, plus that 0.1% vs. 2-5% error-rate gap
- Routine email and first-draft communications: my Monday, where nobody noticed
- Tier-one customer support: password resets, billing questions, how-to replies — the 40 of my 50 tickets AI nailed
Seriously threatened (5-year horizon):
- Customer service overall: 80% projected automation, 2.24 million jobs
- Junior data analysis: AI finds the patterns in seconds, but someone who knows the business still has to catch the swapped-column mistakes
- Entry-level programming: 75% task coverage, and the hiring numbers for 22-25-year-olds already reflect it
Safer than people claim:
- Senior engineering: my two-day feature took AI 20 minutes to write and me four hours to make safe
- Any support role dealing with upset customers: those were exactly the tickets AI fumbled
- Anything where one hallucinated "fact" in front of a client gets you fired
The real threat nobody mentions
Here's what scared me most about my experiment: it wasn't that AI did my job badly. It was that it did my job well enough that a manager who doesn't understand the work wouldn't know the difference.
The Wednesday presentation disaster? If my client hadn't asked for the source, everyone would've thought it was great. The Thursday support tickets? The metrics would've shown faster response times and higher ticket closure rates. The fact that 10 customers left angrier than they arrived wouldn't show up in the dashboard for months.
AI doesn't need to be perfect to replace you. It needs to be good enough to fool the people making budget decisions.
Young workers are already feeling this. Anthropic's data shows a 6-16% drop in employment for workers aged 22-25 in AI-exposed jobs. Companies aren't firing people and replacing them with AI. They're just not hiring the next round of juniors.
What you should actually do
If you're in a high-risk role: Don't panic, but don't ignore this. The data entry clerk who learns to manage AI workflows makes more money than the data entry clerk who pretends AI doesn't exist. The customer service rep who handles the hard cases that AI can't is more valuable than ever.
If you're a knowledge worker: Your job isn't to produce output anymore. It's to verify output. To know when the AI is wrong. To bring context that no model has. That's a skill — develop it intentionally.
If you're a manager: The companies that will win aren't replacing humans with AI. They're giving humans AI and watching them do 3x the work. McKinsey found that 88% of companies use AI but only 39% see any impact on the bottom line. The difference is whether you're using AI to cut headcount or to multiply capability.
The uncomfortable truth
AI isn't going to replace all jobs. That was always hype. But it is going to replace the parts of jobs that feel like jobs — the repetitive, the routine, the mindless. What's left is the stuff that actually requires a human: judgment, empathy, creativity, accountability.
The question isn't "will AI take your job?" It's "when your job is reduced to only the hard parts, are you good enough at the hard parts?"
For some people, that's great news. For others, it's the wake-up call they've been ignoring.
If you're going to use AI tools for work, at least use good ones. LazySusan gives you ChatGPT, Claude, Gemini, and 50+ AI models in one subscription — so you can find which model actually handles your work without hallucinating. Start free.