
What Anthropic's Research Reveals About AI Adoption
Anthropic just published some interesting research in this Insights post. They built an AI-powered interview tool and used it to talk to 1,250 professionals about how they actually feel about AI at work. General workforce, scientists, creatives. Real conversations that go beyond tool usage: what comes after, how people use what AI generates, how they feel about it, and what they imagine the future looks like.
The research validates a lot of what we're seeing.
People tend to like AI. They use it frequently. They're still uneasy about it.
Stigma
People don't want to admit they use AI because they think it may make them look lazy or like they're cheating.
We found that the real fix for this isn't giving people permission and a policy. It's giving people room to talk together about what they're doing. That's one of the reasons we introduced the solution showcase in our AI Quest Foundations training. We give people space to share what they've built with the rest of the group. They see coworkers doing real work with AI, sometimes very basic like polishing emails, other times complex like deep data analysis. And suddenly AI use feels not just permitted, but required if they want to keep up.
It changes their perspective from quietly using AI to celebrating AI use.
Trust
One of the most common reservations we hear from teams adopting generative AI is that the tools aren't worth it because you have to check all of the output anyway. This is a trust problem. If they have to verify everything, why use the tools at all?
I've heard our own team echo this sentiment in the past. In the developer community, a frequent refrain is that they don't want to use agents because they have to review all the code for accuracy anyway. But reviewing is what the job is becoming. That's management. The move is going from "I wrote this code" to "I reviewed the code of my seven AI employees."
This hits on one of the most important mind shifts when adopting generative AI. AI creates options. Humans make decisions.
If we don't teach that frame, people never reach real leverage.
Task Intensity
One of the more surprising revelations as we've worked with teams is how much cognitive load generative AI can take over, and the unintended consequences that come with it. When we offload the mundane work, we're often offloading the low cognitive load work. This sounds great, until you realize that the low load work is sometimes a break for people during a stressful day.
If an 8-hour day is typically split between 4 hours of high load work and 4 hours of low load work, it can feel like a pretty good day. When that day becomes 8 hours of high-load work, people don't feel saved. They feel exhausted. And then they blame the tool because it feels like they've had to work much harder for the same output. The reality is they've achieved way more, but the incentive structure is rarely there to reward them. They've benefited the business, but they've seen no personal gain.
Not to mention that people actually like doing some of those routine tasks. It's mindless, it's a break from the hard stuff, and they know they'll do it well. Delegating everything doesn't always feel like a win.
And this is where quality training can help shift the narrative within organizations. If the training is just explaining how features work and how to use the tool, people won't always see the benefit. You'll reach the tech lovers but lose most of the staff.
Instead, we have to keep the training human-centered. We have to coach people on the benefits of delegation, as well as the risks that come with it. We have to be honest about how to protect people through this transition while also allowing them to get the most out of the tools.
As a result, we measure performance on metrics that go beyond simply attending the training sessions we provide. We measure four things.
- Workflow automations actually built
- Time saved
- Errors reduced
- Employee satisfaction
Most importantly, we make the social aspect intentional. Our trainings are human, they bring people in. We learn and share together because generative AI adoption isn't just a tech problem. It's a people problem.