
Managers are rushing to deploy AI for efficiency gains. Employees have to figure out how to make it work—and that’s sometimes harder than it seems.



Half of organizations piloted general-purpose AI tools last year, according to MIT research. But adoption and readiness aren’t the same thing. 



According to Rumman Chowdhury, former U.S. Science Envoy for AI and CEO and cofounder of Humane Intelligence, the burden is likely to fall on workers.



“There’s a lot of FOMO among C-suites and high-level execs on pressure to build AI, and then they’re also incentivized to pretend like it works really well,” she says. “If and when it doesn’t, the responsibility is on the employee who had no say in whether or not this technology was adopted and used, or even often what it was used for.”



For many employees, particularly those who don’t have a technical background, the promise of AI-driven efficiency comes with a catch: Useful output often requires time and effort that doesn’t always get counted. The gap between what these tools are supposed to do and what it actually takes to make them work has become its own job. 



Companies are figuring out whether the fix is better training or more realistic expectations around what these tools can deliver. For now, employees are absorbing the additional labor involved in prompting AI and double-checking its outputs.



“PhD-level experts in your pocket”?



Kellie Romack, chief digital information officer at enterprise software company ServiceNow, suggests managing AI is a hands-on effort. During a recent session with one of the company’s AI tools, she caught the model making a basic math error.



“I wrote back and said, I think your math is wrong,” she recalls. “It wrote back to me and said, ‘Oh, you’re right. I do have it wrong.’” Romack gave it a thumbs-down and flagged it for her team’s feedback loop.



The cleanup that follows is a cost organizations don’t always account for. 



“There may be efficiencies of production,” Chowdhury says. “And then if you scratch beneath the surface, some of this employee frustration is like, yeah, it’s producing stuff—and then I have to spend three hours going through every citation and making sure it’s not a hallucination.”



A January 2026 Workday study of 3,200 employees found that over a third of the time saved through AI is offset by rework, which the report calls an “AI tax on productivity.”



Most leaders, the report finds, focus on gross efficiency: how much time AI saves. That metric ignores rework, and once rework is factored in, the net value of AI is often lower than expected. Net value, which the report defines as “time saved minus time lost,” is what shows whether AI is actually improving how work gets done. The only way to capture AI’s return, the report says, is to move beyond hours saved and account for outcomes achieved.



The problem is the AI industry oversold what these tools could do, Chowdhury says, pointing to OpenAI CEO Sam Altman’s claim last year that users would have a “team of PhD-level experts in your pocket.” The result has been frustration among both employees and managers: What was promoted as transformative has turned out to be far more uneven.



“These technologies are simultaneously capable and not capable, and that’s what’s weird about it,” she says. “People who are the furthest removed from AI—the imagery they have in their head is this magical sentient being. And then they’re frustrated because . . . this isn’t a magical sentient being.”



That gap between expectation and reality, she adds, tends to be widest among those with the least experience using the tools.



The training gap



A 2024 study by University of Texas at Austin researchers Min Kyung Lee and Angie Zhang included a workshop with 39 participants, most of them knowledge workers, from 26 countries, along with separate follow-up interviews with some of them. Of the workers who had received AI training, the majority described it as superficial.



One participant described a colleague who used ChatGPT to generate a list of publications and didn’t realize the AI had invented the titles.



The consequences of using AI without proper training or context can be serious. 



Zhang recalls one participant who worked at a labor standards organization that had to fire a junior employee after their AI-assisted work repeatedly fell short. The employee kept turning to generative AI to draft labor standards, producing work that drew on standards the participant had never come across or that had no bearing on the task. (The organization had not formally adopted AI, but some employees had begun using it anyway.)



Some companies are trying to get ahead of the problem. IBM Consulting requires every employee to earn a foundational generative AI badge that covers not just how to use the tools but also what they can and can’t do, says Tess Rock, associate partner for global finance transformation at IBM Consulting.



But training alone isn’t enough. What matters more is leaders who can clearly define how and where AI should be used, she says. Without that, even well-trained employees get frustrated.



“There needs to be that leadership mandate, operating model, governance-type decisions to be made, versus kind of having a population of frustrated practitioners trying to leverage this,” Rock says.



IBM Consulting treats AI adoption like any other business discipline: two-week sprints in which teams pitch an AI idea with an ROI case, build it, and scale what works. What doesn’t prove its value gets cut.



Working with one client, Rock’s team identified more than 200 potential AI use cases, then measured each against ROI. Half were cut immediately. The top 10 ended up driving 80% of the total value. 



“Focus on those areas that are going to drive impact, and invest there,” she says.



Making it work



Part of what makes the AI management burden so hard to address is that workers’ frustration runs deeper than the tools, Chowdhury says. Employees weren’t asked whether they wanted the tools in the first place. That puts middle managers in a difficult spot, caught between executives wanting to accelerate AI rollouts and employees pushing back.



Her advice: Don’t just push harder. Try to understand what’s actually behind the resistance. 



“The majority of the fear is that people think that ultimately management wants to replace them,” she says. “And it’s a valid fear.”



For Rock, a key question is how organizations think about AI and productivity. Too often, the focus is on how much time individual employees save by writing emails faster or summarizing meetings. She argues that’s the wrong unit of measurement.



“That to me is pennies on the dollar,” she says. “When people talk about productivity, it’s less about Tess Rock as an individual being more productive and [more], how do you fundamentally set up your organization to be more productive?”