Written by: CDO Magazine
Updated 3:57 PM UTC, May 15, 2026
As enterprises accelerate AI deployment, many organizations are discovering that adoption alone does not create transformation. The real challenge is building systems employees trust, ensuring AI benefits extend across the organization, and preparing people to work with AI responsibly without abandoning critical thinking.
In this second part of a three-part interview series, former Qlik CEO Mike Capone joins Dr. Adita Karkera, Chief Data Officer, Government and Public Services at Deloitte, to discuss responsible AI, enterprise trust, AI literacy, and the cultural changes required to scale AI effectively.
Part 1 explored the evolving relationship between AI, analytics, governance, and enterprise transformation.
According to Capone, enterprises often underestimate how quickly employees form lasting perceptions about AI systems.
“If someone’s first experience with it is bad, it takes forever to win them back,” Capone says. “If the first bunch of answers you get out of your AI platform are inaccurate, not credible, it’s gonna turn people off.”
He emphasizes that organizations cannot treat trust as a secondary consideration after deployment. Before scaling AI broadly, enterprises must first ensure the technology consistently delivers reliable and useful outcomes.
Capone also argues that organizations should avoid limiting AI access to select technical teams while leaving the rest of the workforce behind.
“There can’t be AI haves and have-nots,” he says. “You can’t give it to developers so they can code faster, and not to the finance or the HR teams.”
Instead, he says enterprises need a meaningful organization-wide AI strategy that helps every department understand how AI can improve productivity and decision-making.
Capone expands the conversation beyond enterprise implementation, arguing that the broader value of AI should not concentrate only within large organizations.
“The value of AI can’t just accrue to big companies,” he says. “We need to make sure that the value of AI accrues to all of society.”
Drawing from his experience advising the White House on responsible AI competitiveness, Capone highlights the importance of democratizing access to AI capabilities so that average citizens can benefit as well.
He adds that the same responsibility applies to governance and deployment practices across the private sector.
“The private sector should do their fair share to make sure that AI is deployed responsibly,” Capone says.
During the discussion, Karkera references Qlik’s long-standing focus on data literacy and asks how the company has evolved those programs in the AI era. Capone frames the transition as part of a much larger workforce evolution.
There was the industrial era, followed by the rise of the knowledge economy. As work became increasingly centered around information and ideas, skills like reading, writing, analysis, and critical thinking became essential.
For years, Qlik emphasized data literacy, helping employees understand how to interpret and question data effectively. But Capone says the rise of AI requires organizations to expand that focus into AI literacy.
“We’ve transformed that into AI literacy,” he says. “Understanding how to work with AI and how to understand the signals from AI about whether you can trust the output from a given model.”
He stresses that employees must learn not only how to use AI tools, but also how to critically evaluate AI-generated outputs.
“You can’t just blindly trust it,” Capone says. “It’s acceleration without complete control.”
According to Capone, organizations should train employees to assess whether the output of a given model can be trusted before acting on it.
He warns that enterprises risk weakening long-term knowledge development if employees become overly dependent on AI-generated answers.
“AI is great for short-term problem solving, but it’s bad for long-term knowledge acquisition. If everything’s automatically handed to you and you don’t have to teach your brain to think through problems, it’s an issue,” Capone adds.
Sharing how Qlik approached trust when developing AI solutions, Capone says the company intentionally sought external perspectives early rather than relying only on internal governance structures.
“We built an AI council including people from outside of the tech business world. They advise governments, work in education, and are thinkers around the ethics and value of AI,” he elaborates.
More importantly, he says those advisors challenge the company with uncomfortable but necessary questions. “They ask us the hard questions like why should your customers trust you versus somebody else?” Capone says.
However, Capone cautions that governance structures alone are insufficient if responsible AI remains isolated from everyday business operations.
He argues that responsible AI must become embedded into organizational culture and operational workflows.
Capone further points to Qlik’s internal communication practices as an example. The company’s Chief Customer Officer regularly holds brief weekly sessions with customer success teams to discuss AI initiatives, explain how AI will affect work, and answer employee questions directly. “We’re very transparent about that stuff,” he says.
Ultimately, Capone believes transparency is foundational to sustaining trust during workforce transformation. “Never tell people something that you think they want to hear if it’s not true,” he says.
He adds that organizations must openly discuss how jobs will evolve alongside AI adoption rather than allowing employees to navigate uncertainty alone.
CDO Magazine thanks Mike Capone for sharing his insights with our global community.