Opinion & Analysis

AI Skeptics Can Derail Your Transformation: Here’s How To Win Them Over


Written by: Danny Abdo | Chief Operations Officer, Skillable

Updated 4:43 PM UTC, Mon February 17, 2025


We are firmly in the race to become AI-powered in almost all aspects of our lives, from our smart homes and cities to our workplaces, where ChatGPT has become shorthand for the productivity benefits that AI promises workers. But, as with any technological change, some workers are not on board with the C-suite’s AI plans.

Winning over these laggards and late majority adopters will make or break your AI transformation, and it could hinge on getting them to experience the benefits and impact AI can have in their role firsthand.

Combating low AI adoption

Research paints a depressing picture of AI adoption today. Only 7% of the U.S. public uses generative AI (GenAI) tools like ChatGPT daily — falling to 2% in the UK and France, and 1% in Japan. That’s despite 70% of the public being aware of popular generative AI tools.

There may be many reasons why employees disengage from AI transformations in their organizations, ranging from not knowing how to integrate the tool into their role to fearing being automated out of a job. Providing opportunities to practice and experiment with AI technology can help overcome these obstacles, as long as such opportunities don’t disrupt business continuity or expose the organization to risk.

You need AI upskilling that mimics what someone will use the tool for in their everyday lives, in a safe environment where they can make mistakes and not compromise data or operations.

AI training for all

Current AI training programs often focus on the highly technical aspects of building and deploying machine learning models. In doing so, they target a small, specialized employee group who’ll be implementing and overseeing AI every day. These employees are likely already on board with your AI strategy, and their upskilling focuses on equipping them with the skill mastery needed for AI execution and governance.

However, for widespread AI adoption, a different upskilling program is needed. All employees must understand how AI will impact their specific roles and trust the technology. This requires foundational AI and data skills taught through practical, hands-on experiences rather than solely theoretical courses (such as videos and learning pathways). Practical application makes AI real, and hopefully exciting, for this employee group.

Hands-on training includes on-the-job learning and shadowing (like the “see one, do one” methodology in medical school), simulations and virtual IT labs, apprenticeships, hackathons, and temporary redeployments. You may decide on a range of different hands-on training methods; the right mix will ultimately depend on the skill levels required, the size of your cohort, subject matter expert availability, and resources.

A marketing team that requires training in an LLM-powered content engine may be better off learning directly in the tool itself (or a close replication of it, separate from the live environment). A technical support agent who needs to understand new AI features may do better with a redeployment into the product team. Larger groups of employees may require scalable hands-on training in the form of labs and simulations, as these can be deployed globally and accessed on demand.

Training by adoption group

All technology adoption follows a bell curve: a few employees lead the way, discovering and innovating with the new technology. As they share their findings, they’ll talk about it with their peers and begin getting others on board — reaching the early majority group of adopters.

At this stage, you’d expect to hit around 50% of your workforce. The late majority follows once they’ve seen that technology has been tested and vetted by their peers. Then, you have the “laggards” or skeptics who will avoid a new technology out of fear and concern. In each stage, providing tailored and practical upskilling can help move your AI technology through the curve with speed and efficiency.

  1. Innovators and early adopters will benefit from hands-on experiences that give them the freedom to understand and experiment with technology in a safe space, getting them excited and making them more likely to champion the tool later on.

  2. Early majority employees will adopt a technology once they have proof that it’ll work well for them. Hence, a hands-on experience can give them the confidence they need to adopt a tool in their everyday activities.

  3. Late majority employees want evidence from their peers that the AI is working and that its value offsets the tool’s time and resource costs. Offering practical, interactive experiences can support this group to uncover the benefits of AI for their role. Sharing the results and successes of early majority adopters can also provide critical validation for this group.

  4. The laggards will be the hardest to win over since they are risk (and likely technology) averse. Their cautious approach and concerns can be addressed with hands-on opportunities that expose them to AI technology in a safe environment where they can try, stress-test, fail and learn, and form an educated opinion on an AI solution.

    Since this group only uses the technology they are accustomed to and have formed habits with, you’ll want them to quickly understand the new technology you’re bringing to the organization, see it as necessary, and form new work habits using it.

Freedom to experiment — Training early adopters

The approach you take with innovators and early adopters will differ from the one you take with those who are more skeptical about AI. Half the battle has already been won with this group, as they are bought in and excited about the technology. Indeed, if anything, you’ll have to put guardrails around their experimentation to ensure it doesn’t put your operations and data at risk.

You must strike a balance between encouraging their enthusiasm and innovation while protecting sensitive data and preventing business-critical errors. Providing a safe space to explore a new AI tool, through a simulation, sandbox, or virtual lab environment can give someone the freedom to explore and “break” things without being in a live production setting.

Winning over laggards

Laggard and late majority employees are, at best, hesitant about AI and, at worst, completely disengaged from it. How much time and effort you spend winning this group over will ultimately depend on how critical the AI feature, and their buy-in, are to your business.

For solutions that need to be implemented across the workforce, and where widespread buy-in ensures success, a lot of your time may be dedicated to alleviating laggards’ concerns and encouraging them to explore the new technology.

This will be a cultural and training exercise. You don’t want laggards to feel left behind or unable to voice their concerns. An open-door policy can help with this, giving laggards access to the senior leaders responsible for bringing AI to the organization. Indeed, you will want to create feedback mechanisms that employees can use to ask questions and speak about their worries, as, without this, their disquiet may become watercooler chat and spread unease through the workforce.

Training your late adopters and laggards is essential, as it ensures their delay in adopting an AI tool isn’t due to a lack of skill or know-how. In other words, if you are confident that their skill level means they can practically apply the tool in the workplace, then their resistance stems from a cultural or emotional reason that leadership can address through a transparent one-on-one discussion.

Providing a safe space for a laggard to test an AI tool can go some way in alleviating their concerns. They may even enjoy finding new ways to stress test a feature — and they might break it! But in doing so, you’re getting valuable insights into how robust your proposed solution is.

At the end of their training, don’t expect all laggards to be on board with your plans. Some may come away from the exercise more educated and informed, but still not bought into the AI tool. Yet, because they now have that skill and understanding, their pushback may be all the more relevant for you. Their red flags could be significant if they continue to raise them after training on and stress-testing an AI feature.

Real-life application

Providing opportunities to test, experiment, and fully grasp an AI tool makes for a more equitable and tailored learning experience. Instead of every employee sitting through the same videos, reading the same notes, and taking the same multiple-choice quiz, they can practically experience how an AI tool will work for their specific tasks.

For example, a marketing manager could be guided through practical exercises to use GenAI to build an email marketing campaign for early-stage prospects. They will learn how to prompt the AI model to create engaging email copy, upload audience data to train the model, and critically assess and edit its outputs for the best marketing message.

In doing so, they will experience first-hand how AI could improve their daily life, and this may help to win them over — all while building essential AI and human skills for working alongside the technology.

While completing such practical exercises, the manager could be prompted to answer questions about consent or right to information, which can test their understanding of data governance and privacy as it relates to marketing. A challenge could be unexpectedly introduced (similar to Netflix’s Chaos Monkey) to test the manager’s knowledge in the heat of the moment.

For instance, a customer might complain about their data being used for mass communications, or bias might cause unfair targeting of a certain demographic.

Giving agency

Taking a level-based, practical approach to AI upskilling can help someone close the confidence and trust gap when AI inevitably comes to their workplace. It gives them the much-needed agency to use AI and to advocate for themselves if they feel AI use is misaligned with their values, company goals, or responsible use.

Make no mistake, skeptical laggards are of value to your organization, providing a counterweight to rushing in headfirst with the latest buzzword. But for them to play Devil’s Advocate, they need the right skills and knowledge in the first place.

More importantly, they need to understand when an innovation is the right step for their organization and roles. If they do continue to voice concerns about your organization’s use of AI, you can then delve deeper to understand whether misalignment is happening.

About the author:

Danny Abdo is Chief Operations Officer at Skillable, focused on commercializing the revenue, product and marketing functions.

Prior to this, Abdo was the Senior Vice President of Global Business Solutions at Degreed, focused on strategizing the learning and HR solutions and technologies. Before Degreed, he served four years as Senior Vice President in Bank of America’s global learning organization, leading a number of teams, functions and strategies including vendor management, employee experience and product management for learning.
