Scale AI by balancing speed, agility and governance

Summary: Speed is the top reason that I hear for scaling AI, but a direct approach to generating it often has the unintended consequence of reducing both productivity and business value.  To scale AI across many business units and functions, companies need to balance three organizational behaviors that can conflict: speed, agility and control.  In this article I cover the risks of these behaviors, the technology features that enable them, and the organizational structures that balance them.

When I talk to executives about why they want to scale AI in their companies, the overwhelming response I hear is speed.  They want to go faster in product development, finance, marketing, supply chain, HR and every part of their business.  Here are recent quotes from shareholder letters by a few top CEOs:

“The bigger benefit [of the cloud] is increased speed.” Jeff Bezos, Amazon.com 

“We are training our people in machine learning – there simply is no speed fast enough.” Jamie Dimon, JPMorgan Chase 

“The speed of change we see today is... faster than ever.”  James Quincey, Coca-Cola 

“Speed is how we deliver.”  Kasper Rorsted, Adidas 

“Greatly speed up our new focus on software and data.” Herbert Diess, Volkswagen 

“Dramatically improve the speed with which we move new drug candidates forward.”  Robert Bradway, Amgen 

Too often, companies new to AI take a direct approach to generating speed: hire a few expert data scientists, provide them with public cloud accounts, and let them loose to build custom solutions.  We saw this recently at two Fortune 5 companies, and while that approach worked for Google and Facebook, it fails for most businesses.

Risks of speed alone

  • 65–85% of projects never make it to production because:[1]
      - Time to ROI is too long and projects get canceled
      - The data science experts are too detached from the business and don’t solve real, actionable problems
  • The projects that do make it to production create tech debt and operational costs, which erode the experts’ productivity over time.[2]
  • Expert data scientists change jobs frequently, with roughly 25% leaving each year, compounding tech debt for new hires.[3]
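A quick calculation shows why that attrition compounds.  This is a rough sketch using the 25% annual figure above, not a forecast:

```python
# Rough illustration: with 25% annual attrition, the fraction of the
# original team still present after n years is 0.75 ** n.
annual_retention = 0.75

for years in range(1, 5):
    remaining = annual_retention ** years
    print(f"After {years} year(s): {remaining:.0%} of the original team remains")
```

After four years, less than a third of the original team remains, so most of the accumulated tech debt is owned by people who didn’t create it.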
Many companies overreact when they experience such poor AI project success rates and switch their focus to agility.  They try to enable every analyst throughout their organization by providing a cloud database together with some desktop data wrangling and analytics tools: laissez-faire data democratization.  This approach carries its own risks.
 
Risks of agility alone
  • Data silos and data duplication proliferate as separate teams repeatedly solve the same problems
  • Shadow IT and AI teams emerge to support the separate teams and their tech debt
  • Low reuse and collaboration across teams
  • Low upskilling of analysts, which increases attrition[4]
Focusing on technology enablement alone overlooks people and processes.  90% of executives say that people and process are the challenge to becoming data-driven, not tech.[5]  To see why that is, assume that a successful AI project is 50% data preparation, 40% production operations (ops) and 10% machine learning; and that change management is 40% people, 40% process and 20% technology.  Multiply those two together and we see that machine learning is only 2% of the problem we’re trying to solve.[6] (See image 1)
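The arithmetic behind that 2% figure can be sketched in a few lines, using the article’s own (stated, approximate) breakdowns:

```python
# Back-of-the-envelope estimate from the article: machine learning is 10%
# of a successful AI project, and technology is 20% of change management,
# so ML is roughly 10% x 20% = 2% of the overall scaling problem.
project_effort = {"data_preparation": 0.50, "production_ops": 0.40, "machine_learning": 0.10}
change_management = {"people": 0.40, "process": 0.40, "technology": 0.20}

ml_share = project_effort["machine_learning"] * change_management["technology"]
print(f"{ml_share:.0%}")  # prints "2%"
```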
 
Recognizing that the sexy part of AI projects, machine learning, is only 2% of the problem leads some to overreact again and shift their attention to control.  “First we need to get our data under control, and only then can we think about creating value” is a common refrain.  Another cloud migration, new data cleaning initiatives, more data governance boards, and perhaps another MDM program follow.[7]  Overly restrictive data and compute access controls are put in place.  Data scientists cannot even explore datasets without demonstrating a need to know, obtaining executive approval, and then pleading with technology colleagues for prioritization.
 

Risks of excessive control

  • Data silos and data duplication proliferate even further, as data that is valuable when joined gets separated

  • Shadow IT and AI teams emerge.  This is sometimes called the Jurassic Park effect: in the movie, restrictions were put in place to keep the dinosaurs from breeding, but life found a way around them.  Similarly, when IT or AI governance prevents business units from getting their jobs done, the business often finds a way around it.

  • Time to ROI increases as innovation is stifled

  • Data scientists perceive excessive access controls as politics, which is a top reason they switch jobs, so attrition increases.

Enabling speed, agility and control
 
For most businesses that we work with, it’s not a choice between speed, agility or control but rather how much of each behavior.  The ideal mix depends on your AI strategy, AI maturity, current organizational structure, and risk tolerance.  Different technology features enable each behavior and companies can use organizational structure to keep them in balance.  Here’s a summary of the technical features that we’ve seen work best for enablement:
 

Speed

  • AutoML
  • Model testing
  • Model governance
  • Many database and BI connectors
  • Reuse of data pipelines
  • Rapid API deployment
  • Continuous development / Continuous integration

Agility

  • Python, R, Scala and SQL notebooks for programmers
  • Codeless visual development for non-programmers
  • AWS, Azure, GCP and on-prem implementation
  • Collaboration tools such as wikis, chat groups, tags, comments and global search

Control

  • Central data access control
  • Central compute controls
  • Project management
  • Version control
  • Rapid rollback
  • Interpretability
  • Traceability
  • Dataset and model drift monitoring based on business KPIs
  • Multiple levels of training certification
  • A sustainable learning community

 

Balancing speed, agility and control

Organizational structure helps keep these behaviors in the desired mixture by allowing managers to quickly move resources.  The organizational structures for AI development that we see most often are illustrated here.  Rectangles represent business units, business functions, a center of excellence or a center for enablement.  The colored bars show the proportion of AI work and arcs depict collaboration between groups. (See image 2)

Distributed:  little or no collaboration between business units or business functions

Silo of Excellence:  deep AI expertise in one business unit or function, usually marketing or finance 

Center of Excellence:  a central, shared team that does AI work for all business units and functions

Center for Enablement (a.k.a. hub-and-spoke federation):  a central, shared team that handles administrative tasks but doesn’t do AI projects for the business.  Such tasks include infrastructure provisioning and monitoring, software licensing, regulatory compliance, best practices development, training, reusable template development, gamification of collaboration, process improvement, and maintaining a sustainable learning community.

Team of teams (a.k.a. mesh federation):  peer-to-peer sharing between business units and business functions where each is responsible for their own connections and there is no central team

Companies with a high risk tolerance often begin with a distributed structure to explore many options concurrently, while organizations with a low risk tolerance start their AI journey with a silo or center of excellence.  A silo usually has better initial ROI than a center simply because it’s closer to the business.[8]  Tom Davenport, the author of many bestselling business books on AI, often recommends an AI silo of excellence as a first step.  Companies that have scaled AI widely are three times more likely to use a center for enablement than those who haven’t scaled[9], and 90% use interdisciplinary teams on AI projects.[10]  Team of teams might someday become common in large companies in competitive industries since it has the best aspects of each structure and promises to “combine the agility, adaptability, and cohesion of a small team with the power and resources of a giant organization.”[11]  So far, though, it’s the least common organizational structure we see in practice.

In conclusion, most businesses want speed, agility and control.  To scale AI broadly, AI strategists should determine the mix of the three that’s best for their organization.

[1] “Why do 87% of data science projects never make it into production?” Venture Beat, July 19, 2019
[2] “Hidden technical debt in machine learning systems”, D. Sculley et al., Proceedings of the 28th International Conference on Neural Information Processing Systems, 2:2503-11, December 2015. “Everyone wants to do the model work, not the data work: data cascades in high-stakes AI,” N. Sambasivan et al., ACM Conference on Human Factors in Computing Systems, May 2021 
[3] Sources: Quanthub, Burtch Works, LinkedIn, Indeed.com, WiDS, Computer Weekly, Glassdoor, Harham. McKinsey, Dice, CIO 2020 data, https://quanthub.com/data-scientist-shortage-2020/
[4] “Why so many data scientists are leaving their jobs,” Jonny Brooks, KDNuggets, April 2018.  “5 key reasons why data scientists are quitting their jobs,” S. Jain, Analytics Vidhya, December 2019
[5] NewVantage Partners 2020 Big Data and AI Executive Survey, 2020
[6] Some say it’s only 1%: “Getting serious about data and data science,” T. Redman and T. Davenport, MIT Sloan Management Review, September 28, 2020
[7] “Ten red flags signaling your analytics program will fail,” O. Fleming et al., McKinsey & Company, May 14, 2018
[8] Unilever took this approach; see “Building an insights engine: How Unilever got to know its customers,” F. van den Driest, S. Sthanunathan, and K. Weed, Harvard Business Review, September 2016
[9] “Building the AI-powered organization: Technology isn’t the biggest challenge. Culture is.” T. Fountaine et al., Harvard Business Review, July 2019.  GE Aviation, and many other Dataiku customers, have used Centers for Enablement to successfully scale AI; see for example, “How GE Aviation transformed their data processes,” Dataiku conference, Sept. 2019, https://www.youtube.com/watch?v=68Iudi2entE
[10] “AI: built to scale,” Accenture, November 2019.  “Driving ROI through AI,” ESI ThoughtLabs, sponsored research, 2020
[11] Team of Teams: New Rules of Engagement for a Complex World, S. McChrystal, Portfolio, 2015

About the Author: 

Doug Bryan (Doug.Bryan@Dataiku.com) recently joined Dataiku as an AI Strategist to help companies navigate the path from 100 to 1,000 AI practitioners.  His background includes lecturer in computer science at Stanford University, leader of the product recommendations team at Amazon.com that generated $2B in incremental revenue per year, Accenture Labs, Vice President of Analytics at Hearst Communications, and Senior Vice President for Data Science Products at Dentsu International.  He has 25 years of data science experience and has done consulting worldwide in many industries for clients such as Barclays, Nationwide Insurance, Marks & Spencer, L.L.Bean, PNC Bank, Mazda, Wells Fargo, Overstock.com, Motorola, Experian, Discover Financial Services, Rogers Wireless and Carnival Cruises.

CDO Magazine
www.cdomagazine.tech