How Should CEOs Think About Generative AI? – GenAI’s Potential and 3 Critical Limitations

A must-read article for CEOs, exploring the factors and considerations they should weigh when deciding how and where to leverage generative AI in their companies, or whether to do so at all.

Generative AI is a phenomenon, but to what end and for what purpose? Will it spell the end of humankind as we know it? Or will generative AI propel us into the future?

The invention of the wheel; travel by horse, train, or airplane; the advent of the printing press, telegraph, and telephone; the introduction of motion pictures, radio, and television; the development of the mainframe computer, the personal computer, and the Internet: each of these inventions revolutionized traditional processes and changed human behavior in its wake.

Saving time and money; increasing speed and efficiency; extending communication or breaking down barriers to it; boosting productivity; and enabling higher-order creativity: these are the benefits and ingredients that propel disruptive change.

  • Will generative AI have a similar transformational impact?

  • Is it going to disrupt business or is it too early to say?

  • What are the potential opportunities and risks?

To keep generative AI from becoming a distraction that wastes precious resources, it is crucial to know how to think about it and how to approach its adoption strategically.

This is the first of a two-part article. This first part explores the potential of generative AI and explains how it works; the second will explore how generative AI can be used to deliver sustainable business value.

Here are some of the factors and considerations that every CEO should weigh when deciding how and where to leverage generative AI in their company, or whether to do so at all.

Generative AI’s potential

Generative AI is a new type of artificial intelligence (AI) that can generate text, code, or images in response to natural-language requests. It is powered by Large Language Models (LLMs), which are AI models trained on massive amounts of text written on the Internet.

ChatGPT is a specific implementation of an LLM. It can be characterized by two distinct capabilities that represent breakthroughs and distinguish generative AI from previous forms of AI:

  • Chat refers to the capability to communicate in natural language. It provides the conversational ability to ask for something, review the results in real time, and continue to iterate in natural language until the answers needed are obtained. For example, a business analyst can ask the AI to generate the first draft of a proposal and, upon review, ask it to find ambiguities or conflicting information in the draft.

    The business analyst can then ask the AI to expand on the analysis. Accomplishing in minutes what would otherwise have taken days or weeks dramatically increases speed and efficiency (a minimal sketch of such an exchange appears after this list).

  • GPT (Generative Pre-trained Transformer) refers to the capability to generate text, software code, or images using a model pre-trained on public content, which can be extended with third-party plug-ins. The output it generates is based on knowledge gained from the data it was trained on.

    For example, a programmer can instruct the AI to generate code for an application that displays products, accepts payment information, and completes a transaction. Work that would typically take weeks of effort can now be completed in a matter of hours.
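
To illustrate, here is a minimal sketch in Python of what such a conversational iteration looks like behind the scenes. It assumes the OpenAI Python client, an API key in the OPENAI_API_KEY environment variable, and an illustrative model name; the exact library, model, and message format will vary by vendor.

    # Minimal sketch of an iterative chat exchange with an LLM.
    # Assumes the OpenAI Python client (pip install openai) and an API key
    # in the OPENAI_API_KEY environment variable; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()
    messages = [
        {"role": "system", "content": "You are a business analyst's drafting assistant."},
        {"role": "user", "content": "Draft a one-page proposal for a customer loyalty program."},
    ]

    # First pass: ask for the initial draft.
    draft = client.chat.completions.create(model="gpt-4", messages=messages)
    messages.append({"role": "assistant", "content": draft.choices[0].message.content})

    # Second pass: iterate on the same conversation in natural language.
    messages.append({
        "role": "user",
        "content": "Review the draft above and list any ambiguities or conflicting statements.",
    })
    review = client.chat.completions.create(model="gpt-4", messages=messages)
    print(review.choices[0].message.content)

The point is not the specific code but the workflow: each follow-up request builds on the prior answer, so refinement happens in one continuous conversation rather than in separate handoffs.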

Generative AI’s limitations

While generative AI shares limitations common to any AI model, such as bias and trust, it also has a few that are unique to it:

  1. Outdated information —  Currently, ChatGPT operates on information with a fixed training cutoff. For example, some of the code it generates will not incorporate language features or library functions released since 2021.

    While this can be mitigated by retraining the model on up-to-date data or by using plug-ins, there will always be a lag, because extreme-scale LLMs are too expensive to retrain frequently.

  2. Hallucination —  There are fundamental limitations of LLMs that time may not solve. The most notable is hallucination, a human-sounding label for the erroneous output of a probabilistic algorithm: the AI generates content that appears plausible but is either incorrect or entirely fabricated.

    This occurs because AI doesn't possess true understanding or reasoning capabilities.  It is simply replicating patterns it has observed in the data it was trained on. Hallucinations can lead to credibility issues, potential misinformation, and even legal risks if incorrect information is published or utilized in business decisions.

  3. Context and instruction —  The quality of output is directly correlated with the quality of the input, context, and instructions provided. The business user must be an expert in the task at hand, so that they know what to ask and can evaluate the quality of the answers.

    The business user must also be able to iterate until a satisfactory answer or result is generated. The risk is that output can be shallow or misleading without the user realizing it. Users must be adept both in their domain and in working with generative AI, which can represent a steep learning curve (the sketch below illustrates how much difference context and instructions make).
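
As a concrete, hypothetical illustration of this point, consider the difference between a vague request and one that supplies context, instructions, and data. The company, figures, and model name below are invented for the example, and the sketch again assumes the OpenAI Python client.

    # Sketch contrasting a vague prompt with a context-rich one; all details are hypothetical.
    from openai import OpenAI

    client = OpenAI()

    vague_prompt = "Summarize our sales performance."

    detailed_prompt = (
        "You are analyzing Q3 sales for a mid-market B2B software company. "
        "Using the figures below, summarize performance in three bullet points, "
        "flag any region where revenue fell quarter over quarter, and state any "
        "assumptions you make.\n"
        "Q2 revenue: NA $4.2M, EMEA $3.1M, APAC $1.8M\n"
        "Q3 revenue: NA $4.5M, EMEA $2.6M, APAC $1.9M"
    )

    # The detailed prompt supplies the context and instructions the model needs;
    # the vague one forces it to guess, which invites shallow or misleading answers.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": detailed_prompt}],
    )
    print(response.choices[0].message.content)

The division of labor is the same regardless of vendor or tool: the domain expert supplies the framing and the judgment, and the model supplies the draft.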

In the second part of this article, we will explain how generative AI can be used to deliver business value to an organization. It will be published on 10/24/2023.

Randy Bean is the author of Fail Fast, Learn Faster: Lessons in Data-Driven Leadership in an Age of Disruption, Big Data, and AI, and a contributor to Harvard Business Review, Forbes, and MIT Sloan Management Review. He was the founder and CEO of NewVantage Partners, a strategic advisory firm he launched in 2001 that was acquired by Paris-based global consultancy Wavestone. He now serves as Innovation Fellow, Data Strategy at Wavestone.

Laks Srinivasan is co-founder and managing director of the Return on Artificial Intelligence Institute. He was previously co-COO of Opera Solutions and an Associate with Booz Allen Hamilton. He holds an MBA from The Wharton School and a degree in electrical engineering.  He now serves on the board of Lehigh Valley Public Media.
