AI News Bureau
Written by: CDO Magazine Bureau
Updated 12:30 PM UTC, Wed August 6, 2025
Arvind Balasundaram, Executive Director, Commercial Insights and Analytics at Regeneron, speaks with Clyde Gillard, North American AI GTM Leader at HPE, in a video interview about the future of work with AI, how machines think differently, the case for guarded optimism balanced with realism, and the need for responsible oversight.
Speaking on how AI is rapidly reshaping the nature of work, Balasundaram shares his perspective on how people are relating to AI and the fears surrounding it.
Balasundaram draws on historical context to explain the shifts technology has always brought to the labor market. Referencing Daniel Susskind’s work on technology and jobs, he notes two key forces.
One is the substituting force, wherein technological change makes some jobs obsolete. The other is the complementing force, referring to new jobs that emerge to augment the technology. He explains that every major technological inflection point in history has carried both forces, but with AI agents, the equation itself may be changing.
Balasundaram highlights how generative AI is already altering job structures.
For instance, he says, “If we have a forecasting agent, it can probably align on a decision with a supply chain agent instantaneously, which would take me weeks, maybe months, to align with a human.”
Despite fears of job loss, Balasundaram does not believe AI will fully replace humans. He cautions, however, that humans must rethink how they work alongside machines.
Moving forward, Balasundaram underscores an important insight: AI does not mirror human thinking. He elaborates that machines often solve problems differently from humans. As an example, he refers to the historic moment when the world’s best Go player was defeated by an AI that played a move previously thought unplayable: “The move the machine made was a move that many generations of Go players had written as an edict: you never play.”
Moreover, creativity, once thought to be uniquely human, is already being replicated, says Balasundaram. “We’ve had robots compose music that, in a blind setting, people thought Bach composed.”
Pointing to a more concerning development, he says, “I just read an article… Agents are getting very good at lying. When you put a deliberate constraint in the code for them to restrict their activities, they evade it.”
Delving further, Balasundaram stresses that while humans will remain essential, their role will shift from doing tasks to governing, guiding, and overseeing.
“Yes, we will need humans in the loop, but we also have to be savvy that machines think differently than us. They use different processes than we do, so we shouldn’t always approach our jobs and functions as if machines would do it the same way. They might end up doing it far more efficiently and effectively.”
For professionals overwhelmed by both structured and unstructured data, he affirms, AI’s potential to outperform humans is clear. Pragmatically, he argues that professionals must be willing to “kill” their own function before AI does it for them. According to Balasundaram, the human advantage will lie in judgment, governance, and balancing control with autonomy.
When it comes to AI, Balasundaram describes himself as a “guarded optimist.” He believes optimism is essential for advancing AI, but it must always be accompanied by caution and oversight.
AI often carries a cloud of skepticism — discussions of hallucinations, risks, and unintended consequences dominate the narrative. While Balasundaram acknowledges these risks, he emphasizes that with the right safeguards, the benefits can far outweigh the downsides.
Highlighting AI’s beneficial advances in healthcare, Balasundaram reiterates that these new technological developments help patients sooner, and as someone working in healthcare, this makes him optimistic about AI with appropriate controls.
Balasundaram is clear that optimism cannot mean recklessness. AI development, especially in sensitive fields like healthcare, must remain disciplined and well-governed.
“There are many dimensions to where AI can go but you want to do it in a very regimented way in which you involve appropriate registries, appropriate oversight, and governance, always in touch with your legal and compliance folks, so you’re not compromising either the enterprise or the patients in your endeavor to kind of bring them good things.”
Despite the risks, Balasundaram encourages practitioners to maintain perspective. When conversations about AI turn overly negative, he urges them to remember that humans have limitations, too.
“Humans also have limitations, and we will need help to augment our ability to construct hypotheses, to get into multimodal analytics, to connect to knowledge graphs, and to be able to look at complex data and those interactions.”
Wrapping up, Balasundaram says, “I don’t think guarded development and advancing of some of these AI features is a bad thing.” He emphasizes striking a balance between caution and forward momentum and credits Regeneron’s innovative culture with enabling this balanced approach.
CDO Magazine appreciates Arvind Balasundaram for sharing his insights with our global community.