Branded Content
Why enterprises must move beyond impersonation and login-based access as AI agents scale
Written by: Matthew Carroll | Co-founder and CEO at Immuta
Updated 3:00 PM UTC, February 4, 2026

Enterprise data leaders are entering a new phase of data consumption.
For years, the challenge was enabling more people to access data safely. Today, that challenge is evolving. Increasingly, it is not people accessing data directly, but AI agents doing so on their behalf.
Whether framed as “chat with your data,” AI copilots, or autonomous analytics agents, these systems are quickly becoming the interface between humans and enterprise data. As adoption grows, a critical question emerges: how do you provision data access to agents at enterprise scale?
For many organizations, the initial answer has been deceptively simple: impersonation.
Impersonation feels intuitive. If an agent is answering a question for a user, why not have it authenticate as that user?
In this model, the agent logs into data systems using the user’s identity. Existing access controls are reused. Policies already defined for humans are applied automatically. On paper, it looks like the path of least resistance.
But impersonation is not a strategy. It is a workaround.
More importantly, it reveals a deeper issue: enterprises are trying to solve an authorization problem using authentication mechanisms.
Authentication answers the question, “Who is logging in?” Authorization answers the question, “What data should be accessible right now, given the context?”
Agent-driven data access exposes the limits of conflating the two.
Impersonation ties access to login.
That means every user an agent might act for must exist everywhere the agent might go: data warehouses, data lakes, analytics platforms, and often SaaS systems as well, even if those users never interact with them directly.
From a data governance and security perspective, this is a regression. The number of identities grows. Permissions become harder to review. The attack surface expands. Users are granted direct system access simply so an agent can function.
For the average data consumer, this level of access is unnecessary and often inappropriate. Yet impersonation makes it unavoidable. This is not a failure of identity systems. It is the result of forcing authentication to do work that belongs to authorization.
In regulated enterprises, knowing who accessed data and why is not optional.
Impersonation collapses an important distinction. When an agent authenticates as a user, every query looks like it came from a human, even when it was generated by an AI system acting on their behalf.
This creates blind spots that matter. Was the query written by a person or generated by an agent? Was it part of an interactive request or an automated workflow? Who is accountable if sensitive data is exposed or misused?
Authentication systems were never designed to answer these questions. As a result, audit and compliance teams lose clarity at exactly the moment agent adoption demands more transparency, not less.
Authentication-based access assumes permissions are decided at login time. Agents do not work that way.
They plan queries dynamically, adapt when access is missing, and reason across datasets. To do this effectively, they often need access to supporting data such as semantic views, lookup tables, or shared reference data.
Impersonation makes this difficult.
If the agent can only see what the user can see, it may lack the context required to answer a question correctly. To compensate, organizations often broaden user permissions simply to keep AI workflows working, introducing unnecessary risk.
The inverse problem also exists. Users may have access to sensitive datasets that the agent should not be allowed to use, even when acting on their behalf.
Impersonation offers no clean way to express these distinctions. Authentication turns access into an all-or-nothing decision.
When agents hit access limits they cannot work around, the result is often failure. Questions go unanswered. Responses are partial or incorrect. Users lose confidence in the system.
From the outside, this looks like an AI quality problem. In reality, it is a data provisioning problem. The agent could not obtain the right data, at the right time, under the right constraints.
As agent adoption grows, these failures compound.
At its core, impersonation is an authentication shortcut. It assumes access decisions can be made once, at login time, and remain valid regardless of context, purpose, or task. That assumption does not hold in an agent-driven world.
What enterprises actually need is the ability to make authorization decisions dynamically, at query time, based on multiple factors: who the agent is, who the agent is acting on behalf of, what data is being requested, and why.
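The decision described above can be sketched as a policy function evaluated per request rather than at login. The entitlement tables and rules below are illustrative assumptions, not a real policy engine:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    agent_id: str       # who the agent is
    on_behalf_of: str   # who the agent is acting for
    dataset: str        # what data is being requested
    purpose: str        # why it is being requested

# Illustrative policy inputs; a real system would load these from
# entitlement and governance services at query time.
USER_ENTITLEMENTS = {"alice": {"sales", "reference_data"}}
AGENT_ALLOWED_DATASETS = {"analytics-copilot": {"sales", "reference_data"}}
DATASET_ALLOWED_PURPOSES = {
    "sales": {"reporting"},
    "reference_data": {"reporting", "support"},
}

def authorize(req: AccessRequest) -> bool:
    """Decide access from all four factors at the moment of the request."""
    user_ok = req.dataset in USER_ENTITLEMENTS.get(req.on_behalf_of, set())
    agent_ok = req.dataset in AGENT_ALLOWED_DATASETS.get(req.agent_id, set())
    purpose_ok = req.purpose in DATASET_ALLOWED_PURPOSES.get(req.dataset, set())
    return user_ok and agent_ok and purpose_ok

# The same user and dataset can yield different decisions by purpose:
print(authorize(AccessRequest("analytics-copilot", "alice", "sales", "reporting")))  # True
print(authorize(AccessRequest("analytics-copilot", "alice", "sales", "marketing")))  # False
```

Because the decision is computed per request, no standing grant has to anticipate every combination of user, agent, dataset, and purpose in advance.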
Authentication establishes identity. Authorization determines access. Impersonation collapses these layers and inherits the limitations of both.
As enterprises adopt agents at scale, a different approach becomes necessary. Instead of having agents impersonate users, agents retain their own identity. When they need data, they request access on behalf of a user.
Access decisions are made dynamically, at the moment of use, based on human entitlements, agent constraints, data governance rules, and the purpose of the request.
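One standards-based way to express "an agent with its own identity, acting on behalf of a user" is OAuth 2.0 Token Exchange (RFC 8693), where the agent presents the user's token as the subject while authenticating as itself. A sketch of the request parameters follows; the client ID, scope, and token values are placeholders:

```python
# Form parameters for an OAuth 2.0 Token Exchange request (RFC 8693).
# The agent keeps its own identity and presents the user's token as the
# subject, so the issued token represents "agent acting on behalf of a
# user" rather than "user logged in". Values below are placeholders.

def token_exchange_params(agent_client_id: str, user_subject_token: str) -> dict:
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "client_id": agent_client_id,        # the agent's own identity
        "subject_token": user_subject_token, # the user the agent acts for
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "scope": "read:sales",               # illustrative scope
    }

params = token_exchange_params("analytics-copilot", "<alice-access-token>")
print(params["grant_type"])
```

The resulting token carries both identities, giving the downstream authorization layer the inputs it needs to decide access in context rather than reusing the user's login.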
This shifts access from being identity-driven to being context-driven. Authorization becomes the primary mechanism for controlling data access, rather than a side effect of authentication.
It also reflects a practical reality: many users may rely on the same agent. Authentication alone cannot model that relationship cleanly. Authorization can.
For the Chief Data Officer (CDO), this shift is not about a single technology decision. It is about building a data provisioning strategy that accounts for agents as first-class data consumers.
As agents become more prevalent, often exceeding the number of human data users, traditional access models begin to strain. Static, login-based approaches were designed for people, not autonomous systems making real-time requests across the data estate.
Addressing this requires partnership. CDOs need to work closely with IAM, security, and platform operations teams to rethink how access decisions are made. In particular, this means separating authentication from authorization and recognizing that impersonation is no longer a sustainable model at scale.
Authentication teams continue to establish trust and identity. Authorization mechanisms must take on the responsibility of deciding what data should be accessible at query time, based on the human user, the agent acting on their behalf, the data’s governance rules, and the purpose of the request.
This is a shared responsibility. Data teams bring policy and governance context. Security and IAM teams bring identity, trust, and operational rigor. Together, they can move away from impersonation and toward an authorization-driven model that supports agentic access without sacrificing control.
For CDOs, the rise of AI agents is not a future scenario to plan around. It is already reshaping how data is accessed across the enterprise.
The first step is reframing the problem. Impersonation and login-based access are not long-term strategies; they are signals of architectural strain. When teams rely on them, they are compensating for the absence of an authorization model that can operate dynamically, at scale, and with context.
From there, CDOs must treat data provisioning as an enterprise capability, not a collection of tactical exceptions. That means working across organizational boundaries: identity teams establish trust, security teams focus on risk and enforcement, and data teams supply policy and governance context. Authorization becomes the connective tissue that allows these functions to work together coherently.
Practically, this means shifting where access decisions are made. Instead of anchoring access at login time, organizations must move toward decisions made at the moment data is requested. That shift enables access to reflect who is asking, who they are acting on behalf of, what data is involved, and why it is being used.
CDOs should also redefine success. Fewer standing permissions. Fewer direct user accounts in data platforms. Clear attribution between humans and agents. Faster access without bypassing governance. These become indicators of a mature, scalable data provisioning strategy.
Just as importantly, CDOs must influence process. Agent access cannot be solved quietly inside individual applications or left to ad hoc integrations. It requires shared patterns, clear ownership, and consistent authorization logic across the enterprise.
This is a leadership moment. AI agents will continue to grow in number, autonomy, and importance. Enterprises that continue to force them into login-based models will find themselves constrained by complexity and risk. Those that deliberately shift toward authorization-driven data provisioning will be better positioned to scale safely and confidently.
For the CDO, this is not simply an access control evolution. It is an opportunity to redefine how data is made available in an agent-driven enterprise, and to lead that change before the architecture is defined by necessity rather than intent.
About the author:
As co-founder and CEO of Immuta, Matthew Carroll is focused on helping organizations deliver governed data access safely and at scale. He is widely recognized for building and protecting scalable data systems, as well as for his service and innovation within the U.S. federal government. Matthew is deeply passionate about data policy and the evolving challenges of risk management.
Before co-founding Immuta, Matthew served as an intelligence officer in the U.S. Army, including tours in Iraq and Afghanistan. Following his military service, he became CTO of CSC’s Defense Intelligence Group, where he led large-scale data fusion and analytics programs and advised the U.S. government on data management and analytics strategy.
Through this work, Matthew came to understand both the power of data and the importance of ensuring it is accessible only to the right people, for the right reasons, and in the right form. The core challenge was enabling that access safely and securely while reducing complexity and risk. This realization led to the creation of Immuta.
Matthew holds a Bachelor of Science degree in Chemistry, Biochemistry, and Biology from Brandeis University.