
Why Ticket Systems Can’t Survive the AI Era — and How Data Provisioning Solves the Access Crisis


Written by: Matthew Carroll | Co-founder and CEO at Immuta

Updated 2:00 PM UTC, Fri October 10, 2025


We are entering a new era in how enterprises access and use data. In my work with Chief Data Officers (CDOs) and data leaders, I’ve seen firsthand the pressure they face as they deploy proprietary agents and manage an exponential rise in access requests. Two fundamental shifts are reshaping the landscape:

  1. Data for everyone: The democratization of data access through generative AI (GenAI).
  2. The rise of agents: The rapid and widespread adoption of non-human identities, or agents.

Together, these forces are redefining how CDOs must think about provisioning data at scale. In this new world, provisioning isn’t just an operational concern—it is the strategy. It is what unlocks AI, enables agents to be adopted safely, and empowers data consumers to generate insights at scale.

Shift one: Data for everyone

For decades, only a small population of technologists — perhaps a few thousand in a large enterprise — had the expertise to query databases, warehouses, and cloud storage. GenAI has changed that dynamic.

Any employee, regardless of their technical skill, can now directly interact with data through natural language interfaces. This means tens of thousands of employees are suddenly becoming active data consumers. The sheer volume of additional access demands will test existing systems and fundamentally change how access must be provisioned.

Shift two: The rise of agents

The second shift is even more dramatic. That’s because employees are not just consuming data themselves — they are increasingly deploying agents to automate repetitive tasks.

These agents, or non-human identities, request and use data on behalf of their human owners. While data scientists may continue to build advanced models, it is the broader base of business users who will drive explosive growth in agents. In addition to tens of thousands of humans seeking data, enterprises will soon face millions of agents doing so — often in real time. Because at the end of the day, agents weren’t designed to wait in line.

The breaking point of ticket systems

Traditional models of data access — based on ticketing, IT queues, and manual approvals — cannot handle this scale. The reality is already visible inside some of the world’s largest enterprises. At a top-five global automotive manufacturer, more than 100 data stewards each average over 100 access tickets every week. At a top-ten pharmaceutical company, 20,000 employees generated over 200,000 access tickets annually.

From the CDO’s perspective, this is an impossible equation. They know that IT cannot keep pace, that governance teams will eventually miss something critical, and that business users will grow frustrated with delays. And yet, they also know that loosening controls creates regulatory and security risks they can’t afford.

That is the status quo before every employee becomes a data consumer and before they begin unleashing agents to automate tasks on their behalf. Add those dynamics, and the model collapses. Layers of management, governance, and IT working through tickets become a choke point — and it breaks fast.

The enterprise needs a fundamentally new approach: Data Provisioning.

Redefining Data Provisioning

Data provisioning is the systematic ability to provide data access — whether to humans or non-human identities — at scale, securely, and with governance embedded. It’s broader than what analysts once called “data access management,” and it rests on two complementary pillars:

1. Provisioning by policy: Attribute-based access controls (ABAC) and fine-grained policies can grant near-instant access to data aligned with a user’s role, organization, or location.

For example, a finance team member might automatically gain access to reporting data, but not raw transaction data. Policies separate access logic from the underlying data platform, allowing access decisions to be made at machine speed (see the sketch following the second pillar).

2. Provisioning by request: Not every case can be anticipated by policy. With thousands of employees and agents exploring new combinations of data, ad hoc requests will always emerge. Here, semantic layers become critical.

These semantic layers enable users and agents to understand what data exists, what it contains, and what other data might be useful alongside it. Request workflows must support policy exceptions, route approvals in parallel, and apply classification frameworks to balance risk and speed.
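To make the two pillars concrete, here is a minimal sketch of how a provisioning decision might flow: a policy check first, with a fallback to a risk-routed, time-bound request when no policy matches. The attribute names, risk tiers, approver roles, and 30-day expiry are illustrative assumptions, not a description of any specific product.

```python
# Hypothetical sketch of the two pillars. Attribute names, risk tiers, approver
# roles, and durations are illustrative assumptions, not a real product API.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class Identity:
    id: str                  # human user or non-human identity (agent)
    department: str
    is_agent: bool = False

@dataclass
class Dataset:
    name: str
    risk: str                # "low" | "medium" | "high"
    tags: set = field(default_factory=set)

def policy_grants(identity: Identity, dataset: Dataset) -> bool:
    """Pillar 1: provisioning by policy -- decided instantly, no ticket in the loop."""
    # e.g. finance staff automatically see reporting data, never raw transactions
    if identity.department == "finance" and "reporting" in dataset.tags:
        return "raw" not in dataset.tags
    return False

# Pillar 2: provisioning by request -- approvals routed in parallel by risk tier.
APPROVERS_BY_RISK = {"low": [], "medium": ["data_steward"],
                     "high": ["data_steward", "privacy_officer"]}
EXCEPTION_TTL = timedelta(days=30)   # exceptions expire and must be recertified

def provision(identity: Identity, dataset: Dataset, justification: str) -> dict:
    """Try the policy fast path; otherwise open a time-bound, risk-routed request."""
    if policy_grants(identity, dataset):
        return {"status": "granted_by_policy"}
    approvers = set(APPROVERS_BY_RISK[dataset.risk])
    return {
        "status": "granted" if not approvers else "pending",
        "pending_approvals": approvers,
        "expires_at": datetime.now(timezone.utc) + EXCEPTION_TTL,
        "justification": justification,
    }

analyst = Identity("u123", department="finance")
print(provision(analyst, Dataset("monthly_reporting", "low", {"finance", "reporting"}),
                "standard reporting"))
print(provision(analyst, Dataset("card_transactions", "high", {"finance", "raw", "pii"}),
                "fraud investigation"))
```

The design point this sketch illustrates is that the fast path (policy) and the slower path (request) share the same identity attributes and data classifications, so human and agent requests flow through one pipeline rather than separate ticket queues.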

The CDO’s imperative: Build a data provisioning framework

For CDOs, preparing for the era of GenAI and agents isn’t optional. It requires a deliberate data provisioning framework — and it must be grounded in measurable outcomes. The CDO’s job is to move from anecdotes (“we spend too much time on access”) to hard, operational metrics.

The metrics that matter

CDOs should start by baselining and tracking:

  • Access volume metrics: Total number of access requests per month/year, and growth rate (human requests vs. agent requests).
  • Effort and workflow metrics: Average number of approvers per request, average time to decision, average hours spent per year on approvals, percentage of requests fulfilled via automation vs. manual workflow.
  • Risk and compliance metrics: Percentage of requests aligned with low-/medium-/high-risk categories, average time to revoke/recertify access, number of policy exceptions per quarter, and percentage of requests reviewed for compliance.

The target is not just fewer tickets, but faster, safer, and more automated decisions.
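As a rough illustration of baselining, several of these metrics can be computed directly from an exported request log. The field names below are assumptions about what a ticketing or provisioning system might record, not a specific schema.

```python
# Hypothetical baselining sketch: compute a few of the metrics above from a
# request log. Field names are assumed, and the sample data is a toy example.
from statistics import mean

requests = [  # one month's access requests (toy sample)
    {"requester_type": "human", "hours_to_decision": 72.0, "automated": False, "risk": "high"},
    {"requester_type": "agent", "hours_to_decision": 0.1,  "automated": True,  "risk": "low"},
    {"requester_type": "agent", "hours_to_decision": 0.2,  "automated": True,  "risk": "low"},
]

human_requests = sum(r["requester_type"] == "human" for r in requests)
agent_requests = sum(r["requester_type"] == "agent" for r in requests)
automation_rate = sum(r["automated"] for r in requests) / len(requests)
avg_hours_to_decision = mean(r["hours_to_decision"] for r in requests)

print(f"human={human_requests} agent={agent_requests} "
      f"automated={automation_rate:.0%} avg_decision_hours={avg_hours_to_decision:.1f}")
```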

[Image: The top 5 metrics for CDOs]

Core elements of the framework

  1. Classification frameworks: Every dataset must be classified against a risk model (e.g., public, internal, confidential, regulated). Provisioning decisions then align to this classification. Low-risk data can be automatically approved; high-risk data follows stricter workflows.
  2. Approval workflows at scale: Traditional serialized ticketing systems cannot support tens of thousands of users or millions of agents. Workflows must be parallelized, risk-based, and routed directly to the right approvers. Think of it as dynamic triage for data access.
  3. Policy-based automation: Attribute-based access controls (ABAC) enable “birthright” access that doesn’t require human intervention.
  4. Request and exception handling: No policy framework can anticipate every need. Semantic-driven request systems allow users and agents to discover relevant data, justify access, and submit exceptions. Approvals should be time-bound (e.g., 30 days), conditional, and logged for audit.
  5. Recertification at scale: Access granted today should not mean access forever. Enterprises need automated recertification cycles aligned with data sensitivity: annual for low-risk data, quarterly for medium-risk data, and monthly or continuous for high-risk and regulated data (a minimal sketch follows this list).
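As referenced in the final item above, recertification cadence can be keyed directly to classification. The tier names and intervals below mirror the cadences suggested in the framework but are illustrative assumptions only.

```python
# Hypothetical sketch: tie recertification cadence to data classification.
# Tier names and intervals are illustrative, mirroring the framework above.
from datetime import datetime, timedelta, timezone

RECERT_CADENCE = {
    "public":       timedelta(days=365),  # annual (low risk)
    "internal":     timedelta(days=365),  # annual (low risk)
    "confidential": timedelta(days=90),   # quarterly (medium risk)
    "regulated":    timedelta(days=30),   # monthly or continuous (high risk)
}

def next_recertification(classification: str, granted_at: datetime) -> datetime:
    """Return when an existing grant must be re-reviewed, renewed, or revoked."""
    return granted_at + RECERT_CADENCE[classification]

granted = datetime(2025, 10, 10, tzinfo=timezone.utc)
for tier in ("internal", "confidential", "regulated"):
    print(tier, next_recertification(tier, granted).date())
```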

Conclusion

The AI era is not just about building better models or unlocking new insights — it is about access. Enterprises that cling to ticket-driven, manual provisioning will quickly hit bottlenecks in scale, compliance, and efficiency. By contrast, organizations that invest in a clear, metric-driven data provisioning framework will unlock a fundamentally different trajectory.

Provisioning is not just an operational necessity — it is the strategy. CDOs who define the right metrics, implement classification frameworks, align workflows to scale, and recertify access intelligently will position their organizations to seamlessly deliver data to both humans and non-human identities.

The payoff is exponential: These enterprises will be able to leverage AI more efficiently, securely, and pervasively than their peers. In short, the winners of this next era of data will be those who recognize that provisioning is the strategy.

About the Author:

Matthew Carroll, Co-founder and CEO of Immuta, is driven by a mission to secure the future of data. Renowned for building and protecting scalable data systems, he has a background in service and innovation within the U.S. federal government, and a passion for data policy and risk management.

Before founding Immuta, Carroll served as an intelligence officer in the U.S. Army, completing tours in Iraq and Afghanistan. He later became CTO of CSC’s Defense Intelligence Group, leading data fusion and analytics programs, and advising the U.S. government on data management.

Working with the U.S. Intelligence Community, he tackled complex data challenges as a forward-deployed engineer, realizing the critical need to make data accessible only to the right people, for the right reasons. This insight led to Immuta’s creation, enabling real-time, secure data analysis for organizations worldwide.

He holds a B.S. in Chemistry, Biochemistry, and Biology from Brandeis University.
