
Funding AI the Right Way: Drata’s Blueprint for Cost Allocation, ROI, and Flexibility


Written by: CDO Magazine Bureau

Updated 1:00 PM UTC, Mon December 1, 2025

As enterprises accelerate their adoption of generative AI (GenAI) and modern data platforms, data leaders face rising pressure to justify investment, balance cost structures, and architect flexible environments that scale with unpredictable use cases. In earlier installments of this series, Lior Solomon, VP of Data at Drata, explored how his team operationalizes AI for go-to-market teams, instills a “crawl-walk-run-sprint” experimentation mindset, and governs complexity without slowing innovation.

In the first installment, Solomon discussed the foundational data ingestion, cost governance, and rapid prototyping frameworks that enable Drata to safely operationalize GenAI across GTM teams. In the second, he unpacked cost optimization philosophies, dbt governance, AI team structures, and how internal prototypes become customer-facing product capabilities.

In this final installment of his conversation with Ido Arieli Noga, CEO and co-founder of Yuki Data, Solomon turns to the realities of funding AI initiatives, proving ROI, adopting fair cost-allocation models, and preparing for a future centered on open table formats like Apache Iceberg.

Internal ops vs. customer-facing AI

Addressing the dual remit of leading internal data operations while simultaneously incubating AI use cases, Solomon explains that these responsibilities are funded from two different financial brackets. Any tooling or infrastructure that powers customer-facing AI lives under the cost of goods sold (COGS), while traditional BI and internal data operations fall under operational expenditure. This separation gives the AI organization room to scale responsibly without cannibalizing internal BI scope, but it also introduces a new layer of accountability.

ROI is difficult — but necessary

Next, Solomon agrees with Arieli’s observation of a familiar industry complaint: internal dashboards rarely receive explicit ROI credit, even when they shape executive decision-making. “It’s hard to see the data people. They’re somewhere in some basement making sure the data comes in at the right time,” he says. And even when the impact is real, “it takes a while until you figure it out and takes a while to get that kind of attribution to the data team.”

His solution is to ensure the data team always owns at least a few projects that tie directly and visibly to customer value or revenue.

Solomon points to Drata’s metric layer, owned by the data platform team and exposed back to customers through Cube.dev: “That project was very much relevant to some new initiative, and it’s fairly easy to tie it up to revenue.”

Owning such initiatives ensures that data’s contribution is unmistakable during annual reviews. “I always make sure there’s at least two or three clear, meaningful, direct attributions to customer activity,” he says, because “it’s not that easy to get the high-fives for all the activities we’re doing.”
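Solomon does not detail Drata’s metric definitions, but a Cube data model of the kind he describes, where the data platform team defines metrics once and exposes them to downstream consumers, might look like this minimal sketch. The cube, table, and column names here are invented for illustration:

```yaml
# Hypothetical Cube data model (YAML); all names are illustrative,
# not Drata's actual schema.
cubes:
  - name: customer_activity
    sql_table: analytics.customer_activity

    measures:
      # Distinct users with at least one event, a typical
      # customer-facing engagement metric.
      - name: active_users
        sql: user_id
        type: count_distinct

    dimensions:
      - name: event_date
        sql: event_date
        type: time
```

Defining the metric once in a layer like this is what makes the attribution Solomon wants possible: the same definition serves internal BI and the customer-facing product.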

Why open table formats are rising

Turning to Apache Iceberg’s explosive rise in enterprise adoption, Arieli asks whether a single company will eventually dominate the open table ecosystem or whether organizations will adopt multiple query engines to run workloads atop shared storage.

Solomon calls this debate “very relevant” — and deeply connected to Drata’s internal cost structure. He explains that 30–40% of Drata’s data warehouse spend stems from CDC ingestion of customer databases — data that often never gets queried.

“We want to have it there just for the sake of engineers and research,” he says, but storing all raw CDC data in Snowflake is expensive. Instead, he has championed moving raw ingestion to a data lake — likely Iceberg on S3 — while reserving Snowflake for gold-layer business metrics.

“Why can’t we just ingest it into a data lake and have an engine on top of it that will let the engineers run the queries they need?” he asks. The shift would dramatically reduce warehouse spending while maintaining observability and research flexibility.
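The economics behind that shift can be made concrete with a back-of-envelope model. Only the 30–40% CDC share comes from the article; every other number below is purely illustrative, not Drata’s actual spend:

```python
# Back-of-envelope model of the shift Solomon describes: move raw CDC
# ingestion out of the warehouse into object storage (e.g. Iceberg on S3),
# keeping only gold-layer metrics in the warehouse. All figures are
# hypothetical.

def warehouse_savings(total_spend, cdc_share, lake_cost_ratio):
    """Estimate monthly savings from moving raw CDC data to a data lake.

    total_spend:     current monthly warehouse bill (e.g. USD)
    cdc_share:       fraction of spend driven by raw CDC ingestion (0-1)
    lake_cost_ratio: lake cost as a fraction of the equivalent warehouse
                     cost for the same data (object storage is cheap)
    """
    cdc_spend = total_spend * cdc_share
    lake_spend = cdc_spend * lake_cost_ratio
    return cdc_spend - lake_spend

# Using the article's 30-40% CDC figure with invented numbers: a
# $100k/month bill where CDC drives 35% of spend, and the lake costs
# roughly 10% of the warehouse equivalent.
saving = warehouse_savings(100_000, 0.35, 0.10)
print(f"Estimated monthly saving: ${saving:,.0f}")
# prints "Estimated monthly saving: $31,500"
```

The exact ratio will vary by workload, but the structure of the argument holds: when a large share of warehouse spend goes to data that is rarely queried, relocating it to cheaper storage with an engine on top captures most of that spend as savings.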

Pick the right engine for the right job

On the future of storage and engines, Solomon sees two opposing forces:

  1. Flexibility: “I like the idea of picking and choosing, and you can bring the right tool to the right problem.”
  2. Consolidation: “I’m also adamant that we need to consolidate some of the vendors. When I’m looking at the list of vendors I have, I’m like, ‘Oh my God, we can consolidate some of them.’”

He expects the future to be driven by a mix of structured and unstructured data use cases, each requiring different query engine behaviors, but also believes enterprises will push to reduce vendor sprawl where possible.

“There’s still going to be a palette of multiple different solutions when it comes to the query engine,” Solomon concludes.

CDO Magazine appreciates Lior Solomon for sharing his insights with our global community.
