Data Management
Written by: CDO Magazine Bureau
Updated 2:32 PM UTC, Fri November 7, 2025
As enterprises race to embed generative AI (GenAI) into every corner of the business, data leaders like Lior Solomon, Vice President of Data at Drata, are focused on striking the right balance — moving fast while staying governed. In conversation with Ido Arieli Noga, CEO and Co-Founder of Yuki Data, Solomon unpacks how Drata is leveraging Snowflake, Amazon Bedrock, and Cortex to operationalize AI for go-to-market teams, manage rising data costs, and maintain agility without compromising trust.
Solomon begins by crediting Drata’s success in AI-led GTM initiatives to a strong partnership with the internal data platform team. This team owns the orchestration of Snowflake across the organization, enabling collaboration with GTM departments.
“What I really appreciated in using Snowflake is the fact that instead of us bringing more tools and more vendors, I have a one-stop shop for me, where I can actually ingest into a centralized enterprise data warehouse all the data and be able to do the governance there and the security aspects as well,” Solomon says.
By focusing on keeping the team nimble, Drata avoids bloated tooling while still delivering value at scale. Snowflake’s evolving capabilities make it possible to bring GenAI models into the GTM environment seamlessly.
While GenAI may be the shiny new tool, Solomon makes it clear that foundational work around ingestion and transformation is far from trivial. “We live and die by making sure that all the data has been ingested in a fresh manner into the data warehouse,” he explains.
He describes the team's "bread and butter": synchronizing thousands of MySQL databases from Drata's single-tenant production architecture into the warehouse in near real time. "We do a lot of activities with regard to the CDC pipeline, which is just like driving terabytes of data per day."
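Drata does not describe its CDC tooling in the interview, and managed connectors (Fivetran, Debezium, and the like) are common choices for this kind of pipeline. Purely as a sketch of the pattern Solomon describes, the hypothetical Python snippet below uses the python-mysql-replication library to tail a MySQL binlog and write row-level change events as newline-delimited JSON that a downstream job could stage and COPY INTO Snowflake; the hosts, credentials, and file paths are placeholders.

```python
# Hypothetical CDC sketch: stream MySQL binlog changes to NDJSON files that a
# downstream job could PUT to a Snowflake stage and COPY INTO a landing table.
# Connection settings and paths are illustrative, not Drata's setup.
import json
from datetime import datetime, timezone

from pymysqlreplication import BinLogStreamReader
from pymysqlreplication.row_event import (
    DeleteRowsEvent,
    UpdateRowsEvent,
    WriteRowsEvent,
)

MYSQL_SETTINGS = {"host": "tenant-db.internal", "port": 3306,
                  "user": "cdc_reader", "passwd": "***"}


def stream_changes(out_path: str) -> None:
    """Tail the binlog and append one JSON record per row-level change."""
    stream = BinLogStreamReader(
        connection_settings=MYSQL_SETTINGS,
        server_id=4001,  # must be unique per replication client
        only_events=[WriteRowsEvent, UpdateRowsEvent, DeleteRowsEvent],
        resume_stream=True,
        blocking=True,
    )
    with open(out_path, "a", encoding="utf-8") as out:
        for event in stream:
            for row in event.rows:
                # Updates expose before/after images; inserts and deletes
                # expose a single "values" image.
                values = row.get("after_values", row.get("values"))
                out.write(json.dumps({
                    "schema": event.schema,
                    "table": event.table,
                    "op": type(event).__name__,
                    "captured_at": datetime.now(timezone.utc).isoformat(),
                    "data": values,
                }, default=str) + "\n")


if __name__ == "__main__":
    stream_changes("/tmp/cdc_events.ndjson")
```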
But the data team isn’t working in isolation. GTM executives return from conferences excited about GenAI. “Some of them experiment with it, and they have expectations like, ‘Oh, it’s so easy. I can start extracting insights.’” Solomon points out that the real challenge is scaling these efforts securely and with the right governance structures in place.
The surge in AI experimentation is driving a shift in skill expectations, Solomon shares. The transition from traditional data engineering to AI facilitation demands a different mindset — one that embraces rapid iteration while still ensuring enterprise-grade safety and cost control.
On the issue of cost governance, Solomon emphasizes a phased approach. “We introduced this new framework — crawl, walk, run, sprint — where we want to try and fail as quickly as possible with our stakeholders.”
Rather than building full-fledged pipelines from day one, the team prioritizes quick feedback loops, using sandboxes, cloud notebooks, or Streamlit apps to test hypotheses.
Once business impact is validated, the team gradually introduces cost tracking, governance, and scalability. If a stakeholder’s hypothesis lacks merit, there is no point in building complex data pipelines, governance frameworks, or cost-tracking systems. This shift in mindset, he explains, is something many data teams are grappling with today.
Traditionally, data teams were trained to build scalable, robust pipelines from day one, an approach that demands significant upfront effort and often leads to cost inefficiencies and delays. Solomon is pushing back against the common criticism that "data teams are too slow" by flipping the script: with the rise of GenAI tools, teams can quickly prototype using lightweight scripts or Streamlit apps.
He advocates for shared accountability, inviting stakeholders to test ideas rapidly in a sandbox environment. If the idea proves valuable, only then does the team layer in cost governance, observability, and scalability.
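The interview doesn't show what those sandbox prototypes look like. As one hypothetical shape for them, the sketch below is a small Streamlit app that lets a GTM stakeholder try a prompt against Snowflake Cortex before any pipeline work begins; the connection parameters, warehouse, and model name are illustrative assumptions, not Drata's setup.

```python
# Hypothetical sandbox app: let a stakeholder test a prompt against
# Snowflake Cortex before any production pipeline is built.
# Run with: streamlit run sandbox_app.py
import snowflake.connector
import streamlit as st

st.title("GTM GenAI sandbox (prototype)")

prompt = st.text_area("Prompt", "Summarize this account's open risks:")
context = st.text_area("Paste the account notes to analyze", "")

if st.button("Run"):
    # Credentials and object names are placeholders.
    conn = snowflake.connector.connect(
        account="my_account", user="sandbox_user", password="***",
        warehouse="SANDBOX_WH", database="SANDBOX", schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        # SNOWFLAKE.CORTEX.COMPLETE(model, prompt) runs a hosted LLM inside
        # Snowflake, so the data stays in the governed environment.
        cur.execute(
            "SELECT SNOWFLAKE.CORTEX.COMPLETE('llama3-8b', %s)",
            (f"{prompt}\n\n{context}",),
        )
        st.write(cur.fetchone()[0])
    finally:
        conn.close()
```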
From a cost management perspective, Drata monitors activity at the data warehouse level, sometimes even creating dedicated warehouses for high-impact initiatives. While this makes cost tracking easier, it can be inefficient if those warehouses sit underutilized, so Solomon's team continues to explore strategies that balance agility with cost-conscious execution.
When it comes to Snowflake architecture, Drata takes a functional approach, says Solomon.
The approach depends on the scale of the initiative. Currently, Drata typically assigns a separate data warehouse to each department, structured under broader functional roles, which lets the team track activity levels by department. However, this setup doesn't always provide complete visibility, especially with tools like dbt, where a single large transformation job runs nightly.
In such cases, it becomes more challenging to trace back to specific upstream models or departmental dependencies, says Solomon.
To address this, the team uses tagging on resources to help identify and monitor usage patterns. Additionally, they have recently started analyzing the compute costs related to large language models, specifically looking at token usage.
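As an illustration of how tag-based tracking can be turned into a departmental cost view (not Drata's actual implementation), the sketch below assumes a DEPARTMENT tag applied to each warehouse and rolls up 30 days of credit consumption from Snowflake's standard ACCOUNT_USAGE views.

```python
# Hypothetical cost roll-up: credits consumed per department over 30 days,
# assuming each warehouse carries a DEPARTMENT tag. Connection details and
# the role name are placeholders.
import snowflake.connector

QUERY = """
SELECT
    t.tag_value                    AS department,
    ROUND(SUM(m.credits_used), 2)  AS credits_30d
FROM snowflake.account_usage.warehouse_metering_history AS m
JOIN snowflake.account_usage.tag_references AS t
  ON t.domain = 'WAREHOUSE'
 AND t.object_name = m.warehouse_name
 AND t.tag_name = 'DEPARTMENT'
WHERE m.start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY 1
ORDER BY credits_30d DESC
"""


def credits_by_department() -> list[tuple[str, float]]:
    """Return (department, credits) pairs for the trailing 30 days."""
    conn = snowflake.connector.connect(
        account="my_account", user="finops_user", password="***",
        warehouse="ADMIN_WH", role="FINOPS_ROLE",
    )
    try:
        return conn.cursor().execute(QUERY).fetchall()
    finally:
        conn.close()


if __name__ == "__main__":
    for department, credits in credits_by_department():
        print(f"{department}: {credits} credits")
```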
On the LLM side, one method involves a Streamlit app that estimates the number of tokens a query would use and assigns a rough price to that computation, shares Solomon. This cost is then shared directly with the stakeholder, who must decide whether the potential value justifies spending, say, $20, $50, or $100.
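The interview doesn't reveal the estimator's internals, so the following sketch is purely illustrative: it applies a rough characters-per-token heuristic and a placeholder price per million tokens to surface an order-of-magnitude cost before a prompt is run at scale.

```python
# Hypothetical pre-flight cost estimate for an LLM workload. The
# 4-characters-per-token heuristic and the per-token price are placeholder
# assumptions; real pricing depends on the model and platform.
import streamlit as st

CHARS_PER_TOKEN = 4          # rough heuristic, not an exact tokenizer
PRICE_PER_M_TOKENS = 3.00    # illustrative blended price, USD per 1M tokens


def estimate_cost(prompt: str, rows: int, output_tokens_per_row: int) -> float:
    """Estimate the cost of running one prompt template across N rows."""
    prompt_tokens = len(prompt) / CHARS_PER_TOKEN
    total_tokens = rows * (prompt_tokens + output_tokens_per_row)
    return total_tokens / 1_000_000 * PRICE_PER_M_TOKENS


st.title("LLM cost estimator (prototype)")
prompt = st.text_area("Prompt template",
                      "Classify the churn risk of this account: ...")
rows = st.number_input("Rows to process", min_value=1, value=10_000)
out_tokens = st.number_input("Expected output tokens per row",
                             min_value=1, value=100)

cost = estimate_cost(prompt, int(rows), int(out_tokens))
st.metric("Estimated cost", f"${cost:,.2f}")
```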
This practice encourages awareness and accountability around AI-related costs. These tools and methods currently support Drata’s efforts to balance innovation with operational control, he concludes.
CDO Magazine appreciates Lior Solomon for sharing his insights with our global community.