Opinion & Analysis
Written by: Pritam Bordoloi
Updated 5:00 PM UTC, Tue November 18, 2025

No organization in the world pushes the boundaries of human curiosity while blending science with cutting-edge technology like the National Aeronautics and Space Administration (NASA). From sending humans to the Moon to exploring the far reaches of the solar system, NASA operates at a scale few can fathom — and at the heart of it all is data.
Every day, the agency generates terabytes of information, transforming raw numbers into insights that guide exploration, protect Earth, and expand human knowledge.
In an exclusive interview, David Salvagnini, NASA’s Chief Data Officer (CDO) and Chief Artificial Intelligence Officer (CAIO), tells CDO Magazine that it is impossible for one person to be in charge of all the data at NASA, given the vast amount it generates.
To tackle this, NASA has adopted a federated framework for data management. Responsibility is shared across missions and centers, ensuring data is governed, secured, and used appropriately, with practices tailored to each mission’s context and specific needs.
According to Salvagnini, at NASA, data forms the foundation of every AI initiative. He emphasizes that preparing, aggregating, and understanding data in terms of its quality, context, and lineage is often more challenging than AI itself, and integrating data and AI governance is key to meaningful outcomes.
“AI needs data more than data needs AI,” he says, highlighting the discipline and rigor behind responsible AI adoption. From managing terabytes of information to exploring generative AI (GenAI) applications, Salvagnini offers a rare glimpse into how NASA turns data into discovery.
Q: NASA generates enormous volumes of data every day. How do you approach managing it across the agency?
I’m not the sole owner of all data across the agency, nor do I make every decision regarding it. Instead, we’ve built a federated framework for data management that allows us to appoint people in key roles across NASA’s missions and centers, ensuring data is managed close to where it is created and used. Through governance and outreach, we work to establish consistency where it makes sense.
For example, NASA’s science mission has an enormous public-facing role, with vast amounts of open data supporting researchers and citizen scientists around the world. To manage this, we have a Chief Science Data Office and senior officials responsible for their respective mission areas. You can think of them almost like ‘associate CDOs,’ though we call them Senior Data Officials.
Beneath them, we have Data Stewards, who oversee day-to-day data management, and Data Custodians, who handle the technical aspects — like ensuring secure data transfers between systems, applying policies, or transforming data for downstream use.
The reality is, NASA is far too large and diverse for one person to centrally manage all data. Sharing responsibility through this federated approach is the only effective way. And critically, anyone in a data role at NASA must understand the mission they support. Managing HR data is very different from managing science data, and very different again from handling sensitive program data. Mission context drives how data should be governed, secured, and shared.
Q: Can you elaborate on what falls directly under your purview, and some of the work you and your team are focused on right now?
As CDAO, my statutory responsibilities are defined in the Evidence Act and the Geospatial Data Act. I’m accountable for open data, geospatial data, and for reporting to OMB (Office of Management and Budget) on guidance that comes from the administration or other oversight bodies.
My role is to ensure the right mechanisms, partnerships, and federated delivery models are in place. I sit within the CIO organization administratively, but my responsibilities are distinct.
One area I directly own is the Enterprise Data Platform (EDP) — a suite of common services we provide across NASA so every team doesn’t have to reinvent the wheel. The EDP supports the ingestion, conditioning, tagging, securing, analyzing, and visualizing of data — a full-stack capability that enables insights from aggregated sources.
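A platform like that can be pictured as a sequence of stages each record passes through. The sketch below is a minimal, hypothetical illustration of an ingest–condition–tag pipeline in Python; all function names, fields, and labels are illustrative assumptions, not NASA's actual EDP implementation.

```python
# Hypothetical sketch of a full-stack data pipeline with ingestion,
# conditioning, and tagging stages. Names and fields are illustrative
# assumptions only, not NASA's Enterprise Data Platform.

def ingest(raw: str) -> dict:
    """Parse a raw CSV-style line into a record."""
    name, value = raw.split(",")
    return {"name": name.strip(), "value": float(value)}

def condition(record: dict) -> dict:
    """Normalize fields so downstream tools see consistent data."""
    record["name"] = record["name"].lower()
    return record

def tag(record: dict, mission: str) -> dict:
    """Attach governance metadata: mission context and sensitivity."""
    record["mission"] = mission
    record["sensitivity"] = "public"
    return record

def run_pipeline(raw_lines, mission):
    """Ingest -> condition -> tag, yielding catalog-ready records."""
    return [tag(condition(ingest(line)), mission) for line in raw_lines]

records = run_pipeline(["Temperature, 21.5"], mission="science")
print(records[0])  # {'name': 'temperature', 'value': 21.5, 'mission': 'science', 'sensitivity': 'public'}
```

The point of the sketch is the shape, not the specifics: each stage is a small, composable step, and the tagging stage is where governance metadata enters the record so cataloging and access decisions can be made consistently downstream.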
Some mission areas, like science, have strong internal resources to build their own data capabilities; others don’t. The EDP ensures that all parts of NASA, regardless of size or resources, can access robust tools for managing and leveraging data.
Another key responsibility is our data cataloging initiative, which I oversee as a functional owner. That means I define the business requirements and guide its evolution, while technical delivery sits with the CIO organization. Cataloging is vital for ensuring datasets are discoverable and consistently managed.
While NASA centers may develop their own data solutions, the goal of the EDP is to bring as many as possible into a unified environment. That allows us to enforce consistency in areas like cataloging and governance, while still giving mission areas the flexibility they need.
Q: Could you help us better understand how the Enterprise Data Platform works?
We’ve integrated a mix of commercial off-the-shelf tools with Amazon’s cloud environment. For example, many organizations use visualization tools like Power BI or Tableau — NASA leverages Tableau as part of this platform. But the EDP is much more than just a stack of tools.
What makes it powerful is that it’s supported by a full set of services. We have teams who work with mission leaders and customers to understand their needs, advise them, and then help integrate those needs into the environment. Technology by itself doesn’t solve problems — people, process, and technology must work together.
That means creating awareness of what good data management looks like, showing leaders what’s possible when you aggregate different data sources, and ensuring the right processes are in place so users don’t have to figure everything out on their own. With the EDP, we’ve combined the tools, the processes, and the services to help customers get real value and insight from NASA’s data.
Q: How does combining the CDO and CAIO roles change the way you connect data governance with AI innovation?
As a CDO, I see data as central to any AI initiative. While AI models and implementation are important, the foundation is always the aggregation, conditioning, and preparation of data so it can be used effectively. That involves understanding not only the source and reliability of the data but also its context, lineage, and provenance — traditional data discipline activities that become even more critical for AI. A quote I often reference is, “AI needs data more than data needs AI,” which I think nicely captures this relationship.
Often, the AI aspects of a project are less challenging than the data challenges — quality, labeling, and readiness of the data are the critical path for generating meaningful outcomes. Regarding governance, data governance focuses on managing and onboarding data, while AI governance evaluates initiatives, risks, and responsible use, including privacy and security concerns. At NASA, we keep these governance functions distinct, but having a single executive serving as both CDO and CAIO helps integrate the two effectively.
Q: NASA operates on a scale and complexity far beyond most enterprises or even other federal agencies. When it comes to digital transformation and AI adoption, do you face the same challenges as others — or are NASA’s obstacles fundamentally different? If yes, in what way?
In many ways, NASA’s challenges mirror those faced by other organizations, but there are important differences as well. On the similarities side, every organization must navigate cultural differences across a multi-generational workforce. Some employees are comfortable with AI and its applications, while others are more skeptical. As a change agent, my role is to reassure people that AI is not a replacement for humans but rather a tool to augment their capabilities and make them more effective.
When it comes to enterprise rollout of large language models and GenAI, our challenges are similar to those in other organizations. We face data-related challenges as well — ensuring data is properly conditioned for AI use and that the sources AI consults to generate responses are reliable.
NASA differs in the environments where we operate. We work in highly specialized conditions, which drives unique AI use cases — what I often call “embedded AI.” A well-known example is the Mars Perseverance Rover, where AI systems are embedded to operate autonomously at the edge. Unlike most organizations, we don’t rely on commercial cloud infrastructure in space; all compute, storage, and AI capabilities must exist within the spacecraft itself. These vehicles also operate under strict constraints — weight limits, power budgets, and environmental tolerances — which create additional challenges.
Furthermore, NASA’s scientific community engages in highly specialized research and has developed foundation models tailored to diverse scientific disciplines. These efforts often involve massive datasets collected over decades, addressing unique challenges in fields like heliophysics and space weather. These discipline-specific requirements make our AI applications and deployments distinct from what you’d find in other organizations.
Q: NASA’s stakeholders range from scientists and engineers to academia, commercial partners, volunteers, and the general public. How do you approach data sharing while ensuring trust and security?
It’s a complex challenge. In some cases, like our vast collections of open science data, the answer is simple: share everything publicly. In other cases, it’s much more nuanced. Earlier in my career, I worked in the intelligence community, where the default was “secure by default, allow as mission necessitates.” At NASA, we face both extremes.
Take our partnerships with companies like Boeing, SpaceX, or Intuitive Machines. Program data tied to their intellectual property has to be protected under contractual terms and conditions. That requires us to manage data in ways consistent with their policies, while also meeting federal compliance requirements.
The key is thinking about data protection at the very start of a program. That’s where data management plans come in — programs should consider questions such as: Who will need access to this data? Will foreign partners, like those under the Artemis Accords, be involved? What parts of the data are proprietary to vendors? Addressing these issues early is far more effective than trying to fix them later.
Legacy programs like the International Space Station highlight this. When the ISS was being developed 30+ years ago, not every nuance of data access could have been anticipated. Today, with new commercial space stations on the horizon, we’re working with private vendors who also need access to ISS data. Handling those requests requires careful, methodical processes to protect all parties involved.
In short, data sharing at NASA is about balancing openness with responsibility — supporting science and collaboration while safeguarding sensitive, proprietary, or contractual data.
Q: How is the agency exploring GenAI, from open-source collaborations to mission applications, and what role do you see it playing at NASA?
First, it’s important to recognize that NASA has been working with AI for decades. A great example is the Perseverance rover on Mars — it uses AI for navigation, computer vision, and even during its landing sequence, when there was significant latency in communications between Mars and Earth. The rover needed a degree of autonomy to land safely, and AI played a key role in that.
Beyond missions, AI is deeply embedded across NASA’s science and engineering efforts. We’ve used AI to analyze data from past missions and uncover discoveries that weren’t identified when those missions were active.
In material sciences, we’re using AI to create alloys that can withstand the extreme conditions of space. We also have initiatives like “Text to Structure,” where engineers use natural language prompts to design structural components — essentially describing what they need, and AI generates the designs.
AI is even helping medical officers on long-duration space flights diagnose potential health conditions through programs like the Crew Medical Officer Decision Assistant (CMODA).
That said, much of my current focus is on GenAI. Unlike traditional AI or machine learning models used for specific mission tasks, GenAI is more ubiquitous. It’s something employees might use daily — to summarize documents, extract insights from engineering notes, or review lengthy program reports. It introduces new efficiencies, but also new responsibilities.
As both CDO and CAIO, I focus heavily on understanding the data behind the AI — what sources are being used, whether they’re sensitive, and how reliable they are. We encourage employees to always ask, “What sources did this response reference?” and to verify the completeness of those sources. Sometimes, AI outputs may sound convincing but might reflect only a fraction of the available data.
We’re also strengthening our responsible AI practices — ensuring that people understand they remain accountable for the accuracy of their work, even when assisted by AI. And we’re reinforcing data protection measures, such as applying appropriate sensitivity labels in tools like SharePoint or Teams. If AI has access to those repositories, we need to make sure the right data boundaries are in place.
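The data-boundary idea can be made concrete with a small filter: before an AI assistant is allowed to draw on a repository, documents are screened by their sensitivity label. This is a hypothetical sketch under assumed labels and policy, not NASA's actual configuration or any specific product's API.

```python
# Hypothetical sketch: gating a document store by sensitivity label
# before exposing it to a GenAI assistant. The labels and the policy
# set are illustrative assumptions.

ALLOWED_FOR_AI = {"public", "internal"}

def ai_visible(documents):
    """Return only documents whose label permits AI access."""
    return [d for d in documents if d.get("label") in ALLOWED_FOR_AI]

docs = [
    {"title": "Press release", "label": "public"},
    {"title": "Vendor IP report", "label": "proprietary"},
]
print([d["title"] for d in ai_visible(docs)])  # ['Press release']
```

The design choice here is that the boundary is enforced before retrieval rather than after generation: an unlabeled or proprietary document never reaches the model, so a convincing-sounding answer cannot leak it.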
So, NASA’s approach is twofold — leverage GenAI for efficiency and innovation, but do it responsibly, with a clear understanding of data, lineage, provenance, and trust.
Q: Is NASA exploring agentic AI use cases — not just internally, but also in the context of space missions and exploration?
There’s certainly a lot of excitement and curiosity about the potential of GenAI, including agentic AI. It’s something I even highlight in my internal presentations. That said, we’re still in the early stages — exploring use cases, testing feasibility, and thinking carefully about how to introduce these technologies responsibly.
Right now, AI at NASA is more embedded in niche, highly tested systems rather than GenAI agents operating autonomously. In high-risk, mission-critical environments, a human-in-the-loop is essential. Think of commercial aviation — AI might provide insights to the pilot, but it won’t fly the plane alone. Similarly, for NASA, we see GenAI augmenting knowledge workers and supporting decision-making, not replacing humans in critical operations.
The key challenge is that GenAI isn’t deterministic — it can hallucinate, unlike rule-based systems such as flight control computers that behave predictably. That’s why we’re proceeding methodically, ensuring we understand risks like model drift, prompt injection, or data poisoning. Like other organizations, NASA is working to move forward responsibly, balancing innovation with safety and trust.
Q: How do you ensure that AI models operate ethically, transparently, and without unintended bias — especially when decisions can have high-stakes scientific or safety implications?
This is something I discuss often, both inside and outside NASA. NASA has a decades-long history of using AI and machine learning in missions and scientific research. Our engineers follow very rigorous systems engineering and risk management procedures because, in the environments we operate in, there’s often no second chance.
Any new technology, including AI, undergoes strict validation and verification to ensure it performs reliably and responsibly. Software is classified by its criticality — from human spaceflight systems, which require the highest standards, to internal IT applications like employee timecards, which follow lighter testing protocols. AI is treated like any new technology and integrated into these existing procedures to de-risk its implementation.
In scientific applications, there’s an additional layer of rigor. NASA publishes the data, models, algorithms, and methods behind any scientific outcome, allowing the broader scientific community to peer review and replicate results. This ensures that AI-driven insights are validated and trustworthy.
Q: From your career in the Air Force, intelligence community, and NASA, what key learnings have stayed constant when building data and AI teams? And how is NASA different from your previous roles?
What remains constant across any discipline or mission is leadership. No one can do everything themselves; a leader must mobilize a team around common objectives, create clarity, and inspire people to engage with the mission.
Leadership and change management are critical for a CDO or CAIO — leaders guide, inspire, establish vision, and roll up their sleeves to help teams achieve outcomes and overcome challenges.
As for differences, my experience in the intelligence community focused on protecting data first and releasing it on a need-to-know basis. At NASA, much of our mission data and research is publicly shared, with restrictions only on sensitive or proprietary information — a shift I’ve really enjoyed.
Culturally, NASA is research-heavy, unlike the operational focus I experienced in the intelligence community and DoD. Here, I work with brilliant teams in an environment steeped in systems engineering, robust risk management, and scientific research, which has required some adjustment but is incredibly rewarding.
Q: When you talk about your work with kids in your family or with young students, how do you explain what it means to be NASA’s AI chief?
It depends on their age group. I try to help them understand the unique challenges of operating in space and how technology — sometimes AI — solves problems specific to that environment.
On Earth, we take things like the atmosphere, stable temperatures, and easy access to power for granted. In space, we have to manage extreme temperature swings, oxygen levels, radiation, hydration, waste, and human safety.
I describe NASA’s cool solutions: deploying ISS solar panels, handling space debris with lasers or capture devices, and how the Perseverance rover safely landed and explored Mars despite 20-minute communication delays. While AI is increasingly part of our systems, much of what we do is still non-AI, but it all highlights the ingenuity required to operate in space. These stories captivate students and adults alike.
Q: What’s the most surprising or “sci-fi made real” AI project at NASA that you think would wow people outside the agency?
One project that’s very sci-fi is holographic telepresence for telemedicine. NASA projected a doctor on Earth as a 3D hologram aboard the International Space Station, allowing real-time interaction with an astronaut — very much like something out of Star Trek.
Another exciting initiative is Text-to-Structure, where AI designs structural components based on parameters like loads and angles. The resulting organic, skeletal-like designs often exceed human specifications and are completed in roughly 25% less time.