Opinion & Analysis
Written by: Kristin Lowery | Field Chief Information Security Officer at Optiv
Updated 4:18 PM UTC, May 7, 2026

AI models often reuse data from earlier systems, datasets, or models. Because this data was often collected for different purposes, its provenance, compliance, and privacy must be carefully validated.
While this reuse improves efficiency and streamlines tasks, organizations must anonymize and remove sensitive employee and customer data to meet regulatory and privacy obligations.
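To make this concrete, here is a minimal sketch of one common anonymization technique, keyed pseudonymization, applied before records are reused for training. The field names, the record shape, and the key are all hypothetical; real deployments would pull the key from a secrets manager and cover many more identifier types.

```python
import hashlib
import hmac

# Hypothetical secret key; in practice this would live in a secrets manager
# and be rotated on a schedule.
PSEUDONYM_KEY = b"rotate-me-regularly"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    digest = hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

def scrub_record(record: dict, sensitive_fields: set) -> dict:
    """Pseudonymize sensitive fields; pass everything else through unchanged."""
    return {
        key: pseudonymize(val) if key in sensitive_fields else val
        for key, val in record.items()
    }

# Example: an employee record headed into a training dataset.
record = {"email": "jane@example.com", "tenure_years": 4}
clean = scrub_record(record, {"email"})
```

Because the hash is keyed and deterministic, the same identifier always maps to the same token, so joins across datasets still work without exposing the underlying value.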
As AI continues to advance, it is helpful to leverage well-recognized frameworks to provide assurance to your customers and stakeholders that you are managing data effectively and that the broader implications of AI are understood.
In my experience presenting to boards across multiple industries, frameworks are a useful way to give business teams and boards shared context and a common language. They also enhance audit readiness and help your teams avoid rework.
One framework an organization of any size can leverage is the NIST Cybersecurity Framework (CSF). NIST has extended its framework ecosystem to address AI explicitly, most notably through the AI Risk Management Framework, yet some teams remain unaware of this guidance. Teams can use the CSF to establish effective AI governance around data management, or to improve what they may already have in place.
NIST emphasizes secure, ethical, and effective data management practices across the data lifecycle. The key pillars are establishing data governance, ensuring data privacy, and providing ongoing training throughout that lifecycle. The CSF is widely recognized and adaptable for organizations of any size, which is why I would recommend it over frameworks such as ISO and COBIT.
If you are limited on resources, a good starting point is to establish a data governance team with representation from Security, Privacy, Legal, and Technology. This group ensures the data inventory is accurate and protected. To that end, here are some steps companies of varying sizes can implement pragmatically in the age of AI.
One side note: as a security and data governance leader, it is important, in my experience, to establish credibility early by engaging with leaders outside the data domain and showing enthusiasm for the possibilities of AI pursued safely.
The steps below give a leader an approach with both short- and longer-term benefits, with the safe and effective use of AI as the outcome:
Establish data governance. This includes data classification, access controls, and encryption protocols. In parallel, delete data when it is no longer needed; this reduces risk in an effective and low-cost way. Audit against a clear data retention policy at least annually to ensure you are not keeping excess data. This is a key area in my experience; you can maximize your success by partnering with your Audit and Technology functions to ensure backups are aligned with your retention policies.
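The retention audit described above can be automated cheaply. The sketch below assumes a per-classification retention policy and a simple data inventory; the classification labels, periods, and record fields are illustrative, not prescriptive.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention periods per data classification, in days.
RETENTION_POLICY = {"public": 3650, "internal": 1825, "confidential": 730}

def flag_expired(inventory, now=None):
    """Return IDs of records held past the retention period for their class."""
    now = now or datetime.now(timezone.utc)
    expired = []
    for rec in inventory:
        limit = timedelta(days=RETENTION_POLICY[rec["classification"]])
        if now - rec["created"] > limit:
            expired.append(rec["id"])
    return expired

# Example inventory entries.
inventory = [
    {"id": "hr-001", "classification": "confidential",
     "created": datetime(2020, 1, 1, tzinfo=timezone.utc)},
    {"id": "web-002", "classification": "public",
     "created": datetime(2024, 6, 1, tzinfo=timezone.utc)},
]
```

Running a check like this on a schedule, and extending it to backup catalogs, is one way to keep the annual audit from becoming a manual scramble.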
Ensure data privacy. This includes ensuring only authorized personnel have access to sensitive data. Encrypt data both at rest and in transit, particularly for AI models and the training datasets they consume. For AI, include model validation and bias detection as additional controls. Organizations that lag in managing data effectively tend to be weak in customer relationship management, which undermines positive business outcomes.
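One lightweight bias-detection control is to monitor whether a model's positive-outcome rate differs across groups, sometimes called the demographic parity gap. The sketch below is one illustrative metric among many, and any alerting threshold would be a policy decision, not something this code prescribes.

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups.

    predictions: iterable of 0/1 model outputs.
    groups: parallel iterable of group labels for each prediction.
    """
    totals, positives = {}, {}
    for pred, grp in zip(predictions, groups):
        totals[grp] = totals.get(grp, 0) + 1
        positives[grp] = positives.get(grp, 0) + pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example: group "a" receives positive outcomes at 50%, group "b" at 25%.
gap = demographic_parity_gap([1, 1, 0, 0, 1, 0, 0, 0],
                             ["a", "a", "a", "a", "b", "b", "b", "b"])
```

A check like this can run alongside model validation in the deployment pipeline, flagging drift in outcome rates before it reaches customers.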
Provide ongoing training. Ensure your teams are trained to recognize and respond to AI-related incidents. This is easier than it sounds, and something you will need to refine as your team becomes more knowledgeable and AI continues to evolve.
NIST Special Publication 800-53 is a key document here, as it catalogs the security controls underlying the steps above and what is relevant in audit. NIST Special Publication 800-171 is also important to leverage, as it focuses on protecting Controlled Unclassified Information (CUI) in non-federal systems; this is particularly important if your AI systems process sensitive data. Lastly, the NIST AI Risk Management Framework (AI RMF) further guides the management of risks associated with AI technologies.
The above steps and freely available NIST documentation can ensure proper risk management of AI as it pertains to sensitive data, allowing you to safely take advantage of all that AI has to offer. Use your judgment to skip pieces already in place, and treat this as a roadmap to be updated on a regular cadence, with NIST's AI guidance as a foundational anchor.
In my experience, taking complex topics such as data management in the context of AI and using a recognized framework like NIST to simplify and provide a common language resonates well with leaders within and outside the data domain.
About the author:
Kristin Lowery is currently Field Chief Information Security Officer at Optiv. She previously served as Chief Security Officer at American Electric Power, where she led enterprise-wide cyber and physical security strategy for one of the largest electric transmission systems in the U.S., serving 5.6 million customers across 11 states. She brings over 30 years of leadership experience across cybersecurity risk, data management, and infrastructure engineering.
Prior to that, Lowery was Chief Security Officer at Bread Financial, overseeing governance, regulatory compliance, cyber incident response, and data protection. She also served as Chief IT and Data Risk Officer, building a second line of defense to reduce cybersecurity and availability risks.
Earlier, she held leadership roles at JPMorgan Chase and Nationwide Insurance, along with technical roles at NCR and MCI WorldCom. She holds an MBA from the University of Phoenix and a BS from Ohio University.