Managing Risk in a Highly Regulated Industry

If data is at the heart of your business, understanding the quality of that data is paramount to your success. In other words, your success rests on how well you manage risk related to data quality. In a highly regulated environment, there are three layers to managing that risk:

  • Ownership and Accountability: Who owns the data, what responsibilities come with that ownership, and, importantly, what are the dependencies on that data, i.e., who needs it and when?

  • Operational Management: How is the data managed day to day through systems and processes?

  • Architecture and Control: What controls are in place to ensure the architecture is considered and the operational activities are maintained?

Ownership and Accountability

At the top of the house, one of the key activities of the board of directors is to set the Risk Appetite Statement for the firm. This starting point — understanding the board’s appetite for risk related to data quality — is imperative because the operational management and control infrastructure will follow from there.  

If you truly want to embed the idea of data quality into the fabric of how your company is managed — and doing so in a highly regulated data-oriented business is not optional — this starting point needs to be in place. Some progress on data quality can be made based on what external drivers dictate: global regulators, political bodies, and industry initiatives. However, those activities will always be reactive and are a tax on progress rather than a yardstick to measure success.

The discussion on data ownership would not be complete without consideration of the data consumers — who needs this data and for what purpose? Not all data requires the same quality, so what is the quality your consumers need? Is it appropriate to exchange quality for speed? What are the consequences of poor-quality data for any given use case? You have to know the answers to these questions to focus the quality discussion on the most relevant elements of your data. It is essential that the data owner is close enough to its production to influence its quality and has insight into its use and consumers to understand the quality requirements.

With “data quality” risk as a foundational component of your risk appetite statement, the importance of quality becomes integral to the firm’s success. Once you understand where the firm’s leadership stands on data quality risk, the rest is logistics.

Operational Management

On to the logistics.

If you work in a data-oriented field, managing your data daily through systems and processes is the number one job of the vast majority of your team. Once your risk appetite statement tells you there needs to be accountability for your data — a surprisingly new concept for too many — you need to work on establishing a data culture.  

Historically, we have thought about ownership of applications focusing on features and functionality and a view that the data is transitory. A pivot is required to properly consider data ownership, with the applications being viewed as transitory. That pivot begins with culture and how we think about data end-to-end. If you consider a certain group of related data elements — let’s call it a data domain — and look at how those data elements are created or acquired, and then follow them through your architecture, you will get a very different picture of the health of that data than if you just look at the data resting in any given application. That new picture represents data end-to-end, and the process needs an owner.
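The end-to-end view described above can be sketched as a simple lineage walk. Everything here is illustrative: the system names and the LINEAGE map are hypothetical, standing in for whatever lineage metadata your firm actually captures.

```python
from collections import deque

# Hypothetical lineage map for a "client reference" data domain:
# each system lists the downstream systems it feeds.
LINEAGE = {
    "crm_intake": ["master_client_db"],
    "master_client_db": ["billing", "risk_engine"],
    "billing": [],
    "risk_engine": ["regulatory_reports"],
    "regulatory_reports": [],
}

def trace_end_to_end(origin: str) -> list[str]:
    """Breadth-first walk from the point of creation to every consumer,
    giving the end-to-end picture of where the domain's data travels."""
    seen, order, queue = {origin}, [], deque([origin])
    while queue:
        system = queue.popleft()
        order.append(system)
        for downstream in LINEAGE.get(system, []):
            if downstream not in seen:
                seen.add(downstream)
                queue.append(downstream)
    return order

print(trace_end_to_end("crm_intake"))
```

Looking at that full path, rather than at the data resting in any one application, is what gives the domain owner the "very different picture" of data health.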

Another component of the pivot in data ownership is the operational management of data, with the goal of simplifying and right-sizing: establishing a shared data taxonomy and implementing a modern data architecture to reduce data lifecycle complexity. We do that by:

  • Naming a specific, empowered individual to own each set of data, and giving them the right to change how and where that data shows up across the data lifecycle (for example, consolidating systems, or shifting distributed data creation and distribution to centralized solutions).

  • Ensuring we have an enterprise-wide data taxonomy and glossary, rationalizing naming conventions and definitions so everyone agrees on the same definition for the same data element.

  • Housing that enterprise taxonomy in a place everyone can find it (the enterprise data catalog), fully accessible to every appropriate member of staff so teams can search for the data they need instead of guessing where to find it.

  • Strengthening the technology standards around data management by embedding data thinking into SDLC and agile frameworks (e.g., getting developers to buy in on looking through a data lens rather than at pure features and functionality).

  • Actively assessing and monitoring data domain health through report cards and KPIs.
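As a rough illustration of that last point, a domain report card can be as simple as a record of KPI scores graded against a threshold. The KPI names, the 0.9 passing bar, and the example figures below are assumptions, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class DomainReportCard:
    """Illustrative report card for one data domain."""
    domain: str
    owner: str
    kpis: dict[str, float] = field(default_factory=dict)  # KPI name -> score in [0, 1]

    def grade(self, passing: float = 0.9) -> str:
        """Flag any KPI below the passing threshold."""
        failing = [k for k, v in self.kpis.items() if v < passing]
        return "healthy" if not failing else f"at risk: {', '.join(failing)}"

card = DomainReportCard(
    domain="client_reference",
    owner="Jane Doe",
    kpis={"completeness": 0.98, "timeliness": 0.95, "glossary_coverage": 0.72},
)
print(card.grade())  # at risk: glossary_coverage
```

The value is less in the scoring mechanics than in publishing a regular, comparable snapshot per domain that the named owner is accountable for.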

Remember, not all data is created equal when it comes to quality. Sometimes what matters is fast data: traders want to know the last price in the market, and they want it now. Other data absolutely, positively has to be correct (having to restate year-end earnings is a painful market-facing process), so that data has to be right the first time.

Making the pivot from application ownership to data ownership will facilitate a more complete understanding of quality needs for each set of data.

Architecture and Control

Books have been written on two of the main control frameworks we often think about related to data — cybersecurity and privacy. Setting those aside for this article, let’s talk about what controls you have in place to ensure both the overall architecture is considered on an ongoing basis, and that the operational activities related to data are maintained to the appropriate standard.

Congratulations! Your risk appetite statement is now in order and the business and IT have made the pivot from application orientation to data orientation. Your focus now turns to restructuring and enhancing your data controls to align with this new framework. So, where do we start?

Architecture, namely data architecture, is as good a place as any. You are going to need a strong, smart team who can both stop the bleeding (no, Mr. Businessman, you are not going to buy that new workflow tool because we already have enterprise contracts for three others that do the same thing), and create a vision for the future (the drive to cloud, AI and machine learning, finding database technologies which will both push us forward and integrate successfully with the heritage architecture we will still be managing for the next three to five years).  You need an IT and business control process with the ability to stop rogue development and an architecture team who can’t be circumvented. That can be tough in large complex organizations, but the prize is worth the battle.

And finally, controls — the land of “trust and verify.” It is unfair to talk about controls at the bottom of an article on data. Controls deserve a place close to the top where the non-data professionals are still reading. But inevitably, the control foundation sits at the bottom, where it serves to secure our data and ensure the quality and integrity of critical data across the data lifecycle.

What should you think about when setting data controls? First, in data-heavy organizations, many of the controls you need may already exist, just not organized in a way that makes them easy to understand or execute. Firms have long written data controls into code. Is the market data file that just landed the same one as yesterday, or is it new? Does the field just entered have the length and shape of a phone number? In nearly every case there is a foundation for data quality controls, but it is sometimes difficult to understand what they are.
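The two in-code controls mentioned above (the "same file as yesterday" check and the "shape of a phone number" check) can be sketched in a few lines. The SHA-256 comparison and the North American phone pattern are illustrative choices, not prescriptions:

```python
import hashlib
import re

def file_unchanged(today: bytes, yesterday: bytes) -> bool:
    """Control 1: is today's market data file byte-identical to yesterday's?
    A stale re-delivery often signals an upstream vendor feed failure."""
    return hashlib.sha256(today).hexdigest() == hashlib.sha256(yesterday).hexdigest()

# Illustrative North American pattern; real firms need locale-aware rules.
PHONE_RE = re.compile(r"^\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}$")

def looks_like_phone(value: str) -> bool:
    """Control 2: does the field just entered have the length and
    shape of a phone number?"""
    return bool(PHONE_RE.match(value))

print(file_unchanged(b"px=101.2", b"px=101.2"))  # True, so investigate a stale feed
print(looks_like_phone("617-555-0142"))          # True
print(looks_like_phone("not a number"))          # False
```

The point is that checks like these are often already scattered through production code; the work is inventorying them and treating them as named, documented controls.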

The next steps? Talk to the data owners and set the standards by data element, whether already controlled or not. Ensure the controls are documented to enable the necessary discussion and testing. Talk to the consumers to see whether they agree on the rigor and comprehensiveness of the controls. Make rules about documenting provider-consumer agreements if that isn’t already de rigueur. Remember that not all data requires the same approach to quality, and the 80/20 rule is flipped in data — only 20% of your data needs to be of the highest quality, so don’t burn too many calories controlling stuff that doesn’t matter. If data is the heart and soul of your firm, operational data controls, along with strong architectural governance and practices, will be keys to success.
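That flipped 80/20 rule can be made concrete with a tiered control standard: elements tagged critical get the full control battery, and everything else gets a lightweight check. The tier names, element names, and control lists below are all hypothetical:

```python
# Illustrative two-tier standard: heavy controls for the critical minority,
# a lightweight check for everything else.
STANDARDS = {
    "critical": ["completeness", "accuracy", "timeliness", "reconciliation"],
    "standard": ["completeness"],
}

# Hypothetical element-to-tier tagging, set with the data owner.
ELEMENTS = {
    "year_end_earnings": "critical",  # has to be right the first time
    "last_trade_price": "critical",   # fast data, but still monitored
    "marketing_opt_in": "standard",   # don't burn calories here
}

def required_controls(element: str) -> list[str]:
    """Return the control battery owed to a data element; unknown
    elements default to the lightweight tier."""
    return STANDARDS[ELEMENTS.get(element, "standard")]

print(required_controls("year_end_earnings"))  # the full battery
print(required_controls("marketing_opt_in"))   # ['completeness']
```

Keeping the tier assignment explicit, and agreed with both owners and consumers, is what turns "not all data is created equal" from a slogan into an operating rule.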

The Final Word

As a data leader, your success is based on how well you manage risk associated with your key data. The appropriate consideration for ownership and accountability, how that is operationalized, and a fit-for-purpose set of architectural guidelines and controls will set you on the path to success. Good luck on your journey!


About the Author

Deb Lorenzen has been Chief Operating Officer of various portfolios for industry leaders State Street and BNY Mellon. She is currently Head of Enterprise Data Governance at State Street, the leading provider of asset intelligence to the owners and managers of the world’s capital. Previously, she led the Strategy & Data Governance team at State Street Global Advisors, heading a number of infrastructure development programs. She has a proven track record for driving internal and external change programs, including client, technology, and human capital impacts, through periods of growth and consolidation.

Lorenzen serves as Chair of the Board of Directors for Stearns Bank N.A., in St. Cloud, Minn. She is an adjunct professor in the MBA Program at Providence College, teaching coursework on how to execute strategic change in large corporate environments.

She holds an MBA from Columbia University and bachelor’s degrees in economics and journalism from Fresno State.

CDO Magazine
www.cdomagazine.tech