Leadership

Trust Should Not Be Addressed After It Is Lost — MeridianLink VP of Data


Written by: CDO Magazine Bureau

Updated 12:20 PM UTC, Mon September 8, 2025

MeridianLink stands at the forefront of digital transformation, delivering powerful software solutions that enable banks, credit unions, mortgage lenders, and consumer reporting agencies to operate smarter and faster. Partnering with more than 2,000 financial institutions across the U.S., the company helps streamline complex workflows, boost lending efficiency, and unlock actionable, data-driven insights.

The first part of this interview series examined how financial institutions can break down data silos, cultivate thriving data communities, and establish standardized definitions. The second installment explored strategies for sustaining those communities, pinpointing critical KPIs, and overcoming the hurdles posed by disconnected systems.

In this final chapter, MeridianLink’s VP of Data, Chris Eldredge, sits down with Afidence Business Development Manager, Spencer Hogan, to discuss how organizations can embed trust in their data, elevate data quality as a core KPI, harness context to tell compelling data stories, and apply AI to reduce bias and deliver more objective, value-driven narratives.

Embedding trust into the data lifecycle

For Eldredge, trust is not a post-incident exercise — it is the operating principle. As he frames it, trust must be engineered into every layer of the data program. “Trust is not something that should be addressed after it is lost. It’s something that should be built into the fabric of everything that you do, starting in the beginning.”

That trust-building mindset starts when teams build data pipelines, data warehouses, or semantic layers for reporting. He adds, “Data engineers sometimes are perceived as magicians. They can do anything. And if you’re not a professional data engineer, you probably have no idea what they actually do.”

This breeds potential distrust, because what happens behind the scenes remains opaque to non-engineers. To address it, Eldredge suggests making every step transparent, for example by exposing pipeline logs that show where the data is at each stage. He notes, “Trust isn’t built when things go right. Trust is built when things go wrong. It’s how you communicate and handle that situation that enables trust to happen.”
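In practice, that transparency can be as lightweight as emitting a structured status line at every stage. The sketch below is a minimal illustration of the idea, not MeridianLink’s actual tooling; the stage names and the `run_stage` helper are hypothetical.

```python
import logging

# Structured, human-readable logs make each pipeline stage visible
# to non-engineers: where the data is, and whether the step worked.
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("pipeline")

def run_stage(name, func, records):
    """Run one pipeline stage and log its status and record counts."""
    log.info("stage=%s status=started records_in=%d", name, len(records))
    try:
        out = func(records)
        log.info("stage=%s status=succeeded records_out=%d", name, len(out))
        return out
    except Exception as exc:
        # Trust is built when things go wrong: surface failures loudly and clearly.
        log.error("stage=%s status=failed error=%s", name, exc)
        raise

# Hypothetical usage: each stage reports itself as data moves through.
rows = run_stage("extract", lambda r: [{"id": 1}, {"id": 2}], [])
rows = run_stage("transform", lambda r: [dict(x, valid=True) for x in r], rows)
```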

Eldredge adds that while pipeline failures are inevitable, the communication around them must be crystal clear. Similarly, processes on the data warehouse side must be well defined from both business-rule and technical-rule perspectives.

“If your data warehouse is doing what the business asks you to do, then you’ve done your job, but you have to make it clear to them that it’s actually doing that,” he adds. The other crucial aspect from a trust perspective is to have data dictionaries, says Eldredge.

That documentation lives alongside shared language: “Sometimes it’s called a business glossary, but you need to make sure that that’s available too.” Also, data communities can become allies who validate that definitions reflect how the business truly operates.
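A glossary entry does not need to be elaborate; what matters is that the definition, its owner, and the underlying rule are written down and discoverable. A hypothetical entry might capture something like:

```python
# Hypothetical business glossary entry; field names are illustrative only.
glossary_entry = {
    "term": "funded_loan",
    "definition": "A loan whose disbursement has been confirmed by the core system.",
    "owner": "Lending Operations",
    "source_table": "warehouse.loans",
    "business_rule": "status = 'FUNDED' AND disbursed_at IS NOT NULL",
}
```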

Score what matters: Data quality as a KPI

“Data quality is a discipline,” says Eldredge. He separates data quality as a function from data engineering and evaluates it from multiple angles — accuracy against reference values, timeliness versus refresh expectations, and more.

Eldredge and his team approach data quality by translating it into a cascading metric that rolls up into a single, enterprise-wide score — a value between zero and one hundred. “Maybe that number is 95%. Whether 95% is good or bad is unknown until you socialize what that means to the business. You need to build rules that help the business understand.”

He illustrates with an example: If five systems should refresh hourly, and one of them hasn’t refreshed in two days, the quality for that check is 4/5, or 80%. The only question that matters is business fitness: “Is that good enough for the business?”
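That rollup is straightforward to express in code. The following sketch uses hypothetical check names and an unweighted average; Eldredge describes a cascading metric, and a real implementation would likely weight checks by business importance.

```python
from dataclasses import dataclass

@dataclass
class QualityCheck:
    """One data quality check: how many systems pass out of how many tested."""
    name: str
    passed: int
    total: int

    @property
    def score(self) -> float:
        return self.passed / self.total

def enterprise_score(checks: list[QualityCheck]) -> float:
    """Roll individual check scores up into a single 0-100 number."""
    return 100 * sum(c.score for c in checks) / len(checks)

# Eldredge's example: five systems should refresh hourly, one is two days stale.
checks = [
    QualityCheck("timeliness_hourly_refresh", passed=4, total=5),  # 80%
    QualityCheck("accuracy_vs_reference", passed=98, total=100),   # 98%
]
print(f"{enterprise_score(checks):.1f}")  # 89.0 -- good enough for the business?
```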

Data storytelling, need for context, and AI-driven insight in data narratives

Speaking of data storytelling, Eldredge returns to setting expectations. “You have to be able to set and manage expectations about what the data can and cannot do.” That discipline is even more urgent in the age of AI, where models will only be as reliable as the guardrails teams encode around the data and its limits. To understand what the data implies, one needs to understand the business process and context.

Context, he stresses, is the difference between signal and misdirection. To make the idea stick, he borrows a memorable parable: “10 elephants for a dollar would be a great deal if I had a dollar and I needed 10 elephants.” If the data at hand is relevant to the question, Eldredge says, it can be put into context; when it is not, people must be helped to understand why.

Finally, Eldredge cautions that human incentives shape data. A sales rep logging a lost deal might select “competitor” for reasons unrelated to the actual cause: “You may actually have incentives that tell you to pick a competitor versus maybe the real reason, which was that your demo engineer didn’t show up.”

Here, AI can help by triangulating operational traces and debiasing the narrative: “You can use AI to take some of those incentives out of it to determine the correct answer, which is that no demo happened,” he concludes.
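In its simplest form, that triangulation is a cross-check of the stated reason against operational traces; a real system might feed the same signals to a model rather than a hard-coded rule. The field names in this sketch are hypothetical.

```python
# Minimal sketch: audit a CRM loss reason against demo-event records.
def audit_loss_reason(deal: dict, demo_events: list[dict]) -> str:
    """Flag a recorded loss reason that the operational record contradicts."""
    demos = [e for e in demo_events if e["deal_id"] == deal["id"]]
    if deal["loss_reason"] == "competitor" and not demos:
        return "suspect: marked 'competitor' but no demo ever happened"
    return "consistent"

deal = {"id": 42, "loss_reason": "competitor"}
print(audit_loss_reason(deal, demo_events=[]))  # suspect: ...
```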

CDO Magazine appreciates Chris Eldredge for sharing his insights with our global community.
