Smart Data Empowers Data Products with Secure, Private & Trusted Data Sharing Within and Between Organizations

How do Chief Data Officers unlock the full value of data? By managing data like a product: creating reusable data products and repeatable patterns for piecing together data technologies so companies can derive value from their data today and tomorrow.

I interviewed Alan Rodriguez, founder of the nonprofit Data Freedom Foundation, to get his insights and observations regarding data product solutions. Our in-depth discussions led to this series of four articles.  

In the previous three articles, we introduced the Smart Data Protocol (a data-sharing standard and open-source solution created by the Data Freedom Foundation). In this final article, we share further observations about data products and explore how the Smart Data Protocol powers the most advanced enterprise data product solutions.

Observation 10: Today’s Predominant Approaches to Data Governance Are Largely Unsuccessful

Derek:

Organizations typically employ either a grassroots or big bang data strategy — neither of which enables them to make the most of their data investments. How does the notion of “data products” provide a better way? 

Alan:

Over the last decade, many Chief Data Officers followed one of two data governance strategies. 

In a grassroots “business-focused” strategy, each team pursuing individual use cases assembles the necessary data and technologies. This approach results in significant duplication of efforts and a tangle of redundant technology architectures that are costly to build, manage, and maintain. 

Meanwhile, in a big bang “technology-focused” strategy, a centralized team organizes, cleanses, and aggregates all organizational data, assuming this effort will drive all use cases. This approach does enable some data reuse, but it’s often not aligned with business use cases and therefore fails to support end users’ specific needs. In response to the inevitable lack of business value, new use cases are funded and aligned with specific business priorities, often triggering a grassroots approach and its associated problems.

Big Bang Strategy

Organizations that choose the “big bang” approach often assume it will accommodate the needs of every analytics development team and data end user in one fell swoop. They launch a massive program to build pipelines that extract all the data in their systems, clean it, and aggregate it in a data lake in the cloud, without taking much time up front to align those efforts with business use cases. The resulting solution is often unable to deliver basic new functionality.

After spending nearly three years creating a new platform, a global bank found that only some users, such as those seeking raw historical data for ad hoc analysis, could easily use it. The critical architectural needs of many potential applications, such as real-time data for personalized customer offerings, went unaddressed. As a result, the program didn’t generate much value for the firm.

Derek:

So, what's a better approach if neither a big bang nor a grassroots data strategy works?

Alan: Data product strategy.

Data Product Strategy

Companies are most successful when they treat data like a product. When a firm develops a commercial product, it typically tries to create an offering that can address the needs of as many kinds of users as possible to maximize sales. Often that means developing a base product that can be customized for different users.  

An article in the July-August 2022 Harvard Business Review, “A Better Way to Put Your Data to Work,” maintains that when companies treat their data like a product, they can:

  • Reduce the time it takes to implement a data product in new use cases by as much as 90%.

  • Decrease their total cost of ownership (technology, development, and maintenance) by up to 30%.

  • Reduce their risk and data governance burden.

Companies enhance their products by adding new features and modifications that improve core capabilities. They introduce brand-new offerings in response to user feedback, performance evaluations, and changes in the market. At the same time, firms seek to increase production efficiency, reusing existing processes, machinery, and components wherever possible.

Derek:

I can see how treating data in much the same way helps companies balance delivering value with their data today and paving the way for quickly getting more value out of their data tomorrow. Can you give us some examples?

Alan: Yes.

Observation 11: Manage Data Like a Product to Maximize Value

A data product delivers a high-quality, ready-to-use set of data that people within and outside an organization can easily access and use to help solve many different business challenges. It might, for example, provide 360-degree views of customers, employees, or a channel, like a bank’s branches.  

Another product might enable digital twins to use data to virtually replicate the operation of real-world assets or processes, such as critical pieces of machinery or an entire factory production line.

Because they emphasize data reuse, data products can generate impressive returns. For example:

  • One customer data product at a large national bank has powered nearly 60 use cases across multiple channels. Those applications provide $60 million in incremental revenue and eliminate $40 million in losses annually. And as the product is applied to new use cases, its impact will continue to grow.

  • At a large telecom company, the first data product provided comprehensive data powering 150 use cases, which are set to produce hundreds of millions of dollars in cost savings and new revenue within three years. The company estimates that over the first 10 years, those use cases will have a cumulative financial impact of $5 billion, providing a return many times over on the initial investment.

Data products are layered above existing operational data stores like data lakes, data warehouses, data virtualization, and master data management (MDM) solutions that provide a trusted, high-quality data feed. The people using data products don’t have to waste time searching for data, processing it into the right format, and building individual data sets and pipelines. Data products create reusable and cost-effective technology and data assets.  

And just as manufacturers use quality assurance testing and production-line inspections to ensure that their products work as promised, data product managers can guarantee the quality of their offerings’ data.

For instance, a data product owner must tightly manage data definitions and metadata, availability, and access controls. They must also work closely with “data stewards” who own the data source systems or are accountable for the data’s integrity.
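
To make that ownership concrete, here is a minimal illustrative sketch of such a data product “contract”: a single place that records the product’s definitions, metadata, availability guarantee, stewards, and access controls. The Python structure and field names are assumptions for illustration, not part of any published standard.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: a minimal "data product contract" capturing the
# definitions, metadata, availability, and access controls a product owner
# manages. Field names are assumptions, not part of any published standard.

@dataclass
class DataProductContract:
    name: str                      # e.g. "customer_360"
    description: str               # business definition of the product
    owner: str                     # accountable data product owner
    stewards: List[str]            # owners of the upstream source systems
    schema_version: str            # versioned data definitions
    freshness_sla_hours: int       # availability / refresh guarantee
    allowed_roles: List[str] = field(default_factory=list)  # access control

    def can_access(self, role: str) -> bool:
        """Basic access check against the declared roles."""
        return role in self.allowed_roles


customer_360 = DataProductContract(
    name="customer_360",
    description="Unified view of customer interactions across channels",
    owner="retail-data-products@bank.example",
    stewards=["crm-steward@bank.example", "cards-steward@bank.example"],
    schema_version="2.1.0",
    freshness_sla_hours=24,
    allowed_roles=["marketing_analyst", "risk_modeler"],
)

print(customer_360.can_access("marketing_analyst"))  # True
```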

Smart Data as an open standard dramatically improves data sharing by:

  • Enabling exponentially more data sharing while mitigating many of the risks and third-party costs associated with traditional data-sharing paradigms.

  • Addressing data quality, security, provenance, and trust challenges as data flows between organizations and individuals.

  • Automating most security, privacy, and regulatory compliance as data flows between organizations and individuals.

Smart Data-Enhanced Data Products

Smart Data as a data-sharing standard solves several existential problems across data security, privacy, trust, and monetization:

The Data Security Problem: How can we secure and protect shared data assets?

While many organizations focus on data access policies to secure data assets, today's solutions only secure data within the organizational boundary. Smart Data extends organizational access control to data shared outside the organization.
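
As a rough illustration of that idea, the sketch below wraps a shared payload in an envelope that carries its own access policy, so a recipient outside the organization must satisfy the policy before reading the data. It illustrates the concept only; it is not the Smart Data Protocol’s actual format or API.

```python
import json
from datetime import datetime, timezone

# Conceptual sketch of access policy that travels with shared data: the payload
# is wrapped in an envelope carrying its own rules, and a recipient outside the
# organization must satisfy the policy before reading the payload. This is an
# illustration of the idea, not the Smart Data Protocol's actual format.

def wrap_with_policy(payload: dict, allowed_orgs: list, expires_iso: str) -> dict:
    """Attach a minimal access policy to the data being shared."""
    return {
        "policy": {"allowed_orgs": allowed_orgs, "expires": expires_iso},
        "payload": payload,
    }

def open_envelope(envelope: dict, requesting_org: str) -> dict:
    """Enforce the embedded policy before releasing the payload."""
    policy = envelope["policy"]
    if requesting_org not in policy["allowed_orgs"]:
        raise PermissionError(f"{requesting_org} is not an authorized recipient")
    if datetime.now(timezone.utc) > datetime.fromisoformat(policy["expires"]):
        raise PermissionError("the sharing agreement has expired")
    return envelope["payload"]

envelope = wrap_with_policy(
    {"customer_segment": "high_value", "count": 1200},
    allowed_orgs=["partner-analytics.example"],
    expires_iso="2030-01-01T00:00:00+00:00",
)
print(json.dumps(open_envelope(envelope, "partner-analytics.example"), indent=2))
```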

The Data Privacy Problem: How can we automate data privacy to minimize complexity and compliance risks?

Privacy compliance continues to grow in complexity. Smart Data enables automated privacy compliance within and between data-sharing partners. Smart Data lowers organizational risk and accelerates innovation.
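
One way to picture automated privacy compliance is a single check that compares the declared purpose of a data request against the consent recorded for a data subject. The sketch below shows that one hypothetical check; real compliance automation spans far more rules and jurisdictions.

```python
# Illustrative sketch of automating one privacy check: a request to use a field
# is allowed only if the declared purpose matches the consent recorded for that
# data subject. The fields and purposes shown are hypothetical examples.

CONSENT_RECORDS = {
    "subject-123": {"email": {"marketing"}, "purchase_history": {"analytics"}},
}

def purpose_allowed(subject_id: str, field: str, purpose: str) -> bool:
    """Return True only if the subject consented to this field for this purpose."""
    return purpose in CONSENT_RECORDS.get(subject_id, {}).get(field, set())

print(purpose_allowed("subject-123", "email", "marketing"))   # True
print(purpose_allowed("subject-123", "email", "analytics"))   # False
```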

The Data Trust Problem: How can we trust data to make significant, critical, and timely decisions?

Many organizations focus on data quality and lineage to improve confidence in data-driven decision-making. Smart Data provides cryptographic proof of data provenance across all data-sharing partners, dramatically improving provenance, quality, and trust and accelerating data-driven decisions.
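
To see what cryptographic provenance buys, consider a lineage in which each step signs a record that references the digest of the previous record, so a downstream consumer can verify the whole chain. The sketch below uses an HMAC with a shared key as a stand-in for real digital signatures; it is illustrative only and not the protocol’s actual construction.

```python
import hashlib
import hmac
import json

# Conceptual sketch of cryptographic provenance: each step in a data product's
# lineage signs a record that includes the digest of the previous record, so a
# downstream consumer can verify the whole chain. HMAC with a shared key stands
# in for real public-key signatures; this is not the protocol's actual design.

def sign_step(prev_digest: str, step: dict, key: bytes) -> dict:
    body = json.dumps({"prev": prev_digest, "step": step}, sort_keys=True)
    return {
        "prev": prev_digest,
        "step": step,
        "digest": hashlib.sha256(body.encode()).hexdigest(),
        "signature": hmac.new(key, body.encode(), hashlib.sha256).hexdigest(),
    }

def verify_chain(chain: list, key: bytes) -> bool:
    prev = "genesis"
    for record in chain:
        body = json.dumps({"prev": record["prev"], "step": record["step"]}, sort_keys=True)
        ok_link = record["prev"] == prev
        ok_hash = hashlib.sha256(body.encode()).hexdigest() == record["digest"]
        ok_sig = hmac.compare_digest(
            hmac.new(key, body.encode(), hashlib.sha256).hexdigest(), record["signature"]
        )
        if not (ok_link and ok_hash and ok_sig):
            return False
        prev = record["digest"]
    return True

key = b"shared-demo-key"
chain = [sign_step("genesis", {"source": "crm_export", "rows": 10000}, key)]
chain.append(sign_step(chain[-1]["digest"], {"transform": "deduplicate", "rows": 9850}, key))
print(verify_chain(chain, key))  # True
```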

The Data Ownership Problem: How can we monetize our data without losing control?

The bigger dilemma is converting organizational data from an expense and a risk into a secure, revenue-generating asset. Smart Data enables data owners to understand precisely how their data is used, when, by whom, and which decisions individual data products drive. It also allows data products to be evaluated on their individual ROI and, eventually, captured on an organization’s balance sheet as a tangible asset.
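
As a sketch of what that usage-level visibility could look like, the hypothetical ledger below records who consumed a data product, when, and for which decision, so attributed value can be rolled up per product. The structure and figures are assumptions for illustration only.

```python
from collections import defaultdict
from datetime import datetime, timezone

# Illustrative sketch only: a simple usage ledger recording who consumed a data
# product, when, and for which decision, so value can be attributed per product.
# The structure, names, and figures are assumptions made for illustration.

usage_ledger = []

def record_usage(product: str, consumer: str, decision: str, value_usd: float) -> None:
    """Append one usage event to the ledger."""
    usage_ledger.append({
        "product": product,
        "consumer": consumer,
        "decision": decision,
        "value_usd": value_usd,
        "at": datetime.now(timezone.utc).isoformat(),
    })

def value_by_product() -> dict:
    """Aggregate attributed value per data product, a rough per-product ROI input."""
    totals = defaultdict(float)
    for entry in usage_ledger:
        totals[entry["product"]] += entry["value_usd"]
    return dict(totals)

record_usage("customer_360", "marketing_analyst", "next-best-offer campaign", 125000.0)
record_usage("customer_360", "risk_modeler", "credit limit review", 40000.0)
print(value_by_product())  # {'customer_360': 165000.0}
```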

Conclusion

Derek:

As Albert Einstein said, “We cannot solve our problems with the same thinking we used when we created them.” The first article in this series boldly states that “The Best Solutions to Data Security and Privacy Challenges Require Revolutionary Thinking.” Your concluding remarks? 

Alan:

The Smart Data Protocol is the revolutionary thinking intended to solve our pervasive data-sharing challenges — challenges that many data professionals and organizations accept as a necessary evil. This “necessary evil” of sharing data with zero control no longer applies. We have better options. We have a nonprofit-governed data-sharing standard. 

We hope you find Smart Data as fascinating as the team at the Data Freedom Foundation does. If you do, we’d love to hear from you!

Article Series:
- The Best Solutions to Data Security and Privacy Challenges Require Revolutionary Thinking
- The Smart Data Protocol: Combining Web3 Innovations With Proven Enterprise Data Technologies. Our second article explores repeating programmatic themes in technology over the last two decades (software-defined networks and storage, virtual computing, and code containers), culminating with the idea of Smart Data (software-defined data).
- Our third article explores Privacy Enhanced Technologies (PETs) and how they provide the Smart Data Protocol’s most fascinating ‘smart’ capabilities.

About the Author

Derek is the Founder, CEO, and Principal Consultant of Gavroshe. He has over three decades of Data & Analytics experience spanning Big Data, Information Resource Management (IRM), and Business Intelligence/Data Warehousing. He established Data Resource Management and IRM functions in several large corporations using Bill Inmon’s DW 2.0 and the Zachman Framework as a basis. Derek has established and managed numerous enterprise programs and initiatives in Data Governance, Business Intelligence, Data Warehousing, and Data Quality Improvement. He is a founding member of MIT’s International Society for CDOs.
