Data Collateralization: How to Decide the Value of This Intangible Asset

Douglas B. Laney, Data & Analytics Strategy Innovation Fellow | West Monroe 

The idea of treating data as an asset, one as tangible as any other, is fast catching on. Data is already being used as collateral to secure loans, and major corporations are among the first movers. United Airlines, Delta Air Lines, and American Airlines raised around US$25 billion during the pandemic against their customer loyalty data.

Small to medium-sized businesses (SMBs), by contrast, have yet to make a mark in this space: securing funding through data collateralization remains significantly more challenging for them, even though the practice benefits both lender and borrower.

On the borrower side, it provides a nondilutive source of funding, allowing founders to retain equity and control of the company. The risks are also lower than with traditional venture debt: even in the event of default, the worst outcome is that the lender retains a copy of the data assets while the borrower keeps the original and continues operating without disruption. The seizure of physical assets or a hostile creditor takeover, by comparison, can be far more damaging.

The lender side also enjoys unique advantages. Data is a constantly growing, nondepleting, and nonexclusive asset that can be monetized to satisfy a defaulted loan. Through the data assets, lenders also gain intimate, up-to-date visibility into the borrower before and during the loan period; the data can act as an early warning signal, revealing the company's health ahead of other indicators.

But how do we decide the value of this intangible asset?

Valuing intangible assets such as data is becoming increasingly popular. However, arriving at a value for a specific dataset that both lender and borrower will accept remains a key roadblock. While there are established methods for valuing tangible assets such as real estate and inventory, data is intangible and represents a new asset class.

To mitigate this challenge, some third-party experts and consulting firms provide valuation services for intangible assets, including data. The process, however, is time-consuming and expertise-intensive: even the most experienced consultants may take several weeks or months to deliver a valuation report, and their services often cost several hundred thousand dollars.

While each consultancy firm may keep its exact valuation methodology confidential, standard methods for valuing data include the Royalty Relief Method (RRM) and the cost basis method. These methods consider the direct financial value generated by the business's datasets but differ in how they assign costs to the data assets themselves.

The RRM, commonly applied to intangible assets such as trademarks and copyrights, estimates the hypothetical royalty payments the business would incur if it had to license the asset from a third party. The cost basis method, on the other hand, totals the cost of producing or purchasing the data, considering factors such as research and development expenses, storage and server costs, and labor.
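
To make the contrast concrete, here is a minimal Python sketch of both methods. All figures (royalty rate, tax rate, discount rate, and cost inputs) are illustrative assumptions, not published benchmarks from any valuation firm.

```python
# Minimal sketch of the two data valuation methods described above.
# All rates and figures are illustrative assumptions.

def royalty_relief_value(annual_revenues, royalty_rate=0.02,
                         tax_rate=0.25, discount_rate=0.10):
    """Present value of the after-tax royalties 'relieved' by owning
    the data outright instead of licensing it from a third party."""
    value = 0.0
    for year, revenue in enumerate(annual_revenues, start=1):
        royalty = revenue * royalty_rate      # hypothetical license fee
        after_tax = royalty * (1 - tax_rate)  # royalties would be tax-deductible
        value += after_tax / (1 + discount_rate) ** year  # discount to today
    return value

def cost_basis_value(rd_expenses, storage_costs, labor_costs):
    """Sum of the costs to produce (or reproduce) the dataset."""
    return rd_expenses + storage_costs + labor_costs

# Example: a dataset credited with $10M of revenue per year for 5 years
print(f"RRM value:        ${royalty_relief_value([10e6] * 5):,.0f}")
print(f"Cost basis value: ${cost_basis_value(200e3, 50e3, 300e3):,.0f}")
```

Note that the two methods can yield very different numbers for the same dataset, which is precisely why a value acceptable to both lender and borrower is hard to pin down.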

Traditional data valuation approaches do not work

The traditional methods of valuing data have limitations. One of the biggest challenges is the absence of market comparisons: while various valuation models aim to determine the value of data, the lack of transparency in the data brokerage market makes it difficult to understand market rates or identify potential buyers.

At a high level, the data market is massive, widespread, and highly opaque, with more than 5,000 companies globally and growing. The global data market is predicted to reach $462 billion by the decade's end, expanding at a CAGR of 6.8% over the forecast period, according to Transparency Market Research (TMR). However, these marketplaces and exchanges do not make their transaction details publicly available, and many data exchanges are still peer-to-peer, relying on custom solutions or platforms. This makes it difficult to see how the markets operate or to gauge supply and demand for various data types.

To overcome these challenges, some new players in the data valuation field are using search and AI technologies to gain insight into these markets. For instance, Nomad Data, a search platform for third-party data, has started collecting and analyzing metadata. Gulp Data, a “neo-lender” that offers data-backed loans, uses machine learning trained on thousands of datasets from active markets to perform data valuations in hours rather than weeks or months. By monitoring liquidity across data exchanges listing more than 15 billion records, it gains real-time visibility into actual market demand for data. It can also identify differences in demand between markets globally and uncover specific buyers, shedding light on monetization and providing real market comparisons.
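
As an illustration of how such a market-comparables approach might work, here is a hypothetical Python sketch: a regression model trained on metadata features of datasets listed on exchanges predicts a market value for a new dataset. The features, prices, and model choice are illustrative assumptions, not any vendor's actual methodology.

```python
# Hypothetical sketch of ML-based, market-comparables data valuation.
# Features, prices, and model choice are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Metadata for datasets observed on data exchanges:
# [rows in millions, column count, refreshes per year, category id]
X_market = np.array([
    [5.0,   40,  12, 0],   # consumer purchase data
    [0.8,  120,   4, 1],   # firmographic data
    [50.0,  15, 365, 2],   # geolocation pings
    [2.5,   60,  52, 0],   # consumer purchase data, weekly refresh
])
y_prices = np.array([120_000, 45_000, 600_000, 95_000])  # observed asking prices

# Fit a simple regressor on the observed listings
model = GradientBoostingRegressor(random_state=0).fit(X_market, y_prices)

# Estimate the market value of a borrower's dataset by comparison
borrower_dataset = np.array([[3.0, 55, 52, 0]])
estimate = model.predict(borrower_dataset)[0]
print(f"Estimated market value: ${estimate:,.0f}")
```

A production system would train on thousands of listings and far richer metadata, but the principle is the same: pricing a dataset by comparison with what similar datasets actually fetch on the open market.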

As the data markets grow and demand increases from companies seeking to put their data assets to work, technology-driven valuations like these will become increasingly necessary for understanding the worth of data. Combined with genuine expertise, these tools can remove some of the barriers SMBs face in leveraging their data and democratize access to the expanding global information market. The path to understanding the actual market worth of data is becoming clearer today.

About the Author

Douglas Laney is the Data & Analytics Strategy Innovation Fellow at West Monroe. He consults with business, data, and analytics leaders on conceiving and implementing new data-driven value streams. Laney originated the field of infonomics and authored the best-selling book “Infonomics” and the recent follow-up, “Data Juice: 101 Real-World Stories of How Organizations Are Squeezing Value From Available Data Assets.” He is a three-time Gartner annual thought leadership award recipient, a World Economic Forum advisor, and a Forbes contributing author. Laney co-chairs the annual MIT Chief Data Officer Symposium, is a visiting professor at the University of Illinois and Carnegie Mellon business schools, and sits on various high-tech company advisory boards.
