Opinion & Analysis
Written by: Dan Mazur
Updated 11:03 AM UTC, Wed August 9, 2023
In this article, I examine how we as a society need to think about the impact of AI on our world, paying close attention to confidentiality, data protection, bias, liability, regulation and privacy, and intellectual property.
Throughout history, we have come across game-changing moments that have dramatically reshaped the way we live. Sometimes, these inventions took a while to catch on. Take the telephone, for example — at first, people saw it as a fancy gadget or even a luxury item (much to the chagrin of Mr Bell). Eventually, it revolutionized the way we communicate, turning into a must-have in our daily lives.
Who would have thought at the time that people would have anxiety about leaving their phones behind?
However, there were also those technologies that made a big impact right from the start. The printing press spread knowledge like wildfire, the steam engine jump-started the Industrial Revolution, and the internet changed how we connect and share information. And now, here we are, at another exciting turning point: The rise of artificial intelligence (AI).
As generative AI technologies have recently burst onto the scene, we are seeing a mix of excitement, caution, maybe fear, and of course, those already racing to get rich quick. So, how should we react to this 'AI for the common person'? If you don't know yet, you aren't alone. My suggestion is simply to take a step back and take a breath.
As AI continues to advance, we will realize it already touches virtually every aspect of our lives: from education and healthcare to manufacturing and finance, and from entertainment to, well, something a bit more serious like national security. We need to quickly strike the right balance between embracing AI's potential to help humanity and putting the necessary ethical and regulatory protections in place.
When it comes to keeping our data safe and sound, we have to rethink the way we handle confidentiality. AI systems process heaps of information, so it is crucial to protect both the data we provide as inputs and the insights generated from it. We can do this by developing internal rules that restrict sharing personal or proprietary info or by requiring encryption to secure data before it even reaches the AI system.
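One way to operationalize that last idea is to scrub sensitive material from a prompt before it ever leaves your organization. The sketch below is purely illustrative: the regex patterns and the `redact` helper are my own toy examples, not any standard tool, and a production system would rely on a dedicated PII-detection library and vetted encryption rather than a handful of patterns.

```python
import re

# Toy patterns for illustration only; real deployments would use a
# dedicated PII-detection library and proper encryption.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with placeholder tokens before the text
    leaves the organization's boundary."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize the complaint from jane.doe@example.com, SSN 123-45-6789."
safe_prompt = redact(prompt)
# Only safe_prompt would be sent to the external AI service.
print(safe_prompt)
```

The point is architectural, not the specific patterns: the sanitization step sits between your users and the AI provider, so nothing confidential depends on the provider's own data-handling promises.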
Data protection – another big topic
AI products are constantly learning and improving, which makes it tricky to set clear boundaries on how service providers use customer data. To tackle this issue, we can implement contractual terms that keep the ownership of pre-existing materials with the customer. Plus, we need to keep an eye on AI’s potential to link redacted data and address the concerns related to data persistence, repurposing, and spillovers.
Now let’s talk a bit about bias in AI. You should know that these large language models (LLMs) like GPT are trained on massive amounts of data. But here’s the thing – a lot of the electronic data and information floating around in recent years has been controversial, biased, and downright partisan. And guess what? LLMs take all that input and use it to generate their output.
So, if there’s an overwhelming presence of bias in the data they’ve been fed, it increases the chances of biased results coming out of the model. It’s like the saying goes, “Garbage in, garbage out.” If we don’t address the bias in the data, it’s likely to rear its ugly head in the AI’s output. That’s why it’s so important to be aware of this issue and take proactive steps to ensure that AI systems are trained on diverse, representative, and unbiased data. Let’s strive for fairness and inclusivity in the AI realm to make sure everyone is treated fairly and respectfully.
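What might "taking proactive steps" look like in practice? As a minimal sketch, you can at least measure how groups are represented in a training set before using it. The labels, threshold, and helper below are hypothetical examples of mine; real bias audits use far richer metrics and documented datasheets, not a single frequency count.

```python
from collections import Counter

# Hypothetical training examples, each tagged with a demographic group.
# In practice, such labels would come from a documented data audit.
training_data = [
    {"text": "...", "group": "A"},
    {"text": "...", "group": "A"},
    {"text": "...", "group": "A"},
    {"text": "...", "group": "B"},
]

def representation(rows):
    """Return each group's share of the dataset."""
    counts = Counter(row["group"] for row in rows)
    total = len(rows)
    return {group: n / total for group, n in counts.items()}

shares = representation(training_data)
print(shares)  # {'A': 0.75, 'B': 0.25}

# Flag groups falling below an (arbitrarily chosen) 30% threshold.
underrepresented = [g for g, s in shares.items() if s < 0.30]
print(underrepresented)  # ['B']
```

Even this crude check makes the "garbage in, garbage out" risk visible before training, rather than after biased output appears.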
Liability is a big deal…
…when it comes to AI, especially if you are familiar with data and privacy. Here’s the crux of the matter: AI service providers must ensure they possess the appropriate permissions for all the data they handle. Without these permissions, customers could face issues such as copyright infringement or data misuse.
It is certainly not beyond imagination that an AI might 'accidentally' use copyrighted content it is not supposed to (calling it accidental assumes it knew the rules being broken), and that can lead to some serious legal trouble. The bottom line is that when dealing with AI, it is important to be aware of these liability risks. Companies need to make sure they are working with providers who follow the rules, obtain the proper permissions, and have measures in place to address copyright issues and bias.
Regulations and privacy laws are multiplying in nearly every jurisdiction across the world, and AI will only amplify the focus on protections. Some of these regulations and privacy laws may seem similar, and many of them will tackle the same issues. For example, the European Union (EU) is intensifying its efforts by introducing the AI Act, a significant piece of legislation that aims to create a legal framework for AI in the EU.
Similar to the EU’s General Data Protection Regulation (GDPR), the AI Act signifies the EU’s continuing commitment to protecting individual rights in the era of rapidly advancing technology. Much like the GDPR revolutionized data protection, the AI Act could pave the way for setting global standards for AI regulation.
This only emphasizes how the EU is once again at the forefront of establishing comprehensive legal structures to address the complexities introduced by emerging technologies. To quote directly from the draft of the AI Act, “…it is a classification system that determines the level of risk an AI technology could pose to the health and safety or fundamental rights of a person. The framework includes four risk tiers: unacceptable, high, limited, and minimal”.
It is an interesting and simple framework. The Act focuses on ensuring that AI systems are safe and respect fundamental rights, all while fostering innovation and economic growth. What does this mean?
The AI Act establishes requirements for the various levels of risk in AI systems, like transparency and accountability. It also sets up a European Artificial Intelligence Board, which will play a crucial role in implementing and updating the rules. Plus, it imposes penalties for non-compliance, with fines of up to 6% of a company’s annual global turnover. Yes, the EU takes privacy very seriously indeed.
Regardless of whether and when the EU's AI Act comes into play, we are already seeing the legal and regulatory landscape shift. We not only have to consider existing regulations like the National Artificial Intelligence Initiative Act of 2020, the European Union's General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), the California Privacy Rights Act (CPRA), and other laws like the Virginia Consumer Data Protection Act (VCDPA), but also keep up with new legislation like the (still proposed) AI Act. As the AI field evolves, so will the rules of the game, and we need to stay ready to adapt.
IP rights in AI-generated content
Navigating intellectual property (IP) rights in AI-generated content presents challenges as we explore the realm of artificial intelligence, and the existing legal landscape may not offer straightforward answers. Who rightfully owns content that an AI generates? Who really owns the picture of Kim Jong Un in a Hawaiian shirt eating poke?
These uncertainties are like uncharted waters, requiring ongoing discussions within companies and as a society. They necessitate collaboration with legal experts (certainly through lawsuits as well), and the development of clear legal frameworks. Accidental leaks of IP have occurred, underscoring the need for robust strategies and safeguards.
There are many examples of IP leaks over the years, but who would have thought an Air National Guardsman would post top-secret national security information on Discord to a group of gamers? What about confidential information in your company? Would you want any of this data merged with the inputs of the LLMs to be used by everyone else?
It is now very easy to use these technologies to improve productivity, but doing so could result in inadvertent leaks that allow unauthorized access to proprietary algorithms, jeopardizing a company's competitive advantage and potentially compromising system security. Such incidents highlight the imperative of implementing strong measures to prevent accidental IP leaks and safeguard valuable assets. As data enthusiasts, we embrace the responsibility of grappling with these complex IP dilemmas while ensuring the fair and secure development of AI technology.
So, here we are, at a significant turning point in our journey. Just like the printing press, the steam engine, the telephone, and the internet revolutionized the world, we find ourselves amid a new wave of transformation with the rise of artificial intelligence (AI). As we navigate this exciting technological change, it’s important to support the ethical and regulatory considerations that come along.
By doing this, we, as a society can seize the opportunity to not only shape the future of AI but shape the future of society as well. It’s pretty amazing to think about the possibilities that lie ahead. We have seen how historical advancements brought immense progress and transformed our lives in ways the people of the time couldn’t have imagined.
So, let's pause for a moment and ask ourselves: How does this turning point, this ideal inflection point, compare to those remarkable advancements of the past? How can we harness the power of AI while ensuring it benefits everyone? It's a responsibility that rests upon us, and the choices we make today will undoubtedly shape the course of history.
Together, we need to embrace this exciting era with enthusiasm and a sense of shared purpose… with a healthy dose of caution as well. By working collaboratively, addressing moral and ethical concerns, and taking on the challenge responsibly, we can create a better, AI-driven world that enhances our lives and empowers us all. How will that look? At this point, we can only imagine.
About the Author
Dan Mazur is a data strategist and information governance professional with 30 years of experience. He has presented at many data conferences on a range of topics, including business intelligence and analytics, data and business capabilities, data management, and information governance.
Mazur holds degrees in data and software architectures, education, archaeology, and history from Carnegie Mellon University and Cleveland State University. He is dedicated to maximizing the use of data to achieve business goals and using technology to advance education.