How implementing AI guide rails today will benefit society tomorrow


In June of this year, members of the European Parliament (MEPs) voted to adopt the draft EU AI Act, passing it by a vote of 499 to 28, with 93 abstentions. The first such law from a major regulator, it aims to align future AI development with the EU's rights and values, such as social and environmental well-being, safety, privacy and transparency, non-discrimination, and human oversight, by analyzing and classifying AI systems based on the risk they pose to users.

AI and machine learning have been applied across industries for several years, transforming everything from customer service to medical diagnoses. But the emergence of generative AI products like ChatGPT has triggered an explosion of interest in the value the technology can bring to businesses and consumers alike. In fact, reports forecast that the global generative AI market will reach USD 126.5 billion by 2031, growing at a compound annual growth rate (CAGR) of 32%.
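For a rough sense of scale, that 32% CAGR can be sanity-checked with a line of arithmetic. The Python snippet below is a back-of-the-envelope sketch; the 2022 base year is an assumption on my part, since the report's exact forecast window isn't stated here.

```python
# Back-of-the-envelope check on the cited forecast.
# Assumption: the 32% CAGR runs over 2022-2031; the report's actual
# base year is not given here, so the result is approximate.
target_usd_bn = 126.5        # forecast global market size in 2031
cagr = 0.32                  # compound annual growth rate
years = 2031 - 2022          # assumed 9-year forecast window

implied_base = target_usd_bn / (1 + cagr) ** years
print(f"Implied base-year market: ~${implied_base:.1f}B")  # ~$10.4B
```

In other words, the forecast implies a market of roughly USD 10 billion at the start of the window, which conveys just how aggressive a 32% compounding rate is over nine years.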

The EU AI Act serves as a pivotal moment that will shape how AI is developed in the EU and could serve as a blueprint for other regulatory authorities worldwide. At an estimated cost of 31 billion euros to implement, this type of regulation doesn't come cheap, nor is it without opposition. In an open letter sent to the European Parliament, Commission, and member states, leaders from more than 150 organizations caution that the act has the potential to "jeopardize technical sovereignty." While most executives understand the need to regulate AI, and many are keen to move forward with proper regulation, they want to ensure it is being done thoughtfully.

All that said, regulation is imperative to ensuring ethical and safe development, preventing bias, protecting privacy, fostering transparency, and minimizing the potential risks posed by powerful artificial intelligence systems. Without proper regulation, AI can be developed and deployed in ways that undermine human rights, democracy, and global security. In other words: the opposite of what the technology is intended for.

Defining Risk

A vital element of the EU AI Act is that it seeks to define the levels of risk posed by various types of AI deployment. At this point, AI risk is classified into four levels, summarized below (a brief illustrative sketch in code follows the list):

Unacceptable Risk is assigned to AI systems that are a threat to people. These systems will be banned outright; they include cognitive behavioral manipulation of specific vulnerable groups (think voice-activated toys that may influence children) and social scoring systems that classify people based on behavior, socioeconomic status, or personal characteristics.

High Risk covers AI systems that could negatively impact safety or fundamental rights. One group falls under existing EU product safety regulations (AI in toys, cars, medical devices, and the like); the second includes AI systems used in areas such as education, infrastructure, law enforcement, migration, and employment. AI systems deemed high risk must be registered in a publicly accessible database managed and published by the EU.

Limited Risk AI systems are subject to minimal transparency obligations so that users can make informed decisions; users should be made aware when they are interacting with AI, for example via a chatbot. The act also specifically requires generative AI systems (like ChatGPT) to identify content, videos, and photos created by AI, and to protect copyright and prove legitimacy by disclosing the sources used for training and prompts.

Minimal Risk AI systems can be used freely without oversight. The overwhelming majority of AI systems currently in use fall into this category, such as spam filters and AI-enabled video games.
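For readers who think in data models, the tiering above amounts to a simple lookup from system category to obligation. The Python sketch below is purely illustrative: the tier names come from the draft act, but the example systems and one-line obligations are simplified assumptions, not legal guidance.

```python
# Illustrative only: a toy model of the EU AI Act's four-tier risk scheme.
# Tier names follow the draft act; the example systems and obligation
# summaries are simplified assumptions, not legal guidance.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "registration in the EU database plus conformity checks"
    LIMITED = "transparency obligations (e.g., disclose AI interaction)"
    MINIMAL = "free use, no additional oversight"

# Hypothetical example systems mapped to the tiers described above.
EXAMPLE_SYSTEMS = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "AI-assisted medical device": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```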

Regulating What We Know Today

Taking a risk-based approach is a smart first step by the European Commission, as it establishes a framework for determining what level of regulation each class of system warrants. By seeking a "balanced and proportionate approach," regulators can provide a minimum level of oversight while not stifling technological development.

But in order to do this effectively, we need to get comfortable with the concept of regulating amid a rapidly evolving landscape, one where we may not be able to see the whole picture just yet. This idea tends to make professionals and institutions across government, legal, and regulatory roles quite nervous, but we need to be willing to put guide rails in place and be ready to course-correct as needed.

Fortunately, this is starting to happen in pockets across the world. According to Stanford University's 2023 AI Index, 37 AI-related bills became law in 2022, bringing the worldwide total to 123 since 2016. Authorities in China have rolled out a series of AI-related laws to protect privacy and promote transparency in generative AI. UK Prime Minister Rishi Sunak has promised that the UK will take a "leadership role" in drawing up "safe and secure" rules and has announced that the country will host the first global AI regulation summit in the fall of 2023.

Despite this progress, there remains little international consensus on how to set global standards. For example, the U.S. has published a "Blueprint for an AI Bill of Rights," but it is taking a softer stance overall and has yet to codify or enforce compliance. This may create challenges for foreign technology companies looking to break into the U.S. or other markets.

What About Jobs?

It is nearly impossible to talk about AI without venturing into the topic of jobs. Efficiencies created by AI will undoubtedly impact certain roles, but the technology will also create opportunities. Still, I think a more profound conversation needs to be had about what types of jobs AI will create and how we train and upskill talent to truly reap the benefits of this technology: for example, by bringing the power of AI and other technology tools to tomorrow's professionals while they are still in school or university, and by designing these tools with the next generation. We need to move past the "teach everyone to code" movement and instead augment human skills with machine intelligence to achieve outcomes like productivity and usability.

Finding a Balance

It's undeniable that AI will continue to change how we all live and work. What is imperative is that we are unified in our goal of finding the right balance between the benefits of new technology and its unintended consequences, putting checks and balances in place while also unlocking AI's potential.

Revolutionary technology as powerful as AI cannot be left to a small number of people when such a range of diverse talent and skills is available around the world. Industry and governments need to converge to put a framework in place, one that mitigates risks while also unlocking AI's opportunities in a safe and transparent way. With the proper guide rails in place, we can realize the promise of AI to better serve communities by facilitating justice, providing access to healthcare, driving financial inclusion, and more.

Source: Legal Technology
