How to Make AI a Force for Good

By Manuel Sanchez, Information Security and Compliance Specialist, iManage

There’s no sense ignoring an uncomfortable truth: Generative AI has laid a gift at the feet of bad actors. This new technology enables them to launch highly targeted and ultra-convincing phishing attacks at a scale they previously only dreamed of, or to generate a devastating piece of malware simply by typing a few prompts on the keyboard. 

It’s not all doom and gloom, though. At the same time that AI is turbocharging the attack capabilities of “the bad guys”, it also offers new ways for “the good guys” to batten down the hatches and improve their defensive posture. 

So, how can we put our finger on the scale to tip it in a positive direction? What is required to make AI a force for good?

Into the fold

The first step is for vendors to build AI into their offerings in a thoughtful and mindful manner. This approach should take into account any potential risk to sensitive content, as well as ethical and regulatory duties. 

In practice, this means integrating the technology into their products as built-in functionality, rather than requiring customers to implement standalone third-party tools that could expose their data to a costly breach. Taking this approach enables AI to benefit customers in a range of areas.

Smarter security, thanks to AI

One long-time use case for AI has been using behavioural analytics to detect patterns within a document management system (DMS), the better to identify anomalous behaviour that might indicate a potential data breach. For example, if legal professional X normally accesses only a couple of dozen files from the same 3-5 matters but suddenly starts downloading hundreds of files from 10 different matters, that could indicate that the user’s login credentials have been compromised. AI-based monitoring compares the activity against previous patterns to determine whether it is genuine or malicious, which helps reduce disruption for the user if the activity turns out to be legitimate.
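
To make the idea concrete, here is a minimal sketch of that kind of baseline-versus-current check. The event structure, field names, and thresholds are illustrative assumptions, not a description of how any particular DMS implements its monitoring; real systems weigh many more signals.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class AccessEvent:
    user: str
    matter_id: str
    file_id: str

def build_baseline(history):
    """Summarise each user's normal footprint: the matters and files they usually touch."""
    baseline = defaultdict(lambda: {"matters": set(), "files": set()})
    for event in history:
        baseline[event.user]["matters"].add(event.matter_id)
        baseline[event.user]["files"].add(event.file_id)
    return dict(baseline)

def is_anomalous(user, recent_events, baseline, matter_factor=2.0, file_factor=5.0):
    """Flag recent activity that far exceeds the user's historical footprint."""
    stats = baseline.get(user)
    if stats is None:
        return True  # no history at all is itself worth a closer look
    matters = {e.matter_id for e in recent_events if e.user == user}
    files = {e.file_id for e in recent_events if e.user == user}
    return (len(matters) > matter_factor * max(len(stats["matters"]), 1)
            or len(files) > file_factor * max(len(stats["files"]), 1))
```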

Another security-focused use case for AI has centred on virus and malware recognition. Historically, antivirus tools relied on signature-based methods: referencing a continually updated list of known viruses and other threats when scanning a system, either to confirm those threats weren’t present or to quarantine them if they were.

Unfortunately, the bad guys move pretty quickly, and if a signature file doesn’t reflect the most recent threats, it won’t provide effective protection. That’s where AI comes in: rather than cross-referencing a signature file, it can “learn” what malicious code looks like and then quarantine something that looks suspicious as soon as it encounters it.
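
As a rough illustration of the difference, here is a hedged sketch that contrasts a hash-based signature lookup with a tiny classifier trained on simple byte-level features. The hash list, the features, and the toy training samples are all assumptions made for illustration; production engines use far richer features and vastly larger training sets.

```python
import hashlib
import math
from sklearn.ensemble import RandomForestClassifier

# Signature-based: a (toy) list of known-bad file hashes.
KNOWN_BAD_HASHES = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}

def signature_match(sample: bytes) -> bool:
    """Only catches samples that are already on the list."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_HASHES

def features(sample: bytes) -> list:
    """Crude byte-level features: length and Shannon entropy."""
    total = max(len(sample), 1)
    counts = [sample.count(bytes([b])) for b in range(256)]
    entropy = -sum((c / total) * math.log2(c / total) for c in counts if c)
    return [float(len(sample)), entropy]

# Model-based: learn from labelled examples (1 = malicious, 0 = benign).
train_samples = [b"MZ\x90\x00" + bytes(range(256)) * 4, b"hello world, plain text report"]
labels = [1, 0]
model = RandomForestClassifier(n_estimators=10, random_state=0)
model.fit([features(s) for s in train_samples], labels)

def looks_malicious(sample: bytes) -> bool:
    """Can flag suspicious samples that have never appeared in any signature file."""
    return signature_match(sample) or model.predict([features(sample)])[0] == 1
```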

Generative AI is ready to join in

In the same way that traditional forms of AI have been woven into existing offerings to give the good guys powerful new capabilities, generative AI can also be used as a force for good. The chat-style interface of generative AI, for instance, makes it ideal as a dispenser of valuable knowledge and best practices, especially for IT and security professionals within an organization.

Picture a security analyst who spends hours querying systems and working through vast amounts of reporting data to investigate an incident and assess the level of risk it poses. Entering a few prompts into a generative AI interface can take care of extracting and analysing that data, helping to establish the risk in minutes.
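
A minimal sketch of that workflow might look like the following: bundle the raw incident data into a single prompt and ask the model for an assessment. The ask_model helper here is hypothetical; in practice it would wrap whichever generative AI endpoint the vendor’s platform exposes.

```python
import json

def ask_model(prompt: str) -> str:
    """Hypothetical call to an embedded generative AI service."""
    raise NotImplementedError("wire this to your platform's LLM endpoint")

def triage_incident(alerts, audit_log) -> str:
    """Bundle the raw incident data into one prompt and request a risk assessment."""
    prompt = (
        "You are assisting a security analyst. Based on the alerts and audit "
        "entries below, summarise what happened, estimate the risk level "
        "(low/medium/high), and suggest next steps.\n\n"
        f"Alerts:\n{json.dumps(alerts, indent=2)}\n\n"
        f"Audit log:\n{json.dumps(audit_log, indent=2)}"
    )
    return ask_model(prompt)
```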

Of course, the answers that the generative AI serves up are only as good as the knowledge it draws upon to generate content. For this reason, it’s important to ensure that the underlying large language models (LLMs) are trained on vetted and approved content – a process known as grounding. In this case, the IT team would want to ensure that the LLM was trained on trusted sources and the collective wisdom of the organization, rather than generic materials from the entire expanse of the internet, or outdated versions of best practices.
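
One simple way to picture grounding is as a retrieval step that restricts the model’s context to vetted, organisation-approved material before it answers. The sketch below is a simplifying assumption: the keyword-overlap retrieval and the ask_model stub stand in for the embeddings, vector indexes, and platform LLM calls a real system would use.

```python
APPROVED_DOCS = {
    "password-policy": "Passwords must be rotated every 90 days and stored in the approved vault.",
    "incident-runbook": "On suspected credential theft, disable the account and notify the security lead.",
}

def ask_model(prompt: str) -> str:
    """Hypothetical call to an embedded generative AI service."""
    raise NotImplementedError("wire this to your platform's LLM endpoint")

def retrieve(question: str, docs: dict, top_k: int = 2) -> list:
    """Pick the approved documents that share the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs.items(),
                    key=lambda kv: len(q_words & set(kv[1].lower().split())),
                    reverse=True)
    return [text for _, text in scored[:top_k]]

def grounded_answer(question: str) -> str:
    """Answer only from the vetted material, not the open internet."""
    context = "\n\n".join(retrieve(question, APPROVED_DOCS))
    prompt = (f"Answer using only the approved material below.\n\n{context}\n\n"
              f"Question: {question}")
    return ask_model(prompt)
```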

The training of generative AI models brings up another important consideration: using a free generative AI tool – rather than generative AI functionality that’s been embedded in an existing vendor’s platform – raises the possibility that the data being fed into the model might leak to the outside world.

That’s bad enough when it’s the IT team’s own data; it’s potentially devastating if the real estate or capital markets practice group is using sensitive, highly negotiated documents to train an LLM as a knowledge base, and that information seeps into the public domain. Using generative AI that’s been embedded in trusted applications creates a “closed loop” that prevents leaks of sensitive data, de-risking its use.

An Oppenheimer moment?

When it comes to AI, we’re essentially at an “Oppenheimer” moment: a powerful new technology has emerged that can be used as a force for either good or bad. It’s incumbent on tech vendors to incorporate AI into their products in ways that give customers confidence they are meeting their ethical and regulatory responsibilities, so that it can be used as a force for good. With anything this powerful, there’s a responsibility to use it wisely.
