What GCs need to know about algorithmic bias

Source: Legal Dive. Authors: Bradley Merrill Thompson and Michael Shumpert

Bradley Merrill Thompson is a member of the firm at Epstein Becker Green, and Michael Shumpert is managing director at Mosaic Data Science. Views are the authors’ own.

General counsel are aware of the long and growing list of stories: An employment screening tool that doesn’t account for accents. Facial recognition software that struggles with darker skin tones. A selection app that shows a preference for certain backgrounds, education, or experience.

In some ways, it is easier to detect certain types of discrimination committed by an algorithm than it would be if we were auditing a purely human decision. But even though all the software code and data are right before our eyes, evaluating the performance of these models is difficult at best. We cannot achieve perfection, but collaboration between data scientists and attorneys is the key to developing algorithms that comply with anti-discrimination laws.
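To make that auditing point concrete, here is a minimal, hypothetical Python sketch of one common first-pass check: the EEOC’s “four-fifths rule” for adverse impact in selection rates. The group labels and counts below are invented for illustration, and a ratio below 0.8 is only a rule of thumb that flags a tool for closer legal and statistical review, not a legal conclusion.

```python
# Minimal sketch of a four-fifths (adverse impact) check on a screening tool.
# All group labels and counts here are hypothetical, for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants the screening tool selected."""
    return selected / applicants

# Hypothetical audit data: (selected, total applicants) per group.
groups = {
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% selection rate
}

rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
reference = max(rates.values())  # compare against the highest-rate group

for group, rate in rates.items():
    ratio = rate / reference
    # Under the four-fifths rule of thumb, a ratio below 0.8 suggests
    # potential adverse impact warranting legal and statistical review.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```

Even a check this simple requires attorney input on which groups to compare and what a flag should trigger, which is exactly why the collaboration matters.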

There are several areas where data scientists and attorneys need to work together to ensure that an AI model is not biased.
