Artificial Intelligence Is Still a Double-Edged Sword

Image credit: Samsung Global Newsroom

Imagine having your fate decided by an algorithm.

COMPAS, or the Correctional Offender Management Profiling for Alternative Sanctions, is a risk assessment tool used by courts across the nation to aid judges in criminal sentencing. Based on a defendant's answers to a 137-question questionnaire, COMPAS predicts recidivism, flags the risk of violent crime, and informs inmate supervision options. But by granting an entirely opaque algorithm unprecedented power over the course of people's lives, we may simply be encoding within it the same bias that already plagues our criminal justice system, and upholding the punitive sentencing practices that contribute to the crisis of mass incarceration.

Over the last decade, artificial intelligence has developed at a remarkable pace, revolutionizing healthcare, education, and policymaking, among other fields, but amid these massive advancements has arisen a discussion of the simultaneous perils of automation and its implications for humanity. We don't have to look far to find headlines warning us of the weaponization of AI in the form of autonomous military drones programmed to kill, or of the exploitation of 50 million Facebook users' data to influence the outcomes of the 2016 U.S. presidential election and the U.K. Brexit referendum. But as software like COMPAS illustrates, algorithms can be just as destructive when bias comes into play under the hood, a risk that could spell our doom if left unaddressed.

Understanding where algorithmic bias comes from is fundamental to reversing it. Amazon's now-scrapped recruiting engine was found to unfairly penalize prospective employees on the basis of gender, downgrading resumes that contained phrases specific to female applicants. The culprit? Years of historical hiring decisions and examples of successful applications in which men were overrepresented, producing a biased algorithm trained on similarly biased data. Google Photos' image classification system was found to label African-Americans in photos as gorillas, a failure traced to a training set containing far more samples of lighter-skinned faces than darker-skinned ones. One primary takeaway is that AI often reflects the bias of its creators. Just as AI is touted as learning to interact with the world the way a child might, machines can learn prejudice as a byproduct, carrying forward the bias inherent in human decision-making.
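
To make that mechanism concrete, here is a minimal sketch in Python. It is not Amazon's system, and every feature name and number is an illustrative assumption: it simply shows how a model trained on biased historical hiring outcomes can learn to penalize a feature that merely correlates with gender, even when gender itself is never an input.

```python
# A minimal, hypothetical sketch (not Amazon's system): a classifier
# trained on biased historical hiring data reproduces that bias, even
# though gender is never one of its input features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Latent gender (0 = male, 1 = female) is never shown to the model...
gender = rng.integers(0, 2, n)
# ...but one resume feature correlates with it, e.g. membership in a
# women's organization, which appears almost only on women's resumes.
womens_org = ((gender == 1) & (rng.random(n) < 0.6)).astype(float)
experience = rng.normal(5.0, 2.0, n)

# Historical labels: equally experienced women were hired half as often.
p_hire = 1.0 / (1.0 + np.exp(-(experience - 5.0)))
p_hire = p_hire * np.where(gender == 1, 0.5, 1.0)
hired = (rng.random(n) < p_hire).astype(int)

X = np.column_stack([experience, womens_org])
model = LogisticRegression().fit(X, hired)

# The learned weight on the proxy feature is negative: the model has
# absorbed the historical bias without ever seeing gender directly.
print("weight on women's-org feature:", model.coef_[0][1])
```

On this synthetic data, the weight on the proxy feature comes out negative: dropping the protected attribute does not stop the prejudice from surviving in whatever correlates with it.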

The path to fixing AI bias isn't so clear-cut, however. According to MIT Technology Review, when Amazon reworked its recruiting engine to disregard explicitly gendered words like "women's," the system still latched onto implicitly gendered verbs like "executed" and "captured," which appear far more frequently in male applicants' resumes than in those of their female counterparts. And because deep learning algorithms are optimized for accuracy, not fairness, we can lose sight of their broader human impact and their potential to be deployed as vehicles of oppression. It is also difficult to capture a concept as indefinite as "bias" in mathematical terms, which has produced an array of competing, mutually incompatible definitions of algorithmic fairness.
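
To see why those definitions can conflict, consider a toy example (the numbers below are assumptions, not drawn from any real system). When two groups have different base rates, even a perfectly accurate predictor can satisfy one common fairness criterion, equal opportunity, while violating another, demographic parity.

```python
# Toy numbers, assumed for illustration: when two groups have different
# base rates, a perfectly accurate predictor satisfies "equal opportunity"
# (equal true positive rates) yet violates "demographic parity"
# (equal selection rates).
import numpy as np

def selection_rate(y_pred):
    return y_pred.mean()

def true_positive_rate(y_true, y_pred):
    return y_pred[y_true == 1].mean()

# Group A: 50% of members truly qualify; group B: only 20% do.
y_true_a = np.array([1] * 50 + [0] * 50)
y_true_b = np.array([1] * 20 + [0] * 80)

# The predictor happens to be perfectly accurate in both groups.
y_pred_a, y_pred_b = y_true_a.copy(), y_true_b.copy()

# Equal opportunity holds: the true positive rate is 1.0 for both groups.
print(true_positive_rate(y_true_a, y_pred_a), true_positive_rate(y_true_b, y_pred_b))
# Demographic parity fails: selection rates are 0.5 versus 0.2.
print(selection_rate(y_pred_a), selection_rate(y_pred_b))
```

Which of these criteria a court, an employer, or a regulator should demand is a value judgment, not something the mathematics can settle on its own.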

With all of this in mind, will truly equitable AI remain merely an elusive ideal? Not necessarily: researchers are already making significant strides toward diversifying training sets, rooting out bias at every stage of an algorithm's development, proposing new ways to define fairness, and making justice a priority in the creation of new AI. It's important that we work toward intelligent systems that remove, rather than amplify, the disparities our society already faces.

Read the original article at https://medium.com/@sneharevanur/artificial-intelligence-is-still-a-double-edged-sword-1131eee8d95b