Research & Innovation

Combating Bias with a Human-Centered Approach to Artificial Intelligence

Stevens Institute of Artificial Intelligence director says the key to addressing algorithmic bias is prioritizing awareness, education and integrity — both in data and in humans

What does it mean for an artificial intelligence (AI) to be biased?

A documentary recently released on Netflix and PBS highlights the experiences of MIT Media Lab researcher Joy Buolamwini, a dark-skinned Black woman who discovered that commercially available facial recognition software algorithms were unable to detect her face unless she wore a white mask.

Last fall, a Twitter user complained on the platform that Zoom's virtual background algorithm continually erased his Black colleague's head — only to discover that Twitter's own photo preview algorithm repeatedly chose white faces in a photo as "salient" (Twitter's word) over Black faces, regardless of position, and even when the faces were cartoons.

How could three independently designed algorithms consistently fail to "see" Black people — the social implications of which are tantamount to technological racial discrimination?

And how do such failures enter the system in the first place?

Jason Corso, director of the Stevens Institute of Artificial Intelligence and Viola Ward Brinning and Elbert Calhoun Brinning Endowed Chair within the Charles V. Schaefer, Jr. School of Engineering & Science

According to Jason Corso, Stevens Institute of Technology computer science professor and director of the Stevens Institute of Artificial Intelligence (SIAI), the main culprit is most often data.

"Real-world issues of bias, in my view, are almost all a result of tacit assumptions made during data acquisition or data annotation. It has very little to do with the actual method and how that method is used."

Despite the prime importance of data to modern AI methods, Corso says it is too often undervalued, understudied and underappreciated.

Yet the problem of bias in AI, as well as its solution, he said, lies not in technology but in humanity. This belief underpins his vision for SIAI.

"A lot of individuals think of AI as taking humans out of the loop and replacing them with computing systems," Corso said. "The Institute at Stevens couldn't be more diametrically opposed to that mindset. I see AI as not just blindly trusting algorithms and the data sets that are being created and hoping for the best. Instead I see it as putting the human back in the loop, both in terms of responsibility in creating the data and the algorithms, and in deploying them for different benefits to humans. That's why I think of the human as the central focus to the SIAI as we rebuild and cultivate a new ethos for AI at Stevens."

Learning bias

To understand how an AI system learns bias, Corso would like you to picture a cat door.

Imagine you want to build a cat door system on your back porch that allows your cat — but only your cat — through the door. All other cats or animals should be refused entry.

To train the algorithm that will run this cat door system, you would feed it thousands of photos of animals — photo after photo after photo — that have been labeled as either your cat or not your cat. This curated set of annotated photos represents the data set that trains your cat door decision-making algorithm. The more photos you feed into it, the better your results will be.

From this data set, the AI learns which patterns and details (parameters) in a photo are statistically likely to indicate your cat and which are not. Based on these parameters, the algorithm has learned to "see" your cat and label it accordingly.

Thus, when your cat door system is deployed and must decide whether to allow a particular animal through, it will make its decisions in the future based on its analysis of these snapshots from the past. The algorithm will load the model it built from that massive collection of training data, compare the animal waiting on the porch to that model, and decide whether the animal fits the parameters of your cat established during the training phase. If the similarity of details indicative of your cat is strong enough, the cat door opens.
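To make the mechanics concrete, here is a minimal, hedged sketch of the cat-door idea in Python. It is a deliberate simplification, not Corso's actual method: it assumes each photo has already been reduced to a numeric feature vector and labeled as "my cat" or "not my cat," and it uses a generic off-the-shelf classifier (scikit-learn's logistic regression) to stand in for whatever model a real system would use.

```python
# Simplified sketch of the cat-door example (illustrative only, not the
# actual system described in the article). Assumes photos have already been
# converted to numeric feature vectors, e.g., by a pretrained image model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training set: 1,000 labeled photos, 128 features each.
# Label 1 = "my cat", label 0 = "not my cat".
features = rng.normal(size=(1000, 128))
labels = rng.integers(0, 2, size=1000)

# Training phase: the model learns which feature patterns are statistically
# likely to indicate your cat.
model = LogisticRegression(max_iter=1000).fit(features, labels)

# Deployment: compare the animal at the door to the patterns learned above.
animal_at_door = rng.normal(size=(1, 128))
probability_my_cat = model.predict_proba(animal_at_door)[0, 1]

# Open the door only if the match is strong enough.
OPEN_THRESHOLD = 0.9
print(f"P(my cat) = {probability_my_cat:.2f}")
print("Door opens" if probability_my_cat >= OPEN_THRESHOLD else "Door stays shut")
```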

If your training set of snapshots is lacking, however, so is your AI's ability to learn.

What if, for example, your training data included too many pictures of your cat, or too few, relative to the other pictures in the set? Or all the photos were taken during the day, but the system must also work at night? Or the animals in your photos all face the camera head-on, and none approach from the side? Or perhaps your cat has markings and coloring similar to a fox, a raccoon or the mean cat next door, but you failed to include any photos of such animals?

Such deficiencies in a data set, Corso explained, are "a welcome invitation for bias" and can lead to undesired or inappropriate results.

They also occur all too often, especially in research.
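One way such gaps reveal themselves is simply by counting. The sketch below is a hedged illustration of auditing the hypothetical cat-door training set for the deficiencies described above; the metadata fields (label, lighting, viewpoint) are invented for the example and would depend on how the photos were actually annotated.

```python
# Illustrative audit of a (hypothetical) annotated training set. The fields
# and values here are assumptions made for the example.
from collections import Counter

training_photos = [
    {"label": "my_cat", "lighting": "day", "viewpoint": "front"},
    {"label": "not_my_cat", "lighting": "day", "viewpoint": "front"},
    {"label": "not_my_cat", "lighting": "day", "viewpoint": "front"},
    # ... thousands more annotated photos ...
]

for field in ("label", "lighting", "viewpoint"):
    counts = Counter(photo[field] for photo in training_photos)
    total = sum(counts.values())
    print(field)
    for value, count in counts.most_common():
        print(f"  {value}: {count} ({count / total:.0%})")

# A value that never shows up in these counts -- "night" lighting, "side"
# viewpoints, lookalike animals -- is exactly the kind of gap that invites
# bias once the system is deployed.
```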

Interrogating the data set

Although an oversimplification of contemporary AI methods, the cat-door example illustrates how essential data integrity is to an algorithm's success.

Like a young child's, an AI's worldview is determined and limited by what it is taught. If the information used to teach it is incorrect, insufficient or skewed, the decisions it bases on that data will be, too.

Unfortunately, says Corso, AI researchers spend far more time focused on their algorithms than on questioning the data behind them.

Most data sets used in the research community, he explained, are developed from public sources, such as YouTube, Vimeo and Flickr. The advantage of such data sets is that they are massive, expedient, cheap and (in the U.S.) available for use in research under the fair use doctrine.

"But there are also tons of bias in those data sets," said Corso.

Most Flickr photos, he explained by way of example, focus on a single object in the center of an image. This is not an inherent flaw: it is simply a result of the way this type of photo gallery tends to be used. But if your algorithm's performance relies on large samples of, say, landscapes or streetscapes to function appropriately, the results from an algorithm trained with this Flickr data set are destined to exhibit an unwelcome bias.

In the rush from research to product, data quality is often overlooked. But to avoid or reduce algorithmic bias, in his own work, Corso said, "I have to make sure that the way I present my data to the learning part of the algorithm is balanced and fair in the way I want it to be."
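Corso does not spell out his balancing procedure here, but one generic way to present imbalanced data more fairly to a learner is to weight each class inversely to its frequency (or, equivalently, to resample the minority class). The sketch below is offered only as an illustration of that general idea.

```python
# Generic class-weighting illustration (not Corso's specific practice).
# Rare classes get proportionally larger weights so they are not drowned out.
from collections import Counter

labels = ["my_cat"] * 50 + ["not_my_cat"] * 950  # a badly imbalanced set

counts = Counter(labels)
n_samples, n_classes = len(labels), len(counts)
class_weights = {
    label: n_samples / (n_classes * count) for label, count in counts.items()
}
print(class_weights)  # -> {'my_cat': 10.0, 'not_my_cat': ~0.53}

# Many training APIs accept a mapping like this (e.g., the class_weight
# argument in scikit-learn); oversampling the minority class is another option.
```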

A lack of such balance underlay Buolamwini's discovery. Further research revealed that the facial recognition software she was using — as well as widely used software from IBM, Google and Microsoft, among others — had been trained and tested on photo data sets predominantly featuring light-skinned male subjects. Testing showed these algorithms underperformed in identifying dark-skinned faces, female faces and, most dramatically, dark-skinned female faces.

Trained on data sets with too little diversity in complexion and gender, the facial recognition algorithms not only struggled to identify Buolamwini's race or gender: they failed to "see" her at all. Presented with a dark-skinned female face and unable to make a properly informed decision, the algorithms could not recognize that her face was, in fact, a face.
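The kind of analysis described above boils down to disaggregated evaluation: reporting an algorithm's performance separately for each demographic group instead of one overall average. Here is a minimal sketch of that idea; the group names and records are made up for the example, not real results.

```python
# Minimal sketch of disaggregated evaluation: measure performance per group
# rather than one overall number. The records below are illustrative only.
from collections import defaultdict

# (group, was_face_detected) pairs from a hypothetical test set.
results = [
    ("lighter-skinned male", True),
    ("lighter-skinned female", True),
    ("darker-skinned male", True),
    ("darker-skinned female", False),
    ("darker-skinned female", True),
    # ... many more test images ...
]

detected = defaultdict(int)
total = defaultdict(int)
for group, was_detected in results:
    total[group] += 1
    detected[group] += int(was_detected)

for group in sorted(total):
    rate = detected[group] / total[group]
    print(f"{group}: detection rate {rate:.0%} on {total[group]} images")

# A large gap between groups signals data-driven bias even when the overall
# average accuracy looks acceptable.
```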

It is possible that Zoom's virtual background algorithm suffered from a similar bias. Such bias is not necessarily intentional, but its ramifications extend far beyond university projects and social media platforms: it can manifest in society as discriminatory practices in policing and law enforcement, housing, access to healthcare and financial services, and hiring and human resources processes.

Although algorithms can be designed to compensate to a certain extent for deficiencies in data, Corso stresses that prioritizing and actively questioning data integrity and assumptions are paramount to building fairness into AI systems.

The flagship product of a startup Corso co-founded, in fact, was designed to assist in that regard.

Called FiftyOne, the open-source tool is designed to help researchers identify and address potential shortcomings in their visual data sets that could introduce problems such as bias into their results.

"FiftyOne was built to build awareness and capability around the importance of looking at your data — like working with your data like a sculptor would work with clay as you're building the data sets and the models of AI systems," he said.

Empowering an AI-literate society to build human-centered AI systems

Corso is "very skeptical" of AI methods that purport to be able to "undo" bias in data, even when that data has been generated by an AI itself.

With the study of AI bias still in its relative infancy, he said, the research community has yet to develop even the vocabulary necessary to meaningfully talk about the different classes of bias, let alone address them.

"It's a big and hard problem and is something the community is working on," he said.

Again Corso sees solutions stemming from humanity rather than technology. Rather than trusting the machine to police itself, he looks to another kind of remedy: education, increased awareness and a willingness to engage in a global conversation about the ever-growing role and ramifications of AI in modern society.

Corso cites his desire for Stevens to lead that conversation as central to SIAI's mission.

"There is a lack of social language around these topics. We're at the very early stages of inventing ideas about data ethics and bias, and we need more people writing about it, thinking about it, talking about it and working through it so that we build a better understanding of how to classify and talk about these types of issues," he said. "My goal, if the Institute is a success in 10 years, is that we will have played a part in improving the discussion for these questions."

One way in which Corso plans for SIAI to help empower the general public to play a part in this conversation is through weekend AI literacy workshops designed for non-technical audiences.

To build unbiased, ethical AI systems that ensure fairness and equitability and improve social welfare requires more than just solidly curated data sets: it requires "a necessary awareness of the technologists and the users — of the creators and the adopters — of the potential limitations and capabilities of AI," Corso said. "Yes, the creators need to be accountable for these situations. But also the users do. That's why AI literacy is so critical."

As for the creators, it's not enough to simply tell programmers to be more aware of bias. ("That's not an answer," Corso said.) Rather, he sees it as Stevens' responsibility to teach them how.

"In some sense, we are grounded in the algorithmic '80s of AI, the machine learning '90s and GPUs of the 2000s. All we think about is what methods we can do. We don't think enough and teach enough about these questions of data," he said. "So we will be reinventing the way we teach AI through the Institute at Stevens, through novel courses that not only increase awareness but also give that language and the mechanisms for investigating measurements on distributions of your data. We need to round out that conversation a lot more, and the only way to do that is through new courses specifically focused around the critical role that data plays in AI and ethical modeling of data."

"The speed of technology generation is so fast, and it's not always good," he added. "So I think the Institute will play a leading role in valuing the need to analyze what we're doing as we're doing it when it comes to AI."
