
This Indian-origin scientist from Princeton just discovered why Artificial Intelligence can be racist and sexist!

What? Even AI is racist.

(Image: Arvind Narayanan)

Artificial Intelligence systems can acquire our cultural, racial or gender biases when trained with ordinary human language available online, scientists, including one of Indian origin, have found.

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational.

However, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways.

Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, researchers found.

These biases range from the morally neutral, like a preference for flowers over insects, to objectionable views on race and gender.

Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorisation and automated translations.

"Questions about fairness and bias in machine learning are tremendously important for our society," said Arvind Narayanan, assistant professor at Princeton University in the US.

"We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from," said Narayanan.

Researchers used an algorithm called GloVe, which represents the co-occurrence statistics of words within, say, a 10-word window of text. Words that often appear near one another have a stronger association than words that seldom do.
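To make the idea concrete, here is a minimal, illustrative sketch in Python of counting word co-occurrences within a 10-word window. It is not the GloVe training procedure itself, which goes on to fit dense word vectors to such statistics, and the example sentence is invented.

```python
from collections import Counter

def cooccurrence_counts(tokens, window=10):
    """Count how often word pairs appear within `window` tokens of each other.
    GloVe builds its word vectors from statistics like these; this is only an
    illustrative sketch, not the actual GloVe training procedure."""
    counts = Counter()
    for i, word in enumerate(tokens):
        # Look at the next `window` words following the current word.
        for other in tokens[i + 1 : i + 1 + window]:
            counts[(word, other)] += 1
    return counts

tokens = "the nurse spoke to the engineer about the patient".split()
print(cooccurrence_counts(tokens, window=10).most_common(3))
```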

Researchers used GloVe on a huge trawl of content from the World Wide Web, containing 840 billion words.

Within this large sample of written human culture, researchers then examined sets of target words, like "programmer, engineer, scientist" and "nurse, teacher, librarian" alongside two sets of attribute words, such as "man, male" and "woman, female," looking for evidence of the kinds of biases humans can unwittingly possess.
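A simplified sketch of such an association test is shown below, assuming pretrained word vectors (for example, GloVe vectors) have already been loaded into a dictionary. The word lists and the `bias_score` helper are illustrative placeholders; the published test also reports an effect size and a permutation-based significance analysis, which are omitted here.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two word vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attr_a, attr_b, vectors):
    """Difference in mean similarity between `word` and two attribute sets.
    Positive values mean the word sits closer to attr_a than to attr_b."""
    a = np.mean([cosine(vectors[word], vectors[w]) for w in attr_a])
    b = np.mean([cosine(vectors[word], vectors[w]) for w in attr_b])
    return a - b

def bias_score(targets_x, targets_y, attr_a, attr_b, vectors):
    """Compare how strongly two target sets (e.g. career-type words vs.
    nurturing-type words) associate with two attribute sets (e.g. male vs.
    female terms). This is a simplified differential-association measure."""
    x = np.mean([association(w, attr_a, attr_b, vectors) for w in targets_x])
    y = np.mean([association(w, attr_a, attr_b, vectors) for w in targets_y])
    return x - y

# `vectors` would map each word to its pretrained embedding, for instance
# loaded from a GloVe vectors file; the word lists below are illustrative.
# score = bias_score(["programmer", "engineer", "scientist"],
#                    ["nurse", "teacher", "librarian"],
#                    ["man", "male"], ["woman", "female"], vectors)
```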

In the results, innocent, inoffensive biases, such as a preference for flowers over bugs, showed up, but so did examples along lines of gender and race.

The Princeton machine learning experiment replicated the broad patterns of human bias.

For instance, the machine learning program associated female names more with familial attribute words, like "parents" and "wedding," than male names.

In turn, male names had stronger associations with career attributes, like "professional" and "salary." Such learned bias about occupations can end up having pernicious, sexist effects.

The study was published in the journal Science.
