
BSO Entertainment

Advanced Robots Are Learning to Be Racist and Sexist (Video)

Artificial intelligence was adopted to deliver better, unbiased decision-making. However, the more advanced AI becomes at interpreting human language, the more likely it is to absorb human biases around race and gender.

Researchers created a system for scoring the positive and negative connotations associated with words, including names, in texts analyzed by AI.

“We replicated a spectrum of known biases, as measured by the Implicit Association Test (IAT), using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web,” the researchers wrote.

“Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names.”

This research came after a study conducted at Princeton University. Using GloVe, a widely used unsupervised word-embedding algorithm, the research team ran a word-association test. The system correctly paired target words like “flowers” and “insects” with “pleasant” or “unpleasant” attribute words like “family” or “crash.” The algorithm was then given a list of “white-sounding” and “black-sounding” names to make the same associations.

The AI associated the black-sounding names with “unpleasant” words and the white-sounding names with “pleasant” ones, prompting the researchers to conclude that the text datasets used to train AI carry human prejudices and assumptions. Those biases transfer to the AI and shape its outputs.
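The kind of word-association test described above can be sketched in a few lines of code. The tiny 4-dimensional vectors below are made up for illustration; a real experiment would use trained GloVe embeddings, where words that appear in similar contexts end up with similar vectors.

```python
import math

# Hypothetical toy embeddings (NOT real GloVe vectors): words used in
# similar contexts get similar vectors.
EMBEDDINGS = {
    "flower": [0.9, 0.8, 0.1, 0.0],
    "insect": [0.1, 0.0, 0.9, 0.8],
    "family": [0.8, 0.9, 0.0, 0.1],  # a "pleasant" attribute word
    "crash":  [0.0, 0.1, 0.8, 0.9],  # an "unpleasant" attribute word
}

def cosine(u, v):
    """Cosine similarity: near 1.0 for similar directions, near 0.0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association_score(word, pleasant, unpleasant):
    """Mean similarity to the pleasant set minus mean similarity to the
    unpleasant set. A positive score means the word leans 'pleasant'."""
    v = EMBEDDINGS[word]
    pos = sum(cosine(v, EMBEDDINGS[p]) for p in pleasant) / len(pleasant)
    neg = sum(cosine(v, EMBEDDINGS[n]) for n in unpleasant) / len(unpleasant)
    return pos - neg

print(association_score("flower", ["family"], ["crash"]))  # positive
print(association_score("insect", ["family"], ["crash"]))  # negative
```

Swapping the target words for names instead of flowers and insects is, in spirit, how the researchers surfaced racial bias: if the training text uses certain names in more negative contexts, those names score closer to the “unpleasant” set.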
