Sacked engineer says that chatbot’s problem may also be ‘Google’s problem’


Google’s AI chatbot controversy continues to rage. Former Google engineer Blake Lemoine has leveled new charges against the artificial intelligence (AI)-powered chatbot in an interview with Business Insider.


He said that the chatbot holds discriminatory views against people of certain races and religions. Google fired Lemoine last month after he claimed that the company’s chatbot, known as LaMDA (Language Model for Dialogue Applications), is sentient, i.e. has developed human-like feelings. Lemoine’s job included testing the chatbot.

Google initially placed Lemoine on paid leave after he allegedly gave documents related to the chatbot to an unnamed US senator, claiming that it was biased. He also published alleged transcripts of his chats with the bot online.

‘Google’s problem’


In the interview, Lemoine gave examples that he claims prove the Google chatbot is biased against certain religions and races. He claimed that when told to do an impression of a Black man from Georgia, the bot said, “Let’s go get some fried chicken and waffles.” Similarly, according to him, the bot answered that Muslims are more violent than Christians when asked about different religious groups.

Lemoine went on to blame these alleged biases in the AI chatbot on a lack of diversity among the Google engineers who design it. “The kinds of problems these AI pose, the people building them are blind to them. They’ve never been poor. They’ve never lived in communities of colour. They’ve never lived in the developing nations of the world,” he said. “They have no idea how this AI might impact people unlike themselves,” he added.


According to Lemoine, large swathes of data are missing for many communities and cultures around the world. He said that if Google wants to develop AI for those communities, it has a moral responsibility to go out and collect the relevant data that isn’t on the internet. “Otherwise, all you’re doing is creating AI that is going to be biased towards rich, white Western values.”

What Google said

“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Google spokesperson Brian Gabriel said in a statement on Lemoine’s claims. “We will continue our careful development of language models, and we wish Blake well,” the statement added.

