The chat software, available on Facebook Messenger, spouted nauseating comments learned from some Internet users before being disabled.
A very popular South Korean chatbot, which lets Internet users chat with what is presented as a 20-year-old female student, was deactivated this week after making sexist, homophobic and ableist comments.
Lee Luda, developed by the Seoul-based startup Scatter Lab to run via Facebook Messenger, had been highly successful at its launch in December due to the spontaneity and naturalness of the responses from artificial intelligence. It had quickly attracted more than 750,000 users.
Lee Luda generated its responses using an algorithm trained on data collected from 10 billion conversations on Kakao Talk, the nation’s leading messaging app.
But the chatbot was quickly at the center of controversy due to its hateful responses, to the point that its developers were forced to suspend it on Tuesday.
In the screenshot of a conversation, we can see the virtual student stating that she “despises” gays and lesbians.
When asked about transgender people, Luda explodes: “You drive me crazy. Don’t repeat that question again. I said I didn’t like them.”
In another conversation, she says that the people behind the #MeToo movement were “just ignorant”, adding: “I totally despise them.”
She also stated that she “would rather die” than live with a disabled person.
A chatbot taken offline
Scatter Lab apologized for the comments, adding that they did not represent the values of the company.
This isn’t the first time chatbots have gone off the rails. But the embarrassment is compounded by the fact that Lee Luda was trained on real past conversations, and its missteps could indicate that such views are taking hold in South Korean society.
Scatter Lab said it had worked to prevent such problems during the six months of testing that preceded the chatbot’s launch.
“Lee Luda is artificial intelligence, like a little girl learning to have a conversation. She still has a lot to learn,” the company said Tuesday.
“We are going to educate Luda to judge which responses are appropriate, rather than indiscriminate learning,” Scatter Lab continued without saying when the chatbot would be back in service.
This case recalls the controversy surrounding an artificial intelligence created by Microsoft in 2016, embodied in a chatbot on Twitter. After interacting with users, it spread racist and sexist comments, even promoting Nazism. The American company immediately took it offline.