Friday, June 14, 2024

Censorship and thoughtful software


Someday we will have to think carefully about what place we give Artificial Intelligence in our lives, if systems keep making mistakes like the one that came to light a few days ago. It turns out that YouTube's algorithms - like those on other platforms - were trained to catch racist language in time and prevent inappropriate content from being posted. To be sure, it would be impossible for humans to watch and listen to the more than 500 hours of footage uploaded to the site every minute. You need an algorithm. But the algorithm can fail.

And it failed. It happened during the broadcast of a chess game by the successful Croatian YouTuber and chess player Antonio Radić, who has over a million subscribers. That game, from October of last year, was being followed by hundreds of thousands of fans when, all of a sudden, the transmission was cut off. YouTube never gave an explanation for the interruption, but service was restored within 24 hours.

Antonio Radić, Croatian YouTuber and professional chess player, lives in the USA and runs a channel (Agadmator's) with more than one million subscribers.

All kinds of speculation then arose, but now researchers from Carnegie Mellon University's Language Technologies Institute (United States) have completed an in-depth study of the case and concluded that the algorithm YouTube uses (remember, software that learns on its own from certain parameters) is making some "mistakes."

And the main mistake, in this case, was its inability to grasp the context in which certain words and phrases are said, such as "White attacks, white kills black, black is threatened, black takes white," among others. The algorithm read them as racist messages and did exactly what it was programmed to do: it pulled the plug.
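The failure mode described above can be sketched in a few lines. This is a minimal illustration, not YouTube's actual system: the trigger-word list and threshold are invented for the example, and real moderation models are far more sophisticated, but the core problem - matching words with no notion of the domain they were said in - is the same.

```python
# Hypothetical, context-blind keyword filter: flags a comment if it
# contains enough "trigger" words, without any idea whether the speaker
# is talking about people or about chess pieces.
TRIGGER_WORDS = {"white", "black", "attacks", "kills", "threatened"}

def looks_offensive(comment: str, threshold: int = 3) -> bool:
    """Return True if the comment contains at least `threshold`
    trigger words (case-insensitive, punctuation stripped)."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return len(words & TRIGGER_WORDS) >= threshold

# Innocuous chess commentary trips the filter:
chess = "White attacks, white kills black, black is threatened"
print(looks_offensive(chess))   # True: flagged, though it is about pieces
print(looks_offensive("Nice weather today"))  # False
```

A context-aware model would instead condition on the surrounding discourse (a chess stream, board terminology, player names) before deciding, which is precisely what the Carnegie Mellon researchers found lacking.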

It is not the first misinterpretation by the algorithms of social networks. From time to time, works of art appear censored on Facebook for containing nudity, for example. Or photos of women breastfeeding. There is also the case of Google Photos, which labeled a young black couple taking a selfie as "gorillas." Those are the known cases; how much content have we stopped seeing simply because a piece of software silently removed it?

There is a debate here about freedom of speech and censorship governed by the digital machines of big tech companies. But there is also the danger that content producers -artists, journalists, all of us- will adapt our opinions and worldviews to the liking of these censorious, sanctimonious programs. How far are we, really, from the day when -just to elude those algorithms- we end up playing chess with green pieces versus fuchsia ones?

Ebenezer Robbins
