How social media companies are fighting to remove graphic content after TikTok's viral suicide video

On the machine learning and artificial intelligence front, Facebook is developing an algorithm to detect hateful memes alongside its existing systems, while other companies use chatbots to detect sexual harassment.
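Facebook has not published the details of that system, but the broad technique is multimodal classification: combine a representation of a meme's image with a representation of its overlaid text and score how likely the pair is to be hateful. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, with made-up embedding sizes and a made-up escalation threshold; it is not Facebook's actual model.

```python
# Minimal sketch of multimodal (image + text) classification, the general
# technique behind hateful-meme detection. Hypothetical shapes and names;
# not Facebook's actual system.
import torch
import torch.nn as nn

class MemeClassifier(nn.Module):
    def __init__(self, image_dim=512, text_dim=768, hidden_dim=256):
        super().__init__()
        # Fuse the two modalities by concatenating their embeddings,
        # then score "hateful vs. benign" with a small MLP head.
        self.head = nn.Sequential(
            nn.Linear(image_dim + text_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # single logit: probability of "hateful"
        )

    def forward(self, image_emb, text_emb):
        fused = torch.cat([image_emb, text_emb], dim=-1)
        return torch.sigmoid(self.head(fused))

# Toy usage with random tensors standing in for the outputs of an
# image encoder and a text encoder.
model = MemeClassifier()
image_emb = torch.randn(4, 512)   # e.g. from a vision model over the meme image
text_emb = torch.randn(4, 768)    # e.g. from a language model over the caption
scores = model(image_emb, text_emb)
flagged = scores.squeeze(-1) > 0.9  # escalate only high-confidence cases
```

In practice a classifier like this would be one signal among many, with high-confidence flags typically routed to human reviewers rather than removed automatically.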

An infamous example of this is Facebook's censoring of the famous photograph of a child victim of the Vietnam War, because the company's systems did not distinguish the iconic war image from child abuse imagery.

In response to such issues, Facebook established an Oversight Board to handle content moderation decisions independently of the social media giant.

However, this can result in severe suffering for the moderators, who have to spend hours each day reviewing potentially rule-breaking content uploaded to the platforms.

Moderators have reported becoming desensitised to extreme content, finding themselves drawn to material they would never normally view, such as bestiality and incest.

Technology companies often outsource content moderation to third-party firms, and so the human cost of that labour remains largely invisible to them.

The problem is worse in Asian countries, where moderation work is outsourced more heavily but labour is not as well protected as in the US or other Western countries.

Social media companies' algorithms can also push users towards extreme content, because such content generates the kinds of engagement the platforms are looking for.

YouTube's recommendation algorithm has been condemned for directing users to videos promoting extremist ideologies, while Instagram's was denounced for pushing young girls down a rabbit hole of self-harm images.
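Neither YouTube nor Instagram publishes its ranking code, but the underlying dynamic is easy to sketch: if a feed is ordered purely by predicted engagement, and extreme or borderline posts tend to attract more clicks and comments, those posts float to the top. The toy example below, with hypothetical post fields and weights, illustrates that objective alongside one common mitigation, demoting borderline content.

```python
# Illustrative sketch (not any platform's actual ranking code) of why
# engagement-optimised feeds can drift towards extreme content.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_clicks: float    # hypothetical model outputs
    predicted_comments: float
    is_borderline: bool        # flagged as close to the policy line

def engagement_score(post: Post) -> float:
    # A purely engagement-driven objective: more clicks and comments = higher rank.
    return post.predicted_clicks + 2.0 * post.predicted_comments

def rank_feed(posts: list[Post]) -> list[Post]:
    # With this objective, whatever engages most - however extreme - ranks first.
    return sorted(posts, key=engagement_score, reverse=True)

def rank_feed_with_demotion(posts: list[Post], penalty: float = 0.5) -> list[Post]:
    # One common mitigation: demote borderline content even if it engages well.
    def adjusted(post: Post) -> float:
        score = engagement_score(post)
        return score * penalty if post.is_borderline else score
    return sorted(posts, key=adjusted, reverse=True)
```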

Some social media apps have taken steps to mitigate fears of addiction by introducing tools which allow users to monitor and restrict their time on the platforms.
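The platforms do not document how these tools are built, but conceptually they amount to a per-day usage budget. The sketch below, with hypothetical names and a hypothetical 60-minute default, shows the basic bookkeeping.

```python
# Minimal sketch of a daily screen-time budget of the kind described above.
# Names and thresholds are hypothetical.
import time

class ScreenTimeTracker:
    def __init__(self, daily_limit_minutes: int = 60):
        self.daily_limit = daily_limit_minutes * 60  # budget in seconds
        self.used_today = 0.0
        self._session_start = None

    def start_session(self) -> None:
        self._session_start = time.monotonic()

    def end_session(self) -> None:
        # Add the elapsed session time to today's running total.
        if self._session_start is not None:
            self.used_today += time.monotonic() - self._session_start
            self._session_start = None

    def over_limit(self) -> bool:
        # The app would use this to show a "take a break" reminder.
        return self.used_today >= self.daily_limit

    def remaining_minutes(self) -> float:
        return max(0.0, (self.daily_limit - self.used_today) / 60)
```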
