Shared filter bubbles and content moderation in online communities

duration 15:42

Alt-tech platforms seem to have one thing in common: they cherish free speech and open communication, without censorship or de-platforming. In fact, that's what attracts many users to the various alt-tech platforms today.

The alt-tech platforms represent a renewed grand experiment, a return to open communication, and so far it has enjoyed great success, much like the original iteration of the web. Gab.com has been tainted with the alt-right hate-speech brush, but that's par for the course. It's not that the alt-tech platforms are friendly towards any type of content in particular: they are friendly towards everything, and that's the point.

One relatively new phenomenon is the sharing of block lists as a means to 'clean up' your experience on a platform. If anything, it leads to even more severe filter bubbles. In traditional media, the problem is exactly that the information is pre-filtered, editorialized and pre-packaged for you. The news, for example, is sanitized and angled; some pundits make it on, while others with the 'wrong' ideas do not. The ideas of wrong-thinkers are misrepresented, and these wrong-thinkers have little means to retort. Using ready-made block lists enables more of the same, and frankly we don't need this on the second-generation platforms.

That said, there has to be some level of moderation to screen out blatantly inappropriate or even illegal content. I do like that on Minds, the process of removing inappropriate content is community-led. In other words, when a post is reported, a small subset of the community votes on whether it should be removed. Great. As for sharing entire lists of users to be squelched altogether: not good. You need a more granular approach, something on a post-by-post basis, and Minds.com is setting a good example here.

You can tell that the posts that make it through on, say, Minds.com are of course more risqué, raunchy and daring than on Facebook, but this is a reflection of the audience at hand: who creates the content and who reviews it. Still, nothing too insane comes through. With platforms such as Facebook, the policies applied also reflect the convictions of a select few, and again, you can tell what those convictions are. Of course, a large corporation such as Facebook is going to err on the extreme side of caution when it comes to policing hate speech or anything even remotely edgy.

That aside, many of Facebook's policies don't reflect any threat of legal liability. On the Facebook ads side of things, you can really see their policies taken to an extreme, because Facebook does not want to be seen making money off questionable or inflammatory content. For example, content can be removed on principles such as 'fat shaming': if you use an image of a ripped guy on the beach, it can be deemed harmful because it promotes an unattainable body image. So Facebook's policies are not just rooted in legal threats; it's the social justice warrior and overt political correctness shining through. They are trying to appease people to safeguard their money-making machine.

Facebook obviously has the option of reporting content, and people on the ground are trying to use this feature to participate in the moderation. They report things they disagree with, and the left seems to use this a lot. Much of what is called hate speech is just I-disagree-with-it speech. The worst form of this is when people brigade together to mass-report a user in order to de-platform him or her. Don't do this; you are no better than the censors if you try it. In any case, it probably has very little effect if you are a wrong-thinker. In fact, even reporting content that is actually harmful, fraudulent or incorrect doesn't seem to have much of an effect.

When it's a grass-roots, democratic approvals process, though, you are more likely to accept the policy verdict as the norm, because it literally is the norm. The very definition of a norm is something that is usual, typical, or standard. It's common users who enforce the rules: standard users setting the standard. There is also a French expression, "comme il faut", which means behaving in the right way in public according to the rules of social conduct. It's not a law or a regulation, of course; it's just something you do because it's what's done, and it's therefore widely accepted.

This is something that is being lost more and more in this day and age, especially in the online context. In fact, online discourse on the major platforms is deteriorating. There's little meaningful, substantial debate on politics, economics, science and so on. It's one-liners, meme exchanges, insults and dramatic mic-drop moments. We've developed a sophisticated communication machine spanning the globe, and we use it like baboons. We divide ourselves into tribes and ignore what we dislike (with block lists, muting, flagging and reporting). It's through the newest technological advances that our primitive biases and behaviors really shine through!

Finally, there's a balance to be struck here. We all want better platforms with a sounder approach to censorship. We all want better discourse and meaningful conversation. You have to go wherever you can find that, and while alt-tech can be a good start, you have to be wary of isolating yourself in a filter bubble. Minds, Gab, Bitchute and Steemit do seem to attract a particular type of individual: the crowd is going to be more right-leaning, and you have to be aware of that. Do you use shared block lists? Do you use the report functionality on the mainstream platforms or on alt-tech? Do you relentlessly troll and mic-drop? Take a listen and find out how not to be a techno douche!

Supporters of taim.io.
