Facebook's forever war on misinformation fails to answer whether users are safer

Facebook spent 3 hours detailing its efforts to fight misinformation on Wednesday, highlighting points of improvement but leaving unanswered the overarching question of whether users are safer than they were 2 years ago.

The good: Facebook is getting better at both detecting and removing some types of content, with a particular focus on efforts to subvert democratic elections.

The ugly: Facebook's pledge to shift toward private, encrypted conversations is likely to make it harder for the company to monitor and remove objectionable content.

Facebook executives acknowledged the issue Wednesday, but declined to offer any specifics on how the company will deal with it.

Between the lines: In most cases, Facebook isn't looking to remove false information outright; instead, it works to keep such content from being viewed and shared as widely.

Facebook faces a tough challenge as it looks to reduce the visibility of content that approaches, but doesn't violate, its standards.

What they're saying: Asked whether Facebook believes users are safer than in years past, VP of integrity Guy Rosen told Axios that Facebook is doing better but stopped short of claiming users are safer.

Meanwhile, Rosen also said at the event that Facebook is still several months away from delivering the Clear History tool it originally promised for last year, which is meant to give users more control over their privacy.

Facebook is, in effect, engaged in a long-term war of attrition with some of its own users over the boundaries of acceptable speech on its platform.
