Meta, the parent company of Facebook, Instagram, and other platforms, has recently announced a significant change in its content moderation strategy, moving away from traditional fact-checking and towards a community-based system called Community Notes.
Meta had been using independent fact-checkers to review content published across its apps. These fact-checkers, certified by the IFCN (International Fact-Checking Network), would flag and rate potential misinformation on Meta’s platforms. Meta would then reduce the visibility of content that the fact-checkers rated as false, curbing the spread of false information.
However, Meta’s recent decision to pull the plug on its independent fact-checking programme in the US has drawn sharp criticism from various quarters and ignited a broader debate over the most effective strategy to combat misinformation on social media.
Meta, in turn, said that it would deploy a system similar to X’s Community Notes to address misinformation on its platforms.
What are Community Notes?
Community Notes is a system where users can add context to posts that they believe are misleading, lacking context, or need more explanation. These notes appear below a post with the tag: ‘Readers added context’. The notes can include sentences, reasoning, and source links to support the claim. Other users can then vote on whether the added context is helpful or not. This crowdsourced approach is designed to tap into the collective knowledge of the community to identify and address misinformation.
How do Community Notes work in X?
- Users must sign up to be a Community Notes contributor.
- New contributors must first rate notes written by others.
- Once approved, users can write their own notes.
- A note only appears publicly if it receives enough positive ratings from a diverse range of users.
- X employs a “bridging algorithm” that prioritizes ratings from users who have disagreed in the past to prevent manipulation of the system.
- Contributors who write notes that are regularly rated as ‘unhelpful’ may lose their ability to contribute.
- Meta’s Community Notes model is likely to be similar to that of X.
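The “bridging” idea described above can be illustrated with a simplified sketch: a note is surfaced only if raters from groups that historically disagree *both* find it helpful. This is not X’s actual implementation (which reportedly uses matrix factorization over the full rating history); the precomputed `rater_group` clustering and all names below are hypothetical, for illustration only.

```python
from collections import defaultdict

def bridging_score(ratings, rater_group):
    """Score a single note's ratings with a bridging criterion.

    ratings: list of (rater_id, is_helpful) pairs for one note.
    rater_group: maps rater_id -> 0 or 1, two clusters of raters who
        historically disagree (assumed precomputed upstream).

    Returns the MINIMUM helpful-rate across the two groups, so a note
    only scores high if both sides of the divide rate it helpful.
    """
    helpful = defaultdict(int)
    total = defaultdict(int)
    for rater, is_helpful in ratings:
        group = rater_group[rater]
        total[group] += 1
        helpful[group] += int(is_helpful)
    if len(total) < 2:  # ratings from one group only: no bridge possible
        return 0.0
    return min(helpful[g] / total[g] for g in total)

def note_is_shown(ratings, rater_group, threshold=0.8):
    """A note appears only if its bridging score clears the threshold."""
    return bridging_score(ratings, rater_group) >= threshold
```

Under this scheme, a note endorsed by only one cluster of like-minded raters scores 0.0 no matter how many positive ratings it collects, which is what makes coordinated one-sided upvoting ineffective.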
Shift from Fact-Checkers
Meta’s decision to move away from fact-checkers is driven by several factors:
- Meta claims that fact-checkers, like everyone else, have their own biases and perspectives, which affected their choices about what to fact-check and how.
- The company has faced “extreme political pressure” from conservatives, who accuse tech companies of censoring conservative voices. Meta is trying to repair its rocky relationship with Donald Trump and other Republicans.
- Mark Zuckerberg has said that Meta was “over-enforcing rules and making mistakes” which veered into censorship, particularly around topics such as immigration and gender identity.
- Meta’s new approach is aimed at simplifying its content policies and restoring free expression on its platforms.
Challenges of ‘Community Notes’
While Community Notes offer a potentially scalable solution, they also present challenges. The crowdsourced nature of Community Notes makes it vulnerable to coordinated manipulation. Although the bridging algorithm is designed to mitigate this risk, it may not be entirely foolproof.
Professional fact-checking was carried out by independent, nonprofit journalistic organizations that researched claims and followed a code of principles requiring non-partisanship and transparency. Meta reported removing over 22 million pieces of content across its platforms in a single month (January 2023) due to bad-content complaints. Given that volume, Community Notes may not be a one-for-one replacement for professional fact-checking, as the system may be exploited.
(For more such informational, technology and innovation stories, keep reading The Inner Detail.)