Meta shifts to crowdsourcing in misinformation fight


By Anjana Susarla, Michigan State University

Meta's decision to change its content moderation policies by replacing centralized fact-checking teams with user-generated community labeling has stirred up a storm of reactions. But taken at face value, the changes raise the question of the effectiveness of Meta's old policy, fact-checking, and its new one, community comments.

With billions of people worldwide accessing its services, platforms such as Meta's Facebook and Instagram have a responsibility to ensure that users are not harmed by consumer fraud, hate speech, misinformation or other online ills. Given the scale of this problem, combating online harms is a serious societal challenge. Content moderation plays a role in addressing these online harms.

Moderating content involves three steps. The first is scanning online content – typically, social media posts – to detect potentially harmful words or images. The second is assessing whether the flagged content violates the law or the platform's terms of service. The third is intervening in some way. Interventions include removing posts, adding warning labels to posts, and diminishing how much a post can be seen or shared.

Content moderation can range from user-driven moderation models on community-based platforms such as Wikipedia to centralized content moderation models...
