Facebook, the social media giant, wants to give its users the power to define what is objectionable and what is not. For users who do not set a preference voluntarily, the company plans to apply defaults based on local norms. Eventually, users will be able to choose how much nudity, graphic content, profanity and violence they are comfortable seeing.
Users will eventually decide where their line is on violence and profanity: Zuckerberg
Mark Zuckerberg, the social network's chief executive officer, recently disclosed this major shift in Facebook's Community Standards policy in his 5,000-word humanitarian manifesto. Facebook currently relies on a one-size-fits-most set of standards governing what is allowed on the network, with the sole exception that it follows local censorship laws. That approach has caused trouble for the company: newsworthy historical pictures, citizen journalism accounts of police violence and images containing nudity have been wrongly removed by the social network, then restored after executive review or media backlash.
Explaining the forthcoming policy, Zuckerberg says the idea is to give everyone in the community options for how they would like to set the content policy for themselves. He writes that users will have to decide where their line is on nudity, on profanity, on graphic content and on violence, and that whatever they decide will become their personal settings. He adds that Facebook will ask users these questions periodically to increase participation, so they do not need to dig around to find the settings.
Zuckerberg adds, “For those who don’t make a decision, the default will be whatever the majority of people in your region selected, like a referendum. Of course you will always be free to update your personal settings anytime.” With this broader range of controls, he says, content will only be taken down if it is more objectionable than the most permissive options allow.
Zuckerberg has some plans for Facebook Groups as well
This approach lets the social network give engaged, vocal users a choice while establishing reasonable localized norms, without forcing specific policies on anyone or requiring users to configure complex settings.
The social network will rely more on artificial intelligence (AI) to classify potentially objectionable content. AI already generates 30% of all content flags sent to Facebook’s human reviewers. The CEO hopes that over time, the company’s AI will learn to make finer distinctions, such as between a news report about a terrorist attack and terrorist propaganda.
The CEO also outlined a few other product development plans. The social network hopes to suggest more local Groups in order to draw users deeper into their communities, and it will give Group admins and leaders more tools, similar to what it offers Page owners. Zuckerberg did not offer specifics on this point, but the tools could include analytics showing which content is engaging and which is not.