Facebook to improve safety and security practices with further digitalisation


Speaking at the Facebook Asia-Pacific (APAC) Safety Press Briefing webinar, Amber Hawkes, Head of Safety Policy for Facebook in APAC, said that Facebook will digitalise further to improve how it handles security and safety threats on the social media platform.

Facebook moderates content through three teams, each with its own role:

1. Content Policy

This team writes the community standards, the rules that outline what is and is not allowed on Facebook. It includes people with expertise in topics ranging from terrorism and child safety to human rights, drawn from fields as diverse as academia, law, law enforcement, and government.

2. Community Integrity

This team builds the technology that helps enforce the community standards.

3. Global Operations

This team enforces the community standards through human review. Facebook has more than 15,000 content reviewers who review content in over 50 languages, working from more than 20 sites globally that cover every major time zone.

Facebook intends to advance its safety and security work by putting technology at the centre of these teams' efforts.

With these technology upgrades, Facebook can better identify content that violates the community standards and automatically take it down before anyone sees it.

“Between April and June this year, 99.6 percent of fake accounts, 99.8 percent of spam, 99.5 percent of violent and graphic content, 98.5 percent of terrorist content, and 99.3 percent of child nudity and sexual exploitation content that we removed from Facebook, and 95 percent of removed content overall, was identified and removed by our technology,” Hawkes said.

“Without needing someone to report it to Facebook’s security and safety team,” Hawkes added.

Previously, content flagged for review, including serious harms such as suicide, child exploitation or terrorism, was sent to human reviewers largely in chronological order, with user reports taking precedence over content flagged proactively by Facebook’s technology.

However, due to advances in technology in recent years, Facebook is now able to prioritise the content that needs reviewing by weighing several different factors, illustrated in the simplified sketch after the list below:

· Virality: Potentially violating content that is being shared quickly is given greater weight than content that is getting no shares or views.

· Severity: Content related to real-world harm, such as suicide and self-injury or child exploitation, is prioritised over less harmful types of content such as spam.

· Likelihood of violating: Content with signals indicating it may be similar to content that has previously violated Facebook’s policies is prioritised over content that does not appear to violate them.
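Facebook has not published how these factors are actually combined, so the following is only a minimal sketch of the idea: a single priority score built from assumed severity weights, a shares-per-hour proxy for virality, and a classifier's likelihood-of-violating score. All names and numbers are illustrative.

```python
# Illustrative only: Facebook has not disclosed its real ranking formula.
from dataclasses import dataclass

# Assumed severity weights per violation type (not Facebook's values).
SEVERITY_WEIGHT = {
    "suicide_self_injury": 1.0,
    "child_exploitation": 1.0,
    "terrorism": 0.9,
    "graphic_violence": 0.7,
    "spam": 0.1,
}

@dataclass
class FlaggedPost:
    post_id: str
    violation_type: str           # label suggested by an upstream classifier (assumed)
    shares_per_hour: float        # crude proxy for virality (assumed)
    violation_probability: float  # classifier's likelihood-of-violating score (assumed)

def review_priority(post: FlaggedPost) -> float:
    """Combine severity, virality and likelihood into one review-queue score."""
    severity = SEVERITY_WEIGHT.get(post.violation_type, 0.5)
    virality = min(post.shares_per_hour / 1000.0, 1.0)  # cap very fast-spreading posts
    return severity * (0.5 + 0.5 * virality) * post.violation_probability

queue = [
    FlaggedPost("post-a", "spam", shares_per_hour=5000, violation_probability=0.95),
    FlaggedPost("post-b", "suicide_self_injury", shares_per_hour=20, violation_probability=0.80),
]

# Highest score goes to human reviewers first.
for post in sorted(queue, key=review_priority, reverse=True):
    print(post.post_id, round(review_priority(post), 3))
```

Under these assumed weights, a viral but low-severity spam post still scores below a low-virality self-injury post, mirroring the severity-first ordering described in the list above.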

This also means that reviewers in the Global Operations team spend more time on complex content issues where judgment is required, and less time on lower-severity reports that the technology is capable of handling.

Facebook also applies a combination of technology, reports from the community, and human review to identify and assess content against the community standards.

Until recently, most of the technology that the team used to moderate content looked at each part of a post separately, on two dimensions: content type and violation type.

“For instance, one classifier would look at the photo for violations of our nudity policy, and another classifier would look for evidence of violence. A separate set of classifiers might look at the text of the post, or the comments,” Hawkes said.

“This can make it challenging to understand the full context of the post,” she added.

To get a more holistic understanding of the content, the team created technology called Whole Post Integrity Embeddings or WPIE.

In simple terms, this technology looks at a post in its entirety, including its images, video, and text, and looks for various policy violations simultaneously using a single classifier, instead of multiple classifiers for different content and violation types.
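Facebook has not released WPIE itself, but the general idea of scoring a whole post against several violation types with one model can be sketched roughly as follows; the embedding dimensions, violation labels, and fusion layer are assumptions for illustration, not the production design.

```python
import torch
import torch.nn as nn

class WholePostClassifier(nn.Module):
    """Toy multi-modal classifier: one model scores a whole post
    against several violation types at once (labels are illustrative)."""

    VIOLATION_TYPES = ["nudity", "violence", "hate_speech", "spam"]

    def __init__(self, text_dim=768, image_dim=512, hidden=256):
        super().__init__()
        # Single head over the fused post representation, one output per violation type.
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, len(self.VIOLATION_TYPES)),
        )

    def forward(self, text_emb, image_emb):
        # Concatenate embeddings of the whole post (text + media) and
        # score every violation type with the same classifier.
        fused = torch.cat([text_emb, image_emb], dim=-1)
        return torch.sigmoid(self.fuse(fused))

# Example: one post with pre-computed text and image embeddings (random here).
model = WholePostClassifier()
scores = model(torch.randn(1, 768), torch.randn(1, 512))
print(dict(zip(model.VIOLATION_TYPES, scores.squeeze(0).tolist())))
```

In practice the text and image embeddings would come from pretrained encoders rather than random tensors; the point of the sketch is simply that one classifier scores all violation types on a fused representation of the whole post.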

“XLM-R is a new technology that Facebook developed that can understand text in multiple languages. The model is trained in one language and can then be used with other languages without the need for additional training data or content examples.

“With people on Facebook posting content in more than 160 languages, XLM-R represents an important step toward our vision of being able to moderate content globally. It helps us transition toward a one-classifier-for-many-languages approach — as opposed to one classifier per language,” said the Head of Safety Policy for Facebook in APAC.

This is particularly important for less common languages where there may not be large volumes of data available to train the algorithm.
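The pretrained XLM-R model is publicly available, so the cross-lingual idea can be illustrated with the Hugging Face checkpoint below. Note that the two-label violation head here is untrained and purely hypothetical; in practice it would first be fine-tuned on labelled examples in one language and then applied to posts in other languages.

```python
# Sketch of the "one classifier for many languages" idea using the public
# xlm-roberta-base checkpoint. The classification head is randomly initialised
# here (hypothetical); real use would fine-tune it on labelled data first.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2)  # assumed labels: violating / benign

posts = [
    "This is an example post in English.",
    "Ini contoh kiriman dalam bahasa Indonesia.",  # Indonesian
    "これは日本語の投稿の例です。",                  # Japanese
]

# The same tokenizer and model handle every language; no per-language model.
inputs = tokenizer(posts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)
```

Because the same tokenizer and encoder cover every language, there is no separate model per language, which is the one-classifier-for-many-languages approach Hawkes describes.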
