Online Child Protection

Overview

Across our family of apps, we take a comprehensive approach to child safety that includes zero-tolerance policies prohibiting child exploitation, cutting-edge technology to prevent, detect, remove and report policy violations, and resources and support for victims. Because our commitment to safeguarding children extends beyond our apps to the broader internet, we also collaborate with child safety experts across industry and civil society around the world to fight the online exploitation of children.


Our Efforts

At Facebook, our work on child safety has spanned over a decade. Our industry-leading efforts to combat child exploitation follow a three-pronged approach: prevent this abhorrent harm in the first place; detect, remove and report exploitative activity that escapes these efforts; and work with experts and authorities to keep children safe. We apply this approach both to the public spaces of our platform, like Pages, Groups, and Profiles, and to our private messaging services, like Messenger. In addition to zero-tolerance policies and cutting-edge safety technology, we make it easy for people to report potential harms, and we use technology to prioritize and respond swiftly to these reports. Specially trained teams with backgrounds in law enforcement, online safety, analytics, and forensic investigations review potentially violating content and report findings to the National Center for Missing and Exploited Children (NCMEC). To learn more about the specifics of our approach to safety on WhatsApp, go here; for Instagram, go here.


Understanding Offenders

Understanding how and why people share child exploitative content is critically important to deploying effective, comprehensive solutions to combat it. To help build that understanding, we conducted an in-depth analysis of the illegal child exploitative content we reported to NCMEC from Facebook and Instagram during two representative months, and we used what we learned to deploy new tools and launch new programs targeted at reducing the sharing of this abhorrent content. Here’s what we found:
  • More than 90% of this content was the same as or visually similar to previously reported content (a sketch of how such matching can work follows this list).
  • Copies of just six videos were responsible for more than half of the child exploitative content we reported in that time period.
  • While this data indicates that the number of pieces of content does not equal the number of victims, and that the same content, potentially slightly altered, is being shared repeatedly, one victim of this horrible crime is one too many.
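The prevalence of identical and near-identical content is why photo- and video-matching technology is central to detection at scale: a hash of newly uploaded media can be compared against hashes of previously reported content. The snippet below is a minimal sketch of that idea, assuming a perceptual-hashing scheme such as the open-source PDQ algorithm; the hash values, threshold, and function names are illustrative stand-ins, not a description of our production systems.

    # Illustrative sketch only: compare the perceptual hash of an upload against
    # hashes of previously reported content. Hash values, the threshold, and
    # function names are hypothetical placeholders.

    MATCH_THRESHOLD = 31  # assumed maximum Hamming distance treated as "visually similar"

    def hamming_distance(hash_a: int, hash_b: int) -> int:
        """Count the bits that differ between two fixed-length perceptual hashes."""
        return bin(hash_a ^ hash_b).count("1")

    def matches_known_content(upload_hash: int, known_hashes: list[int]) -> bool:
        """True if the upload is identical or visually similar to reported content."""
        return any(hamming_distance(upload_hash, h) <= MATCH_THRESHOLD for h in known_hashes)

    # Dummy hash bank representing previously reported content
    known_hashes = [0x9F3A5C710D2EB844, 0x1B2C3D4E5F607182]

    # An upload differing from a known hash by only a few bits is flagged for review
    if matches_known_content(0x9F3A5C710D2EB845, known_hashes):
        print("Match found: route for review and reporting")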

The fact that only a few pieces of content were responsible for many reports suggests that a greater understanding of intent could help us prevent this revictimization. With that in mind, we worked with leading experts on child exploitation, including NCMEC, to develop a research-backed taxonomy to categorize a person’s apparent intent in sharing this content. Based on this taxonomy, we evaluated 150 accounts that we reported to NCMEC for uploading CSAM (child sexual abuse material) in July and August of 2020 and January 2021, and we estimate that more than 75% did not exhibit malicious intent (i.e., they did not intend to harm a child). Instead, these accounts appeared to share the content for other reasons, such as outrage or poor humor. While this study represents our best understanding, these findings should not be considered a precise measure of the child safety ecosystem.

Based on our initial findings, we are developing targeted solutions, including new tools and policies, to reduce the sharing of this type of content. We’ve started by testing two new tools: one aimed at potentially malicious searches for this content and another aimed at the sharing of this content for reasons other than to harm a child. The first intervention is a pop-up shown to people who initiate searches on Facebook using terms associated with child exploitation; it offers offender diversion resources from child protection organizations and explains the consequences of viewing illegal content. The second is a safety alert for people who have shared viral memes of child exploitative content; it informs them of the harm this causes the victim and warns that the material is against our policies and that there are legal consequences for sharing it. We are also working with experts on public awareness campaigns to help people understand that, no matter the reason, resharing this content is illegal and revictimizes the child, and that they should formally report it instead.
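As a rough illustration of how a search deterrence pop-up of this kind can be wired up, the sketch below checks an incoming query against a list of blocked terms and, on a match, returns resource information instead of search results. The term list, messages, and function names are simplified placeholders, not our actual implementation.

    # Illustrative sketch only: show a deterrence pop-up for searches that use
    # terms associated with child exploitation. Terms, messages, and structure
    # are hypothetical placeholders.

    BLOCKED_TERMS = {"example_blocked_term_1", "example_blocked_term_2"}  # placeholders

    DETERRENCE_POPUP = {
        "message": "Viewing or sharing child sexual abuse material is illegal.",
        "resources": ["offender diversion resources from child protection organizations"],
    }

    def run_search(query: str) -> list:
        """Placeholder for the ordinary search backend."""
        return []

    def handle_search(query: str) -> dict:
        """Return a deterrence pop-up for matching queries; otherwise return results."""
        if set(query.lower().split()) & BLOCKED_TERMS:
            return {"type": "popup", **DETERRENCE_POPUP}
        return {"type": "results", "results": run_search(query)}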

Our work to understand intent is ongoing, and based on our findings we will continue to develop targeted solutions, including new tools and policies, to reduce the sharing of this type of content across both our public platforms and our private messaging services.