LGBTQ+ Safety

Policies

Our Community Standards are designed to help people understand what is acceptable to share on Facebook and Instagram. For example, these standards prohibit hate speech, bullying, and harassment. This includes content that directly attacks people based on their race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, disability or disease.

When we believe there is a genuine risk of physical harm or a direct threat to public safety, we remove content, disable accounts, and work with local emergency services.

We do not allow content created for the express purpose of outing an individual as a member of a designated and recognizable at-risk group, nor content that puts LGBTQ+ people at risk by revealing their sexual orientation or gender identity against their will or without their permission.

In addition to the standards above, here are other key policies that support the LGBTQ+ community online:


Hate Speech Against the LGBTQ+ Community Is Prohibited

We believe that people use their voice and connect more freely when they don’t feel attacked on the basis of who they are. We don’t allow hate speech on Facebook. It creates an environment of intimidation and exclusion, leads to dehumanization and in some cases can promote offline violence. We know, however, that things are constantly evolving in this space. That’s why we partner with experts and organizations to stay ahead of trends and keep our policies updated.

We define hate speech as a direct attack against people, rather than concepts or institutions, on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and serious disease.

We define attacks as violent or dehumanizing speech; harmful stereotypes; statements of inferiority; expressions of contempt, disgust, or dismissal; cursing; and calls for exclusion or segregation. We define harmful stereotypes as dehumanizing comparisons that have historically been used to attack, intimidate, or exclude specific groups, and that are often linked with offline violence.

Learn more: https://www.facebook.com/communitystandards/objectionable_content


We Do Not Tolerate Bullying Behavior

Bullying and harassment happen in many places and come in many different forms, from making threats and releasing personally identifiable information to sending threatening messages and making unwanted malicious contact. We do not tolerate this kind of behavior because it prevents people from feeling safe and respected on Facebook.

We distinguish between public figures and private individuals because we want to allow discussion, which often includes critical commentary of people who are featured in the news or who have a large public audience. For public figures, we remove attacks that use derogatory terms, calls for sexual assault or exploitation, calls for mass harassment, and threats to release private information. For private individuals, our protection goes further. We remove content that's meant to degrade or shame someone for their sexual orientation or gender identity, among other protections.

We recognize that bullying and harassment can have a greater emotional and physical impact on minors, which is why our policies provide additional protections for young people.

To learn more, visit https://www.facebook.com/communitystandards/bullying.


Non-consensual Intimate Images

The non-consensual sharing of intimate images violates Facebook’s policies, as do threats to share those images. We remove images shared in revenge or without permission, as well as photos or videos depicting incidents of sexual violence. We also remove content that threatens or promotes sexual violence or exploitation. If someone has shared, or has threatened to share, your intimate images without your consent, report it by using the “Report” link that appears when you tap the downward arrow or “...” next to a post. There are also organizations and resources available to support you.

Once you submit a report, specially trained representatives from Facebook’s Community Operations team will review the reported image and remove it if it violates our Community Standards. In most cases, we will also disable the account that shared the intimate images without permission. We then use photo-matching technologies to help thwart further attempts to share the image on Facebook and Instagram. If someone tries to share the image after it has been reported and removed, we will alert them that it violates our policies and that we have stopped their attempt to share it. For more detailed information on how to report a photo, please visit the Help Center.
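For readers curious how photo-matching can work at a technical level, the core idea is perceptual hashing: each removed image is reduced to a compact fingerprint, and new uploads are fingerprinted and compared against the stored set. The sketch below is a minimal illustration in Python, assuming the open-source imagehash and Pillow libraries; the function names, in-memory blocklist, and distance threshold are hypothetical, and production systems (such as Facebook’s open-sourced PDQ hash) are considerably more sophisticated.

    # Minimal sketch of hash-based photo matching (illustrative only).
    # Assumes the open-source "imagehash" and Pillow libraries; the
    # blocklist and threshold below are hypothetical, not Facebook's
    # actual implementation.
    import imagehash
    from PIL import Image

    # Hypothetical store of fingerprints of reported-and-removed images.
    blocked_hashes: list[imagehash.ImageHash] = []

    def register_removed_image(path: str) -> None:
        # Fingerprint a removed image so later re-uploads can be detected.
        blocked_hashes.append(imagehash.phash(Image.open(path)))

    def matches_removed_image(path: str, max_distance: int = 8) -> bool:
        # The "-" operator on imagehash fingerprints returns the Hamming
        # distance; a small distance means the images are near-duplicates.
        candidate = imagehash.phash(Image.open(path))
        return any(candidate - blocked <= max_distance
                   for blocked in blocked_hashes)

Comparing Hamming distance rather than requiring exact equality is what lets near-duplicates, such as a resized or lightly re-encoded copy of a removed image, still be caught.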

We also encourage you to report sextortion, in which someone threatens or forces another person to share intimate photos or videos. This is against Facebook's Community Standards and, in some instances, also against the law. You can report the people threatening you, their threats, and any images they’ve shared by clicking the “...” in the upper-right corner of any post. To learn more, visit the Safety Center or the Help Center.


How to Report Content that May Violate Our Policies

If you want to report something that goes against our Community Standards, such as hate speech, bullying and harassment, or violence, go to the content you want to report and use the Find Support or Report link near it; this is the best way to report abusive content on Facebook. We have teams of experts who review reports of violating content 24/7 in more than 70 languages, as well as artificial intelligence technology that finds and removes violating content before users even see it.


Transparency and Accountability

At Facebook, we strive to be open and proactive in the way we safeguard users’ privacy, security, and access to information online. We have published twice-yearly transparency reports since 2013. We also release a quarterly Community Standards Enforcement Report, which includes data on the actions we take against violating content on Facebook, Messenger, and Instagram. We believe that increased transparency tends to lead to greater accountability and responsibility over time, and that publishing this information will push us to improve more quickly as well.

In our Community Standards Enforcement Report, you will find information on the amount of content we removed for violating policies, including those policies related to the safety of the LGBTQ+ community online, as well as information on trends and prevalence. You can read our most recent enforcement report here.