Meta, the parent company of Facebook, Instagram, and Threads, has introduced controversial changes to its hate speech guidelines, drawing sharp criticism from advocacy groups. The updated policies, announced as part of broader moderation shifts, allow users to allege that LGBTQ individuals are mentally ill based on their sexual orientation or gender identity.
Previously, Meta’s hate speech guidelines prohibited insults targeting intellectual capacity or mental illness. The new rules, however, include a carve-out specifically permitting allegations of mental illness or abnormality when they are based on gender or sexual orientation. According to Meta’s revised policy language, this change reflects the "common non-serious usage of words like ‘weird’" and acknowledges ongoing political and religious discourse around transgenderism and homosexuality.
The updates are part of Meta’s sweeping overhaul of its content moderation practices. CEO Mark Zuckerberg announced that the company’s fact-checking program will be replaced with a community-driven system modeled after X’s Community Notes. The new approach will let users submit and vote on notes that add context to posts, a shift Zuckerberg framed as prioritizing free expression.
As part of these changes, Meta has removed several protections previously afforded to marginalized groups. Insults about appearance, race, ethnicity, disability, gender identity, and sexual orientation are no longer explicitly banned. Additionally, Meta’s policies no longer prohibit dehumanizing language, such as referring to transgender or nonbinary individuals as “it.”
The policy changes have sparked outrage among LGBTQ advocacy groups. GLAAD, an organization dedicated to LGBTQ representation in media, strongly criticized Meta’s decision.
“Without these essential hate speech policies, Meta is giving a platform for targeted attacks against LGBTQ individuals, women, immigrants, and other vulnerable communities,” said Sarah Kate Ellis, President and CEO of GLAAD. “By normalizing anti-LGBTQ rhetoric for profit, Meta is endangering its users and undermining true freedom of expression.”
The removal of these protections has raised concerns about a rise in online harassment and dehumanizing narratives. Critics argue that the changes could embolden users to spread harmful content under the guise of political or religious discourse. Meta has yet to comment publicly on the backlash or clarify how the new policies will be enforced to prevent abuse.
Adding to the controversy, Meta has made significant contributions to political figures, including a $1 million donation to President-elect Donald Trump’s second inaugural fund. The company also announced the appointment of UFC CEO Dana White, a known Trump supporter, to its board of directors. These moves have prompted questions about Meta’s motivations and its stance on content moderation in an increasingly polarized political environment.
As Meta reshapes its moderation policies, the implications for marginalized communities and online discourse remain uncertain. Advocacy groups are calling for accountability and urging the company to reinstate the protections it has removed. While Meta positions the changes as a move toward free speech, critics argue that they risk normalizing hate speech and amplifying harm.