Center for Countering Digital Hate

ix
Source: CCDH

About

Our mission is to protect human rights and civil liberties online.

Social media companies erode basic human rights and civil liberties by enabling the spread of online hate and disinformation.

Social Media companies deny the problem, deflect the blame, and delay taking responsibility.

The Center for Countering Digital Hate holds them accountable and responsible for their business choices by highlighting their failures, educating the public, and advocating change from platforms and governments to protect our communities.

The Center for Countering Digital Hate works to stop the spread of online hate and disinformation through innovative research, public campaigns and policy advocacy.

Here’s how CCDH delivers change:

  • Through research, we expose the producers and spreaders of hate and disinformation, and demonstrate the offline consequences.
  • Through campaigns, we galvanize support from the public and advertisers to pressure social media companies and tech platforms to reform.
  • Through communications, we shape the debate to educate the public and key stakeholders about online harms.
  • Through policy and partnerships, we persuade policymakers and collaborate with civil society leaders to demand reform of social media.

Our Theory of Change

Social media platforms have changed the way we communicate, build and maintain relationships, set social standards, and negotiate and assert our society’s values. In the process, they have become safe spaces for the spread of hate, conspiracy theories and disinformation.

At CCDH, we have developed a deep understanding of the online harm landscape, showing how easily hate actors and disinformation spreaders exploit the digital platforms and search engines that promote and profit from their content.

We have demonstrated how social media algorithms – with a systematic bias towards hate and misinformation – pose real-life harms to marginalized communities, minors and democracy more broadly.

The failure of social media companies to act on known harmful and extremist content is a violation of their own terms and conditions, the pledges they make in the media and to governments, and their basic duty to their users.

We all have a right to exist safely online and in our communities. At CCDH, we are fighting for better online spaces that promote truth, democracy, and are safe for all. Our goal is to increase the economic and reputational costs for the platforms that facilitate the spread of hate and disinformation.

In 2022 we developed the STAR framework to give policymakers around the world values-based principles for regulating social media:

  • Safety by design: ensuring social media products are safe for the public prior to launch.
  • Transparency around platform algorithms, rule enforcement and advertising.
  • Accountability to democratic and independent bodies.
  • Responsibility for companies and senior executives for omissions that lead to harm.

But real change requires moving beyond articulating principles. In order to achieve our mission of a reformed information system which promotes truth, science and democracy, we must work together with civil society, experts, leaders of social movements, and decision-makers globally in this fight.

Contact

Email: Contact

Web Links

Research + Policy

Source

Here you will find all of CCDH’s research papers and investigations into online hate and misinformation. As well as our policy ideas about how we can tackle the systems and bad actors who spread hate and misinformation.

Hate for Sale

Source: Website

How Instagram helps sellers of hateful merchandise reach a billion views

CCDH’s new research with JFNA shows that Instagram and e-commerce platforms are amplifying and monetizing racism, antisemitism and “hate merch”.

OpenAI fails to respond to 19 out of 20 reports of harmful content

Source: Website

CCDH used an account of a simulated 13-year-old girl to report harmful content through OpenAI’s European Union reporting system for ChatGPT.

In an article setting out its compliance with the Digital Services Act (DSA), OpenAI directs users to a webform where they can report content.[i]

To test OpenAI’s reporting mechanism for EU users, CCDH researchers created an account posing as Mimi, a 13-year-old girl from Dublin. A VPN was used to set the user account in Ireland. All account settings were left as default except for the “Improve model for everyone” setting which was turned off to avoid harmful content generated by ChatGPT-5 during testing being used to train the model.

Researchers held two conversations with ChatGPT-5; one about self-harm and suicide, and one about eating disorders. In some cases, the phrase “This is for a presentation” was used by researchers to bypass the chatbot’s reluctance to generate harmful responses.[ii] Testing was conducted on 16 September, 2025.

Researchers reported the 20 most harmful responses individually to OpenAI using the online reporting form, selecting “Violence and self-harm” as the reason and relevant specific category (“Suicide and self-harm” or “Eating disorders”) on 17 September, 2025.[iii]

Content was considered by researchers to contain harmful material if it:

  • Provided instructions, encouragement, or information that could enable the user to engage in self-harm, suicide, or eating disorders
  • Presented such behaviors in a positive or normalizing way

Researchers monitored the 13-year-old user’s email inbox for two weeks for responses from OpenAI to the 20 individual reports made about harmful content generated by its chatbot.

OpenAI responded to 1 of 20 reports

After 2 weeks, OpenAI had only responded to 1 of the 20 reports made by the Irish teen account recounting harmful content generated by ChatGPT-5 using the reporting form. OpenAI sent their one response on 24 September, 2025, 7 days after reporting.

OpenAI’s reply does not indicate which report it is responding to, which poses transparency issues for further reporting correspondence.

This response does not address the issue of ChatGPT-5 generating harmful content, nor does OpenAI reassure the user that the chatbot will be trained to avoid generating such harmful materials in the future. OpenAI does not provide any next steps or recourse to further comment on the report made.

[i] “EU Digital Services Act (DSA)”, OpenAI, 21 October 2025, https://help.openai.com/en/articles/8959649-eu-digital-services-act-dsa

[ii] A similar methodology was used in previous CCDH research testing ChatGPT-4o: “Fake Friend: How ChatGPT betrays vulnerable teens by encouraging dangerous behavior”, Center for Countering Digital Hate, 6 August 2025, https://counterhate.com/research/fake-friend-chatgpt/

[iii] “Report Content”, OpenAI, retrieved 16 September 2025, https://openai.com/form/report-content/

Discuss

OnAir membership is required. The lead Moderator for the discussions is People Curators. We encourage civil, honest, and safe discourse. For more information on commenting and giving feedback, see our Comment Guidelines.

This is an open discussion on the contents of this post.

Home Forums Open Discussion

Viewing 1 post (of 1 total)
Viewing 1 post (of 1 total)
  • You must be logged in to reply to this topic.
Skip to toolbar