Research
The Integrity Institute’s research comes directly from the people who know platforms best—550+ trust and safety professionals from 80+ tech companies. We transform their expertise into original research, industry best practices, and policy guidance.
All Research

AI Chatbots and Youth Mental Health: A Review of Research
Integrity Institute staff and members have compiled a review of research on the impact of LLM-powered chatbots on youth mental health. Building on work conducted with Institute members through the Generative Identity Initiative, the report identifies key themes in the research on LLMs and youth mental health.

Prevention by Design: A Roadmap for Tackling TFGBV at the Source
While existing solutions to tech-facilitated gender-based violence (TFGBV)—such as content moderation—address harm reactively, they place the burden of safety on victims. We partnered with the Council on Technology & Social Cohesion to advocate for a proactive, design-focused approach that embeds safety and user empowerment into social media platform design.

Risk Dimensions and Mitigation Effectiveness
A recommendation on how assessing the scale, cause, and nature of risks enables platforms to justify and implement more effective mitigation measures.

Risk Assessment Guidance and Initial Analysis
An overview of the Integrity Institute’s risk assessment methodology, providing a structured approach for regulators to more comprehensively assess platform risks.

Global Transparency Audit
We’ve developed a comprehensive evaluation of the current state of platform transparency, including an analysis of reports submitted by platforms under the Digital Services Act (DSA). The report highlights how global transparency efforts can be strengthened to make disclosures more meaningful.

Non-Engagement Signals in Content Ranking
In cooperation with Pinterest and UC Berkeley, we share how to use non-engagement signals for content ranking on social platforms.

Who Voters Trust for Election Information in 2024
With the Bipartisan Policy Center and the States United Democracy Center, we released new survey results exploring how Americans consume, assess, and engage with election information in 2024. We found that Americans learn about elections primarily through television and social media. Authoritative sources may struggle to break through or create universal narratives amidst a crowded information environment.

On Risk Assessment and Mitigation for Algorithmic Systems
Our report defines what risk assessments, audits, and mitigation plans should include to cover algorithmic systems used by online platforms.

Why Is Instagram Search More Harmful Than Google Search?
Chief research officer Jeff Allen writes about Instagram’s decision to disable search for sensitive topics and the Instagram we can’t have, using eating disorder content as an example.

European Commission Cites Integrity Institute in Its Elections Integrity Guidance
Chief research officer Jeff Allen shares the news that the European Commission extensively cited the Institute’s work in its elections integrity guidance.

How We Helped with the Senate Hearing on Child Safety Online
For the past month, our community has been sprinting on child safety, working to collect our community’s expertise and help the Senate hold an effective hearing with tech CEOs on the subject. Chief research officer Jeff Allen shares how much our community showed up!

Questions for Platforms on Child Safety for Congressional Record
Following the Senate hearing on child safety online, Senators will be sending written questions to the CEOs of tech companies. The Integrity Institute prepared additional questions that could guide Senators’ written submissions.

Building Integrity into AI Systems
Think social media is bad? AI could make it worse. The good news: we can fix both. AI’s rise closely mirrors the rise of social media platforms in the early 2000s. In fact, many of the challenges to developing and adopting “safe” AI are the same challenges integrity workers have faced for years as they have built and operated social media platforms.

Leadership Advice for New Trust & Safety Leaders
Announcing a new resource offering leadership advice for new trust and safety leaders.

Exploring the Depths of Online Safety
On January 31, 2024, the US Senate Judiciary Committee held a hearing on child safety online. Following the hearing, Institute staff share their takeaways in this post-mortem.

Child Safety on Online Platforms with Vaishnavi J.
With the Senate child safety hearing on the horizon, the Trust in Tech podcast sat down with Vaishnavi J., former head of Youth Policy at Meta, to chat about the specific problems and current policy landscape around child safety.

Red Herrings To Watch For At The Senate’s Child Safety Hearing
Here are some red herrings you might notice companies and policymakers raise on January 31, and what they should be addressing instead.

Child Safety Online Briefing
Our briefing deck for Senate staffers, designed to help them prepare for the hearing and guide their Senators’ statements and lines of questioning.

Child Safety Online: Policy Recommendations
The Integrity Institute delineates the best practices we advocate for child safety across all digital platforms.

Build In Integrity: Best Practices for Startups and Early-Stage Companies
No social media app wants to become known as a hub for awful content, yet bad behavior and bad actors are inevitable on social media platforms. This resource gives companies facing these challenges guidance on how to address them early, build integrity systems in from the start, and reach a healthy platform growth cycle that sets the company up for long-term success.