Online Child Safety at the MIT Policy Hackathon
By April Lat, Community Lead at the Integrity Institute
Last November, the Integrity Institute was proud to serve as a Challenge Sponsor for MIT’s 8th annual Policy Hackathon, presenting the Internet Governance challenge on patchwork state approaches to child online safety in the United States. Over an intensive 48-hour period, twelve multidisciplinary teams of graduate students from around the world used the Integrity Institute’s Tech Policy Legislative Tracker to conduct qualitative analysis of existing state legislation, identify trends and gaps, and translate those insights into proposed comprehensive federal policy solutions.
Integrity Institute staff Jeff Allen and April Lat, alongside Marena Tedaldi of Duco, presented the challenge and served as judges, supporting teams as they refined their policy proposals. Institute members Henriette Cramer and Prachi Wadekar, joined by April Lat, also participated in the Careers Panel, sharing their experiences in tech and internet policy with students exploring pathways into the field. With more than 50 teams competing across four policy areas, we were inspired by the rigor, creativity, and interdisciplinary thinking students brought to this work—and proud to support the next generation of technology policy leaders.
Why this challenge, and why now
The Integrity Institute partnered with the MIT Policy Hackathon to pose a question that has been looming over U.S. tech policy for years: how can we move from a fragmented, state-by-state approach to child online safety toward a comprehensive federal framework that actually works?
Right now, the United States is operating under a patchwork of state-level technology regulations. States like California, Colorado, Virginia, and Connecticut have all taken meaningful—but different—approaches to data privacy, youth protections, and platform accountability. While these efforts often emerge from real urgency, the cumulative effect is regulatory complexity for companies and uneven protections for users. A teenager’s online rights shouldn’t depend on their ZIP code.
Youth online safety, in particular, has become the most active—and contentious—policy arena. States are experimenting with age verification mandates, platform design restrictions, and digital wellness requirements, often running headlong into constitutional questions, technical feasibility issues, and privacy tradeoffs. Meanwhile, Congress has struggled to pass comprehensive legislation. The near-miss of the American Privacy Rights Act in 2024 and the repeated stalling of the Kids Online Safety Act illustrate both momentum and gridlock.
This tension made the challenge timely and difficult by design. We asked participants to imagine what a federal approach to youth online safety could look like—one that balances child development, parental rights, free expression, privacy, and long-term adaptability—while grounding their recommendations in real legislative data.
The challenge brief in practice
Our challenge centered on “Youth Online Safety Standards” and encouraged teams to draw from the Integrity Institute’s Tech Policy Legislative Tracker and related datasets. Rather than starting from scratch, participants were asked to analyze existing state and federal proposals, identify areas of alignment and disagreement, and think critically about what should remain at the state level versus what demands national consistency.
As judges, we were especially interested in how teams handled a few core tensions:
Managing the technical and privacy risks of age verification
Defining meaningful duties of care for platforms without collapsing into vague or unenforceable standards
Navigating First Amendment constraints while still addressing real harms
Designing policy that can survive rapid technological change
What impressed us most was how seriously teams took these tradeoffs. Many resisted the urge to propose overly simplistic solutions. Instead, they acknowledged uncertainty, surfaced unintended consequences, and proposed layered approaches that combined federal baselines with room for state innovation.
Judging the work: rigor, creativity, and humility
Across presentations and policy memos, a few themes consistently emerged. First, teams recognized that federal policy doesn’t have to mean federal micromanagement. Several proposals outlined baseline national standards for data minimization, transparency, and youth protections, paired with clearly defined areas where states could go further.
Second, many teams grappled deeply with age verification. Rather than treating it as a silver bullet, participants explored privacy-preserving alternatives, risk-based design requirements, and duty-of-care models that shift responsibility onto platforms instead of users. This reflected a sophisticated understanding of both technical constraints and civil liberties.
Finally, there was a notable emphasis on adaptability. Teams proposed mechanisms like sunset clauses, regular regulatory review, and delegated rulemaking authority to ensure policies don’t become obsolete as technologies evolve. This kind of future-proofing is often missing from real-world legislation—and seeing it prioritized by students was genuinely encouraging.
Judging these proposals was less about ranking ideas and more about engaging in a collective exercise of policy imagination. It reinforced why spaces like hackathons matter: they create room to think beyond political stalemates and explore what could be possible.
Why this matters beyond the weekend
The MIT Policy Hackathon wasn’t just an academic exercise. It reflected a broader crisis—and opportunity—in democratic governance. When federal institutions struggle to act, states fill the void. When both stall, corporations and market forces quietly set the rules by default.
The risk is not just regulatory inefficiency, but a deeper erosion of democratic sovereignty. If we can’t collectively decide how technologies shape childhood, speech, privacy, and mental health, we outsource those decisions to systems optimized for profit rather than public good.
What gave us hope was seeing the next generation of policymakers, technologists, and researchers take these questions seriously. They weren’t naive about the constraints. They understood the legal, technical, and political challenges. And yet, they still believed better governance is possible.
Celebrating Standout Solutions
One of the most energizing parts of judging the challenge was seeing teams move beyond critique and into concrete, implementable frameworks grounded in real legislative data.
First Place: Team Hotfix
Team Hotfix’s winning proposal stood out for its rigor, creativity, and scalability. Drawing directly from the Integrity Institute’s Legislative Tracker, the team built a structured database that classified 146 state bills and congressional testimonies into quantifiable policy mechanisms. They paired this with analysis of nearly 68,000 real-world youth online sessions, allowing them to map documented harm patterns to specific regulatory approaches.
What impressed us most was not just the depth of analysis, but the design logic of their solution. By examining why certain laws failed judicial scrutiny—such as California’s content-based approaches—and why others, like Connecticut’s design-focused legislation, survived, Team Hotfix identified durable policy pathways that could withstand constitutional challenges. Their proposed Youth Safety & Design Standard (YSDS) applied a tiered, design-based framework with protective defaults, platform duties of care, and mandatory transparency requirements that could scale federally while remaining adaptable over time.
Equally important, their work was fully open-source. The interactive dashboard they created makes legislative data accessible not just to policymakers, but to researchers, advocates, and the public—exactly the kind of knowledge-sharing infrastructure this challenge hoped to inspire.
Honorable Mention: Team HackEurasia
We were also deeply impressed by Team HackEurasia, who placed second with a proposal that centered co-design and youth voice—an often-missing element in technology policymaking. Using the 4C’s framework (Conduct, Contact, Content, Commerce), the team focused on reducing harmful advertising exposure for teens while preserving First Amendment protections.
Their two-phase model proposed shared federal definitions and standards, paired with state-level implementation informed by a federal council.
What stood out was their insistence that young people themselves should be part of governance, through Youth Advisory Boards that co-create policy alongside regulators. Supported by technical mechanisms like ad labeling systems, pre-delivery ad blocking for teens, and privacy-conscious age verification concepts, their proposal demonstrated how participatory design can coexist with regulatory pragmatism.
Together, all of the teams that participated in this Internet Governance challenge reflected its core goal: to move past fragmented debates and toward solutions that are legally durable, technically informed, and human-centered.
From Policy Ideas to Career Pathways: The Careers Panel
Beyond the competition itself, the weekend also created space to talk candidly about careers at the intersection of technology, policy, and trust. The careers panel brought together practitioners whose paths illustrated just how interdisciplinary—and impactful—this work can be.
We were honored to have Henriette Cramer and Prachi Wadekar, members of the Integrity Institute community, contribute their perspectives alongside other panelists. Henriette shared insights from her career building and leading algorithmic safety and data teams across major platforms, emphasizing the importance of combining quantitative data with qualitative understanding to manage real-world harms. Prachi spoke about how a technical background can also be an asset in this work, reflecting on how trust, privacy, and human outcomes must be embedded into product design from the start.
They were joined by Dr. Drew Story of MIT’s Policy Lab, who offered a window into translating expertise into legislative impact, and Kate Machet of the Essex County Community Foundation, who discussed navigating policy work across nonprofits, startups, and government. Together, the panel underscored a key message for participants: there is no single path into this field, but there is a growing need for people who can bridge disciplines, ask hard questions, and stay grounded in real-world impact.
Looking Ahead
At the Integrity Institute, we often talk about building the connective tissue of the tech policy ecosystem—helping practitioners learn from one another, share hard-earned insights, and push the field forward together. Events like the MIT Policy Hackathon are a powerful reminder of why that work matters.
To the students who participated: your ideas were rigorous, thoughtful, and urgently needed. To the organizers and fellow judges: thank you for creating a space where complexity was welcomed rather than flattened. And to anyone considering a career in this space: we need you.
The challenges of youth online safety won’t be solved in a single weekend. But weekends like this help us imagine the kind of future—and the kind of policies—we’re trying to build.