Insights

Discover updates and stories from our staff and guest contributors, highlighting the work and impact we’re making around the world.

Blog

Community Oct 22, 2025

Join Our Next Open Community Call!

Mala Kumar

Podcasts & Videos

Oct 15, 2025

Annie Brown on the Accessibility Bias Bounty Challenge Set

In this flashpod, Annie Brown, the founder of Reliabl and Humane Intelligence’s Bias Bounty Data Scientist, speaks with Brent Philipps on Humanitarian AI Today Voices about the Accessibility Bias Bounty Challenge set, which opened on October 6, 2025 and closes on November 7, 2025. Annie discusses the data and design tracks, the bias bounty partner organizations, and her experience as a neurodiverse person designing a challenge focused on neurodiversity in video conferencing platforms.

Aug 08, 2025

Andre Heller and Mala Kumar Discuss Signpost’s Pilot AI Assistant

On Humanitarian AI Today, guest host Mala Kumar, Head of Impact at Humane Intelligence, sits down with Andre Heller, Director of Signpost at the International Rescue Committee (IRC). They discuss Signpost’s recent research paper on piloting an “information assistant,” detailing the technical architecture, evaluation methods, and lessons learned from the project. The conversation also addresses significant challenges facing the sector, including a funding crisis that has impacted the pace, scale, and scope of critical research advancing humanitarian applications of artificial intelligence.

Jul 09, 2025

Building Self-Sufficiency with AI

Resettlement providers handle a variety of daily tasks, including documenting cases in multiple languages, matching clients with limited housing options, analyzing program data, and coordinating time-sensitive services. Artificial intelligence (AI) tools can streamline these tasks, helping both providers and clients achieve specific goals. This panel discussion will explore how AI tools can support information services and training for newcomers, emphasizing how to use this emerging technology safely. The session will draw on themes from Switchboard’s recently published guide, Using AI in Service Delivery: A Framework to Evaluate Organizational Readiness, and the AI Odyssey website.

Jun 19, 2025

How We Build the Tech Future We Want

This week’s episode is for anyone who’s ever felt overwhelmed by the pace of AI—or unsure of what power they really hold to shape the future. Rumman Chowdhury is a trailblazer in the responsible AI space—she’s worked in big tech, launched her own nonprofit, advised the Biden administration, and now runs Humane Intelligence. What makes her stand out isn’t just her résumé—it’s her refusal to settle for cynicism.

Jun 11, 2025

Hamburg Sustainability Conference

Humane Intelligence was proud to join global leaders at the Hamburg Sustainability Conference to explore how AI can support a more just, inclusive, and sustainable world. Watch Dr. Rumman Chowdhury, Humane Intelligence Co-founder and Distinguished Advisor, speak on the high-level panel titled “Hamburg Declaration on Responsible AI for the SDGs.” Rumman emphasized the importance of inclusive evaluation and public participation in AI governance.

Apr 04, 2025

What Artificial General Intelligence Could Mean For Our Future

Each week, tech companies trumpet yet another advance in artificial intelligence, from better chat services to image and video generators that spend less time in the uncanny valley. But the holy grail for AI companies is known as AGI, or artificial general intelligence—a technology that can meet or outperform human capabilities on any number of tasks, not just chat or images. Hosts Ira Flatow and Flora Lichtman talk with Will Douglas Heaven, who reports on AI for MIT Technology Review; and Dr. Rumman Chowdhury, who specializes in ethical, explainable and transparent AI, about the path to AGI and its potential impacts on society.

Feb 17, 2025

Attention Needed: How to Interpret AI Safety

In this episode of Attention Needed, we’re thrilled to welcome Dr. Rumman Chowdhury. As a leader in responsible technology, she is the CEO and co-founder of Humane Intelligence and was the first U.S. Science Envoy for Artificial Intelligence appointed by the Department of State. Dr. Chowdhury previously directed the META (ML Ethics, Transparency, and Accountability) team at X (formerly Twitter), founded Parity, an algorithmic audit platform, and was the Global Lead for Responsible AI at Accenture. In our discussion, we’ll explore the critical intersection of AI, ethics, and governance with Dr. Chowdhury. She will share her insights on the role of government in shaping responsible AI practices and policies as well as her work on ensuring AI systems prioritize human well-being at Humane Intelligence.

Feb 11, 2025

Too fast to process? The global challenges of artificial intelligence

As France and India co-host an AI summit in Paris, we ask whether technology has advanced so far beyond our control that any well-intentioned bid by world leaders to forge common rules and guidelines is too little, too late.

Feb 11, 2025

Can We Fix AI Before It’s Too Late? | Dr. Rumman Chowdhury (Ep. 9)

Life With Machines is a technology show about humans that asks, “How can these machines help us become more human?” In this video podcast, host Baratunde Thurston helps us understand our rapidly emerging future, and encourages us to shape it and laugh at the absurdity. Baratunde demystifies A.I. and makes it personal with a wide range of guests: entrepreneurs, artists, policymakers, technologists, business leaders, creators, educators, and scientists. Maybe even an A.I. or two. Everyone should have a say in where technologies like A.I. take us, not just a handful of people in California who think nitro cold brew on tap should be a human right.

Feb 01, 2025

How to Fix AI Before it’s too Late?

From the Innovation Civilization Podcast, episode #30

Jan 27, 2025

AI red teaming and right to repair

How can you identify risks in AI models? Red teaming is one of the options, says the guest of the AI at Scale podcast, Dr. Rumman Chowdhury, CEO of Humane Intelligence and US Science Envoy for Artificial Intelligence. Red teaming has its roots in the military, where it was used to test whether a base would hold up and to identify its vulnerabilities.

Research

The Playbook for AI Red Teaming cover
2025

The Playbook – AI Red Teaming for Good

We partnered with UNESCO to create a free Red Teaming Artificial Intelligence for Social Good Playbook so that any organization can get started with red teaming their AI models and systems.

Digital Violence and Real World Harm report cover
2025

DIGITAL VIOLENCE, REAL WORLD HARM

Humane Intelligence worked with the UK Government’s DFID / FCDO on a report about Evaluating Survivor-Centric Tools for Intimate Image Abuse in the Age of Gen AI.

2025

CAMLIS Red Teaming Report

In this report, we summarize the red teaming operations conducted at CAMLIS and offer four observations:

  • (1) Red teamers successfully carried out 139 different attacks against four target AI systems. Most exploits resulted from social engineering or virtualization techniques, although a significant number of techniques were technical in nature.
  • (2) We found NIST 600-1 to be readily operationalizable in several – but not all – key areas. Although most NIST 600-1 risk categories were sufficiently defined, others were difficult to validate in the field. We suggest some revisions to NIST 600-1 accordingly.
  • (3) We comment on red teaming practice in general, especially when operationalizing a robust risk classification framework like NIST 600-1.
  • (4) And finally, we reflect on some of the limitations inherent to red teaming AI systems, such as the persistent challenge of precisely locating vulnerabilities within model, guardrail, system, application, or user behavior levels.

2025

SINGAPORE AI SAFETY RED TEAMING CHALLENGE

Humane Intelligence worked with the Singaporean Infocomm Media Development Authority (IMDA) on a red teaming event and contextual evaluation, covering nine languages and drawing participants from across ASEAN.

2025

Microsite: Tech-Facilitated Gender-Based Violence

Humane Intelligence created an interactive taxonomy to guide, name, understand, and build toward solutions for technology-facilitated gender-based violence. The taxonomy aims to serve policymakers, product managers, and platform executives by mapping abuse patterns, impacts, and recommended interventions.

Generative AI Red Teaming Challenge report cover
2024

GENERATIVE AI RED TEAMING CHALLENGE: TRANSPARENCY REPORT

In 2023 at DEF CON, Humane Intelligence brought a record 2,500+ participants together for a red teaming exercise that would go on to inform our approach to red teaming for sociotechnical issues. The Generative AI Red Teaming Challenge: Transparency Report offers insights learned from that exercise.

Additional Resources

Browse our collection of resources and publicly available project outputs.

Sign up for our newsletter