Our Events

Upcoming Events

Check back for more event announcements!

Past Major Events

Evaluating LLMs for accuracy and inclusion

  • Thursday, October 9, 2025, 10 AM – 11:30 AM ET
  • Virtual, free to attend

Humane Intelligence joined the MERL Tech Gender, AI and MERL Working Group and the Sandbox Working Group to think through how to evaluate and benchmark different aspects of LLMs and their application in development and humanitarian programming, and when using them to crunch data and summarize text for evaluations and other MERL tasks. The event opened with working group lead Savita Bailur, then went deeper with expert speakers on the various points at which evaluation is (or should be!) happening.

From Exposure to Empowerment: Interventions to Counter Online Misogyny

  • September 30, 2025
  • Virtual, closed session

The Institute for Strategic Dialogue’s (ISD) Digital Policy Lab (DPL) is an intergovernmental network focused on charting the regulatory and policy path forward to prevent and counter online harms. The network consists of government representatives and regulators from 14 countries and the European Commission, including the UK, New Zealand, France, Australia, Germany, and Canada. In 2022, ISD partnered with the Christchurch Call for a multistakeholder working group series that culminated in the report “Misogynistic Pathways to Radicalisation.” ISD is now continuing with an upcoming online DPL Working Group series, “From Exposure to Empowerment: Interventions to Counter Online Misogyny.” Theodora Skeadas and Sarah Amos shared Humane Intelligence’s work on the TFGBV Taxonomy, particularly the Mitigation Strategies. They also shared their perspective on the role platforms can play in addressing misogyny through safety by design, and how victim-survivor perspectives can meaningfully inform both design solutions and interventions.

UNDP’s Human Development Report 2025: A Matter of Choice — People and Possibilities in the Age of AI

  • September 29, 2025, 5 – 7 PM ET
  • Columbia University, IAB Room 1512

Speaker: Pedro Conceição, Director, UNDP Human Development Report Office
Discussant: Mala Kumar, Interim Executive Director at Humane Intelligence
Moderator: José Antonio Ocampo, Professor of Practice, SIPA

Artificial intelligence (AI) is rapidly taking hold in many sectors. While AI feats grab headlines, they obscure what really matters: people’s choices. Going forward, development will depend less on what AI can do, and more on mobilizing people’s imaginations to reshape economies and societies to make the most of it. Instead of trying to predict what will happen, this year’s Human Development Report asks what choices can be made so that new development pathways for all countries dot the horizon, helping everyone have a shot at thriving in a world with AI.

Pedro Conceição, the Director of the UNDP Human Development Report Office, shared highlights from the latest Human Development Report on the risks, opportunities, and choices that AI poses, especially for middle- and low-income countries. Prof. José Antonio Ocampo, co-director of the Development and Governance (formerly EPD) Concentration and former head of the UN Department of Economic and Social Affairs, moderated. Mala Kumar, the Interim Executive Director of Humane Intelligence, joined as a discussant to share her perspective on the opportunities and risks of AI for low- and middle-income countries, based on her work with Humane Intelligence and her previous work with GitHub, various UN agencies, and others in the development space.

Honest Discussions at the Intersection of AI and the SDGs

  • Tuesday, September 16, 2025, 5:30 – 8:30 PM ET
  • In-person, Doris Duke Foundation – 444 Madison Ave 10th floor, New York, NY 10022, USA
  • Free to attend, Sign up required

Description: Humane Intelligence, Tech Salon, and Compiler co-hosted this UNGA 80 side-event, which brought together a select group of ~65 AI for good and responsible AI technologists, researchers, and policy experts working cross-sectorally and across different thematic areas pertaining to the Sustainable Development Goals (SDGs). It was a rare opportunity for frank, timely conversations about how technical realities, policymaking, and rapid changes in funding are affecting global efforts to build safer, more trustworthy AI ecosystems. The event was graciously funded by The Heising-Simons Foundation and was an affiliate session of the ITU–UNDP Digital@UNGA programme, celebrating the power of Digital for Good.

TrustCon 2025

  • July 21, 2025
  • In-person, San Francisco

Description: Humane Intelligence hosted a workshop designed for both T&S practitioners and professionals tasked with making the case for AI and trust and safety investment. It helped attendees connect abstract or controversial risks (e.g., misinformation, surveillance, self-harm triggers) to enterprise impact and operational risk. Importantly, it also demonstrated how organizations without formal AI governance teams can still run effective red teaming initiatives. Facilitators brought a mix of technical and strategic expertise to ensure accessibility across varying levels of experience. Participants stress-tested mock AI systems—including a “Virtual Therapist” chatbot and an “Ask the Historian” educational assistant.

Hamburg Sustainability Conference 2025

  • June 2 – 3, 2025, all day
  • In-person, Hamburg, Germany

Description: In its second year, the Hamburg Sustainability Conference’s theme was “Uniting Policymakers and Business Leaders to Accelerate SDG Performance.” Humane Intelligence was invited to attend by the co-organizer, UNDP. Rumman Chowdhury spoke on the high-level panel “Hamburg Declaration on Responsible AI for the SDGs.” Mala Kumar and Rumman also attended and spoke at the closed panel “A Matter of Choice: People and Possibilities in the Age of AI”.

We provided comments on and committed to the Hamburg Declaration on Responsible AI, which was presented and adopted at the conference on 2 June 2025. HI made a commitment to conduct 1-2 red teaming and/or bias bounty challenges aligned to the SDGs.

Multicultural Vulnerability Testing – IBM @ NY Tech Week 2025

  • June 3, 2025, 9:30 – 10:30 AM ET
  • In-person, New York, NY, USA

Description: This workshop explored the case study of the world’s first multicultural and multilingual AI Safety Red Teaming Challenge focused on Asia-Pacific, conducted by Humane Intelligence and IMDA Singapore. Participants got hands-on experience using our AI red teaming app to perform actual red teaming exercises that identify cultural biases and linguistic vulnerabilities across diverse contexts, equipping them with practical skills to evaluate and improve AI systems through inclusive testing methodologies.

Host: Sarah Amos, Former Product Manager, Humane Intelligence

All Tech Is Human @ Finnish Consulate

  • May 21, 2025, 3 – 8:30 PM ET
  • Finnish Consulate, New York City

Following last year’s successful curated gathering on Strengthening the Information Ecosystem, All Tech Is Human convened key individuals focused on strengthening multistakeholder collaboration in Responsible AI. Humane Intelligence’s Interim Executive Director, Mala Kumar, joined a panel on Aligning Transatlantic AI Governance in a Shifting Regulatory Landscape. The panel explored the evolving regulatory and standard-setting frameworks governing AI across the transatlantic sphere, assessing the intersections and divergences in differing approaches to risk classifications, liability structures, and compliance standards. The discussion examined the implications of a new political administration directing the U.S. approach to AI governance and highlighted the need to safeguard rights-respecting and democratic standards in the balancing act of governance.

HydroX Red Teaming

  • April 28, 2025
  • In-person, San Francisco

Humane Intelligence attended the RSA Conference 2025, where we partnered with HydroX AI to present HUMAN VS. MACHINE, a live red teaming session designed to uncover and address vulnerabilities in LLMs. This hands-on Learning Lab explored hallucinations, system prompt extractions, political sensitivities, and more—through 20 carefully structured adversarial challenges. This interactive session gave participants a chance to probe large language models (LLMs) in real time, identifying vulnerabilities and learning how to think adversarially—but ethically—about AI systems.

Co-led by Theodora Skeadas (Chief of Staff, Humane Intelligence) and Victor Bian (COO, HydroX AI), the session covered:

  • Adversarial testing workflows

  • Hands-on red-teaming techniques

  • Strengthening AI security for the future

At the Sidelines of the Paris AI Action Summit – AI & Society House

  • February 9, 2025, all day
  • In-person, Centre Events Paris – 3 Rue du Dr Lancereaux, 75008 Paris, France

Description: Expo booths featured Humane Intelligence’s partners that co-developed technical solutions and evaluations to understand and address the societal impact of AI models. In addition, Humane Intelligence hosted six panels with the broader community to discuss a wide range of topics related to public interest technology and developing solutions for addressing the societal impact of AI models. Expo Partners were NIST, CISA, CAMLIS, the UK FCDO, the Singapore IMDA, FEMYSO, the UK Royal Society, and Revontulet.

Singapore IMDA red teaming on multilingual and multicultural biases

  • December 9, 2024
  • In-person, Singapore

Description: Humane Intelligence hosted participants from nine countries across Asia — experts in fields like sociology, cultural studies, and linguistics — who red teamed four large language models (LLMs) to spot biases and stereotypes directed at social groups in their countries, both in English and regional languages. By involving these experts, the Challenge aimed to deepen our understanding of the diversity of AI harms and how well AI models perform in different contexts. This Challenge was a first step towards addressing key gaps and developing a common red teaming methodology. Backed by AI players like AI Singapore, Anthropic, AWS, Cohere, Google, and Meta, the initiative will help make AI safer for the region.

AISA – Australia Cybersecurity Conference 2024

  • November 26, 2024
  • In-person, Melbourne, Australia

Humane Intelligence partnered with the Australian Information Security Association (AISA) and CSIRO to bring the AI Village to the AISA Australian Cyber Conference 2024, which took place in Melbourne, Australia, November 26-28. The event was the largest of its kind in Australia and offered a unique opportunity to witness cutting-edge developments in AI security. The AI Village hosted an interactive “red teaming” exercise where our Fellow, Fariza Rashid, presented an introduction to algorithmic assessments. This was followed by a capture-the-flag-style event simulating real-world cyberattacks to identify vulnerabilities in Large Language Models (LLMs).

This high-profile event, inspired by the 2023 U.S. White House-sponsored AI Red Teaming exercise, offered participants a unique, hands-on experience. At the AI Village, attendees delved into risks associated with large language models (LLMs), including bias, hallucination, and prompt injection.

UNESCO – technology facilitated gender based violence

  • November 25, 2024
  • In-person, UNESCO HQ, Paris, France

Description: Humane Intelligence partnered with UNESCO in Paris for a red-teaming exercise focused on Technology-Facilitated Gender-Based Violence (TFGBV). Held on the International Day for the Elimination of Violence Against Women and Girls, the event aimed to identify vulnerabilities in generative AI models and explore their potential misuse in perpetuating bias, discrimination, and violence against women and girls. This collaboration built on UNESCO’s extensive work on gender and AI, including their recommendations for addressing bias and their global reports on AI-facilitated gender-based violence.

ARIA Program at NIST, CAMLIS 2024

  • October 24 – 26, 2024
  • In-person, Washington DC

Humane Intelligence partnered with the ARIA Program at NIST during CAMLIS 2024 for a two-part red teaming event. Participants who made it through the virtual qualifier were invited to participate in-person in Washington DC. See here for more details about ARIA and the related scenarios. The in-person event used the “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST AI 600-1),” as the operative rubric for violative outcomes and controls. The challenge was a capture-the-flag (CTF)-style points-based evaluation, verified by in-person event assessors. Models tested were donated from Anote, Meta, Robust Intelligence, Synthesia, and Expressive Avatars.

Sync Digital Wellbeing Summit 2024

  • September 22, 2024
  • In-person, New York City

Description: In partnership with Sync, Ithra’s digital wellbeing initiative, Humane Intelligence co-hosted the Digital Wellbeing Summit, where we focused on ethical, humane, and empowered tech use. At the crossroads of urgent challenge and change, the Sync Digital Wellbeing Summit 2024 invited participants to join in reflection, conversation and a call-to-action around ensuring that we as humans come together to keep digital technology in check and working towards the greater good – safeguarding its future, and our own.

FEMYSO Red Teaming

  • September 21, 2024
  • In-person, Brussels; online

Humane Intelligence worked with FEMYSO, as part of the European Action Day Against Islamophobia Conference 2024, to red team for Islamophobic biases. Participants convened both in Brussels and virtually to red team Gemini, ChatGPT, and Claude to compare model performance and learn about testing technology safeguards. Specific prompts were red teamed, and several came back with Islamophobic responses. Participants were asked to probe the models on topics including elections, government, children’s stories, news, cultural moments, and more. We look forward to continuing our collective efforts to combat Islamophobia and promote positive change for young Muslims in Europe.

Journalism & AI

  • September 9, 2024
  • In-person, Washington DC

In collaboration with Compiler, Humane Intelligence engaged 10 journalists interested in piloting a new tool to investigate the algorithms behind today’s most powerful chatbots. Reporters were invited to a one-day workshop at the Northwestern University Medill School newsroom for hands-on training and talks from experts in data journalism, machine learning, and the current regulatory landscape related to artificial intelligence. The event concluded with dinner at the National Press Club. The goal of this collaboration was to produce 10 original and impactful articles that utilize the Humane Intelligence tool and explore the potential societal effects of generative AI. The resulting work was published in a special edition of Compiler.

Bias Bounty 1 Hackathon with All Tech is Human

  • July 9, 2024
  • In-person, New York City, Betaworks – 29 Little W 12th St, New York, NY 10014

Humane Intelligence partnered with All Tech is Human to organize a hackathon around our first bias bounty challenge set. Participants were supported as they delved into the challenge, which involved creating a probability estimation model to determine whether the prompt provided to a language model would elicit an outcome that demonstrates factuality, bias, or misdirection, inspired by the data from the DEFCON 2023 report.

UK Royal Society

  • October 25, 2023
  • In-person, London, England – UK Royal Society

Humane Intelligence hosted its first red teaming event in the UK at the prestigious Royal Society. The event came just one week ahead of the world’s first AI Safety Summit, organized by the U.K. government, where global policymakers convened with AI scientists to discuss the dangers of the fast-moving technology. The event was carried out in participation with Meta, which sent an observer to the event and said it would use the findings to strengthen the guardrails of its AI systems. Bethan Cracknell Daniels, an event participant who is an expert in dengue fever at Imperial College London, successfully prompted the model to generate an ad campaign encouraging all children to get the dengue vaccine, even though the vaccine is not recommended for individuals who have not previously had the disease.

DEFCON Red Teaming Exercise

  • August 14, 2023
  • In-person, Las Vegas, NV, USA

Description: This event was developed in collaboration with Seed AI and the DEFCON AI Village, and held at DEFCON 2023. Over 2.5 days, 2,244 hackers evaluated 8 LLMs and produced over 17,000 conversations on 21 topics ranging from cybersecurity hacks to misinformation and human rights. Our winners received a GPU provided by our partners at NVIDIA. The event and analysis, the first of their kind, studied the performance of eight state-of-the-art large language models (LLMs) by approximating at-scale real-world scenarios where harmful outcomes may occur.

Other Previous Events

  • FCDO Workshop, Nigeria – 2025
  • FCDO Workshop, Colombia – 2025
  • [Confidential Client] Red Teaming – 2025
  • Red Teaming with DoD on Military Medicine – 2025
  • Legal Red Teaming – 2024
  • Bias Bounty 2 Hackathon – 2024
  • Rotary AI Club – 2024
  • Autodesk University – 2023