Bias Bounty


What are Bias Bounties?

Unlike traditional bug bounties that target code errors, Humane Intelligence’s algorithmic bias bounties focus on discovering the root causes of biased or exclusionary outcomes in AI systems.

Bias bounties are collaboratively designed sets of challenges that bring together researchers, impacted communities, and domain experts to rigorously examine and improve AI / ML systems, models, and datasets. Instead of treating bias as an abstract or philosophical debate, bias bounties create a structured process where bias can be systematically surfaced, measured, and addressed.

Key Goals of Bias Bounties

  • Center lived experience — surface bias by co-designing challenges with the people most affected.
  • Generate actionable improvements — move beyond theoretical discussions toward concrete fixes and retraining inputs.
  • Build inclusive defaults — design tools, datasets, and models that perform equitably across diverse contexts.
  • Strengthen the AI ecosystem — build a fairness-focused community of practice, equip developers with practical tools, and generate insights that enhance partner systems and foster public trust.


About our challenges

Humane Intelligence takes a hands-on approach to ensure every bounty is impactful, well-executed, and aligned with our partners’ goals. We combine expertise in bias, sociotechnical research, and data science, and work closely with our organizational partners to co-design each challenge scope, engage the right participants, and evaluate findings in a way that honors impacted communities while also driving technical improvement.

Participants use systematic testing methods to uncover issues like biased training data, discriminatory default settings, and algorithmic blind spots that fail to account for human diversity. Beyond documenting exclusionary patterns, participants also design and develop technical solutions that enhance system performance in real-world conditions.
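
As a rough illustration of what one such systematic test can look like, the sketch below compares a binary classifier's false-negative rates across demographic groups. This is a minimal, hypothetical example: the group names, evaluation records, and disparity threshold are illustrative placeholders, not materials from any Humane Intelligence challenge.

  # A minimal sketch of one kind of systematic bias test: comparing a binary
  # classifier's false-negative rates across demographic groups. The groups,
  # data, and disparity threshold below are hypothetical placeholders.
  from collections import defaultdict

  def false_negative_rate_by_group(records):
      # records: iterable of (group, y_true, y_pred) with binary labels
      positives = defaultdict(int)  # ground-truth positives per group
      misses = defaultdict(int)     # positives the model missed per group
      for group, y_true, y_pred in records:
          if y_true == 1:
              positives[group] += 1
              if y_pred == 0:
                  misses[group] += 1
      return {g: misses[g] / positives[g] for g in positives}

  # Hypothetical evaluation records: (group, ground truth, prediction).
  data = [
      ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
      ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
  ]

  rates = false_negative_rate_by_group(data)
  for group, rate in sorted(rates.items()):
      print(f"{group}: false-negative rate {rate:.2f}")

  # A persistent gap like this is the kind of finding a bounty submission
  # would document alongside a likely root cause (for example, sparse
  # training data for group_b) and a proposed fix.
  if max(rates.values()) - min(rates.values()) > 0.10:
      print("Disparity exceeds illustrative threshold; flag for review.")

In a real challenge, participants would run probes like this against the partner-supplied model or dataset and pair each documented disparity with a proposed mitigation.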

Each challenge has:

  • A start and end date, usually at least a month apart
  • A theme, such as agriculture, health, or accessibility
  • Models or datasets hosted on Humane Intelligence’s GitHub account
  • Cash prizes, with a clear breakdown of prize allocation, and a scoring rubric or guidance
  • More than one level or track
  • An active, moderated community on our Discord server
  • As of September 2024, a partner organization supplying the dataset, model or system access

Challenges are generally run online; in limited cases, in-person participation options are available. Unless otherwise specified, anyone 18 years or older in any country* is eligible to participate, regardless of education level, work experience, or professional background. Note that participation does not guarantee a prize and does not constitute employment with Humane Intelligence.

*With the exception of participants in US sanctioned countries.

Want to work with us on a bias bounty?

Current and Upcoming Challenges

Open: October 6, 2025
Close: November 7, 2025

Improving Accessibility in Digital Conferencing Facilities, with CoNA Lab and Valence AI

Humane Intelligence is partnering with Valence AI and CoNA Lab on a bias bounty challenge focused on accessibility for neurodivergent people in conferencing platforms like Zoom, and on the role of emotion AI detection in shaping those experiences. Participants will be able to choose from a design or machine learning track to identify accessibility gaps and propose improvements.

The submission guide, materials, and form will be posted here on October 6, 2025.
Open: TBD
Close: TBD

Centering Local Knowledge in Agriculture, with Digital Green

Humane Intelligence is partnering with Digital Green on two bias bounty challenges for FarmerChat, a generative AI assistant serving smallholder farmers in sub-Saharan Africa and India. The challenges will ask: how can agricultural AI be shaped by the lived experiences, indigenous knowledge, and practices of women farmers? Participants will engage with multimodal, locally collected and annotated datasets, and seek new pathways to build inclusive AI rather than a one-size-fits-all system.

Competition materials, scoring rubrics and submission info will be posted here in November 2025.
Open: TBD
Close: TBD

Tech-Facilitated Gender-Based Violence (TFGBV), with Tattle Civic Tech

Humane Intelligence will be launching a TFGBV bias bounty with Tattle Civic Tech. Participants will identify instances of culturally contextual intimate imagery, and architect solutions for more inclusive moderation algorithms.

Competition materials, scoring rubrics and submission info will be posted in January 2026.

TWO WAYS YOU CAN HELP THE FUTURE OF BIAS BOUNTIES

  1. Starting September 2025, we are seeking sponsorships to offset the costs of designing new challenge sets.
  2. We are seeking funding to partner with Zindi.Africa to scale our bias bounties on their platform so we can reach more people.

Past Challenges

Challenge Set 3: Ensuring Fair, Biophysically Informed, and Community-Driven Tree Planting Site Recommendations

Humane Intelligence partnered with the Indian Forest Service for this challenge set. Across three tracks – thought leadership, beginner technical, and intermediate technical – participants focused on ensuring fair, biophysically informed, and community-driven tree planting site recommendations, tackling bias in AI-driven environmental decision-making.

The challenge dates were:

  • Competitions opened: November 26, 2024
  • Competitions closed: January 24, 2025
  • Winners announced: March 19, 2025

Winners

  • Thought Leadership: Yashashree Garge (1st place); Aaron Goulden (2nd place)
  • Beginner Technical: Mark Schutera (1st place); Yu-Min Chang (2nd place); Chetan Talele (3rd place)
  • Intermediate Technical: Mayowa Osibodu (1st place); Nagesh Mohan (2nd place)

Challenge Set 2: Uncover Hidden Extremist Propaganda

Humane Intelligence partnered with Revontulet for this challenge set. In two levels – intermediate and advanced – participants focused on counterterrorism in computer vision (CV) applications, centered on far-right extremist groups in Europe / the Nordic region. The goal was to train a CV model to understand the ways in which hateful image-propaganda can be disguised and manipulated to evade detection on social media platforms.

The challenge dates were:

  • Competitions opened: September 26, 2024
  • Competitions closed: November 7, 2024
  • Winners announced: November 25, 2024

Winners

  • Intermediate: Gabriela Barrera, Blake Chambers, Chia-Yen Chen
  • Advanced: Mayowa Osibodu, TUESDAY, Devon Artis

Challenge Set 1: Bias, Factuality, Misdirection

In three levels – beginner, intermediate, and advanced – participants designed and fine-tuned automated red-teaming models to explore issues like bias, factuality, and misdirection in generative AI.

The challenge dates were:

  • Competitions opened: May 15, 2024
  • Competitions closed: June 15, 2024
  • Winners announced: August 2024

Winners

  • Beginner: Blake Chambers (Bias); Eva (Factuality); Lucia Kobzova (Misdirection)
  • Intermediate: AmigoYM (Factuality); Mayowa Osibodu (Factuality); Simone Van Taylor (Bias)
  • Advanced: Yannick Daniel Gibson (Factuality); Elijah Appelson (Misdirection); Gabriela Barrera (Bias)