
Unlike traditional bug bounties that target code errors, Humane Intelligence’s algorithmic bias bounties focus on discovering the root causes of biased or exclusionary outcomes in AI systems.
Bias bounties are collaboratively designed sets of challenges that bring together researchers, impacted communities, and domain experts to rigorously examine and improve AI / ML systems, models, and datasets. Instead of treating bias as an abstract or philosophical debate, bias bounties create a structured process where bias can be systematically surfaced, measured, and addressed.
SPECIAL THANKS
Our first ten challenges were launched
Humane Intelligence takes a hands-on approach to ensure every bounty is impactful, well-executed, and aligned with our partners’ goals. We combine expertise in bias, sociotechnical research, and data science, and work closely with our organizational partners to co-design each challenge scope, engage the right participants, and evaluate findings in a way that honors impacted communities while also driving technical improvement.

Participants use systematic testing methods to uncover issues like biased training data, discriminatory default settings, and algorithmic blind spots that fail to account for human diversity. Beyond documenting exclusionary patterns, participants also design and develop technical solutions that enhance system performance in real-world conditions.
Each challenge has:
Challenges are generally run remotely online. In limited cases, there are in-person participation options. Unless otherwise specified, anyone 18 years or older in any country* is eligible to participate, regardless of education level, work experience or professional background. Note that participation does not guarantee a prize and does not constitute employment with Humane Intelligence.
*With the exception of participants in US sanctioned countries.
Humane Intelligence is partnering with Valence AI and CoNA Lab on a bias bounty challenge focused on accessibility for neurodivergent people in conferencing platforms like Zoom, and on the role of emotion AI detection in shaping those experiences. Participants will be able to choose from a design or machine learning track to identify accessibility gaps and propose improvements.
Humane Intelligence is partnering with Digital Green on two bias bounty challenges on two bias bounty challenges for FarmerChat, a generative AI assistant serving smallholder farmers in sub-Saharan Africa and India. The challenge will ask: how can agricultural AI be shaped by the lived experiences, indigenous knowledge, and practices of women farmers? Participants will engage with multimodal, locally collected and annotated datasets, and seek new pathways to build inclusive AI rather than a one-size-fits-all system.
Humane Intelligence will be launching a tech facilitated gender based violence bias bounty with Tattle Civic Tech. Participants will identify instances of culturally contextual intimate imagery, and architect solutions for more inclusive moderation algorithms.
Humane Intelligence partnered with Indian Forest Service for this challenge set. In three levels and tracks – thought leadership, beginning technical, intermediate technical – focused on ensuring fair, biophysically informed, and community-driven tree planting site recommendations—tackling bias in AI-driven environmental decision-making.
The challenge dates were:
Humane Intelligence partnered with Revontulet for this challenge set. In two levels – intermediate and advanced – participants focused on counterterrorism in computer vision (CV) applications, centered on far-right extremist groups in Europe / the Nordic region. The goal was to train a CV model to understand the ways in which hateful image-propaganda can be disguised and manipulated to evade detection on social media platforms.
The challenge dates were:
In three levels – beginner, intermediate, advanced – participants designed fine-tune automated red teaming models to explore issues like bias, factuality, and misdirection in Generative AI.
The challenge dates were: