Bias bounty Oct 06, 2025

Submit to our Accessibility in Digital Conferencing Facilities Bias Bounty Challenge Set!

Solve real accessibility problems through UX design and machine learning

Annie Brown

On October 6, 2025, Humane Intelligence, in collaboration with the Cognitive and Neurodiversity AI (CoNA) Lab at the newly established Center for Responsible AI at Virginia State University and Valence AI, will launch the Global Accessibility Bias Bounty challenge set. We’re inviting participants worldwide to reimagine video conferencing and emotion AI technologies for neurodivergent users, building digital spaces that are more inclusive for everyone.

How to Participate

This challenge is open to everyone, whether you are a student, researcher, technologist, or someone with lived experience navigating digital barriers. We especially encourage neurodivergent individuals to take part; their perspectives are essential to building systems that meet real-world needs.

Key Details

Participants can choose between two tracks, each offering three experience levels with corresponding prizes:

Design Track: Focus on creating inclusive conferencing experiences through empathy-driven design and AI-enhanced interactions. Challenges range from visualizing barriers neurodivergent users face, to prototyping features like persistent view preferences, quiet modes, and turn-taking cues, to designing A/B tests that help users configure settings for sensory and communication needs.

  • Beginner (USD $500): Visualize design solutions to accessibility challenges
  • Intermediate (USD $1,000): Prototype new interactions for inclusive conferencing
  • Advanced (USD $1,500): Create A/B tests to evaluate design improvements
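For the Advanced design challenge, submissions will need some way to tell whether a proposed feature actually helps. As a hedged illustration only (the feature, sample sizes, and completion counts below are invented, not from the challenge materials), here is one common way to evaluate an A/B test: a two-proportion z-test comparing task-completion rates between a control interface and a variant.

```python
# Hypothetical sketch: evaluating an A/B test of an accessibility feature
# with a two-proportion z-test. All numbers are invented for illustration.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two completion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant A: default UI; variant B: hypothetical persistent view preferences.
z = two_proportion_z(success_a=120, n_a=200, success_b=150, n_b=200)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests significance at the 5% level
```

With these invented counts (60% vs. 75% completion), the test would flag a meaningful improvement; a real submission would of course pre-register its metric and sample size.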

Data Science Track: Tackle bias in emotion AI by auditing human labeling practices, evaluating systemic misclassifications, and testing technical improvements. Challenges move from uncovering how assumptions about “normal” emotion expression encode bias, to analyzing model outputs for harmful patterns, to modifying models with fairness techniques to improve accuracy for neurodivergent and intersectional users.

  • Beginner (USD $500): Conduct human labeling bias audits on voice emotion data
  • Intermediate (USD $1,000): Perform statistical bias evaluation of emotion AI outputs
  • Advanced (USD $1,500): Test Valence’s API and propose architecture improvements
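To make the Intermediate task concrete: a statistical bias evaluation typically starts by disaggregating model performance by group. The sketch below is a minimal, hypothetical example (the records, group labels, and emotion categories are synthetic and not drawn from Valence's API or the challenge dataset) of computing per-group accuracy and the gap between groups.

```python
# Minimal sketch of a disaggregated bias evaluation for emotion AI outputs.
# All records and group labels below are synthetic, for illustration only.
from collections import defaultdict

# (group, true_emotion, predicted_emotion) triples from a hypothetical audit set.
records = [
    ("neurotypical", "happy", "happy"),
    ("neurotypical", "sad", "sad"),
    ("neurotypical", "happy", "happy"),
    ("neurotypical", "neutral", "neutral"),
    ("neurodivergent", "happy", "neutral"),
    ("neurodivergent", "sad", "neutral"),
    ("neurodivergent", "happy", "happy"),
    ("neurodivergent", "neutral", "neutral"),
]

def per_group_accuracy(records):
    """Accuracy of predictions, broken out by group."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

acc = per_group_accuracy(records)
gap = max(acc.values()) - min(acc.values())
print(acc)
print(f"accuracy gap: {gap:.2f}")  # large gaps signal disparate performance
```

A real submission would go further, for example breaking errors out by emotion class or testing whether the gap is statistically significant, but the disaggregation step above is where most bias audits begin.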

Submission materials for all tracks and levels are available here.

About The Challenge

The best digital communication happens when tools match how people naturally think and process information. But what’s intuitive for one person can be overwhelming for another. Someone with sensory processing differences might need adjusted video call settings, while rapid-fire messaging doesn’t work for people who need more time to process information.

This goes deeper than accessibility checkboxes: most platforms are built on narrow assumptions about how people communicate. Designs target the “average” or most common user, and solutions are retrofitted for everyone else.

What if we flipped that approach? Instead of building one platform and adding accommodations, we could design communication systems that are inherently flexible. Think interfaces that let users control sensory input and conversation pacing, paired with AI-powered tools trained on neurodiverse communication patterns that can adapt format and timing in real-time.

Valence AI CEO Chloe Duckworth explains their participation:

“This challenge aligns perfectly with our mission to bridge different communication styles and gives us valuable feedback on our API’s potential biases from the broader community.”

Valence AI is committed to challenging deficit models and supporting authentic communication through inclusive design. By making emotion AI more transparent and adaptive, they help ensure that diverse communication styles—including those of neurodivergent users—are recognized and valued.

Valence AI CTO Shannon Brownlee emphasizes the technical importance:

“Bias bounties provide a structured way to stress-test our models against real-world scenarios we might not have considered. Having diverse voices challenge our assumptions early in development is invaluable for building robust, inclusive AI systems.”

Gabrielle Waters, Director of the Center for Responsible AI at Virginia State University, notes how bias bounties advance existing neurodiversity research:

“Accessibility isn’t only about removing barriers, it’s about rethinking the foundations of how we design technology. Digital tools are typically built for the average user and that leaves anyone who falls outside that narrow framing to struggle with retrofitted fixes. This bias bounty flips that approach and invites a variety of voices, especially neurodiverse ones, to create systems that are inclusive by default. As we build with accessibility at the center, we’re not just solving for edge cases, we’re making technology better for everyone.”

Our Approach to Accessibility

Every bias bounty we run is designed with inclusivity in mind, but given the subject matter, we were especially deliberate here. Every aspect of this challenge has been built around accessibility from the start:

  • Participation tracks in both Design and Data Science, available at Beginner, Intermediate, and Advanced levels, so that anyone can contribute meaningfully
  • Accommodations available upon request, recognizing that standard formats do not work for everyone
  • Resources and toolkits to help participants understand neurodivergent communication styles and accessibility needs
  • A community-centered process that values neurodivergent voices as co-designers and co-builders, not just research subjects
  • In-person bounty events will take place at Morgan State University and Virginia State University. Register for the in-person events here

This challenge offers more than prizes. It’s an opportunity to demonstrate what’s possible when accessibility guides technology from the start, not as an afterthought.

Click here to find the submission materials and form.

Sign up for our newsletter