In October 2025, Humane Intelligence, in collaboration with the Cognitive and Neurodiversity AI (CoNA) Lab at Virginia State University and Valence AI, launched the Global Accessibility Bias Bounty: a challenge inviting technologists, designers, researchers, and people with lived experience to reimagine AI-powered digital conferencing tools for neurodivergent users. The challenge wrapped up in December with submissions exceeding expectations. Participants built working prototypes, conducted rigorous bias audits, and proposed production-ready technical solutions that could transform how platforms serve diverse user needs.

We’re deeply grateful to Google.org for their support of Humane Intelligence’s Bias Bounty Program, and to Zoom Cares, whose generous contribution, together with an in-kind product license donation for the Humane Intelligence team, enabled us to expand the number of prizes and recognize additional deserving participants. The challenge offered Design and Data Science tracks across multiple skill levels, with prizes ranging from $500 to $1,500.

Consistent with Humane Intelligence’s broader work, we designed this challenge to center on participatory design by engaging impacted communities as co-designers and co-builders, not just research subjects. We focused on inclusion through multiple communication formats, providing both video announcements with subtitles and text-based resources, and clearly expressed that requesting accommodations was welcome and encouraged.

Notably, this bias bounty also achieved the highest participation numbers to date. This level of participation underscored that, for many contributors, the impact went beyond prizes. TUES DAY, winner of the Data Advanced track, captured this sentiment: “Participating in this challenge was meaningful for me. When I was ready to submit I realized that even if I didn’t win, I would still be pleased with the attempt in the right direction. Y’all gave me a great reason to finish this artifact, which I felt was useful for illustrating the core pillars of my accessibility by design philosophy.”

Why This Focus Matters

Video conferencing has become essential infrastructure for work, education, and social connection. Yet these platforms were not originally designed with neurodivergent users in mind.

From autistic users who find gallery view overstimulating and must manually adjust settings each session, to those with ADHD who struggle without meeting agendas, to users whose communication differences result in lower “engagement scores” that affect performance reviews, digital conferencing platforms can create significant barriers. As one analysis noted, “even sociable calls can be draining because of the extra work needed to pick up on, and communicate, non-verbal signals” for neurodivergent users, with one study finding that even a 1.2-second delay can lead to conversations being perceived as less friendly or uninteresting.

Two Core Challenges

This bias bounty focused specifically on two interconnected accessibility challenges in AI-powered video conferencing:

Turn-Taking and Conversation Flow: Virtual meetings can be particularly challenging for neurodivergent users who struggle with unclear conversation dynamics. Successful turn-taking depends on socio-communicative abilities and precise coordination between participants. Without clear speaking cues and structured moderation, neurodivergent participants often face an impossible choice: interrupt others or stay silent. The same research showed that autistic dyads produced considerably longer silent gaps than non-autistic dyads in the early stages of dialogue, though this difference disappeared as conversations progressed, suggesting that thoughtful design interventions could help reduce this initial barrier.

Emotion Recognition Complexity and AI Fairness: Emotion AI holds genuine promise for accessibility. Many neurodivergent users struggle to interpret emotional cues across multiple modalities (face, voice, and body), making tools like Valence’s emotion recognition potentially transformative. However, building truly inclusive emotion AI is complex. Research reveals that even well-intentioned AI systems can embed biases against neurodivergent communication patterns. Studies show that many emotion recognition challenges are linked to co-occurring alexithymia (difficulty identifying and describing emotions) rather than autism itself. As neurodiversity researcher Dr. Megan Anna Neff explains, an estimated 50-60% of autistic people have alexithymia, and “many of the stereotypical assumptions about autism (for example, struggle with empathy, theory of mind, emotional identification, and reciprocity) are actually better explained by alexithymia and are not intrinsic to autism itself.” This complexity means that even the most advanced models need further bias detection and mitigation.

Beyond Digital Conferencing: High-Stakes Contexts

Research has documented significant bias against neurodivergent individuals across AI language models. A 2024 study published in Autism Research tested 11 different language model encoders and found “generally high levels of bias” against neurodivergence-related terms. Remarkably, many sentences describing disabilities like “I have autism” had even stronger negative associations than control sentences such as “I am a bank robber.” Troublingly, these biases are showing up in high-stakes contexts beyond video conferencing:

In hiring: AI-enabled interview tools that analyze facial expressions, vocal cadence, and eye contact during video interviews can systematically disadvantage autistic candidates. A 2024 University of Washington study found that AI resume screeners ranked applications lower when they included autism-related awards or memberships. Bloomberg Law reported that AI-enabled interview tools and algorithmic personality tests risk employment discrimination against autistic job seekers because the technology weighs criteria that can disadvantage neurodivergent candidates; autism-related EEOC charges have risen significantly in recent years.

In education: AI detection tools designed to catch academic dishonesty disproportionately flag neurodivergent students’ original work as AI-generated. The AI Incident Database documented cases where students with autism spectrum disorder were falsely accused of cheating because their writing style (often more structured, direct, and formulaic due to how they were taught to communicate) resembles AI-generated text. One well-documented case involved Moira Olmsted at Central Methodist University, who was falsely accused of cheating based solely on AI detector output. Despite explaining her communication style, which is shaped by her neurodivergence, she received a zero and a disciplinary warning.

The Choice to Include Both Design and Data Outcomes

From the start, we designed this bias bounty to address the full pipeline where bias enters AI-powered conferencing tools: from the initial product design decisions about which features to build and how they should work, to the data practices and model architectures that power AI features like emotion detection or automated moderation. Crucially, we also designed it to be as inclusive as possible, ensuring that a lack of data science or AI engineering skills would not prevent anyone from participating or contributing meaningful insights.

This dual-track structure served another critical purpose: skills-building and career development. By offering both design-focused and data science-focused challenges at multiple skill levels, we created entry points for participants with different technical backgrounds. Neurodivergent technologists who might face barriers in traditional AI/ML roles found pathways to contribute their expertise and develop new skills.

Alex Hana, winner in both Design Beginner and Intermediate tracks, exemplified this impact: “I’ve been waiting to join the Humane Intelligence bounties since they started but I don’t have strong data science skills. This bounty was my first time participating—what an inspiring challenge! As a neurodivergent technologist, this bounty gave me the push I needed to finally try vibe coding.” For participants like Alex, the bias bounty wasn’t just about identifying problems in existing systems; it was an opportunity to build technical skills, create portfolio-worthy work, and demonstrate expertise that might otherwise go unrecognized in traditional hiring processes.

Challenge Tracks: From Visualization to Production-Grade Systems

Each track and level was designed to address a different point in the accessibility and bias pipeline, increasing in technical complexity and real-world applicability.

Design Track

Beginner (Visualize Design Challenges): Participants created empathy-driven visualizations that made accessibility barriers visible, such as sensory overload in gallery view, unclear turn-taking cues, or the cognitive load of constantly adjusting settings. These artifacts helped translate lived experience into concrete design insight for platform teams.

Intermediate (Prototype Interactions): Participants built interactive prototypes exploring solutions like persistent view preferences, quiet modes, clearer turn-taking cues, and configurable emotional subtitles. Some submissions also explored how carefully applied emotion AI could support neurodivergent communication without increasing surveillance or pressure.

Advanced (Create A/B Tests): Participants designed A/B tests comparing alternative interface approaches for neuroinclusive configuration, such as visual card-based sensory settings versus traditional dropdowns, or different layouts for emotion AI with calibration and confidence controls.

Data Track

Beginner (Voice Emotion Labeling Bias Audit): Participants examined how human assumptions about normal emotion expression can encode bias into training data. They labeled voice clips, compared their judgments to automated outputs, and reflected on how culture, language, and exposure to neurodivergent communication shaped their interpretations.

Intermediate (Voice AI Bias Evaluation): Participants evaluated real emotion AI outputs to identify harmful misclassifications, quantify disparities with fairness metrics, and document direct and systemic harms, including cases where AI misreads neurodivergent affect and then reinforces that misreading as fact.
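To make the Intermediate task concrete: quantifying a disparity can start with nothing more than per-group error rates and a ratio between them. The sketch below is our own illustration, not any participant's submission, and the group names, labels, and audit records are invented toy data.

```python
from collections import defaultdict

def misclassification_rates(records):
    """Compute per-group emotion misclassification rates.

    records: list of (group, annotated_label, predicted_label) tuples.
    Returns {group: error_rate}.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, annotated, predicted in records:
        totals[group] += 1
        if predicted != annotated:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def disparity_ratio(rates, reference_group):
    """Ratio of each group's error rate to a reference group's rate."""
    ref = rates[reference_group]
    return {g: (r / ref if ref else float("inf")) for g, r in rates.items()}

# Toy audit data: (group, human-annotated emotion, model output).
records = [
    ("neurotypical", "calm", "calm"),
    ("neurotypical", "happy", "happy"),
    ("neurotypical", "calm", "sad"),
    ("neurotypical", "happy", "happy"),
    ("neurodivergent", "calm", "sad"),
    ("neurodivergent", "calm", "angry"),
    ("neurodivergent", "happy", "happy"),
    ("neurodivergent", "calm", "sad"),
]

rates = misclassification_rates(records)
print(rates)  # {'neurotypical': 0.25, 'neurodivergent': 0.75}
print(disparity_ratio(rates, "neurotypical"))  # neurodivergent ratio: 3.0
```

A 3x disparity like the one in this toy data is exactly the kind of quantified gap participants documented alongside qualitative descriptions of harm.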

Advanced (Neuroinclusive Voice AI Testing): Participants implemented mitigation strategies to improve model performance across neurodivergent and intersectional groups, using approaches such as data augmentation, calibration, fairness-constrained objectives, and other model-level interventions aimed at production readiness.
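The advanced submissions themselves aren't reproduced in this post, but one of the named interventions, calibration, can be sketched at a small scale. The toy example below (group names, logits, and labels are all invented) fits a separate temperature for each group by grid search, so an overconfident model gets softened where it is miscalibrated rather than uniformly.

```python
import numpy as np

def fit_group_temperatures(logits_by_group, labels_by_group, grid=None):
    """Per-group temperature scaling: for each group, pick the temperature
    that minimizes negative log-likelihood on held-out data, so one group's
    miscalibration isn't papered over by the majority's."""
    if grid is None:
        grid = np.linspace(0.5, 3.0, 26)
    temps = {}
    for group, logits in logits_by_group.items():
        labels = labels_by_group[group]
        best_t, best_nll = 1.0, float("inf")
        for t in grid:
            scaled = logits / t
            # Numerically stable log-softmax.
            scaled = scaled - scaled.max(axis=1, keepdims=True)
            log_probs = scaled - np.log(np.exp(scaled).sum(axis=1, keepdims=True))
            nll = -log_probs[np.arange(len(labels)), labels].mean()
            if nll < best_nll:
                best_t, best_nll = t, nll
        temps[group] = best_t
    return temps

# Toy logits: confidently wrong once on one group, always right on the other.
logits = {
    "neurodivergent": np.array([[4.0, 0.0], [0.0, 4.0], [4.0, 0.0]]),
    "neurotypical":   np.array([[4.0, 0.0], [0.0, 4.0]]),
}
labels = {
    "neurodivergent": np.array([0, 1, 1]),  # last prediction is a confident miss
    "neurotypical":   np.array([0, 1]),     # all correct
}
temps = fit_group_temperatures(logits, labels)
print(temps)  # the overconfident group gets a higher (softening) temperature
```

In production the same idea would be fit on a proper held-out set with a continuous optimizer, but the group-wise structure is the point.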

What We Learned: Themes Across Submissions

The Power of Participatory Design

Perhaps the most striking finding from this bias bounty was how often participants with lived experience surfaced insights that even accessibility experts might have missed. These weren’t minor edge cases. They were fundamental design opportunities that could improve the experience for everyone, not just neurodivergent users.

Across the Design track, we saw participants reimagine digital conferencing from the ground up. Alex Hana’s beginner-level submission introduced opt-in features for managing eye contact expectations and turn-taking in conversation, shifting responsibility from the individual to the platform and reframing accessibility as a system-level design opportunity rather than a user deficit. These features were intentionally private and configurable, reducing the risk of shame or surveillance.

At the intermediate level, we saw independent convergence on the same powerful idea across multiple submissions: allowing users to hide facial video while retaining text-based descriptions of emotional context. Several neurodivergent participants described feeling overwhelmed by panels of faces staring at them, unable to read facial expressions but experiencing intense sensory and social pressure from camera-centric norms. This wasn’t a feature request we had anticipated, but it appeared organically across submissions, indicating a real and recurring accessibility need.

Alex Hana’s intermediate-level sensory accessibility prototype took a different approach, addressing visual overstimulation through blue-light reduction, grayscale participant video, and AI-assisted brightness normalization during screen sharing. The AI component dynamically adapted the viewing experience in real time without altering what other participants saw, demonstrating how accessibility settings and adaptive AI could work together as a unified sensory-friendly mode.

Rose Shuman’s Design Beginner submission captured the sensory experience of video calls in visceral terms: “My submission is full of loud, off-putting and penetrating eyes. It was fun and cathartic to make it—because it has taken me years of desensitization to tolerate video calls, understanding that neurotypical people like looking into others’ eyes, and can read expressions. However, I don’t read expressions, and as such just see a panel of folks staring at me.”

This kind of insight, grounded in lived experience and communicated through design, exemplifies what Sasha Costanza-Chock calls “Design Justice.” In their book Design Justice: Community-Led Practices to Build the Worlds We Need, Costanza-Chock argues that “design justice is an approach to design that is led by marginalized communities and that aims explicitly to challenge, rather than reproduce, structural inequalities… [it] connects design to larger struggles for collective liberation and ecological survival.”

At the advanced level, Jeremiah Essilfie used side-by-side A/B comparisons to examine how a platform’s settings layout impacts attention, scanning, and working memory for users with ADHD. The work contrasted dense, ungrouped settings lists with a card-based layout featuring clearer separation and reduced text density, offering a practical method for testing accessibility-driven design changes in complex interface environments where usability challenges often go unnoticed.

When we center neurodivergent voices in designing communication technologies, we don’t just solve for “edge cases.” We fundamentally reimagine what inclusive communication could look like. We move from retrofitting accommodations onto systems built for an imagined “average” user, to building inherently flexible systems from the ground up.

Data Practices Matter

The Data track submissions demonstrated that bias doesn’t just live in trained models. It enters the pipeline much earlier, through the human decisions made during data annotation. Multiple participants in the Data Beginner track showed how their own cultural backgrounds, language patterns, and limited exposure to neurodivergent communication shaped how they labeled emotion in voice clips.

Chetan Talele took a particularly rigorous approach, using Implicit Association Tests (IATs) to explicitly interrogate their own biases and demonstrate how bias can enter the AI pipeline through annotator perception rather than malicious intent. Their audit highlighted recurring failure modes in video conferencing contexts, including misreading flat affect, attributing age-related vocal traits to emotion, and forcing single labels under uncertainty.

Multiple participants independently introduced a “split-stream” labeling framework that separates semantic content from vocal delivery, acknowledging that what someone is saying and how they’re saying it don’t always align, especially for neurodivergent speakers. Alex Hana’s beginner-level work went further, proposing context labels (such as irregular prosody, word choice, conversational style) alongside emotion labels rather than forcing a single interpretation. Drawing on lived experience as an autistic and dyslexic individual, they illustrated how traditional emotion categories can collapse nuance and unintentionally penalize non-normative expression during training.
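As a rough illustration of the split-stream idea described above (the field names and tags here are our own invention, not any participant's actual schema), a single annotation record might separate the two streams explicitly rather than forcing one emotion label:

```python
from dataclasses import dataclass, field

@dataclass
class SplitStreamLabel:
    """One annotation that separates what was said from how it was said."""
    clip_id: str
    semantic_emotion: str  # emotion implied by the words themselves
    vocal_emotion: str     # emotion implied by prosody and delivery
    confidence: float      # annotator certainty, 0.0 to 1.0
    context_tags: list = field(default_factory=list)  # e.g. "irregular_prosody"

    def streams_diverge(self) -> bool:
        """Flag clips where content and delivery point to different emotions,
        instead of treating the mismatch as annotator error."""
        return self.semantic_emotion != self.vocal_emotion

label = SplitStreamLabel(
    clip_id="clip_042",
    semantic_emotion="happy",   # the words describe good news
    vocal_emotion="neutral",    # delivered with flat affect
    confidence=0.6,
    context_tags=["flat_affect", "irregular_prosody"],
)
print(label.streams_diverge())  # True: content and delivery disagree
```

A divergent record like this one is a data point worth keeping, not noise to discard: for many neurodivergent speakers, flat delivery of happy content is the accurate label.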

At the intermediate level, Aaron Golden demonstrated with quantified metrics that neurodivergent users experience significantly higher emotion misclassification rates in emotion-AI systems. He then implemented a fairness-aware correction layer with a “mirroring” component that makes AI corrections visible and interpretable, framing the system as adaptive and assistive rather than static and judgmental. As Aaron reflected: “I am extremely grateful to have had the chance to take part in this bias bounty challenge! Humane Intelligence is at the forefront of bringing together people from a variety of backgrounds to face new and emerging challenges in making tech more human-centric and beneficial for all.”
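Aaron's actual implementation isn't shown in this post. Purely as a hedged sketch of the pattern he describes, a correction layer with a visible "mirroring" note might apply learned per-group score offsets and surface every correction it makes (the group tag, offsets, and scores below are hypothetical):

```python
def corrected_prediction(raw_scores, group, group_adjustments):
    """Apply a per-group correction and surface it to the user ("mirroring").

    raw_scores: {emotion: score} from the base model.
    group_adjustments: {group: {emotion: additive offset}} learned offline,
    e.g. to counteract systematic under-scoring of 'calm' for flat affect.
    Returns (corrected label, human-readable explanation).
    """
    offsets = group_adjustments.get(group, {})
    adjusted = {e: s + offsets.get(e, 0.0) for e, s in raw_scores.items()}
    raw_label = max(raw_scores, key=raw_scores.get)
    new_label = max(adjusted, key=adjusted.get)
    if new_label != raw_label:
        note = f"Model read '{raw_label}'; corrected to '{new_label}' for a known bias."
    else:
        note = f"Model read '{new_label}'; no correction applied."
    return new_label, note

scores = {"sad": 0.48, "calm": 0.40, "angry": 0.12}
adjustments = {"flat_affect": {"calm": 0.15}}  # hypothetical learned offset
label, note = corrected_prediction(scores, "flat_affect", adjustments)
print(label, "-", note)
```

Making the explanation part of the return value is what turns a silent reweighting into the interpretable, assistive behavior the submission emphasized.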

The Data Advanced submission by TUES DAY took this further, presenting a production-grade neuroadaptive AI framework that addresses emotion-AI bias at the system level. The work introduced bidirectional reasoning, allowing the model to verify whether mismatches between affect and content are expected neurodivergent patterns rather than errors, reducing false negatives for alexithymic users by 40%. An automated fairness testing loop (BeTaL) continuously generates challenging edge cases to proactively surface bias as systems evolve, demonstrating how accessibility-aware AI can function as an assistive layer rather than a passive classifier.

Looking Forward

The submissions we received were actionable blueprints, from sensory-friendly interface modes to fairness-aware AI architectures to inclusive data labeling practices. Participants demonstrated what becomes possible when we invite those most affected by bias to be architects of change.

That potential for real-world impact became tangible when participants who consented allowed us to share their work with our partners at Google.org, Zoom, and Valence AI. These organizations gained direct access to insights that could shape their product development, and their response validated the value of the bias bounty approach. Partners at Valence AI expressed strong enthusiasm about the submissions, particularly the data-focused entries that aligned with their upcoming priorities around modeling and data quality. This demonstrated how bias bounties can generate solutions that organizations are not only seeking, but eager to implement.

As AI continues to permeate every aspect of communication, work, and daily life, we need more opportunities like this to center the voices of those most impacted by these systems. We’re grateful to every participant who shared their time, expertise, and lived experience in this challenge. We are committed to continuing this work through future bias bounties and partnerships with organizations ready to build more inclusive AI.

To learn more about bias bounties or to get involved in future challenges, visit https://humane-intelligence.org/programs-services/bias-bounty/.

Challenge Winners

We’re honored to recognize the following winners of the Accessibility Bias Bounty. Note that some winners preferred not to be listed publicly, so this is not the complete list. We thank all participants for their thoughtful and impactful submissions.

Design Track

Data Track
