Strategy · Sep 04, 2025

Bringing our strategy forward – what we’ve done and what’s ahead

A look into our leadership transition and three strategy shifts going forward

Mala Kumar

If you’ve ever worked in AI, you’ve probably heard many analogies and metaphors used to describe it. In my professional circles, I’ve heard AI compared to cars, pizza, dogs, and my parents. Somewhere among the comparisons, hype cycles, and doom cycles that define our collective understanding of AI lies a much more complex and nuanced reality.

In just a few short years, Humane Intelligence, a nonprofit organization, has helped many people examine that complexity through our novel methods of AI / ML evaluation. While nuance will always remain a constant at Humane Intelligence, the needs, opportunities, markets, and demands of AI are rapidly evolving, and so too must our organization. If you’ll indulge me for a few minutes, I’d like to walk you through our next strategic chapter.

Our Leadership Transition

Dr. Rumman Chowdhury co-founded Humane Intelligence in 2022 and has served as its CEO since. Along with an incredible team, Rumman has led the organization from a small idea to a globally recognized name.

Earlier this year, Rumman decided to step down as the CEO of the nonprofit to focus on her new public benefit corporation.

Please wish her well and send her good vibes! Fortunately, Rumman will continue advising the nonprofit as our Distinguished Advisor. In the coming weeks, we’ll post more about her new company.

On August 1, 2025, I officially stepped in as Interim Executive Director of Humane Intelligence, the nonprofit. It’s great to be here!

More than 15 years ago, I got my start in tech at the United Nations, where I designed, implemented, and deployed software products throughout sub-Saharan Africa, India, Europe, and at home in New York City. I’m proud to say I was one of the first people to bring UX research and design methods to the field of tech for international development. I’ve worked in 20+ countries, in English and French, which has greatly shaped my outlook on both the power and limitations of digital technology solutions.

After the UN, I spent four years at GitHub as the Director of Tech for Social Good. Because my time at GitHub spanned the COVID-19 pandemic, much of my work was with the World Health Organization and focused on the important role of open source software (OSS) in public health. Together, we launched the first Open Source Programme Office in the UN system. While at GitHub, I also started a qualitative research program on OSS for good, a skills-based volunteering program, an OSS Maintainers for digital public goods program, and the MERL Center, and I contributed to the launch of the GitHub Innovation Graph.

Rumman and Mala at the Hamburg Sustainability Conference with colleagues from BMZ (Government of Germany) and UNDP.

I first met Rumman last year, while I was the Director of Program Management, AI Safety at MLCommons. Suffice it to say that we have complementary skills and outlooks on many issues, and we decided we needed to work together. I’m honored she asked me to step in as Interim Executive Director of this wonderful organization, and I’m grateful its Board of Directors agreed. As with everything I do, I promise to bring clarity, thought leadership, kindness, and humor, and to address real needs, as I lead Humane Intelligence through its transition period.

Strategy Shift 1: A Focus on Open Source Software

According to research from GitHub, open source software (OSS) can increase adoption and improve sustainability globally, especially in geographies where the cost of labor is lower than the cost of proprietary software-as-a-service (SaaS) alternatives. OSS has also been shown to improve local technology capacity and foster collaboration. At Humane Intelligence, we see OSS as a way to break down barriers so that more people and organizations can access AI evaluations and unlock the potential of AI to strengthen and scale their impact. That’s why we’re planning to release our red teaming web application as open source software.

Our Forking Strategy

Our goal is to end up with a robust OSS release alongside an improved managed instance, giving organizations that wish to red team their AI systems different licensing and engagement options to fit a wide range of budgets and needs. In partnership with the Indian company Tattle Civic Tech, we have completed an investigation of our code base and developed initial recommendations for how to release the OSS, which you can read about here.

And yes, we are seeking funding to do this right!

Strategy Shift 2: Responsible AI and AI for Good

Over the past three years, Humane Intelligence has pioneered novel ways to approach AI evaluations using methods from software development and cybersecurity, such as red teaming and bias (bug) bounties.

Humane Intelligence’s AI evaluation service offerings

Both methods are effective ways to assess algorithmic bias, a topic that sits within the fields of Responsible AI and AI Ethics. That isn’t changing. What is shifting slightly is how we focus those ideas. Based on past Humane Intelligence engagements in domain areas like education and gender, we know we can achieve some of our greatest impact by applying our responsible AI work to AI for Good deployments. We see AI evaluations as a critical component of any AI system build, especially in high-stakes domains like education, gender rights, democracy and governance, public health, and humanitarian response.

Questions that Humane Intelligence can help answer and the building blocks of AI systems

We’re here to help if you’re building AI models and systems for better economic, social and cultural outcomes and prosperity. We’ve worked with organizations across industry, international civil society, governments, academia, and more. 

Strategy Shift 3: The Future of Bias Bounties

Bias bounties are collaboratively designed sets of challenges that bring together researchers, impacted communities, and domain experts to rigorously examine and improve AI / ML systems, models, and datasets. Instead of treating bias as an abstract or philosophical debate, bias bounties create a structured process where bias can be systematically surfaced, measured, and addressed. Our bias bounties will continue to: 

  • Generate actionable improvements — move beyond theoretical discussions toward concrete fixes and retraining inputs.
  • Build inclusive defaults — design tools, datasets, and models that perform equitably across diverse contexts.
  • Strengthen the AI ecosystem — build a fairness-focused community of practice, equip developers with practical tools, and generate insights that enhance partner systems and foster public trust.

Our new challenges will also center lived experience – surfacing bias by co-designing challenges with the people most affected. 
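For readers who like to see what “systematically surfaced, measured, and addressed” can look like in practice, here is a minimal, purely illustrative sketch; it is not our challenge code, and the groups, predictions, and labels below are hypothetical. A bias bounty submission might report a metric like the accuracy gap between demographic groups:

```python
# Illustrative sketch only: one simple way a bias bounty submission
# might quantify bias, by comparing a model's accuracy across groups.
# Real challenges use real models, datasets, and group definitions
# co-designed with affected communities.
from collections import defaultdict

def accuracy_gap(records):
    """Return per-group accuracy and the max-min gap across groups.

    Each record is a (group, prediction, label) tuple.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, label in records:
        total[group] += 1
        correct[group] += int(pred == label)
    per_group = {g: correct[g] / total[g] for g in total}
    gap = max(per_group.values()) - min(per_group.values())
    return per_group, gap

# Hypothetical scored outputs, tagged by group.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1),
]
per_group, gap = accuracy_gap(records)
print(per_group)                    # per-group accuracy, ~0.67 vs ~0.33 here
print(f"accuracy gap: {gap:.2f}")   # a large gap flags potential bias to fix
```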

We will soon launch two new bias bounty challenge sets. Google.org graciously funded these two sets, as well as our first eight challenges.

Starting in September 2025, we are seeking corporate and organizational sponsorship or funding to offset the costs of designing and running new challenges. We are also seeking funding to partner with Zindi.Africa to scale our bias bounties on their platform, so we can reach more people more effectively.

Zindi is a data science challenge platform with more than 90,000 participating users across 185 countries. More than 60 percent of Zindi’s global user base is in 50+ sub-Saharan African countries, and 28 percent are women.

Get in Touch!

With the support of funders and partners, we can sustainably scale our AI evaluation service and product offerings, and help ensure that AI serves a greater global public good. If you’re interested in supporting our work, please reach out to us at info@humane-intelligence.org. We’d love to hear from you!

Sign up for our newsletter