A look back at what we’ve done in 2025 and what we’re excited about in 2026
About 2.5 years ago, a few months after I left my job as Director of Tech for Social Good at GitHub, I threw together a few slides and called a meeting with some of my closest and most trusted colleagues. In my presentation, I described a problem that each of us had encountered many times and in many different ways over the course of our careers – tech was once again failing to meet the social good needs of our society. That presentation eventually morphed into a video about Responsible AI and AI for Social Good, and that idea in turn formed the basis of our new strategy here at Humane Intelligence.
In fewer than three years, Humane Intelligence has emerged as a global thought leader in the field of AI evaluations and in the responsible AI community, thanks to the visionary work of its co-founder, Dr. Rumman Chowdhury, and many team members who have contributed along the way. This past August, it was my honor to take over the organization from Rumman and operationalize a strategy shift that brings responsible AI and AI for social good together. Two and a half years after I held that meeting, it is gratifying to see the groundswell of activity to use AI to advance human development. Humane Intelligence is here to meet the moment as one of the few organizations globally that offer human-centered, rigorous evaluations to responsibly unlock the power of AI for social good at scale.
We are incredibly excited that our two main requests from Q4 2025 have been fully funded. As we detailed in this blog post, throughout 2026, we will be collaborating with Radiant Earth to move our bias bounty program onto Zindi, thanks to the generous support of the Heising-Simons Foundation. We will also be releasing our AI red teaming app under an open source software license in Q4 2026, thanks to the support of Google.org. Next year, we will continue to provide our red teaming, bias bounty and contextual evaluation services to our clients. And I’m excited to say that we already have more than 70 members in our new AI in Public Health working group, which kicked off last week.
In January or February 2026, we will announce a new set of project and funding priorities to strengthen our AI evaluation methodologies and evidence base, expand our work to more countries, and deepen our understanding of how context, inclusion, bias and culture intersect with AI models and systems.
Apart from realizing our funding priorities, 2025 was an incredible and busy year for our organization. Rather than offering only my perspective, I have asked the team to highlight some of their favorite moments below. I’ll kick us off!

Landing two big grants for our upcoming, highly impactful work was definitely a highlight of my year at Humane Intelligence. On par with that high was redesigning our branding and launching our new website. I’m so incredibly grateful to Kyle Meehan, the owner of Andless Studio, which created our new logo and website design system. Kyle designed all of my websites when I was at GitHub, and as always, he and his team were fast, insightful, brilliant collaborators. The Humane Intelligence team also stepped up big time to help. In less than two months, we redesigned our logo and branding; created our website taxonomy; gathered, rewrote and created all content; wireframed our pages; populated all content in our new website; and launched. It was a Herculean effort, and it has paid off! We have more visitors staying on our website for longer and more people reaching out to us through our contact form than ever before.

A second highlight for me was hosting our UNGA side-event in September. As someone who spent nearly a decade working directly in the UN system, I have attended a LOT of UNGA events and hosted a few. The depth of knowledge on our panels, the energy and enthusiasm of our attendees, and the overall vibe of the night were truly special. We hosted the only event at UNGA focused on AI and the SDGs, and we’re working on several project and partnership ideas born out of the discussions that night. Thanks again to our event collaborators, Compiler and Tech Salon, to our team, to our panelists, and to the Heising-Simons Foundation for supporting the event.

A huge highlight for me this year was our AI Society House, hosted on the sidelines of the Paris AI Action Summit in February. The event had two parts: (1) We featured demos with partners who co-developed technical solutions and evaluations to understand and address the societal impact of AI models. (2) We hosted six panels with the broader community to discuss a wide range of topics related to public interest technology and developing solutions for addressing the societal impact of AI. It was incredible to see so many of our collaborations with partners featured in one space, including with NIST, CISA, CAMLIS, the UK FCDO, the Singapore IMDA, FEMYSO, the UK Royal Society, and Revontulet!

I really enjoyed co-hosting a workshop at TrustCon, the conference for trust and safety practitioners, designed for both T&S practitioners and professionals tasked with making the case for AI and trust and safety investment. We aimed to help attendees connect abstract or controversial risks (e.g., misinformation, surveillance, self-harm triggers) to enterprise impact and operational risk. We also demonstrated how organizations without formal AI governance teams can still run effective red teaming initiatives. Participants stress-tested two mock AI systems: (1) a “Virtual Therapist” chatbot and (2) an “Ask the Historian” educational assistant. Our work was featured in Tech Policy Press.
On the engineering front, 2025 has been a really big year of building and planning for the future. The engineering team is focused on creating technical systems to support our mission. The first half of the year was spent expanding our capabilities, so a huge highlight for us was putting the improved platform through its paces at the first big event we hosted on it, assessing the performance of educational AI systems. We knew we had built a successful partnership with our client when we delivered a detailed analysis that helped them better understand what they had been building and shape better outcomes for students.
As a person who identifies as an open source scientific computing superfan, I love when impactful technology is built under open source licenses. Our work over the past year showed us what we need to do to expand our reach, and so my second highlight is our push into open source software development. This highlight is ongoing, as we were recently awarded funding from Google.org to continue this work to expand the impact of our platform, making it more affordable, international, user-friendly, and compatible with a multitude of technical systems.

A personal highlight was contributing to Humane Intelligence’s rebrand and supporting the nonprofit’s evolution into its current identity. 2025 marked an important transition for the organization, and under Mala Kumar’s leadership we now have a clear path for growth and new opportunities as we move into 2026. I’m excited to continue shaping our communications through this next chapter.

Another highlight for me this year was being part of the thought leadership group we formed with support from our leadership and volunteers. This group brings together passionate and talented volunteers alongside HI staff to develop thought leadership pieces that showcase our perspective and inspire others. I am looking forward to the work we will create together in 2026 and to producing pieces that help expand understanding, spark dialogue, and support more responsible uses of AI.
As a Software Developer, 2025 was a highly rewarding year. One of my key highlights was working closely with the engineering and red teaming teams to develop, revamp, and strengthen Humane Intelligence’s AI red teaming app. Throughout the year, I focused on improving the experience for evaluators and participants by making the app more intuitive, reliable, and better suited for live red teaming engagements. Supporting a series of successful red teaming events was a major source of motivation and a clear reflection of the impact of this work.
Another highlight was contributing to the work that prepares our red teaming app for its next chapter as an open source tool. Laying the groundwork for this transition has been especially exciting, and advancing the open source release will be a key focus of our work in 2026. I’m excited to bring my open source experience to this work.

One of the biggest highlights this year was joining Humane Intelligence as a Data Scientist through Reliabl’s vendor partnership. The alignment between Humane Intelligence and Reliabl feels deeply natural, with both orgs focused on building the infrastructure AI actually needs to be more accountable, inclusive, and grounded in lived experience. Working alongside people who are equally passionate about the systems, processes, and community-driven approaches required to make AI better and truly serve social good has been an absolute pleasure and incredibly meaningful. I am grateful for the Humane Intelligence team, their leadership in this space, and the continual inspiration I get from the thoughtfulness and rigor everyone brings to this work.

A second highlight was the launch and successful completion of the Accessibility Bias Bounty, made possible through an incredible collaboration with Valence AI and Virginia State University’s CoNA Lab, an HBCU partner. This work had real impact, creating tangible career opportunities for participants while also helping educate and upskill practitioners on AI bias evaluation, quantitative bias measurement, and how accessibility harms enter ML training pipelines. The bounty surfaced meaningful insights around sensory overload, cognitive fatigue, emotional misclassification, and the importance of user-led interventions, demonstrating how accessibility-centered evaluation can lead to concrete design and technical improvements. This work was made possible through Google.org and a generous donation from the Zoom Social Impact team. Looking ahead, the Reliabl team is incredibly excited about continuing to support HI’s Bias Bounty work in 2026 in partnership with Zindi, supported by a grant from the Heising-Simons Foundation.