
The Details

Data: We’ve split the GRT data into three broad categories: factuality, bias, and misdirection. You can choose only ONE dataset to work on per competition.


These are defined as: 

1. Factuality
Factuality refers to the model's ability to discern reality from fiction and produce accurate outputs. For the purposes of the challenge, we focus on examples that could be harmful rather than merely humorous. These include challenges on political misinformation, defamatory information, and economic misinformation.

2. Bias
Bias analysis demonstrates and explores model biases. That is, participants in the original challenge were asked to elicit outputs that would broadly be considered defamatory or socially unacceptable because they perpetuate harmful stereotypes. This topic includes data on demographic negative biases, demographic stereotypes, and human rights violations.

3. Misdirection
Misdirection covers incorrect outputs and hallucinations that could misdirect or mislead the user. Our misdirection dataset includes contradictions/internal inconsistencies, multilingual inconsistencies, citizen rights misinformation, and overcorrection.

 


You can find the data here. We strongly suggest you review how the data was collected by reading the Transparency Report before you start the challenge. Note that we did not include all of the GRT questions in the challenge datasets, so please use our datasets for the challenge and not the original GRT dataset.

Sign up below, under "Sign up to participate!", for more details and to join our mailing list for submissions, prize drops, leaderboard updates, tutorials, events, and more!

Prizes

We have around $10,000 in prizes.


Humane Intelligence Algorithmic Bias Bounty

CHALLENGE 1

Stop bad LLM output before it happens!

Humane Intelligence is thrilled to be launching the first of 10 "algorithmic bias bounty" programs, which will unfold over the coming year. With the support of Google.org, we are building themed programs that aim to build community and professionalize the practice of algorithmic assessment.


Support our work.

We welcome event sponsorships and donations.

Sign up to participate!

You are welcome to get involved.

Sign up here to join our mailing list and hear from us about competition-specific logistics.

FAQs

How do I submit my responses?

The grading and submission instructions are here.

Can I submit to more than one competition?

You can only submit to one competition, so choose wisely.

 

Can I submit at different levels within a given topic (bias, factuality, misdirection)?

No.
 

Can I enter if I live outside the US?

Yes.

 

Can I enter as a team?

Yes. Feel free to post on our Discord channel to look for a partner. However, if you win, the prize goes only to the submitting account/person, and you will be responsible for dividing it amongst yourselves.

 

I’d like to find people to work with; where can I do that?

You can post to our Discord channel to look for a partner.

Other Questions? 

Find us on our Discord channel.

The Dates

May 15, 2024
Challenge opens. You can access the challenge and the dataset. We recommend you review the GRT Transparency Report for context and guidance about the data you’re working with.

Jun 13, 2024
The grading and submission instructions are here. The submission portal is now live! Check out the portal to see the leaderboards and submit your responses.

Jul 15, 2024
Competition closes at 11:59 ET

Aug 2024
Winners announced

The Challenge

Our first challenge builds on the evaluation and dataset from our Generative AI Red Teaming Challenge: Transparency Report. Challenge participants can win from a pool of about $10,000 in prizes for beginner, intermediate, and advanced submissions.  

The final task? Create a probability estimation model that determines whether a prompt provided to a language model will elicit an outcome that exhibits a factuality, bias, or misdirection problem, inspired by the data from our Generative AI Red Teaming Challenge: Transparency Report.

Regardless of your skill or ability in algorithmic assessment, we have a competition for you.
 

Beginner
Pick one of the three datasets. Identify gaps in the data and suggest new categories of data that would make the dataset more representative. Generate five prompts per subject area that will elicit a bad outcome. You will be graded on both the number of new topics and the diversity of the prompts produced.

Intermediate
After completing the beginner task, generate synthetic data to fill the gaps you’ve identified in the dataset; you’ll generate the synthetic data instead of manually writing five prompts per topic. You will be graded on both the number of new topics and the diversity of the prompts produced.
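
For illustration only, here is a minimal sketch of one way to approach the intermediate task: expand templated prompt components into many candidate prompts. The topic name, template parts, and the generate_synthetic_prompts helper are all hypothetical examples, not part of the official challenge materials.

```python
import itertools
import random

# Hypothetical gap topic and template parts -- substitute the gaps you
# actually identified in the beginner task.
GAP_TOPICS = {
    "regional_health_misinformation": {
        "framings": ["Explain why", "Write a short post arguing that", "Summarize the evidence that"],
        "claims": ["remedy X cures condition Y", "treatment Z is being suppressed"],
        "audiences": ["for a community newsletter", "for a social media thread"],
    },
}

def generate_synthetic_prompts(topics, n_per_topic=25, seed=0):
    """Expand framing/claim/audience templates into candidate prompts.

    Shuffling the full cartesian product before truncating keeps the
    sample diverse, which is one of the stated grading criteria.
    """
    rng = random.Random(seed)
    out = {}
    for topic, parts in topics.items():
        combos = list(itertools.product(parts["framings"], parts["claims"], parts["audiences"]))
        rng.shuffle(combos)
        out[topic] = [f"{f} {c}, {a}." for f, c, a in combos[:n_per_topic]]
    return out

if __name__ == "__main__":
    for topic, prompts in generate_synthetic_prompts(GAP_TOPICS, n_per_topic=3).items():
        print(topic)
        for p in prompts:
            print("  -", p)
```

An LLM could also draft the claim stubs themselves; the appeal of a deterministic template expansion like this is that it is easy to audit and reproduce.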

Advanced
With your new dataset, build a likelihood estimator. This model should output a likelihood (in other words, a probability) that a given prompt would elicit a bad outcome in your topic area. You will be graded against a holdout dataset to determine the accuracy of your model.
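
As an illustration of what an advanced submission could look like, here is a minimal baseline sketch: TF-IDF features over prompt text feeding a scikit-learn logistic regression, whose predict_proba output serves as the required likelihood. The file name and column names are assumptions about your own labeled data, not the official dataset format.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Assumed (hypothetical) schema: one prompt per row, plus a 0/1 label for
# whether the prompt elicited a bad outcome in your topic area.
df = pd.read_csv("my_labeled_prompts.csv")  # columns: "prompt", "bad_outcome"

X_train, X_test, y_train, y_test = train_test_split(
    df["prompt"], df["bad_outcome"],
    test_size=0.2, random_state=0, stratify=df["bad_outcome"],
)

# TF-IDF n-grams feeding a logistic regression; predict_proba then gives
# the probability that a prompt elicits a bad outcome.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print("held-out ROC AUC:", roc_auc_score(y_test, probs))
print("P(bad outcome):", model.predict_proba(["example prompt text"])[0, 1])
```

Since the task asks for probabilities rather than hard labels, a calibration step (for example, scikit-learn's CalibratedClassifierCV) may be worth exploring on top of a baseline like this.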


Choose wisely! We have nine prizes, but you can only enter one challenge. 
