Twitter announced on Friday that it would pay users and researchers a financial “bounty” if they could help the social media platform eliminate algorithmic bias.
Twitter said this would be “the industry’s first algorithmic bias bounty competition,” with prizes of up to $3,500.
According to Twitter executives Rumman Chowdhury and Jutta Williams, the competition is modelled on the “bug bounty” programmes that other websites and platforms run to uncover security gaps and vulnerabilities.
“Finding bias in machine learning models is difficult, and sometimes, companies find out about unintended ethical harms once they’ve already reached the public,” Chowdhury and Williams wrote in a blog post. “We want to change that.”
According to them, the hacker bounty model holds promise for detecting algorithmic bias.
“We’re inspired by how the research and hacker communities helped the security field establish best practices for identifying and mitigating vulnerabilities in order to protect the public,” they wrote. “We want to cultivate a similar community… for proactive and collective identification of algorithmic harms.”
The action comes amid rising concerns about automated algorithmic systems that, despite being designed to be neutral, can contain racial or other forms of bias.
Twitter, which started an algorithmic fairness initiative earlier this year, announced in May that it was eliminating an automated image-cropping system after a review found bias in the algorithm that controlled the function.