4 questions to ask when evaluating AI prototypes for bias

It is true that progress has been made on data protection in the US through the passage of laws such as the California Consumer Privacy Act (CCPA) and non-binding documents such as the Blueprint for an AI Bill of Rights. Yet there are currently no standard rules dictating how tech companies should reduce AI bias and discrimination.

As a result, many companies are lagging behind in developing ethical tools that put privacy at the center. Nearly 80% of data scientists in the US are male and 66% are white, indicating an inherent lack of diversity and demographic representation in the development of automated decision-making tools, which often leads to biased results.


Significant improvements in design review processes are needed to ensure technology companies consider all people when creating and customizing their products. Otherwise, organizations risk losing customers to competitors, tarnishing their reputation, and facing serious litigation. According to IBM, about 85% of IT professionals believe consumers choose companies that are transparent about how their AI algorithms are created, managed and used. We can expect this number to increase as more users continue to oppose harmful and biased technology.

What should companies look for when analyzing their prototypes? Here are four questions development teams should be asking themselves:


Have we ruled out all types of bias in our prototype?

Technology has the ability to revolutionize society as we know it, but it will eventually fail if it doesn’t benefit everyone in the same way.

To build effective, bias-free technology, AI teams need to create a list of questions to ask during the review process that can help them identify potential problems in their models.

There are many methods AI teams can use to assess their models, but before doing so, it is critical to evaluate the end goal and whether any groups may be disproportionately affected by the model's results. A simple starting point is sketched below.
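For instance, a team could run a basic disparate-impact check on a model's outputs before digging deeper. The snippet below is a minimal sketch, not a complete fairness audit; the `group` and `approved` columns and the sample data are hypothetical stand-ins for a team's own decision logs.

```python
import pandas as pd

# Hypothetical model outputs: one row per decision, recording the
# subject's demographic group and the binary outcome the model produced.
results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   1,   0,   0,   0],
})

# Selection rate per group: the share of positive outcomes.
rates = results.groupby("group")["approved"].mean()

# Disparate impact ratio: lowest selection rate over the highest.
# A common rough heuristic (the "four-fifths rule") flags ratios
# below 0.8 for closer review.
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: outcomes differ sharply across groups; review the model.")
```

A check like this does not prove a model is fair, but a failing ratio is a cheap, early signal that the training data or the model itself deserves scrutiny.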

For example, AI teams should be aware that facial recognition technologies can unintentionally discriminate against people of color – something that is far too common in AI algorithms. Research conducted by the American Civil Liberties Union in 2018 showed that Amazon's Rekognition facial recognition software incorrectly matched 28 members of the US Congress with mugshots. A whopping 40% of the incorrect matches were people of color, even though they make up only about 20% of Congress.
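Teams can run an audit in the same spirit on their own matching results. The sketch below is hypothetical: it assumes a list of match records, each labeled with the subject's group and whether the match was verified as correct, and reports the error rate per group.

```python
from collections import Counter

# Hypothetical audit records: (group, match_was_correct).
# In practice these would come from manually verified match results.
matches = [
    ("group_x", True), ("group_x", True), ("group_x", False),
    ("group_y", True), ("group_y", False), ("group_y", False),
]

totals = Counter(group for group, _ in matches)
errors = Counter(group for group, ok in matches if not ok)

# Report the false match rate for each group; a large gap between
# groups is the kind of disparity the ACLU study surfaced.
for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: false match rate {rate:.0%}")
```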


By asking challenging questions, AI teams can find new ways to improve their models and work to avoid these scenarios. For example, a close examination can help them determine whether they need more data or whether a third party, such as a privacy expert, should review the product.

Plot4AI is a great resource for people looking to get started.
