Artificial Intelligence (AI) offers companies many benefits, but as it becomes more commonplace, there are growing concerns about building bias into AI. Training on flawed data can lead to consumer exclusion and discrimination, which is bad for users and business alike: diverse teams building for diverse users tend to be more successful. Hence the growth of best practices for building ethical AI, which aim to avert these issues.
“Ethics is a mindset, not a checklist,” said Kathy Baxter, architect of ethical AI practice at Salesforce, speaking at the 2018 Dreamforce conference. “AI has tremendous potential for good, but there's so much potential for bias as well. We have to make sure what we create is positive.”
These are her top tips for companies implementing AI.
1. Cultivate an Ethical Mindset
Company culture is key to making sure ethics are a priority from the get-go, she said. “Employees need to be motivated to do the right thing. If they don't feel psychologically safe to challenge each other, then bad things will happen.” This starts with the hiring process, as building a diverse team is a great way to make sure everyone's voice gets heard. “You want to make sure you don't create an echo chamber in your team's research,” she said. “Recruit for a diversity of backgrounds to avoid bias and feature gaps.”
2. Identify User Segments
“Equality isn't the same as equity,” she said, giving the example of skin cancer screening. Doctors sometimes lack experience identifying melanoma on darker skin, and an AI trained on their diagnostic data would inherit the same blind spot. “Applying the same yardstick to everyone doesn't work,” Baxter said. “You may need to create different models for different user types. Don't let a minority group suffer for predictions built on the majority.”
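One way to act on this advice is to train and evaluate a separate model for each user segment rather than a single global one. The sketch below is a minimal illustration of that pattern, not Salesforce's implementation; it assumes scikit-learn, a pandas DataFrame, and hypothetical column names.

```python
# A minimal sketch of per-segment modeling, assuming scikit-learn and a
# pandas DataFrame. The column names here are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def train_per_segment(df, segment_col, feature_cols, label_col):
    """Train one model per user segment instead of a single global model."""
    models = {}
    for segment, group in df.groupby(segment_col):
        model = LogisticRegression(max_iter=1000)
        model.fit(group[feature_cols], group[label_col])
        models[segment] = model
    return models

# Usage: route each user to the model trained on data from their segment.
# models = train_per_segment(df, "skin_tone_group", ["age", "lesion_size"], "melanoma")
# prediction = models[user_segment].predict(user_features)
```

Evaluating each segment's model on held-out data from that same segment also surfaces where the data for a minority group is too thin to support reliable predictions.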
3. Contextualize to Mitigate Harm
At every stage of the development cycle, you must question how the AI's recommendations could harm someone, Baxter explained. “Are the risks and rewards being applied to all?” For example, someone turned down for a loan by an AI could experience knock-on effects such as losing their housing, job and social care. “The negative impact can be extreme,” she said. There is always a chance the AI is wrong, she stressed, and counterfactual fairness tools should be used to check every user group for inherent bias.
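A basic counterfactual check asks whether a model's decision changes when only the protected attribute changes. The sketch below is a minimal, hedged illustration of that idea, assuming a fitted classifier with a scikit-learn-style predict() and a pandas DataFrame with a hypothetical protected-attribute column; dedicated fairness toolkits offer more rigorous tests.

```python
# A minimal sketch of a counterfactual fairness check, assuming a fitted
# classifier with a scikit-learn-style predict() and a pandas DataFrame.
# The protected-attribute column name is a hypothetical placeholder.
import pandas as pd

def counterfactual_flip_rate(model, X, protected_col, groups):
    """Fraction of predictions that change when only the protected
    attribute is altered; 0.0 means the attribute never flips a decision."""
    baseline = model.predict(X)
    changed, total = 0, 0
    for g in groups:
        X_cf = X.copy()
        X_cf[protected_col] = g  # counterfactual: assign everyone to group g
        preds = model.predict(X_cf)
        changed += (preds != baseline).sum()
        total += len(preds)
    return changed / total
```

A nonzero flip rate for any group is a signal that the loan-style decisions Baxter describes depend on who the applicant is, not just on their circumstances.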
4. Track the Human Impact
This can mean a variety of things, according to Baxter. You want to collect feedback from users and be transparent about the fact that an AI is involved. There should also be due process for people who want to appeal decisions that have harmed them. To keep on top of the process, she recommends hiring dedicated staff to monitor and iterate. “Admins should stay current on the latest features and create a culture of agile AI.” Before rolling out a feature, it's important to pilot it with small groups and monitor it regularly, to make sure it is hitting the success metrics you identified.
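In practice, that monitoring can be as simple as recomputing the success metric per user group on a schedule and flagging any group that falls behind. The sketch below assumes pilot outcomes logged in a pandas DataFrame with hypothetical "group" and "success" columns; it is an illustration, not a prescribed tool.

```python
# A minimal sketch of pilot monitoring, assuming outcomes logged in a
# pandas DataFrame with hypothetical "group" and "success" columns.
import pandas as pd

def monitor_pilot(log, group_col, success_col, target):
    """Report the success rate per user group and flag any group below target."""
    report = log.groupby(group_col)[success_col].mean().rename("success_rate").to_frame()
    report["below_target"] = report["success_rate"] < target
    return report

# Usage: run on a schedule during the pilot and investigate flagged groups
# before widening the rollout.
# print(monitor_pilot(pilot_log, "group", "success", target=0.9))
```

Breaking the metric out by group, rather than tracking a single aggregate, is what catches the cases where a feature succeeds overall while quietly failing a minority of users.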