How AI forces us to confront the ethics of our work
In late 2019, Apple’s latest foray into AI made headlines for all the wrong reasons. Its newly launched Apple Card, which used an algorithm to set applicants’ credit limits, was accused of gender bias when it offered significantly higher limits and more favourable interest rates to men, even when those men had weaker credit histories than women applicants.
Adding fuel to the controversy, when this discrepancy was pointed out — by Apple co-founder Steve Wozniak, among others — no one at Apple or Goldman Sachs, the issuing financial institution, could explain why. The algorithm, it seems, was too complicated, too opaque, for the humans working there to understand.
Apple and Goldman Sachs both claimed the algorithm couldn’t be biased because gender was not one of its inputs. The disparity in outcomes, however, was plain to see, and consumers’ faith in the application process was understandably shaken.
“It’s all about trust,” said Mattias. “It’s about really knowing what you’re doing. Really understanding how these underlying systems work in the context of specific applications such as KYC or credit scores. Are you able to explain well enough why that output was the case?”