Below, Peter van der Putten, assistant professor at Leiden University and global director of AI at Pegasystems, explains why these claims were especially painful for Apple, as it brands Apple Card as a product that represents “all the things that Apple stands for”, like simplicity, transparency and privacy.

It all started with a recent tweet by David Heinemeier Hansson, claiming his Apple Card credit limit is 20 times higher than his wife’s. Things got even worse for Apple when Steve Wozniak, co-founder of Apple, also tweeted that he can borrow 10 times more than his partner. Both state that all other circumstances are the same; for example, the couples share bank and credit card accounts and file joint tax returns. Goldman Sachs subsequently issued a statement that neither gender nor marital status is known to the bank in the application process, and that customers who have lower limits than expected should get in touch with the bank. This prompted presidential candidate Elizabeth Warren to criticise the bank for putting too much of the burden on the consumer.

Machine bias is a serious matter, but how would we know whether Apple Card credit policies are truly gender biased? How can bias creep into these models? And with more and more banks and financial organisations harnessing the power of AI for a variety of tasks, how can these businesses ensure that bias in artificial intelligence is kept to a minimum in the future?

First and foremost, it needs to be realised that from a pure capitalist perspective the bank would gain no commercial benefit from ‘being sexist’: by not giving credit to customers who can actually afford it, the bank is missing out on potential profit.

Also, AI is not some magic potion with secret evil intentions. AI algorithms are neither perfect nor objective; a better description would be to call them blind. AI is as biased as the data used to create it. To make things worse, even if its designers have the best intentions, errors may creep in through the selection of biased data for machine learning models, as well as through prejudice and assumptions in built-in logic. Financial organisations therefore need to make sure that the data and rules used to create their algorithms are as free of bias as possible. At the same time, one should realise that human decisions can be just as subjective and flawed, so we should approach those with the same scrutiny.
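
To make that concrete, here is a minimal, synthetic sketch (the data and variables are invented and have nothing to do with Apple’s or Goldman Sachs’ actual models): a model trained on historically biased approval decisions simply learns to reproduce that bias.

```python
# Minimal synthetic sketch: a model trained on biased historical decisions
# reproduces the bias. All data here is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
income = rng.normal(50, 15, n)            # same income distribution for both groups
group = rng.integers(0, 2, n)             # 0/1, stand-in for a protected attribute

# Historical decisions: approval depended on income, but group 1 was held
# to a stricter (biased) threshold.
approved = (income > np.where(group == 1, 60, 45)).astype(int)

# Worst case: the protected attribute is used directly as a model input.
X = np.column_stack([income, group])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# At identical income, the model now gives group 1 a much lower approval
# probability -- it has faithfully learned the bias in its training data.
probe = np.array([[55.0, 0], [55.0, 1]])
print(model.predict_proba(probe)[:, 1])
```

Removing the protected attribute from the model inputs does not automatically solve this, which is where proxy variables come in, as discussed below.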

Given the recent statements from the bank, and considering the rigorous regulatory environment it operates in, it is highly unlikely that the Apple Card policies explicitly take gender into account, as credit policies are typically subject to strict external regulation and internal model approval. That said, it is not enough simply to remove gender from a bank’s prediction models and rules, as other, more innocent-looking pieces of data such as disposable income may be correlated with ‘protected’ variables like gender and age. The goal is not to remove all correlation, but customers with the same characteristics and different genders should be offered similar credit limits. Nor will it be possible to eradicate bias for every single customer; bias needs to be assessed across the full customer base to see whether it stays within bounds, as sketched in the example below.
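
As a hedged illustration of what such an assessment might look like in practice (the file and column names below are purely hypothetical, not Apple’s or Goldman Sachs’ data), an analytics team could check both how strongly ‘innocent’ inputs correlate with gender and whether offered limits differ by gender for customers who otherwise look alike:

```python
# Hypothetical audit sketch: the file and column names are invented for illustration.
import pandas as pd

df = pd.read_csv("applications.csv")      # hypothetical application data

# 1. Proxy check: how strongly does each model input correlate with gender?
#    (Gender is kept out of the model itself, but retained for auditing.)
gender = (df["gender"] == "F").astype(int)
for col in ["disposable_income", "credit_history_len", "utilisation"]:
    print(col, round(df[col].corr(gender), 2))

# 2. Outcome check on the full customer base: compare offered limits for
#    customers who look alike on a key input (crudely bucketed here), by gender.
df["income_band"] = pd.qcut(df["disposable_income"], 5)
gap = (df.groupby(["income_band", "gender"])["credit_limit"]
         .mean()
         .unstack("gender"))
print(gap)   # a large, systematic gap within the same band is a red flag
```

A real fairness assessment would condition on all legitimate inputs rather than a single income band, but the principle is the same: measure the gap across the whole customer base and check that it stays within agreed bounds.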

In the Apple scenario, the claimants’ statement that ‘all other data was equal’ between partners may very well not hold when looking at the data and decision in detail. There may be material differences that Hansson and Wozniak have overlooked, such as credit history. Also, by definition the bank may only have a partial picture of the customer’s characteristics and context; for example, Apple’s values of simplicity and privacy mean that the information available at application time is limited. In other words, this is fundamentally a data problem.

The point is: how would you, as a customer, know what is driving an automated decision like this? That is why regulators are introducing the ‘right to an explanation’, and we can expect customers to exercise this right more and more.
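
What could such an explanation look like? Below is a deliberately simple sketch with an interpretable linear model and made-up feature names (real credit models and their governance are far more involved): report how much each input pushed an individual decision above or below the average applicant.

```python
# Per-decision explanation sketch with an interpretable (linear) model.
# Feature names and data are made up, not a real credit policy.
import numpy as np
from sklearn.linear_model import LinearRegression

features = ["disposable_income", "credit_history_len", "utilisation"]
X_train = np.array([[3.0, 10, 0.2],
                    [1.5,  2, 0.8],
                    [4.0, 15, 0.1],
                    [2.0,  5, 0.5]])
y_train = np.array([12_000, 2_000, 20_000, 6_000])   # historical credit limits

model = LinearRegression().fit(X_train, y_train)

# Explain one applicant's limit relative to the 'average' applicant:
applicant = np.array([2.5, 3, 0.6])
baseline = X_train.mean(axis=0)
contributions = model.coef_ * (applicant - baseline)

print("predicted limit:", round(model.predict([applicant])[0]))
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>20}: {c:+.0f}")   # positive = pushed the limit up
```

For more complex models, techniques such as SHAP generalise this additive style of per-decision attribution.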

Negative reports in the media about how AI might be misused present a challenge for businesses: how can they prove to their customers that they are using it right? Interestingly, a recent Pega survey into consumer attitudes to artificial intelligence found that 28% are not comfortable with its use by businesses. Stories such as this one about the Apple Card will only reinforce that opinion.

To combat these beliefs, financial organisations must be absolutely transparent about their use of algorithms and AI. The key is for banks to balance transparency with accuracy: the more ‘material’ the AI’s outcome, as with these credit limit decisions, the greater the need for transparency and control.

Taking a human approach to AI ensures that the technology is used responsibly and with the customers’ best interests fully in mind. It allows the technology to make decisions, in the context of customer engagement, that would be seen as empathetic if a person had made them. If an organisation can successfully cultivate such a culture of empathy, AI can also become a powerful tool to help differentiate it from the competition.