Prisoner 1, an Arab-Israeli, appeared before the parole board at 8:30am having served 20 months of a 30-month sentence for fraud. Prisoner 2, a Jewish-Israeli, appeared at 3:10pm, 11 months through a 16-month sentence for assault, and Prisoner 3, an Arab-Israeli, appeared at 4:25pm having served 20 months of a 30-month fraud sentence.
In a landmark study of two different Israeli parole boards serving four prisons across 1,112 judicial rulings over a 10-month period, researchers discovered some interesting patterns, and they’re not what you might be thinking. But let’s come back to that.
How many of you have ever had a completely legitimate credit card purchase declined? It’s annoying, embarrassing and probably incredibly inconvenient. In card fraud industry jargon, this is known as a “false positive”: a fraudulent event is detected or suspected and the system acts to block the transaction, but in fact your transaction is perfectly legitimate. Customers object to such treatment and may abandon the transaction, switch cards, or even change banks. All these events lead to reduced fees for the card-issuing bank.
On the other hand, card fraud rates remain a growing problem. The Nilson Report of October 2016 estimated global card fraud losses at $21.84bn across all card types, equating to losses of 7.26c for every $100 of spending volume.
Losses are expected to grow to more than $30bn in 2018. So banks, merchants and acquirers clearly have a big problem on their hands, and effective measures must be deployed to combat fraud.
What’s not shown in these statistics is the additional cost to banks and others of the fraud-fighting efforts they’ve deployed: the large customer service departments that field calls from angry customers whose cards were just declined; the software systems designed to detect fraud; the lost revenue from customers who switch banks because they object to the way they were treated. Direct and indirect losses from card fraud are actually much greater than the statistics show.
Back to our prisoners. In the three parole applications I cited at the start of this article, the first prisoner, who appeared at 8:30am, received parole. The third prisoner, who appeared at 4:25pm and was serving the same sentence for the same crime, didn’t receive parole. The second prisoner, who appeared at 3:10pm and was serving a lesser sentence, was also denied parole.
What the researchers found was a pattern to the parole board decisions that wasn’t based on race, gender, ethnicity, or any other factor. The pattern was in fact timing. Prisoners who appeared before the parole board early in the morning received parole about 70% of the time, while those who appeared late in the day were paroled less than 10% of the time.
The chart above shows how the parole board decisions fluctuated during a single day. What’s interesting is that breaks had a restorative effect, though it was short-lived and never brought favourable rulings back to their original level. The mental effort expended in hearing the cases gradually depleted the parole board’s decision-making capabilities.
The phenomenon is popularly known as “decision fatigue”. The theory says that you pay a biological price for each decision you make during the day. Each of us has a finite amount of decision-making ‘energy’. Another way to think about it is as a muscle that burns fuel: each decision consumes some mental energy, and complex decisions – especially those requiring trade-offs and the balancing of different factors – consume more. Decision performance declines as your energy is gradually depleted.
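As a rough illustration, the ‘energy budget’ model above can be sketched as a toy simulation. The numbers here are invented for illustration only; they are not taken from the parole study.

```python
# Toy model of decision fatigue: each decision spends some of a finite
# energy budget, complex decisions spend more, and decision quality is
# modelled (crudely) as proportional to the energy remaining.
def simulate_day(decision_costs, start_energy=100.0):
    """Return the modelled decision quality before each decision of the day."""
    energy = start_energy
    quality = []
    for cost in decision_costs:      # e.g. 1 = simple decision, 8 = complex
        quality.append(energy / start_energy)
        energy = max(energy - cost, 0.0)
    return quality

# A day that starts with simple decisions and ends with complex ones.
qualities = simulate_day([1, 1, 3, 3, 5, 5, 8, 8])
print(qualities[0], qualities[-1])  # → 1.0 0.74
```

The point of the sketch is only that quality declines monotonically as decisions accumulate, mirroring the parole chart’s downward drift.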
You will have experienced it yourself in your everyday life: a tough day at work, arriving home exhausted, bingeing on junk food and/or alcohol, eating that whole tub of ice cream, an angry outburst at a family member, and so on. Did you ever buy something and a few hours later wonder why you bought it? The salesperson probably caught you at one of these energy low points.
It turns out that the available decision-making energy also differs for each of us, and affects society in varying ways. Willpower is decision-making in action: choosing not to do something is at least as expensive as choosing to do something. Think of business people, athletes, people on diets, public figures, or politicians who make rash decisions over Twitter and unleash a firestorm of criticism. Remind you of anyone? Your decisions may even become reckless and impulsive.
A bank’s credit card operations department receives thousands of fraud alerts each day. Each of those alerts is normally reviewed by a human. Each day, these workers are under pressure to investigate dozens or more alerts. The operator checks the case details displayed on the screen, reviews the information about the card holder and the transaction, the location and so on. In fact, there can be more than a hundred separate pieces of information that must be viewed by the worker. Perhaps they also send a “did you just make this purchase?” message to the card holder. Finally, a decision is made about whether the transaction is valid, should be declined, or referred for closer inspection. The workers are under constant pressure to service a certain number of alerts each day, all within a high-pressure call centre environment that sometimes requires dealing with irate customers.
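The triage decision described above – valid, declined, or referred for closer inspection – might be sketched like the following. Every field name, rule and threshold here is hypothetical; a real system would weigh far more than three signals.

```python
# Hypothetical sketch of a fraud-alert triage decision. Field names and
# thresholds are invented for illustration, not from any real system.
def triage_alert(alert: dict) -> str:
    """Return 'valid', 'declined' or 'refer' for a fraud alert."""
    score = 0
    if alert.get("amount", 0) > alert.get("customer_avg_amount", 0) * 5:
        score += 2                     # unusually large transaction
    if alert.get("country") != alert.get("home_country"):
        score += 1                     # out-of-country purchase
    if not alert.get("known_device", True):
        score += 2                     # transaction from an unseen device
    if score >= 4:
        return "declined"
    if score >= 2:
        return "refer"                 # hand to a human for closer inspection
    return "valid"

print(triage_alert({"amount": 900, "customer_avg_amount": 100,
                    "country": "FR", "home_country": "AU",
                    "known_device": False}))  # → declined
```

Even this toy version shows why the job is tiring: each rule is one more comparison the human would otherwise have to make by eye across a hundred-odd fields.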
Imagine for a moment how decision fatigue affects the card fraud department. How much fraudulent activity goes undetected because the operator failed to spot some errant piece of information because the case was assigned to them near the end of their shift, when they have perhaps reviewed dozens of other cases that same day? The research study found that decision fatigue is a factor whenever complex decisions – especially those requiring the ability to assess competing points of view – are made on a consistent basis. The same challenge exists in any job where operators are required to review a lot of data points in order to reach a decision quickly. Loan and credit applications? Insurance claims?
So, how can banks combat decision fatigue in the card fraud department (and banking more broadly)? One of the popular myths of artificial intelligence is that it can identify hidden patterns in the data. This is true, but only to a limited extent. The reality is that production AI systems today are only capable of supervised learning, which means they can only identify patterns in data that has been carefully structured, labelled and fed to them. Finding the data, labelling it, scrubbing it, removing bias and identifying characteristics from it remains a human job. Augmented intelligence is the practical application of artificial intelligence, data analytics and associated algorithms in a limited but highly targeted mode, to ‘augment’ human decision-making in complex decision environments such as those described above.
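To make ‘supervised learning’ concrete, here is a deliberately tiny sketch: the system can only learn a rule from transactions a human has already labelled. The data and the threshold rule are invented for illustration and stand in for far richer real models.

```python
# Minimal illustration of supervised learning: the model learns only
# from examples a human has already labelled. Data is invented.
labelled = [
    (25.0, "legit"), (40.0, "legit"), (60.0, "legit"),
    (900.0, "fraud"), (1200.0, "fraud"), (1500.0, "fraud"),
]

def fit_threshold(examples):
    """Learn the midpoint between the largest legit and smallest fraud amount."""
    legit = max(amount for amount, label in examples if label == "legit")
    fraud = min(amount for amount, label in examples if label == "fraud")
    return (legit + fraud) / 2

threshold = fit_threshold(labelled)    # 480.0 for the data above

def predict(amount):
    return "fraud" if amount > threshold else "legit"

print(predict(30.0), predict(1000.0))  # → legit fraud
```

Notice that everything the model ‘knows’ came from the labels: change the labelling and the learned rule changes, which is exactly why labelling and scrubbing remain human work.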
Card fraud workers need a richer set of intelligent indicators, generated by machines that don’t suffer from decision fatigue and that can be trained to highlight inconsistencies in the transaction or customer data. An augmented intelligence system will provide the card fraud operator with indicators such as:
- Is this transaction within the usual pattern for the customer?
- Is the location within the usual set of locations used by the customer?
- Is this transaction coming from a known device used by the customer?
- Is the customer’s cell phone at the location where the transaction was initiated?
- Is this actually the customer’s phone? Does the address book have the expected number of contacts? Does the music library have the expected number of songs? Does the customer’s walking gait have the normal pattern? Do they use the keyboard in the same way?
- Are people on the card holder’s social graph linked to any other suspicious activity?
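Indicators like those above could be generated by comparing each transaction against a stored customer profile. A minimal sketch, assuming hypothetical field names:

```python
# Sketch of machine-generated indicators: compare a transaction against
# a customer profile. All field names here are hypothetical.
def indicator_flags(txn: dict, profile: dict) -> dict:
    """Return True for each indicator consistent with the customer's pattern."""
    return {
        "usual_spend_pattern": txn["amount"] <= profile["max_usual_amount"],
        "usual_location": txn["location"] in profile["usual_locations"],
        "known_device": txn["device_id"] in profile["known_devices"],
        "phone_colocated": txn["location"] == profile["phone_location"],
    }

flags = indicator_flags(
    {"amount": 50, "location": "Sydney", "device_id": "d1"},
    {"max_usual_amount": 200, "usual_locations": {"Sydney"},
     "known_devices": {"d1"}, "phone_location": "Melbourne"},
)
print(flags["phone_colocated"])  # → False: the phone is elsewhere
```

Each flag is cheap for a machine to compute on every transaction, and none of them tires.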
Augmented intelligence is the biggest, but not the only, factor. Card fraud systems (like many other financial systems) often provide terrible user experiences, so user experience must also be improved to allow the worker to focus on key indicators, and not a mass of data points that make it hard to identify things that don’t fit the expected pattern. Machines are good at this. Humans are not. By preserving more of the worker’s decision-making energy, the effectiveness of decisions will grow, and this will lead to lower fraud losses and happier customers.
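One simple way to preserve that decision-making energy is to surface only the indicators that failed, rather than every raw data point. A sketch, assuming hypothetical indicator names:

```python
# Reduce cognitive load (hypothetical UX sketch): show the operator only
# the indicators that did not match the customer's usual pattern.
def anomalies(flags: dict) -> list:
    """Return the names of indicators that failed their check."""
    return [name for name, ok in flags.items() if not ok]

print(anomalies({"usual_spend_pattern": True, "usual_location": True,
                 "known_device": False, "phone_colocated": False}))
# → ['known_device', 'phone_colocated']
```

Instead of scanning a hundred fields, the operator starts from the two that don’t fit.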
We’re only at the dawn of the age of artificial intelligence in many fields, including banking. In credit card fraud and similar complex decision-making roles, augmented intelligence can vastly improve the quality of decisions. Human operators (not just card fraud teams) need less raw information and more intelligence to help combat decision fatigue. Machine learning and artificial intelligence algorithms can support decision-making, but not replace it. Placing blind faith in an artificial intelligence system is unlikely to deliver massive improvements, because the technology isn’t yet sufficiently mature. Yet banks and merchants who deploy augmented intelligence systems for their key decision-making staff will reduce fraud losses and improve lending decisions. And that means reduced losses, happier customers and more effective, happier staff.
– This article is reproduced with kind permission. Some minor changes have been made to reflect BankNXT style considerations. Image by Sergey Nivens, Shutterstock.com