Artificial intelligence (AI) is already transforming financial services and has the potential to bring about even greater change, but we should always remember its limitations.
In the world of Terminator, from the moment machines started to think, their plan was to wipe out their creators. From 2001: A Space Odyssey and Avengers: Age of Ultron to Ex Machina, the use of AI doesn’t always produce the expected results.
Despite the unrelenting nihilism of fictional AI, its use in the workplace is predominantly a good thing.
When I speak at events, I often ask the audience how they view AI in their working life. While a small percentage worry that they might be replaced by AI one day, the majority (between 50% and 60%) see AI as a partner in the workplace – something that augments their own work. In other words, the bleak Hollywood future is for entertainment only; in real life we are far more hopeful.
Financial services has much to gain from AI and is already reaping some of its benefits. It is helping overcome many challenges, particularly when it comes to compliance, and we have only just begun to explore its potential.
Major regulators around the world are actively pursuing the technology. The Financial Crimes Enforcement Network (FinCEN), for example, launched its innovation initiative this year, inviting fintech and regtech companies to present and demonstrate their new and emerging AI products and services.
In the UK, the Financial Conduct Authority (FCA) has been proactively driving innovation for some time, holding regular “TechSprint” events that focus on how new technology can be used to combat financial crime.
The Monetary Authority of Singapore (MAS) has also been exploring the potential of AI for some time. Following experiments and analysis, it is now challenging the industry to ensure the responsible use of AI and data analytics through its FEAT Principles – fairness, ethics, accountability and transparency – which are intended “to strengthen internal governance around data management and use.”
Taking on the Myths
Innovation is always welcome, but we should not assume that AI will cure all ills – indeed, arguably, it raises as many challenges as it solves.
AI is at times surrounded by a fog of misunderstanding that clouds our perception of how and where it may be of most use. Four points are worth remembering:
1. AI is not the future
Whether we realise it or not, our lives have already been materially changed by AI. It is making decisions on our behalf in healthcare, law enforcement, education, retail, and dating.
It is not even a particularly new concept; the maths that underpins AI algorithms has been discussed for more than 50 years. But maths has never been enough on its own – for AI to become mainstream, we needed processing power that was unimaginable decades ago, as well as the data to fuel it.
It’s only now, for the first time, that we have everything we need to make AI a reality – the maths, academic focus, hardware, data, and a willingness to invest in innovation. Quantum computing, if and when it becomes reality, might well bring about a further leap in computational power.
2. AI isn’t always what it claims to be
Just because someone says an innovation is powered by AI, it doesn’t mean that it is.
Hannah Fry, author of the excellent book Hello World, points out that the unregulated free-for-all that is the world of algorithms allows people to make “bold, unsubstantiated and sometimes irresponsible claims about their inventions” – like the privately developed “budget tool” purchased by Idaho’s Department of Health and Welfare to make decisions on welfare payments, which turned out to be making awards entirely at random.
3. AI can be fooled
For every algorithm there is someone trying to break or fool it. AI systems are particularly susceptible to their own kind of optical illusion, known as “adversarial examples.”
Teams of researchers have already produced studies showing how AI can be manipulated into identifying a 3-D printed turtle as a rifle, or fail to recognise people in images if they are wearing a simple printed pattern.
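The mechanics behind these attacks can be sketched in miniature. The toy below is not the turtle study itself – it is a hypothetical linear classifier with made-up weights – but it shows the core trick: nudging each input feature by a tiny amount in exactly the direction the model is most sensitive to, which is enough to flip its decision.

```python
# Toy sketch of an adversarial example: tiny, targeted changes to an input
# flip a linear classifier's decision. Weights and inputs are invented.

def classify(weights, features, bias=0.0):
    """Label the input by the sign of its weighted sum."""
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return "positive" if score > 0 else "negative"

weights = [0.9, -0.4, 0.2]   # a "trained" model's weights (hypothetical)
x = [0.1, 0.5, 0.3]          # an input the model labels "negative"

print(classify(weights, x))  # -> negative

# Attack: push each feature a small step in the direction of its weight's
# sign - the direction that increases the score fastest (FGSM-style).
epsilon = 0.3
x_adv = [f + epsilon * (1 if w > 0 else -1) for f, w in zip(x, weights)]

print(classify(weights, x_adv))  # -> positive: the label has flipped
```

The perturbation is small and uniform, yet the verdict reverses – the same principle, scaled up to millions of pixels, is what lets a printed pattern hide a person from an image classifier.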
4. AI is fallible
Classification is central to machine learning. An image classifier, for example, takes an image as input and learns to label its content. But one academic study found that a classifier which could reliably distinguish images of wolves from huskies was actually just looking for snow in the background.
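The wolf-and-husky shortcut can be sketched with invented data. The “model” below is deliberately simple: because snow happens to correlate perfectly with the wolf label in its training set, it scores 100% there while having learned nothing about the animals at all.

```python
# Toy illustration of the wolf-vs-husky failure: a "classifier" that keys on
# the background (snow) looks perfect on training data, then fails on a
# husky photographed in snow. Data is invented, not from the actual study.

train = [
    {"label": "wolf",  "snow": True},
    {"label": "wolf",  "snow": True},
    {"label": "husky", "snow": False},
    {"label": "husky", "snow": False},
]

def snow_classifier(image):
    # The shortcut the model actually learned: snow means wolf.
    return "wolf" if image["snow"] else "husky"

train_accuracy = sum(
    snow_classifier(img) == img["label"] for img in train
) / len(train)
print(train_accuracy)  # 1.0 - flawless on the training set

# A husky standing in snow exposes the shortcut:
print(snow_classifier({"snow": True}))  # -> "wolf" (wrong)
```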
This last point clearly illustrates the biggest limitation of AI – it is only as good as the data on which it feeds. An algorithm might produce amazing results on the data you have given it to date, but that doesn’t mean that it will not make terrible decisions in the future. What is absolutely essential is that you understand why the algorithm is making the decisions it makes.
This is one of the biggest concerns for financial regulators – accountability for decisions made by AI is an absolute priority.
That is why most in the sector believe that we are heading for a world where AI is widely used for automated false-positive handling – in Anti-Money Laundering (AML), Know Your Customer (KYC) and sanctions screening, for example – but with a clear overlay of “explainability.” Deep-learning “black box” automation is unlikely to be an acceptable route for regulators in the near future.
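What an “explainability overlay” might look like can be sketched as follows. This is purely illustrative – the rules, thresholds and field names are invented, not a real screening product – but the principle is the one regulators care about: every automated verdict carries the human-readable reasons behind it, so a reviewer can audit why an alert was dismissed.

```python
# Illustrative sketch of explainable false-positive handling for sanctions
# screening: each verdict is returned together with its reasons. All rules,
# thresholds and field names here are hypothetical.

def score_alert(alert):
    reasons = []
    score = 0
    if alert["name_similarity"] < 0.7:
        score -= 2
        reasons.append(f"weak name match ({alert['name_similarity']:.2f})")
    if alert["dob_match"]:
        score += 3
        reasons.append("date of birth matches")
    if alert["country_match"]:
        score += 1
        reasons.append("country of residence matches")
    verdict = "escalate" if score >= 2 else "likely false positive"
    return verdict, reasons

verdict, reasons = score_alert(
    {"name_similarity": 0.62, "dob_match": False, "country_match": True}
)
print(verdict)   # likely false positive
print(reasons)   # ['weak name match (0.62)', 'country of residence matches']
```

A deep neural network might score alerts more accurately, but it could not hand a regulator this list of reasons – which is exactly the trade-off the sector is weighing.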
The most important point for anyone considering AI technology is that it is not magic. It will not fix all your problems and it won’t necessarily save you money in the short term. Many organisations find themselves investing huge amounts of time and effort in cleaning the data they need for AI.
Impressive as they are, algorithms have brought new complications such as privacy, bias, the impact of error, accountability and transparency. In other words, AI does not remove the need for rigorous checks and balances.
Three Essential Questions
The key to successful innovation is to use AI wisely – and that means asking the same questions of AI as you would of any new technology.
1) Think of your inventory. What data do you have available? Is it structured or unstructured, text or voice? How clean is it?
2) Define what you want to do. What is the problem you are trying to fix? Are you exploring, or searching for specific answers? How important are speed, timeliness and certainty?
3) Identify which AI techniques can help (there are many, from Robotic Process Automation and Natural Language Processing to Pattern Recognition and Scenario Comparison). Prepare to experiment with a few before you hit on the right one for you. And expect the unexpected – like humans, no AI is perfect.
If there were one golden rule for implementing AI, it would be to start with the data. In my utopian future, data cleanliness would be the first thought for any AI project. That would remove some of the barriers we know our customers currently face and could really unleash the potential of these new technologies and techniques.
The Future is Collaboration
I believe strongly that the best future for AI lies in collaboration, with humans and algorithms working in partnership, embracing each other’s flaws and exploiting each other’s strengths to deliver the best outcome.
That is our approach – helping our clients identify exactly where AI techniques will bring the most benefit, such as alleviating the pain of inefficient systems and making sure the technology is as reliable as it can be.
The future of AI is bright – but only if we keep our feet on the ground.