An Essay for ThoughtLeaders4, written by Natalie Tenorio-Bernal
Artificial intelligence (“AI”) can feel like one giant magic trick, but at heart it is a tool for augmenting human intelligence. AI will very soon reach every industry, some more quickly than others, just as mobile phones did. We cannot imagine a life without our phones in our hands, and AI will become the same sort of utility, even if much of it today amounts to a glorified autocorrect.
This essay will delve into how AI has evolved and continues to evolve, and how it may shape the future of law, in particular for fraud, insolvency, asset recovery and enforcement (“FIRE”) practitioners.
New ways of working
Over the last 18 months, AI has surged in popularity. Businesses have realised that it is no longer a nice-to-have but a necessity if they wish to remain competitive. A recent report published by MIT Technology Review Insights found that 81% of survey respondents expect AI to boost efficiency in their industry by at least 25% in the next two years, with one-third expecting a gain of at least 50%. Furthermore, every organisation surveyed said it would increase spending on modernising data infrastructure and adopting AI over the next year, and nearly half of respondents (46%) said the budget increase would exceed 25%.
AI is an umbrella term for technologies that rely on being fed data to make decisions, but I prefer the term “cognitive computing” – “artificial intelligence” just sparks an image in my mind of robots walking and talking amongst us.
Cognitive computing mimics human intelligence to solve problems: the technology is trained to carry out certain tasks rather than programmed to perform a specific one. One industry where cognitive computing has exploded is finance, in particular financial advice.
Robo-advisors offer portfolios tailored to an investor’s risk appetite with little to no human interaction, which keeps the cost of the service low while the portfolio remains expertly managed. With the rise of this technology, it is no coincidence that assets under management rose from £4.5 billion in 2017 to over £24 billion in 2022.
AI and its many forms
There are numerous types of AI which I don’t intend to detail here – I would probably lose your attention along the way – but at a high level I will take you through the evolution of AI.
At stage one, AI mimics human intelligence using mathematical rules and large amounts of data. Stage one AI is probably the most familiar form, the kind we interact with daily without even realising it. It is the type of AI that suggests connections on LinkedIn, enables Siri to set a reminder to call our mother-in-law and unlocks our mobile phones with facial detection and recognition.
At stage two, general AI is combined with machine learning. It still uses mathematics, algorithms and large amounts of data, but it learns how to do something rather than being programmed to do it. For example, a car learns how to park itself, or bank transactions are flagged to compliance teams as fraudulent (or not).
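As a toy illustration of this stage, consider a classifier that learns to flag transactions from labelled examples rather than from hand-written rules. Everything here, the data, the features and the labels, is invented for illustration; real fraud models use far richer signals:

```python
# Toy stage-two AI: instead of hand-coding fraud rules, the classifier
# learns them from labelled example transactions. All data is invented.
from math import dist

# Each example: (amount in £, transactions in the last hour), plus a label.
labelled = [
    ((25.0, 1), "legitimate"),
    ((40.0, 2), "legitimate"),
    ((9500.0, 12), "fraudulent"),
    ((8700.0, 9), "fraudulent"),
]

def classify(transaction, k=3):
    """Label a new transaction by majority vote of its k nearest labelled neighbours."""
    neighbours = sorted(labelled, key=lambda ex: dist(ex[0], transaction))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

print(classify((9100.0, 11)))  # prints "fraudulent"
```

The point is the shape of the approach: the decision logic is learned from the examples, not spelled out in advance.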
The third and final stage is known as generative AI, or GenAI for short. GenAI is, in essence, a computer creating and dreaming up its own ideas after learning from patterns and previous examples: composing a new song, writing poetry and even drafting legal documents without any explicit human instructions.
It is also important to draw attention to natural language processing (“NLP”), another form of AI. NLP systems analyse large amounts of natural language data, enabling them to understand, interpret and manipulate human language. We want to be able to steer this technology towards what we need and want from it. Along the way, AI will replace some jobs, supplement others and create new ones, including for FIRE practitioners.
The future of law and FIRE practitioners
Recent headlines have warned, and even scared, us about the dangers of available AI technologies such as ChatGPT and how they are going to revolutionise the workforce. The question on everyone’s mind is: should we be worried, and are we right to have these concerns? In my view, the answer is no.
The World Economic Forum predicts that by 2030, about 30% of all jobs will be at risk of AI automation and, while this seems to be causing some panic, the situation may not be as dire as it seems. So, as exciting as it may sound to have an army of robots take over the reins from FIRE practitioners, I just don’t see it happening. Instead, as with the technological advancements over the past 30 years, AI will alter how we operate.
Once upon a time…
Long gone will be the days of laborious legal research and reading page after page of court judgments. The allure of an ‘easy life’ is ultimately a driver for cognitive computing, and very soon we should be able to rely on AI to summarise what the law says, leaving practitioners more time for the actual lawyering: investigating, analysing and advising clients.
Document review will also become a distant memory because, frankly, AI will be able to do it faster and more cost-effectively than humans ever could. This isn’t meant to undermine what humans have done and can do, but the truth staring us in the face is that computers process information far faster than individuals do: in the time it takes a human to work through one problem, AI can work through many.
An example of where AI-led document review will be transformative is the insolvency space. When a company goes into insolvency and a practitioner is appointed to investigate what has happened, sometimes across multiple jurisdictions, an AI-backed document review system, fed by humans with key words, phrases and subject matter, can complete an initial review to identify potential lines of inquiry, key individuals to interview and so on, which the practitioner can then follow up.
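As a rough sketch of that human-plus-machine workflow, assuming the documents have already been extracted to plain text, a first-pass keyword triage might look like this. The file names, contents and keywords are invented, and a real review platform goes far beyond keyword matching, but the human-supplied inputs play the same role:

```python
# Minimal first-pass triage: humans supply the key words and phrases,
# the machine ranks documents worth a closer look. All inputs invented.

KEYWORDS = {"transfer", "offshore", "guarantee", "director loan"}

documents = {
    "board_minutes_2021.txt": "The directors approved an offshore transfer...",
    "supplier_invoice_88.txt": "Invoice for catering services rendered...",
}

def triage(docs, keywords):
    """Return documents that mention any review keyword, ranked by hit count."""
    hits = {
        name: sorted(k for k in keywords if k in text.lower())
        for name, text in docs.items()
    }
    return sorted(
        ((name, ks) for name, ks in hits.items() if ks),
        key=lambda item: len(item[1]),
        reverse=True,
    )

for name, ks in triage(documents, KEYWORDS):
    print(f"{name}: follow up on {', '.join(ks)}")
```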
This will mean that those whose roles focus on document review or legal research may need to adapt. I don’t think we will see the day when lawyers won’t want, or need, to verify the output AI produces – are there documents which haven’t been marked privileged, for example? This verification process is vital in our area of practice because the consequences of such mistakes can be devastating.
Let’s also not shy away from the fact that AI is a copycat. There has been a wave of litigation in the US against AI providers for trademark, copyright, libel and privacy breaches, and the use of copyright material to train AI is an ongoing debate in the courts. For example, Thomson Reuters claims that Ross Intelligence used its legal research platform, Westlaw, to train its AI. In that case, the judge decided that the matter had to be decided by a jury.
Computer says ‘No’
The rise of AI will create new jobs within the FIRE sector. The professionals we will need to hire are changing and we should be looking outside of traditional vocations for these individuals.
Legal engineers will be in high demand as they will be key in developing and managing our AI tools. But does this mean that with the increase in engineers, lawyers will get pushed out of the profession? I don’t think the profession will let this happen. The roles differ widely and there will be enough space for us all.
These engineers should also be able to choose the right AI for the task at hand and construct the right queries to get the most out of what these tools can do for us. They are vital to ensuring that the AI tools meant to free up our time do, in fact, work. Nobody wants to put up with ‘computer says no’.
70/30 chance of success
In 2016, researchers at University College London, the University of Sheffield and the University of Pennsylvania reported on an AI model that was able to predict the outcome of historic European Court of Human Rights decisions with 79% accuracy. Dr Nikolaos Aletras, who led the study, explained:
“We don’t see AI replacing judges or lawyers, but we think they’d find it useful for rapidly identifying patterns in cases that lead to certain outcomes. It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights.”
Seven years on from that report, a model that predicts the outcome of a case could be invaluable to practitioners, clients and the litigation funding industry alike. AI will assist with assessing the merits of claims and, in turn, inform decisions on whether to pursue or invest in particular cases, reducing the litigation risk to which practitioners and clients are exposed.
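To make the idea concrete, here is a deliberately simplified sketch of outcome prediction by similarity to past cases. The case summaries and outcomes are invented, and the UCL model used far richer features and proper machine learning; this only shows the shape of the approach:

```python
# Toy outcome prediction: compare a new case summary against past cases
# and predict the outcome of the most similar one. All cases invented.

past_cases = [
    ("detention without judicial review for six months", "violation"),
    ("complaint about length of civil proceedings dismissed", "no violation"),
]

def words(text):
    return set(text.lower().split())

def predict(summary):
    """Predict the outcome of the past case with the highest word overlap (Jaccard similarity)."""
    def similarity(case_text):
        a, b = words(summary), words(case_text)
        return len(a & b) / len(a | b)
    best = max(past_cases, key=lambda case: similarity(case[0]))
    return best[1]

print(predict("applicant held in detention without review"))  # prints "violation"
```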
Efficient and timely detection of fraud is an area in which AI will transform the world of fraud practitioners. Being able to identify and trace funds from a victim’s crypto wallet, through exchanges and the multiple wallets they are likely to pass through, at the click of a button would be hugely valuable in terms of recovering funds.
Now imagine a world where you have gathered all the bank statements relevant to your case and your legal engineers have fed this data into an AI tool. You then ask the AI to follow the funds and list every transaction, including account details, through which your client’s money reached the fraudster. Forensic accountants, asset recovery practitioners, lawyers and law enforcement could not only carry out their jobs more efficiently but do so at the click of a button.
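A minimal sketch of that ‘follow the funds’ exercise, assuming the transfers have already been extracted from the statements (all account names and amounts are invented):

```python
# "Follow the funds": walk outward from the victim's account through a list
# of transfers, collecting every onward transaction. All data invented.
from collections import deque

# Each transfer: (from_account, to_account, amount in £)
transfers = [
    ("victim", "mule_1", 50_000),
    ("mule_1", "mule_2", 30_000),
    ("mule_1", "exchange_A", 20_000),
    ("mule_2", "exchange_B", 30_000),
]

def trace(start):
    """Breadth-first walk of the transfer graph from the starting account."""
    seen, trail = {start}, []
    queue = deque([start])
    while queue:
        account = queue.popleft()
        for src, dst, amount in transfers:
            if src == account and dst not in seen:
                seen.add(dst)
                trail.append((src, dst, amount))
                queue.append(dst)
    return trail

for src, dst, amount in trace("victim"):
    print(f"£{amount:,} moved from {src} to {dst}")
```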
How far off are we from being replaced?
Benchmarking AI legal advice against a lawyer’s advice is where we find ourselves now. Linklaters created the LinksAI English Law Benchmark, which tests the ability of LLMs to provide English law legal advice by asking 50 questions across 10 different practice areas.
You’ll be glad to hear that AI hasn’t passed the benchmark…yet. The answers given by the LLMs were convincing but not always correct, and they lacked the nuance and context you would expect from legal advice. Linklaters intend to rerun the benchmarking tests and, when they do, the LLMs may just pass with flying colours.
But this doesn’t change my view that the future of law and FIRE practitioners is safe. Some may say my view is naïve, but in my experience clients want to speak to a real person giving them advice on real-life issues, where at times their liberty or livelihood is at stake.
As FIRE practitioners, we often need to navigate complex and dynamic situations that require a combination of legal knowledge, financial expertise, and interpersonal skills. Building relationships, understanding the nuances of individual cases, and interpreting the broader economic and legal landscape involve a level of complexity that AI may struggle to fully comprehend. In fact, I asked ChatGPT this question: “What jobs will AI be unable to replace?” Within seconds, I was told jobs which require complex human skills, creativity, emotional intelligence, and nuanced decision-making, would be less likely to be fully replaced by AI.
To end, I want to leave you with the following question…Will the generations to come be more trusting of this technology and in turn, will this create a new normal where nuance and lateral thinking are not seen as an advantage?