Possible legal risks arising from the use of AI systems by hedge funds


By Mike Cumming Bruce, Senior Associate, and Andrew Flynn, Associate, both at law firm Cooke, Young & Keidan – Imagine that you are the boss of a hedge fund that has outsourced all of its core analytical functions to a highly sophisticated AI-driven system and, in doing so, has generated market-beating returns for a number of years.


Now consider that there is a sustained global downturn in markets (which may not require major leaps of the imagination at the moment) and that one of your investors decides to sue you, on the basis that the formerly amazing, alpha-generating AI system caused preventable trading losses by reacting inappropriately to unusual trading conditions. Is this just a tedious but thoroughly traditional cost of doing business? Or does the fact that you replaced your human quants with computer systems create new types of legal risk?

While the legal answers to such questions inevitably turn on their own facts, there are reasonable grounds for thinking that, in some circumstances, the answer to the second question could be “yes”. By way of illustration, consider the following questions in the context of the example above:

1 Why did the trading loss happen? This is likely to be a key question in the lawsuit. It may also be a difficult question to answer. Reverse-engineering the outputs of complex neural networks and describing them in terms that explain how a particular trade came to be made may not be possible, given the nature of the technology. This may cause particular issues where trading strategies are entirely ‘machine-devised’ and, for example, cannot be justified by reference to economic theory. While contracts may offer some protection to asset managers, you do not need a great legal mind to appreciate that “we don’t really know why those trades were made” is not an immediately attractive starting point when being asked to justify them before a sceptical human judge.
 
2 What are the risks of determining why the trading loss happened? It might be tempting to regard the problem of establishing why the trading loss happened as the claimant’s cross to bear. However, the normal process for determining factual matters in the English courts is disclosure, which is likely to involve the claimant’s lawyers demanding as many sensitive documents relating to the issues in the litigation as possible. If a key issue in the litigation is the operation of your systems, and if how your systems operate is highly sensitive proprietary information, then this has the potential to cause major headaches (and is a pressure point that more cynical claimants may rely on to push for settlement).
 
3 How will the court assess liability? AI does not currently benefit from an established regime that the courts can use to assess liability, of the kind that exists in other complex and technical fields such as pharmaceuticals (although the FCA and the Bank of England have established a public-private forum to examine the use of AI in financial services in greater detail). In the absence of such a regime, do you assess liability by comparing the performance of the system to that of a hypothetical reasonable human asset manager (i.e. the individual replaced by the AI system in the first place)? Or do you compare its performance to that of other comparable systems in the market and, if so, how, given that the obvious reference points are competitors’ operations, which are likely to be highly bespoke, proprietary and confidential?
 
4 How will the court assess liability for alleged breach of client duties in this context? For example, will it be sufficient for an asset manager to rely on statistics demonstrating a particular system’s ability to beat a particular benchmark over a specific period of time? Or will it be necessary to evidence some more sophisticated evaluation process, demonstrating deep understanding of the system and a specific, rational explanation for why that system was in the interests of the fund’s clients? On a more practical level, what protection do your existing insurance policies give you against the risks posed by this kind of potential liability, and by the operation of AI systems more generally?
 
5 Who is responsible for the alleged trading loss? If the systems were built from the ground up and operated internally, then the simple (if uncomfortable) answer may be that the buck stops with you. However, if they were procured externally, in whole or in part, then more complex questions may be engaged, particularly where the system output is derivative of both code and data (and, more specifically, the interaction of the two), and where the creation of the software and the selection of the data give rise to more than one potential defendant. In such circumstances, it is reasonably likely that everyone will blame each other for everything, which may again create the practical disclosure-related issues identified above.

This is of course not an exhaustive list of the questions that litigation over liability for the use of AI in a hedge fund context may raise, nor are these reasons for panic (depending on the circumstances, there are, for example, likely to be arguments that could be raised in the hedge fund’s defence on each of the above issues). What the questions may reflect, however, is the fact that the law, which tends to make rules for social and economic developments only after they have happened, with deliberately measured consideration, has yet to provide answers to a number of the issues that can be foreseen to arise in this context.

This is likely to be remedied over the long term, but in the interim it leaves a number of legally unsettled (and perhaps unsettling) issues that it may be prudent to discuss with your legal advisors, with a view to sensible risk management.

