Artificial intelligence and international arbitration: uses and challenges
2023 International Arbitration Outlook Uría Menéndez, n.º 11
At times, it seems like everyone is talking about Artificial Intelligence ('AI'). While some rejoice in its potential to positively transform society and stimulate economic growth, others claim that it 'might destroy our civilisation'. Within the arbitration community, there are those who say that AI is 'perfect for dispute resolution' and others who raise concerns about its shortcomings in terms of accuracy, privacy, confidentiality and security.
Amid all the noise, the purpose of this article is to take a step back to understand how the technology works and to shed some light on its main potential capabilities and limitations from an international arbitration perspective.
Artificial intelligence: what are generative language models?
Since its inception in the 1940s, AI has developed significantly and it now has a wide range of applications. However, it is only in the last few years that AI's potential to drastically change the way we work and live has truly begun to be realised through innovations in the subfield known as 'generative AI'.
Generative AI is a type of technology that uses existing data to create new and original content, including text, images, video and audio. While research in the area of generative AI has been ongoing for several years, the recent refinement of generative AI models and their public release has catalysed their adoption and scale. In this article we focus on generative language models, as they will potentially have the greatest impact on international arbitration. Among the most popular generative language models are ChatGPT, developed by OpenAI, and BERT, developed by Google.
Generative language models are AI models designed for natural language processing tasks such as text generation and language understanding. They often utilise deep learning techniques, particularly recurrent neural networks or transformer-based architectures, to capture the dependencies and relationships between words in a sentence or a sequence of words. While we will not explore the technical specifications of these models extensively, it is important to understand (albeit superficially) how they work.
First, generative language models are fed with huge amounts of pre-existing text data, such as books, articles and websites. The models then analyse the data and are able to learn patterns and relationships between words and phrases within the dataset. Once this learning process is completed, they are able to generate new text by predicting the most likely next word or sets of words based on the preceding ones and patterns learnt from the data.
Generative language models do not 'understand' text or language as we human beings do. Instead, they generate text 'by making probabilistic guesses about which bits of text belong together in a sequence, based on a statistical model trained on billions of examples of text pulled from all over the internet'.
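The prediction mechanism described above can be illustrated with a deliberately simplified sketch. Real models learn statistical relationships over billions of examples using neural networks; the toy bigram model below (a hypothetical illustration, not any vendor's actual implementation) merely counts which word follows which in a tiny corpus and then 'generates' by picking the most frequent follower — the same probabilistic-guessing principle at a fraction of the scale.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words follow it in the training text."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the statistically most likely next word, or None if unseen."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A miniature 'training dataset' (invented for illustration)
corpus = (
    "the tribunal issued the award "
    "the tribunal issued a procedural order "
    "the tribunal dismissed the claim"
)
model = train_bigram_model(corpus)
print(predict_next(model, "tribunal"))  # 'issued' (seen twice) beats 'dismissed' (seen once)
```

Note that the model outputs 'issued' not because it knows anything about tribunals, but because that continuation is the most frequent in its data — which is also why, at scale, the most probable answer is not necessarily the factually correct one.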
As simple as this process may seem at first, generative language models represent a substantial advancement in the state of the art when it comes to their ability to understand text, synthesise new text, compose ideas and reason. Compared to earlier models, generative language models such as ChatGPT show enhanced contextual understanding, which allows them to better comprehend and respond to complex and nuanced inputs, making them more effective in generating accurate and relevant text. Among their most impressive achievements are passing a simulated bar exam with a score in the top 10% of test takers, writing poems and programming code that actually works.
Generative language models and international arbitration: potential uses and limitations
Generative language models demonstrate remarkable capabilities in producing coherent and contextually relevant text. However, impressive achievements seldom come without challenges. Generative language models also exhibit significant limitations that may hinder their immediate applicability in the field of international arbitration. The key ones are (i) factual inaccuracies or 'hallucinations'; (ii) sensitivity to biased training data; (iii) limited attention span; (iv) confidentiality concerns; and (v) outdated information.
First, generative language models produce text by generating the sequences of words that are most likely to appear in a given context, but the most likely response is not always the most factually correct. This can result in a model providing 'plausible-sounding but incorrect or nonsensical answers'. One explanation is that the model's training data on the topic in question may be insufficient. An unawareness of this limitation has already caused some unfortunate incidents, such as the well-known case of a lawyer in New York who may face sanctions for citing fabricated cases in a court filing he created using ChatGPT.
Second, generative language models are trained on a vast amount of data that may contain inherent biases. Because they function as probabilistic models, they may generate biased outputs and replicate or perpetuate the biases within their dataset. Arbitrators should factor this in when utilising AI in their decision-making process, as it could introduce or amplify existing biases in their decisions.
Third, generative language models are still unable to process large documents and respond to questions based on information in multiple locations in such large documents. Consequently, they are not suited to generating very long texts requiring persistent context; summarising large, complex texts; or consistently remembering constraints in the conversation. This could significantly limit their use in international arbitration, where cases typically involve vast amounts of documentary evidence and lengthy documents.
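A common workaround for this limited 'attention span' is to split a long document into chunks that each fit within the model's context window and process them separately, at the cost of losing cross-chunk context. The sketch below illustrates the idea under simplifying assumptions: it uses whitespace-separated words as a rough proxy for tokens (real systems use model-specific subword tokenisers) and an invented budget of 500 tokens per request.

```python
def split_into_chunks(text, max_tokens=500):
    """Greedily pack whitespace-delimited 'tokens' (a crude proxy for
    real model tokens) into chunks that each fit the assumed budget."""
    words = text.split()
    chunks, current = [], []
    for word in words:
        if len(current) >= max_tokens:
            chunks.append(" ".join(current))
            current = []
        current.append(word)
    if current:
        chunks.append(" ".join(current))
    return chunks

document = "word " * 1200  # stand-in for a lengthy exhibit
chunks = split_into_chunks(document, max_tokens=500)
print(len(chunks))  # 3 chunks: 500 + 500 + 200 words
```

The trade-off is exactly the limitation described above: a question whose answer depends on information spread across several chunks cannot be answered reliably from any single chunk.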
Fourth, confidentiality is a central concern: submitting case materials to a generative language model typically means transmitting them to a third-party provider, which sits uneasily with the confidentiality obligations that govern many arbitration proceedings.

Lastly, generative language models do not always contain up-to-date information. In the case of ChatGPT, its training data does not extend beyond 2021. Arbitration practitioners therefore need to consult other sources for more recent decisions and other information.
While overcoming some of the limitations explained above requires technological advancements that are still far from being realised, others may be addressable in the short term. In the legal field, for instance, confidentiality concerns are being tackled through the development of generative models that employ solutions such as data encryption to protect client information. Notable examples are Harvey AI and Robin AI, which are already being utilised by law firms like Allen & Overy and accounting firms such as PwC for tasks like legal research, contract analysis, due diligence and litigation. These models are specifically trained on legal data, including case law and reference materials, which reduces the risk of 'hallucination'.
In the context of international arbitration, apart from overcoming the data privacy concerns, the models will have to be fed with an appropriate dataset, which will inevitably be comprised of thousands of transcripts from actual arbitration proceedings, sets of rules (arbitration rules, arbitration laws, bilateral investment treaties), arbitral awards and other arbitral decisions (such as procedural orders), law review materials, etc. An obstacle to the creation of this dataset is the confidential nature of many arbitration proceedings, which limits the amount of available information. Furthermore, the dataset will have to be updated continuously to accurately reflect the current situation and how both the law and case law have evolved, which is another significant challenge.
Once the confidentiality concern is sufficiently addressed and the risk of 'hallucination' reduced, and as long as we bear in mind the remaining limitations, generative language models have the potential to significantly improve arbitration proceedings in terms of time and cost efficiency. It has been suggested that generative language models could be employed to assist with (i) summarising and synthesising evidence; (ii) translating evidence and other documents; (iii) drafting legal documents (such as parties' submissions, procedural orders or even non-substantive sections of an award); (iv) predicting the outcomes of an award; and (v) even selecting arbitrators.
Generative language models offer powerful and revolutionary capabilities that have the potential to greatly enhance the productivity and efficiency of legal professionals, particularly in the field of international arbitration. The models' ability to comprehend and analyse vast amounts of information is particularly valuable in this domain, which is characterised by extensive documentary evidence and lengthy legal documents.
However, it is important to recognise that generative language models still have notable limitations that hinder their immediate application in international arbitration proceedings. The primary concerns include their lack of factual accuracy and potential confidentiality issues. To fully leverage the benefits of these models, it is imperative to address and overcome these limitations.
While advances are being made to overcome these challenges, we must remain vigilant and consider the remaining weaknesses of generative language models. Human judgement and oversight are essential to ensure this technology is used responsibly and effectively. Continuous monitoring of future developments is vital to fully harness the potential of generative language models while also safeguarding their responsible use. By staying informed and adapting to new advances, arbitration practitioners can capitalise on the benefits of these tools while maintaining a responsible approach.
 International Chamber of Commerce, 'ICC policy statement on Artificial Intelligence' (21 November 2018) <https://www.icc-austria.org/downloads/ICC-policy-statement-on-Artificial-Intelligence.pdf> accessed 13 June 2023.
 Y. N. Harari, The Economist, 'Yuval Noah Harari argues that AI has hacked the operating system of human civilisation' (28 April 2023) <https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation> accessed 13 June 2023.
 CiArb News, 'AI Technology and International Arbitration - Are Robots Coming for Your Job?' (3 February 2023), <https://www.ciarb.org/news/ai-technology-and-international-arbitration-are-robots-coming-for-your-job> accessed 13 June 2023.
 H. Falkiewicz, Arbitras The Hague Blog, 'Artificial Intelligence in Arbitration' (9 October 2020) <https://www.arbitras.org/blog/2020/10/9/artificial-intelligence-in-arbitration> accessed 13 June 2023. See also V. Basham, The Global Legal Post, 'One-in-five large law firms issue warnings over use of generative AI or ChatGPT, survey finds' (21 April 2023) <https://www.globallegalpost.com/news/one-in-five-large-law-firms-issue-warnings-over-use-of-generative-ai-or-chatgpt-survey-finds-420104522> accessed 13 June 2023.
 S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach (Pearson: 2016), pp 28-29.
 P. P. Ray, 'ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope' (2023) 3 Internet of Things and Cyber-Physical Systems, 121, p 121; McKinsey Explainers, 'What is generative AI' (January 2023) <https://www.mckinsey.com/~/media/mckinsey/featured%20insights/mckinsey%20explainers/what%20is%20generative%20ai/what%20is%20generative%20ai.pdf> accessed 13 June 2023.
 A. Kucharavy, et al., 'Fundamentals of Generative Large Language Models and Perspectives in Cyber-Defense' (2023), p 1 <https://arxiv.org/pdf/2303.12132.pdf> accessed 13 June 2023.
 P. P. Ray, 'ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope' (2023) 3 Internet of Things and Cyber-Physical Systems, 121, p 121.
 A. Kucharavy, et al., 'Fundamentals of Generative Large Language Models and Perspectives in Cyber-Defense' (2023), pp 2-3 <https://arxiv.org/pdf/2303.12132.pdf> accessed 13 June 2023.
 M. Abdullah, et al., 'ChatGPT: fundamentals, applications and social impacts' (2022) Ninth International Conference on Social Networks Analysis, Management and Security (SNAMS), Milan, Italy, pp 1-8.
 For a more detailed explanation, see A. Kucharavy, et al., 'Fundamentals of Generative Large Language Models and Perspectives in Cyber-Defense' (2023) pp 2-5 <https://arxiv.org/pdf/2303.12132.pdf> accessed 13 June 2023.
 See A. Kucharavy, et al., 'Fundamentals of Generative Large Language Models and Perspectives in Cyber-Defense' (2023) pp 2-5 <https://arxiv.org/pdf/2303.12132.pdf> accessed 13 June 2023.
 K. Roose, 'The Brilliance and Weirdness of ChatGPT' (5 December 2022) in The New York Times <https://bpb-us-w2.wpmucdn.com/hawksites.newpaltz.edu/dist/7/800/files/2023/02/The_Brilliance_And_Weirdness_O.pdf> accessed 13 June 2023.
 A. Prakash, 'Emergent Properties of Large Language Models (LLMs) including ChatGPT' (23 February 2023) in ThoughtSpot <https://www.thoughtspot.com/data-trends/ai/large-language-models-vs-chatgpt> accessed 13 June 2023.
 P. P. Ray, 'ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope' (2023) 3 Internet of Things and Cyber-Physical Systems, 121, p 122.
 OpenAI, 'GPT-4 Technical Report' (27 March 2023), p 1 <https://cdn.openai.com/papers/gpt-4.pdf> accessed 13 June 2023.
 J. Cushman, 'ChatGPT: Poems and Secrets' (20 December 2022) in Library Innovation Lab <https://lil.law.harvard.edu/blog/2022/12/20/chatgpt-poems-and-secrets/> accessed 13 June 2023.
 OpenAI <https://openai.com/blog/chatgpt> accessed 13 June 2023. See also A. Kucharavy, et al., 'Fundamentals of Generative Large Language Models and Perspectives in Cyber-Defense' (2023) pp 20-21 <https://arxiv.org/pdf/2303.12132.pdf> accessed 13 June 2023.
 M. Bohannon, 'Lawyer Used ChatGPT In Court—And Cited Fake Cases. A Judge Is Considering Sanctions' (8 June 2023) in Forbes <https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/> accessed 13 June 2023; The New York Times, 'Here's What Happens When Your Lawyer Uses ChatGPT' (27 May 2023) <https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html> accessed 13 June 2023.
 E. Ferrara, 'Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models' (20 April 2023), p 3.
 A. Kucharavy, et al., 'Fundamentals of Generative Large Language Models and Perspectives in Cyber-Defense' (2023) pp 1, 22 <https://arxiv.org/pdf/2303.12132.pdf> accessed 13 June 2023.
 While the use of AI in international arbitration may also raise implications related to intellectual property ('IP') rights, it is worth noting that these concerns are more pertinent to the broader application of AI rather than being specific to international arbitration. Therefore, in the context of this discussion, we will not delve further into the IP implications associated with AI. For more comprehensive insights on this matter, see, for instance, G. Appel, et al., 'Generative AI Has an Intellectual Property Problem' (7 April 2023) in Harvard Business Review <https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem> accessed 6 July 2023.
 C. Criddle, 'Law firms embrace the efficiencies of artificial intelligence' (4 May 2023) in Financial Times <https://www.ft.com/content/9b1b1c5d-f382-484f-961a-b45ae0526675> accessed 13 June 2023.
 P. B. Marrow, 'Artificial Intelligence and Arbitration: the computer as an arbitrator, are we there yet?' (2020) 74 Dispute Resolution Journal 4, 35, p 36.
 L. F. Souza-McMurtrie, 'Arbitration Tech Toolbox: Will ChatGPT Change International Arbitration as We Know It?' (26 February 2023) in Kluwer Arbitration Blog <https://arbitrationblog.kluwerarbitration.com/2023/02/26/arbitration-tech-toolbox-will-chatgpt-change-international-arbitration-as-we-know-it/> accessed 13 June 2023; P. P. Ray, 'ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope' (2023) 3 Internet of Things and Cyber-Physical Systems, 121, p 136.