The integration of
artificial intelligence (AI) in the legal industry, particularly in
international arbitration, is rapidly reshaping traditional practices. In
arbitration, AI now plays a pivotal role in areas such as arbitrator selection,
legal research, document review, and even predictive analysis of decisions. However,
as AI usage grows, so do the risks. Unregulated AI could jeopardize a client’s
right to fair legal representation and undermine due process, potentially
leading to miscarriages of justice and the invalidation of arbitral awards.
Furthermore, AI-generated errors, such as ‘hallucinations’ produced by large language models,
have raised concerns about the accuracy of AI-assisted legal work. In
response, arbitral institutions have developed guidance to mitigate these
risks, with organizations such as the Silicon Valley Arbitration &
Mediation Center (SVAMC) and the Chartered Institute of Arbitrators (CIArb)
issuing guidelines for responsible AI use in arbitration proceedings. These
guidelines aim to safeguard confidentiality, ensure due process, and provide
clarity on AI’s application in arbitration. As AI continues to evolve, such
guidelines will be crucial in balancing innovation with the integrity of the
arbitral process, ensuring that AI’s transformative potential is harnessed
without compromising fairness and justice.