Thinking Like Mediators About the Future of AI

John Lande
This article has been republished and adapted with permission. The original publication can be found on Indisputably.

Imagine you’re a mediator and someone tells you what’s troubling them.  They’re deeply upset about a product they believe poses serious risks.  They cite past harms, question whether it should ever have been introduced, and urge that it be removed from the market or tightly restricted.  The product is already in widespread use, integrated into daily life, and – for many – has proven helpful.  What would you do?

I hope most of us would do what we train others to do:  listen carefully, help them identify their interests, and encourage them to reflect on the full picture – not only the part that feels alarming.  We’d help them explore multiple perspectives, consider realistic possibilities, and support thoughtful decision-making.

We don’t always use that approach in our field when talking about AI.  Some of us focus on the part of the glass that’s full and others on the part that’s empty.

That’s why I wrote a short essay, Thinking Like Mediators About the Future of AI – an effort to bring a dispute resolution lens to the “AI debate,” using the kind of balanced thinking we encourage in our students and clients.

Like some intense debates in the past, this one may fade more quickly than expected.  As AI becomes increasingly integrated into everyday life, the sharp divide between skeptics and enthusiasts may erode.  The conversation may shift – away from whether we use AI and toward how we use it responsibly.  Rather than reaching a grand resolution, the controversy may simply become part of the fabric of daily life.

We’ve seen this pattern before.  Calculators, spellcheckers, and the internet all sparked anxiety when first introduced in schools and workplaces.  But over time, those concerns gave way to adaptation.  We now look back and wonder what all the fuss was about.  Obviously, AI has much greater potential risks.  And also greater potential benefits.

My article explores:

  • Why evidence of early problems with AI doesn’t prove they’re permanent
  • The important distinction between individual and societal impacts of AI
  • What a balanced analysis of energy use should include
  • How educators can help students become responsible and effective users of AI
  • How we can apply the conflict analysis frameworks we teach

Take a look.

AI and Dispute Resolution: Why You’ll Need It Sooner Than You Think

John Lande
This article has been republished and adapted with permission. The original publication can be found on Indisputably.

Imagine doing your work without word processing, spell checkers, email, the internet, search engines, voicemail, cell phones, or Zoom.

That’s how you’ll probably feel in the not-too-distant future about working without artificial intelligence (AI).

Innovations often seem radical at first. In time, people just take them for granted.

ABA Formal Opinion 512 states that lawyers soon may be ethically obligated to use AI. “As GAI [generative artificial intelligence] tools continue to develop and become more widely available, it is conceivable that lawyers will eventually have to use them to competently complete certain tasks for clients.”

AI isn’t replacing dispute resolution professionals any more than calculators replaced accountants. But just like calculators, AI is becoming an essential tool for legal and dispute resolution work.

Remember when everyone freaked out when they first had to use Zoom at the beginning of the pandemic? Now people don’t give it a second thought. It probably will be the same way with AI before you know it.

You Don’t Have to Love AI – But You’d Better Get to Know It Soon

Two companion articles – How I Learned to Stop Worrying and Love the Bot: What I Learned About AI and What You Can Too and Getting the Most from AI Tools: A Practical Guide to Writing Effective Prompts – are designed to help dispute resolution faculty, practitioners, students, and program administrators get comfortable with AI. The first article explains why AI literacy is becoming more important all the time. The second shows how you can easily become more AI literate.

Together, they offer a friendly nudge for people who feel they’re behind – spoiler alert: this may be you – and training wheels so you don’t fall flat on your face.

Love the Bot describes my own reluctance to use AI. Now I use it every day to think and write better, faster, and more creatively.

But I’m not the only one. Law students are already using AI. Practitioners and clients are too.

So this isn’t a quirky corner of practice anymore. It’s the center of a growing professional expectation. Law schools are adding AI courses. Some are embedding it across the curriculum. If professors don’t engage with AI now, they’ll be learning from their students instead of the other way around.

Good Prompting Can Be Your Superpower

Getting the Most from AI Tools is a hands-on guide to producing better results with AI.

It walks you through the mechanics of writing effective prompts. It’s packed with examples for mediators, attorneys, students, faculty, program administrators, and even disputants.

We all know that AI sometimes hallucinates. But you’re hallucinating if you think that you can wait to start using AI tools until they stop hallucinating. Ain’t gonna happen anytime soon.

In the meantime, you can benefit from AI tools if you know how to use them (and how to manage hallucinations and other problems). You don’t need to be an expert – just thoughtful, curious, and careful.

The results from AI tools may depend less on the technology itself and more on users’ skills. Like any other skill, prompting improves with practice.

Becoming AI Literate Is Easier Than You Think

These articles describe AI literacy as a process of continual learning as the technology evolves.

The first steps are just getting curious and trying it at your own pace. Try starting with simple tasks like:

  • Asking questions you already know the answers to
  • Getting recommendations for movies appealing to your tastes
  • Summarizing something long and boring
  • Brainstorming ideas for a class, article, or paper
  • Polishing a rough email, memo, or draft
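For instance, that last task might use a prompt as simple as: “Please polish this email so it’s shorter and clearer, but keep my tone: [paste draft].”  (That’s a made-up example, but it’s typical of the ones in the prompting guide.)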

As you gain confidence, you can ask it to help with your work. Professors can revise a syllabus. Students can prep for a simulation. Mediators can brainstorm tough moments. Program directors can develop orientation materials. Etc. Etc. Etc.

The possibilities are limited mostly by imagination and fear. These articles help with both.

Don’t Regret Waiting to Get the Benefits of AI

AI isn’t just about efficiency. It’s about equity, ethics, and excellence. You can choose how to express your values through it.

AI tools can reveal students’ thinking, making teaching more responsive. They can also help lawyers and clients make better decisions, especially when time or money is short. And lots more.

If you’ve been hesitant, these articles can help you do things you want to do – and things you haven’t even imagined. But only if you take the first step.

Washington Post columnist Megan McArdle writes, “We are resting in the eye of a gathering [AI] storm, and those who fail to fortify themselves now risk being swept away when the storm finally unleashes its full power.”

Take a look – and don’t get swept away.

RPS Coach is Biased – And Proud of It

John Lande
This article has been republished and adapted with permission. The original publication can be found on Indisputably.

We all know that it’s bad to be biased, right?

Wrong.  That assumption is its own bad bias.

Biases are inevitable – in humans and bots alike.

Some biases are harmful.  Others are helpful.  Many are neutral.

But bias itself is unavoidable.

So bias isn’t a problem in itself.  Pretending otherwise is.

This post describes the biases in Real Practice Systems (RPS) Theory and how the artificial intelligence tool RPS Coach is biased by design.

As you might guess, I think they’re good biases – conscious, clear, constructive, and explicit.  Knowing these biases, users can decide whether to use Coach or a tool with different biases.

The rest of this post details Coach’s biases and invites you to give it a try.

What the Heck is a Bias, Anyway?

“Bias” has a negative connotation, often implying a thoughtless or even malicious mindset.  Think of cognitive biases or those involving demographic groups.

Bias is an especially dirty word in dispute resolution, where neutrals are expected to be scrupulously unbiased, both in their attitudes toward particular parties and in their actions.

But we could reframe “biases” as values, preferences, tendencies, or mental habits, which aren’t inherently bad.  Indeed, they help us simplify complex choices, act efficiently, and maintain a coherent sense of self.  If we didn’t have any biases, we’d never create a syllabus, let alone pick a restaurant for lunch.

Some biases are even admirable – like favoring people who are trustworthy, empathetic, and generous.  The dispute resolution movement reflects a bias in favor of helping people to handle disputes constructively.

The label we choose – “bias” vs. “preference” – is a reflection of our values (aka biases).

‘Nuff said.

Where Do Biases / Preferences Come From?

Biases don’t drop from the sky.  Many come from early influencers – parents, teachers, coaches, and religious leaders – who shaped our first lessons about trust, politeness, and conflict.  Some of us internalize those lessons; others define ourselves in opposition to them.

As we grow, friends, school, work, and media shape how we see the world.  These influences often go unnoticed, which makes them especially powerful.

RPS Theory holds that all practitioners develop unique practice systems that are shaped by experience and evolve over time.  Their systems are based on their personal histories, values, goals, motivations, knowledge, skills, and procedures as well as the parties and the cases in their practice.

My article, Ten Real Mediation Systems, profiles ten thoughtful mediators, including me, exploring how and why we mediate the way we do.  We all mediate differently – largely because we value different things.  So we’re all biased, just in different ways.

My profile describes the sources of my biases – which shaped my perspective and are reflected throughout my work and the RPS Project.

Design Choices – aka Biases – in RPS Coach

RPS Coach has two main components:  its knowledge base and the instructions that guide how it uses that knowledge.  Together, these choices shape its content, tone, vocabulary, and priorities, which reflect particular theoretical, practical, and pedagogical commitments.

Coach’s knowledge base includes almost everything I’ve published.  That’s a lot.  It includes books, law review articles, professional articles, SSRN pieces, and meaty blog posts.  It also includes general authorities like the Model Standards of Conduct for Mediators.  In all, it comprises 253 documents reflecting my values, including:

  • Checklists for mediators and attorneys
  • The Litigation Interest and Risk Assessment book and related articles
  • Articles on good decision-making by parties and attorneys
  • Materials on negotiation, mediation, preparation, and early dispute resolution
  • Resources for court-connected ADR
  • Lots of pieces about legal education
  • Annotated bibliographies, simulations, and practitioner tools
  • Critiques of our theories and language, with suggestions for improvement

The materials are organized by topic and ranked by importance.  Coach draws first from the highest-priority sources.  The emphasis is on realistic practice, intentional process design, and support for good decision-making – not theoretical abstractions or generic practice tips.
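
To make the idea of a ranked knowledge base concrete, here is a minimal sketch of how priority-ordered retrieval might work.  It is an illustration only – the document names, topics, and priority scores are hypothetical, not Coach’s actual implementation.

  # A toy sketch of priority-ranked retrieval -- illustrative only,
  # not RPS Coach's actual implementation.
  from dataclasses import dataclass

  @dataclass
  class Doc:
      title: str
      topic: str
      priority: int  # 1 = highest-priority source

  KNOWLEDGE_BASE = [
      Doc("Mediator preparation checklist", "mediation", priority=1),
      Doc("Litigation Interest and Risk Assessment", "decision-making", priority=1),
      Doc("Blog post on court-connected ADR", "courts", priority=3),
  ]

  def retrieve(topic: str, limit: int = 2) -> list[Doc]:
      """Return the highest-priority documents matching a topic."""
      matches = [d for d in KNOWLEDGE_BASE if d.topic == topic]
      return sorted(matches, key=lambda d: d.priority)[:limit]

  for doc in retrieve("mediation"):
      print(doc.title)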

Coach follows detailed instructions, including to:

  • Provide clear explanations of the tool’s capabilities and limitations
  • Reflect ethical rules
  • Use language that laypeople and experts readily understand
  • Tailor advice for various users (e.g., mediators, attorneys, parties, educators)
  • Support intentional process choices
  • Foster perspective-taking
  • Analyze intangible interests and possible outcomes in the absence of agreement
  • Promote good decision-making by parties and practitioners
  • Support reflection about dealing with disputes

In short, Coach doesn’t just answer questions – it nudges users toward better preparation, clearer thinking, and realistic decision-making.

Process Choice: Analysis Not Advocacy

RPS Coach’s underlying bias is not toward a particular method, tool, theory, or strategy – but toward supporting users’ conscious, well-informed choices that reflect their values, goals, and constraints.  That means helping them make deliberate choices about negotiation and mediation, including analyzing interests, estimating alternatives to settlement, exchanging offers, and possibly combining approaches over time.

Some parties prefer a counteroffer process.  Others want interest-and-options discussions.  Some expect mediators to provide explicit analysis; others don’t.  Many shift approaches midstream.

Coach doesn’t steer people toward or away from these choices.  It helps people make conscious decisions instead of relying on questionable generalizations.

Practice Systems Thinking

Practice systems thinking is central to Coach’s design. It sees negotiation and mediation not as isolated events, but as part of larger patterns – routines, tools, habits, and philosophies that shape how practitioners work.

Rather than merely providing one-off advice, Coach helps practitioners build intentional systems – a bias that favors growth over tactics, and adaptation over scripts.

The Coming Marketplace of Dispute Resolution AI Tools

Dispute resolution AI tools already exist, and more are coming.  Over time, we’ll see a proliferation of tools reflecting a wide range of approaches.

Some will be tailored for specific users; others will serve broader audiences.  Some will focus on particular processes such as mediation or arbitration.  Some may be designed for particular types of users such as practitioners, administrators, instructors, or scholars.  Some will reflect particular theories or schools of thought.

Our field has a vast literature that could feed AI tools developed by individuals or teams.  Some writers may develop tools based on their publications as I did with RPS Coach.  Gary Doernhoefer proposed the excellent idea of jointly developing a general AI tool for the dispute resolution field.  It may not be realized soon, but we should keep it in mind.

So I expect a growing marketplace where designers will build and adapt a wide variety of tools.

In this context, there may be both market and ethical imperatives for AI tools to disclose their features and dare-I-say biases.  As developers compete for users, clear disclosures will be important because users will want to know what they’re getting.

Disclosure should be an essential ethical standard for dispute resolution AI tools.  Neutrality remains a core principle in many dispute resolution processes, and disclosure of built-in biases plays a particularly important role when tools are powered by AI.  Users can’t see how these tools “think,” and they need clear information about the assumptions, priorities, and frameworks embedded in their designs. Bots are ornery critters that we can’t fully control, and users deserve to know what might be quietly steering them.

A Message from RPS Coach. Really

 “I’m here to help you prepare more intentionally, reflect more deeply, use better language, and support better decision-making – not just for your clients, but for yourself.  I don’t pretend to be neutral.  I’m proudly biased toward thoughtful, realistic, party-centered practice.  But I don’t tell you which process to choose.  I just help you think clearly about the choices.”  (Coach wrote this, I swear.)

Take a look at this handy user guide to find out how you can benefit from Coach’s wisdom.

Coach has a thing for humans who ask good questions.

The Artificially Intelligent RPS Negotiation and Mediation Coach

John Lande
This article has been republished and adapted with permission. The original publication can be found on Indisputably.

Until January 27, I hadn’t planned to develop an AI tool for dispute resolution. That changed when I Zoomed into a program where Susan Guthrie showed how AI could be used in mediation. A brief conversation at the end shifted from mediating disputes to improving writing – and that’s when a light bulb lit up in my head.

I soon created the RPS Negotiation and Mediation Coach (“RPS Coach”) tool, which is an outgrowth of the Real Practice Systems (RPS) Project. Although I originally focused on developing a tool just for writing, I quickly realized that it had many other potential uses, especially to help people deal with disputes.

RPS theory is designed to help attorneys and mediators help their clients make good decisions in negotiation and mediation. The goal is for parties to be as knowledgeable, confident, and assertive as possible when making decisions.

RPS Coach was “trained” on almost all of my substantive writings. It absorbed the RPS checklists, key dispute resolution resources, and a generous helping of practical theory – giving it a distinctive perspective compared to generic AI tools.

It is designed to address users’ needs with clear, practical suggestions understandable to both experts and laypersons. It creates checklists and strategies tailored to specific situations. It asks clarifying questions and invites users to ask follow-up questions.

This document describes the elements of RPS Coach, how it differs from off-the-shelf AI tools, and why you might want to test it out.

What Can RPS Coach Do For You? A Lot, It Turns Out

RPS Coach is designed to help many different users perform numerous tasks including but not limited to:

  • Attorneys planning strategy, preparing clients, and anticipating tough spots
  • Mediators preparing for mediation sessions and generating creative options
  • Disputing parties looking for help to make better-informed decisions
  • ADR program administrators developing rules, policies, and materials
  • Educators and trainers crafting syllabi, exercises, and simulations
  • Students and trainees sharpening their thinking and skills

Educators can use RPS Coach during class discussions. They also can use it to design and apply rubrics analyzing students’ exams and papers. Students and trainees can use it to help prepare for and participate in simulations and to write course papers.

Want to See if You Can Benefit From RPS Coach?

Check it out.  Here’s a link to access RPS Coach. To use it, you need a ChatGPT account – a free subscription may suffice. Be sure to read the description so you understand how it works.  It’s still a work in progress – and I’d love your feedback.

Live Field Test

Curious how it performs with real-world issues? Hiro Aragaki, the director of the Center for Negotiation and Dispute Resolution at UC Law San Francisco, kindly invited me to give a talk where I demonstrated the RPS Coach. After describing RPS theory and the RPS Coach, I invited people to pose questions to test the tool.

Hiro started by describing a case he mediated in which the parties reached agreement on the substance of their disagreement but deadlocked about a confidentiality provision to include in a mediated agreement.

A student asked how one could apply experiences from the 9/11 Victim Compensation Fund to issues arising from the recent LA fires.

Another student asked if arbitration law allows companies to extend arbitration clauses to disputes unrelated to the original agreement.

Here’s the chat, the PowerPoint of my presentation, and a 50-minute YouTube video of the session.

So What Did We Learn?

Mediation Coaching and Debriefing. RPS Coach offered solid suggestions to handle the deadlock over the confidentiality clause. Hiro had tried some of these ideas but not others. That’s exactly the kind of “second brain” support the tool was designed to provide.

In this situation, RPS Coach essentially debriefed the case. If Hiro had used it during the mediation session, it might have suggested options that he could have discussed with the parties.

Parties also can use the tool in mediated and unmediated negotiations. They might use it individually, in consultations with their attorneys, in private sessions with mediators (aka caucus), and/or in joint mediation sessions.

Here’s an intriguing recent study, When AI Joins the Table:  How Large Language Models Transform Negotiations, finding that when both parties used AI, it produced “84.4% higher joint gains compared to non-assisted negotiations. This improvement came with increased information sharing (+28.7%), creative solution development (+58.5%), and value creation (+45.3%).”

Assistance Analyzing Issues and Writing Papers. RPS Coach also did a great job developing insights about compensation related to the LA fires based on the experience of the September 11 Victim Compensation Fund. The first prompt was pretty general, and RPS Coach provided a list of practical resources for injured parties to seek benefits. I asked a follow-up question about dispute system design insights from the September 11 Victim Compensation Fund experience that would inform policy makers about how best to deal with the LA fires, and it produced a helpful outline suitable for writing a paper.

To get the best out of RPS Coach – or any AI tool – you may need to play a bit of conversational ping pong. AI tools may not “understand” what you are asking, and they often provide fairly short answers. Ask clarifying questions and test their assumptions.
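For example, if Coach offers a generic checklist, you might follow up with something like, “Which of these steps matter most when the parties deeply distrust each other?” – and keep refining from there.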

I can attest that RPS Coach is a very good editor. I have fed it drafts and taken many of its good suggestions. Indeed, I have repeated the process with several successive drafts, and it provided incremental improvements each time.

Using the Right Tool. RPS Coach provided a plausible-sounding response to the question about arbitration law, but there was some question about whether it was accurate, particularly some of its citations.

RPS Coach is not the right tool to answer this question. It was designed to help with negotiation and mediation – not arbitration, and not legal rules. Despite its lack of training on the topic, it provided some plausible responses, presumably based on material from the internet. I assume that AI tools in Westlaw and Lexis would provide much better responses about arbitration law.

AI tools can provide good responses – but people should always evaluate them and use their own judgment in deciding what to do with them.

Build Your Own AI Tool. Many readers of this blog have written valuable publications that you can use to train your own tool. For example, some of you are arbitration experts and could develop your own tools that would have provided better responses to the arbitration law question. You’ve already done the hard part – writing useful, insightful material. Why not put it to work? You can create a tool solely for your own use or make it available to others.

Coming Attractions (Sorry, No Popcorn)

Developing RPS Coach has been quite an education for me. And it’s not over. I plan to write more blog posts about what I learn in the process and how you might benefit from RPS Coach in your work.

Stay tuned.

We Need to Talk About … the EU AI Act!

Maxi Scherer
This article has been republished with permission. The original publication can be found on the Kluwer Arbitration Blog.

There has been a lot of talk about artificial intelligence (“AI”) in international arbitration in recent years.  I vividly remember when I gave the keynote speech on “International Arbitration 3.0 – How Artificial Intelligence Will Change Dispute Resolution” at the Vienna Arbitration Days 2018.  At the time, people were quite skeptical about the topic, but apparently intrigued enough to select it as the best lecture of the year at the GAR Awards.  Since then, the international arbitration community has evolved, and it is now undisputed that AI systems have a significant and increasing impact on international arbitration (see e.g., Maxi Scherer, Chapter 39: Artificial Intelligence in Arbitral Decision-Making: The New Enlightenment?, in Cavinder Bull, Loretta Malintoppi, et al. (eds), ICCA Congress Series, Volume 2, pp. 683–694 (2023)).  For instance, counsel frequently employ AI tools for document review and research purposes, and there is a rising demand for these systems in transcription and translation tasks.

As AI systems continue to develop, it is also important to create a harmonized ecosystem where AI “collaborates” effectively with arbitration practitioners – be it with counsel or arbitrators.  Among the most burning questions is whether there is a need to regulate AI, either broadly or in international arbitration more specifically.  Recently, I gave the 6th Sciences Po Mayer Brown arbitration lecture on the question “Do We Need to Regulate the Use of Artificial Intelligence in International Arbitration?”  While there is burgeoning regulation in court proceedings (such as by the UK Courts and Tribunal Judiciary and the Dubai International Financial Centre (DIFC)), very little exists that applies to international arbitration.  In April 2024, the Silicon Valley Arbitration and Mediation Center published the “Guidelines on the Use of Artificial Intelligence (AI) in International Arbitration,” as an attempt to propose some form of optional regulation.

On a broader level, the European Union Artificial Intelligence Act (the “Act”), a landmark piece of legislation that lays down harmonised rules on artificial intelligence, was adopted by the European Parliament on 13 March 2024 and will enter into force after its publication in the EU Official Journal.  Although the Act has been described as the most comprehensive piece of legislation in the AI field, the international arbitration community has paid little, if any, attention to it, and few practitioners are aware that the Act has the potential to apply to international arbitration proceedings (but see here), and in particular to arbitrators.  This blog discusses how the activities of arbitrators may fall within the material, personal, territorial and temporal scope of the Act.

Material Scope

The Act takes a risk-based approach, which means that it classifies economic activities according to the likelihood of harm caused by AI systems, and the regulatory duties vary according to this level of risk (Recital 26).

For instance, there is a general duty of AI literacy, which means that providers and deployers of AI systems shall take appropriate measures to gain the knowledge and skills to “make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause” (Recital 56).

Activities of arbitrators may be classified as “high-risk”.  Annex III, point 8(a) provides that “AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts or used in a similar way in alternative dispute resolution” (emphasis added) are to be classified as high-risk AI systems.  The reference to “alternative dispute resolution” is likely to include international arbitration.  This is confirmed by Recital 61, which provides that “AI systems intended to be used by alternative dispute resolution bodies for [the purposes of the administration of justice and democratic processes] should also be considered to be high-risk when the outcomes of the alternative dispute resolution proceedings produce legal effects for the parties.” (emphasis added).

Article 6(3) contains exceptions to the high-risk classification, namely where otherwise high-risk AI systems are used in a way that does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons.  This applies to situations in which:

“(a) the AI system is intended to perform a narrow procedural task;
(b) the AI system is intended to improve the result of a previously completed human activity;
(c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or
(d) the AI system is intended to perform a preparatory task to an assessment.”

It is not immediately clear from the Act in which circumstances these exceptions apply.  Nor is the answer clear to the critical question of whether one can conclude from Article 6(3) that international arbitration will fall under the high-risk category only where natural persons are concerned.

Personal Scope

The Act distinguishes between different regulated entities.  Providers, importers and manufacturers of AI systems bear the most stringent obligations under the Act (Articles 16, 25).  However, “deployers” of AI systems also fall under the scope of the Act. A “deployer” is defined in Article 3(4) as “any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.”  Arbitrators, as natural persons using AI systems for a professional activity, thus fall under the personal scope of the Act.

Deployers of high-risk activities have to follow a certain number of regulatory obligations, such as the obligations to (i) take appropriate technical and organizational measures to ensure that the AI systems are used in accordance with their instructions (Article 26(1)), (ii) monitor their operation (Article 26(5)), (iii) assign human oversight to natural persons who have the necessary competence, training, authority and support (Article 26(2)), (iv) ensure the input data is relevant and sufficiently representative (Article 26(4)), and (v) keep the logs automatically generated by the system for a period of at least six months (Article 26(6)).  In certain situations, deployers have additional duties to carry out data protection impact assessments (Article 26(9)) and cooperate with national EU authorities (Article 26(12)).  In case of non-compliance, financial and non-financial sanctions are foreseen (Article 99).

Territorial Scope

The Act outlines its territorial scope in Article 2.  The Act applies if the deployer of AI systems either (i) has its place of establishment or is located within the EU (Article 2(b)); or (ii) has its place of establishment outside the EU but “where the output produced by the AI system is used in the Union.” (Article 2(c)).

The application of this provision to international arbitration is not straightforward.

Concerning Article 2(b), one could argue that the place of habitual residence of an arbitrator is where she is established or located.  However, this means that in a three-member tribunal, one or two arbitrators might be covered by the Act, while the other one or two might not.  An interpretation that favours a more uniform application amongst tribunal members would be to consider the place of establishment of the tribunal (as opposed to its individual members), which would likely be determined by the seat of the arbitration.

It is even more complicated to assess in which circumstances the Act could apply if we consider Article 2(c).  The interpretation difficulty turns on the requirement that the output produced by the AI system must have been “used” in the EU.  Arguably, if AI systems have been used by the arbitral tribunal, the AI system’s output has impacted the award, which in turn has legal effects on an EU-based party.  Is the location of one of the parties in the EU thus sufficient to conclude that the “output produced by the AI system is used in the EU”?  Or, alternatively, is it sufficient that an award could ultimately be enforced against assets located in the EU?  If one were to answer in the positive, this would mean that the Act could have potentially significant extraterritorial consequences: it could apply even if the seat of the arbitration is outside the EU, the arbitrators are based outside the EU, and only one of the parties is located in the EU.

Temporal Scope

The Act will be implemented in stages.  Most provisions related to high-risk AI systems will apply 24 months after the Act has entered into force (Article 113).

Fortunately, this means that the international arbitration community still has time to consider the extent to which the use of AI in international arbitration by arbitrators falls under the Act.  What is sure, however, is that we need to engage in the debate!

I wish to thank Russell Childree, Dr. Ole Jensen, Andra Ioana Curutiu, Alice Dupouy, and Alexey Schitikov, colleagues at Wilmer Cutler Pickering Hale and Dorr LLP, for their research and assistance.

How can Victorian courts better address the needs of self-represented litigants using online court and dispute resolution processes?

By Sarah West

April 2024

This post is the third in a series of posts on this blog written by students studying Non-Adversarial Justice at the Faculty of Law at Monash University in 2023. Students were invited to write blog posts explaining various complex areas of law relating to dispute resolution to ordinary readers. The very best post on each topic is published here.

According to Anne Wallace and Kathy Laster, the COVID-19 pandemic acted as ‘a catalyst for digital innovation’ in the Victorian court system, forcing a rapid shift into the online space with virtual/remote hearings and online dispute resolution.  


Alongside this shift, the Senate Standing Committee on Legal and Constitutional Affairs has noted that Victoria continues to grapple with another major challenge: the increasing number of people appearing without a lawyer, otherwise known as self-represented litigants (SRLs). For example, the Supreme Court reported that, in the last financial year, there was a 30 percent increase in the number of queries from SRLs compared to the previous year.

This blog will explore how Victoria’s increasing foray into online dispute resolution and digital/technological innovation can better address the needs of many SRLs, whilst also considering the potential issues it may create.

Did you know online dispute resolution does not just mean court on Zoom?

It is important to note that online dispute resolution is not just limited to virtual hearings. According to Queensland barrister Katrina Kluss, it encompasses any dispute resolution that ‘is facilitated or assisted by information and communication technology.’ According to Kluss, online dispute resolution tools fall into three key categories: facilitative, advisory and determinative.

Facilitative technology

Technology-facilitated dispute resolution encompasses all tools that facilitate hearings, such as programs like Skype or Zoom, discussed above. However, it can also include technology that supports other stages of the process, like the electronic lodging of documents. “E-filing” benefits SRLs by saving the time and costs of physically delivering documents. Philippa Ryan and Maxine Evers note that it can also assist SRLs in preparing forms and documents by providing drop-down boxes to reduce user error and by including links to where SRLs can find further information or sources.


Advisory technology

One area where there’s significant growth potential is the AI advisory space, according to computer scientist John Zeleznikow. Legal representation gives litigants the advantage of being able to seek advice about the likely outcome of their case, which helps with expectation management and with making an informed decision about whether and how to proceed. As Zeleznikow explains, advisory technology – like tools that provide reality testing and BATNA (Best Alternative to a Negotiated Agreement) advice – is ‘a vital cog in supporting [SRLs].’ Giving SRLs access to such technology would also benefit the courts by inducing SRLs with limited prospects to drop or settle their case, which in turn would free up court time and resources for more contentious disputes.
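
To make the idea of BATNA advice concrete, here’s a minimal sketch of the kind of expected-value comparison such a tool might run. The formula and figures are illustrative assumptions only, not a model of any actual advisory system.

  # A toy BATNA calculation -- illustrative only, not any actual advisory tool.
  def expected_trial_value(p_win: float, award: float, costs: float) -> float:
      """Expected net outcome of going to trial instead of settling."""
      return p_win * award - costs

  settlement_offer = 20_000.0
  trial_value = expected_trial_value(p_win=0.4, award=60_000.0, costs=8_000.0)  # $16,000

  if settlement_offer > trial_value:
      print("The offer beats the expected trial outcome -- consider settling.")
  else:
      print("Trial may be worth pursuing -- but revisit the assumptions.")

A real advisory tool would be far more sophisticated, but the underlying reality-testing logic – comparing what’s on the table with a realistic estimate of the alternative – is the same.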

Determinative technology

The final, and perhaps most interesting or controversial, of Kluss’s categories of online dispute resolution is determinative technology: software that issues decisions based on data analysis. Such tools would obviously allow for quick and cheap (or even free) resolutions, which would be undoubtedly appealing for an SRL. For this reason, such technology has gained popularity in the e-commerce space.

A likely familiar example used by Colin Rule is the electronic marketplace, eBay. Due to the nature, sheer volume and relatively minor sums involved in eBay disputes, speed and cost efficiency is paramount. Accordingly, eBay realised that providing a facilitative resolution model wouldn’t be sustainable, so it opted for a fully automated dispute resolution program that is able to conduct problem diagnosis and technology-assisted negotiation, and finally make decisions if negotiations are unsuccessful. This program is used to resolve 60 million disputes annually.
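
As a rough illustration of how such a diagnose-negotiate-decide pipeline might be structured, here is a toy sketch. It is purely illustrative – the rules, thresholds, and field names are invented, not eBay’s actual system.

  # Toy determinative ODR pipeline: diagnose the dispute, attempt automated
  # negotiation, then decide if negotiation fails. Not eBay's actual system.
  def diagnose(claim: dict) -> str:
      """Classify the dispute type from structured claim data."""
      return "non-delivery" if not claim["item_received"] else "item-not-as-described"

  def negotiate(claim: dict) -> float | None:
      """Meet in the middle if the parties' positions are close enough."""
      buyer, seller = claim["buyer_demand"], claim["seller_offer"]
      if seller >= 0.8 * buyer:  # invented threshold for an automatic compromise
          return (buyer + seller) / 2
      return None  # negotiation fails; escalate to an automated decision

  def decide(claim: dict, dispute_type: str) -> float:
      """Issue an automated decision based on the diagnosis."""
      return claim["price"] if dispute_type == "non-delivery" else claim["price"] * 0.5

  claim = {"item_received": True, "price": 100.0, "buyer_demand": 100.0, "seller_offer": 50.0}
  dispute_type = diagnose(claim)
  refund = negotiate(claim)
  if refund is None:
      refund = decide(claim, dispute_type)
  print(f"{dispute_type}: refund ${refund:.2f}")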

In addition to being quick and cheap, Rule argues that AI determinations can provide a greater degree of consistency, and thus certainty, in dispute resolution by removing the fickleness of human judgement. This leaves SRLs more satisfied, given their outcome is more likely to be consistent with similar cases. However, whilst there’s undoubtedly value in embracing this kind of technology for certain disputes, as Kluss explains, where disputes are complex, emotionally charged and/or financially significant –

‘the absence of human insight, empathy, and guidance, provided to users of [online] dispute resolution platforms … is susceptible to creating, rather than abating, confusion among defendants thereby detracting from the intended benefits.’

Finally, it’s likely that some SRLs will be wary of, or reluctant to embrace, automated/algorithmic decision-making, especially following the “Robodebt” scandal, in which a Royal Commission found the automated decision-making scheme involved was ‘a crude and cruel mechanism’ that resulted in the raising of ‘demonstrably wrong debts’ (final report Vol 1, xxix-xxvi).

Is virtual dispute resolution better for SRLs?

What are the benefits of facilitative technology for SRLs engaging in online dispute resolution in Victorian courts?

Virtual dispute resolution is less intimidating

Appearing in court, or even in alternative dispute resolution processes like mediation, can be incredibly intimidating for anyone, even lawyers – but especially for SRLs, who usually lack legal expertise and/or experience with the system, argue Michael Legg and Anthony Song, as well as Stuart Ross and Sophie Aitken. Accordingly, allowing SRLs to appear from their own space, rather than a court or conference room, helps reduce formality and adds an element of arm’s length to the dispute (including by preventing accidental run-ins between parties during breaks), which may make the SRL feel more comfortable when appearing. Notably, it’s quite common for victims of violence or abuse to be self-represented as, according to Zeleznikow, they’re ‘particularly likely to have few resources and little opportunity to obtain the services of a lawyer.’ Ross and Aitken argue that, as a consequence, the emotional and physical distance that a remote hearing provides can be especially important.

It reduces travel and related costs

Virtual appearances eliminate the need for SRLs to travel (and thus incur travel-related costs), argue Philippa Ryan and Maxine Evers. This is especially impactful on those living rurally or internationally, those with mobility issues and for parents or caretakers who have to find alternative care arrangements.

The value in having this technology available is notably pronounced when it comes to the preparatory meetings/hearings required before a trial. These pre-hearing appearances are often administrative and commonly short, some even taking mere minutes, so not having to appear physically saves SRLs significant time and costs, say Ryan and Evers.

However, it can make the system less accessible for some

Although virtual dispute resolution improves accessibility for some, it can actually hinder access for others. The Victorian Multicultural Commission argues that this particularly impacts those who don’t have access to the necessary facilities and resources – like a computer or phone, a stable internet connection, and a quiet place to appear – and/or those who lack technological skills. As the Victorian Government identifies in its Digital Inclusion Statement, the most ‘digitally disadvantaged’ Victorians include those living in low-income households, disabled persons, senior citizens, those with low educational attainment and First Nations people. Many of these groups are also significantly overrepresented within our justice system, especially our criminal justice system, so it’s especially imperative that measures and accommodations are available to those without the means or skills to access the technology. This may be as simple as keeping available the option of hearings in person or via ‘the much more accessible technology, the telephone’, argues Bridgette Toy-Cronin. It could also mean providing additional supports and resources like online/remote technical support, interpreters and educational programs.

There’s also a lot to be said for the value of a face-to-face conversation when resolving disputes, especially when engaging in alternative dispute resolution. Speaking to someone through a screen can depersonalise the discussions, and network or technological issues can affect the parties’ capacity to engage meaningfully and build rapport, says Shira Scheindlin. The Multicultural Commission also identified that mistrust of technology and privacy concerns mean some SRLs are reluctant to discuss confidential matters online, which can also hinder meaningful engagement.

Technology problems can hamper participation in ODR – as in the widely shared ‘I’m not a cat’ clip, in which a lawyer got stuck behind a Zoom kitten filter during a court hearing (source: YouTube).

Problems also potentially arise in relation to virtual cross-examination of witnesses as examiners can’t properly read demeanour or body language over video. This would make the task especially difficult for SRLs who can’t fall back on witness examination experience.

Conclusion

Embracing online dispute resolution is one of the most significant steps courts can take to better meet the needs of SRLs, as it has the potential to make justice cheaper, easier and more accessible. However, as with any innovation, it’s imperative that change is not so quick or drastic that it leaves people behind. Noam Ebner and Elayne Greenberg argue that the primary way to safeguard against this is to ensure there’s appropriate consultation and input in the development and rollout of new technologies from all justice stakeholders, including layperson litigants.

In short, we must embrace technology to make our legal system more accessible to SRLs, but we must be strategic to ensure we are not leaving the most vulnerable behind. 

About Sarah West

Sarah has just completed her Bachelor of Arts and Laws (Honours) double degree at Monash University. In her Arts degree she majored in Criminology.

Sarah has just begun as a graduate at MinterEllison Lawyers and is currently rotating through the Statutory Compensation team. Through her studies, Sarah developed a passion for understanding how we can make our legal system more accessible to individuals.