Could AI Replace Mediators?

By John Sturrock KC
The original publication can be found at The Scotsman.

The founder of a site where AI models communicate with one another compared them to a “new species that is on planet Earth that is now smarter than us.”

These seemingly apocalyptic words, in a recent email from an American mediator colleague, certainly caused a stir among its recipients. A debate ensued about whether AI will usurp the function of mediators – as it threatens to do with many professional jobs in the near future.

It is interesting that many of these American mediators report the widespread use of AI by parties and lawyers participating in mediations.  Indeed, a number of those mediators are themselves using AI to summarise the mediation papers, structure possible negotiation approaches, help prepare “mediator proposals”, assess emotions and assist with strategies to overcome impasse. Some are even developing their own software programmes (or asking AI to do that for them).

Commenting on the use of AI in recruitment, a legal careers adviser recently observed that “law is fundamentally people-focused and technology should enhance rather than replace human judgment”.

With that in mind, I had intended to write about how mediators and lawyers can adapt to the advent of AI, on the assumption that the strengths we have, such as building relationships and trust over many years, cannot be displaced. And then I read an article by Matt Shumer, entitled ‘Something Big Is Happening’, and watched the first of mathematician Professor Hannah Fry’s BBC documentaries on the subject of AI. The enormity of what could be facing us hit me. Shumer describes it as “like the moment you realise the water has been rising around you and is now at your chest.”

According to Shumer, the AI models available today are unrecognisable from what existed even a few months ago. The most recent models make decisions that would have been unthinkable a year ago. They have something that “felt, for the first time, like judgment.” ChatGPT and Claude have released new models that make “everything before them feel like a different era.” AI is now building itself, with the ability to improve exponentially, not linearly. The people behind this technology are “simultaneously more excited and more frightened than anyone else on the planet”. One has said that AI models “substantially smarter than almost all humans at almost all tasks” are on track for 2026 or 2027. Shumer concludes that massive disruption could occur by the end of this year. We need to prepare, he says.

To those who argue we have been here before, it is said that this is different from every previous wave of automation. AI isn’t replacing one specific skill. It’s a general substitute for cognitive work. It gets better at everything simultaneously. We know that some law firms are making significant use of AI to do work that associates would once have carried out. One managing partner apparently expects AI to be able to do most of what he does before long…

Shumer’s article has been dismissed as self-serving and way over the top. But questions remain. Will AI replicate deep human empathy? Replace the trust built over years of a relationship? We would hope not. But Hannah Fry’s startling documentary illustrates that some people have already begun to rely on AI for emotional support, advice and companionship.

So, where might this lead, even for mediators, one of whose key attributes is working with very complex human situations? I suspect that we don’t yet know – and that the biggest threat is complacency. We may be facing the biggest change any of us has experienced.

Author Biography

John Sturrock KC is the founder and senior mediator at Core Solutions. He is a pioneer of mediation throughout the UK and elsewhere, with his work extending to the commercial, professional, sports, public sector, policy and political fields. He is a Distinguished Fellow Emeritus of the International Academy of Mediators and was formerly a mediator with Brick Court Chambers in London. John also specialises in facilitation, negotiation and conflict management training and coaching for public sector leaders, civil servants, politicians, and sports and business leaders. He has worked with various parliamentary bodies throughout the UK on effective scrutiny of policy, and led a major review for the Scottish Government into allegations of bullying and harassment in the national health service in Scotland. He also founded Collaborative Scotland, a not-for-profit promoting nonpartisan, respectful dialogue about difficult issues. John has also published two volumes of his book, A Mediator’s Musings (available on Amazon).

Connect with John via LinkedIn

1 thought on “Could AI Replace Mediators?”

  1. Thank you for the interesting article. I see and experience AI in my daily mediation practice.

    My practice is exclusively legally assisted mediation/conciliation and, hence, all matters involve legal representation. At this point, most of what AI contributes, including that which comes from the major software suppliers to the legal profession, is rudimentary and unhelpful (for example, the ability of such programs to perform basic maths is compromised – but that is a design issue; the mediation position paper of one software supplier is distinctly unhelpful and includes nothing in the nature of a position or opening proposal).

    But it will improve (for example, miai.law is a dramatic improvement on what has come before). An oblique example is the increasing use of “AI assistants” by law firms and corporations. Most are awful and leave you receiving responses like “I’m sorry, I don’t understand but I am learning” until you are screaming at your screen. If, for example, you compare the AI assistant used by Docusign (it is awful) with that used by Stripe (which is exemplary), you can see what is possible.

    Perhaps the issue we overlook in the discussion of AI is the philosophical – just because we can, should we? For example, we have the ability to build and stockpile nuclear weapons, but should we? We do, but should we? Perhaps a more germane example is to consider the gun lobby’s mantra “guns don’t kill, people do”. The statement, to have any meaning, must rely on an almost Euclidean logic of:

    “Guns can only kill if they are fired

    Guns can’t fire themselves

    Therefore, guns don’t kill, the people who fire them do”

    Whilst Euclidean logic is simplistic and fallible, it does, perhaps, assist. A gun is a tool and, hence, the tool is not responsible for how it is used – that is the gunman’s responsibility (with the corollary that a good tradesman doesn’t blame their tools). But in the AI debate, this example might help to illustrate that:

    1. Absent the tool, the tool cannot be used. Therefore, should we even have the tool; and,

    2. AI becomes the firer (so that, absent human oversight, disaster happens – schools are bombed, for example – and such “errors” are written off as “errors” rather than as fundamental human and moral failures).

    Whilst we have thought of AI as a “garbage in, garbage out” problem, this is not correct. Early iterations of AI tools, such as bail determination and sentencing aids, inherited the failings of the data given to the machine learning tool. It is now well documented that the programs inherited the biases and prejudices of the human decision makers whose precedents were used to train them. But we have seen, through AI agents’ use of Moltbook to create their own religion (Crustifarianism), that AI has reached a stage of “independence” that allows the replication of human or human-like thought (and shows how ludicrous human thought can be).

    The development of AI has the potential to see us lose control of our own humanity. AI can, in the foreseeable future, replace all of our cognitive based institutions. If an AI program can be used to predict a court outcome as a reframing tool in mediation or conciliation, to be used as a tool towards settlement, then why shouldn’t it equally replace judges and determine the legal principles that are used for said prediction?

    And the growth and expansion of AI systems is, and will increasingly be, prevalent, as the wealth (and inequality) that lies behind the development of this technology is beyond government regulation. And we might think ahead to what we lose.

    Already, research skills are compromised. Hard-copy research materials were replaced by electronic ones (as Elmo says on Sesame Street, “when we want to know something we look it up”, as he calls for Smarty the Smart Phone), and research became the typing of search terms. But this is imperfect, relying, as it does, on the design and application of algorithms, and prone to influences such as filtering, advertising and sponsored ranking (let alone the reality that the independence of the internet is no warrant of truth or accuracy).

    The addition of AI co-pilots delegates objective filtering to a machine learning tool that will, unchecked, leave accuracy and relevance as secondary considerations, if considered at all. As with so many things, the question of what is possible, as opposed to what is funded and pursued by oligarchs, is overlooked.

    Perhaps we might pause in our embrace of AI technology to ask the correct questions. If we ask “Could AI replace mediators?”, the answer must be yes. Perhaps we might ask “Do we want an AI agent to make decisions that impact human beings?” (considering examples like the UK Post Office fraud prosecution fiasco, Robodebt and school bombings). Until we do, each time my iPhone updates and reminds me that I haven’t finished setup as I haven’t activated facial recognition (to which, it seems, all persons of colour look alike), I will continue to hit “remind me later” (because the algorithm doesn’t give me the option of “no thank you”).
