Artificial Intelligence Legal Handshake

Being married to a techie who was once a lawyer, and not without a degree of self-interest, I have recently spent a good number of hours musing over the impact of GPT on the future of the legal profession.

GPT stands for Generative Pre-trained Transformer (memorise it; this will soon be a quiz question) and is a type of language model that uses deep learning to generate human-like, conversational text. GPT-1 and GPT-2 were interesting experiments, GPT-3.5, better known as ChatGPT, was impressive, but the recently launched GPT-4 has been hailed across the globe as an absolute game-changer.

As a self-proclaimed tech sceptic – if it’s not flawless, I don’t want it – I wasn’t initially worried. I was impressed by GPT as a concept, but the first few times playing around with it did not make me fear for my job. The numerous GPT clangers circulating on social media did not trouble this view (my personal favourite is the one where it provides, in moments, an impressive list of ways to distinguish between a chicken’s egg and a cow’s egg).

But then in March came a report by Goldman Sachs predicting that lawyers and administrative staff would be among those at greatest risk of redundancy through the march of GPT. This was intriguing, because all previous reports about artificial intelligence had told us that lawyers did not need to fear for their jobs, since the language skills of even the best artificial intelligence were still fairly poor. With this new prediction, however, mere scepticism is no longer a defence. I had to do more diligent testing. In this blog I share my findings and, at the same time, explain why employment law careers still have time to run.

  1. GPT is an amazing source of inspiration

With a few simple prompts, GPT will start drafting for you. In a couple of seconds, it will have prepared a few well-written paragraphs on any topic, ranging from the impact of abolishing the monarchy to a policy on the prevention of bullying. It will come up with suggestions you may not have thought of even after a two-hour brainstorm. This type of inspiration can be most welcome, for example when you are asked to draft a policy from scratch and have no precedents to guide you.

  2. GPT is incredibly patient

You can instruct GPT as you would a human, but you do not have to worry about politeness or the other person’s feelings. You can interrupt GPT while it’s drafting and instruct it to be more or less detailed, to use plain English or, conversely, to write for a more sophisticated audience. GPT will not complain if you change your mind three times, or four, or twenty. It will eagerly do whatever you instruct it to do, will not gripe about unclear instructions and won’t want to go home early on its birthday.

  3. GPT does not care for accuracy

All this comes at a price, however, and one that can be lethal in the legal profession and almost as dangerous for its clients: GPT does not care all that much for accuracy. GPT is like that classmate we all had in school, who listened very closely to his parents’ conversations and echoed them on the playground, while missing the nuances and sometimes even the essence of the grown-ups’ actual conversation. All the verbal tools, in other words, but no means of applying them.

GPT’s output is based on the billions of documents available online on any specific topic. If you instruct it to write a paragraph on a bonus plan, it will process information on bonus plans governed by French, German and Californian law. You can be more precise, of course, and indicate that you are writing for a Belgian audience, but in Belgium we speak Dutch and French, so you should not be surprised to read snippets of text which are clearly inspired by the legislation of the Netherlands or France. And while we may speak the same languages, our legal systems differ quite significantly. Added to which is the overriding risk, in fact inevitability, that a large proportion of the documents scanned by GPT in its eagerness to please you will be wrong: wrong in fact, wrong in law, wrong in context.

So its output may well be wrong too, maybe not in any way which is immediately visible, but certainly in a way which could surface embarrassingly for lawyer or client at a later stage. GPT knows, but it doesn’t think, and particularly in a field of law where so much turns on acting reasonably, that will almost always end in tears. Clients can and do sometimes pay employment lawyers for information or ideas they could equally source from the internet, but increasingly our real value lies not in rehearsing the law but in applying all its nuances to an endless variety of minutely different circumstances.

  4. GPT is a liar

It gets worse, though. In an attempt to please its master, GPT will resort to, well, making stuff up. As part of my trial, I asked GPT to prepare a chapter of the book I’m currently writing. The result was not really useful, which did not surprise me, as the topic is one on which very little has been published so far. But I was struck by one decided authority that GPT quoted, and quoted in detail: not only the employment tribunal, but also the date and the case number. The logic behind the case number matched that of Belgian case law, so I was intrigued, as I had never heard of the decision. The tribunal clerk dug it up for me. It turns out the decision with that case number was not of the date cited by GPT and, more importantly, it was on a completely unrelated matter. In other words, GPT had simply made up a verdict. Can you imagine the shame if I had included this decision in my book or in a client memo? And as a client, the internal mortification of basing advice to your Board, HR team or employee forum on a case that just didn’t exist?

  5. GPT needs lots of data

These generative language models are basically a numbers game: the more information they can draw on, the better their output will be. And while there are certainly times when lawyers are still asked to write lengthy memos on, say, the collective redundancy process in their jurisdiction, this is no longer what clients mostly want or need – it is available on the internet. They need a tailored response to a novel or very specific question, and that question will likely go beyond the information readily available on websites and blogs. Good luck getting a reliable response from GPT on that one.

Similar experiences with GPT have prompted law firms to position themselves on the technology. Some have embraced it, while others have banned it, fearing loss of confidentiality and inaccurate work product. For similar reasons, a number of companies have already issued guidance that lawyers working on their cases cannot make use of GPT. The Italian Data Protection regulator has blocked it altogether.

For all these reasons, I do not fear unemployment quite yet. But this is by no means the time to be complacent. What GPT can already do today (let alone what future models will be capable of) is simply amazing, and it requires our full attention as lawyers – and if written output is your product, yours as a client too. If asked, GPT will not hesitate for more than a second to produce a draft employment agreement. The wording in this draft may not be entirely consistent, nor may the look and feel be fully compliant with local law and practices, but it will nevertheless be impressive. Lawyers will need to be on their toes, perfecting and automating their firm templates and know-how to confirm their added value over artificial intelligence. Feeding the firm’s know-how into these generative language models to ensure higher-quality, more consistent output may also be a next step to consider. Standing still, however, is no longer an option.