Global Legal Tech Hub Day
Global Legal Tech Hub Day does exactly what it says on the tin. It’s a time for experts from lands near and far to get together around a (virtual) table and discuss the challenges and trends of LegalTech. From digital identity and blockchain to AI and algorithmic ethics, the event provides a rare opportunity to hear about what our neighbours have accomplished and what they are looking to achieve in the LegalTech space in the coming years.
Breaking up a day full of talks on more specific topics were three regional table discussions, and luckily I had time to attend all of them: Asia & Oceania, Africa & Middle East, and America & Europe.
Across the globe, one trend stands out. Legal sectors have been changed radically, but not necessarily by lawyers. The increasing number of tech professionals and other experts joining the space has helped shape the industry into one with a lot of potential to foster technological growth and innovation. According to one expert, the gap between industry and the legal profession will continue to grow and, within this space, there is an abundance of opportunities for innovation.
Although the sector is growing, there are of course many lawyers who either haven’t heard of LegalTech or aren’t interested in using it, despite the mounting pressure for them to do so. And this pressure isn’t misplaced. During the Asia & Oceania table discussion, one panelist remarked that, in the near future, knowledge of and experience with LegalTech will be obligatory for anyone who wants to be a lawyer in the legal ecosystem. The question 'do you use a tool?' will be replaced with 'what tools do you use?'
To keep up with the growing use of technology in the field, experts across the globe agree that it is vital that universities adapt their curricula where necessary to match the skills required in the profession. Although teaching specific technologies may be out of the question, students should be taught critical thinking and technology skills. Importantly, they should be given opportunities to discover useful LegalTech resources on their own through their studies.
As one expert put it, if we think of the challenge of integrating LegalTech into our sectors as building a house, data scientists and technology experts are the plumbers. They are integral to the process, but a plumber can’t build an entire house, nor should they be expected to. Currently, AI is being built by plumbers alone. If we want to successfully build and integrate useful technologies in the legal sector, we must approach the problem in much the same way as building that house: lawyers play a vital role in providing context for the technology being built and in ensuring that it actually solves existing problems.
Interestingly, although the participants were speaking thousands of miles apart, across several time zones, they all had one thing in common: they were all extremely hopeful about what was to come in the LegalTech space.
For me, with my own interests and biases, the most interesting session of the day was AI & Algorithmic Ethics, which managed to pique my interest enough for me to forget about my afternoon coffee. Ethics has always been a tricky area and one on which a lot of experts tend to tread lightly, so I appreciated how straightforward the panel were with their points. One of the first points concerned laws and regulations covering the use and implementation of AI, with specific reference to the EU’s Artificial Intelligence Act, which takes a risk-based approach and aims to protect fundamental rights while encouraging the development of AI technologies.
A good example of the importance of such regulation is facial recognition. This technology is being used in healthcare to identify diseases such as dementia through micro-expressions. On the other side of the coin, facial recognition is also used in video analysis to identify perpetrators of acts of violence or terrorism. Inappropriate or inconsistent regulation of such systems could lead to violations of civil rights.
The panelists noted that one of the benefits of the Act is that it is broad enough to cover AI systems that are yet to be built. However, is it too late for AI regulation? Has the damage already been done? The general consensus was both yes and no. The reality is that governments are, in many ways, less compliant than the private sector. Regulations may exist, but the systems for ensuring that AI is consistently audited, and that regulations are actually upheld when something does go wrong, are weak. In practice, fragmented laws and opaque enforcement do little to prevent companies from infringing on civil rights. After all, many people don’t fully trust the technology they use, yet they still rely heavily on it. Not a healthy combination.
On the other hand, the panelists explained that, yes, AI regulation might be showing up a bit late to the party, but better late than never. An interesting analogy made by one of the panelists was that class A drugs used to be sold in pharmacies. How is this relevant? Well, considering that fact today, we are shocked: why wasn’t there more regulation in place to protect consumers? In the same way, it is hoped that in years to come, the idea of poor regulation around AI will be met with the same level of shock and disbelief. That is why it is more important than ever to demand transparency and thorough regulation. Fortunately, the panel believes that although strong regulation has been lacking in the past, we are coming out of the worst period and these issues will be rectified, slowly but surely, due partly to greater social awareness.
As a takeaway, the panelists asked that we think critically and do better. It’s a good thing we have all the tools to do just that.
If you’d like to read more about the event, here’s the link:
GLTHDay: One day to learn. One day to discover. One day to grow. (meetmaps.com)