Interviewee: Dr Mark Burdon is a Senior Lecturer at TCB School of Law. His research interests include information privacy law, technology regulation and cyber security. Dr Burdon lectures Foundations of Law (LAWS1700) and Privacy Law (LAWS5228).
Interviewer: Jane Hall is a Pandora’s Blog Editor for 2018.
What is artificial intelligence and how does it relate to law?
When we talk about artificial intelligence, what we are really talking about is machine learning. Initially machine-learning frameworks were used to understand vast amounts of data – data that when you put it all together is almost unimaginable in the human context. Artificial intelligence seems to be transitioning from that environment of thematically understanding the content of vast amounts of data towards predicting the context of that content. It is not only about understanding the types of information that can be categorised in vast amounts of data, but also about providing insight into how that information could be relevant to the case at hand.
In a legal context, this becomes relevant to, for example, individual cases involving millions upon millions of pages of paper, which would traditionally require a large paralegal team to process. What we are starting to see is the shift from human decision-making around legal consequences to machine-oriented decision-making. We then have to ask the question, ‘what’s the role of the human lawyer in the context of decisions that are going to become machine-made?’
One of your research projects at UQ is called the ‘Sensor Society’. Could you explain what this research is about?
The ‘Sensor Society’ was a project started by Mark Andrejevic and me. Mark is a professor of cultural studies, now in California, but was at UQ at the same time as I was in 2014. We started to become interested in the development of what seemed to be quite rapid technological changes, particularly in the context of drones.
Associate Professor Mark Andrejevic’s research profile is accessible here. His UQ research profile is accessible here.
What we started to see when we looked at these different technological developments were some incredible technological innovations. One example is the ‘magic carpets’ developed by researchers at the University of Manchester. These carpets could not only identify people walking on the carpet from their unique gait patterns, but could then analyse those patterns of data to predict when people were likely to fall. These carpets could have great social benefits in, for example, aged care facilities to identify possible accidents before they happen.
Read more about the ‘Magic Carpet’ project here.
Behind all these developments was something quite simple: the ‘sensor’. The ‘sensor’ collects data on a 24/7 basis. It is always on and it is always collecting data from its environment. The amount of data that we are now generating is significantly greater than at any other time in human history. We are not necessarily talking about the collection of individual pieces of data but about environmental collections (ie data about individuals and how individuals behave in different environments).
What is interesting at this point, and what our research really focussed on, is that the ‘sensor’ is really just the front end of technological advancement. What is more interesting is what is happening at the back end. A lot of the collected data is generated by a sensor – our mobile phones, for example, are packed with about a dozen sensors. What we are most interested in is how that data can be used to understand patterns of behaviour; patterns of behaviour that we might not ourselves understand because they are so engrained in what we do. For example, looking at a mobile phone, you can start to learn a lot about the person who uses it. Researchers are now studying whether you can understand a person’s mood from how they use their mobile phone. If you think about the mobile phone screen, there are a number of sensors in the screen. The strength of the swipe a person uses to unlock their phone could potentially be an indicator of the mood they are in.
This ‘sensorised data’ is becoming increasingly valuable. The value of the sensor does not lie in its ability to collect data; the real value lies in the back end of how that data gets processed, used and stored. It is at those points that we know far less. We do not really understand the consequences of data analysis and data analytics that predict our behaviour. In one sense, the threat to society is about privacy; in another sense it is something broader. It is about the power of omnipotent organisations to understand our behaviours and how those behaviours can be targeted for certain purposes.
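To make that idea concrete, here is a deliberately simplified sketch (not drawn from the Sensor Society project itself) of how a continuous stream of sensor readings, such as swipe pressure, might be compared against a learned baseline to flag a change in behaviour. The readings, the feature and the threshold are all invented for illustration.

```python
# Hypothetical sketch: flagging deviations in touchscreen swipe pressure.
# All values, the feature and the threshold are invented for illustration.
from statistics import mean, stdev

# Baseline swipe-pressure readings collected while the phone learns a user's habits
baseline = [0.42, 0.45, 0.40, 0.44, 0.43, 0.41, 0.46, 0.44]

# New readings streaming in from the same sensor
new_readings = [0.43, 0.58, 0.61, 0.59]

mu, sigma = mean(baseline), stdev(baseline)

for reading in new_readings:
    # Flag readings more than two standard deviations from the baseline mean -
    # a crude stand-in for the far richer models researchers actually use.
    z = (reading - mu) / sigma
    if abs(z) > 2:
        print(f"Unusual swipe pressure: {reading:.2f} (z = {z:.1f})")
```

The point of the sketch is simply that the inference happens at the back end: the sensor only supplies numbers, and it is the processing of those numbers against past behaviour that produces something as personal as a guess about mood.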
What are some of the new problems or issues that artificial intelligence is raising in the legal context?
Thinking about this requires us to find out in a bit more depth what is actually happening, because we really do not have a handle on many of the problems and issues. If you read some of the material that is presented in the context of legal practice, there seem to be two predictions about where the legal industry is moving. At one end of the scale, it seems that the world is changing radically, to the extent that law is going to be governed by machine-oriented legal practice in a very short space of time. At the other end of the scale, it seems that machine-oriented legal practice is not going to make that much difference because machines cannot replicate human legal reasoning processes.
What are some of the ways that the lawyers of the future are going to encounter artificial intelligence and in which areas of law is artificial intelligence likely to be the most disruptive?
To a certain extent, I am not sure that artificial intelligence is going to change distinct areas of law. I think we need to step back and look at legal practice, legal education and the career development of young lawyers.
In the context of AI, the role of paralegals seems to be a prime focus for law firms at the moment. Anybody who has done paralegal work will know something about coding of documents. You have this massive stack of documents that you have to code in a particular way so that the legal team can work with the information in the stack in a more meaningful way. Coding typically takes a team of paralegals a significant amount of time, so there is a significant cost associated with it. One of the areas in which we know machines are better than humans is the performance of routine tasks, so these processes are the target of increasing automation. You now have the situation where teams of dedicated paralegals (who are often law students or recent law graduates) may start to be increasingly replaced by machine-learning frameworks.
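As a rough, hypothetical sketch of the kind of automation being described (not any particular firm’s tool), document ‘coding’ can be framed as supervised text classification: a model is trained on documents paralegals have already coded and then proposes codes for the rest of the stack. The documents, labels and model choice below are illustrative assumptions.

```python
# Hypothetical sketch of machine-assisted document coding using scikit-learn.
# The documents, labels ("privileged" / "not privileged") and model choice are
# illustrative assumptions, not a description of a real firm's workflow.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Documents a paralegal team has already coded by hand
train_docs = [
    "Email from external counsel advising on settlement strategy",
    "Invoice for office stationery supplies",
    "Memo from in-house counsel on litigation risk",
    "Meeting agenda for the quarterly sales review",
]
train_labels = ["privileged", "not privileged", "privileged", "not privileged"]

# A simple bag-of-words model: weight terms by TF-IDF, then classify
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_docs, train_labels)

# The model can then propose codes for the uncoded remainder of the stack
new_docs = ["Letter of advice from counsel on discovery obligations"]
print(model.predict(new_docs))
```

In practice a prediction like this would typically triage documents for human review rather than replace the review itself, which is exactly why the question of what remains for the junior lawyer matters.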
The consequences for law students and future law students are potentially profound. We have this fairly settled idea where junior lawyers do the more routine work as a sort of apprenticeship that then allows them to grow from junior to senior lawyers. What happens when we cut that early to mid tier out by automating this routine work? What we may be setting ourselves on a path towards is removing the early- to mid-career development of future lawyers by automating the routine work that early-career lawyers undertake.
It is clear that artificial intelligence is having a disruptive impact not just on future professions but also on legal learning itself. For us as a law school, we may have to look at changing our learning offerings in order to adapt to some of these bigger technological shifts.
Is the legal system adapted and prepared to deal with artificial intelligence or are significant reforms required?
I think the answer to that is ‘we don’t know’. I think it is too early to say whether significant reforms are required. What we can say is that things look like they are changing, but without a better understanding of how they are changing and what the rapid pace of change actually means, I think it is difficult to say how the legal system should adapt. But again, as a law school that is thinking about what our future graduates require, this is something we are looking at critically.
Are there any events or forums coming up where students can explore this topic further?
We are going to be running a whole series of events throughout the year, beginning with a Q and A session on 16 March about artificial intelligence and machine learning. The event is called ‘AI 101 Session’ and it is going to be hosted by Professor Janet Wiles from UQ’s ITEE. Janet is an expert in artificial intelligence and social robotics. She is going to come along to explain some of the core concepts of artificial intelligence, using non-technical language.
For more information on the AI 101 Session click here.
For the remainder of the year, we are going to try to find out what is actually happening in legal practice. Clearly some significant changes are taking place, particularly in the larger firms, and this seems to be having knock-on effects. What we want to do is quantify, and determine in a bit more depth, what those changes are and, more importantly, how those changes are going to impact our students as future law graduates.
Dr Mark Burdon’s research profile is accessible here.