11.12.19
Now, obviously consumers have inherent and varying perspectives on the subjects of justice and health. But there are certainly a few useful points of comparison:
The problem of ‘uniqueness’
One interesting issue reported in the research was that the reason consumers were reluctant to use purely AI-driven services was not due to worries over receiving inferior treatment, but due to a belief that AI couldn’t sufficiently take into account their unique characteristics.
That is to say, although AI may be able to solve the average medical issue, it couldn’t solve my medical issue. The research offers the example of survey participants being asked to choose between two doctors based on their performance ratings. Unsurprisingly, participants opted for the doctor with the better ratings. However, when the choice was between a doctor and an AI provider with the higher rating, participants still opted for the doctor. Interestingly, the more someone considered themselves ‘unique’, the more likely they were to choose the human doctor.
I don’t think it takes too great a leap of imagination to see how this could apply to consumers of legal services, and it raises some interesting questions for the future of legal tech solutions. As a regulator of legal services, we have an interest in supporting the uptake and adoption of legal technology insofar as it provides sufficient benefit to consumers in terms of cost, quality of service, and access to justice. In this sense, it is important to remember that the significant impact AI could have on legal services doesn’t just depend on its usefulness, but on the likelihood of its adoption both amongst legal service providers and by consumers.
Trust issues
The uptake of technology hinges on trust – in both the providers and the technology. According to the Legal Services Consumer Panel’s tracker survey, public trust in legal services has gradually declined, with less than half of survey respondents saying they trust their lawyer. This is in stark contrast to the medical profession, where public trust hovers around the 80% mark.
So how would this affect trust in AI?
If trust in doctors is already high, then it is reasonable to assume there is less need for consumers to place their trust in a disruptor external to the industry (i.e. technology). On the flipside, where there is a lack of trust amongst consumers (as appears to be the case in legal services), an external influence on the sector could be just the ticket for plugging the gap. What this external influence should be is not for me to predict, nor is it necessary for me to weigh the vices and virtues of technology in the legal sector and beyond. However, I will say this: if the legal sector is to take full advantage of the benefits that technology can offer, then consumer confidence is key. The largest public-facing advancements in legal sector technology (HMCTS Court Digitisation, HM Land Registry’s ‘Digital Street’) will have a significant impact on public perception and could easily set the tone for public trust in such initiatives in the future.
A healthy attitude
To bring the discussion back to the research, one of the solutions proposed was that AI providers could pay particular attention to gathering personal patient information, thereby producing a more accurate data profile (e.g. lifestyle, family history, genetics, environment) to be fed into the system.
In this way, enough data would be available to sufficiently account for individual characteristics, allowing AI to step into the shoes of the ‘local family doctor’ with specific knowledge of the consumer. Indeed, when participants in the research were offered the prospect of an AI provider capable of studying each patient’s individual characteristics and medical history, respondents were as likely to follow the AI’s recommendations as the doctor’s.
However, this brings to light possibly the most contentious issue of modern technological uptake: data privacy.
The disclosure of personal data is, quite rightly, an incredibly sensitive subject. While consumers may be willing to disclose certain aspects of their personal information in pursuit of better health or an improved medical diagnosis, it is a stretch to assume the same would apply to securing a more desirable legal outcome. Until issues of data privacy are overcome, AI cannot be expected to take into account the unique circumstances of our cases – arguably the most important aspect of hiring a (human) legal services practitioner.
Conclusions
This study is fascinating in terms of consumer trust and the future direction of our professional services. While there are many factors beyond the scope of this article, the comparison between the legal and medical fields remains a pertinent one.
If we are to accept that trust in the legal profession and individually tailored services are key factors in the future uptake of technology, then further issues need to be addressed.
That is to say, to circumvent some of the issues present in the use of data and algorithms in legal services, practitioners will need to place a greater emphasis on client care as other, less sensitive tasks become increasingly automated. In this way, there will be more time to overcome the vast information asymmetry between providers and consumers, and more attention can be given to understanding the context from which the consumer’s legal issue has arisen.
While it is important to acknowledge that technology is not a panacea for all the challenges in the legal system, perhaps it may serve to ignite the more human-centric side of the profession.