The latest edition of Legal Studies, the journal of the Society of Legal Scholars, has appeared on IP Draughts' home desk. Curiously, it has not been caught by the Post Office's filter, which is redirecting professional correspondence (including the Orvis catalogue) to his firm's new offices.
The contents page of Legal Studies is on the back cover, and this month lists 10 research articles, 9 of which are of no interest to IP Draughts. The 10th looks more promising. It is titled "Assessing plain and intelligible language in the Consumer Rights Act: a role for reading scores". The reference is Legal Studies (2019), 39, 378-397.
The authors are academics at the University of Nottingham. Two of them, Conklin and Parente, are from the School of English, and the third (and “corresponding author”), Hyde, is from the School of Law. More and more promising.
As the title of the article suggests, it examines whether techniques that are used to measure readability, and which produce “reading scores”, could usefully be applied to determine whether consumers are likely to understand the contracts that they are asked to sign. And whether reading scores might be used by a court when deciding whether a consumer contract meets the requirement that it be written in “plain, intelligible language” as required by the EU Unfair Terms Directive, implemented in the UK by the Consumer Rights Act 2015.
The authors ran an experiment. First, they set up a method of averaging out the reading scores produced by 5 standard techniques, including the well-known Flesch-Kincaid test. Then they applied the techniques to 7 examples of consumer travel insurance contracts found on the internet. One of the outputs of their research was a "grand weighted mean" that indicated how many years of education a person would need in order to understand the contract terms. The scores varied from 13.86 years (second year of university) for one of the contracts to 19.06 years (beyond a masters degree) for another.
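For readers curious how such a score is produced, here is a minimal Python sketch of the Flesch-Kincaid Grade Level formula mentioned above, whose output is (roughly) the years of education needed to follow the text. The formula itself is standard; the syllable counter below is a crude vowel-group heuristic invented for this illustration, whereas real readability tools use pronunciation dictionaries or more careful rules, so the numbers it produces are only indicative.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of consecutive vowels.
    # (An assumption for this sketch; real tools do much better.)
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid Grade Level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

plain = "The cat sat on the mat."
legalese = ("Indemnification obligations hereunder shall survive "
            "termination of this agreement notwithstanding any "
            "contrary provision.")
print(flesch_kincaid_grade(plain))     # low grade: short words, short sentence
print(flesch_kincaid_grade(legalese))  # much higher grade
```

Long sentences and polysyllabic words both push the grade up, which is why contract boilerplate scores so badly; note that the formula measures only surface features of the text, not whether its concepts make sense.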
Of course, these extraordinary results may be partly down to imperfections in the standard techniques for calculating reading scores, rather than being solely attributable to the impenetrability of insurance contracts. But the results should make the lawyers who draft insurance contracts sit up and take notice.
The article is more thoughtful than the above summary might suggest. At the start of the article, the authors seem doubtful about the multi-factorial tests applied by judges in such cases, which make it difficult for a business to predict what a judge will accept. They seem to be suggesting that plain, intelligible language might best be decided by an algorithm. But by the end of the article they seem to acknowledge that the subject is more complex than existing tests are able to address. So, perhaps there is a role for human judges, after all.
IP Draughts is not an expert in reading tests. But he wonders whether they can do anything other than assess clarity of expression. If the underlying concepts are difficult to grasp, that is a separate issue and one that is unlikely to be testable by an algorithm. Protecting the consumer from terms whose concepts are difficult to understand may be more of a question of whether the term is “fair” rather than whether it is clearly expressed.