Many commentators have lighted on the paper Crash or Soar – Will the legal community accept “predictive coding?” by Anne Kershaw and Joe Howie, in which they explored whether lawyers will be willing to abide by the results of review accelerators, which they group together with the label “predictive coding”. The article is based on the results of a survey of eleven legal software companies whose applications or services include review accelerators of some kind. Three of those who took part in the survey are companies which I know well, and happy chance enables me to make a plausible title for this article from FTI Technology’s Acuity, Equivio>Relevance and Recommind’s Predictive Coding.
“Acuity” is sharpness or acuteness. “Relevance” connotes a bearing upon, or pertinence to, the matter in hand. “Predictive” implies foresight and the ability to anticipate. These are good names, therefore, for products or services whose function is to get you to what matters quickly. The Kershaw/Howie article gets its name from the fact that many lawyers are nervous of reliance on any form of automated review, preferring, or at least claiming, to read every document.
Those who advocate human review must address three points. First, if predictive coding (I will stick with the Kershaw/Howie label for convenience) can save significant costs without significantly reducing accuracy, then the burden falls on its opponents to point to its flaws. Second, consistent accuracy by humans – Monday to Friday, morning till night, across multiple reviewers – is impossible to achieve, at least within reasonable time-frames. Third, even if you could expect such accuracy, you have no way of verifying it without repeating the exercise with a different set of reviewers, whereas, as Kershaw and Howie observe, because predictive coding is based on human-assisted computer analysis, “sets of documents can be examined multiple times using different parameters or sample sets”.
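The verification point is, at bottom, a statistical one: a computer-assisted review can be audited by drawing a random sample from the reviewed set and having a second pair of eyes check the tags, something which is impracticable across an entire manually-reviewed population. As a minimal sketch of that idea (the function names and document structure here are my own illustration, not taken from any of the products mentioned):

```python
import random

def sample_for_qc(reviewed_docs, sample_size, seed=42):
    """Draw a reproducible random sample from a reviewed set for quality control.

    A fixed seed means the same audit sample can be re-drawn and re-examined,
    which is the repeatability the Kershaw/Howie quotation points to.
    """
    rng = random.Random(seed)
    return rng.sample(reviewed_docs, sample_size)

def estimated_error_rate(sample, second_opinion):
    """Estimate the review's error rate by comparing first-pass tags
    against a second reviewer's tags on the sampled documents."""
    disagreements = sum(
        1 for doc in sample if second_opinion[doc["id"]] != doc["tag"]
    )
    return disagreements / len(sample)
```

The point of the sketch is not the code itself but the workflow: the sample, the second opinion, and the resulting error estimate can all be repeated with different seeds or sample sizes, whereas re-checking a wholly human review means doing the whole review again.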
Will judges accept the results? Between you and me, they are unlikely (in the UK at least) to challenge the results off their own bat unless a manifest flaw appears in the form of a key document (not one which is merely “relevant” in the broadest sense) which is missing. The objections, if any, may come from the client (fearful that you will miss the winning document) or from the opponents. The client has a choice: the maths are straightforward (this route costs X, that one costs 4X – which do you prefer?) and the risk is one which the client should be as well able as the lawyer to assess – they are, after all, the client’s documents. As to dealing with the other side, the only mistake is lack of transparency, which is why the new UK Practice Direction requires that parties discuss the methods to be used to trim down the document populations. Debates about review accelerators dovetail with those about my previous topic, proportionality (see the Sedona Conference Commentary on Proportionality in Electronic Discovery), and he who argues for the more expensive route must explain why it is necessary. But you must have the discussion.
As I have said, I know three of those who took part in the survey underlying Anne Kershaw and Joe Howie’s paper. Craig Carpenter of Recommind was quick to point out that Predictive Coding™ is Recommind’s own trade mark, and has been for three years (see The Critical Difference Between “predictive coding” and Predictive Coding™). Equivio has recently drawn attention to a string of articles on the subject. FTI Technology takes a different approach, well summarised in Greg Buckles’ article Autocoding Take III – Acuity Offering. FTI’s is a more consultative approach, with the technology in the hands of FTI’s Acuity consultants – motto “Be Ready. Be Right.” The effect on speed and cost is going to be the same – any tool which speeds up the review process whilst maintaining accuracy has got to be attractive.
Those who wonder what the UK judicial view is might like to look at paragraph 27 of Master Whitaker’s judgment in Goodale v Ministry of Justice. It would be right to say that few UK judges have Master Whitaker’s grasp of these things. No judge, however, is likely to challenge a method which can be shown to be cheaper, in the absence of strong evidence (as opposed to mere assertion) that it is unreliable. The time to debate this is upfront, not when challenged afterwards.
None of this technology solves the problems on its own. It needs a brain, and a legally-trained brain at that, to set the parameters, to check the results, and to ensure that technology is used as an enabler towards meeting the client’s objective. That objective is not polishing data but disposing of a dispute in the shortest time by the most cost-effective route. One ought at least to look at the applications which offer help towards that objective.