At Relativity Fest, I moderated a panel on predictive coding whose members included two of the lawyers on opposite sides in the Pyrrho litigation. The parties to Pyrrho had first debated whether or not to use predictive coding and, then, when that was agreed in principle, how it was to be done.
One of the Pyrrho participants was Dan Wyatt, an associate at RPC in London. After our panel, I asked him how the idea developed that predictive coding was the right technology for this case. The interview is below.
The issue, Dan Wyatt said, was a common one – how to deal with very large volumes of documents in a cost-effective and expedient manner. It was clear from the outset that keywords and manual review would be a “challenging” way to do this, and RPC was receptive when the defendant’s Ed Spencer (I have a separate interview with him coming up) suggested using predictive coding.
Dan Wyatt emphasised that getting through large volumes of documents quickly was not the only consideration – it was necessary also to “do a good job” and be satisfied that the client’s interests were looked after.
Agreeing to the use of predictive coding was only the start in the Pyrrho discussions. There was a long haul thereafter to agree how it was to be used and, in particular, which keywords were to be applied. Both parties, Dan Wyatt said, were willing to make “sensible compromises” where they could.
One of the reasons for my predictive coding panel was to compare approaches in three jurisdictions – the US, Ireland, and England and Wales – not just what the rules say, but how parties work (or do not work) together to achieve a sensible result. It may be harder to achieve a cooperative approach in the adversarial environment of US litigation, but Dan Wyatt said that conversations he had had at Relativity Fest implied that even US lawyers might come to see the benefits of getting round a table with their opponents as a means of making progress towards agreement.