How quickly ought one to review documents, and is it any business of the court to consider the relationship between the speed of review and its quality? How can you increase review speed without reducing quality to an unacceptable degree?
Rachel Teisch, VP of Marketing at Xerox Litigation Services, looks at these questions in an article called The Fastest Document Reviewer in the World. She considers a US case involving First Technology Capital (“FTC”), which gave discovery of a supplemental production of 1,500 documents. On learning that its opponents considered that 45 of those documents “carried hallmarks of privilege”, FTC tried to get them back.
The entitlement to recover privileged documents in the US lies in Federal Rule of Evidence 502(b). Privilege is not waived if the disclosure was inadvertent, the producing party took reasonable steps to prevent it, and the party acted promptly to put matters right.
As Rachel Teisch explains it, this brought the court to consider the time actually spent on the privilege review as part of the exercise of determining whether FTC had taken “reasonable steps”. Having established that the review took an average of 9.84 seconds per document, the judge decided that “the rapidity of review indicates an unreasonably small temporal component to the process” or, in plain English, that FTC had not spent long enough on the review to argue that its steps had been reasonable.
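To put that figure in context, the arithmetic is straightforward. The short Python sketch below uses only the numbers quoted above (1,500 documents at 9.84 seconds each); the implied total is simply their product, not a figure taken from the court's opinion, which may have worked the sum the other way round.

```python
# Back-of-the-envelope review-rate arithmetic using only the figures quoted above.
# The implied total is the product of the two numbers and may differ from the
# court's own working, which is not reproduced here.

documents_reviewed = 1_500       # supplemental production at issue
seconds_per_document = 9.84      # average rate the judge calculated

total_seconds = documents_reviewed * seconds_per_document
total_hours = total_seconds / 3600

print(f"Implied total review time: {total_seconds:,.0f} seconds (~{total_hours:.1f} hours)")
# Implied total review time: 14,760 seconds (~4.1 hours)
```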
How can lawyers review documents at a rate which is acceptable in time and cost whilst satisfying the court that the job has been done properly? The answer, Rachel Teisch says, lies in the use of technology tools like concept clustering and email threading, or technology-assisted review which can “employ privilege keywords to rank the likelihood that particular documents should be protected”.
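To make the idea concrete, here is a deliberately simple sketch of what using privilege keywords to rank documents might look like. It is a toy illustration only, not a description of how CategoriX, Viewpoint or any other product actually works; the keyword list, weights and function names are invented for the example.

```python
# A toy sketch (not any vendor's actual method) of ranking documents by the
# presence of privilege-related keywords, so that the likeliest candidates for
# privilege are put in front of reviewers first. Keywords and weights are
# illustrative only.

PRIVILEGE_KEYWORDS = {
    "attorney-client": 3.0,
    "legal advice": 3.0,
    "privileged and confidential": 2.5,
    "work product": 2.0,
    "counsel": 1.5,
}

def privilege_score(text: str) -> float:
    """Crude score: weighted count of privilege keywords found in the document."""
    lowered = text.lower()
    return sum(weight * lowered.count(term) for term, weight in PRIVILEGE_KEYWORDS.items())

def rank_for_privilege_review(documents: dict[str, str]) -> list[tuple[str, float]]:
    """Return (doc_id, score) pairs, highest presumed-privilege score first."""
    scored = [(doc_id, privilege_score(text)) for doc_id, text in documents.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Example usage with two invented documents
docs = {
    "DOC-001": "Memo reflecting legal advice from outside counsel on the dispute...",
    "DOC-002": "Quarterly sales figures and delivery schedules...",
}
for doc_id, score in rank_for_privilege_review(docs):
    print(doc_id, score)
```

Real technology-assisted review tools go well beyond simple keyword counts, but even this crude ordering shows the principle: the documents most likely to need protection rise to the top of the pile instead of being buried at random within it.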
Xerox Litigation Services has technology of this kind – its CategoriX Technology Assisted Review application is designed for just this type of exercise, as is its behind-the-firewall solution Viewpoint All-in-One e-Discovery Software. Both are used (amongst other things) to rank documents in a presumed order of relevance, and to do so in collections very much larger than the set at issue in the FTC case.
Ask yourself this: would I rather plough through documents in (say) date order until I find relevant ones, or would I like them put into a presumed order of relevance so that I can look first (and perhaps with the most experienced reviewers) at those more likely to be relevant? The answer is pretty obvious.
The FTC case gives us an example of a review exercise which went wrong because the lawyers could not (apparently) justify the application of adequate time and resources to the job. Given the outcome, it would, I suspect, have been cheaper and quicker to use the kind of technology to which Rachel Teisch refers; it would certainly have been good to avoid the embarrassment of having the judge calculate to the second the degree of attention given to the review.
