Technology Assisted Review (TAR) is a process in which computer software electronically classifies documents based on input from expert reviewers, in an effort to expedite the organization and prioritization of the document collection. The computer classification may include broad topics pertaining to discovery responsiveness, privilege, and other designated issues. TAR (also sometimes called Computer Assisted Review, or CAR) may dramatically reduce the time and cost of reviewing electronically stored information (ESI), by reducing the amount of human review needed on documents classified as potentially non-material.
The framework below was developed in 2012 by an EDRM team to document the steps of the TAR process. Like the EDRM framework, the TAR framework should be a useful reference for e-discovery practitioners at corporations, law firms, and elsewhere; e-discovery services and software providers; and organizations evaluating e-discovery tools. In 2017, a new EDRM team undertook a project to develop TAR standards, using this framework as the launching point.
The process of deciding the desired outcome of the Technology Assisted Review process for a specific case; the possible outcomes vary from matter to matter.
The process of building the human coding rules that account for the use of TAR. The TAR system must be taught about the document collection by having human reviewers submit documents to be used as examples of a particular category, e.g., relevant documents. At this stage, the team creates a coding protocol that properly incorporates both the fact pattern of the case and the training requirements of the TAR system. One example of a protocol determination is deciding how to treat the coding of family documents during the TAR training process.
The process of transferring the review protocol information to the human reviewers prior to the start of the TAR Review.
The process of human reviewers applying subjective coding decisions to documents in an effort to adequately train the TAR system to “understand” the boundaries of a category, e.g. Relevancy.
The process of the TAR system applying the information “learned” from the human reviewers and classifying a selected document corpus with pre-determined labels.
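The two steps above — human reviewers coding example documents, and the system applying what it has "learned" to the wider corpus — can be sketched with a minimal text classifier. This is an illustrative sketch only, using a simple Naive Bayes model in pure Python with hypothetical seed documents; commercial TAR systems use their own proprietary classification algorithms.

```python
import math
from collections import Counter

def train(examples):
    """Train a multinomial Naive Bayes model from human-coded examples.

    examples: list of (text, label) pairs supplied by human reviewers,
    e.g. ("merger agreement ...", "relevant").
    """
    label_counts = Counter(label for _, label in examples)
    word_counts = {label: Counter() for label in label_counts}
    vocab = set()
    for text, label in examples:
        words = text.lower().split()
        word_counts[label].update(words)
        vocab.update(words)
    total = sum(label_counts.values())
    model = {}
    for label, count in label_counts.items():
        log_prior = math.log(count / total)
        # Laplace smoothing so unseen words do not zero out a score.
        denom = sum(word_counts[label].values()) + len(vocab)
        log_likelihood = {
            w: math.log((word_counts[label][w] + 1) / denom) for w in vocab
        }
        model[label] = (log_prior, log_likelihood, math.log(1 / denom))
    return model

def classify(model, text):
    """Return the label with the highest log-probability for this document."""
    scores = {}
    for label, (log_prior, log_likelihood, log_unseen) in model.items():
        scores[label] = log_prior + sum(
            log_likelihood.get(w, log_unseen) for w in text.lower().split()
        )
    return max(scores, key=scores.get)

# Hypothetical seed set coded by human reviewers:
seed_set = [
    ("merger agreement signed by the board", "relevant"),
    ("quarterly revenue forecast for the merger", "relevant"),
    ("office holiday party catering menu", "not relevant"),
    ("parking garage access instructions", "not relevant"),
]
model = train(seed_set)
label = classify(model, "draft merger agreement for board approval")
print(label)  # classifies the unreviewed document as "relevant"
```

The key point the sketch illustrates is that the quality of the human coding decisions directly determines the quality of the machine classification, which is why the protocol and education steps above matter.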
The process of human reviewers using a validation process, typically statistical sampling, in an effort to create a meaningful metric of TAR performance. The metrics can take many forms; they may include estimates of defect counts in the classified population, or information retrieval metrics such as Precision, Recall, and F1.
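The metrics named above follow from standard definitions: Precision is the fraction of documents classified relevant that truly are, Recall is the fraction of truly relevant documents the system found, and F1 is their harmonic mean. The sketch below computes them from a hypothetical validation sample of re-reviewed documents, and also extrapolates a sampled defect rate to a defect-count estimate; the sample data and the 60,000-document discard-pile size are illustrative assumptions, not figures from the framework.

```python
def validation_metrics(sample, positive_label="relevant"):
    """Compute Precision, Recall, and F1 from a validation sample.

    sample: list of (predicted_label, true_label) pairs, where the true
    label comes from a human reviewer re-checking the TAR classification.
    """
    tp = sum(1 for p, t in sample if p == positive_label and t == positive_label)
    fp = sum(1 for p, t in sample if p == positive_label and t != positive_label)
    fn = sum(1 for p, t in sample if p != positive_label and t == positive_label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical validation sample: 10 randomly drawn, re-reviewed documents.
sample = [
    ("relevant", "relevant"), ("relevant", "relevant"), ("relevant", "relevant"),
    ("relevant", "not relevant"),      # false positive
    ("not relevant", "relevant"),      # false negative: a "defect"
    ("not relevant", "not relevant"), ("not relevant", "not relevant"),
    ("not relevant", "not relevant"), ("not relevant", "not relevant"),
    ("not relevant", "not relevant"),
]
precision, recall, f1 = validation_metrics(sample)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
# precision=0.75 recall=0.75 f1=0.75

# The sampled defect rate extrapolates to a defect-count estimate:
# 1 false negative among 6 sampled "not relevant" documents, scaled to a
# hypothetical 60,000-document discard pile.
discard_sample = [(p, t) for p, t in sample if p == "not relevant"]
defect_rate = sum(1 for p, t in discard_sample if t == "relevant") / len(discard_sample)
estimated_defects = defect_rate * 60_000
print(f"estimated defects in discard pile: {estimated_defects:.0f}")
# estimated defects in discard pile: 10000
```

In practice such point estimates are reported with confidence intervals that depend on the sample size, which is why the sampling design is agreed in the protocol step.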
The process of the review team deciding whether the TAR system has achieved the goals anticipated by the review team.
The process of ending the TAR workflow and moving to the next phase in the review lifecycle, e.g. Privilege Review.
This model was produced with input from some of the best-known providers of Technology Assisted Review, as well as other industry leaders.
| Title | Author | Date |
| --- | --- | --- |
| The Grossman-Cormack Glossary of Technology-Assisted Review | Maura R. Grossman, Wachtell, Lipton, Rosen & Katz, and Gordon V. Cormack, University of Waterloo | 2012/12 |
| Measuring and Validating the Effectiveness of Relativity Assisted Review | Dr. David Grossman, Ph.D., prepared for Relativity | 2013/02 |
| Workflow for Computer-Assisted Review in Relativity | kCura Corporation | 2012/07 |
| Predictive Ranking: Technology Assisted Review Designed for the Real World | Jeremy Pickens, Senior Applied Research Scientist, Catalyst Repository Systems | 2013/02 |