Research comparing the quality of human review to predictive coding (also called technology-assisted review) has clouded the market’s understanding of the vital role lawyers play in document review. This view that humans are inferior to machines, fueled largely by outdated research, has increased the risk of inadvertently disclosing sensitive and confidential information when responding to document requests.
For example, certain requesting parties have arbitrarily asserted that the party using predictive coding should not be allowed to conduct a responsiveness review of the production during the privilege review process. In other words, if attorneys using predictive coding encounter obviously non-responsive documents in the course of a privilege review—such as an employee’s medical records or pictures of the employee’s children—the requesting party’s position is that the human attorneys should not be allowed to exercise their own independent judgment to mark such documents as non-responsive.
This session will:
- Analyze and scrutinize the results of prior research on predictive coding, and update prevailing approaches.
- Quantify the risks associated with relying exclusively on predictive coding, without final human judgment calls.
- Examine the results of experiments that use real-world data to assess the quality of review when combining predictive coding with human judgment calls.
- Rishi Chhatwal, Assistant Vice President, Senior Legal Counsel, AT&T Services
- Robert Keeling, Partner, Sidley Austin LLP
- Peter Gronvall, Senior Managing Director, Ankura
- Nathaniel Huber-Fliflet, Senior Managing Director, Ankura