Replication and automation of expert judgments: Information engineering in legal e-discovery
Title | Replication and automation of expert judgments: Information engineering in legal e-discovery |
Publication Type | Conference Papers |
Year of Publication | 2009 |
Authors | Hedin B, Oard D |
Conference Name | IEEE International Conference on Systems, Man and Cybernetics, 2009. SMC 2009 |
Date Published | 2009/10/11 - 2009/10/14 |
Publisher | IEEE |
ISBN Number | 978-1-4244-2793-2 |
Keywords | authorisation, authority control, Automation, civil litigation, CYBERNETICS, Delay, digital evidence retrieval, discovery request, Educational institutions, expert judgment automation, Human computer interaction, Human-machine cooperation and systems, human-system task modeling, information engineering, Information retrieval, interactive task, Law, law administration, legal e-discovery, Legal factors, PROBES, Production, Protocols, search effort, Search methods, text analysis, text retrieval conference legal track, United States, USA Councils, User modeling |
Abstract | The retrieval of digital evidence responsive to discovery requests in civil litigation, known in the United States as "e-discovery," presents several important and understudied conditions and challenges. Among the most important of these are (i) that the definition of responsiveness that governs the search effort can be learned and made explicit through effective interaction with the responding party, (ii) that the governing definition of responsiveness is generally complex, deriving both from considerations of subject-matter relevance and from considerations of litigation strategy, and (iii) that the result of the search effort is a set (rather than a ranked list) of documents, sometimes quite a large set, that is turned over to the requesting party and that the responding party certifies to be an accurate and complete response to the request. This paper describes the design of an "interactive task" for the Text Retrieval Conference (TREC) Legal Track, whose goal was to evaluate the effectiveness of e-discovery applications at the "responsive review" task. Notable features of the 2008 interactive task were high-fidelity human-system task modeling, authority control for the definition of "responsiveness," and relatively deep sampling for estimation of type 1 and type 2 errors (expressed as "precision" and "recall"). The paper presents a critical assessment of the strengths and weaknesses of the evaluation design from the perspectives of reliability, reusability, and cost-benefit tradeoffs. |
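Note: the abstract's "type 1 and type 2 errors (expressed as 'precision' and 'recall')" refer to the standard set-based measures. The sketch below states those standard definitions and a simple sample-based estimate; it is an illustration only, not the track's actual stratified estimation design, and the symbols R, T, C, \hat{p}, and \hat{q} are introduced here for exposition rather than taken from the paper.

% Standard set-based definitions, with R the set of documents produced as
% responsive and T the (unknown) set of truly responsive documents:
\[
  \text{precision} = \frac{|R \cap T|}{|R|},
  \qquad
  \text{recall} = \frac{|R \cap T|}{|T|}.
\]
% An illustrative estimate from random samples of judged documents:
% if a fraction \hat{p} of a sample drawn from R is judged responsive, and a
% fraction \hat{q} of a sample drawn from the full collection C is judged
% responsive, then
\[
  \widehat{\text{precision}} = \hat{p},
  \qquad
  \widehat{\text{recall}} \approx \frac{\hat{p}\,|R|}{\hat{q}\,|C|}.
\]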
DOI | 10.1109/ICSMC.2009.5346118 |