With the rapidly expanding sophistication, reliability, and cost-effectiveness of artificial intelligence (AI) systems, the current trend of admitting testimony based on AI systems is only likely to grow. In that context, it is imperative to ask what rules of evidence judges should apply to such evidence. To answer that question, we provide an in-depth review of expert systems, machine learning systems, and neural networks. Based on that analysis, we contend that evidence from only certain types of AI systems meets the requirements for admissibility, while evidence from other systems does not. The line between admissible and inadmissible AI evidence is a function of the opacity of the AI system’s underlying computational methodology and the court’s ability to assess that methodology. The admission of AI evidence also requires us to navigate pitfalls, including the difficulty of explaining AI systems’ methodology and issues concerning the right to confront witnesses. Based on our analysis, we offer several policy proposals that would address weaknesses or lack of clarity in the current system. First, in light of the long-standing concern that jurors would allow expertise to overcome their own assessment of the evidence and blindly accept the “infallible” result of advanced-computing AI, we propose that jury instruction commissions, judicial panels, circuits, or other parties who draft instructions consider adopting a cautionary instruction for AI-based evidence. Such an instruction should remind jurors that the AI-based evidence is only one part of the analysis, that the opinions so generated are only as good as the underlying analytical methodology, and that, ultimately, the decision to accept or reject the evidence, in whole or in part, remains with the jury alone. 
Second, because we conclude that the admissibility of AI-based evidence depends largely on the computational methodology underlying the analysis, we propose that, for AI evidence to be admissible, the underlying methodology must be transparent, as the judicial assessment of AI technology relies on the ability to understand how it functions.