How AI Evidence Is Changing Expert Testimony
By Eric Prindle, Esq. | Reviewed by Canaan Suitt, J.D. | Last updated on February 4, 2026

The availability of generative artificial intelligence tools, which can produce original written materials, images, and sound recordings, has presented new challenges for courts, particularly with respect to evidentiary rules and expert testimony.
What follows is a summary of some of the emerging trends and issues in this rapidly evolving area. For specialized legal advice, reach out to an attorney with experience in technology issues.
AI Hallucinations in Expert Testimony: Notable Case Examples
Stories of attorneys who have been disciplined for citing nonexistent, AI-hallucinated cases and statutes in legal pleadings are widespread.
Less common, but still notable, are cases where expert witnesses have come under scrutiny for incorporating AI hallucinations into declarations, reports, and other testimony.
Example 1: An AI Case from Minnesota
Perhaps the most well-known example is a 2025 court order in Kohls v. Ellison in the U.S. District Court for the District of Minnesota. In this case, plaintiffs challenged Minnesota’s newly enacted law banning the use of AI-generated “deepfakes” in political campaigns. In response, the state filed two expert declarations from professors purporting to be experts in AI-generated content.
Ironically, one of those declarations, from Professor Jeff Hancock, was found to contain citations to two nonexistent academic articles and a misattribution of another article. Hancock acknowledged using AI tools to draft his declaration. The judge excluded it from consideration and declined to let the state submit a new expert declaration in its place.
Example 2: An AI Case from California
A similar situation arose the same year in the U.S. District Court for the Northern District of California in Concord Music Group, Inc. v. Anthropic PBC. Interestingly, this case, too, concerned artificial intelligence.
The defendant, Anthropic, a prominent AI company, submitted an expert report from one of its own data scientists, Olivia Chen. The report cited an actual article, but the citation included a fictitious title and list of authors. The judge concluded this was the result of an AI hallucination and struck the relevant section of the expert declaration. The judge also noted that the incident undermined Chen’s credibility as an expert witness.
Example 3: A Trust Administration Case from New York
A third relevant case is Matter of Weber, a trust administration matter in the Surrogate’s Court of Saratoga County, New York. Unlike the previous two cases, this one did not involve the detection of a specific hallucination in an expert report. Rather, the expert witness acknowledged using the AI tool Microsoft Copilot to draft his report.
However, the court found the expert’s testimony not credible because, among other reasons, he could not recall what input or prompt he used to generate the report. Nor could he explain what sources Copilot relies on or how it draws on those sources to produce a given output.
Pointers for the Ethical Use of AI in Expert Testimony
Some general guidance emerges from the cases discussed above on the use of AI in expert testimony and legal practice. Expert witnesses and legal professionals should:
- Be transparent about their use of AI
- Keep a record of what inputs were used to generate the AI outputs
- Demonstrate an understanding of the methodology by which the AI system generates its results
- Independently validate AI-generated outputs to the extent possible (a simple sketch of what this might look like follows this list)
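To make the record-keeping and validation points concrete, here is a minimal sketch of one possible workflow. It assumes a hypothetical setup in which the expert logs each prompt and output to a local file, then checks every cited title against the public CrossRef database of scholarly works; the function names and log format are illustrative, not a prescribed standard, and pulling and reading each cited source remains essential.

```python
import hashlib
import json
from datetime import datetime, timezone

import requests  # pip install requests

LOG_FILE = "ai_usage_log.jsonl"  # hypothetical log location


def log_ai_use(tool: str, prompt: str, output: str) -> None:
    """Append a timestamped record of one AI interaction to a JSONL log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "prompt": prompt,
        # Hash the output so later drafts can be matched to this record.
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "output": output,
    }
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


def citation_candidates(title: str, rows: int = 5) -> list[dict]:
    """Query the public CrossRef API for works matching a cited title.

    An empty or weakly matching result list is a red flag that the
    citation may be hallucinated. It is NOT proof either way; every
    source should still be located and read before filing.
    """
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        {"title": (item.get("title") or ["<untitled>"])[0], "doi": item.get("DOI")}
        for item in items
    ]


if __name__ == "__main__":
    log_ai_use("example-llm", "Summarize the attached dataset.", "...")
    for match in citation_candidates("Deepfakes and political communication"):
        print(match["title"], match["doi"])
```

The specific tooling matters less than the habit it represents: every AI interaction leaves a reviewable record, and every machine-suggested citation is checked against an independent source before it reaches a court.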
Transparent Use of AI in Expert Testimony
The uncritical use of AI to draft declarations and reports can undermine the credibility of an expert witness. This is especially true when specific hallucinations are found in the resulting document.
However, courts have generally acknowledged that AI technology has the potential to be useful, particularly in reviewing large datasets and detecting patterns or uncovering details.
Likewise, courts and opposing counsel can use AI and machine learning tools to evaluate the credibility of expert witnesses. One law firm claims to have used AI software to detect significant inconsistencies across 63 depositions, totaling 7,500 pages, given by the same expert witness, and to have used those inconsistencies to undermine the witness’s testimony in its client’s case.
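The firm’s specific software is not public, but the underlying idea of comparing what a witness says across many transcripts can be illustrated with standard text-similarity tooling. The sketch below (using scikit-learn, with made-up example sentences) flags pairs of statements from different depositions that discuss the same subject so a human reviewer can check them for contradictions; it does not itself decide what counts as an inconsistency.

```python
from itertools import combinations

# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for sentences pulled from different deposition transcripts.
statements = [
    ("Depo 12", "I have never used Copilot to draft an expert report."),
    ("Depo 34", "I used Copilot to draft parts of the expert report in this case."),
    ("Depo 34", "My billing rate for expert testimony is five hundred dollars per hour."),
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(text for _, text in statements)
scores = cosine_similarity(matrix)

# Surface same-topic pairs from *different* depositions for human review.
THRESHOLD = 0.3  # tuned by hand; purely illustrative
for i, j in combinations(range(len(statements)), 2):
    (src_i, text_i), (src_j, text_j) = statements[i], statements[j]
    if src_i != src_j and scores[i, j] >= THRESHOLD:
        print(f"Compare {src_i} vs {src_j} (similarity {scores[i, j]:.2f}):")
        print(f"  {text_i}")
        print(f"  {text_j}")
```

Run on the toy data above, the first two statements are flagged as covering the same subject, leaving it to the reviewing lawyer to notice that one denies and one admits the same conduct.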
AI Detection: Federal Rulemaking and the Need for Experts
Another area in which the growth of artificial intelligence is likely to impact expert testimony is the introduction into evidence of AI-generated or AI-altered materials such as images, videos, sound recordings, charts, and graphs.
Rule 901 of the Federal Rules of Evidence, which many state rules imitate, says that courts can use an item’s distinctive characteristics, along with all the circumstances, to authenticate the item as evidence. Given the ability of AI tools to create deepfakes that mimic the distinctive characteristics of an authentic item, judges and legal academics have begun to look for ways to ensure that the rules evolve.
For example, the U.S. Judicial Conference’s Advisory Committee on Evidence Rules has proposed a new Rule 707 for consideration, which would require that known AI-generated evidence meet the same admissibility standards as expert witness testimony.
The proposed rule would not, however, address the need to identify whether evidence presented as authentic may in fact have been generated by AI. This is where it will likely become increasingly necessary for litigants and courts to retain experts in AI detection who can testify on whether items offered into evidence are likely to have been generated or altered with AI tools.
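Rigorous AI detection is an expert discipline in its own right, but one trivially simple first-pass signal can illustrate the kind of artifact such experts examine. The sketch below (using the Pillow imaging library; the file path is hypothetical) checks whether a photo carries the camera metadata a genuine capture usually embeds. Missing metadata proves nothing by itself, since messaging apps routinely strip it, and present metadata can be forged, which is exactly why qualified experts, not scripts, should draw the conclusions.

```python
# pip install Pillow
from PIL import Image
from PIL.ExifTags import TAGS


def exif_summary(path: str) -> dict:
    """Return human-readable EXIF tags from an image file.

    A genuine camera photo usually embeds tags such as Make, Model,
    and DateTime; purely AI-generated images typically carry none.
    Absence is a reason for closer scrutiny, not a conclusion.
    """
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}


if __name__ == "__main__":
    tags = exif_summary("exhibit_photo.jpg")  # hypothetical exhibit file
    if not tags:
        print("No EXIF metadata found: flag for expert review.")
    else:
        for name in ("Make", "Model", "DateTime"):
            print(name, "=", tags.get(name, "<missing>"))
```

A real forensic examination would go far beyond this, looking at compression artifacts, sensor noise patterns, and provenance signatures, but even this toy check shows why such analysis calls for testimony from someone who understands the limits of each signal.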
Getting Credible and Experienced Legal Help
Though artificial intelligence offers opportunities for developing and evaluating expert testimony, and may even give rise to new fields of expertise, its potential misuse in legal proceedings is a major area of concern for courts.
Responsible practitioners will need to prioritize transparency and clarity around AI use. For legal advice on your case, reach out to an attorney with demonstrated knowledge and experience in legal and technology topics.
Additional Science and Technology Law articles
- Overview of Science and Technology Law
- Intellectual Property Challenges for AI-Generated Content
- Using AI in Legal Practice: What Lawyers Say
- Can Companies Use My Likeness for AI Applications?
- Can Lawyers Use AI in Court? State-by-State Rules
- AI Compliance Audit: Does My Company Need One?
- State vs. Federal AI Regulation: Where Are We Heading?
- Avoiding Algorithmic Bias: Top 5 AI Liability Issues in Courts
- AI Hallucination in Legal Practice: When Technology Gets the Law Wrong
- The EU AI Act: How Other Countries Are Regulating AI
- Have You Been Deepfaked? What To Do Next