AI Research Can Be Used Against Clients In Court. It Shouldn’t Be.
Above the Law
by Carolyn Elefant | February 26, 2026
AI-Generated Deep Dive Summary
The use of generative AI tools like ChatGPT or Gemini has become a growing concern among attorneys, as clients increasingly rely on these technologies to draft contracts and devise legal strategies without fully understanding their limitations. While some lawyers express frustration with the flawed advice these tools generate, others, including the author, view AI as a valuable resource that helps clients arrive better prepared for legal proceedings. Recent developments in case law, however, have raised significant concerns about the discoverability of AI-generated documents in court.
In a landmark decision in *United States v. Heppner*, Judge Jed Rakoff of the Southern District of New York ruled that documents created using Claude (an AI tool) were protected by neither attorney-client privilege nor the work-product doctrine. The defendant, Bradley Heppner, had used Claude to generate detailed legal analyses in connection with his securities fraud case. When the FBI seized his devices, prosecutors argued, and the court agreed, that those documents were fair game. The reasoning hinged on three key points: AI tools like Claude are not attorneys; communications with them lack confidentiality because their terms of service allow third-party disclosure; and even if the documents