The Audacity Of AI Incompetence

Above the Law
by Nicole Black
March 3, 2026
AI-Generated Deep Dive Summary
The article highlights a growing problem in the legal profession: the misuse of AI tools like ChatGPT, which has led to an influx of court submissions riddled with errors, including fake case citations. Since ChatGPT's release in late 2022, lawyers have increasingly relied on AI to draft briefs, often producing hallucinated or fabricated legal references that undermine the credibility of their arguments.

One striking example occurred during an appellate argument in *Deutsche Bank National Trust Company v. Jean LeTennier*, where counsel for the appellant admitted to using AI but dismissed concerns about its accuracy. Even after acknowledging errors in his submissions when pressed by the court, the attorney deflected criticism, claiming the inaccuracies were immaterial. The incident underscores a troubling trend: lawyers who ignore or downplay the risks of AI-generated mistakes, even when caught.

The legal profession faces a critical challenge as AI tools become more prevalent. While these technologies can enhance efficiency, they also introduce significant risks when used irresponsibly. The case highlights the importance of accountability in ensuring that AI-assisted work remains accurate and reliable, and legal professionals must implement robust quality-control measures to prevent errors from slipping into court filings. Ultimately, the issue raises broader questions about ethical practice and professional responsibility in a tech-driven legal landscape. As AI continues to evolve, lawyers must balance its benefits against the need for integrity and precision in their work. The stakes are high: the credibility of the legal system and public trust depend on it.