Akron Law Review symposium tackles AI and the law
RICHARD WEINER
Technology for Lawyers
Published: May 31, 2024
The law review of the University of Akron School of Law recently held a symposium on legal AI titled "AI Co-Counsel: An Attorney's Guide to Using Artificial Intelligence in the Practice of Law."
After opening remarks from Tyler Speer of the law review and law school dean Emily Janoski-Haehlen, the symposium started with a presentation by Akron Law writing assistant professor Sean Steward. That was followed by a panel of attorneys: Brandon W. McHugh, an associate with the Canton firm Plakos Mannos; David J. Myers, a partner with Buckingham, Doolittle & Burroughs; and Asvi Patel of Jones Day.
The symposium then concluded with a presentation by Texas Justice (Ret.) John Browning.
Steward talked about using an AI chatbot to create what he called "reactive hypotheticals": building an entire set of discovery documents through continual queries to the AI, which was instructed to remember everything that had come before and weave it into a single, complete scenario.
He said that this series of AI "turns" (the technical term for successive prompts and responses) saved him hours over drafting those documents by hand.
The panelists, speaking on "Integrating Artificial Intelligence into the Practice of Law," all pretty much came to the same conclusion: We are BigLaw, they said. Our research is Westlaw, Westlaw has AI, and that's it. ChatGPT "can be dangerous." Etc.
Anyway, Justice Browning closed the symposium with an excellent, high-energy presentation on “Risks in Attorney Use of Generative AI.”
The first thing Justice Browning covered was allllll the ways ChatGPT has messed up briefs, citations and more, in courtrooms and beyond, leading to sanctions for lawyers too lazy to check the chatbot's work before filing it.
He suggested that AI is good at internal law firm processes: document control, responding to discovery requests, predictive analytics (particularly of how judges are likely to rule) and contract analysis (Coca-Cola, for instance, developed its own AI for this). He also touched on jury bias and risk assessment, and reminded the audience to make sure client confidentiality is protected.
And he reiterated the obvious—don’t use public generative AI for legal research!
He had a very interesting take on whether courts should issue standing orders on the use of generative AI: “Rule 11 should take care of it.”
That’s cutting that issue right to the bone. I’ll put that one on repeat.
In general, a good time was had by all.