The State Bar of California has proposed amendments to comments accompanying several Rules of Professional Conduct addressing lawyers’ use of artificial intelligence (AI).

The proposed comments do not create new ethical duties.  Rather, they elaborate on how existing rules apply to lawyers’ use of AI.  Although the proposed comments would apply only in California, they make explicit the duties that are already implicit in the professional conduct rules of most jurisdictions.

The proposed comments repeatedly emphasize themes that should surprise no careful lawyer:  lawyers must understand the risks and benefits of relevant technology, protect confidential information, supervise subordinate lawyers and staff, communicate appropriately with clients, and independently verify work product before relying on it.

The essential point appears in the proposed comment to Rule 1.1 on competence: “[A] lawyer must independently review, verify, and exercise professional judgment regarding any output generated by the technology that is used in connection with representing a client.”

The proposed comments are timely because lawyers continue to file papers containing hallucinations, even though they can no longer plausibly claim ignorance about the risks of using AI without carefully reviewing its output.  Damien Charlotin maintains a remarkable database tracking cases involving hallucinations in legal filings.  As of this writing, the database includes almost 1,000 cases in the United States.

Law schools should pay close attention to these developments.  Every professional responsibility course should include serious discussion of AI competence, hallucinations, verification duties, confidentiality risks, and supervision responsibilities.  These issues are central questions of everyday legal practice, not peripheral technology topics.

Students need more than abstract warnings. They need practical exercises involving AI-generated errors, verification protocols, client communication problems, and strategic decisions about when AI use is appropriate, risky, or irresponsible.

Ironically, the wave of hallucination cases may ultimately produce a constructive effect that I describe in a forthcoming article, The Surprising Value of AI Hallucinations.  The frequency of hallucinations should remind lawyers and law students of something they should have known all along:  legal practice depends on professional judgment, careful verification, and responsibility for one’s own work product.