Have you heard about the New York lawyer who faced disciplinary action for submitting a brief generated by AI? The incident raises important questions for any lawyer who is unclear about AI's role in the legal field. Understanding what went wrong, and whether the blame lies with the AI or with the lawyer's misuse of it, is crucial.
This article illustrates how to use AI tools effectively for tasks like contract drafting and review while maintaining client confidentiality and adhering to ethical standards.
The lawyer in question is Steven A. Schwartz, a seasoned attorney with over 30 years of experience – not a novice in the field. Schwartz specializes in workers' compensation claims and personal injury lawsuits, often handling cases related to construction accidents and malpractice.
He graduated from the State College of Albany and earned his law degree from New York Law School in 1992. Schwartz then practiced under his license for more than three decades before the ChatGPT incident occurred. Though he typically handles cases in state court, he remained involved in critical aspects of this case after it moved to federal court.
Ultimately, both Schwartz and his colleague, Peter LoDuca, were found responsible for violating professional conduct rules during this incident.
Schwartz admitted to using ChatGPT for legal research. According to court transcripts, he used the AI chatbot to supplement his research and to locate case law supporting his client's position, and he even asked the chatbot to confirm that the cases it produced were real.
At first glance, Schwartz's intention to streamline his legal research appears commendable. However, he failed to consider the ethical implications of using AI or the professional competence requirements that govern its use.
AI tools can enhance various aspects of legal practice by automating time-consuming tasks. Still, they aren't a replacement for lawyers because of their current limitations, such as lack of accuracy and potential bias in results.
Currently, legal AI tools focus on tasks such as legal research, document review, contract drafting and analysis, and e-discovery.
While Schwartz submitted a legal brief that included relevant information and arguments, it also included six fake cases generated by ChatGPT. Known as "hallucinations," these were fabricated, inaccurate outputs that the AI tool presented as fact. This misstep violated professional standards, and the court clarified that the violation lay not in using AI itself but in the lawyers' failure to verify the citations and their decision to stand by the fake opinions even after the court questioned their existence.
Regardless of the evolving regulatory frameworks surrounding AI in legal practice, lawyers must always adhere to the ABA's existing Model Rules of Professional Conduct, which require diligence, accuracy, and reasonable effort.
To avoid similar pitfalls in your own practice, learn how to integrate AI ethically by focusing on the considerations discussed below.
The introduction of specialized AI tools for legal professionals has raised several legal and ethical concerns, including client confidentiality, the accuracy of AI-generated output, and potential bias in results.
In legal practice, using third-party tools or external software can lead to risks such as breaching client confidentiality. Lawyers are ethically obligated to protect client data from unauthorized access, and this responsibility extends to their use of AI tools.
When you enter queries into AI systems, including chatbots, providers can typically use your input for further AI training, meaning conversations may not be entirely private. To safely incorporate AI into your practice while protecting client information, consider the following precautions: anonymize or redact client identifiers before entering anything into an AI system, review each provider's data-use and retention policies, and opt out of training-data collection where the provider allows it. The first of these steps can even be partially automated, as sketched below.
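As an illustration, here is a minimal Python sketch of that redaction step, assuming a simple regex-based approach. The CLIENT_NAMES list and the patterns are hypothetical placeholders for this example; real matters would call for far more robust entity detection and human review.

```python
import re

# Hypothetical list of client identifiers to scrub; in practice this
# would come from your matter-management system, not a hard-coded list.
CLIENT_NAMES = ["Jane Doe", "Acme Holdings LLC"]

# Simple patterns for common identifiers (illustrative, not exhaustive).
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace known client names and common identifiers with placeholders."""
    for name in CLIENT_NAMES:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

prompt = "Draft a demand letter for Jane Doe (jane.doe@example.com, 555-867-5309)."
print(redact(prompt))
# Output: Draft a demand letter for [CLIENT] ([EMAIL], [PHONE]).
```

A hit on any of these patterns is also a useful warning sign that the prompt may contain material that shouldn't leave the office at all.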
Schwartz and LoDuca’s disciplinary hearing highlights the serious consequences of misusing AI. It’s important to note that the disciplinary action stemmed not from Schwartz’s use of ChatGPT but from his incompetence, which led to a legal brief filled with fabricated citations.
As a result of submitting a court filing with false citations, Schwartz and LoDuca faced a $5,000 financial penalty for misleading the court. In a 34-page opinion, the judge stated that he would not have imposed disciplinary measures had the lawyers been forthright about their actions, and he required them to send letters to each judge falsely identified as the author of one of the fabricated opinions.
The repercussions of misusing AI in legal work can vary based on the severity of the misconduct. Potential legal consequences include court sanctions and monetary fines, malpractice claims, and professional discipline ranging from reprimand to suspension.
In severe cases, such as copyright infringement involving AI or breaches of client confidentiality, lawyers risk permanent revocation of their license.
Using ChatGPT for case research carries specific risks that could lead to case dismissal or other legal repercussions: hallucinated or misquoted authorities, reliance on outdated information from the model's training data, and inadvertent disclosure of confidential case details.
The damage to Schwartz's reputation is significant. The judge found that he acted in bad faith by knowingly submitting misleading statements, which justified the disciplinary action taken against him.
Notably, Schwartz and LoDuca initially lied to the court, calling their credibility into question. They got into further trouble when the court ordered them to submit copies of the ChatGPT-generated cases: the nonsensical nature of the submitted opinions made it clear that no human lawyer had reviewed them.
Schwartz admitted that he did not understand the technology or take the time to verify its accuracy, raising questions about his integrity, credibility, and technological competence. The long-term impact on his career remains uncertain.
In Texas, the legal industry's response has been particularly dramatic. One federal judge barred purely AI-generated briefs, ruling that lawyers may file AI-assisted documents only if they certify that a human reviewed and verified the content.
Media coverage of the incident suggests that Schwartz and his colleague have taken the fall for a profession-wide problem. Newspapers reported the facts while discussing the legal standards for AI use in law, emphasizing the need for accuracy.
Overall, this incident has not deterred people from relying on AI. Instead, it has sparked concerns and increased public debate about AI's impact on the legal profession and the need for a solid legal framework for its proper use.
Follow these guidelines to ethically integrate AI tools into your legal practice: verify every citation and quotation against a primary source before filing, keep confidential client data out of public AI tools, disclose your use of AI where court rules require it, and stay current on your jurisdiction's rules for AI-assisted filings. By taking these precautions, you can avoid accuracy issues in your documents and enjoy the benefits of AI without facing legal or financial consequences. The first precaution, citation checking, can even be partially automated, as the sketch below shows.
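As a rough illustration, the following Python sketch checks each citation from a draft against CourtListener's free case-law search. The endpoint, parameters, and response fields reflect the v3 REST API as publicly documented, but treat them as assumptions and confirm against the current documentation before relying on them.

```python
import requests

# CourtListener's public search endpoint (v3 REST API); an assumption to
# verify against current docs before relying on it in practice.
SEARCH_URL = "https://www.courtlistener.com/api/rest/v3/search/"

def citation_exists(case_name: str) -> bool:
    """Return True if the search finds at least one real opinion matching the name."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": f'"{case_name}"', "type": "o"},  # type "o" = opinions
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# The first case is real; the second was one of the six fabricated
# citations in the Avianca brief.
for cite in ["Zicherman v. Korean Air Lines Co.",
             "Varghese v. China Southern Airlines Co."]:
    status = "found" if citation_exists(cite) else "NOT FOUND, verify manually"
    print(f"{cite}: {status}")
```

Even with a check like this, a positive hit only proves that some opinion exists, not that it says what the brief claims, so the verification burden stays with the lawyer.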
Many lawyers rely on AI daily, but only a few make headlines for using it incompetently. Instead of joining them in the media, learn from their mistakes. The key lessons: treat AI output as a first draft rather than a final authority, verify every citation before filing, protect client confidentiality, and be candid with the court about how your documents were prepared.
Lawyers shouldn't stop using ChatGPT or other AI tools; rather, they should understand the pros and cons of incorporating AI into their legal practice. Benefits include increased productivity, reduced errors, and improved compliance. Many lawyers worry about potential accuracy issues, data confidentiality risks, and the erosion of essential legal skills, but there are many ways to address these concerns.
ChatGPT cannot represent clients in court. Only licensed lawyers, with their expertise and moral judgment, are qualified to represent clients.
Using ChatGPT can, however, affect the lawyer-client relationship. AI can streamline workflows in law offices, giving lawyers more time to focus on and strengthen client relationships; it can also lead to confidentiality breaches, which may harm those relationships and expose lawyers to legal action. Being proactive and ensuring that AI is used ethically and responsibly helps lawyers avoid these problems.