
Lawyers presented fake legal case created by ChatGPT

by admin

A federal judge on Thursday imposed $5,000 fines on two lawyers and a law firm in an unprecedented instance in which ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim.

Judge P. Kevin Castel said the lawyers acted in bad faith, but he credited their apologies and the remedial steps they took in explaining why harsher sanctions were not necessary to deter them, or others, from letting artificial intelligence tools generate fake legal history for use in their arguments.

“Technological advances are commonplace, and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” Castel wrote. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

The judge said the lawyers and their firm, Levidow, Levidow &amp; Oberman, PC, “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question”.

In a statement, the law firm said it would abide by Castel’s order, but added: “We respectfully disagree with the finding that anyone at our firm acted in bad faith. We have already apologized to the court and to our client. We continue to believe that, in what even the court acknowledged was an unprecedented situation, we made a good-faith mistake in failing to believe that a piece of technology could be fabricating cases out of whole cloth.”


The firm indicated that it was considering whether to appeal.

Castel said the bad-faith finding resulted from the attorneys’ failure to respond adequately to the judge and to their legal adversaries after it was noted that six legal cases cited in support of their March 1 written arguments did not exist.

The judge cited “shifting and contradictory explanations” offered by attorney Steven A. Schwartz. He said attorney Peter LoDuca lied when he said he was on vacation and was dishonest about confirming the truth of statements submitted to Castel.

At a hearing this month, Schwartz said he used the AI-powered chatbot to help him find legal precedents to support a client’s case against Colombian airline Avianca for an injury he sustained on a 2019 flight.

Microsoft has invested approximately $1 billion in OpenAI, the company behind ChatGPT.

The chatbot, which generates essay-like responses to users’ prompts, suggested several cases involving aviation mishaps that Schwartz had been unable to find through the usual methods used at his law firm. Several of those cases were not real, misidentified judges, or involved airlines that did not exist.

In a separate written opinion, the judge threw out the underlying aviation claim, saying the matter was time-barred.

Attorneys for Schwartz and LoDuca did not immediately respond to a request for comment.
