
Jerome Dewald sat with his legs crossed and his hands folded in his lap before a panel of New York State appellate judges, ready to argue for the reversal of a lower court's decision in his dispute with a former employer.
The court had allowed Mr. Dewald, who is not a lawyer and was representing himself, to accompany his argument with a prerecorded video presentation.
When the video began to play, it showed a man seemingly younger than Mr. Dewald's 74 years, wearing a blue collared shirt and a beige sweater and standing in front of what appeared to be a blurred virtual background.
A few seconds into the video, one of the judges, confused by the image on the screen, asked Mr. Dewald if the man was his lawyer.
“I generated that,” Mr. Dewald replied. “That is not a real person.”
The judge, Justice Sallie Manzanet-Daniels of the Appellate Division's First Judicial Department, paused for a moment. It was clear she was displeased with his answer.
“It would have been nice to know that when you made your application,” she snapped at him.
“I don't appreciate being misled,” she added, before calling for the video to be shut off.
Mr. Dewald had failed to disclose that he created the digital avatar using artificial intelligence software, the latest example of AI creeping into the American legal system in potentially troubling ways.
The hearing at which Mr. Dewald made his presentation, on March 26, was recorded by the court system's cameras and was reported earlier by The Associated Press.
On Friday, Mr. Dewald, the plaintiff in the case, said he had been mortified by the hearing. He said he sent the judges a letter of apology shortly afterward, expressing his deep regret and acknowledging that his actions had “unintentionally misled” the court.
He said he had turned to the software after stumbling over his words in previous legal proceedings. Using AI for the presentation, he thought, might ease the pressure he felt in the courtroom.
He said he had planned to create a digital version of himself, but encountered “technical difficulties,” which prompted him to create a fake person for the recording instead.
“My intention was never to deceive, but rather to present my arguments in the most effective manner possible,” he said in his letter to the judges. “However, I recognize that proper disclosure and transparency must always take precedence.”
Mr. Dewald, who described himself as an entrepreneur, was appealing an earlier ruling in a contract dispute with a former employer. He eventually presented an oral argument at the hearing, stammering and taking frequent pauses to regroup and to read prepared remarks from his cellphone.
Embarrassed as he might be, Mr. Dewald could take some comfort in the fact that real lawyers have gotten into trouble for using AI in court.
In 2023, a lawyer in New York faced severe consequences after he used ChatGPT to create a legal brief riddled with fake judicial opinions and legal citations. The case exposed the flaws of relying on artificial intelligence and reverberated throughout the legal trade.
The same year, Michael Cohen, a former lawyer and fixer for President Trump, provided his own lawyer with fake legal citations he had gotten from Google Bard, an artificial intelligence program. Mr. Cohen ultimately pleaded for mercy from the federal judge presiding over his case, emphasizing that he had not known the generative text service could produce false information.
Some experts say that artificial intelligence and large language models can be helpful for people who have legal matters to deal with but cannot afford lawyers. Still, the technology's risks remain.
“They can still hallucinate – produce very convincing-looking information” that is actually “false or nonsensical,” said Daniel Shin, the assistant director of research at the Center for Legal and Court Technology at William & Mary Law School. “That risk must be addressed.”