June 9 2023
Lawyers Blame ChatGPT For Tricking Them Into Citing Bogus Case Law
Litigation Partners Tyler Maulsby and Ronald C. Minkoff, and Litigation Associate Ashley Alger are mentioned in articles published by ABC News, Bloomberg News, Courthouse News Service, Law360, Los Angeles Times, New York Daily News, New York Law Journal, and The New York Post. Tyler is quoted in the article, "Lawyer Who Cited Bogus Legal Opinions From ChatGPT Pleads AI Ignorance," published by Courthouse News Service. The article discusses attorney Steven Schwartz's court filings which included fake case citations generated by ChatGPT. Arguing that Mr. Schwartz had not acted in bad faith, Tyler is quoted saying, “There has to be actual knowledge that Mr. Schwartz knew he was providing bad cases ... or that ChatGPT would be providing bad cases."
Read the full Courthouse News Service article here.
Ron is quoted in the article, "‘I Failed Miserably’: Lawyer Who Used ChatGPT in Brief Explains Fake Cases to Judge," published by Bloomberg News. The article discusses Steven Schwartz's embarrassment over the ChatGPT invented cases he cited in a case brief. Ron is quoted saying, "the case is 'schadenfreude for any lawyer,' because lawyers have historically had difficulties with new technology.” Ron adds, "the public embarrassment they've been exposed to is deterrent enough."
Read the full Bloomberg News article here. (Behind Paywall)
Ron is quoted in the article, "‘Humiliated’ NY lawyer who used ChatGPT for ‘bogus’ court doc profusely apologizes," published by The New York Post. The article discusses Schwartz's recent hearing, in which he profusely apologized to the judge over the mishap. Ron is quoted saying, "There was no intention[al] misconduct here. This was the result of ignorance and carelessness. It was not intentional and certainly not in bad faith.”
Read the full New York Post article here.
Others Quoted
An Influencer Gained Followers as She Documented Her Weight Loss. Then She Revealed She Was on a GLP-1
Hannah E. Taylor is quoted in The Wall Street Journal about social media influencer Janelle Rohner, who shared her weight-loss progression with diet and lifestyle tips and sold a paid course on nutrition. When Ms. Rohner posted that she was taking a medication used for weight reduction and diabetes, her critics questioned the legality of her advertising and e-commerce. The article stated, “Hannah Taylor, deputy managing partner and a partner in the advertising, marketing and public relations group at law firm Frankfurt Kurnit Klein & Selz, said proving an influencer acted fraudulently is a high bar because many jurisdictions require showing that the defendant had an intent to deceive. False advertising is typically easier to prove. Taylor said if someone had purchased the course believing that it led to Rohner’s weight loss, when in fact the medicine was the cause, that could be a material omission that could subject the influencer to false advertising liability.” View article.
May 30 2025
Mubi’s $24M Bet Just Made Agents Bullish Again. Here’s Why
Hayden Goldblatt is quoted in The Ankler article on Mubi’s purchase of Lynne Ramsay's film, “Die, My Love,” and what it meant for the Cannes market. He’s interviewed on “the real lessons from Cannes.” View article. (Behind paywall)
May 27 2025
A Federal Judge Ordered OpenAI to Stop Deleting Data
Daniel M. Goldberg is quoted in an Adweek article, which reported that a federal judge has ordered OpenAI to stop deleting output data from ChatGPT. The order is part of The New York Times' lawsuit alleging that OpenAI engaged in copyright infringement “by using ‘millions’ of articles published by the newspaper to train its AI model, which now directly competes with the Times’ content as a result.” The judge’s order seeks to preserve evidence in the Times’ case. Mr. Goldberg addressed multiple implications of the order, which requires OpenAI to hold more data than it normally would. "That could make OpenAI more susceptible to security breaches, or shake the trust of consumers who expected their chatbot records to be deleted. There are also potential implications regarding energy use, storage and environmental impact that the judge may not have considered when making the order, Goldberg said." He also noted the order would heighten people's concerns about what it means to work with large technology providers.
May 21 2025