OpenAI is in a “tough spot” as it scrambles to avert a flood of copyright lawsuits from media outlets that could cripple the high-flying startup — or even shutter it for good, experts told The Post.
OpenAI’s ChatGPT and other AI chatbots have allegedly been using copyrighted news content to compete with media companies for eyeballs without giving publishers proper credit or compensation — an unauthorized practice that experts say could devastate the traditional media business if it goes unchecked.
In an effort to keep the peace, OpenAI — whose CEO Sam Altman recently returned to the helm following a short-lived coup by the company’s board — has reportedly engaged in talks with several prominent media firms, including CNN and Fox Corp., on deals to pay for access to content that can be used to train its popular chatbot.
OpenAI already announced content deals with the Associated Press and Axel Springer.
But the firm’s disruptive AI tools — namely its popular chatbot — have led to lawsuits such as one recently filed by the New York Times, which experts say makes a formidable case that OpenAI must either stop stealing the Gray Lady’s content or cough up significant payouts.
OpenAI’s response so far has yielded mixed results. The firm’s piecemeal approach to negotiations runs the risk of exposing it to trouble as US lawmakers and federal courts alike examine the legality of AI training, according to experts.
“They’re in a tough spot,” an industry source who requested anonymity to discuss the situation told The Post. “I think they’re now seeing what happens: if you negotiate with entities individually, then you’re beholden to each one acting differently. Whatever comes from that, it’s not as predictable.”
The New York Times said it opted to sue OpenAI only after talks on a potentially “amicable solution,” such as a licensing deal, broke down months earlier. Separately, the Washington Post hasn’t been in negotiations at all in recent months, a company spokesperson confirmed to The Post.
The Times lawsuit, filed in Manhattan district court last month, seeks to hold OpenAI and chief backer Microsoft responsible for “billions of dollars” in damages. The suit is likely years away from a court date, leaving media firms at risk of having their work stolen in the meantime without financial or legal repercussions.
The lawsuit included many instances in which the GPT-powered chatbots regurgitated verbatim or near-verbatim copies of the Times’ articles in response to user prompts – including a notoriously scathing 2012 review of celebrity chef Guy Fieri’s American Kitchen & Bar restaurant and a Pulitzer Prize-winning article called “Snow Fall: The Avalanche at Tunnel Creek.”
OpenAI described the Times’ lawsuit as “without merit” in a lengthy Jan. 8 blog post and alleged the newspaper “intentionally manipulated prompts, often including lengthy excerpts of articles, in order to get our model to regurgitate.”
The blog post cited “long-standing and widely accepted precedents” but lacked any specific examples of “fair use” court cases that would support its stance, making OpenAI seem “a bit scared,” the industry source said.
“If you’re on the weaker side, that’s when the company should be silent and wait for court and cross your fingers and hope that judges are scared to break the shiny toy,” the source added.
OpenAI dismissed the “regurgitation” of verbatim passages from articles as “a rare bug that we are working to drive to zero.” The firm said the Times refused to share examples of the problem before filing suit — issues OpenAI claims it would have promptly fixed.
The Times’ lawsuit is “the most serious claim filed to date” against an AI firm because the newspaper “brought receipts” showing specific examples of near-perfect copying, said James Grimmelmann, professor of digital and information law at Cornell Law School.
“A lot of other lawsuits have relied on much thinner showings of copying, like showing you can get it to generate a summary of a book or one sentence at a time,” Grimmelmann told The Post. “This really shows that ChatGPT has memorized large numbers of Times articles.”
Given the massive financial and legal stakes tied to the lawsuits, it would be surprising if OpenAI allowed them to go to trial, according to Kristelia Garcia, a copyright law expert and professor at Georgetown University Law Center.
The possibility of a court ruling that upends its entire business model adds pressure on OpenAI to accept a settlement, or even a retroactive licensing agreement, to resolve the Times’ claims.
“Statutory damages are enormous,” Garcia added. “It would effectively not only stop their business models as they know it but probably close the companies down.”
The Post has reached out to OpenAI for comment.
Elsewhere, an unnamed media company is reportedly “considering taking legal action” against OpenAI similar to the Times’ lawsuit, Bloomberg reported. Last year, billionaire investor Barry Diller suggested that publishers should sue AI firms for unauthorized use of their content.
The debate has also spilled over to Capitol Hill – with Condé Nast CEO Roger Lynch telling a Senate panel that AI tools have been “built with stolen goods” and calling for Congress to enact regulation.
OpenAI also has reportedly irked some media executives with paltry offers. Last week, The Information reported that OpenAI has offered sums of $5 million or less to outlets in exchange for a license to use their articles.
Despite ongoing talks with other media outlets, OpenAI is clearly bracing for an onslaught of legal action related to its business practices.
In an eyebrow-raising admission in a filing to the UK’s House of Lords last week, OpenAI stated it would be “impossible to train today’s leading AI models without using copyrighted materials.”
Meanwhile, the Times and any other outlets that opt to sue could consider a settlement, because a deal for regular licensing payments would provide them with a “long-term sustainable model” to profit from AI’s use of their work, Grimmelmann said.