Anthropic to pay authors $1.5 billion to settle copyright infringement lawsuit : NPR

A lawsuit brought against the AI company Anthropic by a group of authors was settled on Friday.
Riccardo Milani / Hans Lucas / AFP via Getty Images
In one of the largest copyright settlements involving generative artificial intelligence, Anthropic, a leading company in the generative AI space, has agreed to pay $1.5 billion to settle a copyright lawsuit brought by a group of authors.
If the court approves the settlement, Anthropic will owe the authors roughly $3,000 for each of the estimated 500,000 books covered by the agreement.
The settlement, which senior U.S. District Judge William Alsup in San Francisco will consider approving next week, comes in a case that involved the first substantive ruling on how fair use applies to generative AI systems. It also suggests an inflection point in the ongoing legal battles between creative industries and AI companies accused of illegally using artistic works to train the large language models that underlie their widely used AI systems.
The fair use doctrine allows third parties to use copyrighted works without the copyright holder's consent in certain circumstances, such as to illustrate a point in a news article. AI companies seeking to justify the use of copyrighted work to train their generative AI models generally invoke fair use. But authors and other creative-industry plaintiffs have pushed back.

“This landmark settlement will be the largest publicly reported copyright recovery,” the settlement motion states, arguing that it “will provide meaningful compensation” to the authors and “set a precedent of AI companies paying for their use of pirated websites.”
“This settlement marks the beginning of a necessary evolution toward a legitimate market-based licensing system for training data,” said Cecilia Ziniti, a technology industry lawyer and former Ninth Circuit clerk who is not involved in this particular case but has followed it closely. “It's not the end of AI, but the beginning of a more mature, sustainable ecosystem where creators are compensated, much like how the music industry adapted to digital distribution.”
A case with split decisions
Authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson filed their copyright infringement complaint against Anthropic in 2024. The class action alleged that the company used the contents of millions of digitized copyrighted books to train the large language models behind its chatbot, Claude, including at least two works by each plaintiff. The company also bought print books and scanned them before ingesting them into its models. The company admitted to having done so, a fact the plaintiffs cite in their complaint. “Anthropic has admitted that it used The Pile to train Claude,” the complaint states. (The Pile is a large open-source dataset created for training large language models.)
“Rather than obtaining permission and paying a fair price for the creations it exploits, Anthropic pirated them,” the authors' complaint says.
In his June decision, Judge Alsup agreed with Anthropic's argument, ruling that the company's use of the plaintiffs' books to train its AI models was permissible.
“The training use was a fair use,” he wrote. “The use of the books at issue to train Claude and its precursors was exceedingly transformative.”
However, the judge ruled that Anthropic's use of millions of pirated books to build its models, books that websites such as Library Genesis (LibGen) and Pirate Library Mirror (PiLiMi) copied without obtaining the authors' consent or compensating them, was not. He ordered that portion of the case to go to trial. “We will have a trial on the pirated copies used to create Anthropic's central library and the resulting damages, actual or statutory (including for willfulness),” the judge wrote in the conclusion of his ruling. Last week, the parties announced they had reached a settlement.

U.S. copyright law provides that willful copyright infringement can carry statutory damages of up to $150,000 per infringed work. The judge's order says Anthropic pirated more than 7 million copies of books. So the damages resulting from a trial, had it gone forward, could have been enormous.
However, Ziniti said that regardless of the settlement, the judge's ruling effectively means that, at least in Northern California, AI companies now have the legal right to train their large language models on copyrighted works, as long as they obtain copies of those works legally.
In statements to NPR, both parties appeared satisfied with the outcome of the case.
“Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims,” said Anthropic deputy general counsel Aparna Sridhar. “We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems.”
“This landmark settlement is the first of its kind in the AI era,” said Justin Nelson, a lawyer on the team representing the authors. “It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners. This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong.”
The creative community responds
The settlement has also met with approval from the creative community.
“This historic settlement is a vital step in acknowledging that AI companies cannot simply steal authors' creative work to build their AI just because they need books to develop high-quality models,” said Authors Guild CEO Mary Rasenberger. “We expect the settlement to lead to more licensing that gives authors compensation and control over the use of their work by AI companies, as should be the case in a functioning free market society.”

“While the settlement is very meaningful and represents a clear victory for the publishers and authors in the class, it also proves what we have been saying all along: that AI companies can afford to compensate copyright owners for their works and still innovate and compete,” added Keith Kupferschmid, president and CEO of the Copyright Alliance.
Anthropic is well positioned to handle the considerable payout. On Tuesday, the company announced the completion of a new funding round worth $13 billion, bringing its total valuation to $183 billion.
Meanwhile, the literary world and other parts of the creative sector continue to battle AI companies. A multitude of AI-related copyright infringement lawsuits have been brought by prominent authors, including Ta-Nehisi Coates and the actress Sarah Silverman, in recent years. In June, U.S. District Judge Vince Chhabria granted Meta's motion for summary judgment in Coates and Silverman's case against the tech company, effectively ending that lawsuit. Other cases are ongoing.
And in the latest in a series of legal actions involving major entertainment companies, Warner Bros. Discovery on Friday filed a lawsuit in California federal court against the AI image generator Midjourney for copyright infringement. NPR has reached out to Midjourney for comment.