The Court of Appeal has confirmed that the inclusion of Robert Kneschke’s photographs in a dataset for AI model training did not infringe copyright.
The German photographer had filed a lawsuit against LAION, a non-profit organisation, after discovering that it had included some of his stock images in an open-source dataset.
In December 2025, the Higher Regional Court of Hamburg confirmed the District Court’s earlier decision that creating such a database for AI training could be considered scientific research.
According to the court, LAION’s reproduction of Kneschke’s images was covered by the text and data mining (TDM) exception, confirming for the first time that the exception applies to the training of generative AI.
Under the exception, copyrighted works can be extracted and reproduced without the copyright holder’s permission or any compensation, provided the purpose is scientific research.
The Higher Regional Court regarded the opt-out from the exception on the stock photo agency’s website as invalid, arguing that it was not machine-readable.
According to Kneschke, the ruling is “a bitter loss for the cultural scene, as training generative AI without permission and remuneration creates strong competition for their works, meaning that with every new work published, cultural creators would contribute more to their own marginalisation.”
Remuneration rights and rethinking copyright law
Dr. Susana Navas Navarro, professor of private law at the Autonomous University of Barcelona, believes it to be a “strange interpretation” of the copyright act. “Making a database with data sets publicly available for the training of systems or models does not pursue any scientific or research purpose.”
Navarro believes we must rethink copyright law with the advent of generative AI.
“Maybe the solution could be to review the directive on copyright law in order to introduce a general compensation.”
Kneschke, for his part, argues that training should require the copyright holders’ permission and remuneration.
“A logical and fair solution would be to allow training on copyright-protected works only if copyright holders give their permission and receive remuneration. Since AI acts as an uncontrollable black box, extensive disclosure requirements would also be necessary to make compliance with these rules transparent.”
In July 2025, the EU released the General-Purpose AI Code of Practice, one of whose key points states that providers should document what data was used in training. However, the code is only a set of guidelines.
The legal framework is complicated, Navarro explains. Every EU member state must implement the 2019 Copyright Directive in its own national copyright law, which results in 27 different interpretations of the directive and its TDM exception.
She also points to inconsistencies. The EU’s AI Act “requires that everything related to the training, validation, and testing phases of the system are stored and documented, including information about data sets, while the 2019 Directive regulating the TDM exception states that once the system or model has been trained, the data must be deleted.”
For now, the ruling in the Kneschke vs LAION case appears to favour those developing AI systems, but the future of the current copyright model remains unclear.
The rise of generative AI will continue to unsettle concepts such as human authorship and originality, and with it the need for clear regulation and legislation will grow.

