[ISAFIS Newsletter #8] OpenAI vs. The New York Times: Copyright Clash Turns to Debates on User Privacy
Written by: Pelangi Retta Gyani Sihombing, Staff of Research and Development
What started as a legal fight over journalism and artificial intelligence has now entered a deeper realm: user privacy. The New York Times's (henceforth NYT) copyright lawsuit against OpenAI has escalated into a battle over whether OpenAI must preserve all user chats, even deleted ones. The dispute now implicates not only intellectual property, but also the data privacy of generative AI's users.

Source: Reuters
How it all started
In December 2023, NYT filed a lawsuit against OpenAI and Microsoft, alleging that millions of its copyrighted articles were scraped and used without consent to train AI models such as ChatGPT and Copilot. The lawsuit accused the companies of threatening the integrity and economic foundation of journalism by allowing their AI to generate outputs that closely replicated NYT content.
“These bots compete with the content they are trained on,” said Ian B. Crosby, lead counsel for NYT, highlighting that AI-generated responses could replace traditional readership and undermine subscription-based journalism. This concern was echoed by the News/Media Alliance, which stated that the lawsuit was necessary to protect important copyright principles, and warned that unchecked AI scraping creates an imbalanced marketplace that could devastate local and independent newsrooms.
Despite OpenAI’s claim that such use falls under the U.S. fair use doctrine, NYT presented examples of ChatGPT reproducing its articles verbatim. Attempts at a licensing agreement reportedly broke down after NYT requested stronger protections and editorial oversight.
In early 2024, the case advanced when a U.S. District Court judge denied OpenAI’s motion to dismiss, ruling that NYT’s copyright claims had merit and should proceed to trial. Then came a new twist in May 2025. The judge ordered OpenAI to preserve all user chats, including those deleted by users, to ensure that evidence is not lost during litigation. NYT argued that user prompts and outputs could show direct copying of its articles. OpenAI responded by filing an appeal on June 6, 2025, calling the order excessive and a threat to user privacy.
“We have been thinking recently about the need for something like ‘AI privilege’; this really accelerates the need to have the conversation. In my opinion, talking to an AI should be like talking to a lawyer or a doctor. I hope society will figure this out soon,” said Sam Altman, CEO of OpenAI, criticizing the order as a violation of the confidentiality of users’ AI conversations.
The company warned that retaining all chat logs across its services would undermine its privacy policy and set a dangerous precedent. This warning is grounded in OpenAI’s own policy, which promises users that deleted chats will be permanently removed within 30 days. Forcing the company to store even deleted data directly contradicts that promise and could breach data protection laws such as the European Union’s GDPR (which guarantees the right to be forgotten) and California’s CCPA, which gives users control over the deletion of their personal data. If courts can compel AI platforms to keep all user data for litigation, this could deter people from using AI tools and jeopardize privacy rights not only in the U.S. but worldwide. OpenAI has filed a request to overturn the court’s decision, while still retaining a limited portion of internal data to fulfill applicable legal obligations.
The Global Implications of This Case
What began as a copyright case has now evolved into a test of how far the legal system can reach into private AI interactions. With global regulators watching closely, the outcome may determine not only how AI companies train their models, but also how they handle user data.
Globally, this reflects broader tensions over AI transparency and privacy, especially in Europe under the GDPR. France and Canada are also investigating ChatGPT’s data handling, while the EU’s AI Act introduces rules that could mirror or even exceed NYT’s demands.
Why This is Important in Indonesia
In Indonesia, where AI use is accelerating across fields including journalism, this case serves as a wake-up call. Indonesia’s existing copyright law (UU No. 28 Tahun 2014) may not yet anticipate AI-generated duplication or cross-border data conflicts. Moreover, as local developers begin training models on Indonesian data, the need for ethical and legal clarity has become more urgent.
While Indonesia has not yet faced a legal case as explicit as the NYT vs. OpenAI dispute, there have been early signs of concern. For example, local news portals and educational platforms have raised questions about their content being reused or summarized by AI tools without attribution. However, in the absence of specific regulation, these concerns remain unresolved.
Compounding this legal gap is UU ITE (Undang-Undang Informasi dan Transaksi Elektronik), the law governing electronic information and digital transactions. Although it regulates digital content distribution and data privacy, it does not recognize or define AI-generated works, AI authorship, or AI-related data ownership.
On a regional level, the urgency is similar. ASEAN currently has no treaties or binding frameworks on AI-related copyright or data ethics. This absence of binding mechanisms leaves each country to implement its own rules, producing inconsistent protections and making cross-border digital rights harder to enforce.
In this vacuum, Indonesia risks falling behind, not just in protecting its content creators and digital rights, but also in ensuring that its AI development is ethical and aligned with global best practices.
Conclusion
The NYT vs. OpenAI case highlights major threats in the AI era: the risk of copyrighted content being used without permission, the loss of user privacy through forced data retention, and the lack of clear laws to protect against both. Without strong rules, anyone’s work or conversation could be used, stored, or copied without consent. As AI becomes part of our daily lives, this case raises the question of who really owns the content users create or consume, and how safe shared data truly is.
References
European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation). https://eur-lex.europa.eu/eli/reg/2016/679/oj
Hall, K. (2023, December 28). NYT lawsuit says OpenAI bots stole billions of dollars’ worth of journalism. Business Insider. https://abovethelaw.com/2023/12/ai-update-white-collar-ai-ny-times-sues-openai-patchwork-regulations
Paul, K. (2025, June 6). OpenAI appeals New York Times’ suit demand asking not to delete any user chats. Reuters. https://www.reuters.com/business/media-telecom/openai-appeal-new-york-times-suit-demand-asking-not-delete-any-user-chats-2025-06-06/
Office of the Privacy Commissioner of Canada. (2023, April 4). OPC launches investigation into OpenAI’s ChatGPT. https://www.priv.gc.ca/en/opc-news/news-and-announcements/2023/an_230404/
Republik Indonesia. (2016). Undang-Undang Republik Indonesia Nomor 19 Tahun 2016 tentang Perubahan atas UU No. 11 Tahun 2008 tentang Informasi dan Transaksi Elektronik (UU ITE). https://peraturan.bpk.go.id/Home/Details/37582/uu-no-19-tahun-2016.
Republik Indonesia. (2014). Undang-Undang Republik Indonesia Nomor 28 Tahun 2014 tentang Hak Cipta. https://peraturan.bpk.go.id/Home/Details/38681/uu-no-28-tahun-2014.
Robertson, A. (2023, December 27). The New York Times sues OpenAI and Microsoft for using its work to train AI. The Verge. https://www.theverge.com/2023/12/27/24016212/new-york-times-openai-microsoft-lawsuit-copyright-infringement.
State of California. (2018). California Consumer Privacy Act of 2018 (CCPA). https://oag.ca.gov/privacy/ccpa.
Vincent, J. (2024, January 5). OpenAI faces international regulatory scrutiny over data use and privacy. The Guardian. https://techcrunch.com/2024/01/02/openai-dublin-data-controller/.