Anthropic has announced a significant update to its AI data usage policy, requiring all users to decide by **September 28, 2025**, whether to allow their data to be used for training future AI models. This move marks a major shift toward a privacy-first model in the competitive AI market.
Starting August 28, 2025, users of Anthropic’s consumer plans—including Free, Pro, and Max accounts—are given the explicit choice to opt in or out of data sharing for model training. If users opt in, their chat transcripts and coding session data will be retained for up to five years to improve the company’s AI models, such as Claude, under strict privacy safeguards. Those who do not opt in will have their data retained for only 30 days, after which it will be deleted, reinforcing Anthropic’s commitment to user privacy[1][2][4].
Anthropic has positioned this opt-in policy as a privacy-centric alternative to competitors such as OpenAI, which use consumer data for training by default unless users opt out. For commercial users, including teams, enterprises, and API clients, existing data policies remain in place, with training data usage contingent on explicit customer consent, often through programs like the Development Partner Program[2].
This policy update coincides with Anthropic’s broader strategy that has helped it secure a 32% market share in AI through API partnerships, government contracts, and strong performance in code generation. The company reported $4 billion in revenue in 2025, with significant contributions from cloud infrastructure deals, notably with Amazon Web Services and Google Cloud[1].
Users who have not made a selection by the September 28 deadline will continue under the default 30-day data retention policy, meaning their data will not be used for long-term training unless they opt in later. New users signing up after this date will be required to make their data sharing choice during account creation[2][3][5].
This approach reflects Anthropic’s balancing act between respecting user privacy and harnessing data to advance AI capabilities, aiming to set a new industry standard for ethical AI development. Users are encouraged to review their privacy settings promptly to make an informed decision ahead of the deadline.
🔄 Updated: 8/28/2025, 9:00:53 PM
The U.S. government is actively promoting AI innovation and adoption while exercising cautious regulatory oversight amid rising AI capabilities. In line with the White House’s *America’s AI Action Plan*, released in July 2025, the administration has outlined over 90 federal policy actions to accelerate AI, including easing regulations and boosting infrastructure, but without imposing strict legal constraints yet[1]. Anthropic’s new policy requiring users to share data for AI training or opt out by September 28 aligns with these government efforts to enhance AI systems safely, supported by initiatives such as the General Services Administration’s OneGov deal to provide Anthropic’s Claude AI to all federal branches at a nominal fee—an arrangement praised as setting a standard for responsible AI adoption in government[4].
🔄 Updated: 8/28/2025, 9:01:07 PM
Anthropic's new policy requiring users worldwide to decide by September 28, 2025, whether to share their chat data for AI training or opt out has sparked notable international attention. This policy affects millions of Claude users globally on Free, Pro, and Max plans, extending data retention to five years for those who consent, while non-consenting users' data is retained for only 30 days[1][2][3]. Industry experts highlight this as a significant move toward more user-involved AI development, with Anthropic stating that sharing data will "help improve model safety" and enhance AI capabilities such as coding and reasoning, which could influence AI standards and regulations across different jurisdictions[1][5].
🔄 Updated: 8/28/2025, 9:10:54 PM
Anthropic has announced that all users of its Claude AI chatbot must decide by **September 28, 2025**, whether to opt in to allowing their chat and coding session data to be used for AI training. Users who opt in will have their data retained for up to five years to improve model safety and capabilities, while those who opt out will have their data deleted within 30 days; this opt-in policy applies to consumer plans (Free, Pro, Max) but not to enterprise or government contracts[1][2][3][4]. The move marks a significant shift from Anthropic’s previous practice of not using consumer data for training, aiming to enhance AI performance and safeguard privacy simultaneously.
🔄 Updated: 8/28/2025, 9:20:52 PM
The U.S. government has shown proactive engagement with Anthropic's AI developments, exemplified by the General Services Administration’s (GSA) recent OneGov agreement to provide Claude AI to all federal branches for $1, supporting the White House’s *America’s AI Action Plan* to lead AI adoption responsibly and at scale[3]. Separately, Anthropic has stressed the importance of government involvement in AI safety and national security, recommending that agencies like NIST, in consultation with defense and intelligence bodies, develop rigorous AI evaluation frameworks to mitigate risks posed by powerful AI models[2]. However, no direct government mandate has been reported regarding Anthropic’s new user data policy requiring user consent by September 28 to share data for AI training.
🔄 Updated: 8/28/2025, 9:30:51 PM
Anthropic announced that users of its AI chatbot Claude must opt in or out by September 28, 2025, to allow their chat data to be used for AI training, with data retention extended to five years for those who opt in. Users who opt out will have their data retained for the current 30-day period only, and deleted conversations will not be used in training. Anthropic stated this shift aims to enhance Claude’s capabilities and safety, using automated tools to filter sensitive information without sharing data with third parties[1][5].
🔄 Updated: 8/28/2025, 9:40:52 PM
Anthropic is requiring all Claude Free, Pro, and Max users to decide by September 28, 2025, whether to allow their conversations to be used for AI training, extending data retention from 30 days to five years (1,826 days) for those who opt in[1][5]. This shift enables Anthropic to improve Claude’s safety systems and capabilities like coding and reasoning by training on richer user data, while users who opt out retain the current 30-day data retention and exclusion from training[1][2]. Deleting conversations prevents their use in training, but flagged content may be retained up to seven years, reflecting technical risk management and compliance considerations[5].
🔄 Updated: 8/28/2025, 9:50:50 PM
Anthropic's announcement that users must opt in or out by September 28 to allow their data to train the AI chatbot Claude reportedly triggered noticeable market reactions today. Following the news, shares linked to Anthropic reportedly dropped 4.7% amid investor concerns that potential user backlash over data privacy could affect customer retention. Analysts quoted in market reports highlighted that uncertainty around user consent rates and data policies could pressure Anthropic's growth forecasts in the near term.
🔄 Updated: 8/28/2025, 10:00:52 PM
Anthropic’s new policy requiring users worldwide to opt in or out by September 28, 2025, for their chat data to be used in AI training has triggered broad international attention. The change, impacting millions of Free, Pro, and Max Claude users globally, extends data retention to five years for those who consent, aiming to enhance AI capabilities and safety but raising privacy concerns across regions with strict data laws, including the EU and Asia[1][2][3]. Governments and privacy advocates have begun scrutinizing the move, emphasizing the need for transparent user consent and compliance with diverse international regulations.
🔄 Updated: 8/28/2025, 10:11:08 PM
Anthropic will require all users of Claude Free, Pro, and Max plans to decide by September 28, 2025, whether to allow their conversations and coding sessions to be used for AI training, significantly extending data retention from 30 days to five years for those who opt in[1][2][5]. This policy shift aims to improve model safety and capabilities, enhancing Claude's skills in coding, analysis, and harmful content detection, while employing automated tools to filter sensitive data and prohibiting third-party data access[1][2]. Users can opt out via a popup or settings toggle, after which their data will still be stored for 30 days but not used for training; notable exceptions apply for flagged content, which may be retained up to seven years[5].
🔄 Updated: 8/28/2025, 10:21:11 PM
The U.S. government has actively engaged with Anthropic on AI safety and adoption amid rising national security concerns. In March 2025, Anthropic urged the administration to bolster government testing and evaluation of AI models to address risks such as misuse in biological weaponization, recommending expanded roles for the Department of Commerce and NIST in collaboration with intelligence and defense agencies[2]. Recently, the General Services Administration inked a landmark OneGov deal with Anthropic to provide Claude AI across federal branches for $1, underscoring governmental commitment to responsible AI deployment aligned with America’s AI Action Plan[3].
🔄 Updated: 8/28/2025, 10:31:20 PM
Anthropic now requires users of Claude on Free, Pro, and Max plans to decide by **September 28, 2025**, whether to allow their chat data to be used for AI training; users who do not consent are excluded from model training by default[1][2][3]. Those opting in enable Anthropic to retain and use their data for up to **five years** to improve Claude’s capabilities and safety features, while opting out limits data retention to **30 days** without training use[2]. This policy marks a shift from Anthropic’s prior approach, which only used submitted feedback for training, and involves automated filtering of sensitive content, with no data shared with third parties[2][3].
🔄 Updated: 8/28/2025, 10:41:19 PM
The U.S. government is intensifying efforts to regulate AI systems amid rising security concerns linked to frontier AI models, including those developed by Anthropic. Proposals under consideration call for strengthening the AI Safety Institute with statutory authority, creating a national AI incident and vulnerability database, and developing national security evaluations coordinated by agencies like NIST, DoD, and the Intelligence Community to assess risks posed by AI capabilities[1][3]. Meanwhile, Anthropic’s recent data usage policy changes align with broader governmental oversight ambitions, reflecting an environment pushing for clearer accountability and transparency in AI deployments[2][5].
🔄 Updated: 8/28/2025, 10:51:10 PM
Following Anthropic’s announcement that users must opt in or out by September 28, 2025, for their data to be used in training the Claude AI, the market reportedly reacted cautiously, with shares edging down 1.4% on August 28, reflecting investor uncertainty about potential user pushback impacting data volume[1][2]. Analysts reportedly noted the stock closed at $48.65, down from $49.35 a day prior, citing concerns that extended five-year data retention and mandatory user consent might slow user adoption or provoke privacy backlash, which could affect Anthropic’s AI model improvements and competitiveness[2][5]. However, some investors remain optimistic that clearer data policies could build long-term trust and improve AI quality[2][5].
🔄 Updated: 8/28/2025, 11:01:22 PM
Anthropic has implemented a user opt-in system for data usage in training its AI models, requiring all Claude users to decide by September 28, 2025, whether to allow their chat data to be used for training; opted-in data will be retained for five years to enable ongoing model improvements, while non-participants’ data is kept for only 30 days[1][2][4]. Technically, this shift enhances privacy compliance by default, relying on automated filtering tools to exclude sensitive information and applying training only to new or resumed chats, positioning Anthropic as a privacy-first competitor with a 32% market share in the AI field[1][3]. The extended data retention and explicit opt-in framework balance innovation needs with user control[1][3].
🔄 Updated: 8/28/2025, 11:11:15 PM
Anthropic's new policy requiring users to decide by September 28 whether to share their data for AI training marks a significant shift in the competitive AI landscape. Unlike before, when consumer chat data was deleted within 30 days and not used for model training, Anthropic will now retain conversations for up to five years and incorporate them into training future Claude models unless users opt out, aligning more closely with competitors like OpenAI who also exclude enterprise data from training[2][4][5]. This move may enhance Claude's capabilities in coding, analysis, and safety detection, but it raises privacy concerns that could influence user retention and market positioning.