<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AI safeguards &#8211; The Milli Chronicle</title>
	<atom:link href="https://www.millichronicle.com/tag/ai-safeguards/feed" rel="self" type="application/rss+xml" />
	<link>https://www.millichronicle.com</link>
	<description>Factual Version of a Story</description>
	<lastBuildDate>Wed, 04 Mar 2026 17:27:41 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://media.millichronicle.com/2018/11/12122950/logo-m-01-150x150.png</url>
	<title>AI safeguards &#8211; The Milli Chronicle</title>
	<link>https://www.millichronicle.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Anthropic Investors Engage Officials to Prevent Pentagon Ban on AI Systems</title>
		<link>https://www.millichronicle.com/2026/03/62916.html</link>
		
		<dc:creator><![CDATA[Millichronicle]]></dc:creator>
		<pubDate>Wed, 04 Mar 2026 17:27:40 +0000</pubDate>
				<category><![CDATA[Featured]]></category>
		<category><![CDATA[Top Stories]]></category>
		<category><![CDATA[AI ethics safeguards]]></category>
		<category><![CDATA[AI military use]]></category>
		<category><![CDATA[AI regulation United States]]></category>
		<category><![CDATA[AI safeguards]]></category>
		<category><![CDATA[AI supply chain risk designation]]></category>
		<category><![CDATA[Amazon Anthropic partnership]]></category>
		<category><![CDATA[Andy Jassy]]></category>
		<category><![CDATA[Anthropic]]></category>
		<category><![CDATA[Anthropic Claude chatbot]]></category>
		<category><![CDATA[artificial intelligence industry news]]></category>
		<category><![CDATA[autonomous weapons policy]]></category>
		<category><![CDATA[Claude AI]]></category>
		<category><![CDATA[Dario Amodei]]></category>
		<category><![CDATA[defense technology policy]]></category>
		<category><![CDATA[Department of Defense AI policy]]></category>
		<category><![CDATA[enterprise AI market]]></category>
		<category><![CDATA[generative AI companies]]></category>
		<category><![CDATA[OpenAI Pentagon contract]]></category>
		<category><![CDATA[Pentagon AI dispute]]></category>
		<category><![CDATA[U.S. government AI regulation]]></category>
		<guid isPermaLink="false">https://millichronicle.com/?p=62916</guid>

					<description><![CDATA[Anthropic was the first major AI developer to handle classified information through a supply agreement routed through its cloud partner]]></description>
										<content:encoded><![CDATA[
<blockquote class="wp-block-quote">
<p>Anthropic was the first major AI developer to handle classified information through a supply agreement routed through its cloud partner Amazon.</p>
</blockquote>



<p>Several investors in artificial intelligence developer Anthropic are working to defuse a growing dispute between the company and the U.S. Department of Defense over limits on military uses of its technology, according to seven people familiar with the matter, amid concerns that an escalating conflict could damage the company’s business prospects.</p>



<p>Chief Executive Dario Amodei has discussed the issue in recent days with major investors and partners, including Amazon Chief Executive Andy Jassy, two of the people said. Venture capital firms Lightspeed and Iconiq have also contacted Anthropic executives about the situation, two sources added. Some investors have simultaneously reached out to contacts within the administration of U.S. President Donald Trump in an effort to reduce tensions between the company and the Pentagon.</p>



<p>The discussions are centered on preventing a potential government move to bar Pentagon contractors from using Anthropic’s artificial intelligence systems, the sources said. One person familiar with the situation said Anthropic and the Defense Department continue to hold discussions, though details of those talks were not clear.</p>



<p>The White House has publicly called on Anthropic to assist the government in phasing out its AI systems. Neither the Pentagon nor investors, including Amazon, responded to requests for comment.</p>



<p>The dispute follows months of disagreement between Anthropic and the Defense Department—renamed the Department of War by the Trump administration—over how the military may deploy the company’s technology in operational settings. The conflict has become a broader test of the degree of control AI developers can retain over the use of their systems once they are integrated into government and commercial applications.</p>



<p>Pentagon officials have urged AI companies to abandon internal usage restrictions and instead accept a contractual framework allowing any use that complies with U.S. law. Anthropic has refused to remove certain safeguards governing its flagship Claude AI models, maintaining prohibitions against the technology being used to operate autonomous weapons or to support large-scale domestic surveillance programs.</p>



<p>Anthropic was the first major AI developer to handle classified information through a supply agreement routed through its cloud partner Amazon. Last week, rival OpenAI said it had also reached a classified agreement with the Pentagon and added that Anthropic should not be treated as a security risk to the department.</p>



<p>During discussions with Anthropic leadership, investors have reaffirmed their support for the company while urging a negotiated solution with defense officials, the seven people familiar with the talks said. Some investors privately expressed frustration that Amodei’s approach had intensified tensions with the Pentagon rather than easing them.</p>



<p>One person briefed on the discussions described the situation as partly a diplomatic challenge. At the same time, investors acknowledge that Amodei faces internal constraints. Several people familiar with the matter said that if the company appeared to fully concede to administration demands, it could alienate employees and customers who have supported Anthropic partly because of its public stance on AI safety restrictions.</p>



<p>Amodei has not responded to requests for comment. In prior statements, he said the company could not “in good conscience accede” to government demands to remove its safeguards. According to one person who participated in a call with investors late Tuesday, Amodei said Anthropic would continue attempting to find a workable arrangement with the Department of War.</p>



<p>Investors are particularly focused on preventing Anthropic from being designated a “supply-chain risk” by the U.S. government. Such a designation could require federal contractors to discontinue use of the company’s technology, potentially affecting commercial customers that also conduct government work.</p>



<p>Defense Secretary Pete Hegseth has said that a supply-chain risk determination would compel all government contractors to stop using Anthropic’s systems across their operations. Anthropic has publicly challenged that interpretation, stating that Hegseth lacks the statutory authority to prohibit the use of its AI technology outside of direct defense contracts. The Pentagon has not responded to questions about that claim.</p>



<p>Anthropic said last week it would contest any supply-chain risk designation in court.</p>



<p>Even without a formal ban, some investors fear the confrontation could deter potential customers who prefer to avoid conflict with the administration, one person familiar with the matter said.</p>



<p>The dispute comes at a critical stage for the San Francisco-based startup. Anthropic has raised tens of billions of dollars from investors betting on rapid growth in enterprise adoption of its AI systems. The company has previously said enterprise customers account for roughly 80% of its revenue.</p>



<p>Demand for products including its Claude chatbot and the Claude Code programming assistant has expanded rapidly. On Monday, the Claude app ranked as the most downloaded free application in Apple’s App Store, surpassing OpenAI’s ChatGPT.</p>



<p>One person familiar with Anthropic’s finances said the company’s annualized revenue run rate has reached about $19 billion based on current sales, compared with roughly $14 billion only weeks earlier.</p>



<p>Investors say maintaining that growth trajectory is important for the company’s longer-term capital plans. Anthropic is currently allowing employees to sell shares to outside investors in secondary transactions, and the company has previously said no decision has been made regarding a potential initial public offering.</p>



<p>The investor push to calm tensions intensified after several U.S. government agencies began discontinuing Anthropic technology. Following an order issued by President Trump on Friday directing federal agencies to replace Anthropic systems within six months, the State Department switched to OpenAI’s products, according to people familiar with the change.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Indonesia Temporarily Restricts Grok Access as AI Safety Standards Take Center Stage</title>
		<link>https://www.millichronicle.com/2026/01/61877.html</link>
		
		<dc:creator><![CDATA[NewsDesk MC]]></dc:creator>
		<pubDate>Sat, 10 Jan 2026 21:35:44 +0000</pubDate>
				<category><![CDATA[Featured]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[World]]></category>
		<category><![CDATA[AI compliance]]></category>
		<category><![CDATA[AI ethics]]></category>
		<category><![CDATA[AI safeguards]]></category>
		<category><![CDATA[AI safety standards]]></category>
		<category><![CDATA[artificial intelligence policy]]></category>
		<category><![CDATA[content moderation]]></category>
		<category><![CDATA[deepfake prevention]]></category>
		<category><![CDATA[digital governance]]></category>
		<category><![CDATA[digital rights]]></category>
		<category><![CDATA[digital security]]></category>
		<category><![CDATA[generative AI]]></category>
		<category><![CDATA[global AI oversight]]></category>
		<category><![CDATA[Grok chatbot]]></category>
		<category><![CDATA[Indonesia AI regulation]]></category>
		<category><![CDATA[innovation and regulation]]></category>
		<category><![CDATA[online content rules]]></category>
		<category><![CDATA[online safety]]></category>
		<category><![CDATA[platform accountability]]></category>
		<category><![CDATA[responsible AI]]></category>
		<category><![CDATA[technology regulation]]></category>
		<guid isPermaLink="false">https://millichronicle.com/?p=61877</guid>

					<description><![CDATA[Indonesia’s temporary block on Grok highlights growing global focus on responsible AI use, digital ethics, and stronger safeguards to protect]]></description>
										<content:encoded><![CDATA[
<blockquote class="wp-block-quote">
<p>Indonesia’s temporary block on Grok highlights growing global focus on responsible AI use, digital ethics, and stronger safeguards to protect users in the online space.</p>
</blockquote>



<p>Indonesia has temporarily blocked access to Grok, an artificial intelligence chatbot developed by xAI, as authorities review concerns related to the generation of sexualized images. The move reflects the government’s emphasis on digital responsibility and user protection in rapidly evolving AI ecosystems.</p>



<p>Officials said the restriction is a precautionary step aimed at preventing the spread of harmful or inappropriate content online. Regulators stressed that the decision is not a rejection of innovation but a call for stronger safeguards and accountability.</p>



<p>Indonesia’s action places it at the forefront of global efforts to regulate artificial intelligence responsibly. Governments across regions are increasingly examining how generative AI tools manage content and protect vulnerable users.</p>



<p>The Communications and Digital Ministry stated that non-consensual sexual deepfakes pose serious risks to human dignity and digital security. Authorities emphasized the importance of ensuring technology aligns with ethical standards and societal values.</p>



<p>xAI has already begun tightening controls on its image-generation features, announcing limits on image creation and editing while it works to strengthen safety mechanisms.</p>



<p>Industry observers view these steps as part of a broader learning phase for generative AI platforms. As tools scale globally, developers are under growing pressure to refine safeguards and content moderation systems.</p>



<p>Indonesia has also invited representatives from the platform’s parent company to engage in discussions. The dialogue is expected to focus on compliance, user safety, and long-term cooperation between regulators and technology firms.</p>



<p>The government’s approach highlights collaboration rather than confrontation. Officials have signaled openness to restoring access once sufficient protections are demonstrated and regulatory concerns are addressed.</p>



<p>Indonesia’s digital regulations are shaped by cultural, social, and legal considerations. The country maintains strict rules against online content deemed obscene, reflecting strong public expectations around online conduct.</p>



<p>Experts say the temporary block underscores the importance of trust in artificial intelligence. Public confidence depends on platforms showing they can prevent misuse while delivering innovation responsibly.</p>



<p>Global technology leaders are increasingly recognizing that regulation and innovation must advance together. Clear standards can help AI tools gain wider acceptance and long-term sustainability.</p>



<p>The situation also reflects a global shift toward proactive AI governance. Rather than reacting after harm occurs, regulators are seeking early intervention and preventative safeguards.</p>



<p>Developers see these moments as opportunities to improve systems and align with international norms. Enhanced transparency and accountability can strengthen partnerships with governments worldwide.</p>



<p>Indonesia’s decision has sparked wider conversations about digital ethics and platform responsibility. Policymakers and technologists alike are reassessing how AI tools interact with social values.</p>



<p>As AI adoption accelerates, countries are exploring balanced frameworks that encourage innovation while protecting users. Responsible deployment is increasingly viewed as a competitive advantage rather than a constraint.</p>



<p>The temporary restriction may ultimately contribute to stronger AI standards globally. Lessons learned from this process could shape future policies and platform design.</p>



<p>Overall, Indonesia’s action marks a constructive step toward safer digital spaces. With cooperation and improved safeguards, AI tools like Grok can continue to evolve in ways that benefit users and society.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
