The ongoing dispute between Anthropic and United States President Donald Trump's administration reveals something deeply troubling about the current state of artificial intelligence (AI) governance. Apparently, a private company is more concerned about ethical guardrails than the world's most powerful military.
Earlier this month, the US defense department designated Anthropic a “supply-chain risk.” The unusual move followed the company’s insistence on safeguards preventing its technology from being used for mass surveillance of Americans or in fully autonomous weapons. In response, the Pentagon placed Anthropic on a list typically reserved for foreign entities considered national-security threats. Anthropic has since filed a lawsuit challenging the designation.
Whatever one thinks of Anthropic’s motives, this episode underscores how misaligned governance frameworks have become. When the responsibility for insisting on basic ethical limits falls to private companies, the systems meant to protect the public interest from potentially dangerous technologies have clearly failed.
Encouragingly, February’s AI Impact Summit in India showed that it is not too late to change course. Around the world, startups are developing systems designed explicitly for safe and ethical deployment, and civil-society organizations are using AI to tackle pressing social challenges, including violence against women and girls. At the same time, the costs of AI applications have dropped by as much as 90 percent in recent years, while the growth of open-source ecosystems has made powerful tools accessible to smaller actors.
This is the AI revolution many of us have long hoped for, with technological progress guided by democratic values and respect for human rights. The same vision has informed my work on UNESCO’s Recommendation on the Ethics of AI — the first global framework of its kind — and on the OECD’s AI Principles.
India’s experience offers a useful model for countries seeking to harness AI in ways that serve the public interest. By investing heavily in digital public infrastructure — most notably the Aadhaar biometric identity system and the Unified Payments Interface — the country has shown how technology can be deployed at scale to meet citizens’ everyday needs.
But the Anthropic dispute highlights a growing tension between sound AI governance and governments’ desire to attract investment. The business models of the handful of US companies that currently dominate the AI frontier are shaped by intense competition, both among themselves and with their Chinese counterparts, and policymakers are reluctant to impose rules that might drive them away.