Effective control over AI systems cannot be achieved without shared norms, shared responsibilities and shared enforcement mechanisms.
The recent move by a number of countries, including Indonesia, to limit access to Grok, an AI-driven feature on the X platform, should not be viewed merely as a matter of censorship. It signals a deeper issue: that our legal frameworks are no longer able to keep pace with technology that recognizes neither borders nor jurisdictions.
Digital violence is no longer a hypothetical issue. Cyberbullying, hate speech and algorithm-driven abuse are realities that children, women, minorities and other vulnerable groups face daily. Generative AI has exacerbated this problem by lowering the production costs of harmful content. Even if digital platforms view these issues as side effects, they remain very real for the victims.
International human rights law has consistently challenged the notion that rights disappear online. In 2012, the United Nations Human Rights Council reiterated that the rights people possess offline must also be guaranteed online.
While the right to freedom of expression is fundamental, it is not unlimited; it cannot be invoked as a pretext for behaviour that diminishes human dignity or public morality. Nevertheless, this clarity of principle has not ensured accountability online.
The Grok scandal is a case in point. Grok is not the first AI to push the limits of regulation, and it certainly will not be the last. As more generative AI tools emerge, they will continue to test the boundaries of legality, often at a pace that outstrips regulation. While shutting down access to a platform may placate public outrage, it does little to solve the underlying problem.
The root cause is the lack of a coherent accountability framework for algorithmic systems that operate across borders. The current legal paradigm remains stubbornly territorial. National laws on electronic information, data protection and platform regulation are still based on geographical premises: where the company is incorporated, where servers are located and where users reside. These premises no longer apply in a system where an AI model may be developed in one country, deployed from another and cause harm in many others.
Indonesia’s regulatory framework reflects this tension. Existing rules, such as the obligations placed on registered digital service providers, are significant on paper but weak in practice. Liability regimes, such as strict liability, often depend on an administrative presence within territorial boundaries. When that presence is absent, liability becomes diffuse, and victims are left without remedy.