American firms could cause global harm before European regulators catch up
The problem with European regulators, a German businessman recently told me, is that they are too scared of downside risks. “In any innovative new business sector, they overregulate and stifle any upside potential.” In contrast, he argued, Americans care more about the upside potential and thus hold off on regulation until they know much more about the consequences. “Not surprisingly, the United States has much more of a presence in innovative industries.”
Artificial intelligence is a case in point. The European Union enacted the world’s first comprehensive AI regulation in August 2024, establishing safeguards against risks such as discrimination, disinformation, privacy violations and AI systems that could endanger human life or threaten social stability. The law also assigns AI systems different risk levels, with different treatments for each. While AI-driven social scoring systems are banned outright, higher-risk systems are heavily regulated and supervised, with a list of fines for noncompliance.
But Europe has little presence in the burgeoning AI industry, especially relative to the US or China. Those leading the charge in generative AI are US-based firms such as OpenAI, Anthropic and Google; no European firm meets the mark. Such a glaring gap seems to speak for itself. For now, the Trump administration’s AI Action Plan, which seeks to limit red tape and regulation in AI, looks like the better approach.
The problem with the European way is that it burdens fledgling firms with the costs of regulatory compliance before the technology’s potential has become clear. A chatbot that spreads falsehoods or discriminates against certain ethnic groups is certainly not desirable, but there must be some tolerance for such errors in the early stages of a system’s development.
Moreover, when developers can explore a system’s positive possibilities more freely, they also have time (and possibly resources generated from successful but error-prone launches) to figure out cost-effective ways to address issues that undermine the system’s reliability. Demanding near-perfection from the outset does not safeguard society so much as it stifles the trial-and-error process through which breakthroughs emerge.
Of course, errors such as racial discrimination can be extremely costly, especially if made by chatbots that interact with millions of people. Recognizing this risk, some regulators allow new products to be tested only in tightly controlled settings. Innovators can experiment with a limited group of users, always under the regulator’s watchful eye. This “sandbox” approach helps keep any harms from spilling over to the broader public – Europe’s main concern.
But sandboxes might also limit what can go right. Trials with small, restricted groups cannot capture the benefits of network effects, whereby products become more valuable as more people use them. Nor can they reveal unexpected breakthroughs that come when the “wrong” people adopt a product (for example, online pornography drove early innovations in web technology). In short, sandbox trials may keep disasters at bay, but they also risk stifling discovery. They are better than outright bans, but they may still cause innovators to bury too many promising ideas before they can scale.