Indonesia needs to take a leap and get ahead of the game in regulating AI technology, or at least take small steps.
Toward the end of last year, news about a new artificial intelligence (AI) chatbot capable of generating text on almost any topic got the internet talking. The bot in question is ChatGPT, which was developed by OpenAI, a young San Francisco-based AI lab.
ChatGPT has since reportedly amassed over 10 million daily users, according to Brett Winton of ARK Investment Management (@wintonARK), outpacing Instagram's early growth.
While much of the hubbub around ChatGPT focuses on its misuse by students to complete school assignments, as an article on the Semarang State University website notes, the technology is much more than that. ChatGPT is essentially a human-like chatbot. Unlike other bots, it lets users interact in natural language, the way one would chat with a friend.
It is also trained on a massive amount of text data, which allows it to interact with humans more fluidly and conversationally than other bots. Its features include answering users' follow-up queries, engaging with a wide variety of topics and even admitting its own errors.
ChatGPT is not just an ingenious algorithm to play with; it is also a powerful machine that can produce TV scripts, create song lyrics and write poetry when given the right prompts.
The chatbot recently made headlines for being intelligent enough to pass several law and business school exams in the United States.
But all that “magic” comes with a catch. ChatGPT is replete with issues and limitations that could become dangerous if left unregulated. First, according to OpenAI, the bot’s training data only covers events up to 2021, meaning it is clueless about more recent events and trends. Further, it can respond to problems but not contextualize them, and it cannot fact-check the information it generates. Indeed, users have reported ChatGPT spitting out clearly inaccurate responses and making up facts.
Just like other AI, ChatGPT is susceptible to bias. It can produce sexist, hateful or otherwise inappropriate outputs based on its training data. Evidence of ChatGPT’s failure to maintain gender, cultural and political balance in its answers surfaced in several tests run by tech experts at US universities. For instance, Steven Piantadosi at the University of California, Berkeley tweeted his success at getting the bot to conclude that only Caucasian or Asian men would make good scientists.
Then, concerns began to emerge about ChatGPT’s algorithm allegedly favoring America’s left-wing views while also regurgitating false narratives about the outcome of the 2020 US election. OpenAI was quick to address the latter gaffe and managed to create guardrails so the system generated more neutral and diverse political viewpoints, but users still found ways to get around them.
In addition, a recent TIME investigation found that OpenAI outsources its content moderation to workers in Kenya. The moderators are tasked with scouring thousands of text items across the internet, flagging everything potentially toxic or harmful, and then feeding those flagged items to ChatGPT so it can learn to detect similar content on its own.
This method of fine-tuning mirrors the content moderation practices of Facebook and Twitter, which have been extensively criticized as slow and unreliable, especially in dealing with disinformation and hate speech, Wired reports.
The hidden perils of ChatGPT should already be sounding alarm bells for Indonesian regulators. After all, the bot now knows 95 languages, including Indonesian and Javanese, according to a blog post on SEO.ai. We can thus expect a growing pool of Indonesian users toying with the tech, some of them irresponsibly.
It is not at all difficult to picture cyber trolls using the bot to manufacture stories, or whole fake dialogues, that defame a presidential candidate ahead of the 2024 election. ChatGPT also writes code, making it an effective instrument of cybercrime: even criminals with limited English skills could get it to generate malware, provided they had some coding knowledge.
Tech players and regulators are perpetually stuck in a game of cat and mouse, and Indonesia is very much losing. We do not have a comprehensive legal framework to regulate how AI technologies are designed, developed and used. This is despite President Joko "Jokowi" Widodo’s bold vision of having Indonesia keep up with the global AI race, as reported by Kompas.com.
His administration’s only headway in this regard came in 2020, with the issuance of the National AI Strategy 2020-2025 by the Agency for the Assessment and Application of Technology (BPPT). The strategy centers heavily on Indonesia’s plan to create an AI-driven economy, but it is silent on how exactly AI innovations fit into domestic law.
Meanwhile, the European Union (EU) and countries like the United Kingdom, the US and China have all made strides in regulating trustworthy AI.
The EU is now finalizing its flagship AI law, which is among the first to introduce a risk-based approach to AI governance, dictating that AI systems be subject to distinct licensing and transparency obligations depending on how harmful they are. Last year, the US Congress introduced the Algorithmic Accountability Act, aimed at mitigating AI-related bias, unfairness and discrimination, and the White House quickly followed with its blueprint for an AI Bill of Rights.
Without a set of robust, binding rules, there is little hope for accountability over potential AI harms. We do not know, for example, if the fake news and hate speech provisions in the Criminal Code and the Information and Electronic Transactions (ITE) Law can extend to AI-generated content and if so, who might be criminally liable.
Our intellectual property regime is also silent on automation, and therefore provides no safeguard against AI-facilitated copyright or trademark infringements, including plagiarism. The brand-new Personal Data Protection (PDP) Law provides minimal clarity on how data breaches committed by means of AI-powered malware are to be tackled.
If legislation is too distant a prospect, Indonesia can instead opt for other ways to achieve trustworthy AI: an ethics guideline could give stakeholders an idea of the bounds within which they may develop and use AI, while a testing toolkit like the one Singapore has implemented would allow developers to self-assess AI systems before they enter the market.
The consensus is that there is no one-size-fits-all way to regulate AI, but when it comes to harnessing the technology for the benefit of humans, taking small steps is as good a start as any. Indonesian regulators must come to terms with this soon, as ChatGPT and other algorithms are only going to keep evolving.
***
Felicity Salina is a researcher in criminal justice and human rights at Gadjah Mada University (UGM), Yogyakarta. Haekal Al Asyari is a lecturer at UGM and a doctoral candidate at the University of Debrecen, Hungary.