There are serious global concerns about the misuse of synthetic content, ranging from psychological to financial and political harm. Artificial intelligence can amplify the misuse of such content and sow doubt about the authenticity of what we see and hear. Creative AI can also be directed toward authoritarian ends; it will not always be used democratically.
What should we do in the AI era? How should we control it? For humanity, this is the central question for a bright future. The challenge is socio-technical, and so must be the remedies. While the manipulation of information undermines trust, top-down responses such as government censorship also weaken the foundations of governance.
Nancy Pelosi, the former speaker of the United States House of Representatives, visited Taiwan in 2022, when I was the cabinet member in charge of digital affairs in Taiwan. There was a wave of cyberattacks, but also information manipulation of foreign origin.
One malicious attack replaced a billboard display in Taipei with hate messages directed toward Pelosi. The intention was to make us panic and to make the stock market crash. It was a comprehensive attack, but we defended ourselves very quickly and made sure people understood what had happened. The attackers changed some billboards and tried to cut access to websites, but they never really took control.
In January 2024, when presidential and legislative elections were held, we warned that there would be deepfakes, that there would be attacks on the vote-counting process and that people would be led to believe the count was fraudulent.
In Taiwan's experience, we have found that debunking after the fact may not be enough. Instead, we practice "pre-bunking", proactive approaches that anticipate likely tactics, openly illustrate how deepfakes are made and cultivate a shared civic resilience against them. In addition, deliberations, such as our Information Integrity Alignment Assembly, enable citizens to co-create standards for online advertising, authentication and platform accountability.
In Taiwan, we use technology to make the state and the government transparent to the people. But authoritarian regimes in China and Russia use AI technology to make people transparent to the state. So it is transparency, but in reverse.
The authoritarian narrative is always the same: "Democracy leads to chaos, democracy leads to polarization, democracy leads to a lack of trust."
In this context, we should remember that a doctor in China discovered an "unknown pneumonia", later identified as COVID-19, and warned local people at the end of 2019. The local government accused him of spreading a "hoax", and unfortunately, he later died of COVID-19.
Without journalism and free speech, many problems can arise. It is not just that ordinary citizens become ignorant of what is going on because of the lack of journalism and information.
Without freedom of expression, there is also a danger that decision-makers do not see the whole picture. They can be misled and make the wrong decisions because they do not know that reality has changed. That is a terrible situation for a society.
Established media and journalism, which need to check facts firsthand, have an even more critical role in sustaining society in the era of AI.
We should not concentrate power too much, and that requires a different design: it requires privacy-enhancing technology, which is very important. The government needs to be accountable, but it should not sacrifice the personal privacy of citizens.
With proactive public engagement, transparent provenance, collaborative governance and open-source tools for trust and security, we can align new technologies with democratic principles. In doing so, we preserve the integrity of elections and the trust that underpins democracy itself. When the government trusts the people, the people put their trust back in the government.
I call AI "assistive intelligence", like a pair of eyeglasses. We can steer it, change its trajectory, go in a different direction. I call this direction plurality. Plurality means collaboration across diversity: different cultures and different ideas, which are the strength of our society.
AI should be steered to facilitate broad listening and broad communication across regions and generations, so that people who speak different languages can rely on real-time translation or summaries and listen to another language as if it were their native tongue. Broader communication and discussion through AI could lead not only to creative ideas, but also to greater democracy.
As for the singularity, it is said to be a hypothetical point in time when AI becomes so advanced that civilization undergoes a dramatic and irreversible change: AI so strong, so far beyond human comprehension and understanding, that humans are powerless to change its behavior.
I have said many times the singularity is near, but plurality is here. I mean that singularity is always near, but plurality is a better choice, and it is already here. We can keep using AI to foster plurality.
---
The writer is the former minister of digital affairs in Taiwan.