In the absence of clear regulations, personal data shared with chatbots may be analyzed for profiling and exploited for micro-targeted political messaging, undermining both privacy and equality under the law.
According to a recent survey, almost two-thirds of users in Indonesia rely on artificial intelligence in their daily lives, indicating that these systems have become deeply embedded in personal decision-making. As users grow accustomed to seeking regular advice from AI, they begin to trust it with increasingly significant life choices.
For instance, a first-time voter asking which candidate to support can now be "confidently" advised by a chatbot. Recognizing this profound impact, a joint ministerial decree released in March 2026 limited the deployment of generative AI in elementary classrooms due to its influence on developing minds. The risk lies not in machines holding opinions, but in their being designed to satisfy and please the user.
As people begin to treat chatbots as reliable advisors, political judgment shifts from public discourse to private communication. However, disagreement, open discussion and exposure to multiple viewpoints are essential to a healthy democracy.
These elements are undermined by "sycophantic AI": the tendency of systems to reflect and affirm a user's beliefs even when they are incorrect. Large language models are trained with human feedback to optimize for user preferences, a process that often rewards responses users find agreeable or satisfying.
A recent study from Stanford University in the United States, which examined user-facing AI systems from OpenAI, Anthropic, Google, Meta, Qwen, DeepSeek and Mistral, found that chatbots endorsed the user's position about half the time, even when the online community had judged the user to be wrong. Participants who received these complimentary responses felt more justified in their views and were less likely to seek common ground with opposing ones.
Sycophantic responses are often trusted and rated as high-quality by users, yet they rarely address alternative perspectives. People frequently mistake linguistic fluency for reliability, ceasing to scrutinize sources because the bots communicate in smooth, sympathetic prose. This dynamic is increasingly apparent as chatbots move from being tools to companions and confidants. A tool designed purely to please will inevitably confirm a user's preexisting biases.
While slogans, images, and tailored messaging have long been used by political campaigns to sway voters, AI now makes it possible to personalize this persuasion at scale.