The Jakarta Post

Safeguarding the future of artificial intelligence

In recent years, the development of artificial intelligence (AI) has been viewed as a “double-edged sword,” where some do find it beneficial, yet others are more skeptical about the problems it may create. While AI can bring certain advantages when integrated into a company’s operations, key decision makers should be mindful of their AI solutions, in order to ensure that the practice is free from controversy.

Sheena Suparman (The Jakarta Post)
Jakarta
Wed, March 13, 2024


Safeguarding the future of artificial intelligence (Courtesy of IBM)

One of the problems that may arise is AI hallucination, a phenomenon in which a large language model (LLM) perceives patterns or objects that are nonexistent or imperceptible, producing outputs that are nonsensical, inaccurate or not grounded in its training data.

AI hallucinations have also made headlines recently. In one of the most famous stories, an AI chatbot started a love story for the digital age when it admitted to falling in love with a user and even encouraged him to dissolve his marriage.

When used irresponsibly, AI has the potential to erode trust, propagate inequality and create harm. Therefore, companies must prioritize ethical principles when integrating the new technology, in order to build trust rather than dismantle it.

A study by the Institute for Business Value found that 79 percent of executives believe AI ethics are important to their enterprise-wide approach, while fewer than 25 percent have operationalized ethics governance principles. AI governance is a critical element that needs to be prioritized for an AI system to truly benefit all stakeholders.

The ethics of it all

First of all, users need to keep in mind that the purpose of AI is to augment human intelligence, not replace it. The data and insights it generates belong to their creator or company.


Moreover, IBM has broken these principles down into "pillars of trust": explainability, fairness, robustness, transparency and privacy. The first means ensuring that users understand the decision-making processes of the AI system, so they can comprehend and trust the results and output its algorithms create.

When properly regulated, AI can assist humans in making fairer choices. Since AI systems are built on machine learning, any bias in the output ultimately stems from statistical discrimination based on the training data.

The AI system also has to be robust. Research has shown that even small, imperceptible variations in the input data can lead AI models to incorrect decisions. Therefore, the system needs to be designed and developed to withstand real-world variations, as well as evaluated and optimized accordingly.
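The effect described above can be illustrated with a toy model. The sketch below is purely hypothetical (the weights, features and loan-approval scenario are invented for illustration, not taken from any study in this article); it shows how nudging a single input by a small amount can flip a classifier's decision:

```python
import math

def score(features, weights):
    """Weighted sum of features squashed into (0, 1) with a logistic function."""
    z = sum(f * w for f, w in zip(features, weights))
    return 1 / (1 + math.exp(-z))

# Hypothetical toy classifier: approve when the score reaches 0.5.
weights = [2.0, -1.5, 0.5]
original = [0.40, 0.52, 0.10]    # an applicant's original inputs
perturbed = [0.40, 0.57, 0.10]   # one feature nudged by just 0.05

def approve(features):
    return score(features, weights) >= 0.5

print(approve(original), approve(perturbed))  # True False
```

A 0.05 change in one feature, invisible to a human reviewer, moves the score across the decision boundary; robustness evaluation looks for exactly this kind of fragility.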

Transparency is the key to trustworthiness. For it to be transparent, the capabilities and purpose of the algorithms must be openly and clearly communicated to relevant stakeholders.

The last pillar may be the focus of most conversations surrounding the disadvantages of AI. It is clear that AI systems must prioritize and safeguard consumers' privacy and data rights.

PwC's 2024 Global Digital Trust Survey shows that mega breaches are increasing in number, scale and cost, with 36 percent of respondents globally reporting a cost of US$1 million or more for their worst breach in the past three years.

However, steps can be taken to prevent data breaches. These include incorporating privacy-enhancing technologies, anonymizing data and adopting robust security measures.
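One of those steps, data anonymization, can be sketched briefly. The example below is illustrative only (the field names and salting scheme are hypothetical, not a description of any product mentioned in this article): direct identifiers are replaced with salted one-way hashes before records enter a training set.

```python
import hashlib

def anonymize(record, pii_fields=("name", "email", "phone")):
    """Replace direct identifiers with a shortened, salted one-way hash."""
    salt = "per-dataset-secret"  # in practice, store and rotate this securely
    out = dict(record)
    for field in pii_fields:
        if field in out:
            digest = hashlib.sha256((salt + str(out[field])).encode()).hexdigest()
            out[field] = digest[:12]  # pseudonym: stable, but not reversible
    return out

print(anonymize({"name": "Ani", "email": "ani@example.com", "score": 88}))
```

Non-identifying fields pass through untouched, so the data remains usable for analysis while the identifiers cannot be recovered without the salt.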

Furthermore, AI requires an open and diverse ecosystem, fostering a culture where diversity, inclusion and shared responsibility are imperative. This applies to everything from the datasets to the practitioners who build the systems.

Offering solutions to the problems

With the vast growth of AI usage and networks all over the world, certain government agencies have drawn up regulations for the technology. Perhaps the most famous is the European Union's AI Act, the world's first comprehensive AI law.

First proposed by the European Commission in April 2021, the EU regulatory framework for AI requires that systems usable in different applications be analyzed and classified according to the risk they pose to users. The different risk levels then mean more or less regulation.
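The risk-based idea can be sketched as a simple lookup. The tier assignments below are simplified illustrations drawn from commonly cited examples, not legal classifications under the Act, and the obligation summaries are heavily abbreviated:

```python
# Hedged illustration of a risk-based framework: each application maps to a
# tier, and higher tiers carry stricter obligations.
RISK_TIERS = {
    "social scoring by governments": "unacceptable",
    "CV-screening for hiring": "high",
    "customer-service chatbot": "limited",
    "spam filtering": "minimal",
}

OBLIGATIONS = {
    "unacceptable": "prohibited",
    "high": "conformity assessment, logging, human oversight",
    "limited": "transparency notice to users",
    "minimal": "no extra obligations",
}

def obligations_for(application):
    """Look up an application's risk tier and the obligations attached to it."""
    tier = RISK_TIERS.get(application, "unknown")
    return tier, OBLIGATIONS.get(tier, "assess case by case")

print(obligations_for("CV-screening for hiring"))
```

The point of such a structure is proportionality: a spam filter faces essentially no burden, while a hiring tool faces audit-style requirements and outright-banned uses never reach the market.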

Previously, in 2019, the Organisation for Economic Co-operation and Development (OECD) Council adopted the Recommendations on Artificial Intelligence, entitled “OECD AI Principles.” The document includes five values-based principles and recommendations for OECD countries and adhering partner economies to promote responsible and trustworthy AI policies.

However, most believe that responsibility for ethical behavior should lie with the inventors themselves. This is a value embedded in the vision and mission of IBM, and the company strongly supports the regulations set forth by the EU and OECD.

As one of the biggest technology companies in the world, IBM has been leading advances in innovations, including in AI. In order to ensure that the system created is responsible and more beneficial, IBM offers a “Precision Regulation for AI” that outlines a risk-based framework for industry and governments to work together in a system of co-regulation.

While acknowledging that a "one-size-fits-all" policy is not realistic given the many unique characteristics of AI systems, IBM proposes a policy framework built on the pillars of accountability and transparency, as well as fairness and security.

The company also acknowledges that open collaboration between the government and the private sector is needed. IBM's framework is in line with the Communications and Information Ministry's circular letter no. 9/2023, which emphasizes the values of inclusivity, accessibility, security, humanity, as well as credibility and accountability in the use of AI.

“We believe in all the principles mentioned above and have implemented them. Because in the end, the benefits of AI stand to grow exponentially. But only if society trusts it. With trust as the cornerstone for AI innovation, businesses must leverage AI as a force for positive change. Remember, this is a marathon, not a sprint. And at this moment in the long arc of human progress, that matters not just for our company, but for our customers and society as a whole,” concluded Roy Kosasih, country manager of IBM Indonesia.

This article is published in collaboration with IBM
