
With AI comes a double-edged sword

Some of the most AI-proficient organizations in the world are treading with caution, and for good reason.

Conall McDevitt
Dublin
Thu, December 7, 2023


The logo of OpenAI is displayed near a response by its AI chatbot ChatGPT on its website, in this illustration picture taken on Feb. 9, 2023. (Reuters/Florence Lo)

Artificial Intelligence (AI) is not new. Neither is reputational risk. While corporations have been using AI for some time now, most of it has been unseen: in data analytics, predicting customer behavior, sales and marketing, or operations. Most of the time, clients and customers do not see the touch of AI in a corporation's work.

For instance, a manufacturing company might use machine learning to collect and analyze an immense amount of data, and then identify patterns and anomalies the company can use to make decisions about improving operations. As a customer of this manufacturing company, you would probably never see this AI at work.
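To make that idea concrete, here is a minimal, purely illustrative sketch in Python of the kind of behind-the-scenes analysis described above, using scikit-learn's IsolationForest to flag unusual production-line readings. The sensor columns, values, and contamination rate are invented for the example and do not describe any real deployment.

# Illustrative sketch only: a hypothetical manufacturer flagging anomalous
# sensor readings. All data below is synthetic and the parameters are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Pretend these are hourly readings from a production line:
# temperature (C), vibration (mm/s), throughput (units/hour).
rng = np.random.default_rng(0)
readings = rng.normal(loc=[70.0, 0.5, 120.0], scale=[2.0, 0.1, 5.0], size=(1000, 3))
readings[::200] += [15.0, 1.0, -40.0]  # inject a few obvious anomalies

# Fit an isolation forest; fit_predict returns -1 for anomalies, 1 for normal points.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(readings)

anomalous_hours = np.where(labels == -1)[0]
print(f"Flagged {len(anomalous_hours)} anomalous readings at indices {anomalous_hours[:10]}")

A plant manager would see only the outcome of such an analysis, perhaps a maintenance alert or a schedule change, which is exactly why this kind of AI stays invisible to customers.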

That is set to change with ChatGPT, which uses generative AI as a language model to answer questions and assist with tasks. Its uses are varied: students might use it to write essays, software engineers to write code, travelers to plan itineraries, and some people are already using it as a search engine. Companies are planning to jump on this bandwagon.

Forbes reported that Meta, Canva, and Shopify are using ChatGPT to answer customer questions. The report also noted that Ada, a Toronto-based company that has automated 4.5 billion customer service interactions, has partnered with ChatGPT to further enhance the technology.

CNBC reported that, as part of this evolution, Microsoft plans to release technology that will let big companies launch their own chatbots built on OpenAI's ChatGPT. That could mean billions of people interacting with ChatGPT.

It seems like a perfect partnership, a natural next step for the technology.


Not everyone has jumped onto this tempting bandwagon. Some of the most AI-proficient organizations in the world are treading with caution, and for good reason.

As impressive as ChatGPT has proved so far, Large Language Models (LLMs) like ChatGPT are still rife with well-known problems. They amplify social biases, often against women and people of color. They are riddled with loopholes: users found they could circumvent ChatGPT's safety guidelines, which are supposed to stop it from providing dangerous information, simply by asking it to imagine it is a bad AI.

In other words, ChatGPT-like AI is fraught with reputational risk.

That doesn't mean we have to dismiss AI like ChatGPT. Adopting new technology of any sort is bound to come with risks. So how do we reap the benefits of AI while keeping reputational risk within healthy limits?

The Reputation, Crisis and Resilience (RCR) team at Deloitte held a roundtable with leaders in the financial services, technology, and healthcare industries to discuss how they approach the complex challenge of managing reputation risk. Among the points raised were the following:

First, foster a reputation-intelligent culture.

One of the key things discussed was creating a culture that is sensitive to brand and reputation. In every decision made, employees should have an internal compass that constantly asks: will this move the needle on the company's reputation, and how? This can be cultivated through holistic onboarding and training programs.

Second, set a reputation risk tolerance.

Setting a tolerance can help organizations make intentional decisions. No company wants to take a reputational hit, but few companies set tolerance levels for how much risk they want to take. When you have a threshold to stay within, it’s easier to deal with new technologies you might not understand fully.

Third, measure reputation risk.

Measurement methods include regular surveys, media monitoring, and key opinion research. However, leaders must strike a balance: collect relevant data without drowning in it. Research shows that too much data collection can be counterproductive, distracting people from the bigger picture or creating a risk-averse attitude.

As AI continues to develop very quickly, keeping up with its intricate depths and breadths at all times will be difficult. While we should stay abreast of the technology, what is more important is cultivating a strong mindset around reputational risk, so that no matter the tool, whether AI, social media, or cryptocurrency, we can always manage the risk involved.

***

The writer is managing partner of Europe and Asia for the Penta Group.
