TheJakartaPost


Unmasking fraudsters: Combating the emerging threat of deepfake fraud

Indonesia’s digital and financial services organizations should diversify identity verification methods, avoiding over-reliance on a single technique to fully verify their users.

Ronald Molenaar
Jakarta
Tue, November 21, 2023


Illustration of deepfake video (Shutterstock/Lightspring)

Indonesia's digital economy is projected to hit Rp 3.216 quadrillion (US$210 billion) by 2027, as reported by the Indonesian Chamber of Commerce and Industry (Kadin). This substantial growth, a 128 percent increase, is attributed to the widespread adoption of technology across various sectors.

It is anticipated that the digital economy will account for approximately 14 percent of Indonesia's gross domestic product (GDP) by 2027. This rapid pace of digitalization positions Indonesia on track to become one of the world's top 10 economies by 2030.

However, digital fraud poses one of the most significant challenges if Indonesia is to fulfill its digital and economic potential. One of the fastest-growing threats we have observed is the rapid rise in bad actors taking advantage of the latest advances in AI-generated content (AIGC) technology.

Increasingly, fraudsters are targeting specific stages of the digital customer onboarding journey by creating or manipulating deepfake content (facial photos and videos) to produce realistic digital impersonations of individuals, bypass identity verification measures and gain unauthorized access to sensitive financial services.

This ranges from digital bank account opening, loan applications and credit card applications to e-commerce transactions, posing a significant potential financial and reputational risk to both individuals and organizations.


Nowadays, it is increasingly easy to download photos of someone’s face or video from public social media accounts and use AI tools to synthesize a deepfake photo or video. Social media also makes it relatively easy to find out where potential victims live and work, as well as who their relatives and friends are. Experts say these fraudsters, who operate their scams like businesses, are prepared to be patient, planning their attacks for months.

Already, some high-profile examples of deepfake content involving actors, executives, public figures and politicians have successfully tricked people. In one case, a fraudster in China used AI-powered face-swapping technology to impersonate a friend of the victim during a video call and convince him to transfer Rp 9.5 billion.

In response to the growing threat of deepfake and digital fraud, many financial services companies have already implemented electronic Know Your Customer (e-KYC) measures to verify a customer’s identity. These typically employ some form of biometric verification such as facial recognition, fingerprints, and/or iris scans in addition to national identity documents like ID cards to digitally authenticate a customer before they can open an account.

But fraudsters have evolved their tactics and found ways to "spoof" the e-KYC process using fake samples such as fingerprints (via gelatin, tape or 3D printing), facial scans (printed images, videos or 3D masks) or iris scans (digital images or contact lenses). These "presentation attacks" may employ physical or digital objects such as photos, detailed masks, digitally created images or a video presented on a screen in front of the smartphone or laptop camera.

A more advanced fraud type, frame injection, uses fraudulent data streams between the capture device (such as a webcam or microphone) and the biometric identifier. The fraudster hacks a phone app to submit their video to the identity verification website as if it came from the phone’s camera.

More companies are starting to add a "liveness detection" authentication layer, asking the customer to perform a certain action, such as smiling, blinking or turning their head, in real time during the registration stage. Since the required action is randomized each time, fraudsters cannot rely on prepared photos, videos or audio spoofs.
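The core of this defense is unpredictability: the server, not the client, chooses the action sequence at session time. A minimal sketch of that idea follows; the challenge list and function name are illustrative, not any vendor's actual API.

```python
import secrets

# Hypothetical challenge set; a real e-KYC provider would define its own actions.
CHALLENGES = ["smile", "blink twice", "turn head left", "turn head right", "nod"]

def issue_liveness_challenge(num_actions: int = 3) -> list[str]:
    """Pick a cryptographically random sequence of actions for this session.

    Because the sequence differs on every attempt, a pre-recorded video of
    the victim performing one fixed action cannot satisfy the check.
    """
    return [secrets.choice(CHALLENGES) for _ in range(num_actions)]
```

Using `secrets` rather than `random` matters here: the whole point is that an attacker cannot predict the next challenge.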

But even so, fraudsters have upped their game to collect data from social media, analyze facial features, and superimpose a lifelike 3D "skin" onto their own faces, enabling them to respond convincingly to real-time liveness checks.

Here are a few suggested countermeasures financial services companies can deploy to protect themselves and their customers.

First, advanced liveness detection. Implement advanced liveness detection technologies that use three-dimensional depth analysis and infrared scanning to differentiate between genuine and synthetic facial features.

Second, browser security enhancements. Web browsers (versus mobile apps) are particularly vulnerable. Regularly update and patch vulnerabilities, enforce secure protocols, and employ secure browser extensions or plugins that can detect and block frame injection attempts.

Third, continuous monitoring and anomaly detection. Deploy robust real-time monitoring systems to identify suspicious bot- or human-driven interactions, detect sudden spikes in activity and flag potential attacks.
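One simple form of spike detection is a rolling statistical baseline: flag any new observation that sits far above the recent mean. The sketch below is a toy illustration of that principle (the class name, window size and threshold are assumptions, not a production design), applied to per-minute signup counts.

```python
import statistics
from collections import deque

class SpikeDetector:
    """Flag a sudden spike in per-minute signup attempts.

    Keeps a rolling window of recent counts and flags any new count more
    than `threshold` standard deviations above the window mean.
    """

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.counts = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, count: int) -> bool:
        suspicious = False
        if len(self.counts) >= 10:  # require some history before judging
            mean = statistics.mean(self.counts)
            stdev = statistics.pstdev(self.counts) or 1.0  # avoid divide-by-zero
            suspicious = (count - mean) / stdev > self.threshold
        self.counts.append(count)
        return suspicious
```

In practice such a detector would run per endpoint and per customer segment, feeding flagged windows into a human review or automatic throttling step.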

Fourth, multi-factor authentication (MFA). Add layers of security by combining biometric checks with additional factors such as one-time passwords (OTPs) and device fingerprinting.
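The value of MFA is that a deepfake defeating the biometric check still fails without the second factor. As one illustration, a time-based OTP can be generated with nothing beyond the standard library, following the RFC 6238/4226 construction; the `verify_login` wrapper is a hypothetical composition, not a real product's API.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, period: int = 30, digits: int = 6, now=None) -> str:
    """Time-based one-time password (RFC 6238) over an HMAC-SHA1 HOTP core."""
    counter = int((now if now is not None else time.time()) // period)
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify_login(face_match: bool, otp_entered: str, secret: bytes) -> bool:
    """Hypothetical MFA gate: require both the biometric check and a valid OTP."""
    return face_match and hmac.compare_digest(otp_entered, totp(secret))
```

With the RFC 6238 test secret `b"12345678901234567890"` and timestamp 59, this produces the documented 6-digit code `287082`.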

Fifth, machine learning-based fraud detection. Utilize machine learning to identify fraudulent patterns and swiftly respond to threats, including bot-driven interactions and money laundering behavior.
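Production systems use far richer models, but the underlying idea can be shown with a tiny logistic-regression classifier trained on labeled attempts. The two features here (attempt rate and device-history score) and the training data in the usage example are entirely made up for illustration.

```python
import math

def train_logreg(samples, labels, lr=0.1, epochs=500):
    """Minimal logistic-regression trainer using per-sample gradient descent."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted fraud probability
            g = p - y                         # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def fraud_score(w, b, x) -> float:
    """Probability that a new attempt is fraudulent under the trained model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

Given synthetic rows such as `[attempt_rate, device_trust]` with fraud labeled `1`, the trained model assigns higher scores to fraud-like inputs; a real deployment would retrain continuously as fraud patterns shift.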

Sixth, robust customer education. Continuously educate customers on online fraud risks, secure online practices (such as changing passwords regularly), and how to verify website authenticity, identify phishing attempts and report lost documents such as ID cards.

Seventh, collaborative industry efforts. Foster industry collaboration to share knowledge about fraud techniques, emerging threats, and best practices. Collective intelligence strengthens defenses and safeguards customer identities and assets.

Indonesia’s digital and financial services organizations should diversify identity verification methods, avoiding over-reliance on a single technique to fully verify their users. Businesses should consider a more comprehensive approach, combining biometric, behavioral, and contextual factors, which enhances security. Within the organization, departments like compliance, risk management, fraud prevention, and IT security must collaborate for a holistic fraud prevention strategy.
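Combining biometric, behavioral and contextual factors usually amounts to a weighted risk score with graduated outcomes rather than a single pass/fail gate. The sketch below is a minimal illustration of that pattern; the signal names, weights and thresholds are assumptions, not an industry standard.

```python
def kyc_decision(signals: dict, weights: dict, threshold: float = 0.7) -> str:
    """Combine normalized (0-1) verification signals into one onboarding decision.

    Missing signals count as 0, so over-reliance on any single check is avoided:
    a perfect face match alone cannot clear the approval threshold.
    """
    score = sum(weights[name] * signals.get(name, 0.0) for name in weights)
    confidence = score / sum(weights.values())
    if confidence >= threshold:
        return "approve"
    if confidence >= 0.4:
        return "manual_review"   # borderline cases go to a human analyst
    return "reject"
```

The middle "manual_review" band is the practical point: deepfake attempts that partially pass automated checks get routed to a person instead of being silently approved.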

Moreover, continuous updates in capabilities, processes, and knowledge are essential to stay ahead in the ever-evolving battle against bad actors and deepfake syndicates using the latest technology.

 ***

The writer is Indonesia's country manager at Advance.AI.
