
AI is explaining itself to humans. And it's paying off

Paresh Dave (Reuters)
Oakland, United States
Fri, April 8, 2022


The why of AI: LinkedIn's subscription revenue increased 8 percent after it adopted explainable artificial intelligence (XAI) that not only predicted clients at risk of canceling, but also explained how it arrived at the conclusion. (Unsplash/Souvik Banerjee)

Explainable AI is on the rise as a way to deliver more accurate and transparent results about consumers' decisions, but some experts are concerned it could do the opposite.

Microsoft Corp-owned LinkedIn boosted subscription revenue by 8 percent after arming its sales team with artificial intelligence (AI) software that not only predicts clients at risk of canceling, but also explains how it arrived at the conclusion.

The system, introduced last July and described in a LinkedIn blog post on Wednesday, marks a breakthrough in getting AI to "show its work" in a helpful way.

While AI scientists have no problem designing systems that make accurate predictions on all sorts of business outcomes, they are discovering that to make those tools more effective for human operators, the AI may need to explain itself through another algorithm.
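
The idea of one algorithm explaining another can be made concrete with open-source tooling. The following is a minimal sketch, not LinkedIn's production system: it trains a toy churn classifier and then uses the open-source SHAP library to attribute a single prediction to its input features. The feature names and data are hypothetical, chosen only to illustrate how a per-feature "reason" could accompany a risk score handed to a salesperson.

    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    # Hypothetical account features: weekly logins, seats used, open support tickets.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    # Toy label: accounts with few logins and many tickets tend to cancel (1 = churn).
    y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=500) < 0).astype(int)

    # First algorithm: a gradient-boosted model that predicts churn risk.
    model = GradientBoostingClassifier().fit(X, y)

    # Second algorithm: SHAP attributes one account's risk score to its features,
    # giving a human operator a per-feature explanation alongside the prediction.
    explainer = shap.TreeExplainer(model)
    contributions = explainer.shap_values(X[:1])[0]
    for name, value in zip(["weekly_logins", "seats_used", "support_tickets"], contributions):
        print(f"{name}: {value:+.3f}")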

The emerging field of explainable artificial intelligence, or XAI, has spurred big investment in Silicon Valley as start-ups and cloud giants compete to make opaque software more understandable. It has also stoked discussion in Washington and Brussels, where regulators want to ensure that automated decision-making is done fairly and transparently.

AI technology can perpetuate societal biases like those around race, gender and culture. Some AI scientists view explanations as a crucial part of mitigating those problematic outcomes.

United States consumer protection regulators, including the Federal Trade Commission, have warned over the last two years that AI that is not explainable could be investigated. The European Union could pass the Artificial Intelligence Act next year, a set of comprehensive requirements that includes a mandate that users be able to interpret automated predictions.
