TheJakartaPost


Is artificial intelligence subject to ethics?

Besides environmental disasters, global poverty and all types of wars and skirmishes, human beings face another challenge that makes us more anxious: one related to artificial intelligence (AI).

Qusthan Firdaus (The Jakarta Post)
Jakarta
Sat, October 6, 2018


Besides environmental disasters, global poverty and all types of wars and skirmishes, human beings face another challenge that makes us more anxious: one related to artificial intelligence (AI).

AI refers to the development of computer systems that acquire human intelligence so that they can carry out tasks such as decision-making, visual perception, language translation and speech recognition.

Among those four tasks, decision-making poses the most serious ethical challenge for humans because, sooner or later, AI could make a decision that affects humans or is made on their behalf.

Suppose that a self-driving car is designed, as part of AI advancement, to place human safety as its first and foremost priority. In a near-accident situation, the car’s AI would have to decide whether to hit an innocent bystander, an innocent pet or a wall, for instance.

Given that the AI prioritizes the safety of a person, it is likely to hit either the pet or the wall. In countries where animal rights are widely embraced, such as Australia and New Zealand, hitting the pet would be considered worse than toppling the wall.

We can complicate the situation by imagining a scenario in which the AI system is hijacked by a hacker. If this is the case, then who should be morally and legally responsible for the car accident? Is it the AI itself, the car manufacturer, the car owner, the internet provider or the hacker?

On the one hand, blaming the AI requires a law that recognizes AI as a moral agent subject to the law. On the other, accusing any of the other parties requires serious investigation. Either way, we need an ethical framework.

Henry Sidgwick’s account of universal happiness is helpful here. On this view, ethical assessment should address not only the greatest happiness of the greatest number (which is wrongly understood as the ultimate principle of utilitarianism or consequentialism) but also virtues and duties, because Sidgwick requires them to achieve happiness that is morally objective.

Objective moral happiness means virtuous, dutiful satisfactions or pleasures. Consequently, virtues and duties are the two yardsticks for the account of universal happiness.

In contrast, once businesspeople pursue egoistic happiness, AI development will ignore and abandon ethics. Further, how should we conceive of ethics for AI? Should it be similar to, or different from, ethics for human beings and animals?

Those who reject animal rights would likely reject the notion of AI rights as well. That said, we should first assess whether AI have a set of rights at all. Once we grant them such rights, we cannot discriminate against AI, and they could easily make decisions on behalf of human beings. At this stage, rights mean moral and legal entitlements. Once our legal system regulates AI, they would have a set of legal entitlements.

However, we do not know yet whether AI have moral entitlements. If they do, how can we prove and govern them?

Let us say that a package of moral entitlements refers to the negative and positive duties found in the Ten Commandments. If this is the case, then AI ought not to kill, steal or covet. But what about the duties to honor one’s parents and not to misuse divine attributes? I believe these are not relevant to AI, inasmuch as computers have nothing to do with them.

It remains unclear whether AI have a set of moral entitlements, though they must nonetheless be regulated. If so, it seems that AI have artificial rights as opposed to natural rights.

Technology is generally value-neutral, but once a technology like AI strives to imitate human intelligence, that imitation may itself contain a set of values. AI decision-making is therefore value-laden.

This combination implies that AI cannot be ethically assessed in the same way we justify (or counter-justify) human actions and rules. That said, I think AI is a moral agent, albeit one slightly different from humans.
__________________________


The writer is an associate lecturer at Binus University International, Jakarta. The views expressed are his own.
