Ethical Integrity (EI) is fast becoming the new competitive edge. Defined simply, EI means applying ethical judgment beyond current laws and societal norms.
And yet, it’s being dangerously overlooked.
A recent IBM study found that while 79 percent of executives say artificial intelligence ethics is important to their enterprise-wide strategy, less than 25 percent have operationalized any form of ethics governance.
The message is clear: Ethics are being talked about far more than they are acted upon. In the global race for AI dominance, transparency and accountability are often sacrificed for speed.
But this is a short-sighted bargain. Companies that lead with EI are not just doing the right thing; they are building long-term trust, regulatory resilience and brand equity. We should not look at ethics as a barrier to innovation, but as the foundation for sustainable progress.
If that sounds dramatic, consider history. The most dangerous tools we have ever created, including fire, nuclear energy, even social media, were not inherently evil. They became destructive in the absence of ethical restraint. AI is no different.
Power by itself is not the problem. The real danger lies in how that power is used. And in today’s AI race, we are once again prioritizing capability over conscience. Innovation is a mandate, but just because we can build something doesn’t mean we should.
A McKinsey report projects AI could add up to US$13 trillion to the global economy by 2030. But it also warns that the benefits may concentrate in the hands of a few regions and actors. The risk? Deepening inequality and loss of democratic control over technologies that will touch every aspect of life.
AI, for all its potential, is not a neutral force. It reflects the intent of its creators, the priorities of its funders and the biases of its data. In that sense, AI is not the challenge – we are. The real test of our AI future will not lie in technical milestones, but in the ethical courage of those who shape it. And the decisions made today will shape social, economic and political systems for decades.
Some of the world’s most influential AI leaders claim to be acting with integrity. They point to their codes of conduct, oversight boards and commitments to transparency. But as the AI arms race accelerates, we see those guardrails being quietly removed or ignored.
OpenAI was founded on the promise of open-source development and democratic access. Today, its internal policies have shifted without public explanation. The tech giant that once claimed it would never work with defense contractors now develops AI tools for warfare. In each case, legality is observed. But is legality the same as morality?
Another recent example brings this tension into sharp focus. Palmer Luckey, founder of Oculus VR, now leads Anduril Industries, a defense technology company building autonomous, AI-enabled weapons systems like the ALTIUS-600M drone, which can identify and strike targets with minimal human intervention.
His mission is framed as deterrence: To make the US “impossible to harm”. Yet beneath the rhetoric lies a deeper ethical dilemma. Are these systems truly safeguarding lives or simply automating the mechanics of war?
That tension underscores the heart of the issue: It is not just the tools that matter, but the intent and governance behind them. Just because a system can distinguish a combatant from a civilian does not mean it always will, or should be trusted to do so autonomously.
As militaries around the world race to integrate AI into battlefield strategy, we must ask: Who is accountable when things go wrong? And what happens when private innovation outpaces public oversight?
Steward leadership requires more than just compliance. It requires a higher standard: Ethical Integrity, the courage to ask not just “Is it legal?” but “Is it right?”
Ethical Integrity means acting according to moral principles that consider consequences, including unintended ones. It pushes leaders to confront their blind spots, cultural, social and economic, and to take responsibility for both action and inaction. It means making hard decisions even when you’re not being forced to, because it’s the right thing to do.
In fact, some of the most compelling examples of ethical leadership in AI are emerging not from Silicon Valley, but from Southeast Asia and the Middle East, where technology adoption is accelerating amid deeply diverse cultural, political and social contexts.
There, a new wave of ethical start-ups is reshaping the technology landscape by prioritizing long-term value, sustainability and social impact over rapid, unchecked growth. Unlike traditional ventures chasing quick unicorn status, these start-ups, such as Komuto (Indonesia) and Hasan.VC (Malaysia and Singapore), emphasize financial prudence, transparency and responsibility.
In these regions, where economic potential and social complexity intersect, ethical decision-making isn’t a theoretical debate. It’s a daily necessity. And it shows: Steward leadership can thrive when leaders are grounded in local realities, guided by shared values and committed to inclusive progress.
Across regions and industries, enduring organizations, those that have succeeded at “doing well by doing good”, share five core values: Interdependence, long-term view, ownership mentality, creative resilience and ethical integrity.
These values are not abstract ideals but a compass guiding some of the world’s most resilient and respected companies. Such organizations don’t chase quarterly profits; instead, they build legacies that endure for generations.
These exemplary cases illustrate that steward leadership in Asia must engage multiple stakeholders, including governments, businesses, communities and civil society, fostering dialogue and investing in education that prioritizes inclusion and justice. It requires leaders who understand that AI ethics cannot be separated from broader societal values and power dynamics.
This leadership mindset challenges us all to ask better questions: Are we designing AI to replace human connection or to enhance it? Are we building systems that reduce bias or that reinforce it? Are we planning for the next ten months or for the next ten generations?
At its heart, steward leadership is about using power and resources to create value for all stakeholders, not just shareholders. It recognizes that today’s greatest threats aren’t competitors or technological disruption; they are environmental collapse, social fragmentation and growing inequality.
If we want to protect what matters, we must redefine how we lead.
Rather than getting caught up in debates about whether AI will save or destroy us, let’s ask a more urgent question: How will we choose to lead while we still can?
***
The writer is CEO of Stewardship Asia Center.