The rapid adoption of AI without a clear understanding of how these systems work presents risks for society but offers opportunities for policymaking.
The recent adoption of Artificial Intelligence (AI) models has started to shake up traditional avenues of society. In the education sector, for example, teachers and schools have begun implementing AI assistance in classrooms to help students retain knowledge better. Corporations are deploying AI tools to boost productivity and retain customers with "humanlike" models in search engines and e-commerce sites. Individuals wishing to get crafty can even download open-source files and develop their own AI models to suit their needs.
Media reporting on AI often features the possibilities of what AI could do to help people in day-to-day affairs. One might argue the pushback against AI models is merely reactionary and that, in the long term, society would benefit from having tools that significantly increase productivity by offloading tasks from human hands, much as the world has done with traditional computers.
However, it remains a huge technological leap the world has not yet fully understood in terms of its current shortcomings and potential risks.
For starters, AI models are advancing rapidly; fears that once seemed unfounded, when AI could not generate human fingers or write essays beyond a basic level of understanding, are now more relevant than ever. Within months, newer models have shown the ability to create photorealistic images, render "deepfakes" without identifiable visual flaws and produce text conversations indistinguishable from natural language.
The fears are now grounded in reality. According to a Fortune article, a survey of 1,000 United States companies currently using ChatGPT found that nearly half had already replaced workers with the chatbot. A major US political party has begun campaigning for the 2024 presidential elections using fully AI-generated ads with images that previously did not exist.
Memos circulated internally among United Kingdom students show that colleges have stopped accepting new intakes for their online master's programs over the risk of AI undermining the teaching process, cutting off access for those who seek quality higher education but face geographical or other personal constraints.
Governments are being called upon to recognize the dangers of uncontrolled AI usage in the private sector. Speaking late last month at the RSA Conference in San Francisco, California, the United States, a renowned industrial information security seminar, Eric Goldstein of the US Cybersecurity and Infrastructure Security Agency (CISA) acknowledged in an interview the wide breadth of benefits that come with AI, but warned that "all the positive and negative aspects of AI are still unknown" and that following the trends may "inadvertently expose companies to risk".