Tackling Risk of Agentic AI Capable of Autonomous Actions and Unforeseen Emergent Behaviours
Ministry of Digital Development and Information
Summary
This question concerns the Government’s plans to regulate agentic AI, as raised by Mr Low Wu Yang Andre, specifically whether the plan will include legislative empowerment to govern autonomous AI actions and AI-generated content. Minister for Digital Development and Information Mrs Josephine Teo replied that many AI risks are currently covered by broad legislation, such as the Personal Data Protection Act, and by various sector-specific guidelines. She emphasised that human and organisational accountability remains central, with the Government reviewing how to adapt existing frameworks to address the increased autonomy and emergent behaviours of agentic systems. The response also highlighted that the Government is strengthening data protection and cybersecurity while monitoring international developments and implementing targeted interventions, such as the upcoming Online Safety (Relief and Accountability) Bill. Additionally, she noted that the Government Technology Agency of Singapore is conducting trials and experiments to understand how to implement agentic AI responsibly for the public good.
Transcript
26 Mr Low Wu Yang Andre asked the Minister for Digital Development and Information regarding the risk of agentic AI capable of autonomous actions and unforeseen emergent behaviours (a) what is the Government’s specific plan to regulate these high-risk capabilities; and (b) whether this plan will include legislative empowerment to govern both AI actions and the content that AI generates.
Mrs Josephine Teo: In our current artificial intelligence (AI) governance approach, many AI risks are covered by broad legislation, such as the Personal Data Protection Act, the Workplace Fairness Act and the Broadcasting Act, and by sector-specific guidelines in the healthcare, finance and legal sectors. For specific risks, there are also targeted interventions, for example, the Elections (Integrity of Online Advertising) (Amendment) Act and the upcoming Online Safety (Relief and Accountability) Bill.
Human and organisational accountability is central to Singapore’s AI governance approach. The Government’s Model AI Governance Framework sets out guidelines for AI systems to be explainable, transparent and fair. Organisations deploying AI should establish clear governance structures with designated oversight roles. They should ensure meaningful human accountability in AI-augmented decision-making. Organisations should also implement risk management processes to monitor and mitigate risks, such as algorithmic bias.
Agentic AI presents new opportunities and risks. These systems can execute actions, interact with external systems and adapt their behaviour with reduced human oversight. Our response focuses on two areas.
First, we recognise that many risks arising from agentic AI are extensions of existing challenges. Existing guidelines and regulations, including risk management processes and oversight, apply to agentic AI systems. The principle of maintaining human accountability and putting in place sufficient controls and guardrails also applies. In addition, our long-standing efforts to strengthen data protection, cybersecurity and resilience help protect the systems that AI agents interact with. We are reviewing how to adapt and strengthen these frameworks and systems to account for the increased autonomy of agentic systems, while monitoring international developments in this emerging field.
Second, we are developing capabilities to go beyond existing frameworks and approaches. We are running trials and experiments to stay abreast of the evolving technology and to understand what it takes to implement agentic systems responsibly. For example, the Government Technology Agency of Singapore has been experimenting with agentic AI in public sector use cases. We are doing this with care. This approach will deepen our understanding of how to interact with these systems, build confidence and harness their value for the public good.