Written Answer to Unanswered Oral Question

Risk Assessment on Providers of Artificial Intelligence Technologies

Summary

This question concerns whether the National AI Strategy 2.0 mandates risk assessments for artificial intelligence providers prior to public release, and the specific details regarding the nature of these assessments. Minister for Communications and Information Josephine Teo stated that the government uses the AI Model Governance Framework and toolkits such as AI Verify to help organisations validate their systems against principles like robustness and explainability. She highlighted that the finance sector is guided by the Monetary Authority of Singapore's FEAT principles, while private entities such as DBS employ internal responsible data use frameworks and risk assessment tools. She also emphasised international cooperation, such as mapping AI Verify to the United States' AI Risk Management Framework, to harmonise standards and reduce compliance burdens. These initiatives form a baseline for the government to partner with industry in managing risks and building a trusted environment for artificial intelligence development.

Transcript

26 Assoc Prof Razwana Begum Abdul Rahim asked the Minister for Communications and Information (a) whether Singapore's National Artificial Intelligence (AI) Strategy 2.0 includes a requirement for providers of AI technologies to complete a risk assessment prior to making the technology publicly available; (b) if so, who undertakes the assessment; and (c) what risks are assessed.

Mrs Josephine Teo: The Singapore National AI Strategy 2.0 identifies the presence of a trusted ecosystem as a key enabler for robust AI development.

In fact, Singapore was a first-mover in launching our AI Model Governance Framework back in 2019, which recommends best practices to address governance issues in AI deployment. We continue to update it to address emerging risks, including by launching a Framework for Generative AI this year. Meanwhile, the Government also provides practical support for organisations seeking to manage risks in the development and deployment of AI, including by launching open-source testing toolkits, such as AI Verify, to help them validate their AI systems' performance against internationally recognised governance principles like robustness and explainability.

These frameworks provide a useful baseline for the Government to partner industry on managing and assessing AI risks across the ecosystem. In the finance sector, financial institutions are guided by sector-specific AI governance guidelines, such as the Monetary Authority of Singapore's (MAS') Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT), which align closely with the earlier-mentioned AI governance frameworks. Many companies have supplemented these with additional internal guidelines to oversee AI development, examples of which can be found in the Personal Data Protection Commission's Compendium of Use Cases for the Model AI Governance Framework. For example, DBS has implemented its own Responsible Data Use framework for its AI models to comply with legal, security and quality standards, and utilises risk assessment tools such as the probability-severity matrix.

Besides enhancing our governance approach domestically, we collaborate with international partners to build a trusted environment for AI worldwide. For instance, we have conducted a joint mapping exercise between AI Verify and the US' AI Risk Management Framework to harmonise approaches and reduce the compliance burden on organisations deploying AI across different jurisdictions. We will continue to seek out such opportunities and adapt our methods in tandem with technological developments.