Written Answer to Unanswered Oral Question

Encouraging Companies to Enrol in IMDA's AI Verify Foundation to Uphold Responsible and Ethical Use of AI

Summary

This question concerns MP Ms See Jinli Jean’s inquiry into plans to enrol companies with AI-driven business models in IMDA's AI Verify Foundation, and into whether the Government will establish consumer reporting channels and a tripartite approach to managing AI ethics in the workplace. Minister for Communications and Information Josephine Teo responded that the Government promotes responsible AI through voluntary guidelines, including the Model AI Governance Framework and the AI Verify testing framework and toolkit. AI Verify enables developers to objectively validate system performance against responsible AI principles and to generate reports that facilitate stakeholder engagement, including discussions between unions and employers. The Minister noted that the framework is already being adopted by platform companies such as XOPA.ai to demonstrate transparency and fairness in their AI-driven business models. For reporting irresponsible use of AI, she directed consumers to existing sector-specific authorities, such as the Ministry of Manpower, TAFEP, or the Monetary Authority of Singapore, for follow-up.

Transcript

100 Ms See Jinli Jean asked the Minister for Communications and Information (a) what are the plans to ensure that companies with artificial intelligence (AI) driven business models, such as platform companies, are enrolled into IMDA's AI Verify Foundation that seeks to uphold responsible and ethical use of AI; and (b) whether the Government will consider (i) a reporting channel for consumers who are subjected to companies’ irresponsible use of AI and (ii) a tripartism approach to shaping rules in using AI for work and workforce management.

Mrs Josephine Teo: Singapore supports the responsible development and deployment of artificial intelligence (AI) across all settings.

Though there is broad agreement that processes and outcomes should be explainable, transparent, fair and human-centric, technical standards for responsible AI are still in the nascent stage of development. Nevertheless, to promote responsible AI use, IMDA has introduced guidelines and tools for owners and developers of AI systems. These include (a) the Model AI Governance Framework in 2019; and (b) AI Verify in 2022 – a voluntary testing framework and software toolkit that helps companies (i) objectively validate the performance of their AI systems against responsible AI principles, and (ii) demonstrate this to their stakeholders through the sharing of testing reports.

AI Verify can, therefore, be a useful basis for unions to engage employers or labour market intermediaries on uses of AI that impact workers. In fact, early adopters of AI Verify include the online human resource services platform XOPA.ai.

As for reporting channels, this would depend on the specific concerns. For example, reports concerning employment discrimination as a result of AI use should continue to be channelled to the Ministry of Manpower or the Tripartite Alliance for Fair and Progressive Employment Practices (TAFEP) for assessment and follow-up. For insurance- or banking-related concerns, reports can be directed to the Monetary Authority of Singapore.