Oral Answer

Introduction of Regulations with Advent of Artificial Intelligence and Autonomous Machines

Summary

This question concerns the development of legal frameworks and ethical safeguards for artificial intelligence (AI), as raised by Mr Patrick Tay Teck Guan and Dr Tan Wu Meng. Senior Minister of State Janil Puthucheary stated that the Government adopts a domain-specific regulatory approach, requiring safety testing for autonomous vehicles and human oversight for algorithmic systems. He clarified that existing laws apply to the human or corporate owners of current AI, while the Infocommunications Media Development Authority studies broader policy implications with sectoral regulators. The Senior Minister of State emphasised building technical capabilities within Government and industry to manage risks while leveraging AI for the Smart Nation vision. He added that regulatory efforts focus on integrating new technologies into existing legal frameworks and operational processes rather than creating laws for hypothetical, sentient AI.

Transcript

7 Mr Patrick Tay Teck Guan asked the Prime Minister with the advent and drive towards the use of artificial intelligence and autonomous machines, whether new laws and regulations will be promulgated or existing ones reviewed to ensure we address the issues of ethics, morality, provision of kill switches as well as liability.

8 Dr Tan Wu Meng asked the Prime Minister what policy principles and legal frameworks are being developed to update Singapore's governance and laws to address the issues arising from autonomous algorithms and devices.

The Senior Minister of State for Communications and Information and Education (Dr Janil Puthucheary) (for the Prime Minister): Mr Speaker, may I take Question Nos 7 and 8 together, please?

Mr Speaker: Yes, please.

Dr Janil Puthucheary: Sir, as artificial intelligence (AI) becomes more pervasive, it is being applied to different domains, each with very different types of risks specific to the domain concerned. The regulatory approach would thus also have to be domain-specific.

For example, in transportation, the Land Transport Authority (LTA) and the Traffic Police require autonomous vehicles (AVs) to first pass a rigorous safety test within a controlled environment before they can be tested on public roads at specific times and places. This progressive testing allows developers to advance the technology without exposing other road users to safety risks. Conceptually, this is similar to how a human driver has to be trained and tested before being given a driving licence. In addition, LTA requires mandatory motor insurance for AVs to cover third-party liability and property damage to protect the interests of all road users and property owners. Developers are also required to share key data with authorities to allow them to monitor the progress of the trials.

In the Government, we have developed machine learning algorithms that can detect and identify trading accounts suspected of syndicated activities. This is unlike AVs, where there are issues of physical safety. However, we still need to rigorously test the software before use, as well as maintain and update the AI engine continuously, by performing human checks on the results from time to time. For example, before any account is suspended, there will still be a human process to check through and ensure that the punitive action is justified.

In other sectors, the Infocommunications Media Development Authority (IMDA) is working with sectoral regulators to discuss issues of AI governance and study the policy and legal implications. Where we see the need to step in to protect the public’s interest, we will implement domain-specific safeguards.

AI is a key enabler of Smart Nation. To exploit its use and manage its risks well, we need people with a deep understanding of the technology. We intend to continue to raise such capabilities within the Government, industry and our research and development (R&D) institutions.

Mr Speaker: Mr Patrick Tay.

Mr Patrick Tay Teck Guan (West Coast): I thank the Senior Minister of State for the answer. I would like to ask if the Government would consider doing an overall, detailed national study on the potential impact of AI and technology, to get the conversations going and also pave the way for the ethical development of technology. This is because there are growing concerns over various issues, such as liability, privacy, consent, safety, security, diversity as well as transparency.

Dr Janil Puthucheary: Mr Speaker, the Member will be reassured to know that many of the sector-specific bodies, whether they are trade or professional associations, are already interested in this process and have begun such discussions, whether through their own public consultations or in collaboration with the Smart Nation Office and other offices.

This is likely to be an ongoing discussion and an ongoing attempt to look at the issues thrown up by new forms of technology, such as AI. The challenge is to make sure that we fit this process into the existing domain-specific laws, operational processes and regulations. So, we have a multitude of efforts under way.

The Smart Nation Office will continue to study the matter and, especially as new technology and technological opportunities become available, we will continue to engage with the various regulators and domain-specific partners.

Mr Speaker: Dr Tan Wu Meng.

Dr Tan Wu Meng (Jurong): I thank the Senior Minister of State for his answers. I would like to ask whether these studies and frameworks will be forward-looking and include drawer plans and scenarios for technologies that may not have reached the market yet but which would have a significant disruptive impact. I have three illustrations for the Senior Minister of State's consideration.

Firstly, what happens when multiple autonomous algorithms interact? What is the impact for competition law if trading algorithms start to collude?

Secondly, for example, what are the implications for the criminal law and the idea of mens rea ‒ the idea of guilty intention ‒ when you have AI?

Thirdly, would there be a role for more aspirational principles of AI, such as studying, say, Isaac Asimov's Laws of Robotics?

Dr Janil Puthucheary: Mr Speaker, I thank the Member for the questions.

For the first question, indeed, we are continually studying various frameworks and trying to draw up drawer plans. But much of this is hypothetical and we will have to look at the technology as it emerges. There is no specific individual plan for AI. AI is a big basket of different types of technologies and opportunities. There is a plan or strategy for the investment around R&D. But in terms of the regulatory and legislative space, we will have to look at each capability and its impact on the various domains. So, we will continue to do so.

As for the example the hon Member brought up of multiple algorithms colluding, the regulatory framework would be no different from today, because those algorithms would have to have someone who owned them, wrote them, supervised them and benefited from the collusion and criminal activity, and so on. So, the existing legislation and laws would need to apply to the humans or the corporate entities that own and operate such AI.

This then leads us to the hon Member's second supplementary question, which is that criminal law for AI would be an issue if we had what is described as "strong AI" ‒ AI that is sentient, conscious ‒ that is, a general-purpose AI able to take capabilities developed for one task and direct them to another, possibly criminal, task. We are a long way away from that. AI, in today's use, is what we call "weak AI", or task-specific AI. So, on any criminal issue, we would have to ask the legal faculty around us, but I presume it would be directed at the person who owns, operates or benefits from such a device or tool of technology.

The hon Member's third question was about the aspirational issues around AI and Asimov's Laws of Robotics. Indeed, we are aspirational around AI. We think it has a significant possibility of enabling both the Smart Nation vision and significant benefits for our economy and society over the next 20 years. As for Asimov's Laws, they are a fictional device used in science fiction, but nevertheless inspiring for how we should think about these types of problems. We are a long way away from the situation depicted in the Asimov literature, where a strong AI is sentient, conscious and able to understand issues around harm, human beings, and the rank order of relative priorities among society, humanity and morality.

So, today, as far as Asimov's Laws are concerned, we have no need for them because we do not have sentient, strong AI. Nor do we have the technical capability to programme in such laws today. But they are a useful thought experiment about how we should think about the regulations in this space.

Of note, Asimov's Laws were often used in his writing as a literary device to describe what happens when they fail, and what happens when poor regulations or poorly thought-out laws fail, in terms of the opportunities around robotics and AI. So, in much of the literature, the day was rescued by Dr Susan Calvin, a brilliant engineer. I think the salient lesson is that if we can recruit more females, more young women, into engineering, it would serve our Smart Nation and AI vision far more than the use of his three laws today. [Applause.]

Mr Speaker: Coming back to non-fictional issues, Mr Zainal Sapari.