Oral Answer

Impact on Singapore's Digital Ecosystem following Meta's Decision to Eliminate Fact-checking and Reduce Content Moderation Effort

Speakers

Summary

This question concerns the impact of Meta's decision to reduce content moderation and fact-checking on Singapore's digital ecosystem and information integrity. Mr Alex Yam inquired about potential risks regarding misinformation and the measures used to maintain safe online discourse when platforms adopt lenient policies. Minister of State Rahayu Mahzam stated that the government is monitoring these developments and highlighted existing safeguards like the Protection from Online Falsehoods and Manipulation Act and the Code of Practice for Online Safety. She detailed public education efforts through the National Library Board and IMDA, alongside a $50 million investment in the Centre for Advanced Technologies in Online Safety. Furthermore, the government remains in contact with Meta and will utilize tools like the Elections (Integrity of Online Advertising) (Amendment) Bill to ensure online safety.

Transcript

3 Mr Alex Yam asked the Minister for Digital Development and Information in light of the recent announcements by Meta to eliminate fact-checking and reduce content moderation efforts (a) what is the Ministry's assessment of the potential impact on the spread of misinformation and harmful content on Meta’s platforms; (b) whether there are implications for Singapore’s digital ecosystem and the safety of online discourse; and (c) what measures will be introduced or enhanced to promote information integrity and responsible digital citizenship in Singapore, particularly when global technology companies adopt more lenient moderation policies.

The Minister of State for Digital Development and Information (Ms Rahayu Mahzam) (for the Minister for Digital Development and Information): Mr Speaker, social media has increasingly become the primary source of news and information for many Singaporeans online. A reliable fact-checking and content moderation system on social media platforms, therefore, serves as a crucial first line of defence against misinformation and harmful online content, allowing platforms to act early to detect, correct or filter out such material.

Meta has been assessed to be a social media platform with significant reach in Singapore. The Government is naturally concerned about the impact of its policies and practices on Singapore. We are therefore monitoring developments arising from the company's announcement to replace third-party fact-checking with crowdsourced fact-checking and revise its hate speech policies across its platforms. Based on publicly available reports, these changes will be limited to within the United States (US), at least in the near term. Nonetheless, we will assess their impact on Singapore, especially if the changes are also implemented here.

In the meantime, we will continue our existing approach to addressing misinformation and ensure that it remains fit-for-purpose. In terms of regulations, the Protection from Online Falsehoods and Manipulation Act (POFMA) enables the Government to issue corrections against online falsehoods that are against the public interest. The Codes of Practice under POFMA also require the prescribed Internet intermediaries to put in place safeguards to promote credible online sources of information, enhance transparency in political advertising and prevent and counter the abuse of online accounts. In addition, our Code of Practice for Online Safety under the Broadcasting Act requires designated social media services, including Meta's platforms, such as Facebook and Instagram, to have systems or processes to prevent Singapore users from accessing harmful content.

Beyond regulations, it is also important for Singaporeans to be able to protect themselves against risks in the online space. To this end, the Government has put in place public education programmes to equip Singaporeans with the skills to critically evaluate information and protect themselves against misinformation. As an example, the National Library Board's Source, Understand, Research and Evaluate (SURE) programme has developed resources and organised activities to equip Singaporeans to be discerning producers and consumers of information. As part of the Digital for Life movement and the Digital Skills for Life framework, the Infocomm Media Development Authority (IMDA) has also developed resources to equip Singaporeans to be safe, smart and kind online, including skills on identifying and taking action against false information.

We are also keeping up with efforts to leverage technology to respond to online harms. In May last year, we committed $50 million in funding over five years to the Centre for Advanced Technologies in Online Safety (CATOS), which will bring together government, industry, academics and civil society, to develop and deploy technological solutions to build a safer online ecosystem for Singapore users. Part of CATOS' work includes testing "Trust by Design" technologies, such as watermarking and content authentication, so as to enhance the authenticity of digital content.

The Government will continue to review our range of online safety efforts to keep pace with a fast-changing social media landscape and ensure that Singaporeans can continue to access safe and trusted online spaces.

Mr Speaker: Mr Alex Yam.

Mr Alex Yam (Marsiling-Yew Tee): Thank you, Mr Speaker, for your indulgence. I thank the Minister of State for her answer. While the current changes to Meta's moderation policy apply only to the US, has the Ministry assessed what the impact on our local regulations, such as POFMA and the Foreign Interference (Countermeasures) Act, would be, if these changes become broad-based and company-wide across various jurisdictions? And has Meta reached out to the regulators, or has the Ministry, perhaps, spoken to Meta, to better understand the rationale behind these changes and how it intends to ensure responsible digital governance?

As we saw from the previous Parliamentary Question, there are, of course, many specific risk areas that are pertinent to us: the impact on public health, which we experienced through COVID-19; social cohesion, which we covered in yesterday's debate; and of course, of particular pertinence this year, elections in Singapore.

Meta's new policy in the US somewhat mirrors what X is doing in terms of crowdsourcing moderation. But the Center for Countering Digital Hate, for example, looked at X and found that misleading posts by the platform's own owner, Elon Musk, on the US elections, and, outside of the US, his support for the Alternative for Germany (AfD) and Reform UK, have amassed almost two billion views. This is despite the platform having this so-called crowdsourced or Community Notes system. So, there is great scope for damage to be done, should various social media platforms decide to go the same way. And I am asking if the Ministry is prepared for those eventualities.

Ms Rahayu Mahzam: Mr Speaker, I thank the Member for his questions. Firstly, I would like to highlight that the announcement is in its early days yet. In Singapore, we always take a collaborative approach with the social media platforms and we have always been in active engagement with them. With regard to this particular instance, we are in touch with Meta and we are working with them to understand the specifics and details of this policy and its implications for the US, as well as for the larger global population. We will continue to assess if there are areas of concern and if there are areas where we need to enhance our regulations, and to see if the platforms will continue to meet their obligations under our existing regulations. So, that is one: we are already in touch, we are already assessing.

Secondly, I would also highlight that we have always been on the lookout for new trends and developments, especially during sensitive periods like the election, which is why the Elections (Integrity of Online Advertising) (Amendment) Bill was passed recently, specifically looking into the integrity of information online during the election period. So, we will continue to keep an eye out for some of these developments.

Thirdly, I would like to highlight that we already have existing safeguards. We have been outcome-based in our approach; we know the outcomes we want. There are many technologies, capabilities and mechanisms to protect users in the online space across different areas of concern, such as the algorithms, and we are looking into some of these. But as a start, our safeguards ensure that the platforms are on the same page with regard to the outcomes we want to achieve: that there is a safe online space and there is high integrity in the content that is put out. These are ongoing efforts and we will continue to enhance them as we go along.