
Online Safety (Miscellaneous Amendments) Bill

Bill Summary

  • Purpose: The Bill aims to amend the Broadcasting Act to regulate online communication services, specifically social media platforms, to protect Singaporean users from harmful content such as suicide promotion, sexual violence, and racial or religious disharmony. It establishes a regulatory framework that requires designated services to implement safety systems through Codes of Practice and empowers the Infocomm Media Development Authority (IMDA) to direct the removal of egregious content.

  • Responses: Minister for Communications and Information Mrs Josephine Teo justified the Bill by noting that while existing laws like POFMA and POHA address specific harms, there are gaps regarding the algorithmic spread of dangerous content on social media. She explained that the approach is outcome-driven rather than prescriptive, shifting responsibility to platforms to maintain safety standards while ensuring the government can act as an "online firefighter" to disable access to particularly harmful material.

Reading Status: 2nd Reading
Introduction: no debate
2nd Reading (1): Tue, 8 November 2022
2nd Reading (2): Wed, 9 November 2022


Transcripts

First Reading (3 October 2022)

"to amend the Broadcasting Act 1994 and the Electronic Transactions Act 2010 to regulate providers of online communication services",

presented by the Minister for Communications and Information (Mrs Josephine Teo) read the First time; to be read a Second time at the first available Sitting of Parliament in November 2022, and to be printed.


Second Reading (8 November 2022)

Mr Speaker: Minister for Communications and Information.

5.44 pm

The Minister for Communications and Information (Mrs Josephine Teo): Mr Speaker, I beg to move, "That the Bill be now read a Second time."

Sir, a study conducted by the Ministry of Communications and Information (MCI) in June this year found that almost 80% of Singapore residents are concerned about online harms. In stark contrast, when we asked people how they felt about walking the streets of Singapore alone at night, 97% said that they would be comfortable doing so. There is obviously a sizeable gap between how safe Singaporeans feel online and offline.

Today, most of us remain connected online throughout the day. Online services have become the key conduits through which we communicate and consume content. Because of this, the prevalence of harmful online content on these services can have serious negative consequences for the physical, emotional and mental well-being of society.

The Bill we are debating today is not the first law introduced to secure our online space. The Government has, over the years, introduced targeted laws to deal with specific types of harmful online content and behaviour, including:

(a) Falsehoods, which are dealt with under the Protection from Online Falsehoods and Manipulation Act, or POFMA.

(b) Foreign interference, which is addressed under the Foreign Interference (Countermeasures) Act, or FICA.

(c) Online harassment, such as cyberbullying, which is dealt with under the Protection from Harassment Act, or POHA. POHA was also recently updated in 2019 to cover doxxing.

Our laws have served to protect many Singaporeans. POFMA was integral to Singapore's response to COVID-19, allowing the Government to address the deluge of misinformation which made COVID-19 not just a pandemic, but an "info-demic".

Recently, The Straits Times reported a record high number of protection orders filed and granted in 2021 under POHA, more than double the number in previous years. Lawyers attributed the spike in applications to media attention on the issue of harassment. The application process has also been enhanced with the opening of the Protection from Harassment Court.

However, there are still gaps that need to be addressed. One growing concern is content encouraging suicide and self-harm. Just two months ago, an investigation in the United Kingdom (UK) concluded that 14-year-old Molly Russell took her own life after being exposed to thousands of self-harm and suicide related posts in the months leading up to her death. Many of these posts portrayed suicide as an inevitable consequence of depression.

There have also been reports of users dying accidentally while attempting to mimic videos of impossible physical stunts. Unknown to some victims, these reckless acts and dangerous challenges had been heavily edited.

Our children, who may lack the capacity or maturity to deal with certain types of content, are particularly vulnerable when exposed to inappropriate content and unwanted social interaction online. In June this year, MCI conducted a study which asked respondents what online content they felt children needed to be most protected from. The top three were sexual content, cyberbullying and violent content.

In a dialogue with youths held by MCI and the National Youth Council last year, participants indicated that the top three online harms they and their peers faced included being insulted online, impersonated by someone else and receiving unwanted contact from another person.

If such harmful content existed only on websites, the Infocomm Media Development Authority (IMDA) would be able to deal with them under the existing Broadcasting Act. But today, users are much more likely to consume content from the feeds of social media services, where such harmful content can be pushed via algorithms, and spread quickly through our social connections.

Just two weeks ago, Meta announced that Facebook recorded nearly 2 billion daily users, while Instagram recorded 2 billion active monthly users. TikTok has been downloaded over 3.5 billion times worldwide since its launch, while YouTube recorded 30 billion daily views on "YouTube Shorts".

As the Internet evolves, so must our laws. In the book "Tools and Weapons" co-authored by Microsoft's President Brad Smith and Carol Ann Browne, the backwardness of some cybersecurity measures was likened to "digging trenches to defend against missiles".

In the same way, we must recognise that online content can inflict serious damage on our people and communities, if our laws fall short. We must have the ability to deal with harmful online content accessible to Singapore users, regardless of where the content is hosted or initiated.

The entities controlling the biggest and most popular online communication services (OCSs) or platforms accessible in Singapore all operate outside of Singapore and fall outside the legal remit of the Broadcasting Act today. To ensure that Singapore users of these services and platforms can be kept safe, we must be able to take appropriate action on these entities, as long as they provide content accessible by Singapore users.

We are not alone in thinking this way. There is a growing consensus that rules must be put in place to prevent harms in the online world, just as in the physical world. Calls for online services to take greater responsibility in ensuring safety on their platforms have also led jurisdictions such as the UK, the European Union (EU), Germany and Australia to introduce or propose new online safety laws.

Mr Speaker, I seek your permission to distribute handouts to the Members, which summarise online safety laws enacted or under consideration in these jurisdictions.

Mr Speaker: Please proceed. [Handouts were distributed to hon Members.]

Mrs Josephine Teo: Mr Speaker, the Singapore public, like many other societies, is concerned over the potential damage caused by harmful online content and expects social media services to take greater responsibility to protect their users.

In July and August this year, MCI conducted a public consultation and series of engagements on our proposals to combat harmful online content and received more than 600 responses. Respondents expressed the desire for safety features to manage their exposure to certain types of content.

Similarly, MCI's June 2022 study found that nine in 10 respondents felt that such measures would protect users from harmful online content to at least a moderate extent. Parents, in particular, were concerned over viral social media content which featured dangerous pranks and challenges, harmful advertising, cyberbullying and explicit sexual content. Some suggested keeping younger users in mind when developing safety features, including tailoring content moderation thresholds and ensuring young users can easily report inappropriate content.

In a separate poll conducted earlier this year by the Sunlight Alliance for Action, Singaporeans ranked reporting systems and laws to tackle online harms as the top two measures that would facilitate help-seeking.

I will now explain our approach to enhancing online safety for Singapore users and Members will find that there are similarities to the practices elsewhere, examples of which I have circulated.

The first is to tackle the problems in an accretive manner. Rather than take a "Big Bang" approach which some countries are attempting and have an all-encompassing law, let us design our laws in a considered and calibrated manner.

[Deputy Speaker (Ms Jessica Tan Soon Neo) in the Chair]

Second, as far as possible, be outcome driven instead of being overly prescriptive. In today's context, we are dealing with a vast volume of user-generated content. Rather than chasing individual pieces of content, we must ensure that systems and processes to regulate the content are put in place and maintained by the platforms. Instead of prescribing how these systems and processes are set up, we should specify the outcomes they ought to achieve.

The third, and perhaps, most important of all, is to recognise that laws are not a silver bullet. The Government will need to work with partners, including our citizens, to tackle harmful content and enhance the safety of users online.

Today's online content service providers are different from traditional local broadcasters and require a different regulatory approach. In fact, each type of service is different. "Social media services" are not the same as "over-the-top media services", which also operate differently from "game distribution services".

The Bill allows us to adopt this accretive approach by building on existing laws to introduce new ones; so that over time, our foundations for digital safety become stronger. If passed by Parliament, this Bill will create a new part in the Broadcasting Act to regulate "online communication services", which are electronic services that enable users to access or communicate content via the Internet.

The regulations will only apply to specified types of "online communication service", which are listed in a Schedule under the Broadcasting Act.

For now, we will only specify one type of OCS in the Schedule and that is "social media services". Under the Bill, a social media service is defined as an electronic service, whose sole or primary purpose is to enable online interaction or linking between two or more users, including enabling users to share content for social purposes; and which allows users to communicate content on the service.

Why have we chosen to regulate social media services as a matter of priority? Well, because three in five users, or thereabouts, from MCI's June 2022 survey experienced harmful content online while using social media platforms. This is the highest proportion compared to other platforms, such as e-commerce sites, search engines and news sites.

Given the voluminous user-generated content in today's evolving online space, it is not efficient to regulate individual pieces of content.

IMDA will instead focus on system-wide measures which are more effective at scale.

Under sections 45K and 45L of the proposed Bill, IMDA will be able to designate OCSs with significant reach or impact in Singapore, and require them, via the Codes of Practice, to put in place measures to keep Singapore users safe.

This approach is similar to how we go about regulating fire safety. Building owners, occupiers and qualified persons must adhere to the Singapore Civil Defence Force (SCDF)'s Fire Code, which requires them to put in place systems and processes to maintain high fire safety standards, to keep their occupants safe.

Likewise, OCSs must have in place systems and processes to minimise Singapore users' exposure to, and mitigate the impact of, harmful content on their platforms. IMDA will impose these requirements on designated OCSs via Codes of Practice. By stating in the Codes the outcomes which regulated services must meet, IMDA aims to provide sufficient clarity on what the services must do to protect users, whilst allowing some flexibility for them to adjust their approaches.

We can also expect IMDA to update the Codes from time to time. This will allow us to be agile and responsive to technologies as they evolve.

But before introducing new requirements, IMDA will consult and work collaboratively with service providers to assess the most suitable approaches to strengthening safety on their platforms.

Under the Bill, IMDA does not have unfettered ability to issue new Codes. The new section 45L sets out that IMDA can issue Codes for the following purposes:

First, to ensure services have systems or processes in place to address harmful content.

Second, to provide practical guidance or certainty in respect of what content should be covered.

Third, to set out the procedures that service providers must follow when audits are carried out.

Fourth, to require services to collaborate with approved researchers to understand systemic risks relating to the service.

Earlier, I explained that we will apply our new laws to social media services as the first type of OCS. Let us now turn to the Code that designated social media services with significant reach or impact in Singapore must comply with.

In October, IMDA issued a draft copy of the "Code of Practice for Online Safety". This draft Code comes after an extensive study of international online safety legislation as well as proposals and engagements with major social media services in Singapore, including Facebook, YouTube, Instagram, TikTok, Twitter and HardwareZone.

The social media services consulted were receptive to the proposals laid out in the draft Code and the Bill. They support the Government's commitment to find innovative and effective solutions to combat harmful online content and recognise the need to improve online safety.

The designated social media services will be expected to meet the key outcomes as follows:

First, minimise Singapore users' exposure to harmful content and empower users with tools to manage their own safety. The social media services must also take additional steps to minimise children's exposure to inappropriate content and provide tools allowing children or their parents to manage their safety.

Second, make available an easy-to-use mechanism for Singapore users to report harmful content and unwanted interactions.

Third, provide transparency on the effectiveness of their measures in protecting Singapore users from harmful content. Designated social media services must provide information that reflects Singapore users' experience on their services. This will allow users to make informed decisions about how they use the service.

If the Bill is passed, IMDA will further consult relevant social media services, before finalising the Code for issuance.

We believe that the Code of Practice for Online Safety will reduce users' exposure to harmful online content, but it will not eliminate such content completely. Part of the reason is that these social media services tend to operate globally, drawing in users and content from around the world. Their safety measures are not tuned to reflect an in-depth understanding of Singapore's local context or our racial and religious sensitivities.

Members may recall that in the early days of the COVID-19 pandemic, supermarkets were purportedly running out of toilet paper. A social media post surfaced, suggesting that people use the Bible and the Quran as toilet paper. This post was religiously very offensive and denigrated two religions in Singapore. However, it was not moderated nor removed by the platform concerned. IMDA had to step in to engage the platform and the platform eventually disabled access to the post.

There may also be egregious content on non-designated social media services, which are not subject to the Code of Practice for Online Safety. In May last year, a poll published on a social media service sexualised local female Islamic teachers, asked users to rank them and further promoted sexual violence against them. The post went viral and the modest reach of this particular service received a sudden big boost. It not only caused great distress to the individuals involved, but also unsettled many others in the community.

These issues are like fires that still break out, even though the Fire Code has prevented most fires. In such instances, we must have firefighters who are properly equipped to act quickly, so as to minimise, if not prevent, serious injury and damage.

If Parliament agrees, the new section 45H proposed by this Bill will allow IMDA to act as an "online firefighter", to direct any social media service to disable Singapore users' access to egregious content and stop the egregious content from being transmitted to Singapore users via other channels or accounts.

IMDA has, in fact, performed this role for some time now, working with social media services behind the scenes to deal with egregious content. As Singapore's media regulator, IMDA also has significant experience in assessing content across the different media platforms and making decisions to protect the community.

Under this Bill, IMDA will be better equipped to ensure Singapore users are protected from egregious content online. But IMDA will not have carte blanche to issue directions. Its powers will be limited in scope.

First, IMDA will not be able to issue directions in respect of private communications. Those will remain private.

Second, directions can only be issued for certain categories of egregious content relating to user safety.

The new section 45D proposed by the Bill defines "egregious content" to include content advocating terrorism, suicide and self-harm, violence including sexual violence, child sexual exploitation, content posing public health risk and content likely to undermine racial and religious harmony. These categories will be set out in law.

When dealing with content that requires the expertise of other agencies, IMDA will consult them accordingly. As an example, when assessing content pertaining to public health measures and risk, IMDA will consult the Ministry of Health (MOH) and its experts.

The new section 45M proposed in the Bill requires designated services to take all reasonably practicable steps to comply with an applicable Code of Practice.

If they do not, IMDA can take regulatory action under the proposed section 45N to issue (a) a financial penalty; or (b) a rectification direction requiring the service to remedy the failure to comply with the Code of Practice. Non-compliance with a rectification direction will be a criminal offence, punishable with a fine. For egregious content, non-compliance with a direction by IMDA will also be a criminal offence, punishable by a fine.

Mdm Deputy Speaker, I said right at the beginning that laws are necessary, but success in ensuring our citizens' safety cannot depend on laws alone. Respondents of MCI's public consultation and engagements agreed with this view. They wanted the Government to mandate stronger measures and social media services to do more to reduce harmful online content. At the same time, they emphasised that all of us, as users of social media services, have an individual responsibility to protect ourselves.

During one of our engagements, Mr Mark Joel Premraj, a parent, shared his perspective on how parents also play a key role in educating their children on inappropriate content online, including how to encourage them to flag the inappropriate content they come across.

Besides establishing a robust regulatory toolkit, the Government has taken active steps to nurture a well-informed and discerning public. Efforts to educate the public include the National Library Board's S.U.R.E. programme. It equips the public to think critically, be responsible producers and consumers of information, and stay safe and well online. Since its launch in 2013, S.U.R.E. has conducted over 6 million physical and digital engagements.

In addition, the Ministry of Education (MOE)'s refreshed Character and Citizenship Education curriculum has a stronger focus on Cyber Wellness education, where students learn to be safe, respectful and responsible users of cyberspace, and to be a positive peer influence.

In support of the Digital for Life movement, launched in February last year, community partners have also spearheaded initiatives which helped over 270,000 Singaporeans enrich their lives through digital technologies.

For example, TOUCH Community Services has partnered Meta to conduct the Digitally Ready Families programme, where low-income families learn digital life skills and cyber wellness tips. Another Digital for Life partner is "Kids PlaySafer". Created and run by Ms Sandra Low, a mother of an 11-year-old and 9-year-old, "Kids PlaySafer" has conducted talks on digital literacy and cyber safety to help parents manage their children's digital needs.

Mdm Deputy Speaker, may I continue in Mandarin, please.

Mdm Deputy Speaker: Yes, please.

(In Mandarin): [Please refer to Vernacular Speech.] Mdm Deputy Speaker, for many people, technology is indistinguishable from magic that greatly improves our lives and brings about greater convenience. However, for the parents of 14-year-old British girl Molly Russell, social media became the dark spell that took away the life of their daughter. Badly affected by the thousands of posts about self-harm and suicide, young Molly ended her short life.

Besides content about self-harm, there are many other types of harmful online content, including those that promote violence, sexual abuse or enmity between races. If we allow such content to flood our cyberspace, many people, especially our young, would be adversely affected; and the social cohesion that we have painstakingly built over the years may also be at risk.

That said, there is no law that can shield us totally from harmful content. Therefore, as the Government strengthens the law, our hope is that as parents, we can encourage our children to tell us whenever they encounter problems online, so that we can support them. Our hope is for social media platforms to innovate and come up with newer technologies to protect users, beyond just fulfilling their obligations.

All of us also hope that while civil organisations and individuals strengthen their own awareness of such content, they can also empower vulnerable groups to counter such content.

Only then, can we move forward as one.

(In English): Mdm Deputy Speaker, let me conclude. When the author Arthur C Clarke published "2001: A Space Odyssey", one of the lines in the book became famous and quoted many times over. It says, "Any sufficiently advanced technology is indistinguishable from magic".

Mdm Deputy Speaker, magic happens as we speak. Without disturbing Parliamentary proceedings, Members can compare notes instantaneously (in fact, I saw some of you do so) and conduct research on the fly, either by instructing colleagues outside the Chamber or by scanning the QR code that I distributed.

Gone are the days when we might rush home to catch a favourite television programme. So much content is available online anytime, anywhere.

But not all of this content is good. To ensure that safety is upheld for Singapore users, we need OCSs to be held accountable. Equally, we need the support of everyone in the community to keep each other safe online. I appeal to Members to support this Bill so that we can together improve online safety. I beg to move. [Applause.]

Question proposed.

6.13 pm

Ms Tin Pei Ling (MacPherson): Mdm Deputy Speaker, I support the Online Safety (Miscellaneous Amendments) Bill.

When social media first emerged, many people saw it as an unadulterated good thing – a new frontier, separate from the real world, offering freedom of speech without ill consequences. Indeed, the founders of many social media companies had good intentions – to connect users from around the world; to give them personal control to express what they desire and exchange ideas; to be free of oppression; to come together and do good for themselves and their societies.

Years passed, and alas, we know that the reality is far from the ideal. We have seen how extremism gets propagated online, inspiring and triggering acts of terrorism in parts of the world. We have seen how young lives were lost to dark and depressing content, and irrational online movements such as the "Blackout challenge". We have seen how hate speech, in the name of free speech, gets disseminated and causes great divisions within society.

These are worrying trends that we see around the world and countries, including those that uphold democracy and free speech, are facing the same challenges. Singapore, too, is not spared. As a society, we must respond to curb its ills and protect the vulnerable, while allowing the widest possible space for free expression.

So, our legislation must evolve to better protect our people from harmful online content, especially minors who are more susceptible and vulnerable. In this, Singapore is not alone in passing such a legislation. Australia and Germany are some of the first few countries to have done so, and others are either doing the same, or thinking of doing the same.

The Online Safety (Miscellaneous Amendments) Bill builds on some of our existing laws to further recognise that happenings online can have a real impact in the physical world and actions must be taken to address issues concerning the safety and well-being of our people. The Bill also represents a shift in some ways. Firstly, protecting users is no longer just the responsibility of the Government or the individual users themselves, but also the OCS providers, who will now be explicitly required to implement tangible measures or face legal consequences for failing to do so. Secondly, it gives teeth to our agencies to compel these OCS providers to act on harmful and egregious content, even if these service providers are not situated on our shores.

On the Internet, all types of information, the good and the bad, are readily accessible and can be proliferated widely almost instantaneously, thereby extending the harm they can bring to innocent people. In instances of bullying, the Internet magnifies the effects. And, in other instances, the widespread egregious content could mislead and cause harm to the innocent. Thus, I believe that most, if not all, will agree on the importance of protecting those who are vulnerable online.

Nonetheless, I have the following questions to ask about the Bill.

Firstly, I think it is quite clear that the Ministry is taking a whitelist approach to applying the Codes of Practice. Therefore, I would like to ask: how does the Ministry determine which OCS providers make it into the whitelist? What criteria or parameters do the Ministry consider?

Second, I note that private messages will not be covered in this Bill. I suppose the Ministry is trying to strike a balance between offering sufficient protection whilst not being overly intrusive. However, there could be instances where objectionable content gets shared through private messaging channels. Moreover, in the context of Australia where its online safety act was passed just last year and came into effect earlier this year, private messaging is included. Could the Ministry explain the considerations made when deciding what to cover and what not to cover? Perhaps, the Ministry could also share the lessons learnt or the observations made of other jurisdictions where similar legislation was passed before us.

Thirdly, while the Ministry has a broad list of what constitutes egregious content, such as content that advocates or instructs on suicide, self-harm, violence, child sexual exploitation, public health risk, racial and religious disharmony and terrorism, who and how will this assessment and decision be made on what actually crosses the threshold to qualify as egregious?

While I applaud the Online Safety (Miscellaneous Amendments) Bill for being a timely one, I would also like to know: how frequently does the Ministry intend to review the Bill? As we know, technological advancement and disruptions happen at accelerating speeds. Along with them come new issues and operational challenges that the Bill today may not adequately address.

For example, with the rise of the metaverse and the increasing number of young people immersing themselves in it, our legislation and protection mechanisms must catch up quickly. In May this year, it was reported on various news sites, such as the BBC and Business Insider, that a researcher's avatar was sexually assaulted on a metaverse platform called Horizon Worlds. The researcher from a non-profit advocacy group, SumOfUs, entered the particular metaverse and, within an hour, her avatar was raped in the virtual space. As she wore her Virtual Reality (VR) equipment during that episode, her controller vibrated when the male avatar strangers touched her, producing a physical sensation corresponding to what she was experiencing online. The incident left her feeling "disoriented". It was a clear instance of how the virtual and physical world boundaries have blurred and how online happenings can cause real harm.

Apparently, that was not an isolated incident, as there were other reports of similar sexual assaults, homophobic and racial slurs, as well as gun violence on Horizon Worlds. Though these reports were specific to Meta's metaverse platform, it is not difficult to imagine similar incidents happening on other metaverse platforms, especially if the different metaverse universes start to connect with one another.

Therefore, how will the Bill, in its current form, be able to protect users from such online harms?

In addition, as we look to a possible future of Web 3.0, where data and control become decentralised, going after just a couple of OCS providers operating in the Web 2.0 world may not be effective enough. Hence, we will need to ensure that our legislation and enforcement capabilities are updated in a timely manner, so that while we do not want to be over-prescriptive, we are also not too big a step behind these technological developments.

Therefore, I hope the Ministry can also share more about what is being done to continually engage industry players and community stakeholders so that our legislation and Codes of Practice not only have teeth but will bite where it matters. With that, I support the Bill.

Mdm Deputy Speaker: Mr Gerald Giam.

6.21 pm

Mr Gerald Giam Yean Song (Aljunied): Mdm Deputy Speaker, the Online Safety Bill before us seeks to tackle harmful content on online services like Facebook, YouTube and TikTok, which are accessible to users in Singapore. I support the Bill, given the online harms that people in Singapore have been subject to on social media and on the Internet, and the growing need to protect our people, especially the young, from these harms.

However, I have some clarifications to seek on the Bill which I hope the Minister will address before we vote on the Bill.

Access to digital communication devices is not optional in this day and age, even for younger children. For example, if a 10-year-old child were to take public transport on his own to and from school, his parents would want him to be able to contact them in case of an emergency or to track his location. In most cases, this can only be done using a mobile phone or smart watch. However, it will be unwise to give that same 10-year-old unfiltered access to the Internet on his phone.

Currently, parents can install a parental control app on their child's phone. This app will allow parents to restrict content, approve apps, set screen time limits and filter harmful content. It can also locate the child using GPS.

I set this up for my son some time back. However, even with all my professional technical knowledge, it took me quite a bit of time and research to figure out which was the most suitable software to use and how to configure it. I wonder how many parents have tried to set up parental control software for their children. For those who have not, they should be aware that their children and teens essentially have unfiltered access to the Internet and all the harms that come with it. These parents can only regulate their children's Internet access by looking over their shoulders. This is a suboptimal solution, given the asymmetry of technical knowledge between most parents and their children. Most children nowadays can run rings around their parents when it comes to configuring settings on their mobile phones.

Also, for such content filtering to work for young people, age verification is needed. The Code of Practice for Online Safety for Designated Social Media Services proposed by the Ministry states that social media services must have additional measures to protect children, including minimising children's exposure to inappropriate content and ensuring that their account settings are age-appropriate by default.

However, the Code of Practice does not prescribe how this age verification should be implemented. Indeed, attempts at imposing age verification have previously failed in the United Kingdom's implementation of the Digital Economy Act 2017, in part because of privacy concerns. Online age verification providers could collect excessive personally identifiable information and process it for other purposes in violation of privacy laws.

Separately, a young user can circumvent age restrictions by declaring his age to be 18 when, in fact, he is only 12. Can I ask the Minister: how will content providers be required to perform age verification checks in practice?

Some Internet service providers do provide parental control tools which block harmful content before it comes through the fibre. However, they require a separate subscription that entails an additional cost each month. Many parents are not even aware of this service. They will have to make the effort to log in to their broadband provider's website and subscribe to this service. This additional friction will deter many parents from signing up, leaving young children vulnerable to accessing harmful content without their parents' knowledge.

It would be better for Internet service providers to block harmful content at the network level by default, rather than expect parents to set up complicated filtering software on their children's devices. This remote filtering should be activated by default for all new mobile and broadband subscriptions and offered for free for all subscribers. This will ensure that even children of less tech-savvy parents will be protected by default. Adults who need full access to the Internet should be able to opt out of the filtering service without any charge.

I am glad to note that under the Code of Practice, content that may encourage young users to engage in dangerous acts will be considered harmful content and be subject to additional safeguards for young users. One example is the "Skull-breaker Challenge", where two people trick a friend standing between them into taking a vertical jump, then kick his legs out from under him while he is in the air, making him fall backwards and potentially injure his head and back. People sometimes do not properly assess the risk associated with an activity. They may have seen others perform it without incident in a YouTube video and may be tempted to experiment themselves.

Ultimately, we cannot completely insulate young people from all dangerous, harmful and silly online content. The best protection is for parents and teachers to educate their children and students about the potentially harmful content that may be accessed online and the consequences of engaging with it. The Media Literacy Council could also directly push out educational materials on the platforms that young people access, like TikTok and Telegram. This should be an ongoing process, not a one-time effort, because harmful content is constantly evolving, and new trends are always emerging.

Under this Bill, failure to comply with the directions from IMDA could be an offence punishable by a fine on conviction. Can the Minister clarify if this fine will apply to only the company or also the individual officers within the company responsible for ensuring compliance? Given the financial might of social media companies, they might have no problem paying even a huge fine.

The Code of Practice will require social media services to submit annual reports to IMDA to reflect Singapore users' experiences on the service, including the actions that they have taken on user reports.

I would like to propose that social media services also be required to submit quarterly reports, listing the type of content that has been flagged by users. This is so that IMDA can be kept apprised of trends in harmful online content and behaviours.

Section B of the Code of Practice requires that users of OCSs must be able to report harmful content or unwanted interactions to the platform providers through an "effective, transparent and easy to use mechanism" and social media services are expected to take action on these user reports in a "timely manner".

This leaves open lots of room for interpretation. In contrast, Australia's Online Safety Act requires platforms to provide a clear and easily accessible complaints system for end-users to submit complaints or requests to remove certain material and the platforms must respond to the complainant within 48 hours, failing which the end-user may contact the eSafety Commissioner, who has the power to investigate the complaint. I would like to propose that Singapore's Code of Practice include these specific requirements and timelines.

Another potential area of harm to young people is online gaming, which can be both addictive and cause social problems. Can the Minister share to what extent this Bill and the Code of Practice will regulate online gaming?

I note that the Bill covers cyberbullying content that is likely to cause harassment, alarm or distress to the targeted person. Will the non-consensual sharing of intimate images be covered in this Bill? There have been cases, recently, of disgruntled ex-lovers sharing such images, which, most certainly, will cause alarm and distress to the victim.

Next, I would like to seek clarifications from the Minister regarding the protection of Singaporeans' democratic rights in this Bill. Some respondents to the public consultation sought assurance that the proposed measures would not affect user privacy or freedom of expression.

The Bill gives wide-ranging powers to IMDA to issue directions to social media companies to remove harmful content if it deems it so. Can the Minister elaborate on what safeguards will be in place to ensure that such powers are not abused? Will there be channels for independent appeal or judicial review?

The UK's Online Safety Bill specifically includes protections to safeguard pluralism and ensure Internet users can continue to engage in robust debate online. For example, section 29 of the latest draft of the UK's Online Safety Bill requires content providers to "have regard to the importance of protecting the rights of users and interested persons to freedom of expression within the law", when deciding on safety measures and policies.

Section 15 of the UK Bill also requires social media services to put in place clear policies to protect "content of democratic importance", such as user-submitted comments supporting or opposing particular political parties or policies, and to enforce this consistently across all content moderation. The UK Bill also requires that platforms must not discriminate against different political viewpoints.

The UK's draft legislation has also been designed to safeguard access to journalistic content. News publishers' content will be exempted from social media platforms' new online safety regulations. Because of this, social media platforms will not be incentivised to remove news publishers' content out of fear of sanction from the regulator.

Are there such provisions in Singapore's Online Safety Bill? If not, will the Government study the Online Safety Bills of other countries, including the UK and Australia, and include democratic protections in the Code of Practice and subsidiary legislation?

Will Singapore have the equivalent of an eSafety Commissioner like Australia does? Who will this eSafety Commissioner be, and will he or she be empowered to make directions independent of the Government?

Australia's Online Safety Act itself was controversial in part because of the huge amount of discretion and power it puts in the hands of the Minister for Communications and the eSafety Commissioner to determine what are community expectations. How will Singapore's Bill safeguard democratic freedoms while protecting the young from online harms?

I note that some electronic services are excluded from this Bill. Examples of these are SMS and MMS services. Can I confirm with the Minister that other private messaging platforms like WhatsApp, Telegram and Signal are also excluded from this Bill? For the avoidance of doubt, I am not advocating for these services to be included in this Bill, as they are primarily used for private communication between individuals. Much of the communication is end-to-end encrypted, which means even the platforms do not have access to the data exchanged by their users. I would have strong privacy concerns if this encryption were to be broken for the sake of enhancing online safety. Madam, I look forward to the Minister's responses.

Mdm Deputy Speaker: Mr Zhulkarnain Abdul Rahim.

6.33 pm

Mr Zhulkarnain Abdul Rahim (Chua Chu Kang): Mdm Deputy Speaker, I stand in support of the Bill. Digital technology has permeated our lives. It has a deep impact on how we learn, how we do business and how we interact with one another. However, because of the cloak of anonymity, the online world rears its ugly head through online harms.

Maintaining safety online is not just the responsibility of the individual or the Government but all stakeholders involved. In this regard, many felt that stricter enforcement of relevant laws can be effective to combat or reduce online harms. Many also felt that technology companies and platforms must lead the way in tackling this issue, alongside the Government and us fellow Singaporeans.

It is, thus, timely that we have this Bill to further help us safeguard Singaporeans against online harms.

Last week, my firm organised its annual thought leadership platform, the Dentons Rodyk Dialogue 2022, themed "Building a Safe and Inclusive Digital World Together: Vision and Transformation". During the keynote speech, Minister Josephine Teo explained Singapore's approach in regulation towards digital safety and inclusion by ensuring what she described as the "3As".

First, accretive – building each step or measure one after the other in a calibrated approach. Second, agglomerate – pulling in partners and groups in our collective endeavour. Third, agile – being able to adapt through different emerging technologies or disruptions.

To borrow from bonds credit rating parlance, that to me, is a triple-A rated approach – sound and sensible.

In dealing with online harms, it is important to take a calibrated and multi-stakeholder approach while keeping a close eye on emerging technologies, such as Web 3.0 and the metaverse. Although the Bill is focused on platform and service providers, there is also a need to focus on the users, particularly the victims of online harms.

In June 2021, I started an initiative called Defence Guild SG, a collaborative group of lawyers providing pro bono assistance to victims of online abuse or harms. We now have over 20 lawyers who volunteer pro bono to assist or advise victims who face sexual harassment or online harms. The bulk of the cases they face are time-sensitive given the viral nature of the harmful online content but, most importantly, they are emotionally draining, and it is usually time-consuming and costly for victims to seek redress.

Following this initiative, together with other People's Action Party (PAP) Members of Parliament, like Ms Hany Soh and Ms Nadia Ahmad Samdin, we spearheaded the formation of a resource toolkit to combat online harms at the PAP Women's Wing International Women's Day celebration in March this year. And in September this year, with the help of pro bono lawyers, social workers and counsellors, we ran a workshop for close to 70 activists across various PAP branches on a practical walkthrough of the resource toolkit. This is to support residents during Meet-the-People sessions or any of our other activities.

Notwithstanding all of these efforts, according to a recent survey, almost 57% of respondents do not know what legal redress or help they can get when personally faced with online harms. Hence, raising awareness and empowerment is a continuing endeavour.

In this regard, I have three suggestions: first, to provide pro bono legal advice under the Legal Aid Bureau for individuals facing online harms, much like what Defence Guild SG is doing for victims currently; second, to standardise and simplify the reporting of online harms across all platforms; third, to involve platform and broadcasting service providers in public awareness campaigns on eradicating online harms. In Malay please, Mdm Deputy Speaker.

(In Malay): [Please refer to Vernacular Speech.] In June 2021, I started an initiative called Defence Guild SG.

This was in the wake of a harassment incident towards 17 local religious teachers, who became victims of a lewd online survey. This issue raised awareness among many within our community about the threat of online harms.

Over 20 lawyers from different races and religions, comprising veteran and young lawyers from the Lawyers@M³ network, joined forces to provide legal advice to victims of online harassment.

Most of the cases they dealt with are sensitive in nature and many within our community are unaware of the type of help available as well as their own rights. However, this protection could not have come about without the partnership of those from the Government, individuals and technology companies or online service providers. Therefore, this Bill places responsibility in the hands of service providers, to block online content that may cause harm, such as extremist content, violence and those that negatively impact our multiracial and multi-religious society.

I, therefore, support this Bill.

(In English): I now move away from the individual and on to the service providers, the focus of this Bill.

I have several clarifications: firstly, I would like to clarify on the definitions used in the Bill; the second set of clarifications relate to the mechanism of the takedown and defence available; and lastly, I have some clarifications and suggestions in relation to the Code of Practice proposed.

First, on the definitions. In determining whether a broadcasting service is private or domestic, the new section 2(3) of the Broadcasting Act has regard to certain factors, and I will touch on two of them.

Firstly, on the number of individuals in Singapore who are able to access the content. The clause itself does not state, for guidance, an actual number as a threshold. In this regard, may I ask what the threshold would be? Should it be a percentage of the service's entire user base in Singapore? Some clarity in this regard would be helpful.

Secondly, on the restrictions on who may access the content. Would content on accounts which are accessible only to friends or restricted followers be considered of a "private or domestic nature"?

Next, in respect of the new clause 45D which sets out the definition of "egregious content", I have a few questions.

First, what is meant by content that "advocates or instructs"? Would the Ministry consider content to be harmful by looking objectively at its impact on our society, even though the content itself may not provide a clear set of instructions or advocate a certain position?

Second, on resources, may I ask who will be the enforcement agency for this? And would there be a dedicated team to regulate such behaviour? And if so, how are we sufficiently resourced for this?

For example, in Australia, the Department of Home Affairs has a dedicated team to find content on social media sites that promotes hate, incites violence or points to terrorist propaganda. The team has a budget of around AUD$3 million. It is a resource-intensive endeavour, and if we rely on self-reporting by individuals or self-regulation by providers, there may be many instances of online harms that fall through the cracks.

Thirdly, I note that certain categories of harms are expressly stated in the Bill. May I ask what about issues relating to drug abuse or any other activities that are illegal or against our social norms – would these be considered harmful content as well? May I suggest that some discretion be retained to include other categories of harmful content in future.

My next clarifications are on the mechanism. I welcome the takedown or disabling order under clause 45H and the blocking order under clause 45I. In particular, the new clause 45E makes it an offence not to stop egregious content on an OCS.

There is also a defence available to service providers in not complying with the order, if it was not reasonably practicable to do more than what was in fact done, or if there was no better practicable means than what was in fact used.

However, may I ask what is meant by "reasonably practicable"? Would the costs or expenses involved in complying with such order be a relevant factor?

Likewise, what about actual or consequential loss that the party may suffer when complying with such a duty? I would suggest that such costs, expenses and losses are not relevant factors. This is because economic losses or ramifications should not be placed on an equal footing with online harms that have debilitating and irreversible effects on our society and individuals. Nevertheless, I would welcome the Minister's clarifications in this regard.

Next, in relation to the immunity given to service providers against criminal or civil liability when complying with the orders under clause 45J: would this apply to civil liability brought by parties from outside the jurisdiction as well? I understand and fully appreciate that our laws may not have extraterritorial effect in this respect.

I am fully aware that the current Bill, as proposed, was done after extensive consultation with service providers and platforms and that they are supportive of such proposals. Other jurisdictions have also passed similar legislation to safeguard against online harms. Perhaps the Ministry can also explore reciprocal immunity provisions with other like-minded jurisdictions in the future. This would help harmonise and set an international standard for compliance of cross-border directions in the future.

My next clarification is in relation to clause 45H(1)(d) on the period of the takedown notice. Can we consider a fixed period? For example, Germany's similar law provides for a 24-hour requirement for takedown. This reflects the imminent risk of such online harms. If there is a standard fixed period by legislation, it will lead to a reasonable expectation or standard within the industry for compliance, and sufficient internal company compliance processes can be put in place to meet such expectations. Alternatively, perhaps the Minister can clarify what estimated period is envisaged for a typical direction or order?

Lastly, may I clarify what is meant by clause 45H(2)(b), that a requirement "must not require the doing of anything with respect to the provision of an online communication service to the whole or part of any area in Singapore"? Perhaps, can Minister provide an example for this?

Finally, Mdm Deputy Speaker, in relation to the Code of Practice, I welcome such a Code of Practice. I understand that a Code of Practice can be revised or amended pursuant to a process to be followed, which allows for future flexibility and adaptability.

My question is, given the fluidity and dynamic nature of our digital ecosystem, would the Code be able to keep up with the changes brought upon by the rapid changes in technology?

I note that there is a process before a change in the Code can be made, as envisaged at clause 45L. However, how long would that take? Perhaps, if I may, I suggest the formation of a council to formulate and update the Code regularly to keep up with technology trends. Such a council could comprise service providers, regulators, law enforcement agencies and community stakeholders.

In conclusion and most importantly, digital safety is the responsibility of everyone. Notwithstanding the above clarifications, Mdm Deputy Speaker, I stand in support of the Bill.

Mdm Deputy Speaker: Mr Leon Perera.

6.47 pm

Mr Leon Perera (Aljunied): Mdm Deputy Speaker, the Online Safety (Miscellaneous Amendments) Bill marks a step in the right direction to create some regime for enforcement of basic standards of protection and decency against acts of online harm. I support the thrust of the Bill and stand in agreement with the arguments made by my Parliamentary colleague, Aljunied Member of Parliament, Mr Gerald Giam.

In particular, I strongly support Mr Giam's call on the Government to provide clear and unequivocal assurances that this law will not be used to curtail the exercise of legitimate free speech that touches on the public acts of public figures and that is not of a vicious and personal nature. I hope the Government can provide such assurances during the Parliamentary debate, to be recorded in the Hansard. Members will recall how the Protection from Harassment Act (POHA) was used by the Ministry of Defence (MINDEF) to take action against an individual, the legitimacy of which was disputed in a subsequent ruling of the Court of Appeal.

Madam, my speech will focus on the topic of online bullying, particularly as it pertains to children, and on the harms of loot boxes and other gambling-like elements in electronic games. Before I address these two topics, let me make some general suggestions and clarifications on the approach taken in this Bill, where more clarity is, perhaps, needed.

Madam, under the proposed Bill, there are two key parts to the regulatory approach. Firstly, requiring OCSs with significant reach or impact to comply with Code(s) of Practice; and secondly, dealing with egregious content on an OCS that is accessible by Singapore users, by enabling IMDA to issue directions to deal with such content.

In addition to such measures and perhaps embedded in the Codes of Practice, I wonder if there could be a system where users of OCSs can first report content on the platform that is egregious and/or seriously harassing, to the OCS itself. If the OCS fails to take action within a stipulated time frame, say seven days, then there could be a mechanism whereby the complainant can raise this issue to the IMDA and ask the IMDA to issue the appropriate directions in relation to such content.

This draws on Australia's Cyberbullying Scheme. One of its features is that a person may make a complaint to eSafety about cyberbullying material that targets an Australian child; this acts as a safety net, because the complainant must first have reported the material to the relevant online service provider before taking that step. In addition, if the OCS fails to investigate or take action about that content reported by a user within the stipulated time frame, there could be penalties for the OCS.

While there are already reporting platforms or tools on most, if not all, of the OCSs, such a requirement embedded in a Code of Practice would create some legal pressure on the OCS to further investigate and act on user notifications or reports of egregious content in a timely manner.

Next, section 45D defines "egregious content". While this section does contain some specific definitions of egregious content, there could be other types of egregious content where more specificity would be welcome, as my colleague Mr Giam and other Members alluded to.

In particular, I think that the definition of egregious content given here in the Bill does not adequately deal with the following two categories: firstly, revenge porn or unwanted sharing of intimate images; and secondly, cyberbullying.

But at paragraph 4 of the statement from MCI – and I refer to the statement, not the First Reading speech – the June 2022 survey by MCI found that, "Sexual content, cyberbullying and violent content were the top three types of content that respondents felt the young needed to be protected from most".

It would seem that the proposed Bill might not address these two concerns in a sufficiently specific manner. In Canada, there is an offence of sharing intimate images without consent. I hope the subsidiary legislation can be more specific about these two types of egregious content.

Next, Madam, I note that at paragraph 8 of the Ministry's statement, it is stated that "The Codes of Practice may require Regulated Online Communication Services (ROCS) providers to put in place measures on their services to mitigate the risks of danger to Singapore users from exposure to harmful content and provide accountability to their users on such measures."

I would like to ask the Government how this accountability will come about. Will the Government commit to public consultations when new Codes of Practice are issued? This is hinted at in section 45L(2), but it is not a requirement. I hope that, given the evolving nature of online harms, as well as the need to balance privacy and free speech concerns, a proper consultation process will be the norm in future before new Codes of Practice are issued.

Next, Madam, part 9(d) of the same statement refers to how ROCS providers should collaborate or cooperate with the conduct of research studies by experts approved by IMDA. Such research would allow IMDA to understand the nature and level of the systemic risks in the ROCS, and the evolution and severity of such risks.

Madam, I support this provision. It is a very positive move for MCI to formally refer to the use of research in this manner, since this area of online harms is a rapidly evolving space and since more research needs to be done on the effects of certain online activities, particularly on children.

Mdm Deputy Speaker, I will now move to talking about online bullying of children. Madam, this is a serious issue in Singapore. A 2019 Programme for International Student Assessment (PISA) study found that 26% of Singaporean students reported being bullied at least a few times a month, compared to an Organisation for Economic Co-operation and Development (OECD) average of 23%. This is not a small number by any means.

Madam, this subject is somewhat personal to me, as there were short periods of time in both my children's primary school lives when they were bullied by friends. It is traumatic for the child and can leave lasting psychological scars. But it is also hard on the parents, arousing feelings of concern, frustration and, yet, helplessness. Why do I say helplessness? Because sometimes, the parents feel that they cannot address the problem with a sledgehammer, by coming down hard on the bullies, who are themselves children and may not fully understand what they are doing.

The effects of cyberbullying can be deadly. In America, a girl called Megan Meier committed suicide three weeks before her 14th birthday and her suicide was attributed to cyberbullying on the social networking site MySpace. I think the Minister had referred to this example as well. Her classmate's mother had created a fake MySpace account, pretending to be a teenage boy, Josh. "Josh" messaged Megan on AOL Instant Messenger (AIM) saying something very hurtful. Megan killed herself shortly after.

Madam, page 9 of the Bill excludes certain services from the scope of the Bill, including SMSes and, in part (e), "an electronic service where the only user-generated content enabled by that service is communication between two or more end-users that is of a private or domestic nature." This would imply that the law excludes bullying that takes place among a group of children, where one or more children bully one child in a chat group or on, say, the victim's Instagram page.

Madam, children are a group where cyberbullying can have a very serious effect, given their lack of maturity and lack of life experience in accessing resources that could help them. A large-scale National Institutes of Health study in the United States (US) found that "the child participants who experienced cyberbullying were more than four times as likely to report thoughts of suicide and attempts as those who did not." I think this subject of youth mental health, in the context of social media, has been the subject of much public discussion and much research, and rightly so, as many experts are tending towards the view that there is a connection between the very extensive social media usage that we are seeing among young people and the kinds of mental health issues that are coming to the fore.

Madam, how to deal with this very difficult question – and I confess that I do not have a legislative magic bullet to suggest here. I am not advocating for law enforcement agencies to police private conversations in a way that compromises privacy, could itself be subject to executive over-reach and could corrode our children's capacity to spontaneously interact with one another as well as learn social lessons thereby.

I would suggest that for now, the problem can be addressed through education and would like to call on MOE and the Ministry of Home Affairs (MHA) to explore this. There are helplines for young victims of bullying, including cyberbullying, such as Tinkle Friend by the Singapore Children's Society, which does good work and deserves support from all of us.

I am also aware that cyberbullying is addressed in the current curriculum relating to cyber wellness in schools. However, how effective has this been? I would urge the Government to conduct further research into this area and study innovative programmes that have worked around the world.

In particular, we need targeted education in primary and secondary schools that helps students to recognise that as bystanders, they have an important responsibility to step in and stop bullying, or at least to not cooperate and to flag out potential issues to teachers or others in authority positions.

Encouragingly, the PISA study I cited earlier found that 94% of Singaporean students agreed that it is a good thing to help students who cannot defend themselves. But does this translate to bystanders pushing back when bullying actually happens online? This needs to be studied. The role of bystanders here is crucial.

For this kind of education to work better, I am wondering if we can enlist students who have stood up to bullying, either as victims or bystanders, to be anti-bullying ambassadors who give talks in schools as to what they experienced, what they did, how they coped and also, when and how to bring the authorities in, as opposed to necessarily escalating every single situation to people in authority.

6.57 pm

Mdm Deputy Speaker: Order.




Debate resumed.
6.58 pm

Mr Leon Perera: And perhaps video clips and other communications material can be rolled out to a mass audience featuring such anti-bullying ambassadors.

Next, Madam, let me move on to the subject of loot boxes and other gambling-like elements in electronic games. Madam, I have asked several Parliamentary Questions about this subject. The most recent, in 2019, elicited the reply that the Remote Gambling Act (RGA) would apply to loot boxes only if the loot thus obtained could potentially be converted into real-world money. To quote from that 2019 reply, "Given the randomness of the prizes in loot boxes, they are a game of chance. However, whether the loot boxes are considered a form of gambling under the RGA, depends on whether there are in-game facilities that allow players to convert game credits or any in-game items, for example, weapons and skills, to real-world money or merchandises."

I also spoke about this subject in the 2020 MHA Committee of Supply (COS) debate, pointing out that loot boxes – consumable virtual items used to redeem a randomised selection of further virtual items – are increasingly seen, in some quarters, as a form of gambling. I highlighted a study by academics at the University of York, which found some evidence linking loot boxes to problem gambling.

Belgium has banned loot boxes purchased using real money. The UK's National Health Service (NHS) has also called on the industry to ban loot boxes. Its mental health director, Claire Murdoch, warned that these were in danger of "setting kids up for addiction."

Since my 2020 speech in this House, fresh research has strengthened the evidence of a causative link between loot boxes and problem gambling. A report produced by researchers at the universities of Plymouth and Wolverhampton in the UK, as reported by the BBC in April 2021, found that loot boxes "are structurally and psychologically akin to gambling."

According to the study in the UK, "Of the 93% of children who play video games, up to 40% opened loot boxes. Twelve out of 13 studies on the topic have established 'unambiguous' connections to problem gambling behaviour…The big spenders – the crucial 5% for the industry – can spend more than £70 or £100 a month on the boxes", the report said.

A report from the Norwegian Consumer Council from May 2022 suggests that some of these in-game gambling-like elements "contribute to a sense of anticipation and reward and are presumably designed to trigger dopamine releases that keep the player opening packs."

Madam, my concern is that, aside from young children spending money on something that is potentially of little benefit, there is the risk that our young children playing these games – or perhaps a significant minority of them craving that dopamine hit – could experience changes in their brain chemistry that would render them more susceptible to problem gambling as adults. The studies I cited show that there is some evidence to support this.

Madam, problem gambling is a serious social problem. While it affects a small percentage of the population directly, the financial losses and social harm it generates can easily affect family members and loved ones.

I am glad to note that the Gambling Regulatory Authority announced in August this year that it is looking at loot boxes. I hope that stronger action will be taken against such elements in games, beyond just a dollar cap.

To be sure, electronic games can nurture positive traits, such as strategic thinking, information processing and mental agility. But the elements in some games which suggest that players can pay to play, but with randomised results, could well result in mental habituation and attraction towards gambling later in life.

I hope the Government will ban such elements outright. But if it will not, I hope, at the very least, that it would mandate warning labels on games to draw parents' attention to the potential links to problem gambling. That would serve as a nudge to game developers to remove such elements from their games. I hope that such measures can be incorporated into the subsidiary legislation for this Bill or into another Act.

Mdm Deputy Speaker: Mr Gan Thiam Poh.

7.02 pm

Mr Gan Thiam Poh (Ang Mo Kio): Mdm Deputy Speaker, one of the greatest challenges for safety in the online universe is anonymity. Users need not disclose their true identities and profiles. Rogues, under this dark cover, send out harmful content. The proposals to require OCSs with significant reach to comply with the new Code of Practice will go some way to reduce Singapore users' exposure to such materials.

I have a clarification regarding the term "Singapore users". Does this apply to all users who register or set up accounts in Singapore, or whose Internet Protocol (IP) addresses are based in Singapore at the point of registration? In addition, do the restrictions apply subsequently at the point of access, based on their IP addresses?

Another query is about Virtual Private Networks (VPNs) which are increasingly popular with users. With VPNs, how would the Ministry ensure that Singapore users would not be exposed to harmful content?

Next, would the Ministry elaborate on the definition of harmful content within the Singapore context and from a legal perspective? In dealing with OCSs with global reach, there may be differences between what is considered desirable and undesirable, and between Eastern and Western values. Even what most of us agree on protecting children from, such as violence, cyberbullying and sexual content, has grey areas, especially for teenagers. I hope the Ministry can share its perspective on this.

Finally, a point about companies' management, culture and environment, as we have seen in the recent developments at Twitter, where staff's ability to prevent misinformation and hate messages has been greatly reduced. In such situations, how would the Ministry ensure that we are able to enforce controls on such platforms? With that, I support the Bill.

Mdm Deputy Speaker: Deputy Leader.


Second Reading (9 November 2022)

Resumption of Debate on Question [8 November 2022], "That the Bill be now read a Second time." – [Minister for Communications and Information].

Question again proposed.

1.31 pm

Mr Darryl David (Ang Mo Kio): Mr Deputy Speaker, the Internet and social media have become a part of us and are a key component of our social lives. While the boundarylessness of social media has increased our access to information, eased communication with friends and family, and facilitated freedom of expression, it has also, inadvertently, exposed us to potentially undesirable content.

The Government has, in recent years, taken proactive steps to mitigate online harm through legislation addressing online falsehoods (POFMA), foreign interference and hostile information campaigns (FICA), and online harassment (POHA). I believe the Minister referenced these in her speech yesterday.

Yet, I believe more can be done to help Singaporeans, especially our children and youths, navigate the World Wide Web, where nefarious content might lurk and negatively impact them.

I am thus heartened to know that this Bill seeks to introduce new safeguards into the Broadcasting Act 1994 by regulating online communication services (OCS), including social media services. Today, algorithms on social media platforms are designed to maximise users’ engagement by creating as many ways for users to interact with them as possible and by appealing to what interests their users the most.

It is good that we are now considering how we can hold OCS more accountable and responsible for the type of content they carry, to limit the negative impact of egregious online content on our community and society.

The Cambridge Dictionary defines “egregious” as “extremely bad in a way that is very noticeable.” Using this definition, I believe that some content on OCS would clearly be regarded as egregious – for example, materials that encourage and instruct one on how to commit suicide, content that supports and promotes communal violence, and videos that preach intolerance, extremism, terrorism and so on.

However, not all content on OCS is so clear-cut. For instance, consider the now infamous “Blackout Challenge”, which encourages participants to choke themselves until they pass out and has actually claimed the lives of several youths and young children.

The parents of one of the deceased children sued a social media platform and its parent company in May 2022 for exposing their child to harmful content, only for the lawsuit to be dismissed by a federal judge just last month, who ruled that the social media platform and its parent company were not responsible for the child’s death.

What is the Government’s stand on such questionable social media trends? I would be interested to know how online materials that might not be so overtly and blatantly harmful would be regarded if there is the potential for them to be harmful.

Some viral Internet challenges posted on OCS, while not overtly life-threatening at first glance, might have the potential to lead to significant public nuisance, injuries and possibly the loss of life. You might or might not be aware of the “milk crate challenge”, for example, which encourages the stacking and climbing of milk crates. It might not seem directly life-threatening – quite ridiculous, I know, but not directly life-threatening – yet I would imagine that it could result in significant injuries, such as dislocations, cartilage damage and even permanent spinal cord damage, if one should fall badly.

So, my question to the Government is, where would we draw the line on what would be considered egregious and what would be permitted? I must admit, Mr Deputy Speaker, as a parent of a 14- and 11-year-old, I would always want to err on the side of caution, but I also realise if we take that to the extreme, then we run the risk of anything with the slightest level of danger being banned. It would be good to have some clarity on this.

While we debate on what is egregious content, I would like to ask what the Government's stand would be with regard to egregious behaviour. If the aim is to keep the online space safe, as indicated in the title of the Bill, then behaviour that is egregious should also be dealt with.

By this, Mr Deputy Speaker, I am referring to the issue of online bullying or cyberbullying, as already referenced by other Members of the House. Bullying, especially of young people, is something that has always been an issue, especially in schools. However, with the pervasive proliferation of the online space, the issue of cyberbullying has multiplied into something that we cannot take lightly.

For children and youths, cyberbullying can be intense, overwhelming, psychologically harmful and can result in mental health issues, poor academic performance, school avoidance and even suicide ideation. While our counsellors and psychologists no doubt continue to work with the victims of cyberbullying and they are doing great work, I am sure, can we and should we expect the OCS to do more in this area? After all, if we feel that egregious online content that causes harm should be regulated and controlled, then should not egregious behaviour that could result in harmful or negative outcomes also be treated the same way?

And if OCS have the sophisticated algorithms to identify cases of cyberbullying, then should we not work towards treating egregious behaviour like egregious content and possibly remove such material from the online platforms?

Mr Deputy Speaker, legislation is an important lever in helping to protect our society from online harm, but it is not a panacea. While the Government can introduce measures to protect individuals from harmful content, individuals need to also exercise self- and social responsibility. It would be impractical, if not impossible, for the Government to monitor all content on OCS. A large part of social monitoring would have to be done by end-users themselves.

When end-users come across questionable content, would there be a dedicated Government-administered portal for them to provide feedback to? Or would end-users be expected to file reports with the respective OCS? While there are different merits to having a centralised Government-administered portal versus portals administered by the individual OCS, the crux of effective reporting is that the portals be user-friendly and that it not be too onerous for users to file reports against questionable online content. The ease of reporting will help to encourage timely reporting of such content and facilitate timely action to regulate content that could cause online harm.

Mr Deputy Speaker, I believe that everyone would agree that in our digitally evolving world, there needs to be a greater emphasis placed on online safety and safeguards, especially for children and young adults. However, it is also unrealistic to ban anything that seems remotely harmful or dangerous as that would over “nannify” the online space. Furthermore, I accept that what is deemed harmful and dangerous is, in itself, subjective.

Nevertheless, I believe that this Bill is a step in the right direction as it would compel a large number of OCS, some of which have been criticised in the past for using algorithms to encourage and even drive certain behaviours, to also play their part, a significant part, in ensuring that the online space remains safe and conducive.

I would like to conclude with a quote from Jaron Lanier, computer scientist and virtual reality pioneer. I believe this was from the documentary “The Social Dilemma”, which, if you have not yet seen it on Netflix, I would recommend all Members of the House watch for a very sobering insight into how social media works. Jaron Lanier says: “We’ve created a world in which online connection has become primary, especially for younger generations. And yet, in that world, anytime two people connect, the only way it’s financed is through a sneaky third person who’s paying to manipulate those two people. So, we’ve created an entire global generation of people who were raised within a context where the very meaning of communication, the very meaning of culture, is manipulation.”

Does this manipulation lead to harmful outcomes? If it does, then it is good that all stakeholders – Government, users and the OCS – sit up, realise this and take steps to deal with this challenge. My clarifications notwithstanding, I end my speech in firm support of the Bill.

Mr Deputy Speaker: Ms Nadia Ahmad Samdin.

1.39 pm

Ms Nadia Ahmad Samdin (Ang Mo Kio): Mr Deputy Speaker, Sir, I rise in support of the Bill. First, I would like to declare my interest as a board member of SG Her Empowerment, an Institution of Public Character that strives to empower girls and women through community engagements, partnerships and research, including on the topic of providing more holistic support to those who have faced online harms.

Sir, with this Online Safety (Miscellaneous Amendments) Bill, Singapore joins the ranks of jurisdictions, such as Australia and Germany, which have dedicated laws regulating the online space. The proposed amendments aim to protect Singapore users from harmful online content and move beyond prescriptive statutory compliance towards public accountability.

Online platforms can enhance connectivity and access to information for users.

During COVID-19, online platforms allowed us to stay in touch and check in on each other. But social media platforms can be all-consuming – for example, when doom-scrolling displaces other forms of interaction and feeds on one's insecurities, putting the number of friends and followers you have on display and subjecting physical appearance to visible likes and comments, sometimes under anonymous troll accounts. For many, it is a tricky place where being yourself is simply not enough and putting yourself out there may have unwanted consequences.

Nearly half of Singaporeans polled by the Sunlight Alliance for Action have personally experienced online harms. These include being cyberbullied, stalked virtually and befriended by people with fake identities. Yet, over 40% of those affected said that they would not take action, as they believe that nothing will come of it.

Recognising these grim realities, it is important to ensure that regulations are put in place to safeguard the online space as much as we would our physical space. This Bill sends a strong signal that the Government does not take online harms lightly. That said, I would like to highlight three specific portions where I believe there is room for the Bill to be expanded on and I will be addressing potentially sensitive issues, such as gender-based online harms and suicide, in my speech.

First, the amendment tackles online harms in general, but does not specifically address the prevalence of gender-based online harms. The gender element of online safety cannot be ignored. In fact, the Sunlight Alliance for Action found that 31% of respondents have experienced or witnessed gender-based online harms. While the Bill governs a wide range of actions by online communication services (OCS), it will not cover communications within private chat groups, such as SMS and direct messages. Gender-based online harms take many forms and a significant proportion of these occur in private communication channels.

A number of women have shared with me that they have been on the receiving end of lewd or explicit messages. In 2021, AWARE reported that 70% of technology-facilitated sexual violence seen by their Sexual Assault Care Centre involved image-based sexual abuse. And almost 30% of these cases took place on private message platforms, such as WhatsApp and Telegram, which is an increase in percentage from previous years.

I recognise that there are significant privacy concerns and logistical challenges in regulating private messaging.

Indeed, the UK's proposed Online Safety Bill came under significant fire last year for its heavy-handedness, as it would have compelled online intermediaries to proactively access the content of users' private messages. There have been cases of heavy-handed content moderation in other jurisdictions which can be seen as over-reach and an invasion of privacy, causing individuals to lose access to their email and social media accounts and leading to much consternation.

However, there is potential for the Bill to compel social media giants to share data, such as historical posts and messages, when a user who is a victim of image-based sexual abuse requests it and requires evidence, for example. This is not impossible. For instance, the Personal Data Protection Act already requires organisations to reveal the data they have on platform users, should the users ask for it. In light of this, I would like to ask the Ministry what factors it has considered in deciding that the current Bill will not provide such recourse by covering private messaging channels.

Relatedly, I do appreciate that legislation exists to deal with other aspects of gender-based online harm. For example, when Telegram groups have been found to distribute obscene images of women, administrators were charged under the Penal Code, including with distributing obscene material and facilitating the provision of sexual services. Has the Government considered streamlining all online-related harms into the Online Safety Bill, rather than relying on sections in the Penal Code and Protection from Harassment Act, placing a burden on victims to show that harm has, indeed, been inflicted? For instance, Australia's Online Safety Act, which came into effect early this year, replaced the patchwork of online safety legislation to create a more consistent and clearer regulatory framework.

Next, more can be done to ensure that minors are protected from hyper-sexualisation or inappropriate content. In Parliament, I have spoken about the need to educate youths and parents on a safer Internet culture. Minors are a vulnerable group who may not understand the way algorithms work to micro-target audiences with tailored messages and content, and who could be influenced to consume or produce content that is not age-appropriate before they have the ability to fully evaluate the harms and risks of such behaviour.

Tragically, the case of Molly Russell in the United Kingdom is a striking reminder of the very real dangers. In 2017, 14-year-old Molly died by suicide after viewing online content about self-harm on platforms. In the six months before her death, it was reported that Molly had saved, liked or shared 16,300 pieces of content on one platform alone, of which more than 2,100 in total, or 12 a day, were related to suicide, depression and self-harm.

Algorithms had pushed harmful materials to her accounts, some of which she had not requested. The coroner's inquest concluded recently, ruling that harmful social media content contributed to her death "in a more than minimal way".

While such devastating cases are not common, even one case is one too many and we must do all we can to prevent such occurrences from happening in Singapore, including imposing appropriate regulations and considering how we can hold the appropriate parties responsible for the content they allow on their platforms, which can also lead to other physical harms, such as eating disorders or self-injury.

Sir, I have heard concerned parents debating the merits of confiscating the mobile phones of their children to prevent them from seeing such information. But banning Internet access from our youths today is unrealistic, if not impossible. It also fails to educate and empower them.

I continue to believe that more can be done to support minors to make informed decisions about Internet content which is age-appropriate for their consumption and hope that we will continue to evaluate whether age verification is an appropriate measure to be introduced.

I note that, in the Bill, there are references to restrictions protecting children of different age groups and, indeed, the knowledge of a 13-year-old would differ from that of an 18-year-old. I would be grateful if the Ministry could share some examples of age-appropriate restrictions and how these are differentiated.

Of course, regulating providers is only one side of the equation. Consumers of online content need to be savvy, too, and guardians need to be educated so that they are more aware of the cyber risks that their children are exposed to and feel confident to support minors.

MCI has previously pointed to useful resources by the Media Literacy Council and I hope that more proactive steps are taken to directly reach out to parents and educate them. I look forward to hearing more about the Government's future efforts in this area.

Lastly, I would like to seek clarification on the definitions of egregious content. One section of the Bill lays out the state's enhanced ability to regulate online communication services if they are found to display egregious content, including content advocating suicide or self-harm, physical or sexual violence, terrorism and so on.

While I agree it would be appropriate for access to extremely harmful content to be disabled in Singapore – for example, when misinformation outrightly serves to sow discord between different faiths or promote violence and harms – this is only loosely described and illustrated in the Bill and consultations. This could be problematic and I also wonder whether certain forms of satirical or creative content may be affected.

When it comes to regulation in this space, there are no easy answers. I recognise that in the ever-changing online space, egregious content can take many forms and, hence, it might be desirable for legislators to retain some flexibility when defining the parameters.

However, in borderline cases, whose standard of egregious content will prevail? Will there be a committee or council to decide on these boundaries? As the Internet develops and new types of content emerge, will a deliberative body be set up to monitor and revise the definition of "egregious content"?

How will such a body consult with Singaporeans to shape changes to these definitions to ensure that the standards of egregious content are widely accepted? Mr Deputy Speaker, in Malay, please.

(In Malay): [Please refer to Vernacular Speech.] Last year, a survey of an obscene nature involving 12 female relief teachers was uploaded on a social media platform. What was even more disappointing was that over 1,000 people took part in the survey.

This was a very appalling act and an affront to the dignity of women.

Many women grapple with online dangers and it could cause long-term trauma.

In addition, we have also read accounts of photos or videos of women, ex-girlfriends or ex-wives being uploaded on social media as a form of revenge or simply for entertainment.

I understand that, presently, this code of practice will only cover regulated OCS that MCI has determined to have significant reach or impact in Singapore. Here, I would like to ask how often the list of regulated OCS will be reviewed, in view of how quickly social media applications and platforms develop and how fast information can be disseminated, so that action can be taken decisively and visibly to mitigate the risk of harm to Singaporean users from exposure to harmful content and to remind users to be responsible.

While we may not be able to eradicate all harmful online content, with the introduction of this Bill and greater responsibility on the part of technology platforms, I hope we can make the Internet a safer place for everyone.

However, legislation and actions taken by technology platforms alone are not enough. Parents should also educate children about proper conduct and ethics when they are online. At the same time, our community members, whether male or female, young or old, should not be afraid to lodge a report if they encounter any harmful content.

(In English): Deputy Speaker, Sir, the digital landscape is constantly changing and regulations will always be playing catch-up to realities on the ground. The flexibility in the Bill demonstrates an awareness of this, allowing legislators room to respond to emerging needs and OCS providers with some leeway and timelines to address cases, depending on each circumstance's complexity.

Moving forward, it is likely that further amendments may be needed to respond to new developments in the digital space. Any new regulations should be developed as a whole-of-society effort, in consultation with the people and private sectors, too. Mr Deputy Speaker, Sir, I support the Bill.

Mr Deputy Speaker: Dr Shahira Abdullah.

1.53 pm

Dr Shahira Abdullah (Nominated Member): Mr Deputy Speaker, Sir, an online poll conducted by the Sunlight Alliance for Action (AFA) in January 2022 found that nearly half of over 1,000 Singaporeans have personally experienced online harms – the bulk of them youths. About 43% of them would not take action against it, thinking that it would not make a difference.

This is concerning. It is clear that although online spaces have opened up a whole new world of information for youths, allowing them the opportunity to express themselves and connect with people globally, they have also exposed them to new forms of danger that previous generations did not face, such as cyberbullying and online grooming.

Sir, an important aspect of the Bill is that it has attempted to define this content, identifying seven categories of egregious harm.

I agree with these categories as they will bring about the most harm to society. Having these definitions gives enhanced clarity to online communication services as to what is harmful in Singapore's context.

For example, in 2020, a Facebook post from the NUS Atheist Society, which is actually not affiliated with NUS, suggested using the Bible and Quran as alternatives in the event of a toilet paper shortage. In that situation, IMDA stepped in to ask Facebook to disable access to the post before it was taken down.

As a healthcare professional, I am also gratified by the inclusion of content that constitutes a public health risk. A 2021 systematic review, which studied the impact of fake news on social media and its influence on health during the COVID-19 pandemic, found that social media platforms have contributed to the infodemic, which can perpetuate fake news and information. This, in turn, has caused distrust in governments, researchers and health professionals, which directly impacts people's lives and health.

I support this Bill. However, I have queries on certain aspects of it.

Firstly, section 45H directs the online communication services (OCS) provider to disable access by Singapore users to egregious content. May I ask whether there is a specified period of time within which the OCS would have to disable the content? Certain content may be more harmful and may need to be disabled immediately to mitigate the damage. One case in point is the Christchurch attack in 2019. Using a GoPro camera, the attacker filmed himself using Facebook Live, gunning down 51 people in two mosques. The video quickly went viral and was viewed about 4,000 times before it was removed. As Washington Post reporter Drew Harwell summed up in a tweet, "The New Zealand massacre was livestreamed on Facebook, announced on 8chan, reposted on YouTube, commented about on Reddit and mirrored around the world before the tech companies could even react."

Secondly, if the OCS fails to comply with IMDA's direction to disable access or is unable to do it within a specified time, under section 45I, Internet service providers (ISPs) can be directed to block Singapore users' access to the OCS. Could the Minister clarify how long the OCS is expected to be blocked in such cases and what factors are used to determine the length of time?

Thirdly, I would also like to seek some clarity on who would be designated a regulated online communication (ROC) provider. Although section 45K(1) sets out some criteria, they are factors that do not bind IMDA and are actually quite wide. For example, how will IMDA quantify or measure "the extent and nature of the effect that different types of OCS have on the people of Singapore"?

Fourthly, section 45L sets out the boundaries of IMDA's powers in the code of practice. However, it is not clear how IMDA intends to implement the different objectives under section 45L. Could I seek clarification regarding the type of measures that may be introduced by IMDA under the code of practice? For example, how does IMDA intend to have OCS prevent children of different age groups from accessing content that presents a material risk of significant harm to them? Most already have terms and conditions that require users to declare they are above a certain age. If a child user declares falsely, will the OCS be liable for the child’s actions? If an OCS sets out community terms that explicitly prohibit egregious content but a user posts such content, will the OCS be liable for the publishing of such content? What safeguards are there to ensure the audits requested by IMDA are not excessive?

Fifthly, technology evolves at a rapid pace. In the time since the UK started drafting its online safety bill, TikTok emerged. We also now have the metaverse. How often will the code of practice be updated to keep up with these changes?

Finally, I would also like to draw attention to the work of the human content moderators. The constant exposure to disturbing and inappropriate content takes a toll on their well-being. In 2019, Cambridge Consultants reported that "moderating harmful content can cause significant psychological damage to moderators. The psychological effects of viewing harmful content is well documented, with reports of moderators experiencing post-traumatic stress disorder (PTSD) symptoms and other mental health issues as a result of the disturbing content that they are exposed to." May I suggest, therefore, that we also make it a part of the code of practice to protect our protectors – the content moderators who risk personal wellness for social good?

In addition to continually improving artificial intelligence (AI), OCS can protect content moderators' mental health through a variety of practices, such as providing access to mental health services and support, as well as imposing exposure limits.

Overall, I am glad that in the drafting of this Bill, extensive industry and public consultations were carried out. However, tackling harmful online content requires a whole-of-society effort.

As the good Minister stated yesterday, laws are not a silver bullet. It is also insufficient and inappropriate to place the onus of regulating such content on OCS alone without recognising that OCS are merely tools or platforms that reflect the way individuals use them.

Ultimately, individuals – parents, children, teachers – as well as law enforcement authorities and governments all have a role to play. Everyone has a part to play by not engaging in online harm, by proactively taking steps to prevent exposure and by supporting those who have become victims of it. Mr Deputy Speaker, Sir, clarifications notwithstanding, I support the Bill.

Mr Deputy Speaker: Ms Yeo Wan Ling.

2.00 pm

Ms Yeo Wan Ling (Pasir Ris-Punggol): Mr Deputy Speaker, Sir, with Singapore having one of the highest rates of digital penetration globally, the ubiquitous influence of the Internet and social media among Singaporeans has been well documented and is certainly here to stay. Today, social media platforms are not merely a conduit through which people interact with one another; they serve as an extension of an individual's being, a representation of who they are and what they stand for. Given how much our digital spaces shape our physical realities, the formation of online communities and groups mirrors the physical society we live in. The actions we take and the information we read online therefore bear no less significance than those we take and read in person and in print.

If we accept that our societies are governed by rules and laws that aim to maintain peace and harmony in our nation, then why should our online societies be absolved of such a need? It is for this reason that I welcome the proposed amendment to the Online Safety Bill and, in particular, two significant requirements that were laid out in the new measures.

Firstly, the call for greater transparency from social media platforms in their decision-making process and, secondly, the obligation for these platforms to provide practical guidance and guidelines to educate local online users, along with the implementation of simple ways for users to report harmful content and unwanted interactions.

First, greater transparency. Behind the personalised content that we see on our social media accounts, there exist complex algorithms with various inputs and metrics that determine what is recommended to us. With online platforms expected to provide greater clarity regarding how they intend to protect local users from harmful content, users are not only granted the opportunity to make informed decisions but are also able to better understand the decision-making process with regard to appeals and requests they make surrounding egregious content.

Secondly, the requirement for practical guidance from social media platforms. To successfully build a safe online environment, it is imperative that users are well-educated on what constitutes harmful conduct and content, and it is my hope that the new measures fulfil this need. Furthermore, providing easy and open communication channels will best allow users to voice their concerns as users play an equal part in fostering a safe online environment.

While these developments are, indeed, a first step in the right direction, it is important that we do not just stop at this juncture. While the amendment primarily focuses on specific egregious content that includes posts advocating suicide, self-harm, terrorism, sexual exploitation and other material that risks damaging our social fabric, these are not the only issues that constitute online harm. If we recognise that our online society is very much an extension of the physical society we share, then we, as a country, should strive to regulate our social media just as we would our physical reality. This is especially important for issues that arise natively from the use of social media: issues such as cyberbullying, doxxing or even revenge porn, whether in users' public and private posts or in the platforms' direct messaging services.

Even as Singapore has enacted legislation, such as the Protection from Harassment Act (POHA), to deal with such specific issues, these laws, currently, do not afford the capacity to hold social media platforms responsible regarding their action, or lack thereof, in dealing with such situations, especially where the platform wields significant control over the speed of the entire process and the final outcome. For a victim to have no agency over information about their private lives being shared online, such an experience can be extremely destabilising and traumatising and, in certain cases, this could even place them at risk of physical harm.

I am reminded of a rather vexing and traumatic personal story shared with me by a young lady who was scared, traumatised, worried and confused. As a victim of doxxing, she had her personal details – her name, school, employer, amongst other things – revealed online against her wishes by an anonymous account. When she attempted to have the post removed and sought recourse from the social media platform, the lack of urgency in dealing with the matter exacerbated the already extreme levels of vulnerability and fear she experienced, knowing that every extra second the post was allowed to remain online was another opportunity for a vile user to re-victimise her. Even though she had also lodged a Police report, there was no certainty that the report would expedite the removal of her personally identifiable information that had been shared online.

On my part, I had reported the offending account to the social media platform on the same day that the young lady had reached out to me for help. I received an acknowledgement note almost immediately from the platform saying that they had received the report and that they would use their community guidelines to review the account. It was not until four days later that I received a note from the social media platform saying this, "We didn’t remove xxx account. Because of the high volume of reports we receive, our team hasn’t been able to review this account. Bullying and harassment are not okay and are against our community guidelines. There are a few things you can do to avoid seeing accounts you find upsetting. If you don’t want to see xxx on our media platform, you can unfollow, mute, or block them to hide their posts and comments from your feed. If you think this account has posted specific content that shows bullying or harassment, please let us know by reporting the relevant post, comment or story, so we can review it. Case closed."

I note that at the time of the report, the account only had one post, which is the one the young lady had brought to my attention.

It was extremely troubling to know how helpless she, and even I, felt throughout this whole situation, as she received no update or communication from the social media platform until the outcome was finalised; the post was removed only after an extended period of time. The account, however, remains active to this day, leaving the victim at the mercy of the anonymous user, who could yet again subject her to another episode of doxxing. Clearly, social media platforms ought to take more responsibility in such situations.

Social media platforms, you can do better, and we can do better for online safety.

Mr Deputy Speaker, while I know that we cannot reasonably expect social media platforms to transform their processes and policies overnight, I hope this anecdote serves as a reminder of the extent of harm that our local users can potentially be subjected to, and perhaps provides a reference point and an impetus from which regulation can derive its requirements. It is, therefore, heartening to see the call for greater transparency surrounding the decision-making processes of social media platforms, along with the tools, resources and enforcement options available to them. With greater clarity, users are better informed and well-assured about how social media platforms can support them when they experience online harm. Yet, these measures do not fully grasp the emotional and mental turbulence that one experiences as a victim of online harm. The apparent lack of a human touch in the way social media platforms deal with online harm reflects a certain coldness that does not work to protect and support victims in times of need, and perhaps should be re-evaluated.

In conclusion, if the new laws enacted aim to mitigate the present online harms, then it is my hope that we will continue to expand our remit and regulations to protect our users from having to suffer from experiences, such as that which I have recounted. I support the Bill.

Mr Deputy Speaker: Ms Janet Ang.

2.09 pm

Ms Janet Ang (Nominated Member): Mr Deputy Speaker, I stand in support of the amendment to the Online Safety Bill.

This Bill is very timely and I appreciate MCI taking proactive steps to put safeguards in place to protect Singaporeans from harmful content, especially those created and distributed with malicious intentions to disrespect, bully, cause division in society, degrade the dignity of persons and of humanity. In her speech yesterday, the Minister had given many examples of such harms and I trust that we do not need to be persuaded further that it is urgent and imperative for the amendment Bill to pass.

After listening to hon Member Tin Pei Ling's horror stories involving an avatar in Web 3.0, I am more convinced than ever that we need to take courage to act urgently to protect ourselves, our children and our children's children.

I support the approach which will be taken under the amended Act, but I do have a few questions for the Minister.

First, what are the challenges envisaged by the Ministry with regard to enforcement of the law and how does the Ministry intend to address those challenges?

Second, it is good that education of students in schools and Institutes of Higher Learning (IHLs), as well as of the general public, will be undertaken in conjunction with the legislation. Education will increase awareness among vulnerable communities and the general public of the prevalence of harmful content lurking on the web, and of how they can participate in reporting sites or services that carry such content and, in doing so, be part of the "neighbourhood police" that detects and helps catch bad actors on our Internet and social media platforms. Just as in the fight against scams, awareness and alertness in the community can go a long way in fighting the proliferation of harmful, malicious content. How does the Ministry intend to organise such neighbourhood policing against harmful online content? How can volunteers and civic-minded, tech-savvy Singaporeans and residents support the Government in this effort?

Third, it is critical that the Government is able to act quickly when there is non-compliance with the Act, so that harmful content does not spread and cause a social epidemic or even pandemic. As with COVID-19, we needed a circuit breaker to ensure that the common good of our society and the social health of our citizens and people are protected. Whether content is harmful or otherwise may be said to depend on the values of individual adults. Who will decide what content is harmful and thus subject to the legislation? How does the Government intend to facilitate the voices of society being heard in determining what is harmful, without compromising the need for swift action to take down bad actors?

How will the Government handle cases when the content originates from overseas and is not distributed by locally-licensed online communication service providers?

How will MCI leverage technology in this war against online harmful content?

Harmful, malicious content spreads like a virus and we need the Government and the whole of society to fight this virus of harmful online content together. This Bill, which is being debated today, acts downstream, when the content is already out there, and the actions laid out in the Bill are needed to stop the spread and protect Singaporeans from being harmed.

As parents, educators, Government and professionals in the business of communication, we need to focus on moving upstream and pay attention to the need to assist people, especially young people, to develop sound and critical sensing, and to learn how to distinguish truth from falsehood, right from wrong, good from evil.

As a country, it is on us that the soul of our nation and the values of our society are not put at risk, even as I appeal to the conscience of creators of content and online service providers everywhere to do no evil. Notwithstanding my clarifications, Mr Deputy Speaker, I support the Bill.

Mr Deputy Speaker: Mr Louis Ng.

2.14 pm

Mr Louis Ng Kok Kwang (Nee Soon): Sir, I thank the Government for holding a public consultation and multiple engagement sessions in preparing for this Bill.

After all, the Government cannot, by itself, ensure online safety for Singaporeans. Internet companies, experts, parents and young people are all essential partners it must work with.

The discussion on online safety also comes at a time when I am being pressured by my daughter, Ella, to allow her to play Roblox. Actually, she has been nagging me for years to be allowed to play this game, but it has intensified lately as all her friends are playing this game. I am terrified of her being exposed to harmful and inappropriate content online. All parents are. I hope this discussion and this Bill will make steps forward to ensuring a safer online space for our children.

I have three clarifications on this Bill.

My first clarification is about the definitions of "egregious content" and "harmful content".

First, the Bill defines one category of "egregious content" as content that is "likely to cause feelings of enmity, hatred, ill will or hostility against, or contempt for or ridicule of, different racial or religious groups in Singapore." Can the Minister share why this category does not include content that has a similar impact on other demographic segments, such as gender?

Second, IMDA's draft of the Code of Practice for Online Safety lists six categories of harmful content. Social media platforms must "minimise users' exposure" to such content. In the final version of the Code, will IMDA provide more specific category names, detailed explanations for each category or sub-categories for exclusion? The ambiguity makes it possible for educational or otherwise beneficial content to be caught in the dragnet for harmful content.

Third, will IMDA also consider adding new categories for harmful content? For instance, harmful content should include content that promotes extreme beauty standards. Such content harms our youths by giving them unrealistic expectations. It affects their self-esteem and encourages them to engage in unhealthy behaviour to meet these standards.

My second set of clarifications relates to the scope of these new provisions.

It seems clear that these regulations will apply to platforms like Facebook, YouTube or TikTok. But many other companies also use user-generated content. For example, e-commerce platforms may rely on user reviews and comments. Online games, as I shared earlier, may have extensive user interaction. Can the Minister clarify if social media services also include online platforms whose core business is not social media?

Private or domestic communications are also excluded from these provisions. Can the Minister clarify if this excludes direct messages (DMs) or other user-to-user interactions? This is a potential channel for harmful content to be transmitted. For example, a study has found that one in 15 DMs sent by strangers to high-profile women is potentially abusive.

Can the Minister also clarify if semi-private communities, such as Discord servers or Telegram groups, will be treated as private or domestic communication?

My last set of clarifications relates to whether we can do more to help individual victims of harmful content. Victims of revenge porn, cyberbullying or doxxing suffer direct harm to their lives.

Since 2016, AWARE's Sexual Assault Care Centre has supported 747 clients who experienced technology-facilitated sexual violence. Survivors suffer a loss of dignity and privacy and experience an uphill battle in containing the spread of content once uploaded onto the Internet.

I have three suggestions on how we can help these victims.

First, funds from penalties under this Bill can be set aside to support these victim-survivors. These funds should be used in partnership with civil society groups who are already active in the community in helping these victims.

Second, we can create a general duty of care on online communication providers to compensate individuals for harm they suffer due to the platform's negligence in managing harmful content. Platforms could be negligent if they are too slow in taking down harmful content or fail to meet the standards in the Code of Practice for Online Safety. This duty of care allows the victim to be compensated for their harm under the law of negligence.

This duty of care should apply not only to large social media platforms, but to all online communication providers, as the potential harm does not discriminate. Bad actors may also use small platforms to escape detection, sharing harmful content there and then using links to circumvent the safeguards on larger platforms.

Finally, as long as end-users believe they can hide behind the cloak of anonymity of the digital world, they will continue to try to publish harmful content. Affected individuals have to rely on themselves to work with the online platform to get the content removed.

Going to the Police may not be useful as the Police may lack the jurisdiction or capability to investigate matters of this nature. To ensure sufficient deterrence against end-users, will the Government consider increasing resources and training for the Police to assist victims and take swift action against end-users who post harmful content? Sir, notwithstanding my clarifications, I stand in support of the Bill.

Mr Deputy Speaker: Mr Mark Chay.

2.19 pm

Mr Mark Chay (Nominated Member): Mr Deputy Speaker, thank you for the opportunity to join this debate on the Online Safety Bill. I believe that this is a move in the right direction in making the Internet safer for all to use.

I am comforted that measures are now being taken to combat the proliferation of violent and harmful materials online. Through this Bill, we are sending a strong statement that, in Singapore, we will regulate our online communication services and hold them to a high standard of conduct because there is no place for the public sharing of content that can reasonably be deemed to have a harmful outcome or intent.

The Bill introduces a new part to the Broadcasting Act to regulate Online Communication Services (OCS). Social Media Services (SMS) are listed as a type of OCS. Under the Bill, SMS will need to implement measures to limit local users' exposure to harmful content, thus forcing OCS to be more accountable to users. As there are many forms of social media, it would be good if the Ministry could clarify whether messaging platforms, such as WhatsApp, Telegram, WeChat and Kakao Talk, would be considered SMS.

Mr Deputy Speaker, I am also pleased that, in Singapore, we recognise that the law is only one vehicle of the social order. With respect to safeguarding children from online harms, rather than tackling this issue primarily from the legislature, robust and comprehensive education programmes should be in place to complement these policies. I am particularly heartened to see that MOE's Character and Citizenship Education syllabus includes "cyber wellness". As part of the syllabus, students learn about navigating cyberspace. Students gain knowledge and skills to, one, harness the power of information and communication technology (ICT) for positive purposes; two, maintain a positive presence online; and three, be safe and responsible users of ICT.

Going a step further, I would suggest creating easily accessible, safe counselling spaces in schools or creating online chatrooms for children to access counselling services if they are adversely affected after an incident or exposure.

Ideally, parents should be the first port of call to address any feelings of confusion, disgust or anxiety that can follow exposure to online harassment. The reality is that not all parents are equipped or prepared to have such conversations in a helpful, supportive or productive manner. Children, and even adults, can be severely traumatised by explicit content and hateful language. When trauma is left unaddressed and repressed, it can manifest itself in unhealthy ways. In the code proposed for designated SMS, I would like to propose that tools allowing children and parents to manage online safety, together with directions on where to seek help if exposed to harmful or inappropriate material, be made available.

Mr Deputy Speaker, while this new Bill puts a lot of emphasis on moderating and cleaning up content on social media services and platforms, harmful content also exists in online games.

Toxic behaviours, coarse language, harassment, threats and explicit sexual and violent content are examples of harmful content found in online games. And while this Bill is not intended to address gaming, perhaps the Ministry can engage the e-sports and gaming community and consider similar protections for players and gamers in the future. Today, the industry is already recognising toxicity as a problem and some major gaming companies have taken initiatives to launch their own anti-toxicity programmes, but with little success.

As technology and graphics become increasingly sophisticated, many of the most popular games are thrilling and exciting in part because they showcase violent behaviour and explicit conduct with realistic detail. Storylines and game themes can also depict disagreeable messages and social behaviours. With Big Tech firms investing heavily in the metaverse, we can expect more pervasive, more immersive technology and experiences very soon in our everyday lives. Clearly stated age ratings, warnings and age restrictions are already in place. But are these enough? Can we better protect children from exposure to sexualisation and violence? I would like to encourage the Ministry to consider how players and gamers can be protected as well.

Online gaming may also expose vulnerable groups to loot boxes and gachas, which are commonplace in game design and architecture. They create an element of chance and reward which, in turn, creates a stickiness to the game. This element of chance may seed undesirable habits which affect not only the players, but also their families and the persons around them, much like how problem gambling affects a community and not just the gambler alone.

At this juncture, I would like to declare that I am an officer at the Global Esports Federation. And I would like to say that not all games are violent or gratuitous. There are many genres of games that focus on strategy, desirable values and are productive. I agree with the Minister that the Internet and technology are changing rapidly and as we debate about the Online Safety Bill in the context of SMS today, we can plan for legislation on gaming in the future. We should keep the conversation going. In fact, we should speak openly about gaming and how to game responsibly. I would like to suggest that the Ministry, together with other relevant agencies, such as MOE and MHA, come together with the gaming and e-sports community to craft a path forward.

In closing, Mr Deputy Speaker, notwithstanding my clarifications, I am in support of the Online Safety Bill. The Internet was created to be a tool of empowerment and should be a safe space for Singaporeans to explore and engage freely and securely.

Mr Deputy Speaker: Mr Melvin Yong.

2.26 pm

Mr Melvin Yong Yik Chye (Radin Mas): Mr Deputy Speaker, I stand in support of the Bill, which proposes new measures to tackle harmful content on online services accessible to users in Singapore.

Sir, the Bill proposes a wide range of measures to improve online safety of Internet users in Singapore, combat harmful content and empower users with information and tools to protect themselves from content which is harmful or detrimental to their well-being. In my speech today, I will touch on how we need to protect our most vulnerable groups and suggest ways that we can further enhance protection, particularly for children.

Sir, it is a fact that Internet users in Singapore are not immune from harmful online content. And we all know that content that is racially offensive, that promotes violence against individuals or certain groups and those that try to cause divisions among racial and religious groups, can have a destabilising effect on our society.

In her opening speech, the Minister also mentioned various engagement sessions that showed the growing concerns amongst Singaporeans with regard to online safety. The proposed measures under this Bill are, therefore, timely, as they will require online communication service (OCS) providers to be regulated and to remove harmful online content when asked to do so.

Actually, I would argue that legislation has become necessary because OCS providers have failed to properly self-regulate and prevent harmful content from being posted on their platforms. Today, many, if not all, OCS providers have robust algorithms that remove copyrighted content almost instantaneously. The technology to remove content automatically and quickly already exists, and I hope that the social media sites can channel this technology beyond policing copyrighted content and focus also on what is deemed socially harmful content.

However, just as an OCS provider can use technology to aid enforcement, consumers can use technology to circumvent blocked content. In cases where an OCS provider disables access by Singapore users to harmful content, the content can still be easily accessed with a simple subscription to a VPN service. I would like to ask how the Ministry plans to address this gap.

I would also like to ask how quickly an OCS provider must comply with a directive issued by IMDA under this Bill. I note that the current draft Code of Practice for Online Safety for Designated Social Media Services does not specify a timeline for compliance. I understand that in some European countries, OCS providers are required to comply with a take-down notice within a stipulated time. I am, therefore, curious to know the reasons behind the omission of a timeline to comply with IMDA directives.

The speed at which a video of the 2019 New Zealand mosque shooting spread across social media platforms has demonstrated yet again that tech companies are still struggling to control content, especially from popular social media services that offer livestreaming of events.

I read that, in Germany, social media companies, such as Facebook, Twitter, Google and YouTube, are part of a self-regulatory task force committed to removing harmful content quickly. These companies have introduced or improved internal reporting mechanisms and employed local experts to undertake supervision of content. Will the Ministry consider working with the key OCS providers to establish such a task force in Singapore?

Mr Deputy Speaker, the second half of my speech deals with the need to protect our children from harmful online content.

Children today have grown up with the Internet, and online media is a key part of their daily lives. According to various research, social media and other online media have a direct impact on our children. The content that our children see online shapes the way they think and influences how they perceive themselves and others.

Despite many social media services having minimum age limits for new account sign-ups, many young children do have a social media account. Some even have two accounts! One for their parents to see, and another for their friends which, typically, includes content deemed not suitable for the parents. Against such a backdrop, we must strive to protect our children when they are at an impressionable age and ensure that they have a safe space when they access the Internet and use social media services.

For our vulnerable young children, I hope that companies can put in place speed humps to slow down their use of the social media services. We can do so in two ways.

First, I call for mandatory age verification for all new sign-ups, to ensure that new accounts comply with the terms of use set out by the social media service. While I note that the draft code of practice proposes that services must minimise users' exposure to harmful content through measures such as content moderation, these measures are imperfect and many seemingly innocuous posts can end up being harmful to the very young.

Second, we should introduce mandatory screen time limits for very young children. Excessive screen time has been linked to poor developmental outcomes and we must ensure that our very young do not get hooked on the never-ending spiral of scrolling mindlessly through social media content.

Sir, in addition to introducing speed humps to control online media consumption by our children, we must also do all we can to tackle online bullying that many children face.

According to statistics from MOE, the number of bullying incidents reported each year to schools has remained low, at about two incidents per 1,000 Primary school pupils and five incidents per 1,000 Secondary school pupils. I wonder if these figures included online bullying.

In August 2022, a video emerged on social media showing three teenage girls beating up another girl in a carpark. For every such incident that goes viral, how many more fly undetected under the radar? What about more subtle, but no less harmful, forms of online bullying, such as posting hateful and "troll" comments among peers?

Bullying, including online bullying, impacts the mental health of our children and we need to do all we can to tackle this. I hope that the Ministry can review the proposed code of practice and allow users to flag comments or posts that can be deemed as bullying behaviour, so that such harmful content can be addressed promptly.

Beyond social media sites, online bullying behaviour is prevalent in online games, too. As advisor to the Singapore Cybersports and Online Gaming Association, I am greatly concerned about this area, as our children tend to treat what happens in online games very seriously.

A key concern about the popularity of video games is that so much of the content is hypersexualised. Pornography is often embedded in these games and many games do glorify violence and sexual exploitation. In fact, a quick online search will surface many stories about how violence towards women is encouraged amongst gamers who hide under the cover of anonymity. For example, in the game "The Sims Online", a "cyber-brothel" was developed by a 17-year-old boy using the game alias "Evangeline" and customers paid sim-money for cybersex by the minute. It was later reported that his account was cancelled, but no legal action was taken against him.

Sir, online games have evolved from single-player, single-console activities to massively multiplayer online interactions, which often integrate playing with networking to build a huge online community. As such, many children unwittingly interact with strangers for the first time on these gaming platforms, significantly raising their online safety risk. I really hope that the Ministry can include regulation of the online gaming space in the next review of the code of practice.

Sir, the business models of social media services hinge on capturing our attention span. The longer we spend on the site, the more advertisements they can serve and the more revenue they can generate. Children, unlike adults, cannot make this trade-off consciously, and they do not understand the potential unknown side effects of being hooked on these social media sites. We must, therefore, strive to protect them and introduce speed humps to manage their consumption of such online content. We must also do all we can to stop and prevent online bullying, which can have real and sometimes deadly real-world consequences. With that, I support the Bill.

Mr Deputy Speaker: Mr Saktiandi Supaat.

2.37 pm

Mr Saktiandi Supaat (Bishan-Toa Payoh): Mr Deputy Speaker, Sir, according to Statista's research department, an estimated 5.29 million Singapore residents accessed the Internet in 2021. If you take into account users who access online social media platforms using their mobile devices, the Internet penetration rate rises from 89.5% to around 90%. The age of such users is also getting younger and younger. When speaking to children and their parents at community events, I have found that it is no longer surprising for Primary school children to have their own accounts on at least one or two social media platforms.

This Bill, therefore, is timely in putting in some minimum protections from the increasingly widespread use of online platforms. I have some clarifications to seek on the Bill, as well as several comments on how we can make the online aspect of our society a safe and secure place for Singaporeans and Singapore residents.

First, the principal amendment of the Bill is the addition of a new Part in the Broadcasting Act that will allow IMDA to issue directions to providers of online communication services to disable or block access to "egregious content". "Egregious content" is then defined in the new section 45D to include content that advocates or instructs on suicide, self-harm, child nudity and terrorism.

In connection with limb (b), I have two questions.

First, does "content that advocates or instructs on violence or cruelty to, physical abuse of, or acts of torture or other infliction of serious physical harm on, human beings" catch videos that encourage Singaporeans to participate in a foreign armed conflict? I recall that when the Ukraine conflict broke out in February this year, there were people ringing up the Ukrainian Embassy in Singapore wanting to join the fight in Ukraine. MHA even had to put out a statement to warn that it would be an offence to join a foreign war.

Second, why do we stop at violence, cruelty, abuse or torture on human beings? Are videos of animal cruelty or abuse not equally offensive? Perhaps this can be prescribed as "egregious content", too, under other Part 10A regulations.

I would also like to seek clarifications on whether there are any other categories of "egregious content" that the Ministry or IMDA is intending to prescribe under regulations. Will we also look to censor content which explicitly promotes lifestyles which are not in line with what is presently accepted as norms? Or the commercialisation of obscene and nude photos and videos? Should we also designate obvious scam advertisements as "egregious content"? I look forward to the Minister's response on this.

The amendments in this Bill will allow for more effective enforcement against "egregious content", by placing the onus on the online platform providers to disable or block access. On social media platforms with millions, if not billions, of users, it is more efficient to regulate the platform rather than individual uploaders.

The Bill provides that an electronic service provider is covered so long as it allows content to be accessed in Singapore, unless it is an "excluded electronic service". This would extend to a service that is provided from outside Singapore. For such foreign service providers who do not have a place of business or headquarters (HQ) in Singapore, how does the Ministry intend to effectively enforce the regulations, which are premised on giving a direction to the service provider and making non-compliance with such directions a criminal offence? What will stop these foreign service providers from keeping their operations strictly outside Singapore, while flagrantly delivering "egregious content" into Singapore?

It appears from the new section 45R(3) that there will be individuals tasked to monitor and flag "egregious content" for the purpose of enforcing these new laws. Would a new department be set up under IMDA or the Singapore Police Force to do this? What would be the size of this enforcement department? This is relevant because the effectiveness of protection will depend on the speed at which we are able to block and "take down" offensive content.

For example, back in March 2019 – I believe this example has been shared in many speeches before me – a far-right extremist livestreamed himself on Facebook Live shooting and killing 50 people in two New Zealand mosques. Facebook did not block the livestream. Facebook, YouTube and Twitter had to fight to take down more than one million copies of the videos circulating online within the first 24 hours. The damage was already done.

I am sure most will agree that that is an obvious example of "egregious content". But there may also be other types that are more debatable, especially where Singapore's tolerance level and definition of racially or religiously offensive content may differ from the rest of the world or even from the perspective of the content provider or the service provider globally.

How fast do we expect to act to issue disabling and blocking directions to electronic service providers? Can the Government share some detail about the processes it intends to put in place to arrest any unforeseen scenarios promptly?

Even after the electronic service providers are issued the disabling or blocking directions, how long will they be given to comply with the directions given? Will the timeframe be stated in the direction? This is especially significant because the new section 45G reverses the burden of proof onto the electronic service provider to show that it had done the best that was reasonably practicable to do when it is charged with an offence under section 45E(1) or 45F(1).

Perhaps we can consider enhancing a second prong of enforcement by relying on user reporting avenues to flag objectionable content. This may result in faster detection because we effectively rely on a wider pool of eyes to identify and flag "egregious content". From a regulatory standpoint, we could legislate more frequent audits, instead of on an annual basis, of how quickly and effectively these electronic service providers act on the user reports they receive and of the systems and processes they have put in place.

Mr Deputy Speaker, I also note that the Bill provides for the potential expansion of the regulatory scope through the issuance of subsidiary legislation, orders or online codes of practice. While the Bill, if passed, starts off by targeting "social media services" only, the Minister will be authorised to amend or add to this list of services in the Fourth Schedule by publishing an order in the Gazette.

May I ask what other types of electronic services may, potentially, be brought within the scope of these new provisions? There is also going to be enhanced regulation of certain platforms that are designated as a "regulated online communication service", having regard to the range of services provided to Singapore end-users and the extent and nature of the effect of such services in Singapore.

Has the Ministry provisionally identified which platforms will be designated as a "regulated online communication service" and whether the Ministry has already engaged with these platforms?

I give one example. We had a hackathon at Toa Payoh East CC and I chatted with some of the youths there. They shared with me a platform called "Discord" – some of the gamers in this House may know about it. It is a gaming platform where gamers can chat, amongst others, within the Discord server. It is an online chat service. However, what is significant is that it has more than 140 million monthly active users now and it is being used beyond gaming. So, it is one example of an online communications service that has evolved and where new things have come up.

These "regulated online communication services" may be subjected to one or more online codes of practice to be issued by IMDA, and the new section 45L states that the Minister will prescribe certain consultation processes to be followed before an online code of practice can be issued, amended or revoked. Are these processes already ready to be presented to Parliament for consideration? If not, when can we expect these processes to be finalised and legislated?

Besides a robust consultation process leading up to the issuance of an online code of practice, may I also suggest that the Ministry think about a periodic consultation process that can be put in place so that industry feedback can be canvassed on amendments that need to be made to an existing online code of practice.

May I also suggest that we set up a council and, possibly, an advisory panel, drawn from a diverse group of representatives, to draw up and update the online code of practice?

Mr Deputy Speaker, I would like to conclude by sharing some thoughts that are beyond the scope of this Bill, but which I believe we should consider in our safety review of our online landscape.

First, I understand that this Bill presently targets "egregious content" which should not be appropriate for consumption regardless of age or maturity. On behalf of concerned parents with young or teenage children like myself, I would like to ask if the Ministry foresees that some of the provisions here may be watered down in future to provide for age-specific classification of objectionable content. My worry there is that it is simply not as easy to verify one's age in the online world as checking one's EZ-Link card or NRIC at the cinema.

Second, the algorithms on social media platforms that create "echo chambers" with like-minded individuals, while shunning opposing views, also risk deepening divides in our society. While it is understandable for platforms to push products matched to user interests so as to maximise advertising revenues, there should be certain controls on pushing views that merely reinforce an individual's echo chamber. The latter will only push people towards extremes and worsen the divisiveness of our society.

Third, there is also an observation that society will become less and less informed as social media platforms deprioritise news and other serious content in favour of entertainment-related content. This is especially acute, given the success of TikTok in recent years, where user clicks are driven more by less serious content and platforms are adapting their algorithms to stay commercially competitive. Mr Deputy Speaker, Sir, notwithstanding the clarifications sought, I support the Bill.

Mr Deputy Speaker: Mr Alex Yam.

2.47 pm

Mr Alex Yam (Marsiling-Yew Tee): Mr Deputy Speaker, most, if not all the hon Members who spoke before me are parents of young children or teens. I have four precocious young children of my own. Jocelyn and I, like many parents, sometimes relent and allow tablets and smartphones to be their temporary guardians, a salve for those moments that you just need a quick break from parenting. Yet, truth be told, our fussing and anxiety over them continue even while they quieten down for their much-enjoyed screen time.

As parents, these thoughts do course through our minds: are they playing games that are too violent? What questionable videos are they accessing, perhaps involuntarily? Are they being exposed to the wrong company or the wrong content in their online interactions? We cannot discount that the Internet and social media have been a great force for good over the last two decades. They have revolutionised education, transformed the global economy, increased the flow of information and knowledge, and also helped us all to keep in touch with one another. But the Internet is like water – a good servant, but a bad master.

As far back as in 1998, our late founding father, Mr Lee Kuan Yew, had described the Internet as a force for both good and evil; that the Internet is as much a purveyor of truth as it is of outright lies. He further pointed out that though it may take some time, morality and wisdom must find a way to control and tame new technology to preserve the fundamental values of society by which parents bring up their children to be good citizens.

Sir, big tech today is worth over $10 trillion. Let that sink in. Big tech is now so big and so ubiquitous that we have forgotten to be shocked by its growth and its value. And big tech needs to be held accountable. Let us just start by looking at what is deemed by some as the new epidemic of our times: pornographic content on various platforms. Up until the early 2000s, pornography in all its forms was somewhat isolated, consigned to corners of red-light districts, back alleys of night markets, confined to the margins by social norms and, to some extent, shame.

Today, by some estimates, the industry is worth more than $100 billion annually. While statistics are hard to come by, some researchers indicate that it comprises one quarter of daily search engine requests and that seven in 10 children are inadvertently exposed to such materials daily. I am glad that this Bill makes it clear that our children should not be exposed to such content, and that child sexual exploitation in all its various forms on all platforms will be tackled aggressively. But I also believe it still does not go far enough to curb the risk to children through other content.

The safeguards proposed in this Bill require algorithms to be adjusted to prevent any content "detrimental to the physical or mental well-being of children" and to also include automatic parental tools. But these, as other Members have pointed out, can be easily circumvented. I do agree that, for some, age verification strays into the realm of privacy, but I think it is the only sure and needed way for content to become age-sensitive for the end user.

Over the last two years, we have lived through the crisis of a generation. The pandemic has forced us to make uncomfortable decisions at times. But, for the sake of public health, some small sacrifices have had to be made. It has all turned out for the better. We can now freely debate various issues in this hallowed chamber without restrictions because we, collectively, took a sensible course of action.

But the lessons of COVID-19 misinformation must not be forgotten. How easily, in a crisis, public safety can be compromised through fake news, quack treatments and bogus remedies. If we had allowed misinformation to run rampant, thousands more could have died. From COVID-19 conspiracy theorists to vaccine deniers, we could all be in a vastly different situation today if we did not take a rational, science-based, evidential approach.

As some of you might be aware, I, for one, was badly affected by a booster shot, enough to put me in hospital, with some lasting after-effects, perhaps for the long term. Yet, despite being amongst a very small minority unfortunate enough to suffer a severe reaction, I believe my decision to be vaccinated was firmly informed by science and evidence, rather than hoodwinked by online hocus pocus.

We all saw how it could have been vastly different in other countries; even locally, we have examples. The elderly lady, taken violently ill and hospitalised because her circle of friends convinced her, through mutually shared online fakery, that the deworming drug Ivermectin was a miracle drug against COVID-19. And this is just a small sliver of the massive deluge of online misinformation on COVID-19 that has flooded all countries and could have compromised not just individual health, but entire communities and the safety of various countries.

As such, I do wonder if the Ministry has enough manpower to ensure that such egregious content can be expeditiously removed before further harm is done. The proposed Bill does not stipulate a timeline, as many other Members have pointed out, unlike other countries such as Australia, which indicates a 24-hour take-down window. We should perhaps apply the same to our laws, so that the damage is limited and action can be taken quickly to avoid harm to individuals, given the light speed at which content travels in the metaverse today.

By the same measure, given the speed at which content goes viral, the entire industry itself is evolving dramatically day by day. While the major social media platforms have been engaged for this Bill, one of the players, Twitter, changed hands just last week. The new owner, Elon Musk, is a free speech absolutist. He wants to transform Twitter into a platform that, oxymoronically, is a safe haven for all forms of free speech, including harmful ones.

As the company evolves to fit its new owner's worldview, how will the Ministry take the process forward and how will we deal with the potential difficulties of a social media giant which may well now become unwilling to accept the rules that we seek to enforce for the safety of Singaporean users? While I also accept that the Bill will similarly evolve and include more industry players and platforms as the metaverse expands, I do call on the Government to consider including in the very near future social messaging and private messaging platforms, where information can spread like wildfire, especially amongst older audiences, and also gaming platforms and chat platforms like Discord, where cyberbullying and misogyny are becoming rampant.

Regulating the metaverse is more challenging than herding cats. Therefore, we must act quickly before they run amok.

Mr Deputy Speaker, some commentators have also seen shadows everywhere. They decry the chilling effect that Bills like these, introduced elsewhere and here, will have on free speech.

But let us look at some surveys of what users themselves want out of their online experience.

In the most recent survey published by Pew Research in August 2022 on sentiments amongst teens and adults in that bastion of free speech, the US, 62% of teen respondents aged 13 to 17 said that their top priority is a welcoming, safe online environment, and that this is far more important than being able to speak their minds freely online. Even amongst the adults surveyed, while the results are more evenly spread, at 50% as compared to 47%, they also prefer the same safe environment online. So, it does show that a safe environment, both in the real world and in the online world, is what we aspire to, and we must do what we can.

Apart from regulations, which are needful, I believe that the Ministry would do well to empower all users – adults, parents and children – to do some self-policing. Online safety requires a whole-of-society effort, with everyone educated on proper personal behaviour online and on safe interactions with others.

Just as parents cannot simply hand over responsibility, care and control of their children to others in the physical world, we must be aware of our own rights and responsibilities online and how we educate our children to look out for themselves. We do not as yet have to confront the tragedy of Molly Russell within our own country. As observers, we can sympathise, we can lament; but I think, as parents, it is an experience that no one of us, no child, no family, should have to go through or endure.

Mr Deputy Speaker, we have a huge task at hand. I foresee the challenges that the changes to the metaverse would pose to society and to the Ministry in policing it and helping to keep all of us safe. I do hope that the Ministry will be able to muster sufficient manpower resources to quickly ensure that the safeguards spelt out in this Bill are as good in words as they are in action. Our children, especially, need us to take the right decisions to keep them safe. We owe them and others our responsibility. And so, Mr Deputy Speaker, I support this Bill and hope that my suggestions can be adopted.

Mr Deputy Speaker: Mr Desmond Choo.

3.00 pm

Mr Desmond Choo (Tampines): Mr Deputy Speaker, thank you for allowing me to join the debate on this Bill.

Harmful content found online has proliferated with the expansion and pervasiveness of social media services in recent years. Bad or irresponsible actors have exploited the accessibility of the digital world to spread harmful materials. We are now in a world where the nature of harm to minors is vastly different. There has been a pronounced shift from physical to online dangers, but one still leading to real and physical harm.

As hon Member Alex Yam shared, the suicide of a teenager in the United Kingdom, Ms Molly Russell, in 2017 is a stark reminder of the harms in the online world. At the coroner's inquest, it was found that the harmful content Molly encountered on social media had played a contributing role in her suicide. Coroner Andrew Walker said images of self-harm and suicide she viewed "shouldn't have been available for a child to see". Her death sent shockwaves across the UK and beyond, and the UK is now considering a Bill similar to the one we are debating today.

According to Molly's father, Ian Russell, at the inquest, "It's a world I don't recognise. It's a ghetto of the online world that once you fall into it, the algorithm means you can't escape it and it keeps recommending more content. You can't escape it."

The law must be strengthened so that our minors do not fall into this commercially-driven quicksand.

The proposed Bill mirrors society's consensus. Social media services must take up more responsibility to protect people of all ages, especially minors, from harmful content.

It provides for a calibrated regulatory approach by implementing a prescribed code of practice with which social media services must comply or face consequences, along with the authorities' overriding oversight to deal with egregious content. We are one of the few jurisdictions in the world to consider codifying the responsibilities of these social media services. I believe that many other jurisdictions will follow in time to come.

I would like to offer a few areas for the Ministry's consideration to refine the Bill and offer a few suggestions to the code as follows.

First, we need to refine section A of the code to implement a robust age verification system to effectively delineate a higher standard of responsibility social media services owe to minors.

The Bill and the code rightfully prescribe additional obligations to be undertaken by these services to prevent minors from accessing harmful content. In practical terms, users would be flagged as minors by these services if they input an age showing they are under 18. With this, the services can then moderate the content shown to minors.

My concern with this is that the current age verification process is very much circumventable. For example, what if a 12-year-old falsely inputs their age as 21? They would then not be treated as a minor as far as the services are concerned.

In such an instance, it must also be the responsibility of the services to implement a secure age verification system. What are the standards required of the services for such verification?

In addition, would the Ministry also look into implementing a robust nationwide age verification system to ensure that the responsibility of services to children can be adequately effected in practice?

Such nationwide age verification systems are currently being trialled in Australia. Perhaps this is something that we can also learn from.

Next, section B of the Code of Practice states that there must be an easy-to-use mechanism for Singapore end-users to report harmful content and unwanted interaction. However, clauses which directly relate to the liability of the services arising from their failure to comply with user requests in the code are not readily apparent.

Would the Ministry establish an avenue for Singapore end-users to report such non-compliance with the code?

Further, I understand that even though the code is still under review, section B, which imposes obligations on the services to act on user reports in a timely manner, seems to leave some room for ambiguity. Could the Ministry elaborate on whether it would consider implementing an objective or specific timeframe for the services to respond to user reports, and perhaps have different timeframes proportionate to the nature of the harmful content identified? For example, services should arguably be required to respond to sexual cyberbullying content as soon as possible.

Next, on the legislative provisions in the Bill: under the proposed section 45E(1), where a service does not comply with blocking directions, and the proposed section 45N, which contemplates a situation where a prescribed service does not comply with the Code, the defaulting service may be liable to a fine of up to $1 million.

For comparison, under the Personal Data Protection Act, the maximum financial penalty was increased to the higher of 10% of an organisation's annual turnover in Singapore or $1 million, to send a stronger signal of Singapore's stance towards the importance of privacy and data protection.

I believe that online safety is just as important as privacy and data protection, if not more so. Both are crucial gears that must be safeguarded in the digital ecosystem. Would the Ministry thus consider increasing the maximum penalties that can be meted out under the Bill? This would also give the Ministry more flexibility in proposing appropriate penalties based on the non-compliance complained of.

On a related note, the current penalties almost seem like a slap on the wrist, compared to other jurisdictions. In the United Kingdom, the Online Safety Bill currently prescribes a maximum fine of the higher of £18 million or 10% of the defaulting service's worldwide revenue. In Germany, the Network Enforcement Act prescribes a maximum penalty of up to €50 million.

Mr Deputy Speaker, in conclusion, I believe that the Bill will, nonetheless, make considerable strides in our efforts to protect our people against harmful content.

The success of this goal will also be premised on contemplating a whole-of-society approach, where our efforts are to work closely with the community and industry stakeholders to equip Singaporeans with the knowledge and skills to keep themselves safe online. This remains a key piece in this journey. Mr Deputy Speaker, I support the Bill.

Mr Deputy Speaker: Order. I propose to take a break now. I suspend the Sitting and will take the Chair at 3.30 pm.

Sitting accordingly suspended

at 3.07 pm until 3.30 pm.

Sitting resumed at 3.30 pm.

[Deputy Speaker (Mr Christopher de Souza) in the Chair]

Online Safety (Miscellaneous Amendments) Bill

Debate resumed.

3.30 pm

The Minister for Communications and Information (Mrs Josephine Teo): Mr Deputy Speaker, I thank Members for their interest in the Bill. All 16 Members who spoke have given their support, reflecting the broad consensus on the need and timeliness of the proposals. Members raised many useful points which I will address.

Let me start with clarifications on the types of services that the Bill will cover. Ms Tin Pei Ling, Mr Louis Ng and Mr Saktiandi Supaat asked what other types of services, besides social media services, may be specified in the Schedule of Online Communication Services, or OCS. Dr Shahira Abdullah wanted to know how IMDA will decide which service providers to designate. She and Ms Tin Pei Ling also asked about updating our regulations to keep in step with new technologies.

Like many Singaporeans we engaged, Members acknowledged the fast pace of change in the online landscape. We are, therefore, committed to updating our laws and regulations as frequently as necessary to keep them relevant and effective. In terms of the type of services, we will prioritise those that are more widely used in Singapore and where the safety risks have become or are becoming apparent. IMDA will use various data sources on user trends in Singapore to aid these assessments.

The Government is actively studying several areas, but I seek Members’ understanding that it can be counter-productive to discuss them prematurely. Let us instead better understand and characterise the issues, taking reference from regulatory attempts elsewhere, before moving to design a suitable set of interventions for Singapore. For example, Mr Melvin Yong, Mr Alex Yam, Mr Gerald Giam and Mr Mark Chay asked about online gaming, whereas the Bill only covers social media services currently. We share their concerns about online gaming. We have been thinking about it and we will share more details when ready.

Within each specified OCS, which entities to designate will depend on how much reach or impact they have with Singapore viewers. IMDA will consult services before designating them under the Bill, to ensure that designated services are clear on the requirements and are given the opportunity to provide input on the proposals laid out by IMDA.

Mr Zhulkarnain Abdul Rahim, Mr Saktiandi Supaat and Mr Leon Perera asked about the consultation process. Details will be set out on how IMDA will work closely with the designated services. Having built constructive relationships with many of these services over the years, we are confident the processes will be robust. The list of services to be designated eventually will be published by IMDA.

Ms Nadia Samdin, Mr Alex Yam and Ms Tin Pei Ling asked why private communications have been excluded. The short answer is that there are legitimate privacy concerns, which Mr Gerald Giam also shares. But users are not without recourse. IMDA’s draft Code of Practice for Online Safety will require designated social media services to provide easily accessible user reporting mechanisms throughout its service. If individuals encounter harmful messages or unwanted interactions in private messages when using these social media services, they could block the sender or report the sender to the service.

While we do not intend to police private communications, we are also aware that there are groups with very large memberships, which could be used to propagate egregious content, making them no different from non-private communications. In such instances, IMDA will be empowered to take the same actions against them. Mr Louis Ng and Mr Zhulkarnain asked about the specific factors in determining whether communications are private, that could shield such services from complying with IMDA’s protective measures. Mr Mark Chay asked about messaging platforms.

Labelling a group or communications as private does not make it so. The Bill sets out a list of factors that must be considered collectively. For example, it may be possible to conclude that a social media group is public, even if that social media group has been set to “private” and requires the owner to grant permission before one can access the content, but the owner is indiscriminate in granting that access. We will continue to study this issue closely with other agencies, industry and international partners.

Next, on what type of content the Bill will, or will not, address. Mr Zhulkarnain asked whether drug abuse and other illegal activities will be covered. Mr Saktiandi highlighted a particular area in the online domain that is of growing concern to many users – scams.

Under “egregious content”, as defined under the Bill, content that may cause risk to public health will be covered. Depending on the facts of the case, this may include drug-related content. IMDA’s Code of Practice for Online Safety also requires services to apply content moderation systems to vice and organised crime, including fraud and scam content.

Mr Gerald Giam and Mr Leon Perera asked whether the Bill would cover non-consensual sharing of intimate images, and Mr Louis Ng asked why content “likely to cause feelings of enmity, hatred, ill will or hostility” is applied only to racial and religious groups, and not to other demographic segments, such as gender.

To a large extent, the kinds of problematic content the Members have in mind will already be covered within the Bill. Content that advocates or instructs on violence, including sexual violence, to individuals will be covered. IMDA’s draft Code of Practice for Online Safety will require services to assess and act on cyberbullying, including content that is likely to cause harassment, alarm or distress to the user, an area Mr Leon Perera and Mr Darryl David, as well as Mr Melvin Yong, also emphasised. For cases of harassment, there may also be recourse under laws, such as the Protection from Harassment Act (POHA). I will say more about this later.

To Mr Louis Ng’s question on providing more details of “harmful content” under the Code, IMDA has issued a set of draft guidelines giving examples of the content covered, which will be finalised together with the Code.

Mr Leon Perera spoke at length about the problem of loot boxes in online gaming. Mr Mark Chay also raised this issue. This matter falls under the Gambling Control Act, but since Members have raised it, I will briefly address it.

The Government recognises the potential harms of loot boxes. This is why we made significant updates to the Gambling Control Act earlier this year to ensure that our laws are able to address emerging trends and products, such as in-game loot boxes, which are monitored by the Gambling Regulatory Authority. I invite the Members to file Parliamentary Questions if they wish to discuss this issue in greater detail.

Mr Saktiandi also raised queries on content, such as lifestyles that go against traditional norms of society, participation in foreign armed conflicts, animal cruelty and commercialised nudity. Should we go beyond concerns over safety of individuals and communities to cover other types of content at this juncture? This has the same problem as if our proposals attempted to cover other types of services prematurely. The Bill will become unwieldy, our proposals lacking in focus and the results likely ineffective.

Ms Nadia Ahmad Samdin asked if we had considered streamlining all online-related harms into the Online Safety (Miscellaneous Amendments) Bill. Our approach has been to identify and address specific areas of harm in a targeted manner. As to whether the laws will be consolidated later, that remains to be seen. At this time, it is more important that we put in place legislation that effectively addresses and combats the respective harms. For example, at the Committee of Supply debates this year, MHA announced that it was studying potential levers to deal with criminal offences committed online. Work is in progress. These levers are envisioned to complement the provisions under the Online Safety (Miscellaneous Amendments) Bill.

This leads me to questions raised by quite a few Members on how the types or thresholds of harmful or egregious content are determined and whether a committee or deliberative body could be set up to formulate or review these thresholds.

The Government had consulted various stakeholders, including parents, community groups and industry representatives in arriving at the proposals in the Bill. Egregious content can take many forms and exist in grey areas which can be difficult to define clearly. A case in point is Ms Nadia Ahmad Samdin’s example of online forums for users to share their experiences with one another to deal with depression and anxiety, and to provide mutual support.

When assessing whether a piece of content is harmful or egregious, IMDA will take an objective approach, considering the context in which it is presented. If such content is educational in nature or helps users to overcome these harms, naturally, it will not be considered harmful or egregious. On the other hand, social media trends or challenges may sometimes appear innocuous, such as the “milk crate” challenge Mr Darryl David had mentioned. But if they result in harm to users, such as by advocating or providing instructions on self-harm or suicide, they would be considered harmful.

If the concern is whether individual social media services have done enough to curb exposure to harmful content, the Government will continue to consult widely across society and share the feedback with the companies. In other words, we would want to hold the mirror to them so that they know what our society’s expectations are and be able to make adjustments accordingly.

When urgent action is needed, such as to remove offensive content that advocates violence towards certain communities or could cause serious injuries to them, IMDA must be able to act fast. In such situations, consultations with stakeholders are better done as part of an after-action review.

To Mr Gerald Giam’s question, if services are aggrieved by IMDA’s regulatory decisions, they can appeal to the Minister. And the Minister’s decision can also be challenged on judicial review.

Mr Gerald Giam and Mr Leon Perera sought assurances that the Bill will not be used to curtail democratic rights or freedom of expression. I stated in my opening address that IMDA does not have unfettered ability to issue new Codes. The Bill clearly sets out the purposes for which IMDA can issue these Codes, which are recorded in Hansard. I would also like to remind Members of the overarching purpose of the Bill. And that is, to provide a safe environment and conditions that protect online users, while respecting freedom of speech and expression, as enshrined in Article 14 of the Constitution.

Let me also address a specific area that Mr Giam raised – the provisions on journalistic content in the UK’s draft Online Safety Bill. I thank Mr Giam for his support of the Bill – I mean the Singapore Bill, not the UK Bill – and also his suggestion for Singapore to mirror the UK proposal. We are always watching developments internationally and considering what would be useful in our context. I will make three brief points on Mr Giam’s suggestion.

First, the draft Bill in the UK has not been passed into law. The draft provisions have been through several revisions and are far from final. So, whether this part goes in eventually, that remains to be seen.

Second, without going into detail, there have been criticisms that the provisions on journalistic content may be exploited by bad actors. It could inadvertently allow anyone, under the guise of being a “citizen journalist”, to communicate egregious content and expose users to harm.

Third, this Bill is about online safety. It has no interest in curbing legitimate journalistic content.

This brings me to my next point on enforcement, which several Members have raised. I will explain the enforcement measures that the Bill provides for at each stage and how these relate to the online service providers.

IMDA will, first, assess if there are instances of non-compliance, either with the Code's requirements or with directions issued by IMDA. It does not matter whether there are management changes within the companies. Accountability resides with the legal entities.

Where there is non-compliance, in general, IMDA will engage the services to understand their reasons. This includes services that do not have a corporate presence in Singapore.

Thereafter, if there is no meaningful response or mutually acceptable solution, and IMDA finds the services to still be in breach of their obligations, measures, such as financial penalties, will be considered.

Mr Desmond Choo asked if the penalties for non-compliance are too low to have sufficient impact or deterrence. The financial penalty quantum is comparable with other local legislation that covers social media services, such as the Foreign Interference (Countermeasures) Act and the Protection from Online Falsehoods and Manipulation Act.

Services will also face reputational damage. Imagine if a service is consistently found to be in breach, and IMDA over a period of time is regularly issuing them penalties. These breaches will be known to the public and users themselves can decide whether to continue using the service. So, I think the reputational damage has also to be considered.

In the event that these still fail to address our serious concerns, IMDA may then issue a blocking direction to Internet Access Service Providers to stop Singapore users from accessing these services.

But to Mr Zhulkarnain's question, the purpose of section 45H(2)(b) is to ensure that this happens only if the platform had refused to comply with IMDA's direction. This reflects our proportionate approach towards regulating content.

To Dr Shahira Abdullah's question regarding the details of a blocking direction, such as duration, this will depend on the individual case. Suffice to say that it is a measure IMDA will not take lightly. But IMDA's resolve in protecting Singaporeans' interests should not be tested.

Let me also address various technical questions from Members.

Mr Gan Thiam Poh asked how "Singapore users" will be determined. The OCS providers will, typically, have geolocation data on whether a user accesses the service from Singapore. This is common practice.

Mr Gan and Mr Melvin Yong also asked how the Government would ensure that Singapore users are not exposed to harmful content, given the use of VPNs.

Just like fire codes cannot prevent people from playing with fire, neither can we shield people completely if they intentionally seek out harmful content online. Parents have a role to play, as do the individuals themselves as well as our wider society, to be aware and vigilant.

Mr Zhulkarnain asked what we mean by "reasonably practicable" steps taken by the OCS to comply with IMDA's direction. This requires the balancing of various considerations, such as the technology that is available to implement that direction. So, we will have to look into the details.

Mr Zhulkarnain also asked about the proposed section 45J(2). This provision ensures that compliance with IMDA's directions does not cause a service provider to incur liability in Singapore, if, for example, the content creator takes issue with it. IMDA's concern is to protect users in Singapore and this Bill only requires action against content accessible in Singapore. Thus, this provision, naturally, only insulates against liability under Singapore law.

Since our measures are also proportionate to the harm and consistent with leading jurisdictions, it is unlikely that the service providers will attract liability elsewhere for complying with IMDA's directions in Singapore.

But we will monitor international developments and keep in mind his suggestions on reciprocal immunity.

Mr Zhulkarnain, Mr Alex Yam, Mr Saktiandi Supaat as well as Mr Gerald Giam asked who will enforce the Bill, whether it is a dedicated new body, such as an eSafety Commissioner that will be set up, and whether the respective Government teams are sufficiently resourced.

I thank them for looking out for the teams working behind the scenes on online safety, including their mental well-being, as highlighted by Dr Shahira Abdullah. As I mentioned above, compliance assessments will be undertaken by IMDA, which has both the experience and expertise in performing this role. If egregious content is flagged to IMDA, and IMDA assesses there is a need to act, action will be taken.

All this will be a lot of work, but we will, periodically, review our resourcing to ensure that the team is able to carry out its responsibilities fully and effectively. And, here, I notice that my colleague from MOF is also right behind. I am sure we have the support of the Ministry if more resources are needed.

Members have asked how individual users can provide feedback about problematic content or non-compliance. I agree with Mr Saktiandi Supaat that users are effectively a wider pool of eyes who can help to identify and flag problematic content.

Mr Alex Yam is also right to remind us that users must play a role in policing harms they may come across.

Users are, indeed, our first line of defence. This is why we expect social media services to take user reports seriously and to ensure that their systems and processes are sufficiently robust.

Under IMDA's draft Code of Practice for Online Safety, designated services will be required to provide effective, transparent, easy-to-access and easy-to-use reporting mechanisms to all individuals. This is a more effective way to tackle voluminous online content at source.

In turn, users expect that social media services assess their reports and take appropriate action in a timely and diligent manner. Services will be required to include information on these actions in their annual reports. With this information, IMDA will be able to assess the adequacy of the service's measures. Audits may also be undertaken to ensure compliance.

Mr Gerald Giam and, I believe, also Mr Saktiandi Supaat, asked for the social media services to submit reports at a higher frequency than annually to establish the services' effectiveness in acting on user reports. As a start, IMDA intends for the reports to be submitted annually but this can be reviewed later on.

Given the speed at which harmful or egregious content can be amplified and spread online, the speed of action must be proportionate to the potential harm of the content identified.

Members asked about the timelines for services to act on directions issued by IMDA or to respond to user reports. IMDA's directions will stipulate a specific timeline for disabling access. For egregious content that could cause serious harm, the timeline would, generally, be within hours.

IMDA will also require social media services to act on user reports in a timely and diligent manner that is proportionate to the severity of the potential harm. In particular, timelines must be expedited for content and activity related to terrorism.

Members have expressed concerns about the impact of harmful online content on young users. Ms Janet Ang and Mr Gerald Giam raised the need to leverage technology to combat harmful online content, including through setting default content restriction settings for young users.

We understand and share these concerns. Therefore, IMDA's draft Codes will put in place additional safeguards to protect young users, including minimising their exposure to inappropriate content and providing tools for children or their parents to manage their safety online. The Code also requires that services provide differentiated accounts to children, whereby safety settings are robust and set to more restrictive levels that are age-appropriate by default. Children and their parents or guardians must be provided clear warnings of the implications if they opt out of the default settings.

We will continue working with industry players to see how such measures can be strengthened. We recognise that there are gaps. In practice though, users might try to circumvent these measures.

Mr Desmond Choo, Mr Melvin Yong, Mr Gerald Giam, as well as some respondents to MCI's public consultation in July, have asked about the possibility of requiring age verification systems. Mr Saktiandi Supaat, Mr Alex Yam, Mr Mark Chay and Mr Melvin Yong asked about measures to better protect the young, including age-specific provisions or mandating screen-time restrictions.

Most social media services that have significant reach or impact already require users to be at least 13 years old to register for an account. Users have to declare their date of birth at the point of registration. This way, services will be able to apply age-appropriate policies to their respective users, including content moderation.

In line with this, PDPC will be clarifying that personal data may be used to implement such age-appropriate policies on social media services. It is permitted and we will make it clear.

To mitigate the risk of false age declarations, which is a problem I think we all recognise, some social media services use a combination of artificial intelligence, machine learning technology and facial recognition algorithms to proactively detect and remove underage accounts. Some also allow users to report accounts suspected to be underage, which will be investigated and suspended if the reports are accurate.

However, there is, currently, no international consensus on the standards for effective and reliable age verification by social media services which Singapore can also reliably reference. Instead, we will continue to closely monitor and extensively consult on the latest developments in age verification technology, taking into account data protection safeguards and consider viable regulatory options. In addition, we will continue to work with social media services, educators and other stakeholders, to help parents guide young users navigating online spaces and make young users better aware of the safety tools that are available to them.

Members have also raised the importance of providing support to victims or users affected by online harms. We recognise that while laws provide the necessary legal tools for victims, they can often be daunting and difficult to approach.

Members would be glad to know that organisations, such as SG Her Empowerment, or SHE, have stepped up to augment Defence Guild's efforts in providing legal support to victims of online abuse. May I just register the MCI family's sincere appreciation to Mr Zhulkarnain and his fellow volunteers for stepping up to perform this very important function.

Continuing the work of the Sunlight AFA, which concluded its tenure in July this year, SHE is working with the Singapore Council of Women's Organisations to launch a support centre for victims of online harm. We see this as an important gap to plug. We are committed to making it happen and I believe that it will be made available soon. Those in need will then be able to seek support and legal advice from counsellors and pro bono lawyers from this centre.

As I mentioned in my opening speech, online harassment, cyberbullying and doxxing are dealt with under the Protection from Harassment Act 2014 (POHA). Victims of gender-based online harms, of which a commonly known example is image-based sexual abuse, will be able to seek recourse under POHA where the online harm amounts to harassment.

The Protection from Harassment Court has served many victims since it was established last year. And a reason that more have been able to get redress is because of the wider awareness of its existence.

MinLaw is also looking into how victims can be better empowered to put a stop to such online harms generally, and to seek redress against and hold accountable those who are responsible. This includes cyberbullying and more novel forms of online hurt, such as cancel campaigns, which Minister Shanmugam has spoken about before.

MinLaw's efforts will complement MCI's efforts to enhance the Government's regulatory tool kit, as well as MHA's efforts to address criminal offences committed online. More details will be announced at an appropriate juncture. But I think Members see that we are not stopping with this Bill. There are other proposals that are being considered and we probably will not have to wait very much longer for these to be known publicly.

Which leads me to my final point – that public education must come hand in hand with legislation. Ms Nadia Ahmad Samdin, Mr Alex Yam, Mr Mark Chay, Mr Leon Perera and Mr Zhulkarnain spoke about this. Members also called for more collaboration with service providers in this area.

For example, Mr Melvin Yong asked whether the Government would consider setting up a self-regulatory task force with key OCS providers. We can explore this suggestion when we engage further with the industry.

Mr Deputy Speaker, I seek your permission to distribute a handout to Members which contains a list of safety measures on social media services and public education programmes organised in collaboration with various technology companies and community partners.

Mr Deputy Speaker: Please do. [A handout was distributed to hon Members.]

Mrs Josephine Teo: Thank you. This will give Members a sense of the breadth and depth of public education efforts that are already available in Singapore even as we recognise that there are gaps that need to be plugged.

To highlight a few examples, Google held its Online Safety Park at the Digital for Life Festival earlier this year and it is partnering the Media Literacy Council (MLC) to bring its "Be Internet Awesome" programme to Primary schools to train 50,000 parents and children on online safety measures. The last I met with them, they said that the 50,000 target has been met. They are actually aiming to double it to 100,000.

Meta collaborated with the National Crime Prevention Council and MLC on a campaign to educate users on top scam typologies and tips to keep safe. This campaign reached over two million users and a second campaign has been launched on e-commerce scams. There are many others and we will continue to build on these efforts.

Mr Deputy Speaker, may I make a few comments in Mandarin, please?

Mr Deputy Speaker: Please do.

Mrs Josephine Teo: (In Mandarin): [Please refer to Vernacular Speech.] Deputy Speaker, in this day and age, a piece of harmful online content has the potential to spread like wildfire, causing serious damage. We all know that firefighters are needed to put out fires. Cyberspace, too, needs such first responders. The purpose of the Online Safety Bill is to enable us to "fight fires" in a timely and effective manner.

While it is important to put in place relevant laws, it is impossible for the law to eradicate harmful online content completely. Instead, we need to adopt an agile and accretive approach to deal with the fast-changing cyberspace.

More importantly, the Government fully understands that we need partners to co-develop solutions.

One important stakeholder is parents. However, many parents are not digital natives themselves. Hence, keeping up with the ever-evolving online space proves to be challenging for them. The Government is, therefore, working with multiple stakeholders to enhance parents’ awareness of online safety and strengthen their capabilities to guide their children, for instance, by informing parents about the safety options available on social media platforms.

Although keeping cyberspace safe is an uphill task, so long as we work together, I am confident we can create a safer and more vibrant digital future for Singaporeans.

(In English): Mr Deputy Speaker, in conclusion, I have tried to respond to as many of the questions and suggestions as I can.

The Bill before us today seeks to create a safer online environment for Singapore users. Users will be empowered with the tools to manage their own safety and equipped with the information needed to make informed decisions about how they wish to use online services.

In turn, online services will be held accountable for their systems, processes and actions. And where there is egregious content, such as content that undermines racial and religious harmony, the Government will step in to protect users.

Ultimately, we must recognise that there is no single measure that will assure us of online safety. We will need laws, codes, education, user reporting and a whole range of interventions. We will also need to keep updating our measures to deal with new risks. I am heartened that Members are united on this and I thank the House for its unanimous support.

Shared responsibility, parental guidance and active individual involvement will play a key role in ensuring that even in the face of harmful online content, users, including children, can stay safe online.

This Bill is a first step. We will continue to work with all of you and our various partners to keep our people safe online. Mr Deputy Speaker, I beg to move. [Applause.]

4.06 pm

Mr Deputy Speaker: Are there any clarifications? None.

Question proposed.

Question put, and agreed to.

Bill accordingly read a Second time and committed to a Committee of the whole House.

The House immediately resolved itself into a Committee on the Bill. – [Mrs Josephine Teo].

Bill considered in Committee; reported without amendment; read a Third time and passed.