
Elections (Integrity of Online Advertising) (Amendment) Bill

Bill Summary

  • Purpose: The Bill seeks to amend the Parliamentary Elections Act 1954 and the Presidential Elections Act 1991 to prohibit the publication of online election advertising that contains realistic digitally generated or manipulated content, such as deepfakes, which falsely depicts a candidate’s speech or actions.

  • Key Concerns raised by MPs: Minister for Digital Development and Information Mrs Josephine Teo addressed anticipated concerns regarding the scope of the prohibition, specifically whether the law would inadvertently catch benign content such as beauty filters, memes, and non-photorealistic animations, or if it would infringe on private domestic communications and standard campaign posters.

  • Responses: Minister for Digital Development and Information Mrs Josephine Teo justified the Bill by highlighting the rising global threat of deepfakes in elections and explained that the law is "carefully calibrated" to target only harmful deceptions through a system of candidate declarations and Returning Officer directions. She emphasized that while individuals are protected by a "no-knowledge" defense, social media services face fines of up to $1 million for failing to comply with orders to remove prohibited content.

Reading Status: 2nd Reading
Introduction — no debate

Transcripts

First Reading (9 September 2024)

"to amend the Parliamentary Elections Act 1954 and the Presidential Elections Act 1991 to prohibit the publication of online election advertising containing certain digitally generated or manipulated content about candidates, and for related purposes",

presented by the Minister of State for Digital Development and Information (Ms Rahayu Mahzam) on behalf of the Minister for Digital Development and Information; read the First time; to be read a Second time on the next available Sitting of Parliament, and to be printed.


Second Reading (15 October 2024)

Order for Second Reading read.

4.58 pm

The Minister for Digital Development and Information (Mrs Josephine Teo): Mdm Deputy Speaker, I beg to move, "That the Bill be now read a Second time."

Madam, 2024 is a bumper year for elections around the world. Almost half of the world's population have gone or will go to the polls this year. Unfortunately, there has been a noticeable increase in deepfake incidents in countries where elections have taken place or are planned.

Research conducted by London-based technology company, SumSub, suggests that the numbers are alarming. In India, compared to a year ago, there are three times as many deepfake incidents. In Indonesia, more than 15 times; and in South Korea, more than 16 times.

[Mr Speaker in the Chair]

Earlier in January this year, a fake version of the United States (US) President Joe Biden's voice was featured in robocalls that sought to discourage Democrats from participating in the New Hampshire primary. The robocalls reached thousands of people. The US Federal Communications Commission has since declared artificial intelligence (AI)-generated robocalls illegal, noting that they have the potential to confuse consumers with misinformation. The telecommunications company which transmitted the fake robocalls has been fined US$1 million, and the individual behind it has been fined US$6 million and faces criminal charges.

During the Slovakian parliamentary elections last year, a deepfake audio of a politician discussing electoral rigging was posted online. Unsurprisingly, the audio went viral. Its impact was amplified by its timing, right before Slovakia's electoral "silence period", which is like our cooling-off day. The candidate lost the elections, despite having earlier led in the polls.

Did the deepfake audio contribute to his loss? No one can say with certainty, but surely we prefer not to have elections subject to such incidents.

Why has deepfake content proliferated? The short answer is that it has become very easy and cheap to produce. With your permission, Mr Speaker, may I play a video on the LED screens?

Mr Speaker: Yes, you may.

Mrs Josephine Teo: Thank you. [A video was shown to hon Members.]

Sir, Members will appreciate that AI technology is improving quickly. If the deepfake video you just watched did not convince you of its impersonation of me, more advanced versions soon will. Around the world, countries have recognised the need to mitigate the harms of deepfakes to their elections.

For example, South Korea revised its Public Official Election Act to ban political campaign videos that use AI-generated content 90 days prior to an election. Violations of the revised law, which took effect in January this year, can lead to jail time of up to seven years or a fine of up to 50 million won, which is almost $50,000. To date, 388 deepfakes have been taken down by the National Election Commission of South Korea during its elections.

Another example is Brazil, which has banned synthetic electoral propaganda that will harm or favour any candidate during an election. The sanctions include the revocation of the candidate's registration or their mandate if they had been elected.

Last month, the state of California passed into law the "Defending Democracy from Deepfake Deception Act of 2024", which requires social media platforms to block materially deceptive deepfakes of candidates from 120 days before the election to the day of the election.

The Australian government is also considering the advice of its Electoral Commission to regulate the use of AI in elections, given the Commission's recent warning that it has limited scope to protect voters from deepfake videos and phone calls imitating politicians in Australia's upcoming elections.

It is not just governments which are concerned. The technology industry has also recognised the dangers of electoral deepfakes and the importance of ensuring voters can exercise their choice, free from AI-based manipulation. Twenty leading tech companies, including Meta, Microsoft, OpenAI and TikTok, signed the Tech Accord at the Munich Security Conference in February, committing to combat the deceptive use of AI in elections this year.

In the face of these developments, Singaporeans are rightly concerned. One study shows that more than six in 10 Singaporeans are worried about the potential impact of deepfakes on the next election.

In a 2021 ruling on a case related to misinformation and online falsehoods, our apex Court had said, "It is simply incompatible with the core principles of democracy to procure the outcome of an election to public office or a referendum by trading in disinformation and falsehoods."

Mr Speaker, I hope Members will agree that AI-generated misinformation can seriously threaten our democratic foundations and demands an equally serious response.

The Elections (Integrity of Online Advertising) (Amendment) Bill, or ELIONA Bill, is our carefully calibrated response to augment our election laws under the Parliamentary Elections Act and the Presidential Elections Act, ensuring that the truthfulness of candidate representation and the integrity of our elections continue to be upheld.

Sir, I will now bring Members through the key aspects of the Bill.

The ELIONA Bill will amend our election laws to prohibit the publication of content that: (a) is or includes Online Election Advertising (OEA); (b) is digitally generated or manipulated; (c) depicts a candidate saying or doing something that he or she did not, in fact, say or do; and (d) is realistic enough that some members of the public who see or hear the content would reasonably believe that the candidate did, in fact, say or do that thing.

I will go through each of these criteria in detail.

First, OEA under our existing elections laws refers to any information or material published online that can reasonably be regarded as intended to promote or procure or prejudice the electoral success or prospects of a candidate or political party. The existing OEA provisions guide the transparent and responsible use of the Internet during elections, including for campaigning; and ensure that the elections are contested fairly.

The ELIONA Bill strengthens the OEA regime by targeting the substantive content of OEA.

Second, the ELIONA Bill is scoped to address content that is digitally generated or manipulated.

This includes content generated or manipulated using AI techniques, such as generative AI. It also includes non-AI techniques, such as Photoshop, dubbing and splicing. These are now seen as more traditional editing methods, but they can still be used to manipulate content depicting candidates, making them as harmful and misleading as AI-generated deepfakes.

Third, the ELIONA Bill is scoped to address the most harmful types of content in the context of elections, which is content that misleads or deceives the public about a candidate through a false representation of his speech or actions that is realistic enough to be reasonably believed by some members of the public.

The condition of being realistic will be objectively assessed. There is no one-size-fits-all set of criteria but some general points can be made.

First, such content should closely match the candidates' known features, expressions and mannerisms. Technically, we would expect a degree of sophistication resulting in minimal inconsistencies in aspects like lighting, body movements or audio distortions.

Second, content may make use of actual persons, events and places so that the false representation appears more believable. For example, a fake rally speech touching on current affairs looks more real when placed against the backdrop of an actual and familiar rally site.

We must also recognise that audiences perceive and process the same content through different lenses shaped by their individual experiences, beliefs and cognitive biases. For example, many of us find it incredible that the Prime Minister would be giving investment advice on social media. But as Members of Parliament, we have all met residents who have fallen prey to such AI-enabled scams. In this regard, the law will apply so long as there are some members of the public who would reasonably believe that the candidate did say or do what was depicted.

Also, in assessing whether content matches a candidate's speech or actions, we will be relying primarily on declarations by the candidate if he or she had said or done that thing. I will elaborate on this later in my speech.

All four legal limbs have to be met for the content to be prohibited. That is, the content: (a) is or includes OEA; (b) is digitally generated or manipulated; (c) depicts a candidate saying or doing something that he or she did not, in fact, say or do; and (d) is realistic enough that some members of the public who see or hear the content would reasonably believe that the candidate did, in fact, say or do that thing.

Mr Speaker, with your permission, may I ask the Clerks to distribute a handout that will illustrate our thinking on what will be allowed or disallowed under the new provisions?

Mr Speaker: Please go ahead. [A handout was distributed to hon Members.]

Mrs Josephine Teo: Sir, Members can also access this handout through the MP@SGPARL App.

Examples of content that would be prohibited include: (a) realistic audiofakes featuring a candidate saying things he did not say; (b) realistic AI-generated images of a candidate participating at events that did not happen, meeting people that he or she did not meet; and (c) realistic manipulated images or videos taken out of context and misrepresenting a candidate's actions.

It does not matter if the content is favourable or unfavourable to any candidate. The publication of such prohibited content during the election period, including by boosting, sharing and reposting existing content, will be an offence.

As we propose this new measure to tackle realistic misrepresentations of candidates online, we are mindful not to disallow reasonable use of AI or technology in electoral campaigning. While assessments will be made on a case-by-case basis, there are several scenarios that the prohibition will not extend to.

The first is AI-generated or animated characters and cartoons. Most of these animations are not photorealistic replicas of real persons. Audiences will generally be able to tell that the speech or actions depicted are not real.

The second is benign cosmetic alterations, such as the use of beauty filters or colour and lighting adjustments of images and videos. Such alterations typically involve modifications that do not materially affect truthfulness and do not result in a misrepresentation of a candidate's speech or actions.

The third is entertainment content, such as memes. We recognise that such content can arise as part of online discourse during the election period. Memes will not be caught under the law as long as they are assessed to be unrealistic and do not mislead audiences about a candidate's speech or actions.

Some Members may be concerned: will candidates' regular campaign posters, showing individuals against the backdrop of a group representation constituency (GRC) or a single member constituency (SMC), be prohibited if they are put online? Such posters are usually obvious composite images, such as candidates disproportionately superimposed in front of a landmark or backdrop. Members of the public would not reasonably believe such content to be realistic depictions of a candidate's actions. Such campaign posters are unlikely to fall within the scope of the prohibitions.

Mr Speaker, I would like to make clear that the ban will not apply to certain types of content.

First, the Bill does not extend to private or domestic communications. This refers to content shared between individuals or within a closed group, like group chats with family or a small group of friends, in view of user privacy.

That said, we know that false content can circulate rapidly on open WhatsApp or Telegram channels. If it is reported that prohibited content is being communicated in big group chats that involve many users who are strangers to one another and are freely accessible by the public, such communications will be caught under the Bill and we will assess if action should be taken. The factors that determine whether communications are private or domestic are set out in the respective election Acts.

Second, the prohibition does not apply to news published by authorised news agencies. This is to give space to fair reporting on prohibited content, such that the public can be alerted to the false content about candidates in a timely manner.

Third, we recognise that a layperson may carelessly reshare messages and links without realising that the content has been manipulated. The legislation provides a defence for a person who inadvertently commits the offence, not knowing and having no reason to believe that the candidate did not, in fact, say or do the thing depicted.

Sir, the enhanced safeguards will apply during the election period, from the issuance of the Writ of Election to the close of polling on Polling Day.

From the issuance of the Writ to Nomination Day, we will introduce a two-part requirement for individuals who would like to identify themselves as prospective candidates and have the safeguards of the ELIONA Bill apply to them. First, pay the election deposit; and second, consent for their names to be made public by the Elections Department (ELD). The consent form will be made available via the ELD website. The intent is for the website to be updated daily.

Paying the election deposit is a prerequisite for standing in an election and an indicator of an individual's seriousness of intent to be a candidate. It is a step up from simply making a public declaration of one's potential candidacy. Publishing the names of those who placed the deposit will also make clear to the public which individuals are covered by the law. It is the candidate's choice entirely when to inform the public about his or her intention to stand for election before Nomination Day. Candidates can choose to put down the election deposit but withhold consent to make their names public until Nomination Day.

After nomination proceedings are completed and up to the close of polling on Polling Day, the safeguards of the ELIONA Bill will apply to content depicting all successfully nominated candidates. At the same time, individuals themselves should also readily come forward to clarify and debunk content that they believe misrepresents them.

The Government will use a range of detection tools to assess if the content has been generated or manipulated using digital means. We have at our disposal commercial tools and also those developed in-house or in partnership with researchers, such as those at the Centre of Advanced Technologies in Online Safety, or CATOS.

These tools are constantly updated to keep up with technology. Continuing strong investment in research is one way to ensure we stay ahead. We have channelled $50 million in funding over five years to CATOS, which will develop new technological capabilities to detect online harms, including harmful digitally manipulated content.

In reviewing reports of false content flagged under this new prohibition, the Government will prioritise candidates' reports to the Returning Officer. Besides the technical assessment on whether the content has been manipulated, the Returning Officer will also rely on candidates' declarations on whether the content falsely depicts their speech and actions.

We have placed significant weight on candidates' declarations to the Returning Officer under this new prohibition, as the candidate is in the best position to speedily clarify if the content is a truthful and accurate representation of himself or herself. The Government is unlikely to have all the evidence of whether a candidate actually said or did something, especially if it was in a private setting.

The declaration by candidates will be made to the Returning Officer, for candidates to attest to the veracity of his or her claim. This declaration form will be made available to candidates via the Candidates' Portal on ELD's website.

To deter abuse of the law, such as candidates requesting to take down unfavourable content that is, in fact, a factual representation of their speech or action, it will be made an illegal practice for candidates to knowingly make a false or misleading declaration in a request.

The penalties for an illegal practice are set out in the elections Acts. If convicted of an illegal practice, one may face a fine not exceeding $2,000 and become ineligible to be elected as a Member of Parliament (MP) or the President. Further, if already elected, the election of a candidate as an MP or President may also be invalidated.

Currently, the Returning Officer has powers to issue corrective directions to any relevant person, including social media services, to remove or disable access in Singapore to prohibited OEA, or to stop or reduce its electronic communication activity.

The ELIONA Bill extends these powers so that the Returning Officer may issue corrective directions against this new category of content, and corrective actions must be taken within the specified period of time.

As mentioned earlier, the Government will prioritise candidates' reports and declarations to the Returning Officer. If assessed to be a genuine case, the Returning Officer will issue corrective directions. Only in exceptional cases will directions be considered against false representations without a candidate's declaration. This may arise when the objective facts are widely known or if the Government has access to data that reliably confirms the candidate's actual speech or action.

The public can also report potentially prohibited content of candidates to the authorities for review.

To better equip the public to make informed choices during the elections, the public will be notified about corrective directions that have been issued against offending content.

Non-compliance with a corrective direction issued by the Returning Officer is an offence. Recognising the extensive reach and responsibility that social media services must uphold, we have raised the maximum fine to $1 million for a provider of a social media service that fails to comply with a corrective direction. This is a reasonable adjustment. The revised penalty is on par with similar offences under other content regulation tools like the Protection from Online Falsehoods and Manipulation Act (POFMA), the Broadcasting Act (BA) and the Online Criminal Harms Act (OCHA).

For all others, including individuals, there is no change to the financial and custodial penalties for non-compliance with a corrective direction: the penalty remains a fine not exceeding $1,000, imprisonment for a term not exceeding 12 months, or both.

We have engaged major social media services on the requirements under the ELIONA Bill and shared our expectation that such directions are to be promptly complied with to uphold the integrity of our elections.

Mr Speaker, as noted by Dr Carol Soon of the Institute of Policy Studies, the ELIONA Bill is carefully calibrated in its scope of the "what", "when" and "whom" and is a continuation of our principled approach towards the conduct of elections in Singapore.

First, the Bill addresses the most harmful digitally generated and manipulated content, including deepfakes, that can influence electoral outcomes, while recognising the value of novel content creation techniques and the desire of candidates to employ innovative methods to engage voters.

Second, it applies only during the election period, from the issuance of the Writ to the end of polling on Polling Day, to safeguard the integrity of the electoral process and preserve space for fair and legitimate political discourse during the elections.

Third, the safeguards apply to all candidates, regardless of political party and potential impact of the fake content. This recognises that fake content favourable to one candidate must be unfavourable to another and vice versa. Our voters must be able to make informed choices based on factual and truthful representation of our prospective political leaders. Candidates, too, have a responsibility to conduct themselves with integrity during the elections.

The ELIONA Bill updates our suite of measures introduced over the years to address various forms of harmful online content. During and outside the election periods, existing content regulation tools will continue to apply to certain types of AI-generated misinformation and deepfakes.

For example, under POFMA, a Minister, or an appointed Alternate Authority during the election periods, may issue a direction for a recipient to communicate a correction notice. The Minister or Alternate Authority may also direct the removal or disabling of access to deepfakes on the grounds that they contain false statements of fact and it is in the public interest to issue the direction. Under OCHA, directions may be given to deal with online activities that are criminal in nature, such as deepfake-related scams. Under the Protection from Harassment Act, individuals may seek recourse for certain content that has caused personal harassment, alarm or distress to the individual.

Beyond the expectations set out in the ELIONA Bill during the election period, social media services should also bear greater responsibility for digitally generated or manipulated content at all times. Most major social media services are also prescribed Internet intermediaries (PIIs) under POFMA. They will be required to prevent and counter the abuse of digitally generated or manipulated content at all times through an upcoming Code of Practice under POFMA.

This includes an obligation to put in place adequate systems and processes to enhance the transparency of digitally generated or manipulated content, such as through labelling. Like the three other POFMA codes in effect today, the new Code is being formulated in consultation with the PIIs. We intend to finalise the Code for issuance in 2025. Mr Speaker, may I continue in Mandarin, please.

(In Mandarin): [Please refer to Vernacular Speech.] Mr Speaker, I believe that many of us have seen friends or family members believe deepfakes they encountered online. No matter how we try to explain to them, they still believe the fake content. Sometimes, even we can be deceived, and the spread of fake content is very difficult to guard against.

With the rapid development of Artificial Intelligence (AI), deepfakes will only become more prevalent and realistic, allowing individuals with ulterior motives to sow seeds of doubt in our society more easily and quickly. If such fake content is widely disseminated during a General Election, the consequences could be too dire to imagine. In fact, this concern is not unfounded. In recent elections held in several countries, we have already witnessed the harm caused by fake content. This new law aims to prohibit the online publication of fake content that distorts a candidate's speech or actions so that we can better protect our democratic process and the fairness of our elections.

However, legislation is not a panacea. Enhancing the public's ability to discern the authenticity of content is the most effective preventive measure. Therefore, let us work together to be more vigilant about online content, and verify them against reliable sources when necessary. Everyone should do their part and be the first line of defence against fake content.

(In English): Sir, the ELIONA Bill will add an additional layer of safeguards to our elections, but everyone – candidates, citizens, tech platforms – has a part to play in protecting our democracy.

We must keep our elections fair and honest, conducted on the basis of fact, not fiction. By and large, Members of this House, regardless of party allegiance, have supported these ideals and I appeal to Members to stand behind this Bill so that we can continue to uphold the integrity of our elections. With that, Sir, I beg to move.

Question proposed.

Mr Speaker: Mr Yip Hon Weng.

5.29 pm

Mr Yip Hon Weng (Yio Chu Kang): Mr Speaker, Sir, this Bill aims to safeguard the integrity of our elections. It is a response to the increasingly sophisticated technological threats we face today. However, I believe that robust legislation requires careful consideration. It also needs clear articulation to avoid unintended consequences. In this spirit, I seek several clarifications on the Bill.

First, Mr Speaker, Sir, we must carefully maintain the balance between two critical elements. These are our citizens' fundamental right to freedom of expression and our collective responsibility to share information responsibly.

The Bill introduces significant penalties for sharing digitally altered content. Under clause 2, section 61a, it states that this offence does not apply to "private and domestic communications."

Yet, many political messages are not created by political parties or candidates themselves. Instead, supporters or proxies, including fake accounts, generate these messages. Deepfake videos and misinformation can circulate not just on social media, but also through messaging channels, such as WhatsApp and Telegram. They are transmitted very quickly and can go viral easily. Hence, these channels are often more damaging than we realise. The harm could be done before anything is reported.

Therefore, can the Minister clarify how the Bill will address fake accounts and deepfakes from these private accounts and messaging channels? Specifically, how will it deal with deepfakes or messages shared through these supposedly private communication channels?

While we acknowledge the need to balance freedom of expression, we must also differentiate between harmless satire and intentional falsehoods. This distinction applies even if the content is shared privately. We must remember – private does not mean harmless.

Second, Mr Speaker, Sir, we need to address concerns regarding political overreach.

The vagueness of certain definitions within the Bill could be concerning. The terms "realistic but false representations" and "manipulated content" are broad. I understand the need to capture the evolving nature of digital manipulation. Nonetheless, this breadth could lead to confusion or inconsistent application. It could even lead to potential overreach in enforcement.

I urge the Ministry to offer greater clarity on this point. Could the Minister elaborate on the specific types of content that would be considered an offence under this law? Additionally, how does the Ministry intend to differentiate between malicious manipulation and minor alterations? This would include artistic edits or satirical content. Clear definitions and guidelines are important. They ensure that this law meets its purpose while avoiding the stifling of legitimate political expression. Will the Ministry provide examples of content that would fall under this Bill's purview?

Third, Mr Speaker, Sir, we must confront concerns about potential selective enforcement. The enforcement mechanisms could be seen as vulnerable to bias. This is particularly true for the provision allowing corrective directions by the Returning Officer. Such bias could lead to possible allegations of selective application.

What concrete assurances can the Ministry provide to guarantee transparency and impartiality? How will the enforcement of this Bill remain demonstrably free from any unfair allegations of political bias? This is critical, especially since the Bill empowers candidates to request such corrective directions.

Furthermore, given the extremely short nine-day election campaign, how will the Ministry ensure timely follow-up on such requests? It is essential to balance efficiency and due process. We must ensure that fake accounts and deepfakes are not just detected but swiftly taken down before they poison public opinion.

Additionally, I wonder if we are being harsh enough. Should we, for instance, mandate that perpetrators bear the onus of ensuring complete removal of such content from the Internet rather than simply issuing a correction notice?

Many would simply ignore a correction notice and continue consuming the content. Should we not implement more punitive measures by working with social media companies to take down fake content? What mechanisms are in place for this?

Considering the large amount of resources potentially incurred for rectifying the damage, we should consider holding perpetrators accountable for it.

Fourth, Mr Speaker, Sir, we must acknowledge the challenges posed by rapid technological advancements. While this Bill focuses on identifying and mitigating manipulated content, we must recognise that detection technology may lag behind AI-driven content generation. This creates a real risk. Harmful content could spread before effective action can be taken.

Given the fast-evolving nature of AI and digital manipulation, how does the Ministry plan to stay ahead? What systems, partnerships or research initiatives will ensure robust detection capabilities? How will corrective actions be taken swiftly and accurately?

Fifth, Mr Speaker, Sir, legislative measures must be coupled with public awareness and education. Our citizens deserve the power to discern truth from lies. They must protect themselves and our democracy from the invisible hand of manipulation.

It is not enough to penalise the creators of misinformation. We must arm our citizens with the tools to fight it themselves. We need to equip them to critically assess digital media, identify manipulation and avoid falling prey to misinformation.

This cannot be achieved overnight. It requires a sustained, long-term investment in digital literacy programmes. Beyond penalties, is the Ministry planning any public education campaigns? Specifically, are there initiatives addressing AI-generated misinformation and improving digital literacy? How will these efforts be tailored to different demographics?

Sixth, Mr Speaker, Sir, this Bill needs further clarification regarding its application to foreign influence. While it prohibits election advertising by foreign entities, the borderless nature of online content makes preventing foreign interference a challenge.

How does the Ministry plan to address cases where content from overseas is designed to influence our elections? What protocols are in place to track, identify and block foreign-sourced digital manipulation? How can this be enforced on foreign-based entities, for instance, online media outlets based in foreign jurisdictions?

Lastly, Mr Speaker, Sir, we must consider whether the timeframe for this Bill's provisions is sufficient. The Bill's provisions apply only after the Writ of Election is issued. However, misleading or manipulated content could circulate long before the campaign period begins. Given that election advertising can start unofficially well before the writ is issued, why does the Bill only cover the period after the writ is issued? Has the Ministry considered a broader timeframe? Could we establish a "pre-election period" with safeguards to address early content?

Moreover, with a condensed nine-day campaign period, what assurances can the Minister provide regarding timely and decisive enforcement? The rapid spread of online content demands an equally rapid response.

In conclusion, Mr Speaker, Sir, I support the Elections (Integrity of Online Advertising) (Amendment) Bill. It is critical for protecting the integrity of our election process. Globally, we have seen the threat that deepfakes pose to elections. For example, in Moldova, a video of the President endorsing a pro-Russian political party caused public discord; in Slovakia, audio clips surfaced of a liberal party leader allegedly discussing vote rigging; and in the US, AI-generated content showed a dystopian future following President Biden's re-election bid.

The threat of deepfakes is not a distant possibility. It is already at our doorstep. Deepfakes of our Senior Minister and Prime Minister have been used to promote fake investment products. We saw just now a deepfake of Minister Josephine Teo. This highlights the potential for misuse, even outside election cycles.

These examples show that the threat is real, immediate and happening globally. Deepfakes can erode trust, destroy reputations and mislead citizens. We cannot allow technology to hijack our democracy. That is why I have raised several key questions today.

Free speech should not be a free pass for spreading falsehoods. The Bill must carefully balance the freedom of speech with measures to counter misinformation. It needs clear definitions and safeguards against biased enforcement. Transparency and democracy must be upheld.

Countering deepfakes demands a multi-faceted strategy. For instance, how will this Bill tackle fake accounts and deepfakes from private messaging channels?

We must invest in sophisticated technology to stay agile in the face of AI advancements. It is equally important to empower citizens with digital literacy skills. Addressing foreign interference and ensuring swift enforcement within the election timeline is critical. Democracy must be defended not just at the ballot box, but in the digital space. Let us create legislation that safeguards our democracy and elections while protecting our citizens. I support the Bill.

Mr Speaker: Ms He Ting Ru.

5.39 pm

Ms He Ting Ru (Sengkang): Mr Speaker, deepfakes, particularly malicious ones, pose a serious threat to our democratic processes, particularly during elections. While exciting, technological advances in the field of generative AI bring new challenges in maintaining the integrity of our political and electoral system. The proliferation of highly realistic yet fabricated content, particularly in the digital realm, poses a risk to our electoral system and, if we are not careful, will shake the trust citizens have in the democratic process here in Singapore.

It bears reiterating that deepfakes and digitally altered content are a very real and present danger to democracy. As mentioned by the Minister and also Member Yip Hon Weng earlier, in the 2023 Slovak parliamentary elections, we witnessed the potential of maliciously deployed deepfakes to sway the results of an election. Just two days before the elections there, during the equivalent of our cooling-off day, a fake audio clip surfaced, which was said to have been a recording of pro-European candidate, Michal Simecka, discussing electoral fraud with a prominent journalist. Both quickly denied its authenticity but the clip went viral.

The impact was also amplified by the deepfake being released during the election's "silence period", when media is prohibited from discussing election-related developments. In that election, the pro-Russia candidate Robert Fico ultimately won, which naturally led to speculation about whether the deepfake contributed towards Simecka's loss, given that he was polling stronger than the ultimate victor.

While political scientists on the whole concluded that the deepfake alone did not cause Simecka's loss, the very speculation caused by its going viral laid bare how dangerous it is for a democracy to exist in an environment of low trust in public institutions and a population with a propensity to believe in conspiracy theories.

The Workers' Party (WP) therefore supports the introduction of legislation to combat the threat of digitally manipulated media contained in this amendment Bill, although I wish to raise two main areas of concern and seek further clarifications from the Minister on several further technical points.

First, the new section 42L(4) contains a number of carve-outs from the ban on manipulated content in OEA during the election period, including for authorised news agencies. The reason given for this is to allow factual reporting. However, this is not reason enough to exempt these actors, as factual reporting should not require reproduction of prohibited material. In fact, we should consider a concerning scenario: authorised news agencies, when reporting on prohibited content, might inadvertently spread misinformation.

In our attention deficit world, many readers skim headlines and images without carefully reading the full article or captions. This creates a risk where such content, even when presented as part of factual reporting, could be mistaken for genuine content and go viral as real news.

Thus, the very act of reporting by reproducing these prohibited materials might unintentionally amplify their reach and impact.

Our disquiet over creating such a two-tier media landscape leads to questions about how we can ensure that media entities exempt from the prohibitions of the Act do not publish such content without consequences. What mechanisms will be in place to hold these outlets accountable if they do publish or propagate prohibited content, intentionally or unintentionally? More specifically, does the Minister believe that the existing codes of practice governing authorised news agencies are sufficient to address the concerns raised above, or would further updates be needed to combat the unique risks associated with digitally manipulated content and deepfakes?

Would there thus also be new codes or updates to the existing codes of practice, such as the promised new code of conduct which the Ministry of Digital Development and Information (MDDI) states will be published to ensure social media companies do more to moderate content? And when is the expected publication date?

Second, the Bill also states that private or domestic communications are exempted. The new section 61M(4) exempts private or domestic electronic communications between two or more individuals from the regulations. While we acknowledge the intent to protect personal privacy, we hope that this exemption does not become a Trojan horse used to overcome the Bill's defences against disinformation. This is because disinformation often spreads rapidly through private channels. It is also not a secret that modern communication platforms have blurred the lines between private and public spaces. What would be the standing of spaces such as private Facebook groups, private Telegram channels, locked Facebook profiles or messaging group chats? Would whether a channel is private hinge on, for example, whether the people in the group know one another?

It is important to have clarity on this as academics have found emerging evidence that propagandists increasingly exploit applications such as WhatsApp and Telegram, preying on their popularity, loose moderation policies and the trust within private networks. In Slovakia, the example I raised earlier, Telegram has become a haven for pro-Russian propaganda and a deepfake of Simecka was spread widely on pro-Fico Telegram channels ahead of the election.

In view of this, can the Minister clarify how the exemption for private messages will address the risks associated with widespread disinformation spreading through these channels? What are the criteria to be used to determine whether or not a specific communication is private and therefore exempt from the prohibitions contained in the Bill?

Aside from these, I have some clarifications around three broad areas: first, the scoping of the prohibitions; second, questions about the reporting and investigation of alleged offences; and third, the potential misuse of the regime.

On the scoping of the prohibitions, I note that the prohibitions and offences apply only during election periods and are confined to Singapore. While this scoping is necessary because the Acts being amended govern our two types of elections in Singapore, what is the treatment of prohibited content aimed at influencing political sentiment when we are not in an official election period?

Deceptive information may begin swaying public opinion well before an election is formally announced, and this is especially the case as the potential window for calling a general election narrows over time. After all, foreign disinformation groups are known to wage persistent year-round disinformation campaigns to influence political outcomes. For example, the government of Canada detected a Chinese "Spamouflage" campaign in which various Canadian Members of Parliament, including its Prime Minister, the Leader of the Opposition and members of the Cabinet, were targeted. The rapid response mechanism alerted the affected Members of Parliament, who were provided with advice and support on how to protect themselves from the campaign. The aim? To discredit and denigrate the targeted Members of Parliament by questioning their political and ethical standards, using deepfake videos and fake social media profiles.

While the punishments outlined in this Bill are meant to act as strong deterrence, they do not fully address threats from those operating outside of Singapore's jurisdiction. How then will we effectively combat the risks associated with a foreign coordinated campaign using prohibited content like deepfakes?

Next, moving to the investigation of alleged offences. Given that members of the public can report alleged prohibited content, what investigative capacity is there to look into claims made in this regard? Who undertakes the investigation, and how long would it take to complete before any further action is taken? What resources, both in terms of manpower and otherwise, would be available to the Returning Officer and ELD to make relevant decisions and take enforcement action?

After all, Singapore is somewhat unique in having a very short campaign period; coupled with the quick spread of digital information, this makes it even more imperative that decisions about claims are made rapidly.

Also, what happens after an offence is reported and the decision is made to issue a corrective order? Would the Returning Officer then simultaneously ask both the poster and the platform to take it down? What then happens if there is a refusal to comply with the order? After all, for platforms, one might question whether the maximum fine of $1 million is sufficient to compel trillion-dollar companies, such as Meta and TikTok, to comply.

Would the Minister also be able to elaborate on any appeals process, if one were to disagree with the corrective order? This is particularly important, as manipulation technology has now reached the stage where even experts sometimes disagree about whether a piece of content is real or doctored.

Finally, I move to the potential abuse of this process, particularly given the very short nature of our official election period and given that members of the public can also make reports. While we often view deepfakes as malignant and harmful, elections such as the recently concluded Indian national elections have seen instances where generative AI and deepfake technology have been used to manipulate videos of candidates in ways that benefit them. A classic example would be using deepfake technology to show candidates speaking in languages or dialects that they do not themselves speak, in a misleading effort to endear themselves to certain segments of the electorate. Would these cases of "positive deepfakes" fall under the scope of the prohibition?

Also, given that anyone can make a report, what is the penalty if a member of the public makes a false report? And how will this be communicated so that members of the public do not spam reports as an act of mischief during the election campaign?

Finally, while we have focused a lot today on the potential harms and dangers that deepfakes may pose to democratic processes, some researchers have also warned against being overly alarmist. In a 2020 paper, Orben warns against what she terms "technology panics", arguing that these can sometimes encourage quick fixes that "centralise control over truth". I believe that Member Mr Yip also raised similar concerns about overreach earlier.

Instead, I think it would be more helpful to invest in nuanced and effective proactive public education strategies. A technique that appears promising is "pre-bunking", the process of debunking lies, tactics or sources before they strike, because prevention is more impactful than cure. It works like inoculation and aims to build mental resilience against misinformation before people are exposed to its full force, much like the Ministry of Defence's Exercise SG Ready earlier this year, which involved a simulated phishing exercise run by organisations. Pre-bunking works by exposing people to weakened forms of misinformation and uses this to teach them to spot manipulative techniques used by fake news peddlers.

This approach seems to work across different cultures and across those with differing political views and should be integrated into our wider strategy tackling the effects of misinformation on our population. Specific proposals could include enhancing media literacy education in schools and other touchpoints, where our citizens hone their critical thinking skills necessary to navigate the increasingly complex information landscape. We can also use short-form content on social media and interactive online experiences to reach a wide audience, teaching them to recognise common manipulation techniques used in deepfakes and other types of misinformation campaigns.

To conclude, we support the addition of measures to tackle the harm that deepfakes and such manipulated content can cause during the especially vulnerable period of an election campaign. However, I believe that there are a number of concerns and clarifications that I hope the Minister can address, as we work together to ensure that our democratic process does not come under threat from sophisticated manipulated media.

Mr Speaker: Mr Zhulkarnain Abdul Rahim.

5.51 pm

Mr Zhulkarnain Abdul Rahim (Chua Chu Kang): Mr Speaker, Sir, I rise today to express my support for this timely and necessary Bill, which seeks to address the growing threat posed by deepfake technology, especially in the context of elections. As we stand at the intersection of rapid technological advancements and evolving societal dynamics, it is crucial that our legal framework keeps pace with the changes in the world – in terms of changing technology, information dissemination and diverse electorate.

Countries around the world have begun taking legislative steps to combat the potential misuse of deepfake technology, particularly in the electoral context. The US, for example, has seen various states introduce Bills aimed at regulating deepfakes, especially during election periods. Some states have enacted laws that prohibit the use of deepfakes to deceive voters, particularly close to elections. Similarly, the federal government has started considering regulations focused on transparency and accountability in political advertising.

Meanwhile, the United Kingdom (UK) has been exploring the implications of deepfakes on democracy, suggesting that existing laws on misinformation need to be updated to specifically address deepfakes, with a focus on protecting electoral integrity. Mr Speaker, Sir, allow me to speak in Malay.

(In Malay): [Please refer to Vernacular Speech.] Deepfake technology poses a significant threat to our democracy and electoral process by creating realistic but false content that can mislead voters, distort public opinion and undermine confidence in the validity of elections. Through audio and video manipulation, deepfakes can be weaponised to spread misinformation, damage reputations and sow confusion during critical moments in elections. The speed at which manipulated content can spread on social media only increases the risk of potentially influencing a large portion of voters before the information's authenticity can be verified.

This erosion of trust is dangerous as it weakens the foundations of a fair and transparent electoral system. Everyone should be concerned and vigilant about this threat because it has the power to distort truth in ways that are difficult to detect and refute. In an electoral context, this could mean voters making decisions based on falsehoods or maliciously wrong content.

As Singaporeans and concerned members of the public, we must be cautious about the information that we receive and share, especially in the digital space. Critical thinking, media literacy and verifying the authenticity of content before sharing are key responsibilities that we should all shoulder in facing this ever-growing threat.

To protect against the dangers of deepfakes, the Government, technology companies and individuals need to work together. Legislative measures are in place to address false content during elections and ensure that those behind such malicious actions and intentions are held accountable.

Social media platforms must enhance their ability to detect false or misleading content. At the same time, public awareness campaigns can help educate citizens on how to recognise manipulated media content. Through vigilance, legal protections and responsible content sharing, we can safeguard our democratic system from treacherous elements seeking to mislead and divide our people through technologies like deepfakes. This is the purpose of this Bill.

(In English): Mr Speaker, Sir, these developments reflect a growing international recognition of the dangers deepfakes pose to electoral processes. Transparency, accountability and public awareness are the cornerstones of these legislative efforts and our Bill today is part of this global movement to safeguard the integrity of elections.

Nevertheless, I have a few clarifications.

My first is on defining the election period. The Bill targets deepfakes deployed during the election period. However, what exactly constitutes the election period is not so clearly stated. I heard Minister Josephine Teo stating in the Second Reading speech earlier that the election period begins from the issuance of the Writ of Election until the close of polling day. This clarification is welcome, as I believe that the Bill itself does not define the election period and it is somewhat unclear in the Parliamentary Elections Act, which only defines the postal voting period.

I welcome the clear and certain timeline that the Bill seeks to address, which is within the election period only. This prevents overreach beyond the election period. While we must act to prevent malicious use of deepfakes during elections, we have other tools, such as defamation laws, the Protection from Online Falsehoods and Manipulation Act, or POFMA, and the Online Criminal Harms Act, or OCHA, to address such issues and deepfakes in non-election contexts. We should avoid chilling the free expression of speech outside of election periods. Hence, I thank the hon Minister for her assurance just now during the Second Reading speech.

The second clarification is on the disclaimers for AI-generated content and satire. The Bill rightly offers some defences for those accused of spreading deepfake content. Under clause 2 of the explanatory statement of the Bill, it is a defence for an individual to prove that they did not know and had no reason to believe that the representation of a candidate was, in fact, untrue; the offence also does not apply to private and domestic communications, the publication of news by authorised news agencies and other prescribed circumstances.

This means that a person forwarding a link published by an authorised news agency to his family members in a WhatsApp group chat might not be committing an offence, if that person did not know or had no reason to believe that the representation of the election candidate was false. I thank the hon Minister who clarified that the offence will not cover private and domestic communications like WhatsApp and closed group chats.

Nevertheless, we should still emphasise the need for the public to be responsible and, as far as possible, to ascertain the truth and veracity of the source of the information before spreading it. This will be in line with our vision of building a nation of discerning digital natives who are vigilant in guarding against online falsehoods and disinformation.

Next, we must also consider content that is clearly labelled as generated by AI or created for satirical purposes. Would such labelling be sufficient to avoid any liability on the part of the content creator, even if the content was generated by AI or deepfake technology and is clearly not meant to be relied upon for the truth of its contents? The downside is that, due to the brief but intense election period, the harm that such content would do may outweigh the considerations of artistic licence, as not many people may have the time or context to understand that the content is satire.

A case in point is the Australian case involving a deepfake video of the Queensland premier, Annastacia Palaszczuk, in 2020. The post depicted the premier as hosting a press conference claiming that the state was "in massive debt" and had "huge unemployment". The deepfake video then had her saying, "I would like your vote on 31 October, but if you want to get rid of us, I completely understand." She obviously did not say any of this and none of it is true.

The video, though clearly marked as fake, was widely viewed and caused significant damage before it was taken down. This demonstrates that even when deepfakes are labelled as fake or satire, they can still cause harm, particularly in the short timeframe of an election period.

In California, the law on deepfakes makes an exception for satire or parody, provided that such content is labelled as such. For instance, election communications that contain materially deceptive content which constitutes satire or parody and containing such disclosure that the content has been manipulated for the purposes of satire or parody would be allowed.

However, in Singapore, we do not have a full defence for satire and parody. The defamation laws only provide full defences for fair comment, justification and privilege. In Singapore, there are clear elements to establish the tort of defamation. Firstly, the statement or content must bear a defamatory meaning. Secondly, there must be a publication to a third party. And lastly, there must be a reference to the complainant. The Singapore Court of Appeal has held that defamation also includes inferences or implications that the ordinary, reasonable person may draw from those words in the light of general knowledge, common sense and experience. Hence, if the content is premised on an untruth or a deepfake, then it should be caught by this offence even if there is a clear disclaimer, because social media clicks and comedy cannot come at the price of our politics and democracy.

Given the sensitive nature of election periods, content creators should be mindful of their responsibility to ensure that their work is not based on untruths or deepfakes. This would encourage more responsible content creation and dissemination.

My third and last set of clarifications is on the due process and procedure. An important aspect of this Bill is the due process afforded to individuals or entities accused of distributing deepfake content. The Returning Officer has the power to issue corrective directions and candidates can request such directions under the new section 61MA. However, we must ensure that the process is not open to abuse.

I welcome the hon Minister's clarification that there will be a requisite form for such reporting and penalties for abuse of the same. Given the seriousness and severity of the consequences, perhaps it is important that any such request for a corrective direction be accompanied by a statutory declaration and, if necessary, a Police report, to demonstrate the seriousness of the claim.

There must also be safeguards to prevent the law from being abused as a political tool to take down legitimate election content. This also provides a basis for recourse for a business or someone who is not a candidate but was nevertheless adversely affected by such false content to seek consequential damages after the election period. We must also ensure that the content in question is truly a deepfake before any corrective action is taken.

Given the rapid advances in technology, it has become increasingly difficult to detect deepfakes. While early signs, such as unsynchronised lips or unnatural lighting, could previously indicate a deepfake, generative AI technology has now reached a level where these signs are no longer reliable. This underscores the need for the Government to allocate sufficient resources to keep pace with these advancements.

Another critical issue is the potential misuse of deepfake technology to create false endorsements from non-candidates, such as influencers or public figures. Currently, the Bill does not cover such instances. For instance, in the heat of campaigning for the US presidential elections, global superstar Taylor Swift was recently the target of fake images implying her endorsement of a particular political candidate. This prompted her to publicly endorse another candidate to prevent such falsehoods from recurring. Such instances highlight the need to extend protections not only to candidates but also to individuals whose likeness or identity may be misused to manipulate the electorate. What are the safeguards for non-candidates who may be the subject of such deepfake content within the election period? The fall-out from such a deepfake incident may similarly influence, sway or confuse the electorate, no less than if it were directed at political candidates.

In conclusion, Mr Speaker, Sir, this Bill represents a significant step forward in safeguarding the integrity of our electoral process. Deepfakes, if left unchecked, can be weaponised to undermine our democracy, distort the truth and damage the reputation of our elections. It is crucial that we act now to put in place the necessary legal safeguards to protect our democratic values. Quite apart from the political arena, we must remember that candidates have their own lives, their own families, their own children, careers, organisations and loved ones, all of whom will be impacted by such false information long after the dust of the election has settled and long after the heat of the hustings has cooled down. As Victor Hugo wisely wrote in Les Misérables, "Whether true or false, what is said about men often has as much influence on their lives and particularly on their destinies as what they do".

Let us ensure that, in Singapore, what is said about our elections and our candidates is rooted in truth, integrity and respect for our democratic process, ensuring that truth, not deception, guides our democratic processes. Mr Speaker, Sir, I stand in support of the Bill.

Mr Speaker: Mr Vikram Nair.

6.06 pm

Mr Vikram Nair (Sembawang): Mr Speaker, I support this Bill. This Bill aims to deal with the hazards posed by manipulated content during an election. It is now possible, with the assistance of technology, to recreate not only images, but also videos of people, to have them say or appear to do things they never did. Thanks to machine learning, even voice, mouth movements and speaking style can be replicated to create videos that appear authentic. The AI-generated deepfake video of herself that the Minister played in her introductory speech is a chilling example of the power of this technology. Likewise, the example she had shared of President Joe Biden's voice being imitated in robocalls to tell potential Democrat voters not to vote is a recent real-world example of such technology being used in a campaign. It is clear that such technology cannot be left unchecked.

Against this backdrop, this Bill makes it an offence for any person to publish or cause to be published such videos. Notably though, this Bill does not make it an automatic offence to create such a video, but only to publish it. I would like the Minister to clarify the intended scope of the word "publish" and whether it is intended to capture, for example, every person that recirculates such a video on social media. Is each share on Facebook, for example, a fresh publication? If so, I think this should be made clear so that people are mindful about what they share, knowing that they may be taken to have published material if they reshare the video on their own platforms. The law of defamation already treats a person who shares a Facebook post as a potential publisher of that content and I think taking a consistent approach with this is fair.

A related matter is whether Facebook page administrators will be held liable for content posted in the comments of their Facebook pages as publishers of that content. In the UK, courts have held that administrators of Facebook pages can be considered publishers in relation to comments posted by third parties on their page. And if the intention in this Bill is to extend this duty to Facebook page administrators, it will be good to clarify this. I think it is principled to do so because Facebook page administrators should take responsibility for contents on their page.

In terms of scope, I note that one of the requirements of limb "c" of the operative provisions is that the content must relate to "election in the electoral division". I would be grateful if the Minister can clarify that this is not intended to restrict enforcement to cases where it can be shown that the video related to a particular electoral division. A video that, for example, falsely attributes criminal or immoral behaviour to a candidate can damage that candidate, even if no reference is made to the constituency the candidate is in or to the election directly.

I would also be grateful if the Minister could clarify the intended scope of the defence in subsection (4) of both proposed amendments, which states that the offence in subsection (1) does not apply to any communication between two or more individuals that is private in nature. I think domestic communications are clear. But for private communications, is this exception intended to cover any sharing of such messages by private means, such as WhatsApp?

If so, I am concerned this exception may be too broad as, anecdotally, there is already a lot of fake news being circulated through WhatsApp, often forwarded by individuals to others in their group or contacts list. These may all be known contacts, but if a person forwards a video to several hundred people on his contact list, that may be more damaging than the same video on a Facebook account, precisely because the message comes from a trusted source in a private message channel. I think there is scope to say that if a person forwards a message to, say, a hundred contacts, that would be publication.

On the other hand, if this exception is to apply to any communications as long as it goes through a private channel, that means these manipulated videos may continue to be circulated through these means and some of the intended mischief may not be captured.

I also believe there is scope for legislation to go further in eradicating this threat of manipulated content. Some suggestions for future consideration include, first, this Bill only protects election candidates and only covers them from the time they are indicated as such. I think there would be some scope for extending protection to existing Members of the House and the President as well, since a video circulated ahead of the person being named a candidate in the next election may still do significant damage and there is no reason such harm should not attract punishment. This is also a much faster and less burdensome remedy than requiring the person to commence defamation action against any and all those who may have shared a link to such content.

Second, I think there is scope for significantly more serious and even punitive penalties to be imposed against the creators of the contents themselves. I think the current penalties of a fine of $1,000 and imprisonment not exceeding 12 months suggest that this is to be treated as a minor offence, with no distinction made for creators of content and those who may just publish it on their platform by sharing a link.

Third, the offence is currently targeted at a "person", which suggests a natural person. I think we may also have to deal with situations where content is published and circulated by bot accounts, and perhaps even substantially generated by such accounts. Such publications may be outside the scope of this Bill, which would make taking action against such accounts, or even against social media platforms, more difficult, because the primary offence is not triggered.

Although I know most platforms have internal rules that require natural persons to operate individual accounts, this is not currently a matter that is regulated and there are many reports of large numbers of bot accounts operating on social media platforms.

Overall, though, I think this is an important piece of legislation that takes concrete measures to help combat the insidious use of AI to undermine candidates and manipulate elections. I think there is scope for even stronger measures against the threat and would suggest that these measures be continuously evaluated and improved as needed.

Mr Speaker: Ms Hany Soh.

6.13 pm

Ms Hany Soh (Marsiling-Yew Tee): Mr Speaker, I rise in support of this Bill. Among other amendments, this Bill introduces section 61MA, which criminalises the publication of OEA that contains realistic representations, created using digitally generated or manipulated content, of a candidate saying or doing something that he or she did not, in fact, say or do. I will refer to these as deepfakes in the following parts of my speech.

With the advent of technology and ubiquity of social media, consumption of digital content is a constant in our daily lives, arguably even a necessity. There are many ways we consume digital content. There are those we search by ourselves in order to keep abreast of current affairs or due to personal interests. Then, there are some that are shared with us by families and friends.

We often also come across digital content as a result of targeted advertisement algorithms. It is, therefore, safe to presume that all of us would have at some point seen deepfakes. The question is, have we been able to spot them? And of the digital content that we have viewed and remember, are there any falsities that are wrongly and inadvertently being perpetuated as truths by our memory?

The opening line of the Cyber Security Agency of Singapore's advisory published on 21 March this year accurately addresses the issue. I quote, "Artificial Intelligence or AI is being used to produce increasingly convincing deepfakes that are indistinguishable even to the trained eye".

Mr Speaker, a CNA article dated 17 January 2024 states that politicians are prime targets of deepfakes. It is right.

In December last year, then-Prime Minister Lee Hsien Loong was portrayed in a deepfake video promoting a cryptocurrency scam. Also in December last year, Prime Minister Lawrence Wong was shown endorsing an investment scam. In April this year, Foreign Affairs Minister Dr Vivian Balakrishnan, along with other Members, Dr Tan Wu Meng, Mr Edward Chia and Mr Yip Hon Weng, each received an extortion letter containing a manipulated photo. The question of "whether we will be targeted" is a foregone conclusion.

With our next General Election to be called by November next year, this Bill is a timely defence apparatus against the ever-evolving weaponised deepfakes. This Bill will provide a key line of defence against attacks that would invariably seek to strike at the foundation of our democracy.

While the obvious danger posed by deepfakes is that they mislead their audience into believing falsities, viewers could also be confused to the point where one is unable to tell fact from fiction. The public's awareness of deepfakes, which is what we are trying to achieve through a whole-of-society effort, could even be manipulated by bad actors for nefarious purposes.

They could, for example, when confronted with matters of record which are true but adverse to their interests, seek to avoid accountability by falsely claiming that they have been victims of targeted deepfake campaigns. This is a phenomenon that legal scholars Robert Chesney and Danielle Keats Citron have coined the "Liar's Dividend".

Clearly, Mr Speaker, we anticipate that we will soon be confronted with novel and previously unimaginable difficulties presented by deepfakes. We are gearing up for a battle against deepfakes, not fully knowing who our adversaries are and what their strategies will be. Notwithstanding, we have to do our best to be ready and ably protect the sanctity of our democracy.

To this end, I raise the following clarifications.

Firstly, under this Bill, the Returning Officer's power to issue corrective directions under section 61N(1) will be extended to, among others, online election advertising that contravenes the new section 61MA. How will the Returning Officer be supported in his or her new responsibilities, which are in addition to his or her already existing heavy and critical duties?

Secondly, in operational terms under section 61N(2A), what would be the expected response time by the Returning Officer, upon receiving a candidate's request in the prescribed form for a direction under subsection (1)?

Thirdly, how will this Bill be operationalised to augment and supplement our current measures against foreign actors and actors who reside outside Singapore, who seek to interfere in our electoral process using deepfakes?

Fourthly, section 61MA(4)(a) excludes any communication of content between two or more individuals that is of a private or domestic nature by electronic means. While this rightly protects freedom of speech and privacy, how can we ensure that the objects of this Bill will not be undermined by the "dark social" phenomenon?

Lastly, while corrective directions may be issued under this Bill, there remains a possibility that it may not reach all of us timeously, or at all. Would the Ministry then consider setting up a digital portal containing a repository of corrections and directions issued under this Bill? In Mandarin, please.

(In Mandarin): [Please refer to Vernacular Speech.] Mr Speaker, elections must be fair and just. When voting, voters ought to consider the information which the candidates have genuinely presented, not information which is fabricated or distorted. However, with the advancement of media technology, especially AI, we find that many deceptively realistic videos have spread on social media platforms. Viewers who do not verify them against reliable news sources can easily be misled. Therefore, I support this Amendment Bill to ensure that the publication of digitally fabricated and distorted content about candidates' words and actions will be prohibited during elections.

Here, I would like to pose a few questions. One type of communication not prohibited is messages conveyed privately in family group chats. In this regard, I would like to ask if the Government has considered that this could be exploited as a channel to disseminate fabricated false content during election periods.

Secondly, I agree with what the Minister said just now that this piece of legislation is not a panacea. It is far more important that the public is equipped with the ability to discern what is fake from what is real. Hence, apart from passing this legislation, does the Ministry of Digital Development and Information (MDDI) plan to launch any educational activities in future to educate the public on how to identify AI-generated fake messages?

(In English): Mr Speaker, notwithstanding my clarifications, I stand in support of this Bill.

Mr Speaker: Ms Joan Pereira.

6.21 pm

Ms Joan Pereira (Tanjong Pagar): Mr Speaker, Sir, I support this Bill because it is a much-needed step in the right direction, given the pace of developments in GenAI technologies. The proposed changes will help to ensure that our elections will be conducted in a fair and transparent manner.

However, I would like to ask the Minister why the new measures only apply to digitally generated or manipulated content depicting political candidates. Would the Ministry consider expanding the scope of content to include indirect representations and depictions?

Candidates in an election would be the most vulnerable to direct and malicious acts of misinformation. However, they are just as susceptible to indirect attacks. Perpetrators can be creative with their methods and try to find ways around existing legislation. For example, they could exploit ways to cast doubt on or misrepresent a candidate and what the candidate stands for, without mentioning or depicting the candidates themselves. Would such a potential scenario be covered by any of the existing or proposed pieces of legislation?

Other jurisdictions, including South Korea, have similar laws in place. Before its legislative elections in April this year, South Korea implemented a 90-day ban on AI-generated deepfake content of a political nature. Would the Ministry consider doing the same?

Next, I would like to ask why forwarding via messaging apps is not included in the proposals. Besides social media platforms, which are under the purview of this Act, we should also consider closing the gap in terms of messaging apps. Messaging apps, such as WhatsApp and Telegram, are widely used by Singaporeans as key sources of information and news.

While I understand the Ministry's intention to maintain the balance between the freedom of expression in private and potential misrepresentation of what a candidate did or said, this is a loophole which may be exploited. Those with ill intent may exploit this gap in order to ensure that their fake content reaches the widest audience possible to impact the electoral results. Sir, in Mandarin.

(In Mandarin): [Please refer to Vernacular Speech.] Besides the social media platforms which are under the purview of this Act, we should also consider closing the gaps in terms of messaging apps. Messaging apps, such as WhatsApp and Telegram, are widely used by Singaporeans as key sources of information and news. While I understand the Ministry's intention to maintain the balance between the freedom of expression in private and potential misrepresentation of what a candidate did or said, this is a loophole which may be exploited. Those with ill intent may exploit this gap in order to ensure that their fake content reaches the widest audience possible to impact the electoral results.

(In English): The practicality of enforcement actions may be a factor for consideration, but perhaps the Ministry would consider sending a clear signal that such content is banned and that forwarding them is an offence.

It is stated that the Returning Officer can issue corrective directions to individuals who publish such content, social media services and Internet Access Service Providers, to take down offending content or to disable access by Singapore users to such content during the election period.

The Bill will allow candidates to make a request to the Returning Officer to review content that may breach the prohibition and issue corrective directions. Candidates who have been misrepresented by such content can, in turn, make a declaration to attest to the veracity of his/her claim.

May I ask how the Ministry will ensure that the department and personnel in charge of handling such feedback and requests are adequately resourced and well-trained, and whether more sophisticated technological tools will be deployed to keep up with the workload? New AI content can be generated very quickly and at little expense. How will the teams handle situations where there could be an influx of requests, some maliciously designed to overwhelm the system?

Mr Speaker, Sir, the integrity of our election environment, system and process depends on our capability to counter rapidly evolving threats and interferences. We must remain vigilant and be prepared to counter perpetrators and deal with them severely, to neutralise their attacks and serve as a deterrent.

In addition to effective legislation and enforcement, I urge the Ministry to invest in preventive measures as well, such as identifying, training and empowering new talents in this field in our Ministries and agencies, and adopting advanced technological tools to combat misinformation in elections.

Mr Speaker: Leader.




Debate resumed.

Mr Speaker: Dr Wan Rizal.

6.27 pm

Dr Wan Rizal (Jalan Besar): Mr Speaker, I rise in support of the Bill. It is timely and necessary in an era when technology rapidly transforms how we live, engage with and consume information. Our democracy thrives on citizens' informed choices, which can only be made when voters access accurate, reliable information.

In recent years, we have witnessed how digital platforms have become powerful tools for political engagement. These platforms are integral to modern elections, from campaign advertisements to social media debates. However, with this shift comes the darker side – misinformation, deepfakes and digitally manipulated content designed to deceive and mislead voters.

This Bill is essential to safeguard against such threats by prohibiting the publication of realistic but false representations, particularly those manipulated through GenAI. Voters must cast ballots based on truth, not illusions.

Sir, GenAI is a remarkable technology with transformative potential across industries, from healthcare to education. But like any powerful tool, it can be misused. This Bill specifically addresses the risk posed by AI-generated deepfakes: audio or visual content that can make it appear that a candidate said or did something they never did.

Imagine the harm caused by a manipulated video showing a candidate making a statement they never uttered, such as endorsing a controversial policy. If not swiftly addressed, such a video could spread like wildfire across social media, reaching thousands within minutes and distorting public perception before the truth has a chance to catch up. In the digital world, the truth often struggles to keep pace with lies and by the time the damage is corrected, the harm is usually irreversible.

In 2020, a manipulated video falsely depicted then-Belgian Prime Minister Sophie Wilmès blaming environmental damage for the COVID-19 pandemic. Although the video was quickly debunked, it spread widely online, illustrating how rapidly deepfakes can mislead the public before the truth emerges.

Similarly, during the 2023 Slovak elections, which the Minister mentioned earlier, AI-generated videos spread across platforms like Facebook and Telegram, including one falsely depicting a candidate discussing vote-buying. Such incidents highlight the urgent need to protect our elections.

Sir, we must also recognise the delicate balance this Bill attempts to strike. As with any legislation that governs speech and expression, there is always the concern that it could be perceived as going too far. We do not want this Bill to be perceived as inadvertently stifling legitimate debates. Therefore, I am pleased that the Bill exempts private communications and news reporting by authorised agencies. I am also glad that candidates have the right to defend themselves against false content. However, we must also be mindful of the potential for abuse.

The corrective request system should not be exploited. Therefore, robust mechanisms must be in place to ensure that only genuine, harmful misinformation is targeted and frivolous complaints are filtered out.

Sir, while I fully support the Bill, I would like to highlight a few areas where further clarity might be necessary.

First, there is the issue of determining what qualifies as a "realistic enough" representation to be considered misleading. The subjective nature of this standard could lead to difficulties in enforcement. Sir, I am grateful that the Minister shared some useful examples earlier, but how do we communicate this clearly to the public? Could the Ministry provide consistent and fair enforcement guidelines?

It is also worth noting that some of the most concerning platforms, such as WhatsApp and Telegram, operate through private, encrypted messaging systems. In fact, on Telegram, users do not even have to share their phone numbers. These platforms allow information to spread rapidly through group chats, yet they remain largely outside the scope of regulatory control due to their private nature.

At this juncture, we need to clarify that it is okay to chat and discuss politics, but wrong to spread deepfakes and misinformation.

This raises important questions. How can we effectively curtail the spread of misinformation on platforms where content is shielded by encryption? Should these platforms bear the same responsibility as public social media platforms?

Sir, in India, they provide an interesting model with their Deepfakes Analysis Unit, which allows the public to submit questionable content via WhatsApp for rapid verification. Such an initiative could empower Singaporeans to take an active role in reporting misinformation, especially on platforms that are difficult to monitor. By creating a similar content verification channel, we could involve the public more directly in efforts to counter disinformation, ensuring quicker identification and resolution of misleading content circulating on encrypted platforms.

Additionally, I would like to ask how the Government plans to manage the volume of manipulated online content that could be generated during election periods? Will there be dedicated resources or teams to monitor and enforce these regulations in real time and on time? Given that technology moves very rapidly, how are the tools used updated accordingly?

Clearly, a swift response is crucial.

Sir, another important aspect of this Bill that I welcome is social media platforms' increased responsibility. These platforms are not just passive conduits of information. They are potent actors in shaping public opinion. By holding them accountable and imposing penalties of up to $1 million for failing to act on corrective directions, we ensure that platforms share in the responsibility for maintaining electoral integrity.

But then again, is a $1 million fine truly fair or enough across the board? Platforms vary significantly in size and influence, with some wielding far more power than others. Should they all be held to the same standard, or should penalties scale according to their reach? What happens if a platform takes too long to respond? What qualifies as "long enough" in this context?

These are questions we must carefully consider as we strive for effective and fair enforcement.

Finally, Sir, I hope that the Minister considers a tougher stance against perpetrators beyond the corrective directions, given that misinformation and the loss of reputation would not only affect the candidate but their family, their loved ones and even their jobs. The mental impact can never be underestimated.

Sir, in conclusion, the Bill is a critical step in protecting the integrity of our electoral process in this digital age. It addresses the growing threat of digitally-manipulated content, reinforces accountability for candidates and platforms and ensures that our democracy remains a place of truth and informed choice.

As we look to the future, let us continue to adapt and evolve our laws to meet the challenges of the digital age. What seems like science fiction today could be a reality in the next election. While this Bill tackles AI-generated deepfakes and manipulated content, we must remain vigilant for new technologies that could be used to distort electoral processes.

We must also collectively, as a society, be able to identify and reject misinformation and deepfakes. With this Bill, we reaffirm our commitment to a fair, transparent and trustworthy electoral system for all Singaporeans. Notwithstanding the concerns and clarifications raised, Sir, I support the Bill.

Mr Speaker: Mr Louis Ng.

6.37 pm

Mr Louis Ng Kok Kwang (Nee Soon): Sir, this Bill seeks to uphold the integrity of Singapore's electoral process by giving the Returning Officer certain powers to combat deepfakes that misrepresent candidates during the General Elections and Presidential Elections. I have three points for clarification to raise.

My first point is on the candidates who have the right to request that corrective directions be issued. Under section 61N(2A) of the Parliamentary Elections Act and section 42LA(4) of the Presidential Elections Act, candidates may request the Returning Officer to take action against manipulated OEA. The wording of the provisions is broad enough to include both situations where a deepfake prejudices a candidate and situations where it advantages one. However, MDDI's public statements appear to suggest that only candidates who have been prejudiced by deepfakes can request corrective directions.

For instance, in a press release on 9 September 2024, MDDI stated, "Candidates who have been misrepresented by such content can make a declaration to attest to the veracity of his/her claim." In another media article, a MDDI spokesperson was quoted as saying, "In the case of deepfakes featuring political candidates, we do need the individual to come forward and say that this is a misrepresentation."

What if, conversely, the deepfake is beneficial to its subject? Do other candidates running in the same election have the right to request corrective directions?

This is not a hypothetical scenario. AI was used by the main presidential candidates in Argentina's elections in 2023. In addition to damaging images of the opposing candidate, the candidates also produced favourable deepfake posters of themselves. In Pakistan's 2024 elections, former Prime Minister Imran Khan's party used AI to create and disseminate speeches based on notes that he passed to his lawyers from prison. Khan even delivered an AI-generated victory speech after wins by independent candidates backed by his party. Deepfakes may even be made of individuals who are no longer alive. In the Indian state of Tamil Nadu, a political party used AI to recreate video speeches by a long-deceased party leader, in which he complimented current party leaders.

A candidate who is benefiting from a deepfake may have no incentive to curb the spread of that deepfake. For the avoidance of any doubt, can the Minister confirm whether any candidate can request corrective directions to be issued, not just candidates who are the subjects of the content in question?

My second point relates to the level of belief necessary to establish the offence of publishing manipulated OEA. Under section 61MA(1)(e) of the Parliamentary Elections Act and section 42LA(1)(e) of the Presidential Elections Act, the representation must be realistic enough that it is likely that some members of the general public would, if they heard or saw the representation, reasonably believe that the candidate said or did that thing.

Can the Minister clarify how the Returning Officer and the Court should determine whether this standard of realism is met?

The timeframe for a Returning Officer to make a corrective direction is much shorter than Court proceedings prosecuting the offence. It will be more difficult for a requesting candidate to gather evidence within a short timeframe than a prosecution collecting evidence after the fact. What kind of evidence must the candidate requesting for corrective directions submit? Will the requesting candidate or prosecution need to establish actual belief or is potential belief sufficient?

Can the Minister also clarify how the threshold of "some members of the general public" compares to thresholds under other laws which deal with misinformation? For instance, under the tort of defamation, a statement that is defamatory "tends to lower the plaintiff in the estimation of right-thinking members of society generally". Under POFMA, a statement of fact is "a statement which a reasonable person seeing, hearing or otherwise perceiving it would consider to be a representation of fact".

By contrast, the offence of publishing manipulated OEA only requires the candidate to show that the deepfake would be regarded as genuine by "some members of the general public". Certain segments of the general public, such as the elderly, are less technologically-savvy and they may regard a deepfake to be genuine more readily than other segments of society.

This may lead to a situation where material found to be manipulated OEA may not necessarily meet the threshold for the offence of communicating false statements of fact under POFMA. A candidate may succeed in obtaining a correction direction for manipulated OEA but may not succeed under the tort of defamation.

Can the Minister explain how the level of belief necessary to establish the offence of publishing manipulated OEA compares to the requisite state of mind under other laws which deal with misinformation? If the differences in thresholds are intended, can the Minister share the rationale for the differentiated thresholds?

My third and final point is on the exceptions to the ban on deepfakes during elections. Under section 61MA(4) of the Parliamentary Elections Act and section 42LA(4) of the Presidential Elections Act, the ban does not apply to communications which are "of a private or domestic nature".

As other Members have shared, certain social media or messaging platforms can be used to communicate content to a large number of people, even within private chat groups. WhatsApp allows for groups of up to 1,024 members; Telegram supports groups of up to 200,000 members; and a person can add up to 5,000 friends on Facebook.

Will a message sent to a private group chat with the maximum number of members or a Facebook post, which is set to private but viewable by 5,000 friends, still be considered a private communication? What factors, aside from the number of persons receiving the communication, will the Returning Officer consider when determining whether the communications are "of a private or domestic nature"?

Sir, notwithstanding these clarifications, I stand in support of the Bill.

Mr Speaker: Minister Josephine Teo.

6.43 pm

Mrs Josephine Teo: Mr Speaker, I thank Members for their unanimous support of the Bill.

In fact, Members had expressed concerns about deepfakes even before the debate on this Bill. We have heard questions in this House about how we can better tackle impersonation scams. Earlier this year, Members Dr Tan Wu Meng and Ms Mariam Jaafar shared concerns about the impact of deepfakes on democratic processes and elections.

Taken together with today's debate, there is clear consensus on the pressing need to deal with the threat of digitally-manipulated online content because of what is at stake – the integrity of our elections.

Members have also sought clarifications on several issues. I will try my best to address them. Some Members have asked how the ELIONA Bill compares to other governments' attempts to tackle deepfakes.

Sir, we do take reference from other countries, but it is more important to be fit-for-purpose. We have, therefore, scoped the law to be appropriate for Singapore's context. Earlier, I mentioned how South Korea bans all political campaign videos that use AI-generated content 90 days prior to an election. Brazil has also banned synthetic electoral propaganda.

In response to Ms Joan Pereira’s question, we did consider a temporary ban on all deepfake content of a political nature during elections. After careful deliberations, we decided this was not necessary. There is nothing inherently wrong if AI is used, for example, to enhance the background of political communications materials. The key problem is with digitally generated and manipulated content that misrepresents a candidate’s words or actions. The Bill, therefore, targets such content.

Members have also asked for the rationale behind the proposed duration of the ban. Previously, Ms He Ting Ru asked about the recourse for political candidates affected by deepfakes during cooling-off or Polling Day in elections. The ELIONA Bill provides the recourse. But we know that purveyors of deepfakes will not constrain themselves to just the Cooling Off Day or Polling Day. If we are to effectively uphold the integrity of elections, the protections under the Bill must be available when election activities are the most intense and mischief makers most active. This is usually the election period, which is also defined in section 61S of the Parliamentary Elections Act and section 42R of the Presidential Elections Act. It starts from the issuance of the Writ of Election and ends after the close of Polling.

Mr Yip Hon Weng and Ms He suggested that the proposed duration of the ban should be longer. I thank them for their suggestion and agree with both their concerns. Practically speaking, however, even if we wanted ELIONA to take effect x days before an election, we cannot do so until the Writ is issued and Polling Day is known. This is why we will introduce a Code of Practice to require specified social media services to implement safeguards beyond the election period specified in the Bill. This allows for calibration of the speed of response and the resource requirements outside of election periods.

Let me now deal with the types of content the Bill will and will not cover. Mr Zhulkarnain Abdul Rahim and Mr Vikram Nair have asked why the ban was not scoped wider to cover OEA that misrepresents persons other than candidates. For example, deepfakes that falsely show key influencers or artistes endorsing a candidate. We considered this carefully. The question is how influential must these other persons be for the prohibition to apply? Where do we draw the line and who decides? As the political contest develops, the dynamics may also change. How about persons who were previously not influential but suddenly gained prominence?

Similarly, Ms Pereira asked why the new measures only apply to content that explicitly depicts candidates and not content that indirectly misrepresents them. One such example is an AI-generated podcast that discusses their past.

This problem has existed even without AI or digital manipulation, for example, through coffee shop talk of people who claim to know something about the candidate. However, the difference is that deepfake content can be very realistic and, hence, persuasive. When they directly depict candidates doing or saying something, the audience is more likely to accept it as reality. In contrast, hearsay information or third-party accounts like coffee shop chatter tend to be discounted, or at least viewed with some scepticism.

There are also practical difficulties in extending the coverage of the ELIONA Bill outside of content directly depicting candidates' words and actions. For example, how do we ascertain the degree of misrepresentation and whether it warrants prohibition? The better alternative is to encourage a culture of truthfulness, where persons of influence and candidates themselves step forward to clarify to the public if they have been misrepresented through deepfake content. Voters, too, must be vigilant and turn to trusted sources, such as our mainstream media.

Sir, in our review of how manipulated content has affected elections globally, we have seen examples of content being edited using non-AI means to very realistically misrepresent electoral candidates. Furthermore, traditional media editing software is now beginning to adopt AI technologies, such as Photoshop's introduction of GenAI capabilities to add and remove content, with photorealistic results. This further blurs the line between content that has been manipulated purely via AI technology and content manipulated by other traditional means.

This is why ELIONA does not exempt from prohibitions content that has been partially edited by AI or other more traditional technology. For instance, if one manually wrote a speech, but uses AI-generation to produce a video of a candidate reading it, the video will be considered to be AI-manipulated and will be prohibited. This addresses the point raised by Mr Vikram Nair.

Mr Vikram Nair also asked if the Bill covers content about a candidate that does not directly relate to the constituency which the candidate is contesting in. The answer is yes. A piece of content does not have to refer to the specific constituency which a candidate is contesting in for it to be considered OEA. We will make a holistic assessment of whether content constitutes OEA and whether online content that misrepresents a candidate has the potential to unduly influence the behaviour of voters in the election.

Members, including Mr Louis Ng, Mr Yip and Mr Zhulkarnain, have asked how we will treat OEA designed to entertain, such as satire or memes. As I mentioned in my opening speech, content that does not mislead and deceive people about a candidate’s actual speech or actions will not be banned. Besides the question of whether it is digitally generated or manipulated, the law requires us to consider these questions. If the public saw or heard the content, would they believe it is the candidate being depicted in the content? Would they also believe that the candidate did or said that thing in real life?

Memes and satire are already part of our online space. Most of such content will show caricatures of individuals, and a reasonable person will be able to distinguish fact from fiction. This also applies to other online content, like online political campaign posters.

Mr Zhulkarnain asked what our approach will be if the offending content was labelled, in other words, declared to have been digitally generated or manipulated. Sir, labelling does not automatically exempt content from being prohibited by the ELIONA Bill. A label may not be noticed by everyone. There are also ways to remove labels from content before recirculation. What matters are the four criteria I have shared previously: (a) the content constitutes OEA; (b) it is digitally generated or manipulated; (c) it is realistic; and (d) it shows the candidate doing what he did not do or saying what he did not say. If these criteria are met, the content will be prohibited, even if labelled.

Some Members asked for confirmation if the proposed ban covers private or domestic communications, such as messages on WhatsApp and Telegram. The election rules are not intended to police private or domestic communications. When deciding whether a communication is of a private or domestic nature, the Returning Officer will consider various factors, such as the number of individuals in Singapore who can access the content, whether the group is public or closed, and the relationships between the individuals.

As an example, chat groups on WhatsApp and Telegram with very large memberships that anyone can freely join should not be considered private or domestic communication. If prohibited content is circulated in these open groups, the Returning Officer will assess if action should be taken. If the same prohibited content is posted online, on websites or social media platforms, we can issue corrective directions for it under the ELIONA Bill.

Members, such as Mr Louis Ng and Dr Wan Rizal, asked how we will assess if a piece of content is realistic enough to be believed. Clearly, this is not an exact science. But there are some factors that can be considered and I had outlined them in my opening speech. The aim of the ELIONA Bill is to uphold the integrity of our elections. We have seen how disinformation, even if believed by a small segment of society, can lead to drastic and violent consequences. Consider how allegations of election fraud in the US played a role in the deadly Capitol Hill insurrection on 6 January 2021.

So, I hope Members will agree that we should not accept any segment, no matter how small, voting based on a false representation. We have no way of knowing in advance the extent to which it will alter the course of our elections. But why should we subject our elections to such risk at all, if we can prevent it or, at least, minimise it? This is why the ELIONA Bill has been drafted in this manner, to allow for the prohibition of deepfake content as long as some voters reasonably find them believable.

Ms He opposed the proposal to allow mainstream media platforms to reproduce content prohibited under the ELIONA Bill when reporting on news and current affairs during the election period. I am slightly puzzled because in her speech Ms He also advocated educating the public through short-form videos, which will likely have to reproduce such content in some form to show how realistic it is. Our belief is that news agencies can and should play a part in preserving the integrity of our elections. The prohibition does not apply to news published by authorised news agencies because of their duty to report on news fairly and accurately to inform and educate the public. In fact, this is not new.

Today, media outlets report on online scams to educate the public about their dangers and to let citizens know how to identify and avoid scams. This often includes republishing images of online content to alert citizens to the scam.

Mr Yip and other Members sought assurances that the issuance of corrective directions will be impartial and that measures will apply equally, regardless of the party the depicted candidate belongs to. Sir, the Bill itself is designed for impartiality. We apply the same criteria to determine who are considered candidates. They are all provided with the choice of when to inform the public of their candidacy.

The defined election period is also known to all candidates at the same time. The duration and thresholds for content prohibitions are the same for all candidates. We do not even assess whether the prohibited content is favourable or unfavourable to a candidate. This would be highly subjective and open to dispute. Deceptive content will not be allowed, whichever party the candidate belongs to.

To ensure transparency and accountability, the public will be notified about corrective directions that have been issued against offending content, so that they can vote in an informed manner. To Ms Hany Soh's question on how this will be done, ELD will make an assessment and provide an update in due course.

Members including Mr Zhulkarnain and Ms He have also asked what recourse there is if a piece of content is deemed to have been wrongfully taken down.

Recipients of a corrective direction who feel that their content has been wrongfully taken down can contact the Returning Officer to provide supporting evidence of their claims. If the Returning Officer does not accept their appeal, they may apply to the Courts for judicial review of the Returning Officer's decision.

If content was found to have been mistakenly taken down due to a false declaration by a candidate, there will be serious consequences for the candidate, including the loss of his or her seat if elected. If the content does not otherwise meet the criteria for prohibition, it can be reposted.

To Ms He's query, there are also current penalties in place for persons other than candidates who knowingly provide false information to Government agencies.

During this debate and on previous occasions, Members including Ms Sylvia Lim and Ms He have acknowledged the difficulties in determining the authenticity of online media content. This is why candidates will have to make a declaration in addition to their request to the Returning Officer to assess content under the ELIONA Bill.

Members will agree that we cannot just take a candidate's word at face value. Ms Soh highlighted what has been described as "liar's dividend". This is why, in addition to the candidate's declaration to the Returning Officer, there is an independent technical assessment made by the Returning Officer and his team of public officers and we have instituted severe penalties for a false declaration.

To the question by Mr Louis Ng, candidates will be asked to submit their declarations via an online form during the election period. This form will be available on ELD's candidate services portal. More details on the information requirements will be shared in due course.

Each of the requests and declarations made by candidates to the Returning Officer will be carefully assessed. The Returning Officer will only issue corrective directions for genuine cases that have met the requirements.

Mr Zhulkarnain asked if a candidate should instead affirm a statutory declaration for this purpose. My colleagues and I have studied the options and weighed the trade-offs between the formal process of making a statutory declaration in front of a Commissioner of Oaths and submitting a declaration online. An online declaration is both efficient and effective. The consequence is direct and appropriate. The offence is an election offence and should be punished in accordance with elections legislation, which provides for the loss of seat for egregious offences. The general punishment of making false statutory declarations does not capture the seriousness and context of this offence.

The digital mode of the declaration is also meant to facilitate a speedy and efficient declaration process for candidates during the election period. In the spirit of promoting fair elections, we want to encourage candidates to report content that misrepresents them by removing as many administrative barriers as possible.

Members like Mr Ng have also asked if any candidate can request corrective directions to be issued, not just candidates who are depicted in the impugned content. As I mentioned in my opening speech, we will place significant weight on a depicted candidate's declaration to the Returning Officer as he or she is in the best position to clarify if the content is an accurate representation of himself or herself. Therefore, in most cases, we will rely on candidates making requests and declarations when they are depicted in the impugned content.

Further details of the prescribed modality will be shared in future.

In cases of positive campaigning, where the impugned content actually portrays a candidate favourably, other candidates and even non-candidates can make a request for review. However, we will still ask the depicted candidate for a declaration as the Returning Officer and his team are unlikely to have the full facts.

If the depicted candidate does not make a declaration for whatever reasons, the Government is still empowered to issue directions if we have other objective information that the content is in breach and should be prohibited.

Members including Mr Yip and Ms Soh have asked about the timely issuance of corrective directions and safeguards against foreign-based entities who attempt to influence our elections. We recognise the need to quickly disable such false online content about candidates, but there is also the need to be rigorous and fair. The Returning Officer will have to strike a balance.

Once a corrective direction is issued, the expectation is for individuals, social media services and Internet access service providers to respond within hours. This is to minimise the potential harm that such content could cause during our election period. The proposed ban covers the publication in Singapore of all digitally generated and manipulated OEA depicting candidates, regardless of the nationality of the user who created or published the content. This addresses the question by Ms He.

In addition, we already have rules prohibiting foreigners or foreign entities from knowingly publishing or publicly displaying any election advertising. This is in line with the principle that Singapore's politics are for Singaporeans alone to decide. If we are aware of hostile information campaigns or foreign interference, we will address them under the Foreign Interference (Countermeasures) Act, or FICA.

Ms He asked if the penalties for non-compliance by the social media services are too low to have sufficient impact or deterrence. Dr Wan Rizal asked if the penalties should be scaled according to the platform's reach and impact.

Sir, the financial penalty quantum is comparable with other local legislation that covers social media services, such as POFMA and the Broadcasting Act. Non-compliance with the corrective directions is an offence punishable by a fine of up to $1 million and it will be for the Courts to decide the appropriate level for each offence.

Beyond the exact quantum involved, the imposition of financial penalties on the services for not doing enough to preserve free and fair elections would have reputational implications for the respective platforms, whether from the perspective of Singapore users or globally.

Some Members including Ms Pereira asked about tools that the Government will use to detect deepfakes. The Government will use a mix of commercially available and in-house tools such as AlchemiX, a tool developed by the Home Team Science and Technology Agency which can compare recordings of a suspected deepfake video with a recording of a speaker's actual voice.

Deepfake technology is constantly improving and our capabilities must evolve accordingly. I seek Members' understanding that we will err on the side of caution and not reveal the full extent and capabilities of our detection tools. This is to guard against malicious actors who may seek to exploit this information and use it to game or circumvent our systems.

Some Members like Dr Wan Rizal and Ms He have asked if the Returning Officer or election officials can proactively monitor the Internet to identify prohibited content and how they will be supported in enforcing provisions under the Bill.

During the election, there are processes in place to monitor for and minimise the risk of election interference that can arise from the spread of prohibited OEA. There will be dedicated teams stood up during the election period for this purpose and they will work closely with the social media services to act swiftly on prohibited content.

As candidates will be best placed to determine if there is false OEA being circulated, we will rely primarily on their requests to review problematic content. However, the Returning Officer may still assess and act on problematic content without a candidate's request and declaration if the content is surfaced and deemed likely to threaten electoral integrity.

Mr Speaker, I have discussed how this Bill, along with other legislative levers, deals with various types of harmful deepfakes. The Bill focuses on a specific category of deepfakes during elections, while other legislation such as POFMA, OCHA and the Broadcasting Act may be used to tackle other forms of harmful deepfakes. However, beyond outright harmful consequences, the proliferation of deepfake content is also concerning. When users can no longer differentiate what is real and what is fake, there is a wider threat to trust in online media.

As I said in my opening speech, the IMDA will introduce a Code of Practice to deal with digitally manipulated content at all times, beyond the election periods. This will mean requiring social media companies to play a larger role in the complex issue of tackling deepfakes, given their extensive influence in shaping our online experiences.

MDDI and the IMDA are in the process of engaging the major social media services in Singapore. The companies have been receptive to our proposals and recognise the need to do more against digitally manipulated content. We aim to introduce the code in 2025.

Mr Yip, Ms He and Ms Soh have asked about our public education efforts to alert our citizens to the dangers of AI-generated misinformation. We agree that a digitally-aware public is the strongest defence we have against misleading and deceptive manipulated online content. Public education plays a critical role in empowering Singaporeans to safeguard themselves against risks in the digital space and be resilient to such threats.

To this end, the Government has put in place public education programmes to equip the public to be discerning producers and consumers of information and protect themselves against online falsehoods.

For example, the National Library's S.U.R.E. programme, which stands for source, understand, research and evaluate, has developed resources and organised activities to educate Singaporeans about the dangers of misinformation. In fact, the National Library Board is currently rolling out its community outreach initiative, Be S.U.R.E. Together: Gen AI and Deepfakes Edition, which provides opportunities for the public to learn about the uses and threats of generative AI.

Mr Speaker, the Bill before us seeks to further protect Singapore's future elections from misinformation caused by deceptive deepfakes. We introduced this Bill after careful study of global trends and a realistic assessment of what could happen in Singapore's elections if this threat was left unchecked.

I urge all Members to support this Bill. Together, we can ensure that deepfakes and other digitally generated and manipulated content do not prejudice the fair and free elections that Singaporeans should be able to experience. Mr Speaker, I beg to move.

Mr Speaker: Any clarifications for the Minister? Ms He Ting Ru.

7.13 pm

Ms He Ting Ru: Thank you, Mr Speaker. I just have one clarification on the confusion that the Minister mentioned about my concerns raised about the exemption for authorised news agencies and her linking it to one of the suggestions that I made in relation to public education efforts.

In fact, I think I mentioned in my speech when I talked about the use of pre-bunking in a sort of inoculative approach, I talked about exposing people to weakened forms of misinformation, so, not necessarily real deepfakes. Also, I think when I talked about using short-form content, I was actually referring to generic short-form videos, for example, but not during the election period.

Mrs Josephine Teo: Mr Speaker, I thank Ms He for her clarification. I think it is helpful.

Mr Speaker: Mr Gerald Giam.

Mr Gerald Giam Yean Song: Sir, I think I heard the Minister say that if a candidate writes a speech and uses AI to deliver it, such a practice will be prohibited under the legislation. However, does this AI not still present the candidate communicating the message as they intended? Will it be prohibited if it does not misrepresent a candidate's words or actions?

It could be argued that a television image is also a virtual image and is not a real person, yet no one is suggesting that that should be prohibited. To cite a real example, would the AI-generated video that the Minister played at the start of her speech be prohibited during elections?

Mrs Josephine Teo: Mr Speaker, the short answer is yes. The video that I played earlier was completely generated by AI. Notwithstanding the fact that the script was approved by me and could have been penned by me, the image itself is problematic. I did not actually stand in front of a camera and articulate those words. So, the way the Bill is designed is to not offer any room for misunderstanding. If you did not, in fact, read out a speech, even if you had written it, and you used AI to generate that speech, that is prohibited.

7.16 pm

Mr Speaker: Any other clarifications? I do not see any.

Question put, and agreed to.

Bill accordingly read a Second time and committed to a Committee of the whole House.

The House immediately resolved itself into a Committee on the Bill. – [Mrs Josephine Teo].

Bill considered in Committee; reported without amendment; read a Third time and passed.