Written Answer

Accuracy of Deepfake Detection Technologies and Differentiating Between Harmful Deepfakes and Legitimate Political Satire or Memes

Summary

This question concerns Ms He Ting Ru’s inquiry on the accuracy of deepfake detection technologies and how the government distinguishes harmful content from legitimate satire or memes. Minister for Digital Development and Information Josephine Teo stated that accuracy rates are not disclosed to prevent exploitation, though various internal and commercial tools are used to assess content. She clarified that action under the Protection from Online Falsehoods and Manipulation Act occurs only if falsehoods harm the public interest, typically excluding parody. If content is wrongly identified, individuals can file court appeals, and the government is currently reviewing additional safeguards to protect electoral integrity.

Transcript

29 Ms He Ting Ru asked the Minister for Digital Development and Information (a) what is the current accuracy rate of the Government’s deepfake detection technologies for AI-generated content; (b) how will the Government differentiate between harmful deepfakes and legitimate political satire or memes using similar technologies; and (c) what happens if videos are wrongly identified as deepfakes.

Mrs Josephine Teo: There are a variety of tools and techniques available to the Government to detect, identify and assess manipulated content, including artificial intelligence (AI)-generated content such as deepfakes. These may be sourced commercially, developed in-house or in partnership with researchers such as those at the Centre for Advanced Technologies in Online Safety. We do not publish their accuracy levels as our tools are constantly being updated to keep up with technology. It is also not in the public interest to reveal the full extent of our capabilities, as malicious actors may exploit them.

The Government can take action against online falsehoods when certain thresholds are met, including falsehoods generated with the help of AI. Action may be taken under the Protection from Online Falsehoods and Manipulation Act (POFMA) if such content is false and against the public interest. Satire or parody does not by itself meet the criteria for POFMA action, unless it contains falsehoods that harm the public interest. Individuals who disagree with POFMA directions issued to them, including those for deepfake content, can file an appeal in court.

Many countries have recognised the need to mitigate the harms and risks from AI use and application, including the malicious use of deepfakes. Some countries have already put in place safeguards, especially during elections, in order to protect the integrity of the electoral process. We are studying if further safeguards are required and will provide an update when ready.