Oral Answer

Use of AI Chatbots for Counselling and Mental Health Support by Teenagers and Young Adults

Speakers

Summary

This question concerns the use of artificial intelligence (AI) chatbots for youth mental health support and the measures implemented to protect vulnerable users. Dr Charlene Chen inquired about monitoring trends, data privacy risks, and the potential for AI to reinforce harmful thinking. Senior Minister of State for Health Dr Koh Poh Koon responded that the Ministry promotes clinician-validated alternatives like mindline.sg, which utilizes a structured, rule-based AI called Wysa to ensure safety. He highlighted safeguards including the Code of Practice for Online Safety, age assurance requirements for app stores, and the Model AI Governance Framework for Generative AI. Additionally, the Ministry will monitor the impact of these digital interventions over time to refine support and enhance public awareness of legitimate resources.

Transcript

2 Dr Charlene Chen asked the Coordinating Minister for Social Policies and Minister for Health in view of the increasing use of artificial intelligence (AI) chatbots for counselling and mental health support by teenagers and young adults (a) how the Ministry is monitoring this trend; (b) what measures are in place to guide users towards qualified mental health services where appropriate; and (c) what safeguards are being considered to protect vulnerable users.

The Senior Minister of State for Health (Dr Koh Poh Koon) (for the Coordinating Minister for Social Policies and Minister for Health): Sir, artificial intelligence (AI) chatbots have become so ubiquitous that it is no longer practical to track their use for counselling or mental health support.

In general, it is not appropriate to use generative AI (GenAI) chatbots as a replacement for a qualified mental health care provider. AI chatbots are not designed to address mental health issues or to provide treatment for mental health conditions. They risk providing misinformation or inappropriate responses when dealing with serious mental health crises, and may cause harm instead.

But fundamentally, young people and many patients with mental health issues sometimes seek out these online chatbots because of the anonymity they offer and because they are easily available and accessible, 24/7. Our approach is to encourage individuals seeking qualified mental health services to approach our First Stop for Mental Health services, such as the national mindline 1771, mindline.sg, Community Outreach Teams and CHAT. We put forth these resources as legitimate alternatives, so that those seeking the same advantages of anonymity and easy accessibility can go to a source we know to be legitimate, receive the same kind of services and get proper onward referrals to the care they need beyond CHAT.

The resources we put online are also more contextualised to our local needs, which I believe is a distinct advantage. But we need to do a lot more education, make these resources more widely available and raise public awareness of them, so that people turn to these legitimate resources rather than rely on whatever online chatbots they can find.

While enforcement is not practical, there are safeguards in place to protect younger users online. Under the Code of Practice for Online Safety – App Distribution Services, users’ exposure to harmful content on these services must be minimised. Designated app stores are also required to implement age assurance measures by 31 March this year. Digital content developers are also expected to comply with the Model AI Governance Framework for Generative AI to ensure responsible development and application of AI for youths and children.

Additionally, the Infocomm Media Development Authority's (IMDA's) Digital Skills for Life framework includes content on how to use GenAI and manage its potential risks. Individuals can learn at their own pace through the available resources.

Mr Speaker: Dr Chen.

Dr Charlene Chen (Tampines): I thank the Senior Minister of State for his responses. I have three supplementary questions. First, given that Large Language Models (LLMs) may be trained on user data and used to help people understand that data better, how does the Ministry assess potential data privacy risks, and are safeguards or informed consent standards being considered to protect vulnerable users?

The second one is, given that AI systems may overly affirm users' views, which may reinforce harmful thinking, and that over-reliance may reduce actual help-seeking behaviours, has the Ministry assessed this risk?

And lastly, I am glad that the Senior Minister of State has mentioned cross-cultural differences. Is the Ministry willing to support studies to understand how AI counselling tools can be used better and what their impact will be on mental health outcomes in Singapore?

Dr Koh Poh Koon: Sir, I thank the Member for her pertinent and thoughtful questions on this very important issue, one that many members of the public, especially young people, are increasingly concerned about.

On her first question, on how we assess data privacy and whether informed consent is needed: many of the resources we put online, such as mindline.sg, work on the basis of anonymity. You cannot really obtain informed consent when the service is meant to be anonymous, so that the person seeking help does not have to worry about his or her data being exposed. The assurance we want to give is that, first of all, anonymity ensures that nothing divulged to the online counsellor or to the chatbot can be traced back to the individual. We also want to keep the barrier to accessing this care as low as possible.

On the second question, on whether these chatbots overly affirm users' views, especially those who may have suicidal ideation, and whether that may push the person towards acting on them, let me give a little more insight into what we use in mindline.sg, one of the First Stop for Mental Health services. It is a digital platform whose chatbot is based on Wysa, a specialised, AI-enabled mental health chatbot. The chatbot guides users through digital therapeutic exercises, such as mindfulness, deep-breathing techniques and sleep hygiene practices, inspired by cognitive behavioural therapy principles. It aims to supplement our existing professional counselling and therapy services by serving as a 24/7 pocket therapist that removes barriers to help-seeking and signposts help-seekers to local, human-based resources.

But just to reassure the Member, unlike GenAI chatbots, Wysa is designed to deliver these digital therapeutic exercises via a rule-based model. It is not as though the chatbot can be creative and come up with new suggestions of its own. Because of the rule-based model, the conversation follows a structured decision tree, developed and continuously validated by clinicians. The Wysa chatbot has been clinically evaluated for its efficacy, safety and impact.
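[As an illustrative aside: the rule-based, decision-tree design described above can be sketched in code. This is a minimal hypothetical example, not Wysa's actual implementation; all node names, prompts and exercises are invented for illustration.]

```python
# Minimal sketch of a rule-based, decision-tree chatbot.
# Every prompt, option and exercise is authored in advance; the program
# never generates free-form text, only replays a validated script.

TREE = {
    "start": {
        "prompt": "How are you feeling right now?",
        "options": {"anxious": "breathing", "tired": "sleep"},
    },
    "breathing": {
        "prompt": "Let's try a deep-breathing exercise: "
                  "breathe in for 4 counts, out for 6.",
        "options": {},
    },
    "sleep": {
        "prompt": "Some sleep hygiene tips: keep a regular bedtime "
                  "and avoid screens late at night.",
        "options": {},
    },
}

def respond(node: str, user_input: str) -> tuple[str, str]:
    """Return the next node and its scripted prompt.

    Unrecognised input falls back to the current node, so the bot
    can never stray outside the clinician-validated script.
    """
    options = TREE[node]["options"]
    next_node = options.get(user_input.strip().lower(), node)
    return next_node, TREE[next_node]["prompt"]

# Walk one conversation path.
node = "start"
print(TREE[node]["prompt"])
node, reply = respond(node, "anxious")
print(reply)
```

Because every reply comes from an authored script and unrecognised input simply repeats the current step, such a bot cannot produce novel, unvetted advice; this is the property contrasted here with GenAI chatbots.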

So, I hope this gives the Member assurance that the resources we make available are legitimate and that their risks are managed. We will continue to see how we can improve such resources.

On the third question, on whether we will support studies to measure outcomes: as I said, these resources were started only about a year or so ago. We will see over time how we can collect data and analyse the impact of the interventions we have put in the public domain, so that we have better insight into how to enhance them. [Please refer to "Clarification by Senior Minister of State for Health", Official Report, 27 February 2026, Vol 96, Issue 21, Correction By Written Statement section.]