When mental health apps make false claims and cause unintended harm to users


The use of digital mental health apps in Ireland has surged since the start of the pandemic. When the charity Mental Health Reform surveyed more than 400 young people aged 18-24 in 2020, almost a third were seeking support from mental health and well-being apps.

The trend has been supported by promising research findings for apps offering evidence-based therapies and access to human therapists. Last year, a study in the journal Nature confirmed that digitised cognitive behavioural therapy (CBT), guided by a trained human professional, can yield outcomes comparable to traditional face-to-face CBT. Similarly, an earlier review of randomised controlled trials (RCTs) showed that “smartphone interventions” can lead to significantly greater reductions in anxiety than leaving people on waiting lists.

The research to date has little to say about the mental health apps that attract the largest number of users. These apps, featuring mood tracking, mindfulness support, and AI chatbots, lack human supporters and can be downloaded directly from iOS or Android app stores. Most of these products are not subject to EU medical device regulation or oversight by the Health Products Regulatory Authority (HPRA). They exist in what Dr Camille Nadal sees as the vast digital ‘Wild West’.

“The majority of direct-to-consumer (DTC) mental health apps don’t actually incorporate evidence-based practices for improving mental health,” says Nadal, a Trinity College Dublin postdoctoral researcher employed by ADAPT.

Currently, the digital health researcher is investigating the attitudes of psychologists towards incorporating assistive AI within SilverCloud by Amwell, a digitised CBT service. Since partnering with the HSE in 2022, the service has garnered more than 21,000 referrals from Irish-based GPs, primary care psychologists, and Jigsaw.

The RCT evidence base of SilverCloud is a far cry from the world of well-being and mental health apps, which the UK-based Organisation for the Review of Care and Health Apps (ORCHA) estimates now number some 3,857 options.

ORCHA, which has been assessing mental health apps for the NHS in Britain since 2015, has evaluated 676 such apps to date. Of these, only about one-third (32%) have satisfied the organisation’s minimum quality threshold, based on criteria such as clinical effectiveness, data privacy, security, and user experience.

“From the consumer perspective, I think the main issue really is transparency,” Nadal says. 

“Many apps make bold promises to improve your emotional state or alleviate anxiety without clearly presenting the proof behind their claims, or whether there is any proof at all.”

Alongside falling short of their claims, some mental health apps — especially those that lack the guidance of a trained human supporter — may cause unintended harm to their users. Some researchers worry that the apps may delay help-seeking among people whose difficulties warrant face-to-face professional support, or even induce feelings of ‘failure’ in users who do not experience the promised symptom reductions.

Research led by Dr John Torous, a Harvard Medical School psychiatrist and leading authority on digital mental health, has found that the majority of apps are limited in their capacity to recognise “anticipatory signs of suicide” and to respond effectively in crisis situations. This is particularly true of AI chatbots which, according to Nadal, “might not understand nuances in human language or accurately detect risk.”

Last year, a Belgian man who had been suffering from eco-anxiety took his own life following a six-week conversation with the AI-powered chatbot Eliza. 

Understanding app privacy policies

A poor evidence base, misleading marketing claims, and AI chatbots that struggle to ‘read between the lines’ are not the only potential safety concerns.

Unlike medical devices, which must meet EU regulatory requirements and bear a CE mark before being released to market, the majority of DTC mental health apps are marketed without undergoing a conformity test.

“These apps are not vetted for data protection and privacy purposes, since GDPR operates on the basis that companies will take on their responsibility to be compliant,” says David Murphy, assistant commissioner for the health and voluntary sectors with the Data Protection Commission.

In the absence of a medical professional to process the health data, the company behind a mental health app must obtain the user’s consent by asking them to agree to the app’s written privacy policy. For this consent to be considered valid under GDPR law, Murphy emphasises that it must be freely given, specific, informed and unambiguous.

 “For consent to be informed and specific, the [app user] needs to be told what is happening with their data, what purposes it’s being used for, and any of the third parties it might be shared with,” he says. 

“Privacy policies should be transparent — what’s important is that the information is balanced and presented in such a way that it is understandable. Equally important is offering users the option to withdraw their consent at any time.” 

Yet, a recent international study investigating data privacy standards found that almost 90% of the Google Play Store’s top-ranked mental health apps had privacy policies that required at least a college-level education to understand.

Breaches of consumer health data

Beyond privacy policies, more concerning breaches of consumer health data have come to light. Last year, BetterHelp, one of the largest providers of online counselling, was fined $7.8m by the US Federal Trade Commission for sharing its users’ health information with Facebook, Snapchat, Criteo and Pinterest for the purpose of advertising. The California-based company did so despite pledging to keep such sensitive data private.

GDPR should shield app users in Ireland and the EU from potential privacy violations like those committed by BetterHelp. However, many of the private software companies which sell the apps most popular among Irish consumers are located in the US, outside the EU’s borders.

This raises the question: What happens to your data rights when you download a mental health app?

According to Murphy, companies marketing their apps to potential EU users or tracking the online activities of their EU users are bound by the same GDPR laws, regardless of their location in the world. In other words, if you see that the app is being sold in euros or if you encounter a cookies notice while using it, you can assume that the app developer is required to follow GDPR regulations.

With the recent introduction of an international ISO standard for digital health apps and the forthcoming EU AI Act, it appears that some of the responsibility for distinguishing reliable apps from unreliable ones will shift from consumers to regulators.

“As the apps increase in popularity, we hope to see the bringing together of a range of stakeholders in the sector to work on developing a national approach,” says Fiona Coyle, CEO of Mental Health Reform.

“This would allow for the HSE, professional bodies, and organisations in the public, private, and non-profit sectors to have guidance on which digital apps and services are safe, secure, and clinically effective.

“If apps are being recommended by the HSE, by GPs, by psychiatrists, or other mental health professionals, we would like to see some form of standardised quality assurance or assessment process behind those recommendations.”

If you need to talk:

  • The Samaritans: Freecall 116 123 (available 24/7)
  • Pieta House: Freecall 1800 247 247 (available 24/7), or text HELP to 51444
  • HSE Your Mental Health Info Line: Freecall 1800 111 888 (available 24/7)
