New report on technological experiments in mental health
This post is a slight break from my occasional round-up of links on data-driven and algorithmic technologies in the mental health context from around the web. Instead, I wanted to share news of a report I co-authored with an amazing group of thinkers: Jonah Bossewitch, Lydia X. Z. Brown, Leah Harris, James Horton, Simon Katterl, Keris Myrick, Kelechi Ubozoh and Alberto Vasquez. This work has been over two years in the making. We launched the report last week, and you can find a copy here. We'll aim to hold several webinars to build on the ideas in the report, which I'll tweet about as they arise.
The report surveys the rise of data-driven and algorithmic technologies in the mental health context, and considers the legal, political, economic and ethical contours of recent developments. (We take the US, EU and Australian regulatory contexts as our frame, so the report skews towards English-language materials from these regions.)
The report is meant for diverse audiences, including advocates, service users and those who have experienced mental health interventions, mental health practitioners, researchers, disabled people’s organisations, technologists, service providers, policymakers, regulators, private sector actors, academics and journalists.
As we wrote:
Urgent public attention is needed to make sense of the expanding use of algorithmic and data-driven technologies in the mental health context. On the one hand, well-designed digital technologies that offer high degrees of public involvement can be used to promote good mental health and crisis support in communities. They can be employed safely, reliably and in a trustworthy way, including to help build relationships, allocate resources, and promote human flourishing.
On the other hand, there is clear potential for harm. The list of ‘data harms’ in the mental health context – cases in which people are left worse off than they would have been had the data activity not occurred – is growing longer. Examples in this report include the hacking of psychotherapeutic records and the extortion of victims, algorithmic hiring programs that discriminate against people with histories of mental healthcare, and criminal justice and border agencies weaponising data concerning mental health against individuals. Issues arise not only where technologies are misused or faulty, but also where technologies like biometric monitoring or surveillance work as intended, and where the very process of ‘datafying’ and digitising individuals’ behaviour – observing, recording and logging them to an excessive degree – carries the potential for inherent harm.
Public debate is needed to scrutinise these developments.
We also pointed out the need for dialogue between those engaged in debates about data and technology on the one hand, and disability and mental health on the other:
Meredith Whittaker and colleagues at the AI Now Institute observe that disability and mental health have been largely omitted from discussions about AI bias and algorithmic accountability. This report brings them to the fore. It is written to promote basic standards of algorithmic and technological transparency and auditing, but also takes the opportunity to ask more fundamental questions, such as whether algorithmic and digital systems should be used at all in some circumstances—and if so, who gets to govern them. These issues are particularly important given the COVID-19 pandemic, which has accelerated the digitisation of physical and mental health services worldwide, and driven more of our lives online.
As we concluded:
There is cause for both optimism and pessimism in the application of algorithmic and data-driven technologies to assist people in extreme distress and crises, and to boost individual and collective opportunities for crisis support and flourishing. Vigilance is required to promote benefit and prevent harm, which won’t be possible without acknowledging the vast social inequalities and financial incentives that are shaping technological development in this area. As populations reckon with new digital responses to age-old experiences of distress, anguish and disability, optimism comes from these technologies and their benefits being publicly controlled, genuinely shared and firmly shaped by those most affected.
Please feel free to get in touch with any feedback or thoughts about the report, and please share it with colleagues who may be interested.
This report was supported through funding from the Mozilla Foundation and the Australian Research Council (No. DE200100483).
Claudia Lang, ‘Craving to Be Heard but Not Seen – Chatbots, Care and the Encoded Global Psyche’, Somatosphere (13 April 2021) <http://somatosphere.net/2021/chatbots.html/>. Lang describes the potential for tech to ‘weave together code and poetry, emotions and programming, despair and reconciliation, isolation and relatedness in human-techno worlds’.
 Joanna Redden, Jessica Brand and Vanesa Terzieva, ‘Data Harm Record – Data Justice Lab’, Data Justice Lab (August 2020) <https://datajusticelab.org/data-harm-record/>.
 Meredith Whittaker et al, Disability, Bias, and AI (AI Now, November 2019) 8.
 Frank Pasquale, ‘The Second Wave of Algorithmic Accountability’, Law and Political Economy (25 November 2019) <https://lpeblog.org/2019/11/25/the-second-wave-of-algorithmic-accountability/>.
 John Torous et al, ‘Digital Mental Health and COVID-19: Using Technology Today to Accelerate the Curve on Access and Quality Tomorrow’ (2020) 7(3) JMIR Mental Health e18848.