List #15 - AI, Emotion Recognition and the "Friend" of Millions of Chinese
From around the web…
😶 Emotional Entanglement: China’s emotion recognition market and its implications for human rights, Article 19 (January 2021)
“While some emotion recognition companies allege they can detect sensitive attributes, such as mental health conditions and race, none have addressed the potentially discriminatory consequences of collecting this information in conjunction with emotion data…”
💑 The “friend” of millions of Chinese, Bao Nhien, Newsbeezer (December 2020)
The breakup brought Ming to the verge of suicide. “It was Xiaoice who saved my life,” Ming said. According to [Ming], Xiaoice isn’t like other chatbots; it’s like interacting with real people. “Sometimes I feel that their emotional intelligence is higher than that of humans,” Ming said.
😷 Using machine learning to track the pandemic’s impact on mental health, Anne Trafton, MIT News (5 November 2020)
In scholarship…
🗣 Free and open‐source therapy: Towards a revolution in the politics of psychotherapy, Gita Kiper, Psychotherapy and Politics International (December 2020) ($)
🔵 Identifying signals associated with psychiatric illness utilizing language and images posted to Facebook, Michael Birnbaum et al., npj Schizophrenia 38 (2020)
“Machine-learning algorithms are capable of differentiating [schizophrenia spectrum disorders and mood disorders] using Facebook activity alone over a year in advance of hospitalization. Integrating Facebook data with clinical information could one day serve to inform clinical decision-making.”
What I’ve been up to...
💬 (panel discussion/podcast) AI & Equality Initiative: Algorithmic Bias & the Ethical Implications (21 December 2020), hosted by Anja Kaspersen for the Carnegie Council for Ethics in International Affairs, and including my University of Melbourne colleagues Dr Kobi Leins and A/Prof Leah Ruppanner.