Globally, more than four billion people use social media, generating huge stores of data from their devices. That information can be used to track more than what people buy, their political leanings or their patterns of social media use during the pandemic. It can also be channeled to help detect mental illness and improve well-being.
A growing number of studies show that language patterns and images in social media posts can reveal and predict mental health conditions in individuals, as well as track mental health trends across entire populations.
Thanks to advances in artificial intelligence, natural language processing and other data science tools, researchers, tech companies, government agencies and nongovernmental organizations can mine these gargantuan databases for signs of mental health conditions such as depression, anxiety and suicide risk.
In certain countries, Facebook’s online suicide prevention program uses AI to scan users’ posts for images and words that may signal a tendency toward self-harm. Posts that display patterns of suicidal thinking are flagged for a team of trained human reviewers, who send mental health resources to at-risk users. In serious cases, emergency services may be notified of an imminent risk of self-harm.
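In broad strokes, such a system pairs an automated scorer with human judgment: software assigns each post a risk score, and anything above a threshold is queued for people to review. The minimal Python sketch below illustrates the idea only; the risk terms, weights and threshold are invented placeholders, and Facebook’s actual model has not been published.

```python
# Illustrative triage pipeline: a crude lexicon score stands in for a
# trained classifier, and high-scoring posts are escalated to humans.
# Terms, weights and threshold are hypothetical, not Facebook's model.

RISK_TERMS = {"hopeless": 0.4, "goodbye forever": 0.8, "end it all": 0.9}
REVIEW_THRESHOLD = 0.7

def risk_score(post_text: str) -> float:
    """Sum the weights of matched risk phrases, capped at 1.0."""
    text = post_text.lower()
    return min(1.0, sum(w for term, w in RISK_TERMS.items() if term in text))

def triage(posts: list[str]) -> list[str]:
    """Return only the posts that should go to human reviewers."""
    return [p for p in posts if risk_score(p) >= REVIEW_THRESHOLD]

if __name__ == "__main__":
    flagged = triage(["Feeling hopeless, this is goodbye forever.",
                      "Great hike today!"])
    for post in flagged:
        print("Escalate to human review:", post)
```

The essential design point, as described above, is that the algorithm never acts alone: it narrows a vast stream of posts down to a reviewable queue, and trained people decide what help to send.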
Pinterest’s “compassionate search” connects users seeking information on anxiety and other mental health topics with links that promote emotional wellness, from deep-breathing activities to more elaborate psychotherapy exercises. And Snapchat has developed Here For You, an in-app support feature for users who may be experiencing mental health challenges. One of its main functions monitors search terms related to mental health conditions and then provides users with links to helpful resources, along with a direct link to a help hotline.
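At its simplest, this kind of feature is a lookup from monitored search terms to curated resources. Here is a toy illustration; the terms and resource names are hypothetical and do not reflect Snapchat’s implementation.

```python
# Hypothetical mapping from monitored search terms to support resources.
SUPPORT_RESOURCES = {
    "anxiety": ["Deep-breathing exercise", "Grounding techniques guide"],
    "depression": ["Mood journaling exercise",
                   "Help hotline: 1-800-273-8255"],
}

def resources_for_search(query: str) -> list[str]:
    """Return resources for every monitored term found in the query."""
    hits = []
    for term, links in SUPPORT_RESOURCES.items():
        if term in query.lower():
            hits.extend(links)
    return hits

print(resources_for_search("how to deal with anxiety at night"))
```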
In addition to tech companies, several countries have started to address mental health through social media channels. In 2018 Canada launched a pilot program to analyze randomly sampled social media data to identify geographical suicide hotspots, enabling resources to be allocated where they were most needed.
A number of research organizations are also using social media data to develop real-time observation tools for policy makers. These tools analyze well-being indicators such as happiness, other sentiments and signs of mental health issues. Researchers have recently demonstrated that depression indexes derived from social media correlate with geographical and demographic patterns reported by the U.S. Centers for Disease Control and Prevention. During the COVID-19 pandemic, when traditional survey methods could not deliver results fast enough, researchers used positive and negative mood indicators to rapidly assess the mental state of whole populations.
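A population mood index of this kind can be sketched in a few lines: score each post with a sentiment measure, then average the scores by region and day. The lexicon and sample posts below are invented for illustration; real observation tools rely on far richer NLP models.

```python
# Toy population mood index: lexicon-based sentiment scores aggregated
# by (region, date). Lexicon and sample data are invented placeholders.
from collections import defaultdict
from statistics import mean

POSITIVE = {"happy", "grateful", "hopeful", "great"}
NEGATIVE = {"sad", "anxious", "lonely", "hopeless"}

def post_sentiment(text: str) -> int:
    """+1 for each positive word, -1 for each negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def mood_index(posts):
    """Average sentiment per (region, date) bucket."""
    buckets = defaultdict(list)
    for region, date, text in posts:
        buckets[(region, date)].append(post_sentiment(text))
    return {key: mean(scores) for key, scores in buckets.items()}

sample = [("Ontario", "2020-04-01", "feeling anxious and lonely"),
          ("Ontario", "2020-04-01", "grateful for my family"),
          ("Quebec", "2020-04-01", "great day, feeling hopeful")]
print(mood_index(sample))
```

Because such an index can be recomputed daily, it offers the rapid readout that traditional surveys could not provide during the pandemic.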
Using AI algorithms to analyze moods and mental health conditions through social media posts is a field in its infancy. Still, it is not too early to address technical, ethical, cultural and social questions. There may be a temptation to detect mental health conditions based solely on social media signals—for example, a hashtag such as #depression—rather than on clinical verification of a person’s condition. Moreover, researchers may have difficulty extracting data from most social media platforms, which limits the scope of what they can find. And tech companies may develop detection and prediction algorithms for mental health indicators without publishing their work in academic journals or having it reviewed by panels of independent experts.
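To see why hashtag-based labeling is so fragile, consider the naive shortcut it implies (the posts here are invented examples):

```python
# Naive labeling: anyone using #depression is tagged "depressed."
# This conflates mentioning a condition with having a clinical diagnosis.
posts = ["Raising awareness this week #depression",
         "Can't get out of bed again #depression"]
labels = ["depressed" if "#depression" in p else "not depressed"
          for p in posts]
print(labels)  # both posts flagged, though only one may reflect the author
```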
Before data mining of social media proceeds further, a number of questions need to be addressed. How can patterns of word usage on social media be connected to clinically rigorous definitions for mental health? Do algorithms for self-harm detection need to be validated by the research community? How can users’ privacy and their mental health data be secured?
The numerous questions and challenges do not negate an unprecedented opportunity to develop the frameworks and tools necessary to harness technology for mental health. Self-harm detection built into social media could save lives, because people at risk often do not approach family members or professionals. Mental health indexes derived from social media could also help shape public health policies, because they allow the impact of new initiatives to be assessed in real time and at-risk subpopulations to be identified.
Bringing these ideas into widespread use will require public-private partnerships that help researchers access data, make AI algorithms more transparent, promote collaborative innovation and ultimately deliver better technological solutions for managing public and individual mental health.
IF YOU NEED HELP If you or someone you know is struggling or having thoughts of suicide, help is available. Call the National Suicide Prevention Lifeline at 1-800-273-8255 (TALK), use the online Lifeline Chat or contact the Crisis Text Line by texting TALK to 741741.
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.