Back in November 2017, Facebook announced that it was using artificial intelligence to help
flag suicidal users. The tech was used to identify posts, videos and live streams that could be
indicative of suicidal thoughts. The algorithm also prioritises the order in which the
Facebook team reviews these posts. According to Facebook, these accelerated reports get
reported to local authorities twice as fast as unaccelerated ones, which could save many lives.
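Facebook has not published how its prioritisation works, but the idea of reviewing the riskiest reports first can be sketched with a simple priority queue. The names below (`ReviewQueue`, `risk_score`) are illustrative assumptions, not Facebook's actual API; the sketch assumes each flagged post carries a model-assigned risk score between 0 and 1.

```python
import heapq

class ReviewQueue:
    """Orders flagged posts so the highest-risk ones are reviewed first."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker: equal scores keep flagging order

    def flag(self, post_id, risk_score):
        # heapq is a min-heap, so negate the score to pop the highest risk first
        heapq.heappush(self._heap, (-risk_score, self._counter, post_id))
        self._counter += 1

    def next_for_review(self):
        if not self._heap:
            return None
        _, _, post_id = heapq.heappop(self._heap)
        return post_id

queue = ReviewQueue()
queue.flag("post-a", 0.35)
queue.flag("post-b", 0.92)
print(queue.next_for_review())  # "post-b": highest risk comes out first
```

The heap keeps insertion and removal at O(log n) per report, so the queue stays responsive even as the volume of flagged posts grows.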
Recently, the Canadian government has also announced its use of AI to help prevent
suicides in the nation. It partnered with Advanced Symbolics, an AI firm, which will research
and predict suicide rates by analysing Canadian social media posts. The firm aims to predict
which areas of Canada could see an increase in suicidal behaviour, after which the
government can plan and provide mental health resources in the right places at the right time.
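Advanced Symbolics' actual methodology has not been made public, so the following is purely an illustrative sketch of the general idea: count suicide-related posts per region over time and flag regions where the most recent week rises well above that region's historical average. The function name, data shape, and threshold are all assumptions made for this toy example.

```python
from statistics import mean

def flag_regions(weekly_counts, threshold=1.5):
    """weekly_counts: {region: [count_week1, count_week2, ...]}.
    Returns regions whose latest weekly count exceeds
    threshold x that region's prior average."""
    flagged = []
    for region, counts in weekly_counts.items():
        if len(counts) < 2:
            continue  # need at least one baseline week before the latest
        baseline = mean(counts[:-1])
        if baseline > 0 and counts[-1] > threshold * baseline:
            flagged.append(region)
    return flagged

counts = {
    "Ontario": [40, 42, 38, 70],  # latest week well above average -> flagged
    "Alberta": [20, 21, 19, 22],  # stable -> not flagged
}
print(flag_regions(counts))  # ['Ontario']
```

A real system would of course use far richer signals than raw post counts, but even this crude baseline shows how aggregate, anonymised data could point resources toward regions before a spike peaks.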
“To help prevent suicide, develop effective prevention programs and recognize ways to
intervene earlier, we must first understand the various patterns and characteristics of
suicide-related behaviours," a Public Health Agency of Canada spokesperson told CBC in a statement. "PHAC is exploring ways to pilot a new approach to assist in identifying patterns,
based on online data, associated with users who discuss suicide-related behaviours.”
AI has tremendous potential in the field of suicide prevention, as many people who need
help go unnoticed by others. However, there are potential issues that must be addressed:
1. Safety concerns, as AI programs must learn to respond appropriately so as not to worsen
suicidal users' emotional states.
2. Privacy concerns regarding users’ health information.
3. The accuracy of AI systems in determining suicidal intent, to list a few.
Nonetheless, small steps, like those being taken by Facebook and the Canadian government,
will provide valuable insights to improve and advance AI technology.