
Inside Facebook's suicide algorithm: Here's how the company uses artificial intelligence to predict your mental state from your posts

Facebook automatically scores all posts in the US and select other countries on a scale from 0 to 1 for risk of imminent harm. Hollis Johnson/Business Insider

  • Facebook is scanning nearly every post on the platform in an attempt to assess suicide risk.
  • Facebook passes the information along to law enforcement for wellness checks.
  • Privacy experts say Facebook's failure to get affirmative consent from users for the program presents privacy risks that could lead to exposure or worse.

In March 2017, Facebook launched an ambitious project to prevent suicide with artificial intelligence.

Following a string of suicides that were live-streamed on the platform, the project sought to proactively address a serious problem by using an algorithm to detect signs of potential self-harm.

But more than a year later, after a wave of privacy scandals called Facebook's data practices into question, the idea of the company creating and storing actionable mental health data without user consent has numerous privacy experts worried about whether Facebook can be trusted to make and keep inferences about the most intimate details of our minds.

Facebook is creating new health information about users, but it isn't held to the same privacy standard as healthcare providers


The algorithm touches nearly every post on Facebook, rating each piece of content on a scale from zero to one, with one representing the highest likelihood of "imminent harm," according to a Facebook representative.
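As a concrete illustration of the flow the company describes, here is a minimal sketch assuming a hypothetical classifier and cutoff: every post receives a score between zero and one, and only posts above the threshold are queued for human review. The model, threshold value, and function names are assumptions for illustration, not Facebook's actual system.

```python
# Minimal sketch of a score-and-escalate flow, assuming a hypothetical
# classifier and threshold. Nothing here reflects Facebook's real code;
# it only illustrates the 0-to-1 scoring and review cutoff described above.

from dataclasses import dataclass

REVIEW_THRESHOLD = 0.8  # assumed cutoff; the real value has not been published


@dataclass
class Post:
    post_id: str
    text: str


def score_post(post: Post) -> float:
    """Stand-in for the classifier: returns a risk score in [0, 1]."""
    risk_phrases = ("want to die", "end it all", "can't go on")
    hits = sum(phrase in post.text.lower() for phrase in risk_phrases)
    return min(1.0, hits / len(risk_phrases))


def posts_needing_review(posts: list[Post]) -> list[tuple[Post, float]]:
    """Return only the posts whose score crosses the review threshold."""
    flagged = []
    for post in posts:
        score = score_post(post)
        if score >= REVIEW_THRESHOLD:
            flagged.append((post, score))
    return flagged
```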


That data creation process alone raises concern for Natasha Duarte, a policy analyst at the Center for Democracy and Technology.

"I think this should be considered sensitive health information," she said. "Anyone who is collecting this type of information or who is making these types of inferences about people should be considering it as sensitive health information and treating it really sensitively as such."

Data protection laws that govern health information in the US currently don't apply to the data that is created by Facebook's suicide prevention algorithm, according to Duarte. In the US, information about a person's health is protected by the Health Insurance Portability and Accountability Act (HIPAA) which mandates specific privacy protections, including encryption and sharing restrictions, when handling health records. But these rules only apply to organizations providing healthcare services such as hospitals and insurance companies.

Companies such as Facebook that make inferences about a person's health from non-medical data are not subject to the same privacy requirements. Facebook acknowledges as much and does not classify the information it creates as sensitive health information.


Facebook hasn't been transparent about the privacy protocols surrounding the suicide-related data it creates. A Facebook representative told Business Insider that suicide risk scores too low to merit review or escalation are stored for 30 days before being deleted, but Facebook did not respond when asked how long, and in what form, data about higher suicide risk scores and subsequent interventions is stored.

Facebook would not elaborate on why data was being kept if no escalation was made. 
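
To make the one retention rule Facebook did disclose concrete, here is a minimal sketch of a 30-day purge of un-escalated risk scores. The record fields and storage format are assumptions made for illustration; Facebook has not said how this data is actually stored.

```python
# Sketch of the stated retention rule: risk scores too low to merit review
# are kept for 30 days and then deleted. The record structure (a list of
# dicts with "escalated" and "scored_at" fields) is an assumption.

from datetime import datetime, timedelta, timezone

RETENTION_WINDOW = timedelta(days=30)


def purge_low_risk_scores(records: list[dict]) -> list[dict]:
    """Keep escalated records; drop un-escalated scores older than 30 days."""
    cutoff = datetime.now(timezone.utc) - RETENTION_WINDOW
    return [
        record for record in records
        if record["escalated"] or record["scored_at"] >= cutoff
    ]
```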

Could Facebook's next big data breach include your mental health data?

Facebook's algorithm is meant to be a next step from suicide hotlines, which only screen callers who are actively seeking help. ANWAR AMRO/AFP/Getty Images

The risks of storing such sensitive information are high without the proper protection and foresight, according to privacy experts. 

The clearest risk is the information's susceptibility to a data breach.


"It's not a question of if they get hacked, it's a question of when," said Matthew Erickson of the consumer privacy group the Digital Privacy Alliance. 

In September, Facebook revealed that a large-scale data breach had exposed the profiles of around 30 million people. For about 400,000 of those users, the attackers could view their posts and photos. Facebook would not comment on whether data from its suicide prevention algorithm had ever been the subject of a data breach.

Following the public airing of data from the hack of Ashley Madison, a dating site aimed at married people seeking affairs, the risk of holding such sensitive information is clear, according to Erickson: "Will someone be able to Google your mental health information from Facebook the next time you go for a job interview?"

Dr. Dan Reidenberg, a nationally recognized suicide prevention expert who helped Facebook launch its suicide prevention program, acknowledged the risks of holding and creating such data, saying, "pick a company that hasn't had a data breach anymore."


But Reidenberg said the danger lies more in stigma against mental health issues. Reidenberg argues that discrimination against mental illness is barred by the Americans with Disabilities Act, making the worst potential outcomes addressable in court.

Who gets to see mental health information at Facebook

Once a post is flagged for potential suicide risk, it's sent to Facebook's team of content moderators. Facebook would not go into specifics on the training content moderators receive around suicide but insists that they are trained to accurately screen posts for potential suicide risk.

A 2017 Wall Street Journal review of Facebook's thousands of content moderators described them as mostly contract employees who experienced high turnover and received little training on how to cope with disturbing content. Facebook says that the initial content moderation team receives training, developed by suicide experts, on "content that is potentially admissive to Suicide, self-mutilation & eating disorders" and "identification of potential credible/imminent suicide threat."

Facebook said that during this initial stage of review, names are not attached to the posts that are reviewed, but Duarte said that de-identification of social media posts can be difficult to achieve.


"It's really hard to effectively de-identify peoples' posts, there can be a lot of context in a message that people post on social media that reveals who there are even if their name isn't attached to it," she said.

If a post is flagged by an initial reviewer as containing information about a potential imminent risk, it is escalated to a team with more rapid response experience, according to Facebook, which said the specialized employees have backgrounds ranging from law enforcement to rape and suicide hotlines.

These more experienced employees have more access to information on the person whose post they're reviewing.

"I have encouraged Facebook to actually look at their profiles to look at a lot of different things around it to see if they can put it in context," Reidenberg said, insisting that adding context is one of the only ways to currently determine risk with accuracy at the moment. "The only way to get that is if we actually look at some of their history, and we look at some of their activities."


Sometimes police get involved

A communications officer works in a 911 dispatch center.
If a post is serious enough, Facebook will contact emergency responders. Mike Groll/AP Photo

Once a post is reviewed, two outreach actions can take place: reviewers can either send the user suicide resource information or contact emergency responders.
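
Taken together, the review process described above amounts to a two-stage triage that ends in one of those two outreach actions. The sketch below is a hypothetical rendering of that flow; the stage names, inputs, and decision logic are assumptions, not Facebook's internal tooling.

```python
# Hypothetical two-stage triage: a flagged post goes to a first-pass
# moderator (who sees it without a name attached), may be escalated to the
# specialized team (which has more context), and ends in one of two outreach
# actions. All names and logic here are illustrative assumptions.

from enum import Enum, auto


class Outcome(Enum):
    NO_ACTION = auto()
    SEND_RESOURCES = auto()
    CONTACT_EMERGENCY_RESPONDERS = auto()


def first_pass_review(post_text: str) -> bool:
    """Placeholder for the initial moderator's judgment on whether the post
    suggests a potentially imminent risk worth escalating."""
    return "hurt myself" in post_text.lower()


def specialist_review(post_text: str, profile_context: dict) -> Outcome:
    """The specialized team weighs extra context (history, activity) and
    chooses between sending resources and contacting emergency responders."""
    if profile_context.get("imminent_risk_confirmed", False):
        return Outcome.CONTACT_EMERGENCY_RESPONDERS
    return Outcome.SEND_RESOURCES


def handle_flagged_post(post_text: str, profile_context: dict) -> Outcome:
    """Run both review stages and return the resulting outreach action."""
    if not first_pass_review(post_text):
        return Outcome.NO_ACTION
    return specialist_review(post_text, profile_context)
```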

"In the last year, we've helped first responders quickly reach around 3,500 people globally who needed help," wrote Facebook CEO Mark Zuckerberg in a post on the initiative.

Duarte says Facebook's surrender of user information to police represents the most critical privacy risk of the program.

"The biggest risk in my mind is a false positive that leads to unnecessary law enforcement contact," she said


Facebook has pointed out numerous successful interventions from its partnership with law enforcement, but a recent report from The New York Times documented one incident in which police intervened with someone who said they weren't suicidal; the police took the person to a hospital for a mental health evaluation anyway. In another instance, police shared with The New York Times personal information about someone Facebook had flagged for suicide risk.

Why Facebook's suicide algorithm is banned in the EU

The GDPR requires companies to get consent before creating or storing mental health data. Carl Court / Getty Images

Facebook uses the suicide algorithm to scan posts in English, Spanish, Portuguese, and Arabic, but it doesn't scan posts in the European Union.

The prospect of using the algorithm in the EU was halted because of the region's special privacy protections under the General Data Protection Regulation (GDPR), which requires that users give websites specific consent to collect sensitive information, such as information about their mental health.

In the US, Facebook views its program as a matter of responsibility. 


Reidenberg described the sacrifice of privacy as one that medical professionals routinely face.

"Health professionals make a critical professional decision if they're at risk and then they will initiate active rescue," Reidenberg said. "The technology companies, Facebook included, are no different than that they have to determine whether or not to activate law enforcement to save someone."

But Duarte said a critical difference exists between emergency professionals and tech companies.

"It's one of the big gaps that we have in privacy protections in the US, that sector by sector there's a lot of health information or pseudo health information that falls under the auspices of companies that aren't covered by HIPAA and there's also the issue information that is facially health information but is used to make inferences or health determinations that is currently not being treated with the sensitivity that we'd want for health information."


Privacy experts agreed that a better version of Facebook's program would require users to affirmatively opt in, or at least provide a way for users to opt out, but currently neither option is available.

Emily Cain, a Facebook policy communications representative, told INSIDER, "By using Facebook, you are opting into having your posts, comments, and videos (including FB Live) scanned for possible suicide risk."

Experts agree that the suicide algorithm has potential for good

Most experts in privacy and public health contacted for this story agreed that Facebook's algorithm has the potential for good.

According to the World Health Organization, nearly 800,000 people die by suicide every year, with teens and vulnerable populations such as LGBT and indigenous peoples disproportionately affected.


Facebook said that, in its calculation, the risk to privacy is worth it.

"When it comes to suicide prevention efforts, we strive to balance people's privacy and their safety," the company said in a statement. "While our efforts are not perfect, we have decided to err on the side of providing people who need help with resources as soon as possible. And we understand this is a sensitive issue so we have a number of privacy protections in place."

Dr. Kyle McGregor, the director of New York University School of Medicine's department of pediatric mental health ethics, agreed with that calculation, saying, "Suicidality in teens especially is a fixable problem, and we as adults have every responsibility to make sure that kids can get over the hump of this prime developmental period and go on to live happy, healthy lives. If we have the possibility to prevent one or two more suicides accurately and effectively, that's worth it."

Have a tip? Email Benjamin Goggin at bgoggin@businessinsider.com or DM him on Twitter @BenjaminGoggin.

If you or someone you know is experiencing depression or has had thoughts of harming themself or taking their own life, get help. In the US, call or text 988 to reach the Suicide & Crisis Lifeline, which provides 24/7, free, confidential support for people in distress, as well as best practices for professionals and resources to aid in prevention and crisis situations. Help is also available through the Crisis Text Line — just text "HOME" to 741741. The International Association for Suicide Prevention offers resources for those outside the US.
