
This. Isn't. Facebook's. Job.

No, seriously. Facebook should host content and serve ads. That's it.

They don't need to worry about being the arbiter of truthful news.

They don't need to stop "hate".

They don't need to make calls about who needs a psych eval.

They just need to host content. But if they truly stick to that, what happens when someone sues them for not making the call?

I made the New Year's resolution to delete my account yesterday. I almost chickened out, but then this article came up.



I mentioned in a previous HN thread that there likely wasn't a line Facebook could cross that would make me delete my account. I'm not there yet, but I'm willing to admit that I was wrong. If I actively posted on Facebook, I would leave the platform here and now.

I've had two friends who went to use university counseling services for anxiety and depression they wanted help with, only to be involuntarily committed to a local hospital for saying the wrong thing, until their families intervened to have them discharged. The episode alone would have merely been stressful, but each of them was also barred from returning to classes until the next semester. Neither was at any real risk of suicide or self-harm, but the universities they attended were naturally obligated to err on the side of caution.

Such events are relatively rare, but keep many students who are suffering from mental illness from taking the risk of using available resources.

I'd honestly hate to be committed because I was a little too sarcastic in a Facebook comment, and, much like public universities, I fear Facebook is incentivized to make the least risky call.


> Such events are relatively rare, but keep many students who are suffering from mental illness from taking the risk of using available resources.

Even worse: exactly these kinds of rules create a strong incentive to simply go through with the suicide, because talking about it will lead to worse consequences.


In the US, state laws strictly regulate who can and cannot be involuntarily committed to psych wards. Staff at a university really can't be any more "strict" or "harsh" than staff at any other counseling center.

In recent decades, psych laws have become stricter because, as a society, we have decided that it is better to involuntarily detain 20 people for a few days than to let one person end their life. That seems like a pretty fair tradeoff to me, although one that reasonable people could easily disagree on. If you disagree, you should be lobbying your state psychological association, not complaining about random therapists at colleges.


States often allow psychological holds and extensions.

There’s little to no evidence required to keep someone for days or weeks under “observation”. Some states have set up entirely separate court systems without due process, since commitment is a “civil matter”.

Some amount of blame falls with states allowing this type of behavior.

The burden should be on psychologists to demonstrate that these practices work and that hospitals are properly equipped.


I still might chicken out. There are a lot of people who will likely fall off my radar. The thought makes me sad.


Don't get me wrong. Leaving isn't worth it yet. Still quite a ways off.


Least risky to whom, exactly?


Facebook does not just host content.

I would agree with you that Facebook should not be an arbiter of truthful news or stop hate.

But they are indeed an arbiter. They continue to push content at users to keep them consuming the product, and they promote content that keeps people engaged, which usually means content that makes people angry.

If they stopped inserting posts from sources users don't follow and stopped ranking stories from friends, I would agree with you.


They also host advertising. I wonder how much advertisers would be willing to pay for suicide risk flags.


I hear you... but what if they have a reasonably accurate model that tested well (say 75%)?

One day, their researchers decide to test how accurate the model is by setting real-time flags but taking no action. And let's say it turns out that, of 100 flagged people, 75 did end up taking their lives.

What would you propose they do?
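A side note on the arithmetic here, since "75% accurate" and "75 of 100 flagged users were genuinely at risk" are very different claims: the share of flags that are true positives depends heavily on the base rate. A minimal sketch, where every number is hypothetical and flag_precision is just an illustrative helper:

    # All numbers here are hypothetical. "75% accurate" is not the same as
    # "75 of 100 flagged users are genuinely at risk": the fraction of flags
    # that are true positives depends heavily on the base rate.

    def flag_precision(sensitivity, specificity, base_rate):
        """P(actually at risk | flagged), via Bayes' theorem."""
        true_pos = sensitivity * base_rate                # at-risk users correctly flagged
        false_pos = (1 - specificity) * (1 - base_rate)   # healthy users wrongly flagged
        return true_pos / (true_pos + false_pos)

    # A model that catches 75% of at-risk users and clears 75% of everyone
    # else, in a population where 1 in 1,000 users is actually at risk:
    print(flag_precision(sensitivity=0.75, specificity=0.75, base_rate=0.001))
    # -> ~0.003: roughly 3 of every 1,000 flagged users would be true positives

At anything like a realistic base rate, even a model that "tested well" would flood the system with false positives, which is exactly the involuntary-commitment worry upthread.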


Being able to do something is completely different from it being an advisable thing to do, and this is a can of worms that would be better left untouched.

For example, if Facebook can identify suicide risks in Ohio, then they will be pressured to identify gays in Saudi Arabia and Muslims in Xinjiang. Or weed smokers, underage drinkers, and a bunch of everyone-does-it-but-nobody-cares crimes, where the punishments are technically quite harsh.

The only effective standard that differentiates suicide from the others is "what Facebook's executives think is morally acceptable". And Facebook's executives are certainly not the right people to arbitrate moral issues, no matter how black and white.

The best layer of protection we have here is Facebook taking the official position of "we don't judge, we just sell ads". It isn't perfect and it isn't necessarily logically consistent, but if Facebook sees itself as an arbiter, then sooner or later it may well disagree with something we do and take action to stop us from doing it.


Say there's a car manufacturer that makes a car. It's a reasonably safe car, but sometimes people speed in it.

Without really telling anyone, they install a webcam and point it at the driver to predict whether they might speed on a certain day, based on facial expression and skin color. Why those? Just a weird hunch! They find that the person actually did speed. (Turns out they didn't really tell anyone that all your driving data was being uploaded to their servers, too!)

The problem is twofold. First, they shouldn't have been collecting this data on their users in the first place. The manufacturer's job is to build a car, not to ensure that the car is being used responsibly. That is simply an impossible task to charge anyone with, and the implications are frightening.

Second, I'm not a believer in using algorithms to predict behavior. Too much black box "mangle the numbers until it works". You could make an argument that there's a correlation between skin color and speeding. You might even be able to make pretty charts, but I think this is simply bad science.

You make it sound like this model would actually continue to work at 75% in perpetuity. I think it's far more likely that it worked for this one test and that performance would trend sharply downward (the sketch below shows how a model can pass its own test while learning nothing). However it makes this decision, I think it's about as absurd and wrong as the crazy example above.

On the other side of the coin, people are putting too much trust in magic algorithms they don't properly understand, and are far too complacent about egregious invasion of privacy.
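To make the "mangle the numbers until it works" failure mode concrete, here is a minimal sketch on entirely synthetic data (the sizes and the logistic-regression stand-in are made up for illustration): a model with more knobs than data points can score perfectly on the data it was tuned on and no better than a coin flip on new people.

    # Entirely synthetic data; sizes are made up for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_train, n_test, n_features = 100, 10_000, 500

    # Labels are pure coin flips; the features carry no signal about them.
    X_train = rng.normal(size=(n_train, n_features))
    y_train = rng.integers(0, 2, size=n_train)
    X_test = rng.normal(size=(n_test, n_features))
    y_test = rng.integers(0, 2, size=n_test)

    # With 500 features and only 100 examples, a weakly regularized model
    # can linearly separate the training set even though there is nothing
    # real to learn.
    model = LogisticRegression(C=1e6, max_iter=10_000).fit(X_train, y_train)
    print(model.score(X_train, y_train))  # ~1.0 -- looks like it "works"
    print(model.score(X_test, y_test))    # ~0.5 -- coin flip on new people

None of this proves Facebook's model is this fragile, but it is why a single in-house test run isn't evidence that a behavioral predictor will keep working.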


There are any number of measures we could take that would prevent deaths. We could lock people in a cage and monitor them 24/7. That would keep them alive. Should we do it?

The number of deaths prevented is not the sole factor we should use to determine whether a particular course of action is good.


Agreed. In fact, I feel a lot of the issues society has with privacy and freedom of speech right now come from how willing it seems to be to trade virtually all freedom for the illusion of 'safety'.

A free society is also often a risky society, but the rewards of freedom outweigh those risks.


I would have them either enter the mental health and public safety industries or throw the algorithm away.


Nothing, because Facebook should not be in the business of making these decisions.

For some folks, like me and the OP, there are no “buts” about it.


It isn't Facebook's job. They can choose not to do these things.

But I'd rather that they did do these things. They have an opportunity to make some people's lives better by doing things like this and I think it would be nice for them to do that.


With great power comes great duty.



