FB seals off some internal message boards to prevent leaking, immediately leaked (businessinsider.com)
283 points by the-dude on Oct 14, 2021 | 159 comments


Ah yes, the time honoured tradition of improving your integrity department by removing transparency.


Their work is just so objectively good and correct that allowing insight into what they're doing can only harm the work they're doing.


Your statement reminded me of something I read during my undergraduate courses.

An excerpt of Abraham Lincoln’s seminal work of popular wisdom, “Boom! Goes the Capitalism”:

“Schrödinger's Policy is a seemingly paradoxical thought experiment devised by Erwin Schrödinger—after a late Wednesday night of strong ale and copious amounts of poutine—that attempts to illustrate the incompleteness of an early interpretation of quantum mechanics when going from subatomic to macroscopic systems. Schrödinger proposed his "policy" after debates with Albert Einstein over the Copenhagen interpretation, which Schrödinger defended, stating in essence that if a scenario existed where a policy making body within an (ostensibly) benevolent tech company could be so isolated from external interference (decoherence), the state of the group’s work can only be known as a superposition (combination) of possible rest states (eigenstates), because finding out (measuring the state) cannot be done without the observer interfering with the experiment — the measurement system (the observer) is entangled with the experiment.

Einstein reportedly suggested that tech companies could never be benevolent, especially when they are expected to maintain an objective view when their finances were concerned.

After an unknown number of rounds of distilled spirits derived from wormwood, Schrödinger conceded this was true and instead recast his thought experiment to feature a cat as the primary actor; a creature which he felt had a vastly higher likability.”

/hat-tip to the editors of the Schrödinger Wikipedia page


For those without a sense of humor, this is sarcasm.


I think it transcends sarcasm, circling back to how they actually feel.


your sarcasm – our ouroboros


I see what you did there.


Some cultures do not have sarcasm, while still having a sense of humor.

And they probably also have forms of humor that you would not understand. Which does not imply that you don't have a sense of humor.


like how talking loudly (which translates to ALL-CAPS) is considered humorous

the louder – the funnier


Really? Thanks!


Ask Apple how that turned out for them, being completely hermetically sealed from the public. Zero transparency.

This is a low quality knee-jerk emotional comment tickling the current FB zeitgeist and HN tends to love those.


Apple isn't even remotely in the same position as Facebook.

Besides, Apple is private; Facebook is anything but.


Apple on Apple ads : "Want relevant ads?"

Apple on Facebook ads : "Want spyware?"


Given that their Chief Integrity Officer is Guy Rosen, someone known for running Onavo, which was basically spyware... I really question the cognitive dissonance that FB employees must go through to believe there is any kind of integrity taken seriously in that fucking company.


special circumstances. it's a democratic grey area.


Sometimes transparency encourages bad actors.


No one owes Facebook the benefit of that doubt.


The point is that protecting your own privacy should never be used as an indication of guilt.

"If you've done nothing wrong, you have nothing to hide" is the lie of the authoritarian. It is naive to think that a bad actor cannot construct a falsehood with a collection of truths.

I have worked on court cases for both individuals and (small) businesses that were innocent, but pissed off the wrong government agency, and got pushed/intimidated into pleading guilty.

Judges are flawed human beings who rule against innocent people and companies ALL THE TIME, because court cases are a flurry of biased witness accounts and misleading/incomplete/subjective evidence. Folks on HN should appreciate this most: I work in IT forensics, and judges' appreciation for the technical details of cases is usually very low, so they make mistakes. Defense attorneys know that, so they encourage innocent people to strike a (guilty) plea deal even when they have good evidence of their innocence, because literally anything can happen in a courtroom. Prosecutors also know this, so they throw absolutely everything they can think of in front of the judge, from hearsay to irrelevant data points, trying to shape the judge's overall impression of the defendant so the judge will get frustrated with the complex details and rule on other, subjective evidence. It's really an eye-opening experience to see people getting railroaded in court.

Few appreciate it until it happens to them, and they are presented with a subset of facts and hearsay that the prosecution has architected to paint a misleading picture in front of the court.

That is just the unfortunate reality of the world. Protecting your own or your company's privacy is the prudent way to behave even when you are completely innocent.


No one thinks FB is evil because they aren't transparent. It is because of the observable harm to society and their perceived reluctance to mitigate that harm. Internally, they also struggle with retention because employees don't want to work at a place that is evil, or perceived to be.

Thus, FB decides to be more transparent and gets a little credit for wanting to fix the problem. All good.

Now FB revokes that after it leads to criticism (which is the point of being transparent...), so they revert to the previous state, only now there is even more evidence and the problem is worse.

It's the height of irony that a company that profits from selling information about you would ever try to defend itself with a right to privacy. Not only is that not the issue at hand but it speaks to the lack of self-awareness that I can only hope will doom FB to irrelevance or legal destruction.


I am not arguing Facebook's goodness or evilness.

> It's the height of irony that a company that profits from selling information about you would ever try to defend itself with a right to privacy.

Everyone deserves a robust defense - even the guilty. It is not for their justice - it is to protect the integrity of the system.

Everyone also deserves the right to privacy because bad actors (competitors, disgruntled employees, political activists, etc...) are always happy to twist facts leaked to make a biased case. This happens all the time - there is no shortage of talking heads on spin alley.

You're right in that FB should have probably never tried to be more transparent - and doing so was naive and destined for failure. In the court of public opinion / social media (and eventually probably ACTUAL court someday), you simply cannot successfully voice your innocence. Even apologizing when you're wrong makes things worse.

My advice to clients is always...

#1. Shut up. Do not make public statements. Do not engage anyone publicly. Anything you say can and will be used against you - both in and out of court.

#2. See point above.

In my opinion, the correct strategy for a social media company these days is to be as ethical as possible, and to say as little as possible publicly about ethics or internal policies.


Your lecture on jurisprudence and standards of evidence is misplaced. This isn't a courtroom. We are not judges presiding over a case. These are matters of politics and profession as much as law and here we are free to interpret behavior, state our views and consider our associations as our knowledge and experience inform us. Further, unlike Facebook which has long since squandered whatever good faith one might imagine they were owed, we are owed the assumption of good faith while doing it.


It applies even more outside of a courtroom. Most people get information via very agenda-driven news articles, and not having "making sense of weird situations" as their day job, they're even more likely to be misled than judges.

At least a court has lots of rules built in to try and provide some amount of fairness to both sides in a debate. The political and public sphere has no such rules, making it even more vulnerable to such railroading.


"It applies even more outside of a courtroom."

Perhaps inside the heads of some. In the actual external world people make decisions based on incomplete information, first impression, secondhand observation, self serving bias and other imperfect influences. Neither you nor I will live to see this reality 'corrected.'


Yes, that's why corporations, even the most perfectly good ones in the world, still value their ability to keep things private.


That is also why, when one witnesses a corporation hide that which was previously observable, we may suspect malice.


Transparency and privacy aren't antonyms.


Pretending that transparency always has only good effects is clearly nonsense; but I think that pretending that the lack of transparency ever has good effects is suspect at best.


Indeed. I've always been deeply influenced by the old saying:

Your Character Is What You Do When Nobody Is Watching

Your chances of having a problem from stuff being leaked are greatly reduced if you aren't doing shitty things to begin with.

Not that I think that's what motivated this "whistleblower".


Guilt should not be inferred by a person or company protecting its own privacy.

It is naive to think that a bad actor cannot craft a falsehood from a subset of truths they select that you've chosen to make public.


When there are motivated leakers from both sides of the political spectrum it’s impossible to “not do shitty things”; someone will always be disappointed: https://www.foxnews.com/media/facebook-whistleblower-reveals...


Diplomats have long argued that certain concessions can only be made if the specifics of their negotiations are kept private.


Since transparency is the opposite of lack of transparency, I don't understand how transparency can have bad effects without the opposite (lack of transparency) having the opposite (good) effect.

It can't both be the case that transparency ever has bad effects and lack of transparency never has good effects: pick one of the bad effects of transparency and introduce "lack of transparency" in its place. Isn't that bad effect now gone? Isn't that a good effect of the lack of transparency?


I think it's a question of whether we're playing a single round of a game or an iterated game; or, to phrase it differently, whether we're discussing single acts or culture.

Granted, any time that transparency leads to bad effects, lack of transparency would presumably have led to the lack of those bad effects, and so be good, at least indirectly. A one-time lack of transparency can perhaps be appropriate.

However, a culture of lack of transparency is not the opposite of a culture of transparency, and those are the two poles I really see us discussing. A culture of transparency can lead to problems, but can also lead to good things (or at least to the exposure of bad things so that they can be addressed); whereas a culture of lack of transparency always will lead to problems, and any temporary good results from its imposition will be overwhelmed by its long-term ill effects.


It is also a question of whether the transparency allows dissenting views. Often, it doesn't.

Which isn't to claim that all dissenting views are good. But when you are an industry that is having a hard time with minority views, forcing the minority to take center/public stage at all times is actually somewhat counter productive.

Put differently, if you are going to encourage transparency for communication and decisions, you have to take extra care to not punish good faith mistakes.


The whole point of 1:1 is that some privacy allows people to say things they would not otherwise.

So yes, lack of transparency can have good effect.


I want to believe this. I also believe that 1:1s are a bit of an aggressive tactic to force someone to talk when they may not want to. :(


It's a hard problem. Some people don't want to talk for good, well-founded reasons; others (and I'm often guilty of this myself) don't want to talk because they just instinctively flinch away from hard but necessary discussions.


1:1 is a tool that requires a certain level of skill to be used successfully. it can be used for many purposes and in many ways.

if you're describing your past experiences, you should definitely level up that skill, especially if it makes you uncomfortable.


Well said.


Is there an example you have in mind?


Yep: https://www.amazon.com/Google-Leaks-Whistleblowers-Expos%C3%...

I'll save you the read. A Google employee downloaded 100k documents from the company-wide available internal wiki, identified all the ones that seemed anywhere from highly-to-mildly offensive to anyone in the world, and then published it all as a book on Amazon. The book has been out for about a year and continues to sell well (not to mention the hundreds of paid media appearances for the author to speak in front of like-minded audiences).

Here's the fun part: Google considered suing, but recognized that this would trigger the Streisand Effect and let him off the hook. Obviously, this was a calculated risk this dude took and that paid off (at least from the legal perspective).

Did Google act ethically 100% of the time? God no! But did it hurt them to have everything out in the open? You betcha.

So clearly there's a market out there for juicy internal documents. I would still say the first step is not to do anything wrong, ever. But once you have enough people working for you and you can't be in every meeting at all times, shit is going to happen, and people are going to want to pay to find out about it.

EDIT: replaced "filtered out" with "identified"


>>> Sometimes transparency encourages bad actors.

> I'll save you the read. A Google employee downloaded 100k documents from the company-wide available internal wiki, identified all the ones that seemed anywhere from highly-to-mildly offensive to anyone in the world, and then published it all as a book on Amazon. The book has been out for about a year and continues to sell well (not to mention the hundreds of paid media appearances for the author to speak in front of like-minded audiences).

You also have to keep in mind that a "company-wide available internal wiki" is going to be almost entirely full of boring crap that people literally have to be paid to read.

I mean, the internal wiki pages for nearly every system I've worked on in my company don't even describe what that system is supposed to do in terms someone from outside the team would understand.

> So clearly there's a market out there for juicy internal documents. I would still say the first step is not to do anything wrong, ever. But once you have enough people working for you and you can't be in every meeting at all times, shit is going to happen, and people are going to want to pay to find out about it.

It's not a "bad actor" thing to leak things like that. "Shit" that happens at low levels is still shit, even if it's not company policy coming from the top.


Google a decade ago wasn't like that. I learned so much about software development from reading internal wiki pages for things like MapReduce, GFS, Colossus, their search serving system, search algorithms in general, their authentication/ID system, GMail storage, Google Reader feed ingestion, Google Moderator voting algorithms, etc. Not to mention snooping on code reviews from folks like Jeff Dean, Rob Pike, and Guido van Rossum.

It's not like that now - things are so much more locked down. The first time I was there I could basically treat it like a Ph.D that I got paid for, in terms of learning new things. This time, it's a job where I do my tasks and receive lots of money in return. I think this is kind of a shame - I liked the environment much more when most people were there to learn and invent rather than collect a paycheck - but perhaps this is the inevitable result of a company growing past 100K employees.


>"Shit" that happens at low levels is still shit, even if it's not company policy coming from the top.

In some cases, it can be worse. If something happens at a low level that is unambiguously bad and no one at the top does anything about it despite knowing about it, then to the public, it's as if upper-management is condoning the behavior.

Case in point: the Redskins practicing human trafficking with no strong public denouncement from the team's front office.


> Here's the fun part: Google considered suing, but recognized that this would trigger the Streisand Effect and let him off the hook.

Well, their strategy worked because I’ve never heard about this book.


> But did it hurt them to have everything out in the open? You betcha.

I hear this often and find it a bit puzzling. Perhaps it's part of believing that speech is violence - thinking that critical speech or angry feelings are really hurting Google.

So how do you think it hurt them? Did they lose customers, and if so, how many? Did they miss a quarter? Have to lay people off? Cancel product initiatives?


If any of these calamities eventually happen, we'll have to blame GP post.


Honestly this doesn't happen if you have a good corporate culture to begin with (which Google has not had for quite a while).

I always go back to my experience at HP when Bill and Dave were still alive: the leadership decision to trust employees was pretty much reciprocated so openness worked just fine.

So lack of leadership skill (especially an inability to "lead by example") is usually what causes this type of situation.


This is folly because it only takes ONE person to leak documents, and there are always disgruntled employees somewhere in a company at the scale we are talking about.

You cannot assume good behavior of literally tens of thousands of employees because you have an overall "good" company culture.

That's just impossible.


Best Sellers Rank: #48,190 in Books (See Top 100 in Books)

#2 in Tribology Mechanical Engineering

#4 in Content Management

#10 in Online Internet Searching

what the hell


Is your argument that the world would be a better place if those wiki pages weren’t public?


Not at all. Most companies are not in the business of making the world a better place, and therefore they are not going to be optimizing their info access policies against that target, but instead they will be optimizing for shareholder value or what have you.

Don't confuse this statement with an endorsement - I wish it all worked differently, but this is the reality.


Most companies put a lot of money and reputation on the line advertising and taking stances on current events in such a way to imply that they are "in the business of making the world a better place". Either they need to step up and practice what they preach, or a regulatory agency like the FTC should step in when they make false claims about corporate governance.


I think this comment is giving an (interesting, IMO) example of the claim "Sometimes transparency encourages bad actors" and not making a value judgment on whether it is worth the tradeoff to limit internal transparency just to have stopped this one bad actor.


Interesting! I'm going to have to get a hard-copy of this book.


If it's the same stuff on his website it is incredibly underwhelming https://www.zachvorhies.com/google_leaks/


> But did it hurt them to have everything out in the open? You betcha.

Good. Google should not be able to hide anything from anyone, especially the public whose privacy they invade. The more leaks, the better off we are. Especially the kind of leaks that hurt them. Those are the ones we want most. If they didn't want us to find out, maybe they shouldn't have been doing it, right?


So they should make all their code open source? Have no intellectual property rights?

You are suggesting they just go bankrupt.


> You are suggesting they just go bankrupt.

I'm suggesting they stop surveilling the world's entire population. Until they stop violating our privacy, I won't feel sorry for them when others violate theirs. Total lack of privacy is exactly what these big tech companies deserve.


"Paid media appearances" - how would you know he got paid?

I'm not on his level, but I have a book and I've done "media appearances" (BBC stations in Berkshire & Bristol). You don't get paid for promoting your book.

Maybe on a TV show, the union forces them to pay you the minimum, which I believe is not very much.


This book is only ~200 pages long. What am I missing?


When the GP says "filtered out" the offensive ones, I think he means he filtered out the inoffensive ones and kept the offensive ones, which makes for a much shorter book.


Yep, sorry for the logic brain fart. Updated the parent comment accordingly.


Google used to (c. 2005-2008) let you freely download your contacts & email from GMail, either as a file or an API. Facebook used this to seed Facebook. When you signed up with an @gmail.com account for Facebook, it'd prompt you to import all your contacts, and then send them all an e-mail inviting them to sign up for Facebook.

Google's requests for some reciprocity from Facebook, where users would own their own social graph through technologies like OpenSocial, naturally fell on deaf ears. Google walled off their own platform and decided to abandon open standards for proprietary systems a couple years later, and this is a large reason why we can't have nice things.

I believe GDPR enforces the ability to download all your data and both Google and Facebook offer it, but it's practically useless now. GDPR also makes the regulatory burden of starting a new social network impractical, so it's not like we could get an ecosystem of interoperable competitors.


> I believe GDPR enforces the ability to download all your data

Only data you have provided, both directly (e.g. your entered name) and indirectly (e.g. page views). Analyses the company generates from that data aren't covered by the right of access. I would presume both Facebook and Google have a lot of data that falls in the exempt category.


Not sure if this applies, but the Obama administration has spoken previously about how releasing/publishing their visitor list was something they regretted. They got mild accolades at the time for doing it (in a move to be more transparent than other administrations), and then the right wing (mainly) used the visitor log to make hay for 8 years. Then the Trump admin stopped releasing the info and no one made a big deal about it (relative to the number of stories written about who showed up on the lists the Obama admin released).


That FB whistleblower with PR and legal team


Being that whistleblower and not having legal and PR teams would have been rather stupid.


She is not a whistleblower; she is a front.


Even if this was true, should I care? If the leaked documents are real then it doesn't matter.


For who?


Limited Hangout https://en.wikipedia.org/wiki/Limited_hangout

Not necessarily saying that is the case but if you wanted to steer regulation in a direction....


FB saying "please oh please don't regulate social networks!" sure would seem like Br'er Rabbit. Most prospective regulations in this area would have a bigger impact on, e.g., a completely open community-driven network.


Interesting. I was thinking more in line of ppl simply siphoning money and power from fb. I don’t see a grand scheme here, just some blood in the water.


Are you suggesting in a roundabout way that we should get rid of all the things that can encourage bad actors?


Facebook's existence encourages bad actors.


“Facebook CEO Mark Zuckerberg said in a statement following Haugen's testimony that the company's work had been "taken out of context and used to construct a false narrative."”

Funny how Zuck blames a whistleblower for doing the exact same thing his company does to make billions.


It is not funny, it is just boring at this point. Every single response from the mega corps to anything (leaks, whistleblowers, fines, hacks...whatever) is predictable, soulless, boring, useless. Or some combination of those.

Not one of the responses is constructive or useful, or accepts responsibility. They don't even say something might be wrong and that they will work on it; they don't even pretend to care. Even that is too much these days.


Zuckerberg is... kind of right, though, and FB's response to the whole scandal seems mostly reasonable. FB's internal research was 1) of poor quality and 2) selectively presented. The scientific literature on the whole is divided wrt the impact of social media use on mental health.


This might just be the japanophile in me, but I get the impression they handle things very differently.

From executives taking ownership & consequences for failure, first DDG example: https://hbswk.hbs.edu/item/why-japan-s-businesses-are-so-goo...

to longer-term thinking: “a lot of Japanese companies think about 100 or 200 years from now and envision the kind of future they want to create. During the tsunami disaster, the key mindset of executives was: We have to empathize with others. And companies ought to do the same thing now, during the current crisis, empathizing with those who are suffering and trying to figure out how to help.”

https://hbswk.hbs.edu/item/why-japan-s-businesses-are-so-goo...


> the exact same thing his company does to make billions.

Is that what his company does? Or is that what some of his users do?

Facebook fundamentally makes money from its users liking stuff, thus signalling preferences, so it can then advertise to users with those preferences.

I don't think that has anything to do with the "malignant social" behaviour of the human species. It is very hard to provide a social communication platform to a species addicted to sharing stories, and not to "science".


Do you really believe that Facebook is actively tailoring posts to create a false narrative rather than just people posting things and Facebook showing you those you might engage with more?


Facebook's newsfeed algorithms are actively tailoring the content that is surfaced in your newsfeed in a manner that encourages 'engagement', but this quest for engagement creates false narratives and an inaccurate understanding of reality. People engage with combative and controversial content that supports their biases or triggers a few discrete emotional centers like disgust, outrage, and strong in-group/out-group distinctions. I also do more than just 'believe' this; I know it to be true for a fact.
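As a toy illustration of that dynamic: an engagement-optimizing ranker never looks at what a post says, only at how people react to it. The feature names and weights below are invented for the sketch; they are not Facebook's actual model:

```python
# Hypothetical engagement-based feed ranking. The reaction features and
# weights are made up for illustration, not taken from any real system.
def engagement_score(post):
    # Strong-emotion signals (comments, angry reactions) are weighted
    # more heavily than passive approval, so combative content rises.
    return (3.0 * post["comments"]
            + 2.0 * post["angry_reactions"]
            + 1.5 * post["shares"]
            + 1.0 * post["likes"])

def rank_feed(posts):
    # Note: nothing here inspects the content itself -- only reactions.
    return sorted(posts, key=engagement_score, reverse=True)

feed = [
    {"id": "vacation_photos", "comments": 2, "angry_reactions": 0,
     "shares": 1, "likes": 40},
    {"id": "outrage_bait", "comments": 30, "angry_reactions": 25,
     "shares": 12, "likes": 10},
]
print([p["id"] for p in rank_feed(feed)])  # the outrage post ranks first
```

The point of the sketch is that a benign-looking objective (maximize predicted interactions) systematically prefers whatever provokes the strongest reactions, without anyone ever deciding to promote outrage.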


> Facebook's newsfeed algorithms are actively tailoring the content that is surfaced in your newsfeed in a manner that encourages 'engagement', but this quest for engagement creates false narratives and an inaccurate understanding of reality

Except that the “algorithms” are executed by humans and not always concretely defined, this has been true of every ad-supported commercial news medium ever, for the same reason as it is for Facebook: the business model is securing attention of an audience to sell ads to them.


The humans running the news agencies used to at least pretend to exercise due diligence when selecting their stories. Without humans reviewing the stories that run, the paperclip/engagement-maximizer is happy to run garbage.

Facebook seems happy to be the Weekly World News, amplifying the modern BatBoy stories.


Basically this, but it goes even a level deeper. Facebook engineers do not know they are working at the Weekly World News. At a high level the algorithms treat the actual content as a black box; what matters is how people respond to and interact with that lump of content, and this response is what is measured and optimized. A human editor in a newsroom at least applies some conscious thought to the nature of the story and the facts claimed within. The people designing and operating the paperclip/engagement-maximizer do not really try to understand what they are maximizing, other than that it is a revenue-generating target and their paperclip production gets published in the quarterly reports.


Isn't this website, post, and comment an example as well? Content ranked by engagement, and a comment on controversial content that supports bias.


All the content is still plainly visible with no extra clicks or screens. In the standard interface I often miss hackernews comment rankings unless they are highly negative (causing the comment to be grey).

The comments are also broadly divided by topic (the post itself) whereas a facebook feed is a firehose of 'anything that might engage the user'.

These two facts alone heavily dispute your argument in my opinion.


>Do you really believe that Facebook is actively tailoring posts to create a false narrative

They have done something similar before[0]. So any denials on their part now will necessarily be taken with a truckload of salt.

    In a study with academics from Cornell and the University of California, Facebook filtered users' news feeds – the flow of comments, videos, pictures and web links posted by other people in their social network. One test reduced users' exposure to their friends' "positive emotional content", resulting in fewer positive posts of their own. Another test reduced exposure to "negative emotional content" and the opposite happened.
[0]https://www.theguardian.com/technology/2014/jun/29/facebook-...
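A minimal sketch of the mechanism that study describes: withhold posts of one sentiment class from a feed, then compare what the treated users go on to post. The sentiment labels and field names here are invented for illustration, and the real study only reduced exposure rather than eliminating it:

```python
import random

# Hypothetical sketch of the exposure-reduction design described above.
# The "sentiment" labels and data layout are illustrative, not
# Facebook's actual feed model.
def filter_feed(posts, suppress_sentiment, rate, rng):
    kept = []
    for post in posts:
        # Withhold a fraction of posts of the targeted sentiment.
        if post["sentiment"] == suppress_sentiment and rng.random() < rate:
            continue
        kept.append(post)
    return kept

feed = [{"id": i, "sentiment": s}
        for i, s in enumerate(["positive", "negative"] * 5)]

# rate=1.0 makes the demo deterministic; the actual study only *reduced*
# positive exposure, then measured whether users' own posts shifted.
treated = filter_feed(feed, "positive", rate=1.0, rng=random.Random(0))
print([p["sentiment"] for p in treated])  # only negative posts remain
```

The ethical objection is not to the filtering code itself but to running it on unwitting users and measuring the emotional fallout.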


This sounds like an academic study with willing volunteers.


"Willing" is not enough. You need informed consent.


How were they not sued for even attempting to cover their butts for human subjects research standards?


Sounds like a distinction without a difference to me. If I make a robot and instruct it to "unobtrusively retrieve bank notes" I'm still picking pockets.


You don't have to manufacture individual posts if you can just chain existing posts together to create narratives. I'm not sure if Facebook tries to create a specific agenda, but it is no secret that they certainly try to paint a specific picture: The one that creates the most outrage, and they probably just don't care what that picture looks like.


I think you are right, let’s just blame the users and all our problems are over.

“Hey, I just sell guns! Guns don’t kill people, people do!”


well... There is some truth to that, honestly.

If I sell guns to someone living in the middle of Alaska, those guns help keep them safe from bear attacks, and hunt to find food. If I sell guns to bank robbers in NYC, those guns are used to hurt people.

Similarly, Facebook helps people from across the world keep in touch with each other. It also helps people say very mean things to a wide audience.

Nudging your users to use the tool you created only for good is not easy.


Facebook doesn't "help people from across the world keep in touch".

In its quest to "be the internet" it has become a stream of trash that actively hinders keeping in touch with other people.

If it focused on user interaction and personal connections, we'd be having a different conversation about Facebook.


> Facebook doesn't "help people from across the world keep in touch".

Well, that's exactly my personal use case for FB, as well as that of many people I personally know irl. It genuinely does help me and my friends to stay in touch. We don't browse the feed for hours or anything (I open FB maybe twice a week for 20-30 mins tops, and that's only because I am also checking up on a few hobby-related FB groups I am in).

Could it be the case that this is NOT how most FB users use the service? I mean, sure. Still would be wrong to say that it doesn't "help people from across the world keep in touch," because there is a non-insignificant number of people who use FB for this exact stated purpose, me included.


Facebook makes it hard for me to keep in touch with people, since I have to manually pick through a mountain of bullshit to find any posts that are actually from my friends and actually about their lives

At first I could set up a list of “Actual Friends” and see all their posts, but now it’ll only show a couple days’ worth, so I have to manually click each profile and check them one by one

If they had RSS-enabled blogs or kept up with an email chain or used a shared subreddit, there’d be no problem


I mean... they are actively pushing you toward the common practice of following news sources, favorite teams, groups, etc.

I have less than 200 "friends", Reuters, Economist, DeWalt and a few local groups on my profile. And it's a stream of complete garbage....

Because I get served the article posts that my friends comment on (that's the default) or interact with.

That's the normal use case for Facebook, to get you dragged into the cesspool.

My sister is paranoid, and it was Facebook's desire to get more interaction out of her that literally resulted in her becoming a paranoid COVID anti-vaxxer. Which is infinitely infuriating...

At this point even Reddit is better than Facebook.


It would indeed seem odd to me to not assign at least some blame to the people doing the actual shooting.


“We can’t change the facts so let’s focus on changing what words mean”


Dear Facebook, remember when you said only criminals have something to hide?

But thanks for the argument! Now, whenever I have to explain why privacy matters to someone, I will say that I don't want facts about me taken "out of context" because my life is "complex" and that "might decrease morale".


When did Facebook say only criminals have something to hide?


Yeah, I think that was Google and Eric Schmidt.

Facebook doesn't believe anyone should have anything to hide.


Confirmed, it was Eric Schmidt (2009):

If you're concerned about Google retaining your personal data, then you must be doing something you shouldn't be doing. At least that's the word from Google CEO Eric Schmidt.

"If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place," Schmidt tells CNBC. . ."

https://www.theregister.com/2009/12/07/schmidt_on_privacy/


I remember another meme where the government says "you don't have anything to hide," and then the other side says "then you won't mind me publishing all of this secret government info?", and the government gets very angry.


I don’t think I’ve ever seen executives do something to prevent “decreased morale” that hasn’t immediately resulted in morale falling right off a cliff.


The title suggests an irony that isn't present. The change in policy was immediately leaked, but the policy was designed to keep the discussions on those boards from leaking (rather than the policy itself).


Full title : Facebook told staff it would seal off some internal message boards to prevent leaking. The change was immediately leaked.


> protecting elections

Why is a private company involved in this? Shouldn't this be the responsibility of a government agency? If I make a company as large as FB can I start getting involved in controlling elections?


Because the US needed to protect its election from its own government; checks and balances had all failed, and so, in time-honored American tradition, everyone is looking for the market to fix it instead of the people they elected.


Because everyone blamed them for not doing it. It's been hard to miss all the articles about misinformation or whatever by users on Facebook regarding the US election and the numerous calls for them to do something about it.


That's an uncharitable view of "controlling". Is the guardrail on a highway controlling you?

Any form of media past a certain size can be used by enemies of a country to influence elections. The internet just makes it easier to do so without a physical presence.

I don't see why Facebook wouldn't be expected to make some attempt to avoid that.


>> protecting elections

> Why is a private company involved in this? Shouldn't this be the responsibility of a government agency? If I make a company as large as FB can I start getting involved in controlling elections?

Yes... in an authoritarian country that doesn't have free or fair elections.

In a democracy that kind of responsibility is distributed. Sure, the government has a role, but so does everyone else. That includes both you, me, Facebook, etc. It doesn't work to unload the responsibility on the government and wash your hands of it.


Protecting your currency against fraudulent activities was the responsibility of the government too, until they put the responsibility onto the credit card/processor companies.

So many things that should be run by a government agency are being run by companies themselves now. It's only a few more decades until governments are supplanted by corporations.


> Why is a private company involved in this?

Because capitalism.

> Shouldn't this be the responsibility of a government agency?

If it was the responsibility of a government agency in the US, private companies would still be involved, both in executing the government functions and in compliance. See literally every other government responsibility for examples.

And government agencies are involved in the U.S., though given the social harms Facebook has historically been used for, that it might now engage in prevention beyond what it is compelled to by government action is, in and of itself, not a bad thing.


Because if it protects elections well enough, it gains tremendous political power that it can later profit from.

What politician will propose legislation hurting FB when FB is the gatekeeper to a fair election? The FB election protection group can just slip for a while, they have so much to do, and foreign actors spread fake news that messes up the politician's re-election. Just letting fake news drop the politician's polling numbers for a few weeks will rattle them enough. What a shame, couldn't be helped.

This may sound like a fever dream now.


Because this private company is pumping content directly into the eyeballs of 2 billion people every day. There are all kinds of regulations on private companies about how they can and cannot act with regards to elections, why would Facebook be any different?


The term is loosely defined, and others seem to be biting on the bait of insulting facebook here.

There are different ways the election process can be "attacked". Voter fraud is individual voters/real people committing fraud by, say, voting multiple times. Voting system fraud would be someone hacking the machines or ballot stuffing. There is gerrymandering, which is its own type of attack on the system.

What Facebook is probably referencing, and what the Russia/Trump investigation was largely about, was disinformation/misinformation campaigns intended to produce a particular result in an election. While government can play a role in protecting people from this by debunking objectively false data and "hitting back" at state-sponsored campaigns, it shouldn't be shocking that the larger social media platforms have the insight and capability to detect and rebut dis/misinformation on their sites.


I think the answer at this point is a resounding yes.


Perhaps the argument you would like best is that it's a form of self-regulation designed to prevent the need for actual regulation via laws that would be more expensive:

https://en.wikipedia.org/wiki/Industry_self-regulation


But hasn't Zuckerberg asked to be regulated?

I think Facebook would like to have regulations imposed on the social media industry to further entrench its position and create a regulatory moat against new entrants.

Network effects are good, but even better when coupled with regulatory burdens.


Here's a shorthand way of knowing when a bad actor got caught:

They say "taken out of context, leading to our work being mischaracterized" or words very similar.


This statement seems overly broad.

I suspect you would object if someone took your words out of context, for example. Would that therefore mean you are a bad actor?


I salute the simple wisdom. It's just that today's media indeed likes clickbaity misquotes taken out of context.


But good actors can also have words/actions taken out of context, no?


To everyone jumping on this: Yes, good actors can have words taken out of context.

FFRefresh had an example of, not so much "taking out of context" as "selective reframing." An entire document or posting pretty much IS "the context." A few words out of a sentence, coupled with writer-provided context, is something entirely different.

I think the giveaway is when they say something is "untrue and taken out of context" without providing any further details. It's kinda like the lawyer's boilerplate statement "the allegations are without merit and we look forward to vigorously defending ourselves in court." It means nothing.

If FB had said "we have robust debate and the leaked documents show ONE opinion, but that doesn't represent what we ultimately decided." then I wouldn't be so quick to dismiss it.

Lastly, I've had personal experience with journalists, and most of them really are honorable people. The scum at BuzzFeed or Gawker or CNN who only care about advancing their agenda, rather than reporting the news, are the ones dragging down the reputation of the media.


AlbertCory said recently, regarding a piece on Zuckerberg,

[He] was a "bad actor" who "got caught".


It's funny that, in trying to mischaracterize him, you managed to express the exact thing that Albert said.

Thus proving his point


When someone says that a story "took that out of context", we should always ask "what 'context' would lead us to view that differently?"

Or is it just that they don't like the story?


(this is not a personal attack on you, but merely an attempt to use journalistic tactics (such as taking things out of context) with your words to fit meta-narratives that many news consumers like to read.)

The alleged suspect, AlbertCory, has been associated with internet hacking sites (hackernews.com), which feature many posts from Russian sources. On these hacking sites, AlbertCory has boasted about his knowledge of other 'bad actors', raising suspicions amongst the sources we talked to that he is a member of a Russian operative group.

In a likely recent disinformation campaign, AlbertCory sought to discredit vaccine confidence for Americans by claiming "Fauci has done more damage to "science" than any 100 religious authorities ever did."[1]

AlbertCory has not denied the allegations and declined to comment when we reached out to them.

[1] https://news.ycombinator.com/reply?id=28856201&goto=threads%...


I don't recall when anyone "reached out" to me for comment.


Perhaps my hypothetical news organization sent you an email minutes before posting the hypothetical article, or we sent an email to an address you were unlikely to use, or we called you at a time we knew you wouldn't pick up? Or maybe it went to spam?

All we know is that we did reach out, and you didn't respond, so we stand by the statement in the article.


Such a bummer - was really enjoying the "Subvert Democracy" and "Russian Propaganda Ad Sales" message boards. :(



If you’ll permit the analogy, the headline makes it sound like they installed a fireproof safe and the safe immediately caught fire. Stupid Facebook!

Except it’s more like they installed a fireproof safe and hours later the fire alarm went off.

These two things are quite different :)


There’s a difference between surveillance and sousveillance. Powerful entities should be subject to the latter.

https://en.wikipedia.org/wiki/Sousveillance



If internal news/information is leaking, it should be a trivial exercise to disseminate emails with subtle unique identifiers in them and fire the leakers.
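A minimal sketch of such a "canary trap" (all names and the encoding scheme here are made up for illustration): embed a per-recipient ID into the message as invisible zero-width characters, so a verbatim leak identifies who received that copy.

```python
# Hypothetical canary-trap sketch: encode a recipient ID as invisible
# zero-width characters appended to the message text.
ZW = {"0": "\u200b", "1": "\u200c"}  # zero-width space / zero-width non-joiner
ZW_REV = {v: k for k, v in ZW.items()}

def watermark(text: str, recipient_id: int) -> str:
    """Append the recipient ID as 16 bits of invisible characters."""
    bits = format(recipient_id, "016b")
    return text + "".join(ZW[b] for b in bits)

def extract_id(leaked: str):
    """Recover the ID from any embedded zero-width characters, else None."""
    bits = "".join(ZW_REV[c] for c in leaked if c in ZW_REV)
    return int(bits, 2) if len(bits) == 16 else None

msg = watermark("All-hands: some boards will be restricted.", 42)
# The watermarked copy looks identical to a human reader,
# but a verbatim leak can be traced back to recipient 42.
```

Note the fragility the replies below point out: the mark only survives exact copy-paste. Any paraphrase or retyping strips it, which is why `extract_id` on a summarized leak returns nothing.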


Surely that only works if the leaker supplies exact text, and the leakee publishes exact text; if either step in the chain summarises, then subtle unique identifiers will be lost.


The Intercept probably got Reality Winner nabbed this way; they sent the files to the NSA for comment. Whoops.

https://en.wikipedia.org/wiki/Reality_Winner#Role_of_The_Int...


It sounded like the set of documents was what was used in that case (look for all people who have accessed every single document in the leak, then investigate those people more thoroughly)
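That kind of audit amounts to a simple set intersection over access logs. A sketch, with a made-up log format and names:

```python
# Hypothetical audit sketch: narrow suspects to the employees who
# accessed *every* document in the leaked set.
from functools import reduce

# made-up access log: document id -> set of employees who opened it
access_log = {
    "doc1": {"alice", "bob", "carol", "dave"},
    "doc2": {"alice", "bob", "erin"},
    "doc3": {"alice", "bob", "carol"},
}
leaked_docs = ["doc1", "doc2", "doc3"]

# Intersect the reader sets of all leaked documents.
suspects = reduce(set.intersection, (access_log[d] for d in leaked_docs))
print(sorted(suspects))  # only those who accessed all three documents
```

This is why leaking a large set of documents is so much riskier than leaking one: each additional document shrinks the intersection, until (as in the Winner case) only a handful of people remain to investigate.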


"Both journalists and security experts have suggested that The Intercept's handling of the reporting, which included publishing the documents unredacted and including the printer tracking dots, was used to identify Winner as the leaker."


Yeah it's possible that was it, but also "Through an internal audit, the NSA determined that Winner was one of six workers who had accessed the particular documents on its classified system".

The lesson to take away is that metadata of all kinds is powerful. Even if The Intercept had just provided paraphrases to the NSA it might have still given away Reality Winner's identity.


I mean sure, it's possible that the dots were available and made the investigation much easier, but it sounds like they would have probably caught her regardless. Especially considering her computer had connected to The Intercept. Amazingly bad opsec on her part, not to mention that she didn't remove the tracking dots herself.


Sometimes this firing action causes leakers to be recognized as martyrs, which results in new willing leakers emerging to replace the ones that were fired. The chilling effect of firing is not sufficient to fix all leaks.


The people smart enough to build that are smart enough not to build that.


If you believe that everyone smart values the same things as you, sure.


Counter example: This was used fairly commonly to discover spies in EVE Online.


these whistleblowers are going to turn every company into a surveillance operation

every company will soon adopt mandatory spyware, bossware, keyboard logging etc etc just to prevent the next Haugen

hope it was worth it


No, the people that design, implement, or purchase spyware are going to turn companies into surveillance organizations. It's weird to blame whistleblowers here.


How is that the whistleblower’s fault?

“Damn these pedestrians dying to drunk drivers forcing governments to make drunk driving illegal” what?


It’s their fault if the alleged behaviour isn’t illegal enough.

No one’s going to blame a whistleblower who uncovers someone harming children.

It becomes much more of a gray area if the whistleblower uncovers someone trying to find out what effects of any kind, positive or negative, harmful, long-term, short-term or otherwise, their product might be having.

Is a leak worthy if no one is arrested or taken to court, and all we get to do is tut-tut a little more sanctimoniously?


> No one’s going to blame a whistleblower who uncovers someone harming children.

Of course they are. The idea that "no one", even in the vernacular sense of the term, will blame a whistleblower for literally anything is naive; there are too many people who are terminally incapable of faulting a C corporation for acting against the body politic.

> It becomes much more of a gray area if the whistleblower uncovers someone trying to find out what effects, positive and negative if any kind, harmful, long-term, short-term or otherwise, that their product might be having.

Not when those people doing that investigation find something they don't like and bury it because it's inconvenient for their bottom line. Which Facebook did. It's not a gray area at all. Not in the slightest.

> Is a leak worthy if no one is arrested or taken to court, and all we get to do is tut-tut a little more sanctimoniously?

Yes, it is an unmistakable good to have the operations of megacorporations with more effective power than many governments as transparent as possible. People matter more than companies, and the behaviors of those companies are fundamentally relevant to the behavior of people in their polities.

A company at Facebook's scale (or Apple's, etc.) should have no secrets not directly related to user or employee privacy. If they don't want to have eyes on them, don't be megacorporations. We've seen far too many diseconomies of scale with regards to fundamental and foundational morality to assume anything else.


That’s pretty common in the industry already. Don’t assume you have any privacy when using company devices.


Not every company. I would never run my company with sociopath-ware and will proudly tell every candidate that we actually trust them, unlike FB, Google, Apple...

This will have the effect of weakening these paranoid, power-tripping companies, as high end employees flee to companies that respect them and their time.


Can you think of anything that can't be misused? E.g. if a kid eats enough salad they're going to get fat. Does that mean we should "blow the whistle" on lettuce producers and implement dramatic regulations? Or should we expect parents to be parents and monitor and regulate their child's eating habits? Or in this case, monitor and regulate their child's facebook/instagram use?

(Also, we should try and recognize that not agreeing with someone and leaking their internal communications about something that is fully legal, is not whistleblowing.)


It's in Facebook's interest to increase engagement, even if it destroys people's lives.

A morbidly obese depressed kid is much more valuable to Facebook than an active and mentally stable kid.

We should realize that and stop pretending that Facebook is anything but a leech on our mental health.

I realized that a while ago and had to literally tell Facebook recruiters to GF themselves.



