A “Silicon Valley” actor is terrified by what’s happening in Silicon Valley (qz.com)
123 points by vogon_laureate on Nov 2, 2017 | 100 comments


In the vein of Kumail's critique, if it's useful to anyone, I wrote some thoughts about software engineering and ethics from my own experiences:

A serial tech entrepreneur in Silicon Valley once asked me to design a “social stockade” for his financial services customers. It would lock people out of their social media accounts and tweet out/FB share to their friends when they hadn’t paid a loan. He pitched it to prospective employees as meaningful work that would reduce the cost of loans for the needy.

I was horrified that his product was being built and that many others would likely take the role I was turning down. And he was hardly the first to pitch his “innovation” as providing only good.

Every software engineer I’ve worked with has had a strong sense of personal values and ethics, but the organizations we work for can take actions that are at odds with these. I’d like to highlight a few of the key challenges you’ll face and provide feedback for living your personal values. Most importantly, it’s critical that you think about the impact of your work and consciously set your personal values in advance of inevitable future challenges.

(Always available to discuss if valuable to anyone, feel free to email me encrypted messages, see my HN profile)

https://www.nemil.com/musings/software-engineers-and-ethics....


>Every software engineer I’ve worked with has had a strong sense of personal values and ethics, but the organizations we work for can take actions that are at odds with these. I’d like to highlight a few of the key challenges you’ll face and provide feedback for living your personal values. Most importantly, it’s critical that you think about the impact of your work and consciously set your personal values in advance of inevitable future challenges.

I'd agree with this assessment of developers. However, I've run into these exact moral dilemmas myself in the course of working for various companies doing web programming. How did I resolve them? By making the decision that affected my own bottom line favorably. Looked at in isolation, it's impossible for a single developer to say no to the incentives of doing "evil" work. It won't be until the field of software engineering really matures and develops standards bodies like other professions that engineers will have some sort of protection. Physicians can refuse to perform a procedure they're told to in the name of protecting their medical license. A software engineer should likewise be held to the same standard.


> I'd agree with this assessment of developers. However, I've run into these exact moral dilemmas myself in the course of working for various companies doing web programming. How did I resolve them? By making the decision that affected my own bottom line favorably.

That would be _not_ having a strong sense of personal values and ethics. If your good morals always lose out to the promise of more money, that's you not having good morals. Just own up to it instead of pretending the real culprit here is a lack of governing bodies that should be protecting you from all the evil money out there.


You cannot build the world on the strength of individuals' moral spines, because a) there are plenty of people with weak ones who will happily outcompete individuals refusing unethical work, and, more importantly, b) people respond to incentives. For instance, we have the law and the police even though people in general seem naturally good: they exist to counterbalance incentives that make people do bad things. Saying "just own up to it instead of pretending the real culprit here is a lack of governing bodies that should be protecting you from all the evil money out there" ignores the most basic fact about humans: people do respond to incentives.

Moral dilemmas don't look like "do I accept this high salary for doing evil, or prefer a slightly smaller salary for not doing evil." A moral dilemma is what a hypothetical Jane faces after one of her parents has died and the other is unemployed, when she has to provide for the parent and her younger siblings while also trying to start her own life with her fiancée. For her, walking away from a high salary that is entirely eaten up keeping two families living in stability and with dignity is not a trivial problem. The point of having the support of governing bodies is to turn such life-quality-threatening situations into easy choices, so that people don't get severely punished for doing the right thing.

Also: ethics can be, and is, used to abuse people. In Poland, we have this situation with paramedics, nurses, and doctors in residency, who are severely overworked (to the point that it's a serious health threat and causes the occasional suicide), severely underpaid, and on top of that told by everyone that they can't protest because that would mean providing worse care for patients, which is obviously unethical.


This is a great post. However, it seems also that it is impossible to build the world based solely on official policy and law. We need people to continually make moral decisions in their own lives, to the best of their ability. This involves making sacrifices and ignoring incentives at times.

How do we find this balance? I'm not sure. I think we may always be searching for it.


This is where abstractions like religion come into play. Unfortunately it's taking a while for them to modernize and adjust to the point where they're culturally relevant again, so we can stop socially shaming people who participate in religious groups/organizations. But I do believe we as a society have, of late, over-zealously crusaded against the religious archetype in the rather naive stance that science will lead the way...


I agree. What I mean is recognition of human nature. We do respond to incentives, there's no working around that (at least for now; maybe we'll have some mind-engineering tech in the future). We want people to do good, and to achieve this we need to both strengthen individual sense of morality and shape the incentives and engineer systems that protect from pressure - because we recognize no human is immune to incentives.

Will we find the right balance? I don't know, but I certainly hope so. The balance we have now is dynamic: people do bad things, but not enough of them to cause a societal collapse. Incentives are recursive, too; institutions are formed not out of abstract thinking, but because enough bad things were happening that some people had a powerful incentive to stop them.


Fully agree, I just wanted to stress both aspects. I do think the systemic and incentive based side of things is more difficult to understand, so it's good to articulate those concepts. Either way, I think it's important to keep having these conversations.


It is possible to say no to the incentives of doing evil work; I would describe those who say no as the people with "a strong sense of personal values and ethics". This phrase has no meaning if you apply it to people who routinely violate their own beliefs in order to make more money, especially in a highly-compensated profession with a booming job market and plenty of available positions that don't require unethical behavior.


Every job I've had wanted to work me to the bone. I'd violate my personal ethics for stress-free work that'd pay my mortgage.


That's fine, but ultimately how can you look down on people like Trump, Ajit Pai, or the CEO of Comcast? They're sacrificing ethical decisions for money, same as you.

Only difference is they're getting "fuck you" money and you're barely getting enough to buy a house. Kinda lame to sell out for so little IMO.

As long as you don't complain about these types of people, then whatever; but if you do, you need to take a harder look in the mirror!


Ah, I get it. The "my leaders should have more integrity than me" mentality. Well guess what? One day you may be a leader...and you'll still have your crap, anti-social mentality.


Nuclear missiles, land mines, biological weapons, torture research, etc. have all been developed by willing and often motivated engineers or doctors.

I'm skeptical something that hasn't worked in other fields is going to fix our woes in software.


Probably because it was interesting and cutting edge, and so they rationalized it to themselves, like humans are so good at doing.


As a profession, we need a set of ethical standards.

I tell companies I "Won't build products that hurt people."

You wouldn't be surprised how many doors this closes.


> Every software engineer I’ve worked with has had a strong sense of personal values and ethics, but the organizations we work for can take actions that are at odds with these.

Do they, though? I've noticed a trend where the strength of some developers' beliefs is proportional to how unethical the products they're paid to build are.

This makes me suspect that for many people, the ethical stance they take is a coping mechanism for excusing their own actions with the Yuppie Nuremberg Defense.


This sounds like the new government policy being introduced in China to publicly shame loan defaulters:

http://fortune.com/2017/10/11/china-debt-evaders-loans-black...


I am curious as to what your issue is with the loan setup. In my mind it is an alternative to a traditional loan; how is that bad?


It felt like a modern-day scarlet letter, instead of an innovative new way to deliver cheaper loans. There are other techniques to deliver cheaper loans which don't lead to social ostracism (and which don't violate any social network's terms of service).


Because you're tricking poor investors into thinking the returns would be better. "Most" people that don't pay usually don't pay because they can't pay. And people that don't live up to their obligations but can pay will probably sue you for your predatory lending and cruel loan terms.


It is bad for the same reason that 100% interest loans are bad.

They are predatory toward poor people, and they trick people into accepting something when they don't really understand, or "believe," the consequences.

A person may incorrectly accept a loan like this, while being under the mistaken assumption that they aren't at risk of the consequences. Sometimes people just make bad decisions.

It is really immoral to take advantage of someone like that.


Social connections (social capital?) are also how poor people survive everyday setbacks that others would pay their way out of; a “social stockade” would be debilitating.

Debilitating your debtor, IMHO, seems counterproductive for getting a loan repaid. But is the harm it would cause even an unforeseen coincidence? The guy might have heard some of the studies on how crucial social connections are in poverty and gone "aha! Here's something that truly matters to them; it could be wielded as a cheap big stick in loan repayments."


I love Stafford Beer's "The purpose of a system is what it does" here. It separates fact from intention when reasoning about systems. The engineers that Kumail catches off guard are (in my opinion) confused because the issues he raises are so far from their intentions.

https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...


I really like this. Mirrors Wittgenstein's "the meaning of a word is its use" maxim (see, e.g. http://www.maciejratajski.com/work/the-meaning-of-a-word-is-...) in philosophy of language.


I love Wittgenstein and agree with your gist.

Being trite - Wittgenstein also said 'The world is all that is the case'... a precept that a mission to Mars, for example, certainly flexes; of course, as soon as it came to pass, the meaning of 'the world' would reconfigure itself.

To the point though ... another maxim that I often find myself employing is: 'The road to hell is paved with good intentions'.


I think that a better look at this subject is a recent piece in The Atlantic titled, "Are Facebook, Twitter, and Google American Companies?"

https://www.theatlantic.com/technology/archive/2017/11/are-f...

tl;dr, this back-and-forth is unsettling to a Federalist:

>In response to a tough line of questions from Senator Tom Cotton of Arkansas, Twitter’s acting general counsel, Sean Edgett, gave two conflicting answers within a couple of minutes. Cotton pressed Edgett on Twitter’s decision to cut off the CIA’s access to alerts derived from the Twitter-data fire hose, which is provided through a company it partially owns, Dataminr, while the companies reportedly still allowed the Russian media outlet RT to continue using the service for some time.

>“Do you see an equivalency between the Central Intelligence Agency and the Russian intelligence services?,” Cotton asked.

>“We’re not offering our service for surveillance to any government,” Edgett responded.

>“So you will apply the same policy to our intelligence community that you’d apply to an adversary’s intelligence services?,” Cotton asked again.

>“As a global company, we have to apply our policies consistently,” Edgett replied. “We’re trying to be unbiased around the world.”

>Cotton then turned to WikiLeaks, which the Intelligence Committee has designated as a nonstate hostile intelligence agency, asking why it had been operating “uninhibited” on Twitter.

>“Is it bias to side with America over our adversaries?,” Cotton demanded.

>“We’re trying to be unbiased around the world,” Edgett said. “We’re obviously an American company and care deeply about the issues we’re talking about today, but as it relates to WikiLeaks or other accounts like it, we make sure they are in compliance with our policies just like every other account.”

I guess it is kind of cyberpunk-y. All we need now is for US Marshals to lay siege to some SF headquarters, facing contracted PMCs denying the validity of their warrants.


But Kumail's point isn't about whether these companies are patriotic. He's saying they lack any ethical framework of operation, and he presumes it's because people aren't challenging each other with ethical questions, preferring to condone any behavior in favor of flexing engineering muscles and proving that, "yes, they can."

I don't think it's unrelated, and perhaps your point is that it's doubly clear no one has thought about ethics when you examine Sean's contradictory responses. But I think the lack of patriotism is more of an example.

Anyway, at the end of the day we're all people, and we are the ones who depend on these companies and services. If you don't believe a company is behaving ethically, stop using its services. Make a damn sacrifice and live your virtue. Sure, maybe we could argue that in certain cases some companies have crossed the line and are harmfully exploiting customers to the extent that there should be legal ramifications, but the government does not exist to provide you an ethically sterile society. It exists to maintain and protect basic human rights - at least in the US, as far as the formal documentation is concerned.

As far as I can gather from armchair political philosophy over drinks, most people don't actually care about privacy and rank convenience or safety (at the expense of said privacy) higher in their priorities. I find that unfortunate, but at the same time I still haven't worked up the courage to totally abandon my Facebook account and switch completely over to my ProtonMail accounts. So it's complicated.

If we could all agree that privacy is a fundamental human right (which you would think would be supported by the Constitution in the US), then we should actually act and rally to legislate it, as other countries have done. Yet at the same time, because these companies operate globally, they serve a diverse spectrum of users who may or may not share the same political and social values as we do in the US. This extends to the very employees who are "failing to ask the ethical questions" that Kumail can so obviously see are overlooked. This is globalization.

TL;DR: they do this because they can, and they can because you let them.


>TL;DR: they do this because they can, and they can because you let them.

You may be passionate and well informed enough to have the strength of conviction to take appropriate action, but most people are not.

Nothing will change unless you can affect the behavior of others. You will be fought every step of the way: for the company, this is a fight for survival.

>they can because you let them.

What does this statement reveal about your internal dialogue? You can't expect people who don't understand or who aren't involved to take action. You are delegating your responsibility. Perhaps it is good for your ego, but I can tell you how it turns out.

A more appropriate statement would be, "because I let them."


I'm not trying to deflect personal responsibility. I fortunately have not been in a position where I've had to make an ethical decision that would contradict the wishes of my employer (nor do I work for an employer in the realm of data-hungry social media/advertisement), but I have come close WRT intellectual property and software patents.

My comment, which admittedly digresses slightly at the end, is more about the fact that I find it hard for a company or a group of people to agree on _any_ sort of ethical framework these days, when we have such a wide array of cultural backgrounds. I may not like giving up privacy in exchange for auto-gifs of my photo albums, but plenty of people love it, and is it even my place to tell you how you should feel about it (different from telling you how I feel about it and why)?

I and other individuals and digital rights advocacy groups do have conversations about the fallout of letting companies run around unchecked with all the data, and I support those individuals in arguments and the organizations financially (and, as I mentioned, try to lead by example). My takeaway from those conversations, however, is that privacy is not as much of a concern to most people as you or I or Kumail might agree it is right here, right now.

I don't see how I can have more stake in the game short of shouting louder or deciding to pivot my personal career into public advocacy or civil service... Am I really responsible for indoctrinating others into my own privacy-conscious worldview? At the end of the day you have to educate yourself, form your own worldview, and live out a set of virtues. If you don't do that, or choose a different worldview or set of virtues than mine, is that my fault? And if a group of individuals who don't care about the same virtues as I do are at the helm of a company, is it a failure on my part to have prevented that situation from happening? Maybe I'm not 100% clear on your point.


Yes, we're on the same page. It's great that you're supporting organizations and people with this.

>deciding to pivot my personal career into public advocacy or civil service

There have been worse ideas. It's worth fighting. :)

Organizations will engage in behavior if the expected risk-adjusted payoff is sufficiently large, from eroding privacy to cheating emissions tests. Unless we can change that equation we should expect to lose in the long run. This behavior is emergent, based on laws of power and organizational structure.

Organizations stand to win far more than lose, and a victory for us does not provide the same millions of dollars. They get stronger as we get weaker, and they use that strength to win people over to their position.


The social media companies (FB and Twitter in particular) seem to care nothing about the quality of their service; otherwise they would be working to keep fake news and astroturfing campaigns off their platforms. I find that weird, or at least telling about how they feel about their relationship with users of the platform (yes, I know 'the user is the product'). The lack of ethics need not be narrowed to a lack of patriotism, as the lack of ethics is a problem globally for these companies. A sick individual could seriously fuck up any community through disinformation, given enough time and resources. FB and Twitter are fully engaged in enabling advertisers and care nothing about the messaging, content, or end result of any campaign as long as they get a hit off the impact.


I think user engagement is much more important to these companies than any ethical qualm about truth and fairness. It was pretty obvious when Facebook killed its human news editor program because they kept spiking viral but fake news stories from sites like Infowars and Breitbart and conservative media outlets complained about bias against them.


This is the second time that I've read that WikiLeaks is classified as a "hostile intelligence agency". Isn't the purpose of intelligence to gather information to gain advantage over adversaries? It sounds to me like a redundant term, like "hostile military". And BTW, WikiLeaks' leaks are public, so all other intelligence agencies and citizens can learn from them, while state agencies try to keep things as obscure as possible. Who's hostile?


If an American spy plane showed up over the Soviet Union, the plane would get shot at. If an American spy plane showed up over France, they wouldn't get shot at.

Of course there's an understanding that intelligence agencies are all ultimately looking at everything, but there's a degree of cooperation. Even in the current state of affairs Russian and American intel agencies work together on anti-terrorism stuff.

However, if the other group is clearly and actively working against you, then it's harder to work with them. It's safe to say at this point, given the evidence out there, that WikiLeaks does work with an objective of embarrassing American interests.


> WikiLeaks does work with an objective of embarrassing American interests.

I think it's a contrarian media organization. I recall when conservatives were all "kill this guy" during the Bush administration; now they think he's the rescuer of Western Civilization.

If you have any belief in 'checks and balances' as an anti-corruption mechanism then this is a positive.

Who could regulate the 4th Estate, the academies? Not the government, they would be accused of corruption or despotism. Enter WL.

Think about it: who is "the establishment"? It can't be the President/White House, can it, not when the majority of pundits, journalists, and intellectuals were against him.


The "hostile" framing device is a fairly naked attempt to build an ongoing narrative.

I doubt it needs pointing out that the reason WikiLeaks exists is because many (most? all?) organs of "public" service are anything but.


It doesn't seem that hard to understand? In the context of a U.S. Congressional hearing, "hostile" probably means "hostile to the U.S."


Good point! But depending on your interests/POV, you could consider the NSA "hostile to the U.S." given they're spying on their own citizens.


Wikileaks leaks are selective. "Public" only describes the things they release. The hostility is in the choice of releases.


They only publish true stuff, yes, but that's not normally what one means by 'selective'. They're not a hacking organization, they don't go around hacking people to obtain stuff. And yes, someone gave them some Russian stuff not too long ago.


There's no denying they promote a certain narrative in their PR, but are there known examples of leaks withheld by WikiLeaks, where the whistleblower had to go elsewhere?


Here is one example, where they refused to publish leaks from/about Russia:

http://foreignpolicy.com/2017/08/17/wikileaks-turned-down-le...

I recall another case when Wikileaks itself advertised a Russia leak and then never released anything.


"Hostile" as in "currently harmful to my political party".


>I guess it is kind of cyberpunk-y. All we need now is for US Marshals to lay siege to some SF headquarters, facing contracted PMCs denying the validity of their warrants.

I think the turning point was the SOPA/PIPA thing. After that event, it felt like the tech industry collectively figured out that, just through sheer technical dominance, they actually wield tons of practical political power. Possibly more than the savviest Washington insiders.


I hate wikileaks, but I honestly don't see the contradiction in Twitter's behavior on this point.

Perhaps they were naive on whether RT was actually an aperture of Russian intelligence services. OK. Their KYC game could use some upping. OTOH, would anyone be surprised if the CIA were paying some little hedge fund somewhere to let them piggyback on access to Twitter's feed?

As far as Wikileaks' uninhibited operation goes, as far as I can tell, the CIA has not had its account banned from Twitter, just its firehose access. Does Wikileaks have access to that? What would they do with it if they did?

I mean it just seems like the line of questioning is, "Hey is it fair that you let them do X and won't let us do Y?" Maybe I can't see it because my silicon valley ass has no ethics.


The core issue is the activity focused on influencing a US election, which foreign nationals and foreign corporations are prohibited from doing [1].

If Twitter cut off the CIA from firehose access and then tried to sell advertising and analytics services to RT for a US election, then that seems like some poorly constructed PR staging by Twitter.

1. https://www.washingtonpost.com/news/post-politics/wp/2017/09...


"I guess it is kind of cyberpunk-y. All we need now is for US Marshals to lay siege to some SF headquarters, facing contracted PMCs denying the validity of their warrants."

More surreal things have happened...

https://en.wikipedia.org/wiki/Steve_Jackson_Games,_Inc._v._U...


>No compensatory damages were awarded. The judge said that Steve Jackson had little involvement in SJG at the time of the raid, and the company was close to Chapter 11 bankruptcy, and that Jackson's renewed involvement in the wake of the raid had turned the company's fortunes around.

Ah, the 'pockets full of fish' defense. "While my client did shove the claimant into a lake, they came out with their pockets full of fish to a net benefit. Therefore, my client is innocent."


I think this was all trash talk, nothing but an attempt to intimidate. If companies are following the letter of the law, then it doesn't matter who thinks/interprets what, and they'd better know they would be challenged in a court of law.


That's a pretty ridiculous conversation. No wonder the Twitter guy was confused. Neither Wikileaks nor RT are intelligence agencies. The Intelligence Committee pretending they are doesn't change that.


Corporations only worry about ethics if they think it'll affect their reputation or bottom line.

Even then many of them are happy as long as there's public perception that they're acting ethically, even if in reality they're not.


Corporations, yes. But employees of those corporations, less so. As a developer making <insert ethically dubious thing here>, you could have qualms about it.


Unfortunately, many engineers like to pretend their work is apolitical, but there is "No Neutral Ground in a Burning World"[1]. Everything is political, eventually; technology patently has political consequences that are large in scope, highly complex, and difficult to understand fully.

When you are implementing a new technology, please take some time to actively consider all of the potential consequences you can think of. Maybe your new tech just needs simple safety-considerations similar to power tools (dangerous, if misused). Maybe your new tech will disrupt existing structures that many people rely on; do you have a plan to handle that? Can someone easily use your new tech to undermine civil rights or basic freedoms?

Obviously we can't foresee everything, but we can at least try to consider the potential consequences to the world, society, and people outside of tech. I know it's hard to walk away from a cool idea. It's even harder to do the right thing when it risks your salary - or your safety. For better or worse, this kind of question needs to be part of the new-tech development process. As Dan Geer said[4][5] in a recent keynote, "You have not picked a career. You have picked a crusade."

[1] I strongly encourage everyone to watch this 30c3 talk[2] by Quinn Norton and Eleanor Saitta (or read the transcript[3]).

[2] https://media.ccc.de/v/30C3_-_5491_-_en_-_saal_1_-_201312272...

[3] http://opentranscripts.org/transcript/no-neutral-ground-burn...

[4] http://geer.tinho.net/geer.source.27iv17.txt

[5] https://www.youtube.com/watch?v=hcIiD4UUDE8


The most overtly political act most software engineers are likely to make is to choose GPL over BSD or vice-versa.

Walk away from a cool idea for ethical reasons? Risk their salary or their safety? I'm sure such engineers must exist, but they're not among any I've ever known.

Unfortunately, I am rather pessimistic. We're in this Orwellian/Huxleyan dystopia for a reason.


> they're not among any I've ever known

You just met one. In the mid 90s, when I was writing firmware and drivers for HIPPI[1] networking equipment, the Malaysian government bought a bunch of our hardware (indirectly), as part of a big system that was supposed to receive a bunch of TV channels from satellite, buffer them for a couple of minutes on disk, and then re-broadcast the streams on local TV channels. The buffer, of course, allowed the "live" TV streams to be censored on the fly.

This was before flash was cheap and ubiquitous, so the firmware for the FPGAs was on a socketed EEPROM. Changing the firmware required removing the chip and a computer with PROM programmer hardware. Bugs happened[2], and my boss decided I needed to make "our biggest customer" happy by flying to Malaysia with an --undeclared-- bag of EEPROMs with better firmware and personally updating their hardware.

It was my first "real" job, but I was still 3 more years of classes away from my bachelor's, so I was getting crap intern wages of $11/hr. When I started to suggest I didn't really want to smuggle chips into a notoriously authoritarian state to help them censor their TV, my boss immediately offered me a 6-figure salary and a $60,000 cash bonus. I was seriously tempted. Instead, I gave them my 2-weeks notice the next day...

(side note: a few years earlier, the NSA offered me a full 4-year scholarship (inc. housing and cash stipend), but the "you just need to work for us for 150% time (6-years) after you graduate" price tag made that offer a lot easier to turn down)

[1] https://hsi.web.cern.ch/HSI/hippi/procintf/pcihippi/pcihipde...

[2] actually, it turns out our PCI interface chip was faulty[3]

[3] https://news.ycombinator.com/item?id=10609033


Bravo to you for sticking with your principles. It's interesting how he was willing to keep paying you intern wages until you expressed concern.


That sounds very interesting! If you were working with their government, why did you need to smuggle them in?


We - as manufacturers of HIPPI NICs and crossbar switches - were technically just supplying OEM parts to a Canadian company. They were the people trying to sell a giant turn-key censorship solution to the Malaysian government. I believe we were simply trying to "help our biggest customer" with an IBM-style field service tech.

Also, this was a rush fix. Flying me on the next flight was supposedly faster than the usual shipping/courier services? At least that's what the boss believed? I never got the full explanation, but I suspect that the Canadian company was ~30-50 hours away from losing the entire deal, which was supposed to be our main (only?) source of revenue until we could ship an important new product[4] a few months later. With the benefit of hindsight, instead of a rational, sound plan, the whole situation smells more like a panic about losing VC funding.

[4] We had re-purposed 8 & 16 port crossbar switches where each port could be (mix-n-match): HIPPI, FC, ATM, 2xFDDI, 2xSCSI3 (!), 1000BASE-T, or 8x(100BASE-TX/FX) with arbitrary layer-3 routing and arbitrary tunneling. Two switches with SCSI and ATM ports did something similar to FCoE/iSCSI... in 1996. RAID 1 over 10km of fiber was awesome. Except it never shipped, because everyone was focused on our broken NIC. --sigh--


Hopefully you can spread discussion about ethics enough to where engineers can make these decisions even before joining the companies.

I imagine many of us would _not_ want to work at a multi-level marketing outfit, considering how they mostly all work through exploitation. By having these discussions we can make sure that companies with dubious ethics will have an even harder time recruiting.

There was a story a couple years back about a guy who wanted to form a "white supremacist village" in the countryside, and couldn't get anybody to join him. You talk enough about something, and it does have effects eventually (even if only temporary).


We exist. It's not a thriving lifestyle.


As an owner of a software development business, I've ejected my fair share of clients who have asked me to do ethically questionable things. But I wonder if a line employee who has to make rent every week and put food on the table can make the call.

I don't know - I've always made the call myself and informed my employees, so I've never had an employee come to me with concerns about working with a particular client or on a particular project.


> I've ejected my fair share of clients

Thank you!

> a line employee who has to make rent every week

Some sort of software development guild or professional association could help. Like a PE's duty to disobey their boss when a project isn't safe, explicitly defined external obligations can give the employee some power (and organizational support).


Professional associations do have codes of ethics and ethics-focused committees and societies, but they don't seem to do a very good job of publicizing them, because I see comments like yours here with some regularity. Heck, I have been a member of IEEE since 1997 and yet I only learned of their Society on the Social Implications of Technology a few years ago. The Order of the Engineer, which is entirely focused on professional ethics, seems to be equally unknown.


Except that when things are convenient and we're benefitting from them, it's easy to explain away the negative consequences that befall others.

Think of all the engineering effort to create dark patterns or make it easier for people to buy stuff. I won't work on ads, but a lot of other engineers are okay with it, especially if the money is good.


All it takes for a 22 year old to write invasive ad tracking software for FB/Google is ~150k combined compensation. Also a software developer wrote this [1] at some point...

Why do software developers have a presumed sense of moral superiority? Because some guy named Linus distributed an operating system for free? We're for hire just like everybody else.

[1]: https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootk...


It's scary (Stanford prison experiment-like) how easy it is to convince a regular joe employee that what he's working on is "for the betterment of the world", when that statement is very, very fuzzy to say the least, and (as recent events have shown) one man's Utopia is another man's surveillance state. We put waaay too much emphasis on "world changing ideas" while completely ignoring their real-life implications, as the article rightfully points out.

Reminds me of the statement that Noam Chomsky ended his recent talk at Google with (which I consider one of the nicest "burn" moments ever):

Interviewer: It's not every day that a non-Googler gets to sit in a room full of people who work at Google, and are s/w engineers, and are advertising experts, and are market experts in different fields. Do you have anything that you'd like to ask us? Chomsky: <shrugs> Why not do some of the serious things?

https://youtu.be/2C-zWrhFqpM?t=59m16s


This just in: actor discovers corporations don't have a soul.

Why don't you come over here to Europe? We have strong data protection laws. Oh, btw, you'll have to cut your salary by 50%, because no Silicon Valley, and because strong data protection laws. How's that sound to you?


Most data protection laws just force ineffective bureaucratic processes on companies in the name of "doing something", usually in the form of outdated checklists. Meanwhile intelligence agencies, hackers, and ad companies are still consuming vast quantities of data uninhibited.

It really does largely just slow companies down and forces gov agencies, doctors, banks, and other critical industries to use old insecure software and inefficient corporate processes. RFP'ing a new system is suddenly 2x the cost.

I'm all for strong privacy and property rights at the consumer level; plenty of law in this area in most countries is very outdated and from another era. But the tendency of European and other modern countries (US/UK/Canada/etc) is to go well beyond the courts and intervene directly in company operations.

I've never heard anyone praise the fact they followed gov-mandated checklists as for why they prevented a hack.

It's always keeping up to date with the latest industry best practices and caring because there are real costs. Strengthened courts and a caring media will create real costs and incentivized best practices.


The heart of the problem is that popular communication networks are centralized and unencrypted.


Good protocols don't cure bad ethics.

Mind: I would like to see decentralisation and better encryption protections, generally, but those by themselves won't solve the problem (you still need to address ethics, rights, and power imbalances), and still leave certain of present attacks in place.

David Gerard, author of Attack of the 50 Foot Blockchain, makes the exceedingly good point in an FT interview, that Bitcoin and Blockchain themselves are attempts to tech around the trust problem, and that this raises challenges because trust itself buys you an immense amount of efficiency.

https://www.ft.com/content/61cdc5c8-370e-11e7-bce4-9023f8c0f...

I'm starting to come to the view that information technology itself directly *attacks* interpersonal trust, at the macro scale. I may not be articulating the argument well at this point, though I've tried:

https://www.reddit.com/r/dredmorbius/comments/6jqakv/communi...


Not sure if the technology itself attacks interpersonal trust - but I do see a lot of work in tech being done exactly towards that. In particular, crypto folks from the blockchain sphere do their damnedest to invent ways in which you don't have to trust anybody to do things. I do not feel it's a good idea, at least not for basing society and its infrastructure on that.

The core of my argument would be: trust is something we're good at, as humans. It's efficient, and it's flexible. By flexibility I mean that it handles new/weird situations well. Trustless solutions, on the other hand, burn a lot of compute to replace the need to trust another person, and they do not handle corner cases in any way.

A similar thing is bureaucracy - it exists as a way to reduce dependence on trust, at a serious cost in efficiency. Frankly, I'd argue that today's bureaucracies are so inefficient that without hidden trust acting as grease on the wheels, they'd grind to a complete halt. They're also inflexible, and they're usually only saved - again - through trust that leaves room for individual bureaucrats to "cheat the system" here and there.

All in all, I feel the problem is with the scale of societies we're trying to create. When it was just 100 people in your village tribe, everyone knew everyone else, and trust-based society just worked. As we grew our societies by many orders of magnitude, we lost the interpersonal mechanisms of establishing and maintaining trust (which includes punishing actors for breaking it). And it seems to me that instead of trying to find new mechanisms for enabling trust at scale, we're just giving up on the whole idea and replacing it with burning physical resources. This, I feel, is the wrong way to go.

(I'm probably not articulating the argument well either.)


Take a look at the Reddit item I linked, it's my own (uncharacteristically brief, you're welcome) essay.

I've been digging into the literature on this further, and ... there's some general support, though the overall picture's somewhat mixed.

My starting observation was that, while the ability for two parties to voluntarily communicate might increase their trust, the ability for a party (or parties) to unilaterally surveil others, particularly at scale, is almost certainly corrosive to trust. Issues such as comprehensive systems surveillance (user logging and monitoring on computers), cameras, microphones, etc., come to mind.

In reading on the topic, there seems to be a great deal in sociology, particularly Durkheim and Weber, addressing the point. I'd observed that every major empire has had a strong religious component (with the possible exception of the Mongol Horde), and that religion didn't start breaking down strongly until ~18th and 19th centuries, under the onslaught of both science and reason, and improved communications.

JoAnne Yates and James Beniger have both written extensively on the nature of communications technologies and practices, especially within commercial contexts. Beniger's The Control Revolution in particular details the evolution of commercial and commodities practices in the Americas starting before the American Revolution. A significant feature, especially prior to the 1830s/1840s (at which point the telegraph provided instant communication, especially of prices), was that movements of goods required agents working at a distance, with considerable independence and autonomy. Beniger describes the counterflows of goods and financial records generated at each transshipment point (each triggering a transfer of goods, a financial transaction, and a trust relationship between originator, buyer, and agents).

I'm not saying that the trust was always well-placed, but that there was no real alternative, other than setting up rules, perhaps some form of non-realtime checks (multiple agents rather than one, say, such that outlier behaviour might be observed), and a very strong reliance on reputation. This is reflected even in the language of business communications, which is based on establishing and maintaining trust -- later streamlining of external and internal communications in the early 20th century removed much of the ornate 19th century language and replaced it with the no-nonsense, strictly-business correspondence of the 20th century. Email, texts, and Tweets have stripped that further still.

Beniger specifically mentions bureaucracy as a technology, by the way.

And yes, scale has a ton to do with this. Another notion I'm looking at is that networks (I'm looking at interpersonal / social, as well as technological) directly affect information flows through size, topology, cost functions, cohort selection, protocols, and more. (Yates appears to be studying a fair bit of this, though I'm not sure to what extent her views are as technical or information-theoretic as mine.)

Bureaucracies are, by the way, generally very efficient, so long as they can be subject to efficiency constraints. They do get hidebound, in that a bureaucracy is a mechanism for formalising information flows (literally "creating forms" -- a major class of business correspondence), and that there's always a problem between a tightly-specified limited grammar and the Real World. (See: things programmers believe about time, time zones, names, places, etc., etc., etc.)

There's also the matter of dealing with information at scale, for which I find powers of ten a useful-if-rough organising concept. Treat these as ranges, with [previous bound] ~< n ~<= [stated upper bound], and yes, with fuzzy edges.

10^0: Full focus. An item.

10^1: Reasonable scope of daily focus. A short list.

10^2: An information-gathering scope, it's possible to be at least generally aware of each item. A long list.

10^3: Stressing the bounds of even skilled individual un-managed awareness. A compilation of lists, a book. Pre-Gutenberg Europe had about 30,000 total books (volumes, not titles), and few libraries exceeded 1,000 total volumes.

10^4 - 10^5: Some sort of organised management system is necessary, can be paper-based. About 300k - 1 million books are published annually ("traditional" and "self-published" respectively).

10^6 - 10^8: A formalised and electronic system is probably necessary at this point. Statistical treatments quite helpful. The largest libraries contain ~24 million volumes.

10^9 - 10^11: Statistical treatments almost certainly necessary.

10^12+: Big data, machine-learning. Multiple levels of abstraction required to humanly comprehend scales. A galaxy's worth of stars. Total number of cells in the human body.

I use books, notes, and index cards considerably. The sense of information scale (and tractability) as I range from 1 card to 10 to 100 to 1,000s is interesting and visceral.
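The rough taxonomy above can be sketched in code. This is a toy illustration only - the thresholds and labels are my paraphrase of the ranges described, not anything empirical:

```python
import math

# Toy mapping from item count to the rough management regime described above.
# Each entry is (max power of ten, label); ranges are fuzzy by design.
SCALE_LABELS = [
    (0, "full focus: a single item"),
    (1, "daily focus: a short list"),
    (2, "general awareness: a long list"),
    (3, "limits of unmanaged awareness: a book"),
    (5, "organised management system, possibly paper-based"),
    (8, "formalised electronic system; statistics helpful"),
    (11, "statistical treatment almost certainly necessary"),
]

def scale_label(n_items: int) -> str:
    """Return the rough management regime for n_items pieces of information."""
    magnitude = 0 if n_items <= 1 else math.ceil(math.log10(n_items))
    for upper, label in SCALE_LABELS:
        if magnitude <= upper:
            return label
    return "big data / machine learning: multiple levels of abstraction"

print(scale_label(30_000))  # roughly pre-Gutenberg Europe's total book count
```

Nothing deep here; it just makes the "ranges with fuzzy edges" framing concrete enough to argue with.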

Now make those people and organisations and businesses and systems, and try to work with them ....

See:

Beniger: http://www.worldcat.org/title/control-revolution-technologic...

Yates: http://www.worldcat.org/title/information-technology-and-org...


> cure bad ethics.

I don't see "bad ethics" as a disease, but a difference of opinion.

Why should I compel others to share my opinion?


How would you make a decentralized, encrypted twitter? How would it be better for society?


It would not be Twitter. Twitter is inherently public. The issue is that people flock to public platforms to share personal data. What we need is the equivalent of Email and GPG, but actually user-friendly.
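To illustrate the point: the encryption primitive itself is trivially small - everything hard about "user-friendly GPG" lives in key exchange and UX, not the math. A toy one-time-pad sketch (illustrative only, absolutely not a production scheme):

```python
import secrets

def encrypt(message: bytes, key: bytes) -> bytes:
    """XOR the message with a random key of at least equal length."""
    assert len(key) >= len(message), "one-time pad key must cover the message"
    return bytes(m ^ k for m, k in zip(message, key))

decrypt = encrypt  # XOR is its own inverse

key = secrets.token_bytes(64)          # the hard part: getting this to the recipient
ciphertext = encrypt(b"meet at the usual place", key)
assert decrypt(ciphertext, key) == b"meet at the usual place"
```

Ten lines of cipher; decades of unsolved work on distributing `key` in a way normal people will actually use.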



I've read the article and most of the comments here. Still not sure what ethically questionable things we're talking about. Is serving content related to your personal interests questionable? Or is collection of personal data for commercial (but benign and legal) purposes questionable?

I can see that selling and wide-scale distribution of personal data can be a problem unless explicit consent is obtained. Can someone clarify who's doing that and for what reason?


Seems like he didn't bother to specify exactly what he found so objectionable. Given that, it's impossible to evaluate his claim that executives were "confused" because "nobody ever asked them the questions". Maybe nobody ever asked them because the questions he was asking were stupid or ridiculous?

After all, it may sound dismissive, but I'm reminded of a quote by Patrick Stewart: "That's the biggest danger, you see: believing that you really are more important than everyone else. We're not, you know. We're just actors." Maybe if Kumail spent his life building things instead of pretending to be someone who builds things, he'd have a different perspective on technology and ethics.


FWIW Kumail has undergraduate degrees in computer science and philosophy. We went to the same college. The comp-sci department and college in general has a strong sense of ethical responsibility. In fact the computer science department makes it a point to have students make an ethical pledge as part of the major, optional of course. Please don't get the idea that because Kumail has chosen a career in acting that he has no business commenting on other industries. Even though I agree we should spend time in others' shoes before judging, I don't think his statements here are unwarranted. Your comment feels a little uninformed and defensive.


Yes, but so what? It's very easy for people who don't actually make things - like actors and academics - people who never face tradeoffs in their life, to get up on their high horses about people who do. If Kumail wants to comment on the ethics of the computing industry he should man up and write an essay in which he lays out his case, and shows why his preferred tradeoffs are better than those chosen by industry. Like anyone else would have to do, if they wanted influence.

Instead we get this clickbaity junk based on a series of tweets, which nobody would have seen or cared about if he wasn't a famous actor.


I don't think it's Kumail's fault how this got published. I got the impression this was a report on comments he made not any intention of an exhaustive essay on the state of computing. I mean I am professionally in the industry so let me say: I agree that computing across the board lacks an ethical framework and when I talk about this I usually am met with similar responses: apathy or ignorance.


I think you get that response because the computing industry has been constantly attacked with very flimsy, weak and agenda-driven accusations of unethical behaviour, for a very long time. So these sorts of accusations have lost their power, there were too many boys crying wolf.

A large part of this was driven by the decline of the newspaper industry. At some point Murdoch decided that Google was evil because of Google News, and the future of news was the iPad. This from a guy who never even uses email. So he gave some speeches and the orders went out and the Murdoch press immediately started attacking Google with very dubious stories, alleging unethical behaviour. I remember this inflection point quite well. For instance the WSJ paid someone to go digging and they found that the behaviour of Safari had changed with respect to third party cookies, in ways that weren't following the specs, and now some old code Google used was setting cookies too widely or something. This regression had of course not been noticed by anyone because everything still worked. So they blew it up into a major drama and claimed it was all an evil conspiracy, instead of a bug in Safari. Meanwhile Apple got glowing praise and a free pass.

It wasn't just Murdoch of course. The whole industry started attacking internet companies and it was all about money. See the "Google Tax" in Germany, Spain etc. So the supposedly unethical behaviour they were trumping up was very often not unethical at all, or only unethical by some totally meaningless redefinition of the word (e.g. all advertising being considered unethical).

So when Kumail tweets - and the purpose of tweeting is to get noticed and spread a message, so I give him no slack for that - and this gets picked up and inflated by the media, my impression of Kumail goes from neutral to bad. Twitter is not a reasonable place to start a debate about the ethics of technology and if he was an engineer and not an actor, he'd know that. But he's just an actor. So why give a shit?


I got into a conversation once that I think frames this well. We had WILDLY different views on whether going to Mars was a good idea (like, violently polar opposite views). The strength of our disagreement surprised me.

I was arguing that we should “race to Mars”, mainly because having a second planet dramatically increases the odds of survival for the human race as a whole. Thus Mars is one of the highest-importance activities we could possibly be focusing on.

My friend was countering that all the rich people escaping to Mars would stop caring about Earth and their fellow citizens, and that this was just about the most abhorrent act he could think of.

He’d rather see everyone die together on Earth than a small group of people live (at least it’d be fair). I’d be happy to sacrifice 95% of the humans alive today as long as some live somewhere (I’m ambivalent about who they are; I presume there’s a formula to be found?).

I think my view is more that of Silicon Valley/entrepreneur/programmer types. These people (me, etc) want the best long term outcome and will take big actions towards that, even knowing it’ll cause some short term pain.

I don’t think ‘regular’ people think like that. They generally care more about the people around them, their own pain, their tribes, their cities, etc. (But not at all about trillions of as yet unborn humans.)

I think it’s easy to label Silicon Valley as “unethical monsters” who’re out for themselves. But assuming we’re talking about the crime of “innovation without regard for effects” (as opposed to ACTUAL rulebreaking, like theft, assault, fraud, which I assume the Valley is no worse for than anywhere else in the world), the intentions of entrepreneurs aren’t evil or unethical; they just care more about the survival of the entire race than about today’s people.

I also think a lot of hackers/builders/entrepreneurs, for all the optimism they have about growth and innovation, are simultaneously very realistic/pessimistic about all the various ways our race is fundamentally screwed, and kinda recognises they need to be more powerful and have more resources in order to do anything about that.

Startups are more about “getting us out of this fine mess we’re in” than money.


There's enough straw in your comment to make an army of men, and almost too much cringe to handle.

For starters, you're putting a mountain of words in Kumail Nanjiani's mouth. It's odd that you must be told this, but when you use quotes to summarize your opposition's arguments, you're supposed to wrap them around words that were actually used. A quick Ctrl+F on that page leaves one having to guess whether you're dishonest, careless, projecting your own insecurities, or all three at once.

Then you go and draw a line in the sand and separate yourself from "regular" people who lack the capacity to see beyond their small and petty concerns. You applaud Silicon Valley entrepreneurs as holy warriors fighting for humanity's "best long term outcome" without providing a single example. You claim that people start businesses not to make money, but to save mankind itself.

All this grandiose talk while you fervently pat yourself (and your kind) on the back, but your only tangible anchors are imagined words and flimsy analogies to humans colonizing Mars.

You are the exact personality that Silicon Valley expertly mocked in its first season.

"We're making the world a better place!"

And for the record, if you managed to get 5% of the human population on Mars, that would be ~350 million, more than the entire population of the United States. If we put that much energy into moving people to a planet that's currently capable of supporting life for zero humans, you'd think we could've built a pretty damn good defense system to knock asteroids off a collision course with Earth.

Why is it that so many people who describe themselves as forward-thinking are more attracted to the idea of terraforming a planet with a poisonous atmosphere than lifting a finger to keep our little oasis in decent condition? I'm not saying the entire human race should rise and fall on a single planet, but don't try to paint your Mars fantasies as an altruistic plan to save all the coarse-minded sheeple from themselves.


It never stops astonishing me how many people claim the ends justify the means without the slightest indication of ever having considered the moral or ethical implications of such a stance. Sacrificing 95% of the population on earth for 5% to survive on mars also shows how fundamentally perverted the mindset is. Yes there are ethically grey areas but it's all about the journey because that's all there is. Anyway thanks for utterly destroying the GP so eloquently.


Tough crowd.


>Startups are more about “getting us out of this fine mess we’re in” than money.

I think this is a very naive view; novel ways to share vapid content on the internet and unnecessarily internet-connected junk are getting nobody out of any sort of mess.

There are a few aspirational startups, but the vast majority follow The Cartman Plan: http://static6.businessinsider.com/image/5457c9ae6bb3f7d33da...


> My friend was countering that all the rich people escaping to Mars made them not care about Earth and their fellow citizens, and was just about the most abhorant act he could think of.

I'd argue that, far from being an escape, migrating from fertile Earth to a sterile planet is a great sacrifice, made as an insurance policy against the (however unlikely) possibility of the extinction of the human species.


Yeah, I don't see how you can put a good spin on saying that you don't care if 95% of humanity dies.

Treating people equally means it only solves 5% of the problem. Probably far less than that. It rounds to zero.


Yeah, I totally should’ve phrased that bit more carefully

The clearer, fuller argument would be: if something terrible were to happen to Earth, it would be objectively better (though obviously still a tragic and nightmarish event) if we only lost 95% of living humans vs losing 100%.

It might also be worth a teensy bit more risk of near-complete wipeout to gain a backup (e.g. the risk of migration to Mars causing economic or political issues back here that cause a terrible war or something). It’d be crazy hard to do the maths on that though.

You would also kind of assume, though, that as long as a tiny percentage lived, their descendants would number far more people than are currently alive within a few thousand years? So even if the worst happened, the ethics would eventually balance out.

Given the amount of really, really bad stuff that could happen to humans on Earth (disease, asteroids, supervolcanoes, sea level change, nuclear war, etc), having 5% safe somewhere else sounds like a great situation to be in. Doubt we’ll get there though.


To be fair, "5% rounds to 0" is still strictly more than the 0 that the other person proposed.


> I’d be happy to sacrifice 95% of the humans alive today as long as some live somewhere

But...that 95% already live here?

What do you think you'd be making that sacrifice for?


I'm with you, but I feel you still have a crush on the startup ecosystem. I used to have it too, ~5 years ago. What I've learned since then is that the kind of idealistic, forward-looking people you seek do not form the majority of startups.

Technology makes shitloads of money now. This attracted all kinds of people - the regular ones just looking to live their lives in comfort; the greedy assholes looking to become rich by scamming (er, advertising to) others or profiting off offloading externalities on the society. There are more of them than idealists. For each "actually help the world" startup, you have 10 "get $$$ through screwing the society" ones, and 50 "build things people will buy" ones.

Also, I feel most idealists have realized by now that startups are not necessarily a good vehicle for change, because by their nature, you trade control for money. Which means that even if you have good intentions and a great long-term idea, your investors may not share it, they need their shorter-term profits, and they just gave you money for control, so you'd better do what they want.

All in all, do seek out people who want to actually help everyone, instead of forever living in the world of tribalism and petty soap-operish nonproblems. But do not think startups are where they gather - startups are just another flavour of the mundane business world, no matter what the copy on their websites says.


>I think my view is more that of Silicon Valley/entrepreneur/programmer types. These people (me, etc) want the best long term outcome and will take big actions towards that, even knowing it’ll cause some short term pain.

>I don’t think ‘regular’ people think like that. They generally care more about the people around them, their own pain, their tribes, their cities, etc. (But not at all about trillions of as yet unborn humans.)

And I think that both of you have a very peculiar taste for bullets, since you like biting them so much when there was otherwise no actual need. We can invent cool technologies, colonize space, and have an egalitarian society. In fact, in my view, those things go together: you can't really get a stable multiplanetary civilization going when people are constantly trying to tear out each other's throats over socioeconomic inequality.


Its hard to believe people actually think this shit.


> These people (me, etc) want the best long term outcome and will take big actions towards that, even knowing it’ll cause some short term pain.

> I don’t think ‘regular’ people think like that.

> (But not at all about trillions of as yet unborn humans.)

> the intentions of entrepreneurs aren’t evil or unethical, they just care more about the survival of the entire race than, today’s people.

> Startups are more about “getting us out of this fine mess were in” than money.

I'm sorry, but too much of this leaves me scratching my head as to whether this is satire or not, and I will explain my position and not just be snarky about it, but truthfully after reading through the post having previously only glossed over it, I find some of the statements just very curious.

I take issue with the statement that SV/Entrepreneur/programmer types just want what is the best long term outcome, because more or less what they want is a long term profitable outcome most of the time. We can see examples of software and services which are produced for a better long term outcome: the Linux Kernel, software like ffmpeg, cURL, the World Wide Web, etc. Not only are the statements from the founders clear on the goals and intentions of the software, but the software and services live up to their declaration of intentions and look to solve a problem in a focused and sustainable way. There's always a lot of talk about "nothing wrong with making a little bit of profit while doing something great", but this is a pretty thin line to walk most of the time - Microsoft, for example, does have some very useful software solutions, but there's no doubt that everything about the procurement and design is meant to lock you into using it without question - it's not software to better the human, it's software to lock you in. Microsoft isn't the only guilty party here, they're just an easy example. When I see startups, when I hear about entrepreneurs and SV programmers, and heck I'll outright say it, when I hear about side-projects on HN, a lot of the time it's not software to enhance or improve life, it's software taking a stab at a share of the market.

Not everything has to be F/OSS; we don't all need to be Stallmanites with regards to our data and privacy, and I'm happy to pay out for software that does what it says on the box. I happily dropped $10 on DaisyDisk for macOS because it does exactly what it says it does without trying to lock me in further; I pay and it's done, no subscriptions, no limited functionality, no restrictions on what I can use with it. It serves its purpose very well. Sublime Text is much of the same, and it's such a good program that I've seen people here wish that new versions would require a new purchase just for another excuse to give the authors more money. The difference between these projects and most of the non-sense that gets released is that they're trying to serve an actual goal; they fulfill a need instead of creating one, and they do as promised.

This is what long-term betterment looks like; not subscription models, nag campaigns, constant notifications on what you're not getting, but instead providing a functional tool that makes your life better instead of trying to figure out more ways to get you to put out your credit card.

> I don’t think ‘regular’ people think like that.

I wonder if Doug Evans thinks much of the same thing and wonders why people don't understand he was just trying to better their future. I take from statements like these a lot of hubris, that such entrepreneurs know better than everyone else. Everyone at some time is guilty of thinking "if everyone just thought like me it'd be perfect", but it should be pretty obvious this just isn't how the world works. I think that indeed many of the 'regular' people do think very hard about the future, but they also think about how their 'now' will affect their and their children's future. It's not that they aren't trying to help the trillions of yet unborn humans, it's that they see a different way of getting there. For example, a programmer like you're describing wants to write a service to better the future of humanity - my friends in Seattle think we need trees and gardens everywhere. Whose solution now is going to be more important in 10 years? In 100? In 1000? I'm not sure that you can confidently say the software is going to be impactful and important in 10 years, much less in 1 year, when more and more it seems we just get flashes in a pan.

Your conclusion that startups are concerned with survival and the human race, not just money, isn't really supported by what the startups are actually doing. They consolidate power instead of distributing it; they hoard information instead of sharing it; they try to lock you in instead of giving you freedom and options. This is what SV has come to represent with many of the startups you see: a new, more benevolent master instead of a new tool to help you. There are dozens of new [Something]aaS launches every week, each one fighting to lock you into whatever cycle it has and to wring out a bit of money. They aren't there for the long-term betterment of humanity; they're there for the quick buck and to make promises they can't deliver on.




He's not going around destroying looms to protect jobs. He's not even complaining about the tech per se - just about the lack of any ethical consideration in some areas of the industry.

The problem is not unique to SV or to tech world. This is just the good old phenomenon of "not caring about ethics makes you better at making money". The market does not care either, it just promotes people and companies who are good at making money. Complaining is one form of creating back pressure, of making it more difficult for unethical actors to make money. We desperately need more of that back pressure.


Keep in mind that we have it good in part because we've avoided a nuclear war so far, and the bigger environmental and climate change issues haven't impacted us yet.

In the future, we will potentially have designer viruses, weaponized nanotech and AI to concern ourselves with. So it's probably good that a few people express their worries about unethical uses of technology.




He and Elon Musk, and many others as well, get caught in the aspersion you just cast out there. Way too many for you to haul in.


TBH, it's the press that needs him more than he needs the press. If anything, this article reads like a cheap trick to generate pageviews and ad revenue.



