
All the statements in that post are trade-offs. In all cases you are sacrificing something to gain something else, so in a way you are always "destroying" something.

Sometimes it is valid not to reinvent the wheel. Sometimes the wheel needs to be reinvented in order to learn. Both happen. Sometimes the decision was right, sometimes not.

Overall, we are creating more than we are destroying. I don't see the need to take a negative stance.



"Destroying software" is broader than the creation of new, working software artifacts in the moment. The phrase refers to changes in software engineering culture and its long-term effects, not the immediate successes.

Writing a new greenfield project using 10,000 npm dependencies for an Electron-based front end is shockingly easy. But how do you keep that running for the next 15 years? Did the project need npm? Or a web browser? How do all the layers between the language of choice and the bare metal actually behave, and can you reason about that aggregate accurately?

The field has come to a point where a lot of projects are set up with too many complexities that are expedient in the short term and liabilities in the long term.

The current generation of junior devs grows up in this environment. They learn these mistakes as "the right thing to do" when they are questionable and require constant self-reflection and reevaluation. We do not propagate a hacking culture that values efficiency and simplicity enough, in a way that leads to simple, efficient, stable, reliable and maintainable software. On a spectrum of high-quality craftsmanship to mass-produced single-use crap, software is trending too much toward the latter. It's always a spectrum, not a binary choice. But as a profession, we aren't keeping the right balance overall.


I've been a backend engineer for about 10 years, with my last job working on an AWS Lambda stack.

I started a job in manufacturing a few months ago, and having to think that this has to work for the next 20 years has been a completely different challenge. I don't even trust npm to survive that long, so web stuff has been an extra challenge. I landed on Lit web components, bringing them in via a locally hosted CDN.


The world is full of abstractions at many different levels. Something being at a lower level doesn't inherently mean it is superior. You can go in either direction on the scale or spectrum: do you know exactly how the atoms that computers are made of behave? There are plenty of people working on all sorts of abstractions; new abstractions appear, and demand for lower-level work increases when it is needed.

You could say that as more abstractions are built on top of lower levels, the balance of the whole field shifts higher in abstraction level on average, but that is the natural way to evolve. Abstractions allow you to build faster, and the abstractions are possible because of the lower-level elements. In the end, if you are measuring what the average level of abstraction for the current industry is, you can draw the line arbitrarily. You could include the people who use website builders and calculate the average to be even higher. We need people working at all the different levels of abstraction. We could divide the groups with two different naming conventions for lower-level and higher-level engineers, and then technically you could go back to calculating that the average is still where it used to be.

I definitely use npm (or rather pnpm) because I know it will allow me to build whatever I want much faster.


Abstractions are only part of the whole issue. Maybe I focused too much on that. But I'll argue that point once more.

How much complexity is actually required? What changed in software in the last 20 years so that the additional bloat and complexity is actually required? Hardware has become more powerful. This should make software less reliant on complicated optimizations and thus simpler. The opposite is happening. Why? What groundbreaking new features are we adding to software today that we didn't 20 years ago? User experience hasn't improved that much on average. In fact, measurements show that systems are responding more sluggishly on average.

Intrinsic complexity of the problems that software can solve hasn't really changed much as far as I can see. We add towers of accidental complexity on top that mostly aren't helpful. Those need to be questioned constantly. That isn't happening to the extent that it should. Web-based stuff is the poster child of that culture and it's hugely detrimental.


> What changed in software in the last 20 years

Backends handling tens or hundreds of thousands (or more) of concurrent users, rather than locally deployed software on a single machine or a small server with a few tens of users?

Mobile?

Integration with other software / ecosystems?

Real-time collaboration among users rather than single-user document-based models?

Security?

Cryptography?

Constant upgrades over the web rather than shipping CDs once a year?

I'll pass on AI for the moment as it's probably a bit too recent.


Why is a single, scaled-up backend required in products that effectively only need multi-tenancy?

Software can be distributed onto client machines and kept up to date. That was first solved with Linux package managers more than 25 years ago.

Before mobile we had a wide range of desktop operating systems with their own warts.

TLS 1.0 was introduced in 1999, so cryptography was already a concern back then.

So what is really new?


Can't do it all on clients. All 5M client machines connecting daily are still hitting "my" one authorization server.


> Something being on a lower level doesn't inherently mean superior. You can go in any direction on the scale or spectrum. Do you know how exactly atoms behave that computers are made out of?

This is a false equivalence; no one is suggesting that it's necessary or even useful to understand electron drift to write programs. It can, however, be extremely useful to understand memory allocation and retrieval, and how your language's abstractions over them work.

Take UUID generation, for example. I needed to generate millions of them for a tool I’m building, and I found that it was massively slower on Linux than Mac, even taking raw CPU performance into account. I eventually tracked this down [0] to BSD using arc4random, while until a fairly recent glibc release, Linux was using ul_random_get_bytes. Most people are never going to need or notice this difference, because for a single UUID, it simply doesn’t matter. But it did for me, and the only reason I was able to figure it out and solve it was by having a passable understanding of implementations buried behind abstractions.
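The scale sensitivity is easy to demonstrate. Here's a rough micro-benchmark sketch in Python (not the GP's tool; the threshold and throughput will vary with platform, libc version, and randomness source):

```python
# Rough sketch: bulk UUID generation cost.
# uuid.uuid4() pulls 16 random bytes per call (via os.urandom on CPython),
# so at high volume the underlying randomness source dominates the runtime.
import time
import uuid

N = 100_000
start = time.perf_counter()
ids = [uuid.uuid4() for _ in range(N)]
elapsed = time.perf_counter() - start

print(f"{N} UUIDs in {elapsed:.3f}s ({N / elapsed:,.0f}/s)")
# v4 collisions are effectively impossible at this scale
assert len(set(ids)) == N
```

For one UUID, none of this is observable; run it a few million times in a tight loop and the choice of random-byte source behind the abstraction becomes the whole story.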

[0]: https://gist.github.com/stephanGarland/f6b7a13585c0caf9eb64b...


It's not a personal value judgment, it's a debugging issue.


Agreed and well said. Furthermore, a lot of the statements in the post are making opposing tradeoffs when you put them together. A bunch of them value experimenting and breaking things, and a bunch of others value using what we already have and not breaking things.

A few of them aren’t decisions any individuals have control over. Most coders aren’t jumping onto new languages and frameworks all the time; that’s an emergent group behavior, a result of there being a very large and growing number of programmers. There’s no reason to think it will ever change, nor that it’s a bad thing. And regardless, there’s no way to control it.

There are multiple reasons people write software fast rather than high quality. Because it’s a time/utility tradeoff, and time is valuable. It’s just a fact that software quality sometimes does not matter. It may not matter when learning or doing research, it may not matter for solo projects, it may not matter for one-off results, and it may not matter when software errors have low or no consequence. Often it’s a business decision, not an engineering decision; to a business, time really is money and the business wants engineering to maximize the utility/time ratio and not rabbit hole on the minutiae of craftsmanship that will not affect customers or sales.

Sometimes quality matters and time is well spent. Sometimes individuals and businesses get it wrong. But not always.


I guess the rant should be renamed "business is destroying software" because several of the tradeoffs he mentions can be root caused to a commercial entity cutting corners and sacrificing everything on the altar of "developer time" in order to save money. Only a business would come up with the madness of "Move Fast And Break Things."


I mean, I hate business as much as any other engineer, but what’s the point of software without a business? (excl. the beauty of open source)


> Overall as a whole we are creating things, more than we are destroying. I don't see the need to take a negative stance.

Fair point: each one of us can think about the balance and decide whether it's positive or negative. But an important exercise must be done here: completely removing AI from the complexity side.

Most of the results that neural networks gave us, given the hardware, could be recreated with a handful of lines of code. It is evident every day that small teams can rewrite training / inference engines from scratch, and so forth. So AI must be removed from the positive output (if you believe it's positive; I do) of the complexities of recent software.

So if you remove AI, since it belongs to the other side, what exactly has the "complicated software world" given us in recent times?


If we discard AI, which I don't think we should, but if we do: my life has been enriched a lot in terms of doing things I want to do vs. things I don't want to. Very quick deliveries, never having to go to a physical store, digital government services, never having to wait in any queue, the ability to search and find answers without having to go to libraries or know specific people. Online films and TV shows on demand, without ads. There are tons of things like that which I feel have made my life so much easier.


The services that enable the things you desire also create harm (Amazon's problems are well documented, and digital government services are often a divide that sometimes excludes freedom-minded individuals who don't use a "mainstream" OS, to name a couple).

AI has the potential to make the situation much worse, as many laypeople confer it an air of "authority" or "correctness" that it's not really owed. If we're not careful, we'll have an AI-driven Idiocracy, where people become so moronic that nobody can do anything about the system when it takes a harmful action.


Sure, there are trade-offs and risks to everything, and to everything new. Cars made us move faster, but they pollute and cause injury or death. Summing all of those things together, though, I would not pick any time before now to live. And the same goes for software development.


I'm sure factory owners said the same thing in England in the early 1800s.

It needs to be noted that the average person's lot didn't improve until 150 years later. There's no reason why technology can't be decided by democratic means rather than shoved in our faces by people who just want to accumulate wealth and power.


What more could someone want than instantaneous consumption and on-demand video? We’re truly living in a frictionless utopia.


I may have worded it poorly, but everyone can choose the content they consume and the activities they do. You can choose mindless things or things that allow you to learn about and understand the world. Both have become easier.


Why is this a result of software complexity? I'm not in favor of less capable software.


I am not sure I understand you then. The post was saying we are destroying something, but I feel like we are constantly gaining and that things are getting better.



