It's not a fundamental difference; it's a consequence of the fact that hardware people sit at the bottom of a very big stack and have a massive financial incentive to be as solid and predictable as possible. Higher up the stack, everyone prefers to use relatively cheap programmers and build stuff quickly.
The problem is not having to deal with software APIs; the problem is the sheer size of the stack and the sheer number of accumulated assumptions built into it. Moving more pieces into hardware might improve the stack's overall integrity and reduce bugs, but it won't do much to reduce its size.
The real issue, IMHO, is that no one wants to admit that the general reuse problem is hideously, horrifyingly difficult. The biggest problems it causes are diffuse and long-term, and in the short term everyone can move faster by hacking their old code together with someone else's 95% solution library, so that's what everyone does. Putting enough thought into each new application to really do it right is hard to justify on a business level, and most programmers have neither the inclination nor the skill to do it anyway. The mindset is so ingrained that even people who are frustrated with the way things are tend to think a different operating system or language would solve the problem. It wouldn't; it would only restart the process, with at best a percentage reduction in stack size and a corresponding percentage increase in time-to-frustration. I think it boils down to the fact that code reuse is basically a 2^n problem (the number of ways components can combine and interact grows exponentially with the number of components), and the bigger and more opaque the stack gets, the harder it is to cheat that 2^n.
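To make the 2^n intuition concrete, here's a toy model (my own illustration, not something from the comment above): treat a stack as n components, where each pair is a potential interface and each subset of components is a configuration you might have to understand or test. The interfaces grow quadratically, but the subsets grow as 2^n, which is what swamps you.

```python
# Toy illustration of why reuse cost explodes with stack size.
# Model (assumed for illustration): n components, every pair is a
# potential interface, every subset is a potential configuration.
from math import comb

def interaction_counts(n: int) -> tuple[int, int]:
    """Return (pairwise interfaces, possible component subsets) for n components."""
    pairs = comb(n, 2)   # n choose 2 pairwise interfaces
    subsets = 2 ** n     # every subset of components is a configuration
    return pairs, subsets

for n in (5, 10, 20, 40):
    pairs, subsets = interaction_counts(n)
    print(f"n={n:2d}: {pairs:4d} interfaces, {subsets} possible subsets")
```

Doubling the stack from 20 to 40 components roughly quadruples the interfaces but squares the number of subsets, which is why a "percentage reduction in stack size" only buys a percentage of relief.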
The only potential solution I've seen is what Chuck Moore is doing with Forth chips. He's now at the point where he can design and manufacture chips that are cheap and simple in design but very good at running Forth very quickly. Of course the tradeoffs are (perhaps necessarily) as horrifying as the reuse problem itself: his approach demands a lot more from programmers in general, requires them to learn a radically different way of doing things than they're used to, and at the same time strongly discourages reuse at both the binary and source levels. In other words, he's spent decades designing a small core language and the hardware to run it, and that core (along with data transfer protocols) is really all you should be reusing. Needless to say, no desktop or web or server programmer (or said programmer's boss or company) is ever going to go for this unless the problems with reuse become far worse than they are now. (Even then, the supply of cheap programmers might grow fast enough to keep masking the problem for a long time.) Most programmers are not very good, managers like it that way, and most of the smarter programmers are nibbling around the edges or looking for a silver bullet.
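For a sense of how small a "small core language" can be: Forth is a stack language, and the essential evaluation loop fits in a few lines. The sketch below is a toy Python illustration of the idea only; it is not Moore's chip instruction set, and the word names are just the handful of standard Forth words I'm certain of.

```python
# Toy Forth-style evaluator (illustrative sketch only, NOT Chuck
# Moore's actual chip ISA). The point: the whole core is tiny, and
# everything else is built by defining new words on top of it.
def run(program: str) -> list[int]:
    stack: list[int] = []
    words = {
        "+":    lambda s: s.append(s.pop() + s.pop()),
        "*":    lambda s: s.append(s.pop() * s.pop()),
        "dup":  lambda s: s.append(s[-1]),            # duplicate top of stack
        "drop": lambda s: s.pop(),                    # discard top of stack
        "swap": lambda s: s.__setitem__(slice(-2, None), [s[-1], s[-2]]),
    }
    for token in program.split():
        if token in words:
            words[token](stack)
        else:
            stack.append(int(token))  # anything unrecognized is a number literal
    return stack

# "3 4 + dup *" computes (3 + 4) squared
print(run("3 4 + dup *"))  # -> [49]
```

A real Forth adds a return stack, a dictionary for user-defined words, and compilation, but the executable core stays on the order of a page, which is what makes "reuse only the core" even conceivable.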
In short, there are no easy solutions. If you don't like the direction software is going, think about becoming an embedded systems programmer.