To me, the interesting thing is that the entire culture of usability seems to be focused on widening this gap. Perhaps the biggest rule of user interface design is "the user shouldn't have to care how it works". This extends even down into programming. You bring in a library, you look at the function specs, call a function, and if it works without you having to grok a single thing happening beneath the surface, it's a resounding success. The same thing, but worse, happens at the application level.
Of course, good luck competing in a market where you're the only guy trying to (gasp) make the user understand what's going on. I don't even know if I expect the problem to get worse (as interfaces become more and more abstracted) or better (as users increasingly grow up with computers in their lives) as time goes on.
A mental model is essential to successfully use _any_ machine.
You don't need to have a correct mental model of how it works, just a mental model that, given the most common inputs, predicts the outputs.
From your comment, I can see you have a mental model of how the car works. You press on the gas, and expect the car to go faster. You press on the brakes, and expect the car to slow down until it eventually stops. You turn the steering wheel to the right and you know that the front wheels point rightwards, making the car go in that direction. This mental model allows you to predict the outcome of pressing the gas and the brakes at the same time, and therefore know that you should not do it.
You do not need to know the inner workings of the car. You might as well think there are midgets under the hood doing all the work.
The problem is that the mental model most people form about computers is so wrong that it doesn't predict anything. So they can't use a computer properly.
And this is mostly the fault of interface "designers". An interface to _anything_ should allow the user to form a mental model that predicts the outcome of the operations that users will need to perform. This does not mean exposing the inner workings of the machine, but also not over-simplifying or using metaphors excessively. Like I said, the mental model does not need to be accurate, it just needs to help users do whatever they need to do.
Car analogies? Even Stephenson's book that the GP references doesn't pull that off.
Why do cars need to be refilled with gas? Why do their engines require regular oil changes? Why do their doors and ignition devices have keyed locks? Why do you need to learn to drive it, coordinating your arms, legs, eyes, and ears? Why do you need to learn the legal and social rules of traffic?
There's essential complexity here. The "mental model of how a computer works" is learning the UI paradigm (rules of the road), not busses and syscalls (drive shafts and carburetors).
The iPad doesn't have a 35-year legacy of physical removable media for user data. It does, however, require syncing to iTunes as the sole means of initialization and backup.