People are not emotionally ready to accept that certain layers of abstraction don’t need as much care and effort if they can be automated.
We are at the point where a single class can be dirty as long as its API is clean. There’s little point in reviewing the internals of a class anymore; I’m reasonably confident they will work as intended.
The next step is the microservice itself: the API of the microservice should be clean, but the internals can be whatever. We are about 10% of the way there.
So may be a charitable interpretation could be that quality does not matter because LLMs can deal with any complexity that comes with the reduction in quality...
It seems we are getting bitten by the law that says things that can be measured trump things that cannot.
How fast it was to create an initial version of a piece of software can be easily measured.
But how efficient it is, how easy it is to make changes to it, how easy it is to debug, how easy it is to extend in the direction the domain requires... none of these can be easily measured or quantified, yet they are ten times more important than that initial creation time. For software that has to run and be maintained for decades, delivering value all that time, it does not really matter whether the initial version was created in 5 minutes or 1 month, provided the 5-minute version does not degrade all those non-measurable, non-marketable traits of the software.
It is like how camera marketing revolved mostly around megapixel count instead of vastly more important qualities like low-light performance, dynamic range, or fast autofocus, because the lowest common denominator of the market would not grasp their relevance or act on them. So it was all about megapixels, but at least that did not have many negative consequences, unlike the marketing around AI...
That's an issue I have with Claude, actually. I found it very prone to breaking abstractions to get the job done. This is what I'd call slop (more so than the class internals).
No, unfortunately. In a past life, in response to an uptime crisis, I drove a multi-quarter company-wide initiative to optimize performance and efficiency, and we still did not manage to change the company culture regarding performance.
If it does not move any metrics that execs care about, it doesn't matter.
The industry adage has been "engineer time is much more expensive than machine time," which has been used to excuse way too much bloated and non-performant code shipped to production. However, I think AI can actually change things for the better. Firstly, IME it tends to generate algorithmically efficient code by default, and generally only fails to do so when it lacks the necessary context (e.g. not knowing that an input is sorted).
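A toy illustration of that missing-context point (the function names and data are mine, just for the sketch): without knowing the input is sorted, a generator will reasonably emit a linear scan; with that precondition stated, binary search is the efficient choice.

```python
from bisect import bisect_left

def contains_linear(xs, target):
    # O(n) membership test: the safe default when nothing
    # is known about the ordering of xs.
    return target in xs

def contains_sorted(xs, target):
    # O(log n) membership test: exploits the precondition
    # that xs is sorted ascending, via binary search.
    i = bisect_left(xs, target)
    return i < len(xs) and xs[i] == target

xs = list(range(0, 1000, 2))  # sorted even numbers
# Both agree on results; only the complexity differs.
assert contains_linear(xs, 500) == contains_sorted(xs, 500)
assert contains_linear(xs, 501) == contains_sorted(xs, 501)
```

The generated code is only as good as the invariants it is told about, which is the point being made above.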
More importantly though, now engineer time is machine time. There is now very little excuse to avoid extensive refactoring to do things "the right way."
Performance can be a direct target in a feedback loop and optimised away. That's the easy part. Taking an idea and poof-ing a working implementation is the hard part.
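The "direct target in a feedback loop" framing can be made concrete: time two functionally equivalent candidates and let the measurement decide which one survives. A minimal sketch (the example functions are mine, not from the thread):

```python
import timeit

def slow_sum(n):
    # O(n) baseline: explicit accumulation loop.
    total = 0
    for i in range(n + 1):
        total += i
    return total

def fast_sum(n):
    # O(1) candidate: closed-form Gauss formula.
    return n * (n + 1) // 2

# Correctness gate: both must agree before timing matters.
assert slow_sum(10_000) == fast_sum(10_000)

# The measurable signal the loop optimises against.
t_slow = timeit.timeit(lambda: slow_sum(10_000), number=100)
t_fast = timeit.timeit(lambda: fast_sum(10_000), number=100)
assert t_fast < t_slow
```

Because the metric is a number, an automated loop can iterate on it unattended; "is this the right abstraction for the domain" has no such signal.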