You’re being a tad pedantic and, more annoyingly, also wrong.
Big-O notation can also describe space complexity, which is a property of the data structure itself and not of the algorithmic operations. What I said was a catch-all.
As for trivia, it 100% is when literally every good data structure documentation states the big-O of that operation (+ a lot more details that can sometimes be more relevant). It’s trivia worth knowing for arrays and hash tables, but not worth remembering for the various trees, because there are so many of them and they’re used so infrequently (+ implementation decisions can matter, which is why reading the docs is more important).
I don’t know what problem domains you are working with, but algorithmic complexity is not the most common cause of slow code I’ve seen new devs run across, because the sane generic defaults tend to be relied on for exactly that reason (+ code review by someone with experience will catch obvious problems).
The addendum about network calls in a loop makes it sound like his domain uses some kind of SDK with a bad API design. There are some notorious examples known to fool new and uninitiated developers who haven't read the fine print into writing code that looks perfectly reasonable on the surface but does dumb things under the hood.
My experience with new devs leans more towards obsessive concern with time complexity, at least where it isn't hidden behind a bad API design: fearing an algorithm with high complexity that, when measured under its practical workload, would be just fine. It can be difficult for new (and even old) developers to grasp just how fast computers actually are.
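To put rough numbers on that, here's a quick timing sketch in Python (the function and sizes are made up for illustration; exact timings depend on hardware):

    import time

    # A deliberately naive O(n^2) pairwise scan, the kind of thing
    # that looks scary on paper.
    def count_close_pairs(values, eps=1e-3):
        count = 0
        for i in range(len(values)):
            for j in range(i + 1, len(values)):
                if abs(values[i] - values[j]) < eps:
                    count += 1
        return count

    values = [i * 0.7 for i in range(2_000)]  # ~2 million comparisons

    start = time.perf_counter()
    count_close_pairs(values)
    print(f"n=2000, O(n^2): {time.perf_counter() - start:.3f}s")
    # Even in CPython this usually finishes in well under a second.
    # If n stays in the thousands, quadratic is often perfectly fine.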
I work in multiple problem domains, as I lead teams on multiple projects across orgs. My comment about network requests is because one of the most common issues I see in inexperienced engineers is writing database code locally where they do a lookup and an insert on some dataset inside a loop. Regardless of language, this is unacceptably slow once a real network is involved.
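To make that concrete, the pattern looks something like this (a minimal Python sketch; `db` and `records` are hypothetical, and the Mongo-style method names are purely illustrative):

    # Anti-pattern: a lookup plus an insert per element means up to
    # 2 * len(records) network round trips. Locally this is invisible;
    # at ~1 ms per round trip, 10,000 records is ~20 seconds of pure latency.
    for record in records:
        if db.find_one({"key": record["key"]}) is None:  # round trip
            db.insert_one(record)                        # round trip

    # Better: batch both sides so the loop itself runs in memory.
    keys = [r["key"] for r in records]
    existing = {row["key"] for row in db.find({"key": {"$in": keys}})}  # one round trip
    new_rows = [r for r in records if r["key"] not in existing]
    if new_rows:
        db.insert_many(new_rows)                                        # one round trip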
Yeah, a lot of database management systems have really bad API designs, often mirroring the underlying database API directly to the user over the network without any consideration for the network having different constraints.
Said database APIs are typically not unreasonable when the database is running in the same memory space as the application, where latency is unnoticeable, as was historically the case when a lot of these APIs were designed. But, as you know, the model breaks as soon as you find yourself in a high-latency environment, like over a network.
So then you get some weird bulk operators bolted onto the system to work around the latency issue, but they don't fit the mental model of the rest of the API, which is designed around the idea of single-unit operations. Short of catching them in the fine print, they go unnoticed. Indeed, the naive programmer who hasn't yet been burned will not be able to fathom that another developer could design such a bad API, and will put faith in the idea that the underlying system will somehow automatically mitigate the problem you describe. And in small-scale testing it works fine, so there is no reason to doubt that notion. That is, until it is too late...
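A sketch of what that mismatch tends to look like (hypothetical `store` client; the `get`/`mget` naming is borrowed from Redis-style APIs purely for illustration):

    # The single-unit API the whole mental model is built around.
    # Perfectly reasonable in-process; N round trips over a network.
    results = [store.get(k) for k in keys]

    # The bulk operator bolted on later, mentioned once in the docs.
    # One round trip, but it doesn't look like the rest of the API,
    # so nobody who trusts the original model goes looking for it.
    results = store.mget(keys)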
The experienced developer has learned to stop trusting other developers and to bring a heavy skepticism when using another's API. This is a blessing, as it means they (usually) stop making those kinds of mistakes when they encounter a bad API, but it is a curse, as it means they see no reason to fix the problem. "Why don't the junior devs just know better?!" they say. And so, the cycle repeats.
A dev being able to tell you the time complexity of their code isn’t trivia; it’s a very important tool for documenting, discussing trade-offs, and generally planning for time complexity at scale.
I’ve worked and led teams in multiple problem domains. Devs not understanding how to optimize to avoid slow time complexity is definitely one of the first nuts you have to crack in a new engineer.
I don’t know. I generally haven’t seen these be a problem. I think I’ve seen accidental n^2 complexity a couple of times at most, and it wasn’t code written by juniors but rather an oversight in adding nested loops somewhere they didn’t need to be. Now, I have seen juniors who couldn’t figure out what was n^2 and how to fix it when given it in an interview, but slowness and big-O being related is generally rare. As you mention, poorly considered sequences of network calls are more common.

Additionally, the big-O of a data structure and the big-O of the overall algorithm solving some higher-level problem tend to be fairly decoupled, except in some very specific problem domains. I’m a systems engineer who has worked in OS, embedded, mobile, and distributed systems, and I have seen big-O analysis of the high-level code documented about 0 times (and it would have been helpful about as many).
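To illustrate, one common shape of accidental n^2 (a minimal Python sketch; the second loop hides inside the list membership test):

    # Accidental O(n^2): `in` on a list is a linear scan, and it sits
    # inside a loop over another list.
    def missing_items(wanted, available):
        return [item for item in wanted if item not in available]

    # Same behavior, O(n) overall: hash-based membership makes each
    # lookup O(1) on average.
    def missing_items_fast(wanted, available):
        available_set = set(available)
        return [item for item in wanted if item not in available_set]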
Again, I’m not saying it’s irrelevant, and it might be highly relevant in some problem domains. But in general, the big-O of the common data structures should be memorized, with the rest being trivia that should be accessible in the documentation for the data structure (or by looking at the implementation and working it out yourself).
> A dev being able to tell you the time complexity of their code isn’t trivia
I take it that you didn't bother to read the thread you are replying to? There is no reason why you need to have the time complexity of any given function memorized should someone ask. You can – better yet, they can – read the documentation or, failing that, read the code to answer the question. If you happen to have it on hand, that's great and all, but it is only necessary if the asking party sees themselves as a trivia master.
> Devs not understanding how to optimize to avoid slow time complexity is definitely one of the first nuts you have to crack in a new engineer.
You never really crack that nut, no matter how experienced, as many problems have no known faster time complexity solution to provide the 'how' (comparison-based sorting, for instance, is provably stuck at Ω(n log n)). To begin to try to crack that nut you're going down the road of a researcher, not someone writing day-to-day software, and even then your research is not guaranteed to yield results.