> But I've also asked Stack Overflow questions that drowned under a tsunami of premature optimization tut-tutting from people who I suspect didn't even fully understand the question, and then sighed and proceeded to take a couple days to figure it out myself and improve application performance in a critical section by 50%.
Me too. It's infuriating how presumptuous people can be.
Here is someone telling me that my attempts to avoid a virtual function call "definitely fall under the category of premature optimization": http://stackoverflow.com/a/5100072/77070
It's interesting to me that people will shout "premature optimization" before they have any idea what you are doing or how long you have worked on this problem before trying to optimize.
I'm also deeply skeptical of the pre-canned response that people should always profile first, as if it were hopeless to optimize without a profiler.
In my experience, profilers are really low-level tools that often highlight the wrong problems, aren't very good at telling you where the optimization opportunities are, and tend to encourage micro-optimization.
It's not that profilers are bad; it's that I get the feeling the people parroting this advice have no idea what they're talking about, and somehow believe that a profiler is "the solution", when profilers are neither necessary nor (most problematically) sufficient to fix most performance problems.
In my experience, a profiler is good for two things:
1. quickly seeing which high-level functions or methods take the longest time, and
2. seeing which methods are called a lot.
Even 2. can be of marginal use. A lot of the time, it's not surprising that a method is called a lot, nor clear whether those calls are indicative of a performance problem. There are times, though, when a method is called far more than you expected, in which case that might be the clue to solving the performance problem.
For 1., it's usually a matter of clicking down through the call stack from the most expensive top-level method, for as long as a single method still accounts for a good fraction of the overall time.
Hopefully the stopping point will still be a method you wrote, and not a library call. If it's your code, you likely have the ability to fix the performance problem by changing it. If it's a library call, you might need to be a little more creative: finding an alternative approach, or better understanding what that call is doing under the hood.
So for me, the profiler just tells me I'm looking at the right general part of the code when trying to improve performance, and that's about it.
FWIW, both 1 and 2 can still lead you in the wrong direction sometimes. For example, if you're working in a garbage-collected language most profilers (that I've used, anyway) won't give you good insight into how much time you're wasting on excessive GC load because you're creating too $#@$% many unnecessary objects (see: boxing), or creating objects with just about exactly the wrong life span. If you're working on parallel code, many profilers are darn near useless for figuring out how much time you're losing to blocking on synchronization.
I just replaced a SQL view that was performing 24,000 reads with a procedure that performs 45 reads. Yes, it's a little different to use, but overall don't listen to the premature optimization people.
I could wait until the data in those tables grows to a crazy size and the reads are out of control (and spilling to disk) or I could just fix it now. Hmmm...
Here is someone insisting to me over and over that virtual function call overhead is not measurable: http://stackoverflow.com/a/16603295/77070