That's not really accurate. If you're writing system or performance-critical software, the algorithms become just a small part of it (unless the workload is purely CPU-bound, in which case the best algorithm does a great deal for you). However, in these types of systems (a game rendering engine, for example), optimizations can have a HUGE impact on performance - not algorithm-related optimizations, but resource optimizations.
Just read the link I posted. No, (micro-)optimizations do not make a huge impact on performance. If you do a text comparison, for example, and from your ASM experience you know one way is 10% faster than the other, you will have made a 10% difference on something completely insignificant. If you know how to code well enough to use 1,000 comparisons instead of 2,000 (generally a minor feat), you will already have saved 50% of the time.
For example, PHP's string handler is faster with single-quoted ('text') strings than with double-quoted ("text") strings. The reason is the way the two are parsed - a typical micro-optimization.
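The PHP quoting detail is specific to PHP's parser, but every language has micro-optimizations of this class, and the honest way to evaluate one is to measure it. Here is a sketch of such a micro-benchmark in Python (the two string operations are my own example, not the PHP case):

```python
import timeit

# Two equivalent ways to build the same string; any speed difference
# between them is a micro-optimization at best.
concat = timeit.timeit("s = 'hello ' + name",
                       setup="name = 'world'", number=100_000)
fstring = timeit.timeit("s = f'hello {name}'",
                        setup="name = 'world'", number=100_000)

print(f"concatenation: {concat:.4f}s, f-string: {fstring:.4f}s")
```

Whichever variant wins, the absolute difference over 100,000 iterations is a few milliseconds - which is the whole point.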
On the other hand, I also know how to write a caching engine so that certain dynamic parts of my website, for instance, do not have to be rendered for each page view - a typical macro-optimization.
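To make the macro-optimization concrete, here is a minimal sketch of that kind of fragment cache in Python (the names, the 60-second TTL, and the sidebar example are my own illustration, not the actual engine):

```python
import time

_cache = {}  # key -> (expiry_timestamp, rendered_html)

def cached(key, ttl, render_fn):
    """Return a cached fragment, re-rendering only after it expires."""
    now = time.time()
    entry = _cache.get(key)
    if entry is not None and entry[0] > now:
        return entry[1]           # cache hit: skip the expensive render
    html = render_fn()            # cache miss: render once and store
    _cache[key] = (now + ttl, html)
    return html

def render_sidebar():
    # stands in for an expensive database query plus template render
    return "<ul><li>Latest posts...</li></ul>"

fragment = cached("sidebar", ttl=60, render_fn=render_sidebar)
```

Every page view within the TTL window pays a dictionary lookup instead of a full render, which is where the order-of-magnitude wins come from.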
How much do you think each of these impacts performance?
Exactly: the string optimization does not affect performance in any measurable way - page generation time is about 0.12 seconds with or without it. Sure, it might have saved a thousandth of that, so instead of 0.120 it will now take 0.119 seconds - whoohoo! Compare that to the effect the global caching mechanism has: page generation time drops from 0.120 seconds to a whopping 0.025 seconds, roughly 4.8 times faster. Yes, assembly-level optimization may, in some cases, improve performance by a few percent. But the gains at the design level are usually far, far greater. That is why new programmers should always focus on overall design first and low-level implementation later.
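The arithmetic behind those numbers, using the timings from the example:

```python
baseline = 0.120   # page generation time in seconds, no optimization

micro = 0.119      # after the string micro-optimization
macro = 0.025      # after the caching macro-optimization

micro_gain = (baseline - micro) / baseline * 100   # percent saved, ~0.8%
macro_speedup = baseline / macro                   # speed-up factor, 4.8x

print(f"micro-optimization saves {micro_gain:.1f}% of page time")
print(f"caching makes the page {macro_speedup:.1f}x faster")
```

Less than one percent versus nearly five times faster: that is the gap between the two kinds of optimization.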
Of course, you do not have to take my word for it; there is a good Wikipedia article on the subject, which pretty much states the same:
"The architectural design of a system overwhelmingly affects its performance. The choice of algorithm affects efficiency more than any other item of the design."
Versus the gain on assembly-level optimization:
"With more modern optimizing compilers and the greater complexity of recent CPUs, it is more difficult to write code that is optimized better than the compiler itself generates, and few projects need resort to this 'ultimate' optimization step."
Once again, I do not deny that it is beneficial for a programmer to know the basics behind memory allocation, CPU utilization and garbage collection. But saying it will lead to more efficient programs glosses over that huge, incredibly important and difficult step which is the design process itself. Furthermore, almost no application to date needs micro-optimization. If you get the overall design right and run it on a fairly modern machine, almost anything will be fast enough. There is not a company in the world that will pay for you to comb through thousands of lines of assembly when they can simply buy a machine that's twice as fast for half the money. Don't believe me? There is a funny story on the DailyWTF about just that :laugh: