I came across this as McBryan’s Law in the famous Akin’s Laws of Spacecraft Design, which is worth reading even if, like me, you are not in that field.
This principle has come up a lot over my years building software, and it has been around in various forms across the industry. So much so that there is a famous quote about it:
The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.
Donald Knuth, Computer Programming as an Art (1974)
There are caveats to this, but I want to offer what I consider useful guidance on being a conscious developer or engineer when thinking about optimization in our daily work.
Think about value
There is usually a time investment required for a given optimization. If you start thinking about the overall value of that piece of work, it might very well turn out to be a bad endeavor. You can only see this if you zoom out from the problem in front of you and try to consider as much of the whole picture as possible.
Once you take into account maintainability, complexity and even the opportunity cost, the result might be a strong «Don’t do it». There is actually an extended version of the quote about this:
Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
Donald Knuth, Structured Programming with go to Statements (1974)
«Does the user care? That much?» is probably a concise rule of thumb to decide whether an optimization is a must, or whether it can go to the backlog.
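To make the value question concrete, here is a hypothetical back-of-envelope check in Python; every number in it is made up for illustration, but this kind of quick math often lands squarely on «Don’t do it»:

```python
# Hypothetical back-of-envelope value check for an optimization.
seconds_saved_per_use = 0.2     # assumed speedup per interaction
uses_per_day = 10 * 50          # 10 uses/day by 50 users (made up)
engineering_cost_hours = 2 * 40  # assume two weeks of work

time_saved_per_year = seconds_saved_per_use * uses_per_day * 365 / 3600

print(f"Saves ~{time_saved_per_year:.0f} user-hours per year "
      f"for {engineering_cost_hours} engineering hours")
# -> Saves ~10 user-hours per year for 80 engineering hours
```

And that is before counting the ongoing maintenance and complexity the change would add.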
You probably need data
To understand where you need to optimize and find that critical 3%, you will probably need data. Sampling real usage is a great way to get it, but there are other sources that can guide you if needed.
Investing in clear logs and metrics to collect usage data and identify opportunities is usually a solid foundation for running optimizations later.
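As a minimal sketch of what that instrumentation could look like, here is a hypothetical Python timing decorator; `generate_report` is a made-up stand-in, and in a real system the timings would feed whatever metrics pipeline you already have:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("perf")

def timed(func):
    """Log the wall-clock duration of each call, so real usage data
    can show which code paths actually deserve optimization."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logger.info("%s took %.2f ms", func.__qualname__, elapsed_ms)
    return wrapper

@timed
def generate_report(user_id):
    # Hypothetical business function standing in for real work.
    return f"report for {user_id}"
```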
Avoid stuff that does not work
While you shouldn’t rush to optimize in the early stages, you should definitely not run with anti-patterns. Sounds pretty obvious, but it ain’t.
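For instance, here is a hypothetical Python sketch of a classic performance anti-pattern and its cheap fix; replacing the list with a set is not premature optimization, just picking the right data structure:

```python
# Anti-pattern: linear membership test inside a loop, O(n*m) overall.
def known_visitors_slow(visitors, known_users):
    known = list(known_users)
    return [v for v in visitors if v in known]  # scans the list per visitor

# Cheap fix, not premature optimization: use the right data structure.
def known_visitors(visitors, known_users):
    known = set(known_users)  # O(1) average membership checks
    return [v for v in visitors if v in known]
```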
Also, «making it work» means different things in different contexts. While it might make sense not to consider geo-distributed DB caching settings for an application with a small user base, it might make total sense when launching something at a completely different scale.
This should go hand in hand with whatever QA and rollout plan is in place.
Invest in failsafes and mitigations
While you might not do in-depth optimizations, you should probably still do the fundamental and low-cost ones, avoiding stuff that does not work.
That being said, if something concerns you, it is a great exercise to think through how you would respond if that issue actually showed up.
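As a sketch of what a cheap mitigation can look like, here is a hypothetical Python fallback wrapper; the function names are invented, and a production system would likely reach for a proper retry or circuit-breaker library:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("failsafe")

def with_fallback(primary, fallback):
    """Wrap a risky code path so a failure degrades gracefully
    instead of surfacing an error to the user."""
    def guarded(*args, **kwargs):
        try:
            return primary(*args, **kwargs)
        except Exception:
            # Broad catch is deliberate: a degraded answer beats
            # showing the user an error while you investigate.
            logger.exception("primary path failed, serving fallback")
            return fallback(*args, **kwargs)
    return guarded

def live_recommendations(user_id):
    # Hypothetical call to a flaky downstream service.
    raise TimeoutError("recommendation service unreachable")

def default_recommendations(user_id):
    # Cheap, static mitigation that always works.
    return ["top-seller-1", "top-seller-2"]

get_recommendations = with_fallback(live_recommendations, default_recommendations)
print(get_recommendations(42))  # -> ['top-seller-1', 'top-seller-2']
```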
Designing a scalable system in the cloud usually brings a lot of flexibility, with solutions that can be spun up in no time and with low complexity. While sometimes a bit expensive, it is a much more rational decision to throw some money at the problem if it happens, while you work on a proper solution.
One other important aspect worth mentioning is having a robust and fast deployment procedure. When in trouble, you want zero friction in getting incremental fixes to your users.
