Never theorize before you have data. Invariably, you end up twisting facts to suit theories, instead of theories to suit facts.
It is often useful to learn from our own experience and narrow down problems based on similar things we have seen in the past. However, there are times when we jump to a conclusion too quickly because something merely looks like what we've seen before. If we're not careful, we can spend significant time and effort chasing a particular possibility only to realize it was a dead end and, in the worst cases, make things worse trying to "fix" something that wasn't really the problem.
The quote referenced above from Sherlock Holmes is a reminder of the dangers of assuming too much before gathering sufficient data. Without sufficient data, it is easy to start seeing what we want to see. If we think we know what the problem is, we want the symptoms to fit our preconceived notion of the problem and its proper solution or fix.
This effect is not limited to software development. A similar effect is seen in a wide variety of settings: the politicization of science, the phrase "love is blind," and some parents' inability to believe their kids can do any wrong.
In most situations, including software development, the best decisions are made with better data. There are times when the cost of obtaining additional data is so high that we're better off making decisions without it, but when it is relatively inexpensive to gain more data about a problem, that small price is usually well worth paying. Making a proper decision early in design, in refactoring, and in fixing a problem can often result in a much quicker implementation or resolution.
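Gathering inexpensive data can be as simple as measuring instead of guessing. The sketch below (a hypothetical example in Python, with two made-up candidate functions) uses the standard-library timeit module to check the assumption that one implementation is the slow one before "optimizing" anything:

```python
import timeit

# Hypothetical candidates: we *suspect* the concatenation version is the
# bottleneck, but we measure before changing any code.
def join_with_concat(parts):
    result = ""
    for p in parts:
        result += p
    return result

def join_with_join(parts):
    return "".join(parts)

parts = ["x"] * 10_000

concat_time = timeit.timeit(lambda: join_with_concat(parts), number=100)
join_time = timeit.timeit(lambda: join_with_join(parts), number=100)

# The measured numbers, not our intuition, decide whether a rewrite
# is justified at all.
print(f"concat: {concat_time:.4f}s, join: {join_time:.4f}s")
```

A few lines of measurement like this cost almost nothing, and they either confirm the theory or save us from "fixing" code that was never the problem.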
As alluded to above, the dangers of jumping to conclusions without sufficient data are not limited merely to wasting time on a wild goose chase. In even more costly situations, grasping at straws or applying desperate fixes for theorized problems can make code unnecessarily verbose, worse performing, needlessly complex, or just plain wrong. Any time code is changed, there is some potential for breakage. Changing code for no reason other than that we think we're fixing a problem that isn't really a problem introduces this risk with no benefit.
Making decisions based on previous experiences and applying past experience to current and future work are often beneficial practices that improve our productivity and efficiency. However, these tactics can be detrimental when we design a new system or troubleshoot an existing system based solely on previous experience, even when the new problem is only remotely related, or not related at all, to that experience.
The best way to deal with this is to understand well the lessons learned from the past (how and where they apply) and to learn as much as we can about the given problem, so we know how it is the same as, and how it differs from, our previous encounters with seemingly similar issues.