Quality has a price

Software quality is a bit like world peace. Everyone wants it, but when the issues get close to home, there are always excuses: schedule pressure, complexity, beliefs, blaming someone else, you name it. Small deviations from the right course of action to handle exceptional situations may not seem to matter, but they do. In the end, you get what you pay for, and you can’t really expect to have quality software without paying attention to quality.

The basic theory is really simple. Defects get injected into the software as part of the development effort. They are caused by human mistakes and oversights, and that won’t change any time soon. The goal is to remove them to produce a “bug free” application. Every quality activity removes a certain portion of these injected defects. If the amount injected were known, it would be easy to keep looking until the count went down to zero. The truth is, we never know how many were injected, and it’s impossible to prove the software is completely defect free. All you can do is reach a certain confidence level. Sadly, raising that confidence level has exponential costs, and unless you’re working on code for a nuclear plant, you probably won’t be able to justify them.
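
To make the exponential-cost claim concrete, here is a toy model of my own (the numbers are entirely made up, not measured from any real project): assume some count of injected defects and that each quality activity catches a fixed fraction of whatever is still left.

```python
# Toy model of defect removal (hypothetical numbers, for illustration only).
injected = 1000      # assume 1000 defects were injected during development
catch_rate = 0.5     # assume each quality activity finds half of what remains

remaining = injected
for activity in range(1, 11):
    remaining *= (1 - catch_rate)
    print(f"after activity {activity:2d}: ~{remaining:7.1f} defects remain")

# Each step removes the same fraction of what is left, so every additional
# "nine" of confidence (90% removed, then 99%, then 99.9%) costs roughly the
# same number of extra activities, and the count only approaches zero.
```

Under these made-up numbers, cutting the remaining defects from 10 to 1 takes as many activities as cutting them from 1000 to 100, which is why the last stretch toward “defect free” is so hard to justify.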

Some people believe that developers are responsible for code quality. It’s true in a certain way, but soldiers are not responsible for wars; leaders are. From an ethical point of view, soldiers should refuse to engage in conflicts that kill innocent people, but they have to follow orders. In the same way, developers should stand their ground and ask to be granted the time to do things right, but many will fear losing their source of income.

Being meticulous and careful is one way to reduce the amount of defects injected, but you can’t hope that it will solve the quality issue on its own. A client once asked me whether they should be using unit testing or checklists when performing QA. To me, the answer was obviously both, but it does not seem so obvious to everyone.

Let’s move on to a different analogy. If you cook a meal for guests, you might review the recipe, make sure you have all the ingredients, check that their quality is good and be very meticulous while following the procedure. The fact that you did everything well at the start certainly does not mean you should not taste the resulting meal before serving it. On the other hand, skipping the verification of the ingredients does not make sense either. If you taste the dish at the end and realize the milk used was not fresh, you just made one hell of a waste. It might also be very difficult to tell from the end result why the food tastes so terrible. You would have to be one confident chef to serve a meal without ever tasting any of its components.

Being meticulous in development is like lining up the ingredients, verifying them and paying attention to every step you take. Unit testing is like tasting the sauce while it’s still cooking. QA is the final verification before you serve. Peer review does not really map to cooking a meal, but having someone watch over you and catch your mistakes isn’t a terrible idea either. Prototyping is like accepting to waste some food practicing a complex procedure.

A programmer cooking for himself, a mother cooking for her children and a Christmas dinner for the whole family all require different levels of quality. The time they take is wildly different: roughly 20 minutes, an hour and a full day. Notice how exponential it is? What determines the quality level required is the cost of failure. When I cook for myself alone, I don’t really care if some details are not completely right; the consequence is basically not enjoying my meal as much, or a few dollars if I have to redo it. Think about the consequences in the two other scenarios. Is the time taken justified? It certainly is. It’s natural, and everyone understands it.

The exact same thing is true of software. However, software is abstract and invisible, so people don’t see it as quickly.

A lot of the techniques used to obtain higher confidence in software quality amount to piling up redundant checks. Quality levels can generally be improved incrementally, but each check you add has a cost, and the cost is not only time. Final QA requires that you have some sort of specification or use cases. Peer review works a lot better if you have strong conventions and an agreement on what is OK and what’s not, which is really not that easy to get. Unit testing is the most controversial because it affects how the software is designed. Code has to be written in a way that supports testability. It changes the design. It may have performance impacts. It adds complexity and forces you to prioritize the quality attributes that are required.
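
As a small illustration of how testability pressure shapes a design, here is a minimal Python sketch (the function and names are hypothetical, not from any particular project): a hardcoded dependency on the system clock would make the logic awkward to verify, while injecting that dependency changes the design slightly and makes a repeatable unit test trivial.

```python
import unittest
from datetime import date


# Hypothetical example: if this check called date.today() directly, its
# result would depend on when the test runs. Accepting the clock as a
# parameter is a design change made purely to support testability.
def is_expired(expiry, today=None):
    """Return True if the expiry date has passed."""
    today = today or date.today()
    return today > expiry


class IsExpiredTest(unittest.TestCase):
    def test_future_date_is_not_expired(self):
        self.assertFalse(is_expired(date(2030, 1, 1), today=date(2020, 6, 1)))

    def test_past_date_is_expired(self):
        self.assertTrue(is_expired(date(2020, 1, 1), today=date(2020, 6, 1)))


if __name__ == "__main__":
    unittest.main()
```

The extra parameter is exactly the kind of small design concession the paragraph above is talking about: harmless here, but multiplied across a codebase it is a real trade-off that has to be weighed against the other quality attributes you care about.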

Design decisions are a sensitive issue, but I think any reasoned decision is better than no decision at all. You can’t have all the quality attributes in your application. You need to prioritize what you need and live with those decisions. Having worked on projects for long periods, I tend to put maintainability and testability really high. Others may prefer flexibility and performance. It’s very hard to obtain a consensus.

Consequences have to be evaluated at the beginning and quality objectives have to be determined from the start. Decisions have to be made. Impacts have to be known. Recovering from insufficient quality is hard and expensive. Failing can be catastrophic. Developers are accountable for quality, but so is the rest of the organization, including marketing, sales and management.
