Let me begin with an example from an everyday scenario at work. You are at the end of the sprint and have a few errors reported. You think the quality of the code and the tests is good enough for release: based on the reported bugs, the failures are minor and will not have a big impact once they are fixed. But the reality may be different, and you may actually be at the opposite end of the quality scale. The code and test quality is low, and few or no bugs have been reported simply because little testing was done. In the worst case, the deployment or release goes badly and becomes very expensive, technically or for the business.

It’s important to evaluate the likelihood of finding more defects based on the results you have gathered so far. The key is to analyze your test report thoroughly, because:

  • If you have just found a defect, this is a signal to keep testing. It may seem counter-intuitive, but in general the more defects you find, the more likely it is that there are additional defects (a simple way to track this is sketched after this list).

  • If you have only exercised a small portion of the overall functionality and have found defects, then this is a signal to continue testing.

  • If you have been testing a particular piece of functionality for a while and are not finding any new defects, then this is a signal to stop testing that area. But ask yourself: how much of the system’s functionality have you actually tested?

  • If there are significant features that are mostly or entirely untested, then you will likely not be prepared to recommend that the product is good to go.

  • False confidence is a very real danger. This is especially true for developers. I have seen or heard of many who have an unrealistically high level of confidence that their code works, in some cases without any testing whatsoever. As a specific example, I have heard developers state that if their code compiles, it is good enough to be promoted to acceptance testing.
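To make these signals concrete, here is a minimal sketch in Python of the kind of heuristic a team could script on top of its own defect tracker. The session data and the three-quiet-sessions threshold are illustrative assumptions, not an industry rule:

```python
def testing_signal(defects_per_session, quiet_sessions=3):
    """Suggest whether to keep testing, based on how many *new*
    defects the most recent test sessions found.

    defects_per_session: new-defect counts per session, oldest first.
    quiet_sessions: how many zero-defect sessions in a row count as
    a stop signal (3 is an arbitrary example, not a standard).
    """
    if not defects_per_session:
        return "no data yet - start testing"
    recent = defects_per_session[-quiet_sessions:]
    if len(recent) == quiet_sessions and all(n == 0 for n in recent):
        return "discovery has dried up - consider stopping"
    # Defects are still turning up, which usually means more remain.
    return "still finding defects - keep testing"

print(testing_signal([5, 3, 4, 2]))        # still finding defects - keep testing
print(testing_signal([5, 3, 1, 0, 0, 0]))  # discovery has dried up - consider stopping
```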

Testing, by definition, is comparing an expected result to an observed result, so it should be easy to answer the question: how much testing is enough? In practice, though, the answer depends on a number of factors, such as:

  • Identifying risks early in your project, on both the technical and the business side.

  • The project constraints, such as time and budget. Scope, time and cost pull against each other: if the team enlarges the scope of a project, the time and the cost grow with it; if the time constraint is tightened, the scope may be reduced but the cost will remain high; and if the budget is tightened, the scope shrinks but the time will increase. The time constraint is about the time needed to finish the project, so it should be backed by a concrete schedule.

  • Measuring the complexity and risk of new features, and how much of the product they impact (see the risk-scoring sketch after this list).

  • Measuring the bugs found during the release versus the bugs found after it, and in which areas of the product they appear (the escape-rate calculation in the sketch after this list is one way to track this).
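As a rough illustration of the last two points, here is a small Python sketch. The feature data, the 1–5 likelihood and impact scales, and the bug counts are made-up assumptions; the likelihood × impact score is just the common risk-matrix convention:

```python
# Hypothetical features: (name, likelihood of failure 1-5, impact if it fails 1-5).
features = [
    ("payment flow",   4, 5),
    ("search filters", 3, 2),
    ("profile page",   1, 2),
]

# Risk-matrix score: likelihood x impact. Test the riskiest areas first.
for name, likelihood, impact in sorted(features, key=lambda f: f[1] * f[2], reverse=True):
    print(f"{name}: risk score {likelihood * impact}")

# Defect escape rate: the share of all defects found only after release.
# A high rate suggests the pre-release testing was not enough.
bugs_before_release = 45   # made-up counts for the example
bugs_after_release = 5
escape_rate = bugs_after_release / (bugs_before_release + bugs_after_release)
print(f"escape rate: {escape_rate:.0%}")   # -> escape rate: 10%
```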

Prioritizing risks early helps determine how much testing is considered enough. Once you have assessed the risks, you should define the goal of the testing. When you have obtained a sufficiently high level of confidence at the different levels of testing, such as unit testing, integration testing, system testing, acceptance testing and regression testing, you are done testing.
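One way to make “a sufficiently high level of confidence” tangible is an explicit exit-criteria check per test level. The test levels come from the paragraph above; the 95% pass-rate threshold and the per-level results are illustrative assumptions, since every team has to set its own bar:

```python
# Hypothetical results per test level: (tests passed, tests run).
results = {
    "unit":        (980, 1000),
    "integration": (118, 120),
    "system":      (45, 50),
    "acceptance":  (30, 30),
    "regression":  (200, 205),
}

THRESHOLD = 0.95  # example exit criterion: 95% pass rate at every level

def ready_to_stop(results, threshold=THRESHOLD):
    """You are done testing only when every level meets its exit criterion."""
    below = {level: passed / run
             for level, (passed, run) in results.items()
             if passed / run < threshold}
    for level, rate in below.items():
        print(f"{level} is below the threshold at {rate:.1%}")
    return not below

print("release-ready:", ready_to_stop(results))
# -> system is below the threshold at 90.0%
# -> release-ready: False
```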

It is possible to do enough testing, but determining how much is enough is difficult. Simply doing what was planned is not sufficient, since it leaves open the question of how much should have been planned in the first place. Whether the testing was enough can only be confirmed by evaluating its results. If a lot of defects are found with the planned set of tests, it is likely that more tests will be required to reach the required level of software quality. On the other hand, if very few defects are found with the planned set of tests, then the planned tests were probably sufficient.

Testing should provide information to the stakeholders of the system so that they can make an informed decision about whether to release it into production or to customers. Testers are not responsible for making that decision; they are responsible for providing the information so that the decision can be made in the light of good information.

So, going back to my question: testing is done when its objectives have been achieved, and more specifically, you are done testing when:

  • You are unlikely to find additional defects.

  • You have a sufficiently high level of confidence that the software is ready to be released.

Vivek Sharma Test Manager at FINN.no

Tags: quality testing