Testing principles

Principle 1 – Testing shows the presence of defects

Testing can show that defects are present, but it cannot prove that they are absent. Testing reduces the probability of undiscovered defects remaining in the software, but even if no defects are found, this is not a proof of correctness.

Principle 2 – Exhaustive testing is impossible

Testing all combinations of inputs and preconditions is not feasible except in trivial cases. Instead of attempting exhaustive testing, risk analysis and prioritization should be used to focus testing efforts where they matter most.
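To make the scale concrete, here is a back-of-the-envelope sketch in Python (an illustration added here, not part of any standard): even a function that takes only two 32-bit integers already has more input combinations than could ever be run.

    # Why exhaustive testing is impossible in practice: count the input
    # combinations for a function of just two 32-bit integer parameters.
    combinations = (2 ** 32) ** 2            # every possible pair of inputs
    tests_per_second = 10 ** 9               # an optimistic billion tests/sec
    seconds = combinations / tests_per_second
    years = seconds / (60 * 60 * 24 * 365)
    print(f"{combinations:.3e} combinations, about {years:.0f} years to run")
    # Prints roughly 1.845e+19 combinations, about 585 years of testing,
    # and that is before preconditions or further parameters are considered.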

Principle 3 – Early testing

To find defects as early as possible, testing activities should begin as early as possible in the software or system development life cycle and should focus on defined objectives.

Principle 4 – Defect clustering

Testing effort should be concentrated in proportion to the expected, and later the observed, defect density of modules. As a rule, most of the defects found during testing, or responsible for the majority of system failures, are concentrated in a small number of modules.

Principle 5 – The Pesticide Paradox

If the same tests are run over and over, eventually that set of test cases stops finding new defects. To overcome this “pesticide paradox”, test cases must be regularly reviewed and updated, and new, diverse tests must be written to exercise different parts of the software or system and find as many defects as possible.
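As one small, hypothetical illustration of keeping a suite fresh, the pytest sketch below grows its parameter list with each review (parse_age is an invented function used only for this example):

    # Countering the pesticide paradox by diversifying test data over time.
    import pytest

    def parse_age(text: str) -> int:
        """Code under test: parse a non-negative age from a string."""
        value = int(text.strip())
        if value < 0:
            raise ValueError("age cannot be negative")
        return value

    @pytest.mark.parametrize("text, expected", [
        ("42", 42),    # the original case the suite started with
        (" 7 ", 7),    # added in a later review: surrounding whitespace
        ("0", 0),      # added in a later review: boundary value
    ])
    def test_parse_age(text, expected):
        assert parse_age(text) == expected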

Principle 6 – Testing is context dependent

Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.

Principle 7 – Absence-of-errors fallacy

Finding and fixing defects does not help if the system that has been built is unusable for its users and does not meet their expectations and needs.

Project life cycle

Software development stages are the phases a development team goes through before a program becomes available to a wide range of users. Development begins with the initial “pre-alpha” stage and continues through stages in which the product is refined and improved. The final step in this process is the release of the finished version of the software to the market (the “public release”).

The software product goes through the following stages:

  • analysis of project requirements;
  • design;
  • implementation;
  • product testing;
  • deployment and support.

Each stage of software development is assigned a sequence number, and each stage also has a name that reflects the readiness of the product at that point (pre-alpha, alpha, beta, and so on).

To better understand approaches to software testing, you first need to know what kinds and types of testing exist. Let’s start with the main types of testing, which define the high-level classification of tests.

The highest level in this hierarchy of testing approaches is the concept of a testing type, which can cover several related testing techniques at once; that is, one type of testing can encompass several kinds. To begin, consider a few types of testing that differ in whether the code of the test object is executed.

Static and dynamic testing

Static testing is a type of testing in which the program code is not executed. The testing itself can be either manual or automated.

Static testing starts early in the software life cycle and is therefore part of the verification process. In some cases it does not even require a computer, for example when reviewing requirements.

Most static techniques can be used to “test” any form of documentation: proofreading code, inspecting design documents, functional specifications, and requirements. Static testing can even be automated; for example, automatic syntax checkers can be run over program code, as in the sketch below.
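A minimal sketch of such a checker, using Python’s standard ast module (the script itself is an invented example):

    # Automated static testing: check files for syntax errors without
    # executing any of their code.
    import ast
    import sys

    def check_syntax(path: str) -> bool:
        """Parse one source file and report the first syntax error, if any."""
        with open(path, encoding="utf-8") as f:
            source = f.read()
        try:
            ast.parse(source, filename=path)  # parses, but never runs, the code
            return True
        except SyntaxError as err:
            print(f"{path}:{err.lineno}: {err.msg}")
            return False

    if __name__ == "__main__":
        results = [check_syntax(p) for p in sys.argv[1:]]
        sys.exit(0 if all(results) else 1)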

Types of static testing:

  • proofreading the source code of the program;
  • verification of requirements.

Dynamic testing is a type of testing that involves executing the program code: the behavior of the program is analyzed while it runs.

Dynamic testing requires that the code under test be written, compiled, and run. External parameters of the running program can also be checked: processor load, memory usage, response time, and so on; in other words, its performance. Dynamic testing is part of the software validation process.
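For contrast with the static example above, here is a minimal dynamic test in pytest (slow_square is an invented stand-in for real code under test): the code actually executes, and both its result and one external parameter, its response time, are asserted.

    # Dynamic testing: the code under test is executed, and its observable
    # behavior (result and response time) is checked.
    import time

    def slow_square(n: int) -> int:
        """Hypothetical code under test."""
        return n * n

    def test_slow_square_result_and_timing():
        start = time.perf_counter()
        result = slow_square(12)
        elapsed = time.perf_counter() - start
        assert result == 144    # functional behavior, observed at run time
        assert elapsed < 0.1    # a crude response-time check, in seconds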

In addition, dynamic testing includes several subtypes, distinguished by:

  • Access to the code (black-box, white-box, and gray-box testing).
  • Level of testing (unit, integration, system, and acceptance testing).
  • Area of application (functional, load, security testing, etc.).

Positive and negative testing

When testing a particular piece of functionality, or a software product as a whole, we always ask two questions: what can this program or part of the program do, and what can it not do? The main difference between positive and negative testing lies in these two questions.

Positive testing uses data or test scenarios that correspond to the normal, intended operation of the application. As you may have guessed, positive testing serves to confirm that the software product can do what it was designed to do.
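A minimal positive-test sketch in pytest (divide is an invented function used only for illustration): valid input, checked against the intended result.

    # Positive testing: data the application is designed to handle.
    def divide(a: float, b: float) -> float:
        """Hypothetical code under test."""
        return a / b

    def test_divide_valid_input():
        assert divide(10, 4) == 2.5   # the normal, intended behavior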

Negative testing is the opposite of positive testing. Its essence is having the program perform actions, or consume data, that neither the developers nor the product’s originator provided for: for example, how the program reacts when letters are entered in a numeric field.

Negative testing is used to examine aspects of application behavior such as exception handling, behavior in unplanned modes of operation, and other non-functional attributes.
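And the mirror-image negative tests for the same invented divide function (redefined here so the example stands alone): feed in input the design does not provide for, and check that the failure surfaces as an explicit, expected exception.

    # Negative testing: unplanned input, checked via exception handling.
    import pytest

    def divide(a: float, b: float) -> float:
        """Hypothetical code under test."""
        return a / b

    def test_divide_by_zero_raises():
        with pytest.raises(ZeroDivisionError):
            divide(10, 0)

    def test_letters_in_numeric_field():
        # The article's example: letters entered where a number is expected.
        with pytest.raises(TypeError):
            divide(10, "abc")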
