Testing

Software companies can spend a lot of time and money developing their software.  They cannot afford to release poor quality or flawed software. It is therefore critical that software is tested thoroughly before release.

It would be simplistic to suggest that testing is necessary only to identify errors. Testing also checks that software is fit for purpose – this involves evaluating functionality and performance:

  • Functionality – Does the software include all the features that it’s supposed to?
  • Performance – Is the software fast, reliable, robust and easy to use?

Systematic Testing

Systematic testing involves the creation of a test plan which specifies the testing activities to be undertaken throughout different phases of the development cycle.

Creating a Test Plan

Shortly after the software specification is produced (early in the design phase), a test plan is written. It details every aspect of the testing activities to be undertaken during and after the development of the application. It typically includes the following (a sample entry follows the list):

  • Type of test
  • Input of test
  • Expected output
  • Actual output
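
For example, a single entry in a test plan for a hypothetical function that finds the average of three numbers might look like this (the actual output is only recorded once the test is run):

  Type of test       Normal case
  Input of test      4, 8, 6
  Expected output    6
  Actual output      (recorded during testing)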

Comprehensive Testing

Comprehensive testing ensures that all (functional) requirements of the program are tested.

It is impossible to test a piece of software under every possible condition and with every possible input. However, it is possible to devise an extensive set of test cases that is representative of all conditions.
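
As a minimal sketch in Python, assuming a hypothetical is_valid_mark function, a representative set of test cases covers normal, extreme (boundary) and exceptional inputs rather than every possible value:

    def is_valid_mark(mark):
        """Return True if mark is a whole number from 0 to 100."""
        return isinstance(mark, int) and 0 <= mark <= 100

    # A representative set of test cases, not every possible input:
    assert is_valid_mark(50)       # normal case
    assert is_valid_mark(0)        # extreme case: lower boundary
    assert is_valid_mark(100)      # extreme case: upper boundary
    assert not is_valid_mark(-1)   # exceptional case: below range
    assert not is_valid_mark(101)  # exceptional case: above range
    print("All representative tests passed")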

Debugging

No matter how careful you are when you write program code, your programs are likely to have errors, or bugs, that prevent them from running the way you intended. Debugging is the process of locating and fixing errors in programs.

Program errors can be classified in the following way (each is illustrated in the sketch after this list):

  • Syntax errors – the code breaks the grammar rules of the language, so it will not translate or run
  • Execution errors – the code is valid, but fails while the program is running (for example, dividing by zero)
  • Logic errors – the program runs without complaint but produces incorrect results
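
As a brief illustration in Python (the faulty lines are left as comments so the file still runs):

    # Syntax error – rejected by the translator before the program runs:
    #     print("hello"           # missing closing bracket

    # Execution error – valid syntax, but the program fails while running:
    #     result = 10 / 0         # raises ZeroDivisionError

    # Logic error – the program runs but gives the wrong answer:
    def average(a, b):
        return a + b / 2           # bug: should be (a + b) / 2

    print(average(4, 8))           # prints 8.0, not the intended 6.0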


Debugging Tools

There are a variety of techniques used to identify execution and logic errors.  Some are manual methods used to check the design, while others use debugging tools available within the software development environment.

Manual Methods: Dry Runs and Trace Tables

Dry run testing is usually a ‘paper and pencil’ exercise carried out to identify logic errors.  The process involves manually stepping through an algorithm using purposely chosen sample data to record the values of variables.

Trace tables are frequently used when conducting dry runs on program components. A trace table records the value of each variable at chosen points in the program's flow. Trace tables are particularly useful for seeing how variables change through the iterations of a loop.
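
For example, a dry run of the short loop below (written here in Python, though any language would do) can be recorded in a trace table:

    total = 0
    for counter in range(1, 4):    # counter takes the values 1, 2, 3
        total = total + counter

  Statement executed    counter    total
  total = 0             -          0
  1st pass of loop      1          1
  2nd pass of loop      2          3
  3rd pass of loop      3          6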

Environment Debugging Tools: Trace Facility, Breakpoints and Watchpoints

Trace Facility

A trace facility reports each line of the program as it is executed, often together with the current values of variables, so that the actual flow of the program can be compared with the expected flow.

Breakpoints

A breakpoint halts execution of the code at a predefined point; the values of variables can then be inspected and compared with trace table values or expected values.
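
As a minimal sketch, assuming Python: the built-in breakpoint() call (Python 3.7 onwards) halts the program and drops into the pdb debugger, where variables such as total can be inspected before execution continues:

    def average(numbers):
        total = 0
        for n in numbers:
            total = total + n
        breakpoint()    # execution halts here; type 'p total' in pdb to inspect
        return total / len(numbers)

    print(average([4, 8, 6]))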

Watchpoints

Watchpoints are used to stop execution when the value of a specific variable changes or a pre-determined condition is met. This allows the programmer to compare the value with the expected value.
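
Debugger support for watchpoints varies between environments. As a rough stand-in written in plain Python (an emulation of the idea, not a real debugger feature), a property can report every change to a value:

    class Watched:
        """Reports each change to its value, emulating a watchpoint."""
        def __init__(self, value):
            self._value = value

        @property
        def value(self):
            return self._value

        @value.setter
        def value(self, new_value):
            print(f"watch: value changed from {self._value!r} to {new_value!r}")
            self._value = new_value

    total = Watched(0)
    for n in [5, 10, 15]:
        total.value = total.value + n   # each assignment is reported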