Software Testing
Program Testing
• Testing is intended to:
  • show that a program does what it is intended to do (i.e. conforms to its requirements), and
  • discover program defects before the program is put into use.
• When you test software, you execute a program using artificial data.
• Testing can reveal the presence of errors, NOT their absence.
• Validation means:
  • testing, and
  • inspections, such as reviews, etc.
Three types of systematic validation technique
• Static (non-execution): inspections and reviews; examination of documentation, source code listings, etc.
• Functional (black box testing): based on the behaviour / functionality of the software; applied at class, component and system levels.
• Structural (white box testing): based on the structure of the software (its code).
Black Box Testing Techniques
• Equivalence partitioning
• Boundary value analysis
• There are many other techniques….
Equivalence partitioning (EP)
• Divide (partition) the inputs, outputs, etc. into areas which are the same (equivalent).
• Assumption: if one value in a partition works, all values in it will work.
• Testing one value from each partition is better than testing all values from one partition.
• Example: for a valid input range of 1 to 100:

    invalid  |    valid    |  invalid
      ≤ 0    |   1 – 100   |  ≥ 101
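As a sketch, the three partitions above can be exercised with one representative value each; `accept_value` is a hypothetical validator standing in for the system under test, not something from the slides:

```python
# Equivalence partitioning sketch for a valid input range of 1..100.
# `accept_value` is a hypothetical validator (an assumption for illustration).
def accept_value(x):
    return 1 <= x <= 100

# One representative per partition is assumed to stand for the whole
# partition: if it passes or fails, the rest are expected to behave alike.
partitions = {
    "invalid_low": 0,     # representative of values below 1
    "valid": 50,          # representative of values 1..100
    "invalid_high": 101,  # representative of values above 100
}

for name, value in partitions.items():
    expected = (name == "valid")
    assert accept_value(value) == expected, name
```

Three test cases cover all three partitions, instead of one test per possible input value.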
Boundary value analysis (BVA) guidelines
1. For each input/output condition specifies a range of values,
write test cases for the ends of the range, and invalid-input
test cases for situations just beyond the ends.
• For instance, if the valid domain of an input value is 1.0,
write test cases for the situations 1.0, 0.999, and 1.001.
2. If an input/output condition specifies a number of values,
write test cases for the minimum and maximum number of
values and one beneath and beyond these values.
• For instance, if an input file can contain 1–255 records, write
test cases for 0, 1, 255, and 256 records.
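The record-count example from guideline 2 can be sketched as a small boundary-value check; `record_count_ok` is a hypothetical validator, not part of the slides:

```python
# Boundary value analysis for an input file holding 1..255 records.
# `record_count_ok` is a hypothetical validator (an assumption for illustration).
def record_count_ok(n):
    return 1 <= n <= 255

# Test at each end of the valid range, plus one value just outside each end.
bva_cases = {0: False, 1: True, 255: True, 256: False}
for count, expected in bva_cases.items():
    assert record_count_ok(count) == expected, count
```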
Why do both EP and BVA?
• If you test boundaries only, you have covered all the partitions as well:
  • technically correct, and may be OK if everything works correctly!
  • but if a test fails, is the whole partition wrong, or is a boundary in the wrong place? You have to test mid-partition anyway.
• Testing only extremes may not give confidence for typical use scenarios (especially for users).
• Boundary values may be harder (more costly) to set up.
Decision tables
• Explore combinations of inputs, situations or events
• Add a column to the table for each unique combination of input conditions.
• Each entry in the table is either 'T' for true or 'F' for false.

Input conditions
  Valid username     T T T T F F F F
  Valid password     T T F F T T F F
  Account in credit  T F T F T F T F
Rationalize input combinations
• Some combinations may be impossible or not of interest
• Some combinations may be ‘equivalent’
• use a hyphen to denote “don’t care”
Input conditions
  Valid username     F T T T
  Valid password     -  F T T
  Account in credit  -  -  F T
Determine test case groups
• Determine the expected output
conditions for each combination
of input conditions
• Each column is at least one test
case
Design test cases
• usually one test case for each column but can be none
or several
Test | Description                  | Expected outcome    | Tag
  1  | Username BrbU                | Invalid username    | A
  2  | Username usernametoolong     | Invalid username    | A
  3  | Username BobU, Password abcd | Invalid password    | B
  4  | Valid user, no disc space    | Restricted access   | C
  5  | Valid user with disc space   | Unrestricted access | D
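The rationalized decision table and its expected outcomes can be checked mechanically; `access_outcome` is a hypothetical stand-in for the system under test, invented for this sketch:

```python
# Hypothetical system under test, matching the slide's expected outcomes.
def access_outcome(valid_user, valid_pass, in_credit):
    if not valid_user:
        return "Invalid username"
    if not valid_pass:
        return "Invalid password"
    if not in_credit:
        return "Restricted access"
    return "Unrestricted access"

# Rationalized decision table: columns are
# (valid username, valid password, account in credit) -> expected outcome.
# None encodes the "don't care" hyphen from the slide.
table = [
    ((False, None, None),  "Invalid username"),
    ((True,  False, None), "Invalid password"),
    ((True,  True, False), "Restricted access"),
    ((True,  True, True),  "Unrestricted access"),
]

for (user, pw, credit), expected in table:
    # A "don't care" entry may take any value; pick True arbitrarily.
    actual = access_outcome(user,
                            pw if pw is not None else True,
                            credit if credit is not None else True)
    assert actual == expected
```

Each column of the table becomes at least one concrete test case, as in the table above.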
White Box Testing Techniques
• Statement testing
• Branch / Decision testing
• There are many other techniques ….
Statement coverage
• Percentage of executable statements exercised by a test suite:

  statement coverage = number of statements exercised / total number of statements

• Example:
  • program has 100 statements
  • tests exercise 87 statements
  • statement coverage = 87%
• Statement coverage is normally measured by a software tool.
Example of statement coverage

Pseudocode (statement numbers on the left):

  1  read(a)
  2  IF a > 6 THEN
  3    b = a
  4  ENDIF
  5  print b

Test case | Input | Expected output
    1     |   7   |       7

As all 5 statements are 'covered' by this test case, we have achieved 100% statement coverage.
Decision coverage (Branch coverage)
• Percentage of decision outcomes exercised by a test suite:

  decision coverage = number of decision outcomes exercised / total number of decision outcomes

• Example:
  • program has 120 decision outcomes
  • tests exercise 60 decision outcomes
  • decision coverage = 50%
• Decision coverage is normally measured by a software tool.
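Applying the same idea to the earlier five-statement example: its single decision `a > 6` has two outcomes, so the one test case with input 7 gives only 50% decision coverage, and a second test with `a <= 6` is needed for 100%. A hand-instrumented sketch:

```python
# The earlier pseudocode, instrumented for decision outcomes: `outcomes`
# records which outcomes (True/False) of the `a > 6` decision were taken.
def run(a, outcomes):
    b = None
    if a > 6:
        outcomes.add(True)
        b = a
    else:
        outcomes.add(False)
    return b

outcomes = set()
run(7, outcomes)                        # exercises the True outcome only
assert len(outcomes) / 2 * 100 == 50.0  # decision coverage = 50%
run(5, outcomes)                        # adds the False outcome
assert len(outcomes) / 2 * 100 == 100.0
```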
Decision coverage (Branch coverage)