Unit III
Software Testing
1
Testing
• Software Testing is the evaluation of the software against requirements gathered from users and system specifications.
• Testing is conducted at the phase level in the software development life cycle or at the module level in program code.
• Software testing comprises Verification and Validation.
2
Software Verification
• Verification is the process of confirming that the software meets the business requirements and is developed adhering to the proper specifications and methodologies.
• Verification ensures the product being developed is according to design specifications.
• Verification answers the question: "Are we developing this product by firmly following all design specifications?"
• Verification concentrates on the design and system specifications.
3
Software Validation
• Validation is the process of examining whether or not the software satisfies the user requirements. It is carried out at the end of the SDLC. If the software matches the requirements for which it was made, it is validated.
• Validation ensures the product under development is as per the user requirements.
• Validation answers the question: "Are we developing the product that does everything the user needs from this software?"
• Validation emphasizes user requirements.
4
Verification Vs Validation
VERIFICATION | VALIDATION
It includes checking documents, design, code and programs. | It includes testing and validating the actual product.
Verification is static testing. | Validation is dynamic testing.
It does not include the execution of the code. | It includes the execution of the code.
Methods used in verification are reviews, walkthroughs, inspections and desk-checking. | Methods used in validation are Black Box Testing.
It checks whether the software conforms to specifications or not. | It checks whether the software meets the requirements and expectations of a customer or not.
It can find bugs in the early stage of development. | It can only find the bugs that could not be found by the verification process.
The goal of verification is the application and software architecture and specification. | The goal of validation is the actual product.
The quality assurance team does verification. | Validation is executed on software code with the help of the testing team.
It comes before validation. | It comes after verification.
5
Targets of the Test
• Errors - These are actual coding mistakes made by developers. In addition, a difference between the output of the software and the desired output is considered an error.
• Fault - A fault occurs when an error exists. A fault, also known as a bug, is the result of an error and can cause the system to fail.
• Failure - Failure is the inability of the system to perform the desired task. A failure occurs when a fault exists in the system.
6
Manual Vs Automated Testing
• Testing can either be done manually or using an automated testing tool:
• Manual - This testing is performed without taking the help of automated testing tools. The software tester prepares test cases for different sections and levels of the code, executes the tests and reports the results to the manager.
• Manual testing is time and resource consuming. The tester needs to confirm whether or not the right test cases are used. A major portion of testing involves manual testing.
• Automated - This testing is a testing procedure done with the aid of automated testing tools. The limitations of manual testing can be overcome using automated test tools.
7
Testing Approaches
• Tests can be conducted based on two approaches –
• Functionality testing
• Implementation testing
• When functionality is tested without taking the actual implementation into consideration, it is known as black-box testing. The other side is known as white-box testing, where not only the functionality is tested but the way it is implemented is also analyzed.
• Exhaustive testing is the ideal method for perfect testing: every single possible value in the range of the input and output values is tested. However, it is not possible to test each and every value in a real-world scenario if the range of values is large.
8
Black Box Testing
• It is carried out to test the functionality of the program. It is also called 'Behavioral' testing. The tester in this case has a set of input values and respective desired results. On providing input, if the output matches the desired results, the program is considered 'ok', and problematic otherwise.
• Black-box testing techniques:
• Equivalence class - The input is divided into similar classes. If one element of a class passes the test, the whole class is assumed to pass.
• Boundary values - The input is divided into higher and lower end values. If these values pass the test, it is assumed that all values in between may
pass too.
• Cause-effect graphing - In both previous methods, only one input value at a time is tested. Cause (input) – Effect (output) is a testing technique
where combinations of input values are tested in a systematic way.
• Pair-wise Testing - The behavior of software depends on multiple parameters. In pairwise testing, the multiple parameters are tested pair-wise for their
different values.
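To make the first two techniques concrete, here is a minimal Python sketch. The `grade` function and its pass mark of 40 are hypothetical, invented purely for illustration; in real black-box testing the tester would see only the specification, not the function body.

```python
# Hypothetical function under test: maps a score in 0-100 to "pass"/"fail"
# with an assumed pass mark of 40 (an invented specification).
def grade(score):
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 40 else "fail"

# Equivalence classes: failing scores (0-39) and passing scores (40-100).
# One representative per class is assumed to stand for the whole class.
equivalence_cases = {20: "fail", 70: "pass"}

# Boundary values: test at and around the partition edges.
boundary_cases = {0: "fail", 39: "fail", 40: "pass", 100: "pass"}

for score, expected in {**equivalence_cases, **boundary_cases}.items():
    assert grade(score) == expected
```

Note how the boundary cases 39 and 40 sit on either side of the partition edge, where off-by-one defects are most likely to hide.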
9
White Box Testing
• It is conducted to test the program and its implementation, in order to improve code efficiency or structure. It is also known as 'Structural' testing. In this testing method, the design and structure of the code are known to the tester. Programmers of the code conduct this test on the code.
• The below are some White-box testing techniques:
• Control-flow testing - The purpose of control-flow testing is to set up test cases which cover all statements and branch conditions. The branch conditions are tested for both being true and false, so that all statements can be covered.
• Data-flow testing - This testing technique emphasizes covering all the data variables included in the program. It tests where the variables were declared and defined and where they were used or changed.
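As a sketch of control-flow testing, the hypothetical function below has three statements reachable through different branch outcomes; the test cases are chosen so that every branch condition is exercised as both true and false, which covers all statements.

```python
# Hypothetical function under test: classify a triangle by its side lengths.
def triangle_type(a, b, c):
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Control-flow test cases: each branch condition is made true and false,
# so all three return statements are executed at least once.
branch_cases = [
    ((3, 3, 3), "equilateral"),  # first condition true
    ((3, 3, 5), "isosceles"),    # first false, second true
    ((3, 4, 5), "scalene"),      # both conditions false
]
for sides, expected in branch_cases:
    assert triangle_type(*sides) == expected
```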
10
Testing Levels
• Testing itself may be defined at various levels of the SDLC. The testing process runs parallel to software development. Before jumping to the next stage, a stage is tested, validated and verified.
• Testing separately is done just to make sure that there are no hidden bugs or issues left in the software. Software is tested at various levels -
• Unit Testing
While coding, the programmer performs some tests on that unit of the program to know whether it is error free. Testing is performed under the white-box testing approach. Unit testing helps developers decide whether individual units of the program are working as per requirement and are error free.
• Integration Testing
Even if the units of software are working fine individually, there is a need to find out whether the units, when integrated together, would also work without errors; for example, argument passing and data updates.
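The two levels can be sketched in Python. Both units (`parse_price` and `total_cents`) are hypothetical examples, not from any real system: the unit test exercises one function in isolation, while the integration test checks argument passing between the combined units.

```python
# Two hypothetical units: parse_price converts a price string to cents,
# and total_cents integrates it with a summation step.
def parse_price(text):
    dollars, cents = text.lstrip("$").split(".")
    return int(dollars) * 100 + int(cents)

def total_cents(prices):
    return sum(parse_price(p) for p in prices)

# Unit test: checks one unit in isolation.
def test_parse_price_unit():
    assert parse_price("$2.50") == 250
    assert parse_price("$0.05") == 5

# Integration test: checks argument passing between the two units
# once they are combined.
def test_total_integration():
    assert total_cents(["$1.00", "$2.50"]) == 350

test_parse_price_unit()
test_total_integration()
```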
11
• System Testing
• The software is compiled as a product and then tested as a whole. This can be accomplished using one or more of the following tests:
• Functionality testing - Tests all functionalities of the software against the requirement.
• Performance testing - This test proves how efficient the software is. It tests the effectiveness and the average time taken by the software to do the desired task. Performance testing is done by means of load testing and stress testing, where the software is put under high user and data load under various environmental conditions.
• Security & Portability - These tests are done when the software is meant to work on various platforms and be accessed by a number of users.
12
• Acceptance Testing
• When the software is ready to hand over to the customer, it has to go through the last phase of testing, where it is tested for user interaction and response. This is important because even if the software matches all user requirements, if the user does not like the way it appears or works, it may be rejected.
• Alpha testing - The team of developers themselves perform alpha testing by using the system as if it is being used in the work environment. They try to find out how a user would react to some action in the software and how the system should respond to inputs.
• Beta testing - After the software is tested internally, it is handed over to the users to use under their production environment, only for testing purposes. This is not yet the delivered product. Developers expect that users at this stage will surface minor problems that were previously overlooked.
• Regression Testing
• Whenever a software product is updated with new code, feature or functionality, it is tested thoroughly to detect if there is any negative
impact of the added code. This is known as regression testing.
13
Top Down and Bottom Up Testing
• In Top Down Integration Testing, testing takes place from top to bottom. High-level modules are tested first, then low-level modules, and finally the low-level modules are integrated with the high-level ones to ensure the system is working as intended. In this type of testing, stubs are used as temporary modules when a module is not ready for integration testing.
• Bottom Up Integration Testing is the reverse of the top-down approach. Testing takes place from the bottom up: the lowest-level modules are tested first, then the high-level modules, and finally the high-level modules are integrated with the low-level ones to ensure the system is working as intended. Drivers are used as temporary modules for integration testing.
14
Test Stubs and Drivers
• What is a Stub?
• A stub is a dummy program that is called by the Module under Test.
• What is a Driver?
• A driver is a dummy program that calls the Module under Test.
• These terms (stub & driver) come into the picture while doing Integration Testing. While working on integration, we sometimes face a situation where some of the functionalities are still under development. The functionalities which are under development are replaced with dummy programs. These dummy programs are named Stubs or Drivers.
15
Difference between Stubs and Drivers
STUBS | DRIVERS
Stubs are used in Top Down Integration Testing. | Drivers are used in Bottom Up Integration Testing.
Stubs are used when sub-programs are under development. | Drivers are used when main programs are under development.
The topmost module is tested first. | The lowest module is tested first.
A stub can simulate the behavior of lower-level modules that are not integrated. | A driver can simulate the behavior of upper-level modules that are not integrated.
Stubs are the called programs. | Drivers are the calling programs.
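A minimal Python sketch of both dummy programs (all class and function names here are invented for illustration): the stub stands in for a lower-level database module that is still under development, while the driver is a small calling program that exercises the module under test because the real top-level caller does not exist yet.

```python
# Stub: a dummy lower-level module that is CALLED by the module under test.
class DatabaseStub:
    # Returns canned data instead of querying a real, unfinished database.
    def fetch_sales(self):
        return [100, 250, 50]

# Module under test: depends on the (stubbed) lower-level module.
class ReportModule:
    def __init__(self, db):
        self.db = db
    def total_sales(self):
        return sum(self.db.fetch_sales())

# Driver: a dummy CALLING program that exercises ReportModule while the
# real top-level caller is still under development.
def driver():
    report = ReportModule(DatabaseStub())
    assert report.total_sales() == 400
    return "ok"

driver()
```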
16
Difference between Unit and Integration Testing
UNIT TESTING | INTEGRATION TESTING
Unit testing is the first level of testing in Software Testing. | Integration Testing is the second level of testing in Software Testing.
Considers each component as a single system. | Integrated components are seen as a single system.
The purpose is to test the working of an individual unit. | The purpose is to test the integration of multiple unit modules.
It evaluates each component or unit of the software product. | It examines the proper working, interface and reliability after the integration of the modules, along with the external interfaces and system.
The scope of unit testing is limited to a particular unit under test. | The scope of integration testing is wider in comparison to unit testing; it covers two or more modules.
It has no further types. | It is divided into the following approaches: bottom-up integration, top-down integration, Big Bang, and hybrid.
It is also known as Component Testing. | It is also known as I&T or String Testing.
It is performed at the code level. | It is performed at the communication level.
It is carried out with the help of reusable test cases. | It is carried out with the help of stubs and drivers.
It comes under White Box Testing. | It comes under both Black Box and White Box Testing.
It is performed by developers. | It is performed by either testers or developers.
17
INTEGRATION TESTING | SYSTEM TESTING
It is a low-level testing. | It is a high-level testing.
It is followed by System Testing. | It is followed by Acceptance Testing.
It is performed after unit testing. | It is performed after integration testing.
Different types of integration testing are: top-down, bottom-up, big bang, and sandwich integration testing. | Different types of system testing are: regression, sanity, usability, retesting, load, performance, and maintenance testing.
Testers perform functional testing to validate the interaction of two modules. | Testers perform both functional and non-functional testing to evaluate functionality, usability, performance, etc.
Performed to test whether two different modules interact effectively with each other or not. | Performed to test whether the product performs as per user expectations and the required specifications.
It can be performed by both testers and developers. | It is performed by testers.
Testing takes place on the interface of two individual modules. | Testing takes place on the complete software application.
18
Path Testing
• Path Testing is a method that is used to design the test cases. In path testing method, the control flow graph of a
program is designed to find a set of linearly independent paths of execution. In this method Cyclomatic
Complexity is used to determine the number of linearly independent paths and then test cases are generated for
each path.
• It gives complete branch coverage but achieves that without covering all possible paths of the control flow graph. McCabe's Cyclomatic Complexity is used in path testing. It is a structural testing method that uses the source code of a program to find every possible executable path.
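A small sketch of the counting step: for a connected control flow graph, McCabe's Cyclomatic Complexity is V(G) = E - N + 2, and path testing then needs that many linearly independent paths. The graph below is a hypothetical one for a single if/else.

```python
# Cyclomatic complexity from a control flow graph: V(G) = E - N + 2,
# where E = number of edges and N = number of nodes (connected graph).
# Hypothetical graph for:
#   if a > 0:     # node 1 branches to nodes 2 and 3
#       x = 1     # node 2
#   else:
#       x = -1    # node 3
#   return x      # node 4
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
nodes = {n for e in edges for n in e}

def cyclomatic_complexity(edges, nodes):
    return len(edges) - len(nodes) + 2

# V(G) = 4 - 4 + 2 = 2, so two linearly independent paths are needed:
# 1-2-4 and 1-3-4.
assert cyclomatic_complexity(edges, nodes) == 2
```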
19
Software Maintenance
• Software maintenance is a widely accepted part of the SDLC nowadays. It stands for all the modifications and updates done after the delivery of a software product. There are a number of reasons why modifications are required, some of which are briefly mentioned below:
• Market Conditions - Policies that change over time, such as taxation, and newly introduced constraints, such as how to maintain bookkeeping, may trigger the need for modification.
• Client Requirements - Over time, the customer may ask for new features or functions in the software.
• Host Modifications - If any of the hardware and/or platform (such as the operating system) of the target host changes, software changes are needed to maintain adaptability.
• Organization Changes - If there is any business-level change at the client end, such as a reduction in organization strength, acquiring another company, or the organization venturing into a new business, the need to modify the original software may arise.
20
Types
• In a software lifetime, the type of maintenance may vary based on its nature. It may be just a routine maintenance task, such as a bug discovered by some user, or it may be a large event in itself, based on maintenance size or nature. Following are some types of maintenance based on their characteristics:
• Corrective Maintenance - This includes modifications and updates done in order to correct or fix problems, which are either discovered by the user or concluded from user error reports.
• Adaptive Maintenance - This includes modifications and updates applied to keep the software product up to date and tuned to the ever-changing world of technology and business environments.
• Perfective Maintenance - This includes modifications and updates done in order to keep the software usable over a long period of time. It includes new features and new user requirements for refining the software and improving its reliability and performance.
• Preventive Maintenance - This includes modifications and updates to prevent future problems in the software. It aims to attend to problems which are not significant at this moment but may cause serious issues in the future.
21
Maintenance Activities
• IEEE provides a framework for sequential maintenance process activities. It can be used in iterative manner and
can be extended so that customized items and processes can be included.
22
• Identification & Tracing - It involves activities pertaining to identifying the requirement for modification or maintenance. The requirement may be generated by a user, or the system itself may report it via logs or error messages. The maintenance type is also classified here.
• Analysis - The modification is analyzed for its impact on the system including safety and security implications. If probable impact is severe, alternative
solution is looked for. A set of required modifications is then materialized into requirement specifications. The cost of modification/maintenance is
analyzed and estimation is concluded.
• Design - New modules, which need to be replaced or modified, are designed against requirement specifications set in the previous stage. Test cases are
created for validation and verification.
• Implementation - The new modules are coded with the help of the structured design created in the design step. Every programmer is expected to do unit testing in parallel.
• System Testing - Integration testing is done among the newly created modules. Integration testing is also carried out between the new modules and the system. Finally, the system is tested as a whole, following regression testing procedures.
• Acceptance Testing - After testing the system internally, it is tested for acceptance with the help of users. If at this stage users complain of some issues, they are addressed or noted to be addressed in the next iteration.
• Delivery - After the acceptance test, the system is deployed all over the organization, either by a small update package or a fresh installation of the system. The final testing takes place at the client end after the software is delivered.
• Training facility is provided if required, in addition to the hard copy of user manual.
• Maintenance management - Configuration management is an essential part of system maintenance. It is aided by version control tools to manage versions, semi-versions or patch management.
23
COCOMO Model
• Constructive Cost Model (COCOMO) is a regression model based on LOC, i.e. the number of Lines of Code. It is a procedural cost estimation model for software projects and is often used as a process for reliably predicting the various parameters associated with making a project, such as size, effort, cost, time and quality.
• It was proposed by Barry Boehm in 1981 and is based on the study of 63 projects, which makes it one of the best-documented models.
• The key parameters which define the quality of any software products, which are also an outcome of the COCOMO are
primarily Effort & Schedule:
• Effort: Amount of labour that will be required to complete a task. It is measured in person-months units.
• Schedule: Simply means the amount of time required for the completion of the job, which is, of course, proportional to the
effort put. It is measured in the units of time such as weeks, months.
24
Types
• COCOMO consists of a hierarchy of three increasingly detailed and accurate forms. Any of the three forms can
be adopted according to our requirements. These are types of COCOMO model:
– Basic COCOMO Model
– Intermediate COCOMO Model
– Detailed COCOMO Model
25
Types of Systems
• Organic – A software project is said to be an organic type if the team size required is adequately small, the problem is well understood
and has been solved in the past and also the team members have a nominal experience regarding the problem.
• Semi-detached – A software project is said to be a Semi-detached type if the vital characteristics such as team-size, experience,
knowledge of the various programming environment lie in between that of organic and Embedded. The projects classified as Semi-
Detached are comparatively less familiar and more difficult to develop than organic ones, and require more experience, better guidance and creativity. E.g., compilers or different embedded systems can be considered of the Semi-Detached type.
• Embedded – A software project requiring the highest level of complexity, creativity, and experience falls under this category. Such software requires a larger team size than the other two models, and the developers need to be sufficiently experienced and creative to develop such complex models.
26
Basic COCOMO Model
The first level, Basic COCOMO can be used for quick and slightly rough calculations of Software Costs.
Its accuracy is somewhat restricted due to the absence of sufficient factor considerations.
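Basic COCOMO estimates effort as E = a(KLOC)^b person-months and development time as D = c(E)^d months, with coefficients fixed per system type. Below is a sketch using Boehm's published Basic-model coefficients; the 32 KLOC project size is a made-up example.

```python
# Basic COCOMO: Effort E = a * (KLOC)^b person-months,
# Development time D = c * (E)^d months.
# Coefficients (a, b, c, d) are Boehm's Basic-model values per system type.
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b      # person-months
    time = c * effort ** d      # months
    return effort, time

# Hypothetical 32 KLOC organic project: roughly 91 PM over about 14 months.
effort, time = basic_cocomo(32, "organic")
print(f"Effort = {effort:.1f} PM, Time = {time:.1f} months")
```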
27
Intermediate Model
• The basic Cocomo model assumes that the effort is only a function of the number of lines of code and some constants evaluated according to
the different software system.
• However, in reality, no system's effort and schedule can be calculated solely on the basis of lines of code. Various other factors, such as reliability, experience and capability, must also be considered. These factors are known as Cost Drivers, and the Intermediate Model utilizes 15 such drivers for cost estimation.
• E = a(KLOC)^b × EAF, where E is the effort applied in person-months,
• and EAF (Effort Adjustment Factor) is the product of the ratings assigned to the 15 cost drivers.
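The intermediate estimate can be sketched as follows. The coefficients are Boehm's Intermediate-model values; the two driver ratings used (required reliability rated high = 1.15, analyst capability rated high = 0.86) are example choices for illustration, not a full table of all 15 drivers.

```python
# Intermediate COCOMO sketch: E = a * (KLOC)^b * EAF, where EAF is the
# product of the selected cost driver ratings (nominal ratings are 1.00
# and so drop out of the product).
INTERMEDIATE_COEFFS = {
    "organic":       (3.2, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (2.8, 1.20),
}

def intermediate_effort(kloc, mode, driver_ratings):
    a, b = INTERMEDIATE_COEFFS[mode]
    eaf = 1.0
    for rating in driver_ratings:   # EAF = product of cost driver ratings
        eaf *= rating
    return a * kloc ** b * eaf      # person-months

# Example: 10 KLOC organic project, high required reliability (1.15),
# high analyst capability (0.86), all other drivers nominal (1.00).
effort = intermediate_effort(10, "organic", [1.15, 0.86])
```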
28
Detailed Model
Detailed COCOMO incorporates all characteristics of the intermediate version with an assessment of the cost driver’s
impact on each step of the software engineering process. The detailed model uses different effort multipliers for each
cost driver attribute. In detailed COCOMO, the whole software is divided into different modules, and then COCOMO is applied to each module to estimate effort; the module efforts are then summed.
• The Six phases of detailed COCOMO are:
• Planning and requirements
• System design
• Detailed design
• Module code and test
• Integration and test
• Cost Constructive model
• The effort is calculated as a function of program size and a set of cost drivers are given according to each phase of
the software lifecycle.
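The module-wise summation can be sketched as below. For simplicity, the per-module estimate here reuses the Basic-model organic coefficients; the real detailed model would instead apply phase-sensitive effort multipliers for each cost driver. The module names and sizes are hypothetical.

```python
# Detailed-COCOMO-style sketch: estimate effort per module and sum.
# The a, b defaults are the Basic-model organic coefficients, used here
# only for illustration of the module-wise decomposition.
def module_effort(kloc, a=2.4, b=1.05):
    return a * kloc ** b        # person-months for one module

# Hypothetical module breakdown, sizes in KLOC.
modules = {"ui": 4.0, "db": 6.0, "reports": 2.0}

total = sum(module_effort(k) for k in modules.values())
```

Note that because b > 1, estimating modules separately and summing gives a smaller total than applying the formula to the combined 12 KLOC at once, which is one reason the decomposition matters.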
29
30
Software Re-Engineering
• Software Re-engineering is a process of software development which is done to
improve the maintainability of a software system. Re-engineering is the
examination and alteration of a system to reconstitute it in a new form. This
process encompasses a combination of sub-processes like reverse engineering,
forward engineering, reconstructing etc.
• Reduced Risk:
• As the software already exists, the risk is lower as compared to new software development. Development problems, staffing problems and specification problems are among the many problems which may arise in new software development.
• Reduced Cost:
• The cost of re-engineering is less than the costs of developing new software.
31
Reverse Engineering
• Software Reverse Engineering is a process of recovering
the design, requirement specifications and functions of a
product from an analysis of its code. It builds a program
database and generates information from this.
• The purpose of reverse engineering is to facilitate the
maintenance work by improving the understandability of
a system and to produce the necessary documents for a
legacy system.
32
Software Quality Factors
McCall’s Factor Model
• This model classifies all software requirements into 11 software quality factors. The 11 factors are grouped into
three categories – product operation, product revision, and product transition factors.
• Product operation factors − Correctness, Reliability, Efficiency, Integrity, Usability.
• Product revision factors − Maintainability, Flexibility, Testability.
• Product transition factors − Portability, Reusability, Interoperability.
33
Product operation factors
• Reliability
• Reliability requirements deal with service failure. They determine the maximum allowed failure rate of the software system, and can refer to the entire
system or to one or more of its separate functions.
• Efficiency
• It deals with the hardware resources needed to perform the different functions of the software system. It includes processing capabilities (given in
MHz), its storage capacity (given in MB or GB) and the data communication capability (given in MBPS or GBPS).
• It also deals with the time between recharging of the system’s portable units, such as, information system units located in portable computers, or
meteorological units placed outdoors.
• Integrity
• This factor deals with software system security, that is, preventing access by unauthorized persons and distinguishing between the groups of people to be given read as well as write permissions.
• Usability
• Usability requirements deal with the staff resources needed to train a new employee and to operate the software system.
34
Product revision factors
• Maintainability
• This factor considers the efforts that will be needed by users and maintenance personnel to identify the reasons for software failures, to correct the
failures, and to verify the success of the corrections.
• Flexibility
• This factor deals with the capabilities and efforts required to support adaptive maintenance activities of the software. These include adapting the
current software to additional circumstances and customers without changing the software. This factor’s requirements also support perfective
maintenance activities, such as changes and additions to the software in order to improve its service and to adapt it to changes in the firm’s technical
or commercial environment.
• Testability
• Testability requirements deal with the testing of the software system as well as with its operation. It includes predefined intermediate results, log files,
and also the automatic diagnostics performed by the software system prior to starting the system, to find out whether all components of the system
are in working order and to obtain a report about the detected faults. Another type of these requirements deals with automatic diagnostic checks
applied by the maintenance technicians to detect the causes of software failures.
35
Product transition factors
• Portability
• Portability requirements tend to the adaptation of a software system to other environments consisting of different hardware, different operating systems, and so forth. It should be possible to continue using the same basic software in diverse situations.
• Reusability
• This factor deals with the use of software modules originally designed for one project in a new software project currently being
developed. They may also enable future projects to make use of a given module or a group of modules of the currently developed
software. The reuse of software is expected to save development resources, shorten the development period, and provide higher quality
modules.
• Interoperability
• Interoperability requirements focus on creating interfaces with other software systems or with other equipment firmware. For example,
the firmware of the production machinery and testing equipment interfaces with the production control software.
36
Quality Assurance and Control
Quality Assurance | Quality Control | Testing
QA includes activities that ensure the implementation of processes, procedures and standards in the context of verification of developed software and intended requirements. | It includes activities that ensure the verification of developed software with respect to documented (or not, in some cases) requirements. | It includes activities that ensure the identification of bugs/errors/defects in software.
Focuses on processes and procedures rather than conducting actual testing on the system. | Focuses on actual testing by executing the software with an aim to identify bugs/defects through the implementation of procedures and processes. | Focuses on actual testing.
Process-oriented activities. | Product-oriented activities. | Product-oriented activities.
Preventive activities. | It is a corrective process. | It is a preventive process.
It is a subset of the Software Test Life Cycle (STLC). | QC can be considered a subset of Quality Assurance. | Testing is a subset of Quality Control.
37
THANK YOU
38