
Project Name

Test Strategy
Version: 1.0

Date: 22-07-2022
Version Control

Revision History

Date       | Version | Description           | Author | Reviewed by | Approved by
22-07-2022 | 1.0     | Initial Test Strategy |        |             |

Document Review

Name | Title     | Document Version | Signature | Date
     | Test Lead | 1.0              |           | 22-07-2022

Sign-off Notes

Name | Title              | Document Version | Signature | Date
     | Project Manager/BU | 1.0              |           |
Index

Version Control

1. Purpose
2. Objective
3. Scope
   3.1 In Scope
   3.2 Out Of Scope
   3.3 Assumptions
   3.4 Dependencies
4. Test Methodology and Approach
   4.1 Test Stages
       4.1.1 Test Planning
       4.1.2 Test Preparation
       4.1.3 Test Execution
       4.1.4 Test Closure
   4.2 Testing Types and Techniques
       4.2.1 Functional Testing
       4.2.2 User Interface Testing
   4.3 Levels of Testing
       4.3.1 Smoke Testing
       4.3.2 Component Testing
       4.3.3 Integration Testing
       4.3.4 System Testing
       4.3.5 Regression Testing
   4.4 Test Reporting
       4.4.1 Test Execution Reporting
       4.4.2 Defect Reporting
       4.4.3 Defect Life Cycle
       4.4.4 Defect Severity and Priority
5. Test Estimation
6. Test Schedule
7. Roles and Responsibilities
8. Entry and Exit Criteria
9. Suspension Criteria
10. Resumption Criteria
11. Risks and Mitigation Plan
12. Test Tools
13. Test Deliverables
14. Meetings
15. Escalation Matrix


1. Purpose
This document sets out the overall testing strategy for the <project name> Release. It covers the main aspects of the testing to be undertaken for functional testing and its deployment.

For each type of testing, this document describes:

● The key objectives of the testing

● The testing approach and overall testing stages

● The key roles and responsibilities associated with testing

● The required environment and the testing tools to be used

● The defect management process

● Risks, assumptions, issues and dependencies

This document does not provide a detailed plan for each type of testing, as the steps involved in each type will vary. Instead, it provides a framework of tasks and activities for the completion of each type of testing.

This document is intended as a reference for project team members, including the Project Management Team, Development Team, Testing Team, Business workstream and Service Transition Team.

This is a ‘working’ document and will continue to be updated to reflect any changes in the current strategy and approach. <name> will publish updated versions of this document as and when agreed changes arise.

2. Objective
The objective of the <project name> Release test plan is to define the testing strategies and testing tools used across the complete testing life cycle of this project. It describes the plan for testing the architectural prototype of the <project name> Release Application System. This Test Plan document supports the following objectives:

● Identify existing project information and the software that should be tested

● List the recommended test requirements (high level)

● Recommend and describe the testing strategies to be employed

● Identify the required resources and provide an estimate of the test effort

● List the deliverables of the test activities

3. Scope
All functional and UI requirements documented in the finalized FRD are in scope.

3.1 In Scope
The following components and features will be tested in this release, as listed in the signed-off scope document for the <project name> Release:

1. Component: SF Application

   a. All the UI/UX requirements as scoped
   b. All the functional requirements as scoped

2. Component: APK (Android)

   a. All the UI/UX requirements as scoped
   b. All the functional requirements as scoped

3.2 Out Of Scope

The following items are not part of the current scope and/or are considered deferred:

1. Any other product feature/functionality that is not in the scope of the <project name> Release will not be considered part of this application testing.

2. Defects/issues that are raised as part of product testing and that do not impact the current scope of testing will not be included/targeted for QA sign-off.

3. Third-party interface testing (TAS integration, etc.)

4. All de-scoped functional and non-functional requirements are considered out of scope.

5. Mobiles/devices/roles not mentioned in the “In Scope” section will be out of scope for this project.

6. Compatibility testing, performance testing and vulnerability testing


3.3 Assumptions
The following are the assumptions of the <project name> Release for the testing life cycle:

S No | Assumption | Confirmed By | Date
1 | Requirement changes will be communicated by the project manager as soon as the decision is taken | Project Manager | 22-07-2022
2 | Scope and schedule changes will be communicated by the project manager as soon as the decision is taken | Project Manager | 22-07-2022
3 | Code delay against the agreed code delivery schedule should be communicated by the development lead | Project Manager | 22-07-2022
4 | BA will respond to queries on requirements within 1–2 days of the queries being raised | Project Manager | 22-07-2022
5 | All necessary requirements KT should be provided by the BA | Project Manager | 22-07-2022
6 | Releases to QA are unit tested and code reviewed completely (well within the development phase/environment) | Project Manager | 22-07-2022
7 | Releases to QA shall be as per the planned delivery dates | Project Manager | 22-07-2022
8 | Hierarchy and other configuration setup to be created in the application by the <name> DEV team for testing; necessary application access, data upload layouts or utilities to be provided to the <name> QA team for data creation and verification | Project Manager | 22-07-2022
9 | Environment problems should be addressed immediately if raised by any team | Project Manager | 22-07-2022
10 | Data backup should be taken and loaded into the system again in case of data loss (by the Dev team) | Project Manager | 22-07-2022
11 | Mobility and <project name> QA environments should be connected and in sync throughout the course of QA | Project Manager | 22-07-2022
12 | All the teams should provide the required data and help for end-to-end testing | Project Manager | 22-07-2022
13 | Additional QA effort and time will be provided in case of any change/addition to requirements, through a structured effort estimation process | Project Manager | 22-07-2022

3.4 Dependencies
The following are the dependencies of the <project name> Release for the Testing Life Cycle

1. Test environment should have all the required configurations

2. Test environment should be preloaded with valid master data

3. Any change in the functionality / scope during the testing phase is communicated at the right time
4. Test Methodology and Approach
The Test Strategy presents the recommended approach to the testing of the <project name> Release

application. The previous section on Test Scope described what will be tested; this describes how it will be

tested.

1. The main considerations for the test strategy are the techniques to be used and the criteria for knowing when the testing is complete.

2. In addition to the considerations provided for each test below, testing should only be executed using

known, controlled databases, in secured environments.

3. The following test strategy is generic in nature and is meant to apply to the requirements listed in

Section 3 of this document.

4.1 Test Stages


The build will include four stages: Test Planning, Test Preparation, Test Execution and Test Closure. The testers will prepare a set of test cases for each build based on the requirement document.

Each stage is defined in more detail in the following subsections. Every defect discovered while testing must be raised, prioritized and assigned to the appropriate resolver; defect management is also described in more detail in the subsections below. User acceptance tests will be conducted by agreed client representatives during the build, in order to ensure that all agreed functionality has been correctly implemented in the system.

4.1.1 Test Planning


There are four stages to testing that will be delivered

4.1.2 Test Preparation


The key activities during the test preparation stage are:
4.1.3 Test Execution
The purpose of the test execution process area is to test the code and configuration against the baselined

requirements

4.1.4 Test Closure


The key activities for test stage closure are:

4.2 Testing Types and Techniques


4.2.1 Functional Testing
Testing of the application should focus on target requirements that can be traced directly to business functions and business rules. The goal of these tests is to verify proper data acceptance, processing and retrieval, and the appropriate implementation of the business rules. This type of testing is based on black box techniques, that is, verifying the application (and its internal processes) by interacting with the application via the GUI and analyzing the output (results). The following outlines the testing recommended for each application:

a. Ensure proper application navigation, data entry, processing and retrieval

b. Ensure database access methods and processes function properly and without data corruption

4.2.2 User Interface Testing


User Interface testing verifies a user’s interaction with the web and Mobile Applications. The goal of UI Testing is

to ensure that the User Interface provides the user with the appropriate access and navigation through the

functions of the applications. In addition, UI Testing ensures that the objects within the UI function as expected

and conform to industry standards.

4.3 Levels of Testing


The overall Test Strategy is to verify the future state design, business requirements, system set-up, and system

output of the “To-Be” solution prior to go-live. Various testing levels or phases will be used to validate each of

these requirements. While each phase is considered a distinct event, there may be areas in which testing

phases will overlap (e.g., Integration and Non-functional). Functional testing will involve testing the “To Be”

Solution against business requirements. The main aim of this type of testing will be to verify that the solution is
fully functional and that all specified requirements have been incorporated. Non-functional testing is concerned

with the solution's non-functional requirements and is designed specifically to evaluate the overall readiness of

the solution according to the criteria not covered by functional testing. Both of these stages will combine to

ensure a comprehensive approach to testing the overall “To Be” Solution. The individual test stages are

described in more detail in the following sections of this document.

Following are the levels of testing that will be performed for testing the given scope:

4.3.1 Smoke Testing


Smoke Testing would be performed on the application prior to actual testing. This testing will be performed on

the testing environment, when a new build has been deployed for testing. The testing is performed to ensure

that the deployed build is stable and is good to proceed with the required functional testing. Pass criteria will be

based on actual testing on the application and test case documents will not be published.
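The smoke pass described here can be scripted so that the stability verdict is mechanical rather than ad hoc. The sketch below (Python) is an illustrative checklist runner, not part of the project's tooling; the check names are hypothetical stand-ins, and each real check would probe the deployed build (for example, a login attempt) instead of a stub.

```python
# Minimal smoke-check runner: the build is "stable" only if every named
# check passes. Check names and stub bodies are illustrative.

def run_smoke_checks(checks):
    """Run every check; return (stable, failures), where failures lists
    the names of the checks that returned False."""
    failures = [name for name, check in checks.items() if not check()]
    return (len(failures) == 0, failures)

# Stand-ins for real probes such as "login succeeds" or "home page loads".
checks = {
    "login_page_loads": lambda: True,
    "user_can_log_in": lambda: True,
    "home_screen_renders": lambda: False,  # simulate one failing probe
}

stable, failed = run_smoke_checks(checks)
print("build stable:", stable, "| failed:", failed)
```

If any check fails, the build is rejected and testing is held until a stable build is redeployed, mirroring the suspension criteria in Section 9.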

4.3.2 Component Testing

The release plan to QA is based on the User Stories (components). The component testing will be performed on

each and every component / module in the application / sprint. Each component will be tested individually on a

high level to ensure that the requirements for that component have been met.

As a part of component testing, the following components will be covered as applicable:

● Web component testing: This will include testing the application to incorporate the required configurations as per the given scope. This also targets the initial upload factors in terms of masters and the final resultant factors in terms of reports (at varied hierarchy levels).

● Mobility component testing: This will include testing the given scope/requirement as per the device specification model referred from the client end. The product supports device models based on iOS as the OS type. Depending on the specification received, appropriate devices or their equivalents will be used for testing.

● Sync component testing: This will include testing the complete scope of the project as a full flow / end to end. The flow of functionality runs from the portal application to the device and vice versa.

4.3.3 Integration Testing


Integration Testing will be performed as a part of integration between various components / modules in the

application. This testing ensures that the interfacing and connectivity within modules exists as required.

4.3.4 System Testing


System Testing will be performed considering the entire application as one single system / component. This

testing ensures that the entire application is working as it is expected to. The result of testing ensures that the

requirements defined in the scope of testing are met.

4.3.5 Regression Testing


Regression Testing will be performed as the final testing level, to ensure that any changes made in any module do not affect that module or the other existing modules at any time. This will be considered the final round of testing of the entire application in scope.

4.4 Test Reporting


The test report provides a consolidated view of the various activities performed as part of testing the application.

4.4.1 Test Execution Reporting

As a part of test execution reporting, a test execution report will be published by the testing team to the Project

Management Team. The test execution report will include details of what has been tested in the application, the

current status of the application and other related details.
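The counts such a report carries can be rolled up from raw per-case results. The sketch below (Python) is a hypothetical roll-up, not the project's actual report format; the status names and field names are assumptions.

```python
# Roll per-test-case results up into the headline figures of an
# execution report: total, executed count and pass rate.
from collections import Counter

def execution_summary(results):
    """results: iterable of (case_id, status), with status one of
    'pass', 'fail', 'blocked', 'not run'."""
    counts = Counter(status for _, status in results)
    executed = counts["pass"] + counts["fail"]
    pass_rate = counts["pass"] / executed if executed else 0.0
    return {"total": sum(counts.values()), "executed": executed,
            "pass_rate": round(pass_rate, 2), **counts}

print(execution_summary([("TC-1", "pass"), ("TC-2", "fail"),
                         ("TC-3", "pass"), ("TC-4", "not run")]))
```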

4.4.2 Defect Reporting

‘JIRA’ will be used for logging and tracking defects. Functionality that is not working as expected will be reported as a defect in the tool. The tool also records other details of the defect as relevant to the application and its stability. As part of defect reporting, the list of defects filed and their current status will be published.
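Defect logging in JIRA can also be scripted. The sketch below builds the issue fields locally; the project key, server URL and severity-to-priority mapping are placeholders rather than values from this project, and the commented-out submission uses the third-party `jira` Python client.

```python
# Assemble a JIRA bug payload; severity (per Section 4.4.4) is mapped
# onto JIRA's default priority names. All identifiers are placeholders.

def build_defect(summary, description, severity):
    priority = {"CRITICAL": "Highest", "HIGH": "High",
                "MEDIUM": "Medium", "LOW": "Low"}[severity]
    return {
        "project": {"key": "PROJ"},       # placeholder project key
        "summary": summary,
        "description": description,
        "issuetype": {"name": "Bug"},
        "priority": {"name": priority},
    }

fields = build_defect("Login fails on device", "Steps to reproduce: ...", "HIGH")
print(fields["priority"])  # → {'name': 'High'}

# Against a live instance, the payload is submitted as-is (jira package):
#   from jira import JIRA
#   client = JIRA(server="https://example.atlassian.net",
#                 basic_auth=("user@example.com", "api-token"))
#   client.create_issue(fields=fields)
```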

4.4.3 Defect Life Cycle

The defect life cycle, also known as the bug life cycle, is the journey a defect goes through during its lifetime. It includes the following stages:

● Submitted: The defect is logged and posted for the first time; its state is set to new.

● Open: The developer starts analyzing and working on the defect fix.

● Need More Info: The developer can request more information about the defect from the tester.

● Fixed: The developer has completed the code change and marked the defect as fixed.

● Retest: Once the latest build is pushed to the environment, the Dev lead moves all the Fixed defects to Retest. This indicates to the testing team that the defects are ready to test.

● Retesting: The tester retests the particular bug after it has been fixed, to verify that it has been fixed and works as expected.

● Reopen: If the bug still exists after the fix, the tester changes the status to “reopened” and the bug goes through the life cycle once again.

● Closed: Once retesting confirms that the bug no longer exists in the software, the tester changes the status of the bug to “closed”. This state means the bug is fixed, tested and approved.
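The life cycle above can be expressed as an explicit transition table, which is how a tracker's workflow configuration would enforce it. The sketch below (Python) is a simplified model of the stages listed; the exact transitions allowed in the project's JIRA workflow may differ.

```python
# Defect life cycle as a transition table: each status maps to the set
# of statuses it may legally move to. Simplified from the stages above.
TRANSITIONS = {
    "Submitted":      {"Open"},
    "Open":           {"Need More Info", "Fixed"},
    "Need More Info": {"Open"},
    "Fixed":          {"Retest"},
    "Retest":         {"Reopen", "Closed"},
    "Reopen":         {"Open"},
    "Closed":         set(),
}

def move(current, target):
    """Validate a status change against the life cycle; raise if illegal."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

# A defect that is fixed and verified on the first attempt:
status = "Submitted"
for nxt in ("Open", "Fixed", "Retest", "Closed"):
    status = move(status, nxt)
print(status)  # → Closed
```

A reopened defect simply re-enters the table at "Reopen" and cycles back through "Open", matching the bullet above.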

4.4.4 Defect Severity and Priority


Severity is the degree of impact that a defect has on the development or operation of a component or system.

1. CRITICAL: The system is unusable, and the testing is completely blocked from the activity that the software is

intended to perform, and It does not have a workaround. Example: Unsuccessful installation, system

unavailable and Not able to login

● Defect fix SLA (Project) : Must be fixed within 4 hours

● Defect fix SLA (Product) : Must be fixed within 8 hours

2. HIGH: Major functionality of the system is not available. The software can be used for at least some of the

activities that it was intended to perform; however, without this functionality the system fails to meet one or

more of the project objectives. It has a workaround but is not obvious and is difficult. Example: A feature is not

functional from one module, but the task is doable if 10 complicated indirect steps are followed in another

module/s.

● Defect fix SLA (Project) : Must be fixed within 8 hours

● Defect fix SLA (Product) : Must be fixed within 16 hours

3. MEDIUM: Some functionality is not available, but the system can still be used to meet project objectives. The

defect affects the functionality that is directly implied by the requirements. It has an easy workaround. Example:

A minor feature that is not functional in one module, but the same task is easily doable from another module.

● Defect fix SLA (Project) : Must be fixed within 12 hours

● Defect fix SLA (Product) : Must be fixed within 20 hours

4. LOW: The system can be used for the majority of its purposes. The defect affects some minor functionality of

the system that is not directly implied by the requirements. It does not even need a workaround. It does not

impact productivity or efficiency. It is merely an inconvenience. Example: Petty layout discrepancies,

spelling/grammatical errors.

● Defect fix SLA (Project) : Must be fixed within 16 hours

● Defect fix SLA (Product) : Must be fixed within 24 hours
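The severity/SLA rules above reduce to a lookup table. The sketch below (Python) encodes them and computes a fix deadline from the time a defect is logged; "project" vs "product" follows the two SLA lines given for each severity, and the use of calendar hours (rather than business hours) is an assumption.

```python
from datetime import datetime, timedelta

# Fix SLAs from Section 4.4.4, in hours.
SLA_HOURS = {
    "CRITICAL": {"project": 4,  "product": 8},
    "HIGH":     {"project": 8,  "product": 16},
    "MEDIUM":   {"project": 12, "product": 20},
    "LOW":      {"project": 16, "product": 24},
}

def fix_deadline(logged_at, severity, track="project"):
    """Deadline by which the defect must be fixed, in calendar hours."""
    return logged_at + timedelta(hours=SLA_HOURS[severity][track])

logged = datetime(2022, 7, 22, 9, 0)
print(fix_deadline(logged, "CRITICAL"))              # → 2022-07-22 13:00:00
print(fix_deadline(logged, "LOW", track="product"))  # → 2022-07-23 09:00:00
```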


5. Test Estimation
Effort estimation and planning for testing the given scope of the <project name> Release is captured as per the test management process followed for the <name> suite of products.

The QA WBS will be shared on Google Drive for better visibility to the project stakeholders. The high-level QA estimation is as below:

S No | Testing Activity               | Estimation in Hours | Estimation in Days
1    | Understanding the Requirements | 4                   | 0.5
2    | Test Documentation             | 8                   | 1
3    | Test Environment Setup         | 4                   | 0.5
4    | Functional Testing             | 200                 | 25
5    | Regression Testing             | 8                   | 1
6    | Test Evaluation & Report       | 4                   | 0.5
     | Grand Total                    | 228 Hours           | 28.5 Days
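The totals in the table can be recomputed mechanically. The sketch below (Python) applies the conversion implied by every row of the table, 8 working hours per day, under which the 228-hour total comes to 28.5 days.

```python
# Per-activity estimates in hours, taken from the estimation table.
ESTIMATE_HOURS = {
    "Understanding the Requirements": 4,
    "Test Documentation": 8,
    "Test Environment Setup": 4,
    "Functional Testing": 200,
    "Regression Testing": 8,
    "Test Evaluation & Report": 4,
}

HOURS_PER_DAY = 8  # conversion implied by every row (4 h = 0.5 d, 8 h = 1 d)

total_hours = sum(ESTIMATE_HOURS.values())
total_days = total_hours / HOURS_PER_DAY
print(total_hours, "hours =", total_days, "days")  # → 228 hours = 28.5 days
```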

6. Test Schedule
Testing of the <project name> Release incorporates test activities for each of the test efforts identified in the
previous section. Separate project milestones are identified to communicate project status and
accomplishments.

Refer to the Software Development Plan [WBS] and the Iteration Plan [Sprint Plan] for the overall phases of the
project schedule.

TESTING ACTIVITIES START DATE END DATE

Test Strategy/Plan 20-07-2022 20-07-2022

Test Design 20-07-2022 21-07-2022

Test Execution - Functional 21-07-2022 25-08-2022

Test Execution - Regression 26-08-2022 26-08-2022


Test Evaluation 27-08-2022 27-08-2022

UAT Support 28-08-2022 TBD

Production Support TBD TBD

Note: If there is any deviation in resource allocation, such as a lower resource count or resources with no knowledge of the <name>SF application, the required training needs to be conducted at the right time; otherwise the resources will have to extend themselves to reach the testing milestones.

7. Roles and Responsibilities

The following is the testing team structure and responsibilities:

Resource Role | Account | Responsibility | Contact
Test Lead | Test Management, QA Sign-Off | i. Test Team Management & Test Planning; ii. Test Design & Execution; iii. Defect Management & Test Report; iv. Generate Test Metrics & Submission |
Test Engineer | Testing | i. Understanding requirements and designing test cases; ii. Functional/Regression Test Execution; iii. Defect Reporting & Tracking |

8. Entry and Exit Criteria

Following are the entry and exit criteria from the initial phase, through system testing sign-off/closure, to the User Acceptance phase and subsequent phases:

Testing Phase | Entry Criteria | Exit Criteria
Test Planning | Project kick-off | Test plan is signed off
Requirements Understanding | Scope document is baselined | Scope document is defined/baselined and signed off
Test Designing - Test case preparation / review / rework | Scope document is clear and complete | Test case document is baselined
Environment Setup | i. Required hardware and software are installed and set up; ii. Application is deployed in the testing environment | i. Test data is loaded; ii. Sanity testing is signed off (environment stable)
Test Execution (System Testing) | Application is deployed and stable | i. System testing and regression testing are complete; ii. Zero defects / only minimal low-severity defects remaining to be fixed; iii. System testing is signed off
User Acceptance Testing | i. Application is deployed on the UAT environment; ii. Configuration is all set as per the required scope of the project; iii. Sanity testing is complete and the environment is stable | Smoke testing is performed by the QA team and signed off
Production / Go Live | i. Application is deployed on production; ii. Sanity testing is complete; iii. All the requirements are addressed as required | i. Application deployed on production for live users; ii. No severe defects found

9. Suspension Criteria
Following are the criteria that can result in suspension of the testing activities:

1. 60% or more of the smoke test cases fail in the QA environment

2. Requirements have been updated/changed during the course of testing, are too unclear to proceed further, or a significant change in requirements is suggested by the client

3. The release plan to QA has a (known) deviation/break from the development end

4. Any severe/show-stopper defect is identified in the application, resulting in a crash

5. Hardware crashes (on systems/devices)

6. Data corruption due to a virus or intentional data deletion

7. Network failure

8. An update from the senior management/PMO group


10. Resumption Criteria
Testing activities will be resumed once the problem that caused the suspension has been resolved.

Following are the criteria that can be considered for resuming the testing activities:

1. The build is stable on the environment

2. Requirements are clarified and clear

3. All critical errors have been addressed

4. Valid/required test data is loaded and no longer introduces any virus into the testing system/application

5. Hardware crashes have been resolved

6. Network issues have been addressed

11. Risks and Mitigation Plan

The following risks have been identified, together with the actions to mitigate their impact on the project. The impact (or severity) of a risk is based on how the project would be affected if the risk were triggered; the trigger is the milestone or event that would turn the risk into an issue to be dealt with.

S No | Risk | Mitigation
1 | Delay in planned releases to QA | Any delay in the release plan to QA will cause a subsequent delay in the delivery of the scope items
2 | An existing or new defect identified in the product, impacting project-specific scope | Depending on the criticality of the defect (in terms of business need and functional aspects), defect fixes and retesting will be included; otherwise the defect will be parked/deferred for the current scope of items under consideration
3 | Availability of resources until testing closure | Replacement within the team (required training will be given to the resource)
4 | Performance issues with the network / test server | Support from the IT/Admin team and other stakeholders (as applicable) will be required
5 | Changes in functionality may negate the test cases already written | If any functionality changes in the middle of project development, it causes a subsequent delay in the delivery of the scope items due to rework of the test cases and test execution
12. Test Tools
The following tools will be employed for testing the architectural prototype:

Test Activity | Tools | Version
Test Management | MS Office, Google Sheets | –
Test Design | MS Excel, Google Sheets | –
Defect Tracking | JIRA | TBD
Functional Testing | Mail | –

13. Test Deliverables


The deliverables of the test activities as defined in this Test Plan are outlined in the table below.

DELIVERABLES OWNER DISTRIBUTION

Test Strategy/Plan QA Lead Project Manager

Test Cases Test Engineer QA Lead and Business Analyst

Test Report QA Lead Project Manager and Team

Test Exit Report QA Lead Project Manager and BU Head

14. Meetings
A meeting is a regular occurrence where the people involved in a project convene to report on progress, propose or generate ideas, discuss issues, and approve or reject ideas.

Meeting | Frequency | Run By | Participants
Kick Start | Start of release | Project Manager | Project Team
Stand Up | Daily | Project Manager | Project Team
Status | Weekly | QA Lead | Within QA Team
Defects Triage | Daily | QA Lead | With BA, Tech Lead, Project Manager
Retrospective | End of release | Project Manager | Project Team
15. Escalation Matrix
The escalation matrix specifies multiple contacts to be notified in the event of critical issues. These contact details are presented to the service delivery NOC while creating or updating a service ticket.

Level | Escalate To | E-mail ID
I | QA Lead |
II | Project Manager/BU |
