Unit 12 SOFTWARE DEVELOPMENT

TESTING

INTRODUCTION

Like all human creations, computer programs are often less than perfect. Computer code may
contain various types of bugs (errors).

1 - WHY DO ERRORS OCCUR?


Software may not perform as expected for a number of reasons, such as:
 The programmer has made a coding mistake.
 The requirement specification was not drawn up correctly.
 The software designer has made a design error.
 The user interface is poorly designed and the user makes mistakes.
 The computer hardware experiences a failure.
HOW TO FIND ERRORS?
How are errors found? The end user might report an error, but this is bad for the reputation of the
software developer.
Testing software before it is released for general use is therefore essential. Research has shown that the
earlier an error is found, the cheaper it is to fix.

2 - TYPES OF ERRORS
 Syntax errors
 Logic errors
 Run-time errors
Syntax Errors: An error in which a program statement does not follow the rules of the language.
Some syntax errors might only become apparent when you use an interpreter or compiler to
translate your program.
Logic Errors: An error in the logic of the solution that causes it not to behave as intended.
Run-time Errors: An error that causes program execution to crash or freeze.
Both logic and runtime errors can only be found by careful testing. The danger of such errors is that
they may only show up under certain circumstances.
The only way to detect logic errors is by testing your program, manually or automatically, and
verifying that the output is what you expected. Testing should be an integral part of your software
development process. Unfortunately, while testing can show you that the output of your program is
incorrect, it usually leaves you without a clue as to what part of your code actually caused the
problem. This is where debugging comes in.
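As an illustration of how a logic error slips past the translator but is exposed by testing, here is a small sketch (the function and its bug are invented for this example, not taken from the unit):

```python
# Buggy version: a logic error. The program runs without crashing,
# but the result is wrong for any list whose length is not 3.
def average_buggy(marks):
    return sum(marks) / 3          # should divide by len(marks)

# Corrected version after debugging.
def average(marks):
    return sum(marks) / len(marks)

# A run-time error, by contrast, stops execution entirely:
# average([]) would raise ZeroDivisionError.

assert average_buggy([10, 20, 30, 40]) != 25   # testing detects the wrong output
assert average([10, 20, 30, 40]) == 25         # corrected code gives the expected value
```

Note that the failing assertion tells us the output is wrong, but not which line caused it; finding that line is the debugging step.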
Once we have created our solution, we need to test that the whole system functions effectively.
Doing this should be straightforward: all we need to do is compare the finished product against the
objectives that we set out in the Analysis. There are several ways of testing a system; you need to
know them all, and the types of data that might be used.

3 - OBJECTIVES OF TESTING
Testing is essential because of:
 Software reliability
 Software quality
 System assurance
 Optimum performance and capacity utilization

4 - TEST DATA
Normal, Abnormal, Extreme
There are three types of test data we can use. What are they? The answer lies mostly in their names.
Let's take a look at an example where someone has created a secondary school registration system
which lets students register themselves. We don't want people who are too young attending, and
we don't want students who are too old. In fact, we are looking for students between 11 and 16
years old.
(A Normal Student will be 12, 13, 14 or 15)
(An Abnormal (or wrong) aged student will be 45, 6 or any age outside those allowed.)
(An Extreme (or boundary) aged student has just started or is just about to leave, they will be 11 or
16)
If the data being tested has Normal, Abnormal and Extreme values, it is best to show tests for all
three. Some tests might only have Normal and Abnormal values; for example, entering a password
might only have a Normal and an Abnormal value. Some things might only need Normal testing, such
as whether a button to the next page works, or whether a calculation is correct.
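The three kinds of test data for the school registration rule can be sketched as follows (the validation function is invented for this illustration):

```python
# Hypothetical validation rule from the registration example:
# accept students aged 11 to 16 inclusive.
def is_valid_age(age):
    return 11 <= age <= 16

# Normal data: well inside the accepted range.
for age in (12, 13, 14, 15):
    assert is_valid_age(age)

# Extreme (boundary) data: the edges of the accepted range.
assert is_valid_age(11) and is_valid_age(16)

# Abnormal data: outside the range, and should be rejected.
for age in (6, 45, 10, 17):
    assert not is_valid_age(age)
```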

Example: Electronic Crafts Test Data


Imagine that the following objectives were set:

The maximum number of yellow cards a player can receive in a game is 2
Normal: 0, 1, 2 (most likely to list 1)
Abnormal: 3, -6
Extreme: 0, 2

There should be no more than 15 minutes of extra time in a game
Normal: 0, 1m45, 9m23
Abnormal: -6, 15m01
Extreme: 0, 15m00

The name of a team should be no longer than 20 characters
Normal: Monster United
Abnormal: Monster and Dagrington League of Gentlefolk
Extreme: Moincicestier United (20 characters!)
Exercise: Test Data
List the normal, abnormal and extreme data for the following:

The number of cigarettes currently in a 20 cigarette carton:


Answer:
Normal: 5, 16,18
Abnormal: 21, -7
Extreme: 0 or 20

The username for a system that must be of the form “<letter><letter><number><number>”


Answer:
Normal: GH24
Abnormal: G678
Extreme: AA00, ZZ99
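The username rule above can also be checked mechanically; a minimal sketch using a regular expression (the function name and pattern are an assumption, not part of the exercise):

```python
import re

# Hypothetical check for the "<letter><letter><number><number>" rule.
PATTERN = re.compile(r"[A-Za-z]{2}[0-9]{2}")

def is_valid_username(name):
    return PATTERN.fullmatch(name) is not None

assert is_valid_username("GH24")        # normal
assert is_valid_username("AA00")        # extreme (lowest boundary)
assert is_valid_username("ZZ99")        # extreme (highest boundary)
assert not is_valid_username("G678")    # abnormal: a digit where a letter should be
```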

The age of a college teacher:


Answer:
Normal: 28, 56, 32
Abnormal: 16, 86
Extreme: 21, 68

The date for someone's birthday:


Answer:
Normal : 12/07/1987
Abnormal: 31/09/1987, 31/02/1987
Extreme: 31/12/1999, 01/01/2001

Someone's hair color:


Answer:
Normal: brown, red, black
Abnormal: bicycle
Extreme: N/A (we're not here to judge people!)

Does the following calculation work: 14 * 2


Answer:
Normal: 28
Abnormal: N/A
Extreme: N/A

Number of pages in a book


Answer:
Normal: 24, 500
Abnormal: -9
Extreme: 1

5 - TEST PLAN
When a system is designed, it is important that some consideration is given to making sure that no
mistakes have been made. A schedule should be drawn up which contains a test for every type of
input that could be made, the methods of testing and the test data. This schedule is known as the
test plan. Note that it is produced before the system is produced.

An outline plan is designed, for example:
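As an illustration only (the fields and values below are invented, not taken from the unit), a row of such a plan typically records a test number, its purpose, the type of test data, the input and the expected outcome:

```python
# A minimal sketch of what a test plan might record, using the
# school registration age rule as the system under test.
test_plan = [
    {"test_no": 1, "purpose": "Reject an age below the minimum",
     "data_type": "abnormal", "input": 6,  "expected": "error message shown"},
    {"test_no": 2, "purpose": "Accept an age on the lower boundary",
     "data_type": "extreme",  "input": 11, "expected": "age accepted"},
    {"test_no": 3, "purpose": "Accept a typical age",
     "data_type": "normal",   "input": 13, "expected": "age accepted"},
]

# The plan can be printed as a simple schedule before coding begins.
for row in test_plan:
    print(row["test_no"], row["data_type"], row["input"], "->", row["expected"])
```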

6 - TESTING METHODS
There are a number of ways of testing a program.

1. BLACK BOX TESTING

Black Box testing model


Consider the box to contain the program source code: you don't have access to it and you don't need
to be aware of how it works. All you do is input data and test to see if the output is as expected. The
internal workings are unknown; they are in a black box. An example of black box testing would be
working as a games tester on a new console game. You wouldn't have been involved in the design or
coding of the system; all you are asked to do is input commands and see whether the desired results
are output.
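A black box tester works only from the specification. In the sketch below (the function and its rule are invented), the tests are written purely from the stated behaviour; the tester never reads the implementation:

```python
# Hidden implementation, standing in for the "black box". The only thing
# the tester knows is the specification: "discount(total) returns the
# total reduced by 10% for orders of 100 or more, otherwise unchanged".
def discount(total):
    return total * 0.9 if total >= 100 else total

# Tests written purely from the specification, not from the code:
assert discount(50) == 50        # below the threshold: unchanged
assert discount(100) == 90.0     # boundary: discount applies
assert discount(200) == 180.0    # above the threshold
```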

2. WHITE BOX TESTING

White Box testing model showing various routes through the code being put to the test

With white box testing you understand the coding structure that makes up the program. All the tests
that you perform will exercise the different routes through the program, checking to see that the
correct results are output.
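Because a white box tester can see the code, inputs are chosen deliberately so that every route through it is exercised at least once. A minimal sketch (the function is invented for this illustration):

```python
# The tester can read this code, so tests are chosen to cover every branch.
def classify(mark):
    if mark < 0 or mark > 100:   # route 1: invalid input
        return "invalid"
    elif mark >= 50:             # route 2: pass branch
        return "pass"
    else:                        # route 3: fail branch
        return "fail"

# One test per route ensures every branch of the code has been executed.
assert classify(-5) == "invalid"
assert classify(70) == "pass"
assert classify(30) == "fail"
```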

3. DRY RUN TESTING

A dry run is a mental run of a computer program, where the computer programmer examines the
source code one step at a time and determines what it will do when run. In theoretical computer
science, a dry run is a mental run of an algorithm, sometimes expressed in pseudocode, where the
computer scientist examines the algorithm's procedures one step at a time. In both uses, the dry run
is frequently assisted by a trace table. And whilst we are here we might as well get some more
practice in:
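Here is a small piece of code to dry-run, with the trace table a programmer would build by stepping through it mentally (the example is invented for practice):

```python
# Algorithm to dry-run: summing the numbers 1 to 4.
total = 0
for n in range(1, 5):
    total = total + n

# Trace table built by hand, one row per pass through the loop:
#   n | total
#  ---+------
#   1 |   1
#   2 |   3
#   3 |   6
#   4 |  10
assert total == 10
```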

4. WALKTHROUGH TESTING
A walkthrough in software testing is used to review documents with peers, managers, and
fellow team members, guided by the author of the document, in order to gather feedback
and reach a consensus. A walkthrough can be pre-planned or organised as the need arises.
 It is not a formal process/review.
 It is led by the author of the documents, e.g. the hardware and software requirements.
 The author guides the participants through the document according to his or her thought process, to
achieve a common understanding and to gather feedback.
 It is useful for people who are not from the software discipline and are not used to, or cannot
easily understand, the software development process.
 It is especially useful for higher-level documents, such as the requirement specification.

The goals of a walkthrough:

 To present the documents both within and outside the software discipline in order to gather
information regarding the topic under documentation.
 To explain, or transfer knowledge about, and evaluate the contents of the document.
 To achieve a common understanding and to gather feedback.
 To examine and discuss the validity of the proposed solutions.

5. INTEGRATION TESTING
Integration testing is a logical extension of unit testing. In its simplest form, two units that have
already been tested are combined into a component and the interface between them is tested. A
component, in this sense, refers to an integrated aggregate of more than one unit. In a realistic
scenario, many units are combined into components, which are in turn aggregated into even larger
parts of the program. The idea is to test combinations of pieces and eventually expand the process
to test your modules with those of other groups. Eventually all the modules making up a process are
tested together. Beyond that, if the program is composed of more than one process, they should be
tested in pairs rather than all at once.
Integration testing identifies problems that occur when units are combined. By using a test plan that
requires you to test each unit and ensure the viability of each before combining units, you know that
any errors discovered when combining units are likely related to the interface between units. This
method reduces the number of possibilities to a far simpler level of analysis.
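The idea can be sketched with two small units (both invented for this illustration): each unit is first verified on its own, and the integration test then exercises only the interface between them.

```python
# Unit 1 (assumed already unit-tested): convert a price string to pence.
def parse_price(text):
    pounds, pence = text.split(".")
    return int(pounds) * 100 + int(pence)

# Unit 2 (assumed already unit-tested): add VAT to an amount in pence.
def add_vat(pence, rate=0.2):
    return round(pence * (1 + rate))

# Integration test: combine the two units and test the interface between
# them. Because each unit has already passed its own tests, a failure
# here is likely to lie in how the units fit together.
assert add_vat(parse_price("4.50")) == 540
```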
You can do integration testing in a variety of ways but the following are three common strategies:
 The top-down approach to integration testing requires that the highest-level modules be tested and
integrated first. This allows high-level logic and data flow to be tested early in the process and it
tends to minimize the need for drivers. However, the need for stubs complicates test
management and low-level utilities are tested relatively late in the development cycle. Another
disadvantage of top-down integration testing is its poor support for early release of limited
functionality.

 The bottom-up approach requires that the lowest-level units be tested and integrated first. These
units are frequently referred to as utility modules. By using this approach, utility modules are
tested early in the development process and the need for stubs is minimized. The downside,
however, is that the need for drivers complicates test management and high-level logic and data
flow are tested late. Like the top-down approach, the bottom-up approach also provides poor
support for early release of limited functionality.

 The third approach, sometimes referred to as the umbrella approach, requires testing along
functional data and control-flow paths. First, the inputs for functions are integrated in the
bottom-up pattern discussed above. The outputs for each function are then integrated in the
top-down manner. The primary advantage of this approach is the degree of support for early
release of limited functionality. It also helps minimize the need for stubs and drivers. The
potential weaknesses of this approach are significant, however, in that it can be less systematic
than the other two approaches, leading to the need for more regression testing.

6. ALPHA AND BETA TESTING


When you have written a program and you sit down to test it, you have a certain advantage because
you know what to expect. After all, you wrote the program. This can be extended to the whole
software company, as the employees are all computer-minded people. Testing carried out by people
like this is known as alpha testing.

Eventually, the company will want ordinary users to test the program because they are likely to find
errors that the software specialists did not find. Testing carried out by the users of the program is
called beta testing.

7. ACCEPTANCE TESTING
Acceptance testing is often the final step before rolling out the application. Usually, the end users who
will be using the application test it before 'accepting' it. This type of testing gives the end users
confidence that the application being delivered to them meets their requirements. It also helps
uncover bugs related to the usability of the application.

8. STUB TESTING
Stubs are the modules that act as temporary replacement for a called module and give
the same output as that of the actual product.

Stubs are dummy modules created within an application when actual modules are not available/ready
for integration.

Imagine you are testing the Google Search program; the typical steps that will be followed are:

1. Launch browser, Navigate to Google search home page


2. Input Keyword, click search button
3. Verify if results match your search query
However, let's say your actual search algorithm is not implemented yet, but you want to test whether
the search button is working properly. In this case, a dummy program is configured in the back end
instead of the actual search algorithm, so when you click on the search button, a dummy page is
displayed. Component-level testing of the search button can be treated as a pass if the dummy page
is displayed upon clicking the search button. Here the dummy program is called a stub.

Stub testing is conducted just to gain enough confidence in the components, and also to ensure that
the connectivity between components is working, so that issues can be identified early.
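The search button scenario above can be sketched like this (the function names and the dummy page are invented for this illustration, not Google's actual code):

```python
# A stub standing in for the unimplemented search algorithm.
def search_stub(query):
    # Always returns a fixed dummy page, regardless of the query.
    return "<html>dummy results page</html>"

# The component under test: the wiring between the button and the back end.
def handle_search_click(query, search_fn):
    return search_fn(query)

# The component test passes if the dummy page comes back, showing that
# the button-to-back-end connection works even though the real search
# algorithm does not exist yet.
assert handle_search_click("cats", search_stub) == "<html>dummy results page</html>"
```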

MAINTENANCE

