Introduction to Software Engineering Concepts
Part I

SOFTWARE ENGINEERING CONCEPTS

Unit 1

Chapter 1

INTRODUCTION TO SOFTWARE ENGINEERING

1.1 What is Software?

Software is more than just a program. It is a group of programs written in a high-level
language with a specified syntax that is understandable by both humans and machines.
Software is divided into two major categories: system software and application software.
Software is a program or set of programs containing instructions that provide the desired
functionality. It also consists of the data structures that enable the programs to manipulate
information.
Definition by IEEE: The collection of computer programs, procedures, rules and associated
documentation and data is called software.

1.1.1 What is Software Engineering?

Software engineering is an engineering approach to software development. Engineering is the
process of designing and building something that serves a particular purpose. Software
engineering is a systematic approach to the development, operation and maintenance of software:
it deals not only with the development of software but also with its maintenance.
According to IEEE: Software engineering can be defined as the application of a systematic,
disciplined, quantifiable approach to the development, operation, and maintenance of software,
and the study of these approaches; that is, the application of engineering to software. The
outcome of software engineering is an efficient and reliable software product.
Software engineering enables us to build complex software systems in a timely manner and
assures high software quality.

1.1.2 Software Products

The objective of software engineering is to produce software products. Software products are
software systems delivered to a customer together with the documentation that describes how to
install and use them. Software products fall into two broad categories:
1. Generic Products: These are standalone systems which are produced by a development
organization and sold on the open market to any customer who is willing to buy them.
2. Customized Products: These are systems which are commissioned by a particular
customer. The software is developed specially for that customer by some contractor.

1.2 The Evolving Role of Software

Different individuals judge software on different bases, because they are involved with the
software in different ways. For example, users want the software to perform according to their
requirements, while developers involved in designing, coding, and maintaining the software
evaluate it by looking at its internal characteristics before delivering it to the user. Software
characteristics are classified into six major categories:
• Functionality: Refers to the degree of performance of the software against its intended purpose.
• Reliability: Refers to the ability of the software to provide desired functionality under the given
conditions.
• Usability: Refers to the extent to which the software can be used with ease.
• Efficiency: Refers to the ability of the software to use system resources in the most effective and
efficient manner.
• Maintainability: Refers to the ease with which the modifications can be made in a software system
to extend its functionality, improve its performance, or correct errors.
• Portability: Refers to the ease with which software developers can transfer software from one
platform to another, without (or with minimum) changes. In simple terms, it refers to the ability of
software to function properly on different hardware and software platforms without making any
changes in it.
In addition to the above-mentioned characteristics, robustness and integrity are also important.
 Robustness: Refers to the degree to which the software can keep on functioning in spite of
being provided with invalid data.
 Integrity: Refers to the degree to which unauthorized access to the software or data can be
prevented.
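Robustness can be made concrete with a minimal sketch in Python. The function name and the plausibility limits below are invented for illustration; the point is that the code keeps functioning when handed invalid data instead of crashing.

```python
# A minimal sketch of robustness: invalid data is absorbed gracefully
# rather than causing a failure. All names and limits are illustrative.

def parse_age(raw):
    """Return a valid age, or None for invalid input, instead of raising."""
    try:
        age = int(raw)
    except (TypeError, ValueError):
        return None          # invalid data: degrade gracefully
    if 0 <= age <= 130:      # plausibility check guards against bad values
        return age
    return None

# Robust behaviour: invalid inputs are absorbed, valid ones pass through.
print(parse_age("42"))   # 42
print(parse_age("abc"))  # None
print(parse_age(-5))     # None
```

A non-robust version would simply call `int(raw)` and crash on the second and third inputs; the defensive version keeps the software functioning in spite of invalid data, which is exactly what the robustness characteristic describes.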
1.3 Changing Nature of Software:
The changing nature of software is best understood by comparison with hardware. Unlike
hardware, software does not go through a manufacturing process. Environmental hazards may
affect hardware, since it is a factory product; when hardware components degrade with age, this
state is called wear-out. Such defects can be repaired by the manufacturer or designer, restoring
the hardware to its original condition. The case of software is totally different: software does not
wear out.
Although software does not develop physical defects, it may need modification as the users'
demands on the software change. If the demands of users remain unfulfilled, the software is
considered defective. While the developer is modifying the software to meet one user need,
another arises, so the original software slowly changes as a result of successive modifications
made to satisfy user demands. The failure rate of hardware is described by a bath-tub curve,
whereas introducing changes to source code often introduces new defects, so the actual failure
curve of software deviates from its idealized curve. The idealized curve of the software at the
beginning, when it is original, is shown below:

Figure: Changing Nature of Software, from Original to Actual


1.4 Software Myths and their Realities:
A myth is a widely held but false belief or idea. Many software problems occur due to myths
formed during the initial stages of software development. Software myths propagate confusion
and misunderstanding among management, users and developers.
1.4.1 Management Myths: The well-known management myths are listed below:
1. Myth: "The developers in the organization can acquire all the information they need from the
available manuals, which contain principles, procedures and standards."
Reality: Developers rarely follow the standards or principles and are often not even aware
of them. These manuals of standards are frequently incomplete, unadaptable and outdated.
2. Myth: "If we get behind schedule, we can add more manpower to the project to catch up."
Reality: New workers take longer to learn the project details than those already
working on the project; as a result the project may face further delays.
3. Myth: "Outsourcing to a third party is better than developing the software ourselves. Let
the third party develop it so that management can relax."
Reality: Software developed by a third party is harder to manage and control internally.
The organization can suffer considerably from outsourcing.

1.4.2 Customer Myths: Users sometimes believe myths about software, which may lead to false
expectations and eventual disappointment. Some common customer myths are listed below:
1. Myth: "The requirements given at the initial stage are enough for the development
of software."
Reality: At the initial stage the requirements are generally incomplete and ambiguous, which
may often lead to failure of the project. Detailed requirements are needed before starting
development. When requirements are added at later stages, the entire process has to be
repeated, which consumes both time and effort.
2. Myth: "Changes can easily be added to the software at any stage of development, since
software is considered to be very flexible."
Reality: Adding changes to the software in later phases may require redesigning, and the cost
of development increases compared with changes added at an early stage.

1.4.3 Developer Myths: Some of the common developer myths are as follows:
1. Myth: "Development is complete once the code is delivered to the customer."
Reality: A large part of the effort begins only after the software reaches the customer, in the
form of maintenance and support.
2. Myth: "Documentation is unnecessary; it only takes more time to complete the project."
Reality: Systematic documentation is essential, as it enhances quality and ultimately
reduces redesigning.
3. Myth: "Software quality can be assessed only after the program is executed."
Reality: Software quality must be measured after every phase of software development.
Measures such as quality assurance techniques can be applied throughout; various quality
measures are explained in later parts of this text.
Review Questions
1. What is software? State various definitions of software engineering.
2. What is a software product? Explain in brief.
3. Explain the evolving role of software and write its characteristics.
4. Write a short note on the changing nature of software.
5. Explain any three myths and their realities in software engineering in detail.
Chapter 2

A Generic View of Process

2.1 Software Engineering as Layered Technology:

Software engineering is a software development process consisting of several phases through
which the software must pass. It is best described as a layered technology, in which the layers
depend on each other. The layers can be explained as follows:

Tools

Methods

Process

Quality Focus

Figure: Software as a Layered Technology.

1. Quality Focus:
The fundamental building block of any software is a focus on quality. Software development
basically depends on an organizational commitment to quality. Software quality acts as
the "bedrock" that supports all software engineering activities. A culture of quality focus
in software development ultimately leads to improved software.

2. Process:
The process layer is the foundation of software development. A process defines a framework
of Key Process Areas (KPAs) that must be established for effective software development.
The key process areas cover: 1. Technical methods. 2. Work products. 3. Documents, reports,
etc. 4. Milestone establishment. 5. Quality management. The process holds all the technology
layers of software development together, and thus timely development is achieved.

3. Methods:
Software engineering methods provide the technical "how-to" for building software. Methods
usually cover requirements analysis, design, program construction, testing, and support.
Methods rely on modeling activities and other descriptive techniques where these are needed.
4. Tools:
Software engineering tools provide automated or semi-automated support for the process and the
methods. The information created by these tools can be shared across different platforms. Tools
are integrated so that information created by one tool can be used by another; such an integrated
environment, called computer-aided software engineering (CASE), is established to support
faster software development.

2.2 Software Process Framework:

Software Process
Process defines a framework for a set of Key Process Areas (KPAs) that must be established for
effective delivery of software engineering technology. This establishes the context in which
technical methods are applied, work products such as models, documents, data, reports, forms,
etc. are produced, milestones are established, quality is ensured, and change is properly
managed. A process framework establishes the foundation for a complete software process by
identifying a small number of framework activities that are applicable to all software projects,
regardless of size or complexity. It also includes a set of umbrella activities that are applicable
across the entire software process. The most widely applicable framework activities are
described below.

Figure: A process Framework

2.3 The Capability Maturity Model Integration (CMMI):

The primary premise of the CMMI model is that the quality of a system or product is highly
influenced by the process used to develop and maintain it. CMMI acts as a guide for a project
and for an entire organization, and provides organizations with the essential elements of
effective processes. It is therefore called a world-class performance improvement framework. It
implies a potential for growth in capability and indicates both the richness of an organization's
processes and the consistency of their application across projects.

CMMI is used in process improvement activities such as:

• The collection of best practices for organizing and prioritizing activities.
• The coordination of the multi-disciplined activities that are required to successfully build a
product.
• Emphasizing the alignment of process improvement objectives with organizational
business objectives.
According to George Box, a CMMI model describes the characteristics of effective processes.
The usability of CMMI is increased by integrating many different models into one framework.
The output of CMMI is a system which provides an integrated approach across the enterprise
for improving processes, while reducing the redundancy, complexity and cost resulting from the
use of separate and multiple capability maturity models (CMMs). There are two representations
of CMMI: staged and continuous. The staged representation groups the process areas into five
maturity levels. A maturity level is a well-defined evolutionary plateau for achieving a mature
software process. Each maturity level comprises a set of process goals that, when satisfied,
stabilize an essential part of the software process. Achieving each level of the maturity
framework establishes a different component of the software process, resulting in an increase in
process capability in software development.

Figure: 5 Levels of CMMI model

Maturity Level 1. Initial:
This is the starting point for the use of any new process. Projects at this level often run over
budget and behind schedule; hence the process is called unpredictable and reactive.
Maturity Level 2. Managed:
At this level projects are planned, measured and managed. Project planning
and scheduling take place at this level.
Maturity Level 3. Defined:
The process is proactive rather than reactive. Organization-wide standards and principles
provide guidance throughout this phase across projects, programs and portfolios. The process is
accepted and confirmed as a standard business process.
Maturity Level 4. Quantitatively Managed:
At this level the project is measured and controlled. The organization is data-driven,
with quantitative performance improvement. Ultimately this meets the needs of internal and
external stakeholders.
Maturity Level 5. Optimizing:
This is the stable and flexible level. As the word optimize suggests, this level is about accuracy
and gaining the best results. The organization is built to pivot and respond to opportunity and
change. The stability achieved as a result of this level provides a platform for further innovation.

2.4 Process Patterns:

The term process pattern is best understood by considering the words process and pattern
separately. A process is defined as a series of actions in which one or more inputs are used to
produce one or more outputs. A pattern can be explained as a set of similar features which keep
recurring over and over again, although their detailed appearance never remains exactly the
same. Processes are the steps followed to achieve a task, and patterns are related behaviours in
software development. According to Alexander, a pattern is a general solution to a common
problem or issue, one from which a specific solution may be derived. Combining both, process
patterns can be defined as sets of activities, actions, work tasks or work products in software
development. Examples are: 1. Customer communication. 2. Analysis. 3. Requirements
gathering. 4. Reviewing a work product. 5. Design modelling.

The use of process patterns enhances reusability and flexibility, and reduces the costs and risks
involved in systems development. These patterns are extensively used for building almost all
types of software systems. One such definition, by Ambler, describes a process pattern as "a
collection of general techniques, actions, and/or tasks (activities) for developing object-oriented
software". Three types of process patterns are distinguished:
1. A task process pattern depicts the detailed steps to perform a specific task.
2. A stage process pattern includes a number of task process patterns and depicts the steps of a
single project stage. This is often an iterative process.
3. A phase process pattern represents the interactions between its stage process patterns in a
single phase.
Process patterns capture process fragments commonly encountered in software development.
They are used as process components and can thus be applied as reusable building blocks.
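The three granularities of process pattern can be sketched as nested data. The pattern names below are hypothetical, invented only to show how task patterns compose into a stage pattern and stage patterns into a phase pattern.

```python
# Illustrative sketch of process-pattern composition. All pattern names
# are invented; the structure (task -> stage -> phase) follows the text.

task_patterns = {
    "Technical Review": ["prepare", "inspect", "record defects"],
    "Reuse First":      ["search repository", "evaluate candidates"],
}

# A stage process pattern bundles several task patterns for one stage.
stage_pattern = {"stage": "Design", "tasks": ["Technical Review", "Reuse First"]}

# A phase process pattern captures the interactions between its stages.
phase_pattern = {"phase": "Construction", "stages": [stage_pattern]}

# Flatten the phase pattern into the concrete steps it implies.
steps = [step
         for stage in phase_pattern["stages"]
         for task in stage["tasks"]
         for step in task_patterns[task]]
print(steps)
```

Because each task pattern is a self-contained list of steps, it can be reused in any stage that needs it, which is the "reusable building block" property described above.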
2.5 Process Assessment:
The existence of a software process does not guarantee the timely delivery of the software or its
ability to meet the users' expectations. The process needs to be assessed in order to ensure that it
meets a set of basic process criteria, which is essential for implementing the principles of
software engineering in an efficient manner. The process is assessed to evaluate the methods,
tools, and practices which are used to develop and test the software. The aim of process
assessment is to identify the areas for improvement and to suggest a plan for making those
improvements. The main focus areas of process assessment are listed below.
1. Obtaining guidance for improving software development and test processes.
2. Obtaining an independent and unbiased review of the process.
3. Obtaining a baseline (defined as a set of software components and documents that have
been formally reviewed and accepted, and that serves as the basis for further development)
for improving the quality and productivity of processes.
Software process assessment examines whether the software processes are effective and efficient
in accomplishing the goals. This is determined by the capability of selected software processes.
The capability of a process determines whether a process with some variations is capable of
meeting user's requirements. In addition, it measures the extent to which the software process
meets the user's requirements. Process assessment is useful to the organization as it helps in
improving the existing processes. In addition, it determines the strengths, weaknesses and the
risks involved in the processes.

2.6 Personal And Team Process Models:


The Personal Software Process (PSP) is a scaled-down version of the industrial software process.
PSP and TSP both use standard, proven techniques to improve individual and team performance
in software development. PSP is suitable for individual use: it is a disciplined framework that
helps engineers measure and improve the way they work through effective personal practices.
PSP recognizes that the process for individual use is different from that necessary for a team.
Individuals in almost any technical field can use PSP concepts and methods in their work to
improve their estimating and planning skills and to make sound commitments. PSP helps in
developing personal skills and methods through estimating and planning, and shows how to
track performance against plans.

Figure: Levels of PSP


The levels of PSP are explained in the figure above. PSP2 introduces defect management via the
use of checklists for code and design reviews. The checklists are developed from defect data
gathered and analyzed earlier.

The Team Software Process (TSP) guides engineering teams that develop software-intensive
products. It provides a defined operational process framework that is designed to help teams of
managers and engineers organize projects. TSP helps organizations establish a mature and
disciplined engineering practice that produces secure, reliable software in less time and at lower
cost, and it empowers development teams to deliver defect-free software within deadlines.
These technologies are based on the premise that a defined and structured process can improve
individual work quality and efficiency. TSP was introduced in 1998 and builds upon the
foundation of PSP to enable engineering teams to build software-intensive products more
predictably and effectively. It aims to produce software products that range in size from small
projects of several thousand lines of code (KLOC) to very large projects of more than half a
million lines of code.

Review Questions:

1. Explain software engineering as a layered technology with the help of diagram.


2. What is a software process framework? Explain with a diagram.
3. Explain the five levels of the Capability Maturity Model Integration in detail.
4. What are process patterns in software engineering?
5. Write a short note on process assessment.
6. Explain Personal and Team Process Models with suitable diagram.
UNIT - II
Chapter 3

Process Models

3.1 What is a Process Model?

There are various types of software development life cycle models that are designed to be
followed during the process of software development; we can call them software development
process models. To ensure the full completion of each process, a series of steps, which can be
unique to each model type, is followed.
Some popular software development life cycle models followed by industry are: the Waterfall
model, Incremental model, Iterative model, Spiral model, V-model, and Big Bang model. Other
methodologies include the RAD (Rapid Application Development) model, the Agile model, and
Prototyping models.
Hence software process models are process models that describe the series and sequence of
phases over the entire lifetime of a product; hence we sometimes call this the product life cycle.

3.1.1 The waterfall model

• The Waterfall model is also called the linear-sequential life cycle model. It was the first
model to be introduced, and it is a very simple model to understand and use.
• In a waterfall model, each phase is followed sequentially; that is, the initial phase must be
completed before the next phase can be started.
• The waterfall model is divided into a number of phases: requirements analysis, system
design, implementation, testing, deployment and maintenance.
The sequential phases in the waterfall model are:

• Requirement Gathering and Analysis: This phase captures all the possible requirements of
the system that needs to be developed; they are documented in a requirement specification.

• System Design: In this phase the overall system architecture is specified, including the
hardware and system requirements. Here the system design is prepared.

• Implementation: In this phase we take inputs from the system design and develop small
programs called units, which are then integrated in the next phase.

• Integration and Testing: Each unit developed in the implementation phase is tested; after
testing, the units are integrated into a system. After integration the entire system is tested
for any faults and failures.

• Deployment of System: Once all the functional and non-functional testing is complete, the
final product is released into the market or deployed in the customer's environment.

• Maintenance: After the deployment phase, some issues arise in the client environment. To
fix such issues, patches are released, or better versions are released to enhance the
functionality of the product. Delivering these changes in the customer environment is what
maintenance is all about.
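The strictly sequential gating of the phases above can be sketched as a minimal Python loop. The phase work is a placeholder string operation; only the ordering, where each phase consumes the output of the previous one, reflects the waterfall idea.

```python
# Hedged sketch of the waterfall idea: each phase must finish before the
# next begins, and the output of one phase is the input of the next.
# The phase "work" here is a placeholder transformation.

PHASES = ["requirements", "design", "implementation",
          "testing", "deployment", "maintenance"]

def run_waterfall(project):
    completed = []
    for phase in PHASES:
        # a phase starts only after every earlier phase has completed
        project = f"{project}->{phase}"
        completed.append(phase)
    return project, completed

artifact, done = run_waterfall("spec")
print(done)  # phases executed strictly in order, none skipped or overlapped
```

Note what the sketch cannot do: there is no way to re-enter an earlier phase, which mirrors the waterfall model's main weakness when requirements change late.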

3.1.2 Incremental process models

In the incremental process model the requirements are broken down into multiple modules of
the software development cycle. Each iteration passes through the requirements, design, coding
and testing phases. The word increment conveys that the work proceeds incrementally until the
work is complete: with each increment, a little more functionality is added.

The incremental process is used when the requirements of the system are clearly understood and
when there is demand for an early release of the product. This type of model is widely used for
web applications and by product-based companies.

Figure: Incremental Model


Advantages of the Incremental Model
• It enables quick and easy development of software during the software life cycle.
• The incremental model is more flexible, i.e. it incurs less cost for modification.
• The incremental model is easy to test.
• The incremental model makes the modules easy to manage.
Disadvantages of the Incremental Model
• In the incremental model each iteration is rigid, and iterations do not overlap with each other.
• In this model problems may arise because not all the requirements are gathered at once.
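The incremental idea, where each iteration runs a full mini-cycle and yields a usable release, can be sketched as follows. The module names and the "(designed, coded, tested)" tag are purely illustrative.

```python
# Illustrative sketch of the incremental model: requirements are split into
# modules, and each increment runs design, coding and testing for one
# module, then delivers an early release containing everything built so far.

def build_incrementally(modules):
    product = []
    releases = []
    for module in modules:
        # each iteration passes through design, coding and testing
        built = f"{module} (designed, coded, tested)"
        product.append(built)
        releases.append(list(product))  # a usable release after each increment
    return releases

releases = build_incrementally(["login", "search", "reports"])
print(len(releases))   # one release per increment
print(releases[0])     # the first release already works, with one module
```

Contrast this with the waterfall model: here the customer receives something usable after the first increment rather than only at the very end.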

3.1.3 Evolutionary process models

An evolutionary software process is typically a cycle in which the developers introduce changes
incrementally through iteration. The evolutionary process models are of the following types:
1. The prototyping model
2. The spiral model
3. The concurrent development model

1. The Prototyping model


• A prototype is defined as a first or preliminary form from which other forms are copied
or derived.
• The prototyping model starts from a set of general objectives for the software.
• It does not identify detailed requirements such as input and output.
• A prototype is a working model of the software with limited functionality.
• In this model, working programs are produced quickly.

The different phases of the prototyping model are:

1. Communication
Communication is the main means of information collection. In this phase, the customer and
developer communicate, and a discussion of the overall objectives of the software takes place.
2. Quick design
• To produce the quick design, the requirements must be known.
• Important aspects such as the input and output format of the software are required.
• Rather than a detailed plan, it focuses on those aspects which are visible to the user.

3. Modelling the quick design

• Modelling the quick design means building a model; once built, this phase gives an
idea about the development of the software.
• This phase helps to better understand the actual requirements of the customer.
4. Construction of the prototype
The prototype is constructed from the modelled quick design so that the customer can
evaluate it.

5. Deployment, delivery, feedback

• After evaluating the current prototype, if the customer is not satisfied, the necessary
changes are made accordingly.
• The process of making the necessary corrections to the prototype is repeated until
all the requirements of the users are met.
• When the customer is satisfied with the final developed prototype, the system is
developed on the basis of that final prototype.
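The repeat-until-satisfied loop at the heart of prototyping can be sketched in a few lines. The customer is simulated by a simple check against a requirements set; the feature names are invented for illustration.

```python
# A minimal sketch of the prototyping feedback loop: build, let the
# (simulated) customer evaluate, refine, and stop when they are satisfied.

def prototype_loop(requirements, max_rounds=10):
    prototype = set()
    for round_no in range(1, max_rounds + 1):
        missing = requirements - prototype        # customer feedback
        if not missing:
            return prototype, round_no            # customer is satisfied
        prototype.add(sorted(missing)[0])         # refine: add one feature
    return prototype, max_rounds

final, rounds = prototype_loop({"input form", "output report", "search"})
print(rounds)   # number of evaluate-and-refine iterations needed
```

The loop terminates only when the customer's feedback reports nothing missing, mirroring the text: corrections are repeated until all the users' requirements are met.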

2. The Spiral model


• The spiral model is also called the risk-driven model.
• In each iteration a risk analysis is performed; if a risk is found, an alternative solution is
suggested and implemented.
• It is a combination of different models, such as the waterfall and prototype models.
• In each iteration of this model all activities are performed.
• The spiral model has several advantages, such as being suitable for large and risky projects.
• The spiral model involves a high degree of risk analysis and produces strong documentation,
as the software is documented from an early stage.
• On the other hand it has some disadvantages: developing the software model is costly, and it
is not used for small projects.

3. The concurrent development model


• The concurrent development model, also called concurrent engineering, is a
simultaneous engineering method used for designing and developing products. It is a
process in which the different stages run simultaneously. Its main advantages are
improved productivity and reduced cost; it also helps in decreasing product
development time and time to market.
• The concurrent model is most appropriate for projects in which more than one team is
involved.
• In the concurrent process model there are a number of states through which each activity
moves in response to events generated by the software engineering activities.
• After communication, the project completes its first iteration of the modelling activity and
enters the awaiting changes state; when a customer change arrives, the activity is
transferred back into the under development state.
• If the customer specifies a change in the requirements, the modelling activity moves from
the under development state into the awaiting changes state.
• The main advantage of the concurrent development model is that it is applicable to all
types of software development processes and is easy to use and understand.
• The concurrent development model gives an accurate picture of the current state of a
project.
• Apart from its many advantages, the drawback of this model is that it requires proper and
frequent communication among team members, which may not be possible all the time.
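The state transitions described above can be sketched as a tiny event-driven table. The event names ("change requested", "change received", "baselined") are invented for illustration; only the two states named in the text are taken from it.

```python
# A minimal sketch of one activity's state machine in the concurrent
# model. Event names are hypothetical; the states follow the text.

TRANSITIONS = {
    ("under development", "change requested"): "awaiting changes",
    ("awaiting changes",  "change received"):  "under development",
    ("under development", "baselined"):        "done",
}

def step(state, event):
    # an event with no defined transition leaves the activity unchanged
    return TRANSITIONS.get((state, event), state)

state = "under development"
state = step(state, "change requested")   # customer asks for a change
state = step(state, "change received")    # change details arrive
print(state)
```

Because every activity carries its own current state, inspecting all the states at any moment gives the "accurate picture of the current state of a project" that the model promises.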

3.1.4 The Unified Process.


• The Unified Process is a well-known iterative and incremental software development
process.
• This process recognizes the importance of customer communication and of describing the
customer's view of a system.
• As the name unified suggests, this model is a combination of the spiral and
evolutionary models.
• It emphasizes understandability and reuse, and helps the architect focus on the right
goals.
• In the Unified Process, the Unified Modeling Language (UML) provides the necessary
notation to support object-oriented software engineering practice; the process framework
itself, however, is not provided by this notation.
The Unified Process has five phases: inception, elaboration, construction, transition and
production.
1. In the Unified Process the inception phase includes both customer communication and
planning. Through collaboration with the end users and customer, the business requirements
for the software are identified; the plans produced here are iterative in nature.

Fig: Unified Process Model

2. The elaboration phase refines and expands the preliminary use-cases that were developed
as part of the inception phase and expands the architectural representation.
3. The construction phase of the unified process is all about construction activity. The
construction phase develops the software component using the architectural model as input.
4. The transition phase is all about giving software to end-users for beta testing. At the end of
the transition phase, the software becomes a usable software release.
5. In the production phase the on-going use or working of the software is monitored and
support for the operational environment is provided.
Chapter 4
Software Requirements
4.1 What are Software Requirements?

A software requirement can be defined as a description of the features and functionality of the
target system. Software requirements specify the expectations of users from the software
product. A software requirement can be known or unknown, expected or unexpected. The
process of gathering requirements from the user, analyzing them and then documenting them is
called requirement engineering. The main aim of requirement engineering is to develop and
maintain a system requirement specification document.

4.1.1 Functional and non-functional requirements

In systems engineering and requirements engineering, there are two types of requirement that can
be specified as functional and a non-functional requirement requirement. Functional requirement
is something which can be done by system for example: Add more customer detail, print receipt
etc. Example of functional requirement:
 Search option given to user to search from various invoices.
 User should be able to mail any report to management.
 Users can be divided into groups and groups can be given separate rights.
 Should comply business rules and administrative functions.
 Software is developed keeping downward compatibility intact.
A non-functional requirement specifies criteria that can be used to judge the
operation of a system, that is, how the system should behave. Simply put, we can differentiate the
two as follows: non-functional requirements describe how the system works,
while functional requirements describe what the system should do.
Non-functional requirements include -
 Security
 Logging
 Storage
 Configuration
 Performance
 Cost
 Interoperability
 Flexibility
 Disaster recovery
 Accessibility

4.1.2 User requirements

We can define a user requirement as the expectation of a user from the software. It is generally the
set of tasks that the user wants the software to be able to do. The user
requirements are recorded in a URD (User Requirement Document). The URD signifies the following
points:
 The main function of the user requirement document is to record the mandatory terms that have
been agreed in terms of design, development, etc.
 A user requirement document is produced with the help of the requirements analysis activity.
 The User Requirement document is the primary input to subsequent System Design work
and to the procurement specifications for pertinent system development contracts.

4.1.3 System requirements

There are various types of requirement, such as user requirements, software requirements and
system requirements. A system requirement document is a structured document giving
detailed descriptions of the services provided by the system. It can serve as a contract between
the client and the contractor. Following are example properties of a system requirement:
1. The user is provided with facilities to define the type of external files.
2. Each external file type may be represented as a specific icon on the user's display and may have an
associated tool which can be applied to the file.
3. When the user selects an icon representing an external file, the effect of the selection is to
apply the tool associated with the type of the external file to the file
represented by the selected icon.

4.1.4 Interface specification

Nowadays new systems and existing systems must work together, because of which the
interfaces of existing systems have to be precisely specified. An interface in computing can be
defined as a shared boundary across which two or more separate components such as software,
computer hardware, peripheral devices or humans exchange information. An interface may give rise to
requirements, called interface requirements, in which a system needs to provide data to another
system or user. For example, an inventory management system may require information such as store
data. An interface specification is a document that records the details of the software user
interface. It covers all the actions that an end user may perform and all visual, auditory and other
interaction elements. There are five main types of user interface:

 Command Line.
 Graphical User Interface (GUI)
 Menu Driven.
 Form Based.
 Natural Language
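As an illustration of the menu-driven style listed above, a minimal sketch in Python follows; the menu options and labels are hypothetical, not taken from any particular system.

```python
# A minimal sketch of a menu-driven interface; the options are hypothetical.
MENU = {
    "1": "Search invoices",
    "2": "Mail report to management",
    "3": "Exit",
}

def show_menu():
    # Print each numbered option so the user can pick by number.
    for key, label in MENU.items():
        print(f"{key}. {label}")

def handle_choice(choice):
    # Return the chosen action's label, or flag invalid input.
    return MENU.get(choice, "Invalid choice")
```

The same handlers could back a form-based or GUI front end; only the presentation layer changes.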

4.1.5 The software requirements document

The collection of software requirements is the basis of the entire software development project.
Hence they must be clear, correct and well-defined. A complete software
requirements specification must be clear, correct, coherent and consistent. Gathering software
requirements is about understanding what sort of requirements may arise during
requirement elicitation and what kinds of requirement are expected from the software system.
Requirements are categorized as "must have", "should have", "could have" and "wish list". Here we
can summarize them as follows: "must have" means the software cannot operate without them;
"should have" is about enhancing the functionality of the software, which generally the client
suggests; "could have" covers further expectations of the client, but the software can still properly
function without implementing them; "wish list" covers expectations which may not
have a direct link with the functions of the software, requirements that do not map to any objective of the
software but can be kept for software updates.
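The categorization above can be represented as simple data when planning a release. The sketch below is illustrative only; the requirement names and the rule that only "must have" and "should have" items enter the first release are assumptions for the example.

```python
# Hypothetical requirements grouped by the categories described above.
requirements = {
    "must have":   ["user login", "print receipt"],
    "should have": ["search invoices"],
    "could have":  ["export to PDF"],
    "wish list":   ["dark mode theme"],
}

def first_release_scope(reqs):
    # Assume only "must have" and "should have" items go into release one.
    keep = ("must have", "should have")
    return [r for category in keep for r in reqs[category]]

scope = first_release_scope(requirements)
```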
The software requirements are captured in a document called the software requirements
specification. A software requirements specification (SRS) is a fully descriptive document that
contains a complete description of how the system is going to perform; it
describes all the expected tasks. After the requirements engineering phase this SRS is signed off.
A software requirements specification (SRS) is a detailed description of a software system that
has to be developed. It is a collection of functional and non-functional requirements. An SRS is used
to minimize the time and effort required by the software development team to reach the desired goal,
and its main aim is to minimize the development cost. Following are the qualities of a good
SRS:
 It should be Correct
 It should be Unambiguous
 It should be Complete
 It should be Consistent
 It should be Ranked for importance and/or stability
 It should be Verifiable
 It should be Modifiable
 It should be Traceable.
Software requirement specification is responsible for documenting various requirements such as
• Functional requirement
• Performance
• Interface
• Maintainability
• Reliability
• Safety
• Quality
• Operational
• Resources

4.1.6 User Interface requirements


The UI is an important part of any software, hardware or hybrid system. Software is widely
accepted if it is -
 Easy To Operate
 Quick In Response
 Effectively Handling Operational Errors
 Providing Simple Yet Consistent User Interface
User acceptance majorly depends upon how the user can use the software. The UI is the only way for
users to perceive the system. A well performing software system must also be equipped with an
attractive, clear, consistent and responsive user interface; otherwise the functionality of the
software system cannot be used in a convenient way. A system is said to be good if it provides
the means to use it efficiently. User interface requirements are briefly mentioned below -

 Content presentation
 Easy Navigation
 Simple interface
 Responsive
 Consistent UI elements
 Feedback mechanism
 Default settings
 Purposeful layout
 Strategic use of color and texture.
 Provide help information
 User centric approach
 Group based view settings.

Review Questions:

 Define process model.
 Explain the waterfall model.
 Explain the unified process model.
 Explain system requirements.
 Explain the software requirements document.
Unit 3
Chapter 5
Requirements Engineering
5.1 What is Requirement Engineering?

A requirement can be defined as a necessary demand which needs to be fulfilled by the system.
Requirements must be relevant and detailed. Requirement engineering focuses on the process of
designing the system that users want.

Example: "A system shall allow the users to register by entering their username and password, so as to
get access to the system."

Requirement engineering is also known as requirements analysis. Requirement engineering involves
different sets of processes, and these processes depend upon the user's requirements and the application
domain. Hence requirement engineering can be described as the process of determining user expectations for a
new or modified product.

 Requirements engineering shares many concepts, techniques and concerns with human-computer
interaction (HCI), especially user-centred design, participatory design and interaction design.
 The goal of requirement engineering is to develop and maintain simple, informative, detailed and
descriptive 'system requirements specification' documents. These documents help the developer to
get the specific requirements. A system requirements specification, also called a software
requirements specification (SRS), is a description of a software system to be developed.

5.2. Requirement Engineering Process

The Requirement Engineering process consists of following processes,


 Feasibility Studies
 Requirements Elicitation And Analysis
 Requirements Validation
 Requirements Management

Let us see the process briefly-

5.2.1 Feasibility Study

1. A feasibility study is an analysis of how successfully a project can be completed.


2. A feasibility study is about answering questions like: Is the system useful to the business? When a
client approaches a firm or organization to get a desired product developed, it comes
with a rough idea about what functions the software must perform and which features are
expected from the software.
3. For example, a small academy looking to expand its campus might perform a feasibility study to
determine if it should follow through, taking into account material and labor costs, how useful
the project would be to the students, public opinion, and laws that might have an effect on the
expansion.

4. A feasibility study helps business personnel to research the market before taking any big step.
Various types of feasibility study, such as technical feasibility, economic feasibility and legal feasibility,
are carried out by industry personnel.
A feasibility study analyzes whether the software product can be practically implemented, what its
contribution to the organization would be, its cost constraints, and whether it fits the values and
objectives of the organization. It explores technical aspects of the project and product such as
usability, maintainability, productivity and integration ability.

5.2.2 Requirements Elicitation & Analysis:

Requirement elicitation means requirement gathering; it is the process of gathering system
requirements from the system, users or any other stakeholders. Requirements analysis, also called
requirements engineering, is the process of determining user expectations for a new or modified
product. The requirements must be relevant and detailed. Actually speaking, it is the process of
writing down the user and system requirements in a document. The main specification is that
the requirements should be clear, easy to understand, complete and consistent. We can also term
the requirements functional specifications. For project management, requirements analysis is an
important aspect.
Some important points regarding requirements analysis:
1. Requirements analysis is teamwork.
2. It requires a combination of hardware, software and manpower.
3. It requires a group of experts skilled in dealing with people.

5.2.3 Requirements Validation


1. Requirement validation is about checking that the requirement actually states the system
that the customer wants.
2. Requirements validation is a repetitive process which takes place throughout the software
development lifecycle of the project. During elicitation and analysis there must be constant
clarification of the data given in order to check its validity.
3. The main aim of requirements validation process is to ensure that the SRS is complete and
consistent. There must be scope for modification.
4. In requirement validation we constantly test to make sure that the requirement statements
themselves are complete, correct, feasible, necessary, unambiguous and verifiable. This
may seem like a "heavy duty" task but it is essential to pick up errors at this stage in order
to minimize defects later.

Some of the important points that requirement validation notifies:

 Complete – A requirement statement must be fully descriptive about its functionality to be delivered.
The description must be sufficient for the developer to understand and implement it.

 Technically feasible – The next important thing is the technical feasibility of the requirements.
Technically impossible requirements must not be specified.
 Necessary – A requirement must be of some value to a product being made or service to be
delivered. It must dictate something that a person really wants.

 Correct – Each requirement must accurately and specifically describe the required functionality, and
must be technically correct.

 Unambiguous – Unambiguous means not open to more than one interpretation. A requirement must
have only one possible interpretation or meaning for all readers. While writing down a
requirement one must avoid ambiguous words like "adequate", "handles", "fault tolerant", "user
friendly", "as much as possible", "robust", "several", "as fast as possible", etc.

 Verifiable – Verifiable means that one can verify whether the requirement has been correctly
implemented. A requirement must be stated such that a test can validate it.

 Implementation free – A requirement statement should not specify how its design or
implementation should be done. A requirement states what is required, not how the requirement
should be met.

5.2.4 Requirements Management

Requirement management is the activity of managing changing requirements. The requirements for
a large system change frequently. Hence requirement management can be described as managing the
changing requirements during the software development process. Some features of requirement
management:

1. Requirement management is a continuous process throughout the lifecycle of a product.


2. Requirements can be generated by many stakeholders like: customers, partners, sales, support,
management, engineering, operations, and of course product management.
3. With proper requirement management there is clear and consistent communication between the
product team and engineering team and any needed changes are broadly shared with all
stakeholders.
4. As it is an iterative process, requirements management does not end with product release. From
that point on, the data coming in about the application's acceptability is gathered and fed into the
investigation phase of the next generation or release, and the process starts again.

Figure: Requirement Management


Review Questions
1. Define Requirements. Explain Requirement Engineering process.
2. Explain Feasibility studies in details.
3. Explain Requirements elicitation and analysis in details.
4. Write a short note on Requirements validation.
5. Explain Requirements Management with neat diagram.
Chapter 6

System Models
6.1 What is system Model?

A model is a representation of some aspect of an existing or planned system. It is an abstraction
whose vision in software development is to build a problem-free and systematic model. System modeling
helps the analyst to understand the functionality of the system, bridging the gap from the problem level to
the implementation of the system. System modeling generally uses a representation language such as
UML (Unified Modeling Language).

Hence a system model can be defined as follows: "a system model is the conceptual model that describes and
represents a system; it is used to conceptualize and construct a system in the business and IT sectors. It helps
the analyst to understand the functionality of a target system and is used to communicate about the system."

Different models present the system with different views: for example, an external view shows the system
environment, while a behavioral view shows the behavior of the system. A system comprises multiple views
such as planning, requirements, design, implementation, deployment, structure, behavior, input data, and
output data views.

6.2 Context Models

1. The context is the surrounding element for the system, and a model provides the mathematical interface
and a behavioral description of the surrounding environment.
2. Context modeling basically plays a key role in efficient context management. It illustrates
how the system will look. It involves working with the stakeholders to distinguish
what the real system should look like.
3. A context model defines how context data are maintained. A formal or semi-formal description of the
context information present in a context-aware system is produced.
4. Context models actually show what lies outside the system boundaries and are used to illustrate the
operational context of a system. During the requirement elicitation and analysis process one should decide
on the boundaries of the system; this decision should be made early to limit the system cost and the time
needed for analysis.
5. The context model is affected by social and organizational concerns.

Fig: Context diagram for online system.


6.3 Behavioral Models

Behavioral models are used to describe the overall behavior of a system. Behavior modeling, also called
dynamic modeling, describes how the system changes and responds over time. What behavioral models
really do is describe the control structure of a system.

Describing the control structure involves the following:

 Sequence Of Operations
 Object States
 Object Interactions

There are two types of behavioral model:

• Data processing models show how data is processed as it moves through the system.
• State machine models show how the system responds to events.
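A state machine model of the kind described above can be expressed as a transition table mapping (state, event) pairs to next states. The sketch below uses a hypothetical media player as the modeled system; the states and events are illustrative only.

```python
# Hypothetical state machine for a simple media player:
# (current state, event) -> next state.
TRANSITIONS = {
    ("stopped", "play"):  "playing",
    ("playing", "pause"): "paused",
    ("paused", "play"):   "playing",
    ("playing", "stop"):  "stopped",
    ("paused", "stop"):   "stopped",
}

def respond(state, event):
    # Events with no defined transition leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "stopped"
for event in ("play", "pause", "play", "stop"):
    state = respond(state, event)
```

Drawing the same table as a state diagram gives the usual graphical form of a state machine model.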

6.4 Data Models


1. Conceptual data modeling, like entity-relationship modeling, is a relational schema database modeling
method used in software engineering.
2. Data flow diagrams (DFDs) may be used to model the system's data processing.
3. A data flow diagram shows the processing steps as data flows through a system.
4. DFDs are used in many analysis methods.
5. DFDs use a simple and intuitive notation that customers can understand. They show how end-to-end
processing of data is done.
6. Data flow diagrams may also be used to show the data exchange between a system and other
systems in its environment.

6.5 Object Models

1. An object model is part of the object-oriented programming (OOP) lifecycle.


2. Through the use of object-oriented techniques, an object model acts as a logical representation of
the software or system being modelled. It enables the creation of an architectural software or
system model prior to development or programming.
3. An object model helps describe or define a software system in terms of objects and classes. It defines
the interfaces or interactions between different models, inheritance, encapsulation and other object-
oriented features.

Object model examples include:


 Document Object Model (DOM): A set of objects that provides a modeled representation of dynamic
HTML and XHTML-based Web pages
 Component Object Model (COM): A proprietary Microsoft software architecture used to create
software components
6.6 Structured Methods
1. In software engineering, structured methods include structured analysis (SA) and structured design
(SD) methods for analyzing business requirements and developing specifications for converting
them into computer programs, hardware configurations, and related manual procedures.
2. A structured method includes a design process model, notations to represent the design, report
formats, rules and design guidelines. Structured methods may support some or all of the following
models of a system:
 An object model that shows the object classes used in the system and their dependencies.
 A sequence model that shows how objects in the system interact when the system is executing.
 A state transition model that shows system states and the triggers for the transitions from one
state to another.
 A structural model where the system components and their aggregations are documented.
 A data flow model where the system is modelled using the data transformations that take place
as it is processed. This is not normally used in object-oriented methods but is still frequently
used in real-time and business system design.
 A use-case model that shows the interactions between users and the system.

Structured methods within an organization ensure that the project has a justified business case before
development begins and significant costs are incurred. Before arriving at a single solution, the
different solutions are considered along with their benefits, risks and costs. The advantage to the
organization is that the benefits, risks and costs are considered and approved by the project's
governance structure before deciding on the project, providing clarity to the organization.

Structured methods provide a mechanism to identify, record, assess and mitigate risks that occur during
the project. The advantage for the organization is that there is a clear procedure for the management of
risk which is auditable and builds trust with stakeholders, since there is a clear, demonstrable process. This
reduces the risk to the organization and ensures that risks are managed appropriately, protecting the
organization from reputational damage which could impact sales.

Review Questions:

 Define requirement.
 Discuss requirement engineering.
 Explain the context model.
 Define the behavioral model.
 Explain the data model.
Unit 4
Chapter 7

Design Engineering
7.1 What is Design Engineering?
Definition: The design engineering process is a step-by-step process that many engineers
follow to find a solution to a problem or to create a functional product or process. The design
process is iterative in nature.

Example: Designing a machine or computer code


• When customer requirements, business needs, and technical considerations all come together in
the formulation of a product or system, this is considered design.
• The design model is helpful in providing detail about the software data structures,
architecture, interfaces, and components.
• The design model can be assessed for quality and be corrected & improved before code is
generated and tests are conducted.
• Software design is the step in software development life cycle which provides the solution to a
problem. It tries to specify how to fulfil the requirements mentioned in software requirement
specification.
• Software design is a process to transform user requirements into some suitable form,
which helps the programmer in software coding and implementation.

Common Stages of the Design Process:


• Problem Definition
• Concept Designing
• Preliminary Design
• Detailed Design
• Design Communication
7.1.1 Pattern Based Software Design:

1. Pattern-based software design uses descriptions of how to solve a problem that can be applied in
many different situations.
2. A design pattern is a general solution to a commonly occurring problem in software design. A
design pattern is not a finished design that can be transformed directly into code; rather, it is a
template for how to solve the problem.

7.1.2 Uses of Design Patterns


• Reusing design patterns helps to prevent subtle issues that can cause major problems, and improves
code readability for coders and architects familiar with the patterns.
• Design patterns can speed up the development process by providing tested, proven development
paradigms. Effective software design requires considering issues that may not become visible until
implementation.
• Design patterns provide general solutions, documented in a format that does not require specifics
tied to a particular problem. Patterns allow developers to communicate about software interactions
using well-known, well-understood names.
• Common design patterns can be improved over time, making them more robust than ad-hoc designs.
• Design patterns have 4 essential elements:
• Pattern name: increases the vocabulary of designers
• Problem: intent, context, when to apply
• Solution: Unified Modeling Language-like structure, abstract code
• Consequences: results and tradeoffs
Types of Pattern

There are 23 design patterns which can be classified in three categories:-

1. Creational patterns

• Creational patterns deal with the configuration and initialization of classes and objects.
• These design patterns provide a way to create objects while hiding the creation logic.
• Rather than instantiating objects directly using the new operator, these patterns give the program more
flexibility in deciding which objects need to be created for a given use case.
Features:
 Abstract factory: Provide an interface for creating families of related or dependent objects
without specifying their concrete classes.
 Builder: Separate the construction of a complex object from its representation allowing the same
construction process to create various representations.
 Factory method: Define an interface for creating an object, but let subclasses decide which class
to instantiate. Factory Method lets a class defer instantiation to subclasses.
 Prototype: Specify the kinds of objects to create using a prototypical instance, and create new
objects by copying this prototype.
 Singleton: Ensure a class has only one instance, and provide a global point of access to it.
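Two of the creational patterns listed above can be sketched briefly in Python. The class names (`Logger`, `Circle`, `Square`) are illustrative, not part of any real library.

```python
# Singleton: ensure a class has only one instance (illustrative Logger class).
class Logger:
    _instance = None

    def __new__(cls):
        # Create the instance once; every later call returns the same object.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

# Factory Method idea: a creator function decides which class to instantiate.
class Circle:
    def name(self):
        return "circle"

class Square:
    def name(self):
        return "square"

def shape_factory(kind):
    # The creation logic is hidden behind the factory; callers never
    # construct the concrete classes directly.
    shapes = {"circle": Circle, "square": Square}
    return shapes[kind]()
```

Calling `Logger()` twice yields the same object, while `shape_factory("circle")` defers the choice of concrete class to the factory.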

2. Structural patterns:
• These patterns are mainly concerned with classes and objects.
• These design patterns mainly use composition of classes and objects.
• The inheritance concept is used to compose interfaces and define ways to compose objects
to obtain new functionality.
• These patterns deal with decoupling the interface and implementation of classes and objects.
Features:
 Adapter or Wrapper: Convert the interface of a class into another interface clients expect.
Adapter lets classes work together that could not otherwise because of incompatible
interfaces.
 Bridge: Decouple an abstraction from its implementation allowing the two to vary
independently.
 Composite: Compose objects into tree structures to represent part-whole hierarchies.
Composite lets clients treat individual objects and compositions of objects
uniformly.
 Decorator: Attach additional responsibilities to an object dynamically keeping the same
interface. Decorators provide a flexible alternative to subclassing for extending
functionality.
 Facade: Provide a unified interface to a set of interfaces in a subsystem. Facade defines
a higher-level interface that makes the subsystem easier to use.
 Front Controller: Provide a centralized entry point for handling requests, so that all
requests in a subsystem are routed through a single handler.
 Flyweight: Use sharing to support large numbers of fine-grained objects efficiently.
 Proxy: Provide a surrogate or placeholder for another object to control access to it.
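The Adapter pattern above, for instance, can be sketched as follows; the legacy class and the `display` interface expected by clients are hypothetical.

```python
# A legacy class whose interface does not match what client code expects.
class LegacyPrinter:
    def print_text(self, text):
        return "LEGACY: " + text

# Adapter: converts LegacyPrinter's interface into the `display` interface
# that client code expects, letting incompatible classes work together.
class PrinterAdapter:
    def __init__(self, legacy):
        self._legacy = legacy

    def display(self, text):
        # Delegate to the wrapped legacy object.
        return self._legacy.print_text(text)

adapter = PrinterAdapter(LegacyPrinter())
```

Client code calls `adapter.display(...)` and never needs to know about `print_text`.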

3. Behavioral patterns:
• These patterns deal with dynamic interactions among societies of classes and objects.
• They describe how responsibility is distributed among objects.
• These design patterns emphasize communication between objects.

Features:
 Blackboard: Generalized observer, which allows multiple readers and writers.
Communicates information system-wide.
 Chain of responsibility: Avoid coupling the sender of a request to its receiver by giving more
than one object a chance to handle the request. Chain the receiving objects and pass the request
along the chain until an object handles it.
 Command: Encapsulate a request as an object, thereby letting you parameterize clients
with different requests, queue or log requests, and support undoable operations.
 Interpreter: Given a language, define a representation for its grammar along with an
interpreter that uses the representation to interpret sentences in the language.
 Iterator: Provide a way to access the elements of an aggregate object sequentially without
exposing its underlying representation.
 Mediator: Define an object that encapsulates how a set of objects interact. Mediator
promotes loose coupling by keeping objects from referring to each other explicitly, and it lets
you vary their interaction independently.
 Memento: Without violating encapsulation, capture and externalize an object's internal state
allowing the object to be restored to this state later.
 Null object: Avoid null references by providing a default object.
 Observer or Publish/subscribe: Define a one-to-many dependency between objects where a
state change in one object results with all its dependents being notified and updated
automatically.
 Servant: Define common functionality for a group of classes
 Specification: Recombinable business logic in a Boolean fashion
 State: Allow an object to alter its behavior when its internal state changes. The object will
appear to change its class.
 Strategy: Define a family of algorithms, encapsulate each one, and make them interchangeable.
Strategy lets the algorithm vary independently from clients that use it.
 Template method: Define the skeleton of an algorithm in an operation, deferring some steps to
subclasses. Template Method lets subclasses redefine certain steps of an algorithm without
changing the algorithm's structure.
 Visitor: Represent an operation to be performed on the elements of an object structure. Visitor
lets you define a new operation without changing the classes of the elements on which it
operates.
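As one concrete example, the Observer (publish/subscribe) pattern listed above can be sketched like this; the `Subject` and `Display` names are illustrative.

```python
# Observer: a state change in the subject automatically notifies all
# registered observers (one-to-many dependency).
class Subject:
    def __init__(self):
        self._observers = []
        self.state = None

    def attach(self, observer):
        self._observers.append(observer)

    def set_state(self, state):
        self.state = state
        for observer in self._observers:
            observer.update(state)   # notify every dependent automatically

class Display:
    def __init__(self):
        self.seen = []

    def update(self, state):
        # Record each notification received from the subject.
        self.seen.append(state)

subject = Subject()
display = Display()
subject.attach(display)
subject.set_state("ready")
```

The subject never refers to a concrete observer type, which is what keeps the coupling loose.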
Design Considerations

To design a piece of software there are many small things that need to be considered. Some of them
are:

 Compatibility - The software is able to operate with other products that are designed for
interoperability. For example, some software may be backward-compatible
with an older version of itself.
 Extensibility – Extensibility refers to adding new capabilities to the software without major
changes to the underlying architecture.
 Fault-tolerance - The software must be able to recover from component failure.
 Maintainability - The software can be restored to a specified condition within a specified period
of time. For example, some antivirus software may include the ability to periodically receive new
virus definition updates in order to maintain the software's effectiveness.
 Modularity - This allows division of work in software development project which leads to better
maintainability. The resulting software comprises well defined, independent components. The
components could be then implemented and tested in isolation before being integrated to form a
desired software system.
 Packaging – Packaging is the main part of presentation. Printed material such as the box and
manuals should match the style designated for the target market and should enhance usability.
Information should be visible on the outside of the packaging. All components required for use
should be included in the package or specified as a requirement on the outside of the package.
 Reliability – The required function should be able to perform by the software under stated
conditions for a specified period of time.
 Reusability - Parts of the software can be used in other projects, or extended with further features,
with little or no modification.
 Robustness - The software is able to operate under stress or tolerate unpredictable or invalid
input. For example, it can be designed with the capacity to recover quickly from difficulties such as
low memory conditions.
 Security - The software is able to withstand unfriendly acts and negative influences.
 Usability - The software user interface must be usable for its target user/audience. Default
values for the parameters must be chosen so that they are a good choice for the majority of the
users.

7.2 Design quality

1. A good software design minimizes the time required to create, modify, and maintain the
software while achieving acceptable run-time performance.
2. A design must present an architecture built using known design patterns.
3. A design must consist of components with the right characteristics that can be
implemented in an incremental way.
4. A design must be modular in nature, that is, it must be divided into modules.
5. The design notation must convey its meaning correctly.
A good software design must have the following qualities:
 Functionality - Functionality is the main part of software design. A good software design must
provide proper, up-to-date functions.
 Reliability - Stakeholders must be able to depend on the software provided to them.
Dependability increases with good design.
 Usability - The designed software must be easy to use. Proper usability of the software increases with
good software design.
 Efficiency - Efficiency is the ability of the software to do the required processing on the least amount of
hardware. Software is efficient if it uses fewer resources and gives maximum output.
 Maintainability - Software maintainability is the degree to which an application can be understood,
repaired, or enhanced. Maintainability is important because it accounts for approximately 75% of the cost
related to a project.
 Portability - Portability, in relation to software, is a measure of how easily an application can be
transferred from one computer environment to another.
 Portability- Portability, in relation to software, is a measure of how easily an application can be
transferred from one computer environment to another.

7.3 Design concepts

Software design process is a series of well-defined steps. Software design varies according to design
approach.
 A solution design is created from the requirements or a previously used system and/or system
sequence diagram.
 Objects are identified and grouped into classes on the basis of similarity in their attributes.
 The class hierarchy and the relations among classes are defined.
 The application framework is defined.
Software Design Approaches
There are two approaches, top-down and bottom-up; both are useful in a good
design process:

Top Down Design


1. A system is composed of more than one sub-system, and each sub-system contains a number of
components.
2. Sub-systems and components may create a hierarchical structure in the system.
3. Top-down design takes the whole software system as one entity and then decomposes it to achieve
more than one sub-system.
4. Each sub-system or component is then treated as a system and decomposed further.
5. The top-down process continues until the lowest level of the hierarchy is reached.
6. Top-down design is more suitable when the software solution needs to be designed from scratch and
specific details are unknown.

Bottom-up Design

1. The bottom-up design model starts with the most specific and basic components.
2. It proceeds by composing higher-level components out of basic or lower-level components.
3. Bottom-up design keeps creating higher-level components until the desired system evolves as one
single component. With each higher level, the amount of abstraction increases.
4. The bottom-up strategy is more suitable when a system needs to be created from some existing system.
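The top-down steps above can be sketched in code. The following is a minimal, hypothetical illustration (the payroll system, function names, and figures are invented for the example, not taken from this text): the whole system is written first as one entity, then decomposed into sub-functions.

```python
# A minimal sketch of top-down decomposition (hypothetical payroll example):
# the whole system is one entity, refined step by step into sub-functions.

def run_payroll(employees):
    """Top level: the whole system treated as one entity."""
    for emp in employees:
        gross = compute_gross_pay(emp)      # sub-system 1
        net = apply_deductions(gross)       # sub-system 2
        issue_payment(emp, net)             # sub-system 3

def compute_gross_pay(emp):
    # Decomposed further: a lowest-level component.
    return emp["hours"] * emp["rate"]

def apply_deductions(gross, tax_rate=0.2):
    return gross * (1 - tax_rate)

payments = []
def issue_payment(emp, amount):
    payments.append((emp["name"], round(amount, 2)))

run_payroll([{"name": "Ada", "hours": 40, "rate": 50.0}])
print(payments)  # [('Ada', 1600.0)]
```

Each sub-function could itself be refined further, which is exactly the "treat each component as a system and decompose again" step described above.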
7.4 Design model

Software design yields three levels of Design models:

 Architectural Design - The architectural design is the highest abstract version of the system. It
recognizes the software as a system with many components interacting with each other. At this level,
the designers get the idea of proposed solution domain.
 High-level Design - The high-level design breaks the 'single entity, multiple components' concept of
architectural design into a less-abstracted view of sub-systems and modules and depicts their interaction
with each other. High-level design focuses on how the system along with all of its components can be
implemented in forms of modules. It recognizes modular structure of each sub-system and their relation
and interaction among each other.
 Detailed Design- Detailed design deals with the implementation part of what is seen as a system and its
sub-systems in the previous two designs. It is more detailed towards modules and their
implementations. It defines logical structure of each module and their interfaces to communicate with
other modules.

The set of fundamental software design concepts are as follows:


1. Abstraction
A collection of data that describes a data object is a data abstraction. A more detailed description of the
solution is provided by a lower level of abstraction. A procedural abstraction refers to a sequence of
instructions that performs a specific and limited function.
2. Architecture
The complete structure of the software, with all its elements, is known as the software architecture. The
structure provides conceptual integrity for a system in a number of ways. In the architecture, program
modules interact with each other in specialized ways, and components use the structure of data. The aim
of software design is to obtain an architectural framework of the system, from which the more detailed
design activities are conducted.
3. Patterns
A design pattern describes a design structure that solves a particular design problem in a
specified context.
4. Modularity
Software is divided into separately named and addressable components, sometimes
called modules. Modules are integrated to satisfy the problem requirements. Modularity is the
single attribute of software that permits a program to be intellectually manageable.
5. Information hiding
Information such as algorithms and data is hidden within a module. Modules should be specified and
designed so that the information they contain is inaccessible to other modules that have no need for it.
6. Functional independence
Functional independence is a direct outgrowth of the concepts of modularity, abstraction and
information hiding. Functional independence is assessed using two qualitative criteria: cohesion and
coupling.
Cohesion
Cohesion is an extension of the information hiding concept. A cohesive module performs a
single task and requires little interaction with components in other parts of the
program.
Coupling
Coupling is an indication of the interconnection between modules in a software structure.
7. Refinement
Refinement is a top-down design approach. It is a process of elaborating the tasks step by step
by establishing a hierarchy. A program is developed by successively refining levels of procedural detail.
8. Refactoring
Refactoring is a reorganization technique that simplifies the design of a component without
changing its function or behaviour. Refactoring changes the internal structure of the software to
improve it, without changing the external behaviour of the code.
9. Design classes
The software model is defined as a set of design classes. Each class describes an element of the
problem domain, focusing on features of the problem that are visible to the user.
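Several of these concepts, modularity, information hiding, cohesion, and coupling, can be illustrated with a short sketch. The `Stack` class and `reverse` function below are hypothetical examples invented for this illustration:

```python
# Sketch illustrating information hiding and loose coupling.
# The Stack module hides its representation; clients depend only on its interface.

class Stack:
    def __init__(self):
        self._items = []          # hidden representation (information hiding)

    def push(self, item):         # cohesive: each method performs one small task
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items

def reverse(sequence, stack):
    # Loosely coupled: depends only on Stack's interface, not on the fact
    # that it is implemented with a Python list.
    for item in sequence:
        stack.push(item)
    out = []
    while not stack.is_empty():
        out.append(stack.pop())
    return out

print(reverse([1, 2, 3], Stack()))  # [3, 2, 1]
```

Because `reverse` never touches `_items`, the stack's representation could be changed (say, to a linked list) without modifying any client, which is the practical payoff of information hiding.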

Review Questions:

 Define Design Engineering.


 Discuss common stages of the Engineering Design process.
 Explain pattern-based software design.
 Define Pattern and types of patterns.
 Explain the Design model.
Part – II
C h a p t e r 8:

8.1 Creating an Architectural Design:


Software architecture is described as the organization of a system, where the system represents a
set of components that accomplish the defined functions. It is the process of defining a structured
solution that meets all of the technical and operational requirements, while optimizing common
quality attributes such as performance, security, and manageability. It involves a series of
decisions based on a wide range of factors, and each of these decisions can have considerable
impact on the quality, performance, maintainability, and overall success of the application.

Software architecture encompasses the set of significant decisions about the organization of a
software system including the selection of the structural elements and their interfaces by which
the system is composed; behavior as specified in collaboration among those elements;
composition of these structural and behavioral elements into larger subsystems; and an
architectural style that guides this organization. Software architecture also involves functionality,
usability, resilience, performance, reuse, comprehensibility, economic and technology
constraints, tradeoffs and aesthetic concerns.

Martin Fowler outlines some common recurring themes when explaining architecture:
"The highest-level breakdown of a system into its parts; the decisions that are hard to change;
there are multiple architectures in a system; what is architecturally significant can change over a
system's lifetime; and, in the end, architecture boils down to whatever the important stuff is."

In Software Architecture in Practice, Bass, Clements, and Kazman define architecture as follows:
"The software architecture of a program or computing system is the structure or structures of the
system, which comprise software elements, the externally visible properties of those elements,
and the relationships among them. Architecture is concerned with the public side of interfaces;
private details of elements—details having to do solely with internal implementation—are not
architectural."
Keep in mind that the architecture should:
 Expose the structure of the system but hide the implementation details.
 Realize all of the use cases and scenarios.
 Try to address the requirements of various stakeholders.
 Handle both functional and quality requirements.
8.1.1 Why is Architecture Important?
Software must be built on a solid foundation. Failing to consider key scenarios, failing to design
for common problems, or failing to appreciate the long term consequences of key decisions can
put your application at risk. Modern tools and platforms help to simplify the task of building
applications, but they do not replace the need to design your application carefully, based on your
specific scenarios and requirements. The risks exposed by poor architecture include software that
is unstable, is unable to support existing or future business requirements, or is difficult to deploy
or manage in a production environment.
Systems should be designed with consideration for the user, the system (the IT infrastructure),
and the business goals. For each of these areas, you should outline key scenarios and identify
important quality attributes (for example, reliability or scalability) and key areas of satisfaction
and dissatisfaction. Where possible, develop and consider metrics that measure success in each
of these areas.
Consider the following high level concerns when thinking about software architecture:
 How will the users be using the application?
 How will the application be deployed into production and managed?
 What are the quality attribute requirements for the application, such as security,
performance, concurrency, internationalization, and configuration?
 How can the application be designed to be flexible and maintainable over time?
 What are the architectural trends that might impact your application now or after it has
been deployed?

8.1.2. Architecture description languages


An architecture description language (ADL) is a formal language used to describe a software
architecture (the subject of the ISO/IEC/IEEE 42010 standard). Many special-purpose ADLs have
been developed since the 1990s, including:
1. AADL - an SAE standard
2. Wright - developed by Carnegie Mellon
3. Acme - developed by Carnegie Mellon
4. xADL - developed by UCI
5. Darwin - developed by Imperial College London
6. DAOP-ADL - developed by University of Málaga
7. SBC-ADL - developed by National Sun Yat-Sen University
8. byADL - developed by University of L'Aquila, Italy
8.2 Data Design:
Data design is the first design activity, which results in less complex, modular and efficient
program structure. The information domain model developed during analysis phase is
transformed into data structures needed for implementing the software. The data objects,
attributes, and relationships depicted in entity relationship diagrams and the information stored in
data dictionary provide a base for data design activity. During the data design process, data types
are specified along with the integrity rules required for the data. For specifying and designing
efficient data structures, some principles should be followed. These principles are listed below.
 The data structures needed for implementing the software, as well as the operations that can
be applied on them, should be identified.
 A data dictionary should be developed to depict how different data objects interact with
each other and what constraints are to be imposed on the elements of data structure.
 Stepwise refinement should be used in data design process and detailed design decisions
should be made later in the process.
 Only those modules that need to access data stored in a data structure directly should be
aware of the representation of the data structure.
 A library containing the set of useful data structures along with the operations that can be
performed on them should be maintained.
 Language used for developing the system should support abstract data types.
The structure of data can be viewed at three levels, namely, program component level,
application level, and business level.
- At the program component level, the design of data structures and the algorithms
required to manipulate them is necessary, if high-quality software is desired.
- At the application level, it is crucial to convert the data model into a database so that
the specific business objectives of a system could be achieved.
- At the business level, the collection of information stored in different databases
should be reorganized into data warehouse, which enables data mining.
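As a small illustration of specifying data types together with integrity rules during data design, consider the following sketch. The flight record and its fields are invented for the example:

```python
# Sketch: a data structure specified together with its types and integrity
# rules, as the data design principles above suggest. (Hypothetical "flight"
# record; field names are illustrative.)

FLIGHT_SCHEMA = {
    "flight_no": str,
    "seats": int,
}

def make_flight(flight_no, seats):
    record = {"flight_no": flight_no, "seats": seats}
    # Type rules checked against the schema.
    for field, ftype in FLIGHT_SCHEMA.items():
        if not isinstance(record[field], ftype):
            raise TypeError(f"{field} must be {ftype.__name__}")
    # Integrity rule enforced alongside the types.
    if seats <= 0:
        raise ValueError("seats must be positive")
    return record

print(make_flight("AI101", 180))  # {'flight_no': 'AI101', 'seats': 180}
```

Keeping the constraints next to the data definition is one way to make only the owning module aware of the representation, as the principles above recommend.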
8.2.1 Software Architecture:
Definition: Software Architecture is the process of designing the global organization of a
software system, including dividing software into subsystems, deciding how these will interact,
and determining their interfaces.
Basically, architecture plays a central role in any building construction. The architecture of a
building is described using a set of plans which, taken together, represent all aspects of the
building in one design.
Similarly Software architecture also plays a central role in software engineering, and involves
the development of a variety of high-level views of the system. The individuals called software
architects often lead a team of other software engineers.
The term 'software architecture' is also applied to the documentation produced as a result
of the process. For clarity, this documentation is often also called the architectural model.
8.3 Architectural Style and Patterns:
The architectural style, also called an architectural pattern, is a set of principles which shapes
an application. It defines an abstract framework for a family of systems in terms of a pattern
of structural organization. The architectural style is responsible to:
 Provide a lexicon of components and connectors with rules on how they can be combined.
 Improve partitioning and allow the reuse of design by giving solutions to frequently
occurring problems.
 Describe a particular way to configure a collection of components (a module with well-
defined interfaces, reusable, and replaceable) and connectors (communication link between
modules).
An architectural pattern is a general, reusable solution to a commonly occurring problem in
software architecture within a given context. Architectural patterns are often documented as
software design patterns. A software architectural style is a specific method of construction,
characterized by the features that make it notable.
An architectural style defines: a family of systems in terms of a pattern of structural
organization; a vocabulary of components and connectors, with constraints on how they can be
combined. Architectural styles are reusable 'packages' of design decisions and constraints that
are applied to architecture to induce chosen desirable qualities. The software that is built for
computer-based systems exhibit one of many architectural styles. There are many recognized
architectural patterns and styles, among them:
 Blackboard
 Broker Pattern
 Model–View–Controller (MVC)
 Client-server (2-tier, 3-tier, n-tier, cloud computing exhibit this style)
 Layered (or Multilayered architecture)
 Transaction processing Architecture
 Pipe and Filter architecture
 Peer-to-peer (P2P)

Some treat architectural patterns and architectural styles as the same, some treat styles as
specializations of patterns. They provide a common language or vocabulary with which one
can describe classes of systems.
8.3.1 Black Board System

A blackboard system is an artificial intelligence approach based on the blackboard


architectural model where a common knowledge base, the "blackboard", is iteratively updated
by a diverse group of specialist knowledge sources, starting with a problem specification and
ending with a solution. Each knowledge source updates the blackboard with a partial solution
when its internal constraints match the blackboard state. In this way, the specialists work
together to solve the problem. The blackboard model was originally designed as a way to
handle complex, ill-defined problems, where the solution is the sum of its parts.
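A minimal sketch of the blackboard control loop follows. The problem (reconstructing a string) is deliberately trivial and the knowledge sources are invented; a real blackboard system would host far richer specialists:

```python
# Minimal blackboard sketch: knowledge sources update a shared state when
# their preconditions match the blackboard, until a solution emerges.

blackboard = {"letters": list("dlrow olleh"), "solution": None}

def reverser(bb):
    # Fires only when its precondition matches the blackboard state.
    if bb["solution"] is None and "reversed" not in bb:
        bb["reversed"] = bb["letters"][::-1]

def joiner(bb):
    # Fires only after the reverser has contributed its partial solution.
    if "reversed" in bb and bb["solution"] is None:
        bb["solution"] = "".join(bb["reversed"])

knowledge_sources = [joiner, reverser]   # registration order does not matter

while blackboard["solution"] is None:    # the control component's loop
    for ks in knowledge_sources:
        ks(blackboard)

print(blackboard["solution"])  # hello world
```

Note that the specialists never call one another; they cooperate only through the shared blackboard, which is the defining property of the style.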

8.3.2. Broker Architectural Pattern

The Broker architectural pattern can be used to structure distributed software systems with
decoupled components that interact by remote service invocations. A broker component is
responsible for coordinating communication, such as forwarding requests, as well as for
transmitting results and exceptions. The idea of the Broker architectural pattern is to distribute
aspects of the software system transparently to different nodes.

Figure: Broker Architectural Pattern

Using the Broker architecture, an object can call methods of another object without knowing
that this object is remotely located. A Proxy object calls the broker, which determines where
the remote object can be found.
CORBA is a well-known open standard that allows you to build this kind of architecture – it
stands for Common Object Request Broker Architecture. Java has many classes that allow you
to use CORBA facilities. There are also several other commercial architectures that also
provide broker capabilities.
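The proxy-broker interaction described above can be sketched in a few lines. This is an in-process stand-in, not real middleware such as CORBA; the class and service names are illustrative assumptions:

```python
# Sketch of the Broker idea: a proxy forwards a call to the broker, which
# locates the "remote" object and dispatches the request.

class Broker:
    def __init__(self):
        self._registry = {}

    def register(self, name, obj):
        self._registry[name] = obj

    def invoke(self, name, method, *args):
        server = self._registry[name]        # locate the remote object
        return getattr(server, method)(*args)

class Proxy:
    def __init__(self, broker, name):
        self._broker, self._name = broker, name

    def call(self, method, *args):
        # Client-side stub: the caller does not know where the object lives.
        return self._broker.invoke(self._name, method, *args)

class WeatherService:
    def forecast(self, city):
        return f"{city}: sunny"

broker = Broker()
broker.register("weather", WeatherService())
print(Proxy(broker, "weather").call("forecast", "Pune"))  # Pune: sunny
```

In a distributed deployment the broker would marshal the request over the network, but the client code calling the proxy would look the same, which is the transparency the pattern aims for.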

Applicability
 A system that consists of multiple remote objects which interact synchronously or
asynchronously.
 A heterogeneous environment.

Problems
 Usually, there is a need for great flexibility, maintainability and changeability when
developing applications.
 Scalability is reduced.
 Inherent networking complexities such as security concerns, partial failures, etc.
 Networking diversity in protocols, operating systems, and hardware.
8.3.3. Model–View–Controller (MVC)
Model–View–Controller (MVC) is an architectural pattern used to help separate the user
interface layer from other parts of the system. It divides a given application into three
interconnected parts in order to separate internal representations of information from the ways
that information is presented to and accepted from the user. The MVC design pattern decouples
these major components allowing for efficient code reuse and parallel development. The MVC
pattern separates the functional layer of the system (the model) from two aspects of the user
interface, the view and the controller.

Figure: Model–View–Controller

 The model contains the underlying classes whose instances are to be viewed and
manipulated.
 The view contains objects used to render the appearance of the data from the model in the
user interface. The view also displays the various controls with which the user can interact.
 The controller contains the objects that control and handle the user's interaction with the
view and the model. It has the logic that responds when the user types into a field or clicks
the mouse on a control.
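The three MVC parts can be sketched as follows. This console-style counter is a hypothetical example; real frameworks add event wiring and rendering machinery:

```python
# Minimal MVC sketch: the model holds state, the view renders it, and the
# controller handles user actions and coordinates the other two.

class CounterModel:
    def __init__(self):
        self.value = 0               # the underlying data being viewed

class CounterView:
    def render(self, model):         # renders the model's data for the user
        return f"Count: {model.value}"

class CounterController:
    def __init__(self, model, view):
        self.model, self.view = model, view

    def on_click(self):              # responds to a user action
        self.model.value += 1
        return self.view.render(self.model)

controller = CounterController(CounterModel(), CounterView())
controller.on_click()
print(controller.on_click())  # Count: 2
```

Because the model knows nothing about the view, a second view (say, a graphical one) could render the same model in parallel, which is the code-reuse benefit mentioned above.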

8.3.4. Client And Server Architecture:

The basic principles in the client and server architecture are: a) there is at least one component
that has the role of server, waiting for and then handling connections, and b) there is at least
one component that has the role of client, initiating connections in order to obtain some
service. An important variant of the client–server architecture is the three-tier model under
which a server communicates with both a client (usually through the Internet) and a database
server (usually within an intranet, for security reasons). The server acts as a client when
accessing the database server. A further extension to the Client–Server architectural pattern is
the Peer-to-Peer architectural pattern.
Fig.: Client and server architecture
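The two basic roles, a server waiting for connections and a client initiating one, can be sketched with sockets. This minimal echo example runs both roles in one process for illustration; a production server would add error handling and concurrency:

```python
import socket
import threading

# Minimal client-server sketch: one component waits for and handles a
# connection (server role); another initiates it (client role).

def server(sock):
    conn, _ = sock.accept()          # wait for a connection
    with conn:
        data = conn.recv(1024)       # handle the request
        conn.sendall(b"echo: " + data)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
srv.listen(1)
threading.Thread(target=server, args=(srv,), daemon=True).start()

client = socket.create_connection(srv.getsockname())
client.sendall(b"hello")
reply = client.recv(1024)
print(reply.decode())                # echo: hello
client.close()
srv.close()
```

In the three-tier variant described above, the same program could play both roles at once: a server towards its clients and a client towards the database server.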

8.3.5. Multitier architecture


In software engineering, multitier architecture (often referred to as n-tier architecture)
or multilayered architecture is a client–server architecture in which presentation, application
processing, and data management functions are physically separated. Building software in
layers is a classical architectural pattern that is used in many systems.
A complex system can be built by superimposing layers at increasing levels of abstraction. The
Multi-Layer architectural pattern allows a layer to be replaced by an improved version, or by one with a
different set of capabilities.
N-tier application architecture provides a model by which developers can create flexible and
reusable applications. By segregating an application into tiers, developers acquire the option of
modifying or adding a specific layer, instead of reworking the entire application. A three-tier
architecture is typically composed of a presentation tier, a domain logic tier, and a data
storage tier.
The concept of layer and tier is often used interchangeably. This view holds that a layer is a
logical structuring mechanism for the elements that make up the software solution, while
a tier is a physical structuring mechanism for the system infrastructure. For example, a three-
layer solution could easily be deployed on a single tier, such as a personal workstation.

Figure: Multitier Architecture
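The three tiers can be sketched as layered classes, each talking only to the layer below it. The ticket example and class names are invented for illustration:

```python
# Sketch of a three-tier structure: presentation calls domain logic, which
# calls data storage; each layer depends only on the one directly below.

class DataTier:                      # data storage tier
    def __init__(self):
        self._rows = {"A1": "open"}

    def read(self, key):
        return self._rows[key]

class LogicTier:                     # domain logic tier
    def __init__(self, data):
        self._data = data

    def ticket_status(self, ticket_id):
        return self._data.read(ticket_id).upper()

class PresentationTier:              # presentation tier
    def __init__(self, logic):
        self._logic = logic

    def show(self, ticket_id):
        return f"Ticket {ticket_id} is {self._logic.ticket_status(ticket_id)}"

app = PresentationTier(LogicTier(DataTier()))
print(app.show("A1"))  # Ticket A1 is OPEN
```

Because each layer is reached only through its interface, the data tier could be swapped for a real database without touching the presentation code, which is the modifiability benefit claimed above.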


8.3.6. The Transaction Processing Architectural Pattern:
In the Transaction Processing architectural pattern, a process reads a series of inputs one by
one. Each input describes a transaction – a command that typically makes some change to the
data stored by the system. There is a transaction dispatcher component that decides what to do
with each transaction; this dispatches a procedure call or message to a component that will
handle the transaction. For example, in the airline system, transactions might be used to add a
new flight, add a booking, change a booking or delete a booking.

Figure: Transaction Processing System For Airline Reservation

Transaction processing systems are often embedded in servers. A typical example is a database
engine, where the transactions are various types of queries and updates. Transactions
themselves vary in their level of complexity. In many cases an update transaction requires that
several separate changes be made to a database. Many transaction processing systems work in
environments where several different threads or processes can attempt to perform transactions
at once.
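A transaction dispatcher for the airline example might be sketched as a lookup table that routes each input to its handling component. The handlers shown are simplified assumptions:

```python
# Sketch of a transaction dispatcher for the airline example in the text:
# each input names a transaction; the dispatcher routes it to a handler.

bookings = set()

def add_booking(name):
    bookings.add(name)

def delete_booking(name):
    bookings.discard(name)

DISPATCH = {                 # the transaction dispatcher table
    "add": add_booking,
    "delete": delete_booking,
}

transactions = [("add", "Smith"), ("add", "Jones"), ("delete", "Smith")]
for kind, arg in transactions:        # read the inputs one by one
    DISPATCH[kind](arg)               # dispatch to the handling component

print(sorted(bookings))  # ['Jones']
```

A real system would wrap each handler in commit/rollback logic so that a multi-step update either completes fully or not at all.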

8.3.7. The Pipe-And-Filter Architectural Pattern


In software engineering, a pipeline consists of a chain of processing elements
(processes, threads, co-routines, functions, etc.), arranged so that the output of each element is
the input of the next. The Pipe-and-Filter architectural pattern is also often called the
transformational architectural pattern. In this a stream of data, in a relatively simple format, is
passed through a series of processes, each of which transforms it in some way. The series of
processes is called a pipeline. Data is constantly fed into the pipeline; the processes work
concurrently so that data is also constantly emerging from the pipeline. Data is often
a stream of records, bytes or bits, and the elements of a pipeline may be called filters; this is
also called the pipes and filters design pattern.
A pipeline is linear and one-directional, though sometimes the term is applied to more general
flows. For example, a primarily one-directional pipeline may have some communication in
the other direction, known as a return channel or backchannel, and a pipeline may even be fully bi-
directional. Flows with one-directional tree and directed acyclic graph topologies behave
similarly to (linear) pipelines – the lack of cycles makes them simple – and thus may be
loosely referred to as "pipelines". The strength of this pattern is that the system can be
modified easily by adding or changing the transformational processes. This is easiest if, at
most stages of the pipeline, the data have the same general form.
Another example of a pipe-and-filter architecture is a speech transmission system. This would
continuously read sound coming from microphones, process it in various ways, compress it,
transmit it over a network and then regenerate sound at a remote location. It would use several
different transformational components to do this. Its architecture can be illustrated as follows.
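(The figure referred to above is not reproduced here.) The pipe-and-filter idea itself can be sketched with generators, which give the incremental, stream-like flavour of a pipeline; the filters shown are invented for the example:

```python
# Pipe-and-filter sketch: a stream of records passes through a series of
# filters; each transforms its input and feeds the next.

def source(lines):                   # data source feeding the pipeline
    for line in lines:
        yield line

def strip_blanks(stream):            # filter 1: drop empty records, trim
    for line in stream:
        if line.strip():
            yield line.strip()

def to_upper(stream):                # filter 2: transform each record
    for line in stream:
        yield line.upper()

pipeline = to_upper(strip_blanks(source(["  pipe ", "", "and filter"])))
print(list(pipeline))  # ['PIPE', 'AND FILTER']
```

Inserting a new stage is just wrapping the stream in one more generator, which illustrates why the pattern is easy to modify when every stage shares the same data form.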

8.3.8. Peer to Peer (P2P):


Peer-to-peer (P2P) computing or networking is a distributed application architecture that
partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the
application. They are said to form a peer-to-peer network of nodes.
Peers make a portion of their resources, such as processing power, disk storage or network
bandwidth, directly available to other network participants, without the need for central
coordination by servers or stable hosts. Peers are both suppliers and consumers of resources, in
contrast to the traditional client-server model in which the consumption and supply of
resources is divided.
A peer-to-peer system is composed of various software components that are distributed over
several hosts. Each of these components can be both a server and a client. Any two
components can set up a communication channel to exchange information as required. The
concept has inspired new structures and philosophies in many areas of human interaction. In
such social contexts, peer-to-peer as a meme refers to the egalitarian social networking that has
emerged throughout society, enabled by Internet technologies in general.

Figure: Peer to Peer Architectural Pattern
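The peer symmetry, every node acting as both client and server with no central coordinator, can be sketched in-process. The `Peer` class and its resources are illustrative assumptions:

```python
# Sketch of peer symmetry: every node can both serve resources and request
# them from other peers, with no central server.

class Peer:
    def __init__(self, name, resources):
        self.name = name
        self.resources = dict(resources)   # what this peer can supply
        self.neighbours = []

    def serve(self, key):                  # acting as a server
        return self.resources.get(key)

    def request(self, key):                # acting as a client
        if key in self.resources:
            return self.resources[key]
        for peer in self.neighbours:       # ask other peers directly
            found = peer.serve(key)
            if found is not None:
                return found
        return None

alice = Peer("alice", {"song.mp3": b"..."})
bob = Peer("bob", {"doc.pdf": b"..."})
alice.neighbours.append(bob)
bob.neighbours.append(alice)

print(alice.request("doc.pdf") is not None)  # True
```

Contrast this with the client-server sketch earlier: here neither node is special, and each supplies as well as consumes resources.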


8.4. Architectural Design:
Definition by IEEE: An Architectural Design is the process of defining a collection of
hardware and software components and their interfaces to establish the framework for the
development of a computer system.
Definition 2: The architecture design focuses on the decomposition of a system into different
components and their interactions to satisfy functional and nonfunctional requirements. The
architecture design is also called system design. It appears as a framework, or we can say a
preliminary blueprint, from which the software can be developed. The key inputs to software
architecture design are:
 The requirements produced by the analysis tasks.
 The hardware architecture (the software architect in turn provides requirements to the
system architect, who configures the hardware architecture).
The result or output of the architecture design process is an architectural description. The
basic architecture design process is composed of the following steps:
8.4.1 Problem Recognition
This is the most crucial step because it affects the quality of the design that follows. Without a
clear understanding of the problem, it is not possible to create an effective solution. In fact,
many software projects and products are considered as unsuccessful because they did not
actually solve a valid business problem or have a recognizable return on investment (ROI).
Identify Design Elements and their Relationships:
In this phase, build a baseline for defining the boundaries and context of the system.
Decomposition of the system into its main components is based on the functional
requirements. The decomposition can be modeled by using a design structure matrix (DSM),
which shows the dependencies between design elements without specifying the granularity of
the elements. In this step, the first validation of the architecture is done by describing a number
of system instances; this step is referred to as functionality-based architectural design.

Figure: Basic Architectural Design Process


8.4.2. Plan the Architecture Design:
The design is evaluated in order to give each quality attribute an estimate, gathering qualitative
measures or quantitative data. This involves evaluating the architecture for
conformance to the architectural quality attribute requirements.
If all the estimated quality attributes meet the required standard, the architectural design
process is finished. If an observed quality attribute does not meet its requirements, the third
phase of software architecture design is entered: architecture transformation, in which a new
design must be created.

8.4.3. Transform the Architecture Design


This step is performed after an evaluation of the architectural design. The architectural design
must be changed until it completely satisfies the quality attribute requirements. It is concerned
with selecting design solutions to improve the quality attributes while preserving the domain
functionality.
Further, a design is transformed by applying design operators, styles, or patterns. For
transformation, take the existing design and apply design operators such as decomposition,
replication, compression, abstraction, and resource sharing.
Moreover, the design is again evaluated and the same process is repeated multiple times if
necessary and even performed recursively. The transformations (i.e. quality attribute
optimizing solutions) generally improve one or some quality attributes while they affect others
negatively.

8.4.4. The Principles of Architecture Design


Current thinking on architecture assumes that your design will evolve over time and that you
cannot know everything you need to know up front in order to fully architect your system.
Your design will generally need to evolve during the implementation stages of the application
as you learn more, and as you test the design against real world requirements. Create your
architecture with this evolution in mind so that it will be able to adapt to requirements that are
not fully known at the start of the design process.
Consider the following questions as you create an architectural design:
 What are the foundational parts of the architecture that represent the greatest risk if you
get them wrong?
 What are the parts of the architecture that are most likely to change, or whose design you
can delay until later with little impact?
 What conditions may require you to refactor the design?

Review Questions:

1. Define Architecture and Software Architecture. Also explain the importance of


architecture.
2. Explain any four architectural styles and patterns.
3. Explain Client-server architectural pattern in details.
4. Give IEEE definition of architectural design also explain it with the help of diagram.
5. What is data design in terms of software engineering?
C h a p t e r 9
TESTING STRATEGIES: An Overview
9.1 A Strategic Approach To Software Testing
 A testing strategy is an outline that describes the testing approach of the software development
cycle.
 It defines how testing should be carried out. A testing strategy is used to identify the levels of
testing which are to be applied, along with the techniques and tools to be used during testing.
 A software testing strategy helps to convert test case designs into well-planned execution
steps that will result in the construction of successful software.
 It is created to inform project managers, testers, and developers about some key issues of
the testing process.
 The strategy also decides the test cases and test specifications, and how these are
associated for execution.
 The testing strategy must be developed so that it meets the requirements of the
organization, as it is critical to the success of the software.
 The main purpose of testing can be quality assurance, reliability estimation, validation or
verification.
 In overview, a testing strategy must contain complete information about the procedures
to be performed during testing and the purpose and requirements of testing.
 The nature of the software's development decides the choice of testing strategy. The design and
architecture of the software are also useful in choosing a testing strategy.
 All testing strategies have the following characteristics:
1. A software team should conduct effective formal reviews. This eliminates many
errors before testing starts.
2. Testing begins at the component level and works "outward" towards the integration of the
entire computer-based system.
3. Different testing techniques are appropriate at different points in time.
4. Testing is conducted by the developer and for large project, an independent test
group.
5. Testing and debugging are different activities, but debugging must be included in any
testing strategy.

 The software testing strategy must accommodate both low-level tests and high-level tests. Low-level tests verify that each small source code segment has been implemented correctly, while high-level tests validate major system functions against customer requirements.
 Hence testing is a set of activities that can be planned in advance and conducted systematically. For this reason, a template for software testing must be defined for the software process. The template is nothing but a set of steps into which specific test case design techniques and testing methods can be placed.
9.2 Conventional Software
Conventional software is software or applications that perform some particular task. For example, desktop applications such as Microsoft PowerPoint and Microsoft Excel are considered conventional software. Many software errors are eliminated before testing begins by conducting effective technical reviews.

9.2.1 Testing Strategies For Conventional Software:


Software testing is one of the significant activities in software development. It determines the correctness, completeness, and quality of the software product. Testing begins "in the small" and progresses "to the large". Conventional testing is based on the conventions and testing standards planned as per the Quality Management System. It is essentially validation, i.e., testing done while executing the code: the test engineers check whether the developed application, or its related parts, works according to the requirements. In conventional testing, the developed components of the application are checked by the tester to see whether they work according to the expectations of the consumers. Conventional testing focuses more on the functionality of the software system or application, whereas unconventional testing focuses on the guidelines and specifications provided by the client company. In conclusion, conventional testing closely mirrors the quality management system, whereas unconventional testing, as its name suggests, does not follow any conventions.
Conventional software development is a spiral process. Testing may also be viewed in the context of the spiral, as in the following figure.

Figure: Test Strategy as spiral process

Here we spiral along streamlines that decrease the level of abstraction with each turn. The initial phase of software development is system engineering, which leads to software requirements analysis, in which the information processing, function, behavior, performance, constraints, and validation criteria for the software are established. Moving inward along the spiral, we come to design and finally to coding. Accordingly, the strategy for software testing may also be viewed in the context of the spiral of system development: first comes unit testing, then integration testing, validation testing, and finally system testing, i.e., from low level to high level.
1. Unit Testing:
At the vertex of the spiral, testing begins with unit testing. It aims at testing each component or unit of the software independently to check its functionality and to ensure that it works properly as a unit. The typical aspects tested are:
 Interface: tested to check proper flow of information into and out of the program unit
under test.
 Local data structures: tested to check integrity of data during execution.
 Boundary conditions: tested to ensure unit operates properly at boundaries to limit
processing.
 Independent paths: tested to ensure all statements in the unit are executed at least once.
 Error handling paths: tested to check whether error messages are user friendly and correspond to the error encountered, and whether processing is rerouted or terminated cleanly when an error occurs.
 Common errors found during unit testing include: incorrect initialization, precision inaccuracy, mixed-mode operations, and incorrect arithmetic precedence.
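The boundary-condition and error-handling checks listed above can be sketched as a small unit test. The `clamp_reading` function and its valid range are hypothetical, invented purely for illustration:

```python
# Hypothetical unit under test: clamps a sensor reading to a valid range
# and rejects non-numeric input via an error-handling path.
def clamp_reading(value, low=0, high=100):
    if not isinstance(value, (int, float)):
        raise ValueError("reading must be numeric")
    return max(low, min(high, value))

def test_clamp_reading():
    assert clamp_reading(50) == 50      # nominal interior value
    assert clamp_reading(0) == 0        # lower boundary
    assert clamp_reading(100) == 100    # upper boundary
    assert clamp_reading(-1) == 0       # just outside lower boundary
    assert clamp_reading(101) == 100    # just outside upper boundary
    try:
        clamp_reading("bad")            # error-handling path
    except ValueError:
        pass                            # error reported as expected
    else:
        raise AssertionError("expected ValueError for non-numeric input")

test_clamp_reading()
```

Note how the test exercises the interface, the boundary conditions, and the error-handling path of the unit in isolation.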

2. Integration Testing:


Proceeding further with the testing process, these units must be assembled or integrated to form the complete software package. Integration testing thus focuses on the problems of verification and program construction as the software architecture is built. Sandwich testing uses top-down tests for the upper levels of the program structure coupled with bottom-up tests for the subordinate levels. The overall plan for integration of the software and the specific tests are documented in a test specification. Black-box testing techniques are the most prevalent during integration, although some white-box testing may be used to ensure coverage of particular aspects.
Integration Testing Strategies:
 Top-down integration testing
1. Main control module used as a test driver and stubs are substitutes for components
directly subordinate to it.
2. Subordinate stubs are replaced one at a time with real components (following the
depth-first or breadth-first approach).
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with a real component.
5. Regression testing may be used to ensure that new errors are not introduced.
 Bottom-up integration testing
1. Low level components are combined into clusters that perform a specific software
function.
2. A driver (control program) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program
structure.
 Regression testing – used to check for defects propagated to other modules by changes
made to existing program
1. Representative sample of existing test cases is used to exercise all software
functions.
2. Additional test cases focusing on software functions likely to be affected by the change.
3. Test cases that focus on the changed software components.
 Smoke testing
1. Software components already translated into code are integrated into a build.
2. A series of tests designed to expose errors that will keep the build from performing
its functions are created.
3. The build is integrated with the other builds and the entire product is smoke tested
daily (either top-down or bottom-up integration may be used).
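The roles of stubs and drivers in the strategies above can be sketched in Python. The alarm example and its threshold are hypothetical; the point is that a stub stands in for a subordinate component during top-down integration, while a driver coordinates test input and output for a low-level cluster during bottom-up integration:

```python
# Hypothetical main control module; the subordinate sensor component is
# injected so that a stub can stand in for it during top-down integration.
def alarm_status(read_sensor):
    level = read_sensor()
    return "ALARM" if level > 75 else "OK"

# Stub: a canned substitute for the real sensor component, chosen to
# exercise the alarm path of the module above.
def sensor_stub():
    return 80

# Driver: a small control program that coordinates test input and output,
# as used when testing a low-level cluster bottom-up.
def integration_driver():
    return alarm_status(sensor_stub)

print(integration_driver())  # ALARM
```

As integration proceeds, the stub would be replaced one step at a time by the real sensor component, and the results re-checked with regression tests.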

3. Validation Testing:


Taking one more outward turn along the spiral, we come to validation testing. It consists of higher-order tests using the validation criteria defined during the requirements analysis phase. These tests assure that the software meets all functional, behavioral, and performance requirements.
Validation tests are based on the use-case scenarios, the behavior model, and the event flow
diagram created in the analysis model. Configuration review or audit is used to ensure that all
elements of the software configuration have been properly developed, cataloged, and
documented to allow its support during its maintenance phase. Validation testing provides
final assurance that software meets all the functional, behavioral, and performance
requirements. Usually, black-box testing techniques are used exclusively during validation. Validation succeeds when the software functions in a manner that can reasonably be expected by the customer. The requirements specification describes all user-visible attributes of the software and defines these reasonable expectations. It contains a section called validation criteria, which forms the basis for the validation testing approach.

4. System Testing:


Finally we arrive at system testing, where software and other system elements are tested as a whole. It is a series of tests whose purpose is to fully exercise the computer-based system, with a focus on identifying interfacing errors. Various tests performed during system testing include:
-Recovery testing checks the system's ability to recover from failures.
-Security testing verifies that system protection mechanism prevent improper penetration or
data alteration.
-Stress testing checks how well the program deals with abnormal resource demands.
-Performance testing designed to test the run-time performance of software, especially real-
time software.
-Deployment (or configuration) testing exercises the software in each of the environment in
which it is to operate.
Hence software once validated must be combined with other system. System testing verifies
that all the elements mesh properly and that overall system performance or function is
achieved. It is actually a series of different tests whose primary purpose is to exercise the
computer based system. Although each test has different purpose, all work to verify that
system elements have been properly integrated and perform allocated functions as discussed
above.

9.3. Black-Box testing:


Black-box testing is a method of software testing that examines the functionality of an application without knowledge of the internal structure, design, or implementation of the item being tested. It is also known as behavioral testing or specification-based testing. This method attempts to find errors in the following categories:
-Incorrect or missing functions
-Errors in data structures or external database access
-Behavior or performance errors
-Initialization and termination errors
Independent Testing Team usually performs this type of testing during the software testing life
cycle. This method of test can be applied to each and every level of software testing such as unit,
integration, system and acceptance testing.

Figure: Black-Box Testing


As we can see in the above figure, the software program, in the eyes of the tester, is like a black box into which one cannot see; hence the name of this testing method. It is a procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system, without reference to its internal structure.

For Example: A tester, without knowledge of the internal structures of a website, tests the web
pages by using a browser; providing inputs (clicks, keystrokes) and verifying the outputs against
the expected outcome.

9.3.1 Black Box Testing Techniques

Following are some techniques that can be used for designing black box tests.
 Equivalence Class
 Boundary Value Analysis
 Cause Effect Graphing
 Orthogonal Arrays
 Decision Tables
 State Models
 Exploratory Testing
 All-pairs testing

Some of these techniques are explained in short.


 Equivalence partitioning: a software test design technique that involves dividing input values into valid and invalid partitions and selecting representative values from each partition as test data.
 Boundary Value Analysis: a software test design technique that involves determining the boundaries for input values and selecting values at the boundaries, and just inside/outside of the boundaries, as test data.
 Cause Effect Graphing: a software test design technique that involves identifying the causes (input conditions) and effects (output conditions), producing a Cause-Effect Graph, and generating test cases accordingly.
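Equivalence partitioning and boundary value analysis lend themselves to simple test-data generators. A sketch in Python, assuming an integer input field with an inclusive valid range (the age range 18..60 is hypothetical):

```python
def boundary_values(low, high):
    """Boundary value analysis: values at, just inside, and just outside
    the boundaries of an inclusive integer range [low, high]."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def equivalence_classes(low, high):
    """Equivalence partitioning: one representative from the valid
    partition and one from each invalid partition (below and above)."""
    return {"valid": (low + high) // 2, "below": low - 1, "above": high + 1}

# Example: a field accepting ages 18..60 (hypothetical requirement)
print(boundary_values(18, 60))       # [17, 18, 19, 59, 60, 61]
print(equivalence_classes(18, 60))   # {'valid': 39, 'below': 17, 'above': 61}
```

Both techniques derive test data purely from the specification of the input range, with no reference to the implementation, which is what makes them black-box methods.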

9.3.2 Black Box Testing Advantages

 Tests are done from a user‘s point of view and will help in exposing discrepancies in the
specifications.
 Tester need not know programming languages or how the software has been implemented.
 Tests can be conducted by a body independent from the developers, which allows for an
objective perspective.
 Test cases can be designed as soon as the specifications are complete.

9.3.3 Black Box Testing Disadvantages

 Only a small number of possible inputs can be tested and many program paths will be left
untested.
 Without clear specification test cases will be difficult to design.
 Tests can be redundant if the software developer has already run a test case.

9.4. White Box Testing:


-It is also known as Clear Box Testing, Open Box Testing, Glass Box Testing, Transparent Box
Testing, Code-Based Testing or Structural Testing.
-White box testing examines the program structure and derives test data from the program
logic/code.
-White Box testing is a software testing method in which the internal structure/ design/
implementation of the item being tested is known to the tester.
-The tester chooses inputs to exercise paths through the code and determines the appropriate
outputs.
-Programming know-how and implementation knowledge are essential. White box testing goes beyond the user interface and into the nitty-gritty of a system.
White Box Testing method is applicable to the following levels of software testing:
1. Unit Testing: For testing paths within a unit.
2. Integration Testing: For testing paths between units.
3. System Testing: For testing paths between subsystems.
However, it is mainly applied to unit testing.
For Example
A tester, usually a developer as well, studies the implementation code of a certain field on a webpage, determines all legal (valid and invalid) and illegal inputs, and verifies the outputs against the expected outcomes, which are also determined by studying the implementation code.
White Box Testing is like the work of a mechanic who examines the engine to see why the car
is not moving.

9.4.1. White Box Testing Techniques:


 Statement Coverage - This technique is aimed at exercising all programming statements with
minimal tests.
 Branch Coverage - This technique runs a series of tests to ensure that all branches are
tested at least once.
 Path Coverage - This technique corresponds to testing all possible paths which means that
each statement and branch is covered.
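The difference between these coverage levels can be illustrated with a unit containing two independent decisions. The `categorize` function is hypothetical; note that two inputs achieve full branch coverage, while covering all four possible paths needs four inputs:

```python
# A unit with two independent decisions, giving four possible paths.
def categorize(n):
    if n < 0:                 # decision 1
        sign = "negative"
    else:
        sign = "non-negative"
    if n % 2 == 0:            # decision 2
        parity = "even"
    else:
        parity = "odd"
    return sign + " " + parity

# Branch coverage: every branch outcome taken at least once.
# Two inputs suffice: -3 takes (true, false), 4 takes (false, true).
branch_suite = [-3, 4]

# Path coverage: every combination of branch outcomes -> four inputs.
path_suite = [-3, -2, 1, 4]

assert [categorize(n) for n in branch_suite] == ["negative odd", "non-negative even"]
```

Statement coverage is weaker still: here any two inputs hitting opposite branches of each decision already execute every statement, which is why path coverage is the most thorough (and the most expensive) of the three.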

9.4.2. White Box Testing Advantages

 Testing can be commenced at an earlier stage. One need not wait for the GUI to be available.
 Forces test developer to reason carefully about implementation.
 Reveals errors in "hidden" code.
 Spots the Dead Code or other issues with respect to best programming practices.
 Testing is more thorough, with the possibility of covering most paths.

9.4.3. White Box Testing Disadvantages

 Since tests can be very complex, highly skilled resources are required, with thorough
knowledge of programming and implementation.
 Expensive as one has to spend both time and money to perform white box testing.
 There is every possibility that a few lines of code are missed accidentally.
 In-depth knowledge about the programming language is necessary to perform white box
testing.
 Test script maintenance can be a burden if the implementation changes too frequently.
 Since this method of testing is closely tied to the application being tested, tools catering to every kind of implementation/platform may not be readily available.
9.5. The Art of Debugging:
Debugging occurs as a consequence of successful testing. When a test case uncovers an error, debugging is the process that results in the removal of that error. A testing strategy can be defined and test case design can be conducted for expected results; yet although debugging can and should be an orderly process, in practice it is still an art.
As a result of testing, a software engineer is confronted with a symptomatic indication of a software problem. The external symptom of the error and its internal cause may have no obvious relationship to one another. Debugging is not testing, but it always occurs as a consequence of testing. The debugging process begins with the execution of a test case. Results are assessed and a lack of correspondence between expected and actual performance is found. In many cases, the non-corresponding data are a symptom of an underlying cause that debugging must locate.
Sometimes a symptom may appear in one part of the program while the cause is actually located at a site that is far removed, and the symptom may disappear temporarily when another error is corrected. A symptom may actually be caused by a non-error, such as round-off inaccuracy, or by human error that is not easily traced. It may be the result of timing problems rather than processing problems, and it may be difficult to accurately reproduce the input conditions, as in real-time applications. Symptoms may also be intermittent, which is particularly common in embedded systems, where they may be due to causes distributed across a number of tasks running on different processors.

9.5.1 Debugging Strategies

 Brute force – memory dumps and run-time traces are examined for clues to error causes
 Backtracking – source code is examined by looking backwards from symptom to potential
causes of errors
 Cause elimination – uses binary partitioning to reduce the number of potential locations where the error can exist.
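Cause elimination by binary partitioning can be sketched as a bisection over candidate inputs. The failing value and the `fails` predicate below are hypothetical; the same halving idea applies when bisecting over a history of code changes:

```python
# Cause elimination by binary partitioning: repeatedly halve the candidate
# inputs until the single failure-inducing item is isolated.
def find_failing_input(inputs, fails):
    lo, hi = 0, len(inputs)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # If re-running the test on the first half reproduces the failure,
        # the cause lies there; otherwise it lies in the second half.
        if any(fails(x) for x in inputs[lo:mid]):
            hi = mid
        else:
            lo = mid
    return inputs[lo]

# Hypothetical defect: the unit fails only on the value 37.
data = list(range(100))
print(find_failing_input(data, lambda x: x == 37))  # 37
```

Each halving eliminates half of the remaining suspect locations, so the cause is isolated in a logarithmic number of test runs rather than a linear scan.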

Review Questions:
1. Explain Strategic approach to software testing in details.
2. What is Conventional software also explain test strategies for conventional software.
3. Explain Black-Box and White-Box testing with their diagrams also list their advantages and
disadvantages.
4. Write a short note on art of debugging.
Product Metrics
An Overview
C h a p t e r 10
10.1 Introduction:
What are metrics?
The IEEE glossary defines a metric as "a quantitative measure of the degree to which a system, component, or process possesses a given attribute."
Software metrics can be classified into three categories:
1. Product Metrics,
2. Process Metrics,
3. Project Metrics.
Product metrics describe the characteristics of the product such as size, complexity, design
features, performance, and quality level. They focus on the quality of deliverables. Product
metrics are combined across several projects to produce process metrics. Process metrics can be
used to improve software development and maintenance. Process metrics are collected across all
projects and over long periods of time. They are used for making strategic decisions. The intent is
to provide a set of process indicators that lead to long-term software process improvement. The only way to know how and where to improve any process is to measure specific attributes of the process, develop a set of meaningful metrics based on these attributes, and use the metrics to provide indicators that will lead to a strategy for improvement. Examples include the
effectiveness of defect removal during development, the pattern of testing defect arrival, and the
response time of the fix process. Project metrics describe the project characteristics and execution.
Examples include the number of software developers, the staffing pattern over the life cycle of the
software, cost, schedule, and productivity. Some metrics belong to multiple categories. For
example, in-process quality metrics of a project are both process metrics and project metrics.

In software process, basic quality and productivity data are collected. These data are analyzed,
compared against past averages, and assessed. The goal is to determine whether quality and
productivity improvements have occurred. The data can also be used to pinpoint problem areas.
Remedies can then be developed and the software process can be improved.

10.1.1 Need for software metrics:


1. To characterize in order to gain an understanding of processes, products, resources, and
environments.
2. To evaluate in order to determine status with respect to plans
3. To predict in order to gain understanding of relationships among processes and products.
4. Build models of these relationships.
5. To improve in order to identify roadblocks, root causes, inefficiencies, and other
opportunities for improving product quality and process performance.

10.2 Software Quality:

Software quality is the degree of conformance to explicit or implicit requirements and expectations.
Definition by IEEE
 The degree to which a system, component, or process meets specified requirements.
 The degree to which a system, component, or process meets customer or user needs or expectations.

Other Definitions

 Quality: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.
 Software Quality: The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs.

In the context of software engineering, software quality refers to two related but distinct notions:

 Software functional quality reflects how well it complies with or conforms to a given
design, based on functional requirements or specifications. It is the degree to which
the correct software was produced.
 Software structural quality refers to how it meets non-functional requirements that support
the delivery of the functional requirements, such as robustness or maintainability.
Software quality measurement quantifies the extent to which a software program or system rates along each of these dimensions. An aggregated measure of software quality can be computed through a qualitative or quantitative scoring scheme, or a mix of both, together with a weighting system reflecting the priorities. Programming errors found at the system level represent up to 90% of production issues; consequently, code quality assessed without the context of the whole system has limited value.
"A science is as mature as its measurement tools." Measuring software quality is motivated by at least two reasons: 1. Risk Management and 2. Cost Management.

 Risk Management: Software failure has caused more than inconvenience; software errors have caused human fatalities. The causes have ranged from poorly designed user interfaces to direct programming errors. This has resulted in regulatory requirements for the development of some types of software, particularly software embedded in medical and other devices that regulate critical infrastructure.
 Cost Management: As in other fields of engineering, an application with good structural software quality costs less to maintain and is easier to understand and change. Industry data demonstrate that poor structural quality in core business applications (such as enterprise resource planning (ERP), customer relationship management (CRM), or large transaction processing systems in financial services) results in cost and schedule overruns and creates waste in the form of rework.
Such software systems now use multi-layered technology stacks and complex architectures, so software quality analysis and measurement have to be managed in a comprehensive and consistent manner. There are many different definitions of quality. For some it is the "capability of a software product to conform to requirements." A few definitions given by various authors follow.
1. Software quality according to Deming
The difficulty in defining quality is to translate future needs of the user into measurable
characteristics, so that a product can be designed and turned out to give satisfaction at a
price that the user will pay. This is not easy, and as soon as one feels fairly successful in the
endeavor, he finds that the needs of the consumer have changed, competitors have moved
in, etc.
2. Software quality according to Feigenbaum
Quality is a customer determination, not an engineer's determination, not a marketing
determination, nor a general management determination. It is based on the customer's
actual experience with the product or service, measured against his or her requirements --
stated or unstated, conscious or merely sensed, technically operational or entirely
subjective -- and always representing a moving target in a competitive market.
3. Software quality according to Juran
The word quality has multiple meanings. Two of these meanings dominate the use of the
word: 1. Quality consists of those product features which meet the need of customers and
thereby provide product satisfaction. 2. Quality consists of freedom from deficiencies.
Nevertheless, in a handbook such as this it is convenient to standardize on a short definition
of the word quality as "fitness for use".
10.2.1 CISQ's (Consortium for IT Software Quality) quality model
Even though "quality is a perceptual, conditional and somewhat subjective attribute and may be understood differently by different people", software structural quality characteristics have been clearly defined by the Consortium for IT Software Quality (CISQ). Under the guidance of Bill Curtis, CISQ has defined five major desirable characteristics of a piece of software; these are the "whats" that need to be achieved:
1. Reliability
Reliability measures the level of risk and the likelihood of potential application failures. It also measures the defects injected due to modifications made to the software (its "stability"). The goal of checking and monitoring reliability is to reduce and prevent application downtime, application outages, and errors that directly affect users.
2. Efficiency
Efficiency is especially important for applications in high execution speed environments
such as algorithmic or transactional processing where performance and scalability are
paramount. The source code and software architecture attributes are the elements that ensure
high performance. An analysis of source code efficiency and scalability provides a clear
picture of the latent risks and the harm they can cause to customer satisfaction due to
response-time degradation.
3. Security
A measure of the likelihood of potential security breaches due to poor coding practices and
architecture. This quantifies the risk of encountering critical vulnerabilities that damage the
business.
4. Maintainability
Maintainability includes concepts of modularity, understandability, changeability,
testability, reusability, and transferability from one development team to another.
Measuring and monitoring maintainability is a must for mission-critical applications where
change is driven by tight time-to-market schedules and where it is important for IT to
remain responsive to business-driven changes. It is also essential to keep maintenance costs
under control.
5. Size
Measuring software size requires that the whole source code be correctly gathered,
including database structure scripts, data manipulation source code, component headers,
configuration files etc. The sizing of source code is a software characteristic that obviously
impacts maintainability. Combined with the above quality characteristics, software size can
be used to assess the amount of work produced and other SDLC-related metrics.
McCall's quality factors were proposed in the early 1970s. They are as valid today as they were at that time. It is likely that software built to conform to these factors will exhibit high quality well into the 21st century, even if there are dramatic changes in technology.

10.3 Metrics for Analysis Model


Technical work in software engineering begins with the creation of the analysis model. It is at this
stage that requirements are derived and that a foundation for design is established. Therefore,
technical metrics that provide insight into the quality of the analysis model are desirable. Metrics originally derived for project estimation can be adapted for this purpose. These metrics examine the analysis model to predict the "size" of the resultant system; size and design complexity are likely to be directly correlated. Function points, lines of code, and the bang metric are the commonly used methods for size estimation.

Commonly used analysis model metrics:


1. The Function Point Metrics
2. Lines of Code (LOC)
3. Metrics for Specification Quality
4. Bang Metrics

10.3.1 The Function Point Metrics:


It can be used effectively as a means for predicting the size of a system that will be derived
from the analysis model. The function point metric, which was proposed by A.J Albrecht, is
used to measure the functionality delivered by the system, estimate the effort, predict the
number of errors, and estimate the number of components in the system. Function point is
derived using a relationship between the complexity of the software and the information domain values. A simple analysis model representation is illustrated in the following figure: the data flow diagram represents the analysis model of a function of the SafeHome security software.

-The function manages user interaction, accepting a user password to activate or deactivate the
system, and allows inquiries on the status of security zones and various security sensors.
-The function displays a series of prompting messages and sends appropriate control signals to
various components of the security system.

The above data flow diagram is evaluated to determine the following measures required for
computation of the function point metrics:
• Number of user inputs
• Number of user outputs
• Number of user inquiries
• Number of files
• Number of external interfaces
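Given those five information-domain counts, an unadjusted function point total is computed by weighting each count and then applying the value adjustment factor, FP = UFP * (0.65 + 0.01 * sum(Fi)). The sketch below uses the standard average-complexity weights; the example counts mirror the SafeHome function, while the fourteen adjustment-factor ratings are hypothetical:

```python
# Unadjusted function points: weight each information-domain count using
# the standard average-complexity weights, then apply the value
# adjustment factor FP = UFP * (0.65 + 0.01 * sum(Fi)).
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4,
           "files": 10, "interfaces": 7}

def function_points(counts, adjustment_factors):
    ufp = sum(counts[k] * WEIGHTS[k] for k in WEIGHTS)   # unadjusted FP
    vaf = 0.65 + 0.01 * sum(adjustment_factors)          # value adjustment
    return ufp * vaf

# Counts for the SafeHome function; the fourteen adjustment-factor
# ratings (each 0..5) are hypothetical and all set to 3 here.
counts = {"inputs": 3, "outputs": 2, "inquiries": 2,
          "files": 1, "interfaces": 4}
fp = function_points(counts, [3] * 14)   # UFP = 68, VAF = 1.07
```

The resulting figure can then be used to estimate effort, predict the number of errors, or normalize other metrics (e.g., defects per function point).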

10.3.2. Lines of Code (LOC)


Line of code (LOC) is one of the most widely used methods for size estimation. LOC can be
defined as the number of delivered lines of code, excluding comments and blank lines. It is
highly dependent on the programming language used as code writing varies from one
programming language to another. For example, lines of code written (for a large program) in
assembly language are more than lines of code written in C++ or Java.
Simple size-oriented metrics can be derived from LOC such as errors per KLOC (thousand
lines of code), defects per KLOC, cost per KLOC, and so on. LOC has also been used to
predict program complexity, development effort, programmer performance, and so on. For
example, Halstead proposed a number of metrics, which are used to calculate program length,
program volume, program difficulty, and development effort.
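A minimal LOC counter that follows the definition above (delivered lines excluding comments and blank lines) might look like this in Python. The '#' comment convention is Python-specific, and a real counter would also have to handle block comments and string literals:

```python
# Count delivered lines of code, excluding blank lines and comment-only
# lines ('#' is used as the comment marker, as in Python source).
def count_loc(source):
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            loc += 1
    return loc

sample = """# reads the working directory
import os

def main():
    # entry point
    print(os.getcwd())
"""
print(count_loc(sample))  # 3 (the import, def, and print lines)
```

Size-oriented metrics then follow directly, e.g., errors per KLOC = errors / (LOC / 1000).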
10.3.3. Metrics for Specification Quality
To evaluate the quality of analysis model and requirements specification, a set of
characteristics has been proposed. These characteristics include specificity, completeness,
correctness, understandability, verifiability, internal and external consistency, &achievability,
concision, traceability, modifiability, precision, and reusability.
Most of the characteristics listed above are qualitative in nature. However, each of these characteristics can be represented by using one or more metrics. For example, if there are nr requirements in a specification, then nr can be calculated by the following equation:
nr = nf + nnf
Where
nf = number of functional requirements
nnf = number of non-functional requirements.
In order to determine the specificity of requirements, a metric based on the consistency of the
reviewer's understanding of each requirement has been proposed. This metric is represented
by the following equation.
Q1 = nui/nr
Where
nui = number of requirements for which reviewers have same understanding
Q1 = specificity.
Ambiguity of the specification depends on the value of Q1. If the value of Q1 is close to 1, then the probability of ambiguity is low.
Completeness of the functional requirements can be calculated by the following equation.
Q2 = nu / [ni * ns]
Where
nu = number of unique functional requirements
ni = number of inputs defined by the specification
ns = number of specified states.
Q2 in the above equation considers only functional requirements and ignores non-functional
requirements. In order to consider non-functional requirements, it is necessary to consider the
degree to which requirements have been validated. This can be represented by the following
equation.
Q3 = nc/ [nc + nnv]
Where
nc= number of requirements validated as correct
nnv= number of requirements, which are yet to be validated.
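The three specification-quality metrics above translate directly into code. The counts used in the example below are hypothetical:

```python
# The three specification-quality metrics, transcribed from the equations.
def specificity(nui, nf, nnf):
    return nui / (nf + nnf)        # Q1 = nui / nr, with nr = nf + nnf

def completeness(nu, ni, ns):
    return nu / (ni * ns)          # Q2 = nu / (ni * ns)

def validation_degree(nc, nnv):
    return nc / (nc + nnv)         # Q3 = nc / (nc + nnv)

q1 = specificity(nui=18, nf=16, nnf=4)    # 18/20 = 0.9 -> little ambiguity
q2 = completeness(nu=12, ni=4, ns=5)      # 12/20 = 0.6
q3 = validation_degree(nc=15, nnv=5)      # 15/20 = 0.75
```

Values closer to 1 indicate, respectively, less ambiguity, more complete functional coverage, and a larger fraction of validated requirements.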

10.3.4. Bang Metrics:

Like the function point metric, the bang metric, developed by DeMarco, can be used to develop an indication of the size of the software to be implemented as a consequence of the analysis model. The bang metric is "an implementation-independent indication of system size." To compute it, the software engineer must first evaluate a set of primitives, which are determined by evaluating the analysis model. These primitives are as follows:
 Functional primitives (FuP). The number of transformations (bubbles) that appear at the
lowest level of a data flow diagram.
 Data elements (DE). The number of attributes of a data object; data elements are not
composite data and appear within the data dictionary.
 Objects (OB). The number of data objects.
 Relationships (RE). The number of connections between data objects.
 States (ST). The number of user observable states in the state transition diagram.
 Transitions (TR). The number of state transitions in the state transition diagram.

In addition to these six primitives, additional counts are determined for


 Modified manual function primitives (FuPM): Functions that lie outside the system
boundary but must be modified to accommodate the new system.
 Input data elements (DEI): Those data elements that are input to the system.
 Output data elements. (DEO): Those data elements that are output from the system.
 Retained data elements. (DER): Those data elements that are retained (stored) by the
system.
 Data tokens (TCi): The data tokens (data items that are not subdivided within a functional
primitive) that exist at the boundary of the ith functional primitive (evaluated for each
primitive).
 Relationship connections (REi): The relationships that connect the ith object in the data
model to other objects.

10.4 Metrics for Design Model:


One can determine metrics for various aspects of design quality and use them to guide the
manner in which the design evolves. In practice, the design of complex software-based systems
often proceeds with virtually no measurement, yet design without measurement is an
unacceptable alternative. Architectural design metrics focus on
characteristics of the program architecture with an emphasis on the architectural structure and the
effectiveness of modules. These metrics are black box in the sense that they do not require any
knowledge of the inner workings of a particular software component. Card and Glass define three
software design complexity measures: structural complexity, data complexity, and system
complexity.
1. Structural Complexity of a module i is defined in the following manner:

S(i) = [fout(i)]^2
Where,
fout(i) is the fan-out of module i.

2. Data Complexity provides an indication of the complexity in the internal interface for a
module i and is defined as

D(i) = v(i)/[ fout(i) +1]


Where,
v(i) is the number of input and output variables that are passed to and from module i.
3. System Complexity is defined as the sum of structural and data complexity, specified as

C(i) = S(i) + D(i)


As each of these complexity values increases, the overall architectural complexity of the system
also increases. This leads to a greater likelihood that integration and testing effort will also
increase.
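A minimal sketch of these three measures (identifier names are illustrative, not standardized):

```python
# Sketch of the Card and Glass design complexity measures defined above.

def structural_complexity(fan_out):
    """S(i) = fout(i)^2"""
    return fan_out ** 2

def data_complexity(num_vars, fan_out):
    """D(i) = v(i) / (fout(i) + 1), where v(i) counts input/output variables."""
    return num_vars / (fan_out + 1)

def system_complexity(fan_out, num_vars):
    """C(i) = S(i) + D(i)"""
    return structural_complexity(fan_out) + data_complexity(num_vars, fan_out)

# A module with fan-out 3 that passes 8 variables: S = 9, D = 2.0, C = 11.0
print(system_complexity(3, 8))  # 11.0
```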

An earlier high-level architectural design metric proposed by Henry and Kafura also makes use
of the fan-in and fan-out. The authors define a complexity metric (applicable to call and return
architectures) of the form

HKM = length(i) x [fin(i) + fout(i)]^2

Where,
length(i) is the number of programming language statements in a module i and fin(i) is the
fan-in of a module i. Henry and Kafura extend the definitions of fan-in and fan-out.
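The Henry-Kafura metric can be sketched the same way (a hypothetical helper, not a library function):

```python
# Sketch of the Henry-Kafura complexity metric defined above.

def henry_kafura(length, fan_in, fan_out):
    """HKM = length(i) * (fin(i) + fout(i))^2"""
    return length * (fan_in + fan_out) ** 2

# A 100-statement module with fan-in 2 and fan-out 3:
print(henry_kafura(100, 2, 3))  # 100 * (2 + 3)^2 = 2500
```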

Component-level design metrics focus on internal characteristics of a software component and


include measures of the "three Cs": module cohesion, coupling, and complexity. These measures
can help a software engineer to judge the quality of a component-level design. The metrics
discussed are glass box in the sense that they require knowledge of the inner working of the
module under consideration. Component-level design metrics may be applied once a procedural
design has been developed.

Cohesion metrics: Bieman and Ott define a collection of metrics that provide an indication of
the cohesiveness of a module. The metrics are defined in terms of five concepts and measures:
 Data slice. Stated simply, a data slice is a backward walk through a module
that looks for data values that affect the module location at which the walk
began. It should be noted that both program slices (which focus on statements
and conditions) and data slices can be defined.
 Data tokens. The variables defined for a module can be defined as data
tokens for the module.
 Glue tokens. This set of data tokens lies on one or more data slices.
 Superglue tokens. These data tokens are common to every data slice in a module.
 Stickiness. The relative stickiness of a glue token is directly proportional to the number of
data slices that it binds.
Coupling metrics: Module coupling provides an indication of the "connectedness" of a module
to other modules, global data, and the outside environment. Coupling was discussed in
qualitative terms.
Complexity Metrics: A variety of software metrics can be computed to determine the
complexity of program control flow. Many of these are based on the flow graph. A graph is a
representation composed of nodes and links (also called edges); when the links have a direction,
such graphs are called directed graphs. McCabe and Watson identify a number of important uses for complexity metrics:
Complexity metrics can be used to predict critical information about reliability and
maintainability of software systems from automatic analysis of source code [or procedural
design information]. Complexity metrics also provide feedback during the software project to
help control the design activity. The most widely used complexity metric for computer software
is cyclomatic complexity and was originally developed by Thomas McCabe. The McCabe
metric provides a quantitative measure of testing difficulty and an indication of ultimate
reliability. Cyclomatic complexity may also be used to provide a quantitative indication of
maximum module size. Thus, the quality of software design plays an important role in
determining the overall quality of the software.
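Although the formula is not stated here, McCabe's cyclomatic complexity is conventionally computed from the flow graph as V(G) = E - N + 2 for a connected graph with E edges and N nodes; a small sketch:

```python
# Sketch: cyclomatic complexity V(G) = E - N + 2 of a connected flow graph.

def cyclomatic_complexity(edges, num_nodes):
    """edges: list of (from_node, to_node) pairs of the flow graph."""
    return len(edges) - num_nodes + 2

# Flow graph of a simple if/else: nodes 1..4, four edges.
# V(G) = 4 - 4 + 2 = 2 independent paths, hence two basis-path test cases.
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
print(cyclomatic_complexity(edges, 4))  # 2
```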

10.5 Metrics for Source Code


Halstead proposed the first analytic laws for Source Code by using a set of primitive measures,
which can be derived once the design phase is complete and code is generated. These measures
are listed below.
n1 = number of distinct operators in a program
n2 = number of distinct operands in a program
N1 = total number of operators
N2 = total number of operands.
By using these measures, Halstead developed an expression for overall program length, program
volume, program difficulty, development effort, and so on.
1. Program length (N) can be calculated by using the following equation.
N = n1 log2 n1 + n2 log2 n2.
2. Program volume (V) can be calculated by using the following equation.
V = N log2 (n1+n2).
Note that program volume depends on the programming language used and represents the volume
of information (in bits) required to specify a program. Volume ratio (L) can be calculated by
using the following equation.
L = (Volume of the most compact form of a program) / (Volume of the actual program)
Where, the value of L must be less than 1. Volume ratio can also be calculated by using the
following equation.
L = (2/n1) * (n2/N2).
3. Program Difficulty Level (D) and Effort (E) can be calculated by using the following
equations.
D = (n1/2)*(N2/n2).
E = D * V.
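The Halstead equations above can be collected into one sketch; the parameter names mirror the primitive measures, and the code follows the equations exactly as given here.

```python
import math

# Sketch of the Halstead measures, following the equations given above.

def halstead_metrics(n1, n2, N1, N2):
    # N1 is part of the primitive set but unused by these particular equations.
    N = n1 * math.log2(n1) + n2 * math.log2(n2)  # program length
    V = N * math.log2(n1 + n2)                   # program volume (in bits)
    L = (2 / n1) * (n2 / N2)                     # volume ratio (must be < 1)
    D = (n1 / 2) * (N2 / n2)                     # program difficulty level
    E = D * V                                    # development effort
    return {"N": N, "V": V, "L": L, "D": D, "E": E}

# A program with 10 distinct operators, 20 distinct operands,
# 50 operator occurrences and 60 operand occurrences:
m = halstead_metrics(n1=10, n2=20, N1=50, N2=60)
print(round(m["D"], 2))  # 15.0
```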
10.6 Metrics for Testing
The majority of metrics used for testing focus on the testing process rather than the technical
characteristics of the tests. Generally, testers use metrics for analysis, design, and coding to guide
them in the design and execution of test cases.
Function point can be effectively used to estimate testing effort. Various characteristics like errors
discovered, number of test cases needed, testing effort, and so on can be determined by estimating
the number of function points in the current project and comparing them with any previous
project.
Metrics used for architectural design can be used to indicate how integration testing can be carried
out. In addition, Cyclomatic complexity can be used effectively as a metric in the basis-path
testing to determine the number of test cases needed.
For developing metrics for object-oriented (OO) testing, different types of design metrics that
have a direct impact on the testability of object-oriented system are considered. While developing
metrics for OO testing, inheritance and encapsulation are also considered. A set of metrics
proposed for OO testing is listed below.

 Lack of cohesion in methods (LCOM): This indicates the number of states to be
tested. LCOM indicates the number of methods that access one or more of the same attributes.
The value of LCOM is 0 if no methods access the same attributes. As the value of
LCOM increases, more states need to be tested.
 Percent public and protected (PAP): This shows the number of class attributes, which
are public or protected. Probability of adverse effects among classes increases with
increase in value of PAP as public and protected attributes lead to potentially higher
coupling.
 Public access to data members (PAD): This shows the number of classes that can
access attributes of another class. Adverse effects among classes increase as the value of
PAD increases.
 Number of root classes (NOR): This specifies the number of different class
hierarchies, which are described in the design model. Testing effort increases with
increase in NOR.
 Fan-in (FIN): This indicates multiple inheritance. If the value of FIN is greater than 1, it
indicates that the class inherits its attributes and operations from many root classes. Note
that this situation (where FIN > 1) should be avoided.
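The FIN metric, for instance, can be read directly off Python's class machinery; the classes below are hypothetical examples.

```python
# Illustrative: FIN counts the direct parents of a class; FIN > 1 signals
# multiple inheritance, which the text says should be avoided.

class Engine: pass
class Radio: pass
class Car(Engine, Radio):  # inherits from two root classes
    pass

def fan_in(cls):
    """Number of direct base classes of cls."""
    return len(cls.__bases__)

print(fan_in(Car))     # 2 -> FIN > 1, flagged for extra testing effort
print(fan_in(Engine))  # 1 (the implicit base class, object)
```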
10.7 Metrics for Maintenance

For the maintenance activities, metrics have been designed explicitly. The IEEE has proposed
the Software Maturity Index (SMI), which provides an indication of the stability of a software
product. Once all the parameters are known, SMI can be calculated by using the following equation.
SMI = [MT - (Fa + Fc + Fd)] / MT
Where,
MT = number of modules in the current release
Fc = number of modules that have been changed in the current release
Fa = number of modules that have been added in the current release
Fd = number of modules that have been deleted from the current release

Note that a product begins to stabilize as SMI approaches 1.0. SMI can also be used as a metric for
planning software maintenance activities by developing empirical models in order to estimate the
effort required for maintenance.
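A sketch of the SMI calculation (parameter names are illustrative):

```python
# Sketch of the IEEE Software Maturity Index defined above.

def software_maturity_index(total, changed, added, deleted):
    """SMI = [MT - (modules added + changed + deleted)] / MT"""
    return (total - (added + changed + deleted)) / total

# A release with 100 modules: 5 changed, 3 added, 2 deleted since last release.
print(software_maturity_index(total=100, changed=5, added=3, deleted=2))  # 0.9
```

A value of 0.9 suggests only 10% of the modules were touched in this release, so the product is approaching stability.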

Review Questions:
1. Define Quality and Software Quality.
2. Explain Metrics for Analysis model in detail.
3. Explain metrics for the design model with expressions.
4. Write a short note on Metrics for 1. Source Code. 2. Maintenance
5. Explain Metrics for Testing in detail.
C h a p t e r 11
Metrics for Process and Products
11.1 Software Measurement

Software measurement is a quantified attribute of a characteristic of a software product or the


software process. It is a discipline within software engineering. The content of software
measurement is defined and governed by the ISO/IEC 15939 standard (software measurement
process).
The objectives of measurement should be established before data collection begins. Each
technical metric should be defined in an unambiguous manner. To assess the quality of the
engineered product or system and to better understand the models that are created, some measures
are used. These measures are collected throughout the software development life cycle with an
intention to improve the software process on a continuous basis.
Measurement helps in estimation, quality control, productivity assessment and project control
throughout a software project. Also, measurement is used by software engineers to gain insight
into the design and development of the work products. In addition, measurement assists in
strategic decision-making as a project proceeds.
Software measurements are of two categories, namely, direct measures and indirect measures.
Direct measures include process attributes like cost and effort applied, and product attributes like
lines of code produced, execution speed, and defects reported. Indirect measures include product
attributes like functionality, quality, complexity, reliability, maintainability, and many more.
Generally, software measurement is considered as a management tool which if conducted in an
effective manner, helps the project manager and the entire software team to take decisions that
lead to successful completion of the project. Measurement process is characterized by a set of five
activities, which are listed below.

 Formulation: This performs measurement and develops appropriate metric for software
under consideration.
 Collection: This collects data to derive the formulated metrics.
 Analysis: This calculates metrics and the use of mathematical tools.
 Interpretation: This analyzes the metrics to attain insight into the quality of representation.
 Feedback: This communicates recommendation derived from product metrics to the
software team.

Note that collection and analysis activities drive the measurement process. In order to perform
these activities effectively, it is recommended to automate data collection and analysis. One can
use statistical techniques to interrelate external quality features and internal product attributes.
Software measurement is an engineering process meant to aid in assessing these areas:

1. Productivity, 2. Code Quality, 3. Code Complexity, 4. Software Risk, 5. Technical Debt,
6. Software Size, 7. Compliance with Standards and Regulations.
If your developers are not producing high-quality code or not meeting architecture standards,
then numerous problems are likely to surface during or after application implementation. These
include increased IT costs, additional maintenance efforts, and reduced security within a
complex infrastructure. Software measurement and metrics provide an accurate, objective
approach to evaluating these key factors.

11.1.1 What Are Software Measurement Function Points?


Several forms of analysis may be used to assess an application; however, software measurement
function points provide a repeatable evaluation method. A function point represents tasks to be
accomplished by an application. They are based on identified end-user requirements and allow
organizations to obtain a benchmarking score for continuously monitoring the items above. Each
application must accomplish a set of function points, which can be evaluated automatically. This
will help in achieving the following:
 Vulnerability Detection
 Quality Assessments
 Productivity Enhancements
 Compliance Management
 Development Practice Improvements
Software measurement function points find vulnerabilities in a complex infrastructure where
problems often exist in one or more tiers. This benchmarking measurement can also be used to
ensure developers are meeting defined architecture standards for each completed application.
One method of software measurement is metrics that are analyzed against the code itself.

11.2 Metrics for software quality


Over the last several years, improvements in development and testing have provided an
opportunity for organizations to apply new metrics that can lead to genuine transformation. The
most common of these proven concepts is agile development practices. When executed well,
agile methods can enable a team to quickly deliver high-quality software. Without these types
of metrics, organizations will simply attempt their transformation blindly, with limited capacity
to show results.

1. Collect and Organize Test Cases: Split your test cases into test suites. Set project
milestones and assign tasks to individual testers.

2. Track Execution and Test Results: Track the number of completed, failed, and rescheduled
tests. Keep a complete history of all results.

3. Measure Progress and Success Rate: Project dashboards, clear reports, and email
notifications tell you where you are in the test cycle.

4. Take Action in the Right Areas: Reports on all levels: from single test runs, milestones, to
project reports guide your decisions.
11.2.1 The Three types of metrics to assure software quality
The three types of metrics you should collect as part of your quality assurance process
are: source code metrics, development metrics, and testing metrics.
1. Source code metrics
These are measurements of the source code from which software is constructed. Source code is
the fundamental building block of which software is made, so measuring it is a way of
making sure that the code is of high caliber. Even the best source code, when examined closely,
might reveal a few areas that can be optimized for even better performance.
One must ensure that an appropriate amount of code has been generated by measuring source
code quality and the number of lines of code. Another thing to track is how compliant each
line of code is with the programming language's standard usage rules. It is equally
important to track the percentage of comments within the code, which indicates how much
maintenance effort is required: the fewer the comments, the more problems arise when deciding
on changes or upgrades to the program. Code duplication must be avoided and unit test coverage
must be tracked, which together indicate how smoothly the product is going to run.
2. Development metrics
These metrics measure the software development process itself. Gather development metrics
to look for ways to make operations more efficient and to reduce the incidence of software
errors. Measuring the number of defects within the code and the time taken to fix them tells a lot
about the development process itself. One must tally the number of defects that appear in the
code and also note the time it takes to fix them. If any defects have to be fixed multiple
times, there might be a misunderstanding of requirements or a skills gap. Such gaps are
important to address as soon as possible. Defining the root cause and
implementing corrective measures enables continuous improvement.
3. Testing metrics
These metrics help to evaluate whether the product is functional and worth using. There are
two major testing metrics. 1. Test coverage: this collects data about which parts of the
software program are executed when a test runs. 2. Defect removal efficiency: this is a
test of the testing itself; it checks the success rate for spotting and
removing defects. The more you measure, the more you know about your product, and the more
likely you are to be able to improve it. Automating the measurement process is the best way to
measure software quality. It is not the easiest thing, or the cheapest, but it will save significant
cost down the line.

Review Questions:

1. What is software measurement? Explain the areas it assesses.


2. What are software measurement function points?
3. Explain metrics for software quality in detail.
4. What are source code metrics, development metrics and testing metrics?
C h a p t e r 12
Risk Management: An Overview

12.1 Risk Management:


There are two types of events: 1. negative events, which are classified as risks, and 2. positive
events, which are classified as opportunities.
Definition: Risk management is the identification, assessment, and prioritization
of risks (defined in ISO 31000 as the effect of uncertainty on objectives) followed by
coordinated and economical application of resources to minimize, monitor, and control the
probability or impact of unfortunate events or to maximize the realization of opportunities.
Risk management is the process of identifying, assessing and controlling threats to an
organization's capital and earnings. Risks can come from various sources including uncertainty
in financial markets, threats from project failures, legal liabilities, credit risk, accidents, natural
causes and disasters, deliberate attack from an adversary, or events of uncertain or
unpredictable root cause. A risk management plan includes processes for identifying and
controlling threats to an organization's digital assets.
Risk management standards have been developed by several organizations. These standards are
designed to help organizations identify specific threats, assess unique vulnerabilities to determine
their risk, identify ways to reduce these risks and then implement risk reduction efforts according
to organizational strategy. The ISO 31000 is designed to "increase the likelihood of achieving
objectives, improve the identification of opportunities and threats, and effectively allocate and
use resources for risk treatment".
The ISO recommended the following principles which should be a part of the overall risk
management process:
 The process should create value for the organization.

 It should be an integral part of the overall organizational process.


 It should factor into the overall decision-making process.
 It must explicitly address any uncertainty.
 It should be systematic and structured.
 It should be based on the best available information.
 It should be tailored to the project.
 It must take into account human factors, including potential errors.
 It should be transparent and all-inclusive.
 It should be adaptable to change.
 It should be continuously monitored and improved upon.
Thus, the ultimate goal for these standards is to establish common frameworks and processes to
effectively implement risk management strategies.
The Risk Management Process involves two major phases: 1. Risk Assessment and 2. Risk Control.
Risk Management
 Risk Assessment: Identification, Analysis, Prioritization
 Risk Control: Planning, Mitigation, Monitoring

12.1.1 Risk management strategies and processes:


 Risk Identification. The company identifies and defines potential risks that may
negatively influence a specific company process or project.
 Risk Analysis. Once specific types of risk are identified, the company then determines the
odds of it occurring, as well as its consequences. The goal of the analysis is to further
understand each specific instance of risk, and how it could influence the company's projects
and objectives.
 Risk Assessment and evaluation. The risk is then further evaluated after determining the
risk's overall likelihood of occurrence combined with its overall consequence. The
company can then make decisions on whether the risk is acceptable and whether the
company is willing to take it on based on its risk appetite.
 Risk Mitigation. During this step, companies assess their highest-ranked risks and develop
a plan to alleviate them using specific risk controls. These plans include risk mitigation
processes, risk prevention tactics and contingency plans in the event the risk comes to
fruition.
 Risk Monitoring. Part of the mitigation plan includes following up on both the risks and
the overall plan to continuously monitor and track new and existing risks. The overall risk
management process should also be reviewed and updated accordingly.

After the company's specific risks are identified and the risk management process has been
implemented, there are several different strategies companies can take in regard to different types
of risk.
 Risk avoidance. While the complete elimination of all risk is rarely possible, a risk avoidance

strategy is designed to deflect as many threats as possible in order to avoid the costly and
disruptive consequences of a damaging event.
 Risk reduction. Companies are sometimes able to reduce the amount of effect certain risks can
have on company processes. This is achieved by adjusting certain aspects of an overall project
plan or company process, or by reducing its scope.
 Risk sharing. Sometimes, the consequences of a risk are shared, or distributed among several of
the project's participants or business departments. The risk could also be shared with a third
party, such as a vendor or business partner.
 Risk retaining. Sometimes, companies decide a risk is worth it from a business standpoint,
and decide to retain the risk and deal with any potential fallout. Companies will often retain a
certain level of risk if a project's anticipated profit is greater than the cost of its potential risk.

12.1.2. Kinds of Risks:


1. Project risks:
It threatens the project plan. Here one can identify potential budget, schedule, organizational
recruitment, resource, customer and requirement problems and their impact on the software
project.
2. Technical risks:
It threatens the quality and timeliness of the software to be produced.
 It identifies potential design, implementation, interface, verification, and
maintenance problems.
 Specification ambiguity, technical uncertainty, technical obsolescence, and "leading-
edge" technology are risk factors.
 Technical risks occur because the problem is harder to solve than it was thought to be.
3. Business risks:
It threatens the viability of the software to be built.
Following are the top five business risks:
(1) Building an excellent product or system that no one really wants is called market risk.
(2) Building a product that no longer fits into the overall business strategy of the company is
called strategic risk.
(3) Building a product that the sales force doesn't understand how to sell is called sales risk.
(4) Losing the support of senior management due to a change in focus or a change in
people is called management risk.
(5) Losing budgetary or personnel commitment is called budget risk.
4. Known risks:
These are uncovered after careful evaluation of:
 the project plan,
 the business and technical environment in which the project is being developed,
 and other reliable information sources (e.g., unrealistic delivery date, lack of
documented requirements or software scope, poor development environment).
5. Predictable risks:
These are extrapolated from past project experience, such as:
 Staff turnover
 Poor communication with the customer
 Dilution of staff effort as ongoing maintenance requests are serviced.

6. Unpredictable risks: These can occur but are difficult to identify in advance.

Other Risks Involved:


1. People risks are associated with the availability, skill level, and retention of the people on
the development team.
2. Size risks are associated with the magnitude of the product and the product team. Larger
teams are harder to coordinate if team members do not maintain their discipline.
3. Process risks are related to whether the team uses an appropriate software development
process and to whether the team members actually follow the process.

4. Technology risks are derived from the software or hardware technologies that are being
used as part of the system being developed. Using new or emerging or complex technology
increases the overall risk.
5. Tools risks are similar to technology risks. They relate to the use, availability, and
reliability of support software used by the development team, such as development
environments and other Computer-Aided Software Engineering (CASE) tools.


6. Organizational and managerial risks are derived from the environment where the
software is being developed. Some examples are the financial stability of the company and
threats of company reorganization and the potential resultant loss of support by
management due to a change in focus or a change in people.
7. Requirements risks are derived from changes to the customer requirements, the process of
managing these requirements changes, and the ability of the customer to communicate
effectively with the team and to accurately convey the attributes of the desired product.
8. Estimation risks are derived from inaccuracies in estimating the resources and the time
required to build the product properly.
9. Sales and support risks involve the chances that the team builds a product that the sales
force does not understand how to sell or that is difficult to correct, adapt, or enhance.
12.2. What Is Software Risk And Software Risk Management?

Risk always involves two characteristics:


 Uncertainty- the risk may or may not happen; that is, there are no 100% probable risks
 Loss- if the risk becomes a reality, unwanted consequences or losses will occur.

Risk is an expectation of loss, a potential problem that may or may not occur in the future. It is
generally caused by a lack of information, control or time. A possibility of suffering loss
in the software development process is called a software risk. Loss can be anything: an increase in
production cost, development of poor-quality software, or not being able to complete the project on
time. Software risk encompasses the probability of occurrence for uncertain events and their
potential for loss within an organization.

Software risk exists because the future is uncertain and there are many known and unknown
things that cannot be incorporated in the project plan. Typically, software risk is viewed as a
combination of robustness, performance efficiency, security and transactional risk propagated
throughout the system. A software risk can be of two types (1) internal risks that are within the
control of the project manager and (2) external risks that are beyond the control of project
manager. Risk management is carried out to:
1. Identify the risk
2. Reduce the impact of risk
3. Reduce the probability or likelihood of risk
4. Risk monitoring

A project manager has to deal with risks arising from three possible cases:
1. Known knowns are software risks that are actual facts known to the team as well as to
the entire project. For example, not having enough developers can delay the
project delivery. Such risks are described and included in the Project Management Plan.
2. Known unknowns are risks that the project team is aware of, but whether such a risk
exists in the project or not is unknown. For example, if communication with the client is not of a
good level, then it is not possible to capture the requirements properly. This is a fact known
to the project team; however, whether the client has communicated all the information
properly or not is unknown to the project.
3. Unknown unknowns are those kinds of risks about which the organization has no idea.
Such risks are generally related to technology, such as working with technologies or tools
that you have no idea about, or work that suddenly exposes you to absolutely unknown
risks.
Software risk management is all about quantification of risk. This includes:
1. Giving a precise description of the risk event that can occur in the project
2. Defining the risk probability, which explains the chances of that risk occurring
3. Defining how much loss a particular risk can cause
4. Defining the liability potential of risks.
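The quantification steps above are often combined into a single risk exposure figure, RE = probability x loss; this standard formulation is assumed here, as it is not stated explicitly in the surrounding text.

```python
# Sketch: risk exposure as probability of occurrence times cost if it occurs
# (a standard formulation, assumed for illustration).

def risk_exposure(probability, loss):
    """RE = P x C: expected loss from a single risk."""
    return probability * loss

# A 30% chance of losing a key developer, at an estimated $20,000 to replace:
print(risk_exposure(0.30, 20_000))  # 6000.0
```

Summing the exposures of the identified risks gives a rough budget reserve for the project's risk mitigation plan.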

12.3. Reactive vs. Proactive Risk strategies:

Employees who understand the real difference between reactive, proactive, and predictive
risk management activities gain considerable benefit in generating good safety performance.

12.3.1 Reactive Risk Management:


Reactive risk management is often termed the lowest, that is, the most basic, form of risk
management. It is generally associated with aviation safety programs that exhibit:
 early implementation,
 a not very well developed safety program,
 a lack of safety culture.
It is also called underdeveloped risk management. Reactive risk management is extremely
important for new AND mature safety programs. Programs without strong reactive risk
management strategies are exposed to considerable risk. The essential elements of reactive risk
management are as follows:
 Mitigating safety events after hazard has occurred;

 Minimizing damage from critical safety situations;


 Acting quickly and efficiently in response to undesirable incidents; and
 High quality decision making in reaction to safety data (threats, risk, etc.).

Significance of Reactive Risk Management Strategies:


 In new SMS programs that do not have the requisite safety data to practice proactive or
predictive risk management;
 In response to safety events; and
 In dealing with threats that suddenly arise in the operating environment.

High quality reactive risk management is critical at all levels of SMS implementation. New
SMS programs in particular will deal with more safety events.
Reactive risk management behavior must be established early in the implementation, where it
will prove extraordinarily beneficial.
Cultivating quality reactive risk management requires the following:
• Quality risk management training for all employees;
• Strong bureaucracy regarding safety behavior, such as procedures, checklists and a list of
desired employee behavioral actions; and
• Good hazard and risk fluency for identifying and assessing safety items.

12.3.2 Proactive Risk Management:

Proactive risk management is often termed the highest form of risk management. Proactive risk
management activities generally don't happen until an SMS program is fairly mature.
Basically, the goals of proactive risk management are to:
• Identify behaviors that lead to hazard occurrence, and stop them before hazards happen;
• Identify root causes before they lead to hazard occurrence; and
• Understand the safety "inputs" of the program that drive safe performance.
Proactive risk management generally requires the following:
• A great deal of safety data;
• The ability to monitor complex safety metrics; and
• A mature safety culture.

Proactive risk management involves specific activities that are entirely different from those of
reactive risk management. Both reactive and proactive risk management complement each other,
and each strategy is useful in different situations. Proactive risk management strategies are
best used in the following situations:
• Identifying how best to de-escalate safety issues (after hazard occurrence) before they lead
to undesirable consequences;
• Understanding the inputs of the program, as well as the underlying behaviors, attitudes and
actions that directly correlate with safety performance; and
• Analyzing the relationship between certain root causes and hazard occurrence.
Risk must be managed proactively; this is the responsibility of front-line employees as well as
safety management. Each sector of an organization has its own proactive behaviors that build a
solid, proactive culture in an aviation SMS program.

12.3.3 Predictive Risk Management


Predictive risk management is often confused with proactive risk management. While there can be
overlap between proactive and predictive management strategies, they are for the most part
distinct.
Predictive risk management attempts to:
 Identify possible risks in a situation based on given circumstances;
 Identify new threats in hypothetical scenarios;
 Anticipate needed risk controls.
Predictive risk management is largely possible due to the use of lagging indicators, or past
historical performance, which are used to predict possible future performances. This is the
exact opposite of proactive risk management, which uses aviation leading indicators to directly
assess underlying causes and precursors to current performance.

When to Use Predictive Risk Management Strategies

Predictive risk management becomes extremely useful in the following activities that are
common to aviation safety programs:
 Management of change;
 Risk analysis in hypothetical scenarios;
 Forecasting performance data (such as to stakeholders).
It's important to understand that predictive risk management is useful for creating expected
"ranges" of safety performance and a framework for future risk exposure. Risk management
comprises the following processes:
1. Software Risk Identification
2. Software Risk Analysis
3. Software Risk Planning
4. Software Risk Monitoring
These Processes are defined below.

12.4 Software Risk Identification

Definition: Risk identification is the process of determining risks that could potentially prevent
the program, enterprise, or investment from achieving its objectives. It includes documenting and
communicating the concern.
Risks are about events that, when triggered, cause problems or benefits. Hence, risk
identification can start with the source of the problem or benefit, or with the problem itself.
Risk identification is the very first, critical step of the risk management process.
It is important to first study the problems faced by previous projects, and to study the
project plan properly, checking all areas that are vulnerable to some type of risk. The best
way to analyze a project plan is to convert it to a flowchart and examine all essential areas.
It is also important to conduct a few brainstorming sessions to identify the known unknowns
that can affect the project. Any decision related to technical, operational, political, legal,
social, internal or external factors should be evaluated properly.

Figure: Software Risk Identification

An effective risk identification process should include the following steps:

1. Creating a systematic process - The risk identification process should begin with project
objectives and success factors.
2. Gathering information from various sources - Reliable and high quality information is
essential for effective risk management.
3. Applying risk identification tools and techniques - The choice of the best suitable
techniques will depend on the types of risks and activities, as well as organizational
maturity.
4. Documenting the risks - Identified risks should be documented in a risk register and a risk
breakdown structure, along with their causes and consequences.
5. Documenting the risk identification process - To improve and ease the risk identification
process for future projects, the approach, participants, and scope of the process should be
recorded.
6. Assessing the process' effectiveness - To improve it for future use, the effectiveness of the
chosen process should be critically assessed after the project is completed.

12.4.1 Identifying Risks in the Systems Engineering Program:


There are multiple sources of risk. For risk identification, the project team should review the
program scope, cost estimates, schedule (to include evaluation of the critical path), technical
maturity, key performance parameters, performance challenges, stakeholder expectations vs.
current plan, external and internal dependencies, implementation challenges, integration,
interoperability, supportability, supply-chain vulnerabilities, ability to handle threats, cost
deviations, test event expectations, safety, security, and more. In addition, historical data from
similar projects, stakeholder interviews, and risk lists provide valuable insight into areas for
consideration of risk.
Comprehensive databases of events on past projects are very helpful during risk identification.
risks. Project team participation and face-to-face interaction are needed to encourage open
communication and trust, which are essential to effective risk identification. The risk
identification process needs to be repeated as these sources of information change and new
information becomes available.
There are many ways to approach risk identification. Two possible approaches are:
1. Identifying the undesirable events, or things that can go wrong, and then identifying the
potential impacts of each such event on the project.
2. Identifying the functions that the project must perform to be considered successful, and
then identifying all the possible modes by which those functions might fail.

12.4.2 Software Risk Analysis


Software risk analysis is a very important aspect of risk management. In this phase each risk is
identified and then categorized. After categorization, the level, likelihood (as a percentage)
and impact of the risk are analyzed. Likelihood is estimated as a percentage after examining
the chances of the risk occurring due to various technical conditions. These technical
conditions can be:

1. Complexity of the technology


2. Technical knowledge possessed by the testing team
3. Conflicts within the team
4. Teams being distributed over a large geographical area
5. Usage of poor quality testing tools
By impact we mean the consequence of a risk should it occur. It is important to know the impact
in order to understand how the business can be affected:

1. What will be the loss to the customer


2. How would the business suffer
3. Loss of reputation or harm to society
4. Monetary losses
5. Legal actions against the company
6. Cancellation of business license

Level of risk is identified with the help of:


Qualitative Risk Analysis: Here you define risk as:
 High
 Low
 Medium

Quantitative Risk Analysis: It can be used for software risk analysis, but is often considered
less appropriate because expressing the risk level as a percentage alone does not give a very
clear picture.
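The qualitative levels above can be derived from a risk's likelihood and impact with a simple
rule. The thresholds below are illustrative assumptions (the text does not prescribe them),
using the four-level impact scheme (1 = catastrophic … 4 = negligible) that appears later in
the chapter:

```python
def risk_level(probability_pct, impact):
    """
    Map probability (percent) and impact category (1 = catastrophic,
    2 = critical, 3 = marginal, 4 = negligible) to a qualitative level.
    Thresholds are illustrative, not standardized.
    """
    high_prob = probability_pct >= 60
    severe = impact <= 2            # catastrophic or critical
    if high_prob and severe:
        return "High"
    if high_prob or severe:
        return "Medium"
    return "Low"

print(risk_level(80, 2))  # High
print(risk_level(70, 3))  # Medium
print(risk_level(30, 3))  # Low
```

A real project would tune these thresholds to its own risk tolerance; the point is only that a
qualitative level is a lookup over the two analyzed attributes.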

SWOT Analysis
A useful tool for systematic risk identification is SWOT analysis. It consists of four
elements:
 Strengths - Internal organizational characteristics that can help to achieve project
objectives.
 Weaknesses - Internal organizational characteristics that can prevent a project from
achieving its objectives.
 Opportunities - External conditions that can help to achieve project objectives.
 Threats - External conditions that can prevent a project from achieving its objectives.

12.4.3 Software Risk Monitoring


Software risk monitoring is integrated into project activities, and regular checks are conducted
on top risks. Software risk monitoring comprises:
• Tracking risk plans for any major changes in the actual plan, attributes, etc.
• Preparing status reports for project management.
• Reviewing risks; risks whose impact or likelihood has fallen to the lowest possible level
should be closed.
• Regularly searching for new risks.
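The periodic review described above can be sketched as a single monitoring pass. The closure
threshold, the tuple layout and the sample risks are illustrative assumptions, not values from
the text:

```python
def review_risks(register, closure_threshold=0.05):
    """
    One monitoring pass (sketch): close risks whose probability has
    dropped to the lowest level; keep the rest under active tracking.
    The threshold value is an assumption for illustration.
    """
    active, closed = [], []
    for name, probability in register:
        (closed if probability <= closure_threshold else active).append(name)
    return active, closed

# Sample register: (risk name, current probability)
active, closed = review_risks([("turnover", 0.70), ("crash", 0.02)])
print(active, closed)  # ['turnover'] ['crash']
```

Newly discovered risks would simply be appended to the register before the next pass, then
reprioritized as described in the risk projection section.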
12.5 Risk Projection

Risk projection, also called risk estimation, attempts to rate each risk in two ways:
1. The likelihood or probability that the risk is real; and
2. The consequences of the problems associated with the risk.
The project planner, along with other managers and technical staff, performs four risk
projection activities:
(1) Establish a scale that reflects the perceived likelihood of a risk,
(2) Delineate the consequences of the risk,
(3) Estimate the impact of the risk on the project and the product, and
(4) Note the overall accuracy of the risk projection.

12.5.1 Developing a Risk Table


A risk table provides a project manager with a simple technique for risk projection. A project
team begins by listing all risks in the first column of the table; this can be accomplished with
the help of risk item checklists. Each risk is categorized in the second column. The probability
of occurrence of each risk is entered in the next column; the probability value for each risk
can be estimated by team members individually.
Next, the impact of each risk is assessed. Each of the four risk components (1. performance,
2. support, 3. cost, 4. schedule) is assessed and an impact category is determined. The four
values are then averaged to determine an overall impact value.
First-Order Risk Prioritization:
Once the first four columns of the risk table have been completed, the table is sorted by
probability and by impact. High-probability, high-impact risks percolate to the top of the table,
and low-probability risks drop to the bottom. This accomplishes first-order risk prioritization.
Second-Order Prioritization:
The project manager studies the resultant sorted table and defines a cutoff line. The cutoff
line (drawn horizontally at some point in the table) implies that only risks lying above the
line will be given further attention. Risks that fall below the line are re-evaluated,
accomplishing second-order prioritization. Risk impact and probability have a distinct influence
on management concern, as the following explanation shows:
A risk factor that has a high impact but a very low probability of occurrence should not absorb a
significant amount of management time. However, high-impact risks with moderate to high
probability and low-impact risks with high probability should be carried forward into the risk
analysis steps.
All risks that lie above the cutoff line must be managed. The column labeled RMMM contains a
pointer into a Risk Mitigation, Monitoring and Management Plan or alternatively, a collection of
risk information sheets developed for all risks that lie above the cutoff.
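The first- and second-order prioritization steps above amount to a sort followed by a cutoff
filter. The sample risks, category names and cutoff value below are illustrative assumptions:

```python
# Each risk: (name, category, probability, impact) where impact uses the
# four-level scheme 1 = catastrophic ... 4 = negligible.
risks = [
    ("Staff turnover", "staff", 0.70, 2),
    ("Computer crash", "technology", 0.10, 1),
    ("Tool underperformance", "technology", 0.40, 3),
    ("Size estimate low", "product size", 0.60, 2),
]

# First-order prioritization: sort by probability (descending), then by
# impact (ascending, since category 1 is the most severe).
table = sorted(risks, key=lambda r: (-r[2], r[3]))

# Cutoff line: only risks above it receive further attention (RMMM).
CUTOFF = 0.50                     # illustrative threshold
managed = [r for r in table if r[2] > CUTOFF]
print([name for name, *_ in managed])  # ['Staff turnover', 'Size estimate low']
```

Implementing the table as a spreadsheet or small script like this makes re-sorting after each
monitoring pass trivial, which is exactly why the text recommends a spreadsheet model.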
Risk probability can be determined by making individual estimates and then developing a single
consensus value. Although that approach is workable, more sophisticated techniques for
determining risk probability have been developed. Risk drivers can be assessed on a qualitative
probability scale with the following values: impossible, improbable, probable, and frequent. A
mathematical probability can then be associated with each qualitative value (e.g., a probability
of 0.7 to 1.0 implies a highly probable risk).

A risk that is 100 percent probable is a constraint on the software project. The risk table
should be implemented as a spreadsheet model; this enables easy manipulation and sorting of the
entries. A weighted average can be used if one risk component has more significance for the
project.

12.5.2 Three factors determine the consequences if a risk occurs:


1. Nature of the risk - the problems that are likely if it occurs, for example a poorly defined
external interface to customer hardware (a technical risk) will preclude early design and
testing and will likely lead to system integration problems late in a project.
2. Scope of a risk - combines the severity with its overall distribution (how much of
the project will be affected or how many customers are harmed?).
3. Timing of a risk - when and how long the impact will be felt.

Steps recommended to determine the overall consequences of a risk:


1. Determine the average probability of occurrence value for each risk component.
2. Using Figure 1, determine the impact for each component based on the criteria shown.
3. Complete the risk table and analyze the results as described in the preceding sections.
Overall risk exposure, RE, is determined using:

RE = P x C

Where,
P is the probability of occurrence for a risk
C is the cost to the project should the risk occur.
Example
Assume the software team defines a project risk in the following manner:
Risk Identification.
 Only 70 percent of the software components scheduled for reuse will be integrated into the
application.
 The remaining functionality will have to be custom developed.
Risk Probability. 80% (likely).

Risk Impact.
 60 reusable software components were planned.
 If only 70 percent can be used, 18 components would have to be developed from scratch (in
addition to other custom software that has been scheduled for development).
 Since the average component is 100 LOC and local data indicate that the software
engineering cost for each LOC is $14.00, the overall cost (impact) to develop the
components would be 18 x 100 x 14 = $25,200.

Risk Exposure. RE = 0.80 x $25,200 = $20,160 (≈ $20,200).
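The exposure computation from this example can be checked directly with the RE = P x C formula,
using the figures given in the text:

```python
def risk_exposure(probability, cost):
    """RE = P x C."""
    return probability * cost

# Figures from the example: 18 components rebuilt at 100 LOC each,
# $14.00 per LOC, with an 80% risk probability.
impact_cost = 18 * 100 * 14.00    # $25,200 overall impact
re = risk_exposure(0.80, impact_cost)
print(f"${re:,.0f}")              # $20,160
```

Summing RE over all risks above the cutoff line gives a rough estimate of the contingency
budget the project should carry.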

Risk Assessment
Risk projection establishes a set of triplets of the form:
[ri, li, xi]
Where,
ri is the risk,
li is the likelihood (probability) of the risk, and
xi is the impact of the risk.
During risk assessment:
 Examine the accuracy of the estimates that were made during risk projection.
 Attempt to rank the risks that have been uncovered.
 Begin thinking about ways to control and/or avert risks that are likely to occur.

A risk referent level must be defined:


 Performance, cost, support, and schedule represent risk referent levels.
 There is a level for performance degradation, cost overrun, support difficulty, or schedule
slippage (or any combination of the four) that will cause the project to be terminated.
 A risk referent level has a single point, called the referent point or break point, at which
the decision to proceed with the project or terminate it are equally weighted.
A referent level is rarely represented as a smooth line on a graph; in most cases it is a region
containing areas of uncertainty. Therefore, during risk assessment, perform the following steps:
• Define the risk referent levels for the project.
• Attempt to develop a relationship between each (ri, li, xi) and each of the referent
levels.
• Predict the set of referent points that define a region of termination, bounded by a
curve or areas of uncertainty.
• Try to predict how compound combinations of risks will affect a referent level.

12.6 Risk Refinement

Risk refinement is the process of decomposing risks into more detailed risks that will be easier
to mitigate, monitor, and manage. A risk may be stated quite generally during the early stages
of project planning; with time, more is learned about the project, and it may become possible to
refine the risk into a set of more detailed risks. The CTC (condition-transition-consequence)
format is a good representation for such detailed risks, stated in the following form:

Given that <condition> then there is concern that (possibly) <consequence>

Using the CTC format for the reuse risk described earlier, we can write:


Given that all reusable software components must conform to specific design standards and that
some do not conform, then there is concern that (possibly) only 70 percent of the planned
reusable modules may actually be integrated into the as-built system, resulting in the need to
custom engineer the remaining 30 percent of components.
This general condition can be refined in the following manner:
Sub condition 1. Certain reusable components were developed by a third party with no
knowledge of internal design standards.
Sub condition 2. The design standard for component interfaces has not been solidified and
may not conform to certain existing reusable components.
Sub condition 3. Certain reusable components have been implemented in a language that is
not supported on the target environment.

12.7 Risk Mitigation, Monitoring, And Management (RMMM):

12.7.1. Risk Mitigation:

Risk mitigation is a problem avoidance activity: the team develops strategies to reduce the
probability or the loss impact of a risk, and risk items are eliminated or otherwise resolved.
An effective strategy must consider three issues:
• Risk Avoidance
• Risk Protection
• Risk Leverage

1. Risk Avoidance: When the team is facing a potential loss, it can opt to eliminate the risk
entirely, for example by choosing not to develop a product, or a particularly risky feature, in
order to avoid the risk. A proactive approach to risk is one example of a risk avoidance
strategy.

2. Risk protection: The organization can buy insurance to cover any financial loss should the
risk become a reality. Alternately, a team can employ fault tolerance strategies, such as parallel
processors, to provide reliability insurance. Risk planning and risk mitigation actions often
come with an associated cost. The team must do a cost/benefit analysis to decide whether the
benefits accrued by the risk management steps outweigh the costs associated with
implementing them.

3. Risk Leverage:
Risk protection decisions can be informed by risk leverage calculations, which amount to a
cost/benefit analysis:

Risk Leverage (rl) = (risk exposure before reduction – risk exposure after reduction) / cost of
risk reduction

1. If the risk leverage value, rl, is ≤ 1, then the benefit of applying risk reduction is not
worth its cost.
2. If rl is only slightly > 1, the benefit is still questionable, because these computations
are based on probabilistic estimates rather than actual data. Therefore, rl is usually
multiplied by a risk discount factor ρ < 1. If ρ · rl > 1, the benefit of applying risk
reduction is considered worth its cost; if the discounted leveraged value is not high enough,
the reduction is not justified.
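The leverage calculation can be sketched as follows. The exposure figures, the reduction cost
and the discount factor ρ are illustrative assumptions, not values from the text:

```python
def risk_leverage(re_before, re_after, reduction_cost):
    """rl = (RE before reduction - RE after reduction) / cost of reduction."""
    return (re_before - re_after) / reduction_cost

# Assumed figures: mitigation cuts exposure from $20,000 to $4,000
# at a cost of $8,000.
rl = risk_leverage(20_000, 4_000, 8_000)
print(rl)             # 2.0 -> benefit worth the cost

# Discounting the probabilistic estimates by an assumed factor rho < 1:
rho = 0.6
print(rho * rl > 1)   # True: still worth it after discounting
```

With rl = 2.0, every dollar spent on reduction removes two dollars of expected exposure, which
survives even a fairly pessimistic discount factor.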
For example: develop a risk mitigation plan, assuming that high staff turnover is noted as a
project risk, r1. Based on past history, the likelihood, l1, of high turnover is estimated at
0.70, and the impact, x1, is projected at level 2 (critical): high turnover will have a critical
impact on project cost and schedule.

Develop a strategy to mitigate this risk for reducing turnover.


Possible steps to be taken are as follows:
• A meeting with current staff must be held to determine the causes of turnover.
• The causes that are under our control should be mitigated before the project starts.
• Once the project commences, assume turnover will occur and develop techniques to ensure
continuity when people leave.
• Project teams must be organized so that information about each development activity is
widely dispersed.
• Documentation standards must be defined, with mechanisms to ensure that documents are
developed in a timely manner.
• Peer reviews must be conducted of all work, so that more than one person is familiar with
it.
• A backup staff member must be assigned for every critical technologist.

12.7.2 Risk Monitoring:


The software team must regularly monitor the progress of the product and the resolution of risk
items, and at times may need to take corrective actions. This happens after risks are
identified, analyzed and prioritized, and actions are established, since those are the first
steps of risk management. Monitoring can be done as part of the team's project management
activities or via explicit risk management activities; teams often regularly monitor their top
10 risks. Risks need to be revisited at regular intervals so that the team can re-evaluate each
risk and determine whether new circumstances have changed its probability or impact. Some risks
may be added to the list at each interval, so the risks must be reprioritized to see which move
above the cutoff line and need action plans, and which move below the line and no longer need
them. A key to successful risk management is that proactive actions are owned by individuals
and are monitored.

Example:

Risk: Computer Crash


1. Mitigation:
The cost associated with a computer crash resulting in a loss of data is crucial; it is not the
crash itself that is crucial, but rather the loss of data. A loss of data will result in not
being able to deliver the product to the customer, which will in turn result in not receiving a
letter of acceptance from the customer. Without the letter of acceptance, the group will receive
a failing grade for the course. As a result, the organization is taking steps to make multiple
backup copies of the software in development, and all documentation associated with it, in
multiple locations.
2. Monitoring
When working on the product or documentation, each staff member should always be aware of the
stability of the computing environment they are working in. Any changes in the stability of the
environment should be recognized and taken seriously.

3. Management
The lack of a stable-computing environment is extremely hazardous to a software
development team. In the event that the computing environment is found unstable, the
development team should cease work on that system until the environment is made stable
again, or should move to a system that is stable and continue working there.
RMMM steps incur additional project cost; however, 80 percent of the overall project risk can
often be accounted for by only 20 percent of the identified risks. Work performed during the
earlier risk analysis steps will help the planner determine which risks reside in that 20
percent (e.g., the risks that lead to the highest risk exposure).

12.8 The RMMM Plan:


The Risk Mitigation, Monitoring and Management (RMMM) plan documents all work performed as part
of risk analysis and is used by the project manager as part of the overall project plan. The
goal of the RMMM plan is to identify as many potential risks as possible. Risk checklists are
used to identify potential risks in a generic fashion. Once all risks have been identified, they
are evaluated to determine their probability of occurrence, and plans are made to avoid each
risk, to track each risk to determine whether it is becoming more or less likely to occur, and
to handle those risks should they occur.
The primary objective of risk mitigation is achieved by developing such a plan. The objectives
of monitoring are to assess whether a predicted risk occurs and to collect information that can
be used for future risk analysis. It is essential that risk management be done iteratively,
throughout the project, as part of the team's project management routine.
An alternative to the RMMM plan is the Risk Information Sheet (RIS). The RIS is maintained using
a database system, so that creation and information entry, priority ordering, searches, and
other analyses can be accomplished easily.
Review Questions:
1. Define risk management and explain the phases involved in risk management.
2. Explain risk management strategies and processes in detail.
3. Write a short note on the RMMM plan.
4. Explain reactive versus proactive risk management.
5. Explain risk identification in detail.
6. Define risk projection and explain how to develop a risk table.
7. Write a short note on risk monitoring, with an example.
C h a p t e r 13

13.1 QUALITY MANAGEMENT

The quality of a product can be measured in terms of performance, reliability and durability.
Quality management ensures consistently superior products and services. It is the act of
overseeing all activities and tasks needed to maintain a desired level of excellence, and has
four main components: 1. Quality Planning, 2. Quality Assurance, 3. Quality Control and
4. Quality Improvement.
Quality management is essential for customer satisfaction, which eventually leads to customer
loyalty. It is focused not only on product and service quality, but also on the means to
achieve it. Quality management therefore uses quality assurance and control of processes, as
well as products, to achieve more consistent quality.
Quality can be improved by applying some effective quality measures, which are discussed as
follows:
• Break down barriers between departments;
• Management should learn their responsibilities and take on leadership;
• Continuous supervision should help people, machines and gadgets do a better job;
• Improve constantly and forever the system of production and service; and
• Institute a vigorous program of education and self-improvement.

Managing quality means constantly pursuing excellence: making sure that what the organization
does is fit for purpose, and not only stays that way, but keeps improving. Quality products
ensure that you survive cut-throat competition. Customers recognize that quality is an important
attribute in products and services, and suppliers recognize that quality can be an important
differentiator between their own offerings and those of competitors (this quality
differentiation is also called the quality gap). In the past two decades this quality gap has
been greatly reduced between competitive products and services. Customer satisfaction is the
backbone of quality management; setting up a million-dollar company without taking care of
customers' needs will ultimately decrease its revenue.
Significant factors include quality culture, the importance of knowledge management, and the
role of leadership in promoting and achieving high quality. There are many methods for quality
improvement, covering product improvement, process improvement and people-based improvement.
The following list contains quality management methods and techniques that incorporate and
drive quality improvement:
1. ISO 9004:2008 - guidelines for performance improvement.
2. ISO 9001:2015 - a certified quality management system (QMS) for organizations that want
to prove their ability to consistently provide products and services that meet the needs
of their customers and other relevant stakeholders.
3. ISO 15504-4: 2005 - information technology, guidance on use for process improvement
and process capability determination.
4. QFD - Quality Function Deployment is also known as the house of quality approach.
5. Zero Defect Program - created by NEC Corporation of Japan, based upon statistical process
control; it was one of the inputs for the inventors of Six Sigma.
6. Six Sigma - Six Sigma combines established methods such as statistical process
control, design of experiments and failure mode and effects analysis (FMEA) in an overall
framework.
7. PDCA -Plan, Do, Check, Act cycle for quality control purposes. (Six
Sigma's DMAIC method (define, measure, analyze, improve, control) may be viewed as a
particular implementation of this.)
8. Quality circle is a group (people oriented) approach to improvement.
9. Taguchi methods are the statistical oriented methods including quality robustness, quality
loss function, and target specifications.
10. The Toyota Production System — reworked in the west into lean manufacturing.
11. TQM (Total Quality Management) is a management strategy aimed at embedding
awareness of quality in all organizational processes. First promoted in Japan with the
Deming prize which was adopted and adapted in USA as the Malcolm Baldrige National
Quality Award and in Europe as the European Foundation for Quality Management award.
12. TRIZ - the theory of inventive problem solving.
13. BPR (Business Process Reengineering) is a management approach aiming at optimizing
the workflows and processes within an organization.
14. OQRM (Object-Oriented Quality and Risk Management) is a model for quality and risk
management.
15. Top-down and bottom-up approaches - leadership approaches to change.

A quality management system (QMS) is a formalized system that documents the processes,
procedures, and responsibilities for achieving quality policies and objectives. Quality
management systems serve many purposes, including:
 Improving processes
 Reducing waste
 Lowering costs
 Facilitating and identifying training opportunities
 Engaging staff
 Setting organization-wide direction.
For example: if an organization is earning, its employees are also earning. Obviously, employees
will get frustrated when their salaries or other payments are not released on time; who would
feel like working in an organization that does not pay salaries on time? Implementing quality
management tools ensures high customer loyalty, in turn excellent work, increased cash flow in
the organization, and consequently satisfied employees and a healthy workplace. Quality
management processes make the organization a better place to work in. We therefore need to
remove unnecessary processes that merely waste employees' time and do not contribute much to the
organization's productivity. In this way, quality management enables employees to deliver more
work in less time.
13.2 Quality Concepts:

Where to apply quality?


On everything: every product, service, process, task, action or decision in an organization can
be judged in terms of quality. How good is it? Is it good enough? How can we make it better?

Who is responsible for quality?


Everyone, from the CEO to the newest intern, is responsible for maintaining quality. Different
people will have responsibility for, or influence over, different things that affect quality,
such as specifying requirements, meeting those requirements, or determining the quality of
something. It is also important to have people who can provide the knowledge, tools and guidance
to help everyone else; such people play a very vital role in achieving quality. These are the
quality professionals, and their job is to make organizations better. Some are generalists and
some are specialists; many will have titles such as quality manager, quality engineer, quality
director or assurance manager, while others deal with aspects of quality as part of a broader
remit. Some of them are concerned with the delivery of products and services and with business
excellence, while some are part of the leadership of their organizations.
The dedication of quality professionals unites to protect and strengthen their organizations,
taking care that expectations are ideally exceeded.

13.3 Software Quality Assurance:

Definition 1:
Software Quality Assurance (SQA) is a set of activities for ensuring quality in software
engineering processes that ultimately result in quality in software products.
Definition 2: Software quality assurance (SQA) is a process that ensures that developed
software meets and complies with defined or standardized quality specifications. SQA is an
ongoing process within the software development life cycle (SDLC) that routinely checks the
developed software to ensure it meets the desired quality measures.
SQA helps ensure the development of high-quality software, and SQA practices are implemented in
most types of software development. A quality assurance system is said to increase customer
confidence and a company's credibility, and to improve work processes and efficiency. It
incorporates and implements software testing methodologies to test the software throughout
development, rather than checking for quality only after completion. SQA processes test for
quality in each phase of development until the software is complete. Quality here is the degree
to which a system meets specified requirements and customer expectations. SQA also monitors the
processes and products throughout the SDLC: the software development process moves into the next
phase only once the current phase complies with the required quality standards. It includes
the following activities:
 Process definition and implementation
 Auditing
 Training
Processes could be:
 Software Development Methodology
 Project Management
 Configuration Management
 Requirements Development/Management
 Estimation
 Software Design
Once the processes have been defined and implemented, Quality Assurance has the following
responsibilities:
 identify weaknesses in the processes
 correct those weaknesses to continually improve the process
The quality management system under which the software system is created is normally based on
one or more of the following standards:
 Capability Maturity Model Integration (CMMI)
 Six Sigma
 ISO 9000
The above-mentioned standards are the most popular ones.

Software Quality Assurance encompasses the entire software development life cycle, and the
goal is to ensure that the development and maintenance processes are continuously improved to
produce products that meet specifications and requirements. Following are the quality assurance
criteria against which the software is evaluated:
 correctness
 efficiency
 flexibility
 integrity
 interoperability
 maintainability
 portability
 reliability
 reusability
 testability
 usability

13.4 Software Reviews


A software review is "a process or meeting during which a software product is examined by
project personnel, managers, users, customers, user representatives, or other interested parties
for comment or approval" (definition from Wikipedia).
A review is a systematic examination of a document by one or more people with the main aim
of finding and removing errors early in the software development life cycle. Reviews are used
to verify documents such as requirements, system designs, code, test plans and test cases.
Productivity is improved and timescales are reduced because defects are corrected in early
stages. Reviewing work-products also helps ensure that those work-products are clear and
unambiguous. Testing costs and time are reduced because enough time has been spent during the
initial phases, so the final software has fewer defects.

13.4.1 Software Review Process:

The figure illustrates the software review process. The pre-review activities are concerned
with review planning and review preparation; this is the first phase of the review process.
After this phase comes the review meeting, in which the author of the document or program
being reviewed "walks through" it with the review team. Then the post-review phase follows,
which covers error correction and improvement: the problems and issues raised in the review
meeting are addressed here.

13.4.2 Software reviews may be divided into three categories:


 Software peer reviews are conducted by the author of the work product, or by one or more
colleagues of the author, to evaluate the technical content and/or quality of the work.[2]
 Software management reviews are conducted by management representatives to evaluate the
status of work done and to make decisions regarding downstream activities.
 Software audit reviews are conducted by personnel external to the software project, to
evaluate compliance with specifications, standards, contractual agreements, or other criteria.

 Different types of Peer Review can be explained as follows:

1. Code Review:

It is a systematic examination of computer source code. This kind of review is usually
performed as a peer review without management participation. Code reviews can often find
and remove common vulnerabilities such as format string exploits, race conditions, memory
leaks and buffer overflows, thereby improving software security. Reviewers prepare for the
review meeting and produce a review report with a list of findings. Average code review
rates are about 150 lines of code per hour. Such reviews may be quite informal or very
formal and can have a number of purposes, including discussion, decision making, evaluation
of alternatives, finding defects and solving technical problems. Code review practices fall
into three main categories:
1. Pair programming, a type of code review where two persons develop code together at the
same workstation;
2. Formal code review;
3. Lightweight code review.

2. Inspection:
 It is a very formal type of peer review in which the reviewers follow a well-defined
process to find defects.
 Trained moderators, who are not the authors, take care of this review activity. They are
responsible for conducting a peer examination of the document or product.
 The documents are prepared and checked carefully by the reviewers before the meeting
starts; pre-meeting preparation is essential.
 The product is examined accordingly, and the defects found are fixed.
 The defects and their solutions are documented in a logging record or issue log.
 A formal follow-up is carried out by the moderator applying exit criteria, which
ensures timely and prompt corrective action; in this way the inspection is carried
out.

The goals of inspection are:


 It helps the author to improve the quality of the document under inspection.
 It removes defects efficiently and as early as possible, improving product quality.
 It creates common understanding by exchanging information.
 It helps participants learn from defects found and prevents the occurrence of similar defects.
3. Walkthrough:

It is a form of peer review where the author leads members of the development team and
other interested parties through a software product. The participants ask questions and
make comments about possible defects and deviations from development standards. The name
"walkthrough" itself suggests checking or reviewing the entire software product, where the
software product normally refers to some kind of technical document. As indicated by the
IEEE definition, this might be a software design document or program source code, but use
cases, business process definitions, test case specifications, and a variety of other
technical documentation may also be walked through.

Objectives
Basically, a walkthrough has one or two objectives:
1. To gain feedback about the technical quality or content of the document;
2. To familiarize the audience with the content.
IEEE 1028 recommends three specialist roles in a walkthrough:
The Author is responsible for explaining the overall product step by step at the
walkthrough meeting, and is probably responsible for completing other formalities too.
The Author guides the participants through the document according to his or her thought
process to achieve cooperation.
The Walkthrough Leader conducts the walkthrough, handles administrative
tasks, and ensures that the process is conducted efficiently.
The Recorder notes all potential errors, decisions, and action items identified
during the walkthrough meetings.

4. Technical review:

It is a form of peer review in which a team of qualified people examines the suitability of
the software product for its intended use and identifies discrepancies from specifications and
standards. It is less formal than an inspection; it is led by a trained moderator, but can
also be led by a technical expert, and is often performed as a peer review without management
participation. Architects, designers and key users focus on the content of the
document, looking for defects, errors and various other problems. In
practice, technical reviews vary from quite informal to very formal.

The goals of the technical review are:

 The participants must be informed about the technical content of the document.
 Whether the technical concepts are used correctly must be ensured at an early stage.
 The value of the technical concepts and alternatives in the product is to be assessed,
accepted and implemented.
 Consistency in the use and representation of technical concepts must be
maintained.

13.4.3 Formal Technical Reviews:

A formal technical review (FTR) is a software quality assurance activity performed by
software engineers. The FTR serves as a training ground, enabling junior engineers
to observe different approaches to software analysis, design, and implementation. The FTR is
actually a class of reviews that includes walkthroughs, inspections, round-robin reviews and
other small-group technical assessments of software. The FTR also serves to promote backup and
continuity, because a number of people become familiar with parts of the software that they may
not have otherwise seen.

The Objectives of the FTR


 To uncover errors in function, logic, or implementation for any representation of the
software;
 To verify that the software under review meets its requirements;
 To ensure that the software has been represented according to predefined standards;
 To achieve software that is developed in a uniform manner;
 To make projects more manageable.

13.5 Statistical Software Quality Assurance:

Statistical quality assurance reflects a growing trend throughout industry to become more
quantitative about quality. Evaluation of software quality depends on statistics for many
functions, such as assessing the number of defects in different software processes and
evaluating those defects efficiently. Effective quality assurance requires an understanding of
the fundamentals of statistical reasoning, the use of numeric and graphical descriptive
statistics, parameter estimation and inferential methods, research design, and linear
regression methods.
The statistical quality assurance implies the following steps:
1. Information about software defects is collected and categorized.
2. An attempt is made to trace each defect to its underlying cause (e.g., non-conformance to
specifications, design error, violation of standards, or poor communication with the
customer).
3. Using the Pareto principle (80 percent of the defects can be traced to 20 percent of all
possible causes), isolate that 20 percent (the "vital few").
4. Once the vital few causes have been identified, move to correct the problems that have
caused the defects.
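The Pareto-based steps above can be sketched as a small script. The defect categories and counts below are a hypothetical log invented for illustration, not data from the text:

```python
from collections import Counter

# Step 1: collect and categorize defect information (hypothetical log).
defect_log = [
    "spec non-conformance", "design error", "spec non-conformance",
    "standards violation", "spec non-conformance", "design error",
    "poor customer communication", "spec non-conformance",
    "spec non-conformance", "design error",
]

def vital_few(defects, threshold=0.8):
    """Return the smallest set of causes accounting for at least
    `threshold` of all defects (the Pareto "vital few")."""
    counts = Counter(defects)               # Step 2: tally defects per cause
    total = sum(counts.values())
    running, vital = 0, []
    for cause, n in counts.most_common():   # most frequent causes first
        vital.append(cause)
        running += n
        if running / total >= threshold:    # stop once ~80% is covered
            break
    return vital

# Steps 3-4: isolate the vital few, then focus correction effort on them.
print(vital_few(defect_log))  # ['spec non-conformance', 'design error']
```

In this toy log, two of the four causes account for 80 percent of the defects, so corrective effort would be concentrated on them first.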
Six Sigma is the most widely used strategy for statistical quality assurance in industry today.
In the Six Sigma process, data is collected and statistically analyzed to measure and improve a
company's operational performance by identifying and removing defects in manufacturing and
service-related processes.

13.6 Software Reliability:


Definition by NPTEL: Reliability of a software product essentially denotes its trustworthiness
or dependability.
Definition by IEEE 610.12-1990: Reliability is "the ability of a system or component to
perform its required functions under stated conditions for a specified period of time."
According to ANSI, software reliability is defined as the probability of failure-free software
operation for a specified period of time in a specified environment. Reliability is an
important attribute of software quality, together with functionality, usability,
performance, serviceability, capability, installability, maintainability, and documentation.

Software reliability is a key part of software quality. The study of software reliability can be
categorized into three parts: modeling, measurement and improvement. Reliability refers to the
ability of a product to perform its specified function under service conditions. In other words,
reliability can be depicted as the probability that an item will perform appropriately for a
specified time period under a given service condition. The high complexity of software is the
major contributing factor to software reliability problems.
The reliability of a computer program is an important element of its overall quality.
Intuitively, the reliability of a system improves if the number of defects in it is reduced.
However, there is no simple relationship between the observed system reliability and the number
of defects in the system.
For example, suppose it has been observed from the behaviour of a large number of programs that
90% of the execution time of a program is spent executing only 10% of the instructions in the
program. These most-used 10% of instructions are often called the core of the program. The
remaining 90% of the program statements are called non-core and are executed for only 10% of
the total execution time. Therefore, removing even 60% of the defects or errors from this
least-used part of the system would lead to only a very small improvement in the system's
reliability.

Thus, the reliability of a product depends not only on the number of errors but also on the
exact location of those errors. If not considered carefully, software reliability can become the
reliability bottleneck of the whole system.
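The quantitative definition above (probability of failure-free operation for a specified time in a specified environment) can be illustrated with the exponential reliability model R(t) = e^(-λt), a common textbook assumption (not prescribed by this chapter) in which λ is a constant failure rate:

```python
import math

def reliability(failure_rate, t):
    """Probability of failure-free operation for time t under the
    exponential model with constant failure rate (failures per hour)."""
    return math.exp(-failure_rate * t)

# Hypothetical program failing on average once per 1000 hours (lambda = 0.001):
r = reliability(0.001, 100)              # chance of surviving 100 hours
# Fixing defects in the heavily executed "core" halves the effective
# failure rate; fixing rarely executed non-core code barely changes it:
r_core_fixed = reliability(0.0005, 100)
print(round(r, 3), round(r_core_fixed, 3))
```

Under these illustrative numbers, the core fix raises the 100-hour survival probability from about 0.905 to about 0.951, echoing the core/non-core argument above: reliability gains depend on where the fixed defects lie, not just how many are fixed.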

13.6.1 Why software reliability is difficult to measure:

1. The reliability improvement obtained by fixing a single bug depends on the location of the
bug in the code.
2. The perceived reliability of a software product is highly observer-dependent.
3. The reliability of a product keeps changing as errors are detected and fixed.

13.6.2 Difference between Hardware reliability and software reliability

Reliability behaviour for hardware and software is very different. Since the characteristics of
software and hardware differ, their reliability cannot be expected to behave similarly. Hardware
failures are inherently different from software failures.
 Most hardware failures are due to component wear and tear. Hardware is physical and
visible, so it is quite obvious that it can wear out, deteriorate, or rust under harsh
environmental conditions. For example, a logic gate may be stuck at 1 or 0, or a resistor
might short-circuit. To fix hardware faults, one has to either replace or repair the failed
parts.
 On the other hand, a software product continues to fail until the error is tracked down
and either the design or the code is changed. When hardware is repaired, however, its
reliability is restored to its previous level.
 When a software failure is repaired, we cannot be sure whether reliability will increase or
decrease (reliability may decrease if the fix introduces new errors).
 Hardware reliability is concerned with stability, whereas software reliability aims at
reliability growth.
 The change of failure rate over the product lifetime for typical hardware and a software
product can be represented as follows:
Figure: Change in Failure Rate
13.7 The ISO 9000 Quality Standards:
ISO (the International Organization for Standardization) is a consortium of 63 countries
established to formulate and foster standardization. ISO published its 9000 series of standards
in 1987, creating its Quality Management System (QMS) standards in the same year.
Standardization helps optimize operations through the proper utilization of resources. The
standards are reviewed every few years by the International Organization for Standardization.
The ISO 9000 standard specifies the guidelines for maintaining a quality system. The structure
of ISO comprises technical committees, sub-committees and working groups. The ISO 9000 series
was developed to serve the quality aspects, and it includes the eight principles of quality
management systems. In short, the standards require an organization to say "what it is doing to
ensure quality", and a certificate from ISO is a mark of quality certification for that
organization.

Types of ISO 9000 quality standards:

 ISO 9001 applies to the organizations engaged in design, development, production, and
servicing of goods. This is the standard that is applicable to most software development
organizations.
 ISO 9002 applies to those organizations which do not design products but are only
involved in production. Examples of these category industries include steel and car
manufacturing industries that buy the product and plant designs from external sources and
are involved in only manufacturing those products. Therefore, ISO 9002 is not applicable
to software development organizations.
 ISO 9003 applies to organizations that are involved only in installation and testing of the
products.

Software is intangible in nature, so it is difficult to control its quality: it is very
difficult to control and manage anything that cannot be seen or touched. In contrast, in
industries such as car manufacturing one can see the product being developed through various
stages, such as fitting the engine, fitting the doors, and so on. In software development the
only raw material consumed is data or information, whereas large quantities of raw materials
are consumed during the development of most other products.

Significant Features of ISO 9001 Certification:

 All documents concerned with the development of a software product should be properly
managed, authorized, and controlled. This requires a configuration management system
to be in place.
 Proper plans should be prepared and then progress against these plans should be
monitored.
 Important documents should be independently checked and reviewed for effectiveness and
correctness.
The product should be tested against its specification.
 Several organizational aspects should be addressed, e.g., management reporting of the
quality team.
Thus, ISO 9000 certification is awarded by an international standards body, so it can be quoted
by an organization in official documents, communication with external parties, and other
documentation. The main reason behind establishing ISO standards is to ensure the required
safety, quality and reliability of products and services. This raises levels of productivity
and reduces the chance of errors.

Review Questions:

1. Explain concept of quality and quality management in detail.


2. Define Software Quality Assurance and explain activities involved in details.
3. What is software review and explain the review process.
4. Explain any two review methods in details with their advantages and disadvantages.
5. Explain the ISO 9000 quality standards and their significant features.
6. Write a short note on formal technical reviews.
7. Define software reliability and explain the difference between hardware reliability and
software reliability.
