Introduction to Software Engineering Concepts
SOFTWARE ENGINEERING CONCEPTS
Unit 1
Chapter 1
INTRODUCTION TO SOFTWARE ENGINEERING
Software is more than just a program. It is a group of programs written in a high-level language with a specified syntax that is understandable by both humans and machines. Software is divided into two major categories: system software and application software.
Software is a program or set of programs containing instructions which provide the desired functionality. It also consists of data structures that enable the program to manipulate information.
Definition by IEEE: The collection of computer programs, procedures, rules, and associated documentation and data is called software.
The objective of software engineering is to produce software products. Software products are software systems delivered to a customer together with documentation which describes how to install and use the product. Software products fall into two broad categories:
1. Generic Products: These are standalone systems which are produced by a development
organization and sold on the open market to any customer who is able to buy them.
2. Customized Products: These are systems which are commissioned by a particular
customer. The software is developed specially for that customer by some contractor.
Different individuals judge software on different bases, because they are involved with the software in different ways. For example, users want the software to perform according to their requirements. Similarly, developers involved in designing, coding, and maintaining the software evaluate it by looking at its internal characteristics before delivering it to the user. Software characteristics are classified into six major categories.
• Functionality: Refers to the degree of performance of the software against its intended purpose.
• Reliability: Refers to the ability of the software to provide desired functionality under the given
conditions.
• Usability: Refers to the extent to which the software can be used with ease.
• Efficiency: Refers to the ability of the software to use system resources in the most effective and
efficient manner.
• Maintainability: Refers to the ease with which the modifications can be made in a software system
to extend its functionality, improve its performance, or correct errors.
• Portability: Refers to the ease with which software developers can transfer software from one
platform to another, without (or with minimum) changes. In simple terms, it refers to the ability of
software to function properly on different hardware and software platforms without making any
changes in it.
In addition to the above-mentioned characteristics, robustness and integrity are also important.
Robustness: Refers to the degree to which the software can keep on functioning in spite of
being provided with invalid data.
Integrity: Refers to the degree to which unauthorized access to the software or data can be
prevented.
1.3 Changing Nature of Software:
The changing nature of software can be well understood when compared with hardware. Unlike hardware, software does not go through a manufacturing process. Environmental hazards may affect hardware, since it is a factory product; this state of hardware is called the wear-out state. Such defects can be repaired by the manufacturer or designer, and the hardware becomes as it was. It is totally different in the case of software: software does not wear out.
Even if software does not have any defects, it may need modification because the users' demands on the software may change. If the demands of users remain unfulfilled, then such software is called defective software. While the developer is trying to modify the software to meet one need of the user, another necessity comes up. In this way the original software slowly changes as a result of the modifications made afterwards according to the demands of users. The changing nature of software is often explained by contrast with the hardware bath-tub curve: introducing changes to source code often results in the introduction of new defects. The idealized failure curve of the software, starting from when it is original, is shown as follows:
1.3.2 Customer Myths: Users sometimes believe myths about software, which may lead to false expectations that later disappoint them. Some common customer myths are listed below:
1. Myth: "The requirements given at the initial stage are enough for the development of software."
Reality: At the initial stage the requirements are generally incomplete and ambiguous, which may often lead to failure of the project. Detailed requirements are needed before starting development. When requirements are added in later stages, the entire process has to be repeated, which consumes time as well as effort.
2. Myth: "Changes can easily be added to the software at any stage of development, since software is considered to be very flexible."
Reality: Adding changes to the software in the later phases may require redesigning, and the cost of development may be much higher than for changes added at an early stage.
1.3.3 Developer Myths: Some of the common developer myths are as follows:
1. Myth: "Development is complete after the code is delivered to the customer."
Reality: The effort is doubled once the software reaches the customer.
2. Myth: "Documentation is unnecessary; it only takes more time to complete the project successfully."
Reality: Systematic documentation is essential, as it enhances quality and ultimately reduces redesigning.
3. Myth: "Software quality can be assessed only after the program is executed."
Reality: Software quality must be measured at every phase of the software development activity. Measures such as quality assurance techniques can be applied. There are various quality measures, which are explained in a later part of this text.
Review Questions
1. What is software? State various definitions of software engineering.
2. What is a software product? Explain in brief.
3. Explain the evolving role of software and state its characteristics.
4. Write a short note on the changing nature of software.
5. Explain any three types of myths and realities in software engineering in detail.
Chapter 2
[Figure: the layered view of software engineering, with Tools resting on Methods, Methods on Processes, and Quality Focus as the base layer.]
1. Quality Focus:
The fundamental building block of any software engineering effort is the quality focus. Software development basically depends on an organizational commitment to quality. Software quality acts as the "bedrock" which supports the software engineering activities. A culture of quality focus in the development of software ultimately leads to improvement in the software.
2. Process:
The process layer is the foundation of software development. A process defines a framework for a set of Key Process Areas (KPAs) that must be established for effective software development. The key process areas cover: 1. Technical methods, 2. Work products, 3. Documents, reports, etc., 4. Milestone establishment, 5. Quality management. The process in software development holds all the technology layers together, and thus timely development is achieved.
3. Methods:
Software engineering methods provide the technical "how-to's" for building software. Methods usually include requirements analysis, design, program construction, testing, and support. Methods describe modeling activities and other representations which are needed in critical conditions.
4. Tools:
Software engineering tools provide automated or semi-automated support for the process and the methods. The information created by these tools can be shared across different platforms. Tools are integrated so that information created by one tool can be used by another; when this happens, computer-aided software engineering (CASE) is established, which supports faster software development.
Software Process
Process defines a framework for a set of Key Process Areas (KPAs) that must be established for
effective delivery of software engineering technology. This establishes the context in which
technical methods are applied, work products such as models, documents, data, reports, forms,
etc. are produced, milestones are established, quality is ensured, and change is properly
managed. A process framework establishes the foundation for a complete software process by
identifying a small number of framework activities that are applicable to all software projects,
regardless of size or complexity. It also includes a set of umbrella activities that are applicable
across the entire software process. Some most applicable framework activities are described
below.
The primary focus of the CMMI model is that the quality of a system or product is highly influenced by the process used to develop and maintain it. It acts as a guide to a project and to an entire organization, and it provides organizations with the essential elements of effective processes. Thus it is called a world-class performance improvement framework. Maturity implies a potential for growth in capability and indicates both the richness of an organization's process and the consistency of its application across projects.
The process pattern can best be understood by considering the words process and pattern separately. A process is defined as a series of actions in which one or more inputs are used to produce one or more outputs. A pattern can be explained as similar features which keep recurring over and over again, although their detailed appearance never remains the same. Processes are the steps followed to achieve a task, and patterns are related behaviors in software development. According to Alexander, a pattern is a general solution to a common problem or issue, one from which a specific solution may be derived. Combining both, process patterns can be defined as sets of activities, actions, work tasks, or work products in software development. Examples are: 1. Customer communication. 2. Analysis. 3. Requirements gathering. 4. Reviewing a work product. 5. Design model.
The use of process patterns enhances reusability and flexibility, and reduces the costs and risks involved in systems development. These patterns are extensively used for building almost all types of software systems. One such definition, by Ambler (published by Cambridge University Press), defines a process pattern as "a collection of general techniques, actions, and/or tasks (activities) for developing object-oriented software". Ambler describes three types of process patterns:
1. A Task process pattern depicts the detailed steps to perform a specific task.
2. A Stage process pattern includes a number of Task process patterns and depicts the steps of a single project stage. This is often an iterative process.
3. A Phase process pattern represents the interactions between its Stage process patterns within a single phase.
The process patterns can be a process fragment commonly encountered in software development.
They are being used as process components and can thus be applied as reusable building blocks.
2.5 Process Assessment:
The existence of software process does not guarantee the timely delivery of the software and its
ability to meet the user's expectations. The process needs to be assessed in order to ensure that it
meets a set of basic process criteria, which is essential for implementing the principles of
software engineering in an efficient manner. The process is assessed to evaluate methods, tools,
and practices, which are used to develop and test the software. The aim of process assessment is
to identify the areas for improvement and suggest a plan for making that improvement. The main
focus areas of process assessment are listed below.
1. Obtaining guidance for improving software development and test processes
2. Obtaining an independent and unbiased review of the process
3. Obtaining a baseline (defined as a set of software components and documents that have
been formerly reviewed and accepted; that serves as the basis for further development)
for improving quality and productivity of processes.
Software process assessment examines whether the software processes are effective and efficient
in accomplishing the goals. This is determined by the capability of selected software processes.
The capability of a process determines whether a process with some variations is capable of
meeting user's requirements. In addition, it measures the extent to which the software process
meets the user's requirements. Process assessment is useful to the organization as it helps in
improving the existing processes. In addition, it determines the strengths, weaknesses and the
risks involved in the processes.
The Team Software Process (TSP) guides engineering teams that develop software-intensive products. It provides a defined operational process framework that is designed to help teams of managers and engineers organize projects. TSP helps organizations establish a mature and disciplined engineering practice that produces secure, reliable software in less time and at lower cost. It helps teams produce defect-free software within deadlines by empowering development teams. These technologies are based on the premise that a defined and structured process can improve individual work quality and efficiency. The TSP was introduced in 1998 and builds upon the foundation of the PSP to enable engineering teams to build software-intensive products more predictably and effectively. It aims to produce software products that range in size from small projects of several thousand lines of code (KLOC) to very large projects of more than half a million lines of code.
Review Questions:
Process Models
There are various types of software development life cycle models that are designed to be followed during the process of software development. We can call them "software development process models". To ensure full completion of each process, a series of steps, which can be unique to each model, is followed.
Some popular software development life cycle models followed by industry are: the Waterfall model, the Incremental model, the Iterative model, the Spiral model, the V-model, and the Big Bang model. Other methodologies include the RAD (Rapid Application Development) model, the Agile model, and Prototyping models.
Hence software process models are models that describe the series and sequence of phases for the entire lifetime of a product; for this reason they are sometimes called product life cycles.
The Waterfall model is also called the linear-sequential life cycle model. It was the first model to be introduced, and it is a very simple model to understand and use.
In the Waterfall model, each phase or step is followed sequentially; that is, a phase must be completed before the next phase can be started.
The Waterfall model is divided into a number of phases: requirements analysis, system design, implementation, testing, deployment, and maintenance.
The sequential phases in Waterfall model are:
Requirement Gathering and analysis: This phase captures all the possible requirements of
the system that needs to be developed and are documented in requirement specification.
System Design: In this phase the overall system architecture is specified such as specifying
hardware and system requirements. Here the system design is prepared.
Implementation: In this phase, inputs from the system design are used to develop small programs called units, which are then integrated in the next phase.
Integration and Testing: Each unit developed in the implementation phase is tested, and the tested units are then integrated into a system. After integration the entire system is tested for any faults and failures.
Deployment of system: Once all the functional and non-functional testing is complete, the final product is released into the market or deployed in the customer's environment.
Maintenance: After the deployment phase, some issues arise in the client environment. To fix such issues, proper action is taken, or a better version is released to enhance the functionality of the product. This activity comes under maintenance, which is all about delivering changes in the customer environment.
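The strictly sequential flow of these phases can be sketched in a short, illustrative program. This is only a toy model of the Waterfall idea: the phase names follow the text, while the "artifact" strings and function names are hypothetical.

```python
# Toy model of the Waterfall model: each phase runs strictly in sequence
# and consumes the artifact produced by the previous phase.

PHASES = [
    "requirements",
    "design",
    "implementation",
    "integration_and_testing",
    "deployment",
    "maintenance",
]

def run_waterfall(initial_input):
    """Run every phase in order; a phase cannot start until the
    previous one has produced its artifact."""
    artifacts = {}
    previous = initial_input
    for phase in PHASES:
        # Each phase transforms the previous artifact into a new one.
        previous = f"{phase}({previous})"
        artifacts[phase] = previous
    return artifacts

result = run_waterfall("customer_needs")
print(result["design"])  # design(requirements(customer_needs))
```

Note how the design artifact depends on the completed requirements artifact: nothing in a later phase can begin until every earlier phase has finished, which is exactly the property that makes late requirement changes expensive in this model.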
In the incremental process model, the requirements are broken down into multiple modules of the software development cycle. Each iteration passes through the requirements, design, coding, and testing phases. The word increment conveys that the work proceeds incrementally until it is complete: each increment adds a little more to what has been built.
The incremental process is used when the requirements of the system are clearly understood and when there is demand for an early release of the product; this type of model is widely used for web applications and in product-based companies.
An evolutionary software process is typically a cycle in which the process of iteration allows the developers to incorporate changes incrementally. The evolutionary process models are of the following types:
1. The prototyping model
2. The spiral model
3. The concurrent development model
1. Communication
Communication is the main activity for any information collection. In this phase, the customer and developer communicate and discuss the overall objectives of the software.
2. Quick design
To produce the quick design it is mandatory to know the requirements;
important aspects like the input and output formats of the software are required.
Rather than a detailed plan, it focuses on those aspects which are visible to the user.
After evaluating the current prototype, if the customer is not satisfied then the necessary
changes are made accordingly.
The process of making necessary corrections in the prototype is repeated until
all the requirements of the users are met.
When the customer is satisfied with the final prototype, the system is developed on the
basis of that prototype.
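The evaluate-and-refine loop described above can be sketched as follows. This is an illustrative toy model only: the feature names and the satisfaction check are hypothetical stand-ins for real customer evaluation.

```python
# Toy sketch of the prototyping loop: refine the prototype until the
# (simulated) customer is satisfied.

def build_quick_design(requirements):
    """The initial quick design covers only the requirements known so far."""
    return {"features": list(requirements), "revision": 0}

def refine(prototype, feedback):
    """One refinement pass: incorporate customer feedback."""
    prototype["features"].extend(feedback)
    prototype["revision"] += 1
    return prototype

def customer_satisfied(prototype, must_have):
    """Stand-in for customer evaluation of the current prototype."""
    return must_have.issubset(set(prototype["features"]))

must_have = {"login", "search", "report"}
prototype = build_quick_design(["login"])
pending_feedback = ["search", "report"]

while not customer_satisfied(prototype, must_have):
    # Each pass plays the role of one customer-evaluation cycle.
    prototype = refine(prototype, [pending_feedback.pop(0)])

print(prototype["revision"])  # 2
```

The loop terminates only when the customer's must-have requirements are all reflected in the prototype, mirroring the repeated correction cycle described in the text.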
2. The elaboration phase refines and expands the preliminary use-cases that were developed
as part of the inception phase and expands the architectural representation.
3. The construction phase of the unified process is all about construction activity: it develops the software components using the architectural model as input.
4. The transition phase is all about giving software to end-users for beta testing. At the end of
the transition phase, the software becomes a usable software release.
5. In the production phase the on-going use or working of the software is monitored and
support for the operational environment is provided.
Chapter 4
Software Requirements
4.1 What are Software Requirements?
A software requirement can be defined as a description of the features and functionalities of the target system. Software requirements specify the expectations of users from the software product. A software requirement can be known or unknown, expected or unexpected. The process of gathering requirements from the user, analyzing them, and then documenting them is called requirement engineering. The main aim of requirement engineering is to develop and maintain a system requirements specification document.
In systems engineering and requirements engineering, two types of requirements can be specified: functional and non-functional requirements. A functional requirement is something which can be done by the system, for example: add more customer details, print a receipt, etc. Examples of functional requirements:
A search option is given to the user to search through various invoices.
The user should be able to mail any report to management.
Users can be divided into groups, and groups can be given separate rights.
The software should comply with business rules and administrative functions.
The software is developed keeping downward compatibility intact.
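As an illustration, the first functional requirement in the list (searching through invoices) might be sketched in code as below. The Invoice structure and the search-by-customer interpretation are assumptions made for this example, not part of the requirement itself.

```python
# Hypothetical sketch of the "search from various invoices" functional
# requirement. The Invoice fields and the matching rule are assumptions.

from dataclasses import dataclass

@dataclass
class Invoice:
    number: str
    customer: str
    amount: float

def search_invoices(invoices, query):
    """Return every invoice whose customer name contains the query,
    case-insensitively -- one concrete reading of the requirement."""
    q = query.lower()
    return [inv for inv in invoices if q in inv.customer.lower()]

invoices = [
    Invoice("INV-001", "Acme Corp", 120.0),
    Invoice("INV-002", "Globex", 75.5),
]
print([inv.number for inv in search_invoices(invoices, "acme")])  # ['INV-001']
```

The point is that a functional requirement names a concrete capability the system must provide; the code is one possible realization of that capability, not the requirement itself.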
A non-functional requirement essentially specifies criteria that can be used to judge the operation of a system, that is, how the system should behave. Put simply, non-functional requirements describe how the system works, while functional requirements describe what the system should do.
Non-functional requirements include -
Security
Logging
Storage
Configuration
Performance
Cost
Interoperability
Flexibility
Disaster recovery
Accessibility
We can define a user requirement as the expectation of a user from the software: generally, the set of tasks that the user wants the software to be able to do. The user requirements are recorded in a URD (user requirements document). The URD signifies the following points:
The main function of the user requirements document is to record the mandatory terms that have been decided in terms of design, development, etc.
A user requirements document is produced with the help of the requirements analysis activity.
The user requirements document is the primary input to subsequent system design work and to the procurement specifications for pertinent system development contracts.
There are various types of requirements, such as user requirements, software requirements, and system requirements. A system requirement can be defined as a structured document giving detailed descriptions of the services provided by the system. It can be a contract written between client and contractor. Following are example properties of a system requirement:
1. Facilities are provided to the user to define the types of external files.
2. Each external file type may be represented by a specific icon on the user's display and have an associated tool which may be applied to the file.
3. When the user selects an icon representing an external file, the effect should be to apply the tool associated with that external file type to the file represented by the selected icon.
Nowadays the new system and the existing systems must work together, because of which the interfaces of existing systems have to be precisely specified. An interface in computing can be defined as a shared boundary across which two or more separate components, such as software, computer hardware, peripheral devices, or humans, exchange information. An interface may give rise to interface requirements, in which a system needs to provide data to another system or user; for example, an inventory management system may require information such as store data. An interface specification is a document that records the details related to the software user interface. It covers all the actions that an end user may perform and all visual, auditory, and other interaction elements. There are five main types of user interface:
Command Line.
Graphical User Interface (GUI)
Menu Driven.
Form Based.
Natural Language
The collection of software requirements is the basis of the entire software development project. Hence the main thing is that they must be clear, correct, and well-defined. A complete software requirements specification must be clear, correct, coherent, and consistent. Software requirements work is about understanding what sorts of requirements may arise in the requirement elicitation phase and what kinds of requirements are expected from the software system.
Requirements are categorized as "must have", "should have", "could have", and "wish list". We can summarize these as follows: "must have" means the software cannot operate without them; "should have" is about enhancing the functionality of the software, generally as the client suggests; "could have" covers further expectations of the client, although the software can still function properly without implementing them; "wish list" means expectations which may not have a direct link with the functions of the software, requirements that do not map to any objective of the software but can be kept for software updates.
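This prioritization scheme (often known as MoSCoW) can be illustrated with a small sketch; the example requirements below are hypothetical.

```python
# Small sketch of must/should/could/wish prioritization of requirements.
# The requirement names and priorities are made-up examples.

requirements = [
    ("user login", "must"),
    ("export to PDF", "should"),
    ("dark theme", "could"),
    ("voice control", "wish"),
]

def minimum_viable(reqs):
    """The software cannot operate without its 'must have' items,
    so they define the minimum viable scope."""
    return [name for name, priority in reqs if priority == "must"]

print(minimum_viable(requirements))  # ['user login']
```

Sorting requirements this way makes it explicit which items define the minimum operable product and which can be deferred to later releases.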
The software requirements are captured in a document called the software requirements specification. A software requirements specification (SRS) is a full descriptive document of how the system is going to perform; it describes all the expected tasks. The SRS is signed off after the requirements engineering phase.
An SRS is a detailed description of a software system that has to be developed. It is a collection of functional and non-functional requirements. The SRS is used to minimize the time and effort required by the software development team to reach the desired goal, and its main aim is to minimize the development cost. Following are the qualities of a good SRS:
It should be Correct
It should be Unambiguous
It should be Complete
It should be Consistent
It should be Ranked for importance and/or stability
It should be Verifiable
It should be Modifiable
It should be Traceable.
Software requirement specification is responsible for documenting various requirements such as
• Functional requirement
• Performance
• Interface
• Maintainability
• Reliability
• Safety
• Quality
• Operational
• Resources
Content presentation
Easy Navigation
Simple interface
Responsive
Consistent UI elements
Feedback mechanism
Default settings
Purposeful layout
Strategic use of color and texture.
Provide help information
User centric approach
Group based view settings.
Review Questions:
A requirement can be defined as a necessary demand which needs to be fulfilled. Requirements must be relevant and detailed. Requirement engineering focuses on the process of designing the system that users want.
Example: "A system shall allow users to register by entering their username and password, so as to get access to the system."
Requirements engineering shares many concepts, techniques, and concerns with human-computer interaction (HCI), especially user-centred design, participatory design, and interaction design.
The goal of requirement engineering is to develop and maintain simple, informative, detailed, and descriptive 'system requirements specification' documents. These documents help the developer obtain the specific requirements. Strictly speaking, the system requirements specification, like the software requirements specification (SRS), is a description of a software system to be developed.
4. A feasibility study helps business personnel to research the market before taking any big step. Various types of feasibility study, such as technical feasibility, economic feasibility, and legal feasibility, are carried out by industry personnel.
A feasibility study analyzes whether the software product can be practically implemented, what its contribution to the organization would be, its cost constraints, and its fit with the values and objectives of the organization. It explores technical aspects of the project and product, such as usability, maintainability, productivity, and integration ability.
Complete – A requirement statement must be fully descriptive about its functionality to be delivered.
The description must be sufficient for the developer to understand and implement it.
Technically available – The next important thing is technical availability: requirements that are technically impossible must not be specified.
Necessary – A requirement must be of some value to a product being made or service to be
delivered. It must dictate something that a person really wants.
Correct – Each requirement must accurately and specifically describe the required functionality, and
must be technically correct.
Unambiguous – Unambiguous means not open to more than one interpretation. A requirement must have only one possible interpretation or meaning for all readers. While writing down a requirement one must avoid ambiguous words such as "adequate", "handles", "fault tolerant", "user friendly", "as much as possible", "robust", "several", "as fast as possible", etc.
Verifiable – Verifiable means it can be checked whether the requirement has been correctly implemented. A requirement must be stated such that a test can validate it.
Implementation free – A requirement statement should not specify how its design should be made or the implementation process. A requirement states what is required, not how the requirement should be met.
Requirement management is the activity of managing changing requirements. The requirements for a large system change frequently; hence requirement management can be described as managing the changing requirements during the software development process. Some of the features of requirement management:
System Models
6.1 What is a System Model?
A system model can be defined as "a conceptual model that describes and represents a system; it is used to conceptualize and construct a system in the business and IT sectors. It helps the analyst to understand the functionality of a target system and is used to communicate about the system."
Different models present the system with different views: for example, an external view shows the system environment, and a behavioral view shows the behavior of the system. A system comprises multiple views such as planning, requirements, design, implementation, deployment, structure, behavior, input data, and output data views.
1. The context is the surrounding element for the system, and a context model provides the mathematical interface and a behavioral description of the surrounding environment.
2. Context modeling plays a key role in efficient context management. It works to illustrate what the system will look like, and it involves working with the stakeholders to determine what the real system looks like.
3. A context model defines how context data are maintained. A formal or semi-formal description of the context information present in a context-aware system is produced.
4. Context models show what lies outside the system boundaries and are used to illustrate the operational context of a system. During the requirements elicitation and analysis process one should decide on the boundaries of the system; this decision should be made early to limit the cost of the system and the time needed for analysis.
5. Context models are affected by social and organizational concerns.
Behavioral models are used to describe the overall behavior of a system. Behavioral modeling, also called dynamic modeling, shows how work is to be done and guides practitioners throughout the process of implementing the modeled behavior. What behavioral models essentially describe is the control structure of a system:
Sequence Of Operations
Object States
Object Interactions
Structured methods within an organization ensure that the project has a justified business case before development begins and significant costs are incurred. Before arriving at a single solution, the different candidate solutions, along with their benefits, risks, and costs, are considered. The advantage to the organization is that the benefits, risks, and costs are considered and approved by the project's governance structure before deciding on the project, providing clarity to the organization.
Structured methods provide a mechanism to identify, record, assess, and mitigate risks that occur during the project. The advantage for the organization is that there is a clear procedure for the management of risk which is auditable and builds trust with stakeholders, since there is a clear, demonstrable process. It reduces the risk to the organization and ensures that risks are managed appropriately, protecting the organization from reputational damage which could impact sales.
Review Questions:
1. Define requirement.
2. Discuss requirement engineering.
3. Explain the context model.
4. Define the behavioral model.
5. Explain the data model.
Unit 4
Chapter 7
Design Engineering
7.1 What is Design Engineering?
Definition: The design engineering process is a step-by-step process that many engineers follow to find a solution to a problem or to create a functional product or process. The design process is iterative in nature.
1. Pattern-based software design uses descriptions of how to solve a problem that can be applied in many different situations.
2. A design pattern is a general solution to a commonly occurring problem in software design. A design pattern isn't a finished design that can be transformed directly into code.
1. Creational patterns
• Creational patterns deal with the configuration and initialization of classes and objects.
• These design patterns provide ways to create objects while hiding the creation logic.
• They avoid instantiating objects directly with the new operator, which gives the program more flexibility in deciding which objects need to be created for a given use case.
Features:
Abstract factory: Provide an interface for creating families of related or dependent objects
without specifying their concrete classes.
Builder: Separate the construction of a complex object from its representation allowing the same
construction process to create various representations.
Factory method: Define an interface for creating an object, but let subclasses decide which class
to instantiate. Factory Method lets a class defer instantiation to subclasses.
Prototype: Specify the kinds of objects to create using a prototypical instance, and create new
objects by copying this prototype.
Singleton: Ensure a class has only one instance, and provide a global point of access to it.
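Two of the creational patterns above can be illustrated with a short Python sketch; the class names (`Logger`, `Document`, `PdfApplication`) are invented for illustration only, not from any library. It combines a Singleton (one shared `Logger` instance) with a Factory Method (`create_document`, which subclasses override to decide the concrete class):

```python
from abc import ABC, abstractmethod

class Logger:
    """Singleton: only one Logger instance ever exists."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.messages = []
        return cls._instance

    def log(self, msg):
        self.messages.append(msg)

# Factory Method: subclasses decide which product class to instantiate.
class Document(ABC):
    @abstractmethod
    def render(self): ...

class PdfDocument(Document):
    def render(self):
        return "rendering PDF"

class Application(ABC):
    @abstractmethod
    def create_document(self):   # the factory method
        ...

    def open(self):
        # defers the choice of concrete Document to the subclass
        return self.create_document().render()

class PdfApplication(Application):
    def create_document(self):
        return PdfDocument()

a, b = Logger(), Logger()
print(a is b)                    # both names refer to the single instance
print(PdfApplication().open())
```

Note that client code calls `open()` without ever naming `PdfDocument` directly, which is exactly the flexibility the creational category aims for.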
2. Structural Patterns:
• These patterns are concerned with how classes and objects are composed.
• Inheritance is used to compose interfaces, and composition is used to define ways
of combining objects to obtain new functionality.
• These patterns deal with decoupling the interface and implementation of classes and objects.
Features:
Adapter or Wrapper: Convert the interface of a class into another interface clients expect.
Adapter lets classes work together that could not otherwise because of incompatible
interfaces.
Bridge: Decouple an abstraction from its implementation allowing the two to vary
independently.
Composite: Compose objects into tree structures to represent part-whole hierarchies.
Composite lets clients treat individual objects and compositions of objects
uniformly.
Decorator: Attach additional responsibilities to an object dynamically keeping the same
interface. Decorators provide a flexible alternative to subclassing for extending
functionality.
Facade: Provide a unified interface to a set of interfaces in a subsystem. Facade defines
a higher-level interface that makes the subsystem easier to use.
Front Controller: Provide a single, centralized entry point that handles all incoming
requests for a system; it is commonly used in web applications.
Flyweight: Use sharing to support large numbers of fine-grained objects efficiently.
Proxy: Provide a surrogate or placeholder for another object to control access to it.
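As a minimal illustration of the structural category, here is a Python sketch of the Adapter pattern; `LegacyPrinter`, `Printer`, and `PrinterAdapter` are hypothetical names invented for this example. The adapter converts the interface the client expects (`write`) into calls on the incompatible interface an existing class provides (`print_text`):

```python
class LegacyPrinter:
    """Adaptee: an existing class with an incompatible interface."""
    def print_text(self, text):
        return f"[legacy] {text}"

class Printer:
    """Target: the interface that client code expects."""
    def write(self, text):
        raise NotImplementedError

class PrinterAdapter(Printer):
    """Adapter: wraps a LegacyPrinter and exposes the Printer interface."""
    def __init__(self, legacy):
        self._legacy = legacy

    def write(self, text):
        # translate the expected call into the incompatible one
        return self._legacy.print_text(text)

def client(printer: Printer, msg: str) -> str:
    # the client only knows the Printer interface, never LegacyPrinter
    return printer.write(msg)

print(client(PrinterAdapter(LegacyPrinter()), "hello"))   # [legacy] hello
```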
3. Behavioral patterns:
• Deal with dynamic interactions among societies of classes and objects.
• Describe how responsibility is distributed among objects.
• These design patterns emphasize communication between objects.
Features:
Blackboard: Generalized observer, which allows multiple readers and writers.
Communicates information system-wide.
Chain of responsibility: Avoid coupling the sender of a request to its receiver by giving more
than one object a chance to handle the request. Chain the receiving objects and pass the request
along the chain until an object handles it.
Command: Encapsulate a request as an object, thereby letting you parameterize clients
with different requests, queue or log requests, and support undoable operations.
Interpreter: Given a language, define a representation for its grammar along with an
interpreter that uses the representation to interpret sentences in the language.
Iterator: Provide a way to access the elements of an aggregate object sequentially without
exposing its underlying representation.
Mediator: Define an object that encapsulates how a set of objects interact. Mediator
promotes loose coupling by keeping objects from referring to each other explicitly, and it lets
you vary their interaction independently.
Memento: Without violating encapsulation, capture and externalize an object's internal state
allowing the object to be restored to this state later.
Null object: Avoid null references by providing a default object.
Observer or Publish/subscribe: Define a one-to-many dependency between objects where a
state change in one object results with all its dependents being notified and updated
automatically.
Servant: Define common functionality for a group of classes
Specification: Recombinable business logic in a Boolean fashion
State: Allow an object to alter its behavior when its internal state changes. The object will
appear to change its class.
Strategy: Define a family of algorithms, encapsulate each one, and make them interchangeable.
Strategy lets the algorithm vary independently from clients that use it.
Template method: Define the skeleton of an algorithm in an operation, deferring some steps to
subclasses. Template Method lets subclasses redefine certain steps of an algorithm without
changing the algorithm's structure.
Visitor: Represent an operation to be performed on the elements of an object structure. Visitor
lets you define a new operation without changing the classes of the elements on which it
operates.
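To make the behavioral category concrete, here is a small Python sketch of the Observer (publish/subscribe) pattern listed above; `Subject` and `Display` are illustrative names only. A state change in the subject is pushed to every attached observer automatically:

```python
class Subject:
    """Maintains a list of observers and notifies them of state changes."""
    def __init__(self):
        self._observers = []
        self._state = None

    def attach(self, observer):
        self._observers.append(observer)

    def set_state(self, state):
        self._state = state
        for obs in self._observers:   # one-to-many notification
            obs.update(state)

class Display:
    """A concrete observer that remembers the last state it was told about."""
    def __init__(self):
        self.last_seen = None

    def update(self, state):
        self.last_seen = state

sensor = Subject()
panel_a, panel_b = Display(), Display()
sensor.attach(panel_a)
sensor.attach(panel_b)
sensor.set_state(42)
print(panel_a.last_seen, panel_b.last_seen)   # 42 42
```

The subject never refers to a concrete observer class, only to the `update` protocol, which is the loose coupling the pattern promises.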
Design Considerations
To design a piece of software there are many small things that need to be considered. Some of them
are:
Compatibility - The software is able to operate with other products that are designed for
interoperability. For example, software may be backward-compatible with an older version of
itself.
Extensibility – Extensibility refers to adding new capabilities to the software without major
changes to the underlying architecture.
Fault-tolerance - The software must be able to recover from component failure.
Maintainability - The software can be restored to a specified condition within a specified period
of time. For example, some software may include the ability to periodically receive definition or
version updates in order to maintain the software's effectiveness.
Modularity - This allows division of work in software development project which leads to better
maintainability. The resulting software comprises well defined, independent components. The
components could be then implemented and tested in isolation before being integrated to form a
desired software system.
Packaging – Packaging is the main part of presentation. Printed material such as the box and
manuals should match the style designated for the target market and should enhance usability.
Information should be visible on the outside of the packaging. All components required for use
should be included in the package or specified as a requirement on the outside of the package.
Reliability – The software should be able to perform a required function under stated
conditions for a specified period of time.
Reusability - Parts of the software can be used in other systems, or extended with further
features, with little or no modification.
Robustness - The software is able to operate under stress or tolerate unpredictable or invalid
input. For example, it can be designed to recover quickly from difficulties such as
low-memory conditions.
Security - The software is able to withstand unfriendly acts and negative influences.
Usability - The software user interface must be usable for its target user/audience. Default
values for the parameters must be chosen so that they are a good choice for the majority of the
users.
1. A good software design minimizes the time required to create, modify, and maintain the
software while achieving acceptable run-time performance.
2. A design should present an architecture built using known design patterns.
3. A design should consist of components with the right characteristics that can be
implemented in an incremental way.
4. A design must be modular in nature, that is, divided into modules.
5. A design should use a notation whose meaning is conveyed correctly and unambiguously.
A good software design must exhibit the following qualities:
Functionality- Functionality is the central part of software design. A good software design must
provide the required functions correctly and keep them up to date.
Reliability- Stakeholders must be able to depend on the software that has been provided to them.
Dependability increases with good design.
Usability- The designed software must be easy to use. Good software design improves the
usability of the software.
Efficiency- Efficiency is the ability of the software to do the required processing with the least amount of
hardware. Software is efficient if it uses fewer resources and gives maximum output.
Maintainability- Software maintainability is defined as the degree to which an application is understood,
repaired, or enhanced. Software maintainability is important because it is approximately 75% of the cost
related to a project.
Portability- Portability, in relation to software, is a measure of how easily an application can be
transferred from one computer environment to another.
The software design process is a series of well-defined steps, though the details vary with the design
approach:
• A solution design is created from the requirements or from a previously used system and/or system
sequence diagram.
• Objects are identified and grouped into classes on the basis of similarity in attribute characteristics.
• The class hierarchy and the relations among the classes are defined.
• The application framework is defined.
Software Design Approaches
There are two main approaches, top-down and bottom-up, and both play a part in a good
design process:
Bottom-up Design
1. The bottom-up design model starts with the most specific and basic components.
2. It proceeds by composing higher-level components from basic or lower-level components.
3. Bottom-up design keeps creating higher-level components until the desired system evolves as one
single component. With each higher level, the amount of abstraction increases.
4. The bottom-up strategy is more suitable when a system needs to be created from some existing system.
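The bottom-up strategy can be sketched in a few lines of Python; the numeric functions are arbitrary examples chosen only to show composition across levels of abstraction, from basic components up to a single top-level component:

```python
# Level 0: the most specific, basic components
def add(a, b):
    return a + b

def multiply(a, b):
    return a * b

# Level 1: a higher-level component composed from level-0 components
def dot(xs, ys):
    total = 0
    for x, y in zip(xs, ys):
        total = add(total, multiply(x, y))
    return total

# Level 2: the "system", composed from level-1 components;
# each level raises the amount of abstraction
def matrix_vector(matrix, vector):
    return [dot(row, vector) for row in matrix]

print(matrix_vector([[1, 2], [3, 4]], [10, 1]))   # [12, 34]
```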
7.4 Design model
Architectural Design - The architectural design is the highest abstract version of the system. It
recognizes the software as a system with many components interacting with each other. At this level,
the designers get the idea of proposed solution domain.
High-level Design- The high-level design breaks the 'single entity, multiple component' concept of
architectural design into a less-abstracted view of sub-systems and modules and depicts their interaction
with each other. High-level design focuses on how the system, along with all of its components, can be
implemented in the form of modules. It recognizes the modular structure of each sub-system and the
relations and interactions among them.
Detailed Design- Detailed design deals with the implementation part of what is seen as a system and its
sub-systems in the previous two designs. It is more detailed towards modules and their
implementations. It defines logical structure of each module and their interfaces to communicate with
other modules.
Review Questions:
Software architecture encompasses the set of significant decisions about the organization of a
software system including the selection of the structural elements and their interfaces by which
the system is composed; behavior as specified in collaboration among those elements;
composition of these structural and behavioral elements into larger subsystems; and an
architectural style that guides this organization. Software architecture also involves functionality,
usability, resilience, performance, reuse, comprehensibility, economic and technology
constraints, tradeoffs and aesthetic concerns.
Martin Fowler outlines some common recurring themes when explaining architecture:
"The highest-level breakdown of a system into its parts; the decisions that are hard to change;
there are multiple architectures in a system; what is architecturally significant can change over a
system's lifetime; and, in the end, architecture boils down to whatever the important stuff is."
In Software Architecture in Practice, Bass, Clements, and Kazman define architecture as follows.
"The software architecture of a program or computing system is the structure or structures of the
system, which comprise software elements, the externally visible properties of those elements,
and the relationships among them. Architecture is concerned with the public side of interfaces;
private details of elements—details having to do solely with internal implementation—are not
architectural."
Keep in mind that the architecture should:
Expose the structure of the system but hide the implementation details.
Realize all of the use cases and scenarios.
Try to address the requirements of various stakeholders.
Handle both functional and quality requirements.
8.1.1 Why is Architecture Important?
Software must be built on a solid foundation. Failing to consider key scenarios, failing to design
for common problems, or failing to appreciate the long-term consequences of key decisions can
put your application at risk. Modern tools and platforms help to simplify the task of building
applications, but they do not replace the need to design your application carefully, based on your
specific scenarios and requirements. The risks exposed by poor architecture include software that
is unstable, is unable to support existing or future business requirements, or is difficult to deploy
or manage in a production environment.
Systems should be designed with consideration for the user, the system (the IT infrastructure),
and the business goals. For each of these areas, you should outline key scenarios and identify
important quality attributes (for example, reliability or scalability) and key areas of satisfaction
and dissatisfaction. Where possible, develop and consider metrics that measure success in each
of these areas.
Consider the following high level concerns when thinking about software architecture:
How will the users be using the application?
How will the application be deployed into production and managed?
What are the quality attribute requirements for the application, such as security,
performance, concurrency, internationalization, and configuration?
How can the application be designed to be flexible and maintainable over time?
What are the architectural trends that might impact your application now or after it has
been deployed?
Some treat architectural patterns and architectural styles as the same, some treat styles as
specializations of patterns. They provide a common language or vocabulary with which one
can describe classes of systems.
8.3.2 Broker
The Broker architectural pattern can be used to structure distributed software systems with
decoupled components that interact by remote service invocations. A broker component is
responsible for coordinating communication, such as forwarding requests, as well as for
transmitting results and exceptions. The idea of the Broker architectural pattern is to distribute
aspects of the software system transparently to different nodes.
Using the Broker architecture, an object can call methods of another object without knowing
that this object is remotely located. A Proxy object calls the broker, which determines where
the remote object can be found.
CORBA is a well-known open standard that allows you to build this kind of architecture – it
stands for Common Object Request Broker Architecture. Java has many classes that allow you
to use CORBA facilities. There are also several other commercial architectures that also
provide broker capabilities.
Advantages
A system that consists of multiple remote objects which interact synchronously or
asynchronously.
Heterogeneous environment.
Problems
Usually, there is a need for great flexibility, maintainability, and changeability when
developing applications.
Scalability is reduced.
Inherent networking complexities such as security concerns, partial failures, etc.
Networking diversity in protocols, operating systems, hardware.
8.3.3. Model–View–Controller (MVC)
Model–View–Controller (MVC) is an architectural pattern used to help separate the user
interface layer from other parts of the system. It divides a given application into three
interconnected parts in order to separate internal representations of information from the ways
that information is presented to and accepted from the user. The MVC design pattern decouples
these major components allowing for efficient code reuse and parallel development. The MVC
pattern separates the functional layer of the system (the model) from two aspects of the user
interface, the view and the controller.
The model contains the underlying classes whose instances are to be viewed and
manipulated.
The view contains objects used to render the appearance of the data from the model in the
user interface. The view also displays the various controls with which the user can interact.
The controller contains the objects that control and handle the user's interaction with the
view and the model. It has the logic that responds when the user types into a field or clicks
the mouse on a control.
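The three MVC roles described above can be sketched minimally in Python; the class and method names are illustrative and not taken from any particular framework:

```python
class Model:
    """Holds the application data, independent of any user interface."""
    def __init__(self):
        self.items = []

class View:
    """Renders the model's data; knows nothing about input handling."""
    def render(self, model):
        return ", ".join(model.items) or "(empty)"

class Controller:
    """Translates user actions into model updates and view refreshes."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def user_typed(self, text):
        self.model.items.append(text)          # update the model...
        return self.view.render(self.model)    # ...then refresh the view

c = Controller(Model(), View())
print(c.user_typed("milk"))    # milk
print(c.user_typed("eggs"))    # milk, eggs
```

Because the model never imports the view or controller, it can be reused with a different interface, which is the decoupling the pattern is designed to provide.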
The basic principles in the client and server architecture are: a) there is at least one component
that has the role of server, waiting for and then handling connections, and b) there is at least
one component that has the role of client, initiating connections in order to obtain some
service. An important variant of the client–server architecture is the three-tier model under
which a server communicates with both a client (usually through the Internet) and a database
server (usually within an intranet, for security reasons). The server acts as a client when
accessing the database server. A further extension to the Client–Server architectural pattern is
the Peer-to-Peer architectural pattern.
Fig.: Client and server architecture
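The two roles in the figure can be sketched with Python's standard socket module; this is a deliberately minimal single-request example, not a production server. The server component waits for and handles a connection, while the client component initiates a connection to obtain the service:

```python
import socket
import threading

def server(sock):
    """Server role: wait for a connection, handle one request, reply."""
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"echo: " + request)   # handle the request

# the server binds to an ephemeral local port and waits for clients
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,), daemon=True).start()

# Client role: initiate a connection in order to obtain the service
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

print(reply.decode())   # echo: hello
```

In the three-tier variant described above, the middle tier would itself act as a client when it forwards queries to the database server.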
Transaction processing systems are often embedded in servers. A typical example is a database
engine, where the transactions are various types of queries and updates. Transactions
themselves vary in their level of complexity. In many cases an update transaction requires that
several separate changes be made to a database. Many transaction processing systems work in
environments where several different threads or processes can attempt to perform transactions
at once.
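The point that one update transaction can require several separate changes, applied atomically, can be sketched with Python's built-in sqlite3 module standing in for a transaction-processing database engine (the account schema and figures are invented for illustration):

```python
import sqlite3

# an in-memory database engine standing in for a transaction server
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
db.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
db.commit()

def transfer(conn, src, dst, amount):
    """One update transaction that requires several separate changes."""
    try:
        # `with conn` opens a transaction: commit on success, rollback on error
        with conn:
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE name = ?",
                (amount, src))
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, dst))
    except sqlite3.Error:
        pass   # the rollback leaves both rows unchanged

transfer(db, "alice", "bob", 30)
balances = dict(db.execute("SELECT name, balance FROM accounts"))
print(balances)   # {'alice': 70, 'bob': 80}
```

If either UPDATE failed, the rollback would ensure that neither change is visible, which is what keeps concurrent transactions consistent.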
Review Questions:
An Overview
Chapter 9
Testing Strategies
9.1 A Strategic Approach To Software Testing
A testing strategy is an outline that describes the testing approach of the software development
cycle.
It defines how testing should be carried out. A testing strategy is used to identify the levels of
testing which are to be applied along with the techniques, and tools to be used during testing.
A software testing strategy helps to convert test case designs into well-planned execution
steps that will result in the construction of successful software.
It is created to inform project managers, testers, and developers about some key issues of
the testing process.
The strategy also decides the test cases and test specifications, and how they are
associated for execution.
The testing strategy must be developed to meet the requirements of the
organization, as it is critical to the success of the software.
The main purpose of testing can be quality assurance, reliability estimation, validation or
verification.
Thus, as an overview, a testing strategy must contain complete information about the
procedures to be performed during testing and about the purpose and requirements of testing.
The nature of development of software decides the choice of testing strategy. The design and
architecture of the software are also useful in choosing testing strategy.
All testing strategies have following characteristics:
1. A software team should conduct effective formal reviews. This eliminates many
errors before testing starts.
2. Testing begins at the component level and works "outward" toward the integration of
the entire computer-based system.
3. Different testing techniques are appropriate at different points in time.
4. Testing is conducted by the developer and, for large projects, by an independent test
group.
5. Testing and debugging are different activities, but debugging must be included in any
testing strategy.
The software testing strategy must accommodate both low-level tests and high-level tests.
Low-level tests are necessary to verify that a small source-code segment has been implemented
correctly, while high-level tests validate major system functions against customer
requirements.
Hence testing is a set of activity that can be planned in advance and conducted systematically.
So for this reason the template for software testing must be defined for the software processes.
The template is nothing but set of steps into which we can place specific test case design
techniques and testing methods.
9.2 Conventional Software
Conventional software consists of programs or applications that perform particular tasks; for
example, desktop applications such as Microsoft PowerPoint and MS Excel are considered
conventional software. Many software errors are eliminated before testing begins by conducting
effective technical reviews.
The strategy for testing conventional software can be viewed as a spiral along which the level of
abstraction decreases. The initial phase of software development is system engineering, which
leads to software requirements analysis, where the information domain, function, behavior,
performance, constraints, and validation criteria for the software are established. Moving inward
along the spiral, we come to design and finally to coding. Accordingly, the strategy for software
testing may also be viewed in the context of the same spiral: first comes unit testing, then
integration testing, validation testing, and finally system testing, i.e., from low level to
high level.
Unit Testing:
At the vertex of the spiral, testing begins with unit testing. It aims at testing each component or
unit of software to check its functionality independently and to ensure that it works properly as a
unit. The aspects typically tested are:
Interface: tested to check proper flow of information into and out of the program unit
under test.
Local data structures: tested to check integrity of data during execution.
Boundary conditions: tested to ensure the unit operates properly at the boundaries established
to limit or restrict processing.
Independent paths: tested to ensure all statements in the unit are executed at least once.
Error handling paths: tested to check whether error messages are user-friendly and
correspond to the error encountered, and whether processing is rerouted or cleanly terminated
when an error occurs.
Common errors found during unit testing include incorrect initialization, precision
inaccuracy, mixed-mode operations, and incorrect arithmetic precedence.
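A small sketch of unit testing using Python's standard unittest framework, exercising an invented function (`clamp`) at its interface, its boundary conditions, and its error-handling path, exactly as the checklist above describes:

```python
import unittest

def clamp(value, low, high):
    """Unit under test: restrict value to the range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

class ClampTests(unittest.TestCase):
    def test_interface(self):
        # normal flow of information into and out of the unit
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_boundary_conditions(self):
        # exercise the unit exactly at and just beyond the limits
        self.assertEqual(clamp(0, 0, 10), 0)
        self.assertEqual(clamp(10, 0, 10), 10)
        self.assertEqual(clamp(-1, 0, 10), 0)
        self.assertEqual(clamp(11, 0, 10), 10)

    def test_error_handling_path(self):
        # the error path should raise a clear, specific error
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ClampTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())   # True
```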
For Example: A tester, without knowledge of the internal structures of a website, tests the web
pages by using a browser; providing inputs (clicks, keystrokes) and verifying the outputs against
the expected outcome.
Following are some techniques that can be used for designing black box tests.
Equivalence Class
Boundary Value Analysis
Cause Effect Graphing
Orthogonal Arrays
Decision Tables
State Models
Exploratory Testing
All-pairs testing
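As one small illustration of these techniques, Boundary Value Analysis can be automated by generating test inputs at, just inside, and just outside the edges of a valid range; the helper below is an invented sketch, not a standard tool:

```python
def boundary_values(low, high):
    """Boundary Value Analysis: for a valid range [low, high], pick values
    at the boundaries, just inside them, and just outside them."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

# Equivalence classes for an input that must be 1..100: one invalid class
# below, one valid class, one invalid class above. BVA then picks the
# values most likely to expose off-by-one defects at the class edges.
print(boundary_values(1, 100))   # [0, 1, 2, 99, 100, 101]
```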
Tests are done from a user's point of view and will help in exposing discrepancies in the
specifications.
Tester need not know programming languages or how the software has been implemented.
Tests can be conducted by a body independent from the developers, which allows for an
objective perspective.
Test cases can be designed as soon as the specifications are complete.
Only a small number of possible inputs can be tested and many program paths will be left
untested.
Without clear specification test cases will be difficult to design.
Tests can be redundant if the software developer has already run a test case.
Testing can be commenced at an earlier stage. One need not wait for the GUI to be available.
Forces test developer to reason carefully about implementation.
Reveals errors in "hidden" code.
Spots the Dead Code or other issues with respect to best programming practices.
Testing is more thorough, with the possibility of covering most paths.
Since tests can be very complex, highly skilled resources are required, with thorough
knowledge of programming and implementation.
Expensive as one has to spend both time and money to perform white box testing.
There is every possibility that a few lines of code are missed accidentally.
In-depth knowledge about the programming language is necessary to perform white box
testing.
Test script maintenance can be a burden if the implementation changes too frequently.
Since this method of testing is closely tied to the application being tested, tools to cater to
every kind of implementation/platform may not be readily available.
9.5. The Art of Debugging:
Debugging occurs as a consequence of successful testing: when a test case uncovers an error,
debugging is the process that results in the removal of that error. A testing strategy can be
defined and test case design can be conducted for the expected results, but although debugging
can and should be an orderly process, implementing it is still an art.
As a result of testing, a software engineer is confronted with a symptomatic indication of a
software problem. The external symptom of the error and the internal cause of the error may
have no obvious relationship with one another.
Debugging is not testing, but it always occurs as a consequence of testing. The debugging
process begins with the execution of a test case; the results are assessed, and the lack of
correspondence between expected and actual performance is investigated. In many cases, the
non-corresponding data are a symptom of an underlying cause that is still hidden.
Sometimes the symptom may appear in one part of the program while the cause is actually
located at a site that is far removed. The symptom may disappear temporarily when another
error is corrected, or it may actually be caused by non-errors such as round-off inaccuracies.
Sometimes the symptom is caused by a human error that is not easily traced, or it is the result
of a timing problem rather than a processing problem. It may be difficult to accurately
reproduce the input conditions, as in real-time applications. The symptom may also be
intermittent, which is particularly common in embedded systems, where it may be due to
causes distributed across a number of tasks running on different processors.
Brute force – memory dumps and run-time traces are examined for clues to the causes of errors.
Backtracking – the source code is examined by looking backwards from the symptom to the
potential causes of the error.
Cause elimination – binary partitioning is used to reduce the number of potential locations
where the error can exist.
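Cause elimination by binary partitioning can be sketched as a bisection search over an ordered set of candidates, for example revisions in version-control history in the style of git bisect; the predicate and revision numbers below are hypothetical:

```python
def first_bad(candidates, is_bad):
    """Cause elimination by binary partitioning: repeatedly discard the
    half of the search space that cannot contain the fault."""
    lo, hi = 0, len(candidates) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(candidates[mid]):
            hi = mid          # fault introduced at or before mid
        else:
            lo = mid + 1      # this half is clean; eliminate it
    return candidates[lo]

# hypothetical history: revisions 0..9, defect introduced at revision 6
revisions = list(range(10))
print(first_bad(revisions, lambda r: r >= 6))   # 6
```

Each test halves the remaining locations, so even a long history needs only a logarithmic number of checks.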
Review Questions:
1. Explain Strategic approach to software testing in details.
2. What is conventional software? Also explain test strategies for conventional software.
3. Explain Black-Box and White-Box testing with their diagrams also list their advantages and
disadvantages.
4. Write a short note on art of debugging.
Product Metrics
An Overview
Chapter 10
10.1 Introduction:
What are metrics?
The IEEE glossary defines a metric as "a quantitative measure of the degree to which a system,
component, or process possesses a given attribute."
Software metrics can be classified into three categories:
1. Product Metrics,
2. Process Metrics,
3. Project Metrics.
Product metrics describe the characteristics of the product such as size, complexity, design
features, performance, and quality level. They focus on the quality of deliverables. Product
metrics are combined across several projects to produce process metrics. Process metrics can be
used to improve software development and maintenance. Process metrics are collected across all
projects and over long periods of time. They are used for making strategic decisions. The intent is
to provide a set of process indicators that lead to long-term software process improvement. The
only rational way to know how and where to improve any process is to measure specific attributes
of the process, develop a set of meaningful metrics based on those attributes, and then use the
metrics to provide indicators that will lead to a strategy for improvement. Examples include the
effectiveness of defect removal during development, the pattern of testing defect arrival, and the
response time of the fix process. Project metrics describe the project characteristics and execution.
Examples include the number of software developers, the staffing pattern over the life cycle of the
software, cost, schedule, and productivity. Some metrics belong to multiple categories. For
example, in-process quality metrics of a project are both process metrics and project metrics.
In software process, basic quality and productivity data are collected. These data are analyzed,
compared against past averages, and assessed. The goal is to determine whether quality and
productivity improvements have occurred. The data can also be used to pinpoint problem areas.
Remedies can then be developed and the software process can be improved.
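For example, the effectiveness of defect removal mentioned above is often expressed, in one common formulation (e.g. Pressman's DRE), as E / (E + D), where E is the number of errors found before delivery and D the number of defects found after delivery; the figures below are illustrative only:

```python
def defect_removal_effectiveness(errors_before, defects_after):
    """DRE = E / (E + D): the fraction of all defects removed before
    delivery. E = errors found before release, D = defects found after."""
    return errors_before / (errors_before + defects_after)

# illustrative numbers: 90 errors caught in-house, 10 escaped to the field
dre = defect_removal_effectiveness(90, 10)
print(f"{dre:.0%}")   # 90%
```

Tracking DRE across releases is one simple way to see whether quality improvements in the process have actually occurred.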
10.2 Software Quality:
Other Definitions
In the context of software engineering, software quality refers to two related but distinct notions:
Software functional quality reflects how well it complies with or conforms to a given
design, based on functional requirements or specifications. It is the degree to which
the correct software was produced.
Software structural quality refers to how it meets non-functional requirements that support
the delivery of the functional requirements, such as robustness or maintainability.
Software quality measurement quantifies to what extent a software program or system rates
along each of five key dimensions. An aggregated measure of software quality can be
computed through a qualitative or a quantitative scoring scheme or a mix of both and then a
weighting system reflecting the priorities. Such programming errors found at the system level
represent up to 90% of production issues. Consequently, code quality without the context of the
whole system has limited value.
"A science is as mature as its measurement tools." Measuring software quality is motivated by at
least two concerns: 1. Risk Management and 2. Cost Management.
Risk Management: Software failure has caused more than inconvenience; software errors have
caused human fatalities. The causes have ranged from poorly designed user interfaces to
direct programming errors. This has resulted in requirements for the development of some types
of software, particularly software embedded in medical devices and software that regulates
critical infrastructures.
Cost Management: As in other fields of engineering, an application with good structural
software quality costs less to maintain and is easier to understand and change. Industry data
demonstrate that poor application structural quality in core business applications (such
as enterprise resource planning (ERP), customer relationship management (CRM), or
large transaction-processing systems in financial services) results in cost and schedule
overruns and creates waste in the form of rework.
Both types of software now use multi-layered technology stacks and complex architecture so
software quality analysis and measurement have to be managed in a comprehensive and
consistent manner. There are many different definitions of quality. For some it is the
"capability of a software product to conform to requirements." There are few definitions given
by various authors.
1. Software quality according to Deming
The difficulty in defining quality is to translate future needs of the user into measurable
characteristics, so that a product can be designed and turned out to give satisfaction at a
price that the user will pay. This is not easy, and as soon as one feels fairly successful in the
endeavor, he finds that the needs of the consumer have changed, competitors have moved
in, etc.
2. Software quality according to Feigenbaum
Quality is a customer determination, not an engineer's determination, not a marketing
determination, nor a general management determination. It is based on the customer's
actual experience with the product or service, measured against his or her requirements --
stated or unstated, conscious or merely sensed, technically operational or entirely
subjective -- and always representing a moving target in a competitive market.
3. Software quality according to Juran
The word quality has multiple meanings. Two of these meanings dominate the use of the
word: 1. Quality consists of those product features which meet the need of customers and
thereby provide product satisfaction. 2. Quality consists of freedom from deficiencies.
Nevertheless, in a handbook such as this it is convenient to standardize on a short definition
of the word quality as "fitness for use".
10.2.1 CISQ's (Consortium for IT Software Quality) quality model
Even though "quality is a perceptual, conditional and somewhat subjective attribute and may
be understood differently by different people", software structural quality characteristics have
been clearly defined by the Consortium for IT Software Quality (CISQ). Under the guidance
of Bill Curtis, CISQ has defined five major desirable characteristics of a piece of software
needed , these are "Whats" that need to be achieved:
1. Reliability
Reliability measures the level of risk and the likelihood of potential application failures. It
also measures the defects injected due to modifications made to the software (its
"stability"). The goal of checking and monitoring reliability is to reduce and prevent
application downtime, application outages, and errors that directly affect users.
2. Efficiency
Efficiency is especially important for applications in high execution speed environments
such as algorithmic or transactional processing where performance and scalability are
paramount. The source code and software architecture attributes are the elements that ensure
high performance. An analysis of source code efficiency and scalability provides a clear
picture of the latent risks and the harm they can cause to customer satisfaction due to
response-time degradation.
3. Security
A measure of the likelihood of potential security breaches due to poor coding practices and
architecture. This quantifies the risk of encountering critical vulnerabilities that damage the
business.
4. Maintainability
Maintainability includes concepts of modularity, understandability, changeability,
testability, reusability, and transferability from one development team to another.
Measuring and monitoring maintainability is a must for mission-critical applications where
change is driven by tight time-to-market schedules and where it is important for IT to
remain responsive to business-driven changes. It is also essential to keep maintenance costs
under control.
5. Size
Measuring software size requires that the whole source code be correctly gathered,
including database structure scripts, data manipulation source code, component headers,
configuration files etc. The sizing of source code is a software characteristic that obviously
impacts maintainability. Combined with the above quality characteristics, software size can
be used to assess the amount of work produced and other SDLC-related metrics.
McCall's quality factors were proposed in the early 1970s. They are as valid today as they were at
that time. It is likely that software built to conform to these factors will exhibit high quality well
into the 21st century, even if there are dramatic changes in technology.
• The function manages user interaction, accepting a user password to activate or deactivate the
system, and allows inquiries on the status of security zones and various security sensors.
• The function displays a series of prompting messages and sends appropriate control signals to
various components of the security system.
The data flow diagram for this function is evaluated to determine the following measures,
required for computation of the function point metric:
• Number of user inputs
• Number of user outputs
• Number of user inquiries
• Number of files
• Number of external interfaces
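Once the five counts are available, the standard function point computation weights each count and applies a value adjustment factor. The sketch below uses the "average" complexity weights commonly cited in the function point literature; the example counts and ratings are hypothetical, not taken from the text:

```python
# Function point (FP) computation sketch. The weights below are the
# commonly cited "average" complexity weights; a real count would pick
# simple, average, or complex weights per item.
AVERAGE_WEIGHTS = {
    "user_inputs": 4,
    "user_outputs": 5,
    "user_inquiries": 4,
    "files": 10,
    "external_interfaces": 7,
}

def function_points(counts, value_adjustment_factors):
    """counts: dict with the five measures above.
    value_adjustment_factors: 14 ratings (0-5) answering the standard
    complexity-adjustment questions."""
    count_total = sum(AVERAGE_WEIGHTS[k] * counts[k] for k in AVERAGE_WEIGHTS)
    return count_total * (0.65 + 0.01 * sum(value_adjustment_factors))

# Hypothetical counts for a small security function
counts = {"user_inputs": 3, "user_outputs": 2, "user_inquiries": 2,
          "files": 1, "external_interfaces": 4}
fi = [3] * 14  # assume every adjustment question is rated "average"
print(function_points(counts, fi))
```

With these numbers the unadjusted count total is 68 and the adjusted FP value is about 72.8.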
Like the function point metric, the bang metric can be used to develop an indication of the size
of the software to be implemented as a consequence of the analysis model. Developed by
DeMarco, the bang metric is "an implementation independent indication of system size." To
compute the bang metric, the software engineer must first evaluate a set of primitives
determined from the analysis model. These primitives are as follows:
Functional primitives (FuP). The number of transformations (bubbles) that appear at the
lowest level of a data flow diagram.
Data elements (DE). The number of attributes of a data object; data elements are not
composite data and appear within the data dictionary.
Objects (OB). The number of data objects.
Relationships (RE). The number of connections between data objects.
States (ST). The number of user observable states in the state transition diagram.
Transitions (TR). The number of state transitions in the state transition diagram.
S(i) = [fout(i)]²
Where,
fout(i) is the fan-out of module i.
2. Data Complexity provides an indication of the complexity in the internal interface for a
module i and is defined as
D(i) = v(i) / [fout(i) + 1]
Where,
v(i) is the number of input and output variables that are passed to and from module i.
An earlier high-level architectural design metric proposed by Henry and Kafura also makes use
of fan-in and fan-out. The authors define a complexity metric (applicable to call and return
architectures) of the form
HKM(i) = length(i) × [fin(i) × fout(i)]²
Where,
length(i) is the number of programming language statements in module i and fin(i) is the
fan-in of module i. Henry and Kafura extend the definitions of fan-in and fan-out.
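These design metrics reduce to simple formulas. The sketch below follows the standard forms from the metrics literature (Card and Glass's structural and data complexity, and the Henry-Kafura metric); the numeric inputs are hypothetical:

```python
def structural_complexity(fan_out):
    # Structural complexity of module i: S(i) = [fout(i)]^2
    return fan_out ** 2

def data_complexity(num_io_vars, fan_out):
    # Data complexity: D(i) = v(i) / [fout(i) + 1], where v(i) is the
    # number of input and output variables passed to and from module i
    return num_io_vars / (fan_out + 1)

def henry_kafura(length, fan_in, fan_out):
    # Henry-Kafura metric: HKM(i) = length(i) * [fin(i) * fout(i)]^2
    return length * (fan_in * fan_out) ** 2

# Hypothetical module: 100 statements, fan-in 2, fan-out 3, 8 I/O variables
print(structural_complexity(3))     # 9
print(data_complexity(8, 3))        # 2.0
print(henry_kafura(100, 2, 3))      # 3600
```

A module with high fan-out is penalized quadratically, which matches the intuition that a module calling many others is structurally harder to understand and test.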
Cohesion metrics: Bieman and Ott define a collection of metrics that provide an indication of
the cohesiveness of a module. The metrics are defined in terms of five concepts and measures:
Data slice. Stated simply, a data slice is a backward walk through a module
that looks for data values that affect the module location at which the walk
began. It should be noted that both program slices (which focus on statements
and conditions) and data slices can be defined.
Data tokens. The variables defined for a module can be defined as data
tokens for the module.
Glue tokens. This set of data tokens lies on one or more data slices.
Superglue tokens. These data tokens are common to every data slice in a module.
Stickiness. The relative stickiness of a glue token is directly proportional to the number of
data slices that it binds.
Coupling metrics: Module coupling provides an indication of the "connectedness" of a module
to other modules, global data, and the outside environment. Coupling was discussed in
qualitative terms.
Complexity Metrics: A variety of software metrics can be computed to determine the
complexity of program control flow. Many of these are based on the flow graph, a
representation composed of nodes and links (also called edges); when the links are assigned a
direction, the flow graph is a directed graph. McCabe and Watson identify a number of
important uses for complexity metrics:
Complexity metrics can be used to predict critical information about the reliability and
maintainability of software systems from automatic analysis of source code (or procedural
design information). Complexity metrics also provide feedback during the software project to
help control the design activity. The most widely used complexity metric for computer software
is cyclomatic complexity, originally developed by Thomas McCabe. The McCabe
metric provides a quantitative measure of testing difficulty and an indication of ultimate
reliability. Cyclomatic complexity may also be used to provide a quantitative indication of
maximum module size. Thus the quality of the software design plays an important role in
determining the overall quality of the software.
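Cyclomatic complexity is straightforward to compute from a flow graph. The sketch below uses the standard McCabe forms, V(G) = E − N + 2 for a connected flow graph with E edges and N nodes, and the equivalent predicate-node form V(G) = P + 1; the example graph sizes are hypothetical:

```python
def cyclomatic_complexity(edges, nodes):
    # V(G) = E - N + 2 for a connected flow graph (McCabe)
    return edges - nodes + 2

def cyclomatic_from_predicates(predicate_nodes):
    # Equivalent form: V(G) = P + 1, where P is the number of
    # predicate (decision) nodes in the flow graph
    return predicate_nodes + 1

# Hypothetical flow graph: 11 edges, 9 nodes, 3 predicate nodes
print(cyclomatic_complexity(11, 9))    # 4
print(cyclomatic_from_predicates(3))   # 4
```

A value of 4 means there are four linearly independent paths through the module, which is also the minimum number of test cases needed to exercise every path in the basis set.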
For maintenance activities, metrics have been designed explicitly. The IEEE has proposed the
Software Maturity Index (SMI), which provides an indication of the stability of a software
product. Once all the parameters are known, SMI can be calculated using the following equation:
SMI = [MT − (Fa + Fe + Fd)] / MT
Where,
MT = number of modules in the current release
Fe = number of modules that have been changed in the current release
Fa = number of modules that have been added in the current release
Fd = number of modules that have been deleted from the current release
Note that a product begins to stabilize as SMI approaches 1.0. SMI can also be used as a metric for
planning software maintenance activities, by developing empirical models to estimate the
effort required for maintenance.
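The SMI calculation can be sketched directly from the formula; the module counts below are hypothetical:

```python
def software_maturity_index(mt, fa, fe, fd):
    # SMI = [MT - (Fa + Fe + Fd)] / MT
    # mt: modules in the current release
    # fa: modules added, fe: modules changed, fd: modules deleted
    return (mt - (fa + fe + fd)) / mt

# Hypothetical release: 940 modules, of which 90 were added,
# 40 changed, and 12 deleted
print(software_maturity_index(940, 90, 40, 12))  # about 0.85
```

When no modules are added, changed, or deleted, SMI is 1.0 and the product is fully stable; large amounts of churn push SMI toward 0.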
Review Questions:
1. Define Quality and Software Quality.
2. Explain metrics for the analysis model in detail.
3. What are metrics for the design model? Explain with expressions.
4. Write a short note on metrics for (1) source code and (2) maintenance.
5. Explain metrics for testing in detail.
Chapter 11
Metrics for Process and Products
11.1 Software Measurement
Formulation: The derivation of software measures and appropriate metrics for the software
under consideration.
Collection: The mechanism used to accumulate the data required to derive the formulated metrics.
Analysis: The computation of metrics and the application of mathematical tools.
Interpretation: The evaluation of metrics in an effort to gain insight into the quality of the
representation.
Feedback: Recommendations derived from the product metrics, communicated to the
software team.
Note that the collection and analysis activities drive the measurement process. In order to perform
these activities effectively, it is recommended to automate data collection and analysis. One can
use statistical techniques to interrelate external quality features and internal product attributes.
Software measurement tooling is meant to aid in the following areas:
1. Collect and Organize Test Cases: Split your test cases into test suites. Set project
milestones and assign tasks to individual testers.
2. Track Execution and Test Results: Track the number of completed, failed, and rescheduled
tests. Keep a complete history of all results.
3. Measure Progress and Success Rate: Project dashboards, clear reports, and email
notifications tell you where you are in the test cycle.
4. Take Action in the Right Areas: Reports on all levels: from single test runs, milestones, to
project reports guide your decisions.
11.2.1 The Three types of metrics to assure software quality
The three types of metrics you should collect as part of your quality assurance process
are: source code metrics, development metrics, and testing metrics.
1. Source code metrics
These are measurements of the source code constructs that make up the software. Source code
is the fundamental building block from which software is made, so measuring it is a way of
making sure that the code is of high caliber. Even the best source code, when examined
closely, may reveal a few areas that can be optimized for better performance.
One must ensure that an appropriate amount of code has been generated, by measuring source
code quality and the number of lines of code. Another thing to track is how compliant each
line of code is with the programming language's standard usage rules. It is equally
important to track the percentage of comments within the code, which indicates how much
maintenance effort will be required: the fewer the comments, the more problems arise when
deciding on changes or upgrades to the program. Code duplication must be avoided and unit
test coverage tracked, which together indicate how smoothly the product is going to run.
2. Development metrics
These metrics measure the software development process itself. Gather development metrics
to look for ways to make operations more efficient and to reduce the incidence of
software errors. Measuring the number of defects within the code, and the time taken to fix
them, tells a lot about the development process itself. One must tally the number of defects
that appear in the code and also note the time it takes to fix them. If any defects have to be
fixed multiple times, there might be a misunderstanding of requirements or a skills gap. Such
gaps are important to address as soon as possible. Defining the root cause and
implementing corrective measures enables continuous improvement.
3. Testing metrics
These metrics help to evaluate whether the product is functional and worth using. There are
two major testing metrics. 1. Test coverage: collects data about which parts of the
software program are executed when a test runs. 2. Defect removal efficiency: a test of the
testing itself, which checks the success rate for spotting and removing defects. The more you
measure, the more you know about your product, and the more likely you are to be able to
improve it. Automating the measurement process is the best way to measure software quality.
It is not the easiest thing, or the cheapest, but it will save a great deal of cost down the line.
Review Questions:
After the company's specific risks are identified and the risk management process has been
implemented, there are several different strategies companies can take in regard to different types
of risk.
Risk avoidance. While the complete elimination of all risk is rarely possible, a risk avoidance
strategy is designed to deflect as many threats as possible in order to avoid the costly and
disruptive consequences of a damaging event.
Risk reduction. Companies are sometimes able to reduce the amount of effect certain risks can
have on company processes. This is achieved by adjusting certain aspects of an overall project
plan or company process, or by reducing its scope.
Risk sharing. Sometimes, the consequences of a risk are shared, or distributed, among several of
the project's participants or business departments. The risk could also be shared with a third
party, such as a vendor or business partner.
Risk retaining. Sometimes, companies decide a risk is worth it from a business standpoint,
and decide to retain the risk and deal with any potential fallout. Companies will often retain a
certain level of risk when a project's anticipated profit is greater than the costs of its potential risk.
4. Technology risks are derived from the software or hardware technologies that are being
used as part of the system being developed. Using new, emerging, or complex technology
increases the overall risk.
5. Tool risks are similar to technology risks. They relate to the use, availability, and
reliability of support software used by the development team, such as development
environments and other CASE tools.
Risk is an expectation of loss, a potential problem that may or may not occur in the future. It is
generally caused due to lack of information, control or time. A possibility of suffering from loss
in software development process is called a software risk. Loss can be anything, increase in
production cost, development of poor quality software, not being able to complete the project on
time. Software risk encompasses the probability of occurrence for uncertain events and their
potential for loss within an organization.
Software risk exists because the future is uncertain and there are many known and unknown
things that cannot be incorporated in the project plan. Typically, software risk is viewed as a
combination of robustness, performance efficiency, security and transactional risk propagated
throughout the system. A software risk can be of two types (1) internal risks that are within the
control of the project manager and (2) external risks that are beyond the control of project
manager. Risk management is carried out to:
1. Identify the risk
2. Reduce the impact of the risk
3. Reduce the probability or likelihood of the risk
4. Monitor the risk
A project manager has to deal with risks arising from three possible cases:
1. Known knowns are software risks that are actually facts known to the team as well as to
the entire project. For example, not having enough developers can delay the project
delivery. Such risks are described and included in the Project Management Plan.
2. Known unknowns are risks that the project team is aware of, but whether such a risk
exists in the project or not is unknown. For example, if communication with the client is
not good, then it is not possible to capture the requirements properly. This is a fact known
to the project team; however, whether the client has communicated all the information
properly is unknown to the project.
3. Unknown unknowns are those kinds of risks about which the organization has no idea.
Such risks are generally related to technology, such as working with technologies or tools
that you know nothing about, or work that suddenly exposes the project to absolutely
unknown risks.
Software risk management is all about quantification of risk. This includes:
1. Giving a precise description of the risk event that can occur in the project
2. Defining the risk probability that explains what the chances are of that risk occurring
3. Defining how much loss a particular risk can cause
4. Defining the liability potential of the risk.
Employees who understand the real difference between reactive, predictive, and proactive
risk management activities gain considerable benefit in generating good safety performance.
High-quality reactive risk management is critical at all levels of SMS implementation; new
SMS programs in particular will deal with more safety events.
Reactive risk management behavior must be established early in the implementation, where it
will prove extraordinarily beneficial.
For quality risk management to be cultivated, the following are required:
Quality risk management training for all employees.
Strong bureaucracy regarding safety behavior, such as procedures, checklists and a list
of desired employee behavioral actions.
Good hazard and risk fluency for identifying and assessing safety items.
Proactive risk management is often termed the highest form of risk management. Proactive
risk management activities generally don't happen until an SMS program is fairly mature.
Basically, the goals of proactive risk management are to:
Identify behaviors that lead to hazard occurrence, and stop them before they happen;
Identify root causes before they lead to hazard occurrence;
Understand the safety "inputs" of the program for safe performance.
Proactive risk management generally requires the following:
A great deal of safety data;
The ability to monitor complex safety metrics,
A mature safety culture.
Proactive risk management involves specific activities that are entirely different from
reactive risk management activities. Reactive and proactive risk management complement
each other, and each strategy is useful in different situations. So, let us discuss when to use
proactive risk management strategies. Proactive risk management strategies are best used in
the following situations:
Identifying how best to de-escalate safety issues (after hazard occurrence) before they lead
to undesirable consequences;
Understanding the inputs of the program, as well as the underlying behaviors, attitudes, and
actions that directly correlate to safety performance;
Analyzing the relationship between certain root causes and hazard occurrence.
Management of risk must be done proactively, and it is the responsibility of front-line
employees as well as safety management. Each sector of an organization has its own proactive
behaviors that generate a solid, proactive culture in an aviation SMS program.
Predictive risk management becomes extremely useful in the following activities that are
common to aviation safety programs:
Management of change;
Risk analysis in hypothetical scenarios;
Forecasting performance data (such as to stakeholders).
It is important to understand that predictive risk management is useful for creating expected
"ranges" of safety performance and a framework for future risk exposure. Risk management
comprises the following processes:
1. Software Risk Identification
2. Software Risk Analysis
3. Software Risk Planning
4. Software Risk Monitoring
These Processes are defined below.
Definition: Risk identification is the process of determining risks that could potentially prevent
the program, enterprise, or investment from achieving its objectives. It includes documenting and
communicating the concern.
Risks are about events that, when triggered, cause problems or benefits. Hence, risk
identification can start with the source of our problems and of our benefits, or with the
problem itself. Risk identification is the first critical step of the risk management process.
It is very important to first study the problems faced by previous projects. Also study
the project plan properly and check all areas that are vulnerable to some type of risk.
One of the best ways of analyzing a project plan is to convert it to a flowchart
and examine all essential areas. It is important to conduct a few brainstorming sessions to
identify the known unknowns that can affect the project. Any decision related to technical,
operational, political, legal, social, internal, or external factors should be evaluated properly.
1. Creating a systematic process - The risk identification process should begin with project
objectives and success factors.
2. Gathering information from various sources - Reliable and high quality information is
essential for effective risk management.
3. Applying risk identification tools and techniques - The choice of the best suitable
techniques will depend on the types of risks and activities, as well as organizational
maturity.
4. Documenting the risks - Identified risks should be documented in a risk register and a risk
breakdown structure, along with their causes and consequences.
5. Documenting the risk identification process - To improve and ease the risk identification
process for future projects, the approach, participants, and scope of the process should be
recorded.
6. Assessing the process' effectiveness - To improve it for future use, the effectiveness of the
chosen process should be critically assessed after the project is completed.
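The risk-register documentation described in steps 4 and 5 can be sketched as a minimal data structure. The field names and example values here are illustrative assumptions, not prescribed by the text:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row of a risk register (fields are illustrative)."""
    description: str
    category: str        # e.g. technical, operational, legal, external
    cause: str
    consequence: str
    probability: float   # estimated likelihood, 0.0 - 1.0
    impact_cost: float   # estimated cost if the risk occurs

    @property
    def exposure(self):
        # RE = P x C, used later during risk projection
        return self.probability * self.impact_cost

# Hypothetical register with a single identified risk
register = [
    RiskEntry("Key developer attrition", "people",
              cause="tight labor market", consequence="schedule slip",
              probability=0.3, impact_cost=50_000),
]
print(register[0].exposure)  # 15000.0
```

Keeping causes and consequences alongside each risk makes the register directly usable in the projection and mitigation steps that follow.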
Quantitative Risk Analysis: This can be used for software risk analysis but is often considered
inappropriate, because the risk level is expressed as a percentage, which does not give a very
clear picture.
SWOT Analysis
A useful tool for systematic risk identification is SWOT analysis, which consists of four
elements:
Strengths - Internal organizational characteristics that can help to achieve project
objectives.
Weaknesses - Internal organizational characteristics that can prevent a project from
achieving its objectives.
Opportunities - External conditions that can help to achieve project objectives.
Threats - External conditions that can prevent a project from achieving its objectives.
Risk projection, also called risk estimation, attempts to rate each risk in two ways:
1. The likelihood or probability that the risk is real
2. The consequences of the problems associated with the risk.
The project planner, along with other managers and technical staff, performs four risk
projection activities:
(1) Establish a scale that reflects the perceived likelihood of a risk,
(2) Delineate the consequences of the risk,
(3) Estimate the impact of the risk on the project and the product,
(4) Note the overall accuracy of the risk projection.
A risk that is 100 percent probable is a constraint on the software project. The risk table should
be implemented as a spreadsheet model; this enables easy manipulation and sorting of the
entries. A weighted average can be used if one risk component has more significance for the
project.
RE = P x C
Where,
P is the probability of occurrence for a risk
C is the cost to the project should the risk occur.
Example
Assume the software team defines a project risk in the following manner:
Risk Identification.
Only 70 percent of the software components scheduled for reuse will be integrated into the
application.
The remaining functionality will have to be custom developed.
Risk Probability. 80% (likely).
Risk Impact.
60 reusable software components were planned.
If only 70 percent can be used, 18 components would have to be developed from scratch (in
addition to other custom software that has been scheduled for development).
Since the average component is 100 LOC and local data indicate that the software
engineering cost for each LOC is $14.00, the overall cost (impact) to develop the
components would be 18 x 100 x 14 = $25,200.
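The arithmetic of this example, together with the RE = P × C relation defined above, can be checked with a short script (the 80 percent probability is applied to the $25,200 impact to obtain the risk exposure):

```python
# Worked risk-exposure example from the text: 60 components planned,
# only 70 percent reusable, 100 LOC per component at $14.00 per LOC,
# with an 80 percent risk probability.
planned = 60
reuse_fraction = 0.70
loc_per_component = 100
cost_per_loc = 14.00
probability = 0.80

custom_components = round(planned * (1 - reuse_fraction))   # 18 to build
cost_impact = custom_components * loc_per_component * cost_per_loc
risk_exposure = probability * cost_impact                   # RE = P x C

print(custom_components)   # 18
print(cost_impact)         # 25200.0
print(risk_exposure)
```

The resulting risk exposure is about $20,160, which is the expected cost the project should budget against for this single risk.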
Risk Assessment
During risk projection we have established a set of triplets of the form:
[ri, li, xi]
Where,
ri is the risk,
li is the likelihood (probability) of the risk, and
xi is the impact of the risk.
During risk assessment:
Examine the accuracy of the estimates that were made during risk projection.
Attempt to rank the risks that have been uncovered.
Begin thinking about ways to control and/or avert risks that are likely to occur.
Risk refinement is the process of decomposing risks into more detailed risks that will be easier
to mitigate, monitor, and manage. A risk may be stated generally during the early stages of
project planning. With time, more is learned about the project, and it may be possible to refine
the risk into a set of more detailed risks. The condition-transition-consequence (CTC) format
may be a good representation for these detailed risks: the risk is stated in the form "given that
<condition> then there is a concern that (possibly) <consequence>."
Risk Mitigation is a problem avoidance activity. Through risk mitigation, the team develops
strategies to reduce the probability or the loss impact of a risk; risk items are eliminated
or otherwise resolved.
An effective strategy must consider three issues:
Risk Avoidance
Risk Protection
Risk Leverage
1. Risk Avoidance: When the team is facing a potential loss, it can opt to eliminate the risk.
For example, the team may opt not to develop a product, or a particularly risky feature of the
project, in order to avoid the risk. A proactive approach to risk is one form of risk avoidance
strategy.
2. Risk protection: The organization can buy insurance to cover any financial loss should the
risk become a reality. Alternately, a team can employ fault tolerance strategies, such as parallel
processors, to provide reliability insurance. Risk planning and risk mitigation actions often
come with an associated cost. The team must do a cost/benefit analysis to decide whether the
benefits accrued by the risk management steps outweigh the costs associated with
implementing them.
3. Risk Leverage:
Risk protection decisions can be supported by risk leverage calculations, a form of cost/benefit
analysis:
Risk Leverage = (risk exposure before reduction − risk exposure after reduction) / cost of
risk reduction
1. If the risk leverage value, rl, is ≤ 1, then the benefit of applying risk reduction is not worth
its cost.
2. If rl is only slightly > 1, the benefit is still questionable, because these computations are
based on probabilistic estimates and not on actual data. Therefore, rl is usually multiplied by
a risk discount factor ρ < 1. If ρ × rl > 1, the benefit of applying risk reduction is considered
worth its cost; if the discounted leverage value is not high enough, the reduction is not
justified.
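The leverage computation itself is a one-line ratio. A minimal sketch, with hypothetical exposure and cost figures:

```python
def risk_leverage(re_before, re_after, reduction_cost):
    # rl = (risk exposure before reduction - risk exposure after
    #       reduction) / cost of risk reduction
    return (re_before - re_after) / reduction_cost

# Hypothetical figures: exposure drops from $20,160 to $4,160 after
# spending $8,000 on a mitigation action
rl = risk_leverage(20_160, 4_160, 8_000)
discount = 0.8   # risk discount factor, rho < 1

print(rl)                  # 2.0
print(discount * rl > 1)   # True: reduction is worth its cost
```

Here rl = 2.0, and even after discounting by ρ = 0.8 the discounted leverage 1.6 exceeds 1, so this mitigation would be judged worth its cost.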
For Example: Develop a risk mitigation plan, assuming that high staff turnover is noted as
a project risk, r1. Based on past history, the likelihood, l1, of high turnover is estimated to be
0.70, and the impact, x1, is projected at level 2; that is, high turnover will have a critical
impact on project cost and schedule.
Example:
3. Management
The lack of a stable-computing environment is extremely hazardous to a software
development team. In the event that the computing environment is found unstable, the
development team should cease work on that system until the environment is made stable
again, or should move to a system that is stable and continue working there.
Thus, RMMM steps incur additional project cost. However, 80 percent of the overall project
risk can often be accounted for by only 20 percent of the identified risks. Work performed
during the earlier risk analysis steps will help the planner determine which risks reside in that
20 percent (e.g., the risks that lead to the highest risk exposure).
Quality of a product can be measured in terms of performance, reliability, and durability.
Quality management ensures consistently superior products and services. It is the act of
overseeing all activities and tasks needed to maintain a desired level of excellence, and has
four main components: 1. Quality Planning, 2. Quality Assurance, 3. Quality Control, and
4. Quality Improvement.
Quality management is essential for customer satisfaction, which eventually leads to customer
loyalty. Quality management is focused not only on product and service quality, but also on
the means to achieve it; it therefore uses quality assurance and control of processes, as well as
products, to achieve more consistent quality.
Quality can be improved by applying some effective quality measures, which are discussed as
follows:
Break down barriers between departments;
Management should learn their responsibilities and take on leadership;
Institute supervision whose aim is to help people, machines, and gadgets do a better job;
Improve constantly and forever the system of production and service;
Institute a vigorous program of education and self-improvement.
Managing quality means constantly pursuing excellence: making sure that what the
organization does is fit for purpose, and not only stays that way but keeps improving.
Quality products ensure that you survive cut-throat competition with a smile. Customers
recognize that quality is an important attribute of products and services, and suppliers
recognize that quality can be an important differentiator between their own offerings and
those of competitors (this quality differentiation is also called the quality gap). In the past
two decades this quality gap has been greatly reduced between competitive products and
services. Customer satisfaction is the backbone of quality management; setting up a
million-dollar company without taking care of the needs of customers will ultimately
decrease its revenue.
Other significant factors include quality culture, the importance of knowledge management,
and the role of leadership in promoting and achieving high quality. There are many methods for
quality improvement, covering product improvement, process improvement, and people-based
improvement. The following list contains methods and techniques of quality management that
incorporate and drive quality improvement:
1. ISO 9004:2008 - guidelines for performance improvement.
2. ISO 9001:2015 - a certified quality management system (QMS) for organizations that
want to prove their ability to consistently provide products and services that meet the
needs of their customers and other relevant stakeholders.
3. ISO 15504-4: 2005 - information technology, guidance on use for process improvement
and process capability determination.
4. QFD - Quality Function Deployment is also known as the house of quality approach.
5. Zero Defect Program: It was created by NEC Corporation of Japan, based upon statistical
process control and one of the inputs for the inventors of Six Sigma.
6. Six Sigma - Six Sigma combines established methods such as statistical process
control, design of experiments and failure mode and effects analysis (FMEA) in an overall
framework.
7. PDCA -Plan, Do, Check, Act cycle for quality control purposes. (Six
Sigma's DMAIC method (define, measure, analyze, improve, control) may be viewed as a
particular implementation of this.)
8. Quality circle is a group (people oriented) approach to improvement.
9. Taguchi methods are the statistical oriented methods including quality robustness, quality
loss function, and target specifications.
10. The Toyota Production System — reworked in the west into lean manufacturing.
11. TQM (Total Quality Management) is a management strategy aimed at embedding
awareness of quality in all organizational processes. First promoted in Japan with the
Deming Prize, it was adopted and adapted in the USA as the Malcolm Baldrige National
Quality Award and in Europe as the European Foundation for Quality Management award.
12. TRIZ stands for the Theory of Inventive Problem Solving.
13. BPR (Business Process Reengineering) is a management approach aiming at optimizing
the workflows and processes within an organization.
14. OQRM (Object-Oriented Quality and Risk Management) is a model for quality and risk
management.
15. Top-down and bottom-up approaches - leadership approaches to driving change.
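The PDCA cycle listed above (item 7) can be sketched in code. This is a minimal illustration only; the quality metric (defect density), its values, and the corrective-action percentage are all hypothetical:

```python
# One PDCA iteration for a hypothetical quality metric (defect density,
# in defects per KLOC); the numbers are illustrative only.
def pdca_iteration(defect_density, target, corrective_action):
    # Plan: set the improvement target for this cycle.
    plan = {"metric": "defect density", "target": target}
    # Do: apply the corrective action (modelled as a fractional reduction).
    observed = defect_density * (1 - corrective_action)
    # Check: compare the observed value against the planned target.
    met = observed <= plan["target"]
    # Act: standardize the change if the target was met, else re-plan.
    return observed, "standardize" if met else "re-plan"

density, step = pdca_iteration(defect_density=4.0, target=2.5,
                               corrective_action=0.4)
```

Here one full Plan-Do-Check-Act loop either locks in the improvement ("standardize") or feeds back into a new plan, which is exactly how DMAIC can be seen as a particular implementation of PDCA.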
Definition 1:
Software Quality Assurance (SQA) is a set of activities for ensuring quality in software
engineering processes that ultimately result in quality in software products.
Definition 2: Software quality assurance (SQA) is a process that ensures that developed
software meets and complies with defined or standardized quality specifications. SQA is an
ongoing process within the software development life cycle (SDLC) that routinely checks the
developed software to ensure it meets desired quality measures.
SQA helps ensure the development of high-quality software, and SQA practices are implemented in most types of software development. A quality assurance system is said to increase customer confidence and a company's credibility, and to improve work processes and efficiency. SQA incorporates and implements software testing methodologies to test the software throughout development, rather than checking for quality only after completion: SQA processes test for quality in each phase of development until the software is completed. SQA measures the degree to which a system meets specified requirements and customer expectations, and it monitors processes and products throughout the development life cycle. The software development process moves into the next phase only once the current phase complies with the required quality standards. SQA includes the following activities:
Process definition and implementation
Auditing
Training
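The phase-gate behaviour described above, where development moves to the next phase only once the current phase meets the required quality standards, can be sketched as follows; the phase names and check functions are illustrative assumptions, not part of any standard:

```python
# A minimal sketch of the SQA phase-gate idea: each phase is entered only
# when the previous phase passes its quality checks.
PHASES = ["requirements", "design", "coding", "testing"]

def run_sdlc(quality_checks):
    """quality_checks maps a phase name to a callable that returns True
    if that phase meets its quality standard."""
    completed = []
    for phase in PHASES:
        if not quality_checks[phase]():
            # Blocked: rework is needed before moving to the next phase.
            return completed, phase
        completed.append(phase)
    return completed, None  # all gates passed

done, blocked = run_sdlc({
    "requirements": lambda: True,
    "design": lambda: True,
    "coding": lambda: False,  # e.g. a code review found open defects
    "testing": lambda: True,
})
```

With the hypothetical checks above, development stops at the coding gate: only requirements and design are completed, mirroring how SQA blocks progression until the current phase complies.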
Processes could be:
Software Development Methodology
Project Management
Configuration Management
Requirements Development/Management
Estimation
Software Design
Once the processes have been defined and implemented, Quality Assurance has the following
responsibilities:
identify weaknesses in the processes
correct those weaknesses to continually improve the process
The quality management system under which the software system is created is normally based on
one or more of the following standards:
Capability Maturity Model Integration (CMMI)
Six Sigma
ISO 9000
The figure above explains the software review process. The pre-review activities are concerned with review planning and review preparation; this is the first phase of the review process. After completing this phase comes the review meeting, in which the author of the document or program being reviewed must "walk through" the document with the review team. Finally come the post-review activities, which cover error correction and improvement; in this phase the problems and issues raised in the review meeting are addressed.
1. Code Review:
It is a systematic examination of computer source code, intended to find and remove vulnerabilities in the code. This kind of review is usually performed as a peer review without management participation. Code reviews can often find and remove common vulnerabilities such as format string exploits, race conditions, memory leaks and buffer overflows, thereby improving software security. Reviewers prepare for the review meeting and produce a review report with a list of findings. Typical code review rates are about 150 lines of code per hour. Technical reviews may be quite informal or very formal and can serve a number of purposes, including but not limited to discussion, decision making, evaluation of alternatives, finding defects and solving technical problems. Code review practices fall into three main categories:
1. Pair programming is a type of code review where two persons develop code together at
the same workstation.
2. Formal Code Review
3. Lightweight code review
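The review rate quoted above (about 150 lines of code per hour) gives a quick way to estimate review effort; the change-set size below is purely illustrative:

```python
# Estimate review effort from the rule-of-thumb review rate mentioned
# in the text (about 150 lines of code per reviewer-hour).
def review_hours(lines_of_code, rate_loc_per_hour=150):
    return lines_of_code / rate_loc_per_hour

# A hypothetical 3,000-line change set would need roughly 20 reviewer-hours,
# which is one reason large changes are usually split into several smaller
# lightweight reviews.
hours = review_hours(3000)
```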
2. Inspection:
It is a very formal type of peer review where the reviewers follow a well-defined process to find defects.
Trained moderators, who are not the authors, take care of this review activity. They are responsible for conducting a peer examination of the document or product.
During inspection the documents are prepared and checked keenly by the reviewers
before the meeting gets started. It is essential to have pre-meeting preparation.
The product is examined accordingly and the defects are found and as a result they are
fixed.
The defects and their solutions are documented in a logging record or issue log.
A formal follow-up is carried out by the moderator, who applies exit criteria to ensure timely and prompt corrective action; in this way the inspection is completed.
3. Walkthrough:
It is a form of peer review where the author leads members of the development team and other interested parties through a software product, and the participants ask questions and make comments about possible defects and deviations from development standards. The name walkthrough itself suggests checking or reviewing the entire software product, where "software product" normally refers to some kind of technical document. As indicated by the IEEE definition, this might be a software design document or program source code, but use cases, business process definitions, test case specifications, and a variety of other technical documentation may also be walked through.
Objectives
Basically, a walkthrough has one or two objectives:
1. To gain feedback about the technical quality or content of the document;
2. To familiarize the audience with the content.
IEEE 1028 has recommended three specialist roles in a walkthrough:
The Author is responsible for explaining the overall product step by step at the walkthrough meeting, and is probably responsible for completing other formalities too. The Author guides the participants through the document according to his or her thought process so that a common understanding is achieved.
The Walkthrough Leader is one who conducts the walkthrough, handles administrative
tasks, and ensures that the process is conducted efficiently.
The Recorder is one who notes all potential errors, decisions, and action items identified
during the walkthrough meetings.
4. Technical review:
It is a form of peer review in which a team of qualified people examines the suitability of the software product for its intended use and identifies discrepancies from specifications and standards. It is less formal than an inspection; it is usually led by a trained moderator, but it can also be led by a technical expert. It is often performed as a peer review without management participation. Architects, designers, and key users focus on the content of the document and concentrate on finding defects, errors, and various other issues. In practice, technical reviews vary from quite informal to very formal.
The participants must be informed about the technical content of the document.
Whether the technical concepts are used correctly must be verified at an early stage.
The value of the technical concepts and alternatives in the product is to be evaluated before they are accepted and implemented.
There must be consistency in the use and representation of technical concepts.
Software reliability is a key part of software quality. The study of software reliability can be categorized into three parts: modeling, measurement, and improvement. Reliability refers to the ability of a product to perform its specified function under service conditions. In other words, reliability can be described as the probability that an item will perform appropriately for a specified time period under a given service condition. The high complexity of software is the major contributing factor to software reliability problems.
The reliability of a computer program is an important element of its overall quality. Software
Reliability is the probability of failure-free software operation for a specified period of time in a
specified environment. It is clear that the reliability of a system improves if the number of defects in it is reduced. However, there is no simple relationship between observed system reliability and the number of defects in the system.
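The definition above, reliability as the probability of failure-free operation for a given period, can be made concrete with one widely used simple model. This model (constant failure rate λ, so R(t) = e^(-λt) and MTTF = 1/λ) is assumed here for illustration and is not prescribed by the text:

```python
import math

# Assumed model (not from the text): a constant failure rate `lam`, so
# R(t) = exp(-lam * t) is the probability of failure-free operation
# over a period t, and the mean time to failure (MTTF) is 1 / lam.
def reliability(lam, t):
    return math.exp(-lam * t)

lam = 0.002                # failures per hour (illustrative)
r = reliability(lam, 100)  # probability of surviving 100 hours, about 0.82
mttf = 1 / lam             # about 500 hours
```

Fixing defects lowers the effective failure rate, which is why reliability and defect count are related, but, as the text notes, the relationship is not a simple one.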
For example, suppose it has been observed from the behaviour of a large number of programs that 90% of the execution time of a program is spent in executing only 10% of its instructions. These most-used 10% of instructions are often called the core of the program; the remaining 90% of the program statements, called non-core, are executed for only 10% of the total execution time. One can therefore see that removing even 60% of the defects from this least-used part of the system would yield very little improvement in the system's reliability.
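A back-of-the-envelope calculation makes this concrete. Assume, purely for illustration, that each defect contributes to the failure intensity in proportion to how often its code executes (core code gets 90% of the execution time, non-core 10%), and pick hypothetical defect counts:

```python
# Simplifying assumption: each defect's contribution to the failure
# intensity is weighted by the fraction of execution time its code gets.
def failure_intensity(core_defects, noncore_defects,
                      core_exec=0.90, noncore_exec=0.10):
    return core_defects * core_exec + noncore_defects * noncore_exec

# Hypothetical starting point: 10 defects in the core, 100 in non-core code.
before = failure_intensity(core_defects=10, noncore_defects=100)  # about 19

# Removing 60% of the non-core defects (60 defects removed):
after_noncore = failure_intensity(10, 40)                         # about 13

# Removing 60% of the far fewer core defects (only 6 defects removed):
after_core = failure_intensity(4, 100)                            # about 13.6
```

With these hypothetical numbers, removing 60 non-core defects lowers the failure intensity by roughly the same amount as removing just 6 core defects, so the location of a defect matters far more than the raw defect count.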
Thus, the reliability of a product depends not only on the number of errors but also on their exact location. If not considered carefully, software reliability can become the reliability bottleneck of the whole system.
1. The reliability improvement obtained by fixing a single bug depends on the location of that bug in the code.
2. The perceived reliability of a software product is highly observer-dependent.
3. The reliability of a product keeps changing as errors are detected and fixed.
The reliability behaviour of hardware and software is very different. Since the characteristics of software and hardware differ, their reliability cannot be expected to behave in the same way; hardware failures are inherently different from software failures.
Most hardware failures are due to component wear and tear. Hardware is physical and tangible, so it may wear out, deteriorate, or corrode under harsh environmental conditions. For example, a logic gate may be stuck at 1 or 0, or a resistor may short-circuit. To fix hardware faults, one has to either replace or repair the failed parts.
On the other hand, a software product will continue to fail until the error is tracked down and either the design or the code is changed. When hardware is repaired, however, its earlier reliability is restored.
When a software failure is repaired, we cannot be sure whether reliability will increase or decrease (it may decrease if the fix introduces new errors).
Hardware reliability is concerned with stability, whereas software reliability aims at reliability growth.
The change of failure rate over the product lifetime for typical hardware and a software
product can be represented as follows:
Figure: Change in Failure Rate
13.7 The ISO 9000 Quality Standards:
ISO (the International Organization for Standardization) is a consortium of 63 countries established to formulate and foster standardization. ISO published its 9000 series of standards in 1987, creating the Quality Management System (QMS) standards in the same year. Standardization helps optimize operations through proper utilization of resources. ISO standards are reviewed every few years by the International Organization for Standardization. The ISO 9000 standard specifies guidelines for maintaining a quality system. The structure of ISO comprises technical committees, sub-committees, and working groups. The ISO 9000 series was developed to serve the quality aspects of an organization, and it also includes the eight principles of management systems. In short, the standards require an organization to state "what it is doing to ensure quality", and the resulting certificate from ISO is a mark of quality for that organization.
ISO 9001 applies to the organizations engaged in design, development, production, and
servicing of goods. This is the standard that is applicable to most software development
organizations.
ISO 9002 applies to those organizations which do not design products but are only
involved in production. Examples of these category industries include steel and car
manufacturing industries that buy the product and plant designs from external sources and
are involved in only manufacturing those products. Therefore, ISO 9002 is not applicable
to software development organizations.
ISO 9003 applies to organizations that are involved only in installation and testing of the
products.
All documents concerned with the development of a software product should be properly
managed, authorized, and controlled. This requires a configuration management system
to be in place.
Proper plans should be prepared and then progress against these plans should be
monitored.
Important documents should be independently checked and reviewed for effectiveness and
correctness.
The product should be tested against its specification. Several organizational aspects should also be addressed, e.g., the management reporting structure of the quality team.
Thus, ISO 9000 certification is awarded by an international standards body, and an organization can therefore quote its ISO 9000 certification in official documents, communication with external parties, and other documentation. The main reason behind establishing ISO standards is to ensure the required safety, quality, and reliability of products and services. This raises levels of productivity and reduces the chance of errors.
Review Questions: