Understanding Software: Types & Engineering

The document provides an overview of software, defining it as both a product and a vehicle for delivering information, with a focus on its unique characteristics compared to hardware. It outlines various software application domains, the importance of software engineering, and the software process, emphasizing the need for effective communication, planning, modeling, construction, and deployment. Additionally, it discusses the essence of software engineering practice, highlighting the importance of understanding problems, planning solutions, and ensuring quality through testing.


MODULE-1

INTRODUCTION

1.1 The Nature of Software

Today, software takes on a dual role. It is a product and, at the same time, the vehicle for
delivering a product. As a product, it delivers the computing potential embodied by computer
hardware or, more broadly, by a network of computers that are accessible by local hardware.

As the vehicle used to deliver the product, software acts as the basis for the control of the
computer (operating systems), the communication of information (networks), and the creation and
control of other programs (software tools and environments).

Software delivers the most important product of our time—information. It transforms
personal data (e.g., an individual’s financial transactions) so that the data can be more useful in a local
context; it manages business information to enhance competitiveness; it provides a gateway to
worldwide information networks (e.g., the Internet); and it provides the means for acquiring information
in all of its forms.

1.1.1 Defining Software

Software is:

(1) Instructions (computer programs) that, when executed, provide desired features, function, and
performance;

(2) Data structures that enable the programs to adequately manipulate information; and

(3) Descriptive information, in both hard-copy and virtual forms, that describes the operation and use of
the programs.

Software is a logical rather than a physical system element. Therefore, software has
characteristics that are considerably different from those of hardware:

 Software is developed or engineered; it is not manufactured in the classical sense.


 Although some similarities exist between software development and hardware
manufacturing, the two activities are fundamentally different.
 In both activities, high quality is achieved through good design, but the manufacturing phase
for hardware can introduce quality problems that are nonexistent (or easily corrected) for
software.
 Both activities are dependent on people, but the relationship between people applied and
work accomplished is entirely different.
 Both activities require the construction of a “product,” but the approaches are different.
 Software costs are concentrated in engineering. This means that software projects cannot be
managed as if they were manufacturing projects.
 Software doesn’t “wear out.”
 The figure for hardware depicts failure rate as a function of time. The relationship, often
called the “bathtub curve,” indicates that hardware exhibits relatively high failure rates early
in its life; defects are corrected and the failure rate drops to a steady-state level for some
period of time.
 As time passes, however, the failure rate rises again as hardware components suffer from the
cumulative effects of dust, vibration, abuse, temperature extremes, and many other
environmental maladies.
 Stated simply, the hardware begins to wear out. Software is not susceptible to the
environmental maladies that cause hardware to wear out.
 In theory, therefore, the failure rate curve for software should take the form of the “idealized
curve” shown in the figure below.

 Undiscovered defects will cause high failure rates early in the life of a program. However,
these are corrected and the curve flattens as shown.
 The idealized curve is a gross oversimplification of actual failure models for software.
However, the implication is clear—software doesn’t wear out.
 But it does deteriorate.
 Another aspect of wear illustrates the difference between hardware and software. When a
hardware component wears out, it is replaced by a spare part.
 There are no software spare parts. Every software failure indicates an error in design or in the
process through which design was translated into machine executable code.
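The contrast between the hardware “bathtub curve” and software’s idealized failure curve can be sketched numerically. This is an illustrative model only; the function shapes and parameter values below are assumptions chosen to reproduce the curves described above, not empirical failure data.

```python
import math

def hardware_failure_rate(t, infant=5.0, steady=1.0, wear=0.05):
    """Bathtub curve: high infant-mortality failures, a steady-state
    period, then a rising rate as components wear out."""
    return infant * math.exp(-t) + steady + wear * t ** 2

def software_failure_rate(t, initial=5.0, steady=1.0):
    """Idealized curve: early defects are corrected and the rate
    flattens; there is no wear-out term."""
    return (initial - steady) * math.exp(-t) + steady
```

Late in life (large `t`) the hardware rate climbs again because of the wear term, while the software rate stays flat, mirroring the observation that software doesn’t wear out.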

 Although the industry is moving toward component-based construction, most software
continues to be custom built.
 Reusable components are created so that the engineer can concentrate on
the truly innovative elements of a design, that is, the parts of the design that represent
something new.
 A software component should be designed and implemented so that it can be reused in
many different programs.
 The data structures and processing detail required to build the interface are contained
within a library of reusable components for interface construction.
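As a minimal sketch of what “designed for reuse” means, the hypothetical routine below has no program-specific dependencies, so two unrelated programs can call it unchanged. The function name and both uses are invented for illustration.

```python
def validate_menu_choice(raw_input, valid_choices):
    """Reusable component: normalize user input and check it against a
    set of valid options. Returns the choice, or None if invalid."""
    choice = raw_input.strip().lower()
    return choice if choice in valid_choices else None

# Reused unchanged by two different programs:
order_choice = validate_menu_choice(" Pizza ", {"pizza", "salad"})  # ordering app
report_choice = validate_menu_choice("csv", {"csv", "pdf"})         # reporting app
```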

1.1.2 Software Application Domains


The seven broad categories of computer software present continuing challenges for software
engineers:

 System software: a collection of programs written to service other programs.
 Some system software (e.g., compilers, editors, and file management utilities) processes
complex but determinate information structures.
 Other system applications (e.g., operating system components, drivers, networking
software, telecommunications processors) process largely indeterminate data.

 Application software: stand-alone programs that solve a specific business need.
 Applications in this area process business or technical data in a way that facilitates
business operations or management/technical decision making.

 Engineering/scientific software: has been characterized by “number crunching” algorithms.
 Applications range from astronomy to volcanology, from automotive stress analysis
to space shuttle orbital dynamics, and from molecular biology to automated
manufacturing.
 Computer-aided design, system simulation, and other interactive applications have
begun to take on real-time and even system software characteristics.

 Embedded software: resides within a product or system and is used to implement
and control features and functions for the end user and for the system itself.
Ex: keypad control for a microwave oven.
Ex: digital functions in an automobile, such as fuel control, dashboard displays, and braking
systems.

 Product-line software: designed to provide a specific capability for use by many
different customers.
 Product-line software can focus on a limited and esoteric marketplace (e.g.,
inventory control products) or address mass consumer markets (e.g., word processing,
spreadsheets, computer graphics, multimedia, entertainment, database management, and
personal and business financial applications).

 Web applications: called “WebApps,” this network-centric software category spans a
wide array of applications.
In their simplest form, WebApps can be little more than a set of linked hypertext files
that present information using text and limited graphics.

 Artificial intelligence software: makes use of nonnumerical algorithms to solve
complex problems that are not amenable to computation or straightforward analysis.
Applications within this area include robotics, expert systems, pattern recognition
(image and voice), artificial neural networks, theorem proving, and game playing.

 Open-world computing: the rapid growth of wireless networking may soon lead to
true pervasive, distributed computing. The challenge for software engineers will be to
develop systems and application software that will allow mobile devices, personal
computers, and enterprise systems to communicate across vast networks.

 Netsourcing: the World Wide Web is rapidly becoming a computing engine as well
as a content provider. The challenge for software engineers is to architect simple (e.g.,
personal financial planning) and sophisticated applications that provide a benefit to
targeted end-user markets worldwide.

 Open source: a growing trend that results in distribution of source code for systems
applications (e.g., operating systems, databases, and development environments) so that
many people can contribute to its development.

1.1.3 Legacy Software


 Older programs are often referred to as legacy software.
 Unfortunately, there is sometimes one additional characteristic that is present in
legacy software—poor quality. Legacy systems sometimes have inextensible designs,
convoluted code, poor or nonexistent documentation, test cases and results that were
never archived, and a poorly managed change history.
 And yet, these systems support “core business functions and are indispensable to the
business.” What to do?
 The only reasonable answer may be: do nothing, at least until the legacy system must
undergo some significant change.
 Legacy systems often evolve for one or more of the following reasons:
1. The software must be adapted to meet the needs of new computing
environments or technology.
2. The software must be enhanced to implement new business requirements.
3. The software must be extended to make it interoperable with other more
modern systems or databases.
4. The software must be re-architected to make it viable within a network
environment.

1.3 SOFTWARE ENGINEERING

From the nature of software and the problems it presents, several conclusions follow:
 It follows that a concerted effort should be made to understand the problem before a
software solution is developed.
 It follows that design becomes a pivotal activity.
 It follows that software should exhibit high quality.
 It follows that software should be maintainable.

Although hundreds of authors have developed personal definitions of software engineering, a
definition proposed by Fritz Bauer at the seminal conference on the subject still serves as a
basis for discussion.

Software engineering is an engineering discipline that is concerned with all aspects of
software production, from the early stages of system specification to maintaining the system
after it has gone into use. In this definition, there are two key phrases:

1. Engineering discipline: Engineers make things work. They apply theories, methods,
and tools where these are appropriate, but they use them selectively and always try to
discover solutions to problems even when there are no applicable theories and
methods. Engineers also recognise that they must work to organisational and financial
constraints, so they look for solutions within these constraints.
2. All aspects of software production: Software engineering is not just concerned with the
technical processes of software development but also with activities such as software
project management and with the development of tools, methods, and theories to
support software production.

1.4 The Software Process

 A process is a collection of activities, actions, and tasks that are performed when
some work product is to be created.
 An activity strives to achieve a broad objective (e.g., communication with
stakeholders) and is applied regardless of the application domain, size of the project,
complexity of the effort, or degree of rigor with which software engineering is to be
applied.
 An action (e.g., architectural design) encompasses a set of tasks that produce a major
work product (e.g., an architectural design model).
 A task focuses on a small, but well-defined, objective (e.g., conducting a unit test)
that produces a tangible outcome.
 In the context of software engineering, a process is not a rigid prescription for how to
build computer software. Rather, it is an adaptable approach that enables the people
doing the work (the software team) to pick and choose the appropriate set of work
actions and tasks.
 A process framework establishes the foundation for a complete software engineering
process by identifying a small number of framework activities that are applicable to
all software projects, regardless of their size or complexity.
 Process framework encompasses a set of umbrella activities that are applicable
across the entire software process.
 A generic process framework for SE encompasses five activities:

1. Communication: It is critically important to communicate and collaborate
with the customer (and other stakeholders). The intent is to understand
stakeholders’ objectives for the project and to gather requirements that help
define software features and functions.
2. Planning: A software project is a complicated journey, and the planning
activity creates a “map” that helps guide the team as it makes the journey. The
map, called a software project plan, defines the software engineering work by
describing the technical tasks to be conducted, the risks that are likely, the
resources that will be required, the work products to be produced, and a work
schedule.
3. Modelling: You create a “sketch” of the thing so that you’ll understand the
big picture: what it will look like. If required, you refine the sketch into greater
and greater detail in an effort to better understand the problem and the solution
too. A software engineer does the same thing by creating models to better
understand software requirements and the design that will achieve those
requirements.
4. Construction: This activity combines code generation (either manual or
automated) and the testing that is required to uncover errors in the code.
5. Deployment: The software is delivered to the customer, who evaluates the
delivered product and provides feedback based on the evaluation.
 These five generic framework activities can be used during the development of small,
simple programs, the creation of large Web applications, and for the engineering of
large, complex computer-based systems.
 For many software projects, framework activities are applied iteratively as a project
progresses. That is, communication, planning, modeling, construction, and
deployment are applied repeatedly through a number of project iterations.
 Typical umbrella activities include:

1. Software project tracking and control: allows the software team to assess progress
against the project plan and take any necessary action to maintain the schedule.
2. Risk management: assesses risks that may affect the outcome of the project or the
quality of the product.
3. Software quality assurance: defines and conducts the activities required to ensure
software quality.
4. Technical reviews: assesses software engineering work products in an effort to
uncover and remove errors before they are propagated to the next activity.
5. Measurement: defines and collects process, project, and product measures that assist
the team in delivering software that meets stakeholders’ needs; can be used in
conjunction with all other framework and umbrella activities.
6. Software configuration management: manages the effects of change throughout the
software process.
7. Reusability management: defines criteria for work product reuse (including software
components) and establishes mechanisms to achieve reusable components.
8. Work product preparation and production: encompasses the activities required to
create work products such as models, documents, logs, forms, and lists.
 Agile process models emphasize project “agility” and follow a set of principles that
lead to a more informal (but, proponents argue, no less effective) approach to the
software process. These process models are generally characterized as “agile” because
they emphasize adaptability. They are appropriate for many types of projects and are
particularly useful when Web applications are engineered.
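The five framework activities and their repeated application across iterations can be sketched as data. The activity names mirror the text; the `run_project` helper is an invented illustration, not part of any real process tool.

```python
FRAMEWORK_ACTIVITIES = ["communication", "planning", "modeling",
                        "construction", "deployment"]

UMBRELLA_ACTIVITIES = ["software project tracking and control",
                       "risk management", "software quality assurance",
                       "technical reviews", "measurement",
                       "software configuration management",
                       "reusability management",
                       "work product preparation and production"]

def run_project(iterations):
    """Yield (iteration, activity) pairs: the five framework activities
    are applied repeatedly through a number of project iterations, while
    the umbrella activities apply across the entire process."""
    for i in range(1, iterations + 1):
        for activity in FRAMEWORK_ACTIVITIES:
            yield (i, activity)

log = list(run_project(2))  # 2 iterations x 5 activities = 10 applications
```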

“Software engineering is the establishment and use of sound engineering principles in
order to obtain economically software that is reliable and works efficiently on real
machines.”

 Software engineering is a layered technology.


 The bedrock that supports software engineering is a quality focus.
 The foundation for software engineering is the process layer.
 The software engineering process is the glue that holds the technology layers together
and enables rational and timely development of computer software.
 Process defines a framework that must be established for effective delivery of
software engineering technology.
 Software engineering methods provide the technical how-to’s for building software.
 Methods encompass a broad array of tasks that include communication, requirements
analysis, design modelling, program construction, testing, and support.
 Software engineering tools provide automated or semiautomated support for the
process and the methods.
 When tools are integrated so that information created by one tool can be used by
another, a system for the support of software development, called computer-aided
software engineering (CASE), is established.

1.5 Software Engineering Practice

 Generic framework activities (communication, planning, modeling, construction, and
deployment) and umbrella activities establish a skeleton architecture for software
engineering work.
1.5.1 The Essence of Practice
The essence of software engineering practice:
1. Understand the problem (communication and analysis).
2. Plan a solution (modeling and software design).
3. Carry out the plan (code generation).
4. Examine the result for accuracy (testing and quality assurance).
 Understand the problem: Unfortunately, understanding isn’t always that easy. It’s
worth spending a little time answering a few simple questions.
• Who has a stake in the solution to the problem? That is, who are the stakeholders?
• What are the unknowns? What data, functions, and features are required to properly
solve the problem?
• Can the problem be compartmentalized? Is it possible to represent smaller problems
that may be easier to understand?
• Can the problem be represented graphically? Can an analysis model be created?

 Plan the solution. Now you understand the problem (or so you think) and you can’t
wait to begin coding. Before you do, slow down just a bit and do a little design:

• Have you seen similar problems before? Are there patterns that are recognizable in a
potential solution? Is there existing software that implements the data, functions, and
features that are required?
• Has a similar problem been solved? If so, are elements of the solution reusable?
• Can sub problems be defined? If so, are solutions readily apparent for the sub
problems?
• Can you represent a solution in a manner that leads to effective implementation?
Can a design model be created?

 Carry out the plan. The design created serves as a road map for the system to build.
There may be unexpected detours, and it’s possible that you’ll discover an even better
route as you go, but the “plan” will allow you to proceed without getting lost.

• Does the solution conform to the plan? Is source code traceable to the design model?
• Is each component part of the solution provably correct? Have the design and code
been reviewed, or better, have correctness proofs been applied to the algorithm?

 Examine the result. You can’t be sure that your solution is perfect, but you can be
sure that you’ve designed a sufficient number of tests to uncover as many errors as
possible.
• Is it possible to test each component part of the solution? Has a reasonable testing
strategy been implemented?
• Does the solution produce results that conform to the data, functions, and features
that are required? Has the software been validated against all stakeholder
requirements?
1.5.2 General Principles

David Hooker has proposed seven principles that focus on software engineering practice as a
whole. They are:

1. The First Principle: The Reason It All Exists


Before specifying a system requirement, before noting a piece of system
functionality, before determining the hardware platforms or development processes, ask
yourself questions such as: “Does this add real value to the system?” If the answer is “no,”
don’t do it. All other principles support this one.

2. The Second Principle: KISS (Keep It Simple, Stupid!)


There are many factors to consider in any design effort. All design should be
as simple as possible, but no simpler. This facilitates having a more easily understood and
easily maintained system. Simple also does not mean “quick and dirty.” In fact, it often takes
a lot of thought and work over multiple iterations to simplify. The payoff is software that is
more maintainable and less error-prone.

3. The Third Principle: Maintain the Vision


A clear vision is essential to the success of a software project. Without one, a
project almost unfailingly ends up being “of two [or more] minds” about itself.
Compromising the architectural vision of a software system weakens and will eventually
break even the well-designed systems. An empowered architect who can hold the vision and
enforce compliance helps ensure a very successful software project.

4. The Fourth Principle: What You Produce, Others Will Consume


In some way or other, someone else will use, maintain, document, or otherwise
depend on being able to understand your system. So, always specify, design, and implement
knowing someone else will have to understand what you are doing. Someone may have to
debug the code you write, and that makes them a user of your code. Making their job easier
adds value to the system.

5. The Fifth Principle: Be Open to the Future


A system with a long lifetime has more value. In today’s computing
environments, where specifications change on a moment’s notice and hardware platforms
become obsolete when only a few months old, software lifetimes are typically measured in
months instead of years.

6. The Sixth Principle: Plan Ahead for Reuse


Reuse saves time and effort. Achieving a high level of reuse is arguably the
hardest goal to accomplish in developing a software system. The reuse of code and designs
has been proclaimed as a major benefit of using object-oriented technologies.

7. The Seventh principle: Think!


Placing clear, complete thought before action almost always produces better
results. When you think about something, you are more likely to do it right. You also gain
knowledge about how to do it right again. If you do think about something and still do it
wrong, it becomes a valuable experience. A side effect of thinking is learning to recognize
when you don’t know something, at which point you can research the answer.

1.6 Software Myths

 Software myths, erroneous beliefs about software and the process that is used to build
it, can be traced to the earliest days of computing. Myths have a number of attributes
that make them insidious.
1. Management myths: Managers with software responsibility, like managers in most
disciplines, are often under pressure to maintain budgets, keep schedules from
slipping, and improve quality. Like a drowning person who grasps at a straw, a
software manager often grasps at belief in a software myth, if that belief will lessen
the pressure.
2. Customer myths: A customer who requests computer software may be a person at
the next desk, a technical group down the hall, the marketing/sales department, or an
outside company that has requested software under contract. In many cases, the
customer believes myths about software because software managers and practitioners
do little to correct misinformation. Myths lead to false expectations (by the customer)
and, ultimately, dissatisfaction with the developer.
3. Practitioner’s myths: Myths that are still believed by software practitioners have
been fostered by over 50 years of programming culture. During the early days,
programming was viewed as an art form. Old ways and attitudes die hard.
1.7 A Generic Process Model

 The software process is represented schematically in the figure below. Referring
to the figure, each framework activity is populated by a set of software
engineering actions.

 Each software engineering action is defined by a task set that identifies the
work tasks that are to be completed, the work products that will be produced,
the quality assurance points that will be required, and the milestones that will
be used to indicate progress.

 In addition, a set of umbrella activities (project tracking and control, risk
management, quality assurance, configuration management, technical reviews,
and others) is applied throughout the process.
 You should note that one important aspect of the software process has not yet
been discussed. This aspect is called process flow.

 Process flow describes how the framework activities and the actions and tasks that
occur within each framework activity are organized with respect to sequence and
time, and is illustrated in the figure.
1.7.1 Defining a Framework Activity

A software team would need significantly more information before it could properly
execute any one of these activities as part of the software process.
For a small software project requested by one person (at a remote location) with
simple, straightforward requirements, the communication activity might encompass little
more than a phone call with the appropriate stakeholder. Therefore, the only necessary action
is a phone conversation, and the work tasks (the task set) that this action encompasses are:
1. Make contact with stakeholder via telephone.
2. Discuss requirements and take notes.
3. Organize notes into a brief written statement of requirements.
4. E-mail to stakeholder for review and approval.

If the project were considerably more complex, with many stakeholders, each with
a different set of (sometimes conflicting) requirements, the communication activity might have
six distinct actions (described in Chapter 5): inception, elicitation, elaboration, negotiation,
specification, and validation. Each of these software engineering actions would have many
work tasks and a number of distinct work products.
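The simple-project example above can be written down directly as a task set: an ordered list of work tasks for the single “phone conversation” action. The `next_task` helper is an invented illustration of how a task set drives the work.

```python
# Task set for the "phone conversation" action of the communication
# activity, for the simple one-stakeholder project described above.
communication_task_set = [
    "Make contact with stakeholder via telephone",
    "Discuss requirements and take notes",
    "Organize notes into a brief written statement of requirements",
    "E-mail to stakeholder for review and approval",
]

# A more complex project would instead use six distinct actions, each
# with its own, larger task set.
complex_actions = ["inception", "elicitation", "elaboration",
                   "negotiation", "specification", "validation"]

def next_task(task_set, completed_count):
    """Return the next pending work task, or None when the action is done."""
    if completed_count < len(task_set):
        return task_set[completed_count]
    return None
```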

1.7.2 Identifying a Task Set


Each software engineering action (e.g., elicitation, an action associated with the
communication activity) can be represented by a number of different task sets, each a
collection of software engineering work tasks, related work products, quality assurance
points, and project milestones. You should choose a task set that best accommodates the
needs of the project and the characteristics of your team.

1.8 PROCESS ASSESSMENT AND IMPROVEMENT

The existence of a software process is no guarantee that software will be delivered on
time, that it will meet the customer’s needs, or that it will exhibit the technical characteristics
that will lead to long-term quality characteristics.

A number of different approaches to software process assessment and improvement have
been proposed over the past few decades:

1. Standard CMMI Assessment Method for Process Improvement (SCAMPI): provides a
five-phase process assessment model: initiating, diagnosing, establishing, acting, and
learning. The SCAMPI method uses the SEI CMMI as the basis for assessment.

2. CMM-Based Appraisal for Internal Process Improvement (CBA IPI): provides a
diagnostic technique for assessing the relative maturity of a software organization; uses the
SEI CMM as the basis for the assessment.

3. SPICE (ISO/IEC 15504): a standard that defines a set of requirements for software
process assessment. The intent of the standard is to assist organizations in developing an
objective evaluation of the efficacy of any defined software process.

4. ISO 9001:2000 for Software: a generic standard that applies to any organization that
wants to improve the overall quality of the products, systems, or services that it provides.
Therefore, the standard is directly applicable to software organizations and companies.

1.9 Prescriptive Process Models


A “prescriptive” model prescribes a set of process elements: framework activities, software
engineering actions, tasks, work products, quality assurance, and change control mechanisms
for each project. Each process model also prescribes a process flow (also called a work flow),
that is, the manner in which the process elements are interrelated to one another.

1.9.1 The Waterfall Model

The waterfall model, sometimes called the classic life cycle, suggests a systematic,
sequential approach to software development that begins with customer specification of
requirements and progresses through planning, modelling, construction, and deployment,
culminating in ongoing support of the completed software.

The waterfall model is the oldest paradigm for software engineering. However, over
the past three decades, criticism of this process model has caused even ardent supporters to
question its efficacy. Among the problems that are sometimes encountered when the
waterfall model is applied are:

1. Real projects rarely follow the sequential flow that the model proposes. Although the
linear model can accommodate iteration, it does so indirectly. As a result, changes can cause
confusion as the project team proceeds.

2. It is often difficult for the customer to state all requirements explicitly. The waterfall
model requires this and has difficulty accommodating the natural uncertainty that exists at the
beginning of many projects.

3. The customer must have patience. A working version of the program(s) will not be
available until late in the project time span. A major blunder, if undetected until the working
program is reviewed, can be disastrous.

 A variation in the representation of the waterfall model is called the V-model, represented
in the figure below.
 The V-model depicts the relationship of quality assurance actions to the actions associated
with communication, modelling, and early construction activities. As the software team
moves down the left side of the V, basic problem requirements are refined into
progressively more detailed and technical representations of the problem and its solution.
 Once code has been generated, the team moves up the right side of the V, essentially
performing a series of tests (quality assurance actions) that validate each of the models
created as the team moved down the left side.
 In reality, there is no fundamental difference between the classic life cycle and the V-
model. The V-model provides a way of visualizing how verification and validation
actions are applied to earlier engineering work.

1.9.2 Incremental Process Models

 The incremental model combines elements of linear and parallel process flows.
 The incremental model applies linear sequences in a staggered fashion as calendar time
progresses.
 Each linear sequence produces deliverable “increments” of the software in a manner that is
similar to the increments produced by an evolutionary process flow.
 When an incremental model is used, the first increment is often a core product. That
is, basic requirements are addressed but many supplementary features remain
undelivered.
 The plan addresses the modification of the core product to better meet the needs of the
customer and the delivery of additional features and functionality.
 This process is repeated following the delivery of each increment, until the complete
product is produced.
 The incremental process model focuses on the delivery of an operational product with
each increment.
 Incremental development is particularly useful when staffing is unavailable for a
complete implementation by the business deadline that has been established for the
project.
 Advantages of Incremental model:
 Generates working software quickly and early during the software life cycle.
 This model is more flexible – less costly to change scope and requirements.
 It is easier to test and debug during a smaller iteration.
 In this model the customer can respond to each build.
 Lowers initial delivery cost.
 Easier to manage risk because risky pieces are identified and handled during each
iteration.
 Disadvantages of Incremental model:
 Needs good planning and design.
 Needs a clear and complete definition of the whole system before it can be broken
down and built incrementally.
 Total cost is higher than waterfall.

1.9.3 Evolutionary Process Models

Two common evolutionary process models are

1. Prototyping
2. The Spiral Model.
Prototyping: Often, a customer defines a set of general objectives for software, but does not identify detailed requirements for functions and features. In other cases, the developer may be unsure of the efficiency of an algorithm, the adaptability of an operating system, or the form that human-machine interaction should take. In these, and many other situations, a prototyping paradigm may offer the best approach.


 The prototyping paradigm begins with communication.
 Prototyping iteration is planned quickly, and modeling (in the form of a “quick
design”) occurs. A quick design focuses on a representation of those aspects of the
software that will be visible to end users (e.g., human interface layout or output
display formats). The quick design leads to the construction of a prototype.
 The prototype is deployed and evaluated by stakeholders, who provide feedback that
is used to further refine requirements. Iteration occurs as the prototype is tuned to
satisfy the needs of various stakeholders, while at the same time enabling you to
better understand what needs to be done.

Advantages of using Prototype Model :

1. This model is flexible in design.


2. It is easy to detect errors.
3. There is scope of refinement, it means new requirements can be easily
accommodated.
4. It ensures a greater level of customer satisfaction and comfort.
5. It is ideal for online systems.
6. It helps developers and users both understand the system better.
7. It can actively involve users in the development phase.
Disadvantages:

1. This model is costly.

2. It has poor documentation because of continuously changing customer requirements.

3. There may be sub-optimal solutions because developers are in a hurry to build prototypes.

4. It may increase the complexity of the system.

5. Customers sometimes demand the actual product to be delivered soon after seeing an
early prototype.

6. Software engineers often make implementation compromises in order to get a prototype working quickly.

The Spiral Model: -Originally proposed by Barry Boehm [Boe88], the spiral model is an
evolutionary software process model that couples the iterative nature of prototyping with the
controlled and systematic aspects of the waterfall model. It provides the potential for rapid
development of increasingly more complete versions of the software.

Spiral model working procedure:

 As this evolutionary process begins, the software team performs activities that are
implied by a circuit around the spiral in a clockwise direction, beginning at the center.
 The first circuit around the spiral might result in the development of a product
specification;
 subsequent passes around the spiral might be used to develop a prototype and then
progressively more sophisticated versions of the software.
 Each pass through the planning region results in adjustments to the project plan.
 Cost and schedule are adjusted based on feedback derived from the customer after
delivery.
 In addition, the project manager adjusts the planned number of iterations required to
complete the software. Unlike other process models that end when software is
delivered, the spiral model can be adapted to apply throughout the life of the
computer software.
 Therefore, the first circuit around the spiral might represent a “concept development
project” that starts at the core of the spiral and continues for multiple iterations until
concept development is complete.

Advantages of spiral model:

1. The spiral model is a realistic approach to the development of large-scale systems and
software. Because software evolves as the process progresses, the developer and customer
better understand and react to risks at each evolutionary level.

2. The spiral model uses prototyping as a risk reduction mechanism but, more important,
enables you to apply the prototyping approach at any stage in the evolution of the product.

3. It maintains the systematic stepwise approach suggested by the classic life cycle but
incorporates it into an iterative framework that more realistically reflects the real world

4. The spiral model demands a direct consideration of technical risks at all stages of the project and, if properly applied, should reduce risks before they become problematic.

Disadvantages:

1. It is not suitable for small projects as it is expensive.

2. It is much more complex than other SDLC models.

3. It is not suitable for low-risk projects.

4. It can be hard to define objective, verifiable milestones. Large numbers of intermediate stages require excessive documentation.

1.9.4 Concurrent Models

 The concurrent development model, sometimes called concurrent engineering, allows a software team to represent iterative and concurrent elements of any of the process models.
 The above figure provides a schematic representation of one software engineering
activity within the modeling activity using a concurrent modeling approach.
 Concurrent modeling defines a series of events that will trigger transitions from state
to state for each of the software engineering activities, actions, or tasks.
 For example, during early stages of design (a major software engineering action that
occurs during the modeling activity), an inconsistency in the requirements model is
uncovered.
 This generates the event analysis model correction, which will trigger the requirements analysis action from the done state into the awaiting changes state.
 Concurrent modeling is applicable to all types of software development and provides an accurate picture of the current state of a project.
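The state-and-event behaviour described above can be sketched as a tiny state machine. This is a hypothetical illustration only; the state and event names below are borrowed from the example, not from any standard notation:

```python
# Hypothetical sketch: each software engineering action (e.g., "requirements
# analysis") is in exactly one state; named events trigger state transitions.

TRANSITIONS = {
    # (current_state, event) -> next_state
    ("done", "analysis model correction"): "awaiting changes",
    ("awaiting changes", "changes applied"): "under revision",
    ("under revision", "revision reviewed"): "done",
}

class Action:
    def __init__(self, name, state="under development"):
        self.name = name
        self.state = state

    def on_event(self, event):
        """Move to the next state if (state, event) has a defined transition."""
        key = (self.state, event)
        if key in TRANSITIONS:
            self.state = TRANSITIONS[key]
        return self.state

# The requirements analysis action is "done" until an inconsistency is found.
analysis = Action("requirements analysis", state="done")
analysis.on_event("analysis model correction")
print(analysis.state)  # awaiting changes
```

In a real concurrent model every activity, action, and task carries its own state like this, which is why the model gives an accurate picture of project status at any moment.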

1.10 Specialized Process Models


1.10.1 Component-Based Development
 The component-based development model incorporates many of the characteristics of
the spiral model. It is evolutionary in nature, demanding an iterative approach to the
creation of software.
 The component-based development model constructs applications from prepackaged
software components.
 Components can be designed as either conventional software modules or object-
oriented classes or packages of classes.
 Regardless of the technology that is used to create the components, the component-
based development model incorporates the following steps (implemented using an
evolutionary approach):
1. Available component-based products are researched and evaluated for the
application domain in question.
2. Component integration issues are considered.
3. A software architecture is designed to accommodate the components.
4. Components are integrated into the architecture.
5. Comprehensive testing is conducted to ensure proper functionality.
 The component-based development model leads to software reuse, and reusability
provides software engineers with a number of measurable benefits.

1.10.2 The Formal Methods Model

 The formal methods model encompasses a set of activities that leads to formal
mathematical specification of computer software.
 Formal methods enable you to specify, develop, and verify a computer-based system by
applying a rigorous, mathematical notation.
 A variation on this approach, called cleanroom software engineering, is currently applied
by some software development organizations.
 Formal methods are used during development; they provide a mechanism for eliminating
many of the problems that are difficult to overcome using other software engineering
paradigms.
 Ambiguity, incompleteness, and inconsistency can be discovered and corrected more
easily.
 Yet formal methods have some drawbacks:
1. The development of formal models is currently quite time consuming and expensive.
2. Because few software developers have the necessary background to apply formal
methods, extensive training is required.
3. It is difficult to use the models as a communication mechanism for technically
unsophisticated customers.
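Real formal methods use mathematical notations (e.g., Z, VDM, OCL) and proof tools. Purely as a loose, executable analogy of the specification idea, a precondition/postcondition contract for a hypothetical withdraw operation might look like this:

```python
# Loose analogy only: runtime asserts are not formal verification, but they
# show the shape of a precondition/postcondition specification.

def withdraw(balance, amount):
    # Precondition: the amount must be positive and covered by the balance.
    assert amount > 0 and amount <= balance, "precondition violated"
    new_balance = balance - amount
    # Postcondition: the balance decreases by exactly the amount withdrawn.
    assert new_balance == balance - amount and new_balance >= 0
    return new_balance

print(withdraw(100, 30))  # 70
```

A formal method would prove, rather than spot-check, that every call satisfying the precondition establishes the postcondition.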

1.10.3 Aspect-Oriented Software Development

 Aspectual requirements define those crosscutting concerns that have an impact across the
software architecture.
 Aspect-oriented software development (AOSD), often referred to as aspect-oriented programming (AOP), is a relatively new software engineering paradigm that provides a process and methodological approach for defining, specifying, designing, and constructing aspects.
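A crosscutting concern such as logging can be illustrated, as a rough analogy only, with a Python decorator; aspect languages like AspectJ provide far richer join-point models than this hypothetical sketch:

```python
import functools

# Rough analogy for a crosscutting "logging aspect": the decorator weaves the
# logging concern into a function without touching the function's own code.
def logging_aspect(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"entering {func.__name__}")
        result = func(*args, **kwargs)
        print(f"leaving {func.__name__}")
        return result
    return wrapper

@logging_aspect  # the same aspect could be applied across many functions
def place_order(item):
    return f"ordered {item}"

print(place_order("book"))
```

The point of the analogy: the logging behaviour cuts across the architecture and is specified once, separately from the business logic it wraps.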

1.11 The Unified Process


 Jacobson, Rumbaugh, and Booch worked toward a “unified method” that would combine the best features of each of their individual object-oriented analysis and design methods and adopt additional features proposed by other experts in object-oriented modeling.
 The result was UML, a unified modeling language that contains a robust notation for the modeling and development of object-oriented systems.
 UML provided the necessary technology to support object-oriented software
engineering practice, but it did not provide the process framework to guide project
teams in their application of the technology.
 Jacobson, Rumbaugh, and Booch developed the Unified Process, a framework for
object-oriented software engineering using UML.

1.11.1 Phases of the Unified Process

 The inception phase of the UP encompasses both customer communication and planning activities. By collaborating with stakeholders, business requirements for the software are identified; a rough architecture for the system is proposed; and a plan for the iterative, incremental nature of the ensuing project is developed.
 The elaboration phase encompasses the communication and modeling activities of
the generic process model. Elaboration refines and expands the preliminary use cases
that were developed as part of the inception phase and expands the architectural
representation to include five different views of the software: the use-case model, the requirements model, the design model, the implementation model, and the
deployment model.
 The construction phase of the UP is identical to the construction activity defined for
the generic software process. Using the architectural model as input, the construction
phase develops or acquires the software components that will make each use case
operational for end users.
 The transition phase of the UP encompasses the latter stages of the generic
construction activity and the first part of the generic deployment (delivery and
feedback) activity. Software is given to end users for beta testing and user feedback
reports both defects and necessary changes.
 The production phase of the UP coincides with the deployment activity of the
generic process. During this phase, the ongoing use of the software is monitored,
support for the operating environment (infrastructure) is provided, and defect reports
and requests for changes are submitted and evaluated.
1.12 Personal and Team Process Models
Watts Humphrey argues that it is possible to create a “personal software process” and/or a “team software process.” Both require hard work, training, and coordination, but both are achievable.

1.12.1 Personal Software Process (PSP)

 The Personal Software Process (PSP) emphasizes personal measurement of both the work product that is produced and the resultant quality of the work product.
 In addition, PSP makes the practitioner responsible for project planning (e.g.,
estimating and scheduling) and empowers the practitioner to control the
quality of all software work products that are developed.
 The PSP model defines five framework activities:
1. Planning: This activity isolates requirements and develops both size and resource
estimates. In addition, a defect estimate (the number of defects projected for the
work) is made. All metrics are recorded on worksheets or templates. Finally,
development tasks are identified and a project schedule is created.
2. High-level design: External specifications for each component to be constructed are
developed and a component design is created. Prototypes are built when uncertainty
exists. All issues are recorded and tracked.
3. High-level design review: Formal verification methods are applied to uncover errors
in the design. Metrics are maintained for all important tasks and work results.
4. Development: The component-level design is refined and reviewed. Code is
generated, reviewed, compiled, and tested. Metrics are maintained for all important
tasks and work results
5. Postmortem: Using the measures and metrics collected (this is a substantial amount
of data that should be analyzed statistically), the effectiveness of the process is
determined. Measures and metrics should provide guidance for modifying the process
to improve its effectiveness

1.12.2 Team Software Process (TSP)

 Watts Humphrey extended the lessons learned from the introduction of PSP and
proposed a Team Software Process (TSP).
 The goal of TSP is to build a “self-directed” project team that organizes itself to produce high-quality software.
 Humphrey defines the following objectives for TSP:
1. Build self-directed teams that plan and track their work, establish goals, and
own their processes and plans. These can be pure software teams or integrated product
teams (IPTs) of 3 to about 20 engineers.
2. Show managers how to coach and motivate their teams and how to help them
sustain peak performance.
3. Accelerate software process improvement by making CMM Level 5
behaviour normal and expected.
4. Provide improvement guidance to high-maturity organizations.
5. Facilitate university teaching of industrial-grade team skills.

Agile Process Model Development:

Agility means effective (rapid and adaptive) response to change; effective communication among all stakeholders; drawing the customer onto the team; and organizing the team so that it is in control of the work performed. Agile processes are light-weight methods that are people-based rather than plan-based.

Agility can be applied to any software process. However, to accomplish this, it is essential that the process be designed in a way that allows the project team to adapt tasks and to streamline them.

Agility and cost of change


 The conventional wisdom in software development (supported by decades of experience) is that the cost of change increases nonlinearly as a project progresses (Figure 3.1, solid black curve).
 It is relatively easy to accommodate a change when a software team is gathering
requirements (early in a project).
 A usage scenario might have to be modified, a list of functions may be extended, or a written specification can be edited. The costs of doing this work are minimal.
Fig: Change costs as a function of time in development


 Now consider that the team is in the middle of validation testing (something that occurs relatively late in the project), and an important stakeholder is requesting a major functional change.
 The change requires a modification to the architectural design of the software, the design and construction of three new components, modifications to another five components, the design of new tests, and so on.
 A well-designed agile process “flattens” the cost of change curve (shaded curve in the figure), allowing a software team to accommodate changes late in a software project without dramatic cost and time impact.
Agile process:

Any agile software process is characterized in a manner that addresses a number of key
assumptions [Fow02] about the majority of software projects:

1. It is difficult to predict in advance which software requirements will persist and which will
change. It is equally difficult to predict how customer priorities will change as the project
proceeds.
2. For many types of software, design and construction are interleaved. That is, both activities
should be performed in tandem so that design models are proven as they are created. It is
difficult to predict how much design is necessary before construction is used to prove the
design.
3. Analysis, design, construction, and testing are not as predictable (from a planning point of
view) as we might like

Agility Principles:
The Agile Alliance (see [Agi03], [Fow01]) defines 12 agility principles for those who want to
achieve agility:

1. Our highest priority is to satisfy the customer through early and continuous delivery of
valuable software.

2. Welcome changing requirements, even late in development. Agile processes harness change
for the customer’s competitive advantage.

3. Deliver working software frequently, from a couple of weeks to a couple of months, with a
preference to the shorter timescale.

4. Business people and developers must work together daily throughout the project.

5. Build projects around motivated individuals. Give them the environment and support they
need, and trust them to get the job done.

6. The most efficient and effective method of conveying information to and within a
development team is face-to-face conversation.

7. Working software is the primary measure of progress.

8. Agile processes promote sustainable development. The sponsors, developers, and users
should be able to maintain a constant pace indefinitely.

9. Continuous attention to technical excellence and good design enhances agility.

10. Simplicity—the art of maximizing the amount of work not done—is essential.

11. The best architectures, requirements, and designs emerge from self-organizing teams.

12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Extreme Programming

Extreme Programming (XP) is the most widely used approach to agile software development.

XP values:

Beck defines a set of five values that establish a foundation for all work performed as part of XP:

1. Communication:

XP emphasizes close, yet informal (verbal) collaboration between customers and developers, continuous feedback, and the avoidance of voluminous documentation as a communication medium.

2. Simplicity:

XP restricts developers to design only for immediate needs, rather than consider future needs. The intent is to create a simple design that can be easily implemented in code.

3. Feedback:

Feedback is derived from three sources: the implemented software itself, the customer, and other software team members.

4. Courage:

The strict adherence to certain XP practices demands courage. A better word might be discipline. For example, there is often significant pressure to design for future requirements.

5. Respect:

The agile team inculcates respect among its members, between other stakeholders and team members, and, indirectly, for the software itself.

 Each of these values is used as a driver for specific XP activities, actions, and tasks

The XP Process

Extreme Programming uses an object-oriented approach as its preferred development paradigm and encompasses a set of rules and practices that occur within the context of four framework activities: planning, design, coding, and testing.
 Planning

1. The planning activity (also called the planning game) begins with listening—a requirements
gathering activity that enables the technical members of the XP team to understand the
business context for the software

2. Listening leads to the creation of a set of “stories” (also called user stories) that describe
required output, features, and functionality for software to be built

3. Each story is written by the customer and is placed on an index card.

4. The customer assigns a value (i.e., a priority) to the story based on the overall business value
of the feature or function.

5. If the story is estimated to require more than three development weeks, the customer is asked to split the story into smaller stories.

6. Customers and developers work together to decide how to group stories into the next release

7. Once a basic commitment is made for a release, the XP team orders the stories that will be
developed in one of three ways: (1) all stories will be implemented immediately (within a few
weeks), (2) the stories with highest value will be moved up in the schedule and implemented
first, or (3) the riskiest stories will be moved up in the schedule and implemented first

NOTE: project velocity is the number of customer stories implemented during the first release.
Project velocity can then be used to

(1) help estimate delivery dates and schedule for subsequent releases and

(2) determine whether an overcommitment has been made for all stories across the entire
development project. If an overcommitment occurs, the content of releases is modified or end
delivery dates are changed
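The velocity arithmetic in the note above can be sketched with hypothetical numbers (the story counts below are invented for illustration):

```python
# Hypothetical numbers: 9 stories were implemented during the first release.
stories_completed_first_release = 9
velocity = stories_completed_first_release  # stories per release

# Estimating subsequent releases from velocity.
total_stories = 45
releases_needed = -(-total_stories // velocity)  # ceiling division
print(releases_needed)  # 5
```

If the team had committed to delivering all 45 stories in, say, 3 releases, this calculation would expose the overcommitment, and either release content or delivery dates would be adjusted.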

Design

1. XP design rigorously follows the KIS (keep it simple) principle. The design provides
implementation guidance for a story as it is written—nothing less, nothing more
2. XP encourages the use of CRC cards as an effective mechanism for thinking about the
software in an object-oriented context. CRC (class-responsibility collaborator) cards identify and
organize the object-oriented classes that are relevant to the current software increment

3. If a difficult design problem is encountered as part of the design of a story, XP recommends the immediate creation of an operational prototype of that portion of the design, called a spike solution. The design prototype is implemented and evaluated.
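The CRC (class-responsibility-collaborator) cards mentioned in item 2 are usually physical index cards; purely as an illustration, one could jot a card down as a small data structure:

```python
from dataclasses import dataclass, field

# Purely illustrative: a CRC card records a class name, what the class is
# responsible for, and which other classes it collaborates with.
@dataclass
class CRCCard:
    class_name: str
    responsibilities: list = field(default_factory=list)
    collaborators: list = field(default_factory=list)

# Hypothetical card for an "Order" class in the current software increment.
order = CRCCard(
    class_name="Order",
    responsibilities=["compute total", "track line items"],
    collaborators=["LineItem", "Customer"],
)
print(order.class_name, len(order.responsibilities))  # Order 2
```

The value of CRC cards is the team conversation they provoke while identifying and organizing the object-oriented classes relevant to the increment, not the artifact itself.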

Refactoring:

Refactoring is a construction technique that is also a method for design optimization.


Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves the internal structure. It is a disciplined way to clean up code (and modify/simplify the internal design) that minimizes the chances of introducing bugs.

The intent of refactoring is to control such modifications by suggesting small design changes that can radically improve the design.
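A tiny before/after sketch of the idea, with an invented pricing example: the external behavior is identical for every input, but the duplicated logic is extracted into one expression.

```python
# Before refactoring (hypothetical example): duplicated discount logic.
def price_before(amount, is_member):
    if is_member:
        return amount - amount * 0.1
    else:
        return amount - amount * 0.0

# After refactoring: the discount rate is extracted; external behavior
# is unchanged, but the internal structure is simpler to extend.
def price_after(amount, is_member):
    discount_rate = 0.1 if is_member else 0.0
    return amount * (1 - discount_rate)

print(price_before(100, True), price_after(100, True))  # 90.0 90.0
```

A unit test suite run before and after the change is what demonstrates that the refactoring did not alter behavior, which is why refactoring and XP's test practices go hand in hand.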

Coding:

1. After stories are developed and preliminary design work is done, the team does not move to code, but rather develops a series of unit tests that will exercise each of the stories that is to be included in the current release.

2. In the coding activity, a key aspect is pair programming. XP recommends that two people work together at one computer workstation to create code for a story. This provides a mechanism for real-time problem solving and real-time quality assurance (the code is reviewed as it is created).

3. As pair programmers complete their work, the code they develop is integrated with the work of others. In some cases this is performed on a daily basis by an integration team.

4. In other cases, the pair programmers have integration responsibility. This “continuous integration” strategy helps to avoid compatibility and interfacing problems and provides a “smoke testing” environment that helps to uncover errors early.

Testing:

1. Creation of unit tests before coding commences is a key element of the XP approach. The unit tests that are created should be implemented using a framework that enables them to be automated. This encourages a regression testing strategy whenever code is modified.

2. As individual unit tests are organized into a “universal testing suite,” integration and validation testing of the system can occur on a daily basis. This provides the XP team with a continual indication of progress.

3. XP acceptance tests, also called customer tests, are specified by the customer and focus on overall system features and functionality that are visible and reviewable by the customer.

4. Acceptance tests are derived from user stories that have been implemented as part of a software release.
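The test-first idea in item 1 can be sketched with Python's built-in unittest framework. The story and function names below are invented for illustration; in XP, tests like these would be written before the code they exercise:

```python
import unittest

# Hypothetical story: "the system totals an order's line items."
# In XP the unit tests are written first; order_total then exists to pass them.
def order_total(prices):
    return sum(prices)

class OrderTotalTest(unittest.TestCase):
    def test_totals_line_items(self):
        self.assertEqual(order_total([10, 20, 5]), 35)

    def test_empty_order_is_zero(self):
        self.assertEqual(order_total([]), 0)

# Because the tests use a framework, they can run automatically in a
# regression suite every time code is modified.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(OrderTotalTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Collecting many such test cases into one automated suite is what makes the daily integration and validation testing described in item 2 practical.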
