CS6403-SOFTWARE ENGINEERING

UNIT V MANAGING SOFTWARE PROJECTS

Project Management Concepts – Management spectrum – People – The Product – The Process – Process
and Product Metrics – Metrics in the process and product domain – Software Measurement – metrics for
software Quality – Integrating metrics within the Software Process – Estimation for Software Project.

5.1. ESTIMATION
 Software cost and effort estimation will never be an exact science.
 Variables such as human, technical, political, and environmental factors can affect the ultimate cost of the software and the effort applied to develop it.
 To achieve reliable cost and effort estimates, a number of options arise:
1. Delay estimation until late in the project
2. Base estimates on similar projects that have already been completed.
3. Use relatively simple decomposition techniques to generate project cost and effort
estimates.
4. Use one or more empirical models for software cost and effort estimation.
A model is based on experience (historical data) and takes the form
d = f (vi)
where,
d is one of a number of estimated values (e.g., effort, cost, project duration)
vi are selected independent parameters (e.g., estimated LOC or FP).
Decomposition techniques
1. FP based
2. LOC based
Empirical models
1. COCOMO-II model
5.1.1 Software Sizing
The accuracy of a software project estimate is predicated on a number of things:
1. The degree to which you have properly estimated the size of the product to be built.
2. The ability to translate the size estimate into human effort, calendar time, and dollars.
3. The degree to which the project plan reflects the abilities of the software team.
4. The stability of product requirements and the environment that supports the software
engineering effort.
Four different approaches to the sizing problem:
• “Fuzzy logic” sizing:
The planner must identify the type of application, establish its magnitude on a
qualitative scale, and then refine the magnitude within the original range.
• Function point sizing:
The planner develops estimates of the information domain characteristics.
• Standard component sizing:
Software is composed of a number of different “standard components” that are
generic to a particular application area. For example, the standard components for an
information system are subsystems, modules, screens, reports, interactive programs,
batch programs, files, LOC, and object-level instructions.

The project planner estimates the number of occurrences of each standard
component and then uses historical project data to estimate the delivered size per
standard component.
• Change sizing:
This approach is used when a project encompasses the use of existing software
that must be modified in some way as part of a project. The planner estimates the number
and type of modifications that must be accomplished.
5.1.2 Problem-Based Estimation:
LOC and FP data are used in two ways during software project estimation:
(1) As estimation variables to “size” each element of the software
(2) As baseline metrics collected from past projects and used in conjunction with
estimation variables to develop cost and effort projections.
 Baseline productivity metrics (e.g., LOC/pm or FP/pm) are then applied to the
appropriate estimation variable, and cost or effort for the function is derived.
 Function estimates are combined to produce an overall estimate for the entire project.
Using historical data or intuition, estimate an optimistic, most likely, and pessimistic size value
for each function or count for each information domain value.
The expected value for the estimation variable (size) S can be computed as a weighted average of
the optimistic (sopt), most likely (sm), and pessimistic (spess) estimates.
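The standard weighting here is the beta-distribution (three-point/PERT) form, in which the most likely value carries four times the weight of the extremes: S = (sopt + 4sm + spess) / 6. A minimal sketch in Python (the function name and sample values are illustrative):

```python
def expected_size(s_opt, s_m, s_pess):
    """Three-point (PERT) estimate: weighted average of the optimistic,
    most likely, and pessimistic size values, S = (sopt + 4*sm + spess)/6."""
    return (s_opt + 4 * s_m + s_pess) / 6

# e.g. optimistic 4600, most likely 6900, pessimistic 8600 LOC
print(expected_size(4600, 6900, 8600))  # -> 6800.0
```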

5.2. Lines of Code (LOC)


 LOC metric is very popular because it is the simplest to use. Using this metric, the project
size is estimated by counting the number of source instructions in the developed program.
 Obviously, while counting the number of source instructions, lines used for commenting
the code and the header lines should be ignored.
 Determining the LOC count at the end of a project is a very simple job. However,
accurate estimation of the LOC count at the beginning of a project is very difficult.
 In order to estimate the LOC count at the beginning of a project, project managers usually
divide the problem into modules and each module into sub modules and so on, until the
sizes of the different leaf-level modules can be approximately predicted.
 To be able to do this, past experience in developing similar products is helpful. By using
the estimation of the lowest level modules, project managers arrive at the total size
estimation.
Advantages:
 LOC is the simplest among all metrics available to estimate project size.
 Many existing methods use LOC as a key input.
 A large body of literature and data based on LOC already exists.
Disadvantages:
1) LOC is dependent upon the programming language.
2) A good problem size measure should consider the overall complexity of the problem and
the effort needed to solve it.
3) It penalizes well-designed but shorter programs, since fewer lines are counted even though the design effort may be greater.
4) It does not accommodate non-procedural languages.
5) It is very difficult to estimate LOC accurately at the beginning of a project; the LOC count can be computed exactly only after the code has been fully developed.
EXAMPLE: LOC APPROACH

Assume the estimated project size is 33,200 LOC, the average productivity for systems of this type is 620 LOC/pm, and the burdened labor rate is Rs. 8,000 per month. Find the total estimated project cost and effort.
Answer:
Cost per line of code = labor rate / productivity = 8000 / 620 ≈ Rs. 13.
Based on the LOC estimate and the historical productivity data, the total estimated project cost is approximately Rs. 431,000 (33,200 LOC × ~Rs. 13/LOC) and the estimated effort is 33,200 / 620 ≈ 54 person-months.
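The arithmetic above can be checked with a short script; the 33,200 LOC size is the figure used in the calculation, and the small gap from Rs. 431,000 comes from rounding the cost per LOC to Rs. 13:

```python
def loc_estimate(loc, productivity_loc_pm, labor_rate_per_month):
    """Derive cost per LOC, total project cost, and effort from an LOC estimate."""
    cost_per_loc = labor_rate_per_month / productivity_loc_pm  # currency per LOC
    total_cost = loc * cost_per_loc                            # total project cost
    effort_pm = loc / productivity_loc_pm                      # person-months
    return cost_per_loc, total_cost, effort_pm

cost_per_loc, total_cost, effort = loc_estimate(33200, 620, 8000)
print(round(cost_per_loc))  # ~ Rs. 13 per LOC
print(round(effort))        # ~ 54 person-months
```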
5.3 Function point (FP)
 The function point metric can be used to estimate the size of a software product directly from its problem specification. This is in contrast to the LOC metric, where the size can be accurately determined only after the product has been fully developed.
 The conceptual idea behind the function point metric is that the size of a software product
is directly dependent on the number of different functions or features it supports.
 A software product supporting many features would certainly be of larger size than a product with fewer features.
 Each function when invoked reads some input data and transforms it to the corresponding
output data.
 Besides using the number of input and output data values, function point metric computes
the size of a software product using three other characteristics of the product. The size of
a product in function points (FP) can be expressed as the weighted sum of these five
problem characteristics.
 The weights associated with the five characteristics were proposed empirically and
validated by the observations over many projects. Function point is computed in two
steps. The first step is to compute the unadjusted function point (UFP).

UFP = (Number of inputs) × 4 + (Number of outputs) × 5 +
      (Number of inquiries) × 4 + (Number of files) × 10 +
      (Number of interfaces) × 10
Number of inputs:
 Each data item input by the user is counted. Data inputs should be distinguished from
user inquiries.

Number of outputs:
 Each user output that provides application data to the user is counted. E.g. screens,
reports, error messages.
Number of inquiries:
 Number of inquiries is the number of distinct interactive queries which can be made by
the users. These inquiries are the user commands which require specific action by the
system.
Number of files:
 Each logical file is counted. A logical file means groups of logically related data. Thus,
logical files can be data structures or physical files.
Number of interfaces:
 Here the interfaces considered are the interfaces used to exchange information with other
systems. Examples of such interfaces are data files on tapes, disks, communication links
with other systems etc.
 Once the unadjusted function point (UFP) is computed, the technical complexity factor
(TCF) is computed next.
 TCF refines the UFP measure by considering fourteen other factors such as high
transaction rates, throughput, and response time requirements, etc.
 Each of these 14 factors is assigned a value from 0 (not present or no influence) to 5 (strong influence). The resulting numbers are summed, yielding the total degree of influence (DI).
TCF = (0.65+0.01*DI)
 As DI can vary from 0 to 70, TCF can vary from 0.65 to 1.35.
FP=UFP*TCF
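The two-step computation (UFP with the average weights, then TCF) can be expressed directly; the sample counts in the call are illustrative, not taken from the text:

```python
def function_points(inputs, outputs, inquiries, files, interfaces, di):
    """FP = UFP * TCF, using the average information-domain weights
    and TCF = 0.65 + 0.01 * DI (DI: total degree of influence, 0..70)."""
    ufp = inputs * 4 + outputs * 5 + inquiries * 4 + files * 10 + interfaces * 10
    tcf = 0.65 + 0.01 * di
    return ufp * tcf

# illustrative counts: 30 inputs, 60 outputs, 24 inquiries, 8 files, 2 interfaces
print(function_points(30, 60, 24, 8, 2, di=50))
```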
Advantages:
 Function point metric can be used to easily estimate the size of a software product
directly from the problem specification.
Disadvantages:
 Function point measure does not take into account the algorithmic complexity of a
software. That is, the function point metric implicitly assumes that the effort required to
design and develop any two functionalities of the system is the same.
Feature point metric :
 The feature point metric incorporates an extra parameter, algorithm complexity. This parameter ensures that the computed size reflects the fact that the more complex a function is, the greater the effort required to develop it, and therefore the larger its size compared to simpler functions.
 Using historical data or (when all else fails) intuition, estimate an optimistic, most likely, and pessimistic size value for each function or count for each information domain value. An implicit indication of the degree of uncertainty is provided when a range of values is specified.
 A three-point or expected value can then be computed. The expected value for the estimation variable (size) S is a weighted average of the optimistic (sopt), most likely (sm), and pessimistic (spess) estimates:

S = (sopt + 4sm + spess) / 6     (5.1)
EXAMPLE: FP APPROACH

 The estimated number of FP is derived:


 Organizational average productivity = 6.5 FP/pm.
 Burdened labor rate = $8,000 per month, which works out to approximately $1,230 per FP.
 Based on the FP estimate and the historical productivity data, the total estimated project cost is $461,000 and the estimated effort is 58 person-months.

5.4. THE MAKE/BUY DECISION


 In many software application areas, it is often more cost-effective to acquire computer software than to develop it.
 Software engineering managers are faced with a make/ buy decision that can be further
complicated by a number of acquisition options:
(1) Software may be purchased (or licensed) off-the-shelf,
(2) “full-experience” or “partial-experience” software components may be acquired and then
modified and integrated to meet specific needs,
(3) Software may be custom built by an outside contractor to meet the purchaser’s specifications.

 The steps involved in the acquisition of software are defined by the criticality of the
software to be purchased and the end cost.
 In some cases (e.g., low-cost PC software), it is less expensive to purchase and
experiment than to conduct a lengthy evaluation of potential software packages.
In the final analysis, the make/buy decision is made based on the following conditions:
(1) Will the delivery date of the software product be sooner than that for internally
developed Software?
(2) Will the cost of acquisition plus the cost of customization be less than the cost of
developing the software internally?
(3) Will the cost of outside support (e.g., a maintenance contract) be less than the cost of
internal support? These conditions apply for each of the acquisition options.

5.4.1. Creating a Decision Tree
The steps just described can be augmented using statistical techniques such as decision tree
analysis.
 For example, the figure depicts a decision tree for a software-based system X. In this case, the software engineering organization can
(1) Build system X from scratch
(2) Reuse existing partial-experience components to construct the system
(3) Buy an available software product and modify it to meet local needs
(4) Contract the software development to an outside vendor

Fig. A decision tree to support the make/buy decision


 If the system is to be built from scratch, there is a 70 percent probability that the job will
be difficult. Using the estimation techniques discussed earlier in this chapter, the project
planner estimates that a difficult development effort will cost $450,000.
 A “simple” development effort is estimated to cost $380,000. The expected value for cost, computed along any branch of the decision tree, is

expected cost = Σ (path probability)i × (estimated path cost)i

where i is the decision tree path. For the build path,

expected cost (build) = 0.30 × $380K + 0.70 × $450K = $429K

 Following other paths of the decision tree, the projected costs for reuse, purchase, and
contract, under a variety of circumstances, are also shown. The expected costs for these
paths are
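Numerically, the expected cost along a path is a probability-weighted sum of its cost outcomes. A sketch using the build-path figures given above (70% difficult at $450,000, 30% simple at $380,000):

```python
def expected_cost(outcomes):
    """Expected cost along one decision-tree path:
    sum of (path probability * estimated path cost) over its outcomes."""
    return sum(p * cost for p, cost in outcomes)

build = expected_cost([(0.70, 450_000), (0.30, 380_000)])
print(build)  # ~ $429,000 for the build path
```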

 Based on the probability and projected costs that have been noted, the lowest expected
cost is the “buy” option.

 It is important to note, however, that many criteria—not just cost— must be considered
during the decision-making process. Availability, experience of the developer/
vendor/contractor, conformance to requirements, local “politics,” and the likelihood of
change are but a few of the criteria that may affect the ultimate decision to build, reuse,
buy, or contract.
5.4.2. Outsourcing
 Sooner or later, every company that develops computer software asks a fundamental
question: “Is there a way that we can get the software and systems we need at a lower
price?” The answer to this question is not a simple one, and the emotional discussions
that occur in response to the question always lead to a single word: outsourcing.
 In concept, outsourcing is extremely simple. Software engineering activities are
contracted to a third party who does the work at lower cost and, hopefully, higher quality.
Software work conducted within a company is reduced to a contract management
activity.
 The decision to outsource can be either strategic or tactical.
 At the strategic level, business managers consider whether a significant portion of all
software work can be contracted to others.
 At the tactical level, a project manager determines whether part or all of a project can be
best accomplished by subcontracting the software work.
 Regardless of the breadth of focus, the outsourcing decision is often a financial one.
Pros:
 Cost savings can usually be achieved by reducing the number of software people and the
facilities (e.g., computers, infrastructure) that support them.
Cons:
 A company loses some control over the software that it needs. Since software is a
technology that differentiates its systems, services, and products, a company runs the risk
of putting the fate of its competitiveness into the hands of a third party.
5.5 . COCOMO MODEL
Any software development project can be classified into one of the following three categories
based on the development complexity:
1) Organic
2) Semidetached
3) Embedded
1) Organic:
 A development project can be considered of organic type, if the project deals with
developing a well understood application program, the size of the development team is
reasonably small, and the team members are experienced in developing similar types of
projects.
2) Semidetached:
 A development project can be considered of semidetached type, if the development
consists of a mixture of experienced and inexperienced staff. Team members may have
limited experience on related systems but may be unfamiliar with some aspects of the
system being developed.
3) Embedded:
 A development project is considered to be of embedded type if the software being developed is strongly coupled to complex hardware, or if stringent regulations on the operational procedures exist.

COCOMO
COCOMO (COnstructive COst MOdel) was proposed by Boehm [1981]. According to Boehm, software cost estimation should be done through three stages:
(1) Basic COCOMO
(2) Intermediate COCOMO
(3) Complete COCOMO
(1) Basic COCOMO Model :
The basic COCOMO model gives an approximate estimate of the project parameters. The basic COCOMO estimation model is given by the following expressions:

Effort = a1 × (KLOC)^a2 PM
Tdev = b1 × (Effort)^b2 Months

Where
• KLOC is the estimated size of the software product expressed in Kilo Lines of Code,
• a1, a2, b1, b2 are constants for each category of software products,
• Tdev is the estimated time to develop the software, expressed in months,
• Effort is the total effort required to develop the software product, expressed in person-months (PMs).
The effort estimation is expressed in units of person-months (PM). It is the area under the
person-month plot. It should be carefully noted that an effort of 100 PM does not imply that 100
persons should work for 1 month nor does it imply that 1 person should be employed for 100
months, but it denotes the area under the person-month curve.

Fig. Person-month curve


Every line of source text is counted as one LOC irrespective of the actual number of instructions on that line; thus, if a single instruction spans n lines, it is counted as n LOC. The values of a1, a2, b1, b2 for the different categories of products (i.e., organic, semidetached, and embedded) are summarized below. Boehm derived the above expressions by examining historical data collected from a large number of actual projects.
Estimation of development effort
For the three classes of software products, the formulas for estimating the effort based on the
code size are shown below:
Organic       : Effort = 2.4 (KLOC)^1.05 PM
Semi-detached : Effort = 3.0 (KLOC)^1.12 PM
Embedded      : Effort = 3.6 (KLOC)^1.20 PM
Estimation of development time
For the three classes of software products, the formulas for estimating the development time
based on the effort are given below:
Organic       : Tdev = 2.5 (Effort)^0.38 Months
Semi-detached : Tdev = 2.5 (Effort)^0.35 Months
Embedded      : Tdev = 2.5 (Effort)^0.32 Months

 Some insight into the basic COCOMO model can be obtained by plotting the estimated
characteristics for different software sizes. Fig. 11.4 shows a plot of estimated effort
versus product size. From fig. 11.4, we can observe that the effort is somewhat superlinear in the size of the software product. Thus, the effort required to develop a product increases very rapidly with project size.

 The development time versus the product size in KLOC is plotted in fig. 11.5. From fig. 11.5, it can be observed that the development time is a sublinear function of the size of the product, i.e., when the size of the product doubles, the time to develop the product does not double but rises moderately.
 This can be explained by the fact that for larger products, a larger number of activities
which can be carried out concurrently can be identified. The parallel activities can be
carried out simultaneously by the engineers.
 This reduces the time to complete the project. Further, from fig. 11.5, it can be observed
that the development time is roughly the same for all the three categories of products.
 It is important to note that the effort and duration estimates obtained using the COCOMO model are called the nominal effort estimate and the nominal duration estimate.
 The term nominal implies that if anyone tries to complete the project in a time shorter than the estimated duration, the cost will increase drastically.
 However, if the project is completed over a longer period than estimated, there is almost no decrease in the estimated cost.

Example:
Assume that the size of an organic type software product has been estimated to be 32,000 lines of
source code. Assume that the average salary of software engineers be Rs. 15,000/- per month.
Determine the effort required to develop the software product and the nominal development
time.
From the basic COCOMO estimation formula for organic software:
Effort = 2.4 × (32)^1.05 = 91 PM
Nominal development time = 2.5 × (91)^0.38 = 14 months
Cost required to develop the product = 14 × 15,000 = Rs. 210,000/-
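The tables above translate directly into code; this sketch reproduces the worked organic-mode example (32 KLOC → 91 PM, 14 months):

```python
# Basic COCOMO constants (a1, a2, b1, b2) for each product class
COCOMO_CONSTANTS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    """Return (effort in person-months, development time in months)."""
    a1, a2, b1, b2 = COCOMO_CONSTANTS[mode]
    effort = a1 * kloc ** a2      # Effort = a1 * (KLOC)^a2
    tdev = b1 * effort ** b2      # Tdev = b1 * (Effort)^b2
    return effort, tdev

effort, tdev = basic_cocomo(32)
print(round(effort), round(tdev))  # -> 91 14
```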
(2) Intermediate COCOMO model :
 The intermediate COCOMO model refines the initial estimate obtained using the basic COCOMO expressions by applying a set of 15 cost drivers (multipliers) based on various attributes of software development.
 If there are stringent reliability requirements on the software product, for example, the initial estimate is scaled upward. The project manager rates each of these 15 parameters for a particular project; depending on these ratings, the model supplies appropriate cost-driver values, which are multiplied with the initial estimate obtained using basic COCOMO.

In general, the cost drivers can be classified as being attributes of the following items:

 Product attributes
o Required software reliability
o Size of application database
o Complexity of the product
 Hardware attributes
o Run-time performance constraints
o Memory constraints
o Volatility of the virtual machine environment
o Required turnaround time
 Personnel attributes
o Analyst capability
o Software engineering capability
o Applications experience
o Virtual machine experience
o Programming language experience
 Project attributes
o Use of software tools
o Application of software engineering methods
o Required development schedule

Each of the 15 attributes receives a rating on a six-point scale that ranges from "very low"
to "extra high" (in importance or value). An effort multiplier from the table below applies
to the rating. The product of all effort multipliers results in an effort adjustment factor
(EAF). Typical values for EAF range from 0.9 to 1.4.

The intermediate COCOMO formula now takes the form:

Effort = a1 × (KLOC)^a2 × EAF PM
Tdev = b1 × (Effort)^b2 Months
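A minimal sketch of the adjusted effort computation, assuming the basic-COCOMO organic constants and an illustrative EAF of 1.10 (the actual cost-driver multipliers come from Boehm's published tables):

```python
def intermediate_cocomo_effort(kloc, a1, a2, eaf):
    """Effort = a1 * (KLOC)^a2 * EAF, where EAF is the product of the
    fifteen cost-driver multipliers (typically 0.9 to 1.4)."""
    return a1 * kloc ** a2 * eaf

# 32 KLOC organic product with an assumed EAF of 1.10
print(round(intermediate_cocomo_effort(32, 2.4, 1.05, 1.10)))  # ~ 100 person-months
```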
(3) Complete COCOMO model :

 A major shortcoming of both the basic and intermediate COCOMO models is that they consider a software product as a single homogeneous entity. However, most large systems are made up of several smaller sub-systems, and these sub-systems may have widely different characteristics.
 The complete COCOMO model considers these differences in characteristics of the
subsystems and estimates the effort and development time as the sum of the estimates for
the individual subsystems. The cost of each subsystem is estimated separately. This
approach reduces the margin of error in the final estimate.
 The following development project can be considered as an example application of the
complete COCOMO model. A distributed Management Information System (MIS)
product for an organization having offices at several places across the country can have
the following sub-components:
• Database part
• Graphical User Interface (GUI) part
• Communication part

5.6 COCOMO II Model


 COCOMO II is part of a hierarchy of software estimation models bearing the name COCOMO, for COnstructive COst MOdel.
 The original COCOMO model became one of the most widely used and discussed software cost estimation models in the industry.
 It has evolved into a more comprehensive estimation model, called COCOMO II.
 Like its predecessor, COCOMO II is actually a hierarchy of estimation models that
address the following areas:
 Application composition model.
o Used during the early stages of software engineering, when prototyping of user
interfaces, consideration of software and system interaction, assessment of
performance, and evaluation of technology maturity are paramount.
 Early design stage model.
o Used once requirements have been stabilized and basic software architecture has
been established.
 Post-architecture-stage model.
o Used during the construction of the software

 COCOMO II models require sizing information. Three different sizing options are available as part of the model hierarchy:
o Object points, Function points, and Lines of code (LOC).
 The COCOMO II application composition model uses object points.
 The object point is an indirect software measure that is computed using counts of the
number of
(1) screens (at the user interface)
(2) Reports
(3) Components likely to be required to build the application.

 Each object instance (e.g., a screen or report) is classified into one of three complexity
levels (i.e., simple, medium, or difficult).
 In essence, complexity is a function of the number and source of the client and server
data tables that are required to generate the screen or report and the number of views or
sections presented as part of the screen or report.
 Once complexity is determined, the number of screens, reports, and components are
weighted according to the table illustrated in Figure .

FIG. Complexity weighting for object types.

 The object point count is then determined by multiplying the original number of object
instances by the weighting factor in the figure and summing to obtain a total object point
count.
 When component-based development or general software reuse is to be applied, the percent of reuse (%reuse) is estimated and the object point count is adjusted:

NOP = (object points) × [(100 − %reuse) / 100]

where NOP is defined as new object points.


 To derive an estimate of effort based on the computed NOP value, a “productivity rate” (PROD) must be derived:

PROD = NOP / person-month

for different levels of developer experience and development environment maturity. Once the productivity rate has been determined, an estimate of project effort is computed using

estimated effort = NOP / PROD
In more advanced COCOMO II models, a variety of scale factors, cost drivers, and adjustment procedures are required.

FIG. Productivity rate for object points.

Example:
Describe in detail COCOMO model for software cost estimation. Use it to estimate the
effort required to build software for a simple ATM that produces 12 screens, 10 reports
and has 80 software components. Assume average complexity and average developer
maturity. Use application composition model with object points. (NOV/DEC 2016)
Answer:
Explain the COCOMO hierarchy (Sections 5.5 and 5.6), then solve the problem with the application composition model and object points as follows.

Formula Note:

Object Points = (Screens × weighting factor) + (Reports × weighting factor) + (Components × weighting factor)

NOP = (Object Points) × [(100 − %reuse) / 100]

PROD = NOP / person-month

Estimated Effort = NOP / PROD

Productivity rate for object points:

Developer experience               Very Low   Low   Nominal   High   Very High
Environment maturity/capability    Very Low   Low   Nominal   High   Very High
PROD                                   4       7      13       25       50

Solution for question:

Object Type       Count   Complexity Weight (average)
Screen             12       2
Report             10       5
3GL Component      80      10
Object Point = 12 *2 + 10*5 + 80*10 = 874

Assume 80% reuse.

NOP = (Object Points) × [(100 − %reuse) / 100]
    = 874 × [(100 − 80) / 100]
    = 874 × 0.2
NOP = 174.8

For nominal developer experience and nominal environment maturity, PROD = 13.

Estimated Effort = NOP / PROD
                 = 174.8 / 13
Estimated Effort ≈ 13.45 PM
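The worked example can be verified in a few lines; the 80% reuse figure is the assumption made above, and the weights are the average-complexity values from the table:

```python
def object_point_effort(screens, reports, components, weights, pct_reuse, prod):
    """COCOMO II application-composition estimate:
    object points -> NOP (adjusted for reuse) -> effort = NOP / PROD."""
    w_screen, w_report, w_component = weights
    op = screens * w_screen + reports * w_report + components * w_component
    nop = op * (100 - pct_reuse) / 100
    return op, nop, nop / prod

op, nop, effort = object_point_effort(12, 10, 80, (2, 5, 10), 80, 13)
print(op, nop, round(effort, 2))  # -> 874 174.8 13.45
```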

ANNA UNIVERSITY QUESTION AND ANSWERS


PART A
UNIT V
1. Highlight the activities in Project Planning. (APR/MAY 2015)
 Software scope
 Resources
 Project estimation
 Decomposition
2. State the importance of scheduling activity in project management.
(APR/MAY 2015)
Accurate task-duration estimates help stabilize customer relations and maintain team morale. With defined task durations, the team knows what to expect and what is expected of them.
3. Define risk and list its types. (NOV/DEC 2015)
Robert Charette presents a conceptual definition of risk: first, risk concerns future happenings; second, risk involves change, such as changes of mind, opinion, actions, or places; third, risk involves choice, and the uncertainty that choice itself entails. Thus, paradoxically, risk, like death and taxes, is one of the few certainties of life.
Types of risk: project risks, technical risks, and business risks.

4. Koushan is the project manager on a project to build a new cricket stadium in Mumbai, India. After six months of work, the project is 27% complete. At the start of the project, Koushan estimated that it would cost $50,000,000. What is the earned value? (NOV/DEC 2015)
Budget at completion, BAC = 50,000,000

Project Percent Complete = (EV / BAC) × 100
27 = (EV / 50,000,000) × 100
EV = 0.27 × 50,000,000
EV = $13,500,000

Estimate at completion (EAC) = BAC / CPI (the estimated total cost at project completion).
Variance at completion (VAC) = BAC − EAC (the estimated variance between actual total cost and planned total cost at project completion).
5. Will exhaustive testing guarantee that the program is 100% correct? (APR/MAY 2016)
No. Even very exhaustive testing cannot guarantee 100% correctness or reliability. A program may run correctly as designed, yet conditions within a particular user's computer can still cause it to fail even though it works for most other users.
6. What is risk management? (NOV/DEC 2016)
Risk management—assesses risks that may affect the outcome of the project or the quality
of the product.

7. How is productivity and cost related to function points? (NOV/DEC 2016)


Inconsistent productivity rates between projects may be an indication that a standard process is not being followed. Productivity is defined as the ratio of outputs to inputs; for software, it relates the amount of functionality delivered (e.g., in FP) to the effort required to deliver it.
The true cost of software is the sum of all costs over the life of the project, including all expected enhancement and maintenance costs. More investment up front should reduce the per-unit cost of future enhancement and maintenance activities. The unit cost can be expressed as hours/FP or $/FP.
8. What are the different types of productivity estimation measures?
(APR/MAY 2017)
 Function Point and Function Point Analysis
 COCOMO
 Cyclomatic Complexity
9. List two customer-related and technology-related risks. (APR/MAY 2017)
Customer-related risks:
 Have you worked with the customer in the past?
 Does the customer have a solid idea of what is required?
Technology-related risks:
 Is the technology to be built new to your organization?
 Do the customer’s requirements demand the creation of new algorithms or input or output technology?
10. List out the principles of project scheduling. (NOV/DEC 2017)
-Compartmentalization
- Interdependency
- Time allocation
- Effort validation
- Defined responsibilities
- Defined outcomes
- Defined milestones
11. Write a note on Risk Information Sheet (RIS). (NOV/DEC 2017)
The Risk Information Sheet documents a risk that may occur during the lifetime of a specific software project. Risk information sheets can be used to supplement or in place of a formal Risk Mitigation, Monitoring and Management (RMMM) Plan.
12. List two advantages of COCOMO model. APR/MAY 2019
 COCOMO Model is used to estimate the project cost
 COCOMO is easy to interpret, predictable and accurate.
13. Compare project risk and Business risk. APR/MAY 2019
 Project risk - that the building costs may be higher than expected because of an
increase in materials or labor costs.
 Business risk - even if the stadium is constructed on time and within budget that
it will not make money for the business.

14. What is budgeted cost of work scheduled? NOV/DEC 2019


The budgeted cost of work scheduled (BCWS) is determined for each work task represented in the schedule. During estimation, the work (in person-hours or person-days) of each software engineering task is planned.
BCWS = Σ BCWSi
where BCWSi is the effort planned for work task i.
15. Write any two differences between “known risk” and predictable risk”. NOV/DEC
2019
Known risks: those that can be uncovered after careful evaluation of the project plan, the business and technical environment in which the project is being developed, and other reliable information sources (e.g., an unrealistic delivery date, lack of documented requirements, a poor development environment).

Predictable risks: those that can be extrapolated from past project experience (e.g., experienced and skilled staff leaving the organization mid-project).

17
ANNA UNIVERSITY QUESTION AND ANSWERS
PART B
1. State the need for Risk Management and explain the activities under Risk Management.
(APRIL/MAY 2015) (NOV/DEC 2015) (APRIL/MAY 2017)
2. Write short notes on the following (APRIL/MAY 2015)
(i) Project Scheduling
(ii) Project Timeline chart and Task network
3. Discuss about COCOMO II model for software estimation.
(NOV/DEC2015)(APRIL/MAY 2017)(NOV/DEC 2019)
4. Write short notes on the following : (2 x8 = 16) (APRIL/MAY 2016)

(i) Make/Buy decision


(ii) COCOMO II
5. An application has the following: 10 low external inputs, 8 high external outputs, 13 low
internal logical files, 17 high external interface files, 11 average external inquires and
complexity adjustment factor of 1.10. What are the unadjusted and adjusted function
point counts ? (APRIL/MAY 2016)

6. Discuss Putnam resources allocation model. Derive the time and effort equations.
(APRIL/MAY 2016)

7. Suppose you have a budgeted cost of a project as Rs. 9,00,000. The project is to be
completed in 9 months. After a month, you have completed 10 percent of the project at a
total expense of Rs. 1,00,000. The planned completion should have been 15 percent. You
need to determine whether the project is on-time and on-budget? Use Earned Value
analysis approach and interpret. (NOV/DEC 2016)

Answer:
Budget at Completion (BAC) = 9,00,000
Actual Cost (AC) = 1,00,000
Planned Completion = 15%
Actual Completion = 10%
Planned Value (PV) = Planned Completion (%) × BAC
                   = (15/100) × 9,00,000
                   = 1,35,000

Earned Value (EV) = Actual Completion (%) × BAC
                  = (10/100) × 9,00,000
                  = 90,000

Cost Performance Index (CPI) = EV/AC


= 90000/100000
CPI = 0.9
Schedule Performance Index (SPI) = EV / PV
                                 = 90,000 / 1,35,000
                                 = 0.66
Therefore, CPI is 0.9, which is close to 1: for every rupee spent, only Rs. 0.90 worth of work has been performed, so the project is slightly over budget.
But SPI is less than 1 (0.66), so the project is progressing at only about two-thirds of the planned rate. The project is not on time, and corrective action should be taken.
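The earned-value arithmetic above generalizes to a small helper (the function name is illustrative; percentages are given as whole numbers):

```python
def earned_value_indices(bac, actual_cost, planned_pct, actual_pct):
    """Return (PV, EV, CPI, SPI) for an earned-value check.
    CPI < 1 means over budget; SPI < 1 means behind schedule."""
    pv = planned_pct / 100 * bac   # planned value
    ev = actual_pct / 100 * bac    # earned value
    return pv, ev, ev / actual_cost, ev / pv

pv, ev, cpi, spi = earned_value_indices(900_000, 100_000, 15, 10)
print(round(cpi, 2), round(spi, 2))  # -> 0.9 0.67 (the text truncates SPI to 0.66)
```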
8. Consider the following Function point components and their complexity. If the total
degree of influence is 52, find the estimated function points. (NOV/DEC 2016)
Function type   Estimated count   Complexity
ELF                    2               7
ILF                    4              10
EQ                    22               4
EO                    16               5
EI                    24               4

Answer:

Function type                     Estimated count   Complexity   Count × Complexity
External Interface Files (ELF)           2               7              14
Internal Logical Files (ILF)             4              10              40
External Inquiries (EQ)                 22               4              88
External Outputs (EO)                   16               5              80
External Inputs (EI)                    24               4              96
Count total (UFP)                                                      318

FP = UFP × TCF
TCF = 0.65 + 0.01 × DI = 0.65 + 0.01 × 52 = 1.17
FP = 318 × 1.17
FP = 372.06
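A quick cross-check of the computation, with the (count, weight) pairs as given in the question:

```python
def fp_from_counts(counts_and_weights, di):
    """UFP = sum(count * complexity weight); FP = UFP * (0.65 + 0.01 * DI)."""
    ufp = sum(count * weight for count, weight in counts_and_weights)
    return ufp, ufp * (0.65 + 0.01 * di)

# (count, weight) for ELF, ILF, EQ, EO, EI
ufp, fp = fp_from_counts([(2, 7), (4, 10), (22, 4), (16, 5), (24, 4)], di=52)
print(ufp, round(fp, 2))  # -> 318 372.06
```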

9. Describe in detail COCOMO model for software cost estimation. Use it to estimate the
effort required to build software for a simple ATM that produces 12 screens, 10 reports
and has 80 software components. Assume average complexity and average developer
maturity. Use application composition model with object points. (NOV/DEC 2016)
10. List the features of LOC and FP based estimation models. Compare the two models and list the advantages of one over the other. APR/MAY 2019

11. Define risk. List types of risk and explain phases in risk management. APR/ MAY 2019,
NOV/DEC 2019
