AD3351 DAA Important Questions Guide
IMPORTANT QUESTIONS
2021 REGULATION
UNIT I – INTRODUCTION
2. Differentiate time and space complexity. April / May 2010, Nov / Dec
2011
Time efficiency: indicates how fast the algorithm runs
Space efficiency: indicates how much extra memory the algorithm needs
Useful properties of the asymptotic notations:
1. f(n) ∈ O(f(n))
2. f(n) ∈ O(g(n)) iff g(n) ∈ Ω(f(n))
3. If f(n) ∈ O(g(n)) and g(n) ∈ O(h(n)), then f(n) ∈ O(h(n)) (similar to a ≤ b and
b ≤ c implying a ≤ c)
4. If f1(n) ∈ O(g1(n)) and f2(n) ∈ O(g2(n)), then f1(n) + f2(n) ∈ O(max{g1(n), g2(n)})
6. What do you mean by linear search? Nov / Dec 2011, May / June 2012
Linear Search or Sequential Search looks for the key value in the
given set of items sequentially and returns the position of the key value, or
returns −1 if the key is absent.
7. What are average, best and worst case efficiency? May / June 2014
The average-case efficiency of an algorithm is its efficiency for an average-case
input of size n. It provides information about an algorithm's behavior on a typical or
random input.
The best-case efficiency of an algorithm is its efficiency for the best-case
input of size n, which is an input or inputs for which the algorithm runs the fastest
among all possible inputs of that size.
The worst-case efficiency of an algorithm is its efficiency for the worst-case
input of size n, which is an input or inputs of size n for which the algorithm
runs the longest among all possible inputs of that size.
9. What are the components of fixed and variable parts in space complexity?
Nov / Dec 2013
Space complexity of an algorithm is the amount of memory it needs to run to
completion, i.e., from the start of execution to its termination. The space needed
by an algorithm is the sum of the following components:
Fixed component: This is independent of the characteristics of the inputs and
outputs. It includes the instruction space, the space for simple variables,
fixed-size component variables, and constants.
Variable component: This consists of the space needed by component variables
whose size depends on the particular problem instance (inputs/outputs) being
solved, the space needed by referenced variables, and the recursion stack space,
which is one of the most prominent components. It also includes data structure
components such as linked lists, heaps, trees, graphs, etc.
Therefore the total space requirement of any algorithm A can be given as
Space(A) = Fixed Components(A) + Variable Components(A)
10. Define O-notation. List the properties of Big Oh notation. Nov/Dec 2011, 12
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is
bounded above by some constant multiple of g(n) for all large n, i.e., if there exist
some positive constant c and some non-negative integer n0 such that
t(n) ≤ c·g(n) for all n ≥ n0.
11. What are little Oh and omega notations? Nov / Dec 2013
Little-oh notation: A function t(n) is said to be in o(g(n)), denoted t(n) ∈ o(g(n)),
if for every positive constant c there exists a non-negative integer n0 such that
t(n) < c·g(n) for all n ≥ n0.
Little-omega notation: A function t(n) is said to be in ω(g(n)), denoted
t(n) ∈ ω(g(n)), if for every positive constant c there exists a non-negative integer
n0 such that t(n) > c·g(n) for all n ≥ n0; it expresses a lower bound that is not
asymptotically tight.
n2 — quadratic
n3 — cubic
2n — exponential
n! — factorial
16. Compare the order of growth of n(n−1)/2 and n2. (May/June 2016)
n        n(n−1)/2    n2
1        0           1
2        1           4
4        6           16
8        28          64
10       45          100
100      4950        10000
Both functions are quadratic polynomials, but n(n−1)/2 has the lower complexity
and slower growth: its values are roughly half of the corresponding values of n2.
PART – B
Pseudocode and flowcharts are the two options that are most widely used nowadays for specifying algorithms.
(i) Pseudo code is a mixture of a natural language and programming language.
Pseudo code is usually more precise than natural language.
(ii) Flowchart is a method of expressing an algorithm by a collection of
connected geometric shapes containing descriptions of the algorithm‘s
steps.
Ex: An algorithm to determine a student’s final grade and indicate whether it
is passing or failing.
The final grade is calculated as the average of four marks.
Pseudo code:
Step 1: Input M1,M2,M3,M4
Step 2: GRADE= (M1+M2+M3+M4)/4
Step 3: if GRADE<50
Print FAIL
Else
Print PASS
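The pseudocode above can be rendered directly in Python; the marks used below are illustrative values, and the pass threshold of 50 comes from the example:

```python
def final_grade(m1, m2, m3, m4):
    """Average four marks and report PASS/FAIL (threshold 50)."""
    grade = (m1 + m2 + m3 + m4) / 4
    return grade, ("FAIL" if grade < 50 else "PASS")

grade, status = final_grade(60, 45, 70, 55)
print(grade, status)  # 57.5 PASS
```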
Proving an Algorithm’s Correctness:
Once an algorithm has been specified, you have to prove its correctness. That
is, you have to prove that the algorithm yields a required result for every
legitimate input in a finite amount of time.
In order to show that an algorithm is incorrect, you need just one instance of
its input for which the algorithm fails
Analyzing an Algorithm:
While analyzing an algorithm, we should consider the following factors.
Time efficiency: indicates how fast the algorithm runs.
Space efficiency: indicates how much extra memory it uses.
Simplicity: means generating a sequence of instructions that is easy to
understand.
Generality: we should write general algorithms for similar problems so that
we can reuse them. If you are not satisfied with the algorithm's efficiency,
simplicity or generality, you must redesign the algorithm.
Coding an algorithm:
The coding of an algorithm is done in a suitable programming language.
2. Searching
The searching problem deals with finding a given value, called a search key, in
a given set. Searching algorithms range from the straightforward sequential
search to binary search, which works on a different principle. These algorithms
play an important role in real-life applications because they are used for storing
and retrieving information from large databases. Some algorithms work faster
but require more memory; some are very fast but applicable only to sorted
arrays. Searching also has to coexist with the addition and deletion of records;
in such cases, the data structures and algorithms are chosen to balance the
required set of operations.
3. String processing
A string is a sequence of characters. The most common ones are text strings,
which consist of letters, numbers and special characters; bit strings consist of
zeros and ones. The most important problem is string matching, which is used
for searching for a given word in a text, e.g., by sequential searching or the
brute-force string matching algorithm.
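Brute-force string matching tries the pattern at every alignment in the text; a minimal Python sketch:

```python
def brute_force_match(text, pattern):
    """Return the index of the first occurrence of pattern in text, or -1."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):       # try every alignment of the pattern
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1                   # characters match so far
        if j == m:                   # full match found at shift i
            return i
    return -1

print(brute_force_match("NOBODY_NOTICED_HIM", "NOT"))  # 7
```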
4. Graph problems
One of the interesting areas in algorithmics is graph algorithms. A graph is a
collection of points called vertices, which are connected by line segments called
edges. Graphs are used for modeling a wide variety of real-life applications such
as transportation and communication networks.
Graph problems include graph traversal, shortest-path and topological sorting
algorithms. Some graph problems are very hard: only very small instances can
be solved in a realistic amount of time even with the fastest computers.
There are two common problems: The traveling salesman problem,
finding the shortest tour through n cities that visits every city exactly once
The graph-coloring problem is to assign the smallest number of colors to
vertices of a graph so that no two adjacent vertices are of the same color. It arises
in event-scheduling problem, where the events are represented by vertices that
are connected by an edge if the corresponding events cannot be scheduled in
the same time, a solution to this graph gives an optimal schedule.
5. Combinatorial problems
The traveling salesman problem and the graph-coloring problem are examples of
combinatorial problems. These are problems that ask us to find a combinatorial
object, such as a permutation, combination or subset, that satisfies certain
constraints and has some desired property (e.g., maximizes a value or minimizes
a cost).
These problems are difficult to solve for the following reasons. First, the number
of combinatorial objects grows extremely fast with a problem's size. Second,
there are no known algorithms that solve them in an acceptable amount of time.
6. Geometric problems
Geometric algorithms deal with geometric objects such as points, lines and
polygons. It also includes various geometric shapes such as triangles, circles etc.
The applications of these algorithms are in computer graphics, robotics, etc. The
two most widely studied problems are the closest-pair problem: given n points
in the plane, find the closest pair among them; and the convex-hull problem: find
the smallest convex polygon that would include all the points of a given set.
7. Numerical problems
This is another large special area of applications, where the problems involve
mathematical objects of a continuous nature: solving equations, computing
definite integrals, evaluating functions, and so on. Such problems can be solved
only approximately, since they require real numbers, which can be represented
in a computer only approximately. This can also lead to an accumulation of
round-off errors. The algorithms designed here are mainly used in scientific and
engineering applications.
Analysis:
Let us denote by C(n) the number of times this comparison is executed and try
to find a formula expressing it as a function of size n. The algorithm makes one
comparison on each execution of the loop, which is repeated for each value of
the loop's variable i within the bounds 1 and n − 1 (inclusive). Therefore, we get
the following sum for C(n):
C(n) = Σ (i = 1 to n − 1) 1
This is an easy sum to compute because it is nothing other than 1 repeated n − 1
times. Thus,
C(n) = n − 1 ∈ Θ(n)
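The analysis above matches the standard find-the-maximum scan; a minimal Python sketch with an explicit comparison counter (the specific algorithm is assumed from context, since its pseudocode is not reproduced here):

```python
def max_element(a):
    """Return the largest element and the number of comparisons made."""
    comparisons = 0
    maxval = a[0]
    for i in range(1, len(a)):   # loop variable i runs from 1 to n - 1
        comparisons += 1         # exactly one comparison per iteration
        if a[i] > maxval:
            maxval = a[i]
    return maxval, comparisons

print(max_element([3, 9, 2, 7]))  # (9, 3) -- C(n) = n - 1
```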
4. Explain in detail about asymptotic notations. (May/June
2016)(Nov/Dec 2016)
ASYMPTOTIC NOTATIONS
When the input size increases, the running time also increases; asymptotic
notations describe this growth by comparing a function f(n) against a simpler
function g(n). (We always choose n0 ≥ 1 because n is an input size, so it must be
at least 1.)
Big-Oh (O) — upper bound: f(n) ∈ O(g(n)) if there exist a positive constant c
and a non-negative integer n0 such that f(n) ≤ c·g(n) for all n ≥ n0.
Example: let f(n) = 3n + 2 and g(n) = n. Then 3n + 2 ≤ 4n for all n ≥ 2, so with
c = 4 and n0 = 2 we have 3n + 2 ∈ O(n).
Big-Omega (Ω) — lower bound: f(n) ∈ Ω(g(n)) if there exist a positive constant
c and a non-negative integer n0 such that f(n) ≥ c·g(n) for all n ≥ n0.
Example: 3n + 2 ≥ 1·n for all n ≥ 1, so with c = 1 and n0 = 1 we have
3n + 2 ∈ Ω(n).
Big-Theta (θ) — tight bound: f(n) ∈ θ(g(n)) if there exist positive constants c1
and c2 (two different constants) and a non-negative integer n0 such that
c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0, i.e., f(n) is bounded both above and below.
Example: 1·n ≤ 3n + 2 ≤ 4·n for all n ≥ 2, with c1 = 1 and c2 = 4.
Hence 3n + 2 ∈ θ(n).
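The constants from the 3n + 2 example can be checked numerically; a small Python sketch:

```python
# Check 1*n <= 3n + 2 <= 4*n for n >= 2, i.e., 3n + 2 is in Theta(n)
# with c1 = 1, c2 = 4, n0 = 2.
c1, c2, n0 = 1, 4, 2
for n in range(n0, 1000):
    f = 3 * n + 2
    assert c1 * n <= f <= c2 * n
print("3n+2 is in Theta(n) with c1=1, c2=4, n0=2")
```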
5. With an algorithm explain in detail about the linear search. Give its efficiency.
SEQUENTIAL SEARCH
Sequential search looks for the key value in the given set of items sequentially
and returns the position of the key value, or −1 if the key is not present.
ALGORITHM SequentialSearch(A[0..n−1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n−1] and a search key K
//Output: The index of the first element of A that matches K, or −1 if there
//are no matching elements
i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n return i
else return −1
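The pseudocode translates line for line into Python:

```python
def sequential_search(a, key):
    """Scan a left to right; return index of the first match, or -1."""
    i = 0
    while i < len(a) and a[i] != key:
        i += 1
    return i if i < len(a) else -1

print(sequential_search([3, 14, 27, 31, 42, 55], 42))  # 4
```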
Analysis:
For sequential search, best-case inputs are lists of size n with their first element
equal to the search key; accordingly, Cbest(n) = 1.
Average Case Analysis:
The standard assumptions are that:
a. the probability of a successful search is equal to p (0 ≤ p ≤ 1), and
b. the probability of the first match occurring in the ith position of the list is
the same for every i.
Under these assumptions, the average number of key comparisons Cavg(n) is
found as follows. In the case of a successful search, the probability of the first
match occurring in the ith position of the list is p/n for every i, and the number
of comparisons made by the algorithm in such a situation is obviously i. In the
case of an unsuccessful search, all n comparisons are made with probability
1 − p. Hence
Cavg(n) = [1 · p/n + 2 · p/n + ... + n · p/n] + n · (1 − p)
        = (p/n) · n(n + 1)/2 + n(1 − p)
        = p(n + 1)/2 + n(1 − p)
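Under the assumptions above, the average count works out to the closed form Cavg(n) = p(n + 1)/2 + n(1 − p); a small Python check of that formula:

```python
def c_avg(n, p):
    """Average comparisons: success at position i (prob p/n) costs i;
    an unsuccessful search (prob 1 - p) costs n comparisons."""
    return sum(i * p / n for i in range(1, n + 1)) + n * (1 - p)

# Compare against the closed form p(n+1)/2 + n(1-p)
n, p = 100, 0.5
print(abs(c_avg(n, p) - (p * (n + 1) / 2 + n * (1 - p))) < 1e-9)  # True
```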
Example 1:
ALGORITHM Factorial(n)
//Computes n! recursively
if n = 0
    return 1
else
    return Factorial(n − 1) * n
Example 2:
The problem has an elegant recursive solution, To move n > 1 disks from peg 1 to peg 3 (with
peg 2 as auxiliary), we first move recursively n - 1 disks from peg 1 to peg 2 (with peg 3 as
auxiliary), then move the largest disk directly from peg 1 to peg 3, and, finally, move
recursively n - 1 disks from peg 2 to peg 3 (using peg 1 as auxiliary). Of course, if n = 1, we
simply move the single disk directly from the source peg to the destination peg.
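The recursive Tower of Hanoi solution described above can be sketched in Python, returning the sequence of moves:

```python
def hanoi(n, source, dest, aux, moves=None):
    """Recursively move n disks from source to dest using aux as auxiliary."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, dest))            # base case: one disk, move it
    else:
        hanoi(n - 1, source, aux, dest, moves)  # move n-1 disks out of the way
        moves.append((source, dest))            # largest disk to destination
        hanoi(n - 1, aux, dest, source, moves)  # stack n-1 disks on largest
    return moves

print(len(hanoi(3, 1, 3, 2)))  # 7 moves: 2^n - 1
```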
PART-C
1. Give an algorithm to check whether all the elements in a given array of n
elements are distinct. Find the worst case complexity of the same.
ALGORITHM UniqueElements(A[0..n−1])
//Input: An array A[0..n−1]
//Output: Returns "true" if all the elements in A are distinct and "false" otherwise
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        if A[i] = A[j] return false
return true
In the worst case (all elements distinct), the inner comparison is executed for
every pair, n(n − 1)/2 times, so the worst-case complexity is Θ(n2).
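A direct Python rendering of the pairwise-comparison algorithm:

```python
def all_distinct(a):
    """Compare every pair once; worst case (all distinct) makes
    n(n-1)/2 comparisons, hence Theta(n^2)."""
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] == a[j]:
                return False
    return True

print(all_distinct([1, 2, 3]), all_distinct([1, 2, 1]))  # True False
```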
2. Give the recursive algorithm which finds the number of binary digits in the
binary representation of a positive decimal integer. Find the recurrence
relation and complexity. (May/June 2016)
ALGORITHM BinRec(n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n's binary representation
if n = 1 return 1
else return BinRec(⌊n/2⌋) + 1
Algorithm analysis
The number of additions made in computing BinRec(⌊n/2⌋) is A(⌊n/2⌋), plus one
more addition is made by the algorithm to increase the returned value by 1. This
leads to the recurrence
A(n) = A(⌊n/2⌋) + 1 for n > 1.
Since the recursive calls end when n is equal to 1 and no additions are made
then, the initial condition is A(1) = 0.
The standard approach to solving such a recurrence is to solve it only for
n = 2k, which gives A(2k) = k; hence A(n) = log2 n ∈ Θ(log n).
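The recursive algorithm can be checked against Python's own binary representation:

```python
def bin_rec(n):
    """Number of binary digits of n (n >= 1), via A(n) = A(n // 2) + 1."""
    if n == 1:
        return 1
    return bin_rec(n // 2) + 1

print(bin_rec(13))  # 4, since 13 = 1101 in binary
```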
The general divide-and-conquer control abstraction has the form:
Algorithm divide_and_conquer(P)
{
    if small(P)
        return solve(P);
    else {
        divide P into smaller instances P1, P2, ..., Pk;
        return combine(divide_and_conquer(P1),
                       divide_and_conquer(P2),
                       ...
                       divide_and_conquer(Pk));
    }
}
8. Differentiate linear search and binary search. Nov / Dec 2014
Linear search: Linear Search or Sequential Search looks for the key value in the
given set of items sequentially and returns the position of the key value, or −1
if it is absent.
Binary search: Binary search is a remarkably efficient algorithm for
searching in a sorted array. It works by comparing a search key K with the
arrays middle element A[m]. If they match the algorithm stops; otherwise the
same operation is repeated recursively for the first half of the array if K <
A[m] and the second half if K > A[m].
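The halving process described above can be sketched iteratively in Python:

```python
def binary_search(a, key):
    """Search a sorted list; return the index of key, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == key:
            return mid
        elif key < a[mid]:
            hi = mid - 1       # continue in the left half
        else:
            lo = mid + 1       # continue in the right half
    return -1

print(binary_search([3, 14, 27, 31, 42, 55, 70, 81, 98], 55))  # 5
```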
10. Distinguish between quick sort and merge sort. Nov / Dec 2011
Merge sort: the array is partitioned at its midpoint into two halves; the main
work is in combining (merging) the sorted halves; the running time is
Θ(n log n) in all cases.
Quick sort: the array is partitioned around a pivot element; the main work is
the partitioning itself, and no combining step is needed; the running time is
Θ(n log n) on average but Θ(n2) in the worst case.
PART – B
1. Using exhaustive search solve traveling salesman problem,
knapsack problem and assignment problem for the given data.
EXHAUSTIVE SEARCH
Knapsack instance: capacity W = 10.
item    weight    value
1       7         $42
2       3         $12
3       4         $40
4       5         $25
Solution:
Subset      Total weight      Total value
F 0 $0
{1} 7 $42
{2} 3 $12
{3} 4 $40
{4} 5 $25
{1,2} 10 $54
{1,3} 11 Not feasible
{1,4} 12 Not feasible
{2,3} 7 $52
{2,4} 8 $37
{3,4} 9 $65
{1,2,3} 14 Not feasible
{1,2,4} 15 Not feasible
{1,3,4} 16 Not feasible
{2,3,4} 12 Not feasible
{1,2,3,4} 19 Not feasible
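The subset table above can be generated and the best feasible subset selected with a short Python sketch:

```python
from itertools import combinations

def exhaustive_knapsack(weights, values, capacity):
    """Try every subset; return (best_value, best subset of 1-based indices)."""
    n = len(weights)
    best_value, best_subset = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            w = sum(weights[i] for i in subset)
            v = sum(values[i] for i in subset)
            if w <= capacity and v > best_value:   # feasible and better
                best_value, best_subset = v, tuple(i + 1 for i in subset)
    return best_value, best_subset

# Instance from the table above: W = 10
print(exhaustive_knapsack([7, 3, 4, 5], [42, 12, 40, 25], 10))  # (65, (3, 4))
```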
Assignment problem instance (cost of assigning person i to job j):
            Job 1   Job 2   Job 3   Job 4
Person 1      9       2       7       8
Person 2      6       4       3       7
Person 3      5       8       1       8
Person 4      7       6       9       4
Solution: exhaustive search generates all 4! = 24 permutations of jobs, computes
the total cost of each, and selects the cheapest. The optimal assignment is
Person 1 → Job 2, Person 2 → Job 1, Person 3 → Job 3, Person 4 → Job 4,
with total cost 2 + 6 + 1 + 4 = 13.
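The full enumeration of person-to-job permutations can be sketched in Python:

```python
from itertools import permutations

def exhaustive_assignment(cost):
    """Try all n! person->job assignments; return (min_cost, best_perm),
    where best_perm[i] is the 0-based job assigned to person i."""
    n = len(cost)
    best_cost, best = float("inf"), None
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_cost, best = c, perm
    return best_cost, best

cost = [[9, 2, 7, 8],
        [6, 4, 3, 7],
        [5, 8, 1, 8],
        [7, 6, 9, 4]]
print(exhaustive_assignment(cost))  # (13, (1, 0, 2, 3))
```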
2. Explain the binary search with suitable example. Nov/ Dec 2011 (Nov/Dec 15)
Write an algorithm to perform binary search on a sorted list of elementsand
analyze the algorithm. April / May 2011
BINARY SEARCH
It is an efficient algorithm for searching in a sorted array. It is an example of the
divide-and-conquer technique.
Example:
Given set of elements:
3  14  27  31  42  55  70  81  98
Search key: K = 55
The iterations of the algorithm are as follows (l and r are the current bounds and
m = ⌊(l + r)/2⌋):
l = 0, r = 8, m = 4: A[4] = 42 < 55, so search the right half (l = 5)
l = 5, r = 8, m = 6: A[6] = 70 > 55, so search the left half (r = 5)
l = 5, r = 5, m = 5: A[5] = 55 = K, key found at index 5
For simplicity, three-way comparisons are considered. This assumes that after one
comparison of K with A[m], the algorithm can determine whether K is smaller than, equal to, or
larger than A[m]. The number of comparisons depends not only on n but also on the particular
instance of the problem.
The worst-case comparison count Cw(n) covers all arrays that do not contain the search key;
after one comparison the algorithm considers an array of half the size:
Cw(n) = Cw(⌊n/2⌋) + 1 for n > 1, Cw(1) = 1    ----- eqn (1)
To solve such recurrence equations, assume that n = 2k to obtain the solution
Cw(2k) = k + 1 = log2 n + 1
For any positive even number n, n = 2i, where i > 0. Now the LHS of eqn (1) is:
Cw(n) = ⌊log2 n⌋ + 1 = ⌊log2 2i⌋ + 1 = (1 + ⌊log2 i⌋) + 1 = ⌊log2 i⌋ + 2
The RHS of eqn (1) for n = 2i is
Cw(⌊n/2⌋) + 1 = Cw(i) + 1 = (⌊log2 i⌋ + 1) + 1 = ⌊log2 i⌋ + 2
Since both expressions are the same, the assertion Cw(n) = ⌊log2 n⌋ + 1 is proved.
The worst-case efficiency is in θ(log n): since the algorithm reduces the array to about half
its size on each iteration, the number of iterations needed to reduce the initial size n to the
final size 1 is about log2 n. The logarithmic function grows so slowly that its values remain
small even for very large values of n.
The average-case efficiency is the number of key comparisons made which is slightly smaller
than the worst case.
5. Explain in detail about Strassen’s Matrix Multiplication. (Nov/Dec 15)
STRASSEN’S MATRIX MULTIPLICATION
This problem asks for the product of two n × n matrices.
Volker Strassen in 1969 introduced a set of formulas that uses fewer
multiplications than the brute-force (definition-based) algorithm, at the cost of
more additions. For 2-by-2 matrices A = [aij] and B = [bij], the product C = AB is
c00 = m1 + m4 − m5 + m7        c01 = m3 + m5
c10 = m2 + m4                  c11 = m1 + m3 − m2 + m6
where,
m1 = (a00 + a11) * (b00 + b11)
m2 = (a10 + a11) * b00
m3 = a00 * (b01 − b11)
m4 = a11 * (b10 − b00)
m5 = (a00 + a01) * b11
m6 = (a10 − a00) * (b00 + b01)
m7 = (a01 − a11) * (b10 + b11)
Thus, to multiply two 2-by-2 matrices, Strassen's algorithm requires seven
multiplications and 18 additions/subtractions, whereas the brute-force algorithm
requires eight multiplications and 4 additions. Let A and B be two n-by-n
matrices where n is a power of two (if not, pad the rows and columns with
zeros). We can divide A, B and their product C into four n/2-by-n/2 blocks and
apply the same formulas blockwise.
Analysis:
Let M(n) be the number of multiplications made in multiplying two n-by-n
matrices by Strassen's algorithm. The recurrence relation is
M(n) = 7M(n/2) for n > 1, M(1) = 1,
which solves to M(n) = 7^(log2 n) = n^(log2 7) ≈ n^2.807.
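The seven products m1..m7 for the 2-by-2 base case can be coded directly; the input matrices below are illustrative:

```python
def strassen_2x2(A, B):
    """Strassen's seven products for 2x2 matrices (formulas m1..m7)."""
    (a00, a01), (a10, a11) = A
    (b00, b01), (b10, b11) = B
    m1 = (a00 + a11) * (b00 + b11)
    m2 = (a10 + a11) * b00
    m3 = a00 * (b01 - b11)
    m4 = a11 * (b10 - b00)
    m5 = (a00 + a01) * b11
    m6 = (a10 - a00) * (b00 + b01)
    m7 = (a01 - a11) * (b10 + b11)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 + m3 - m2 + m6]]

print(strassen_2x2([[1, 3], [7, 5]], [[6, 8], [4, 2]]))  # [[18, 14], [62, 66]]
```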
6. Write a brute force algorithm to find the closest points in a set of n points, and an
algorithm based on divide and conquer to solve the convex hull problem.
April / May 2015, Nov/Dec 15
Find the two closest points in a set of n points (in the two-dimensional Cartesian plane).
Brute-force algorithm
Compute the distance between every pair of distinct points and return the indexes of thepoints for
which the distance is the smallest.
ALGORITHM BruteForceClosestPair(P)
//Input: A list P of n (n ≥ 2) points p1 = (x1, y1), ..., pn = (xn, yn)
//Output: Indices index1 and index2 of the closest pair of points
dmin ← ∞
for i ← 1 to n − 1 do
    for j ← i + 1 to n do
        d ← sqrt((xi − xj)2 + (yi − yj)2)    // sqrt – square root function
        if d < dmin
            dmin ← d; index1 ← i; index2 ← j
return index1, index2
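A direct Python rendering of the brute-force closest-pair algorithm (0-based indices; the point set is illustrative):

```python
from math import dist, inf

def brute_force_closest_pair(points):
    """Compare every distinct pair; return (dmin, index1, index2)."""
    n = len(points)
    dmin, index1, index2 = inf, -1, -1
    for i in range(n - 1):
        for j in range(i + 1, n):
            d = dist(points[i], points[j])   # Euclidean distance
            if d < dmin:
                dmin, index1, index2 = d, i, j
    return dmin, index1, index2

print(brute_force_closest_pair([(0, 0), (5, 5), (1, 0), (9, 9)]))  # (1.0, 0, 2)
```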
For the divide-and-conquer convex-hull algorithm, finding the point farthest away from the
line P1P2 can be done in linear time.
Time efficiency: T(n) = T(x) + T(y) + T(z) + T(v) + O(n), where x + y + z + v ≤ n.
Find the two closest points in a set of n points (in the two-dimensional Cartesian plane).It
computes the distance between every pair of distinct points and return the indexes of the points for
which the distance is the smallest.
Algorithm
Step 0 Sort the points by x (list one) and then by y (list two).
Step 1 Divide the points given into two subsets S1 and S2 by a vertical line x = c so
that half the points lie to the left of or on the line and half lie to the right of or on the line.
Step 2 Find recursively the closest pairs for the left and right subsets.
Step 3 Set d = min{d1, d2}.
Limit the attention to the points in the symmetric vertical strip of width 2d
around the dividing line, since any closer pair must lie there. Let C1 and C2 be
the subsets of points in the left subset S1 and of the right subset S2,
respectively, that lie in this vertical strip. The points
in C1 and C2 are stored in increasing order of their y coordinates, taken from
the second list.
Step 4 For every point P(x, y) in C1, we inspect points in C2 that may be closer to
P than d. There can be no more than 6 such points, because any pair of points
within C2 is at least d apart.
Time Complexity:
Running time of the algorithm (without sorting) is:
T(n) = 2T(n/2) + M(n), where M(n) ∈ Θ(n).
By the Master Theorem (with a = 2, b = 2, d = 1),
T(n) ∈ Θ(n log n).
So the total time, including the initial sort, is Θ(n log n).
PART-C
1. Explain the method used for performing Multiplication of two large
integers.(May/June 2016)
Some applications like modern cryptography require manipulation of integers that are over
100 decimal digits long. Since such integers are too long to fit in a single word of a modern
computer, they require special treatment.
In the conventional pen-and-pencil algorithm for multiplying two n-digit integers, each of the
n digits of the first number is multiplied by each of the n digits of the second number, for a
total of n2 digit multiplications.
The divide-and-conquer method does the above multiplication in fewer than n2 digit multiplications.
Example:
If a and b are n-digit integers (n even), split each into halves: a = a1·10^(n/2) + a0
and b = b1·10^(n/2) + b0. Then
a * b = c2·10^n + c1·10^(n/2) + c0,
where
c2 = a1 * b1 is the product of their first halves,
c0 = a0 * b0 is the product of their second halves, and
c1 = (a1 + a0) * (b1 + b0) − (c2 + c0).
If n/2 is even, we can apply the same method for computing the products c2, c0, and c1. Thus, if n is
a power of 2, we have a recursive algorithm for computing the product of two n-digit integers. In its
pure form, the recursion is stopped when n becomes 1. It can also be stopped when we deem n small
enough to multiply the numbers of that size directly.
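The three-product scheme (often called Karatsuba multiplication) can be sketched recursively in Python:

```python
def karatsuba(a, b):
    """Multiply non-negative integers using three recursive half-size
    products c2, c0 and c1 = (a1+a0)(b1+b0) - (c2+c0)."""
    if a < 10 or b < 10:                 # base case: single-digit factor
        return a * b
    n = max(len(str(a)), len(str(b)))
    half = n // 2
    p = 10 ** half
    a1, a0 = divmod(a, p)                # split into high/low halves
    b1, b0 = divmod(b, p)
    c2 = karatsuba(a1, b1)               # product of the first halves
    c0 = karatsuba(a0, b0)               # product of the second halves
    c1 = karatsuba(a1 + a0, b1 + b0) - (c2 + c0)
    return c2 * 10 ** (2 * half) + c1 * p + c0

print(karatsuba(2135, 4014))  # 8569890
```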
Worked example (matrices recovered from the surviving values b21 = 4, b22 = 2
and s1 = b12 − b22 = 6):
A = [1 3]    B = [6 8]
    [7 5]        [4 2]
// Define the "S" matrix entries, e.g. s1 = b12 − b22 = 8 − 2 = 6;
// the remaining S values and the seven products are computed similarly.
RESULT:
C = [18 14]
    [62 66]
Dynamic programming – Principle of optimality – Coin changing problem – Warshall's and Floyd's
algorithms – Optimal Binary Search Trees – Multistage graph – Knapsack Problem and Memory functions.
Greedy Technique – Dijkstra's algorithm – Huffman Trees and codes – 0/1 Knapsack problem.
PART – A
if n = 0 return C
else return "no solution"
13. What is the best algorithm suited to identify the topography for a graph.
Mention its efficiency factors. (Nov/Dec 2015)
Prim's and Kruskal's algorithms are the best suited to identify the topography
(minimum spanning tree) of a graph. With a min-heap, Prim's algorithm runs in
O(E log V); with pre-sorted edges and union-find, Kruskal's runs in O(E log E).
15. Define the single source shortest path problem. (May/June 2016)
(Nov/Dec 2016)
Dijkstra's algorithm solves the single-source shortest path problem of finding shortest
paths from a given vertex (the source) to all the other vertices of a weighted graph or
digraph. Dijkstra's algorithm provides a correct solution for a graph with non-negative
weights.
PART-B
1. Write down and explain the algorithm to solve all pair shortest path
algorithm. April / May 2010
Write and explain the algorithm to compute all pair source shortestpath
using dynamic programming. Nov / Dec 2010
Explain in detail about Floyd’s algorithm. (Nov/Dec 2016)
ALGORITHM Floyd(W[1..n, 1..n])
//Implements Floyd's algorithm for the all-pairs shortest-paths problem
//Input: The weight matrix W of a graph
//Output: The distance matrix D of the shortest paths' lengths
D ← W
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            D[i, j] ← min{D[i, j], D[i, k] + D[k, j]}
return D
Example:
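The triple loop can be sketched in Python; the weight matrix below is an assumed illustrative instance (the original example figure is not reproduced here):

```python
def floyd(W):
    """All-pairs shortest paths; W[i][j] is the edge weight
    (float('inf') if there is no edge, 0 on the diagonal)."""
    n = len(W)
    D = [row[:] for row in W]          # D starts as a copy of W
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

INF = float("inf")
W = [[0, INF, 3, INF],
     [2, 0, INF, INF],
     [INF, 7, 0, 1],
     [6, INF, INF, 0]]
print(floyd(W))  # [[0, 10, 3, 4], [2, 0, 5, 6], [7, 7, 0, 1], [6, 16, 9, 0]]
```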
2. Write the algorithm to compute the 0/1 knapsack problem using dynamic programming
and explain. Nov / Dec 2010, April / May 2015, Nov/Dec 15
KNAPSACK PROBLEM
Given n items of integer weights w1, w2, ..., wn and values v1, v2, ..., vn, and a
knapsack of integer capacity W, find the most valuable subset of the items that fit into
the knapsack.
Consider an instance defined by the first i items and capacity j (j ≤ W).
Let V[i, j] be the optimal value of such an instance.
The Recurrence:
To design a dynamic programming algorithm, we need to derive a recurrence
relation that expresses a solution to an instance of the knapsack problem in terms
of solutions to its smaller sub instances.
Let us consider an instance defined by the first i items, 1≤i≤n, with weights
w1, ... , wi, values v1, ... , vi, and knapsack capacity j, 1 ≤j ≤
W. Let V[i, j] be the value of an optimal solution to this instance, i.e., the value
of the most valuable subset of the first i items that fit into the knapsack of
capacity j.
Divide all the subsets of the first i items that fit the knapsack of capacity j into
two categories: those that do not include the ith item and those that do.
Two possibilities for the most valuable subset for the subproblem P(i, j):
i. It does not include the ith item: V[i, j] = V[i−1, j]
ii. It includes the ith item: V[i, j] = vi + V[i−1, j − wi]
Hence V[i, j] = max{V[i−1, j], vi + V[i−1, j − wi]} if j − wi ≥ 0, and
V[i, j] = V[i−1, j] if j − wi < 0, with the initial conditions V[0, j] = 0 and V[i, 0] = 0.
                  j=0    1     2     3     4     5
(no items)          0    0     0     0     0     0
w1 = 2, v1 = 12     0    0    12    12    12    12
w2 = 1, v2 = 10     0   10    12    22    22    22
w3 = 3, v3 = 20     0   10    12    22    30    32
w4 = 2, v4 = 15     0   10    15    25    30    37
Since V[4, 5] ≠ V[3, 5], item 4 was included in an optimal solution along with
an optimal subset for filling the 5 − 2 = 3 remaining units of the knapsack capacity.
The latter is represented by element V[3, 3]. Since V[3, 3] = V[2, 3], item 3 is
not a part of an optimal subset.
Since V[2, 3] ≠ V[1, 3], item 2 is a part of an optimal selection, which leaves
element V[1, 3 − 1] to specify its remaining composition.
Similarly, since V[1, 2] ≠ V[0, 2], item 1 is the final part of the optimal solution
{item 1, item 2, item 4}.
Efficiency:
The time efficiency and space efficiency of this algorithm are both in Θ(nW).
The time needed to find the composition of an optimal solution is in O(n + W).
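The bottom-up table construction can be sketched in Python, using the instance from the table above:

```python
def knapsack_dp(weights, values, W):
    """Bottom-up dynamic programming; returns (optimal value, table V)."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]   # V[0][j] = V[i][0] = 0
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            V[i][j] = V[i - 1][j]               # exclude item i
            if weights[i - 1] <= j:             # include item i if it fits
                V[i][j] = max(V[i][j],
                              values[i - 1] + V[i - 1][j - weights[i - 1]])
    return V[n][W], V

value, V = knapsack_dp([2, 1, 3, 2], [12, 10, 20, 15], 5)
print(value)  # 37
```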
For the optimal binary search tree, the principal recurrence is
C(i, j) = min over i ≤ k ≤ j of {C(i, k − 1) + C(k + 1, j)} + Σ (s = i to j) ps,
with C(i, i − 1) = 0 and C(i, i) = pi.
EXAMPLE
Let us illustrate the algorithm by applying it to the four-key set we used at the beginning of
this section:
KEY            A      B      C      D
PROBABILITY    0.1    0.2    0.4    0.3
Initial Tables (left: main cost table C; right: root table R):
    0    1    2    3    4          0   1   2   3   4
1 0 0.1 1 1
2 0 0.2 2 2
3 0 0.4 3 3
4 0 0.3 4 4
5 0 5
K=1: c(1,2) = c(1,0) + c(2,2) + 0.1 + 0.2 = 0 + 0.2 + 0.1 + 0.2 = 0.5
K=2: c(1,2) = c(1,1) + c(3,2) + 0.1 + 0.2 = 0.1 + 0 + 0.1 + 0.2 = 0.4
0 1 2 3 4 0 1 2 3 4
1 0 0.1 0.4 1 1 2
2 0 0.2 2 2
3 0 0.4 3 3
4 0 0.3 4 4
5 0 5
K=2: c(2,3) = c(2,1) + c(3,3) + 0.2 + 0.4 = 0 + 0.4 + 0.2 + 0.4 = 1.0
K=3: c(2,3) = c(2,2) + c(4,3) + 0.2 + 0.4 = 0.2 + 0 + 0.2 + 0.4 = 0.8
0 1 2 3 4 0 1 2 3 4
1 0 0.1 0.4 1 1 2
2 0 0.2 0.8 2 2 3
3 0 0.4 3 3
4 0 0.3 4 4
5 0 5
K=3: c(3,4) = c(3,2) + c(4,4) + 0.4 + 0.3 = 0 + 0.3 + 0.4 + 0.3 = 1.0
K=4: c(3,4) = c(3,3) + c(5,4) + 0.4 + 0.3 = 0.4 + 0 + 0.4 + 0.3 = 1.1
0 1 2 3 4
1 0 0.1 0.4
2 0 0.2 0.8
3 0 0.4 1.0
4 0 0.3
5 0
K=1: c(1,3) = c(1,0) + c(2,3) + 0.1 + 0.2 + 0.4 = 0 + 0.8 + 0.1 + 0.2 + 0.4 = 1.5
K=2: c(1,3) = c(1,1) + c(3,3) + 0.1 + 0.2 + 0.4 = 0.1 + 0.4 + 0.1 + 0.2 + 0.4 = 1.2
K=3: c(1,3) = c(1,2) + c(4,3) + 0.1 + 0.2 + 0.4 = 0.4 + 0 + 0.1 + 0.2 + 0.4 = 1.1
0 1 2 3 4 0 1 2 3 4
2 0 0.2 0.8 2 2 3
3 0 0.4 1.0 3 3 3
4 0 0.3 4 4
5 0 5
K=2: c(2,4) = c(2,1) + c(3,4) + 0.2 + 0.4 + 0.3 = 0 + 1.0 + 0.2 + 0.4 + 0.3 = 1.9
K=3: c(2,4) = c(2,2) + c(4,4) + 0.2 + 0.4 + 0.3 = 0.2 + 0.3 + 0.2 + 0.4 + 0.3 = 1.4
K=4: c(2,4) = c(2,3) + c(5,4) + 0.2 + 0.4 + 0.3 = 0.8 + 0 + 0.2 + 0.4 + 0.3 = 1.7
0 1 2 3 4 0 1 2 3 4
3 0 0.4 1.0 3 3 3
4 0 0.3 4 4
5 0 5
K=1: c(1,4) = c(1,0) + c(2,4) + 0.1 + 0.2 + 0.4 + 0.3 = 0 + 1.4 + 1.0 = 2.4
K=2: c(1,4) = c(1,1) + c(3,4) + 1.0 = 0.1 + 1.0 + 1.0 = 2.1
K=3: c(1,4) = c(1,2) + c(4,4) + 1.0 = 0.4 + 0.3 + 1.0 = 1.7 (minimum)
K=4: c(1,4) = c(1,3) + c(5,4) + 1.0 = 1.1 + 0 + 1.0 = 2.1
Thus, the average number of key comparisons in the optimal tree is equal to
1.7.
Since R(1, 4) = 3, the root of the optimal tree contains the third key, i.e., C.
Its left subtree is made up of keys A and B, and its right subtree contains just
key D.
Since R(1, 2) = 2, the root of the optimal tree containing A and B is B, with A
being its left child (and the root of the one node tree: R(1, 1) = 1).
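The table-filling process above can be sketched in Python; it reproduces the cost 1.7 and root C (key 3) for the example probabilities:

```python
def optimal_bst(p):
    """Optimal BST by dynamic programming; p[i] is the probability of
    key i+1 (keys are 1-based in the tables).
    Returns (expected comparisons, root table R)."""
    n = len(p)
    C = [[0.0] * (n + 2) for _ in range(n + 2)]   # C[i][i-1] = 0
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i - 1]
        R[i][i] = i
    for d in range(1, n):                 # d = j - i
        for i in range(1, n - d + 1):
            j = i + d
            ps = sum(p[i - 1:j])          # total probability of keys i..j
            best, bestk = float("inf"), i
            for k in range(i, j + 1):     # try each key as the root
                cost = C[i][k - 1] + C[k + 1][j]
                if cost < best:
                    best, bestk = cost, k
            C[i][j] = best + ps
            R[i][j] = bestk
    return C[1][n], R

cost, R = optimal_bst([0.1, 0.2, 0.4, 0.3])
print(round(cost, 1), R[1][4])  # 1.7 3
```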
First, it finds the shortest path from the source to a vertex nearest to it, then to a
second nearest, and so on.
To facilitate the algorithm’s operations, we label each vertex with two labels.
The numeric label d indicates the length of the shortest path from the source to
this vertex found by the algorithm so far; when a vertex is added to the tree, d
indicates the length of the shortest path from the source to that vertex. The other
label indicates the name of the next-to-last vertex on such a path, i.e., the parent of
the vertex in the tree being constructed. With such
labeling, finding the next nearest vertex u∗ becomes a simple task of finding a
fringe vertex with the smallest d value. Ties can be broken arbitrarily.
After we have identified a vertex u∗ to be added to the tree, we need to perform
two operations:
• Move u∗ from the fringe to the set of tree vertices.
• For each remaining fringe vertex u adjacent to u∗, if du∗ + w(u∗, u) < du,
update the labels of u by du ← du∗ + w(u∗, u) and pu ← u∗.
ALGORITHM Dijkstra(G, s)
//Dijkstra's algorithm for single-source shortest paths
//Input: A weighted connected graph G = (V, E) with nonnegative weights and its vertex s
//Output: The length dv of a shortest path from s to v and its penultimate vertex pv
//for every vertex v in V
(relaxation step, for every fringe vertex u adjacent to u∗:)
if du∗ + w(u∗, u) < du
    du ← du∗ + w(u∗, u); pu ← u∗
    Decrease(Q, u, du)
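The labeling-and-relaxation process can be sketched with Python's heap as the priority queue; the graph below is an assumed illustrative instance:

```python
import heapq

def dijkstra(graph, s):
    """graph: {vertex: [(neighbor, weight), ...]} with nonnegative weights.
    Returns (d, p): shortest distances from s and penultimate vertices."""
    d = {v: float("inf") for v in graph}
    p = {v: None for v in graph}
    d[s] = 0
    pq = [(0, s)]                       # priority queue of fringe vertices
    while pq:
        du, u = heapq.heappop(pq)
        if du > d[u]:                   # stale queue entry, skip
            continue
        for v, w in graph[u]:
            if du + w < d[v]:           # relaxation step from the algorithm
                d[v] = du + w
                p[v] = u
                heapq.heappush(pq, (d[v], v))
    return d, p

g = {"a": [("b", 3), ("d", 7)], "b": [("c", 4), ("d", 2)],
     "c": [], "d": [("c", 5)]}
d, p = dijkstra(g, "a")
print(d)  # {'a': 0, 'b': 3, 'c': 7, 'd': 5}
```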
5. Compute the Huffman coding to compress the data effectively and
encode the string _AB_D. April / May 2015 (Nov/Dec 15)
Character    A    B    C    D    _
Initialize n one-node trees labeled with the given characters and weighted by their
frequencies. Repeat until a single tree remains: find the two trees with the smallest
weights, combine them into one (as left and right subtrees) and make its weight equal
the sum of the weights of the two trees.
Mark edges leading to left and right subtrees with 0's and 1's, respectively.
Example:
character A B C D _
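The frequency row of the table above is not reproduced here, so the weights below are assumed illustrative values. A Python sketch of the merging process:

```python
import heapq
from itertools import count

def huffman_codes(freq):
    """Build Huffman codes from {symbol: weight}; returns {symbol: bitstring}."""
    tie = count()                             # tie-breaker for equal weights
    heap = [(w, next(tie), {s: ""}) for s, w in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)       # two smallest-weight trees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}   # left subtree
        merged.update({s: "1" + code for s, code in c2.items()})  # right
        heapq.heappush(heap, (w1 + w2, next(tie), merged))
    return heap[0][2]

freq = {"A": 0.35, "B": 0.1, "C": 0.2, "D": 0.2, "_": 0.15}  # assumed weights
codes = huffman_codes(freq)
encoded = "".join(codes[ch] for ch in "_AB_D")
print(codes, encoded)
```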
PART-C
1. Find all the solutions to the travelling salesman problem (cities and distances shown
below) by exhaustive search. Give the optimal solutions. (May/June 2016)
Tour                         Length
a → b → c → d → a            l = 2 + 8 + 1 + 7 = 18
a → b → d → c → a            l = 2 + 3 + 1 + 5 = 11   optimal
a → c → b → d → a            l = 5 + 8 + 3 + 7 = 23
a → c → d → b → a            l = 5 + 1 + 3 + 2 = 11   optimal
a → d → b → c → a            l = 7 + 3 + 8 + 5 = 23
a → d → c → b → a            l = 7 + 1 + 8 + 2 = 18
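The tour table above can be generated by enumerating permutations; the distance matrix is rebuilt from the edge lengths appearing in the tours (a-b = 2, a-c = 5, a-d = 7, b-c = 8, b-d = 3, c-d = 1):

```python
from itertools import permutations

def tsp_exhaustive(dist, start=0):
    """Try every tour from `start`; return (best_length, best_tour)."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):       # orders of the other cities
        tour = (start,) + perm + (start,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Cities a, b, c, d numbered 0..3
dist = [[0, 2, 5, 7],
        [2, 0, 8, 3],
        [5, 8, 0, 1],
        [7, 3, 1, 0]]
print(tsp_exhaustive(dist))  # (11, (0, 1, 3, 2, 0)) i.e. a-b-d-c-a
```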
PART – A
capacity constraints
0 ≤ xij ≤ uij for every edge (i, j) ∈ E
C(n, 0) = 1 and C(n, n) = 1;
C(n, k) = C(n − 1, k − 1) + C(n − 1, k) where n > k > 0
PART – B
1. Discuss in detail about the stable marriage problem. (Nov/Dec 2016) Explain in detail
about the Gale-Shapley Algorithm.
Consider a set Y = {m1, ..., mn} of n men and a set X = {w1, ..., wn} of n women. Each man
has a ranking list of the women, and each woman has a ranking list of the men (with no ties
in these lists). The same information can also be presented by an n-by-n ranking matrix.
The rows and columns of the matrix represent the men and women of the two sets,
respectively. A cell in row m and column w contains two rankings: the first is the position
(ranking) of w in m's preference list; the second is the position (ranking) of m in w's
preference list.
Marriage Matching:
A marriage matching M is a set of n (m, w) pairs whose members are selected from disjoint
n-element sets Y and X in a one-one fashion, i.e., each man m from Y is paired with exactly
one woman w from X and vice versa.
Blocking Pair:
A pair (m, w), where m ϵ Y, w ϵ X, is said to be a blocking pair for a marriage matching M
if man m and woman w are not matched in M but they prefer each other to their mates in M.
For example, (Bob, Lea) is a blocking pair for the marriage matching M = {(Bob, Ann),
(Jim, Lea), (Tom, Sue)} because they are not matched in M while Bob prefers Lea to Ann
and Lea prefers Bob to Jim.
The stable marriage problem is to find a stable marriage matching for men's and
women's given preferences.
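The Gale-Shapley algorithm (men propose in preference order; each woman keeps her best proposer so far) can be sketched in Python. Only the six names appear in the example above, so the preference lists below are assumed for illustration:

```python
def gale_shapley(men_prefs, women_prefs):
    """Men propose in preference order; women keep their best proposer.
    Returns a stable matching {man: woman}."""
    free_men = list(men_prefs)
    next_choice = {m: 0 for m in men_prefs}   # next woman m will propose to
    engaged = {}                              # woman -> man
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m                    # w accepts her first proposal
        elif rank[w][m] < rank[w][engaged[w]]:
            free_men.append(engaged[w])       # w trades up; old fiance freed
            engaged[w] = m
        else:
            free_men.append(m)                # w rejects m; m stays free
    return {m: w for w, m in engaged.items()}

men = {"Bob": ["Lea", "Ann", "Sue"],
       "Jim": ["Lea", "Sue", "Ann"],
       "Tom": ["Sue", "Lea", "Ann"]}
women = {"Ann": ["Jim", "Tom", "Bob"],
         "Lea": ["Tom", "Bob", "Jim"],
         "Sue": ["Jim", "Tom", "Bob"]}
print(gale_shapley(men, women))
```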
2. How do you compute the maximum flow for the following graph using
the Ford–Fulkerson method? Explain. (May/June 2016)
A flow network is a digraph with n vertices numbered from 1 to n and a set of edges E, with
the following properties:
it contains exactly one vertex with no entering edges, called the source
(numbered 1)
it contains exactly one vertex with no leaving edges, called the sink
(numbered n)
it has a positive integer weight uij on each directed edge (i, j), called the
edge capacity, indicating the upper bound on the amount of the material
that can be sent from i to j through this edge
Flow: A flow is an assignment of real numbers xij to edges (i, j) of a given network
that satisfies the following:
Flow-conservation requirements: the total amount of material entering an intermediate
vertex must be equal to the total amount of material leaving the vertex;
the total amount of the material leaving the source must end up at the sink.
Capacity constraints:
The total outflow from the source (equal to the total inflow into the sink) is called the value
of the flow (v). Thus, a (feasible) flow is an assignment of real numbers xij to edges (i, j) of
a given network that satisfies the flow-conservation constraints and the capacity constraints:
0 ≤ xij ≤ uij for every edge (i, j) ∈ E
A flow-augmenting path is a directed path from the source to the sink along which the flow
can be increased; it may contain forward edges (→) with unused capacity and backward
edges (←) with positive flow.
If a flow-augmenting path is found, the current flow can be increased by r units by
increasing xij by r on each forward edge and decreasing xji by r on each backward edge,
where
r = min {rij on all forward edges, xji on all backward edges}.
Assuming the edge capacities are integers, r is a positive integer, so on each iteration the
flow value increases by at least 1.
The maximum value is bounded by the sum of the capacities of the edges leaving the
source; hence the augmenting-path method has to stop after a finite number of iterations.
The final flow is always maximum; its value doesn't depend on the sequence of
augmenting paths used.
Shortest-Augmenting-Path Algorithm
Generate augmenting paths with the least number of edges by BFS as follows.
Starting at the source, perform BFS traversal by marking new (unlabeled) vertices with two
labels:
• first label – indicates the amount of additional flow that can be brought
from the source to the vertex being labeled
• second label – indicates the vertex from which the vertex being
labeled was reached, with "+" or "–" added to the second label
to indicate whether the vertex was reached via a forward or backward edge
Labeling of vertices:
The source is always labeled with ∞, –
All other vertices are labeled as follows:
If unlabeled vertex j is connected to the front vertex i of the traversal
queue by a directed edge from i to j with positive unused capacity
rij = uij – xij (forward edge), vertex j is labeled with lj, i+,
where lj = min{li, rij}
If unlabeled vertex j is connected to the front vertex i of the traversal
queue by a directed edge from j to i with positive flow xji (backward edge),
vertex j is labeled with lj, i–, where lj = min{li, xji}
If the sink ends up being labeled, the current flow can be augmented by the
amount indicated by the sink's first label.
The augmentation of the current flow is performed along the augmenting path
traced by following the vertices' second labels from sink to source; the current
flow quantities are increased on the forward edges and decreased on the
backward edges of this path.
If the sink remains unlabeled after the traversal queue becomes empty,
the algorithm returns the current flow as maximum and stops.
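The labeling procedure above amounts to the Edmonds-Karp algorithm (Ford-Fulkerson with BFS augmenting paths). A minimal Python sketch, assuming a dict-of-dicts graph representation (the function and variable names are illustrative choices, not from the text):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Shortest-augmenting-path (Edmonds-Karp) maximum flow.
    capacity: dict of dicts, capacity[u][v] = capacity of edge (u, v)."""
    nodes = set(capacity) | {v for u in capacity for v in capacity[u]}
    residual = {u: {} for u in nodes}
    for u in capacity:
        for v, c in capacity[u].items():
            residual[u][v] = residual[u].get(v, 0) + c   # forward edge
            residual[v].setdefault(u, 0)                 # backward edge
    flow = 0
    while True:
        # BFS: label each vertex with the vertex it was reached from.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, r in residual[u].items():
                if r > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:      # sink unlabeled: current flow is maximum
            return flow
        # r = smallest residual capacity along the augmenting path
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        r = min(residual[u][v] for u, v in path)
        for u, v in path:           # increase forward, decrease backward
            residual[u][v] -= r
            residual[v][u] += r
        flow += r
```

For example, `max_flow({'s': {'a': 3, 'b': 2}, 'a': {'t': 2}, 'b': {'t': 3}}, 's', 't')` returns 4.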
Definition of a Cut: Let X be a set of vertices in a network that includes its source but does
not include its sink, and let X̄, the complement of X, be the rest of the vertices including
the sink. The cut induced by this partition of the vertices is the set of all the edges with a
tail in X and a head in X̄.
Capacity of a cut is defined as the sum of capacities of the edges that compose
the cut.
A cut and its capacity are denoted by C(X,X̄) and c(X,X̄).
Note that if all the edges of a cut were deleted from the network, there would be
no directed path from source to sink
Efficiency:
• The number of augmenting paths needed by the shortest-augmenting-path
algorithm never exceeds nm/2, where n and m are the number of vertices and edges,
respectively.
• Since the time required to find the shortest augmenting path by breadth-first search is in
O(n+m)=O(m) for networks represented by their adjacency lists, the time efficiency of
the shortest-augmenting-path algorithm is in O(nm²) for this representation.
• More efficient algorithms have been found that can run in close to O(nm)
time, but these algorithms don't fall into the iterative-improvement
paradigm.
A matching in a graph is a subset of its edges with the property that no two edges share a
vertex. A maximum (or maximum cardinality) matching is a matching with the largest
number of edges. For a given matching M, a vertex is called free (or unmatched) if it is not
an endpoint of any edge in M; otherwise, a vertex is said to be matched.
• If every vertex is matched, then M is a maximum matching
• If there are unmatched or free vertices, then M may be able to be improved
• We can immediately increase a matching by adding an edge connecting
two free vertices (e.g., (1,6) in the example graph)
Search for an augmenting path for a matching M by a BFS-like traversal of the graph that starts
simultaneously at all the free vertices in one of the sets V and U, say V.
Initialize queue Q with all free vertices in one of the sets (say V).
While Q is not empty, delete its front vertex w and label every unlabeled vertex u
adjacent to w; if such a u is free, an augmenting path has been found, otherwise
label u's mate and continue the traversal from it.
Maximize P = 3x + 4y subject
to x + 3y ≤ 30; 2x + y ≤ 20
Outline of the Simplex Method: To apply the simplex method to a linear programming problem, it has to be
represented in a special form called the standard form. The standard form has the following
requirements:
1. It must be a maximization problem.
2. All the constraints (except the non-negativity constraints) must be in the form of linear equations
with nonnegative right-hand sides.
3. All the variables must be required to be nonnegative.
Thus, the general linear programming problem in standard form with m constraints and n unknowns
(n ≥ m) is
maximize c1x1 + . . . + cnxn
subject to ai1x1 + . . . + ainxn = bi, where bi ≥ 0 for i = 1, 2, . . . , m
x1 ≥ 0, . . . , xn ≥ 0
If all the coordinates of a basic solution are nonnegative, the basic solution is
called a basic feasible solution.
For example, (0, 0, 4, 6) is an extreme point of the feasible region of problem.
Step 0 Initialization Present a given linear programming problem in standard form and set up an
initial tableau with nonnegative entries in the rightmost column and m other columns composing the
m × m identity matrix. (Entries in the objective row are to be disregarded in verifying these
requirements.) These m columns define the basic variables of the initial basic feasible solution, used
as the labels of the tableau’s rows.
Step 1 Optimality test If all the entries in the objective row (except, possibly, the one in the
rightmost column, which represents the value of the objective function) are nonnegative—stop: the
tableau represents an optimal solution whose basic variables' values are in the rightmost column
and the remaining, nonbasic variables' values are zeros.
Step 2 Finding the entering variable Select a negative entry from among the first n elements of the
objective row. (A commonly used rule is to select the negative entry with the largest absolute value,
with ties broken arbitrarily.) Mark its column to indicate the entering variable and the pivot column.
Step 3 Finding the departing variable For each positive entry in the pivot column, calculate the θ-
ratio by dividing that row’s entry in the rightmost column by its entry in the pivot column. (If all the
entries in the pivot column are negative or zero, the problem is unbounded—stop.) Find the row
with the smallest θ-ratio (ties may be broken arbitrarily),
and mark this row to indicate the departing variable and the pivot row.
Step 4 Forming the next tableau Divide all the entries in the pivot row by its entry in the pivot
column. Subtract from each of the other rows, including the objective row, the new pivot row
multiplied by the entry in the pivot column of the row in question. (This will make all the entries in
the pivot column 0’s except for 1 in the pivot row.) Replace the label of the pivot row by the
variable’s name of the pivot column
and go back to Step 1.
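By the extreme point theorem, an optimal solution of such a problem occurs at a vertex of the feasible region. For the two-variable instance above (maximize P = 3x + 4y subject to x + 3y ≤ 30 and 2x + y ≤ 20), the vertices can simply be enumerated; the sketch below is a brute-force check of that fact, not the simplex method itself, and the names are ours:

```python
from itertools import combinations

# Constraints written as a*x + b*y <= c; the last two encode x >= 0, y >= 0.
constraints = [(1, 3, 30), (2, 1, 20), (-1, 0, 0), (0, -1, 0)]

def vertices(cons):
    """Intersect every pair of constraint boundary lines; keep feasible points."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue                      # parallel boundary lines
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            pts.append((x, y))
    return pts

# Maximize P = 3x + 4y over the extreme points of the feasible region.
best = max(vertices(constraints), key=lambda p: 3 * p[0] + 4 * p[1])
print(best, 3 * best[0] + 4 * best[1])    # optimum at x=6, y=8 with P=50
```

The simplex method reaches the same vertex (6, 8) by moving from one basic feasible solution to an adjacent, better one instead of enumerating them all.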
PART-C
Since no material can be lost or added to by going through intermediate vertices of the
network, the total amount of the material leaving the source must end up at the sink:
∑ j: (1,j) є E x1j = ∑ j: (j,n) є E xjn
The value of the flow is defined as the total outflow from the source (= the total inflow
into the sink). The maximum flow problem is to find a flow of the largest value (maximum
flow) for a given network.
• All the edges from the labeled to unlabeled vertices are full, i.e., their flow
amounts are equal to the edge capacities, while all the edges from the
unlabeled to labeled vertices, if any, have zero flow amounts on them.
Lower-Bound Arguments - P, NP, NP-Complete and NP-Hard Problems. Backtracking – N-
Queen Problem - Hamiltonian Circuit Problem – Subset Sum Problem. Branch and Bound – LIFO
Search and FIFO Search - Assignment Problem – Knapsack Problem – Traveling Salesman
Problem. Approximation Algorithms for NP-Hard Problems – Traveling Salesman Problem –
Knapsack Problem.
PART – A
A Hamiltonian circuit is a cycle that passes through all the
vertices of the graph exactly once. It is named after the Irish mathematician Sir William
Rowan Hamilton (1805-1865), who became interested in such cycles as an application of
his algebraic discoveries. A Hamiltonian circuit can also be defined as a sequence of n + 1
adjacent vertices vi0, vi1, ..., vin, vi0 where the first vertex of the sequence is the same as
the last one while all the other n - 1 vertices are distinct.
11. Compare backtracking and branch and bound technique. April / May
2015
Backtracking Technique: Backtracking constructs its state-space tree in the
depth-first search fashion in the majority of its applications.
Branch and Bound technique: Branch-and-bound is an algorithm design
technique that enhances the idea of generating a state-space tree with the idea of
estimating the best value obtainable from a partially constructed solution.
12. Define state space tree. April / May 2015, May/June 2016
The processing of backtracking is implemented by constructing a tree of
choices being made. This is called the state-space tree. Its root represents an initial
state before the search for a solution begins. The nodes of the first level in the tree
represent the choices made for the first component of the solution; the nodes in the
second level represent the choices for the second component and so on.
13. How NP – hard problem is different from NP complete? April / May
2015
NP – hard: If a problem is NP-hard, this means that any problem in NP can be
reduced to the given problem.
NP complete: A decision problem is NP-complete when it is both in NP and
NP-hard.
14. State the reason for terminating search path at the current node in branch
and bound algorithm (Nov/Dec 2016)
The value of the node's bound is not better than the value of the best solution seen so far.
The node represents no feasible solutions because the constraints of the problem are
already violated.
PART – B
Algorithm Place(k, i)
// Returns true if a queen can be placed in row k, column i.
{
    for j := 1 to k-1 do
        if (x[j] = i) or (abs(x[j] - i) = abs(j - k)) then
            return false;
    return true;
}
Algorithm NQueens(k, n)
// Places queens in rows k..n, given queens already placed in rows 1..k-1.
{
    for i := 1 to n do
    {
        if Place(k, i) then
        {
            x[k] := i;
            if (k = n) then write (x[1:n]);
            else NQueens(k+1, n);
        }
    }
}
Example
N=4
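The pseudocode above translates directly into runnable Python; this sketch (the names are ours) returns every solution as a list of 1-based column positions:

```python
def n_queens(n):
    """Backtracking n-queens; x[k] is the column (1-based) of the queen in row k."""
    solutions, x = [], [0] * (n + 1)

    def place(k, col):
        # Column col is safe for row k if no earlier queen shares it or a diagonal.
        return all(x[j] != col and abs(x[j] - col) != abs(j - k)
                   for j in range(1, k))

    def solve(k):
        for col in range(1, n + 1):
            if place(k, col):
                x[k] = col
                if k == n:
                    solutions.append(x[1:])
                else:
                    solve(k + 1)

    solve(1)
    return solutions

print(n_queens(4))   # [[2, 4, 1, 3], [3, 1, 4, 2]]
```

For N = 4 it finds exactly the two boards [2, 4, 1, 3] and [3, 1, 4, 2].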
void NextValue(int k)
// Generates the next vertex x[k] that is distinct from x[1..k-1]
// and adjacent to x[k-1]; x[k] = 0 if no such vertex remains.
{
    do
    {
        x[k] = (x[k] + 1) % (n + 1);
        if (!x[k]) return;
        if (G[x[k-1]][x[k]])
        {
            int j;
            for (j = 1; j <= k-1; j++)
                if (x[j] == x[k]) break;
            if (j == k)                      // the vertex is distinct
                if ((k < n) || ((k == n) && G[x[n]][x[1]]))
                    return;
        }
    } while (1);
}
void Hamiltonian(int k)
// Extends the path x[1..k-1] by trying all candidate vertices for x[k].
{
    do
    {
        NextValue(k);
        if (!x[k]) return;
        if (k == n)
        {
            for (int i = 1; i <= n; i++)
                Print x[i];
        }
        else Hamiltonian(k+1);
    } while (1);
}
Example:
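The same backtracking idea in runnable Python; the adjacency-list representation and the names are our choices:

```python
def hamiltonian_circuit(adj):
    """Backtracking search for a Hamiltonian circuit.
    adj: dict mapping each vertex to a list of its neighbours."""
    vertices = list(adj)
    n = len(vertices)
    start = vertices[0]
    path = [start]

    def solve():
        if len(path) == n:
            # All vertices used: a circuit needs an edge back to the start.
            return path + [start] if start in adj[path[-1]] else None
        for v in adj[path[-1]]:
            if v not in path:
                path.append(v)
                result = solve()
                if result:
                    return result
                path.pop()          # backtrack: v leads to a dead end
        return None

    return solve()

# The 5-cycle 1-2-3-4-5-1: the only circuit is the cycle itself.
graph = {1: [2, 5], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 1]}
print(hamiltonian_circuit(graph))   # [1, 2, 3, 4, 5, 1]
```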
Construction of state space tree for subset sum:
The state-space tree can be constructed
as a binary tree.
The root of the tree represents the starting point, with no decisions about
the given elements made as yet.
Its left and right children represent, respectively, inclusion and exclusion of s1
in a set being sought.
Similarly, going to the left from a node of the first level corresponds to
inclusion of s2, while going to the right corresponds to its exclusion, and so
on. Thus, a path from the root to a node on the ith level of the tree indicates
which of the first i numbers have been included in the subsets represented by
that node.
Record the value of s', the sum of these numbers, in the node. If s'
is equal to d, then there is a solution for this problem.
The result can be reported and the search stopped or, if all the solutions need
to be found, the search continues by backtracking to the node's parent. If s' is
not equal to d, then terminate the node as nonpromising if either of the
following two inequalities holds:
s' + si+1 > d (the sum s' is too large)
s' + si+1 + si+2 + ... + sn < d (the sum s' is too small)
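The construction and both pruning tests can be sketched in Python; sorting the elements first is what makes the tests valid, and the names are ours:

```python
def subset_sum(s, d):
    """Backtracking for subset-sum with the two pruning tests described above."""
    s = sorted(s)                      # nondecreasing order
    remaining = [0] * (len(s) + 1)     # remaining[i] = sum of s[i:], for pruning
    for i in range(len(s) - 1, -1, -1):
        remaining[i] = remaining[i + 1] + s[i]
    solutions = []

    def solve(i, chosen, total):
        if total == d:                 # s' equals d: a solution is found
            solutions.append(list(chosen))
            return
        if i == len(s):
            return
        if total + s[i] <= d:          # prune when s' + s_{i+1} > d
            chosen.append(s[i])
            solve(i + 1, chosen, total + s[i])
            chosen.pop()
        if total + remaining[i + 1] >= d:  # prune when s' + sum of rest < d
            solve(i + 1, chosen, total)

    solve(0, [], 0)
    return solutions

print(subset_sum([3, 5, 6, 7], 15))    # [[3, 5, 7]]
```

For S = {3, 5, 6, 7} and d = 15 it returns the single subset {3, 5, 7}.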
3. Explain how the travelling salesman problem is solved using branch and bound technique.
For example, for the instance above the lower bound is:
lb = [(1+3) + (3+6) + (1+2) + (3+4) + (2+3)]/2 = 14.
The bounding function can be used to find the shortest Hamiltonian circuit for the
given graph.
Root:
First, without loss of generality, consider only tours that start at a.
First level
Second, because the graph is undirected, tours can be generated in which b is visited
before c. In addition, after visiting n - 1 = 4 cities, a tour has no choice but to visit the
remaining unvisited city and return to the starting one.
Lower bound if edge (a,b) is chosen: lb = ceil([(3+1)+(3+6)+(1+2)+(4+3)+(2+3)]/2) = 14
Edge (a,c) is not included since b must be visited before c.
Lower bound if edge (a,d) is chosen: lb = ceil([(5+1)+(3+6)+(1+2)+(5+3)+(2+3)]/2) = 16
Lower bound if edge (a,e) is chosen: lb = ceil([(8+1)+(3+6)+(1+2)+(4+3)+(2+8)]/2) = 19
Since the lower bound of edge (a,b) is the smallest among all the edges, it is included in
the solution. The state space tree is expanded from this node.
Second level
The choice of edges should be made between three vertices: c, d and e.
Lower bound if edge (b,c) is chosen. The path taken will be (a->b->c):
lb = ceil([(3+1)+(3+6)+(1+6)+(4+3)+(2+3)]/2) = 16
Lower bound if edge (b,d) is chosen. The path taken will be (a->b->d):
lb = ceil([(3+1)+(3+7)+(7+3)+(1+2)+(2+3)]/2) = 16
Lower bound if edge (b,e) is chosen. The path taken will be (a->b->e):
lb = ceil([(3+1)+(3+9)+(2+9)+(1+2)+(4+3)]/2) = 19. (Since this lb is larger than the
other values, further expansion is stopped.)
The paths a->b->c and a->b->d are more promising. Hence the state space tree is
expanded from those nodes.
Next level
a->b->c->d->e->a
a->b->c->e->d->a
a->b->d->c->e->a
a->b->d->e->c->a
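The bound arithmetic above can be checked mechanically. The distance matrix below is reconstructed from the quoted sums (the original figure is missing, so treat these weights as an assumption); the function names are ours:

```python
import math

# Edge weights reconstructed from the worked bounds (an assumption).
dist = {('a','b'): 3, ('a','c'): 1, ('a','d'): 5, ('a','e'): 8,
        ('b','c'): 6, ('b','d'): 7, ('b','e'): 9,
        ('c','d'): 4, ('c','e'): 2, ('d','e'): 3}

def d(u, v):
    return dist[(u, v)] if (u, v) in dist else dist[(v, u)]

def lower_bound(cities, forced=()):
    """lb = ceil(S/2), where S sums, for each city, the two cheapest edges it
    could use -- except that edges forced by a partial tour must be used."""
    total = 0
    for c in cities:
        must = [d(u, v) for (u, v) in forced if c in (u, v)]
        partners = {v if u == c else u for (u, v) in forced if c in (u, v)}
        rest = sorted(d(c, o) for o in cities if o != c and o not in partners)
        total += sum((must + rest)[:2])   # forced edges first, then cheapest
    return math.ceil(total / 2)

print(lower_bound('abcde'))                    # 14, the root bound
print(lower_bound('abcde', [('a', 'e')]))      # 19, the bound for edge (a,e)
```

Each forced edge replaces one of the two cheapest edges at both of its endpoints, exactly as in the hand computations above.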
4. Explain in detail about solving knapsack problem using branch and bound
technique. (Nov/Dec 2016)
KNAPSACK PROBLEM
Given n items of known weights wi and values vi, i = 1, 2, ..., n, and a knapsack of
capacity W, find the most valuable subset of the items that fit in the knapsack. It is convenient
to order the items of a given instance in descending order by their value-to-weight ratios.
Then the first item gives the best payoff per weight unit and the last one gives the worst
payoff per weight unit, with ties resolved arbitrarily:
v1/w1 ≥ v2/w2 ≥ ... ≥ vn/wn
The state-space tree for this problem is constructed as a binary tree as mentioned
below:
Each node on the ith level of this tree, 0 ≤ i ≤ n, represents all the subsets of n
items that include a particular selection made from the first i ordered items.
This particular selection is uniquely determined by the path from the root to the
node:
A branch going to the left indicates the inclusion of the next item,
while a branch going to the right indicates its exclusion.
Record the total weight w and the total value v of this selection in the node, along
with some upper bound ub on the value of any subset that can be obtained
by adding zero or more items to this selection:
ub = v + (W - w)(vi+1/wi+1)
W = 10
Node 1 (left child of the root) represents the subsets that include item 1:
w = 4, v = $40, ub = 40 + (10-4)*6 = 76.
Node 2 (right child of the root) represents the subsets that do not include item 1:
w = 0, v = $0, ub = 0 + (10-0)*6 = 60.
Since node 1 has a larger upper bound than the upper bound of node 2,
it is more promising for this maximization problem, and the tree is branched
from node 1 first.
Node 3 (left of node 1) with item 1 and item 2: w = 4+7 = 11.
Here w = 11 > 10. This is not a feasible solution since the constraints
are not satisfied, so the node is terminated.
Node 4 (right of node 1) with item 1 and without item 2: w = 4; v = 40; vi+1/wi+1 = 5
ub = 40 + (10-4)*5 = 70
Node 5 (left of node 4) with item 1 and item 3: w = 4+5 = 9; v = 40 + 25 = 65; vi+1/wi+1 = 4
ub = 65 + (10-9)*4 = 69
Node 6 (right of node 4) with item 1 and without item 3: w = 4; v = 40; vi+1/wi+1 = 4
ub = 40 + (10-4)*4 = 64
The right node yields an inferior upper bound, so the left node (node 5) is
selected for further expansion.
Node 7 (left of node 5) with items 1, 3 and item 4: w = 9+3 = 12.
Here w = 12 > 10. This is not a feasible solution since the constraints
are not satisfied.
Node 8 (right of node 5) with items 1, 3 and without item 4:
w = 9; v = 65; vi+1/wi+1 = 0
ub = 65 + (10-9)*0 = 65
Hence the items in the knapsack are {item 1, item 3} with the profit $65.
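A compact best-first branch-and-bound for this problem can be sketched in Python with a priority queue keyed on the upper bound ub = v + (W - w)(vi+1/wi+1); the items must already be sorted by value-to-weight ratio, and the names are ours:

```python
import heapq

def knapsack_bb(items, W):
    """Best-first branch-and-bound for 0/1 knapsack.
    items: (weight, value) pairs, pre-sorted by value/weight descending."""
    def ub(i, w, v):
        # Upper bound: current value + remaining capacity * next item's ratio.
        return v + (W - w) * items[i][1] / items[i][0] if i < len(items) else v

    best_v, best_set = 0, []
    # Max-heap via negated bound; state = (level, weight, value, chosen).
    heap = [(-ub(0, 0, 0), 0, 0, 0, [])]
    while heap:
        neg_bound, i, w, v, chosen = heapq.heappop(heap)
        if -neg_bound <= best_v:        # bound no better than best: terminate
            continue
        if i == len(items):
            continue
        wi, vi = items[i]
        if w + wi <= W:                 # left child: include item i
            if v + vi > best_v:
                best_v, best_set = v + vi, chosen + [i]
            heapq.heappush(heap, (-ub(i + 1, w + wi, v + vi),
                                  i + 1, w + wi, v + vi, chosen + [i]))
        # right child: exclude item i
        heapq.heappush(heap, (-ub(i + 1, w, v), i + 1, w, v, chosen))
    return best_v, best_set
```

On the instance above, `knapsack_bb([(4, 40), (7, 42), (5, 25), (3, 12)], 10)` returns value 65 with item indices [0, 2], i.e. items 1 and 3.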
Select one element in each row of the matrix so that no two selected elements are in the same
column and their sum is the smallest possible. This is done by considering the same small
instance of the problem:
To find a lower bound on the cost of an optimal selection without actually solving the
problem, several methods can be used. For example, it is clear that the cost of any
solution, including an optimal one, cannot be smaller than the sum of the smallest
elements in each of the matrix's rows. For the instance given, the lower bound is
lb = 2 + 3 + 1 + 4 = 10.
It is important to stress that this is not the cost of any legitimate selection (3
and 1 came from the same column of the matrix); it is just a lower bound on the cost
of any legitimate selection.
Apply the same thinking to partially constructed solutions. For example, for
any legitimate selection that selects 9 from the first row, the lower bound will be
lb = 9 + 3 + 1 + 4 = 17.
This problem deals with the order in which the tree's nodes will be generated. Rather
than generating a single child of the last promising node as in backtracking, all the
children of the most promising node among non-terminated leaves in the current tree
are generated.
To find which of the nodes is most promising, compare the lower bounds of the
live nodes. It is sensible to consider a node with the best bound as most
promising, although this does not, of course, preclude the possibility that an optimal
solution will ultimately belong to a different branch of the state-space tree.
This variation of the strategy is called best-first branch-and-bound.
Returning to the instance of the assignment problem given earlier, start with the
root that corresponds to no elements selected from the cost matrix. The lower
bound value for the root, denoted lb, is 10.
The nodes on the first level of the tree correspond to four elements (jobs) in the
first row of the matrix since they are each a potential selection for the first component
of the solution. So there are four live leaves (nodes 1 through 4) that may contain an
optimal solution. The most promising of them is node 2 because it has the smallest
lower bound value.
By following the best-first search strategy, branch out from that node first by
considering the three different ways of selecting an element from the second row and
not in the second column—the three different jobs that can be assigned to person
b.
Of the six live leaves (nodes 1, 3, 4, 5, 6, and 7) that may contain an optimal solution,
we again choose the one with the smallest lower bound, node 5.
First, consider selecting the third column's element from c's row (i.e., assigning
person c to job 3); this leaves no choice but to select the element from the
fourth column of d's row (assigning person d to job 4). This yields leaf 8, which
corresponds to the feasible solution (a→2, b→1, c→3, d→4) with the total cost
of 13. Its sibling, node 9, corresponds to the feasible
Solution:
solution (a→2, b→1, c→4, d→3) with the total cost of 25. Since its cost is larger
than the cost of the solution represented by leaf 8, node 9 is simply terminated.
o Note that if its cost were smaller than 13, the information about the best solution
seen so far would have to be replaced with the data provided by this node.
o Now, inspecting each of the live leaves of the last state-space tree (nodes 1, 3, 4, 6, and
7), it is discovered that their lower-bound values are not
smaller than 13, the value of the best selection seen so far (leaf 8).
o Hence all of them are terminated and the solution represented by leaf 8 is
recognized as the optimal solution to the problem.
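The cost matrix itself is not reproduced in the text, but it can be reconstructed from the quoted bounds (row minima 2, 3, 1, 4 and the optimal selection a→2, b→1, c→3, d→4 of cost 13). Treating that reconstruction as an assumption, a brute-force check over all 4! assignments confirms the branch-and-bound answer:

```python
from itertools import permutations

# Cost matrix reconstructed from the bounds quoted in the text (an assumption).
cost = [[9, 2, 7, 8],   # person a
        [6, 4, 3, 7],   # person b
        [5, 8, 1, 8],   # person c
        [7, 6, 9, 4]]   # person d

# Exhaustively check every one-job-per-person assignment.
best = min(permutations(range(4)),
           key=lambda p: sum(cost[i][p[i]] for i in range(4)))
total = sum(cost[i][best[i]] for i in range(4))
print(best, total)    # (1, 0, 2, 3), i.e. a->2, b->1, c->3, d->4, cost 13
```

Branch-and-bound reaches the same optimum after generating only a fraction of these 24 leaves.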
sa : A – B – C – D – A of length 10
s* : A – B – D – C – A of length 8
Multifragment-heuristic algorithm
Step 1 Sort the edges in increasing order of their weights. (Ties can be broken arbitrarily.)
Initialize the set of tour edges to be constructed to the empty set.
Step 2 Repeat this step until a tour of length n is obtained, where n is the number of
cities in the
instance being solved: add the next edge on the sorted edge list to the set of tour edges,
provided this addition does not create a vertex of degree 3 or a cycle of length less than n;
otherwise, skip the edge.
Step 3 Return the set of tour edges.
This algorithm yields the set of edges for the graph shown above:
{(a, b), (c, d), (b, c), (a, d)}.
Minimum-spanning-tree-based algorithms
There are approximation algorithms for the traveling salesman problem that exploit a
connection between Hamiltonian circuits and spanning trees of the same graph. Since
removing an edge from a Hamiltonian circuit yields a spanning tree, we can expect that
the structure of a minimum spanning tree provides a good basis for constructing a shortest
tour approximation. Here is an algorithm that implements this idea in a rather
straightforward fashion.
Twice-around-the-tree algorithm:
Stage 1: Construct a minimum spanning tree of the graph corresponding to a given
instance of the traveling salesman problem.
Stage 2: Starting at an arbitrary vertex, perform a walk around the minimum spanning
tree recording all the vertices passed by.
Stage 3: Scan the vertex list obtained in Stage 2 and eliminate from it all repeated
occurrences of the same vertex except the starting one at the end of the list. (This step is
equivalent to making shortcuts in the walk.) The vertices remaining on the list will form
a Hamiltonian circuit, which is the output of the algorithm.
Christofides algorithm:
It also uses a minimum spanning tree but does this in a more sophisticated way
than the twice-around-the-tree algorithm.
Stage 1: Construct a minimum spanning tree of the graph.
Stage 2: Add edges of a minimum-weight matching of all the odd-degree vertices in the
minimum spanning tree.
Stage 3: Find an Eulerian circuit of the multigraph obtained in Stage 2.
Stage 4: Create a tour from the circuit constructed in Stage 3 by making shortcuts to
avoid visiting intermediate vertices more than once.
Greedy algorithm for the discrete knapsack problem:
Step 1: Compute the value-to-weight ratios ri = vi/wi, i = 1, ..., n, for the items given.
Step 2: Sort the items in nonincreasing order of the ratios computed in Step 1.
Step 3: Repeat the following operation until no item is left in the sorted list: if the
current item on the list fits into the knapsack, place it in the knapsack; otherwise,
proceed to the next item.
Item Weight Value
1 7 $42
2 3 $12
3 4 $40
4 5 $25
Computing the value-to-weight ratios and sorting the items in nonincreasing order of
these efficiency ratios yields
Item Weight Value Value/Weight
1 4 $40 10
2 7 $42 6
3 5 $25 5
4 3 $12 4
The greedy algorithm will select the first item of weight 4, skip the next item of
weight 7, select the next item of weight 5, and skip the last item of weight 3. The solution
obtained happens to be optimal for this instance
Greedy algorithm for the continuous knapsack problem:
Step 1: Compute the value-to-weight ratios vi/wi, i = 1, ..., n, for the items given.
Step 2: Sort the items in nonincreasing order of the ratios computed in Step 1.
Step 3 Repeat the following operation until the knapsack is filled to its full capacity or
no item is left in the sorted list: if the current item on the list fits into the knapsack in its
entirety, take it and proceed to the next item; otherwise, take its largest fraction to fill
the knapsack to its full capacity and stop.
Item Weight Value Value/Weight
1 4 $40 10
2 7 $42 6
3 5 $25 5
4 3 $12 4
The algorithm will take the first item of weight 4 and then 6/7 of the next
item on the sorted list to fill the knapsack to its full capacity.
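The three steps can be sketched as a short Python function (the names are ours):

```python
def greedy_fractional(items, W):
    """Greedy for the continuous knapsack: take items in nonincreasing
    value/weight order, the last one possibly as a fraction."""
    items = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    value, room = 0.0, W
    for w, v in items:
        if room <= 0:
            break
        take = min(w, room)             # whole item, or the largest fraction
        value += v * take / w
        room -= take
    return value

print(greedy_fractional([(7, 42), (3, 12), (4, 40), (5, 25)], 10))  # 76.0
```

On the instance above it returns $76: item 1 in its entirety plus 6/7 of the $42 item.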
Approximation schemes:
For this problem, unlike the traveling salesman problem, there exist polynomial-time
approximation schemes, which are parametric families of algorithms that allow us
to get approximations sa(k) with any predefined accuracy level.
Approximation scheme by S. Sahni:
This algorithm generates all subsets of k items or less, and for each one that fits into
the knapsack, it adds the remaining items as the greedy algorithm would (i.e., in
nonincreasing order of their value-to-weight ratios). The subset of the highest value obtained
in this fashion is returned as the algorithm's output.
Example: A small example of an approximation scheme with k = 2 and
the instance given below:
Item Weight Value Value/Weight
1 4 $40 10
2 7 $42 6
3 5 $25 5
4 1 $4 4
Subset Added items Value
ϕ 1, 3, 4 $69
{1} 3, 4 $69
{2} 4 $46
{3} 1, 4 $69
{1,2} not feasible
{1,3} 4 $69
{1,4} 3 $69
{2,3} not feasible
{2,4} none $46
{3,4} 1 $69
Solution: The algorithm yields {1, 3, 4}, which is the optimal solution for this instance.
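The enumeration in the table can be reproduced by a short sketch of the scheme (the names are ours): generate every subset of at most k items, complete each feasible one greedily, and keep the best value.

```python
from itertools import combinations

def sahni_k(items, W, k):
    """Enumerate every subset of at most k items; greedily complete each
    feasible one in nonincreasing value/weight order; return the best value."""
    order = sorted(range(len(items)),
                   key=lambda i: items[i][1] / items[i][0], reverse=True)
    best = 0
    for r in range(k + 1):
        for subset in combinations(range(len(items)), r):
            w = sum(items[i][0] for i in subset)
            v = sum(items[i][1] for i in subset)
            if w > W:
                continue                # infeasible starting subset
            for i in order:             # greedy completion
                if i not in subset and w + items[i][0] <= W:
                    w += items[i][0]
                    v += items[i][1]
            best = max(best, v)
    return best

items = [(4, 40), (7, 42), (5, 25), (1, 4)]
print(sahni_k(items, 10, 2))   # 69, matching the table
```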
PART-C
1. Explain in detail about P, NP, NP-complete and NP-hard problems.
P:
P is the set of all decision problems which can be solved in polynomial time.
P problems are questions that have a yes/no answer and can be easily solved
by a computer.
For example, checking whether a number is prime is a relatively easy problem
to solve.
NP:
There are many problems not known to be solvable in polynomial time
on a regular computer that can nevertheless be solved in polynomial time
on a nondeterministic Turing machine.
These problems are in NP, which stands for nondeterministic polynomial time.
NP problems are questions that have yes/no answers that are easy to verify,
but may be hard to solve: it could take years or centuries for a computer
to come up with an answer.
For example: given the cities and distances, is there a route that covers
all the cities, returning to the starting point, in less than x distance?
Two stages in NP class problems:
Guessing stage: a candidate solution is guessed nondeterministically; this is easy.
Verifying stage: the guessed solution is verified in polynomial time.
NP COMPLETE:
NP-complete problems are special kinds of NP problems: any NP problem
can be transformed (reduced in polynomial time) into an NP-complete
problem.
A problem is NP-complete if the problem is both NP-hard and in NP.
For example, the decision version of the knapsack problem is NP-complete:
given lots of different-sized pieces of different precious metals lying on the
ground and a bag that cannot carry all of them, is there a selection that fits
in the bag and is worth at least a given value?
NP HARD:
A problem is NP-hard if every problem in NP can be reduced to it in polynomial
time; an NP-hard problem need not itself belong to NP.
Figure: Notion of an NP-complete problem. Polynomial-time reductions of NP
problems to an NP-complete problem are shown by arrows.
Some decision problems cannot be solved at all by any algorithm. Such problems are
called undecidable, as opposed to decidable problems that can be solved by an algorithm.
A famous example of an undecidable problem was given by Alan Turing in 1936. The
problem in question is called the halting problem: given a computer program and an input
to it, determine whether the program will halt on that input or continue working indefinitely
on it.
Here is a surprisingly short proof of this remarkable fact. By way of contradiction, assume
that A is an algorithm that solves the halting problem. That is, for any program P and input
I, A(P, I) = 1 if program P halts on input I, and A(P, I) = 0 if program P does not halt on
input I.
Since a program can be considered as an input to another program (or even to itself),
construct a program Q as follows: Q(P) halts if A(P, P) = 0 (i.e., program P does not halt
on input P), and Q(P) does not halt if A(P, P) = 1 (i.e., program P halts on input P). Then,
substituting Q for P, Q(Q) halts if and only if Q does not halt on input Q.
This is a contradiction because neither of the two outcomes for program Q is possible, which
completes the proof.