GEETANJALI INSTITUTE OF TECHNICAL STUDIES
Department of Computer Science & Engineering
SUBJECT : ANALYSIS OF ALGORITHMS
[Link] V SEM
BY:
Somya Agrawal
Assistant Professor, CSE
UNIT-2
Background: Review of Algorithm, Complexity Order Notations: definitions and calculating
complexity. Divide And Conquer Method: Binary Search, Merge Sort, Quick sort and Strassen's
matrix multiplication algorithms.
SYLLABUS

SN  Contents                                                                Hours
1   Introduction: Objective, scope and outcome of the course.                 1
2   Background: Review of Algorithm, Complexity Order Notations:
    definitions and calculating complexity. Divide And Conquer Method:
    Binary Search, Merge Sort, Quick Sort and Strassen's matrix
    multiplication algorithms.                                                6
3   Greedy Method: Knapsack Problem, Job Sequencing, Optimal Merge
    Patterns and Minimal Spanning Trees. Dynamic Programming: Matrix
    Chain Multiplication, Longest Common Subsequence and 0/1 Knapsack
    Problem.                                                                 10
4   Branch And Bound: Traveling Salesman Problem and Lower Bound Theory.
    Backtracking Algorithms and queens problem. Pattern Matching
    Algorithms: Naïve and Rabin-Karp string matching algorithms, KMP
    Matcher and Boyer-Moore Algorithms.                                       8
5   Assignment Problems: Formulation of Assignment and Quadratic
    Assignment Problem. Randomized Algorithms: Las Vegas algorithms,
    Monte Carlo algorithms, randomized algorithm for Min-Cut, randomized
    algorithm for 2-SAT. Problem definition of Multicommodity flow,
    Flow shop scheduling and Network capacity assignment problems.            8
6   Problem Classes NP, NP-Hard And NP-Complete: Definitions of P,
    NP-Hard and NP-Complete Problems. Decision Problems. Cook's Theorem.
    Proving NP-Complete Problems: Satisfiability problem and Vertex
    Cover Problem. Approximation Algorithms for Vertex Cover and Set
    Cover Problem.                                                            8
Contents
• Introduction to Greedy Approach
• Knapsack Problem
• Job Sequencing
• Optimal Merge Patterns
• Minimal Spanning Trees
• Dynamic Programming
• Matrix Chain Multiplication
• Longest Common Subsequence
• 0/1 Knapsack Problem.
Greedy Method
• A greedy algorithm is an approach for solving a problem by selecting the best
option available at the moment. It does not worry whether the current best result
will bring the overall optimal result.
• It is popular for obtaining optimized solutions.
• Optimization Problem: An optimization problem is the problem of finding the
best solution (optimal solution) from all the feasible solutions (practicable or
possible solutions).
Greedy Method
• The algorithm never reverses an earlier decision, even if the choice was wrong. It
works in a top-down approach.
• At every step, we make the choice that looks best at the moment, hoping it leads
to the optimal solution of the complete problem.
• This algorithm may not produce the best result for all problems, because it
always takes the locally best choice in pursuit of the globally best result.
Characteristics of Greedy Algorithm
• Greedy algorithms are simple and easy to implement.
• They are efficient in terms of time complexity, often providing quick solutions.
• These algorithms do not reconsider previous choices, as they make decisions
based on current information without looking ahead.
How does the Greedy Algorithm work?
• Greedy algorithms solve optimization problems by making the best local choice at each step in the hope of finding
the global optimum. It is like taking the best option available at each moment, hoping it will lead to the best overall
outcome.
• STEPS :
1. Start with the initial state of the problem. This is the starting point from where you begin making choices.
2. Evaluate all possible choices you can make from the current state. Consider all the options available at that
specific moment.
3. Choose the option that seems best at that moment, regardless of future consequences. This is the "greedy" part
- you take the best option available now, even if it might not be the best in the long run.
4. Move to the new state based on your chosen option. This becomes your new starting point for the next iteration.
5. Repeat steps 2-4 until you reach the goal state or no further progress is possible. Keep making the best local
choices until you reach the end of the problem or get stuck.
Example:
• Let's say you have a set of coins with values [1, 2, 5, 10] and you need to make
change for 39 using the minimum number of coins.
• The greedy algorithm for making change would work as follows:
• Step-1: Start with the largest coin value that is less than or equal to the amount to be changed.
In this case, the largest coin less than or equal to 39 is 10.
• Step- 2: Subtract the largest coin value from the amount to be changed, and add the coin to the
solution. In this case, subtracting 10 from 39 gives 29, and we add one 10-coin to the solution.
• Repeat steps 1 and 2 until the amount to be changed becomes 0.
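The two steps above can be sketched as a short C++ function (a minimal sketch; the function name is illustrative):

```cpp
#include <vector>
#include <algorithm>

// Greedy change-making: repeatedly take the largest coin that still fits.
// Returns the number of coins used (assumes the denominations can make change).
int greedyChange(std::vector<int> coins, int amount) {
    std::sort(coins.rbegin(), coins.rend());  // largest denomination first
    int count = 0;
    for (int c : coins) {
        while (amount >= c) {  // take coin c as long as it fits
            amount -= c;
            ++count;
        }
    }
    return count;
}
```

For the coins [1, 2, 5, 10] and amount 39, this takes three 10s, one 5 and two 2s, i.e. 6 coins.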
Another Example
• Making the exact change with the minimum number of coins
• Consider the Euro denominations of 1, 2, 5, 10, 20, 50 cents
• Starting with an empty set of coins, add the largest coin possible into the set which does not
go over the required amount.
Step 1: To make change for 72:
• Start with 50
Step 2:
• Start with 50
• Add a 20
Total: 70
To make change for 72:
• Start with 50
• Add a 20
• Skip the 10 and the 5, but add a 2
Total: 72
The solution is optimal
Drawback of Greedy Approach
• The greedy algorithm is not always the optimal solution for every optimization problem.
• When using the greedy approach to make change for the amount 20 with the coin denominations [18,
1, 10]
• The algorithm starts by selecting the largest coin value that is less than or equal to the target amount.
• In this case, the largest coin is 18, so the algorithm selects one 18 coin. After subtracting 18 from 20,
the remaining amount is 2.
• At this point, the greedy algorithm chooses the next largest coin less than or equal to 2, which is 1. It
then selects two 1 coins to make up the remaining amount. So, the greedy approach results in
using one 18 coin and two 1 coins.
• However, the greedy approach fails to find the optimal solution in this case. Although it uses three
coins, a better solution would have been to use two 10 coins, resulting in a total of only two coins (10
+ 10 = 20).
Knapsack problem
You are given the following-
•A knapsack (kind of shoulder bag) with limited weight capacity.
•Few items each having some weight and value.
The problem states:
Which items should be placed into the knapsack such that
• the value or profit obtained by putting the items into the knapsack is maximum, and
• the weight limit of the knapsack is not exceeded.
Knapsack Problem Variants
Knapsack problem has the following two variants-
• Fractional Knapsack Problem
• 0/1 Knapsack Problem
Fractional Knapsack Problem
In the Fractional Knapsack Problem:
• As the name suggests, items are divisible here.
• We can even put a fraction of any item into the knapsack if taking the complete
item is not possible.
• It is solved using the Greedy Method.
Fractional knapsack problem
Fractional knapsack problem is solved using greedy method in the following steps-
Step-01: For each item, compute its value / weight ratio.
Step-02: Arrange all the items in decreasing order of their value / weight ratio.
Step-03: Start putting the items into the knapsack beginning from the item with the highest ratio.
Put as many items as you can into the knapsack.
Time Complexity:
• The dominant step is sorting all items in decreasing order of their value / weight
ratio.
• If the items are already arranged in the required order, the selection loop takes O(n) time.
• The average time complexity of Quick Sort is O(n log n).
• Therefore, the total time taken, including the sort, is O(n log n).
Algorithm for Fractional Knapsack
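The greedy steps described above can be sketched in C++ (a minimal sketch; the struct and function names are illustrative):

```cpp
#include <vector>
#include <algorithm>

struct Item { double weight, value; };

// Greedy fractional knapsack: sort by value/weight ratio (descending),
// take whole items while they fit, then a fraction of the next item.
double fractionalKnapsack(std::vector<Item> items, double capacity) {
    std::sort(items.begin(), items.end(), [](const Item& a, const Item& b) {
        return a.value / a.weight > b.value / b.weight;
    });
    double total = 0.0;
    for (const Item& it : items) {
        if (capacity <= 0) break;
        double take = std::min(it.weight, capacity);  // whole item, or what fits
        total += it.value * (take / it.weight);
        capacity -= take;
    }
    return total;
}
```

For weights (5, 10, 15, 22, 25), values (30, 40, 45, 77, 90) and capacity 60 kg, this yields a total value of 230.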
Practice Problem Based On Fractional Knapsack
Problem
For the given set of items and knapsack capacity = 60 kg, find the optimal
solution for the fractional knapsack problem making use of the greedy approach.
OR
Find the optimal solution for the fractional knapsack problem making use of the
greedy approach.
Consider: n = 5, w = 60 kg
(w1, w2, w3, w4, w5) = (5, 10, 15, 22, 25)
(b1, b2, b3, b4, b5) = (30, 40, 45, 77, 90)
OR
A thief enters a house to rob it. He can carry a maximal weight of 60 kg in his
bag. There are 5 items in the house with the following weights and values. What
items should the thief take if he can take even a fraction of any item?

Item   Weight   Value
1      5        30
2      10       40
3      15       45
4      22       77
5      25       90
Solution
Step-01: Compute the value / weight ratio for each item:

Item   Weight   Value   Ratio
1      5        30      6
2      10       40      4
3      15       45      3
4      22       77      3.5
5      25       90      3.6

Step-02: Sort all the items in decreasing order of their value / weight ratio:
I1 (6), I2 (4), I5 (3.6), I4 (3.5), I3 (3)
Step-03: Start filling the knapsack by putting the items into it one by one:

Knapsack Weight Left   Items in Knapsack   Cost
60                     Ø                   0
55                     I1                  30
45                     I1, I2              70
20                     I1, I2, I5          160

Now, the knapsack weight left to be filled is 20 kg, but item-4 has a weight of
22 kg. Since in the fractional knapsack problem even a fraction of any item can
be taken, the knapsack will contain the following items:
< I1, I2, I5, (20/22) I4 >
Total cost of the knapsack
= 160 + (20/22) x 77
= 160 + 70
= 230 units
JOB SEQUENCING WITH DEADLINES
• The job scheduling algorithm is applied to schedule jobs on a single processor so as to maximize profit.
• The greedy approach to job sequencing states that, given n jobs, each with a deadline and an
associated profit, the jobs need to be scheduled in such a way that maximum profit is earned
within the deadlines.
• Each job takes exactly one unit of time to complete, and only one job can be scheduled at a time. A
job earns its corresponding profit only if it is completed within its deadline.
The objective is to determine:
• The maximum profit that can be obtained by scheduling the jobs optimally.
• The total number of jobs completed to achieve this maximum profit.
Example
Step 1: Find the maximum deadline value, dm, from the given deadlines.
dm = 4
Step 2 Arrange the jobs in descending order of their profits.
S. No. 1 2 3 4 5
Jobs J4 J5 J2 J3 J1
Deadlines 3 4 2 1 2
Profits 100 80 60 40 20
The maximum deadline, dm, is 4. Therefore, all the tasks must finish by time 4.
Choose the job with the highest profit, J4. It takes up 3 units of the available time.
Therefore, the next job must have a duration of 1.
Total Profit = 100.
Step 3: The next job with the highest profit is J5. But the time taken by J5 is 4, which exceeds the
remaining time by 3. Therefore, it cannot be added to the output set.
Step 4: The next job with the highest profit is J2. The time taken by J2 is 2, which also exceeds the
remaining time by 1. Therefore, it cannot be added to the output set.
Step 5: The next job with the highest profit is J3. The time taken by J3 is 1, which does not exceed the
remaining time. Therefore, J3 is added to the output set.
Total Profit: 100 + 40 = 140
Step 6: Since the maximum deadline is met, the algorithm ends. The output set of jobs
scheduled within the deadline is {J4, J3}, with the maximum profit of 140.
Example
Consider the following jobs with their deadlines and profits. Schedule the jobs in such a way that
they produce maximum profit after being executed.
Solution :
Step 1: Sort all the given jobs in decreasing order of their profit
Step 2: The value of the maximum deadline = 5.
So, draw a Gantt chart with maximum time on the chart = 5.
Now,
• We take each job one by one in the order they appear in Step 1.
• We place the job on the Gantt chart as far from 0 as possible (i.e., as close to its deadline as possible).
Step 3: We take job J4.
•Since its deadline is 2, we place it in the first empty cell before deadline 2 as:
Step 4:We take job J1.
•Since its deadline is 5, we place it in the first empty cell before deadline 5 as:
Step 5: We take job J3.
•Since its deadline is 3, we place it in the first empty cell before deadline 3 as:
Step 6: We take job J2.
•Since its deadline is 3, we place it in the first empty cell before deadline 3.
•Since the second and third cells are already filled, we place job J2 in the first cell as:
Step 7: Now, we take job J5.
•Since its deadline is 4, we place it in the first empty cell before deadline 4 as:
Final Step:
Now,
• The only job left is job J6 whose deadline is 2.
• All the slots before deadline 2 are already occupied.
• Thus, job J6 cannot be completed.
The optimal schedule is:
J2, J4, J3, J5, J1
This is the required order in which the jobs must be completed in order to obtain the maximum
profit.
Maximum Earned Profit
= Sum of profit of all the jobs in the optimal schedule
= Profit of job J2 + Profit of job J4 + Profit of job J3 + Profit of job J5 + Profit of job J1
= 180 + 300 + 190 + 120 + 200
= 990 units
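The slot-filling procedure from the steps above can be sketched in C++ (a minimal sketch; the `Job` struct and the test data are illustrative, not the slide's table):

```cpp
#include <vector>
#include <algorithm>

struct Job { int deadline, profit; };

// Greedy job sequencing: sort by profit (descending), then place each job
// in the latest free unit-time slot on or before its deadline.
int jobSequencingProfit(std::vector<Job> jobs) {
    std::sort(jobs.begin(), jobs.end(), [](const Job& a, const Job& b) {
        return a.profit > b.profit;
    });
    int maxDeadline = 0;
    for (const Job& j : jobs) maxDeadline = std::max(maxDeadline, j.deadline);
    std::vector<bool> slot(maxDeadline + 1, false);  // slot[t]: time t occupied?
    int profit = 0;
    for (const Job& j : jobs) {
        for (int t = j.deadline; t >= 1; --t) {  // try the latest free slot first
            if (!slot[t]) {
                slot[t] = true;
                profit += j.profit;
                break;
            }
        }
    }
    return profit;
}
```

For instance, with jobs (deadline, profit) = (2, 100), (1, 19), (2, 27), (1, 25), (3, 15), the schedule earns 27 + 100 + 15 = 142.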
Example 2
Consider the following Jobs with their deadlines and profits. Schedule the Jobs in such a way that
they produce maximum profit after being executed .
Step 1: Sort all the given jobs in decreasing order based on their profit.
The maximum value of the deadlines is 2. We assume that each job takes unit time to
complete. Hence, we will have 2 time slots, and only 2 jobs can be completed within their deadlines.
Step 2: Let us take job J1. Its deadline is 2, which means we have to complete job J1 on or
before time slot 2 in order to earn its profit.
Since its deadline is 2, we place it in the first empty cell before deadline 2, as shown.
Step 3: Let us take job J4. It has deadline 1, which means we have to complete job J4 on or
before its deadline to earn its profit. Since its deadline is 1, we place it in the first
empty cell.
The optimal schedule is J4, J1. This is the sequence in which the jobs must be scheduled to earn
the maximum profit.
The maximum profit earned = 35 + 70 = 105
Optimal Merge Patterns
• The optimal merge pattern problem aims to find the most efficient way to merge multiple sorted files into a
single sorted file, minimizing the total number of comparisons.
• Merge a set of sorted files of different length into a single sorted file. We need to find an optimal solution,
where the resultant file will be generated in minimum time.
• Given a number of sorted files, there are many ways to merge them into a single sorted file. The
merge can be performed pair-wise; hence, this type of merging is called 2-way merge patterns.
• As, different pairings require different amounts of time, in this strategy we want to determine an optimal
way of merging many files together. At each step, two shortest sequences are merged.
• To merge a p-record file and a q-record file requires possibly p + q record moves, the obvious choice
being, merge the two smallest files together at each step.
• Two-way merge patterns can be represented by binary merge trees. Let us consider a set of n sorted
files {f1, f2, f3, ..., fn}. Initially, each element of this set is considered as a single-node binary tree.
Example 1
• Let us consider the given files f1, f2, f3, f4 and f5, containing 20, 30, 10, 5 and 30 elements
respectively.
• If merge operations are performed according to the provided sequence, then
• M1 = merge f1 and f2 => 20 + 30 = 50
• M2 = merge M1 and f3 => 50 + 10 = 60
• M3 = merge M2 and f4 => 60 + 5 = 65
• M4 = merge M3 and f5 => 65 + 30 = 95
Hence, the total number of operations is
• 50 + 60 + 65 + 95 = 270
• Now, the question arises is there any better solution?
• Sorting the numbers according to their size in an ascending order, we get the following sequence −
• f4 , f3 , f1 , f2 , f5
• Hence, merge operations can be performed on this sequence
• M1 = merge f4 and f3 => 5 + 10 = 15
• M2 = merge M1 and f1 => 15 + 20 = 35
• M3 = merge M2 and f2 => 35 + 30 = 65
• M4 = merge M3 and f5 => 65 + 30 = 95
• Therefore, the total number of operations is
• 15 + 35 + 65 + 95 = 210
• Obviously, this is better than the previous one.
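Always re-selecting the two smallest remaining files (instead of fixing the sorted order up front) does even better; it can be sketched with a min-heap, the same idea as building a Huffman tree:

```cpp
#include <queue>
#include <vector>
#include <functional>

// Optimal 2-way merge cost: repeatedly merge the two smallest files.
// Merging a p-record file and a q-record file costs p + q record moves.
long long optimalMergeCost(const std::vector<int>& sizes) {
    std::priority_queue<long long, std::vector<long long>,
                        std::greater<long long>> pq(sizes.begin(), sizes.end());
    long long cost = 0;
    while (pq.size() > 1) {
        long long a = pq.top(); pq.pop();
        long long b = pq.top(); pq.pop();
        cost += a + b;   // cost of this merge
        pq.push(a + b);  // the merged file goes back into the pool
    }
    return cost;
}
```

For the same sizes (20, 30, 10, 5, 30) this merges 5+10=15, 15+20=35, 30+30=60 and 35+60=95, for a total of 205.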
Example 2
Hence, the solution takes 15 + 35 + 60 + 95 = 205 comparisons.
Graphs
• Before we learn about spanning trees, we need to understand
two types of graphs: undirected graphs and connected graphs.
• An undirected graph is a graph in which the edges do not
point in any direction (i.e., the edges are bidirectional).
• A connected graph is a graph in which there is always a path
from any vertex to any other vertex.
Spanning tree
• A spanning tree is a sub-graph of an undirected connected graph, which includes all the vertices
of the graph with a minimum possible number of edges. If a vertex is missed, then it is not a
spanning tree.
• The edges may or may not have weights assigned to them.
• The total number of spanning trees that can be created from a complete graph with n
vertices is equal to n^(n-2) (Cayley's formula).
• If we have n = 4, the maximum number of possible spanning trees is 4^(4-2) = 16.
• Thus, 16 spanning trees can be formed from a complete graph with 4 vertices.
Example of a Spanning Tree
• Let's understand the spanning tree with examples below:
• Let the original graph be:
• Some of the possible spanning trees that can be created from the above graph are
Properties of a Spanning Tree
The spanning tree holds the below-mentioned principles:
• The number of vertices (V) in the graph and the spanning tree is the same.
• There is a fixed number of edges in the spanning tree which is equal to one less than the total
number of vertices ( E = V-1 ).
• The spanning tree should not be disconnected; that is, it should consist of a single
connected component, not more than that.
• The spanning tree should be acyclic, which means there would not be any cycle in the tree.
• The total cost (or weight) of the spanning tree is defined as the sum of the edge weights of all the
edges of the spanning tree.
• There can be many possible spanning trees for a graph.
MST(Minimal Spanning Tree)
• A minimal spanning tree (MST) is a spanning tree of a connected,
edge-weighted graph that has the minimum possible total edge weight
• A minimum spanning tree is a spanning tree in which the sum of the
weights of the edges is as small as possible.
Example of a Minimum Spanning Tree
• Let's understand the above definition with the help of the example
below.
• The initial graph is:
• The possible spanning trees from the above graph are:
• The minimum spanning tree from the above spanning trees is:
How to find a Minimum Spanning Tree?
An MST can be found using two classical greedy algorithms:
• Prim's algorithm
• Kruskal's algorithm
Prim's Algorithm
• Prim's algorithm is a key method in the analysis of algorithms, used to find the Minimum Spanning Tree
(MST) of a graph.
• This algorithm works by starting with a single node and gradually adding the smallest possible
edges that connect new nodes to the tree. Here, we will learn how Prim's algorithm works, why
it is important, and how it can be applied to solve various real-world problems.
Steps
1. Set up the minimum spanning tree using a randomly chosen vertex.
2. Find the minimum of all the edges connecting the tree to further vertices, then add it to the tree.
3. Repeat step 2 as necessary to obtain a minimum spanning tree.
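The three steps above can be sketched with a min-heap of candidate edges (a minimal sketch; an adjacency-list representation is assumed):

```cpp
#include <vector>
#include <queue>
#include <functional>
#include <utility>

// Prim's algorithm: grow the tree from `start`, always taking the cheapest
// edge to a vertex not yet in the tree. graph[u] holds (neighbour, weight)
// pairs; the graph is assumed connected. Returns the total MST cost.
int primMST(const std::vector<std::vector<std::pair<int, int>>>& graph, int start) {
    int n = graph.size();
    std::vector<bool> inTree(n, false);
    // min-heap of (edge weight, vertex)
    std::priority_queue<std::pair<int, int>,
                        std::vector<std::pair<int, int>>,
                        std::greater<std::pair<int, int>>> pq;
    pq.push({0, start});
    int cost = 0;
    while (!pq.empty()) {
        auto [w, u] = pq.top();
        pq.pop();
        if (inTree[u]) continue;  // u was reached earlier by a cheaper edge
        inTree[u] = true;
        cost += w;
        for (auto [v, wt] : graph[u])
            if (!inTree[v]) pq.push({wt, v});
    }
    return cost;
}
```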
Example
Find the minimum spanning tree using Prim's method (greedy approach) for the graph given below,
with S as the arbitrary root.
Solution
Step 1: The arbitrary root is stated to be S, so among all the edges connected to S we
need to find the least-cost edge.
S → B = 8, V = {S, B}
Step 2: Since B was visited last, check for the least-cost edge connected to vertex B.
B → A = 9, B → C = 16, B → E = 14
Step 3: Since A was visited last, check for the least-cost edge connected to vertex A.
A → C = 22, A → B = 9, A → E = 11
Step 4: Since E was visited last, check for the least-cost edge connected to vertex E.
E → C = 18, E → D = 3
Step 5: Since D was visited last, check for the least-cost edge connected to vertex D.
D → C = 15
The minimum spanning tree is obtained with the minimum cost = 46
Kruskal's algorithm
• In Kruskal's algorithm, we sort all edges of the given graph in increasing order of weight.
• It then keeps adding new edges and nodes to the MST if the newly added edge does not form a cycle.
• It picks the minimum-weight edge first and the maximum-weight edge last.
• Thus we can say that it makes a locally optimal choice at each step in order to find the optimal solution.
How to find the MST using Kruskal's algorithm?
Below are the steps for finding the MST using Kruskal's algorithm:
• Sort all the edges in non-decreasing order of their weight.
• Pick the smallest edge. Check if it forms a cycle with the spanning tree formed so far. If a cycle is
not formed, include this edge; else, discard it.
• Repeat step 2 until there are (V - 1) edges in the spanning tree.
• For example, if a graph contains 9 vertices and 14 edges, the minimum spanning tree formed will
have (9 - 1) = 8 edges.
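The steps above can be sketched with a simple union-find (disjoint-set) structure for the cycle check (a minimal sketch; names are illustrative):

```cpp
#include <vector>
#include <algorithm>
#include <numeric>
#include <tuple>

// Union-find with path compression, used to detect cycles.
struct DSU {
    std::vector<int> parent;
    DSU(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;  // same component: edge would form a cycle
        parent[a] = b;
        return true;
    }
};

// Kruskal's algorithm: sort edges (weight, u, v) by weight and keep an edge
// only if it joins two different components. Returns the total MST weight.
int kruskalMST(int n, std::vector<std::tuple<int, int, int>> edges) {
    std::sort(edges.begin(), edges.end());  // non-decreasing weight
    DSU dsu(n);
    int cost = 0;
    for (auto [w, u, v] : edges)
        if (dsu.unite(u, v)) cost += w;
    return cost;
}
```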
Example
Find the MST for the given graph using Kruskal's algorithm.
What is Dynamic Programming?
• Dynamic programming is a computer programming technique where an algorithmic problem is first
broken down into sub-problems, the results are saved, and then the sub-problems are optimized to find
the overall solution.
• Dynamic Programming is a commonly used algorithmic technique for optimizing recursive solutions
in which the same subproblems are computed again and again.
• The core idea behind DP is to store solutions to subproblems so that each is solved only once.
• To solve DP problems, we first write a recursive solution in a way that there are overlapping subproblems
in the recursion tree (the recursive function is called with the same parameters multiple times).
• To make sure that a recursive value is computed only once (to improve time taken by algorithm), we
store results of the recursive calls.
• There are two ways to store the results: top-down (memoization) and bottom-up (tabulation).
Approaches of Dynamic Programming (DP)
Dynamic programming can be achieved using two approaches:
• Top-Down (Memoization) Approach
• Bottom-Up (Tabulation) Approach
Example of Dynamic Programming (DP)
Consider the problem of computing the nth Fibonacci number:

// Naive recursive function to find the nth Fibonacci number
#include <iostream>
using namespace std;

int fib(int n) {
    if (n <= 1) {
        return n;
    }
    // recomputes the same subproblems many times
    return fib(n - 1) + fib(n - 2);
}

int main() {
    int n = 5;
    cout << fib(n);  // prints 5
    return 0;
}
Recursion tree of the above recursive solution
How will Dynamic Programming (DP) Work?
• Identify Subproblems: Divide the main problem into
smaller, independent subproblems, i.e., F(n-1) and F(n-2).
• Store Solutions: Solve each subproblem and store the
solution in a table or array so that we do not have to
recompute the same again.
• Build Up Solutions: Use the stored solutions to build up the
solution to the main problem. For F(n), look up F(n-1) and
F(n-2) in the table and add them.
• Avoid Recomputation: By storing solutions, DP ensures
that each subproblem (for example, F(2)) is solved only
once, reducing computation time.
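Applying these four ideas to the Fibonacci example gives the top-down (memoized) version below (a minimal sketch; the function names are illustrative):

```cpp
#include <vector>

// Top-down (memoized) Fibonacci: each F(i) is computed once and cached,
// turning the exponential naive recursion into O(n).
int fibMemo(int n, std::vector<int>& memo) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];  // already solved: just look it up
    return memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
}

int fibDP(int n) {
    std::vector<int> memo(n + 1, -1);  // -1 marks "not computed yet"
    return fibMemo(n, memo);
}
```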
Matrix chain multiplication
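The standard dynamic program for matrix chain multiplication fills a table m[i][j], the minimum number of scalar multiplications needed to compute the product A_i ... A_j, using m[i][j] = min over k of m[i][k] + m[k+1][j] + d[i-1]*d[k]*d[j]. A minimal C++ sketch:

```cpp
#include <vector>
#include <algorithm>
#include <climits>

// Matrix chain multiplication DP. dims has n+1 entries; matrix i has
// dimensions dims[i-1] x dims[i]. Returns the minimum number of scalar
// multiplications needed to compute the full product.
int matrixChainOrder(const std::vector<int>& dims) {
    int n = dims.size() - 1;  // number of matrices
    std::vector<std::vector<int>> m(n + 1, std::vector<int>(n + 1, 0));
    for (int len = 2; len <= n; ++len) {          // chain length
        for (int i = 1; i + len - 1 <= n; ++i) {
            int j = i + len - 1;
            m[i][j] = INT_MAX;
            for (int k = i; k < j; ++k)           // try every split point
                m[i][j] = std::min(m[i][j],
                    m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]);
        }
    }
    return m[1][n];
}
```

For example, for matrices of sizes 10x20, 20x30 and 30x40 (dims = {10, 20, 30, 40}), the best order is (A1 A2) A3, costing 6000 + 12000 = 18000 multiplications.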
Longest Common Subsequence
Example
What is the LCS of X and Y ?
X= ABCB
Y= BDCAB
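The standard DP computes c[i][j], the LCS length of the first i characters of X and the first j characters of Y. A minimal C++ sketch (for X = ABCB and Y = BDCAB it returns 3, corresponding to BCB):

```cpp
#include <string>
#include <vector>
#include <algorithm>

// LCS length via DP: c[i][j] is the LCS length of X[0..i) and Y[0..j).
int lcsLength(const std::string& X, const std::string& Y) {
    int m = X.size(), n = Y.size();
    std::vector<std::vector<int>> c(m + 1, std::vector<int>(n + 1, 0));
    for (int i = 1; i <= m; ++i)
        for (int j = 1; j <= n; ++j)
            c[i][j] = (X[i - 1] == Y[j - 1])
                ? c[i - 1][j - 1] + 1                  // characters match
                : std::max(c[i - 1][j], c[i][j - 1]);  // drop one character
    return c[m][n];
}
```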
0/1 Knapsack Problem
• We discussed the fractional knapsack problem using the greedy approach earlier. It was shown
that the greedy approach gives an optimal solution for the fractional knapsack. This chapter covers
the 0-1 Knapsack problem using a dynamic programming approach and its analysis.
• Unlike in the fractional knapsack, items are always taken whole; a fraction of an item cannot be
used. Either the item is added to the knapsack or it is not, which is why this method is known as
the 0-1 Knapsack problem.
• Hence, in case of 0-1 Knapsack, the value of xᵢ can be either 0 or 1, where other constraints remain the same.
• 0-1 Knapsack cannot be solved by the Greedy approach: Greedy does not ensure an optimal
solution for this problem. In some instances the Greedy approach may happen to give an optimal
solution, but this is not guaranteed.
Example
Solution
Using the greedy approach of the 0-1 knapsack problem, the weight that would be stored in the knapsack
would be A + B = 4 with the maximum profit 2 + 4 = 6. However, that solution would not be the optimal one.
Therefore, dynamic programming must be adopted to solve 0-1 knapsack problems.
Step 1
• Construct a table with the maximum weight of the knapsack as rows and the items, with their
respective weights and profits, as columns.
• Values stored in the table are the cumulative profits of the items whose weights do not exceed the
maximum weight of the knapsack (the designated value for each row).
• We fill the 0th row and 0th column with zeroes because:
•If no items are considered, no profit can be earned.
•If the maximum weight of the knapsack is 0, then no item can be added to the knapsack.
The remaining values are filled with the maximum
profit achievable with respect to the items and weight
per column that can be stored in the knapsack.
The formula to store the profit values is:
c[i, w] = max { c[i − 1, w], c[i − 1, w − w[i]] + P[i] }
By computing all the values using the formula, the table
obtained would be:
To find the items to be added to the knapsack, identify the maximum profit in the
table and trace back the items that make up that profit; in this example, it is {1, 7}.
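The table-filling formula above translates directly into code. A minimal C++ sketch (the test data below is illustrative, not the slide's example):

```cpp
#include <vector>
#include <algorithm>

// 0/1 knapsack DP. c[i][w] = best profit using the first i items with
// capacity w, via c[i][w] = max(c[i-1][w], c[i-1][w - wt[i-1]] + p[i-1]).
int knapsack01(const std::vector<int>& wt, const std::vector<int>& p, int W) {
    int n = wt.size();
    std::vector<std::vector<int>> c(n + 1, std::vector<int>(W + 1, 0));
    for (int i = 1; i <= n; ++i)
        for (int w = 0; w <= W; ++w) {
            c[i][w] = c[i - 1][w];             // skip item i
            if (wt[i - 1] <= w)                // or take item i whole
                c[i][w] = std::max(c[i][w], c[i - 1][w - wt[i - 1]] + p[i - 1]);
        }
    return c[n][W];
}
```

For instance, with weights {1, 3, 4, 5}, profits {1, 4, 5, 7} and capacity 7, the best choice is the items of weights 3 and 4, for a profit of 9.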
Thank You!!!