Divide and Conquer Algorithms Explained

The document discusses the Divide and Conquer algorithm, detailing its method of breaking problems into smaller parts, solving them, and combining results, with examples like Merge Sort and Fibonacci series. It also introduces Dynamic Programming, explaining its principles, applications, and approaches, emphasizing optimal substructure and overlapping subproblems. Additionally, it covers Multistage Graphs and their shortest path algorithms, highlighting the efficiency of Dynamic Programming over brute force methods.


UNIT – 5

Divide and Conquer: General method, applications

Divide and Conquer algorithm is a problem-solving strategy that involves

 Divide: Break the given problem into smaller non-overlapping problems

 Conquer: Solve Smaller Problems

 Combine: Use the Solutions of Smaller Problems to find the overall result.

Examples of Divide and Conquer are Merge Sort, Quick Sort, Binary Search and Closest Pair of Points.

Let us understand the divide and conquer algorithm with the help of merge sort and Fibonacci series:

1. Merge sort:

Suppose we wish to arrange the letters of the word ‘TECHVIDVAV’ in ascending order. Merge sort divides the word into two halves, sorts each half recursively, and then merges the two sorted halves to obtain the final order.
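The divide, conquer, and combine steps can be sketched in Java as follows (a minimal sketch; the class and method names are illustrative, not part of the original notes):

```java
public class MergeSortDemo {
    // Recursively sorts a[lo..hi] using divide and conquer.
    static void mergeSort(char[] a, int lo, int hi) {
        if (hi - lo < 1) return;          // one element: already sorted
        int mid = (lo + hi) / 2;
        mergeSort(a, lo, mid);            // Divide + Conquer: left half
        mergeSort(a, mid + 1, hi);        // Divide + Conquer: right half
        merge(a, lo, mid, hi);            // Combine the two sorted halves
    }

    // Merges the sorted runs a[lo..mid] and a[mid+1..hi].
    static void merge(char[] a, int lo, int mid, int hi) {
        char[] tmp = new char[hi - lo + 1];
        int i = lo, j = mid + 1, k = 0;
        while (i <= mid && j <= hi)
            tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i <= mid) tmp[k++] = a[i++];
        while (j <= hi)  tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, lo, tmp.length);
    }

    public static void main(String[] args) {
        char[] word = "TECHVIDVAV".toCharArray();
        mergeSort(word, 0, word.length - 1);
        System.out.println(new String(word)); // ACDEHITVVV
    }
}
```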
2. Write a function that generates the Fibonacci series using the divide and conquer approach.

fibonacci(n) {
    if (n < 2)
        return 1;
    else
        return fibonacci(n - 1) + fibonacci(n - 2);
}

Applications of Divide and Conquer

Divide and conquer strategy has various application areas such as:

1. Defective chessboard

2. Binary search

3. Finding the maximum and minimum in an array

4. Merge sort

5. Quicksort

6. Strassen’s matrix multiplication

and many more.


Implementation of Min-Max Problem

Min-Max Problem using Divide and Conquer

In this approach, the array is divided into two halves, and the maximum and minimum of each half are found recursively. The overall maximum is then the larger of the two half-maxima, and the overall minimum is the smaller of the two half-minima.

In this given problem, the number of elements in an array is y – x + 1, where y is greater than or equal to x.

Max - Min(x,y) will return the maximum and minimum values of an array numbers [x….y].

Algorithm: Max-Min(x, y)

if y − x ≤ 1 then

return (max(numbers[x], numbers[y]), min(numbers[x], numbers[y]))

else

(max1, min1) := Max-Min(x, ⌊(x + y)/2⌋)

(max2, min2) := Max-Min(⌊(x + y)/2⌋ + 1, y)

return (max(max1, max2), min(min1, min2))
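The algorithm above can be sketched in Java, with the (max, min) pair returned as a two-element array (a minimal sketch; the class name is illustrative):

```java
public class MaxMinDemo {
    // Returns {max, min} of numbers[x..y] using divide and conquer.
    static int[] maxMin(int[] numbers, int x, int y) {
        if (y - x <= 1) {                 // one or two elements: compare directly
            return new int[] {
                Math.max(numbers[x], numbers[y]),
                Math.min(numbers[x], numbers[y])
            };
        }
        int mid = (x + y) / 2;
        int[] left  = maxMin(numbers, x, mid);       // (max1, min1)
        int[] right = maxMin(numbers, mid + 1, y);   // (max2, min2)
        return new int[] {
            Math.max(left[0], right[0]),             // overall max
            Math.min(left[1], right[1])              // overall min
        };
    }

    public static void main(String[] args) {
        int[] a = {5, 3, 8, 2, 7, 6};
        int[] result = maxMin(a, 0, a.length - 1);
        System.out.println("Max = " + result[0] + ", Min = " + result[1]); // Max = 8, Min = 2
    }
}
```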

Example:

Let the array be:

A = [5, 3, 8, 2, 7, 6]

Step 1: Divide

Left half: [5, 3, 8]

Right half: [2, 7, 6]

Step 2: Find Min-Max of each half

Left half → min = 3, max = 8

Right half → min = 2, max = 7

Step 3: Combine

Overall min = min(3, 2) = 2

Overall max = max(8, 7) = 8


Result: Minimum = 2 Maximum = 8

Complexity Analysis:

Time Complexity: O(n)

Number of Comparisons: ≈ 3n/2 − 2 (less than the 2n−2 comparisons in the simple linear method)

Space Complexity: O(log n) (due to recursion)

Advantages:

Reduces number of comparisons.

Efficient for large datasets.

Demonstrates clear recursive structure.

Introduction to Dynamic Programming

Dynamic Programming (DP) is a commonly used algorithmic technique for optimizing recursive solutions in which the same subproblems are solved repeatedly.

 The core idea behind DP is to store solutions to subproblems so that each is solved only once.

 To solve DP problems, we first write a recursive solution in a way that there are overlapping
subproblems in the recursion tree (the recursive function is called with the same parameters multiple
times)

 To make sure that a recursive value is computed only once (to improve time taken by algorithm), we
store results of the recursive calls.

 There are two ways to store the results, one is top down (or memoization) and other is bottom up (or
tabulation).

When to Use Dynamic Programming (DP)?

Dynamic programming is used for solving problems that consists of the following characteristics:

1. Optimal Substructure:

The property Optimal substructure means that we use the optimal results of subproblems to achieve the
optimal result of the bigger problem.
Example:

Consider the problem of finding the minimum cost path in a weighted graph from a source node to a
destination node. We can break this problem down into smaller subproblems:

 Find the minimum cost path from the source node to each intermediate node.

 Find the minimum cost path from each intermediate node to the destination node.

The solution to the larger problem (finding the minimum cost path from the source node to the destination
node) can be constructed from the solutions to these smaller subproblems.

2. Overlapping Subproblems:

When the same subproblems are solved repeatedly in different parts of the computation, the problem is said to have the Overlapping Subproblems property.

Example:

Consider the problem of computing the Fibonacci series. To compute the Fibonacci number at index n, we
need to compute the Fibonacci numbers at indices n-1 and n-2. This means that the subproblem of computing
the Fibonacci number at index n-2 is used twice (note that the call for n - 1 will make two calls, one for n-2
and other for n-3) in the solution to the larger problem of computing the Fibonacci number at index n.

These overlapping subproblems are easy to spot in the recursion tree for the nth Fibonacci number.

Approaches of Dynamic Programming (DP)

Dynamic programming can be achieved using two approaches:

1. Top-Down Approach (Memoization):

In the top-down approach, also known as memoization, we keep the solution recursive and add a
memoization table to avoid repeated calls of same subproblems.

 Before making any recursive call, we first check if the memoization table already has solution for it.

 After the recursive call is over, we store the solution in the memoization table.
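The top-down approach can be sketched as a memoized Fibonacci function (a minimal sketch, using the convention fib(0) = 0, fib(1) = 1 from the recursive Java program later in this unit):

```java
import java.util.HashMap;
import java.util.Map;

public class FibMemo {
    static Map<Integer, Long> memo = new HashMap<>();

    static long fib(int n) {
        if (n <= 1) return n;                 // base cases
        if (memo.containsKey(n))              // check the memoization table first
            return memo.get(n);
        long result = fib(n - 1) + fib(n - 2);
        memo.put(n, result);                  // store after the recursive call is over
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // fast even for large n
    }
}
```

Each distinct value of n is now computed only once, so the running time drops from exponential to O(n).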

2. Bottom-Up Approach (Tabulation):

In the bottom-up approach, also known as tabulation, we start with the smallest subproblems and
gradually build up to the final solution.

 We write an iterative solution (avoid recursion overhead) and build the solution in bottom-up
manner.

 We use a dp table where we first fill the solution for base cases and then fill the remaining entries of
the table using recursive formula.

 We only use recursive formula on table entries and do not make recursive calls.
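The bottom-up approach can likewise be sketched as an iterative, tabulated Fibonacci function (a minimal sketch; no recursive calls are made, only the recursive formula on table entries):

```java
public class FibTab {
    static long fib(int n) {
        if (n <= 1) return n;
        long[] dp = new long[n + 1];        // dp[i] holds the ith Fibonacci number
        dp[0] = 0;                          // base cases filled first
        dp[1] = 1;
        for (int i = 2; i <= n; i++)
            dp[i] = dp[i - 1] + dp[i - 2];  // recursive formula applied to table entries
        return dp[n];
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // 55
    }
}
```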
Example of Dynamic Programming (DP)

Example 1: Consider the problem of finding the Fibonacci sequence:

Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...

Brute Force Approach: To find the nth Fibonacci number with brute force, you recursively compute the (n-1)th and (n-2)th Fibonacci numbers and add them.

// Java program to find the nth Fibonacci number using recursion.
class GfG {

    // Function to find the nth Fibonacci number
    static int fib(int n) {
        if (n <= 1) {
            return n;
        }
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        int n = 5;
        System.out.println(fib(n));
    }
}

Output
5
Below is the recursion tree of the above recursive solution.

The time complexity of the above approach is exponential, upper bounded by O(2^n), as we make two recursive calls in every function.

How will Dynamic Programming (DP) Work?

Let us now look at the recursion tree with overlapping subproblems highlighted in the same colour. We can clearly see that the recursive solution does the same work again and again, which is what makes the time complexity exponential. Imagine the time taken to compute a large Fibonacci number. Matching colours denote overlapping subproblems.

 Identify Subproblems: Divide the main problem into smaller, independent subproblems, i.e., F(n-1)
and F(n-2)

 Store Solutions: Solve each subproblem and store the solution in a table or array so that we do not
have to recompute the same again.

 Build Up Solutions: Use the stored solutions to build up the solution to the main problem. For F(n),
look up F(n-1) and F(n-2) in the table and add them.

 Avoid Recomputation: By storing solutions, DP ensures that each subproblem (for example,
F(2)) is solved only once, reducing computation time.

Fibonacci series:
Write an algorithm in dynamic programming that generates the Fibonacci series.

memo = [ ];
fibonacci(n) {
    if n in memo
        return memo[n];
    if (n < 2)
        fib = 1;
    else
        fib = fibonacci(n - 1) + fibonacci(n - 2);
    memo[n] = fib;
    return fib;
}
Applications of Dynamic Programming

1. Knapsack Problem: One of the classic examples of dynamic programming, the knapsack problem involves
selecting a subset of items with given weights and values to maximize total value without exceeding a weight
limit. This problem has practical applications in resource allocation, budgeting, and logistics.

2. Shortest Path Algorithms: Dynamic programming is used to find the shortest path in weighted graphs, such
as in the Bellman-Ford algorithm and Floyd-Warshall algorithm. These algorithms are essential in routing and
navigation systems, helping to determine the most efficient routes for transportation and delivery services.

3. Bioinformatics: In bioinformatics, dynamic programming is applied to sequence alignment problems, such as comparing DNA, RNA, or protein sequences. Algorithms like the Needleman-Wunsch and Smith-Waterman are used to find optimal alignments, which are crucial for understanding genetic relationships and functions.

4. Resource Allocation: Dynamic programming helps in optimizing resource allocation in various industries,
including manufacturing and project management. It ensures that resources are distributed efficiently across
competing tasks or time periods, maximizing utility while adhering to constraints.

5. Inventory Management: Businesses use dynamic programming to predict inventory needs, optimize stock
levels, and determine reorder points. This application helps maintain the right inventory levels while
minimizing costs, ensuring demand is met without excessive storage or stockouts.

6. Cryptography: Dynamic programming techniques are employed in cryptographic algorithms, such as dynamic key cryptography and code-based cryptography. These methods enhance security by efficiently managing encryption and decryption processes.

7. Stage Coach Problem: This foundational example of dynamic programming models a scenario where a
stagecoach must travel from an initial point to a destination, passing through several stages. The goal is to
determine the most cost-effective route, illustrating how DP can optimize sequential decision-making under
constraints.

Multistage Graph Basics

A Multistage graph is a directed, weighted graph in which the nodes can be divided into a set of stages such
that all edges are from a stage to next stage only (In other words there is no edge between vertices of same
stage and from a vertex of current stage to previous stage).

The vertices of a multistage graph are divided into n number of disjoint subsets
S = { S1 , S2 , S3 ........... Sn }, where S1 is the source and Sn is the sink (destination).
The cardinality of S1 and Sn are equal to 1. i.e., |S1| = |Sn| = 1.

We are given a multistage graph, a source and a destination, we need to find shortest path from source to
destination. By convention, we consider source at stage 1 and destination as last stage.
Now there are various strategies we can apply: -

 The Brute force method of finding all possible paths between Source and Destination and then
finding the minimum. That's the WORST possible strategy.

 Dijkstra's Algorithm of Single Source shortest paths. This method will find shortest paths from
source to all other nodes which is not required in this case. So it will take a lot of time, and it doesn't
even use the SPECIAL feature that this MULTI-STAGE graph has.

 Simple Greedy Method - At each node, choose the shortest outgoing path. If we apply this approach
to the example graph given above, we get the solution as 1 + 4 + 18 = 23. But a quick look at the
graph will show much shorter paths available than 23. So, the greedy method fails!

 The best option is Dynamic Programming. So we need to find Optimal Sub-structure, Recursive
Equations and Overlapping Sub-problems.

Optimal Substructure and Recursive Equation: -


We define the notation M(x, y) as the minimum cost from node y of stage x to T (the target node).

The shortest distance from stage 1, node 0 to the destination (node 7) is therefore M(1, 0).

// From node 0, we can go to node 1, 2 or 3
// on the way to node 7.
M(1, 0) = min(1 + M(2, 1),
              2 + M(2, 2),
              5 + M(2, 3))

This means that our problem of 0 → 7 is now subdivided into 3 subproblems.

So if we have a total of n stages and target T, the stopping condition is:

M(n-1, i) = cost(i → T) + M(n, T) = cost(i → T), since M(n, T) = 0.

Recursion Tree and Overlapping Sub-Problems: -


So, the hierarchy of M(x, y) evaluations will look something like this :-
In M(i, j), i is stage number and
j is node number
M(1, 0)
/ | \
/ | \
M(2, 1) M(2, 2) M(2, 3)
/ \ / \ / \
M(3, 4) M(3, 5) M(3, 4) M(3, 5) M(3, 6) M(3, 6)
. . . . . .
. . . . . .
. . . . . .

So, here we have drawn a very small part of the Recursion Tree and we can already see
Overlapping Sub-Problems. We can largely reduce the number of M(x, y) evaluations using
Dynamic Programming.

Implementation of Multistage Graph Problem

The below implementation assumes that nodes are numbered from 0 to N-1 from first stage (source)
to last stage (destination). We also assume that the input graph is multistage.

 We work backwards from the target and use a dist[] array to store the values of the overlapping subproblems.
 dist[i] will store the value of minimum distance from node i to node n-1 (target node).
 Therefore, dist[0] will store minimum distance between from source node to target node.

Algorithm

Input: A weighted multistage graph G with s and t as source and target vertices, respectively.
Output: The shortest path from s to t in G.
Set d(t) = 0 and d(v) = ∞ for all other vertices v in G.
For i = k-1 to 1:
a. For each vertex v in stage i:
i. Set d(v) = min(w(v, u) + d(u)) for all vertices u in stage i+1.
Return d(s) as the shortest path from s to t.
In the above algorithm, we start by setting the shortest path distance to the target vertex t as 0 and all other
vertices as infinity.
We then work backwards from the target vertex t to the source vertex s.
Starting from the second-to-last stage (k-1), we loop over all the vertices in that stage and update their
shortest path distance based on the
shortest path distances of the vertices in the next stage (i+1). We update the shortest path distance of a vertex
v in stage i as the minimum of the sum of its
weight w(v,u) and the shortest path distance d(u) of all vertices u in stage i+1 that are reachable from v.
After we have processed all stages and all vertices, the final shortest path distance d(s) will contain the
shortest path from s to t.
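The backward pass above can be sketched in Java, assuming an adjacency-matrix input with nodes numbered 0 to N-1 from source to target. The example graph's weights are chosen to be consistent with the costs quoted earlier in this unit (edges of cost 1, 2, 5 out of the source, and the greedy path 1 + 4 + 18 = 23), but the full edge list is an assumption:

```java
public class MultistageGraph {
    static final int INF = Integer.MAX_VALUE;

    // graph[i][j] = weight of edge i -> j, or INF if no edge.
    // Assumes a multistage graph: node 0 is the source, node n-1 the target.
    static int shortestDist(int[][] graph) {
        int n = graph.length;
        int[] dist = new int[n];
        dist[n - 1] = 0;                       // distance from target to itself
        for (int i = n - 2; i >= 0; i--) {     // work backwards from the target
            dist[i] = INF;
            for (int j = i + 1; j < n; j++) {
                if (graph[i][j] == INF || dist[j] == INF) continue;
                dist[i] = Math.min(dist[i], graph[i][j] + dist[j]);
            }
        }
        return dist[0];                        // shortest source-to-target distance
    }

    public static void main(String[] args) {
        // A 4-stage, 8-node example (illustrative weights).
        int[][] g = {
            {INF,   1,   2,   5, INF, INF, INF, INF},
            {INF, INF, INF, INF,   4,  11, INF, INF},
            {INF, INF, INF, INF,   9,   5,  16, INF},
            {INF, INF, INF, INF, INF, INF,   2, INF},
            {INF, INF, INF, INF, INF, INF, INF,  18},
            {INF, INF, INF, INF, INF, INF, INF,  13},
            {INF, INF, INF, INF, INF, INF, INF,   2},
            {INF, INF, INF, INF, INF, INF, INF, INF}
        };
        System.out.println(shortestDist(g)); // 9, via 0 -> 3 -> 6 -> 7
    }
}
```

Note how the greedy choice at node 0 (edge of cost 1) leads to the cost-23 path, while the DP answer of 9 takes the more expensive first edge.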
Time Complexity: The time complexity of the given code is O(N^2), where N is the number of nodes in the
graph. This is because the code involves two nested loops that iterate over all pairs of nodes in the graph, and
each iteration performs a constant amount of work (i.e., comparing and updating distances). Since the graph
is represented using an adjacency matrix, accessing an element takes constant time. Therefore, the overall time
complexity of the algorithm is O(N^2).

Space Complexity: The space complexity of the given program is O(N), where N is the number of nodes in
the graph. This is because the program uses an array of size N to store the shortest distance from each node to
the destination node N-1.

Greedy Algorithm: General method, applications

A greedy algorithm solves problems by making the best choice at each step. Instead of looking at all
possible solutions, it focuses on the option that seems best right now.

Problem structure:

Most of the problems where greedy algorithms work follows these two properties:

1) Greedy Choice Property: - This property states that choosing the best possible option at each step will
lead to the best overall solution. If this is not true, a greedy approach may not work.

2) Optimal Substructure: - This means that you can break the problem down into smaller parts, and solving
these smaller parts by making greedy choices helps solve the overall problem.

How to Identify Greedy Problems:

There are two major ways to detect greedy problems -

1) Can we break the problem into smaller parts? If so, and solving those parts helps us solve the main problem, it can probably be solved using a greedy approach. For example, in the activity selection problem, once we have selected an activity, the remaining subproblem is to choose from those activities that start after the selected one ends.
2) Will choosing the best option at each step lead to the best overall solution? If yes, then a greedy
algorithm could be a good choice. For example, in Dijkstra’s shortest path algorithm, choosing the unvisited
node with the minimum known distance at each step guarantees the shortest path (for graphs with non-negative edge weights).

Components of Greedy Algorithm

Greedy algorithms have the following five components −

 A candidate set − A solution is created from this set.

 A selection function − Used to choose the best candidate to be added to the solution.

 A feasibility function − Used to determine whether a candidate can be used to contribute to the
solution.

 An objective function − Used to assign a value to a solution or a partial solution.

 A solution function − Used to indicate whether a complete solution has been reached.

Difference between Greedy and Dynamic Programming:

1). Greedy algorithm works when the problem has Greedy Choice Property and Optimal Substructure,
Dynamic programming also works when a problem has optimal substructure, but it also
requires Overlapping Subproblems.

2). In a greedy algorithm, each locally optimal decision is expected to lead to an optimal solution for the entire problem, whereas in dynamic programming the solution to the main problem is built from the solutions of its overlapping subproblems.

Some common ways to solve Greedy Problems:

1) Sorting

Job Sequencing:- To maximize profits, we prioritize jobs with higher profits. So we sort them in descending
order based on profit. For each job, we try to schedule it as late as possible within its deadline to leave earlier
slots open for other jobs with closer deadlines.

Activity Selection:- To maximize the number of non-overlapping activities, we prioritize activities that end
earlier, which helps us select more activities. Therefore, we sort them based on their end times in ascending
order. Then, we select the first activity and continue adding subsequent activities that start after the
previous one has ended.
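The activity-selection strategy just described can be sketched as follows (a minimal sketch; the activity data is illustrative):

```java
import java.util.Arrays;

public class ActivitySelection {
    // Each activities[i] is {start, end}; returns the maximum number of
    // non-overlapping activities that can be selected.
    static int maxActivities(int[][] activities) {
        // Sort by end time, ascending.
        Arrays.sort(activities, (a, b) -> a[1] - b[1]);
        int count = 0, lastEnd = Integer.MIN_VALUE;
        for (int[] act : activities) {
            if (act[0] >= lastEnd) {   // starts after the previous one ended
                count++;
                lastEnd = act[1];
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int[][] activities = {{1, 2}, {3, 4}, {0, 6}, {5, 7}, {8, 9}, {5, 9}};
        System.out.println(maxActivities(activities)); // 4
    }
}
```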

Disjoint Intervals:- The approach for this problem is very similar to the previous one: we sort the intervals
based on their end times in ascending order, select the first interval, and continue adding the next intervals
that start after the previous one ends.

Fractional Knapsack:- The basic idea is to calculate the ratio profit/weight for each item and sort the items
by this ratio in descending order. Then take the items with the highest ratio first, adding as much of each as
fits (the whole item or a fraction of it).
Kruskal Algorithm:- To find the Minimum Spanning Tree (MST), we prioritize edges with the smallest
weights to minimize the overall cost. We start by sorting all the edges in ascending order based on their
weights. Then, we iteratively add edges to the MST while ensuring that adding an edge does not form a cycle.

2) Using Priority Queue or Heaps

Dijkstra Algorithm:- To find the shortest path from a source node to all other nodes in a graph, we prioritize
nodes based on the smallest distance from the source node. We begin by initializing the distances and using
a min-priority queue. In each iteration, we extract the node with the minimum distance from the priority
queue and update the distances of its neighboring nodes. This process continues until all nodes have been
processed, ensuring that we find the shortest paths efficiently.

Connect N ropes:- In this problem, the lengths of the ropes picked first are counted multiple times in the total
cost. Therefore, the strategy is to connect the two smallest ropes at each step and repeat the process for the
remaining ropes. To implement this, we use a min-heap to store all the ropes. In each operation, we extract
the top two elements from the heap, add their lengths, and then insert the sum back into the heap. We continue
this process until only one rope remains.
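The connect-N-ropes strategy can be sketched with Java's PriorityQueue as the min-heap (a minimal sketch; the rope lengths are illustrative):

```java
import java.util.PriorityQueue;

public class ConnectRopes {
    // Minimum total cost to connect all ropes, joining the two smallest each time.
    static int minCost(int[] ropes) {
        PriorityQueue<Integer> heap = new PriorityQueue<>();  // min-heap
        for (int r : ropes) heap.add(r);
        int total = 0;
        while (heap.size() > 1) {
            int sum = heap.poll() + heap.poll();  // extract the two smallest ropes
            total += sum;                         // joining them costs their sum
            heap.add(sum);                        // the joined rope goes back in
        }
        return total;
    }

    public static void main(String[] args) {
        // Joins 2+3=5, then 4+5=9, then 6+9=15; total cost 5+9+15 = 29.
        System.out.println(minCost(new int[]{4, 3, 2, 6})); // 29
    }
}
```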

Huffman Encoding:- To compress data efficiently, we assign shorter codes to more frequent characters and
longer codes to less frequent ones. We start by creating a min-heap that contains all characters and their
frequencies. In each iteration, we extract the two nodes with the smallest frequencies, combine them into a
new node, and insert this new node back into the heap. This process continues until there is only one node left
in the heap.

3) Arbitrary

Minimum Number of Jumps To Reach End:- In this problem we maintain a variable storing the maximum
reachable position within the current jump's range, and increment the jump counter whenever the current
jump's range has been fully traversed. We stop when the maximum reachable position is greater than or
equal to the last index.

Applications of Greedy Algorithms

 We use greedy algorithms in day-to-day life to find the minimum number of coins or notes for a given
amount. We first use as many of the largest denomination as possible, then the second largest, and so on.

 Dijkstra's shortest path algorithm: Finds the shortest path between two nodes in a graph.

 Kruskal's and Prim's minimum spanning tree algorithm: Finds the minimum spanning tree for a
weighted graph. Minimum Spanning Trees are used in Computer Networks Designs and have many
real-world applications

 Huffman coding: Creates an optimal prefix code for a set of symbols based on their frequencies.

 Fractional knapsack problem: Determines the most valuable items to carry in a knapsack with a
limited weight capacity.

 Activity selection problem: Chooses the maximum number of non-overlapping activities from a set
of activities.
 Job Sequencing and Job Scheduling Problems.

 Finding close-to-optimal solutions for NP-Hard problems like TSP, and a wide range of network design
problems, such as routing, resource allocation, and capacity planning.

 Machine learning: Greedy algorithms can be used in machine learning applications, such as feature
selection, clustering, and classification. In feature selection, greedy algorithms are used to select a
subset of features that are most relevant to a given problem. In clustering and classification, greedy
algorithms can be used to optimize the selection of clusters or classes

 Image processing: Greedy algorithms can be used to solve a wide range of image processing problems,
such as image compression, denoising, and segmentation. For example, Huffman coding is a greedy
algorithm that can be used to compress digital images by efficiently encoding the most frequent pixels.

 Combinatorial optimization: Greedy algorithms can be used to solve combinatorial optimization
problems, such as the traveling salesman problem, graph coloring, and scheduling. Although these
problems are typically NP-hard, greedy algorithms can often provide close-to-optimal solutions that
are practical and efficient.

 Game theory: Greedy algorithms can be used in game theory applications, such as finding the optimal
strategy for games like chess or poker. In these applications, greedy algorithms can be used to identify
the most promising moves or actions at each turn, based on the current state of the game.

Counting Coins Problem

The Counting Coins problem is to count to a desired value by choosing the least possible coins and the greedy
approach forces the algorithm to pick the largest possible coin. If we are provided coins of 1, 2, 5 and 10 and
we are asked to count 18 then the greedy procedure will be −

1 − Select one 10 coin, the remaining count is 8

2 − Then select one 5 coin, the remaining count is 3

3 − Then select one 2 coin, the remaining count is 1

4 − And finally, the selection of one 1 coin solves the problem

This seems to work fine: for this amount we needed to pick only 4 coins. But if we slightly change the
problem, the same approach may not produce an optimum result.

For a currency system with coins of value 1, 7 and 10, counting coins for the value 18 is still optimal, but for
an amount like 15 the greedy approach may use more coins than necessary. Here it uses 10 + 1 + 1 + 1 + 1 + 1,
a total of 6 coins, whereas the same amount can be made with only 3 coins (7 + 7 + 1).

Hence, we may conclude that the greedy approach picks an immediate optimized solution and may fail where
global optimization is a major concern.
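The coin-counting procedure can be sketched as follows (a minimal sketch; both denomination sets are the ones used in the text, and the second call reproduces the suboptimal 6-coin result):

```java
import java.util.ArrayList;
import java.util.List;

public class GreedyCoins {
    // Greedily picks the largest denomination that still fits.
    // Assumes denoms is sorted in descending order.
    static List<Integer> countCoins(int[] denoms, int amount) {
        List<Integer> picked = new ArrayList<>();
        for (int coin : denoms) {
            while (amount >= coin) {
                picked.add(coin);
                amount -= coin;
            }
        }
        return picked;
    }

    public static void main(String[] args) {
        // 4 coins: 10 + 5 + 2 + 1
        System.out.println(countCoins(new int[]{10, 5, 2, 1}, 18));
        // 6 coins: 10 + 1 + 1 + 1 + 1 + 1, though 7 + 7 + 1 uses only 3
        System.out.println(countCoins(new int[]{10, 7, 1}, 15));
    }
}
```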
Knapsack Problem Basics - Implementation of Knapsack Problem

Definition:

The Knapsack Problem is a classic optimization problem in computer science and mathematics.

You are given:

A set of items, each with a weight and a value.

A knapsack (bag) that can carry only a limited maximum weight (W).

Goal: Select items to maximize the total value in the knapsack without exceeding the weight capacity.

Types of Knapsack Problem:

i) 0/1 Knapsack Problem:

Each item can be taken only once or not taken at all. (No fractions allowed.)

You cannot break items: either take the entire item or leave it.

Cannot generally be solved by the greedy method. Needs dynamic programming for an optimal answer.

ii) Fractional Knapsack Problem:

Items can be broken into smaller pieces (fractions allowed).

You can break items into fractions (e.g., take half of a gold bar).

Solved optimally by the greedy method.

Implementation of Knapsack Problem

Given a collection of items, each with a value (profit) and weight, and a knapsack with limited capacity, the
goal is to maximize value without exceeding the weight limit.

Imagine you are a thief with a bag (knapsack) that can hold a limited amount of weight, and you must fill it
from a set of items, each having a weight and a profit (value). Your goal? Maximize the total value in the
bag without exceeding its weight limit.

0/1 Knapsack Problem: In Detail

Problem Statement

Given:

o A set of n items, each with a profit pᵢ and weight wᵢ.

o A knapsack with maximum weight capacity W.

Goal: Select a subset of items such that their total weight does not exceed W and the total profit is
maximized.

Constraint: Each item can be chosen at most once (either 0 or 1 times, hence the name).

Why Greedy Fails for 0/1 Knapsack

The greedy idea (select by highest value/weight ratio) does not always lead to the optimal solution, since
you can’t split an item as in fractional knapsack.

Example Showing Greedy Fails

Suppose:

Item   Profit   Weight
A      60       10
B      100      20
C      120      30

Knapsack capacity W = 50

Greedy (by value/weight ratio) picks A first, then B, then tries to pick C, but there is no space left.

Total profit: 60 + 100 = 160

But the optimal solution is to pick B and C (weight 20 + 30 = 50). Total profit = 100 + 120 = 220, better
than greedy.

Method to Solve 0/1 Knapsack (Dynamic Programming)

1. Create a table: Rows for items, columns for possible weights up to W.

2. Fill the table: For each item and weight, decide whether including it yields a better total value or not.

3. Result: The bottom-right cell gives the max profit.
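These three steps can be sketched as a tabulation, using the A/B/C example above (profits 60, 100, 120; weights 10, 20, 30; W = 50):

```java
public class Knapsack01 {
    // dp[i][w] = best profit using the first i items with capacity w.
    static int knapsack(int[] profit, int[] weight, int W) {
        int n = profit.length;
        int[][] dp = new int[n + 1][W + 1];
        for (int i = 1; i <= n; i++) {
            for (int w = 0; w <= W; w++) {
                dp[i][w] = dp[i - 1][w];                      // exclude item i
                if (weight[i - 1] <= w)                       // include it if it fits
                    dp[i][w] = Math.max(dp[i][w],
                        profit[i - 1] + dp[i - 1][w - weight[i - 1]]);
            }
        }
        return dp[n][W];                                      // bottom-right cell
    }

    public static void main(String[] args) {
        int[] profit = {60, 100, 120};
        int[] weight = {10, 20, 30};
        System.out.println(knapsack(profit, weight, 50)); // 220 (items B and C)
    }
}
```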

Step-by-Step Solution: Greedy Algorithm for Fractional Knapsack Problem

Let's use a concrete example:

Given:

Knapsack capacity W = 50

Items list:

Item   Value (Profit)   Weight
1      60               10
2      100              20
3      120              30

Step 1: Compute Value/Weight Ratio

Calculate the ratio for each item:

Item 1: 60/10 = 6.0

Item 2: 100/20 = 5.0

Item 3: 120/30 = 4.0

Step 2: Sort Items by Ratio (Descending Order)

Order:

1. Item 1 (6.0)

2. Item 2 (5.0)

3. Item 3 (4.0)

Step 3: Add Items to Knapsack

Initialize: Remaining capacity = 50, Total value = 0

Take Item 1:

o Weight: 10 (fits!)

o Take all. Remaining capacity = 50 - 10 = 40

o Total value = 0 + 60 = 60

Take Item 2:

o Weight: 20 (fits!)

o Take all. Remaining capacity = 40 - 20 = 20

o Total value = 60 + 100 = 160


Take Item 3:

o Weight: 30, but only 20 capacity left

o Take a 20/30 = 2/3 fraction

o Add value: 120 × (20/30) = 80

o Now, knapsack is full (remaining capacity = 0)

o Total value = 160 + 80 = 240

Final Solution

Take all of Item 1 (10 units, Value 60)

Take all of Item 2 (20 units, Value 100)

Take 2/3 of Item 3 (20 units, Value 80)

Total value in knapsack = 240

Total weight used = 10 + 20 + 20 = 50

Item Weight Taken Value Added

1 10 60

2 20 100

3 20 (of 30) 80
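The three steps above can be combined into a single greedy routine (a minimal sketch using the same items; the class and method names are illustrative):

```java
import java.util.Arrays;

public class FractionalKnapsack {
    // Sorts items by value/weight ratio and fills the knapsack greedily.
    static double maxValue(int[] value, int[] weight, int W) {
        int n = value.length;
        Integer[] idx = new Integer[n];
        for (int i = 0; i < n; i++) idx[i] = i;
        // Step 1 + 2: order item indices by descending value/weight ratio.
        Arrays.sort(idx, (a, b) ->
            Double.compare((double) value[b] / weight[b],
                           (double) value[a] / weight[a]));
        double total = 0;
        int remaining = W;
        // Step 3: add items (or fractions) until the knapsack is full.
        for (int i : idx) {
            if (remaining == 0) break;
            if (weight[i] <= remaining) {         // whole item fits
                total += value[i];
                remaining -= weight[i];
            } else {                              // take only a fraction
                total += value[i] * ((double) remaining / weight[i]);
                remaining = 0;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        int[] value = {60, 100, 120};
        int[] weight = {10, 20, 30};
        System.out.println(maxValue(value, weight, 50)); // 240.0
    }
}
```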
Key Points to Remember

Always select items with highest value/weight first.

For fractional knapsack, you can take a part of an item.

Greedy choice is optimal for the fractional version, not for 0/1 knapsack.

Summary Table

Aspect                    Fractional Knapsack    0/1 Knapsack

Item selection            Can take fractions     Whole items only

Greedy algorithm works?   Yes (optimal)          No (not always optimal)

Optimal approach          Greedy                 Dynamic Programming

Backtracking: General method, Applications

Backtracking is an algorithmic technique used to solve problems incrementally by exploring all possible
solutions. It systematically searches for a solution by building it step-by-step and abandoning a path
(backtracking) as soon as it determines that the path cannot lead to a valid solution.

The general steps of backtracking are:

1. Choose: Select a possible option or decision.

2. Explore: Proceed with the chosen option and move to the next step.

3. Check: If the current solution is invalid or incomplete, backtrack by undoing the last
decision and try the next alternative.

4. Repeat: Continue until all possibilities are explored or a solution is found.

Backtracking is often implemented using recursion, where the function calls itself to explore
further possibilities.

Applications of Backtracking

Backtracking is widely used in solving combinatorial and optimization problems. Some common
applications include:

1. N-Queens Problem:

o Place N queens on an N×N chessboard such that no two queens threaten each
other.

o Backtracking helps explore all possible placements and discard invalid configurations.
2. Graph Coloring:

o Assign colors to vertices of a graph such that no two adjacent vertices share the
same color.

o Used in scheduling problems, map coloring, etc.

3. Hamiltonian Cycle:

o Find a cycle in a graph that visits each vertex exactly once and returns to the
starting vertex.

o Backtracking explores all possible paths to find the cycle.

4. Subset Sum Problem:

o Determine if there exists a subset of numbers that adds up to a given sum.

o Useful in resource allocation and partitioning problems.

5. Sudoku Solver:

o Solve Sudoku puzzles by filling in numbers while ensuring the rules of the game
are followed.

o Backtracking systematically tries numbers in empty cells.

6. Word Search:

o Find a word in a grid of letters by moving horizontally, vertically, or diagonally.

o Backtracking explores all possible paths to form the word.

7. Knight’s Tour Problem:

o Find a sequence of moves for a knight on a chessboard such that it visits every
square exactly once.

o Backtracking explores all possible moves to find a valid sequence.

8. String Permutations:

o Generate all permutations of a given string or array.

o Backtracking ensures all possible arrangements are explored.

9. Maze Solving:

o Find a path from the start to the end of a maze.

o Backtracking explores all possible routes and discards dead ends.
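As an illustration of the choose/explore/check/backtrack pattern from application 1, here is a minimal N-Queens solver. The board encoding (index = row, value = column) is one common choice, not the only one:

```python
def solve_n_queens(n):
    solutions = []

    def safe(queens, row, col):
        # A queen at (row, col) is safe if no earlier queen shares
        # its column or either diagonal
        return all(c != col and abs(row - r) != abs(col - c)
                   for r, c in enumerate(queens))

    def place(queens):
        row = len(queens)
        if row == n:                      # all rows filled: record a solution
            solutions.append(queens[:])
            return
        for col in range(n):              # Choose a column for this row
            if safe(queens, row, col):    # Check validity
                queens.append(col)
                place(queens)             # Explore further rows
                queens.pop()              # Backtrack: undo the choice

    place([])
    return solutions

print(len(solve_n_queens(4)))  # → 2
```

The same skeleton (choose, check, recurse, undo) underlies the other applications in the list, with only the validity check changing.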


Implementation of Sum of Subset Problem

Given a set[] of non-negative integers and a value sum, the task is to print the subset of the
given set whose sum is equal to the given sum.

Examples:

Input: set[] = {1, 2, 1}, sum = 3

Output: [1, 2], [2, 1]
Explanation: There are two subsets, [1, 2] and [2, 1], with sum 3.

Subset Sum Problem using Backtracking

Subset sum can also be thought of as a special case of the 0–1 Knapsack problem. For each
item, there are two possibilities:

 Include the current element in the subset and recur for the remaining elements with
the remaining Sum.

 Exclude the current element from the subset and recur for the remaining elements.

Finally, if Sum becomes 0, print the elements of the current subset. The recursion's base case is reached when no items are left or the sum becomes negative; in that case, simply return.
Algorithm Explanation

1. Start with an empty subset.

2. For each element in the set:

o Include the element in the subset and check if the sum matches the target.
o Exclude the element and move to the next.

3. If the subset sum equals the target, record the subset.

4. Backtrack to explore other possibilities.

Example

nums = [3, 34, 4, 12, 5, 2]

target = 9

Output

For the input nums = [3, 34, 4, 12, 5, 2] and target = 9, the output will be:

Subsets with sum 9: [[3, 4, 2], [4, 5]]
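The include/exclude recursion described above can be sketched as follows (function and variable names are illustrative):

```python
def subset_sum(nums, target):
    results = []

    def backtrack(start, current, remaining):
        if remaining == 0:                    # target reached: record subset
            results.append(current[:])
            return
        for i in range(start, len(nums)):
            if nums[i] <= remaining:          # prune: skip elements that overshoot
                current.append(nums[i])       # include nums[i]
                backtrack(i + 1, current, remaining - nums[i])
                current.pop()                 # backtrack: undo the last choice

    backtrack(0, [], target)
    return results

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # → [[3, 4, 2], [4, 5]]
```

Pruning on `nums[i] <= remaining` implements the "sum becomes negative" base case before recursing, which avoids some unnecessary calls.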

Complexity analysis:

 Time Complexity: O(2^n). The above solution may try all subsets of the given set in the worst case, so its time complexity is exponential.

 Auxiliary Space: O(n) for the recursion stack, where n is the number of elements.

Branch and Bound: General method, applications

Branch and Bound (B&B) is an optimization technique for combinatorial problems. It systematically explores subproblems (branches) and uses bounds to eliminate those that cannot improve upon the best-known solution (pruning).

Key Concepts:

Branching: Decompose a problem into smaller subproblems.

Bounding: Compute a lower (or upper) bound on the subproblem's best possible solution.

Pruning: Discard subproblems if their bound is worse than the current best solution.
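The three concepts can be seen together in a small 0/1 knapsack solver. As a bound, it uses the fractional-knapsack value as an optimistic estimate; this is a sketch under the usual assumptions (positive weights, items sortable by value density), not a production implementation:

```python
def knapsack_bb(values, weights, capacity):
    # Sort by value density so the fractional bound is tight
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0

    def bound(i, value, room):
        # Bounding: greedily fill the remaining room, fractions allowed,
        # giving an upper bound on what this subproblem can still achieve
        while i < len(items) and items[i][1] <= room:
            value += items[i][0]
            room -= items[i][1]
            i += 1
        if i < len(items):
            value += items[i][0] * room / items[i][1]
        return value

    def dfs(i, value, room):
        nonlocal best
        best = max(best, value)
        if i == len(items) or bound(i, value, room) <= best:
            return  # Pruning: this branch cannot beat the current best
        if items[i][1] <= room:
            dfs(i + 1, value + items[i][0], room - items[i][1])  # Branch: include item i
        dfs(i + 1, value, room)                                  # Branch: exclude item i

    dfs(0, 0, capacity)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # → 220
```

Compare with the fractional result of 240 for the same data: the bound is optimistic, but the search only ever records values of complete 0/1 selections.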
Applications

1. Combinatorial Optimization:

Traveling Salesman Problem (TSP): Branch and Bound is used to find the shortest possible
route that visits a set of cities and returns to the origin city. It systematically explores all
possible routes while pruning those that exceed the current best solution.

Knapsack Problem: This algorithm helps in determining the most valuable combination of
items to include in a knapsack without exceeding its weight limit. It evaluates different
combinations and eliminates those that do not yield optimal value.

2. Resource Allocation:

In scenarios where resources need to be distributed optimally among competing demands, Branch and Bound can efficiently allocate resources to maximize overall utility or minimize costs.

3. Constraint Satisfaction Problems:

The algorithm is effective in solving problems where a set of constraints must be satisfied, such
as scheduling tasks or assigning resources while adhering to specific limitations.

4. Production Planning:

Branch and Bound is applied in production scheduling and planning to optimize the use of
resources and time, ensuring that production goals are met efficiently.

5. Network Flow Problems:

It is used to solve complex network flow issues, where the goal is to find the optimal way to
route flow through a network while minimizing costs or maximizing throughput.

Implementation of Travelling Salesman Problem

The Travelling Salesperson Problem (TSP) is an optimization problem where a salesperson must visit a given set of cities exactly once, starting and ending at the same city. The goal is to find the shortest possible route that covers all the cities and returns to the starting point.

TSP is an NP-hard problem, meaning there is no known efficient solution for large datasets,
but various algorithms can provide exact or approximate solutions.

Why is TSP Algorithm Important?

The Travelling Salesman Problem is important because it represents real-world problems that many industries face, such as:

 Delivery services need to plan routes to drop off packages at multiple locations.
 Logistics companies need to find the shortest path to transport goods efficiently.

 Manufacturing companies need to reduce the travel of robotic arms in factories.

Solving TSP helps reduce time, fuel costs, and energy, making operations faster and cheaper.

Algorithm Overview for Traveling Salesman Problem

1. Initialization: Set the initial best cost (e.g., infinity) and prepare the starting node, representing an empty path from the source city.

2. Branching: At each node (partial path), generate branches by extending the path to unvisited
cities.

3. Bounding: Calculate a lower bound on the total route cost achievable from this node (partial
solution).

4. Pruning: If the bound exceeds the current best cost, skip expanding that node.

5. Search: Use best-first or depth-first strategy to explore and update best solution.

6. Termination: Once all viable nodes are explored, return the best-found route.
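The six steps can be rendered as a depth-first branch and bound. The lower bound used here (cost so far plus each remaining city's cheapest onward edge) is deliberately simple and illustrative; stronger bounds, such as reduced cost matrices, prune far more:

```python
def tsp_branch_and_bound(dist):
    n = len(dist)
    best_cost = float('inf')
    best_path = None

    def lower_bound(cost, current, unvisited):
        # Optimistic estimate: cost so far, plus the cheapest edge leaving
        # the current city, plus each unvisited city's cheapest onward edge
        b = cost + min(dist[current][j] for j in unvisited | {0} if j != current)
        for c in unvisited:
            b += min(dist[c][j] for j in (unvisited | {0}) if j != c)
        return b

    def dfs(current, unvisited, cost, path):
        nonlocal best_cost, best_path
        if not unvisited:
            total = cost + dist[current][0]  # close the tour back to city 0
            if total < best_cost:
                best_cost, best_path = total, path + [0]
            return
        if lower_bound(cost, current, unvisited) >= best_cost:
            return  # prune: this branch cannot beat the best tour found so far
        for nxt in sorted(unvisited, key=lambda c: dist[current][c]):
            dfs(nxt, unvisited - {nxt}, cost + dist[current][nxt], path + [nxt])

    dfs(0, frozenset(range(1, n)), 0, [0])
    return best_cost, best_path

# Small illustrative 4-city instance (symmetric distances)
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp_branch_and_bound(dist))  # → (80, [0, 1, 3, 2, 0])
```

Sorting the branches by edge cost is a cheap best-first touch: finding a good tour early makes the bound prune more of the remaining tree.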

The following is a common dynamic programming algorithm for the Travelling Salesman Problem (TSP), known as the Held-Karp algorithm:

def tsp(dist):
    n = len(dist)  # Number of cities

    # dp[mask][i] stores the minimum distance to visit all cities in 'mask' and end at city 'i'
    dp = [[float('inf')] * n for _ in range(1 << n)]
    dp[1][0] = 0  # Start from city 0 (only city 0 visited)

    # Loop through all subsets of cities represented by 'mask'
    for mask in range(1 << n):
        for u in range(n):
            if mask & (1 << u):  # If u is included in the current subset 'mask'
                for v in range(n):
                    if mask & (1 << v) and u != v:  # If v is also in the subset and v != u
                        dp[mask][u] = min(dp[mask][u], dp[mask ^ (1 << u)][v] + dist[v][u])

    # Find the minimum cost to return to the starting city (0) from any city
    min_cost = float('inf')
    for u in range(1, n):
        min_cost = min(min_cost, dp[(1 << n) - 1][u] + dist[u][0])

    return min_cost
Travelling Salesman Problem: Example

Let's say we have a salesperson who needs to visit four popular Indian cities: Mumbai, Delhi,
Bengaluru, and Chennai, and they want to find the shortest route that visits each city exactly
once and returns to the starting city.

The distances between the cities are: Mumbai–Delhi 1,400 km, Mumbai–Bengaluru 980 km, Mumbai–Chennai 1,330 km, Delhi–Bengaluru 2,150 km, Delhi–Chennai 2,200 km, and Bengaluru–Chennai 350 km.

Problem: The salesperson starts in Mumbai and must visit Delhi, Bengaluru, and Chennai
exactly once, then return to Mumbai. The goal is to find the shortest route.

Possible Routes: Let’s calculate the total distance for a few possible routes:

1. Mumbai → Delhi → Bengaluru → Chennai → Mumbai

 Mumbai → Delhi = 1,400 km

 Delhi → Bengaluru = 2,150 km

 Bengaluru → Chennai = 350 km

 Chennai → Mumbai = 1,330 km

Total distance = 1,400 + 2,150 + 350 + 1,330 = 5,230 km

2. Mumbai → Delhi → Chennai → Bengaluru → Mumbai

 Mumbai → Delhi = 1,400 km

 Delhi → Chennai = 2,200 km

 Chennai → Bengaluru = 350 km

 Bengaluru → Mumbai = 980 km

Total distance = 1,400 + 2,200 + 350 + 980 = 4,930 km


3. Mumbai → Bengaluru → Chennai → Delhi → Mumbai

 Mumbai → Bengaluru = 980 km

 Bengaluru → Chennai = 350 km

 Chennai → Delhi = 2,200 km

 Delhi → Mumbai = 1,400 km

Total distance = 980 + 350 + 2,200 + 1,400 = 4,930 km

The shortest routes are:

 Mumbai → Delhi → Chennai → Bengaluru → Mumbai with a total distance of 4,930 km.

 Mumbai → Bengaluru → Chennai → Delhi → Mumbai with a total distance of 4,930 km.

In both cases, the total travel distance is the same, and this is the most efficient route for the
salesperson to minimize travel distance.

In this example, we used the Travelling Salesman Problem to determine the shortest route between four major Indian cities. By calculating different possible routes, we found the shortest one, which helps reduce travel time and costs. However, real-life problems can be much more complex.
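The hand enumeration above can be cross-checked with the Held-Karp function from the earlier section. The distance matrix is assembled from the figures used in the route calculations (distances in km; the function is repeated here so the snippet runs on its own):

```python
# Index: 0 = Mumbai, 1 = Delhi, 2 = Bengaluru, 3 = Chennai
dist = [
    [0,    1400, 980,  1330],
    [1400, 0,    2150, 2200],
    [980,  2150, 0,    350],
    [1330, 2200, 350,  0],
]

def tsp(dist):
    # Held-Karp, as given in the earlier section
    n = len(dist)
    dp = [[float('inf')] * n for _ in range(1 << n)]
    dp[1][0] = 0
    for mask in range(1 << n):
        for u in range(n):
            if mask & (1 << u):
                for v in range(n):
                    if mask & (1 << v) and u != v:
                        dp[mask][u] = min(dp[mask][u], dp[mask ^ (1 << u)][v] + dist[v][u])
    return min(dp[(1 << n) - 1][u] + dist[u][0] for u in range(1, n))

print(tsp(dist))  # → 4930
```

The result matches the 4,930 km total found by enumerating routes by hand.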

Approaches in TSP Algorithm

The Travelling Salesperson Problem is an NP-hard problem, and finding an optimal solution
becomes difficult as the number of cities increases. Various approaches have been developed
to solve TSP, each with different trade-offs between accuracy and efficiency:

1. Brute-Force Approach

This method explores all possible routes (permutations) between cities, calculates the total
distance for each route, and selects the shortest one. It guarantees an optimal solution but is
inefficient.

 Time Complexity: O(n!), where n is the number of cities.

 Advantages: Guarantees the optimal solution.

 Disadvantages: Extremely slow for large datasets due to factorial growth in possible
routes.
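For reference, the brute-force enumeration takes only a few lines; the distance matrix here is the one assembled from the Indian-cities example above:

```python
from itertools import permutations

# Index: 0 = Mumbai, 1 = Delhi, 2 = Bengaluru, 3 = Chennai (km)
dist = [
    [0,    1400, 980,  1330],
    [1400, 0,    2150, 2200],
    [980,  2150, 0,    350],
    [1330, 2200, 350,  0],
]

def tsp_brute_force(dist):
    n = len(dist)
    best = float('inf')
    for perm in permutations(range(1, n)):   # fix city 0 as start to avoid duplicate tours
        route = (0,) + perm + (0,)
        cost = sum(dist[route[i]][route[i + 1]] for i in range(n))
        best = min(best, cost)
    return best

print(tsp_brute_force(dist))  # → 4930
```

With 4 cities this checks only 3! = 6 routes, but the count grows factorially: 10 cities already require 362,880 routes.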

2. Dynamic Programming Approach (Held-Karp Algorithm)

The dynamic programming approach breaks the travelling salesman problem into smaller subproblems, solves them, and stores the results to avoid redundant calculations.
 Time Complexity: O(n² * 2^n).

 Advantages: More efficient than brute-force and provides an exact solution.

 Disadvantages: Still exponential, making it impractical for large numbers of cities, though it performs better than brute-force.

3. Approximation Algorithms

These algorithms do not guarantee the optimal solution but provide near-optimal solutions
more quickly than exact approaches.

Common approximation algorithms include:

Nearest Neighbor Algorithm:

Starts at a random city and repeatedly visits the nearest unvisited city.

 Time Complexity: O(n²).

 Advantages: Simple and fast.

 Disadvantages: The solution may not be very close to the optimal route.
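A sketch of the nearest-neighbor heuristic (function names are illustrative). On the four-city example above it happens to find an optimal 4,930 km tour, though in general it offers no such guarantee:

```python
def nearest_neighbor_tour(dist, start=0):
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[last][c])  # greedily pick the closest city
        tour.append(nxt)
        unvisited.remove(nxt)
    cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n - 1)) + dist[tour[-1]][start]
    return tour, cost

# Index: 0 = Mumbai, 1 = Delhi, 2 = Bengaluru, 3 = Chennai (km)
dist = [
    [0,    1400, 980,  1330],
    [1400, 0,    2150, 2200],
    [980,  2150, 0,    350],
    [1330, 2200, 350,  0],
]
print(nearest_neighbor_tour(dist))  # → ([0, 2, 3, 1], 4930)
```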

Greedy Algorithm:

Selects the shortest available edge between cities at each step without forming a cycle.

 Time Complexity: O(n² log n).

 Advantages: Provides quick and simple solutions.

 Disadvantages: May not give the shortest route.

Christofides Algorithm:

Finds a solution that is at most 1.5 times the optimal for metric TSP (where the triangle
inequality holds).

 Time Complexity: O(n³).

 Advantages: Provides a solution close to the optimal.

 Disadvantages: Works only for metric TSP.
