Brute Force and Divide & Conquer Methods


Module-2

Brute Force Approaches (Contd.)


Decrease and Conquer
Divide and Conquer
By
Kavyashree G M
Assistant Professor
ISE, DSATM
Brute Force Approaches (Contd.)
• Exhaustive Search:
• Exhaustive search is simply a brute-force approach to combinatorial problems.
• It suggests generating each and every element of the problem domain, selecting those of them that satisfy all the constraints, and then finding a desired element.
• We illustrate exhaustive search by applying it to three important problems: the traveling salesman problem, the knapsack problem, and the assignment problem.
Traveling Salesman Problem
• The problem asks to find the shortest tour through a given set of n cities that
visits each city exactly once before returning to the city where it started.
• The problem can be conveniently modeled by a weighted graph, with the
graph’s vertices representing the cities and the edge weights specifying the
distances.
• Then the problem can be stated as the problem of finding the shortest
Hamiltonian circuit of the graph.(Hamiltonian circuit is defined as a cycle
that passes through all the vertices of the graph exactly once).
• The total number of permutations needed is still (n − 1)!/2, which makes the exhaustive-search approach impractical for all but very small values of n.
• Hamiltonian circuit can also be defined as a sequence of n + 1
adjacent vertices vi0, vi1, . . . , vin−1, vi0, where the first vertex of the
sequence is the same as the last one and all the other n − 1 vertices
are distinct.
• Compute the tour lengths, and find the shortest among them.
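The procedure above (generate every tour, compute its length, keep the shortest) can be sketched in Python. This is a hypothetical illustration, not from the slides; the function name and the distance matrix in the usage note are our own.

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exhaustive search: try every tour that starts and ends at city 0."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    # Fixing the starting city leaves (n-1)! permutations of the remaining cities.
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[k]][tour[k + 1]] for k in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour
```

For example, with the arbitrary 4-city matrix `[[0,2,5,7],[2,0,8,3],[5,8,0,1],[7,3,1,0]]` the shortest tour has length 11.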
Knapsack Problem
• Given n items of known weights w1, w2, . . . , wn and values v1, v2, . . . , vn and a knapsack of capacity W, find the most valuable subset of the items that fit into the knapsack.
• Since the number of subsets of an n-element set is 2^n, the exhaustive search leads to a Ω(2^n) algorithm, no matter how efficiently individual subsets are generated.
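The 2^n-subset search can be sketched in Python (a hypothetical illustration; the example weights and values in the usage note are our own):

```python
from itertools import combinations

def knapsack_brute_force(weights, values, capacity):
    """Generate all 2^n subsets of item indices, keep the most valuable feasible one."""
    n = len(weights)
    best_value, best_subset = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(weights[i] for i in subset) <= capacity:
                value = sum(values[i] for i in subset)
                if value > best_value:
                    best_value, best_subset = value, subset
    return best_value, best_subset
```

With weights (7, 3, 4, 5), values (42, 12, 40, 25), and capacity 10, the best feasible subset is the third and fourth items, worth 65.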
Decrease and Conquer Approach
⚫ The decrease-and-conquer technique is based on exploiting the
relationship between a solution to a given instance of a problem
and a solution to its smaller instance
⚫ Using this technique, a given problem can be solved either recursively (top-down approach) or iteratively (bottom-up approach)
⚫ This technique is a slight variation of the divide and conquer technique
Steps
1. In the divide-and-conquer technique, a given problem of size n is divided into subproblems of size n/2, whereas in decrease and conquer the given problem is changed into a problem of smaller size such as n − 1. In other words, change the problem from a larger instance into a smaller instance.
2. Conquer the problem of smaller size.
3. Convert the solution of the smaller-size problem into a solution for the larger-size problem.
Variations of decrease and conquer
⚫ There are three major variations of decrease-and-
conquer:
1. decrease by a constant
2. decrease by a constant factor
3. variable size decrease
decrease by a constant
⚫ In the decrease-by-a-constant variation, the size of an instance is
reduced by the same constant on each iteration of the algorithm.
Typically, this constant is equal to one
⚫ Consider, as an example, the exponentiation problem of computing a^n where a ≠ 0 and n is a nonnegative integer
⚫ The relationship between a solution to an instance of size n and an instance of size n − 1 is obtained by the obvious formula a^n = a^(n−1) · a
⚫ So the function f(n) = a^n can be computed either top down by using its recursive definition
⚫ or bottom up by multiplying a by itself n times
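Both computations of a^n described above can be sketched in Python (a hypothetical illustration; function names are our own):

```python
def power_top_down(a, n):
    """Recursive (top-down) use of the relation a^n = a^(n-1) * a."""
    if n == 0:
        return 1
    return power_top_down(a, n - 1) * a

def power_bottom_up(a, n):
    """Iterative (bottom-up): multiply a by itself n times."""
    result = 1
    for _ in range(n):
        result *= a
    return result
```

Both run in Θ(n) multiplications, since the instance size decreases by one per step.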
decrease-by-a-constant-factor

⚫ The decrease-by-a-constant-factor technique suggests reducing a problem instance by the same constant factor on each iteration of the algorithm
⚫ In most applications, this constant factor is equal to two
⚫ For an example, let us revisit the exponentiation problem. If the instance of size n is to compute a^n, the instance of half its size is to compute a^(n/2), and the two are related by a^n = (a^(n/2))^2
⚫ Since we consider here instances with integer exponents only, the former formula does not work for odd n. If n is odd, we have to compute a^(n−1) by using the rule for even-valued exponents and then multiply the result by a.
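The halving rule, with the odd-n adjustment described above, can be sketched in Python (a hypothetical illustration; the function name is our own):

```python
def power_halving(a, n):
    """Decrease by a constant factor: a^n = (a^(n//2))^2 for even n.

    For odd n, compute a^(n-1) by the even-exponent rule, then multiply by a.
    """
    if n == 0:
        return 1
    if n % 2 == 0:
        half = power_halving(a, n // 2)
        return half * half
    return power_halving(a, n - 1) * a
```

Because the instance size is halved at each step, this uses only Θ(log n) multiplications, versus Θ(n) for the decrease-by-one approach.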
variable-size-decrease

⚫ Finally, in the variable-size-decrease variety of decrease-and-conquer, the size-reduction pattern varies from one iteration of an algorithm to another
⚫ Euclid’s algorithm for computing the greatest common divisor provides a good example of such a situation
⚫ gcd(m, n)=gcd(n, m mod n)
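Euclid's rule gcd(m, n) = gcd(n, m mod n) can be sketched iteratively in Python (a hypothetical illustration):

```python
def gcd(m, n):
    """Euclid's algorithm: repeatedly apply gcd(m, n) = gcd(n, m mod n).

    The instance size (the second argument) shrinks by a different amount
    on each iteration -- a variable-size decrease. gcd(m, 0) = m.
    """
    while n != 0:
        m, n = n, m % n
    return m
```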
Insertion Sort
Analysis of Best case time efficiency: for an already sorted array, only n − 1 key comparisons are made, so Cbest(n) = n − 1 ∈ Θ(n).
Analysis of Worst case time efficiency: for a strictly decreasing array, Cworst(n) = n(n − 1)/2 ∈ Θ(n^2).
Analysis of Average case time efficiency: on random arrays, about n^2/4 comparisons are made, so Cavg(n) ≈ n^2/4 ∈ Θ(n^2).
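As a concrete reference for the analyses above, a minimal Python sketch of insertion sort (a decrease-by-one algorithm: A[0..i−1] is assumed sorted, and A[i] is inserted into it):

```python
def insertion_sort(a):
    """Sort list a in place by inserting a[i] into the sorted prefix a[0..i-1]."""
    for i in range(1, len(a)):
        v = a[i]
        j = i - 1
        # Shift elements larger than v one position right to make room for it.
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = v
    return a
```

On a sorted input the inner while loop exits immediately each time, giving the n − 1 comparison best case; on a decreasing input it shifts the whole prefix, giving the n(n − 1)/2 worst case.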
Directed graph
⚫ A directed graph, or digraph for short, is a graph with directions specified for all its
edges
A directed cycle in a digraph is a sequence of three or more of its vertices that starts and
ends with the same vertex. For example, a, b, a is a directed cycle in the digraph in Figure
4.5a.
⚫ ab, bc, de are tree edges, ba is back edge, ac is forward edge and dc is cross edge
⚫ If a DFS forest of a digraph has no back edges, the digraph is a dag, an acronym for
directed acyclic graph
Topological sorting
⚫ Topological sorting is list of vertices in a particular order in a directed acyclic graph, such that
for every edge in the graph, the vertex where the edge starts is listed before the vertex where
the edge ends.
⚫ Generally used to specify the order in which a particular task is performed to achieve a
bigger task.
⚫ Consider a set of five required courses {C1, C2, C3, C4, C5} a part-time student has to take
in some degree program.
⚫ The courses can be taken in any order as long as the following
course prerequisites are met:
– C1 and C2 have no prerequisites,
– C3 requires C1 and C2,
– C4 requires C3, and
– C5 requires C3 and C4.
⚫ The student can take only one course per term. In which order should the student take the
courses?
Topological sorting Contd.
⚫ The situation can be modeled by a digraph in which vertices represent courses and directed
edges indicate prerequisite requirements

⚫ In terms of this digraph, we can list its vertices in such an order that for every edge in the graph, the vertex where the edge starts is listed before the vertex where the edge ends. This vertex order is called a topological order, and the problem is known as the topological sorting problem.
Topological sorting of a Dag
⚫ Moreover, there are two efficient algorithms that both verify whether a digraph is a dag and, if it
is, produce an ordering of vertices that solves the topological sorting problem.
⚫ The first algorithm is a simple application of Depth-first Search: perform a DFS traversal and
note the order in which vertices become dead-ends (i.e., popped off the traversal stack).
⚫ Reversing this order yields a solution to the topological sorting problem, provided, of course, no
back edge has been encountered during the traversal
⚫ If a back edge has been encountered, the digraph is not a dag, and topological sorting of
its vertices is impossible
The topological order using the DFS method can be obtained as shown below:
Step 1: Select any arbitrary vertex.
Step 2: When a vertex is visited for the first time, it is pushed onto the stack.
Step 3: When a vertex becomes a dead end, it is popped off the stack.
Step 4: Repeat steps 2 and 3 for all the vertices in the graph.
Step 5: Reverse the order of the popped vertices to get the topological sequence.
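The steps above can be sketched in Python (a hypothetical illustration; the adjacency-dict representation and function name are our own, and every vertex is assumed to appear as a key):

```python
def topological_sort_dfs(graph):
    """DFS-based topological sort of a dict mapping vertex -> list of successors.

    A vertex is appended to `popped` when it becomes a dead end; reversing
    that order gives the topological order. A back edge (an edge into a
    vertex still on the recursion stack) means the digraph is not a dag.
    """
    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / on stack / finished
    color = {v: WHITE for v in graph}
    popped = []

    def dfs(v):
        color[v] = GRAY
        for w in graph[v]:
            if color[w] == GRAY:        # back edge: cycle found
                raise ValueError("digraph has a cycle; topological sorting is impossible")
            if color[w] == WHITE:
                dfs(w)
        color[v] = BLACK
        popped.append(v)                # v became a dead end

    for v in graph:
        if color[v] == WHITE:
            dfs(v)
    return popped[::-1]
```

For the course example, `{"C1": ["C3"], "C2": ["C3"], "C3": ["C4", "C5"], "C4": ["C5"], "C5": []}` yields an order in which every prerequisite precedes its course.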
Fig: (a) Digraph for which the topological sorting problem needs to be solved.
(b) DFS traversal stack with the subscript numbers indicating the popping off
order.
(c) Solution to the problem.
⚫ The second algorithm is based on a direct implementation of the decrease-(by one)-and-conquer technique, the Source Removal Algorithm: repeatedly identify in a remaining digraph a source, which is a vertex with no incoming edges, and delete it along with all the edges outgoing from it.
⚫ If there are several sources, break the tie arbitrarily.
⚫ If there are none, stop because the problem cannot be solved.
⚫ The order in which the vertices are deleted yields a solution to the topological sorting problem.
Note that the solution obtained by the source-removal algorithm may differ from the one obtained by the DFS-based algorithm. Both of them are correct, of course; the topological sorting problem may have several alternative solutions.
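The source-removal algorithm can be sketched in Python (a hypothetical illustration with the same adjacency-dict assumption as before):

```python
def topological_sort_source_removal(graph):
    """Decrease-by-one: repeatedly delete a source (in-degree 0) vertex."""
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] += 1
    order = []
    sources = [v for v in graph if indegree[v] == 0]
    while sources:
        v = sources.pop()               # several sources: tie broken arbitrarily
        order.append(v)
        for w in graph[v]:              # delete v's outgoing edges
            indegree[w] -= 1
            if indegree[w] == 0:
                sources.append(w)
    if len(order) != len(graph):        # vertices remain but none is a source
        raise ValueError("digraph has a cycle; topological sorting is impossible")
    return order
```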
Define topological sorting. Illustrate the topological sorting using DFS method for the
following graph.
Divide and Conquer Algorithms
⚫ Basic Idea

⚫ Divide and Conquer Recurrence

⚫ Merge Sort

⚫ Quick Sort

⚫ Binary Tree Traversal

⚫ Strassen’s Matrix Multiplication

Basic Idea
⚫ Divide-and-conquer algorithms work according to the following general
plan:
1. A problem is divided into several subproblems of the same type, ideally of about equal size.
2. The subproblems are solved, typically recursively; sometimes a different algorithm is employed.
3. If necessary, the solutions to the subproblems are combined to get a solution to the original problem.
Basic Idea
a problem of size n

subproblem 1 of size n/2        subproblem 2 of size n/2

a solution to subproblem 1      a solution to subproblem 2

a solution to the original problem
General Divide and Conquer Recurrence
Solving the general divide-and-conquer recurrence T(n) = aT(n/b) + f(n)
By the Master Theorem:
If T(n) = aT(n/b) + f(n) where f(n) ∈ Θ(n^d), d ≥ 0, then
  If a < b^d, T(n) ∈ Θ(n^d)
  If a = b^d, T(n) ∈ Θ(n^d log n)
  If a > b^d, T(n) ∈ Θ(n^(log_b a))
(Analogous results hold for the O and Ω notations.)
Examples: Solve the Recurrence using
Master theorem T(n) = aT(n/b) + f (n)
1) T(n) = T(n/2) + n
Here a = 1, b = 2, d = 1.
Since a < b^d, by the Master Theorem T(n) ∈ Θ(n^d)
∴ T(n) ∈ Θ(n)

2) T(n) = 2T(n/2) + 1
Here a = 2, b = 2, d = 0.
Since a > b^d, by the Master Theorem T(n) ∈ Θ(n^(log_b a)) = Θ(n^(log_2 2)) = Θ(n)
Examples T(n) = aT(n/b) + f (n)
3) T(n) = T(n/2) + 1
Here a = 1, b = 2, d = 0.
Since a = b^d, by the Master Theorem T(n) ∈ Θ(n^d log n)
∴ T(n) ∈ Θ(n^0 log n), i.e., T(n) ∈ Θ(log n)

4) T(n) = 4T(n/2) + n
Here a = 4, b = 2, d = 1.
Since a > b^d, by the Master Theorem T(n) ∈ Θ(n^(log_b a))
∴ T(n) ∈ Θ(n^(log_2 4)) = Θ(n^2)
Examples
5) T(n) = 4T(n/2) + n^2
Here a = 4, b = 2, d = 2.
Since a = b^d, by the Master Theorem T(n) ∈ Θ(n^d log n)
∴ T(n) ∈ Θ(n^2 log n)

6) T(n) = 4T(n/2) + n^3
Here a = 4, b = 2, d = 3.
Since a < b^d, by the Master Theorem T(n) ∈ Θ(n^d)
∴ T(n) ∈ Θ(n^3)
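The three-way case analysis used in these examples can be captured in a small helper (a hypothetical sketch; the function name and output strings are our own):

```python
import math

def master_theorem(a, b, d):
    """Classify T(n) = a*T(n/b) + Theta(n^d) by the Master Theorem cases."""
    if a < b ** d:
        return f"Theta(n^{d})"
    if a == b ** d:
        return f"Theta(n^{d} log n)"
    return f"Theta(n^{math.log(a, b):g})"   # a > b^d case
```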
Mergesort
⚫ Merge sort is a perfect example of a successful application of the divide-and-conquer technique.
⚫ It sorts a given array A[0..n − 1] by dividing it into two halves A[0..⌊n/2⌋ − 1] and A[⌊n/2⌋..n − 1], sorting each of them recursively, and then merging the two smaller sorted arrays into a single sorted one.
⚫ Here small(P) is an instance with only one element, i.e., n = 1.
⚫ If n > 1, split array A[0..n − 1] into two about-equal halves and make copies of each half in arrays B and C.
⚫ Sort arrays B and C recursively.
⚫ Merge sorted arrays B and C into array A.
Mergesort - Procedure
8 3 2 9 7 1 5 4

8 3 2 9           7 1 5 4

8 3     2 9       7 1     5 4

8   3   2   9   7   1   5   4

3 8     2 9       1 7     4 5

2 3 8 9           1 4 5 7

1 2 3 4 5 7 8 9
Mergesort- Algorithm

Merging of two sorted arrays
⚫ Two pointers (array indices) are initialized to point to the first
elements of the arrays being merged.
⚫ The elements pointed to are compared, and the smaller of
them is moved to a new sorted array being constructed;
⚫ After that, the index of the smaller element is incremented to
point to its immediate successor in the array it was copied
from.
⚫ This operation is repeated until one of the two given arrays
is exhausted, and then the remaining elements of the other
array are copied to the end of the new sorted array.
B[ ]= 6 ,14,26,56,74 C[ ]= 16 ,34,46,76,84 A[ ]= 6,14,16,26,34,46,56,74,76,84
Merge- Algorithm

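Since the algorithm figures are not reproduced in this text version, here is a Python sketch of mergesort and the merge procedure described above (a hypothetical illustration; function names are our own):

```python
def merge(b, c, a):
    """Merge sorted lists b and c into a, using two index pointers."""
    i = j = k = 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            a[k] = b[i]; i += 1
        else:
            a[k] = c[j]; j += 1
        k += 1
    # One list is exhausted; copy the remainder of the other to the end.
    a[k:] = b[i:] if i < len(b) else c[j:]

def mergesort(a):
    """Sort list a by splitting it into copies b and c, sorting each, and merging."""
    if len(a) > 1:
        mid = len(a) // 2
        b, c = a[:mid], a[mid:]
        mergesort(b)
        mergesort(c)
        merge(b, c, a)
```

Note that b and c are copies of the two halves, which is exactly why mergesort is not in place (see the drawback discussed below the complexity analysis).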
Time Complexity of Mergesort
⚫ for the worst case, Cmerge(n) = n − 1, and we have the recurrence
Cworst(n) = 2Cworst(n/ 2) + n − 1 for n > 1, Cworst(1) = 0.
⚫ Also Cbest(n) = 2Cbest(n/2) + (n/2) for n > 1,
⚫ Hence, according to the Master Theorem, Cworst(n) ∈ Ө(n log n) .
And Cbest(n) ∈ Ω(n log n)
⚫ In total, complexity of merge sort:
Cworst(n) = O(n log n)
Caverage(n) = Ө(n log n)
Cbest(n) = Ω(n log n)
Example: Sort the word
Major drawback of Merge sort
⚫ Merge sort algorithm is not in place.
It uses 2n locations wherein additional n locations are needed to place
the result of the merge of 2 Arrays.
⚫ That is given list of n items is placed in array A which is to be sorted.
Additional B array to hold n items is used in merge procedure to place
result of merge.
⚫ Hence 2n locations are needed to sort, in the merge sort procedure. Hence
Merge sort algorithm is not in place.
Quick Sort
⚫ Quick sort is the other sorting algorithms that is based on the divide and conquer
approach.
⚫ Merge sort divides its input elements according to their position in the array where as
Quick sort divides them according to their value. For division it uses partition
algorithms.
⚫ In partition procedure, it selects one of the element A[s] from the input list A[0…n-1]
and rearranges the array’s elements so that all the elements to the left of element
A[s] are less than or equal to A[s], and all the elements to the right of A[s] are greater
than or equal to it so that A[s] achieves it’s final position.
⚫ A[0] . . . A[s − 1] | A[s] | A[s + 1] . . . A[n − 1]
⚫   all are ≤ A[s]              all are ≥ A[s]
⚫ Obviously, after a partition is achieved, A[s] will be in its final position in the sorted array, and we can continue sorting the two sub arrays to the left and to the right of A[s] independently.
Quicksort - Algorithm
ALGORITHM Quicksort(A[l..r])
//Sorts a subarray by quicksort
//Input: Subarray of array A[0..n − 1], defined by its left and right indices l and r
//Output: Subarray A[l..r] sorted in nondecreasing order
if l < r
    s ← Partition(A[l..r]) //s is a split position
    Quicksort(A[l..s − 1])
    Quicksort(A[s + 1..r])
Partition Procedure
⚫ We start by selecting a pivot—an element with respect to whose value
we are going to divide the subarray. There are several different
strategies for selecting a pivot
⚫ Let us consider the subarray’s first element as the pivot, i.e., p = A[l].
⚫ we will now scan the subarray, comparing the subarray’s elements with
the pivot.
⚫ The left-to-right scan starts with the second element of the sub array
denoted below by index pointer i . Since we want elements smaller than
the pivot to be in the left part of the subarray, this scan skips over
elements that are smaller than the pivot and stops upon encountering
the element greater than or equal to the pivot.
⚫ The right-to-left scan, starts with the last element of the subarray denoted
below by index pointer j. Since we want elements larger than the pivot
to be in the right part of the subarray, this scan skips over elements that
are larger than the pivot and stops on encountering the element smaller
than or equal to the pivot.
Partition Procedure- contd.
⚫ After both scans stop, three situations may arise
1. If scanning indices i and j have not crossed, i.e., i < j, we simply
exchange A[i] and A[j] and resume left-to-right scan by
incrementing i and right-to-left scan decrementing j
respectively:
2. If the scanning indices have crossed over, i.e., i > j, we partition the
subarray by exchanging the pivot with A[j ] and split position s=j is
returned.
3. Finally, if the scanning indices stop while pointing to the same
element, i.e., i = j, we partition the subarray , by exchanging the
pivot with A[j ] and split position s = i = j is returned.
Partitioning Algorithm
⚫ ALGORITHM Partition(A[l..r])
//Partitions a subarray by using the first element as a pivot
//Input: Subarray of array A[0..n − 1], defined by its left and right indices l and r (l < r)
//Output: Partition of A[l..r], with the split position returned as this function’s value
p ← A[l]
i ← l; j ← r + 1
repeat
    repeat i ← i + 1 until A[i] ≥ p or i > r
    repeat j ← j − 1 until A[j] ≤ p
    swap(A[i], A[j])
until i ≥ j
swap(A[i], A[j]) //undo last swap when i ≥ j
swap(A[l], A[j])
return j
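A Python sketch of quicksort with this first-element-pivot partition (a hypothetical illustration; it checks i ≥ j before swapping, so the pseudocode's "undo last swap" step is not needed, and the final result is the same):

```python
def partition(a, l, r):
    """Partition a[l..r] around the pivot a[l]; return the split position."""
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i <= r and a[i] < p:      # left-to-right scan: stop at a[i] >= p
            i += 1
        j -= 1
        while a[j] > p:                 # right-to-left scan: stop at a[j] <= p
            j -= 1
        if i >= j:                      # scans crossed (or met): partition done
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]             # put the pivot into its final position
    return j

def quicksort(a, l=0, r=None):
    """Sort list a in place by recursively partitioning around split positions."""
    if r is None:
        r = len(a) - 1
    if l < r:
        s = partition(a, l, r)
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)
```

On the input 5, 3, 1, 9, 8, 2, 4, 7 the first call to partition returns s = 4 with the pivot 5 in its final position, matching the trace below.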
Tracing for Example: Input 5, 3, 1, 9, 8, 2, 4, 7
Pivot=5                                         Function calls
5 3 1 9 8 2 4 7    i=3, j=6  swap(a[i],a[j])    Quicksort(0,7)
5 3 1 4 8 2 9 7    i=4, j=5  swap(a[i],a[j])
5 3 1 4 2 8 9 7    i=5, j=4  swap(pivot,a[j])
2 3 1 4 5 8 9 7    s=4                          Quicksort(0,3)
Pivot=2
2 3 1 4            i=1, j=2  swap(a[i],a[j])
2 1 3 4            i=2, j=1  swap(pivot,a[j])
1 2 3 4            s=1                          Quicksort(0,0)
Pivot=3                                         Quicksort(2,3)
3 4                i=3, j=2  swap(pivot,a[j])
                   s=2                          Quicksort(2,1), Quicksort(3,3)
Pivot=8
8 9 7              i=6, j=7  swap(a[i],a[j])    Quicksort(5,7)
8 7 9              i=7, j=6  swap(pivot,a[j])
7 8 9              s=6                          Quicksort(5,5), Quicksort(7,7)
Output: 1 2 3 4 5 7 8 9
Quicksort- Tree of Recursive Calls
Example: Input 5, 3, 1, 9, 8, 2, 4, 7
Complexity Analysis of Quick sort
⚫ The number of key comparisons made before a partition is achieved
is n + 1 if the scanning indices cross over and n if they coincide. If all
the splits happen in the middle of corresponding subarrays, we will
have the best case.
⚫ The number of key comparisons in the best case satisfies the
recurrence
Cbest(n) = 2Cbest(n/2) + n for n > 1, Cbest(1) = 0.
⚫ According to the Master Theorem, Cbest(n) ∈ Θ(n log2 n); solving it exactly for n = 2^k yields Cbest(n) = n log2 n.
⚫ In the worst case, all the splits will be skewed to the extreme: one of the two
subarrays will be empty, and the size of the other will be just 1 less than the
size of the subarray being partitioned. This situation will happen, in
particular, for increasing arrays, i.e., for inputs for which the problem is
already solved!
⚫ if A[0..n − 1] is a strictly increasing array and we use A[0] as the pivot, the
left-to-right scan will stop on A[1] while the right-to-left scan will go all the
way to reach A[0], indicating the split at position 0: So, after making n + 1
comparisons to get to this partition and exchanging the pivot A[0] with itself,
the algorithm will be left with the strictly increasing array A[1..n − 1] to
sort. This sorting of strictly increasing arrays of diminishing sizes will
continue until the last one A[n − 2,n − 1] has been processed.
⚫ The total number of key comparisons made will be equal to
Cworst(n) = (n + 1) + n + . . . + 3 = (n + 1)(n + 2)/2 − 3 ∈ O(n^2).
Analysis of Quicksort

⚫ Best case: split in the middle — Ω(n log n)
⚫ Worst case: sorted array — O(n^2)
⚫ Average case: random arrays — Θ(n log n)
⚫ Improvements:
⚫ better pivot selection: median-of-three partitioning
⚫ switch to insertion sort on small subfiles
⚫ elimination of recursion
⚫ Quicksort is considered the method of choice for internal sorting of large files (n ≥ 10000)
Comparison of Merge sort with Quick sort
⚫ Both mergesort and Quick sort are based on the divide and conquer
approach.
⚫ Merge sort divides its input elements according to their position in the array
where as Quick sort divides them according to their value. For division it
uses partition algorithms.
⚫ In mergesort, the division of the problem into two sub problems is
immediate and the entire work happens in combining their solutions
In Quick sort, the entire work happens in the division stage, with no work
required to combine the solutions to the sub problems.
⚫ Complexity of merge sort: Cworst(n) = O(n log n), Cavg(n) = Ө(n log n), Cbest(n) = Ω(n log n)
⚫ Complexity of Quick sort: Cworst(n) = O(n^2), Cavg(n) = Ө(n log n), Cbest(n) = Ω(n log n)
⚫ Quick sort algorithm possesses the in-place property whereas Merge sort algorithm is not in place
Binary Tree Traversals and Related Properties
• divide-and-conquer technique can be applied to binary trees.
• A binary tree T is defined as a finite set of nodes that is either empty
or consists of a root and two disjoint binary trees
called, respectively the left and right subtree of the root.
We measure the problem’s instance size by the number of nodes n(T ) in a given binary
tree T .
We have the following recurrence relation for A(n(T)):
A(n(T)) = A(n(Tleft)) + A(n(Tright)) + 1 for n(T) > 0, A(0) = 0.
• The extra nodes (shown by little squares in Figure 5.5) are called
external; the original nodes (shown by little circles) are called
internal.
• the number of external nodes x is always 1 more than the number of
internal nodes n: x = n + 1.
• consider the total number of nodes, both internal and external. Since
every node, except the root, is one of the two children of an internal
node, we have the equation: 2n + 1= x + n,
total number of nodes= internal+external=n+x
=n+n+1
=2n+1
• The above equality also applies to any nonempty full binary tree, in
which, by definition, every node has either zero or two children: for a
full binary tree, n and x denote the numbers of parental nodes and
leaves respectively.
• The most important divide-and-conquer algorithms for binary trees are the
three classic traversals: preorder, inorder, and postorder.
• All three traversals visit nodes of a binary tree recursively, i.e., by visiting
the tree’s root and its left and right subtrees. They differ only by the timing
of the root’s visit:
• In the preorder traversal, the root is visited before the left and right
subtrees are visited.
• In the inorder traversal, the root is visited after visiting its left subtree but
before visiting the right subtree.
• In the postorder traversal, the root is visited after visiting the left and right
subtrees.
Preorder: Root Left Right
Inorder: Left Root Right
Postorder: Left Right Root
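The three classic traversals above can be sketched as recursive divide-and-conquer routines in Python (a hypothetical illustration; the Node class and function names are our own):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(t):
    """Root, Left, Right."""
    return [] if t is None else [t.value] + preorder(t.left) + preorder(t.right)

def inorder(t):
    """Left, Root, Right."""
    return [] if t is None else inorder(t.left) + [t.value] + inorder(t.right)

def postorder(t):
    """Left, Right, Root."""
    return [] if t is None else postorder(t.left) + postorder(t.right) + [t.value]
```

For a root a with children b and c, where b has children d and e, the traversals give a b d e c, d b e a c, and d e b c a respectively.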
