Understanding Dynamic Programming Concepts

Dynamic Programming (DP) is an algorithmic technique that optimizes complex problems by breaking them into simpler subproblems, utilizing properties like optimal substructure and overlapping subproblems. It employs two main approaches: memoization (top-down) and tabulation (bottom-up) to store results and avoid redundant calculations, significantly improving efficiency. DP is widely applicable across various fields, including operations research and computer science, and is essential for understanding advanced algorithmic strategies.


Dynamic Programming (DP) is a method for solving complex problems by breaking them down into simpler subproblems. It is particularly useful for optimization problems where the solution can be constructed efficiently from solutions to subproblems. The key idea behind dynamic programming is to store the results of subproblems to avoid redundant computations, which can significantly reduce the time complexity of algorithms.

Key Concepts of Dynamic Programming:

Optimal Substructure: A problem exhibits optimal substructure if an optimal solution to the problem can be constructed from optimal solutions to its subproblems. This means that solving the subproblems optimally leads to an optimal solution for the overall problem.

Overlapping Subproblems: This property occurs when a problem can be broken down into subproblems that are reused multiple times. Instead of solving the same subproblem repeatedly, dynamic programming solves each subproblem once and stores its result.

Memoization: A top-down approach where you solve the problem recursively and store the results of subproblems in a table (often a dictionary or array) to avoid redundant calculations.

Tabulation: A bottom-up approach where you solve all the smaller subproblems first and store their results in a table, building up to the solution of the original problem.

Importance of Dynamic Programming:

Efficiency: DP can significantly reduce the time complexity of algorithms from exponential to polynomial time, making it feasible to solve larger instances of problems that would otherwise be intractable.

Versatility: Dynamic programming is applicable to a wide range of problems, including but not limited to:
➢ Fibonacci sequence calculation
➢ Knapsack problem
➢ Shortest path problems (e.g., the Bellman–Ford algorithm)
➢ Longest common subsequence
➢ Matrix chain multiplication

Real-World Applications: Many real-world problems in fields such as operations research, economics, bioinformatics, and computer science can be modeled and solved using dynamic programming techniques.

Foundation for Advanced Algorithms: Understanding dynamic programming is crucial for grasping more advanced algorithmic techniques and concepts, as it lays the groundwork for optimization strategies in various domains.

Dynamic Programming (DP) Introduction

Dynamic Programming is a commonly used algorithmic technique for optimizing recursive solutions in which the same subproblems are solved repeatedly.

• The core idea behind DP is to store solutions to subproblems so that each is solved only once.
• To solve DP problems, we first write a recursive solution in a way that exposes overlapping subproblems in the recursion tree (the recursive function is called with the same parameters multiple times).
• To make sure that a recursive value is computed only once (improving the algorithm's running time), we store the results of the recursive calls.
• There are two ways to store the results: top down (memoization) and bottom up (tabulation).
When to Use Dynamic Programming (DP)?

Dynamic programming is used for solving problems that exhibit the following characteristics:

1. Optimal Substructure:
Optimal substructure means that we use the optimal results of subproblems to achieve the optimal result of the bigger problem.

Example:

Consider the problem of finding the minimum cost path in a weighted graph from
a source node to a destination node. We can break this problem down into
smaller subproblems:
1. Find the minimum cost path from the source node to each intermediate
node.
2. Find the minimum cost path from each intermediate node to the
destination node.
The solution to the larger problem (finding the minimum cost path from the
source node to the destination node) can be constructed from the solutions to
these smaller subproblems.
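As a concrete sketch of this idea, here is a simplified grid variant of the minimum cost path problem (an illustrative example, not part of the original notes): move from the top-left to the bottom-right cell, stepping only right or down, where each cell's optimal cost is built from the optimal costs of the cells that can reach it.

```python
# Minimum cost path on a grid: dp[i][j] = cheapest total cost to reach
# cell (i, j) from (0, 0), moving only right or down.
def min_cost_path(cost):
    rows, cols = len(cost), len(cost[0])
    dp = [[0] * cols for _ in range(rows)]
    dp[0][0] = cost[0][0]
    # The first row and first column each have only one way in.
    for j in range(1, cols):
        dp[0][j] = dp[0][j - 1] + cost[0][j]
    for i in range(1, rows):
        dp[i][0] = dp[i - 1][0] + cost[i][0]
    # Every other cell extends the cheaper of its two incoming subpaths:
    # optimal substructure in action.
    for i in range(1, rows):
        for j in range(1, cols):
            dp[i][j] = cost[i][j] + min(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

print(min_cost_path([[1, 3, 1],
                     [1, 5, 1],
                     [4, 2, 1]]))  # 7 (path 1 -> 3 -> 1 -> 1 -> 1)
```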

2. Overlapping Subproblems:
The same subproblems are solved repeatedly in different parts of the problem; this is known as the overlapping subproblems property of dynamic programming.

Example:

Consider the problem of computing the Fibonacci series. To compute the Fibonacci number at index n, we need to compute the Fibonacci numbers at indices n-1 and n-2. This means that the subproblem of computing the Fibonacci number at index n-2 is used twice (note that the call for n-1 will itself make two calls, one for n-2 and one for n-3) in the solution to the larger problem of computing the Fibonacci number at index n.

You can see the overlapping subproblems by drawing the recursion tree for the Nth Fibonacci number: the same nodes appear in several branches.
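A short Python sketch (an illustration added here, not part of the original notes) that counts how many times each subproblem is recomputed by the naive recursion:

```python
# Count how often each Fibonacci subproblem recurs in the naive recursion.
from collections import Counter

calls = Counter()

def fib(n):
    calls[n] += 1          # record every time this subproblem is solved
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(6)
print(calls[4])  # 2 -- fib(4) is computed twice
print(calls[2])  # 5 -- fib(2) is computed five times
```

The counts grow following the Fibonacci pattern themselves, which is why the naive recursion takes exponential time.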

Approaches of Dynamic Programming (DP)

Dynamic programming can be achieved using two approaches:

1. Top-Down Approach (Memoization):

In the top-down approach, also known as memoization, we keep the solution recursive and add a memoization table to avoid repeated calls for the same subproblems.

Before making any recursive call, we first check whether the memoization table already has a solution for it. After the recursive call is over, we store the solution in the memoization table.
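The two steps above (check the table first, store the result afterwards) can be sketched in Python as follows (an illustrative example using Fibonacci):

```python
# Top-down (memoized) Fibonacci: the recursion stays, but each
# subproblem is solved at most once.
def fib_memo(n, memo=None):
    if memo is None:
        memo = {}
    if n in memo:              # check the memoization table before recursing
        return memo[n]
    if n <= 1:
        result = n
    else:
        result = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    memo[n] = result           # store the solution after the call is over
    return result

print(fib_memo(50))  # 12586269025 -- instant, versus hours for the naive version
```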

2. Bottom-Up Approach (Tabulation):

In the bottom-up approach, also known as tabulation, we start with the smallest subproblems and gradually build up to the final solution. We write an iterative solution (avoiding recursion overhead) and build the solution bottom-up. We use a dp table where we first fill in the solutions for the base cases and then fill in the remaining entries using the recursive formula. We apply the recursive formula only to table entries and never make recursive calls.
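The same Fibonacci example in tabulated form (an illustrative sketch): base cases first, then the recurrence applied iteratively over table entries.

```python
# Bottom-up (tabulated) Fibonacci: no recursive calls at all.
def fib_tab(n):
    if n <= 1:
        return n
    dp = [0] * (n + 1)
    dp[0], dp[1] = 0, 1                  # fill the base cases first
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]    # recurrence applied only to table entries
    return dp[n]

print(fib_tab(50))  # 12586269025, same answer as the memoized version
```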

Steps to solve a Dynamic Programming problem:

1) Identify whether it is a Dynamic Programming problem.
2) Decide a state expression with the fewest parameters.
3) Formulate the state and transition relationship.
4) Apply tabulation or memoization.

Step 1: How to classify a problem as a Dynamic Programming problem?
Typically, problems that require maximizing or minimizing certain quantities, counting problems that ask for the number of arrangements under certain conditions, and certain probability problems can be solved using Dynamic Programming.
All dynamic programming problems satisfy the overlapping subproblems property, and most of the classic dynamic programming problems also satisfy the optimal substructure property. Once we observe these properties in a given problem, we can be sure that it can be solved using Dynamic Programming.
Step 2: Deciding the state
Dynamic Programming problems are all about the state and its transition. This is
the most basic step which must be done very carefully because the state
transition depends on the choice of state definition you make.
State:

A state can be defined as the set of parameters that can uniquely identify a
certain position or standing in the given problem. This set of parameters should
be as small as possible to reduce state space.

Example:
Let’s take the classic Knapsack problem, where we need to maximize profit by
selecting items within a weight limit. Here, we define our state using two
parameters: index and weight (dp[index][weight]). Think of it like this: dp[3][10]
would tell us “what’s the maximum profit we can make by choosing from the first
4 items (index 0 to 3) when our bag can hold 10 units of weight?” These two
parameters (index and weight) work together to uniquely identify each
subproblem we need to solve.

Just like GPS coordinates need both latitude and longitude to pinpoint a
location, our knapsack solution needs both the item range and remaining
capacity to determine the optimal profit at each step.
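The dp[index][weight] state described above can be sketched as a tabulated 0/1 knapsack (an illustrative implementation; the item weights and profits below are made-up sample data):

```python
# 0/1 Knapsack: dp[i][w] = maximum profit using items 0..i with capacity w.
def knapsack(weights, profits, capacity):
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n)]
    # Base row: with only item 0 available, take it whenever it fits.
    for w in range(capacity + 1):
        dp[0][w] = profits[0] if weights[0] <= w else 0
    for i in range(1, n):
        for w in range(capacity + 1):
            skip = dp[i - 1][w]                          # leave item i out
            take = 0
            if weights[i] <= w:                          # take item i if it fits
                take = profits[i] + dp[i - 1][w - weights[i]]
            dp[i][w] = max(skip, take)
    return dp[n - 1][capacity]

print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))  # 9 (items with weights 3 and 4)
```

Each cell dp[i][w] is exactly one "GPS coordinate" pair of item range and remaining capacity.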

So, our next step will be to find a relation between previous states to reach the
current state.

Step 3: Formulating a relation among the states


This part is the hardest part of solving a Dynamic Programming problem and
requires a lot of intuition, observation, and practice.

Example:

Given 3 numbers {1, 3, 5}, the task is to find the total number of ways we can form a number n using the sum of the given three numbers (allowing repetitions and different arrangements).

The total number of ways to form 6 is: 8


1+1+1+1+1+1
1+1+1+3
1+1+3+1
1+3+1+1
3+1+1+1
3+3
1+5
5+1
The steps to solve the given problem will be:

We decide a state for the given problem. We will take a parameter n to decide the state, as it uniquely identifies any subproblem.
The DP state will look like state(n), where state(n) means the total number of arrangements that form n using {1, 3, 5} as elements.
We then derive a transition relation between any two states.
Now, we need to compute state(n).
How to compute the state?
We can only use 1, 3, or 5 to form a given number n. Let us assume that we know the results for n = 1, 2, 3, 4, 5, 6:
state(n = 1), state(n = 2), state(n = 3) ... state(n = 6)
Now, we wish to know the result of state(n = 7). Since we can only add 1, 3, or 5, we can reach a sum total of 7 in the following 3 ways:

1) Adding 1 to all possible combinations of state (n = 6)


E.g.: [(1 + 1 + 1 + 1 + 1 + 1) + 1]
[(1 + 1 + 1 + 3) + 1]
[(1 + 1 + 3 + 1) + 1]
[(1 + 3 + 1 + 1) + 1]
[(3 + 1 + 1 + 1) + 1]
[(3 + 3) + 1]
[(1 + 5) + 1]
[(5 + 1) + 1]

2) Adding 3 to all possible combinations of state (n = 4);


[(1 + 1 + 1 + 1) + 3]
[(1 + 3) + 3]
[(3 + 1) + 3]

3) Adding 5 to all possible combinations of state(n = 2)


[(1 + 1) + 5]

(Note how it suffices to add only on the right side; all the add-from-the-left cases are already covered, either in the same state or in another. For example, [1 + (1 + 1 + 1 + 3)] is not needed in state(n = 6) because it is covered by state(n = 4) as [(1 + 1 + 1 + 1) + 3].)

Now, think carefully and satisfy yourself that the above three cases cover all possible ways to form a sum total of 7.
Therefore, we can say that the result for
state(7) = state(6) + state(4) + state(2)
OR
state(7) = state(7-1) + state(7-3) + state(7-5)
In general,
state(n) = state(n-1) + state(n-3) + state(n-5)
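The general relation translates directly into a tabulated solution (an illustrative sketch; state(0) = 1 is the base case, the empty sum):

```python
# Count ordered ways to form n as a sum of 1, 3 and 5, using
# state(n) = state(n-1) + state(n-3) + state(n-5).
def count_ways(n):
    state = [0] * (n + 1)
    state[0] = 1                       # one way to form 0: the empty sum
    for i in range(1, n + 1):
        for step in (1, 3, 5):
            if i >= step:              # append `step` to every arrangement of i - step
                state[i] += state[i - step]
    return state[n]

print(count_ways(6))  # 8, matching the enumeration above
print(count_ways(7))  # 12 = state(6) + state(4) + state(2) = 8 + 3 + 1
```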

Tabulation vs Memoization

Tabulation and memoization are two techniques used to implement dynamic programming. Both are used when there are overlapping subproblems (the same subproblem is executed multiple times). Below is an overview of the two approaches.

Memoization:
• Top-down approach
• Stores the results of function calls in a table
• Recursive implementation
• Entries are filled when needed

Tabulation:
• Bottom-up approach
• Stores the results of subproblems in a table
• Iterative implementation
• Entries are filled bottom-up, from the smallest size to the final size

Tabulation vs Memoization, point by point:

State: In tabulation, the state transition relation is difficult to think of. In memoization, the state transition relation is easy to think of.

Code: Tabulation code gets complicated when a lot of conditions are required. Memoization code is easy to write by modifying the underlying recursive solution.

Speed: Tabulation is fast, as there is no recursion call overhead. Memoization is slower due to a lot of recursive calls.

Subproblem solving: If all subproblems must be solved at least once, a bottom-up (tabulated) dynamic programming algorithm typically outperforms a top-down memoized algorithm by a constant factor. If some subproblems in the subproblem space need not be solved at all, the memoized solution has the advantage of solving only those subproblems that are definitely required.

Table entries: In the tabulated version, starting from the first entry, all entries are filled one by one. In the memoized version, not all entries of the lookup table are necessarily filled; the table is filled on demand.

Common Algorithms that Use DP:

• Longest Common Subsequence (LCS): Used in day-to-day life to find the difference between two files (the diff utility).
• Edit Distance: Checks how close two strings are. Can be useful in implementing Google's "Did you mean?" type feature.
• Longest Increasing Subsequence: There are plenty of variations of this problem that arise in the real world.
• Bellman–Ford Shortest Path: Finds the shortest path from a given source to all other vertices.
• Floyd–Warshall: Finds the shortest path between every pair of vertices.
• Knapsack Problem: Determines the maximum value of items that can be placed in a knapsack with a given capacity.
• Matrix Chain Multiplication: Optimizes the order of matrix multiplication to minimize the number of operations.
• Fibonacci Sequence: Calculates the nth Fibonacci number.
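As one worked example from the list above, here is a tabulated LCS length computation (an illustrative sketch of the algorithm, using made-up sample strings):

```python
# Longest Common Subsequence length: dp[i][j] = LCS length of a[:i] and b[:j].
def lcs_length(a, b):
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1            # extend a common match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1]) # drop one character
    return dp[m][n]

print(lcs_length("AGGTAB", "GXTXAYB"))  # 4 (the subsequence "GTAB")
```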

Advantages of Dynamic Programming (DP)

Dynamic programming has a wide range of advantages, including:
• Avoids recomputing the same subproblems multiple times, leading to significant time savings.
• Ensures that the optimal solution is found by considering all possible combinations.

Applications of Dynamic Programming (DP):

Dynamic programming has a wide range of applications, including:
• Optimization: Knapsack problem, shortest path problem, maximum subarray problem
• Computer Science: Longest common subsequence, edit distance, string matching
• Operations Research: Inventory management, scheduling, resource allocation

Characteristics of Dynamic Programming Algorithms

• For any problem, if there is a simple recursive solution and the recursion tree contains the same recursive calls multiple times (overlapping subproblems), we use DP.
