LCS Time Complexity Analysis

The document discusses the Longest Common Subsequence (LCS) problem, which involves finding the length of the longest subsequence present in two sequences. It outlines various methods to solve the problem, including brute force, recursion, memorization, and dynamic programming, highlighting their time complexities. The document emphasizes that dynamic programming offers a more efficient solution with a time complexity of O(mn).


Topics covered

  • LCS Length Calculation
  • Computational Complexity
  • Character Comparison
  • Algorithm Steps
  • Auxiliary Space
  • Algorithmic Challenges
  • Complexity Improvement
  • Dynamic Programming Matrix
  • LCS Problem Statement
  • Dynamic Programming Techniques

Longest Common Subsequence

Tuesday, January 3, 2023 6:49 PM

• LCS Definition / Problem Statement: Given two sequences, find the length of the longest
subsequence present in both of them. A subsequence is a sequence that appears in the same
relative order but is not necessarily contiguous.
• For example, "abc", "abg", "bdf", "aeg", "acefg", etc. are subsequences of "abcdefg".
• Methods:
1. Brute Force Approach
2. Using Recursion
3. Using Memoization
4. Using Dynamic Programming
Brute Force Approach:
• To find the complexity of the brute force approach, we first need to know the number of
possible distinct subsequences of a string of length n, i.e., the number of subsequences
with lengths ranging over 1, 2, ..., n.
• Recall from the theory of permutations and combinations that the number of combinations
with 1 element is C(n,1).
• The number of combinations with 2 elements is C(n,2), and so on.
• We know that C(n,0) + C(n,1) + C(n,2) + ... + C(n,n) = 2^n. So a string of length n has
2^n - 1 different possible subsequences, since we do not consider the subsequence of length 0.
• This implies that the time complexity of the brute force approach is O(n * 2^n).
Note that it takes O(n) time to check whether a subsequence is common to both strings.
This time complexity can be improved using dynamic programming.
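The counting argument above can be turned into a direct (and deliberately slow) implementation. The following is an illustrative Python sketch, not from the notes; the function and helper names are my own. It enumerates every non-empty subsequence of the first string and checks each against the second in O(n) time:

```python
from itertools import combinations

def lcs_brute_force(a, b):
    """Enumerate the 2^n - 1 non-empty subsequences of `a` and keep the
    length of the longest one that is also a subsequence of `b`."""
    def is_subsequence(s, t):
        # O(n) scan: walk through t, consuming the characters of s in order.
        it = iter(t)
        return all(ch in it for ch in s)

    best = 0
    n = len(a)
    for k in range(1, n + 1):            # subsequence lengths 1..n
        for idx in combinations(range(n), k):   # C(n, k) index choices
            candidate = "".join(a[i] for i in idx)
            if is_subsequence(candidate, b):
                best = max(best, k)
    return best
```

Because the double loop visits all 2^n - 1 index subsets and each check costs O(n), the overall cost matches the O(n * 2^n) bound stated above.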
Recursion:
LCS(i, j)
  IF (A[i] == '\0' OR B[j] == '\0')
    RETURN 0;
  ELSE IF (A[i] == B[j])
    RETURN 1 + LCS(i+1, j+1);
  ELSE
    RETURN MAX{ LCS(i+1, j), LCS(i, j+1) };
Endfunction
• The time complexity of the above naive recursive approach is O(2^n) in the worst case; the
worst case occurs when all characters of X and Y mismatch, i.e., the length of the LCS is 0.
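The pseudocode above translates directly into Python. This is an illustrative sketch (using indices into the strings in place of the '\0' terminator tests); the function name is my own:

```python
def lcs_recursive(a, b, i=0, j=0):
    """Naive recursion on the suffixes a[i:] and b[j:] - O(2^n) worst case,
    because the same (i, j) subproblems are recomputed many times."""
    if i == len(a) or j == len(b):     # one string exhausted: base case 0
        return 0
    if a[i] == b[j]:                   # match: it extends the LCS by one
        return 1 + lcs_recursive(a, b, i + 1, j + 1)
    # mismatch: skip one character from either string, keep the better result
    return max(lcs_recursive(a, b, i + 1, j),
               lcs_recursive(a, b, i, j + 1))
```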
Memoization:
• Time Complexity: O(m*n), ignoring recursion stack space; Auxiliary Space: O(m*n).
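Memoization keeps the recursion above but caches each (i, j) result so it is computed only once, which is where the O(m*n) bounds come from. A minimal Python sketch (names are my own), using the standard-library `lru_cache` as the memo table:

```python
from functools import lru_cache

def lcs_memoized(a, b):
    """Same recursion as the naive version, but each (i, j) pair is solved
    once: O(m*n) time, O(m*n) cache plus the recursion stack."""
    @lru_cache(maxsize=None)
    def solve(i, j):
        if i == len(a) or j == len(b):
            return 0
        if a[i] == b[j]:
            return 1 + solve(i + 1, j + 1)
        return max(solve(i + 1, j), solve(i, j + 1))
    return solve(0, 0)
```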
Dynamic Programming:
• The time complexity of the above implementation is O(mn), which is much better than the
worst-case time complexity of the naive recursive implementation.

Dynamic Programming Page 1


IF (A[i] == B[j])
    LCS(i, j) = 1 + LCS(i+1, j+1);
ELSE
    LCS(i, j) = MAX{ LCS(i+1, j), LCS(i, j+1) };



Initialized DP matrix for A = "ATG" (rows) and B = "BAGT" (columns), with '\0' marking the
end of each string; the border row and column are set to 0 and the remaining cells are
filled using the recurrence:

          B    A    G    T    \0
      0   1    2    3    4    5
  0   0   0    0    0    0    0
A 1   0
T 2   0
G 3   0
\0 4  0



Common questions


Dynamic programming improves solution time for the LCS problem by storing intermediate results in a table, which allows those results to be reused for overlapping subproblems. Unlike the naive recursive approach, which redundantly recalculates the same subproblems, dynamic programming fills a matrix where each cell represents a solved subproblem, either computed directly from the base case or derived from previously filled cell values. This transforms the exponential O(2^n) time complexity of the recursive method into a more manageable polynomial time complexity of O(mn), increasing efficiency significantly.

In dynamic programming for the LCS problem, the recurrence relation LCS(i, j) = MAX{LCS(i-1, j), LCS(i, j-1)} is crucial when the characters at positions i and j do not match. This relation helps determine the optimal solution by considering two possibilities: ignoring the character at position i of the first string while considering all characters of the second, and vice versa. By taking the maximum LCS length of these two options, the method ensures the longest subsequence is identified without including non-matching characters, building the solution step by step by evaluating all potential paths and choosing the optimal one.

The brute force approach to solving the LCS problem has a time complexity of O(n * 2^n) because it involves generating all possible subsequences of both strings and checking each one for commonality. This inefficiency arises from examining every combination of subsequences, which grows exponentially with string length. In contrast, the dynamic programming approach significantly improves efficiency with a time complexity of O(mn), where m and n are the lengths of the two strings. This approach avoids redundant calculations by storing intermediate results and reusing them, reducing the problem's complexity from exponential to polynomial.

Choosing among LCS algorithm strategies significantly impacts computational resources. The brute force approach, with a time complexity of O(n * 2^n), is highly inefficient in both time and space as it attempts every subsequence combination, proving infeasible for large inputs. The naive recursive strategy, though simpler, has a time complexity of O(2^n) with high stack space usage. Memoization optimizes recursion by storing results, balancing O(m*n) time against O(m*n) auxiliary space. Dynamic programming culminates in optimal performance, efficiently utilizing space and time with O(mn) complexity, storing and referencing solutions iteratively and avoiding the unnecessary recalculations inherent in the other methods.

Permutation and combination theory plays a critical role in analyzing the brute force method for the LCS problem by quantifying the number of possible subsequences within a string. According to this theory, each subsequence corresponds to a combination of characters, and a string of length n has 2^n - 1 possible non-empty subsequences. This exponential growth is directly linked to the brute force method's inefficiency, as it generates all these combinations to check for commonality between sequences. Understanding these combinatorial calculations explains the exponential time complexity, O(n * 2^n), associated with the brute force method.

To determine the number of possible subsequences of a string, one can use combination theory. For a string of length n, each subsequence corresponds to a combination of its elements. The number of subsequences includes all combinations from 1 element up to n elements, calculated using C(n,1), C(n,2), ..., C(n,n). The sum of these combinations, expressed as C(n,0) + C(n,1) + ... + C(n,n), equals 2^n, which reflects all possible subsets of the string's characters including the empty set. For non-empty subsequences, this sum reduces to 2^n - 1. The brute force approach generates all of these subsequences, leading to its high time complexity of O(n * 2^n), as the algorithm checks whether each subsequence is common to both strings.
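This counting identity is easy to verify directly for a small n. An illustrative Python check (assuming a string with distinct characters, so every index subset yields a distinct subsequence):

```python
from itertools import combinations
from math import comb

n = 5
s = "abcde"  # any string of length n with distinct characters

# Count the subsequences of each length k = 1..n by enumeration ...
counts = [len(list(combinations(s, k))) for k in range(1, n + 1)]
# ... and confirm they match the binomial coefficients C(n, k).
assert counts == [comb(n, k) for k in range(1, n + 1)]

# The identity C(n,0) + C(n,1) + ... + C(n,n) = 2^n,
# hence 2^n - 1 non-empty subsequences in total.
assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n
assert sum(counts) == 2 ** n - 1
```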

Using memoization instead of dynamic programming for the LCS problem carries both benefits and drawbacks. Memoization retains the recursive structure while storing results of subproblems to avoid re-evaluation, keeping recursion's simplicity while reducing time complexity from O(2^n) to O(m*n). However, this method still incurs considerable recursion stack space, whereas dynamic programming implements an iterative approach, efficiently filling a table without deep recursion and hence saving stack space. Moreover, dynamic programming inherently structures the solution for further customization or adaptation, unlike the memoized approach, which adjusts an existing function's operation.

Dynamic programming is more efficient than recursion for solving the LCS problem because it systematically solves smaller subproblems and combines their solutions to address larger ones, thus avoiding the repeated computation of the same subproblems inherent in the recursive approach. Recursion alone has a time complexity of O(2^n) in the worst case, as it may solve the same subproblem many times. In contrast, dynamic programming stores subproblem solutions in a table, achieving a time complexity of O(mn) by reusing calculated results and eliminating redundant work on overlapping subproblems.

Memoization optimizes the recursive solution to the LCS problem by storing the results of subproblems in a table, preventing the need to recompute results when the same subproblem is encountered again. This conversion from a simple recursive solution to a memoized one reduces the time complexity from exponential to O(m*n), where m and n are the lengths of the two strings. The space complexity is also O(m*n), reflecting the auxiliary storage needed to save the results of all subproblems.

The naive recursive approach to the LCS problem exhibits poor performance because its time complexity is O(2^n) in the worst case. This inefficiency arises from an exhaustive divide-and-conquer strategy that does not store or reuse intermediate results, causing repeated evaluations of the same subproblems and leading to exponential growth in computation time as the sequences increase in length. Unlike dynamic programming and memoization, which systematically handle subproblem overlap by storing results, the naive recursion recalculates solutions along each function call path.
