
Chapter I - Introduction to Algorithm Design

Rajan Adhikari

School of Mathematical Sciences

November 8, 2025
Outline
The Algorithm: A Formal Definition
Defining the Algorithm: More than Code
The Foundational Properties of Algorithms
A Practical Analogy: Recipes and Heuristics
Illustrative Example: Find the Largest Number
Performance Analysis: The "Cost" of Computation
Why We Analyze Algorithms
Primary Metrics
Best, Worst, and Average-Case Scenarios
Asymptotic Notations: The Language of Algorithmic Growth
The Need for Asymptotic Analysis
The Notations: O, Ω, Θ
Critical Insight: Clarifying Notations vs. Cases
A Guided Tour of Common Complexity Classes
Common Classes
Amortized Analysis: A Guaranteed Average for Sequences
Defining Amortization
In-Depth Example: The Dynamic Array
A Practical Guide to Computing Algorithmic Complexity
Defining the Algorithm: More than Code

Formal Definition
In mathematics and computer science, an algorithm is a formal, effective method expressed as
a finite sequence of well-defined, mathematically rigorous instructions. It is fundamentally a
specification for performing a computation or solving a class of specific problems.
▶ An algorithm is not a piece of code; rather, it is the abstract, logical method that the code
implements.
▶ This definition carries a critical distinction from a heuristic, which is a problem-solving
approach that is not guaranteed to produce a correct or optimal result.
▶ An algorithm, in contrast, must be correct.
Defining the Algorithm: More than Code

Hierarchical Distinction
▶ The Problem: The abstract task to be accomplished (e.g., ”Given a list of numbers, find
the largest one”).
▶ The Algorithm: A specific, formal method for solving the problem (e.g., ”Assume the
first number is the largest, then iterate...”).
▶ The Program: A concrete implementation of an algorithm in a specific programming
language.
The Foundational Properties of Algorithms

Donald Knuth’s Five Key Properties


1. Finiteness: The algorithm must terminate after a finite number of steps. This ensures it
will always produce a result and not run indefinitely.
2. Definiteness: Each step must be precisely and unambiguously defined. The actions must
be clear, with no room for subjective interpretation.
3. Input: An algorithm has zero or more well-defined inputs, which are the data it operates
on.
4. Output: An algorithm has one or more well-defined outputs. These outputs must have a
specified, correct relationship to the inputs.
5. Effectiveness: Every operation must be sufficiently basic (a ”rote step”) that it can be
executed exactly and in a finite amount of time.
A Practical Analogy: Recipes and Heuristics

The Recipe Analogy


A well-written cooking recipe exhibits all five properties:
▶ Input: Defined inputs (e.g., 1 cup flour, 2 eggs).
▶ Definiteness: Precise steps (e.g., ”Mix... until homogeneous”).
▶ Effectiveness: Basic steps (e.g., ”Flip pancake,” ”Pour”).
▶ Finiteness: A finite number of steps (e.g., ”...until there is no batter left”).
▶ Output: A well-defined output (a stack of pancakes).

Algorithm vs. Heuristic


▶ Algorithm: A recipe to ”Bake a German Chocolate Cake.”
▶ Heuristic: ”Add spices to the stew until it tastes good.”
The heuristic lacks definiteness (what is ”good”?) and a guaranteed, repeatable output.
Illustrative Example: Designing an Algorithm

Problem: Find the largest number in a list of numbers.


Algorithm (Pseudo-code)
Function Find-Max(List A)
// Input: List A
// Output: The largest value in List A

If List A is empty, return error.


Let max_value = A[0]

For i = 1 to length(A) - 1:
If A[i] > max_value:
max_value = A[i]

Return max_value

This simple algorithm clearly satisfies all five properties.
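The pseudocode above translates directly into a short, runnable Python sketch (the name find_max and the use of a ValueError for the empty-list case are implementation choices, not part of the slide):

```python
def find_max(a):
    """Return the largest value in a non-empty list."""
    if not a:                      # Input check: an empty list has no maximum
        raise ValueError("empty list")
    max_value = a[0]               # Assume the first element is the largest
    for x in a[1:]:                # Inspect every remaining element once: O(n)
        if x > max_value:
            max_value = x
    return max_value
```

Each step is finite, definite, and effective, and the single output (the maximum) has the specified relationship to the input.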


Illustrative Example: Analysis

▶ The formal properties, particularly Correctness (from the Output property), are the
foundation for performance analysis.
▶ For the Find-Max algorithm to be correct, its output must be the largest value.
▶ The Adversary Argument: If an algorithm claims to find the maximum but does not
inspect every element (in the worst case), an adversary can hide the true maximum in the
one un-inspected location. The algorithm would then fail.

Lower Bound
Therefore, any correct, deterministic algorithm for the Find-Max problem must inspect all n
elements in the worst case.
▶ This establishes a problem complexity or a lower bound for this task.
▶ The problem itself is Ω(n).
▶ Our algorithm inspects every element once, so it runs in O(n) time.
▶ Because its runtime (O(n)) matches the problem’s lower bound (Ω(n)), this algorithm is
asymptotically optimal.
Why We Analyze Algorithms

Goal: Predicting Resource Usage


Performance analysis is the process of predicting the computational resources an algorithm
requires to perform its task.
▶ When multiple algorithms exist to solve the same problem, analysis provides an objective,
quantitative basis for comparing them.
▶ The goal is not to measure the exact execution time in seconds, which is dependent on the
computer, language, and compiler.
▶ Instead, analysis focuses on the algorithm’s intrinsic scalability—how its resource
requirements change as the size of the input grows.
Primary Metrics
Primary Metric 1: Time Complexity
Quantifies the amount of computer time an algorithm takes to run as a function of its input
size, n.
▶ This is not measured in wall-clock time.
▶ It is estimated by counting the number of elementary operations (e.g., assignments,
comparisons) assuming each takes constant time.
▶ The analysis focuses on the rate of growth of this function as n → ∞.

Primary Metric 2: Space Complexity


Quantifies the amount of memory space an algorithm requires, also as a function of its input
size n. A critical distinction is made:
▶ Input Space: Memory required to store the input.
▶ Auxiliary Space: The extra or working memory an algorithm requires during its execution
(e.g., temporary variables, call stack).
In most analysis, ”space complexity” refers to the auxiliary space. This allows for the
meaningful analysis of in-place algorithms.
Best, Worst, and Average-Case Scenarios

An algorithm’s performance can vary for inputs of the same size n.


Three Scenarios
Worst-Case Analysis: The maximum resources the algorithm will use for any input of size n.
This provides a guaranteed upper bound and is the most common analysis.
Best-Case Analysis: The minimum resources the algorithm will use for an input of size n. (e.g.,
finding an item at the beginning of a list).
Average-Case Analysis: The expected resource usage, averaged over all possible inputs of size
n. This is complex and requires assumptions about the probability distribution
of the inputs.
A Critical Point: Defining the Input Size n
The Definition of n is Critical
An incorrect definition of n invalidates the entire analysis.
▶ For sorting an array, n is clearly the number of elements.

Example: Is-Prime(k)
▶ Consider an algorithm Is-Prime(k) that checks if an integer k is prime.
▶ A common mistake is to define the input size as k. An algorithm that performs trial
division from 2 to k would run in O(k) time and be called ”linear.” This is incorrect.
▶ In theoretical computer science, the input size n is the number of bits required to
represent the input.
▶ The number k can be represented in n = log2 k bits.
▶ From this, k = 2^n.
▶ The O(k) algorithm is therefore O(2^n) in terms of its true input size n.
▶ It is an exponential-time algorithm, not a linear-time one. This distinction is the basis for
computational complexity theory (e.g., P vs. NP).
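A sketch of the trial-division algorithm discussed above (the name is_prime_trial is illustrative). The loop bound is the *value* k, so the running time is O(k), which is O(2^n) in the bit length n:

```python
def is_prime_trial(k):
    """Trial division from 2 to k-1.

    O(k) in the value of k, i.e. O(2^n) in the number of bits n of k.
    """
    if k < 2:
        return False
    d = 2
    while d < k:           # up to k - 2 iterations: exponential in n = bits of k
        if k % d == 0:
            return False   # found a nontrivial divisor
        d += 1
    return True
```

Doubling the bit length of k squares the number of loop iterations, which is why "linear in k" is misleading.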
The Need for Asymptotic Analysis

▶ A precise count of operations, such as f (n) = 3n2 + 100n + 50, is overly complex and
hardware-dependent.
▶ Asymptotic analysis abstracts this by focusing on the limiting behavior of the function as
n → ∞.
▶ We ”drop constant coefficients and less significant terms”.
▶ In f (n) = 3n2 + 100n + 50, the n2 term is the dominant term.
▶ As n becomes large, the 100n and 50 terms become insignificant.
▶ The constant 3 is dropped as it’s a hardware-dependent speed-up.
▶ This leaves us with the algorithm’s intrinsic rate of growth, n2 .
Big O Notation (O): Asymptotic Upper Bound

Formal Definition
A function f (n) is in the set O(g (n)) if there exist positive constants c and n0 such that:

0 ≤ f (n) ≤ cg (n) for all n ≥ n0

Interpretation
f (n) grows no faster than g (n), to within a constant factor. It’s an upper bound.
Big O Notation (O): Asymptotic Upper Bound (contd.)

Worked Example
▶ Let our algorithm’s exact runtime be f (n) = 3n2 + 10n + 4.
▶ We want to test if it’s O(n2 ), so g (n) = n2 .
▶ Claim: 3n2 + 10n + 4 is O(n2 ).
▶ Proof: We must find constants c, n0 that satisfy the definition.
▶ We need 3n2 + 10n + 4 ≤ c · n2 .
▶ Let’s test c = 4. Is 3n2 + 10n + 4 ≤ 4n2 ?
▶ This simplifies to 10n + 4 ≤ n2 . This is true for n = 11 (114 ≤ 121), so it holds for all
n ≥ 11.
▶ Result: We found c = 4 and n0 = 11. Since we found them, the claim is true.
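The constants found in the proof can be sanity-checked numerically; this is a spot check over a finite range, not a substitute for the algebraic argument:

```python
# Check the Big-O witness constants c = 4, n0 = 11 for f(n) = 3n^2 + 10n + 4.
def f(n):
    return 3 * n * n + 10 * n + 4

c, n0 = 4, 11
# The bound f(n) <= c * n^2 holds for every n >= n0 in a large test range...
assert all(f(n) <= c * n * n for n in range(n0, 10_000))
# ...but fails just below n0: f(10) = 404 > 400 = 4 * 10^2.
assert f(10) > c * 10 * 10
```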
Big Omega Notation (Ω): Asymptotic Lower Bound

Formal Definition
A function f (n) is in the set Ω(g (n)) if there exist positive constants c and n0 such that:

0 ≤ cg (n) ≤ f (n) for all n ≥ n0

Interpretation
f (n) grows at least as fast as g (n). It’s a lower bound.
Big Omega Notation (Ω): Asymptotic Lower Bound(contd.)

Worked Example
▶ Let’s use the same function: f (n) = 3n2 + 10n + 4.
▶ We want to test if it’s Ω(n2 ), so g (n) = n2 .
▶ Claim: 3n2 + 10n + 4 is Ω(n2 ).
▶ Proof: We must find constants c, n0 .
▶ We need c · n2 ≤ 3n2 + 10n + 4.
▶ Since 10n + 4 is always positive for n ≥ 1, we know that 3n2 will always be less than
3n2 + 10n + 4.
▶ Let’s just pick c = 3.
▶ The inequality becomes 3n2 ≤ 3n2 + 10n + 4, which simplifies to 0 ≤ 10n + 4.
▶ This is true for all n ≥ 1.
▶ Result: We found c = 3 and n0 = 1. The claim is true.
Big Theta Notation (Θ): Asymptotic Tight Bound

Formal Definition
A function f (n) is in the set Θ(g (n)) if there exist positive constants c1 , c2 , and n0 such that:

0 ≤ c1 g (n) ≤ f (n) ≤ c2 g (n) for all n ≥ n0

Interpretation
f (n) grows at the same rate as g (n). It’s a tight bound. A function is Θ(g (n)) if and only if it
is both O(g (n)) and Ω(g (n)).
Big Theta Notation (Θ): Asymptotic Tight Bound(contd.)

Worked Example
▶ Again, let f (n) = 3n2 + 10n + 4 and g (n) = n2 .
▶ Claim: f (n) is Θ(n2 ).
▶ Proof: We need to ”sandwich” f (n) between two bounds.
▶ We need c1 · n2 ≤ 3n2 + 10n + 4 ≤ c2 · n2 .
▶ From our Ω proof, we know the lower bound works with c1 = 3 (for n ≥ 1).
▶ From our O proof, we know the upper bound works with c2 = 4 (for n ≥ 11).
▶ To use both constants, we must pick the larger n0 . So, let n0 = 11.
▶ Result: For n ≥ 11, we have 3n2 ≤ 3n2 + 10n + 4 ≤ 4n2 .
▶ Since we found c1 = 3, c2 = 4, and n0 = 11, the claim is true. f (n) is ”tightly bound” by
n2 .
Critical Insight: Notations vs. Cases

A Widespread and Significant Misconception


The idea that:
▶ O notation means ”Worst Case”
▶ Ω notation means ”Best Case”
▶ Θ notation means ”Average Case”
This is fundamentally incorrect.

The Correct Relationship


▶ Analysis Cases (Best, Worst, Average) are labels for different runtime functions that
describe behavior on specific inputs.
▶ Asymptotic Notations (O, Ω, Θ) are mathematical tools used to describe the bounds
(upper, lower, or tight) of any function, including the runtime functions for each case.
Example: Insertion Sort

One can, and should, use any notation to describe any case.
Insertion Sort Analysis
▶ Best-Case Runtime: On an already-sorted array, the runtime function Tbest (n) is Θ(n).
▶ It is also (correctly) O(n) and Ω(n).
▶ Worst-Case Runtime: On a reverse-sorted array, the runtime function Tworst (n) is Θ(n2 ).

▶ It is also (correctly) O(n2 ) and Ω(n2 ).


▶ Algorithm Runtime (Overall):
▶ It is always bounded above by its worst case, so the algorithm is O(n2 ).
▶ It is always bounded below by its best case, so the algorithm is Ω(n).
▶ Because the upper bound O(n2 ) and lower bound Ω(n) do not match, the Insertion Sort
algorithm as a whole cannot be described with a single Θ notation.
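A minimal instrumented Insertion Sort (the comparison counter is an illustrative addition, not part of the slide) makes the Θ(n) best case and Θ(n²) worst case visible:

```python
def insertion_sort(a):
    """Sort a in place; return the number of key comparisons performed."""
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1        # one comparison of key against a[j]
            if a[j] > key:
                a[j + 1] = a[j]     # shift the larger element right
                j -= 1
            else:
                break               # sorted prefix: stop immediately
        a[j + 1] = key
    return comparisons

n = 100
best = insertion_sort(list(range(n)))           # already sorted: n - 1 = 99 comparisons
worst = insertion_sort(list(range(n, 0, -1)))   # reverse sorted: n(n-1)/2 = 4950 comparisons
```

The same algorithm, on inputs of the same size, performs 99 comparisons in one case and 4950 in the other, which is exactly the Θ(n) vs. Θ(n²) gap described above.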
Summary of Asymptotic Notations

Table: Summary of Asymptotic Notations

Notation | Name      | Mathematical Condition            | Analogy | Purpose
O(g(n))  | Big O     | 0 ≤ f(n) ≤ c·g(n)                 | ≤       | Asymptotic Upper Bound: "Grows no faster than..."
Ω(g(n))  | Big Omega | 0 ≤ c·g(n) ≤ f(n)                 | ≥       | Asymptotic Lower Bound: "Grows at least as fast as..."
Θ(g(n))  | Big Theta | c1·g(n) ≤ f(n) ≤ c2·g(n)          | =       | Asymptotic Tight Bound: "Grows at the same rate as..."
O(1) (Constant Time)

Description
The algorithm’s runtime is fixed and does not change, regardless of the size of the input n.

Example: Array Index Access


Accessing an element in an array, such as Array[i], is an O(1) operation.
▶ The computer can calculate the exact memory address with a single arithmetic operation:
▶ memory address = base address + (i * element size)
▶ This calculation time is independent of the array’s total length n.
O(log n) (Logarithmic Time)

Description
The runtime grows exceptionally slowly. Each time the input size n doubles, the runtime
increases by only a constant amount. Typical of algorithms that reduce the problem size by a
constant factor (e.g., halving it).

Example: Binary Search


Operates on a sorted array. It compares the target to the middle element and discards half the
array.
▶ Iteration 1: Search space = n
▶ Iteration 2: Search space = n/2
▶ Iteration 3: Search space = n/4
▶ Iteration k: Search space = n/2^k
The algorithm terminates when n/2^k = 1. Solving for k gives k = log2 n. Since each iteration
is O(1), the total time is O(log n).
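The halving argument above corresponds to the classic iterative Binary Search, sketched here in Python:

```python
def binary_search(a, target):
    """Return an index of target in sorted list a, or -1 if absent.

    Each iteration discards half the remaining search space: O(log n).
    """
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1        # discard the left half
        else:
            hi = mid - 1        # discard the right half
    return -1
```

Note the precondition: the input must already be sorted, or the halving step is not justified.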
O(n) (Linear Time)

The runtime grows directly proportional to the input size n. If the input size doubles, the
runtime (approximately) doubles.
Examples
▶ Linear Search: In the worst case, must check all n elements.
▶ Find Sum or Maximum: Must visit every element in the array at least once.
O(n log n) (Log-Linear Time)

This complexity class is highly efficient and common for optimal comparison-based sorting
algorithms. It represents performing O(n) work O(log n) times.
Example 1: Merge Sort
A classic ”Divide and Conquer” algorithm:
▶ Divide: Recursively splits the array in half. This creates a recursion tree of log n levels.
▶ Conquer & Combine: At each of the log n levels, the algorithm must merge all n
elements. This takes linear O(n) time per level.
▶ Total Time: (Cost per level) × (Number of levels) = O(n) × O(log n) = O(n log n).

Example 2: Quicksort (Average Case)


Also O(n log n) for average and best cases. However, its worst-case performance degrades to
O(n2 ).
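A minimal sketch of Merge Sort as described above (this out-of-place version trades memory for clarity; in-place merging is more involved):

```python
def merge_sort(a):
    """Return a sorted copy of a. Recurrence: T(n) = 2T(n/2) + O(n) = O(n log n)."""
    if len(a) <= 1:
        return a                     # base case: already sorted
    mid = len(a) // 2
    left = merge_sort(a[:mid])       # divide: two subproblems of size n/2
    right = merge_sort(a[mid:])
    merged = []                      # combine: linear-time merge of two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])          # append whichever half has leftovers
    merged.extend(right[j:])
    return merged
```

The log n levels of recursion each do O(n) merge work, matching the O(n log n) total derived above.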
O(n^2) (Quadratic Time)

Description
The runtime is proportional to the square of the input size. Common in algorithms that use
two nested loops, each iterating n times.

Example: Bubble Sort, Insertion Sort, Selection Sort


▶ The outer loop runs n times.
▶ The inner loop runs O(n) times for each outer loop pass.
▶ Total time is O(n) × O(n) = O(n2 ).
(Note: Selection Sort is O(n2 ) in all cases, while Bubble/Insertion Sort have an O(n) best
case).
O(2^n) (Exponential Time)

Description
The runtime doubles with each single addition to the input size n. These algorithms are
extremely slow and become computationally infeasible for even moderately large n.

Example: Naive Recursive Fibonacci


The algorithm is defined by the recurrence fib(n) = fib(n − 1) + fib(n − 2).
▶ A single call to fib(n) makes two recursive calls.
▶ This generates a recursion tree that branches twice at each node and has a height of O(n).
▶ The total number of operations is proportional to the number of nodes in this tree, which
is approximately O(2^n).
▶ It is inefficient because it re-computes the same values (e.g., fib(n − 2)) an exponential
number of times.
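The naive recurrence translates directly into code; for contrast, a memoized variant (an aside, not from the slide) shows that caching subproblems collapses the exponential tree to O(n) distinct calls:

```python
from functools import lru_cache

def fib_naive(n):
    """Exponential time: the same subproblems are re-computed ~2^n times."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Memoized: each subproblem is solved once, so O(n) total calls."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

fib_naive(35) already takes seconds, while fib_memo(500) is instantaneous; the difference is entirely the re-computation the slide identifies.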
Hierarchy of Common Complexity Classes

Table: Hierarchy and Characteristics of Common Complexity Classes

Notation   | Common Name | Growth Rate                      | Example for n = 1,000,000        | Canonical Algorithm(s)
O(1)       | Constant    | Independent of n                 | ~1 operation                     | Array Access
O(log n)   | Logarithmic | Increases by 1 when n doubles    | ~20 operations                   | Binary Search
O(n)       | Linear      | Proportional to n                | ~1,000,000 operations            | Linear Search
O(n log n) | Log-Linear  | Slightly worse than linear       | ~20,000,000 operations           | Merge Sort
O(n^2)     | Quadratic   | Proportional to n × n            | ~1,000,000,000,000 operations    | Bubble Sort
O(2^n)     | Exponential | Doubles when n increases by 1    | Computationally infeasible       | Naive Recursive Fibonacci
O(n!)      | Factorial   | Multiplies by n when n increases | Computationally infeasible       | Brute-force Traveling Salesman
Defining Amortization

Amortized analysis is used for algorithms where an occasional operation is very slow, but most
other operations are much faster.
▶ It provides the average performance of each operation in the worst-case over a sequence of
operations.
▶ The high cost of the rare, expensive operation is ”amortized” (or ”spread out”) over the
more frequent, cheap operations.

Amortized vs. Average-Case Analysis


This is a critical distinction!
▶ Average-Case: Relies on probability. Requires assumptions about the input distribution.
An algorithm could still get ”unlucky” and perform poorly.
▶ Amortized Analysis: Is a deterministic method. It provides a guaranteed upper bound. It
makes no assumptions about input probability. It proves that for any possible sequence,
the average cost is low.
Example: The Dynamic Array (Operations)
A dynamic array (std::vector in C++, list in Python) provides an append operation.
”Cheap” Append
If the array has unused capacity (size < capacity), the algorithm adds the new element to the
next available slot.
▶ Actual Cost: O(1)

”Expensive” Append
If the array is full (size == capacity), the algorithm must resize before appending:
1. Allocate a new, larger array (typically 2 × capacity).
2. Copy all n existing elements from the old array to the new one. (Actual Cost: O(n))
3. Add the new element. (Actual Cost: O(1))
The Actual Cost of this single operation is O(n).
▶ A simple worst-case analysis of a single append is O(n).
▶ An amortized analysis proves the cost is O(1).
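A minimal dynamic array with the doubling strategy (the class name and the copies counter are illustrative instrumentation, not a real library API) shows both kinds of append:

```python
class DynamicArray:
    """Append-only dynamic array that doubles its capacity when full."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None]
        self.copies = 0                       # total elements copied across all resizes

    def append(self, x):
        if self.size == self.capacity:        # "expensive" append: resize first
            new_data = [None] * (2 * self.capacity)
            for i in range(self.size):        # copy all n existing elements: O(n)
                new_data[i] = self.data[i]
            self.copies += self.size
            self.data = new_data
            self.capacity *= 2
        self.data[self.size] = x              # "cheap" O(1) insertion
        self.size += 1
```

After 1024 appends from an empty array, the total copy work is 1 + 2 + 4 + ... + 512 = 1023, i.e. O(n) over the whole sequence, matching the aggregate-method proof that follows.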
Example: Dynamic Array (Aggregate Method Proof)

We use the Aggregate Method:


1. Determine the total worst-case cost, T (n), for a sequence of n operations.
2. The amortized cost per operation is T (n)/n.

Proof for n appends (starting from empty array)


Total Cost = (Total cost of all n insertions) + (Total cost of all resize-copies)
▶ Cost of Insertions: Each of the n operations must perform the O(1) insertion.
▶ Total Insertion Cost = n × O(1) = O(n).
▶ Cost of Resizes: Resizes happen at sizes 1, 2, 4, 8, ...
▶ Total Copy Cost = 1 + 2 + 4 + 8 + · · · + n/2.
▶ This is a geometric series that sums to n − 1.
▶ Total Resize Cost = O(n).
▶ Total Cost for n appends = O(n) (insertions) + O(n) (resizes) = O(n).
▶ Amortized Cost per Operation = Total Cost / n = O(n)/n = O(1).
Example: Dynamic Array (Growth Strategy)

The O(1) amortized cost is entirely contingent on the geometric resizing strategy (e.g.,
multiplying by 2).
What if we use an arithmetic strategy?
(e.g., ”when full, add 10 more slots”)
▶ Insertion Cost: Still O(n).
▶ Resize Costs: 10 + 20 + 30 + · · · + O(n).
▶ This is an arithmetic series. An arithmetic series with O(n) terms sums to O(n2 ).
▶ Total Cost: O(n) + O(n2 ) = O(n2 ).
▶ Amortized Cost: O(n2 )/n = O(n).

This demonstrates that the doubling strategy is a deliberate and essential design choice to
ensure the O(1) amortized efficiency.
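The two growth strategies can be compared empirically with a small simulation that counts copied elements (the helper name and growth rules are illustrative):

```python
def total_copies(n, grow):
    """Total elements copied over n appends for a given capacity-growth rule."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size            # resize: copy every current element
            capacity = grow(capacity)
        size += 1
    return copies

n = 10_000
doubling = total_copies(n, lambda c: 2 * c)    # geometric: O(n) total copy work
add_ten = total_copies(n, lambda c: c + 10)    # arithmetic: O(n^2) total copy work
```

For n = 10,000 appends the doubling strategy copies 16,383 elements in total, while the add-ten strategy copies 4,996,000, a gap that widens quadratically as n grows.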
Part 1: Analyzing Iterative Code

Rule of Sums (Sequential Operations)


If Block 1 (O(f (n))) is followed by Block 2 (O(g (n))), the total complexity is
O(f (n)) + O(g (n)).
▶ This sum is simplified to the dominant term.
▶ e.g., O(n) + O(n2 ) = O(n2 ).

Rule of Products (Nested Operations)


If an outer loop runs O(f (n)) times, and its inner body runs O(g (n)) times, the total
complexity is O(f (n)) × O(g (n)).
▶ e.g., Two nested loops (n times each) → O(n) × O(n) = O(n2 ).

Analyzing Conditional Statements (If/Else)


Cost = O(test) + max(O(Block A), O(Block B))
▶ We take the cost of the more expensive (worst-case) branch.
Part 2: Analyzing Recursive Algorithms
The complexity of recursive algorithms is modeled using a Recurrence Relation.
Divide and Conquer Recurrence

T (n) = aT (n/b) + f (n)


Where:
▶ T (n): Time to solve a problem of size n.
▶ a: The number of recursive calls (subproblems) made.
▶ n/b: The size of each subproblem (e.g., n/2 for halving).
▶ f (n): The cost of the non-recursive work (dividing the problem and combining the
solutions).

Example Recurrences
▶ Binary Search: T (n) = 1T (n/2) + O(1)
▶ Merge Sort: T (n) = 2T (n/2) + O(n)
▶ Naive Fibonacci: T (n) = T (n − 1) + T (n − 2) + O(1)
Solving Recurrence Relations

Method 1: The Substitution Method


A formal proof by mathematical induction.
1. Guess the form of the solution (e.g., T (n) = O(n log n)).
2. Verify the guess by substituting it back into the recurrence.

Method 2: The Master Theorem


A powerful ”recipe” or shortcut for recurrences of the form T (n) = aT (n/b) + f (n).
It compares the cost of the work at the root (f (n)) with the cost of the work at the leaves
(nlogb a ).
Case 1: If the leaf cost (nlogb a ) dominates, T (n) = Θ(nlogb a ).
Case 2: If the costs are ”equal”, T (n) = Θ(nlogb a log n).
Case 3: If the root cost (f (n)) dominates, T (n) = Θ(f (n)).
Note: The Master Theorem cannot be used for recurrences like T (n − 1) (e.g., Fibonacci).
Worked Example 1: Iterative Code

Function Complex-Iterative(n)
    sum = 0

    // Block 1: O(n)
    for i = 1 to n:
        sum += A[i]

    // Block 2: O(n^2)
    if (sum > 1000):
        // Block A: O(n^2)
        for i = 1 to n:
            for j = 1 to n:
                print(A[i], A[j])
    else:
        // Block B: O(1)
        print("Sum is small")

Analysis
1. Block 1 (Loop 1): Rule of Products: n × O(1) = O(n)
2. Block 2 (If/Else): Rule for Conditionals: O(test) + max(O(Block A), O(Block B)) = O(1) + max(O(n^2), O(1)) = O(n^2)
3. Total Function Cost: Rule of Sums: T(n) = O(Block 1) + O(Block 2) = O(n) + O(n^2)
4. Final Complexity: Drop the non-dominant term: T(n) = O(n^2)
Worked Example 2: Recursive (Merge Sort)
1. Formulate Recurrence Relation
▶ Splits n-element array into 2 sub-arrays of size n2 .
▶ Makes two recursive calls: 2T (n/2).
▶ Merges the two sorted halves, which takes linear O(n) time.
▶ Recurrence: T (n) = 2T (n/2) + O(n)

2. Solve using the Master Theorem


Match T (n) = aT (n/b) + f (n):
▶ a = 2 (number of subproblems)
▶ b = 2 (size factor, n/2)
▶ f (n) = O(n) (cost of combine step)

3. Compare Costs & Apply Case


▶ Cost of "leaves": n^(log_b a) = n^(log2 2) = n^1 = O(n)
▶ Cost of "root": f(n) = O(n)
▶ The leaf and root costs are equal, so Case 2 of the Master Theorem applies:
T(n) = Θ(n log n)
Worked Example 3: Dependent Nested Loop
Function Dependent-Loop(n)
count = 0
for i = 1 to n: // Loop 1: Runs n times
for j = 1 to i: // Loop 2: Runs ’i’ times
count += 1 // Body: O(1)
Return count
Analysis
The inner loop (line 3) is dependent on the outer loop; it executes i times. The total number
of operations is the sum of the inner loop’s iterations:
▶ When i = 1, inner loop runs 1 time.
▶ When i = 2, inner loop runs 2 times.
▶ ...
▶ When i = n, inner loop runs n times.
Total operations = 1 + 2 + 3 + · · · + n
▶ This is an arithmetic series, which sums to n(n + 1)/2 = O(n^2).
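Running the loop confirms the arithmetic-series count (a quick empirical check of the analysis above):

```python
def dependent_loop(n):
    """Count iterations of the dependent nested loop from the slide."""
    count = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):   # inner loop runs i times
            count += 1
    return count

# The count equals the series 1 + 2 + ... + n = n(n + 1)/2, i.e. O(n^2).
```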
Worked Example 4: Logarithmic Loop

Function Logarithmic-Loop(n)
i = 1
while i < n: // Loop runs k times
i = i * 2 // Body: O(1)
Return i

Analysis
We need to find how many iterations, k, it takes for i to become ≥ n. The value of i follows
the sequence: 1, 2, 4, 8, 16, . . . , 2^k. The loop terminates when 2^k ≥ n.
▶ To solve for k, take the logarithm of both sides:
▶ log2(2^k) ≥ log2(n)
▶ This simplifies to k ≥ log2 n.
Since the loop runs k times, its time complexity is O(log n).
Worked Example 5: Linearithmic Loop (Nested)

Function Linearithmic-Loop(n)
count = 0
for i = 1 to n: // Loop 1: Runs O(n) times
j = 1
while j < n: // Loop 2: Runs O(log n) times
j = j * 2 // Body: O(1)
count += 1
Return count

Analysis
We use the Rule of Products.
▶ The outer loop (line 2) is linear and runs O(n) times.
▶ The inner loop (line 4) is a logarithmic loop, identical to Example 4. It runs O(log n)
times.
▶ The total complexity is the product of the two loops:
▶ O(n) × O(log n) = O(n log n)
Analysis Exercises (1)

Exercise 1: Definitions
In plain English, what is the difference between an algorithm’s runtime being O(n2 ), Ω(n2 ),
and Θ(n2 )?
Solutions:

Solution 1: Definitions
▶ O(n2 ) (Big O): This is an upper bound. It means the runtime grows no faster than
quadratic. It is a guarantee that the algorithm is not worse.
▶ Ω(n2 ) (Big Omega): This is a lower bound. It means the runtime grows at least as fast as
quadratic. It is a guarantee that the algorithm is not better.
▶ Θ(n2 ) (Big Theta): This is a tight bound. It means the runtime grows at the same rate as
quadratic. It is both O(n2 ) and Ω(n2 ).
Analysis Exercises (2)

Exercise 2: Sequential vs. Nested


An algorithm is composed of two main parts. Part A is a single for loop that iterates n times.
Part B is also a single for loop that iterates n times.
▶ What is the total time complexity if the algorithm runs Part A and then runs Part B?
▶ What is the total time complexity if the algorithm runs Part A, and inside that loop, it
runs Part B for every iteration?
Solutions:

Solution 2: Sequential vs. Nested


▶ Sequential: We use the Rule of Sums. The complexity is
O(Part A) + O(Part B) = O(n) + O(n) = O(2n). After dropping the constant, the total
complexity is O(n).
▶ Nested: We use the Rule of Products. The complexity is
O(Part A) × O(Part B) = O(n) × O(n) = O(n2 ).
Analysis Exercises (3)

Exercise 3: Code Analysis 1


What is the Big O time complexity of the following function?

Function Exercise-1(n)
total = 0
for i = 1 to n:
for j = 1 to n:
total += i * j

for k = 1 to n:
total += k

Return total
Solutions:

Solution 3: Code Analysis 1


1. Analyze the first block (lines 2-4): This is a simple nested loop. By the Rule of
Products, this block is O(n) × O(n) = O(n2 ).
2. Analyze the second block (lines 6-7): This is a single sequential loop. Its complexity is
O(n).
3. Combine the blocks: The two blocks run sequentially, so we use the Rule of Sums. The
total complexity is O(n2 ) + O(n).
4. Find the dominant term: n2 grows faster than n.
Final Complexity: O(n2 )
Analysis Exercises (4)

Exercise 4: Code Analysis 2


What is the Big O time complexity of the following function?
Function Exercise-2(n)
total = 0
for i = 1 to n:
j = i
while j > 0:
total += 1
j = j / 2 // integer division
Return total
Solutions (4)

Solution 4: Code Analysis 2


1. Analyze the outer loop (line 2): This loop runs n times. Its complexity is O(n).
2. Analyze the inner loop (lines 4–6): This is a logarithmic loop. The variable j is
repeatedly divided by 2. In the worst-case iteration of the outer loop, i will be equal to n.
Therefore, the inner loop will run a maximum of log2 n times. The complexity of the inner
loop is O(log n).
3. Combine the loops: The loops are nested, so we use the Rule of Products.
Final Complexity: O(outer) × O(inner) = O(n) × O(log n) = O(n log n)
Conclusion

▶ Algorithmic analysis is a formal discipline that moves from a precise definition of an


algorithm (finiteness, definiteness) to a quantitative measure of its performance.
▶ The discipline provides a rigorous framework for abstracting computational cost away from
hardware and implementation specifics, focusing instead on intrinsic scalability.
▶ This framework is built upon Time and Space Complexity, which are defined as functions
of input size n.
▶ The mathematical language of Asymptotic Notations (O, Ω, Θ) provides the tools to
formally describe the bounds of these functions, allowing for a precise classification of
algorithms into distinct performance hierarchies.
Conclusion (contd.)

▶ This analysis requires a nuanced understanding of the difference between mathematical


notations (like O) and analysis scenarios (like worst-case).
▶ Specialized techniques are needed for different structures:
▶ Amortized Analysis offers a deterministic guarantee of average performance over a
sequence of operations (e.g., Dynamic Array).
▶ Rules of Sums and Products allow for the composition of simple iterative code blocks.
▶ Recurrence Relations and the Master Theorem provide the formal machinery to analyze
complex recursive, divide-and-conquer strategies.
▶ Ultimately, algorithmic analysis is not purely academic. It is the fundamental tool that
enables engineers to distinguish, before code is written, between an algorithm that will be
instantaneous and one that will never finish.
Thank You
