🌳 Binary Trees – Interview Notes
(Bullet Format)
🔹 Introduction
● Non-linear data structure → hierarchical (unlike arrays, stacks,
queues, linked lists).
● Binary Tree (BT) → each node has at most 2 children (left & right).
● Real-world analogy → file systems (folders → subfolders → files).
🔹 Basic Terminology
● Node → stores data + left child + right child reference.
● Root → topmost node, entry point to tree.
● Parent / Child → relationship between connected nodes.
● Leaf node → node with no children.
● Internal node → node with ≥1 child.
● Siblings → nodes with same parent.
● Height of node → longest path from node to a leaf.
● Height of tree → height of root node.
● Depth of node → distance from root to the node.
● Ancestors → nodes from a given node up to root.
● Descendants → all nodes under a given node (subtree).
🔹 Types of Binary Trees
✅ Full Binary Tree (Strict BT)
● Every node has 0 or 2 children.
● No node with only one child.
● Properties:
○ If internal nodes = i, total nodes = 2i + 1.
○ Height h → max nodes = 2^(h+1) - 1.
● Useful for theoretical analysis.
✅ Complete Binary Tree
● All levels fully filled except last.
● Last level filled left to right.
● Properties:
○ Height = ⌊log2(n)⌋.
○ Leaves appear at last or second-last level.
● Applications: Heaps (Min/Max) → priority queues.
✅ Perfect Binary Tree
● All internal nodes → 2 children.
● All leaves → same level.
● Properties:
○ Total nodes = 2^(h+1) - 1.
○ Leaf nodes = 2^h.
○ Nodes at level i = 2^i.
● Both full and complete at the same time.
✅ Balanced Binary Tree
● Height difference between left & right subtree ≤ 1.
● Ensures logarithmic height → efficient operations.
● Examples: AVL Tree, Red-Black Tree.
● Time complexity: Search/Insert/Delete = O(log n).
✅ Degenerate Tree (Pathological Tree)
● Every parent has only one child.
● Shape becomes like a linked list.
● Height = n-1 (for n nodes, counting edges).
● Search/Insert/Delete = O(n).
● Occurs if sorted data is inserted into naive BST.
🔹 Traversals
● DFS (Depth First Search):
○ Inorder (L → Root → R) → yields sorted order for BST.
○ Preorder (Root → L → R) → copying trees, expression tree
prefix.
○ Postorder (L → R → Root) → deleting/freeing trees, postfix
evaluation.
● BFS (Level Order):
○ Traverses level by level using Queue.
○ Used for shortest path in unweighted graphs.
👉 All traversals → O(n).
🔹 Key Properties & Formulas
● Max nodes at level i = 2^i.
● Max nodes in a tree of height h = 2^(h+1) - 1.
● Min height of n nodes = ⌊log2(n)⌋.
● Max height of n nodes = n-1 (degenerate).
● In a full binary tree:
○ Leaf nodes = (n+1)/2.
○ Internal nodes = (n-1)/2.
🔹 Applications of Trees
● Heaps → priority queues, scheduling, graph algorithms.
● BST → efficient searching, range queries.
● Huffman Tree → data compression.
● Syntax / Parse Trees → compilers, expression evaluation.
● Trie (Prefix Tree) → dictionary lookups, autocomplete.
● B-Trees / B+ Trees → database indexing.
● Game Trees → AI (minimax in chess, tic-tac-toe).
● File systems → folder hierarchies.
🔹 Common Interview Questions
● Difference between Full, Complete, and Perfect binary trees.
● Why is inorder traversal of BST sorted?
● Reconstruct tree from Preorder + Inorder.
● Find Lowest Common Ancestor (LCA).
● Find height/diameter of tree.
● Check if a tree is balanced.
● Convert BT → Doubly Linked List.
● Serialize & Deserialize a tree.
● Max width of a tree.
● Handle skewed/degenerate cases.
🔹 Advanced Tree Variants
● AVL Tree → self-balancing BST using rotations.
● Red-Black Tree → balanced tree used in map, set in C++.
● Segment Tree → range queries (sum, min, max).
● Fenwick Tree (BIT) → efficient prefix sums.
● Trie → fast string searching, autocomplete.
● B-Tree / B+ Tree → optimized for databases & filesystems.
🔹 High-Impact One-Liners (Interview Impressions)
● "Inorder traversal of BST always produces a sorted sequence."
● "Perfect Binary Tree is both full and complete."
● "Heaps are complete binary trees but not BSTs."
● "Balanced trees guarantee O(log n) time; degenerate trees degrade to
O(n)."
● "Full Binary Tree: node has 0 or 2 children; Complete: filled left to
right; Perfect: completely filled, all leaves same level."
🌳 Binary Trees – Detailed Interview Notes
🔹 Introduction
● A Binary Tree is a non-linear hierarchical data structure where each
node has at most two children: a left child and a right child.
● Difference from linear structures (array, linked list, stack, queue):
○ Linear → data arranged sequentially, traversed one direction at
a time.
○ Tree → data arranged hierarchically, multiple paths for
traversal.
● Real-world analogy → folder structure in an OS (Root = Drive, Children
= Folders, Leaves = Files).
● Importance in interviews → Foundation for BSTs, Heaps, Tries, Segment
Trees, Red-Black Trees, etc.
🔹 Basic Terminology
● Node → a basic unit containing:
○ Data (value),
○ Left pointer,
○ Right pointer.
● Root → topmost node, starting point for traversal/search.
● Parent / Child →
○ A node connected to another node one level below is the parent.
○ The lower node is the child.
● Leaf Node → node with no children.
○ Example: in a BST, smallest/largest element often ends up as a
leaf.
● Internal Node → any node with at least one child.
● Siblings → nodes sharing the same parent.
● Height of a node → longest path from node to a leaf.
● Height of a tree → height of the root node.
● Depth of a node → distance (#edges) from root to the node.
● Ancestors → all nodes on the path from a node up to the root.
● Descendants → all nodes in a node’s subtree.
🔹 Types of Binary Trees
✅ Full Binary Tree
● Each node has either 0 or 2 children, never exactly 1.
● Example: decision trees, Huffman coding trees.
● Properties:
○ If number of internal nodes = i, then total nodes = 2i + 1.
○ If tree has height h, then max nodes = 2^(h+1) - 1.
● Edge Case → A tree with only a root node is still a full binary tree.
✅ Complete Binary Tree
● All levels completely filled except possibly the last.
● Last level → filled from left to right without gaps.
● Example: Heap (priority queues in OS scheduling).
● Properties:
○ Height = ⌊log2(n)⌋.
○ Nodes filled sequentially → efficient array representation.
● Key difference from Full: A Complete BT allows nodes with 1 child, but
arrangement follows strict left-to-right rule.
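The "efficient array representation" above comes down to heap-style index arithmetic. A minimal sketch (the helper names here are just for illustration):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Complete binary tree stored level by level in a 0-indexed array:
// parent of i  -> (i - 1) / 2
// left child   -> 2 * i + 1
// right child  -> 2 * i + 2
int parentIdx(int i) { return (i - 1) / 2; }
int leftIdx(int i)   { return 2 * i + 1; }
int rightIdx(int i)  { return 2 * i + 2; }
```

This only works without gaps because a complete tree fills each level left to right, which is exactly why heaps require completeness.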
✅ Perfect Binary Tree
● All internal nodes → exactly 2 children.
● All leaf nodes → at same depth/level.
● Properties:
○ Total nodes = 2^(h+1) - 1.
○ Leaf nodes = 2^h.
○ Nodes at level i = 2^i.
● Example: Tournament brackets (knockout matches).
● Edge Case → Every Perfect BT is both Full + Complete.
✅ Balanced Binary Tree
● For every node, the height difference between left and right subtrees ≤
1.
● Ensures O(log n) performance for search/insert/delete.
● Examples: AVL Tree, Red-Black Tree, Treap.
● Edge Case → Strict balance not always needed; "almost balanced" still
ensures efficiency.
✅ Degenerate (Pathological) Tree
● Every node has only one child (either left or right).
● Structure is like a linked list.
● Height = n-1 (worst case, for n nodes).
● Search/Insert/Delete = O(n).
● Occurs when inserting sorted data into an unbalanced BST.
🔹 Tree Traversals
DFS (Depth First Search)
● Inorder (Left → Root → Right):
○ For BST → produces sorted order.
○ Used in expression trees (infix evaluation).
● Preorder (Root → Left → Right):
○ Useful for copying/cloning a tree.
○ Used in expression trees (prefix notation).
● Postorder (Left → Right → Root):
○ Useful for deleting/freeing memory of tree.
○ Used in expression trees (postfix evaluation).
BFS (Breadth First Search)
● Level-order traversal using a queue.
● Traverses nodes level by level.
● Used in finding shortest path in unweighted graphs, serialization of
trees.
👉 All traversal algorithms take O(n) time since every node is visited once.
🔹 Key Properties & Formulas
● Maximum nodes at level i = 2^i.
● Maximum nodes in tree of height h = 2^(h+1) - 1.
● Minimum height of tree with n nodes = ⌊log2(n)⌋.
● Maximum height of tree with n nodes = n-1 (degenerate case).
● In a Full Binary Tree:
○ Leaf nodes = (n+1)/2.
○ Internal nodes = (n-1)/2.
🔹 Applications
● Heaps → Complete BT, used in priority queues, scheduling.
● BSTs → Efficient searching, insertion, deletion, range queries.
● Huffman Coding Tree → Optimal prefix codes for compression.
● Syntax Trees / Parse Trees → Compilers for expression evaluation.
● Trie (Prefix Tree) → Fast string searching, autocomplete.
● Segment Trees → Efficient range queries (sum, min, max).
● Fenwick Tree (BIT) → Efficient prefix sums.
● B-Trees/B+ Trees → Database indexing, filesystem management.
● Game Trees → Minimax algorithm in AI (chess, tic-tac-toe).
● Decision Trees → Machine learning models.
🔹 Common Interview Questions
● Distinguish between Full, Complete, and Perfect binary trees.
● Why does inorder traversal of BST give sorted output?
● Construct tree from Preorder + Inorder traversals.
● Find Lowest Common Ancestor (LCA) of two nodes.
● Find height/diameter of a binary tree.
● Check if a tree is height-balanced.
● Serialize and Deserialize a binary tree.
● Convert BT into Doubly Linked List.
● Find maximum width of a binary tree.
● How to handle skewed/degenerate tree cases.
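For the height-balanced check listed above, a common O(n) sketch folds the balance test into the height computation (Node struct as used elsewhere in these notes; -1 is a sentinel meaning "unbalanced"):

```cpp
#include <algorithm>
#include <cstdlib>
using namespace std;

struct Node {
    int data;
    Node* left;
    Node* right;
    Node(int val) : data(val), left(nullptr), right(nullptr) {}
};

// Returns subtree height, or -1 if any node violates the balance rule.
// Checking while computing height keeps the whole test O(n) instead of O(n^2).
int checkHeight(Node* root) {
    if (root == nullptr) return 0;
    int lh = checkHeight(root->left);
    if (lh == -1) return -1;             // left subtree already unbalanced
    int rh = checkHeight(root->right);
    if (rh == -1) return -1;             // right subtree already unbalanced
    if (abs(lh - rh) > 1) return -1;     // this node violates the rule
    return 1 + max(lh, rh);
}

bool isBalanced(Node* root) { return checkHeight(root) != -1; }
```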
🔹 Advanced Variants
● AVL Tree:
○ Self-balancing BST.
○ Height difference (balance factor) always −1, 0, or +1.
● Red-Black Tree:
○ Balanced BST with color properties.
○ Used in STL (map, set in C++).
● Segment Tree:
○ For range queries (sum/min/max).
○ Time complexity: O(log n) for updates and queries.
● Fenwick Tree (BIT):
○ Space-efficient structure for prefix sums.
● Trie:
○ Fast string search, prefix-based queries.
● B-Trees / B+ Trees:
○ Multi-way balanced search trees.
○ Widely used in databases and file systems.
📌 Binary Tree Representation in C++ (Notes)
● Definition & Representation
○ A binary tree is a hierarchical data structure where each node
has at most two children: left and right.
○ In C++, binary trees are represented using pointers that connect
nodes together.
○ Each node stores:
■ Data (value of the node).
■ Pointers to children (left, right).
🔹 Node Structure in C++
● Implemented using a struct or class.
● Contains:
○ Data field → stores value (can be int, string, object, or even
a template type T).
○ Left pointer → points to left child node (or NULL if absent).
○ Right pointer → points to right child node (or NULL if absent).
👉 Extra Insight:
● Using NULL is old style (C-style). In modern C++, prefer nullptr for
type safety.
You can also make the structure generic using templates:
template <typename T>
struct Node {
T data;
Node* left;
Node* right;
Node(T val) : data(val), left(nullptr), right(nullptr) {}
};
🔹 Node Constructor
● Constructor initializes the node with:
○ data = val; → assigns the value.
○ left = right = NULL; (or nullptr) → ensures no children
initially.
● This prevents dangling pointers and keeps the structure safe.
👉 Tricky Interview Tip:
Always initialize pointers to nullptr. If not, they may contain garbage
addresses leading to undefined behavior.
🔹 Node Connection (Building Tree)
● When creating a binary tree:
1. A root node is created with new Node(value).
2. To attach children: assign root->left or root->right to new
nodes.
3. Each new node is allocated in heap memory dynamically (new).
Example:
Node* root = new Node(1);
root->left = new Node(2);
root->right = new Node(3);
root->left->right = new Node(5);
This creates:
    1
   / \
  2   3
   \
    5
👉 Extra Insight:
● This is a manual connection method. In real projects/interviews, you
may be asked to insert nodes dynamically (like in Binary Search Tree
insertion or from array representation).
● Memory management is crucial: dynamically allocated nodes (new) should
be freed (delete) to avoid memory leaks (or use smart pointers like
std::unique_ptr).
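The delete rule above is usually written as a postorder walk, children first, parent last (a minimal sketch using the same Node struct):

```cpp
struct Node {
    int data;
    Node* left;
    Node* right;
    Node(int val) : data(val), left(nullptr), right(nullptr) {}
};

// Postorder delete: both subtrees are freed before the node itself,
// so we never touch memory that has already been released.
void freeTree(Node* root) {
    if (root == nullptr) return;
    freeTree(root->left);
    freeTree(root->right);
    delete root;
}
```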
🔹 Traversals (Interview Must-Know)
Once you have the structure, you should know standard tree traversal methods:
● DFS Traversals
○ Preorder (Root → Left → Right)
○ Inorder (Left → Root → Right)
○ Postorder (Left → Right → Root)
● BFS Traversal (Level Order) → uses a queue.
👉 Extra Insight:
● Inorder Traversal of a Binary Search Tree (BST) gives elements in
sorted order.
● Traversals are one of the most common interview coding questions.
🔹 Memory Management
● Nodes are created using new, so memory is allocated on the heap.
● Must use delete (or write a destructor) to free memory.
● Safer modern approach: smart pointers (std::unique_ptr<Node>).
Example:
#include <memory>
struct Node {
int data;
std::unique_ptr<Node> left, right;
Node(int val) : data(val), left(nullptr), right(nullptr) {}
};
🔹 Real-World Uses
● Binary Trees are the base for advanced structures:
○ Binary Search Trees (BSTs).
○ Heaps (for priority queues).
○ Syntax Trees (compilers).
○ Huffman Encoding Trees (compression).
○ Decision Trees (ML).
👉 Interview One-Liner:
"Binary trees are the foundation of most hierarchical and efficient searching data structures like BST, AVL, Heaps, and are widely used in parsing, expression evaluation, and memory management."
🌳 Binary Tree Traversals
🔹 Categories of Traversals
● Depth-First Search (DFS)
○ Explores as deeply as possible along each branch before
backtracking.
○ Implemented using recursion or an explicit stack.
○ Variants:
■ Inorder (L → Root → R)
■ Preorder (Root → L → R)
■ Postorder (L → R → Root)
○ Good for structural exploration and expression tree evaluation.
● Breadth-First Search (BFS / Level Order Traversal)
○ Explores the tree level by level, left to right.
○ Implemented using a queue.
○ Useful for shortest path in unweighted graphs/trees,
serialization, etc.
🔹 Inorder Traversal (L → Root → R)
● Steps:
○ Recursively traverse left subtree.
○ Visit current node.
○ Recursively traverse right subtree.
● Properties:
○ In a Binary Search Tree (BST), inorder traversal gives sorted
ascending order.
○ Useful for printing sorted data or validating BST property.
● Complexity:
○ Time: O(n) (every node visited once).
○ Space: O(h) recursion stack (h = tree height). Worst case O(n) in
skewed tree.
● Extra Interview Points:
○ Can be implemented iteratively with a stack.
○ Morris Traversal → Inorder traversal without recursion/stack,
uses O(1) extra space by temporarily modifying tree links.
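The Morris idea mentioned above can be sketched as follows: the inorder predecessor's null right pointer is temporarily "threaded" back to the current node so we can return without a stack. A sketch, not a drop-in library routine; it restores every thread before finishing:

```cpp
#include <vector>
using namespace std;

struct Node {
    int data;
    Node* left;
    Node* right;
    Node(int val) : data(val), left(nullptr), right(nullptr) {}
};

// Morris inorder traversal: O(n) time, O(1) extra space.
vector<int> morrisInorder(Node* root) {
    vector<int> result;
    Node* curr = root;
    while (curr != nullptr) {
        if (curr->left == nullptr) {
            result.push_back(curr->data);   // no left subtree: visit, go right
            curr = curr->right;
        } else {
            Node* pred = curr->left;        // find inorder predecessor
            while (pred->right != nullptr && pred->right != curr)
                pred = pred->right;
            if (pred->right == nullptr) {
                pred->right = curr;         // create thread, descend left
                curr = curr->left;
            } else {
                pred->right = nullptr;      // remove thread, visit, go right
                result.push_back(curr->data);
                curr = curr->right;
            }
        }
    }
    return result;
}
```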
🔹 Preorder Traversal (Root → L → R)
● Steps:
○ Visit current node.
○ Recursively traverse left subtree.
○ Recursively traverse right subtree.
● Properties:
○ Useful for tree serialization/deserialization (LeetCode:
Serialize and Deserialize Binary Tree).
○ Root is always the first element in preorder → helps reconstruct
a tree with inorder + preorder.
● Complexity: Same as inorder (O(n) time, O(h) space).
● Extra Interview Points:
○ Preorder is used in copying a tree or creating an expression
prefix notation (Polish notation).
○ With inorder + preorder, the tree can be uniquely reconstructed.
🔹 Postorder Traversal (L → R → Root)
● Steps:
○ Recursively traverse left subtree.
○ Recursively traverse right subtree.
○ Visit current node.
● Properties:
○ Used for deleting/freeing a tree safely (children first, then
parent).
○ Helpful in expression tree evaluation (Postfix / Reverse Polish
notation).
○ Root is always the last element in postorder.
● Complexity: O(n) time, O(h) space.
● Extra Interview Points:
○ With inorder + postorder, a binary tree can be uniquely
reconstructed.
○ Trickier to implement iteratively (requires two stacks or
modified traversal).
🔹 Level Order Traversal (BFS)
● Steps:
○ Start with root in a queue.
○ Dequeue node → visit it.
○ Enqueue its children (left then right).
○ Repeat until queue is empty.
● Properties:
○ Visits nodes level by level.
○ Useful for problems like:
■ Finding shortest path in unweighted trees.
■ Zig-zag traversal (alternate left-right).
■ Level-wise averages, max-sum-level, etc.
● Complexity:
○ Time: O(n) (each node processed once).
○ Space: O(w) (w = max width of tree, worst case O(n) for a full
tree).
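The steps above translate almost line-for-line into code (same Node struct as in the C++ sections below):

```cpp
#include <queue>
#include <vector>
using namespace std;

struct Node {
    int data;
    Node* left;
    Node* right;
    Node(int val) : data(val), left(nullptr), right(nullptr) {}
};

// BFS: the queue always holds the frontier of nodes awaiting a visit.
vector<int> levelOrder(Node* root) {
    vector<int> result;
    if (root == nullptr) return result;
    queue<Node*> q;
    q.push(root);
    while (!q.empty()) {
        Node* curr = q.front(); q.pop();
        result.push_back(curr->data);          // visit in dequeue order
        if (curr->left)  q.push(curr->left);   // children join the back
        if (curr->right) q.push(curr->right);
    }
    return result;
}
```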
🔹 Interview Comparison Table
Traversal     | Order            | Uses / Applications
Inorder       | L → Root → R     | Sorted order in BST, validate BST
Preorder      | Root → L → R     | Tree serialization, reconstruction, copy tree
Postorder     | L → R → Root     | Delete tree, expression evaluation
Level Order   | Level by level   | BFS problems, shortest path, zig-zag
🔹 Common Interview Questions
● Given a BST, return sorted elements → Use Inorder.
● Reconstruct a tree from preorder + inorder or postorder + inorder.
● Check if a tree is symmetric → Use Level Order / DFS.
● Find height of tree → DFS (postorder).
● Print tree level-wise → BFS.
● Serialize & Deserialize → Preorder + null markers or Level order with
queue.
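For the serialize/deserialize question, a minimal preorder-with-null-markers sketch (the '#' marker and the stream-based format are illustrative choices, not the only convention):

```cpp
#include <sstream>
#include <string>
using namespace std;

struct Node {
    int data;
    Node* left;
    Node* right;
    Node(int val) : data(val), left(nullptr), right(nullptr) {}
};

// Preorder with '#' for null: the markers make the structure unambiguous,
// so preorder alone is enough to rebuild the tree.
void serialize(Node* root, ostringstream& out) {
    if (root == nullptr) { out << "# "; return; }
    out << root->data << " ";
    serialize(root->left, out);
    serialize(root->right, out);
}

Node* deserialize(istringstream& in) {
    string tok;
    if (!(in >> tok) || tok == "#") return nullptr;
    Node* node = new Node(stoi(tok));
    node->left = deserialize(in);   // tokens arrive in preorder
    node->right = deserialize(in);
    return node;
}
```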
✅ With this structure, you can confidently explain definitions, algorithms,
complexities, and real-life applications → which is what interviewers love.
🌲 Preorder Traversal (Root → Left → Right)
🔹 Concept / Intuition
● Preorder is a Depth-First Traversal (DFS) technique.
● Order of visiting nodes:
1. Root → Process the current node.
2. Left Subtree → Recursively visit all nodes in the left subtree.
3. Right Subtree → Recursively visit all nodes in the right
subtree.
● Essentially: Root first, then children.
● Commonly used when we want to copy a tree structure or serialize a
tree.
🔹 Base Case
● If the current node is NULL, stop recursion (no more nodes to explore).
🔹 Pseudoalgorithm (Step-by-step)
1. If the current node is NULL → return.
2. Process the current node (store/print its data).
3. Recursively call preorder on the left child.
4. Recursively call preorder on the right child.
🔹 Pseudocode
function preorder(node):
    if node == NULL:
        return
    process(node.data)     // visit root
    preorder(node.left)    // traverse left subtree
    preorder(node.right)   // traverse right subtree
🔹 C++ Code Example
#include <iostream>
#include <vector>
using namespace std;
struct Node {
int data;
Node* left;
Node* right;
Node(int val) : data(val), left(NULL), right(NULL) {}
};
// Preorder traversal function
void preorder(Node* root, vector<int>& result) {
if (root == NULL) return;
result.push_back(root->data); // Step 1: process root
preorder(root->left, result); // Step 2: traverse left
preorder(root->right, result); // Step 3: traverse right
}
int main() {
// Build tree
Node* root = new Node(1);
root->left = new Node(2);
root->right = new Node(3);
root->left->left = new Node(4);
root->left->right = new Node(5);
vector<int> result;
preorder(root, result);
cout << "Preorder Traversal: ";
for (int val : result) cout << val << " ";
return 0;
}
🔹 Walkthrough of Example
For the tree:
    1
   / \
  2   3
 / \
4   5
● Visit 1 → store 1
● Traverse left → Visit 2 → store 2
● Traverse left → Visit 4 → store 4
● Backtrack → Visit 5 → store 5
● Backtrack → Traverse right → Visit 3 → store 3
➡ Result = [1, 2, 4, 5, 3]
🔹 Complexity Analysis
● Time Complexity:
○ Each node is visited once → O(N)
● Space Complexity:
○ For recursion stack → O(H) where H = height of tree
○ Worst case: skewed tree → O(N)
○ Best case: balanced tree → O(log N)
🔹 Interview Insights
● Recursive vs Iterative:
1. Recursive is simpler.
2. Iterative uses a stack explicitly (important follow-up question).
● Applications:
1. Serialization/Deserialization of trees.
2. Constructing expression trees (prefix evaluation).
3. Copying or cloning tree structures.
● Follow-up questions:
1. Can you implement it iteratively using a stack?
2. How would you modify traversal order to get Inorder or Postorder?
3. What happens in skewed trees (linked-list-like structure)?
🌲 Preorder Traversal (Iterative – Using Stack)
🔹 Concept / Intuition
● Recursive traversal uses function call stack internally.
● We can simulate recursion manually using an explicit stack.
● Idea:
○ Start from root, push it into a stack.
○ Pop a node → process it.
○ Push its right child first, then left child.
○ (So left child is processed first, because stack is LIFO).
● This gives Root → Left → Right order.
🔹 Pseudoalgorithm (Step-by-step)
1. If the root is NULL → return (empty tree).
2. Initialize an empty stack. Push the root node into it.
3. While stack is not empty:
○ Pop the top node.
○ Process (visit) the popped node.
○ Push its right child (if exists).
○ Push its left child (if exists).
4. Repeat until stack is empty.
🔹 Pseudocode
function preorder_iterative(root):
    if root == NULL:
        return
    stack = new Stack()
    stack.push(root)
    while stack is not empty:
        node = stack.pop()
        process(node.data)
        if node.right != NULL:
            stack.push(node.right)
        if node.left != NULL:
            stack.push(node.left)
🔹 C++ Code Example
#include <iostream>
#include <stack>
#include <vector>
using namespace std;
struct Node {
int data;
Node* left;
Node* right;
Node(int val) : data(val), left(NULL), right(NULL) {}
};
vector<int> preorderIterative(Node* root) {
    vector<int> result;
    if (root == NULL) return result;
    stack<Node*> st;
    st.push(root);
    while (!st.empty()) {
        Node* curr = st.top();
        st.pop();
        result.push_back(curr->data); // Visit current node
        // Push right first so that left is processed first
        if (curr->right) st.push(curr->right);
        if (curr->left) st.push(curr->left);
    }
    return result;
}
int main() {
// Build tree
Node* root = new Node(1);
root->left = new Node(2);
root->right = new Node(3);
root->left->left = new Node(4);
root->left->right = new Node(5);
vector<int> result = preorderIterative(root);
cout << "Preorder Traversal (Iterative): ";
for (int val : result) cout << val << " ";
return 0;
}
🔹 Walkthrough of Example
Tree:
    1
   / \
  2   3
 / \
4   5
● Start: Push [1].
● Pop 1 → Visit 1 → Push right(3), left(2). Stack = [3, 2].
● Pop 2 → Visit 2 → Push right(5), left(4). Stack = [3, 5, 4].
● Pop 4 → Visit 4. Stack = [3, 5].
● Pop 5 → Visit 5. Stack = [3].
● Pop 3 → Visit 3. Stack = [].
➡ Result = [1, 2, 4, 5, 3]
🔹 Complexity Analysis
● Time Complexity: O(N) → each node processed once.
● Space Complexity:
○ Stack stores at most O(H) nodes, where H = height of tree.
○ Worst case (skewed tree): O(N)
○ Balanced tree: O(log N)
🔹 Interview Insights
● Iterative approach shows you understand recursion internally uses
stack.
● Trick: Always push right child first → ensures left child is visited
before right.
● Comparison with Recursive:
1. Recursive → shorter, cleaner, but stack hidden.
2. Iterative → slightly more complex, but no function call overhead
(better for very deep trees).
● Follow-up questions:
1. Can you implement Inorder / Postorder iteratively?
2. Can you implement Morris Traversal (preorder without
stack/recursion, O(1) space)?
3. Which traversal (preorder, inorder, postorder, level-order) is
best for serializing a tree?
🌳 Inorder Traversal (Recursive)
🔹 Intuition / Algorithm
● Inorder traversal is a Depth-First Traversal (DFS) technique for binary
trees.
● The traversal order is:
Left Subtree → Root Node → Right Subtree
Key Points:
1. Start at the root node.
2. Recursively traverse the left subtree until reaching the leftmost node.
3. Once left subtree is done, visit (process) the current node.
4. Recursively traverse the right subtree.
5. Stop when you reach a NULL (base case).
🔹 Step-by-Step Algorithm
1. Base Case: If root == NULL, return (no node to process).
2. Recursive Case:
○ Recursively call inorder on root->left.
○ Process root->data.
○ Recursively call inorder on root->right.
🔹 Pseudoalgorithm
function inorder(node):
    if node == NULL:
        return
    inorder(node.left)    // Step 1: Left Subtree
    process(node.data)    // Step 2: Current Node
    inorder(node.right)   // Step 3: Right Subtree
🔹 Pseudocode
inorder_traversal(root):
    if root == null:
        return
    inorder_traversal(root.left)
    print(root.data)
    inorder_traversal(root.right)
🔹 C++ Implementation
#include <iostream>
using namespace std;
struct Node {
int data;
Node* left;
Node* right;
Node(int val) : data(val), left(NULL), right(NULL) {}
};
void inorderTraversal(Node* root) {
if (root == NULL) return; // Base case
inorderTraversal(root->left); // Step 1: Left
cout << root->data << " "; // Step 2: Visit Node
inorderTraversal(root->right); // Step 3: Right
}
int main() {
// Example Tree
Node* root = new Node(1);
root->left = new Node(2);
root->right = new Node(3);
root->left->left = new Node(4);
root->left->right = new Node(5);
cout << "Inorder Traversal: ";
inorderTraversal(root); // Output: 4 2 5 1 3
return 0;
}
🔹 Walkthrough Example
Tree:
    1
   / \
  2   3
 / \
4   5
Traversal:
1. Start at 1. Go left → 2.
2. Go left again → 4.
○ Left child = NULL → visit 4.
3. Back to 2 → visit 2.
4. Go right → 5. Visit 5.
5. Back to 1 → visit 1.
6. Go right → 3. Visit 3.
➡ Result: 4 2 5 1 3
🔹 Complexity Analysis
● Time Complexity: O(N) → Each node visited exactly once.
● Space Complexity:
○ Due to recursive call stack: O(H) where H is tree height.
○ Worst case (skewed tree): O(N)
○ Best case (balanced tree): O(log N)
🔹 Real-World Uses
● Inorder traversal of a Binary Search Tree (BST) → always yields sorted
order.
● Used in:
○ Printing sorted elements from BST.
○ Checking BST properties.
○ Tree-based expression evaluation (infix expressions).
🔹 Interview Insights
● Recursive inorder is the simplest DFS implementation.
● Follow-ups interviewers may ask:
1. Implement iterative inorder using stack.
2. Implement Morris Inorder Traversal (O(1) space).
3. What changes for preorder and postorder traversals?
4. Why does inorder traversal give sorted order for BSTs?
🌳 Iterative Inorder Traversal
(Binary Tree)
🔹 Intuition
● In inorder traversal (Left → Node → Right), recursion automatically
handles the traversal order using the call stack.
● In the iterative version, we use an explicit stack data structure to
simulate recursion.
● The trick is:
○ Keep going left until you can’t anymore (push nodes on the
stack).
○ When left ends, process (visit) the node.
○ Then move to its right subtree.
● Repeat until both the stack is empty and the current node is NULL.
🔹 Step-by-Step Algorithm
1. Create an empty stack.
2. Initialize current = root.
3. While current != NULL or stack not empty:
○ Push current onto the stack and go left (current =
current->left).
○ If current == NULL, pop from stack → process node → move right
(current = node->right).
4. Continue until all nodes are visited.
🔹 Pseudoalgorithm
function inorder_iterative(root):
    stack = empty
    current = root
    while stack is not empty OR current != NULL:
        if current != NULL:
            push current to stack
            current = current.left
        else:
            current = pop from stack
            process(current.data)
            current = current.right
🔹 Pseudocode
inorder_iterative(root):
    stack = []
    current = root
    while current != null or stack not empty:
        while current != null:
            stack.push(current)
            current = current.left
        current = stack.pop()
        print(current.data)
        current = current.right
🔹 C++ Implementation
#include <iostream>
#include <stack>
using namespace std;
struct Node {
int data;
Node* left;
Node* right;
Node(int val) : data(val), left(NULL), right(NULL) {}
};
void inorderIterative(Node* root) {
    stack<Node*> st;
    Node* current = root;
    while (current != NULL || !st.empty()) {
        // Step 1: Go to extreme left
        while (current != NULL) {
            st.push(current);
            current = current->left;
        }
        // Step 2: Backtrack
        current = st.top();
        st.pop();
        cout << current->data << " "; // Visit node
        // Step 3: Go right
        current = current->right;
    }
}
int main() {
// Example Tree
Node* root = new Node(1);
root->left = new Node(2);
root->right = new Node(3);
root->left->left = new Node(4);
root->left->right = new Node(5);
cout << "Inorder Iterative Traversal: ";
inorderIterative(root); // Output: 4 2 5 1 3
return 0;
}
🔹 Walkthrough Example
Tree:
    1
   / \
  2   3
 / \
4   5
Simulation:
● Start: current = 1 → push(1), go left.
● current = 2 → push(2), go left.
● current = 4 → push(4), go left → NULL.
● Pop(4), print 4. → go right (NULL).
● Pop(2), print 2. → go right (5).
● Push(5) → left NULL → Pop(5), print 5.
● Pop(1), print 1. → go right (3).
● Push(3) → left NULL → Pop(3), print 3.
➡ Output: 4 2 5 1 3 ✅
🔹 Complexity
● Time Complexity: O(N) (every node pushed & popped once).
● Space Complexity: O(H) (stack stores nodes along height of tree).
○ Skewed tree → O(N)
○ Balanced tree → O(log N)
🔹 Interview Insights
● This shows you understand recursion under the hood.
● Follow-up questions interviewers ask:
○ Can you do it without stack and recursion? 👉 Morris Traversal, O(1) space.
○ What happens in skewed trees (worst case)?
○ Compare recursive vs iterative vs Morris.
🌳 Postorder Traversal (Recursive)
🔹 Intuition
● Postorder traversal is a Depth-First Traversal (DFS) technique where
nodes are visited in the order:
Left Subtree → Right Subtree → Root Node
● Key idea: visit the children before the parent.
● Often used in scenarios where deletion or freeing memory needs to
happen bottom-up.
🔹 Step-by-Step Algorithm
1. Base Case: If current node == NULL, return.
2. Recursive Steps:
○ Traverse the left subtree recursively.
○ Traverse the right subtree recursively.
○ Process the current node (print or add to array).
🔹 Pseudoalgorithm
function postorder(node):
    if node == NULL:
        return
    postorder(node.left)    // Step 1: Left Subtree
    postorder(node.right)   // Step 2: Right Subtree
    process(node.data)      // Step 3: Current Node
🔹 Pseudocode
postorder_traversal(root):
    if root == null:
        return
    postorder_traversal(root.left)
    postorder_traversal(root.right)
    print(root.data)
🔹 C++ Implementation
#include <iostream>
using namespace std;
struct Node {
int data;
Node* left;
Node* right;
Node(int val) : data(val), left(NULL), right(NULL) {}
};
void postorderTraversal(Node* root) {
if (root == NULL) return; // Base case
postorderTraversal(root->left); // Left subtree
postorderTraversal(root->right); // Right subtree
cout << root->data << " "; // Visit current node
}
int main() {
// Example Tree
Node* root = new Node(1);
root->left = new Node(2);
root->right = new Node(3);
root->left->left = new Node(4);
root->left->right = new Node(5);
cout << "Postorder Traversal: ";
postorderTraversal(root); // Output: 4 5 2 3 1
return 0;
}
🔹 Walkthrough Example
Tree:
    1
   / \
  2   3
 / \
4   5
● Start at root 1.
● Traverse left subtree → 2.
○ Traverse left → 4 → visit 4.
○ Traverse right → 5 → visit 5.
○ Back to 2 → visit 2.
● Traverse right subtree → 3 → visit 3.
● Back to root → visit 1.
➡ Output: 4 5 2 3 1 ✅
🔹 Complexity Analysis
● Time Complexity: O(N) → each node visited once.
● Space Complexity:
○ Recursive stack: O(H) where H = height of the tree.
○ Worst case (skewed tree): O(N)
○ Balanced tree: O(log N)
🔹 Real-World Applications
● Tree Deletion: Delete nodes bottom-up.
● Expression Trees: Evaluate postfix expressions.
● File System Operations: Delete files/folders recursively.
● Memory Management: Free memory in hierarchical structures.
🔹 Interview Insights
● Recursive postorder is simple, but iterative is tricky (requires two
stacks or modified single stack method).
● Follow-ups often include:
1. Implement Iterative Postorder Traversal.
2. Compare recursive vs iterative (space and time).
3. Use postorder to evaluate expression trees.
🌳 Iterative Postorder Traversal
🔹 Intuition
● Postorder order: Left → Right → Root.
● Recursion implicitly keeps track of visited nodes. Iteratively, we need
explicit tracking.
● Two-stack method:
○ First stack is used for normal DFS (root → right → left).
○ Second stack reverses the order to Left → Right → Root.
● One-stack method:
○ Track previously visited nodes to decide when to process the
root.
○ More complex but uses less space.
🔹 Approach 1: Two Stacks
Step-by-Step Algorithm
1. Create two stacks: s1 and s2.
2. Push root to s1.
3. While s1 is not empty:
○ Pop node from s1 → push it to s2.
○ Push left child (if exists) to s1.
○ Push right child (if exists) to s1.
4. Once s1 is empty, pop all nodes from s2 → visit them.
Pseudocode
postorder_iterative(root):
    if root == NULL: return
    s1 = empty stack
    s2 = empty stack
    s1.push(root)
    while s1 not empty:
        node = s1.pop()
        s2.push(node)
        if node.left != NULL: s1.push(node.left)
        if node.right != NULL: s1.push(node.right)
    while s2 not empty:
        node = s2.pop()
        print(node.data)
C++ Implementation
#include <iostream>
#include <stack>
using namespace std;
struct Node {
int data;
Node* left;
Node* right;
Node(int val) : data(val), left(NULL), right(NULL) {}
};
void postorderIterativeTwoStacks(Node* root) {
    if (!root) return;
    stack<Node*> s1, s2;
    s1.push(root);
    while (!s1.empty()) {
        Node* node = s1.top(); s1.pop();
        s2.push(node);
        if (node->left) s1.push(node->left);
        if (node->right) s1.push(node->right);
    }
    while (!s2.empty()) {
        cout << s2.top()->data << " ";
        s2.pop();
    }
}
int main() {
Node* root = new Node(1);
root->left = new Node(2);
root->right = new Node(3);
root->left->left = new Node(4);
root->left->right = new Node(5);
cout << "Postorder Traversal (Two Stacks): ";
postorderIterativeTwoStacks(root); // Output: 4 5 2 3 1
return 0;
}
🔹 Complexity (Two Stacks)
● Time Complexity: O(N) → each node pushed/popped twice.
● Space Complexity: O(N) → two stacks storing nodes.
🔹 Approach 2: Single Stack (Space Optimized)
Step-by-Step Algorithm
1. Initialize an empty stack and current = root.
2. Loop while stack not empty or current != NULL:
○ Push left children to stack.
○ Peek the top of the stack → check if it has a right child that
is unvisited:
■ If yes → move current to right child.
■ Else → pop and visit.
○ Keep track of last visited node to avoid revisiting.
Pseudocode
postorder_iterative_one_stack(root):
    stack = empty
    lastVisited = NULL
    current = root
    while current != NULL or stack not empty:
        if current != NULL:
            stack.push(current)
            current = current.left
        else:
            peekNode = stack.top()
            if peekNode.right != NULL and lastVisited != peekNode.right:
                current = peekNode.right
            else:
                visit(peekNode)
                lastVisited = stack.pop()
This is trickier, but space is reduced to O(H) instead of O(N).
🔹 Interview Insights
● Two stacks → simpler to explain and code, safe for first
implementation.
● One stack → more space-efficient (O(H) vs O(N)), shows deep understanding of traversal logic.
● Postorder iterative is a favorite “hard” interview question.
🌳 Level-Order Traversal (2D Vector)
🔹 Intuition
● Level-order traversal visits nodes level by level, from top to bottom
and left to right.
● A queue is the natural data structure for BFS:
○ Enqueue root → process nodes level-wise → enqueue children.
● For storing nodes per level, we use a temporary vector for each level
and then append it to the main 2D vector.
● This approach is extremely useful in problems like:
○ Zigzag level order traversal
○ Bottom-up level order
○ Finding averages per level
🔹 Step-by-Step Algorithm
1. Initialize:
○ Empty queue q to store nodes.
○ Empty 2D vector ans to store level-wise traversal.
○ If root is NULL → return ans.
2. Enqueue the root node.
3. While the queue is not empty:
○ Determine size = q.size() → number of nodes at current level.
○ Initialize level vector.
○ For size nodes:
■ Dequeue front node.
■ Add its value to level.
■ Enqueue left and right children (if exist).
○ Append level vector to ans.
4. Return ans after traversal completes.
🔹 Pseudocode
function levelOrder(root):
    if root == NULL: return []
    ans = []
    queue = empty queue
    queue.push(root)
    while queue not empty:
        size = queue.size()
        level = []
        for i in 1 to size:
            node = queue.front()
            queue.pop()
            level.push_back(node.data)
            if node.left != NULL: queue.push(node.left)
            if node.right != NULL: queue.push(node.right)
        ans.push_back(level)
    return ans
🔹 C++ Implementation
#include <iostream>
#include <vector>
#include <queue>
using namespace std;
struct Node {
int data;
Node* left;
Node* right;
Node(int val) : data(val), left(NULL), right(NULL) {}
};
vector<vector<int>> levelOrderTraversal(Node* root) {
    vector<vector<int>> ans;
    if (!root) return ans;
    queue<Node*> q;
    q.push(root);
    while (!q.empty()) {
        int size = q.size();
        vector<int> level;
        for (int i = 0; i < size; ++i) {
            Node* node = q.front(); q.pop();
            level.push_back(node->data);
            if (node->left) q.push(node->left);
            if (node->right) q.push(node->right);
        }
        ans.push_back(level);
    }
    return ans;
}
int main() {
Node* root = new Node(1);
root->left = new Node(2);
root->right = new Node(3);
root->left->left = new Node(4);
root->left->right = new Node(5);
vector<vector<int>> result = levelOrderTraversal(root);
cout << "Level-Order Traversal:\n";
for (auto &level : result) {
for (int val : level) cout << val << " ";
cout << "\n";
}
return 0;
}
🔹 Walkthrough Example
Tree:
1
/ \
2 3
/ \
4 5
● Level 0: [1] → enqueue 2, 3
● Level 1: [2,3] → enqueue 4, 5
● Level 2: [4,5] → no children
● Final Output (2D vector): [[1], [2,3], [4,5]] ✅
🔹 Complexity Analysis
● Time Complexity: O(N) → each node visited once.
● Space Complexity:
○ Queue stores at most max nodes at a level → O(W) where W =
maximum width of tree.
○ 2D vector stores all nodes → O(N).
🔹 Interview Insights
● Level-order traversal is the go-to BFS pattern in trees.
● Useful for problems like:
○ Maximum per level, sum per level, average per level
○ Zigzag traversal (alternate levels reversed)
○ Vertical order traversal (with coordinate tracking)
🔹 Maximum Depth (BFS / Level-Order)
Pseudocode
function maxDepth(root):
    if root == NULL: return 0
    queue = empty queue
    queue.push(root)
    level = 0
    while queue not empty:
        size = queue.size()
        level += 1
        for i = 1 to size:
            node = queue.front()
            queue.pop()
            if node.left != NULL: queue.push(node.left)
            if node.right != NULL: queue.push(node.right)
    return level
Complexity Analysis
● Time Complexity: O(N) → visit each node once.
● Space Complexity: O(W) → max width of the tree (queue size at a
level).
🔹 Maximum Depth (DFS / Recursive)
Pseudocode
function maxDepth(root):
    if root == NULL:
        return 0
    leftDepth = maxDepth(root.left)
    rightDepth = maxDepth(root.right)
    return 1 + max(leftDepth, rightDepth)
Complexity Analysis
● Time Complexity: O(N) → visit each node once.
● Space Complexity: O(H) → recursive call stack, H = height of the tree.
○ Worst case (skewed tree) → O(N)
○ Balanced tree → O(log N)
🔹 Balanced Binary Tree Check (Optimized,
Postorder)
Pseudocode
function checkBalance(root):
    if root == NULL:
        return 0                        // height of empty tree
    leftHeight = checkBalance(root.left)
    if leftHeight == -1: return -1      // left subtree unbalanced
    rightHeight = checkBalance(root.right)
    if rightHeight == -1: return -1     // right subtree unbalanced
    if abs(leftHeight - rightHeight) > 1:
        return -1                       // current node unbalanced
    return 1 + max(leftHeight, rightHeight) // height of current node

function isBalanced(root):
    return checkBalance(root) != -1
Complexity Analysis
● Time Complexity: O(N) → each node visited once, heights computed
during traversal.
● Space Complexity: O(H) → recursion stack, H = height of the tree.
○ Worst case (skewed tree) → O(N)
○ Balanced tree → O(log N)
🔹 Diameter of Binary Tree (Optimized, Postorder)
Pseudocode
function height(node, diameter_ref):
    if node == NULL:
        return 0
    leftHeight = height(node.left, diameter_ref)
    rightHeight = height(node.right, diameter_ref)
    // Diameter at this node
    currentDiameter = 1 + leftHeight + rightHeight
    diameter_ref = max(diameter_ref, currentDiameter)
    return 1 + max(leftHeight, rightHeight) // Height of current node

function diameterOfBinaryTree(root):
    diameter = 0
    height(root, diameter)
    return diameter
Complexity Analysis
● Time Complexity: O(N) → each node visited once, heights computed
during traversal.
● Space Complexity: O(H) → recursion stack, H = height of the tree.
○ Worst case (skewed tree) → O(N)
○ Balanced tree → O(log N)
🔹 Problem Breakdown / Goals
Main Problem: Given a binary tree, perform the boundary traversal in
anticlockwise direction starting from the root.
Breakdown into subgoals:
1. Add Root Node: Start traversal by adding the root value (if not NULL).
2. Left Boundary (excluding leaves):
○ Traverse from root → leftmost node.
○ Add only non-leaf nodes.
○ If left child is NULL, move to right child.
3. Leaf Nodes (bottom boundary):
○ Traverse entire tree in preorder.
○ Add all leaf nodes to result.
4. Right Boundary (excluding leaves, reverse order):
○ Traverse from root → rightmost node.
○ Add non-leaf nodes to a temporary stack/vector.
○ Append these nodes in reverse order to the result.
🔹 Pseudocode
function isLeaf(node):
    return node.left == NULL and node.right == NULL

function addLeftBoundary(root, res):
    node = root.left
    while node != NULL:
        if not isLeaf(node): res.push_back(node.data)
        if node.left != NULL: node = node.left
        else: node = node.right

function addLeafNodes(node, res):
    if node == NULL: return
    if isLeaf(node): res.push_back(node.data)
    addLeafNodes(node.left, res)
    addLeafNodes(node.right, res)

function addRightBoundary(root, res):
    temp = empty list
    node = root.right
    while node != NULL:
        if not isLeaf(node): temp.push_back(node.data)
        if node.right != NULL: node = node.right
        else: node = node.left
    // Add right boundary in reverse
    for i from temp.size()-1 down to 0:
        res.push_back(temp[i])

function boundaryTraversal(root):
    if root == NULL: return []
    res = []
    if not isLeaf(root): res.push_back(root.data)
    addLeftBoundary(root, res)
    addLeafNodes(root, res)
    addRightBoundary(root, res)
    return res
🔹 Complexity Analysis
● Time Complexity: O(N) → each node visited exactly once (left boundary,
right boundary, leaves).
● Space Complexity: O(H) → recursion stack for leaf traversal, plus
temporary storage for right boundary.
○ H = height of tree
○ Worst case skewed tree → O(N)
○ Balanced tree → O(log N)
Problem Breakdown / Goals
Main Problem: Given a binary tree, return the vertical order traversal
starting from the leftmost column to the rightmost. Nodes in the same
vertical line should appear in level order.
Breakdown into subgoals:
1. Assign coordinates to each node:
○ x → vertical column (left negative, right positive)
○ y → level/depth (root = 0, increases downward)
2. Organize nodes in a data structure:
○ Use a nested map: map<x, map<y, multiset>> to store nodes at
each vertical and level.
○ multiset ensures nodes at the same position are sorted.
3. Traverse tree using BFS:
○ Queue stores (node, x, y)
○ Start with root (0,0)
○ For left child → (x-1, y+1)
○ For right child → (x+1, y+1)
○ Insert node values into the map.
4. Prepare result vector:
○ Iterate over map<x> in order from leftmost to rightmost column.
○ Flatten each column’s multiset into a vector.
○ Append these vectors into the final 2D result.
🔹 Pseudocode
function verticalOrderTraversal(root):
    if root == NULL: return []
    nodesMap = map<x, map<y, multiset>>   // nested map
    queue = empty queue
    queue.push((root, 0, 0))              // node, x, y
    while queue is not empty:
        node, x, y = queue.pop()
        nodesMap[x][y].insert(node.data)
        if node.left != NULL:
            queue.push((node.left, x-1, y+1))
        if node.right != NULL:
            queue.push((node.right, x+1, y+1))
    ans = empty list
    for x in nodesMap in sorted order:
        col = empty list
        for y in nodesMap[x] in sorted order:
            for val in nodesMap[x][y]:
                col.push_back(val)
        ans.push_back(col)
    return ans
🔹 Complexity Analysis
● Time Complexity: O(N log N)
○ O(N) for BFS traversal
○ O(log N) for map insertion per node in nested map
● Space Complexity: O(N)
○ Queue stores up to N nodes
○ Map stores all N nodes
🔹 Problem Breakdown / Goals
Main Problem: Return the top view of a binary tree — the set of nodes visible
from the top.
Breakdown into subgoals:
1. Assign vertical coordinates (x) to nodes:
○ Root = 0
○ Left child → x - 1
○ Right child → x + 1
2. Use BFS traversal to ensure level order:
○ Level order guarantees that the first node at each vertical
coordinate is the topmost node.
3. Map to store top view nodes:
○ map<x, node> stores the first node encountered at each vertical
index.
4. Traverse the tree:
○ Enqueue (node, x)
○ For each node:
■ If vertical index x is not in map, insert node value.
■ Enqueue left and right children with updated vertical
indices.
5. Prepare result vector:
○ Iterate over the map in ascending order of vertical index.
○ Push node values into ans vector.
🔹 Pseudocode
function topView(root):
    if root == NULL: return []
    ans = empty list
    topMap = empty map<int, int>   // key = vertical index, value = node value
    queue = empty queue
    queue.push((root, 0))          // node, vertical index
    while queue is not empty:
        node, x = queue.pop()
        if x not in topMap:
            topMap[x] = node.data
        if node.left != NULL:
            queue.push((node.left, x-1))
        if node.right != NULL:
            queue.push((node.right, x+1))
    for x in sorted(topMap keys):
        ans.push_back(topMap[x])
    return ans
🔹 Complexity Analysis
● Time Complexity: O(N log N)
○ BFS traversal → O(N)
○ Map insertion / sorting → O(log N) per node
● Space Complexity: O(N)
○ Queue stores up to N nodes
○ Map stores at most N nodes (one per vertical index)
🔹 Problem Breakdown / Goals
Main Problem: Find the Lowest Common Ancestor (LCA) of two given nodes x and
y in a binary tree.
Breakdown into subgoals:
1. Base Case Handling:
○ If the current node is NULL, or equal to x or y, return the node
itself.
○ This handles leaf nodes and ensures recursion can propagate found
nodes upward.
2. Recursive Search:
○ Recur for left and right subtrees to find x and y.
3. Combine Results:
○ Both left and right return non-null:
■ Current node is LCA (as x is in one subtree and y in the
other).
○ Only left is non-null:
■ Return left (LCA found in left subtree).
○ Only right is non-null:
■ Return right (LCA found in right subtree).
4. Return LCA:
○ Propagate the LCA found up the recursion stack until the root
call returns it.
🔹 Pseudocode
function LCA(root, x, y):
    if root == NULL or root == x or root == y:
        return root
    left = LCA(root.left, x, y)
    right = LCA(root.right, x, y)
    if left != NULL and right != NULL:
        return root     // Both sides found nodes → current node is LCA
    else if left != NULL:
        return left     // LCA found in left subtree
    else:
        return right    // LCA found in right subtree
🔹 Complexity Analysis
● Time Complexity: O(N)
○ Each node is visited once in the recursion.
● Space Complexity: O(H)
○ Auxiliary space due to recursion stack, where H = height of the
tree.
✅ Key Points / Notes
● Works for any binary tree, not just BST.
● Base case is critical: returning NULL vs node ensures correct
propagation.
● Intuition: If both subtrees have target nodes, the current node must be
the common ancestor.