ROHINI COLLEGE OF ENGINEERING AND TECHNOLOGY

UNIT IV ALGORITHM DESIGN TECHNIQUES


Dynamic Programming: Matrix-Chain Multiplication – Elements of Dynamic
Programming – Longest Common Subsequence- Greedy Algorithms: – Elements of the
Greedy Strategy- An Activity-Selection Problem - Huffman Coding.

GREEDY ALGORITHMS
The greedy method is one of the general problem-solving strategies, like divide and conquer. It is used for solving optimization problems: problems that demand either a maximum or a minimum result. Let us first understand some terms.
The greedy method is the simplest and most straightforward approach. It is a technique rather than a single algorithm. Its defining feature is that each decision is taken on the basis of the currently available information: whatever information is present at the moment, the decision is made without worrying about the effect of that decision in the future.
This technique is used to determine a feasible solution, which may or may not be optimal. A feasible solution is one that satisfies all the given constraints; if more than one solution satisfies the constraints, then all of those solutions are feasible. An optimal solution is the best, most favorable solution among all the feasible solutions.
The general structure of a greedy algorithm can be summarized in the following steps:
1. Identify the problem as an optimization problem where we need to find the best
solution among a set of possible solutions.
2. Determine the set of feasible solutions for the problem.
3. Identify the optimal substructure of the problem, meaning that the optimal solution
to the problem can be constructed from the optimal solutions of its subproblems.
4. Develop a greedy strategy to construct a feasible solution step by step, making the
locally optimal choice at each step.
5. Prove the correctness of the algorithm by showing that the locally optimal choices
at each step lead to a globally optimal solution.
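The steps above can be illustrated with the activity-selection problem (covered later in this unit): the greedy strategy is to sort activities by finishing time and repeatedly pick the earliest-finishing activity compatible with those already chosen. A sketch in Python; the activity data is illustrative:

```python
# Greedy activity selection: among the activities that do not overlap
# the ones already chosen, always pick the one that finishes earliest.
def select_activities(activities):
    # activities: list of (start, finish) pairs
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:        # feasibility check: no overlap
            chosen.append((start, finish))
            last_finish = finish
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10), (8, 11), (12, 16)]
print(select_activities(acts))  # [(1, 4), (5, 7), (8, 11), (12, 16)]
```

Each iteration makes the locally optimal choice (earliest finish), and for this problem that choice provably leads to a maximum-size compatible set.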
Characteristics of Greedy method
The following are the characteristics of a greedy method:
● To construct the solution, the algorithm maintains two sets: one containing the chosen items and another containing the rejected items.
● A greedy algorithm makes locally good choices in the hope that the resulting solution is feasible, and ideally optimal.

MC4101 - ADVANCED DATA STRUCTURES AND ALGORITHMS



Components of Greedy Algorithm


The components that can be used in the greedy algorithm are:
● The feasible solution: A subset of given inputs that satisfies all specified constraints
of a problem is known as a “feasible solution”.
● Optimal solution: The feasible solution that achieves the desired extremum is
called an “optimal solution”. In other words, the feasible solution that either
minimizes or maximizes the objective function specified in a problem is known as
an “optimal solution”.
● Feasibility check: It investigates whether the selected input fulfils all constraints
mentioned in a problem or not. If it fulfils all the constraints then it is added to a set
of feasible solutions; otherwise, it is rejected.
● Optimality check: It investigates whether a selected input produces either a minimum or maximum value of the objective function while fulfilling all the specified constraints. If an element of the solution set produces the desired extremum, then it is added to the set of optimal solutions.
● Optimal substructure property: The globally optimal solution to a problem contains within it the optimal solutions to its subproblems.
● Greedy choice property: The globally optimal solution is assembled by making locally optimal choices. The greedy approach applies some locally optimal criterion to obtain a partial solution that seems best at that moment, and then solves the remaining subproblem.

The local decisions (or choices) must possess three characteristics as mentioned below:
● Feasibility: The selected choice must fulfill local constraints.
● Optimality: The selected choice must be the best at that stage (locally optimal
choice).
● Irrevocability: The selected choice cannot be changed once it is made.

Greedy choice property:


This property says that a globally optimal solution can be obtained by making locally optimal (greedy) choices. The choice made by a greedy algorithm may depend on earlier choices but never on future ones. The algorithm iteratively makes one greedy choice after another, reducing the given problem to a smaller one.
Optimal substructure:
A problem exhibits optimal substructure if an optimal solution to the problem
contains optimal solutions to the subproblems. That means we can solve subproblems and
build up the solutions to solve larger problems.
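Coin change is a standard illustration of why the greedy choice property matters: with canonical denominations the greedy choice (always take the largest coin that fits) is globally optimal, but with an arbitrary denomination set it need not be. A minimal Python sketch; the denominations and amounts are illustrative:

```python
def greedy_coin_change(amount, denominations):
    # Greedy choice: repeatedly take the largest coin that still fits.
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins

# Canonical coins: the greedy choice is globally optimal here.
print(greedy_coin_change(63, [1, 5, 10, 25]))  # [25, 25, 10, 1, 1, 1]

# Non-canonical coins: greedy gives 6 = 4 + 1 + 1 (3 coins),
# but the optimal answer is 3 + 3 (2 coins).
print(greedy_coin_change(6, [1, 3, 4]))        # [4, 1, 1]
```

The second call shows a problem instance where the locally optimal choice does not lead to a globally optimal solution, which is exactly the failure mode discussed above.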




Applications of Greedy Algorithm


● It is used in finding shortest paths.
● It is used to find a minimum spanning tree, as in Prim's algorithm and Kruskal's algorithm.
● It is used in job sequencing with deadlines.
● It is also used to solve the fractional knapsack problem.
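For the fractional knapsack problem listed above, the greedy criterion is the value-to-weight ratio: take items in decreasing ratio order, and a fraction of the last item if needed. A minimal Python sketch; the item data is illustrative:

```python
def fractional_knapsack(items, capacity):
    # items: list of (value, weight); fractions of items are allowed.
    # Greedy criterion: highest value-to-weight ratio first.
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)   # whole item, or the fraction that fits
        total += value * take / weight
        capacity -= take
    return total

# Capacity 50, items given as (value, weight) pairs.
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```

Because items can be split, the greedy choice here is provably optimal; for the 0/1 version, where items are indivisible, the same strategy can produce suboptimal results.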
Pseudo code of Greedy Algorithm
Algorithm Greedy(a, n)
{
    solution := ∅;
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := Union(solution, x);
    }
    return solution;
}
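The same skeleton can be written as runnable Python, with the selection order and feasibility test passed in as parameters; the toy instance below (pick as many numbers as possible without their sum exceeding a limit) is illustrative:

```python
def greedy(candidates, feasible, select_key):
    # Generic greedy skeleton: consider candidates in greedy order
    # (select) and keep each one that leaves the solution feasible.
    solution = []
    for x in sorted(candidates, key=select_key):
        if feasible(solution, x):
            solution.append(x)          # union(solution, x)
    return solution

# Toy instance: choose as many numbers as possible with total sum <= 10.
# Greedy order: smallest first; feasibility: running sum stays in the limit.
picked = greedy([4, 7, 1, 3, 2],
                feasible=lambda sol, x: sum(sol) + x <= 10,
                select_key=lambda x: x)
print(picked)  # [1, 2, 3, 4]
```

`select` here is realized by sorting the candidates once up front, which is the common way the abstract `Select(a)` step is implemented in practice.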

Advantages of the Greedy Approach:


● The greedy approach is easy to implement.
● Greedy algorithms typically have lower time complexity than exhaustive alternatives.
● Greedy algorithms can be used for optimization, or for finding near-optimal solutions in the case of hard problems.
● Greedy algorithms can produce efficient solutions in many cases, especially when
the problem exhibits optimal substructure and the greedy choice property.
● Greedy algorithms are often faster than other optimization algorithms, such as
dynamic programming or branch and bound, because they require less computation
and memory.
● The greedy approach is often used as a heuristic or approximation algorithm when
an exact solution is not feasible or when finding an exact solution would be too
time-consuming.
● The greedy approach can be applied to a wide range of problems, including
problems in computer science, operations research, economics, and other fields.




● The greedy approach can be used to solve problems in real-time, such as scheduling
problems or resource allocation problems, because it does not require the solution
to be computed in advance.
● Greedy algorithms are often used as a first step in solving optimization problems,
because they provide a good starting point for more complex optimization
algorithms.
● Greedy algorithms can be used in conjunction with other optimization algorithms,
such as local search or simulated annealing, to improve the quality of the solution.
Disadvantages of the Greedy Approach:
● The local optimal solution may not always be globally optimal.
● Greedy algorithms do not always guarantee to find the optimal solution, and may
produce suboptimal solutions in some cases.
● The greedy approach relies heavily on the problem structure and the choice of
criteria used to make the local optimal choice. If the criteria are not chosen carefully,
the solution produced may be far from optimal.
● Greedy algorithms may require a lot of preprocessing to transform the problem into
a form that can be solved by the greedy approach.
● Greedy algorithms may not be applicable to problems where the optimal solution
depends on the order in which the inputs are processed.
● Greedy algorithms may not be suitable for problems where the optimal solution
depends on the size or composition of the input, such as the bin packing problem.
● Greedy algorithms may not be able to handle constraints on the solution space, such
as constraints on the total weight or capacity of the solution.
● Greedy algorithms may be sensitive to small changes in the input, which can result
in large changes in the output. This can make the algorithm unstable and
unpredictable in some cases.


Common questions


Greedy algorithms solve the 'job sequencing with a deadline' problem effectively by sorting jobs in decreasing order of profit and placing them into time slots as close to their deadlines as possible, thereby maximizing profit. This strategy requires checking the feasibility and potential profit of scheduling each job within its deadline constraints. Challenges arise when jobs have overlapping deadlines or when prioritizing based solely on profit does not consider the impact of unreserved slots, potentially leading to suboptimal profit maximization.
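That strategy can be sketched in Python as follows: sort jobs by decreasing profit, then place each job in the latest free unit-time slot at or before its deadline. The job data is illustrative:

```python
def job_sequencing(jobs):
    # jobs: list of (name, deadline, profit); each job takes one time slot.
    max_deadline = max(d for _, d, _ in jobs)
    slots = [None] * (max_deadline + 1)       # slots[t] = job run at time t
    for name, deadline, profit in sorted(jobs, key=lambda j: -j[2]):
        # Place the job as late as possible, leaving earlier slots free
        # for jobs with tighter deadlines.
        for t in range(deadline, 0, -1):
            if slots[t] is None:
                slots[t] = name
                break
    return [name for name in slots if name is not None]

jobs = [("a", 2, 100), ("b", 1, 19), ("c", 2, 27), ("d", 1, 25), ("e", 3, 15)]
print(job_sequencing(jobs))  # ['c', 'a', 'e']
```

Jobs "b" and "d" are dropped because every slot at or before their deadline is already taken by a more profitable job, which is exactly the feasibility check the paragraph above describes.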

The structure of a problem significantly impacts the success of greedy algorithms in finding optimal solutions. Problems with an underlying greedy choice property and optimal substructure, such as many network flow and spanning tree problems, enable greedy algorithms to efficiently find optimal solutions. However, if a problem lacks these properties or if the optimal solution depends on precise ordering or future decision impacts, the greedy approach may fail to find the optimal solution. Thus, understanding the underlying structure is vital to predicting the efficacy of a greedy approach for a given problem.

The greedy algorithm solves the 'fractional knapsack problem' efficiently by selecting items based on the highest value-to-weight ratio until the knapsack is full, allowing fractions of items to be chosen, which maximizes total value. This is optimal for the fractional version due to the problem's structure, directly benefiting from the greedy choice property. However, the approach is limited in its application, as it does not work for the '0/1 knapsack problem,' where items can't be divided. In such cases, the greedy choice may yield suboptimal solutions because it can't evaluate combinations of whole items.

The greedy choice property of a greedy algorithm allows it to solve optimization problems by making locally optimal choices with the hope that these choices will lead to a globally optimal solution. This property is instrumental as it simplifies the problem-solving process by focusing on immediate gains, reducing the need for extensive calculations associated with checking future consequences. However, this method has limitations, as it does not guarantee a globally optimal solution. For instance, if the problem lacks the optimal substructure or the correct criteria for making choices, the greedy approach might lead to suboptimal solutions. Additionally, the locally optimal choice made cannot be revoked, which might prevent reaching a globally optimal solution.

The 'optimal substructure' property is crucial in dynamic programming as it indicates that a problem can be broken down into smaller subproblems, which can be solved independently to construct the optimal solution. This property enables dynamic programming to ensure optimal solutions by solving and combining optimal solutions of subproblems. In contrast, the greedy approach focuses on making local optimal choices without necessarily solving all subproblems, which can sometimes lead to suboptimal global solutions. Thus, dynamic programming is more reliable for optimal solutions in structured problems, while greedy algorithms are faster but less comprehensive.

The 'feasibility check' is key in executing greedy algorithms because it ensures that each locally optimal choice conforms to problem constraints. It is crucial for maintaining the progression towards a feasible solution, which may or may not be optimal. The check validates whether the current selection, combined with previously chosen elements, meets all constraints specified by the problem before incorporation into the solution set. This validation ensures that the solution remains practical and viable at each step, facilitating the potential discovery of an optimal solution if conditions are favorable.

The concept of 'irrevocability' in greedy algorithms means that once a choice is made, it cannot be undone or revised. This affects decision-making as it binds subsequent decisions to previous ones, committing the algorithm to a particular path through the solution space. While this contributes to the algorithm's speed and simplicity, it can also lead to suboptimal solutions if early decisions are not optimal, as there is no backtracking mechanism to correct them. Thus, ensuring that each choice is optimal at the current stage is critical to avoiding potential pitfalls in solution quality.

Greedy algorithms are often used as a heuristic approach because they are easy to implement and provide a good starting point for more complex optimization methods. They are particularly useful in scenarios where exact solutions are infeasible due to time constraints or computational resources required. However, they might not provide satisfactory results in problems where the solution depends heavily on future decisions or when the problem structure does not exhibit the greedy choice property, leading to potentially suboptimal solutions. Additionally, if the criteria for making local choices are not carefully chosen, the results may deviate significantly from the optimum.

The greedy strategy is advantageous in solving the 'minimum spanning tree' problem because it efficiently constructs the tree by making the locally optimal choice of adding the cheapest edge that does not form a cycle. Algorithms like Prim's and Kruskal's leverage this strategy, demonstrating fast and effective performance. However, the strategy depends on examining all candidate edges and verifying the cycle condition correctly; skipping edges or mishandling the cycle check can yield cycles or disconnected components, and the check itself can become computationally expensive unless an efficient structure such as union-find is used.
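A compact sketch of Kruskal's algorithm in Python, using a union-find structure for the cycle check; the graph data is illustrative:

```python
def kruskal(n, edges):
    # edges: list of (weight, u, v) on vertices 0..n-1.
    parent = list(range(n))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):     # greedy: cheapest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                  # keep the edge only if it joins two trees
            parent[ru] = rv           # union the two components
            mst.append((w, u, v))
    return mst

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))  # [(1, 0, 1), (2, 1, 3), (3, 1, 2)]
```

The edges `(4, 0, 2)` and `(5, 2, 3)` are rejected by the cycle check because their endpoints already lie in the same component, which is the verification step the paragraph above warns about.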

Greedy algorithms are advantageous because they are typically easier to implement and have less time complexity compared to dynamic programming. They are efficient when the problem exhibits the greedy choice property, allowing them to be used in real-time applications such as scheduling or resource allocation. However, greedy algorithms may not always find the optimal solution, being sensitive to problem structure and criteria for local decisions. Conversely, dynamic programming is more robust in providing optimal solutions when the problem has an optimal substructure, but it requires more computational resources and memory, making it potentially time-consuming for real-time applications.
