Local Search Algorithms Overview

The document discusses local search algorithms, focusing on iterative improvement methods such as hill climbing and simulated annealing. It explains the mechanics of these algorithms, their advantages, and drawbacks, including the challenges posed by local maxima and plateaus. Additionally, it highlights the applications of these algorithms in various optimization problems.


Artificial Intelligence

Topic: Local Search Algorithms and Optimization Problems
CONTENT

 Introduction to local search algorithms and optimization problems
 Iterative improvement
 Hill Climbing
 Simulated Annealing
Local Search Algorithms/Iterative Improvement
 Local search – Keep track of a single current state
– Move only to neighbouring states
– Iteratively try to improve it
– Ignore paths
 Advantages: – Uses very little memory
– Can often find reasonable solutions in large or infinite (continuous)
state spaces, keeping a single "current" state or a small set of states
 For example,
TSP: every permutation of the set of cities is a configuration for the travelling
salesperson problem
• The goal is to find a "close to optimal" configuration satisfying the constraints
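The TSP formulation above can be sketched in a few lines: a configuration is a permutation of the cities, and the objective is the total length of the closed tour. The city names and coordinates below are made up for illustration; a real instance would supply its own.

```python
import math
import itertools

# Hypothetical city coordinates; any set of points works.
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 3), "D": (6, 1)}

def tour_length(tour):
    """Total Euclidean length of a closed tour visiting each city once."""
    total = 0.0
    for a, b in zip(tour, tour[1:] + tour[:1]):
        (x1, y1), (x2, y2) = cities[a], cities[b]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

# Every permutation of the cities is one configuration; for a tiny
# instance we can scan them all, which local search avoids doing.
best = min(itertools.permutations(cities), key=tour_length)
print(best, round(tour_length(best), 2))
```

For n cities there are n! permutations, which is exactly why local search works with one configuration at a time instead of enumerating them.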
Iterative Improvement

 The general idea is to start with a complete
configuration and to make modifications that
improve its quality.
 Iterative improvement algorithms try to find
peaks on a surface of states, where height is
defined by the evaluation function.
 The standard analogy: trying to find the top of
Mount Everest in a thick fog while suffering
from amnesia.
Local Search Algorithms/Iterative Improvement
• Hill-climbing
• Simulated annealing algorithms
Hill-climbing Search
[Figure: the current state moves uphill along the value surface toward the solution.]
• Iteratively moves in the direction of increasing value.
• Does not maintain a search tree.
• The data structure need only record the state and its evaluation.
• Greedy local search algorithm
• No backtracking.
Algorithm
function HILL-CLIMBING(problem) returns a state that is a local maximum
  inputs: problem, a problem
  local variables: current, a node
                   next, a node
  current ← MAKE-NODE(INITIAL-STATE[problem])
  loop do
    next ← a highest-valued successor of current
    if VALUE[next] ≤ VALUE[current] then return STATE[current]
    current ← next
  end
 When there is more than one best
successor to choose from, the algorithm
can select among them at random.
 It doesn't backtrack, since it doesn't
remember where it's been.
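The pseudocode above translates almost line for line into Python. This is a minimal sketch on a toy one-dimensional objective (a single peak at x = 3, an assumption for illustration); `neighbors` and `value` stand in for the problem's successor and evaluation functions.

```python
def hill_climbing(initial, neighbors, value):
    """Greedy local search: repeatedly move to the highest-valued
    successor; stop at a local maximum. No search tree, no backtracking."""
    current = initial
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):   # no uphill move left
            return current
        current = best

# Toy objective with a single peak at x = 3; neighbours are x-1 and x+1.
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climbing(0, neighbors, value))  # climbs 0 -> 1 -> 2 -> 3
```

Because the objective here has a single peak, the climb always reaches it; on a multi-peaked landscape the same loop stops at whichever local maximum is uphill from the start state.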
EXAMPLE on A* ALGORITHM (cont.)
EXAMPLE: Use the A* search algorithm to find the solution.
Initial state: S; Goal state: G1 or G2
[Figure: search graph rooted at S with successors A and B; A leads to C and D, B to E and F; E leads to H and G1, F to I and G2. Path costs give g(A)=4, g(B)=1, g(C)=5, g(D)=6, g(E)=3, g(F)=3, g(H)=7, g(G1)=6, g(I)=6, g(G2)=5.]

Heuristic values:
node  h(n)    node  h(n)    node  h(n)
A     11      D     8       H     7
B     5       E     4       I     3
C     9       F     2

Solution:
It.  Node expanded  Priority queue
0                   S=0
1    S              A=15, B=6
2    B              E=7, F=6, A=15
3    F              I=9, G2=5, E=7, A=15
4    G2             G2

Solution path: S, B, F, G2
Hill Climbing
[Figure: the search graph from the A* example, repeated for three successive steps as hill climbing moves from S to the best-valued successor at each level.]
Drawbacks
State Space Landscape
 Global maximum – a state that maximizes the
objective function over the entire landscape.
 Local maximum – a state that maximizes the
objective function in a small area around it.
 Plateau – a state such that the objective function is
constant in an area around it.
 Shoulder – a plateau that has an uphill edge.
 Flat – a plateau whose edges all go downhill.
 Ridge – a sequence of local maxima.

Hill climbing can get stuck at:
 Local maxima
 Plateaux
 Ridges
Random-restart Hill-climbing
• It conducts a series of hill-climbing searches from randomly generated
initial states
• It saves the best result found.
• It can use a fixed number of iterations, or can continue until the best
saved result has not been improved for a certain number of iterations.
• If there are only a few local maxima, this algorithm will find a good
solution very quickly.
• If the problem is NP-hard, it typically has an exponential number of
local maxima, and the algorithm may still get stuck.
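Random restarts can be sketched by wrapping the basic climb in a loop over start states and keeping the best result. The two-peak landscape below is an invented example: a local maximum at x = 2 (value 4) and the global maximum at x = 8 (value 9); restarts that land in the right basin find the global peak.

```python
import random

def hill_climb(x, value):
    """One greedy climb over integer states with neighbours x-1 and x+1."""
    while True:
        best = max((x - 1, x + 1), key=value)
        if value(best) <= value(x):
            return x            # local maximum reached
        x = best

def random_restart(value, starts):
    """Hill-climb from each start state and keep the best result found."""
    return max((hill_climb(s, value) for s in starts), key=value)

# Toy landscape: local maximum at x = 2 (value 4), global at x = 8 (value 9).
value = lambda x: max(4 - (x - 2) ** 2, 9 - (x - 8) ** 2)
starts = [random.randint(0, 10) for _ in range(10)]   # random initial states
print(random_restart(value, starts))
```

With enough restarts, at least one start state falls in the global maximum's basin of attraction, which is why a few local maxima are easy and exponentially many are not.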
Simulated annealing
• IDEA: Escape local maxima by allowing some "bad"
moves, but gradually decrease their probability.
• The probability is controlled by a parameter called
temperature.
• Higher temperatures allow more bad moves than lower
temperatures.
• Annealing: lowering the temperature gradually.
Quenching: lowering the temperature rapidly.
Basic ideas:
• Like hill climbing, evaluate the quality of the local
moves.
• Instead of picking the best move, pick one randomly.
• Say the change in the objective function is d:
• if d is positive, move to that state;
• otherwise, move to that state with probability e^(d/T),
which shrinks as d becomes more negative;
• thus, worse moves (very large negative d) are
executed less often;
• however, there is always a chance of escaping from a local
maximum.
• Over time, make it less likely to accept locally bad moves.
Algorithm
function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to "temperature"
  static: current, a node
          next, a node
          T, a "temperature" controlling the probability of downward steps
  current ← MAKE-NODE(INITIAL-STATE[problem])
  for t ← 1 to ∞ do
    T ← schedule[t]
    if T = 0 then return current
    next ← a randomly selected successor of current
    ΔE ← VALUE[next] − VALUE[current]
    if ΔE > 0 then current ← next
    else current ← next only with probability e^(ΔE/T)
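A minimal Python sketch of the pseudocode, using the same two-peak toy landscape as before (local maximum at x = 2, global at x = 8) and a finite geometric cooling schedule in place of the infinite loop; the schedule and seed are illustrative choices, not prescribed by the algorithm.

```python
import math
import random

def simulated_annealing(initial, neighbors, value, schedule, seed=0):
    """Accept uphill moves always; accept a downhill move of size dE
    with probability e^(dE/T), where T decays along the schedule."""
    rng = random.Random(seed)
    current = initial
    for T in schedule:
        if T <= 0:
            return current
        nxt = rng.choice(neighbors(current))
        dE = value(nxt) - value(current)
        if dE > 0 or rng.random() < math.exp(dE / T):
            current = nxt
    return current

# Toy landscape: local maximum at x = 2 (value 4), global at x = 8 (value 9).
value = lambda x: max(4 - (x - 2) ** 2, 9 - (x - 8) ** 2)
neighbors = lambda x: [max(x - 1, 0), min(x + 1, 10)]
schedule = [10 * 0.95 ** t for t in range(500)]  # geometric cooling
print(simulated_annealing(0, neighbors, value, schedule))
```

Early on, the high temperature lets the search wander out of the local maximum's basin; as T shrinks, the acceptance probability e^(ΔE/T) for downhill moves vanishes and the search freezes onto a peak.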
Applications

• Travelling salesman problem
• Graph partitioning
• Graph colouring
etc.
Questions…..?

• Explain the hill-climbing algorithm, its drawbacks, and
how they can be overcome.
• Explain the simulated annealing algorithm.
Thank You
