Artificial Intelligence
Informed Searches
Dr. Muhammad Awais
Outline
• Greedy best first search
• A* Search
• Hill Climbing Search
• Simulated annealing
• Local beam search
• Genetic algorithm (GA)
• Minmax algorithm (adversarial search)
• Alpha-beta pruning
Greedy best first search
• Expand the node that is closest to the goal,
  i.e. the state that is most likely to lead to the goal
• Node evaluation by a heuristic function
• f(n) = h(n)
• h(n) = estimated cost of the cheapest path from node n to the goal node
  o Straight-line distance heuristic (shortest path heuristic)
  o Good degree (job) heuristic
  o Paddle press speed heuristic
Greedy best first search
• h(n) takes a node as input
• h(n) considers only the state of the currently considered node
• h(n) = straight-line distance = hSLD
• hSLD cannot be computed from the problem description alone; it requires
  extra knowledge (hence an informed search)
• Greedy best first search resembles DFS in complexity, i.e. O(b^m)
• Disadvantages of greedy best first search (as with DFS):
  • Not optimal
  • Incomplete
Arad to Bucharest (Greedy best first search)
[Figure: greedy best-first search trace from Arad to Bucharest. The route
found via Sibiu and Fagaras costs 450, while the optimal route via Rimnicu
Vilcea and Pitesti costs 418.]
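The strategy above can be sketched in Python. This is an illustrative
implementation (the function name and the graph fragment are ours, not from
the slides), run on a small piece of the Romania map with the standard hSLD
values; nodes are expanded by h(n) alone.

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Expand the frontier node with the smallest heuristic value h(n)."""
    frontier = [(h[start], start, [start])]   # priority queue ordered by h(n)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in neighbors.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
    return None

# Toy fragment of the Romania map with straight-line-distance heuristic hSLD.
graph = {"Arad": ["Sibiu", "Timisoara"],
         "Sibiu": ["Fagaras", "Rimnicu"],
         "Fagaras": ["Bucharest"],
         "Rimnicu": ["Pitesti"],
         "Pitesti": ["Bucharest"]}
hsld = {"Arad": 366, "Sibiu": 253, "Timisoara": 329,
        "Fagaras": 176, "Rimnicu": 193, "Pitesti": 100, "Bucharest": 0}
print(greedy_best_first("Arad", "Bucharest", graph, hsld))
```

On this fragment the greedy search reaches Bucharest via Sibiu and Fagaras,
the 450-cost route, even though a cheaper route exists: it never looks at
path cost, only at h(n).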
A* Search
• Node evaluation by the combination of two cost functions, i.e.
• f(n) = g(n) + h(n)
• g(n): cost to reach node n from the start node
• h(n): estimated cost to reach the goal from node n
• A* is optimal given h(n) is an admissible heuristic (AH)
• An AH never overestimates the cost to reach the goal
• Heuristic h(n) is consistent
• if h(n) ≤ c(n, a, n′) + h(n′)
• c(n, a, n′): step cost from n to n′ by action a
• h(n′): estimated cost from n′ to the goal node
A* Search
[Figure: A* search trace, expanding nodes in order of f(n) = g(n) + h(n).]
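A* can be sketched the same way. Under the same illustrative Romania fragment
(step costs and hSLD values from the standard map; names are ours), ordering
the frontier by f(n) = g(n) + h(n) finds the cost-418 route, where greedy
best-first settled for the cost-450 one.

```python
import heapq

def a_star(start, goal, edges, h):
    """edges: dict node -> list of (neighbor, step_cost); h: admissible heuristic."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                          # cheapest g(n) found so far
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in edges.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h[nxt], g2, nxt, path + [nxt]))
    return None, float("inf")

edges = {"Arad": [("Sibiu", 140), ("Timisoara", 118)],
         "Sibiu": [("Fagaras", 99), ("Rimnicu", 80)],
         "Fagaras": [("Bucharest", 211)],
         "Rimnicu": [("Pitesti", 97)],
         "Pitesti": [("Bucharest", 101)]}
hsld = {"Arad": 366, "Sibiu": 253, "Timisoara": 329,
        "Fagaras": 176, "Rimnicu": 193, "Pitesti": 100, "Bucharest": 0}
path, cost = a_star("Arad", "Bucharest", edges, hsld)
print(path, cost)   # route via Rimnicu and Pitesti, cost 418
```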
Hill Climbing Search
• The search loop continually moves in the direction of increasing value
• Terminates when it reaches a peak
  • No neighbor has a higher value
• No search tree is maintained (used)
• Stochastic hill climbing
  • Chooses randomly among the uphill moves (neighbors)
  • Selection probability can vary w.r.t. the steepness of the uphill move
• First-choice hill climbing
  • Generates successor states at random
  • Keeps generating until a successor better than the current state is found
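The basic (steepest-ascent) loop can be sketched as follows; the toy
objective, the integer neighborhood, and all names here are our own
illustrative choices, not from the slides.

```python
def hill_climb(state, neighbors, value):
    """Move to the best neighbor until no neighbor improves on the
    current state, i.e. until a peak (possibly only local) is reached."""
    while True:
        best = max(neighbors(state), key=value, default=state)
        if value(best) <= value(state):
            return state          # no uphill move left: stop at the peak
        state = best

# Toy objective: maximize f(x) = -(x - 7)^2 over the integers,
# with neighbors x - 1 and x + 1.
f = lambda x: -(x - 7) ** 2
nbrs = lambda x: [x - 1, x + 1]
print(hill_climb(0, nbrs, f))   # climbs to the peak at x = 7
```

On this single-peak objective the climb always succeeds; on a landscape with
several local maxima the same loop can get stuck, which is what motivates the
stochastic variants above and simulated annealing below.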
Simulated annealing
• Annealing: a heat treatment that alters a material to increase its
  ductility and make it more workable
• Maximization of an objective function
• A move is always accepted if it improves the situation
• Otherwise the acceptance probability is less than 1
• In the otherwise case, the acceptance probability decreases w.r.t. the
  badness of the move
  • ∆E = Value[next] − Value[current]
  • ∆E corresponds to the amount of badness of a move
• The acceptance probability also decreases as T goes down
  • A bad move may be allowed initially (high T)
  • Permitting a bad move is unlikely at the end (low T)
• At high temperatures, explore the parameter space
• At lower temperatures, restrict exploration
• A slow change (schedule) in T helps find the global optimum with
  probability approaching 1
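The acceptance rule above can be sketched as follows. The cooling rate, step
count, and toy objective are our own illustrative choices; the core is the
slides' rule: always accept an improving move, otherwise accept with
probability exp(∆E / T), which shrinks both with the badness of the move and
as T cools.

```python
import math, random

def simulated_annealing(state, neighbor, value, t0=10.0, cooling=0.995, steps=5000):
    """Accept improving moves always; accept worsening moves with
    probability exp(dE / T), under a slow geometric cooling schedule."""
    t = t0
    for _ in range(steps):
        nxt = neighbor(state)
        dE = value(nxt) - value(state)          # > 0 means an improving move
        if dE > 0 or random.random() < math.exp(dE / t):
            state = nxt
        t *= cooling                            # slow cooling schedule
    return state

random.seed(0)
f = lambda x: -(x - 7) ** 2                     # toy objective, maximum at 7
step = lambda x: x + random.choice([-1, 1])     # random ±1 move
print(simulated_annealing(0, step, f))
```

Early on (high T) the state wanders freely; once T is small the loop behaves
like hill climbing and settles on the maximum at x = 7.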
Local beam Search
1. Begin with K randomly generated states
2. At each step, all successors of the K states are generated
3. If a goal is found, the algorithm halts
4. Otherwise the K best states are selected from the successors
5. The algorithm is repeated from step 2
Local beam Search (K=2)
[Figure: example search tree. Start → A; A's children B and C (values 20,
18); next level D, E, F, G (16, 15, 17, 13); then H, I, J, K (9, 12, 10,
11); leaves L (8), M (2), and Goal.]
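The five steps above can be sketched directly; the tree, values, and names
below are an illustrative example of our own, loosely modeled on the figure.

```python
def local_beam_search(start_states, successors, value, is_goal, max_iters=100):
    """K-beam loop: pool all successors of the current K states, halt if a
    goal appears, otherwise keep only the K best successors."""
    k = len(start_states)
    states = list(start_states)
    for _ in range(max_iters):
        pool = [s for st in states for s in successors(st)]
        if not pool:
            return None                   # every beam state is a dead end
        for s in pool:
            if is_goal(s):
                return s                  # step 3: goal found, halt
        states = sorted(pool, key=value, reverse=True)[:k]   # step 4
    return None

# Small hypothetical tree: only the branch through F leads to the goal.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"], "F": ["Goal"]}
vals = {"B": 20, "C": 18, "D": 16, "E": 15, "F": 17, "G": 13}
succ = lambda s: tree.get(s, [])
print(local_beam_search(["B", "C"], succ, vals.get, lambda s: s == "Goal"))
```

With K = 2 the beam keeps {F, D} after the first step, and F's successor is
the goal on the next step. Note that the K states share information through
the common pool, unlike K independent hill climbs.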
Genetic algorithm (GA)
• Successor states are generated by combining two parent states
• GA begins with a set of k randomly generated states (the population)
• A state is represented by a string over a finite alphabet
• Each state is rated by an evaluation function (the fitness function)
• States are selected at random for reproduction, with randomness in
  accordance with the fitness function
• For mating between two states, a crossover point is randomly chosen
• Each location is subject to random mutation with an independent
  probability
Genetic algorithm (GA)
• 24, 23, 20 and 11 are the fitness values calculated by a fitness function
  for the respective population candidates
• Similarly, 31%, 29%, 26% and 14% are the selection probabilities for the
  respective population candidates, i.e.,
• f(x1,2,3,4) = 24, 23, 20, 11
• Σ f(x1,2,3,4) = 24 + 23 + 20 + 11 = 78
• f(xi) / Σ f(x1,2,3,4) =
• 24/78 = 0.31, 23/78 = 0.29, 20/78 = 0.26, 11/78 = 0.14
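The GA loop described above can be sketched as follows. The target string,
mutation rate, population, and all names are illustrative choices of our own;
the structure follows the slides: fitness-proportionate selection, a single
random crossover point, and independent per-position mutation.

```python
import random

def genetic_algorithm(population, fitness, mutate, generations=100):
    """Evolve a population of equal-length strings over a finite alphabet."""
    for _ in range(generations):
        weights = [fitness(s) for s in population]   # fitness-proportionate
        new_pop = []
        for _ in range(len(population)):
            p1, p2 = random.choices(population, weights=weights, k=2)
            cut = random.randrange(1, len(p1))       # random crossover point
            child = p1[:cut] + p2[cut:]
            child = "".join(mutate(c) for c in child)  # independent mutation
            new_pop.append(child)
        population = new_pop
    return max(population, key=fitness)

random.seed(1)
target = "10110"
fit = lambda s: 1 + sum(a == b for a, b in zip(s, target))  # +1 keeps weights positive
mut = lambda c: c if random.random() > 0.05 else random.choice("01")
pop = ["00000", "11111", "01010", "10101"]
print(genetic_algorithm(pop, fit, mut))
```

Selection probability here is exactly f(xi) / Σ f(x) as in the worked numbers
above; `random.choices` normalizes the weights internally.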
Minmax Algorithm
• Adversarial search algorithm
• The minmax algorithm operates in a DFS fashion
• The minmax value of a node is the utility of being in the corresponding
  state
• Assumption: both players play optimally
• Max will prefer to move to a maximum-value state
• Min will prefer to move to a minimum-value state
Minmax Algorithm
[Figure: example game tree. Max node S0 at the root; Min nodes S01, S02,
S03; leaves S011–S013, S021–S023, S031–S033 with utilities 3, 12, 8; 2, 4,
6; 14, 5, 2.]
Minmax Algorithm
[Figure: the same tree with minmax values filled in step by step: first
S01 = min(3, 12, 8) = 3, then S02 = min(2, 4, 6) = 2, then
S03 = min(14, 5, 2) = 2, and finally S0 = max(3, 2, 2) = 3.]
Minmax Algorithm
Trace for the subtree under S01 (leaves 3, 12, 8):
1- V ← Max-value(S0)
2- V = -∞
3- V ← Max(V = -∞, Min-value(S01))
4- V = ∞
5- V ← Min(V = ∞, Max-value(S011))
7- V ← Min(V = ∞, 3)
8- V ← Min(V = 3, 12)
9- V ← Min(V = 3, 8)
10- return V = 3 after the 3rd call of Max-value inside the Min function,
so S01 = 3
Minmax Algorithm
Trace for the subtree under S02 (leaves 2, 4, 6), with S01 = 3 known:
11- V ← Max(V = 3, Min-value(S02))
Minmax Algorithm
12- V = ∞
13- V ← Min(V = ∞, Max-value(S021))
14- V ← Min(V = ∞, 2)
15- V ← Min(V = 2, 4)
16- V ← Min(V = 2, 6)
17- return V = 2 to step 11, so S02 = 2
Minmax Algorithm
Trace for the subtree under S03 (leaves 14, 5, 2):
18- V ← Max(V = 3, Min-value(S03))
19- V = ∞
20- V ← Min(V = ∞, Max-value(S031))
21- V ← Min(V = ∞, 14)
22- V ← Min(V = 14, 5)
23- V ← Min(V = 5, 2)
24- return V = 2 to step 18, so S03 = 2
Minmax Algorithm
[Figure: the completed tree. S01 = 3, S02 = 2, S03 = 2, so
S0 = max(3, 2, 2) = 3 and Max moves to S01.]
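The recursion traced above can be sketched compactly; the tree and leaf
utilities below are exactly the slides' example, while the function and
variable names are ours.

```python
def minmax(node, tree, utility, is_max):
    """Depth-first minmax: Max levels take the max of child values,
    Min levels take the min; leaves return their utility."""
    children = tree.get(node)
    if not children:                      # leaf: return its utility
        return utility[node]
    values = [minmax(c, tree, utility, not is_max) for c in children]
    return max(values) if is_max else min(values)

tree = {"S0": ["S01", "S02", "S03"],
        "S01": ["S011", "S012", "S013"],
        "S02": ["S021", "S022", "S023"],
        "S03": ["S031", "S032", "S033"]}
utility = {"S011": 3, "S012": 12, "S013": 8,
           "S021": 2, "S022": 4, "S023": 6,
           "S031": 14, "S032": 5, "S033": 2}
print(minmax("S0", tree, utility, True))   # → 3
```

The call computes S01 = min(3, 12, 8) = 3, S02 = 2, S03 = 2, and returns
max(3, 2, 2) = 3, matching the trace.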
AlphaBeta pruning Algorithm
MinMax problem:
1. Has to visit the state space exhaustively
2. The number of states to examine grows exponentially with the number of
   moves
3. This exponential growth cannot be removed entirely
Alpha Beta pruning:
1. Cuts the exponential factor to the smallest possible value
2. αβ applied to minmax gives a possibly more efficient result w.r.t.
   minmax
3. αβ prunes the minmax tree branches that cannot possibly influence the
   final decision
AlphaBeta pruning Algorithm
At a Max node:
  α = largest child utility found so far
  β = β of the parent
At a Min node:
  α = α of the parent
  β = smallest child utility found so far
AlphaBeta pruning Algorithm
[Figure: S0 starts with α = -∞, β = +∞ and passes these bounds down to
S01.]
At S01 (Min), the first leaf S011 = 3 gives v ← Min(+∞, 3) = 3 and
β = Min(+∞, 3) = 3.
AlphaBeta pruning Algorithm
At S01, the remaining leaves do not change the bound:
• S012 = 12: β = Min(3, 12) = 3
• S013 = 8: β = Min(3, 8) = 3
S01 returns 3.
AlphaBeta pruning Algorithm
Back at S0 (Max): v ← Max(-∞, 3) = 3 and α = Max(-∞, 3) = 3. The updated
bounds α = 3, β = +∞ are passed down to S02.
AlphaBeta pruning Algorithm
At S02 (Min) with α = 3, β = +∞: the first leaf S021 = 2 gives
v ← Min(+∞, 2) = 2. Since v = 2 ≤ α = 3, the cut-off condition is TRUE and
the loop breaks: S022 and S023 are pruned.
AlphaBeta pruning Algorithm Example
[Figure: the full minmax example tree. Max root S0; Min nodes S01, S02,
S03; leaves S011–S033 with utilities 3, 12, 8; 2, 4, 6; 14, 5, 2.]
• S0 starts with α = -∞, β = +∞ and passes these bounds to S01
• At S01: leaf 3 gives β = Min(+∞, 3) = 3; leaves 12 and 8 leave it
  unchanged, β = Min(3, 12) = 3 and β = Min(3, 8) = 3; S01 returns 3
• At S0: α = Max(-∞, 3) = 3
• At S02 with α = 3, β = +∞: the first leaf 2 gives β = 2 ≤ α, so S022 and
  S023 are pruned; S02 returns 2
• At S03 with α = 3, β = +∞: leaf 14 gives β = 14, leaf 5 gives β = 5,
  leaf 2 gives β = 2; S03 returns 2
• Final result: S0 = Max(3, 2, 2) = 3, the same as plain minmax, but with
  two leaves never examined
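Adding the α and β bounds to the minmax recursion reproduces this example.
The tree and utilities are the slides' example; the function names and the
`visited` list (used only to make the pruning visible) are ours.

```python
visited = []   # leaves actually evaluated, to make the pruning visible

def alphabeta(node, tree, utility, is_max,
              alpha=float("-inf"), beta=float("inf")):
    """Minmax with alpha-beta bounds: stop expanding a node's children as
    soon as beta <= alpha, since the result cannot affect the decision."""
    children = tree.get(node)
    if not children:
        visited.append(node)
        return utility[node]
    if is_max:
        v = float("-inf")
        for c in children:
            v = max(v, alphabeta(c, tree, utility, False, alpha, beta))
            alpha = max(alpha, v)
            if beta <= alpha:
                break              # beta cut-off
        return v
    v = float("inf")
    for c in children:
        v = min(v, alphabeta(c, tree, utility, True, alpha, beta))
        beta = min(beta, v)
        if beta <= alpha:
            break                  # alpha cut-off: remaining siblings pruned
    return v

tree = {"S0": ["S01", "S02", "S03"],
        "S01": ["S011", "S012", "S013"],
        "S02": ["S021", "S022", "S023"],
        "S03": ["S031", "S032", "S033"]}
utility = {"S011": 3, "S012": 12, "S013": 8,
           "S021": 2, "S022": 4, "S023": 6,
           "S031": 14, "S032": 5, "S033": 2}
print(alphabeta("S0", tree, utility, True))   # → 3, same as plain minmax
print(visited)                                # S022 and S023 never appear
```

After S01 returns 3, α = 3 reaches S02, whose first leaf drives β to 2 ≤ α,
so S022 and S023 are cut off exactly as in the slides.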