Informed Search
Introduction to Artificial Intelligence
© G. Lakemeyer
G. Lakemeyer
Winter Term 2024/25
Best-First Search
Search methods differ in the strategy by which they choose the next node to expand.

Uninformed search: fixed strategies, without information about the cost from a given node to a goal.

Informed search: uses information about the cost from a given node to a goal, in the form of an evaluation function f assigning each node a real number.

Best-First Search: always expand the node with the “best” f-value.

Greedy Search: let h(n) = estimated cost from the state at node n to a goal state. Expand the node n for which h(n) is minimal, i.e. use f = h.
AI/WS-2024/25 2 / 20
Greedy Search Example
[Figure: road map of Romania, with straight-line distances to Bucharest: Arad 366, Bucharest 0, Craiova 160, Dobreta 242, Eforie 161, Fagaras 178, Giurgiu 77, Hirsova 151, Iasi 226, Lugoj 244, Mehadia 241, Neamt 234, Oradea 380, Pitesti 98, Rimnicu Vilcea 193, Sibiu 253, Timisoara 329, Urziceni 80, Vaslui 199, Zerind 374.]

Greedy search from Arad (always expand the node with minimal h):

Expand Arad (h=366): successors Sibiu (h=253), Timisoara (h=329), Zerind (h=374).
Expand Sibiu: successors Arad (h=366), Fagaras (h=178), Oradea (h=380), Rimnicu Vilcea (h=193).
Expand Fagaras: successors Sibiu (h=253), Bucharest (h=0).
Goal reached: Arad → Sibiu → Fagaras → Bucharest (not the cheapest route to Bucharest).
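The greedy expansion above can be sketched in a few lines of Python (a minimal sketch; the graph fragment and h-values are taken from the map figure, and the function name is illustrative):

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the frontier node with
    the smallest heuristic value h(n); the path cost g is ignored."""
    frontier = [(h[start], start, [start])]
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for succ in graph[node]:
            if succ not in explored:
                heapq.heappush(frontier, (h[succ], succ, path + [succ]))
    return None

# Fragment of the Romania map (greedy never looks at edge costs)
graph = {
    "Arad": ["Sibiu", "Timisoara", "Zerind"],
    "Sibiu": ["Arad", "Fagaras", "Oradea", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Timisoara": ["Arad"], "Zerind": ["Arad"], "Oradea": ["Sibiu"],
    "Rimnicu Vilcea": ["Sibiu"], "Bucharest": [],
}
# Straight-line distances to Bucharest
h = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
     "Fagaras": 178, "Oradea": 380, "Rimnicu Vilcea": 193, "Bucharest": 0}

print(greedy_best_first(graph, h, "Arad", "Bucharest"))
# → ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

The run reproduces the expansion sequence of the slide: Arad, Sibiu, Fagaras, then the goal.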
A∗
A∗ combines uniform cost search with greedy search.
g(n) = actual cost from the initial state to n.
h(n) = estimated cost from n to the nearest goal.
f(n) := g(n) + h(n).
f(n) is the estimated cost of the cheapest solution path passing through n.

Let h*(n) be the actual cost of the optimal path from n to the nearest goal.

Admissible Heuristic
h is called admissible if for all n:
h(n) ≤ h*(n).

We require for A∗ that h is admissible.
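A minimal sketch of A∗ on the Romania map (edge costs and straight-line distances as on the slides; the graph dictionary below contains only the portion needed for the example, and the function name is illustrative):

```python
import heapq

def astar(graph, h, start, goal):
    """A* search: expand the node with minimal f(n) = g(n) + h(n).
    With an admissible h, the first goal taken off the frontier is optimal."""
    frontier = [(h[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue  # already reached this node more cheaply
        best_g[node] = g
        for succ, cost in graph[node]:
            heapq.heappush(frontier,
                           (g + cost + h[succ], g + cost, succ, path + [succ]))
    return None, float("inf")

# Fragment of the Romania map with edge costs
graph = {
    "Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
    "Sibiu": [("Arad", 140), ("Fagaras", 99), ("Oradea", 151),
              ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97), ("Craiova", 146)],
    "Pitesti": [("Rimnicu Vilcea", 97), ("Bucharest", 101), ("Craiova", 138)],
    "Craiova": [("Rimnicu Vilcea", 146), ("Pitesti", 138)],
    "Timisoara": [("Arad", 118)], "Zerind": [("Arad", 75)],
    "Oradea": [("Sibiu", 151)], "Bucharest": [],
}
# Straight-line distances to Bucharest
h = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
     "Fagaras": 178, "Oradea": 380, "Rimnicu Vilcea": 193,
     "Pitesti": 98, "Craiova": 160, "Bucharest": 0}

path, cost = astar(graph, h, "Arad", "Bucharest")
print(path, cost)
# → ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'] 418
```

Unlike greedy search, A∗ finds the optimal route via Rimnicu Vilcea and Pitesti (cost 418) rather than the shorter-looking Fagaras route (cost 450).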
Example A∗
Expand Arad (f=0+366=366): successors Sibiu (f=140+253=393), Timisoara (f=118+329=447), Zerind (f=75+374=449).
Expand Sibiu (f=393): successors Arad (f=280+366=646), Fagaras (f=239+178=417), Oradea (f=146+380=526), Rimnicu Vilcea (f=220+193=413).
Expand Rimnicu Vilcea (f=413): successors Craiova (f=366+160=526), Pitesti (f=317+98=415), Sibiu (f=300+253=553).
Note: in the example f is monotone nondecreasing.
The following can always be guaranteed:
Path-Max Equation
Let n, n′ be nodes, where n is the parent of n′. Then set
f(n′) = max(f(n), g(n′) + h(n′)).
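The effect of the equation can be sketched with a tiny helper (hypothetical numbers: with an admissible but non-monotone h, the child's raw f would drop below the parent's f; path-max propagates the parent's value instead):

```python
def pathmax(f_parent, g_child, h_child):
    """Path-max equation: f(n') = max(f(n), g(n') + h(n'))."""
    return max(f_parent, g_child + h_child)

# The parent has f = 5; the child's own estimate g + h = 4 + 0
# would let f decrease, so path-max keeps the parent's value 5.
print(pathmax(5, 4, 0))  # → 5
```

Since f(parent) already lower-bounds the cost of every solution through the child, raising the child's f to it loses no information and makes f monotone nondecreasing along every path.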
Example Why Path-Max Eq. Makes Sense
Example Why A∗ is Optimal
Contour Lines in A∗
[Figure: map of Romania with A∗ contour lines around Arad; all nodes inside the contour labeled 380 have f ≤ 380, and similarly for the contours labeled 400 and 420.]
Another Example Illustrating Contour Lines
Theorem: A∗ is Optimal for Admissible h
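A standard sketch of the argument (not the slide's original figure):

```latex
% Suppose A* selects a suboptimal goal G_2 while some node n on an
% optimal path is still on the frontier. Let C^* be the optimal cost.
\begin{align*}
  f(G_2) &= g(G_2) + h(G_2) = g(G_2) > C^*
    && \text{($h(G_2)=0$, $G_2$ suboptimal)}\\
  f(n)   &= g(n) + h(n) \le g(n) + h^*(n) = C^*
    && \text{($h$ admissible, $n$ on an optimal path)}
\end{align*}
% Hence f(n) <= C^* < f(G_2), so A* expands n before selecting G_2;
% repeating the argument, the optimal path is completed first. Contradiction.
```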
Heuristic Function 1
[Figure: an 8-puzzle instance, showing a start state and the goal state.]
h1 = number of tiles in the wrong position.
h2 = sum of the distances to the goal location for all tiles
(Manhattan Distance)
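Both heuristics are a few lines of Python (a minimal sketch; states are length-9 tuples read row by row with 0 for the blank, and the start/goal instance below is an assumed standard example, not necessarily the one in the figure):

```python
def h1(state, goal):
    """Number of tiles (ignoring the blank, 0) not on their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Sum of Manhattan distances of all tiles to their goal positions."""
    dist = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        # row = index // 3, column = index % 3 on the 3x3 board
        dist += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return dist

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
goal  = (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(h1(start, goal), h2(start, goal))  # → 8 18
```

Both are admissible: a misplaced tile needs at least one move (h1), and at least its Manhattan distance in moves (h2); h2 dominates h1, since each misplaced tile contributes at least 1 to the Manhattan sum.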
Effective branching factor b∗ : If A∗ generates N nodes with solution depth d,
then b∗ is the branching factor of a uniform tree of depth d with N + 1 nodes,
i.e.
N + 1 = 1 + b∗ + (b∗)^2 + . . . + (b∗)^d
b∗ is a measure of the quality of h: the closer b∗ is to 1, the better.
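The defining equation has no closed form for b∗, but it can be solved numerically; a bisection sketch (function name assumed):

```python
def effective_branching_factor(N, d, tol=1e-9):
    """Solve N + 1 = 1 + b + b^2 + ... + b^d for b by bisection."""
    def total(b):
        return sum(b ** i for i in range(d + 1))
    lo, hi = 1.0, float(N + 1)   # total is increasing in b on this interval
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < N + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A uniform binary tree of depth 3 has 1 + 2 + 4 + 8 = 15 nodes,
# so N = 14 generated nodes at depth d = 3 should give b* = 2.
print(round(effective_branching_factor(14, 3), 2))  # → 2.0
```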
Heuristic Function 2
Search Cost Effective Branching Factor
d IDS A*(h1 ) A*(h2 ) IDS A*(h1 ) A*(h2 )
2 10 6 6 2.45 1.79 1.79
4 112 13 12 2.87 1.48 1.45
6 680 20 18 2.73 1.34 1.30
8 6384 39 25 2.80 1.33 1.24
10 47127 93 39 2.79 1.38 1.22
12 364404 227 73 2.78 1.42 1.24
14 3473941 539 113 2.83 1.44 1.23
16 – 1301 211 – 1.45 1.25
18 – 3056 363 – 1.46 1.26
20 – 7276 676 – 1.47 1.27
22 – 18094 1219 – 1.48 1.28
24 – 39135 1641 – 1.48 1.26
How to Find a Heuristic
General Strategy:
Simplify the problem
Compute the exact solution for the simplified problem
Use the solution cost as heuristic
For example:
h1 is the solution cost for the simplified 8-puzzle where tiles can be
placed at an arbitrary position with a single action.
h2 corresponds to the exact solution cost of the simplified puzzle in which a
tile can be moved to a neighboring position in a single action, even if that
position is occupied.
Pattern Databases
[Figure: top, an 8-puzzle start state (7 2 4 / 5 _ 6 / 8 3 1) and goal state (_ 1 2 / 3 4 5 / 6 7 8); bottom, the same start and goal abstracted to a pattern that keeps only a subset of the tiles.]
Idea: Compute the exact solution for each pattern with four numbers and use
that value as heuristic. When more than one pattern applies, use the
maximum value.
Better than Manhattan!
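A sketch of how such a database could be built: a breadth-first search backwards from the abstracted goal, in which only the pattern tiles and the blank are distinguishable (all other tiles become an anonymous "*"). Every move is counted here, which keeps the heuristic admissible; names and the 4-tile pattern are illustrative:

```python
from collections import deque

def abstract(state, pattern):
    """Keep the pattern tiles and the blank (0); other tiles become '*'."""
    return tuple(t if t == 0 or t in pattern else "*" for t in state)

# Neighbors of each cell index on the 3x3 board
NEIGHBORS = {0: (1, 3), 1: (0, 2, 4), 2: (1, 5), 3: (0, 4, 6),
             4: (1, 3, 5, 7), 5: (2, 4, 8), 6: (3, 7), 7: (4, 6, 8),
             8: (5, 7)}

def build_pattern_db(goal, pattern):
    """Retrograde BFS from the abstracted goal; db[s] = exact number of
    moves needed to solve the abstract puzzle from abstract state s."""
    start = abstract(goal, pattern)
    db = {start: 0}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        blank = s.index(0)
        for nb in NEIGHBORS[blank]:     # slide the blank in each direction
            t = list(s)
            t[blank], t[nb] = t[nb], t[blank]
            t = tuple(t)
            if t not in db:
                db[t] = db[s] + 1
                queue.append(t)
    return db

# Heuristic lookup: abstract the current state, read off the stored cost
db = build_pattern_db(goal=(0, 1, 2, 3, 4, 5, 6, 7, 8), pattern={1, 2, 3, 4})
print(db[abstract((7, 2, 4, 5, 0, 6, 8, 3, 1), {1, 2, 3, 4})])
```

Taking the maximum over several such databases stays admissible; simply adding their values generally does not, unless each database counts only moves of its own pattern tiles (disjoint pattern databases).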
Iterative Deepening A∗
Combination of A∗ and iterative deepening search.
Instead of a fixed depth increase per iteration, use increasing f -cost limits.
IDA∗ Algorithm
function IDA*(problem) returns a solution sequence
  inputs: problem, a problem
  static: f-limit, the current f - COST limit
          root, a node

  root ← MAKE-NODE(INITIAL-STATE[problem])
  f-limit ← f - COST(root)
  loop do
    solution, f-limit ← DFS-CONTOUR(root, f-limit)
    if solution is non-null then return solution
    if f-limit = ∞ then return failure
  end

function DFS-CONTOUR(node, f-limit) returns a solution sequence and a new f - COST limit
  inputs: node, a node
          f-limit, the current f - COST limit
  static: next-f, the f - COST limit for the next contour, initially ∞

  if f - COST[node] > f-limit then return null, f - COST[node]
  if GOAL-TEST[problem](STATE[node]) then return node, f-limit
  for each node s in SUCCESSORS(node) do
    solution, new-f ← DFS-CONTOUR(s, f-limit)
    if solution is non-null then return solution, f-limit
    next-f ← MIN(next-f, new-f)
  end
  return null, next-f
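A runnable sketch of the same algorithm, generic over a successor function (tested here on the Romania fragment with the slides' straight-line distances; function names are illustrative):

```python
import math

def ida_star(start, successors, h, goal_test):
    """Iterative deepening A*: depth-first search within an f-limit;
    the next limit is the smallest f-value that exceeded the current one."""
    def dfs_contour(node, g, path, f_limit):
        f = g + h(node)
        if f > f_limit:
            return None, f
        if goal_test(node):
            return path, f_limit
        next_f = math.inf
        for succ, cost in successors(node):
            if succ in path:            # avoid cycles along the current path
                continue
            solution, new_f = dfs_contour(succ, g + cost, path + [succ], f_limit)
            if solution is not None:
                return solution, f_limit
            next_f = min(next_f, new_f)
        return None, next_f

    f_limit = h(start)
    while True:
        solution, f_limit = dfs_contour(start, 0, [start], f_limit)
        if solution is not None:
            return solution
        if f_limit == math.inf:
            return None

# Romania fragment (edge costs) and straight-line distances to Bucharest
graph = {
    "Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
    "Sibiu": [("Arad", 140), ("Fagaras", 99), ("Oradea", 151),
              ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97), ("Craiova", 146)],
    "Pitesti": [("Rimnicu Vilcea", 97), ("Bucharest", 101), ("Craiova", 138)],
    "Craiova": [("Rimnicu Vilcea", 146), ("Pitesti", 138)],
    "Timisoara": [("Arad", 118)], "Zerind": [("Arad", 75)],
    "Oradea": [("Sibiu", 151)], "Bucharest": [],
}
sld = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
       "Fagaras": 178, "Oradea": 380, "Rimnicu Vilcea": 193,
       "Pitesti": 98, "Craiova": 160, "Bucharest": 0}

print(ida_star("Arad", lambda n: graph[n], lambda n: sld[n],
               lambda n: n == "Bucharest"))
# → ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest']
```

Like A∗, the search returns the optimal route, but it needs only memory linear in the path length, at the price of re-expanding nodes in each contour iteration.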
Hill Climbing
[Figure: one-dimensional state-space landscape; the evaluation value is plotted over the states, with the current state marked.]
© G. Lakemeyer
function HILL-CLIMBING(problem) returns a solution state
  inputs: problem, a problem
  static: current, a node
          next, a node

  current ← MAKE-NODE(INITIAL-STATE[problem])
  loop do
    next ← a highest-valued successor of current
    if VALUE[next] < VALUE[current] then return current
    current ← next
  end
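A compact sketch on a toy objective (the termination test below uses ≤ rather than the slide's strict <, so it also stops on plateaus instead of moving sideways forever):

```python
def hill_climbing(initial, neighbors, value):
    """Steepest-ascent hill climbing: move to the highest-valued
    successor until no successor is strictly better (a local maximum)."""
    current = initial
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current
        current = best

# Maximize f(x) = -(x-3)^2 over the integers, neighbors x-1 and x+1
print(hill_climbing(0, lambda x: [x - 1, x + 1],
                    lambda x: -(x - 3) ** 2))  # → 3
```

On this single-peaked objective hill climbing reaches the global maximum; with local maxima, plateaus, or ridges it can get stuck, which motivates the variants on the following slides.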
Another Problem with Local Search: Ridges
Beam Search
Beam search is a refinement of local search with random restarts.
Start with k randomly chosen states.
Generate all successors of all states.
Pick k best states.
Iterate.
Focuses search on the most promising states generated.
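The loop above, sketched on the same toy objective (maximize f(x) = -(x-3)^2 over the integers; names and parameters are illustrative):

```python
import random

def beam_search(k, random_state, neighbors, value, steps=50):
    """Local beam search: start with k random states, generate all
    successors of all beam states, keep the k best, iterate."""
    beam = [random_state() for _ in range(k)]
    for _ in range(steps):
        pool = set(beam)                 # current states stay candidates
        for s in beam:
            pool.update(neighbors(s))
        beam = sorted(pool, key=value, reverse=True)[:k]
    return max(beam, key=value)

best = beam_search(3, lambda: random.randint(-10, 10),
                   lambda x: {x - 1, x + 1}, lambda x: -(x - 3) ** 2)
print(best)  # → 3
```

Unlike k independent restarts, the k states share one candidate pool, so successors of a promising state can crowd out the others and concentrate the search.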
Simulated Annealing
function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to “temperature”
  static: current, a node
          next, a node
          T, a “temperature” controlling the probability of downward steps

  current ← MAKE-NODE(INITIAL-STATE[problem])
  for t ← 1 to ∞ do
    T ← schedule[t]
    if T = 0 then return current
    next ← a randomly selected successor of current
    ∆E ← VALUE[next] − VALUE[current]
    if ∆E > 0 then current ← next
    else current ← next only with probability e^(∆E/T)
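A sketch of the procedure on the toy objective from the previous slides (the linear cooling schedule below is an arbitrary choice; in practice the schedule is problem-dependent):

```python
import math
import random

def simulated_annealing(initial, neighbor, value, schedule):
    """Simulated annealing: always accept uphill moves; accept a downhill
    move (dE < 0) with probability exp(dE / T), T given by the schedule."""
    current = initial
    t = 1
    while True:
        T = schedule(t)
        if T <= 0:
            return current
        nxt = neighbor(current)
        dE = value(nxt) - value(current)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt
        t += 1

# Toy run: maximize f(x) = -(x-3)^2 with a linear cooling schedule
result = simulated_annealing(
    0,
    lambda x: x + random.choice([-1, 1]),
    lambda x: -(x - 3) ** 2,
    lambda t: max(0.0, 2.0 * (1 - t / 2000)),
)
print(result)
```

At high temperature the walk accepts many downhill moves and can escape local maxima; as T approaches 0 it degenerates into stochastic hill climbing and settles near a maximum.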