Problem Solving
Search
Course Page: [Link]
Bookmark it and visit regularly
Why Search?
• It is a useful technique for solving many problems
• Searching is a good way of approximating solutions
• Uncertainty in knowledge
• I know the “best” route to take from my house to LABIM mall. There is no
uncertainty. However, a traveler visiting for the first time will have uncertainty in
deciding which is the “best” route.
• I can decide “best” by recalling knowledge which I have gathered by travelling to
LABIM by taking different routes.
• An agent perspective
• An agent with several immediate options of unknown value can decide what to do by
first examining different possible sequences of actions that lead to states of known
value and then choosing the best sequence of actions
Why Search?
• Many problems do not have a simple algorithmic solution
• The way around it is to cast these problems as search problems
• Search also allows us to use extra knowledge (heuristic-based search)
• Aspects of intelligent behavior, such as planning, can be cast as search
• Search is a way of reasoning over preferences and constraints
• Note that search only shows how to solve a problem; it does not tell
you how to formulate the problem as a search problem.
Route Planning
• You can cast route planning
as a search problem
• Landmarks and locations are
seen as nodes
• Nodes are connected if a
road joins two locations
• We end up with a graph
• State space search (initial,
goal, operator/action, path
cost)
8-Puzzle
• States: position of tiles
• Initial state: Board position
at the start of the game
• Goal state: Final board
position
• Actions/Operators: Move
blank up, down, left, right
• Path Cost: 1 per move
8-Queens
• Initial state: Board position
at the start of the game
• Goal state: No queen attacks
another queen
• Actions/Operators: Add or
move a queen to any
square
• Path Cost: Not applicable
Uninformed and Informed Search
• Uninformed means there is no additional information about the state
beyond that provided in the problem definition
• An uninformed search can generate successors from a state and
distinguish a goal state from a non-goal state
• Searches that know which non-goal states are more promising are
called informed search or heuristic search
Evaluating Search Strategies
• Completeness: Can the strategy guarantee to find a solution?
• Optimality: Does the solution have low cost or minimal cost?
• Time Complexity: Time taken to find a solution. Typically, we count
the number of nodes expanded.
• We often talk about worst-case
• Space Complexity: Memory needed to perform the search
Search Tree
• How does a computer find the
best move/action?
• It generates a search tree and
finds the optimal path in it
• The number of branches is the number of
legal moves (actions)
• Depth is the number of steps it
decides to look ahead, i.e., simulate
playing the game
Complexity
• Number of possible choices at each state is called the branching
factor b
• Number of steps it takes to reach the goal state is called the depth d
• Complexity of search procedure calculates how many states have to
be checked to reach the goal state
• It is represented using the big-O notation
Complexity
• Suppose you design a search procedure whose time complexity is
O(bd)
• Your friend designed another procedure whose time complexity is
O(b^d)
• Which one is better?
Complexity
O(bd) vs O(b^d)
• b=3, d=5: bd = 15 vs b^d = 243
• b=5, d=10: bd = 50 vs b^d = 9,765,625
• b=8, d=15: bd = 120 vs b^d = 35,184,372,088,832
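The gap between the two complexities is easy to see numerically; a quick sketch reproducing the table above:

```python
# Compare linear-in-depth O(b*d) growth against exponential O(b**d) growth
# for the branching factors and depths from the table above.
def linear_cost(b, d):
    return b * d

def exponential_cost(b, d):
    return b ** d

for b, d in [(3, 5), (5, 10), (8, 15)]:
    print(f"b={b}, d={d}: b*d = {linear_cost(b, d):,}  vs  b**d = {exponential_cost(b, d):,}")
```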
Breadth First Search
• A is the start state (initial board
layout) and F is the goal state
(winning position)
• BFS proceeds by
expanding/visiting the shallowest
unexpanded node
• What data structures does BFS
use?
[Trace figure: frontier contents as BFS
expands A, then B, C, D, E, and finally
reaches F]
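The trace above can be played out with a runnable sketch. BFS uses a FIFO queue as its frontier; the toy graph's edges below are an assumption, since the slide's figure is not reproduced here.

```python
from collections import deque

# Minimal BFS sketch on an assumed toy graph with states A..F.
def bfs(graph, start, goal):
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # shallowest node first
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["D"], "D": [], "E": ["F"]}
print(bfs(graph, "A", "F"))  # ['A', 'B', 'E', 'F']
```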
Breadth First Search
• Complete: Yes
• Optimal: Yes, if all operators have the same cost
• Time complexity: O(b^d)
  • Imagine each node has b successors
  • at depth 1 there will be b nodes
  • at depth 2 there will be b^2 nodes
  • at depth d there will be b^d nodes
• Space complexity: O(b^d)
  • All nodes expanded must remain in memory
Depth First Search
• A is the start state (initial board
layout) and F is the goal state
(winning position)
• DFS proceeds by
expanding/visiting the node at
the deepest level of the tree
• What data structures does DFS
use?
[Trace figure: frontier contents as DFS
expands A, then B, then D, and so on
until it reaches F]
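A matching DFS sketch on the same assumed toy graph; here the frontier is a LIFO stack instead of a queue.

```python
# Minimal DFS sketch: the frontier is an explicit stack (LIFO),
# so the deepest node is expanded first.
def dfs(graph, start, goal):
    stack = [[start]]
    visited = set()
    while stack:
        path = stack.pop()               # deepest node first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in reversed(graph.get(node, [])):
            if neighbor not in visited:
                stack.append(path + [neighbor])
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["D"], "D": [], "E": ["F"]}
print(dfs(graph, "A", "F"))  # ['A', 'B', 'E', 'F']
```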
Depth First Search
• Complete: No
• Optimal: No
• Time complexity: O(b^d)
• Space complexity: O(bd)
  • The longest path has length d and each node on that
  path has b siblings
Depth Limited Search
• Like DFS but uses a depth cutoff l to avoid possibly
infinite paths
• Time complexity: O(b^l)
• Space complexity: O(bl)
• How do we choose l?
  • Domain knowledge: in a map we might see that any location
  can be reached from any other location in at most 9
  steps/actions
  • Solve some problems and then determine l
Iterative-Deepening Search
• Since depth limit (cutoff) is difficult to choose, use depth limits of l =
0,1,2,3,…
• Do DFS at each level
• Time complexity: Quite modest, like DFS: O(b^d)
• Space complexity: Like DFS: O(bd)
• Complete: Yes
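A minimal sketch of the idea, assuming the same toy graph as in the earlier traces: run a depth-limited DFS repeatedly with growing limits.

```python
# Iterative deepening: repeated depth-limited DFS with limit l = 0, 1, 2, ...
def depth_limited(graph, node, goal, limit, path):
    if node == goal:
        return path
    if limit == 0:
        return None                       # cutoff reached
    for neighbor in graph.get(node, []):
        result = depth_limited(graph, neighbor, goal, limit - 1, path + [neighbor])
        if result is not None:
            return result
    return None

def iterative_deepening(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):    # l = 0, 1, 2, 3, ...
        result = depth_limited(graph, start, goal, limit, [start])
        if result is not None:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["D"], "D": [], "E": ["F"]}
print(iterative_deepening(graph, "A", "F"))  # ['A', 'B', 'E', 'F']
```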
Iterative-Deepening Search
Uniform Cost Search
• Like BFS but expand the
lowest-cost node
• need to define a function
g(n) which returns the cost
of getting to node n
• AKA Dijkstra’s algorithm
• BFS is a special case of
UCS where g(n) = depth(n)
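A UCS sketch with made-up edge costs for illustration; the frontier is a priority queue ordered by g(n).

```python
import heapq

# Uniform-cost search: like BFS, but pop the node with the lowest
# path cost g(n) from a priority queue.
def ucs(graph, start, goal):
    frontier = [(0, start, [start])]     # (g(n), node, path)
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for neighbor, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best_cost.get(neighbor, float("inf")):
                best_cost[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost, neighbor, path + [neighbor]))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)]}
print(ucs(graph, "A", "D"))  # (3, ['A', 'B', 'C', 'D'])
```

Note that the cheapest path A-B-C-D (cost 3) beats the shorter-looking direct hops, which is exactly why the frontier must be ordered by g(n) rather than by depth.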
Informed (Heuristic) Search
• Use domain-specific knowledge to guide the search with hints
• The hint comes from the heuristic function h(n)
• h(n) is the estimated cost of the cheapest path from the state at node n to
the goal state
• For example, in the route planning problem, h(n) can be the Euclidean
distance from the current state to the goal state
• In the 8-puzzle, h(n) can be the number of tiles out of place
Heuristic Search
• A heuristic function for the 8-puzzle game is to count the
number of tiles out of place from the Goal
[Figure: start state (1 2 3 / 4 5 6 / 7 8 _) and goal state boards]
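The tiles-out-of-place heuristic can be computed directly; the flat-tuple board layouts below are read off the slide figure and may differ from the original.

```python
# Misplaced-tiles heuristic for the 8-puzzle: count tiles (ignoring the
# blank, written as 0) that are not in their goal position. Boards are
# flat tuples read row by row.
def misplaced_tiles(state, goal):
    return sum(1 for s, g in zip(state, goal)
               if s != 0 and s != g)

start = (1, 2, 3, 4, 5, 6, 7, 8, 0)     # assumed start layout from the slide
goal  = (1, 2, 3, 8, 0, 4, 7, 6, 5)     # assumed goal layout from the slide
print(misplaced_tiles(start, goal))  # 4
```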
Heuristic Search
[Figure: goal state board (1 2 3 / 8 _ 4 / 7 6 5)]
• Heuristic function h(n)
• Cost function g(n)
• Evaluation function f(n)
  • f(n) = h(n): Greedy best-first search
  • f(n) = g(n) + h(n): A* search
Greedy Best-First Search
• Expand the node with the lowest h(n)
• h(goal) = 0
• Does not always find the best (optimal) solution
• Performance completely depends on h(n)
Greedy Best-First Search
• We are in Arad
and we want to
find the path to
Bucharest
• The numbers are
the cost of taking
that route
Greedy Best-First Search
• We have the straight line distance heuristic
Best-First Search Pseudocode
Best-firstSearch(Node start, Node destination):
    closed-list = []
    open-list = [start]
    while(open-list is not empty) {
        dequeue(n) from open-list
        if(n is destination)
            return n
        for(each node w adjacent to n) {
            if(w is not in open-list and not in closed-list) {
                set n as w's parent
                add w to open-list
            }
        }
        remove n from open-list
        add n to closed-list
    }
A* Search
• Idea: Avoid expanding paths that are already expensive
• Evaluation function f(n) = g(n) + h(n)
• g(n): cost taken to reach n
• h(n): estimated cost from n to the goal
• f(n): estimated total cost of the cheapest path through n to the goal
A* Search
Back to the route problem
A* Search
• A-star is complete
• A-star finds a cost-optimal path if we have an admissible heuristic
• An admissible heuristic never overestimates the cost to reach the
goal
• In the route planning problem, the straight-line distance heuristic is never
greater than the actual cost
A-Star Search Pseudocode

AStarSearch(Node start, Node destination):
    closed-list = []
    g(start) = 0
    f(start) = g(start) + h(start)
    open-list = [start]
    while(open-list is not empty) {
        dequeue(n) from open-list
        if(n is destination)
            return n
        for(each node w adjacent to n) {
            totalcost = g(n) + cost(n, w)
            if(w is not in open-list and not in closed-list) {
                set n as w's parent
                g(w) = totalcost
                f(w) = totalcost + h(w)
                add w to open-list
            }
            else {
                if(totalcost is smaller than g(w)) {
                    set n as w's parent
                    g(w) = totalcost
                    f(w) = totalcost + h(w)
                    if(w in closed-list) {
                        remove w from closed-list
                        add w to open-list
                    }
                }
            }
        }
        remove n from open-list
        add n to closed-list
    }
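A compact runnable version of the same idea. The graph and heuristic values below are made up for illustration; the heuristic is admissible for this graph (it never overestimates the true remaining cost).

```python
import heapq

# A* sketch: order the frontier by f(n) = g(n) + h(n).
def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        for neighbor, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(open_list,
                               (new_g + h[neighbor], new_g, neighbor, path + [neighbor]))
    return None

graph = {"A": [("B", 1), ("C", 3)], "B": [("D", 4)], "C": [("D", 1)]}
h = {"A": 3, "B": 4, "C": 1, "D": 0}              # admissible estimates
print(a_star(graph, h, "A", "D"))  # (4, ['A', 'C', 'D'])
```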
Hill Climbing Search
• This is used in problems where we need to find the GOAL (solution)
but the PATH to it is not important
• It operates on a "state space", i.e., a set of complete configurations
• We start with a "current" state and then constantly improve upon
that state
• We use these methods because they use very little memory and can
find reasonable solutions in large state spaces
Hill Climbing Search
• Start the search from a
“current state”
• Move in the direction
that goes higher (steepest
ascent), i.e., increases the
value of the objective
function
• Stop when you reach a
peak
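A minimal hill-climbing sketch on a made-up one-dimensional objective: keep the current state and move to the best neighbor until no neighbor improves.

```python
# Hill climbing: repeatedly step to the best neighbor of the current
# state; stop at a peak (which may only be a local maximum).
def hill_climb(objective, current, step=1):
    while True:
        neighbors = [current - step, current + step]
        best = max(neighbors, key=objective)
        if objective(best) <= objective(current):
            return current                 # no neighbor improves: a peak
        current = best

objective = lambda x: -(x - 3) ** 2        # single peak at x = 3
print(hill_climb(objective, 0))  # 3
```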
Hill Climbing Search: Problems
• They get stuck in
• local maxima/minima
• plateaus
• ridges
Game Playing / Adversarial Search
• We consider two-player games where players alternate making
moves
• We call the first player MAX and the second player MIN
• MAX is trying to maximize her advantage
• MIN is trying to minimize MAX’s score
• MIN and MAX are using the same information
• This is valid for fully-observable games
Game Playing / Adversarial Search
• Such games can be represented as a tree
• Nodes: state in the game (e.g. board configuration)
• Edges: permissible moves
• Leaves: Terminal positions where the outcome is clear (win, lose, draw)
• Each leaf has a score
• A high score is good for one player (MAX)
• MIN tries to minimize this score
Minimax Search
• It is X’s turn to make a move
• The numbers are scores from
MAX’s view
• +1 is a position where MAX wins
• -1 MAX loses
• 0 Draw
• What move should MAX make
from the current position?
Minimax Search
minimax(n: node):
if leaf(n) then return evaluate(n)
if n is a max node
v := L (-1 in our game)
for each child of n
v' := minimax(child)
if v' > v, v:= v'
return v
if n is a min node
v := W (+1 in our game)
for each child of n
v' := minimax(child)
if v' < v, v:= v'
return v
Example
Another Example
[Figure: three-ply game tree, levels MAX-MIN-MAX, with leaf values
1 3 -2 4 -5 2 -1 -3 4 2 4 -2 1 5 1 -3]
Minimax with Alpha-Beta Pruning
Another Example
[Figure: the same game tree (levels MAX-MIN-MAX, leaf values
1 3 -2 4 -5 2 -1 -3 4 2 4 -2 1 5 1 -3), now traced with alpha-beta pruning]
Exercise
[Figure: exercise game tree, levels MAX-MIN-MAX, with leaf values
1 3 -2 2 4 -1 1 2 -1 4 7 3 6 -1 -1 1 2 4 5 -2 7]