Search Strategies in Problem Solving
Searching
Step in Problem Solving
Searching is Performed through the State Space
Searching is accomplished by constructing a search tree
Steps of Searching
Check whether the current state is the goal state or not
Expand the current state to generate the new sets of states
Choose one of the new states generated for search, which depends entirely on the selected
search strategy
Repeat the above steps until the goal state is reached or there are no more states to be
expanded
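The steps above can be sketched as a generic search loop. This is a minimal illustrative sketch in Python; the state values, the goal test, and the successor function are hypothetical, and the FIFO frontier shown here is just one possible strategy.

```python
from collections import deque

def tree_search(initial_state, goal_test, successors):
    """Generic search skeleton following the steps above.

    successors(state) returns the states reachable in one move; the
    order in which the frontier is popped is what distinguishes one
    search strategy from another (a FIFO queue is used here).
    """
    frontier = deque([initial_state])
    visited = set()
    while frontier:                      # no more states -> failure
        state = frontier.popleft()       # choose a state per the strategy
        if goal_test(state):             # step 1: goal check
            return state
        if state in visited:
            continue
        visited.add(state)
        for s in successors(state):      # step 2: expand the current state
            frontier.append(s)           # step 3: add the new states
    return None                          # step 4: search space exhausted

# Toy example: reach 5 starting from 0 with +1 / +2 moves.
found = tree_search(0, lambda s: s == 5, lambda s: [s + 1, s + 2])
```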
Searching: Types
1) Blind Search or Uninformed Search
2) Informed Search or Heuristic Search
1) Uninformed Search Techniques
Breadth First Search
Uniform Cost Search
Compiled By:
Er. Ranjan Raj Aryal
Add. Associate Professor
Cosmos College Of Management & Technology
Depth First Search
Backtracking Search
Depth Limited Search
Iterative Deepening Depth First Search
Bidirectional Search
Search Strategy Comparison
2) Informed Search or Heuristic Search
Example (8-puzzle) — goal state on the left, a sample start state on the right:

1 2 3        2 8 3
8   4        1   4
7 6 5        7 6 5
1) Uninformed Search
Search provided with problem definition only and no additional information about the
state space
It can only expand the current state to generate a new set of states
It can only distinguish between goal state and non-goal state
Less effective compared to Informed search
a) Breadth First Search
Root node is expanded first
Then all the successors of the root node are expanded
Then their successors are expanded, and so on
Nodes that are visited first are expanded first (FIFO)
All the nodes at depth ‘d’ are expanded before expanding any node at depth ‘d+1’
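The FIFO expansion described above can be sketched in Python. The graph, node names, and neighbour function below are hypothetical, used only to illustrate the queue discipline.

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search: a FIFO queue ensures all depth-d nodes are
    expanded before any depth-(d+1) node. Returns a path with the fewest
    edges from start to goal, or None if no path exists."""
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        node = frontier.popleft()        # earliest-generated node first (FIFO)
        if node == goal:
            path = []
            while node is not None:      # reconstruct the path via parents
                path.append(node)
                node = parent[node]
            return path[::-1]
        for n in neighbors(node):
            if n not in parent:          # not yet generated
                parent[n] = node
                frontier.append(n)
    return None

# Hypothetical graph used only for illustration.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['E'], 'E': []}
path = bfs('A', 'E', lambda n: graph[n])
```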
Space Complexity
Same as the time complexity, i.e. O(b^(d+1))
Since every generated node has to be kept in memory
Disadvantages
Memory Wastage
Irrelevant Operations
Time Intensive
It doesn’t guarantee the optimal-cost solution unless all step costs are identical
c) Depth First Search
Always expands the deepest node in the current frontier (LIFO). When a dead end is
reached, the search backs up to the most recent node that still has unexplored
successors.
a ) Completeness
If the chosen path descends infinitely, the search never returns, even when a shallow solution exists
Not complete
b) Optimality
The strategy might return a solution path that is longer than the optimal solution, if it
starts with an unlucky path
Not optimal
c) Space Complexity
At each depth level above the current node, the b−1 unexpanded siblings must be remembered, giving roughly m(b−1) stored nodes
It needs to store a single path from root to a leaf node and the remaining unexpanded
sibling nodes for each node in the path
For a search tree of branching factor ‘b’ and maximum tree depth ‘m’, only the storage of
b·m + 1 nodes is required
Hence,
Space Complexity = O(b·m + 1) = O(b·m)
d) Time Complexity
O(b^m) in the worst case, since in the worst case all b^m nodes of the search tree would
be generated. Hence, Time Complexity = O(b^m)
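The depth-first strategy can be sketched with an explicit LIFO stack. This is an illustrative sketch; the graph and the max_nodes safety cap are assumptions of the example (the cap guards against the incompleteness on infinite paths noted above, and is not part of the textbook algorithm).

```python
def dfs(start, goal, neighbors, max_nodes=10_000):
    """Depth-first search with an explicit LIFO stack: the deepest node
    is expanded first, and on a dead end the search backs up to the most
    recent node that still has unexplored successors."""
    stack = [(start, [start])]
    expanded = 0
    while stack and expanded < max_nodes:
        node, path = stack.pop()              # LIFO: deepest node first
        expanded += 1
        if node == goal:
            return path
        for n in neighbors(node):
            if n not in path:                 # avoid cycles on this path
                stack.append((n, path + [n]))
    return None

# Hypothetical graph used only for illustration.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['E'], 'E': []}
path = dfs('A', 'E', lambda n: graph[n])
```

Note how the returned path need not be the shortest one: DFS commits to whichever branch it happens to descend first.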
Time Complexity = 1 + b + b^2 + … + b^d = O(b^d)
b) Uniform Cost Search
Uniform Cost Search (UCS) begins at the root node and repeatedly expands the node with the
smallest total path cost from the root, until it reaches the goal state.
Uniform cost search doesn't care about how many steps a path has, only the total cost of the path.
UCS with all path costs equal to one is identical to breadth first search.
Similar to BFS, except that it sorts (in ascending order) the nodes in the fringe according to
the cost of the node, where cost means the path cost from the root.
It might get stuck in an infinite loop if it expands a node that has a zero-cost action
leading back to the same state
a) Completeness
Complete, if the cost of every step is greater than or equal to some small positive constant
ε
b) Optimality
Optimal, since nodes are always expanded in order of increasing path cost
c) Time Complexity
O(b^(C*/ε)), where C* is the cost of the optimal solution
d) Space Complexity
O(b^(C*/ε))
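A priority-queue sketch of UCS, ordered by path cost g(n), follows. The weighted graph is hypothetical; note how the cheaper two-step route beats the shorter one-step count, since UCS only cares about total cost, not the number of steps.

```python
import heapq

def ucs(start, goal, neighbors):
    """Uniform cost search: the frontier is a priority queue ordered by
    path cost g(n), so the cheapest known path is always expanded next.
    neighbors(n) yields (successor, step_cost) pairs; step costs are
    assumed >= some small epsilon > 0 (needed for completeness)."""
    frontier = [(0, start, [start])]          # (path cost, node, path)
    best = {}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path                 # first pop of goal is cheapest
        if node in best and best[node] <= cost:
            continue                          # a cheaper route was already found
        best[node] = cost
        for n, step in neighbors(node):
            heapq.heappush(frontier, (cost + step, n, path + [n]))
    return None

# Hypothetical weighted graph: the route A-C-D (cost 3) beats A-B-D (cost 6).
g = {'A': [('B', 1), ('C', 2)], 'B': [('D', 5)], 'C': [('D', 1)], 'D': []}
cost, path = ucs('A', 'D', lambda n: g[n])
```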
d) Backtracking Search
It uses even less memory than depth first search
Only one successor is generated at a time rather than all of them
Each partially expanded node remembers which successor to generate next
Four Criteria
Completeness: Not Complete
Optimality: Not Optimal
Time Complexity= O(b^m)
Space Complexity= O(m)
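The O(m) memory bound comes from storing only the single current path, extended and undone one node at a time. A minimal recursive sketch, with a hypothetical graph:

```python
def backtrack(node, goal, successors, path):
    """Recursive backtracking search: only the current path is stored,
    so memory is O(m) for maximum depth m (the path plus the recursion
    stack), rather than all siblings at every level."""
    path.append(node)                # extend the single stored path
    if node == goal:
        return True
    for s in successors(node):       # successors considered one at a time
        if s not in path and backtrack(s, goal, successors, path):
            return True
    path.pop()                       # dead end: undo the move and back up
    return False

# Hypothetical graph used only for illustration.
graph = {'A': ['B', 'C'], 'B': [], 'C': ['D'], 'D': []}
path = []
found = backtrack('A', 'D', lambda n: graph[n], path)
```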
g) Bidirectional Search
Performs two simultaneous searches, one forward from the initial state and the other
backward from the goal state, stopping when the two frontiers meet
Four Criteria
Time Complexity: O(b^(d/2)), since each of the two searches only needs to proceed to
half the solution depth
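A minimal sketch of the idea, alternating one breadth-first layer on each side until the frontiers meet. The graph is hypothetical and assumed undirected, so the same neighbour function works in both directions.

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    """Two simultaneous breadth-first searches, one forward from the
    start and one backward from the goal, stopping when their frontiers
    meet. Each search only needs about half the depth, which is where
    the O(b^(d/2)) bound comes from. Returns the meeting node."""
    if start == goal:
        return start
    seen_f, seen_b = {start}, {goal}
    front_f, front_b = deque([start]), deque([goal])
    while front_f and front_b:
        for frontier, seen, other_seen in ((front_f, seen_f, seen_b),
                                           (front_b, seen_b, seen_f)):
            for _ in range(len(frontier)):     # expand one full depth layer
                node = frontier.popleft()
                for n in neighbors(node):
                    if n in other_seen:        # the two searches meet
                        return n
                    if n not in seen:
                        seen.add(n)
                        frontier.append(n)
    return None

# Hypothetical undirected chain A - B - C - D.
graph = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C']}
meet = bidirectional_search('A', 'D', lambda n: graph[n])
```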
Summary of algorithms
2. Informed Search
A problem-solving strategy in which problem-specific knowledge is available in addition to
the problem definition
These searches find solutions more efficiently through the use of heuristics
A heuristic is a search technique that improves the efficiency of the search process
By eliminating unpromising states and their descendants from consideration, heuristic
algorithms can find acceptable solutions
Heuristics are fallible, i.e. they may make mistakes as well
A heuristic is an informed guess about the next step to be taken
It is often based on experience or intuition
Heuristics have limited information and can therefore lead to suboptimal solutions, or even
fail to find any solution at all
The measure, i.e. the evaluation function, must incorporate some estimate of the cost of the path
from a state to the closest goal state
An algorithm may use different evaluation functions; one important such function is the
heuristic function h(n)
where, h(n) is the estimated cost of the cheapest path from node n to the goal
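As a concrete illustration of h(n), here are two classic admissible heuristics for the 8-puzzle, using the goal layout from the example earlier in these notes. Boards are written as tuples of 9 entries read row by row, with 0 for the blank; the layouts are the ones shown in the figure, reused here as an assumption.

```python
# Goal layout from the notes:  1 2 3 / 8 _ 4 / 7 6 5
GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)

def misplaced_tiles(state, goal=GOAL):
    """h1: number of tiles out of place (the blank is not counted).
    Admissible, since every misplaced tile needs at least one move."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal=GOAL):
    """h2: sum of horizontal + vertical distances of each tile from its
    goal square. Never overestimates, since each move slides exactly
    one tile one square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

# Start layout from the notes:  2 8 3 / 1 _ 4 / 7 6 5
start = (2, 8, 3, 1, 0, 4, 7, 6, 5)
h1, h2 = misplaced_tiles(start), manhattan(start)
```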
a.1) Greedy Best First Search
Expands the node that appears closest to the goal, i.e. the one with the lowest h(n)
a) Completeness
Can start down an infinite path and never return to try other possibilities
Not complete
b) Optimality
Looks for immediate best choice and doesn’t make careful analysis of long term
options
May give longer solution even if shorter solution exists
Not optimal
c) Space Complexity
O(b^m) where m is the maximum depth of the search space, since all nodes have to be
kept in memory
d) Time Complexity
O(b^m)
a.2) A* Search
Evaluates node by combining g(n), the cost to reach the node and h(n) the cost to get
from node to goal
f(n)=g(n)+h(n)
where f(n) is the estimated cost of the cheapest solution through node n
To find the cheapest solution, the first node to try is the one with the lowest g(n)+h(n)
Admissible Heuristic: h(n) is admissible if it never overestimates cost to reach the
solution
example: hSLD (straight-line distance) never overestimates the actual road distance; since g(n) is exact, f(n) never overestimates the true cost of a solution through n
➢ When h(n) = actual cost to goal.
- Only nodes in the correct path are expanded.
-Optimal solution is found.
➢ When h(n) < actual cost to goal
-Additional nodes are expanded
-Optimal solution is found
➢ When h(n) > actual cost to goal
-Optimal solution can be overlooked.
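A priority-queue sketch of A*, ordered by f(n) = g(n) + h(n), follows. The weighted graph and heuristic table are hypothetical; here h equals the true remaining cost, so it is admissible by construction and the first goal pop is optimal.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: always expands the node with the lowest
    f(n) = g(n) + h(n). With an admissible h, the first time the goal
    is popped, the path found is optimal. neighbors(n) yields
    (successor, step_cost) pairs."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                              # cheaper route already known
        best_g[node] = g
        for n, step in neighbors(node):
            heapq.heappush(frontier,
                           (g + step + h(n), g + step, n, path + [n]))
    return None

# Hypothetical graph; h never overestimates the true remaining cost.
g_graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
           'C': [('D', 1)], 'D': []}
h_table = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
cost, path = a_star('A', 'D', lambda n: g_graph[n], lambda n: h_table[n])
```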
a) Optimality
Optimal if h(n) is admissible
b) Completeness
Complete if h(n) is admissible
c) Space Complexity
O(b^d) if h(n) is admissible
d) Time Complexity
Exponential in the worst case, O(b^d), though a good heuristic expands far fewer nodes
b) Local Search Algorithm and Optimization
Optimization
Aim to find the best state according to an objective function
Local Search Algorithm
It operates using a single current state rather than multiple paths, and
generally moves only to neighbours of that state
c) Iterative Improvement Algorithm
Consider all the states laid out on the surface of a landscape
The height of a point on the landscape corresponds to the value of the objective
function at that state; the algorithms move around the landscape trying to find
the highest peaks, which are the optimal solutions
The algorithm is suitable for problems where the path to the goal is irrelevant
and only the final configuration matters
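The basic iterative improvement step can be sketched as steepest-ascent hill climbing. The one-dimensional landscape and parameter names below are assumptions of the example; note how the algorithm stores only the current state.

```python
def hill_climb(state, neighbors, value, max_steps=1000):
    """Steepest-ascent hill climbing: keep only the current state and
    move to the best neighbour while that improves the objective.
    Stops at a peak, which may be a local rather than global maximum."""
    for _ in range(max_steps):
        best = max(neighbors(state), key=value, default=None)
        if best is None or value(best) <= value(state):
            return state                  # no uphill neighbour: a peak
        state = best
    return state

# Toy 1-D landscape: value(x) = -(x - 3)^2, with a single peak at x = 3.
peak = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
```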
Fig.: A one-dimensional state-space landscape (x-axis: state space; y-axis: objective value), showing the global maximum, local maxima, local minima and the global minimum
Plateau
An area of the state space landscape where the evaluation function is flat
Can be flat local maxima where no uphill exists or shoulder from which it is possible to
progress
A hill climbing search might be unable to find its way off the plateau
Ridges
A special kind of local maximum resulting from a sequence of local maxima that is
very difficult for greedy algorithms to navigate
It is an area of the search space that is higher than the surrounding areas and that
itself has a slope
Algorithm
Hill Climbing Search: Solution to the Problems
Local Maxima
Plateau
Make a big jump in some direction to try to get to a new section of the search space
If the rules apply single small steps, apply them several times in the same direction
Ridges
Apply two or more rules before doing the test, i.e. move in several directions at once
Simulated Annealing
This search takes some downhill steps so that it can escape a particular local maximum and
continue towards other peaks in the state space
If a move worsens the situation, it is accepted with some probability less than 1, which
decreases exponentially with the badness of the move, i.e. for very bad moves the
probability is low and for less bad ones it is higher
The degree of badness of the move is determined by the amount ∆E by which the evaluation is
worsened
The probability also depends on a control parameter ‘T’, the temperature
For high values of T the probability is high, and it falls as T decreases
Hence, bad moves are more likely to be allowed at the beginning only
This method is commonly applied to VLSI layout, factory scheduling and travelling-salesman
problems
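The acceptance rule described above can be written out directly. The standard form is exp(∆E / T) for a bad move with ∆E < 0; the small demonstration below checks both dependencies (badness of the move and temperature).

```python
import math

def accept_probability(delta_e, t):
    """Probability of accepting a move in simulated annealing.
    delta_e < 0 is a 'bad' (downhill) move; the acceptance probability
    decays exponentially with how bad the move is, and falls further as
    the temperature t cools, so bad moves are likely only early on."""
    if delta_e >= 0:
        return 1.0              # uphill (improving) moves are always accepted
    return math.exp(delta_e / t)

# The worse the move, the lower the chance (at fixed T = 1):
p_mild, p_bad = accept_probability(-0.5, 1.0), accept_probability(-5.0, 1.0)
# The cooler the temperature, the lower the chance (at fixed delta_e = -1):
p_hot, p_cold = accept_probability(-1.0, 10.0), accept_probability(-1.0, 0.1)
```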
A game can be defined as a kind of search problem with the following components:
Initial State identifying the initial position in the game and identification of the
first player
Successor Function returning a list of (move, state) pairs
Terminal Test, which determines when the game is over
Utility function which gives a numeric value for the terminal states.
Chapter 2 Agent, Search and Game Playing
Agent
An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators. This is analogous to a human agent, which has
eyes, ears and other organs for sensors, and hands, legs, mouth and other body parts for
actuators. A robotic agent might have cameras and infrared range finders for sensors and
various motors for actuators.
What is black box AI? Put simply it is the idea that we can understand what goes in and what
comes out, but don’t understand what goes on inside.
The job of AI is to design the agent program that implements the agent function mapping percepts
to actions. We assume this program will run on some computing device with physical sensors and
actuators; this device is called the architecture.
agent = architecture + program
An example of a taxi driver agent is given below
c) Goal-based Agents
• The agent can use goals with a set of actions and their predicted outcomes to
see which actions achieve our goals
• Search and planning are two subfields to achieve an agent's goal.
• A simple example would be the shopping list; our goal is to pick up everything
on that list. Having the goal makes it easier to decide between, say, milk and
orange juice when you can only afford one.
d) Utility-based Agents
➢ A utility function maps each state after each action to a real number representing how
efficiently each action achieves the goal.
➢ They choose actions based on a preference (utility) for each state.
➢ Useful when many actions can achieve the same goal.
➢ Sometimes achieving the desired goal is not enough.
➢ We may look for a quicker, safer, cheaper trip to reach a destination.
➢ Because of the uncertainty in the world, a utility agent chooses the action that maximizes
the expected utility.
➢ Example:
• Quicker, safer, more reliable ways to get where you are going
• Price comparison shopping
• Bidding and evaluating bids on items in an auction
Intentionality is the ability to think, feel and act in a deliberate way towards a purpose. It is a
term from philosophy that comes up in relation to artificial intelligence. While it is widely
believed that intentionality is a common human trait and ability, it is less clear whether
machines could ever possess such a dimension.
Most AI is focused on practical problems such as recognizing an image, driving a car or
searching the internet. As such, goals are often more or less hardcoded into AI software. In other
words, AI is currently mostly about learning how to answer a question, make a decision or
predict something. It isn't about forming high level goals and acting with a purpose.
Thought experiments such as the Chinese Room suggest that AI is fundamentally different from
consciousness because machines process symbols and don't "understand" them. If a machine
were to develop consciousness, intentionality would be an early precursor. A superintelligent
machine with intentionality could represent an existential risk, as its goals might not align
with the interests of humans.
Overview: Intentionality
Artificial Intelligence
Thought Processes
Consciousness
Affective Computing
Superintelligence
Games
A game can be formally defined as a kind of search problem with the following components:
• The initial state , which includes the board position and an indication of whose
move it is.
• A set of Operators, which define the legal moves that a player can make.
• A terminal Test, which determines when the game is over. States where the
game has ended are called terminal states.
• A utility function (also called a payoff function), which gives a numeric value
for the outcome of the game. In chess the outcome is a win, loss or draw, which
we can represent by the values +1, -1 or 0. Some games have a wider variety of
possible outcomes; for example, the payoffs in backgammon range from +192
to -192.
Game Playing:
Game playing involves an abstract and pure form of computation that seems to require intelligence.
So, game playing has a close relationship to intelligence, and its states and rules are well defined.
Game playing research has contributed ideas on how to make the best use of time to reach good
decisions. Game playing is important in AI for the following reasons:
Strategies and Rules:
Plies, Moves and Turns
A strategy is a list of the optimal choices for each player at each stage of a given game. It is
common in game theory to refer to one player's turn as a "ply" of the game. One round of all the
players' turns is called a "move". This originates in chess, where one move consists of each
player taking one turn. Because most turn-based AI is based on chess-playing programs, the
word "move" is often used in this context.
There are many more games, however, that treat each player's turn as a separate move and this is
the terminology normally used in turn-based strategy games.
A game has at least two players. An instance of a game begins with a player choosing from a set
of alternatives specified by the game rules. This choice is called a move. After the first move,
the new situation determines which player is to move next and the alternatives available to that
player.
Game Playing with Minimax Tic-Tac-Toe (TTT)
Assume two players named X (MAX) and O (MIN) are playing the game, with MAX playing first.
Initially MAX has 9 possible moves. Play alternates between MAX and MIN until we reach leaf
nodes corresponding to terminal states, such that one player has 3 in a row or all squares are
filled.
The number on each leaf node indicates the utility value of the terminal state from the point of
view of MAX. A high value is assumed to be good for MAX and bad for MIN.
Consider the game with the initial state below (X to move):

X | O | O
O | X |
X |   |

Step 1 (MAX's move): MAX places an X in one of the three empty squares, generating three
successor states.
Step 2 (MIN's move): for each of those states, MIN replies by placing an O in one of the
remaining empty squares.
[Fig.: the resulting two-ply game tree of MAX's three possible moves and MIN's replies]
Evaluation Function
An evaluation function, also known as a heuristic evaluation function or static evaluation
function, is a function used by game-playing programs to estimate the value or goodness of a
position in the minimax and related algorithms.
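The minimax computation itself can be sketched compactly. The two-level game tree and leaf utilities below are hypothetical stand-ins for the terminal values an evaluation function would supply; MAX maximizes over its children, MIN minimizes.

```python
def minimax(state, player, successors, utility, is_terminal):
    """Minimax value of a state: MAX picks the move maximizing the
    value, MIN the move minimizing it, recursing down to terminal
    states scored by the utility function from MAX's point of view."""
    if is_terminal(state):
        return utility(state)
    values = [minimax(s, 'MIN' if player == 'MAX' else 'MAX',
                      successors, utility, is_terminal)
              for s in successors(state)]
    return max(values) if player == 'MAX' else min(values)

# Tiny hypothetical game tree as a dict; leaves carry utility values.
tree = {'root': ['a', 'b'], 'a': ['a1', 'a2'], 'b': ['b1', 'b2']}
leaf_utility = {'a1': 3, 'a2': -1, 'b1': 0, 'b2': 5}

value = minimax('root', 'MAX', lambda s: tree.get(s, []),
                lambda s: leaf_utility[s], lambda s: s in leaf_utility)
```

MIN would hold MAX to -1 under move 'a' and to 0 under move 'b', so MAX chooses 'b' and the root's minimax value is 0.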
Utilitarianism
It is one particular form of consequentialism, in which the “good consequence” is considered to
be the one that maximizes happiness (or: welfare, benefit) for all people concerned.
Consequentialism would look only at the consequences of an action to judge whether the action
is right or wrong. A consequentialist would say that an action is morally right if its consequences
lead to a situation that is clearly better than things were before the action.
Decision Making
Decision is obviously related to reasoning. One of the possible definitions of artificial
intelligence (AI) refers to cognitive processes and especially to reasoning. Before making any
decision, people also reason. Artificial Intelligence is being used in decision support for tasks
such as aiding the decision maker to select actions in real-time and stressful decision problems;
reducing information overload, enabling up-to-date information, and providing a dynamic
response with intelligent agents; enabling communication required for collaborative decisions;
and dealing with uncertainty in decision problems.
Planning
Planning is a key ability for intelligent systems, increasing their autonomy and flexibility through
the construction of sequences of actions to achieve their goals. Planning techniques have been
applied in a variety of tasks including robotics, process planning, web-based information
gathering, autonomous agents and spacecraft mission control. Planning involves the
representation of actions and world models, reasoning about the effects of actions, and
techniques for efficiently searching the space of possible plans.
Internal Representation:
Knowledge is the information about a domain that can be used to solve problems in that domain.
To solve many problems requires much knowledge, and this knowledge must be represented in
the computer. As part of designing a program to solve problems, we must define how the
knowledge will be represented. A representation scheme is the form of the knowledge that is
used in an agent. A representation of some piece of knowledge is the internal representation of
the knowledge.