Search Strategies in Problem Solving

The document outlines the process of searching in problem-solving, detailing steps, types of searches (uninformed and informed), and various search strategies such as breadth-first and depth-first search. It also discusses criteria for measuring search effectiveness, including completeness, optimality, time complexity, and space complexity. Additionally, it classifies problems based on their properties and provides an overview of algorithms used in informed search.

Unit 2: Searching

Searching
 Step in Problem Solving
 Searching is Performed through the State Space
 Searching accomplished by constructing a search tree

Steps of Searching
 Check whether the current state is the goal state or not
 Expand the current state to generate the new set of states
 Choose one of the new states for further search, which depends entirely on the selected search strategy
 Repeat the above steps until the goal state is reached or there are no more states to be expanded
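The four steps above can be sketched as a single loop (an illustrative sketch; the FIFO/LIFO `strategy` switch and the toy successor function below are our own choices, not from the text):

```python
from collections import deque

def general_search(start, goal_test, successors, strategy="FIFO"):
    """The four steps above: test, expand, choose the next state according
    to the fringe discipline the strategy dictates, and repeat."""
    fringe = deque([start])
    while fringe:
        # Step 3: the strategy decides which generated state to search next
        state = fringe.popleft() if strategy == "FIFO" else fringe.pop()
        # Step 1: check whether the current state is the goal state or not
        if goal_test(state):
            return state
        # Step 2: expand the current state to generate new states
        fringe.extend(successors(state))
    return None  # no more states to be expanded

# Toy usage: states are integers, each state s expands to 2s+1 and 2s+2.
found = general_search(0, lambda s: s == 5,
                       lambda s: [2 * s + 1, 2 * s + 2] if s < 3 else [])
print(found)  # 5
```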

Searching: Types
1) Blind Search or Uninformed Search
2) Informed Search or Heuristic Search
1) Uninformed Search Techniques
 Breadth First Search
 Uniform Cost Search
 Depth First Search
 Backtracking Search
 Depth Limited Search
 Iterative Deepening Depth First Search
 Bidirectional Search
 Search Strategy Comparison

Compiled By:
Er. Ranjan Raj Aryal
Add. Associate Professor
Cosmos College Of Management & Technology
 Search Strategy Comparison
2) Informed Search or Heuristic Search
 Best First Searching
 Greedy Search
 A* Search
 Local Search Algorithm & Optimization
 Iterative Improvement Algorithm
 Hill Climbing Search
 Simulated Annealing Search
 Local Beam Search
 Genetic Algorithm
 Adversarial Search Techniques
 Mini-max Procedure
 Alpha Beta Procedure

Searching: Criteria to Measure


 Completeness: Ability to find a solution if one exists
 Optimality: Ability to find the highest quality solution among several solutions
 Should maintain information about the number of steps or the path cost from the current state to the goal state
 Time Complexity: Time taken to find the solution
 Space Complexity: Amount of Memory required to perform the searching
Searching: Evaluation Function
 A number that indicates how far we are from the goal
 Every move should reduce this number, or at least never increase it
 When this number becomes zero, the problem is solved (there may be some exceptions)
Eg:-
8 Puzzle Game

Goal State:      Initial State:
1 2 3            2 8 3
8   4            1   4
7 6 5            7 6 5

Evaluation Function = 0      Evaluation Function = -4
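One common concrete choice for such an evaluation function on the 8-puzzle is the negative count of misplaced tiles, sketched below (illustrative only; the exact value assigned to a state depends on the counting convention used, e.g. whether the blank is counted):

```python
GOAL = ((1, 2, 3),
        (8, 0, 4),
        (7, 6, 5))  # 0 denotes the blank

def evaluation(state):
    """Negative number of misplaced tiles (blank excluded): 0 at the goal
    state, increasingly negative the farther the state is from it."""
    return -sum(1 for i in range(3) for j in range(3)
                if state[i][j] != 0 and state[i][j] != GOAL[i][j])

print(evaluation(GOAL))                               # 0
print(evaluation(((2, 8, 3), (1, 0, 4), (7, 6, 5))))  # -3 under this convention
```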

Searching: Problem Classification


 Ignorable: Intermediate actions can be ignored. Example: Water Jug Problem
 Recoverable: Actions can be undone to return to the previous states. Example: 8 Puzzle Game
 Irrecoverable: Actions can't be undone to reach the previous state. Example: Tic-Tac-Toe
 Decomposable: The problem can be broken into smaller, similar subproblems. Example: Bike Racing

1) Uninformed Search
 Search provided with problem definition only and no additional information about the
state space
 Expansion of current state to new set of states is possible
 It can only distinguish between goal state and non-goal state
 Less effective compared to Informed search
a ) Breadth First Search
 Root node is expanded first
 Then all the successors of the root node are expanded
 Then their successors are expanded and so on.
 Nodes visited first are expanded first (FIFO)
 All the nodes at depth 'd' are expanded before expanding any node at depth 'd+1'

S.N Next Successors Fringe(FIFO) Visited


1 A B,C,D A -
(initial)
2 B E,F B,C,D A
3 C G,H C,D,E,F A,B
4 D I,J D,E,F,G,H A,B,C
5 E K,L E,F,G,H,I,J A,B,C,D
6 F M F,G,H,I,J,K,L A,B,C,D,E
7 G N G,H,I,J,K,L,M A,B,C,D,E,F
8 H O H,I,J,K,L,M,N A,B,C,D,E,F,G
9 I P,Q I,J,K,L,M,N,O A,B,C,D,E,F,G,H
10 J R J,K,L,M,N,O,P,Q A,B,C,D,E,F,G,H,I
11 K S K,L,M,N,O,P,Q,R A,B,C,D,E,F,G,H,I,J
12 L T L,M,N,O,P,Q,R,S A,B,C,D,E,F,G,H,I,J,K
13 M - M,N,O,P,Q,R,S,T A,B,C,D,E,F,G,H,I,J,K,L
14 N (Goal) - N,O,P,Q,R,S,T A,B,C,D,E,F,G,H,I,J,K,L,M
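The trace above can be reproduced with a FIFO fringe (a minimal sketch; the tree below encodes the successor relationships from the table):

```python
from collections import deque

# Successor relationships read off the trace table above.
TREE = {
    "A": ["B", "C", "D"], "B": ["E", "F"], "C": ["G", "H"], "D": ["I", "J"],
    "E": ["K", "L"], "F": ["M"], "G": ["N"], "H": ["O"], "I": ["P", "Q"],
    "J": ["R"], "K": ["S"], "L": ["T"],
}

def bfs(start, goal):
    """Breadth-first search: the fringe is a FIFO queue, so all nodes at
    depth d are expanded before any node at depth d+1."""
    fringe = deque([start])
    visited = []
    while fringe:
        node = fringe.popleft()            # oldest node first
        if node == goal:
            return visited, node
        visited.append(node)
        fringe.extend(TREE.get(node, []))  # enqueue successors
    return visited, None

visited, found = bfs("A", "N")
print(found)    # N
print(visited)  # ['A', 'B', ..., 'M'], matching the Visited column
```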

Breadth First Search: Four Criteria


a ) Completeness
 d: depth of the shallowest goal
 b: branch factor
 This search strategy finds the shallowest goal first
 Complete, if the shallowest goal is at some finite depth
b ) Optimality
 Since the shallowest goal is found first, any shallower goal node would already have been reached
 Optimal, if the path cost is a non-decreasing function of the depth of the node
c) Time Complexity
 For a search tree with branching factor 'b', expanding the root yields 'b' nodes at the first level.
 Expanding those 'b' nodes yields b^2 nodes at the second level.
 Similarly, expanding the nodes at the (d-1)th level yields b^d nodes at the dth level.
 If the goal is at the dth level, in the worst case the goal node is the last node generated at that level.
 Hence, we expand (b^d - 1) nodes at the dth level (every node except the goal node itself, which doesn't need to be expanded)
 So, the total number of nodes generated at the (d+1)th level = b(b^d - 1) = b^(d+1) - b
 Again, total number of nodes generated = 1 + b + b^2 + … + b^d + (b^(d+1) - b) = O(b^(d+1))
 Hence, time complexity is O(b^(d+1))
where, b = branching factor and
d = depth of the goal node in the search tree
d) Space Complexity

 Same as time complexity
 i.e. O(b^(d+1))
 Since each node has to be kept in the memory
Disadvantages

 Memory Wastage
 Irrelevant Operations
 Time Intensive
 It doesn’t assure the optimal cost solution

b) Depth First Search


 Expands the nodes at the deepest level of the search tree (LIFO)

 When a dead end is reached, the search backs up to the nearest node that still has unexplored successors.

Consider the example figure of BFS

S.N Next Successors Fringe(LIFO) Visited


1 A B,C,D A -
(initial)
2 B E,F B,C,D A
3 E K,L E,F,C,D A,B
4 K S K,L,F,C,D A,B,E
5 S - S,L,F,C,D A,B,E,K
6 L T L,F,C,D A,B,E,K,S
7 T - T,F,C,D A,B,E,K,S,L
8 F M F,C,D A,B,E,K,S,L,T
9 M - M,C,D A,B,E,K,S,L,T,F
10 C G,H C,D A,B,E,K,S,L,T,F,M
11 G N G,H,D A,B,E,K,S,L,T,F,M,C
12 N(goal) - N,H,D A,B,E,K,S,L,T,F,M,C,G
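The same successor relationships traced with a LIFO fringe reproduce the table above (a minimal sketch; successors are pushed in reverse so the leftmost child is popped first):

```python
# Successor relationships from the BFS example figure.
TREE = {
    "A": ["B", "C", "D"], "B": ["E", "F"], "C": ["G", "H"], "D": ["I", "J"],
    "E": ["K", "L"], "F": ["M"], "G": ["N"], "H": ["O"], "I": ["P", "Q"],
    "J": ["R"], "K": ["S"], "L": ["T"],
}

def dfs(start, goal):
    """Depth-first search: the fringe is a LIFO stack, so the deepest
    unexplored node is always expanded next."""
    fringe = [start]
    visited = []
    while fringe:
        node = fringe.pop()              # newest node first
        if node == goal:
            return visited, node
        visited.append(node)
        # push successors reversed so the leftmost child is popped first
        fringe.extend(reversed(TREE.get(node, [])))
    return visited, None

visited, found = dfs("A", "N")
print(visited)  # ['A', 'B', 'E', 'K', 'S', 'L', 'T', 'F', 'M', 'C', 'G']
```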

Depth First Search: Four Criteria

a ) Completeness

 Can get stuck going down the wrong path

 It will always continue downwards without backing up

 If the chosen path descends infinitely, the search never returns, even when a shallow solution exists

 Not complete

b) Optimality

 The strategy might return a solution path that is longer than the optimal solution, if it
starts with an unlucky path

 Not optimal

c) Space Complexity

 How many nodes can be in the fringe in the worst case?

 It needs to store only a single path from the root to a leaf node, together with the remaining unexpanded sibling nodes for each node on the path

 At each depth l < m it keeps the (b-1) unexpanded siblings, giving (m-1) * (b-1) nodes; at the maximum depth m it keeps b nodes

 For a search tree of branching factor 'b' and maximum tree depth 'm', only the storage of b·m + 1 nodes is required

 So, total = (m-1) * (b-1) + b = O(b·m)

Hence,
Space Complexity = O(b·m + 1) = O(bm)

d) Time Complexity

 O(b^m) in the worst case, since all the b^m nodes of the search tree may be generated. Hence, Time Complexity = O(b^m)

 Assume (worst case) that there is one goal leaf at the right-hand side of the tree

 So DFS will expand all nodes:

= 1 + b + b^2 + … + b^m

= O(b^m)

c) Uniform-cost search (UCS)


 Uniform cost search is a search algorithm used to traverse weighted trees and graphs and find the shortest path.

 Uniform Cost Search or UCS begins at a root node and continually expands nodes, taking the node with the smallest total cost from the root, until it reaches the goal state.

 Uniform cost search doesn't care about how many steps a path has, only about the total cost of the path.

 UCS with all path costs equal to one is identical to breadth first search.

 Similar to BFS, except that it sorts the nodes in the fringe in ascending order of cost, where cost is the path cost.

Fringe (Queue)        Next Node (Head)    Successors    Updated Fringe (sorted ascending by path cost)
S(0)                  S (Start)           A,B,C         A(1), B(5), C(15)
A(1), B(5), C(15)     A                   D             B(5), D(11), C(15)
B(5), D(11), C(15)    B                   D             D(10), D(11), C(15)
D(10), D(11), C(15)   D (Goal)            -             -

Path: S → B → D, cost = 5 + 5 = 10
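The trace can be reproduced with a priority queue ordered by path cost (a minimal sketch; the edge costs are read off the example, e.g. S-B = 5 and B-D = 5):

```python
import heapq

# Weighted graph from the worked example.
GRAPH = {
    "S": [("A", 1), ("B", 5), ("C", 15)],
    "A": [("D", 10)],
    "B": [("D", 5)],
    "C": [],
    "D": [],
}

def ucs(start, goal):
    """Uniform cost search: the fringe is a priority queue keyed on path
    cost g(n), so the cheapest frontier node is always expanded first."""
    fringe = [(0, start, [start])]       # (path cost, node, path)
    explored = set()
    while fringe:
        cost, node, path = heapq.heappop(fringe)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for succ, step in GRAPH[node]:
            heapq.heappush(fringe, (cost + step, succ, path + [succ]))
    return None

print(ucs("S", "D"))  # (10, ['S', 'B', 'D'])
```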
Disadvantages

 Doesn’t care about the number of steps a path has but only about their cost

 It might get stuck in an infinite loop if it expands a node that has a zero cost action
leading back to same state

Uniform Cost Search: Four Criteria


a) Completeness

 Complete, if the cost of every step is greater than or equal to some small positive constant ε

b) Optimality

 The same condition ensures optimality

c) Time Complexity

 O(b^(C*/ε))

 where C* → cost of the optimal path
and ε → the small positive constant

 This complexity can be much greater than that of breadth first search

d) Space Complexity

 O(b^(C*/ε))

d) Backtracking Search
 It uses even less memory than depth first search
 Only one successor is generated at a time rather than all
 Each partially expanded node remembers which successor to generate next
Four Criteria
 Completeness: Not Complete
 Optimality: Not Optimal
 Time Complexity = O(b^m)
 Space Complexity = O(m)

e) Depth Limited Search


 Modification of depth first search
 Depth first search with predetermined limit ‘l’
 After the nodes at the level ‘l’ are explored, the search backtracks without going further
deep i.e. the algorithm will not move down below.
 Hence, it solves the infinite path problem of the depth first search strategy.

Four Criteria

 Completeness: Incomplete if l < d, i.e. the shallowest goal lies beyond the depth limit

 Optimality: Not optimal if l > d, i.e. a first path to a goal may be found instead of the shortest path
 Time Complexity = O(b^l)
 Space Complexity = O(b·l)

f) Iterative Deepening Depth First Search


 Finds the best depth limit by gradually increasing the limit l: first 0, then 1, 2 and so on.
 It is a derivative of Depth Limited Search.
 Combines the benefits of depth first and breadth first search
 The difficult part is choosing a good depth limit
 This strategy addresses the issue by trying all possible depth limits
 The process is repeated until the goal is found at depth limit 'd', which is the depth of the shallowest goal.

Four Criteria

 Completeness: as for Breadth First Search, i.e. complete if the branching factor is finite

 Optimality: as for Breadth First Search, i.e. optimal if the path cost is a non-decreasing function of depth
 Time Complexity = O(b^d)
 Space Complexity = O(b·d)
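A minimal sketch of the strategy on (a trimmed copy of) the earlier example tree, where the shallowest goal N lies at depth 3:

```python
# Part of the tree from the BFS example; the shallowest goal N is at depth 3.
TREE = {"A": ["B", "C", "D"], "B": ["E", "F"], "C": ["G", "H"],
        "D": ["I", "J"], "G": ["N"]}

def depth_limited(node, goal, limit, path):
    """Depth-first search that backtracks instead of descending below limit."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for succ in TREE.get(node, []):
        found = depth_limited(succ, goal, limit - 1, path + [succ])
        if found is not None:
            return found
    return None

def iddfs(start, goal, max_depth=10):
    """Try depth limits l = 0, 1, 2, ... until the shallowest goal is found."""
    for limit in range(max_depth + 1):
        found = depth_limited(start, goal, limit, [start])
        if found is not None:
            return found
    return None

print(iddfs("A", "N"))  # ['A', 'C', 'G', 'N']
```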

g) Bidirectional Search
 Performs two simultaneous searches, one forward from the initial state and the other backward from the goal state

 Search stops when the two traversals meet in the middle

Four Criteria

 Completeness: Complete if both searches are B.F.S. and b is finite

 Optimality: Optimal if both searches are B.F.S.

 Time Complexity

 For B.F.S. it is O(b^(d+1))

 If B.F. Bidirectional Search is used, then the complexity = O(b^(d/2))

 Since the forward and backward searches each have to go only halfway

 Space Complexity = O(b^(d/2))

Bidirectional Search (BFS)

Bidirectional Search (DFS)

Summary of algorithms

2. Informed Search
 Strategy of problem solving where problem-specific knowledge is available along with the problem definition
 These searches find solutions more efficiently by the use of heuristics
 A heuristic is a technique that improves the efficiency of the search process
 By eliminating unpromising states and their descendants from consideration, heuristic algorithms can find acceptable solutions.
 Heuristics are fallible, i.e. they are likely to make mistakes as well
 It is the approach of following an informed guess about the next step to be taken
 It is often based on experience or intuition
 Heuristics have limited information and hence can lead to a suboptimal solution, or even fail to find any solution at all

a) Best First Search


 A node is selected for expansion based on evaluation function f(n)
 A node with lowest evaluation function is expanded first

 The measure i.e. evaluation function must incorporate some estimate of the cost of the path from
a state to the closest goal state

 The algorithm may have different evaluation function, one of such important function is the
heuristic function h(n)
where, h(n) is the estimated cost of the cheapest path from node n to the goal

Best First Search: Types


 Greedy Best First Search
 A* Search
a.1) Greedy Best First Search
 The node whose state is judged to be closest to the goal state is expanded first
 At each step it tries to get as close to the goal as it can
 It evaluates nodes by using only the heuristic function
hence, f(n) = h(n)
where, h(n) = 0 for the goal node
This search resembles depth first search in that it prefers to follow a single path all the way to the goal, backing up only when a dead end is reached.
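A minimal sketch with a small hypothetical graph and heuristic table H (both invented for illustration; note h(goal) = 0 as required):

```python
import heapq

# Hypothetical graph and heuristic estimates h(n) toward the goal G.
GRAPH = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
H = {"S": 7, "A": 2, "B": 6, "G": 0}     # h(n) = 0 for the goal node

def greedy_best_first(start, goal):
    """Greedy best-first search: always expand the node whose heuristic
    value f(n) = h(n) is lowest, i.e. the one judged closest to the goal."""
    fringe = [(H[start], start, [start])]
    visited = set()
    while fringe:
        _, node, path = heapq.heappop(fringe)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for succ in GRAPH[node]:
            heapq.heappush(fringe, (H[succ], succ, path + [succ]))
    return None

print(greedy_best_first("S", "G"))  # ['S', 'A', 'G']
```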
Four Criteria

a ) Completeness
 Can start down an infinite path and never return to any possibilities
 Not complete
b) Optimality
 Looks for immediate best choice and doesn’t make careful analysis of long term
options
 May give longer solution even if shorter solution exists
 Not optimal
c) Space Complexity
 O(b^m) where m is the maximum depth of the search space, since all nodes have to be kept in memory
d) Time Complexity
 O(b^m)

a.2) A* Search
 Evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost to get from the node to the goal
f(n) = g(n) + h(n)
where f(n) is the estimated cost of the cheapest solution through node n
 To find the cheapest solution, the node with the lowest g(n) + h(n) is tried first
 Admissible Heuristic: h(n) is admissible if it never overestimates the cost to reach the goal
example: h_SLD (straight line distance) never overestimates; since g(n) is exact, f(n) never overestimates the true cost
➢ When h(n) = actual cost to goal
- Only nodes on the correct path are expanded.
- Optimal solution is found.
➢ When h(n) < actual cost to goal
- Additional nodes are expanded
- Optimal solution is found
➢ When h(n) > actual cost to goal
- Optimal solution can be overlooked.
a) Optimality
 Optimal if h(n) is admissible
b) Completeness
 Complete if h(n) is admissible
c) Space Complexity
 O(b^d) if h(n) is admissible

d) Time Complexity

 O(b^d) if h(n) is admissible
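A minimal sketch on a small hypothetical weighted graph with an admissible heuristic (the graph and H values are invented; each h(n) is at or below the true remaining cost to G):

```python
import heapq

# Hypothetical weighted graph; H never overestimates the cost to reach G.
GRAPH = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 12)],
    "B": [("G", 5)],
    "G": [],
}
H = {"S": 7, "A": 6, "B": 4, "G": 0}

def a_star(start, goal):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n)."""
    fringe = [(H[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {}
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                           # already reached more cheaply
        best_g[node] = g
        for succ, step in GRAPH[node]:
            g2 = g + step
            heapq.heappush(fringe, (g2 + H[succ], g2, succ, path + [succ]))
    return None

print(a_star("S", "G"))  # (8, ['S', 'A', 'B', 'G']) - the optimal path
```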


b) Local Search Algorithm and Optimization
 Optimization
 Aim to find the best state according to an objective function
 Local Search Algorithm
 It operates using a single current state rather than multiple path and
generally move only to neighbour of that state
c) Iterative Improvement Algorithm
 Consider that all the states are laid out on the surface of a landscape
 The height of a point on the landscape corresponds to the value of the state at that point; the idea is to move around the landscape trying to find the highest peaks, which are the optimal solutions
 The algorithm is suitable for problems where the path to the goal is irrelevant and only the final configuration matters

[Fig.: A One-Dimensional State Space Landscape — the objective function plotted against the state space, showing the global maximum, local maxima, a flat local maximum, a shoulder, local minima, the global minimum, and the current state.]

Iterative Improvement Algorithm: Types


 Hill Climbing Search
 Simulated Annealing Search
 Local Beam Search
 Genetic Algorithm

c. 1) Hill Climbing Search


 Hill climbing is an optimization technique for solving computationally hard problems.
 Used in problems with “the property that the state description itself contains all the
information”
 The algorithm is memory efficient since it does not maintain a search tree
 Hill climbing attempts to iteratively improve the current state by means of an evaluation
function
 Searching for a goal state = Climbing to the top of a hill
 Moves continuously in the direction of increasing value (uphill)
 Doesn’t maintain a search tree so the current node data structure needs only record the
state and its objective function value
 Hill climbing doesn't look ahead beyond the immediate neighbours of the current state
 Also called greedy local search because it grabs a good neighbour state without thinking about where to go next
 It often makes very rapid progress towards the solution because it is usually quite easy to improve a bad state
 One move is selected and all other nodes are rejected and never considered again
 Halts if no successor has a higher value
Hill Climbing Search: Problems
 Local Maxima
 Peak that is higher than each of its neighbouring states but lower than the global
maxima
 Hill climbing halts whenever a local maxima is reached
 Plateau

 An area of the state space landscape where the evaluation function is flat

 Can be flat local maxima where no uphill exists or shoulder from which it is possible to
progress

 A hill climbing search might be unable to find its way off the plateau

 Ridges

 A special kind of local maxima which is the result of a sequence of local maxima that is
very difficult for greedy algorithms to navigate

 It is an area of search space that is higher than the surrounding areas and that itself is at a
slope

Algorithm

1. Determine successors of current state

2. Choose successor of maximum goodness

3. If goodness of best successor is less than current state's goodness, stop

4. Otherwise make best successor the current state and go to step 1

Problem: depending on initial state, can get stuck in local maxima
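Steps 1-4 above as code (a minimal sketch; the two-peak objective f below is invented to demonstrate the local-maximum problem):

```python
def hill_climbing(start, successors, goodness):
    """Repeatedly move to the best successor; stop when no successor
    improves on the current state (a peak, possibly only a local one)."""
    current = start
    while True:
        succs = successors(current)               # step 1
        if not succs:
            return current
        best = max(succs, key=goodness)           # step 2
        if goodness(best) <= goodness(current):   # step 3: no improvement
            return current
        current = best                            # step 4

# Invented objective: local maximum at x=2 (f=5), global maximum at x=8 (f=20).
def f(x):
    return -(x - 2) ** 2 + 5 if x < 5 else -(x - 8) ** 2 + 20

peak = hill_climbing(0, lambda x: [x - 1, x + 1], f)
print(peak, f(peak))  # 2 5 - stuck on the local maximum
```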

Hill Climbing Search: Solution to the Problems
 Local Maxima

 Backtrack to some earlier node and try going to different direction

 Plateau

 Make a big jump in some direction to try to get to a new section of the search space

 If the rules apply only single small steps, apply them several times in the same direction

 Ridges

 Apply two or more rules before doing the test, i.e. moving in several directions at once

c.2) Simulated Annealing Search


 Rather than starting all over again from a different initial state when the current state shows no progress, this search takes some downhill steps so that it can escape a particular local maximum and continue towards other peaks in the state space

 A random pick is made for the move

 If it improves the situation, it is accepted straight away

 If it worsens the situation, it is accepted with some probability less than 1, which decreases exponentially with the badness of the move, i.e. for very bad moves the probability is low and for comparatively less bad ones it is higher

 The degree of badness of the move is determined by the amount ∆E by which the evaluation is worsened

 The probability also depends on the value of a temperature parameter 'T'
 For high values of T, the probability of accepting a bad move is high; as T decreases, so does the probability

 Hence, bad moves are more likely to be allowed at the beginning only

 This method is more common in VLSI layout problem solving, factory scheduling and travelling
salesman problems
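A minimal sketch of the acceptance rule exp(∆E/T) with a geometric cooling schedule (the schedule, step count, and objective below are illustrative choices, not prescribed by the text):

```python
import math
import random

def simulated_annealing(start, successors, value, t0=10.0, cooling=0.95,
                        steps=500, seed=0):
    """Always accept an improving move; accept a move that worsens the
    value by dE with probability exp(dE / T), which shrinks as T cools."""
    random.seed(seed)
    current, t = start, t0
    for _ in range(steps):
        nxt = random.choice(successors(current))   # a random pick for the move
        d_e = value(nxt) - value(current)          # > 0 means improvement
        if d_e > 0 or random.random() < math.exp(d_e / t):
            current = nxt                          # downhill moves allowed early
        t *= cooling                               # cooling schedule
    return current

# Two-peak objective: local maximum at x=2 (f=5), global maximum at x=8 (f=20).
def f(x):
    return -(x - 2) ** 2 + 5 if x < 5 else -(x - 8) ** 2 + 20

best = simulated_annealing(0, lambda x: [x - 1, x + 1], f)
print(best, f(best))
```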

c.3) Local Beam Search


 A path-based algorithm
 Keeps track of k states rather than just one
 Begins with k randomly generated states; at each step, all the successors of all k states are generated
 If any one is the goal, the algorithm halts
 Can quickly become concentrated in a small region of the state space

c.4) Genetic algorithms


 A successor state is generated by combining two parent states
 Start with k randomly generated states (population)
 A state is represented as a string over a finite alphabet (often a string of 0s and 1s)
 Evaluation function (fitness function). Higher values for better states.
 Produce the next generation of states by selection, crossover, and mutation
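A minimal sketch of selection, crossover, and mutation on bit strings, using the OneMax fitness (number of 1s) as an illustrative fitness function:

```python
import random

def genetic_algorithm(fitness, bits=8, pop_size=20, generations=60, seed=1):
    """Evolve a population of bit strings by fitness-proportional
    selection, single-point crossover, and bit-flip mutation."""
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(ind) + 1e-9 for ind in pop]   # avoid all-zero weights
        nxt = []
        for _ in range(pop_size):
            mom, dad = random.choices(pop, weights=weights, k=2)  # selection
            cut = random.randrange(1, bits)                       # crossover point
            child = mom[:cut] + dad[cut:]
            if random.random() < 0.05:                            # mutation
                i = random.randrange(bits)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax: fitness is the number of 1s; the optimum is the all-ones string.
best = genetic_algorithm(fitness=sum)
print(best, sum(best))
```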

d) Adversarial Search Techniques


 Often known as Games or Game Playing
 Used in competitive multi-agent environments
 Based on game theory
 Deterministic and fully observable environments in which there are two agents whose
actions must alternate and in which the utility values at the end of the game are always
equal and opposite
 This creates adversarial situation
Optimal Decision in Adversarial Search

 A game can be defined as a kind of search problem with the following components:
 Initial State identifying the initial position in the game and the first player to move
 Successor Function returning a list of (move, state) pairs
 Terminal Test which determines when the game is over
 Utility Function which gives a numeric value for the terminal states.

d.1) Minimax Algorithm


 Max is considered as the first player in the game and Min as the second player
 This algorithm computes the minimax decision from the current state
 It uses a recursive computation of the minimax values of each successor state
 The recursion proceeds all the way down to the leaf nodes
 Then the minimax values are backed up through the tree as the recursion unwinds
 It performs a complete depth first exploration of the game tree
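A minimal sketch of the recursive computation on a small hypothetical two-ply tree (MAX at the root, MIN below; the leaf utilities are invented):

```python
def minimax(node, maximizing, tree, utility):
    """Recurse depth-first to the leaves, then back the minimax values
    up through the tree as the recursion unwinds."""
    if node not in tree:                       # leaf: return its utility
        return utility[node]
    values = [minimax(child, not maximizing, tree, utility)
              for child in tree[node]]
    return max(values) if maximizing else min(values)

# Hypothetical two-ply game tree: MAX chooses at the root, MIN replies.
TREE = {"root": ["a", "b", "c"],
        "a": ["a1", "a2"], "b": ["b1", "b2"], "c": ["c1", "c2"]}
UTILITY = {"a1": 3, "a2": 12, "b1": 8, "b2": 2, "c1": 14, "c2": 1}

# MIN backs up min(3,12)=3, min(8,2)=2, min(14,1)=1; MAX picks the best: 3.
print(minimax("root", True, TREE, UTILITY))  # 3
```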

d.2 ) Alpha Beta Pruning


 The minimax algorithm has to examine a number of moves that grows exponentially with depth
 While the exponential growth can't be eliminated, alpha beta pruning can effectively cut the exponent in half
 By not considering a large part of the tree, the number of states to be examined is cut down
 When applied to a standard minimax tree, alpha beta pruning returns the same move as
minimax would, but prunes away the branches which couldn’t possibly influence the
final decision
 Alpha beta pruning could be applied to the trees of any depth
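A minimal sketch of minimax with alpha-beta cutoffs on a small hypothetical tree (leaf utilities invented); it backs up the same value plain minimax would, while skipping branches that cannot influence the decision:

```python
def alphabeta(node, maximizing, tree, utility,
              alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning: stop exploring a node's children
    as soon as alpha >= beta, i.e. the branch cannot affect the result."""
    if node not in tree:                       # leaf: return its utility
        return utility[node]
    if maximizing:
        value = float("-inf")
        for child in tree[node]:
            value = max(value, alphabeta(child, False, tree, utility, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                          # beta cutoff: MIN avoids this branch
        return value
    value = float("inf")
    for child in tree[node]:
        value = min(value, alphabeta(child, True, tree, utility, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break                              # alpha cutoff: MAX avoids this branch
    return value

# Hypothetical tree: after b1=2 (and c1=1), MIN's value is already below
# MAX's alpha of 3, so the remaining siblings b2 and c2 are pruned.
TREE = {"root": ["a", "b", "c"],
        "a": ["a1", "a2"], "b": ["b1", "b2"], "c": ["c1", "c2"]}
UTILITY = {"a1": 3, "a2": 12, "b1": 2, "b2": 8, "c1": 1, "c2": 14}

print(alphabeta("root", True, TREE, UTILITY))  # 3
```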

Chapter 2 Agent, Search and Game Playing
Agent
An Agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. This is analogous to a human agent, which has eyes, ears and other organs for sensors and hands, legs, mouth, and other body parts for actuators. A robotic agent might have cameras and infrared range finders for sensors and various motors for actuators.
What is black box AI? Put simply it is the idea that we can understand what goes in and what
comes out, but don’t understand what goes on inside.

The job of AI is to design the agent program that implements the agent function mapping percepts to actions. We assume this program will run on some computing device with physical sensors and actuators, which is called the architecture.
agent = architecture + program
An example of a taxi driver agent is given below

Agent Type: Taxi Driver
Percepts: Cameras, speedometer, GPS, sonar, microphone
Actions: Steer, accelerate, brake, talk to passenger
Goals: Safe, fast, legal, comfortable trip; maximize profits
Environment: Roads, other traffic, pedestrians, customers

Types of Agent Programs


• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
a) Simple reflex agents
• A simple agent which selects actions on the basis of the current percept
• Useful for quick automated responses
• e.g. the reflex action of a human pulling away from fire

b) Model-based reflex agents


• Made to deal with partial accessibility
• It keeps an internal state that depends on what it has seen before.
• To update the state we need two kinds of knowledge:
➢ Information on how the world evolves on its own
➢ How the world is affected by the agent's actions

c) Goal-based Agents

• The agent can use goals with a set of actions and their predicted outcomes to
see which actions achieve our goals
• Search and planning are two subfields to achieve an agent's goal.
• A simple example would be the shopping list; our goal is to pick up everything on that list. This makes it easier to decide between milk and orange juice when you can only afford one.
d) Utility-based agents
➢ A utility function maps each state after each action to a real number representing how efficiently each action achieves the goal.
➢ They choose actions based on a preference (utility) for each state.
➢ Useful when many actions can achieve the same goal.
➢ Sometimes achieving the desired goal is not enough.
➢ We may look for a quicker, safer, cheaper trip to reach a destination.
➢ Because of the uncertainty in the world, a utility agent chooses the action that maximizes the expected utility.
➢ Example:
• Quicker, safer, more reliable ways to get where we are going
• Price comparison shopping
• Bidding and evaluating bids on items in an auction

Intentionality and Goals

Intentionality is the ability to think, feel and act in a deliberate way towards a purpose. It is a
term of philosophy that comes up in relation to artificial intelligence. Where it is widely believed
that intentionality is a common human trait and ability, it is less clear whether machines could
ever possess such a dimension.
Most AI is focused on practical problems such as recognizing an image, driving a car or
searching the internet. As such, goals are often more or less hardcoded into AI software. In other
words, AI is currently mostly about learning how to answer a question, make a decision or
predict something. It isn't about forming high level goals and acting with a purpose.
Thought experiments such as the Chinese Room suggest that AI is fundamentally different from consciousness because machines process symbols and don't "understand" them. If a machine were to develop consciousness, intentionality would be an early precursor. A superintelligent machine with intentionality could represent an existential risk, as its goals might not align with the interests of humans.

Overview: Intentionality

Definition: The ability to think, feel and act in a deliberate way towards a purpose.
Related Concepts: Chinese Room, Artificial Intelligence, Thought Processes, Consciousness, Affective Computing, Superintelligence

Games
A game can be formally defined as a kind of search problem with the following components:

• The initial state , which includes the board position and an indication of whose
move it is.
• A set of Operators, which define the legal moves that a player can make.
• A terminal Test, which determines when the game is over. States where the
game has ended are called terminal states.
• Payoff function. A utility function (also called a payoff function), which gives a
numeric value for the outcome of a game. In chess, the outcome is a win, lose or
draw, which we can represent by the values +1, -1 or 0. Some games have a wider
variety of possible outcomes; for example, the payoffs in backgammon range
from +192 to -192.

Game Playing:
Game playing involves an abstract and pure form of computation that seems to require intelligence. So, game playing has a close relationship to intelligence, and it has well-defined states and rules. Game playing research has contributed ideas on how to make the best use of time to reach good decisions. Game playing is important in AI due to the following reasons:

• The state of game is easy to represent.


• The rules of the game are limited and precise. Further the rules of the games are
static, known to both players
• They provide a structured task. Therefore, success or failure can be easily
measured.
• Games are examples from a class of problems concerned with reasoning about
actions.
• Games simulate real life situation.

Strategies Rules:
Plies, Moves and Turns
A strategy is a list of the optimal choices for each player at each stage of a given game. It is common in game theory to refer to one player's turn as a "ply" of the game. One round of all the players' turns is called a "move". This originates in Chess, where one move consists of each player taking one turn. Because most turn-based AI is based on Chess-playing programs, the word "move" is often used in this context.
There are many more games, however, that treat each player's turn as a separate move, and this is the terminology normally used in turn-based strategy games.
A game has at least two players. An instance of a game begins with a player choosing a move from a set specified by the game rules. After the first move, the new situation determines which player is to move next and which alternatives are available to that player.
Game Playing with Minimax Tic-Tac-Toe (TTT)
Assume that two players named X (MAX) and 0 (MIN) who are playing the game. MAX is
playing first. Initially MAX has 9 possible moves. Play alternates between MAX and MIN until
we reach leaf nodes corresponding to terminal states such that one player has 3 in a row or all
squares are filled.
The number on each leaf node indicates the utility value of the terminal state from the point of
view of MAX. High value is assumed to be good for MAX and bad for MIN
Consider the game with initial state as :

X 0 0
0 X
X
Step 1 MAX player moves by making X

X 0 0
0 X
X

X 0 0 X 0 0 X 0 0
0 X X 0 X 0 X
X X X X
X

Step 2 MIN player moves by making 0

X 0 0
0 X
X
Max's Move

X 0 0 X 0 0 X 0 0
0 X X 0 X 0 X
X X X X X

X 0 0 X 0 0
X 0 0
0 X X 0 X
0 0 X
X 0 X x X 0 x

6
Compiled By:
[Link] Raj Aryal
Add. Associate Professor
Cosmos College of Management & Technology
X 0 0
0 X X
X 0

X 0 0 X 0 0
0 X 0 0 X
MIN's move X x 0 X x

Step3 Again MAX's moves

Evaluation Function
An evaluation function, also known as a heuristic evaluation function or static evaluation
function, is a function used by game-playing programs to estimate the value or goodness of a
position in the minimax and related algorithms.

Utilitarian
It is one particular form of consequentialism, in which the “good consequence” is considered to
be the one that maximizes happiness (or: welfare, benefit) for all people concerned.
Consequentialism would look only at the consequences of an action to judge whether the action
is right or wrong. A consequentialist would say that an action is morally right if its consequences
lead to a situation that is clearly better than things were before the action.

Decision Making
Decision making is closely related to reasoning. One of the possible definitions of artificial intelligence (AI) refers to cognitive processes and especially to reasoning. Before making any decision, people also reason. Artificial Intelligence is being used in decision support for tasks such as aiding the decision maker to select actions in real-time and stressful decision problems;
reducing information overload, enabling up-to-date information, and providing a dynamic
response with intelligent agents; enabling communication required for collaborative decisions;
and dealing with uncertainty in decision problems.

Planning
Planning is a key ability for intelligent systems, increasing their autonomy and flexibility through
the construction of sequences of actions to achieve their goals. Planning techniques have been
applied in a variety of tasks including robotics, process planning, web-based information
gathering, autonomous agents and spacecraft mission control. Planning involves the
representation of actions and world models, reasoning about the effects of actions, and
techniques for efficiently searching the space of possible plans.

Internal Representation:
Knowledge is the information about a domain that can be used to solve problems in that domain.
To solve many problems requires much knowledge, and this knowledge must be represented in
the computer. As part of designing a program to solve problems, we must define how the
knowledge will be represented. A representation scheme is the form of the knowledge that is
used in an agent. A representation of some piece of knowledge is the internal representation of
the knowledge.

