The document discusses problem-solving in Artificial Intelligence (AI), outlining the state space representation of problems and the steps involved in generating solutions. It details various problem-solving methods, including uninformed search algorithms like Breadth-First Search and Depth-First Search, as well as informed search algorithms such as A* and Hill Climbing. Each method is explained with examples, advantages, and disadvantages, emphasizing the importance of heuristics in improving search efficiency.
Problem Solving In AI

Problem solving is the process of finding a solution by moving step by step from the initial problem to the
desired solution. In Artificial Intelligence, problems are usually represented as state space problems.
A state space is the collection of all possible situations (states) that can occur while solving a problem, starting
from the initial state and ending at the goal state.

Problem Solving as a Process


AI generates solutions using the following steps:
1. Define the problem clearly
2. Analyze the problem
3. Represent the required knowledge
4. Choose an appropriate problem-solving technique
5. Search for a solution

Defining the problem as State Space Search:


A problem is formally defined by four components:
• States
• Operators (actions)
• Initial state
• Goal state
State
A state describes the current situation of the problem using relevant attributes.
Example:
Location of a robot in a room.
Operator (Action)
An operator changes one state into another.
• Precondition: Condition that must be true before applying the action
• Effect: Change produced after applying the action
Example:
Move Left, Move Right, Pick Object
Initial State
The starting point of the problem.
Goal State
The desired solution state.
Simple Example: Robot Moving in a Room
Problem: A robot wants to move from Room A to Room C.
State Space
All possible rooms the robot can be in:
• Room A
• Room B
• Room C

Initial State
Robot is in Room A

Goal State
Robot reaches Room C

Operators (Actions)
• Move from A → B
• Move from B → C

Solution Path
A→B→C
AI searches this path in the state space to reach the goal.
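The robot example can be sketched as a tiny state-space search in Python. The `operators` table below is an assumption reconstructed from the example's actions (Move A → B, Move B → C); this is a minimal illustration, not a general search algorithm.

```python
# Minimal sketch of the robot problem as a state-space search.
# States are rooms; the operator table is assumed from the example above.
operators = {"A": ["B"], "B": ["C"], "C": []}   # Move A -> B, Move B -> C

def search(initial, goal):
    """Follow applicable operators from the initial state to the goal."""
    path = [initial]
    current = initial
    while current != goal:
        successors = operators[current]
        if not successors:
            return None               # dead end: no applicable operator
        current = successors[0]       # apply the first applicable operator
        path.append(current)
    return path

print(" -> ".join(search("A", "C")))  # A -> B -> C
```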

Problem Solving Methods in AI:


1. Uninformed search algorithm
2. Informed search algorithm

1. Uninformed search algorithm:


Uninformed search algorithms, also known as blind search algorithms, are problem-solving methods in
Artificial Intelligence that explore the state space without using any additional knowledge or heuristic
information about how close a particular state is to the goal. No Heuristic Knowledge means the algorithm
does not know which state is better or closer to the goal.
All nodes are treated equally.
These algorithms have no information about the goal location, distance, or direction. They only rely on:
• The initial state
• The set of possible actions (operators)
• The goal test (to check whether the current state is the goal)
• The path cost (in some algorithms)
Because of this limitation, uninformed search algorithms often explore a large number of states before finding
a solution, which leads to high time and space complexity.
Uninformed (blind) search algorithms are broadly classified into two main types:
1. Breadth-First–Type Search (Level-wise Search):
Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the
root node are expanded, then their successors, and so on. The BFS algorithm starts searching from the root
node of the tree and expands all successor nodes at the current level before moving to nodes at the next level.

Breadth-first search is implemented using a FIFO queue data structure.


Step 1: Start
• Initial state = S
• Put S into the queue
Queue: [ S ]
Step 2: Expand S (Level 0)
• Remove S
• Add its children A, B
Queue: [ A, B ]
✔ Level 1 explored
Step 3: Expand A (Level 1)
• Remove A
• Add its children C, D
Queue: [ B, C, D ]
Step 4: Expand B (Level 1)
• Remove B
• Add its children G, H
Queue: [ C, D, G, H ]
✔ Level 2 explored
Step 5: Expand C (Level 2)
• Remove C
• Add its children E, F
Queue: [ D, G, H, E, F ]
Step 6: Expand D (Level 2)
• Remove D
• No new child (or already visited)
Queue: [ G, H, E, F ]
Step 7: Expand G (Level 2)
• Remove G
• Add its child I
Queue: [ H, E, F, I ]
Step 8: Expand H (Level 2)
• Remove H
• No new child
Queue: [ E, F, I ]
✔ Level 3 explored
Step 9: Expand E (Level 3)
• Remove E
• Add its child K
Queue: [ F, I, K ]
Step 10: Goal Found
• Remove K
• K is the goal state
• STOP search
Final BFS Traversal Order: S → A → B → C → D → G → H → E → F → I → K
Breadth First Search Algorithm:
1. Create an empty queue Q
2. Insert the initial state S into Q
3. While Q is not empty:
a. Remove the front node N from Q
b. If N is the goal state, return success
c. Else expand N and insert all unvisited children into Q
4. If queue becomes empty, return failure
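The algorithm above can be sketched in Python. The `tree` dictionary below is an assumed adjacency list reconstructed from the worked trace (goal = K); with it, the code reproduces the traversal order shown above.

```python
from collections import deque

# Hypothetical tree matching the worked BFS example (goal = K).
tree = {
    "S": ["A", "B"], "A": ["C", "D"], "B": ["G", "H"],
    "C": ["E", "F"], "D": [], "G": ["I"], "H": [],
    "E": ["K"], "F": [], "I": [], "K": [],
}

def bfs(start, goal):
    queue = deque([start])          # FIFO frontier
    visited = {start}
    order = []                      # expansion order, for illustration
    while queue:
        node = queue.popleft()      # remove the front node
        order.append(node)
        if node == goal:            # goal test
            return order
        for child in tree[node]:
            if child not in visited:
                visited.add(child)
                queue.append(child)
    return None                     # queue empty: failure

print(" -> ".join(bfs("S", "K")))
# S -> A -> B -> C -> D -> G -> H -> E -> F -> I -> K
```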
Disadvantages:
• BFS requires lots of memory since each level of the tree must be saved into memory to expand the next
level.
• BFS needs lots of time if the solution is far away from the root node.

2. Depth-First–Type Search (Depth-wise Search):


DFS is a recursive algorithm for traversing a tree or graph data structure.
• It is called depth-first search because it starts from the root node and follows each path to its greatest
depth before moving to the next path.
• DFS uses a stack data structure for its implementation.

Step-by-Step Working of DFS


Step 1: Start
• Initial state = S
• Push S onto stack
Stack: [ S ]
Step 2: Expand S
• Move to first child A
Stack: [ S, A ]
Step 3: Expand A
• Go to first child B
Stack: [ S, A, B ]
Step 4: Expand B
• Go to first child D
Stack: [ S, A, B, D ]
Step 5: Expand D
• D has no children
• Backtrack to B
Step 6: Continue from B
• Next child is E
Stack: [ S, A, B, E ]
Step 7: Expand E
• No children
• Backtrack to A
Step 8: Continue from A
• Next child is C
Stack: [ S, A, C ]
Step 9: Expand C
• Go to child G
Stack: [ S, A, C, G ]
Step 10: Goal Found
• G is the goal state
• STOP search

DFS Traversal Order: S → A → B → D → E → C → G


DFS produces this order because it follows a simple rule: go deep first, not wide.
Depth First Search Algorithm:
1. Create an empty stack S
2. Push initial state into stack
3. While stack is not empty:
a. Pop the top node N
b. If N is the goal state, return success
c. Else push unvisited children of N onto stack
4. If stack becomes empty, return failure
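As a sketch, the trace above can be reproduced with a recursive DFS (recursion uses the call stack, matching the backtracking in the trace). The `tree` adjacency list is an assumption reconstructed from the worked example (goal = G).

```python
# Hypothetical tree matching the worked DFS example (goal = G).
tree = {"S": ["A"], "A": ["B", "C"], "B": ["D", "E"],
        "D": [], "E": [], "C": ["G"], "G": []}

def dfs(node, goal, order):
    """Depth-first search; records expansion order, returns True on success."""
    order.append(node)
    if node == goal:                 # goal test
        return True
    for child in tree[node]:         # go deep into each child in turn
        if dfs(child, goal, order):
            return True
    return False                     # backtrack: no path below this node

order = []
dfs("S", "G", order)
print(" -> ".join(order))            # S -> A -> B -> D -> E -> C -> G
```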
Advantages:
• DFS requires much less memory, as it only needs to store the stack of nodes on the path from the root node
to the current node.
• It can take less time to reach the goal node than the BFS algorithm (if it traverses the right path).
Disadvantages:
• There is the possibility that many states keep re-occurring, and there is no guarantee of finding the solution.
• DFS goes deep down in its search, and it may sometimes enter an infinite loop.

2. Informed search algorithm:


Informed search algorithms are AI search techniques that use heuristic information to choose the most
promising node, thereby reducing the search space and improving efficiency.
A heuristic is a function that estimates how close a state is to the goal.
Two common informed search algorithms are Hill Climbing and A* search.

1. Hill Climbing Search:
Hill climbing is a heuristic search algorithm that belongs to the family of local search methods. It is designed to
solve problems where the goal is to find an optimal (or near-optimal) solution by iteratively moving from the
current state to a better neighboring state, according to a heuristic or evaluation function. For example,
climbing a hill in the fog: You can only see the ground right around you, and at each step, you choose the
direction that goes upward (toward higher elevation).
The goal is to reach the highest peak, which represents the optimal solution.
Hill Climbing Algorithms
Hill climbing follows these steps:
1. Initial State: Start with an arbitrary or random solution (initial state).
2. Neighboring States: Identify neighboring states of the current solution by making small adjustments
(mutations or tweaks).
3. Move to Neighbor: If one of the neighboring states offers a better solution (according to some
evaluation function), move to this new state.
4. Termination: Repeat this process until no neighboring state is better than the current one. At this point,
we have reached a local maximum or minimum.

Types of Hill Climbing in Artificial Intelligence


1. Simple Hill Climbing Algorithm: Simple Hill Climbing is a straightforward variant of
hill climbing where the algorithm evaluates each neighbouring node one by one and
selects the first node that offers an improvement over the current one.
2. Steepest-Ascent Hill Climbing: Steepest-Ascent Hill Climbing is an enhanced
version of simple hill climbing. Instead of moving to the first neighboring node that
improves the state, it evaluates all neighbors and moves to the one offering the highest
improvement (steepest ascent).
3. Stochastic Hill Climbing: Stochastic Hill Climbing introduces randomness into the
search process. Instead of evaluating all neighbors or selecting the first improvement, it
selects a random neighboring node and decides whether to move based on its
improvement over the current state.

Key Regions in the State-Space Diagram


1. Local Maximum: A local maximum is a state better than its neighbors but not the best overall. While its
objective function value is higher than nearby states, a global maximum may still exist.
2. Global Maximum: The global maximum is the best state in the state-space diagram where the objective
function achieves its highest value. This is the optimal solution the algorithm seeks.
3. Plateau/Flat Local Maximum: A plateau is a flat region where neighboring states have the same
objective function value, making it difficult for the algorithm to decide on the best direction to move.
4. Ridge: A ridge is a higher region with a slope which can look like a peak. This may cause the algorithm
to stop prematurely, missing better solutions nearby.
5. Current State: The current state refers to the algorithm's position in the state-space diagram during its
search for the optimal solution.
6. Shoulder: A shoulder is a plateau with an uphill edge allowing the algorithm to move toward better
solutions if it continues searching beyond the plateau.
Problem: Find the maximum of the function f(x) = -x² + 4x using hill climbing.
1. Heuristic function: h(x) = f(x) = -x² + 4x (we want to maximize f(x))
2. Initial state: x = 0
Steps:
• Evaluate neighbors of x = 0:
o x = 1: f(1) = -1 + 4 = 3
o x = -1: f(-1) = -1 - 4 = -5
• Choose the best neighbor: x = 1 (value = 3)
• Evaluate neighbors of x = 1:
o x = 2: f(2) = -4 + 8 = 4
o x = 0: f(0) = 0
• Choose x = 2 (value = 4)
• Evaluate neighbors of x = 2:
o x = 3: f(3) = -9 + 12 = 3
o x = 1: f(1) = 3
• No neighbor is better than 4 → Stop.
Maximum found at x = 2, value = 4.
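This worked example can be sketched as a minimal steepest-ascent hill climb, assuming integer states whose neighbors are x - 1 and x + 1 (as in the steps above):

```python
def f(x):
    """Objective function from the example: f(x) = -x^2 + 4x."""
    return -x**2 + 4*x

def hill_climb(x=0):
    """Steepest-ascent hill climbing over integer states."""
    while True:
        neighbors = [x - 1, x + 1]
        best = max(neighbors, key=f)   # evaluate all neighbors, pick the best
        if f(best) <= f(x):            # no neighbor improves -> local maximum
            return x, f(x)
        x = best                       # move to the better neighbor

print(hill_climb())                    # (2, 4): maximum at x = 2, value = 4
```

For this function the local maximum found is also the global maximum; on a function with several peaks, the same loop can get stuck on whichever peak is nearest the starting state.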

Advantages
• Simple to implement.
• Can quickly find a solution.
Disadvantages
• Can get stuck in:
o Local maxima (not global).
o Plateaus (flat regions).
o Ridges (paths requiring non-greedy moves).

2. A* search:
A* is a best-first search algorithm used in AI for finding the shortest path from a start node to a goal node.
• It combines the cost so far (like Dijkstra’s algorithm) and an estimate of the remaining cost (heuristic).
• A* is complete and optimal if the heuristic is admissible (never overestimates the true cost).
A* search is a best-first search algorithm that uses a cost function to decide which node to explore next:
f(n)=g(n)+h(n)
Where:
• g(n)= actual cost from start node to node n
• h(n) = estimated cost (heuristic) from node n to goal
• f(n) = total estimated cost of the cheapest solution through node n
The rule: Always expand the node with the lowest f(n)
How A* Search Works
1. Put the start node in the open list (nodes to explore).
2. Repeat until the open list is empty or the goal is found:
a. Select the node with the lowest f(n) from the open list.
b. If it is the goal, stop.
c. Otherwise, move it to the closed list (visited nodes) and generate its neighbors.
d. For each neighbor:
o Calculate g(n), h(n), f(n)
o If the neighbor is not in the open/closed list, add it to the open list.
o If it is already in the open list but this path gives a lower g(n), update its f(n) and parent.
Example: find the shortest path from S to the goal E.
Heuristic values: h(A) = 5, h(B) = 6, h(C) = 4, h(D) = 15, h(X) = 5, h(Y) = 8
Step 1: Start at S
• g(S) = 0
S connects to two nodes:
Node A
• Cost S → A = 1
• g(A) = 1, h(A) = 5
• f(A) = 1 + 5 = 6
Node B
• Cost S → B = 2
• g(B) = 2, h(B) = 6
• f(B) = 2 + 6 = 8
Choose A (lowest f = 6)
Step 2: Expand A
A connects to X and Y.
Node X
• Cost A → X = 4
• g(X) = 1 + 4 = 5, h(X) = 5
• f(X) = 5 + 5 = 10
Node Y
• Cost A → Y = 7
• g(Y) = 1 + 7 = 8, h(Y) = 8
• f(Y) = 8 + 8 = 16
OPEN list now has:
• B (f = 8)
• X (f = 10)
• Y (f = 16)
Choose B (lowest f = 8)
Step 3: Expand B
B connects to C and D.
Node C
• Cost B → C = 7
• g(C) = 2 + 7 = 9, h(C) = 4
• f(C) = 9 + 4 = 13
Node D
• Cost B → D = 1
• g(D) = 2 + 1 = 3, h(D) = 15
• f(D) = 3 + 15 = 18
OPEN list:
• X (10)
• C (13)
• Y (16)
• D (18)
Choose X (lowest f = 10)
Step 4: Expand X
X connects to E.
Node E
• Cost X → E = 2
• g(E) = 5 + 2 = 7, h(E) = 0
• f(E) = 7 + 0 = 7
Goal Reached
Optimal Path: S → A → X → E
Total Cost: 7
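As a sketch, the worked example can be reproduced with a priority-queue implementation of A*. The `graph` edges and `h` values are taken from the steps above; h(S) = 0 is an assumption, since the start node's heuristic never affects the result.

```python
import heapq

# Edge costs and heuristic values from the worked A* example (goal = E).
graph = {
    "S": {"A": 1, "B": 2},
    "A": {"X": 4, "Y": 7},
    "B": {"C": 7, "D": 1},
    "X": {"E": 2},
    "Y": {}, "C": {}, "D": {}, "E": {},
}
h = {"S": 0, "A": 5, "B": 6, "C": 4, "D": 15, "X": 5, "Y": 8, "E": 0}

def a_star(start, goal):
    open_list = [(h[start], start)]      # OPEN list ordered by f(n) = g(n) + h(n)
    g = {start: 0}                       # best known cost-so-far per node
    parent = {start: None}
    while open_list:
        f, node = heapq.heappop(open_list)   # expand lowest-f node
        if node == goal:
            path = []
            while node is not None:          # reconstruct path via parents
                path.append(node)
                node = parent[node]
            return path[::-1], g[goal]
        for nbr, cost in graph[node].items():
            new_g = g[node] + cost
            if nbr not in g or new_g < g[nbr]:   # new node or cheaper path
                g[nbr] = new_g
                parent[nbr] = node
                heapq.heappush(open_list, (new_g + h[nbr], nbr))
    return None, float("inf")            # OPEN list empty: no path

print(a_star("S", "E"))                  # (['S', 'A', 'X', 'E'], 7)
```

The expansion order matches the trace above: S (f = 0), then A (6), B (8), X (10), and finally the goal E (7).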
Advantages of A* Search Algorithm
The A* search algorithm has several advantages in artificial intelligence and problem-solving.
1) Optimal Solution
A* guarantees the optimal (shortest) path from the start node to the goal node if the heuristic is admissible.
This makes it very useful where the shortest path is required.
2) Completeness
A* is complete, meaning it will always find a solution if one exists, provided the search space is finite.
3) Efficiency
A* uses a heuristic function to guide the search toward the goal. This reduces unnecessary exploration and
makes it more efficient than uninformed search algorithms like BFS and DFS.
4) Versatility
A* can be applied to many areas such as:
• Route planning
• Robotics
• Game development
• Navigation systems
As long as a good heuristic is available, A* performs well.

Disadvantages of A* Search Algorithm


Despite its advantages, A* also has some limitations.
1) Heuristic Dependency
The efficiency of A* depends heavily on the quality of the heuristic function.
If the heuristic is poor or inaccurate, A* may behave like a slow uninformed search.
2) High Memory Usage
A* stores all generated nodes in memory. For large search spaces, this can lead to memory
exhaustion.
3) Time Complexity
In the worst case, A* may explore a very large number of nodes, resulting in high time complexity,
especially when the heuristic is weak.
