
Overview of AI Concepts and Types

The document provides an overview of Artificial Intelligence (AI), covering its definition, goals, types, and various problems it addresses, such as search, reasoning, learning, natural language processing, and perception. It also discusses the components of intelligent agents and the nature of environments in which they operate, introducing the PEAS framework. Additionally, it delves into search algorithms, problem reduction techniques, and the properties of search algorithms, highlighting their significance in solving complex tasks.

Uploaded by

Vijaya Laxmi

UNIT-1

INTRODUCTION: AI problems, foundation of AI, history of AI.

INTELLIGENT AGENTS: Agents and environments, concepts of rationality, nature of environments, structure of agents, problem-solving agents, problem formulation.

Artificial Intelligence:
"Artificial" means man-made, and "intelligence" refers to the ability to think.

Artificial Intelligence is a branch of computer science concerned with creating intelligent machines that can behave like a human, think like a human, and make accurate decisions.

For example, smartphones and electronic gadgets such as smart watches traditionally work in a pre-programmed manner. With AI, we do not need to pre-program every behaviour: AI systems can assist using their own intelligence.

With the help of AI we can create software that solves real-world problems easily and accurately, in areas such as health care, marketing, and traffic management.

With the help of AI we can create virtual personal assistants like Google Assistant.

With the help of AI we can create robots that can work in environments where human survival would be difficult.

AI opens skill paths for new technologies, new domains, and new opportunities.

Goals of AI:

 To replicate and explain human intelligence.
 To solve knowledge-intensive tasks, forming an intelligent connection between perception and action.
 To build machines that can perform tasks requiring human intelligence, such as self-driving cars, surgical operations in the medical field, and playing chess.
 To create systems that can exhibit intelligent behaviour.

--------- xxxx ------------


AI Problems and Their Types

Artificial Intelligence deals with building systems that can solve problems intelligently like
humans. The main types of AI problems are:

1. Search Problems

 Many AI tasks can be seen as searching for a solution among possible alternatives.
 AI uses search algorithms to find the best path or sequence of actions to reach a
goal.
 Techniques:
o Breadth First Search (BFS)
o Depth First Search (DFS)
o Heuristic Search (A*, Hill Climbing)
 Examples:
o Google Maps (finding the shortest path)
o Solving puzzles like 8-puzzle, chess moves
o Planning robot actions
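As a concrete illustration of search, here is a minimal Breadth First Search sketch in Python. The `roads` adjacency list and the city names are invented toy data, and `bfs` is a hypothetical helper written for this note, not code from any particular library:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth First Search: returns the shortest path (fewest edges)
    from start to goal, or None if the goal is unreachable."""
    frontier = deque([[start]])        # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

# Hypothetical road map: city -> list of neighbouring cities
roads = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
}
print(bfs(roads, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Because BFS expands states level by level, the first path that reaches the goal is guaranteed to use the fewest actions, which is why it is complete and optimal when all actions cost the same.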

2. Reasoning and Inference Problems

 Reasoning means deriving new facts or conclusions from existing knowledge.


 AI systems use logic and inference engines to make decisions.
 Examples:
o Expert systems in medical field (diagnosing diseases)
o Legal reasoning systems
o Recommender systems (suggesting products or movies)

3. Learning Problems

 Instead of being pre-programmed, AI systems can learn from data and experiences.
 This is the core of Machine Learning.
 Types of Learning:
o Supervised Learning – learning from labeled data (e.g., spam mail detection).
o Unsupervised Learning – learning patterns without labels (e.g., clustering
customers).
o Reinforcement Learning – learning by trial and error using rewards (e.g.,
training robots, game-playing AI).
 Examples:
o Email spam detection
o Fraud detection in banking
o Game AI like AlphaGo
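Supervised learning can be sketched with a tiny 1-nearest-neighbour classifier in pure Python. The feature vectors and the "spam"/"ham" labels below are invented toy data, chosen only to illustrate learning from labelled examples:

```python
def nearest_neighbour(train, query):
    """1-nearest-neighbour classifier: predict the label of the
    training point closest (squared Euclidean distance) to the query."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    point, label = min(train, key=lambda pl: dist(pl[0], query))
    return label

# Toy labelled data: (feature vector, label), e.g. word count and
# link count of an email, labelled "spam" / "ham" (made-up numbers)
train = [((10, 0), "ham"), ((2, 5), "spam"),
         ((12, 1), "ham"), ((1, 7), "spam")]
print(nearest_neighbour(train, (3, 6)))  # spam
```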

4. Natural Language Processing (NLP) Problems

 AI aims to understand and generate human language.


 Involves both text and speech.
 Examples:
o Chatbots (like ChatGPT, Alexa, Google Assistant)
o Machine Translation (Google Translate)
o Sentiment analysis of social media posts
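A very rough sketch of sentiment analysis, assuming a hand-made word-list approach; the word lists below are illustrative only, and real NLP systems use far richer models:

```python
POSITIVE = {"good", "great", "love", "excellent"}
NEGATIVE = {"bad", "poor", "hate", "terrible"}

def sentiment(text):
    """Count positive vs. negative words and return a coarse label."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this phone, the camera is great"))  # positive
```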

5. Perception Problems

 AI uses sensors and recognition systems to understand the physical world.


 Deals with processing images, sounds, and environment data.
 Examples:
o Image recognition (face detection in cameras)
o Speech recognition (voice commands in smartphones)
o Self-driving cars (detecting roads, traffic signs, pedestrians)

--------- xxxx-------

AI TYPES:
"Artificial" refers to something made by humans (non-natural), and "intelligence" refers to the ability to understand and think.

AI is not a system in itself; it is implemented in systems.

AI is commonly broken down into two primary classifications: one based on capabilities and one based on functionalities.

I. Based on Capabilities

1. Narrow AI (Weak AI)

 Narrow AI is designed to perform a specific task only.


 It does not have general intelligence or understanding.
 Examples:
o Google Assistant, Siri, Alexa
o Spam email filtering
o Recommendation systems (Netflix, YouTube, Amazon)

2. General AI (Strong AI)

 General AI can perform any intellectual task that a human can do.
 It can learn, reason, and adapt to new situations.
 Still a theoretical concept, not fully developed yet.
 Examples (conceptual):
o A robot doctor diagnosing multiple diseases like a human doctor
o AI that can write, drive, play games, and think like humans

3. Super AI

 Super AI will surpass human intelligence and capabilities.


 It can think, reason, solve problems, make decisions, and even have emotions.
 Currently fictional/futuristic.
 Examples (imaginary):
o AI shown in movies like “Jarvis” (Iron Man) or “Terminator”
o Self-aware robots with creativity and emotions

II. Based on Functionalities

1. Reactive Machines

 Basic type of AI, reacts only to current situations.


 No memory of past experiences.
 Examples:
o IBM’s Deep Blue (chess-playing computer)
o Basic robots

2. Limited Memory AI

 Can use past data and experiences to make better decisions.


 Most modern AI applications fall into this category.
 Examples:
o Self-driving cars (use past data of roads, traffic)
o Chatbots

3. Theory of Mind AI

 Next level of AI (still in research).


 Can understand human emotions, beliefs, and social interaction.
 Example (conceptual):
o Robots that act as therapists or companions
o AI that understands moods and reacts accordingly

4. Self-Aware AI

 Final stage of AI (still hypothetical).


 AI will have consciousness, self-awareness, and emotions.
 Example (fictional):
o Robots in movies like “Ex Machina” or “Her”
o Future AI that knows it exists

---------------- xxxxxxxxx----------------

Agents in AI
 Agents are software programs that can sense the environment, make decisions, and
take actions to achieve goals.
 Example:
o Google Assistant (agent that takes voice input and gives results)
o Self-driving car (agent that senses road & drives safely)
The nature of environments: Task environments are essentially the problems to which rational agents are the solutions. To assess the rationality of even the simple vacuum-cleaner agent, we have to specify the performance measure, the environment, and the agent's actuators and sensors. These four elements together are called the PEAS description.

For example:

Performance measure:

For an automated driver, desirable qualities include getting to the correct destination; minimizing fuel consumption, travel time, and cost; minimizing violations of traffic rules and disturbance to other drivers; and maximizing passenger safety, passenger comfort, and profit.

Environment: The driving environment the taxi will face. Taxi drivers must deal with a variety of roads, ranging from rural lanes to urban streets, which may contain traffic, pedestrians, road work, police cars, etc.

Actuators: The actuators for an automated taxi include those available to a human driver: control over the engine through the accelerator, and control over steering and braking.

Sensors: The basic sensors for a taxi include one or more controllable video cameras so that it can see the road; these may be augmented with infrared or sonar systems to detect the distance to other cars and obstacles.

Below is a PEAS (Performance measure, Environment, Actuators, Sensors) table:

Agent Type | P (Performance Measure) | E (Environment) | A (Actuators) | S (Sensors)
Taxi | Safe, fast, comfortable, stable, maximize profits | Roads, traffic, pedestrians, customers | Steering, accelerator, brake, signals, road display | Camera, GPS, engine sensors, sonar, speedometer
Medical diagnosis system | Healthy patient, reduced cost | Patient, hospital, doctors, nurses | Display of questions, tests, treatments, diagnoses | Keyboard entry of symptoms, patient answers
Satellite image analysis system | Correct image categorization | Satellite image downlink | Display of the categorization of observed data | Colour image with focus and zoom
Part-picking robot | Percentage of parts placed in the correct bins | Conveyor belt with parts; bins | Jointed arm and hand | Camera, joint angle sensors
Refinery controller | Maximize purity, safety | Refinery, products | Valves, pumps, displays | Temperature, pressure, chemical sensors
Interactive tutoring agent | Maximize student's score on test | Set of students | Display screen, questions and suggestions | Keyboard entries

------------- xxxxxxxxxxxx-----------------

Components of a Rational Agent


A rational agent comprises several key components:
1. Perception: The ability to perceive the environment through sensors.
2. Knowledge Base: Information the agent has about the environment and itself.
3. Decision-Making Process: Algorithms and rules that guide the agent's actions.
4. Action: The ability to perform actions that affect the environment through actuators.
---------------xxxxxxxxxxxx--------------

UNIT – II
Searching – Searching for solutions, uninformed search strategies – breadth first search, depth first search. Search with partial information (heuristic search): hill climbing, A*, AO* algorithms, problem reduction. Game playing – adversarial search, games, minimax algorithm, optimal decisions in multiplayer games, problems in game playing, alpha-beta pruning, evaluation functions.

SEARCHING: Searching is the process of finding a sequence of actions that leads from the start
state to a goal state within a given state space.

A searching algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the actions it recommends can be carried out; this is known as the execution phase.

A searching problem consists of:

1. A state space
2. A start state
3. A goal test

1. State space: the set of all possible states we can be in.
2. Start state: the state from which the search begins.
3. Goal test: a function that examines the current state and returns whether or not it is a goal state.

 The solution to a search problem is a sequence of actions called a path, which leads from the start state to the goal state.
 The plan is found by a search algorithm, and search algorithms are divided into two categories:

(a) Uninformed search:

1. Depth first search
2. Breadth first search
3. Uniform cost search
4. Depth limited search
5. Iterative deepening depth first search
6. Bidirectional search

(b) Informed search:

1. Best first search
2. A* search
3. AO* algorithm
4. Problem reduction
5. Hill climbing

Example 1: Vacuum World Problem

 Initial State: The vacuum agent is located in one of two squares (Left or Right). Each
square may be clean or dirty.
 Actions:
1. Move Left
2. Move Right
3. Suck (clean the square)
 Goal State: All squares must be clean, regardless of the agent’s position.
 Path Cost: Each action (move or suck) has a cost of 1. The path cost is the total
number of actions.
 Successor Function: Generates new states when the agent applies actions (move or
clean).
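The vacuum-world formulation above can be sketched directly in Python. The state encoding below (agent location plus the status of each square) is one possible choice made for illustration, not the only one:

```python
from itertools import product

# A state is (agent_location, left_square_status, right_square_status)
ACTIONS = ["Left", "Right", "Suck"]

def successor(state, action):
    """Apply an action to a vacuum-world state; return the new state."""
    loc, left, right = state
    if action == "Left":
        return ("L", left, right)
    if action == "Right":
        return ("R", left, right)
    # Suck cleans the square the agent currently occupies
    if loc == "L":
        return (loc, "Clean", right)
    return (loc, left, "Clean")

def is_goal(state):
    """Goal test: all squares clean, regardless of agent position."""
    return state[1] == "Clean" and state[2] == "Clean"

# Full state space: 2 locations x 2 statuses x 2 statuses = 8 states
states = list(product("LR", ["Clean", "Dirty"], ["Clean", "Dirty"]))
print(len(states))                                  # 8
print(successor(("L", "Dirty", "Dirty"), "Suck"))   # ('L', 'Clean', 'Dirty')
```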

Ex 2: 8 Puzzle problem

The 8 puzzle problem consists of a 3×3 board with 8 number tiles and a blank space.

Initial State (the blank square is shown as _):

1 2 3
5 _ 6
7 8 4

Goal State:

1 2 3
5 8 6
7 _ 4

1. Initial State
o The starting arrangement of the tiles on the 3×3 board.
o Example (blank shown as _):
o 1 2 3
o 5 _ 6
o 7 8 4
2. State Space
o The set of all possible configurations of the 8 numbered tiles and the blank
space (total = 9! = 362,880 possible states).
3. Successor Function
o Defines all possible moves by sliding a tile into the blank space (Up, Down,
Left, Right).
o Each move creates a new state.
4. Goal State
o The desired arrangement of tiles.
o Standard goal state is (blank shown as _):
o 1 2 3
o 4 5 6
o 7 8 _
5. Path Cost
o Each move costs 1.
o The total cost is the number of moves taken to reach the goal.
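A possible sketch of the successor function for the 8-puzzle, encoding a state as a 9-tuple in row-major order with 0 standing for the blank (this encoding and the helper name are assumptions made for illustration):

```python
def successors(state):
    """Generate all states reachable by sliding a tile into the blank.
    A state is a tuple of 9 numbers, 0 marking the blank (row-major 3x3).
    Move names describe the direction the blank moves."""
    moves = []
    i = state.index(0)                 # position of the blank
    row, col = divmod(i, 3)
    for dr, dc, name in [(-1, 0, "Up"), (1, 0, "Down"),
                         (0, -1, "Left"), (0, 1, "Right")]:
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:  # stay on the board
            j = r * 3 + c
            new = list(state)
            new[i], new[j] = new[j], new[i]   # swap blank with the tile
            moves.append((name, tuple(new)))
    return moves

start = (1, 2, 3, 5, 0, 6, 7, 8, 4)    # blank in the centre
print(len(successors(start)))          # 4 possible moves
```

A centre blank yields four successors, an edge blank three, and a corner blank only two, which is exactly the branching factor the search tree sees at each state.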

----------------------------------------------------------------------------------------------------------------
PROPERTIES OF SEARCH ALGORITHMS:

1. Completeness – A search algorithm is complete if it is guaranteed to find at least one solution whenever a solution exists.

2. Optimality – A solution found by an algorithm is optimal if it is the best solution (lowest path cost) among all possible solutions.

3. Time Complexity – The maximum time taken by the AI agent to accomplish the search task; it depends on the complexity of the problem.

4. Space Complexity – The maximum storage needed to accomplish the search task; the storage required likewise depends on the complexity of the problem.

PROBLEM REDUCTION

Problem reduction is an informed search technique in which a large problem is divided into smaller and simpler subproblems. These subproblems may themselves be further decomposed until they become small enough to be solved directly.

 The final solution is obtained by combining the solutions of the subproblems.

Representation of reductions

 The most common way to represent problem reduction is through an AND-OR


graph.

1. OR Nodes:
o Represent choices.
o The problem can be solved by solving any one of its child nodes.
o Example: To reach city C, you can go via A or via B.
2. AND Nodes:
o Represent necessary decompositions.
o The problem can be solved only if all child subproblems are solved
together.
o Example: To pass an exam, you must clear Theory and Practical.

Example for problem reduction:

 Root is the original problem.


 Branches are the subproblems which are decomposed from the original problem.
 AND/OR indicate whether all or any subproblem must be solved.

Algorithm of a Problem Reduction

1. Identify goal problem


2. Apply problem reduction algorithm. Decompose problem into subproblems
(AND/OR)
3. At each step:
* If it is an OR node → select the child with minimum cost.

* If it is an AND node → expand and sum up the cost of all children.

4. Recursively solve the subproblems


5. Combine the results to solve the original problem
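The OR/AND cost rules in the steps above can be sketched as a small recursive evaluator. The graph encoding, node names, and the exam-example costs below are invented for illustration:

```python
def solve(node, graph):
    """Return the minimum cost to solve `node` in an AND-OR graph.
    graph[node] = ("OR" | "AND", [(child, edge_cost), ...]);
    nodes absent from graph are primitive and cost 0 (solved directly)."""
    if node not in graph:
        return 0
    kind, children = graph[node]
    costs = [cost + solve(child, graph) for child, cost in children]
    # OR node: pick the cheapest child; AND node: all children required
    return min(costs) if kind == "OR" else sum(costs)

# Hypothetical exam problem: pass Theory AND Practical;
# Theory can be cleared via self-study OR coaching.
graph = {
    "PassExam": ("AND", [("Theory", 1), ("Practical", 2)]),
    "Theory":   ("OR",  [("SelfStudy", 3), ("Coaching", 5)]),
}
print(solve("PassExam", graph))  # (1 + min(3, 5)) + 2 = 6
```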

Applications of Problem Reduction

1. Theorem Proving:
o Reduce a complex proof into proofs of simpler lemmas
2. Planning:
o In planning, the task will break into substeps
3. Natural Language Processing (NLP):
o Sentence → Tokens → Parse Tree
o Navigate → Move → Drive
4. Game Solving:
o In game solving, we have to play the game to beat the opponent and not to
lose the game
5. Game Playing
o Reducing a game position into possible sub-moves.

Techniques to Solve Problem Reduction

1. Means-End Analysis:
o It reduces the difference between the current node and the goal node step by
step
2. Goal Trees:
o Used to visualize the decomposition of a problem
3. AO* Algorithm:
o Finds the optimal solution using AND/OR graphs
4. Divide and Conquer:
o A classical approach used in searching and sorting techniques, etc.

-------- x ------

Game Playing in AI
 Game playing is an important application of problem reduction in Artificial
Intelligence.
 In AI, a game is modelled as a search problem where:
o The system must strategize decision making
o Look ahead at all possible moves
o Respond to the opponent’s moves
o Try to win (or at least not lose)

Adversarial Search
Adversarial Search is a search technique used in competitive environments where two or
more agents (players) have conflicting goals, and each player tries to maximize their own
utility while minimizing the opponent’s utility.

 Adversarial search is used in two-player competitive games.


 One player’s gain is the other player’s loss.
 Called adversarial because the opponent works against you (not neutral like in
puzzle solving).
 Example: Chess, Tic-Tac-Toe, Checkers.

Types of Algorithms

1. Minimax Algorithm
2. Alpha-Beta (α-β) Pruning

Characteristics

1. Players – At least two players play the game (e.g., player vs. player, player vs. computer).
2. Environment – Deterministic, turn-based, with perfect information: both players can see the whole board.
3. Opponent – Each player competes directly to win the game (one player's win = the other player's loss).

Examples: Chess, Ludo, Tic-Tac-Toe.

How AI plays a Game


Steps:

1. Model the game → Represent rules and moves in a code.


2. Build a game tree → The AI agent generates the tree of all possible moves and
opponent’s responses.
3. Apply Minimax / Alpha-Beta pruning → To choose the best possible move.
4. Choose the best move. The AI agent picks the move that gives the best choice of
outcome, assuming the opponent also plays perfectly.

Example / Advantages of Adversarial Search

 Each level of the tree shows a new move by the players.


 The leaf nodes = game result (win, loss, or draw).
 The internal nodes = intermediate positions of the game.
 AI uses Minimax Algorithm to score each leaf and then backtrack to find the best
move.
 Minimax Assumption:
o AI tries to maximize its own score.
o Opponent tries to minimize AI’s score.
 The Max function chooses the AI agent’s move to get the highest score in the game.
 The Min function (opponent) chooses the move with the lowest score for AI.
 Assumes that both players play optimally; it searches the possible moves to find the best outcome.
 The AI plans its moves based on the opponent's moves.
 It performs a depth-first search (DFS) on the game tree. This allows the AI to anticipate the opponent's best move and plan accordingly.

Minimax Algorithm

 The Minimax Algorithm is a recursive decision-making algorithm used in two-


player, turn-based, zero-sum games (e.g., Chess, Tic-Tac-Toe, Checkers).
 One player is the MAX player (tries to maximize the score), and the other is the MIN player (tries to minimize the score).
 Each player tries to maximize their own chance of winning while minimizing the opponent's.

Working Principle

 The MAX player is trying to maximize the score.


 The MIN player tries to minimize the maximum possible gain of the opponent.

 Assumes that both players play optimally; it searches the possible moves to find the best outcome.
 The AI plans its moves based on the opponent's moves.
 It performs a depth-first search (DFS) on the game tree.

Steps of Minimax Algorithm


1. Generate the Game Tree → Build the game tree from the initial state down to
terminal states.
2. Evaluate Terminal Nodes → Assign utility values:
o Win = +1, Lose = -1, Draw = 0.
3. Backpropagate Values → Propagate values up the tree:
o At MAX nodes, choose the maximum child value.
o At MIN nodes, choose the minimum child value.
4. Choose Optimal Move → At the root, MAX selects the move with the best value.

Example Tree


A (MAX)
/ \
B C (MIN)
/ \ / \
D E F G (MAX)
/ \ / \ / \ / \
H I J K L M N O (min)

 At terminal nodes (H, I, J, … O), assign values (e.g., Win = +1, Lose = -1, Draw = 0).
 These values are propagated upward:
o MAX nodes choose maximum child value.
o MIN nodes choose minimum child value.
 Finally, the root (A) chooses the optimal strategy.

Properties
1. Completeness → Always finds a solution if game tree is finite.
2. Optimality → Always optimal if opponent plays optimally.
3. Time Complexity → O(b^m)
o b = branching factor (legal moves at each point).
o m = maximum depth of the tree.
4. Space Complexity → O(bm) (depth-first search storage).

30 (MAX)
/ \
12 30 (MIN)
/ \ / \
15 12 30 40 (MAX)
/ \ / \ / \ / \
15 11 8 12 12 30 16 40 (min)

In this algorithm, two players play the game: 1. Max 2. Min

 Both players are opponents of each other:

o Max will select the maximum value
o Min will select the minimum value
 Depth first search is used to explore the tree.
 Minimax proceeds all the way down to the terminal nodes of the tree; values are then backed up through the tree as the recursion unwinds.
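The numeric tree above can be checked with a minimal Minimax sketch. Representing the game tree as nested lists with number leaves is an assumption made here for brevity:

```python
def minimax(node, maximizing):
    """Plain Minimax over a game tree given as nested lists;
    a leaf is just a number (its utility)."""
    if isinstance(node, (int, float)):             # terminal node
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# The numeric tree above: MAX root, MIN children, MAX grandchildren,
# then the leaves [15, 11], [8, 12], [12, 30], [16, 40]
tree = [[[15, 11], [8, 12]], [[12, 30], [16, 40]]]
print(minimax(tree, True))   # 30
```

Tracing it by hand gives the same backed-up values as the diagram: the MAX level yields 15, 12, 30, 40; the MIN level picks 12 and 30; the MAX root picks 30.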

Applications

1. Chess – it evaluates the move to consider the opponent’s best response.


2. Checkers – to find the best move for capturing/blocking.
3. Tic-Tac-Toe – it guarantees unbeatable strategy.

Alpha–Beta Pruning
 Alpha–Beta pruning is an optimization technique for the Minimax algorithm.
 It reduces the number of nodes evaluated in the search tree by pruning (cutting off)
branches that will not affect the final decision.

Use of Alpha–Beta Pruning

 Minimax is slow in complex games like Chess because of the huge number of possible moves.
 Alpha–Beta pruning reduces the number of nodes that need to be evaluated.
 It does not affect the final result, only improves efficiency.

 Alpha (α): Best (highest) value that the MAX player can guarantee at that point.
 Beta (β): Best (lowest) value that the MIN player can guarantee at that point.
 If at any point α ≥ β, further exploration of that node is stopped (pruned).

Action of Alpha–Beta

For Max Node

1. Initialize α = –∞
2. If child value > α → update α
3. If α ≥ β → prune remaining children

For Min Node

1. Initialize β = +∞
2. If child value < β → update β
3. If β ≤ α → prune remaining children

Algorithm Steps

1. Initialize α = –∞, β = +∞.


2. Perform DFS search (same as Minimax).
3. At MAX node → update α = max(α, value).
4. At MIN node → update β = min(β, value).
5. If α ≥ β → prune (stop searching further children).
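The algorithm steps above can be sketched as a Minimax variant with α-β cutoffs; the nested-list tree encoding is the same assumed representation used for the plain Minimax example:

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning; leaves are numbers.
    Returns the same value as Minimax but may skip whole subtrees."""
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:   # remaining children cannot change the result
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if beta <= alpha:       # prune: MAX already has a better option
            break
    return value

tree = [[[15, 11], [8, 12]], [[12, 30], [16, 40]]]
print(alphabeta(tree, True))   # 30, the same answer as plain Minimax
```

Pruning never changes the root value; it only avoids evaluating branches that a rational opponent would never allow into play.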

Example Tree

Diagram:

 A game tree is shown with MAX and MIN nodes.


 Alpha (α) and Beta (β) values are shown being updated or used to prune branches.

Explanation:

 In both MIN and MAX nodes, a situation can occur where α ≥ β. This allows pruning
of nodes (i.e., skipping evaluation of certain branches).
 Both Minimax and Alpha-Beta follow the same path for the optimal move.
 Alpha-Beta Pruning is optimal because it takes less time to get the best move.

Notes:
 Alpha (α):
o Represents the highest value found so far at any MAX node.
o Acts as a lower bound.
o Initialized to –∞.
 Beta (β):
o Represents the lowest value found so far at any MIN node.
o Acts as an upper bound.
o Initialized to +∞.

-----------------------------------------xxxxx-----------------------------------------------------------------------------------

MULTIPLAYER GAMES

In Artificial Intelligence, games that can have more than two players are known as multiplayer games; each player has their own goals.

e.g., Ludo, Poker, Monopoly

Unlike two-player games like chess, where one player wins and the other loses, multiplayer games often have:

o ✦ Multiple winners or multiple losers
o ✦ Alliances and antagonists
o ✦ Different strategies for each player

Optimal Decision

The optimal decision is the one that maximizes your chance of winning (or minimizes loss),
depending on the game's goal.

To decide this, a player must consider:

1. What the other players might do


2. What type of benefit it brings them
3. How they might respond and counter

In multiplayer games, it's not just between you and one opponent — now there are several
opponents, making optimal decision-making more complex due to unpredictable conflicts.

Strategies for Optimal Decision in Multiplayer Games


1. Generalized Minimax (N-player Minimax, Maxⁿ)

 A generalization of the Minimax algorithm to more than 2 players.
 Each node in the game tree stores a vector of scores, e.g.,
[A-score, B-score, C-score], one score for each player.
 On a player's turn, they try to maximize their own score, assuming other players will do the same on their turns.

Example:

In a 3-player game (A, B, C), a game state may be evaluated as:


[A-score, B-score, C-score] = [2, -1, 0]

 Player B will choose a move that increases B's score, even if A and C also benefit
from it.
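A minimal sketch of the score-vector idea, assuming a round-robin turn order and the same nested-list tree encoding as before, with tuple leaves as score vectors (both are simplifying assumptions for illustration):

```python
def maxn(node, player, n):
    """Max-n search for n players: a leaf is a score vector (one entry
    per player); at an internal node, `player` picks the child vector
    that maximizes their own entry. Turns rotate round-robin."""
    if isinstance(node, tuple):                    # terminal: score vector
        return node
    children = [maxn(child, (player + 1) % n, n) for child in node]
    return max(children, key=lambda vec: vec[player])

# 3-player toy tree: player A (index 0) moves at the root, then B (index 1)
tree = [
    [(2, -1, 0), (1, 3, 0)],     # B prefers (1, 3, 0): best B-score
    [(0, 0, 2), (3, 1, -1)],     # B prefers (3, 1, -1)
]
print(maxn(tree, 0, 3))          # A compares A-scores 1 vs 3 -> (3, 1, -1)
```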

Paranoid Algorithm

The Paranoid Algorithm is used in multiplayer games (more than 2 players).

 In Paranoid Algorithm, a player thinks that all other players are working together
against them.
 Because of this assumption, the player plays in a very careful (conservative) way.
 This means:
o The player avoids risky moves.
o Always assumes the worst-case scenario (others will choose moves that hurt
me most).
o Plays as if the environment is completely hostile.

Coalition Based Reasoning (CBR)

 In CBR, some players join together (form a group) temporarily to defeat stronger
opponents.
 These players cooperate for a short time, but after achieving their goal, they can
split up again.
 It is used in games or situations where working together gives better results than
acting alone.

🧠 Key Ideas:

 Players team up for advantage.


 The teamwork is temporary — only until the goal is reached.
 It helps in finding better strategies by considering possible alliances among players.
 Often used in game theory and multi-agent systems.

Example:

If players A, B, and C are playing:

 B and C might join together to defeat A, who is stronger.


 Once A loses, B and C can again compete with each other.

Points should be awarded like:


 Winner gets +1
 Loser gets -1
 Tie or draw gets 0

To make an optimal decision, player A must:

1. Predict B and C’s moves.


2. Choose a strategy that minimizes the worst-case loss (may be randomized to avoid
being predictable).

🎮 Examples

1. Board games
2. Monopoly
3. Risk
4. Online games (PUBG, Free Fire)
5. Business competitions
6. Negotiation scenarios

🤖 Problems in Game Playing (AI)

In AI, game playing means designing an agent that can make intelligent decisions in a
competitive environment.
However, there are several problems:

1. Combinatorial Explosion

 The number of possible game states grows exponentially with each move.
 Example: Chess has over 10¹²⁰ possible states — too many to explore fully.
 It’s impossible to evaluate every move in real time.

2. Time Constraints

 In real-time or competitive games, agents must make quick decisions.


 Searching the entire game tree is too slow.
 Hence, there is a need for cutoff depth, heuristics, or pruning.

3. Incompleteness of Search

 You may not reach terminal states due to limited search depth.
 This leads to inaccurate evaluations.
 The decision may not be truly optimal.

4. Uncertainty and Chance (Stochastic Games)

 Some games involve random events (e.g., dice rolls, card draws).
 Algorithms like Minimax cannot handle randomness directly.
 Need methods like Expectiminimax or Monte Carlo Search.
5. Partial Observability

 In games like Poker, players cannot see all the information.


 Requires probabilistic reasoning and belief states.
 Traditional Minimax assumes perfect information, so it doesn’t work here.

6. Multiple Players

 When there are more than two players, choosing a strategy becomes much harder.
 Players may team up or betray each other.

 This requires game theory models like the Maxⁿ or Paranoid Algorithm.

7. Evaluation Function Design

 It is used to estimate the value of non-terminal nodes (not end states) in a game
tree.
 If it is poorly designed, it can mislead the AI and produce wrong decisions.
 A good evaluation function must consider multiple factors (like position, score,
moves, etc.).

8. Opponent Modelling

 The Minimax algorithm assumes the opponent always plays optimally — but this
may not be true.
 In practice, AI can use machine learning to understand and adapt to an opponent’s
behavior.
 Helps AI to predict strategies and learn from experience.

9. Learning & Adaptation

 Many AI game systems don’t just follow fixed rules — they learn from experience.
 Techniques like reinforcement learning and self-play help the AI improve.
 However, learning requires a lot of computation power and time.

-----------------------------------------------------------------------------------------

Evaluation Function (Meaning)

 An evaluation function in AI estimates how good or desirable a game position


(state) is.
 Used when it’s too expensive or impossible to search all the way to the end of the
game.

Purpose of Evaluation Function

1. Helps AI choose the best move when search depth is limited.


2. Estimates how favorable a given position is for a player.
3. Used with algorithms like Minimax and Alpha-Beta Pruning.

Formal Definition of Evaluation Function

 Denoted as Eval(s)
 It returns a numerical score to represent the desirability of a state (s).
o For the Max player → a higher score is better.
o For the Min player → a lower score is better.

Structure of Evaluation Function

The general form of the evaluation function is:

Eval(s) = w1*f1(s) + w2*f2(s) + ... + wn*fn(s)

Where:

 f1, f2, ..., fn → the features of the game state (like material, position, control, etc.)
 w1, w2, ..., wn → the weights assigned to each feature (showing their importance).
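The weighted-sum form of Eval(s) can be computed directly; the chess-like feature values and weights below are made-up numbers used only to show the arithmetic:

```python
def evaluate(features, weights):
    """Eval(s) = w1*f1(s) + w2*f2(s) + ... + wn*fn(s)"""
    return sum(w * f for w, f in zip(weights, features))

# Hypothetical features of a state s: material balance, mobility,
# king safety (invented values); weights show relative importance
features = [3, 12, -1]
weights = [9.0, 0.1, 5.0]
print(evaluate(features, weights))   # 9*3 + 0.1*12 + 5*(-1) = 23.2
```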

♟️Examples

1. Chess → Piece value, mobility, king safety, board control


2. Checkers → Number of pieces, positions, possible captures
3. Tic-Tac-Toe → Immediate win/loss detection, blocking opponent

Main Components (Factors)

 Material Advantage
 Positional Advantage
 Mobility
 Control of the center of the board
 King’s Safety

⚡ Working of Evaluation Function

1. It takes a game state (like position, success chances, etc.).


2. It calculates the factors that affect the chance of winning.
3. It combines all these factors into a single numeric score.
o Higher score → better for Max player
o Lower score → better for Min player
UNIT – III
Representation of Knowledge: Knowledge representation issues, predicate logic – logic programming, semantic nets – frames and inheritance, constraint propagation, representing knowledge using rules, rule-based deduction systems. Reasoning under uncertainty, review of probability, Bayesian probabilistic inference and Dempster-Shafer theory.

Representation of knowledge:
Human intelligence is achieved not by mere reflex mechanisms but by a process of
reasoning that operates on internal representations of knowledge. This approach is
realized by a knowledge-based agent. Such agents are supported by logics such as
propositional logic and first-order predicate logic. Agents can form representations of a
complex world, use a process of inference to derive new representations about the world,
and use these new representations to deduce what to do.

 Knowledge and reasoning are most important to logic agents because, without them,
agents are unable to behave successfully and achieve a desired goal.
 A knowledge base contains a set of sentences expressed in a formal language.
There are two types of functions used by a knowledge-based agent:

1. Tell: It is used to add new sentences to the knowledge base.


2. Ask: It is used to query the knowledge base, i.e., to ask what is currently known.

For example: ChatGPT, Meta AI, Gemini AI, Chatbot AI

 ChatGPT: What can I help with?


 Meta AI: Ask Meta AI
 Gemini AI: Ask Gemini
 Chatbot: Good Morning
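The TELL/ASK interface can be sketched in Python. This toy knowledge base stores sentences as plain strings, and ASK checks simple membership; it is a sketch of the interface only, not a real inference engine:

```python
# Toy knowledge-based agent interface: TELL adds sentences, ASK queries.
# Sentences are plain strings here; a real KB would parse and infer.

class KnowledgeBase:
    def __init__(self):
        self.sentences = set()

    def tell(self, sentence):
        self.sentences.add(sentence)       # add a new sentence to the KB

    def ask(self, query):
        return query in self.sentences     # is the query stored (no inference)?

kb = KnowledgeBase()
kb.tell("Socrates is a man")
print(kb.ask("Socrates is a man"))   # True
print(kb.ask("Socrates is mortal"))  # False: this sketch cannot infer it
```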

Knowledge Representation Issues

 Knowledge representation deals with how information and knowledge are to be stored
and structured, and how they are utilized by AI systems.

Representation Scheme:

The representation scheme has various types or ways, as follows:

1. Semantic Network:
A semantic network is a graph structure; a graph contains nodes and edges.
In the graph, nodes represent concepts and edges represent relationships.

2. Frames:
A data structure that stores information about an object, including its attributes and
values.
3. Rules:
IF–THEN statements that define relationships and concepts which can be used for
reasoning.
4. Logic-Based Representation:
These use formal logic to represent facts and rules.

2. Expressiveness vs. Complexity:

 The expressiveness is the ability to represent the complex knowledge.


 Complexity contains how easily the system can process this knowledge.

3. Ambiguity & Vagueness:

 Natural language often has ambiguities and vagueness, which makes it difficult for AI
systems to interpret the meaning accurately.

Examples:

 "Bank" – means financial institution or river bank


 "Live" – a live concert (happening now) or live (to be alive)

4. Incompleteness:

 The knowledge base may not contain the entire information.


 The incompleteness leads to incorrect conclusions or the inability to answer certain
queries.

5. Dynamic Knowledge:

 Knowledge is not static – it can change over time.


 AI must be updated to reflect the new information to maintain accuracy.

6. Contextual Understanding:

 AI must understand the context to give correct answers or take the right action.
 It means the system must interpret information in its context in order to make the
appropriate decision.

7. Interoperability

 Different AI systems should be able to share and understand knowledge easily.


 This is a challenge in AI because different systems may use different formats or rules.

8. Reasoning

 Once knowledge is stored, AI must be able to reason (think) using that knowledge.
 Reasoning is about drawing conclusions or solving problems using the given
information.
9. Predicate Logic / Predicate Knowledge

 A system that uses mathematical logic to express knowledge.


 It helps in expressing relationships between objects and facts.

Logic Programming

 It is a method of using logic (rules, facts) to write programs.


 Used to represent knowledge and perform reasoning.

---------- xxxxxxxxx -----------

First-order logic / predicate logic


First-order logic is used to represent knowledge in artificial intelligence. It
is an extension of propositional logic. FOL represents natural language
statements concisely.

It is also known as predicate logic or first-order predicate logic. First-order


logic is a powerful language that expresses information about objects in a
natural way and can also express the relationships between those objects.

It overcomes the limitation of propositional logic by dealing with:

 Objects (things in the domain)


 Properties of objects
 Relations between objects
 Quantifiers (to express generality)

Like a natural language, first-order logic has two main parts:

1. Syntax
2. Semantics

Syntax of First-Order logic

The syntax of FOL determines which collection of symbols is a logical


expression in first-order logic. The basic syntactic elements of first-order
logic are symbols. We write statements in short-hand notation in FOL.

Basic Elements of First-order logic:


The following are the basic elements of FOL syntax:

1. Constants
 Represent specific named objects in the domain.
 They are fixed and do not change.
 Examples:
o John, Mary, India, 5, a
 Usage in FOL: Loves(John, Mary) → means John loves Mary.

2. Variables
 Represent placeholders for objects in the domain.
 They can take different values.
 Usually written as: x, y, z
Example:
o ∀x Human(x) → For every object x, if it is a human…

3. Predicates
 Predicates express properties of objects or relationships among objects.
 They return True/False.
 Example: Likes(John, IceCream) is true when John likes ice cream.
 Types:
1. Unary Predicate → property of a single object.
 Example: Student(x) → x is a student.
2. Binary Predicate → relation between two objects.
 Example: Loves(x, y) → x loves y.
3. n-ary Predicate → relation among n objects.
 Example: Between(x, y, z) → x is between y and z.

4. Functions
 Functions map objects to other objects.
 Always return exactly one value.
 Example:
o Father(x) → gives the father of x.
o If Father(John) = David, it means David is John’s father.

5. Quantifiers
Quantifiers specify how many objects in the domain satisfy a property.

1. Universal Quantifier ( ∀ ) → "For all"


o Expresses that a property holds for every object.
o Example: ∀x (Human(x) → Mortal(x))
(All humans are mortal)

2. Existential Quantifier ( ∃ ) → "There exists"


o Expresses that there is at least one object for which the property holds.
o Example: ∃x Student(x) ∧ Intelligent(x)
(There exists a student who is intelligent)

6. Logical Connectives
Used to combine formulas. Logical connectives are used to build complex statements from
simple ones.

1. Negation (¬) → not


Example: ¬Student(x) → x is not a student.
2. Conjunction (∧) → and
Example: Student(x) ∧ Intelligent(x) → x is a student and intelligent.
3. Disjunction (∨) → or
Example: Teacher(x) ∨ Student(x) → x is either teacher or student.
4. Implication (→) → if…then
Example: Human(x) → Mortal(x) → If x is human, then x is mortal.
5. Biconditional (↔) → if and only if
Example: Bachelor(x) ↔ (Man(x) ∧ ¬Married(x))
(x is a bachelor if and only if x is a man and not married)

7. Equality (=)
 Used to denote that two objects are the same.
 Example: Father(John) = David → David is father of John.

Syntax and Semantics of First-Order Logic


FOL's syntax defines how to construct valid logical expressions, while semantics gives
meaning to those expressions based on an interpretation which provides a domain of
discourse and assigns meanings to constants, predicates and functions.

For example, consider the domain of natural numbers. The predicate GreaterThan(x, y) holds
if x is greater than y.
If x = 5 and y = 3, GreaterThan(5, 3) is true.

------ --- X ----------

QUANTIFIERS

Quantifiers specify how many objects in the domain (the range and scope of a variable)
satisfy a property.

Quantifiers are symbols in predicate logic (used in AI for knowledge representation)

These are the symbols that permit the determination or identification of the
range and scope of the variable in the logical expression. There are two types
of quantifiers:

1. Universal Quantifier (for all, everyone, everything)


2. Existential quantifier (for some, at least one).

Universal quantifier: The universal quantifier indicates that the


statement applies to all the objects (everything) in a domain or set.

 Symbol: the universal quantifier is denoted by ∀x


o Meaning: "For all x", "For every x", or "For each x"

 Used when a statement must hold true for every element in the domain.

Example 1: ∀x likes(x, IceCream)

It means everyone in the group (domain) of x likes ice cream.

Example 2 (English → Logic):

FOL: ∀x (Human(x) → Mortal(x))


 "All humans are mortal."

Meaning: For every x, if x is a human, then x is mortal.

Example 3:

FOL: ∀x (Bird(x) → CanFly(x))


 "All birds can fly."

Existential Quantifier ( ∃ ): It is used when a statement must hold true for at least one
element in the domain.

 Symbol: ∃x
 Meaning: "There exists an x" or "For some x"

Example 1: ∃x likes(x, ice cream)

It means there is at least one person in the group (domain) of x who likes ice cream.

Example 2 (English → Logic):

FOL: ∃x (Student(x) ∧ Intelligent(x))


 "Some students are intelligent."

Meaning: There exists at least one x such that x is a student and intelligent.

Example 3:

FOL: ∃x (City(x) ∧ Beautiful(x))


 "There is a city that is beautiful."
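Over a finite domain, these quantifiers can be checked directly: ∀ corresponds to Python's all() and ∃ to any(). The domain and predicates below are made-up illustrations:

```python
# Evaluating ∀ and ∃ over a finite domain with all() and any().
# The domain and predicate tables are illustrative assumptions.

domain = ["Alice", "Bob", "Carol"]
likes_ice_cream = {"Alice": True, "Bob": True, "Carol": False}
is_student      = {"Alice": True, "Bob": False, "Carol": True}

# ∀x likes(x, IceCream): everyone in the domain likes ice cream
forall = all(likes_ice_cream[x] for x in domain)

# ∃x (Student(x) ∧ likes(x, IceCream)): some student likes ice cream
exists = any(is_student[x] and likes_ice_cream[x] for x in domain)

print(forall)  # False (Carol does not like ice cream)
print(exists)  # True  (Alice is a student who likes ice cream)
```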

---------- X ---------

Logic Programming in Detail


Logic programming is a programming paradigm based on formal logic.
Logic programming is used for representation of the knowledge and execution of the queries.
A program written in a logic programming language is a set of sentences in logical form,
expressing facts and rules about a problem domain.

Terms in Logic Programming

 Terms can be:


o Constants: Fixed values (e.g., 'John')
o Variables: Symbols representing unspecified objects (e.g., X)
o Functions: Functions return objects when applied to arguments.

Example:

 John is a constant, while X is a variable.

3. Logical Connectives

In predicate logic programming, logical connectives are used to build complex statements
from simple ones, such as:

 AND ( ∧ )
 OR ( ∨ )
 NOT ( ¬ )
 Implication ( → )

These are essential for combining facts and rules in a logical and consistent manner.

4. Purpose of Logic Programming

 Logic programming is primarily used for knowledge representation and automated


reasoning.
 It allows querying and deriving new information from facts and rules.

5. Knowledge Representation :

Knowledge is represented in the form of facts and rules.

Example:

 Fact: likes(John, icecream)


This means John likes ice cream, and this is always considered to be true.
 Rules:
A rule is represented as:
 likes(X, icecream) :- sweet(X).

This rule means: "X likes ice cream if X is sweet."


6. Inference :

Logic programming systems use inference engines to derive new information from facts
and rules.

Example:

 Fact: sweet(John)
 Rule: likes(X, icecream) :- sweet(X)

From this, we can infer:

 likes(John, icecream)

7. Backtracking :

Backtracking is a method of searching where, if a particular path leads to a contradiction or


fails, the system automatically returns to a previous decision point and tries an alternative
path.

 Used when multiple possibilities exist.


 Automatically explores different combinations to find a solution.

Applications of Backtracking:

1. Expert systems: to backtrack over earlier rules and facts when a conclusion fails.
2. Natural language processing: it is used to understand the human language.
3. Puzzle solving
4. Game playing

8. Automated Theorem Proving

Automated theorem proving uses algorithms to automatically prove or disprove a logical


statement based on given facts and rules.

 It applies methods from formal logic and mathematics.

9. Semantic Nets and Frames

 Semantic Net: A graphical representation of knowledge using nodes (concepts) and


edges (relationships).

Example:

o Nodes: "Dog", "Animal", "Mammal"


o Edge: "is a type of"
 Frames: Data structures representing stereotypical situations, including associated
attributes.
Example:
Frame: Dog

o Type: Mammal
o Color: Brown, Black
o Size: Small, Medium
o Behavior: Friendly

--------------------- X ------------------------------

Semantic Network

A Semantic Network is a graphical representation of knowledge that illustrates the


relationships between concepts.
=> It consists of nodes (representing objects or concepts) and edges (representing
relationships between them).
=> It is used to express relationships among entities or concepts and is represented by
connections between nodes.

 Useful for representing hierarchical knowledge like “is-a” (inheritance) and “has-a”
(attributes).
 Example:
Nodes → Dog, Animal, Mammal
Edges → “is a” → represents that Dog is a type of Animal, and Animal is a type of
Mammal.
 This structure allows easy visualization of relationships and helps in reasoning
about knowledge.

Components of Semantic Nets

 Nodes → represent objects, concepts, or entities.


 Edges (links) → represent relationships between nodes.
 Common relations:

IS-A (represents class–subclass relationship)

HAS-A (represents attribute/property relationship)

 Usually expressed in graph form (directed or undirected).
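The Dog/Animal/Mammal example above can be sketched as a small semantic net in Python, with IS-A and HAS-A links stored in dictionaries and property lookup walking the IS-A chain (this is how inheritance works in the net). The properties are illustrative:

```python
# Semantic net as dictionaries: IS-A edges link a concept to its parent
# class; HAS-A edges attach properties to concepts.

is_a  = {"Dog": "Mammal", "Mammal": "Animal"}
has_a = {"Animal": ["breathes"], "Mammal": ["fur"], "Dog": ["barks"]}

def properties(concept):
    """Collect own + inherited properties by following IS-A links upward."""
    props = []
    while concept is not None:
        props.extend(has_a.get(concept, []))
        concept = is_a.get(concept)   # move up the IS-A hierarchy
    return props

print(properties("Dog"))  # ['barks', 'fur', 'breathes']
```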

--------- xxxx ----------

FRAMES

A frame is a data structure used for representing stereotypical situations, similar to
how objects are represented in object-oriented programming.
(OR) A Frame is a data structure used to represent knowledge about an object, situation, or
concept in AI.
A frame consists of a collection of slots and their associated values.

Each frame represents a concept and may include default values, which can be overridden
by specific information.

Components of a Frame

 Frame: A frame = a collection of slots.

 Slot: Each slot = an attribute/property of the frame, which may contain:

o Value (specific data)
o Default value (if none specified)
o Constraints (rules for valid values)
o Procedural attachments (instructions to execute when the slot is accessed/modified)

 Fillers: values or constraints for the slots.

 Inheritance: Frames support inheritance, where a frame can inherit properties from another
frame (parent frame). This allows for efficient representation of shared properties and
hierarchical relationships.

Example:

• Frame for a "Dog":

. Frame: Dog

▪ Slots: :

*Type : animal

*legs: 4

*sound: bark

Advantages of Frames
 Organizes knowledge in a structured and modular way.
 Inheritance reduces redundancy.
 Handles defaults and exceptions.
 Natural for object-oriented representation (similar to OOP).
 Supports procedural knowledge via slot-attached procedures.

-------------- xxxxxxx ---------


🔹 3. Inheritance
Definition: A mechanism to derive new classes (subclasses) from
existing classes (superclasses), inheriting attributes and methods.

Inheritance means passing properties or attributes from a higher-level (parent) node or


frame to lower-level (child) nodes or frames.

 In Semantic Nets, inheritance happens along IS-A links.


 In Frames, a child frame inherits all slots from its parent frame unless overridden.

Example:

• Superclass: Animal

o Attributes: Breathes, Moves

• Subclass: Dog

o Inherits: Breathes, Moves

o Additional Attributes: Barks, Has fur

Types of Inheritance:

1. Single Inheritance: Inherits from one parent only.


2. Multiple Inheritance: Inherits from more than one parent frame (can cause
conflicts).
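Frame inheritance with defaults and overrides can be sketched in Python; each frame is a dictionary with a parent slot, and slot lookup falls back to the parent frame. The frames and slot values are illustrative:

```python
# Frames as dicts with a "parent" slot. Slot lookup falls back to the
# parent frame, so defaults are inherited unless the child overrides them.

frames = {
    "Animal": {"parent": None,     "breathes": True, "legs": 4},
    "Dog":    {"parent": "Animal", "sound": "bark"},
    "Bird":   {"parent": "Animal", "legs": 2, "sound": "chirp"},
}

def get_slot(frame_name, slot):
    while frame_name is not None:
        frame = frames[frame_name]
        if slot in frame:
            return frame[slot]        # own value or an override
        frame_name = frame["parent"]  # fall back to the parent frame
    return None

print(get_slot("Dog", "legs"))   # 4 (inherited default from Animal)
print(get_slot("Bird", "legs"))  # 2 (Bird overrides the default)
print(get_slot("Dog", "sound"))  # bark
```

This sketch shows single inheritance only; multiple inheritance would need a list of parents and a rule for resolving conflicts.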

====================xxxxx--------------------

Semantic nets types


1. Definitional / Inheritance Semantic Nets

 Definition: Represent hierarchical relationships /class–subclass (e.g., "Dog is


a Mammal").

It is used to represent taxonomy/class–subclass (generalization–specialization) relationships


(is-a) and property links (has, type, attribute).
Example :

Canary —is-a→ Bird —is-a→ Animal


Bird —has→ CanFly
Canary —has→ CanSing
From this we infer Canary can fly (inherited) and Canary can sing (own property).

Uses: ontologies, hierarchies, knowledge bases.

2. Assertional (Instance) Semantic Nets


Definition: Emphasize instances (individual objects) and link them to class
nodes via instance-of or is-a links.
Representation: instance nodes connect to class nodes; instance nodes may have their own
properties.

Example:

Tweety —instance-of→ Canary


Canary —is-a→ Bird
Bird —has→ CanFly

3. Implicational / Conditional Semantic Nets

Definition: Focus on logical implications between concepts ( if–then )

e.g., "If Animal is Mammal and Mammal is Warm-Blooded, then Animal is


Warm-Blooded"

4. Executable Semantic Nets (Procedural Attachments)

Definition: links or nodes are attached to procedures/actions.

Example: visiting Doctor → worksIn→ Hospital could trigger a lookup procedure that
returns all doctors in a given hospital;

a link might be computeAge(patient).

5. Hybrid Semantic Nets

Definition: Combining two or more types of semantic nets gives a hybrid


semantic net (e.g., combining definitional and assertional aspects).

How combined:

 Use frames for structured slots/attributes;


 Use logic/rules for constraints;
 Use semantic net structure for hierarchies;
 Possibly attach probabilities or fuzzy weights.

Example: a Dog node in a net + a Frame for Dog with slots (color, size) + rules for
barksIfThreatened.

6. Learning Networks
 Continuously update and expand based on new information or examples.

-------- X -------

Constraint Satisfaction Problem (CSP)

A Constraint Satisfaction Problem is a problem in which certain variables have domains


containing possible values, and these variables must satisfy specific constraints.

(or) CSP = Problem defined by a set of variables, their possible values (domains), and a set
of constraints that specify allowable combinations.

Example – Sudoku Puzzle:

 The variables are the empty cells of a 9×9 grid.


 The domain of each variable is the set of possible numbers (1 to 9).
 The constraints are that each row, each column, and each 3×3 subgrid must contain
non-repeated numbers.

Example:
Map Coloring Problem

 Variables: X={A,B,C,D}
 Domains: D={Red,Green,Blue}
 Constraints: Adjacent regions cannot have the same color.
Example: A≠B,B≠C,C≠D

✅ Solution: Assign colors so that no two neighboring regions share the same color.
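The map-coloring CSP above can be solved with a minimal backtracking search. This is one possible sketch, not a full CSP solver (no constraint propagation or heuristics):

```python
# Backtracking search for the map-coloring CSP:
# variables A-D, domain {Red, Green, Blue}, constraints A!=B, B!=C, C!=D.

variables = ["A", "B", "C", "D"]
domain = ["Red", "Green", "Blue"]
constraints = [("A", "B"), ("B", "C"), ("C", "D")]  # pairs that must differ

def consistent(var, value, assignment):
    """Check value against every constraint involving var."""
    for x, y in constraints:
        if x == var and assignment.get(y) == value:
            return False
        if y == var and assignment.get(x) == value:
            return False
    return True

def backtrack(assignment):
    if len(assignment) == len(variables):
        return assignment                       # all variables assigned
    var = next(v for v in variables if v not in assignment)
    for value in domain:
        if consistent(var, value, assignment):
            result = backtrack({**assignment, var: value})
            if result:
                return result
    return None  # no value works: backtrack to the previous variable

solution = backtrack({})
print(solution)  # e.g. {'A': 'Red', 'B': 'Green', 'C': 'Red', 'D': 'Green'}
```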

Types of Constraints:
1. Unary Constraint:
Applies to a single variable.
Example: X1≠Red
2. Binary Constraint:
Involves two variables.
Example: X1≠X2
3. Global Constraint:
Involves more than two variables.
Example: “AllDifferent(X1, X2, X3)”

Components of CSP:

1. Variables (V): set of variables


Example: V={X1,X2,X3}
2. Domains (D):
Each variable has a domain, i.e., the possible values it can take.
Example:
D(X1)={1,2,3}, D(X2)={1,2,3}
3. Constraints (C):
Restrictions that specify which combinations of values are allowed.
Example:
X1≠X2

🔹 Types of CSPs:

1. Discrete CSPs:
Variables have discrete (finite) values.
Example: Sudoku, Map coloring
2. Continuous CSPs:
Variables have continuous domains.
Example: Real-valued equations, scheduling problems
3. Mixed CSPs:
Contain both discrete and continuous variables.

Benefits of Constraint Propagation

Constraint propagation uses the constraints to reduce the domains of variables before and
during search.

1. It can reduce the search space.


2. It helps in detecting inconsistencies easily.
3. It improves the efficiency of solving a problem.

🔹 Applications of CSP:

 Sudoku solving
 Map coloring
 Timetable scheduling
 Resource allocation
 Vehicle routing
 Exam scheduling

--------- xxxx ---------

Representing Knowledge Using Rules

 In Artificial Intelligence, knowledge representation means storing knowledge about


the world in a form that a computer can understand and reason with.
One of the most important methods for this is Rule-Based Representation.
 ✅ Definition:
Knowledge representation using rules means expressing knowledge in the form of
IF–THEN statements (called production rules), which describe what action or
conclusion should follow when certain conditions are true.
 The rules are typically written in the form of if–then statements.
 Rules allow the system to make logical inferences, draw conclusions, and make
decisions based on available knowledge.

In AI, there are several types of rules commonly used for representing knowledge. One of
them is:
1. Production Rules

 These are rules of the form “if–then” statements.


 They are also known as condition–action rules.

Example of Production Rule

“If the temperature is above 100°F, then the system should be alarmed.”

2. Logical Rules

 These rules use formal logic to represent knowledge.


 They can be expressed in propositional or predicate logic.

Example:

If (A ∧ B) → C

→ It means that if both A and B are true, then C is true.

3. Frame Representation

 It represents knowledge in a structured form known as frames.


 A frame can hold information about an object, its attributes, and relationships.
Example:
Frame for a Car → includes attributes like color, model, owner, type, etc.

4. Semantic Network

 It is a graphical representation of knowledge.


 The nodes represent concepts, and edges represent relationships between them.
Example:
A node for Dog may be connected to Animal with an “is-a” relationship.

5. Fuzzy Rules

 These rules are used in systems dealing with uncertainty.


 They allow degrees of truth rather than binary (true/false) logic.
Example:
If the temperature is high, then the fan speed will be medium.

6. Constraint Rules

 These rules specify limitations and constraints within the statements that the system
can operate on.
Example:
If a patient is allergic to penicillin, then that medicine should not be prescribed.

------- xxxx --------

Rule-Based Deduction Systems:


 A Rule-Based Deduction System is a problem-solving approach in AI where rules
(IF–THEN statements) are applied to derive new conclusions from known facts,
reducing a complex problem into simpler subproblems until a solution is reached.
 It combines problem reduction with rule-based reasoning.

These systems are designed to solve problems through logical reasoning by automatically
applying the rules to the facts held by the knowledge-based agent.

Rule-based deduction is particularly useful for tasks such as decision-making, expert systems,
and knowledge representation.

1. Knowledge Base:
This is the core component that contains all the facts and rules.
o The facts represent the information about the domain, while the rules are
typically in the form of If–Then statements.

Example:
Rule: If it is raining, then the ground will be wet.

2. Inference Engine:
This is the processing unit that applies the rules to the facts in the knowledge base to
draw the conclusion.
It can work in two primary methods, as follows:

a) Forward Chaining / reasoning:


It is a data-driven reasoning approach where the system starts with known facts and applies
the rules step by step to derive new facts until the goal/conclusion is reached.

It is used in production systems and expert systems where the goal is to explore all the
possibilities from the current state.

It Works like a breadth-first search.

 Process:
1. Take the initial data/facts.
2. Match them with the IF part of rules.
3. Fire/apply the rule → generate new facts.
4. Repeat until the desired goal is found.

👉 Example:

Rule 1: IF fever ∧ cough → THEN flu.


 Fact: “Patient has fever, cough.”

 Rule 2: IF flu → THEN prescribe medicine.
 Conclusion: “Prescribe medicine.”

For example:
Fact – It is raining
Rule – If it is raining, then carry an umbrella
Conclusion → Carry an umbrella.
Used in: Expert systems like medical diagnosis.

b) Backward Chaining/ reasoning:


 Backward chaining is a goal-driven inference method in which reasoning starts from the
desired goal (hypothesis) and works backward through rules to check whether there are facts
that support the goal.

 Backward chaining is often used in logic programming and diagnostic systems.

 Works like a depth-first search.

 Process:
1. Define a goal to prove.
2. Check rules that can produce the goal.
3. See if the conditions (IF part) are satisfied.
4. If not, set those conditions as new sub-goals and repeat.

👉 Example:

Rule: IF fever ∧ cough → THEN flu.


 Goal: “Patient has flu.”

 Ask: “Does patient have fever? cough?”
 If facts match → conclude flu.

Used in: Troubleshooting systems, diagnostic systems.

For example:
Fact/Goal – Do I need an umbrella?
Rule – If it is raining, then carry an umbrella.

In backward chaining, we will check if it is raining.


If the condition is true, the conclusion will be “carry an umbrella.”
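Backward chaining over the same flu rules can be sketched as a recursive proof procedure; again, the rules and facts are illustrative:

```python
# Backward chaining: to prove a goal, find a rule that concludes it and
# recursively prove each of its conditions; known facts prove directly.

rules = [
    ({"fever", "cough"}, "flu"),
    ({"flu"}, "prescribe_medicine"),
]
facts = {"fever", "cough"}

def prove(goal):
    if goal in facts:
        return True                      # goal is a known fact
    for conditions, conclusion in rules:
        if conclusion == goal:           # a rule can produce the goal
            if all(prove(c) for c in conditions):
                return True              # all sub-goals proved
    return False

print(prove("prescribe_medicine"))  # True
print(prove("headache"))            # False (no fact or rule supports it)
```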

3. Hybrid Inference Engine (Forward + Backward)

Applications:

 Rule-based deduction systems are widely used in expert systems, decision-making


applications, and even in programming languages that support logic programming like
Prolog.

--------- xxxx --------

Reasoning under Uncertainty:

 In AI, reasoning under uncertainty involves making decisions or drawing conclusions
when the information available is incomplete, ambiguous, or uncertain (when complete
information is not available).
 It is important because real-world problems often involve incomplete, noisy, or
uncertain data, so AI systems must reason probabilistically instead of using strict logic.
Why Uncertainty Occurs

1. Incomplete knowledge – All facts about the environment are not known.
2. Ambiguous data – Information may be vague or imprecise.
3. Unpredictable outcomes – Actions may have multiple possible results.
4. Sensor noise – Data from sensors may be inaccurate or noisy.

Approaches to Handle Uncertainty


Here are some key concepts and methods that are used to handle the
uncertainty under the reasoning:
1. Probabilistic Reasoning / Probability Theory:

 The most common method for representing uncertainty.

 Uses probability values (0–1) to represent the degree of belief in a statement.


 ‘0’ indicates impossibility and ‘1’ indicates certainty.
 Probabilistic reasoning indicates how much certainty there is in an event.
 Example:
o P(Rain) = 0.8 → 80% chance it will rain.
o P(No Rain) = 0.2 → 20% chance it will not rain.

Classic examples: weather prediction, the cavity–toothache problem.

2. Bayesian Networks

 A graphical model that represents probabilistic relationships among variables.


 Each node = variable; edges = dependencies between variables.
 Useful for reasoning in complex domains like medical diagnosis and weather
prediction.

3. Fuzzy Logic

Unlike traditional binary logic, which deals only with true or false values, fuzzy logic
allows for degrees of truth. It is useful in situations where concepts are not black and
white.

Example:
Instead of saying the temperature is hot, fuzzy logic allows statements such as "the
temperature is somewhat hot," which can be more representative of real-world
conditions.

o “Temperature is high” may be partly true (e.g., 0.7 true).


 Used in control systems, washing machines, air conditioners, etc.
4. Dempster–Shafer Theory

This is a mathematical theory of evidence that allows for reasoning with uncertain
information. It combines different pieces of evidence to calculate the probability of an event.
This method is particularly used for dealing with conflicting information.

5. Markov Decision Process (MDPs)

This MDP is used for modeling and decision-making in situations where the outcomes are
partly random and partly under the control of the decision-maker.
This provides a framework of modeling for the environment, actions, rewards, and
transitions.
This allows for optimal decision-making under uncertainty.

6. Certainty Factors (CF)

 Used in early expert systems (like MYCIN) to handle uncertainty in rule-based


reasoning.
 Each rule is given a certainty factor between –1 and +1 showing confidence level.
o CF = +1 → completely true
o CF = 0 → unknown
o CF = –1 → completely false

7. Default and Non-Monotonic Reasoning

 Default reasoning assumes something is true until proven otherwise.


o Example: “Birds can fly” → until we learn the bird is a penguin.
 Non-monotonic means new information can change previous conclusions.

8. Bayesian Inference

One of the most significant applications of probability in AI is Bayesian inference.


This method updates the probability estimate for a hypothesis as more evidence becomes
available.

Example:
In spam detection, Bayesian inference can assign probabilities that an email is spam based on
the words it contains, allowing the computer to improve its predictions over time.

9. Markov Models

A Markov model (which includes the Hidden Markov Model) is widely used in AI to perform
tasks like speech recognition and natural language processing.
It assumes that future states depend only on the current state and not on all the
previous events.
This simplifies the modeling of complex systems where uncertainty is present.

Example:
Predicting the next word in a sentence or recognizing spoken words.
10. Probabilistic Graphical Models

These are models, such as Bayesian networks and Markov random fields, which represent
the relationships between a set of variables using graphs.
They allow for efficient computation of joint probability distributions and help in
understanding dependencies among variables.
These models are particularly useful in complex domains such as medical diagnosis and
social network analysis.

11. Decision Making under Uncertainty

Probability is also integral to decision-making, where it helps make rational choices in


uncertain environments.

Example:
An AI system deciding whether to send a robot for rescue during a storm, based on the
probability of success and risk.

Applications

Reasoning under uncertainty is used in various fields, including robotics, medical
diagnosis, weather prediction, financial forecasting, and speech and language
understanding.

---------- xxxxx --------=

Bayes’ Probabilistic Inference / Bayesian Inference


Definition:

Bayes’ Probabilistic Inference is a method of reasoning under uncertainty that uses Bayes’
Theorem to update the probability of a hypothesis when new evidence is observed.

It helps AI systems make decisions when the data is incomplete, uncertain, or noisy.

Bayes’ Theorem:
P(A∣B)= P(B∣A)⋅P(A) / P(B)

Where:

 P(A∣B): Posterior probability – probability of A happening, given B happened


 P(A): Prior probability of A – initial belief before evidence
 P(B∣A): Likelihood – probability of B happening, given A happened
 P(B): Probability of observing B
Example:

Suppose an AI system diagnoses a disease.

 1% of people have the disease → P(H) = 0.01


 Test detects disease correctly 90% of the time → P(E|H) = 0.9
 Overall probability of a positive test result → P(E) = 0.05

P(H|E) = (0.9 × 0.01) / 0.05 = 0.18

So, there is an 18% chance that the person actually has the disease after a positive test.
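The disease-test calculation above can be checked with a one-line Bayes function, using the numbers given in the text:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).

def posterior(p_h, p_e_given_h, p_e):
    """Update the prior P(H) after observing evidence E."""
    return p_e_given_h * p_h / p_e

p = posterior(p_h=0.01, p_e_given_h=0.9, p_e=0.05)
print(round(p, 2))  # 0.18
```

Note how a highly accurate test still yields only an 18% posterior because the prior (1%) is so low.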

Advantages:

✅ Handles uncertainty systematically


✅ Allows continuous updating of knowledge
✅ Combines prior knowledge with new data

Limitations:

❌ Requires accurate prior probabilities


❌ Computationally expensive for large data sets

Applications in AI:

 Medical diagnosis
 Spam filtering (Naive Bayes Classifier)
 Image and speech recognition
 Decision support systems
 Robotics and autonomous navigation

------- xxxxx ------

Dempster–Shafer Theory of Evidence


Definition: Dempster–Shafer Theory (DST) is a mathematical theory of evidence
for managing uncertainty and making decisions when information is incomplete or
conflicting.

It allows reasoning without needing exact probabilities, unlike Bayesian inference.

Instead of assigning probabilities to single hypotheses, DST assigns a degree of belief to sets
of hypotheses.
It is useful when the available evidence does not strongly support one single outcome.
Basic Terms/key concepts:

 Frame of Discernment (Θ):


The set of all possible outcomes or hypotheses.
Example: Θ = {Rain, No-Rain}
 Basic Probability Assignment (BPA or m):
It represents the degree of belief given to a subset of Θ.
It satisfies:

∑_{A ⊆ Θ} m(A) = 1, m(∅) = 0

 Belief Function (Bel):


Total belief that supports a hypothesis.

Bel(A) = ∑_{B ⊆ A} m(B)

 Plausibility Function (Pl):


Measures how much we cannot disbelieve A.

Pl(A)=1−Bel(Aˉ)

Dempster’s Rule of Combination:

To combine multiple independent pieces of evidence from different sources


when exact probabilities are not known.

➡️Main formula:

m(A) = (1 / (1 − K)) ∑_{B ∩ C = A} m1(B) × m2(C)

where
m1, m2 = basic belief assignments (BBAs) from the two evidence sources
K = ∑_{B ∩ C = ∅} m1(B) × m2(C) = conflict measure (how much the evidence disagrees)

➡️Type:
Evidential reasoning — works with belief and plausibility, not exact probabilities.

➡️Used when:
Information is incomplete or uncertain, and you want to merge evidence from multiple
sensors, experts, or reports.

➡️Example:
Two sensors detect whether it is raining; each gives partial belief.
DST combines both to get a more confident final belief about "Rain".
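Dempster's rule can be sketched in Python for the two-sensor rain example. The mass values below are illustrative assumptions; subsets of the frame are represented as frozensets:

```python
from itertools import product

# Dempster's rule of combination over the frame {Rain, NoRain}.
# m1 and m2 are illustrative mass functions from two sensors.

THETA = frozenset({"Rain", "NoRain"})

m1 = {frozenset({"Rain"}): 0.6, THETA: 0.4}   # sensor 1
m2 = {frozenset({"Rain"}): 0.7, THETA: 0.3}   # sensor 2

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc            # K: mass falling on empty sets
    return {a: v / (1 - conflict) for a, v in combined.items()}

m = combine(m1, m2)
print(round(m[frozenset({"Rain"})], 2))  # 0.88: stronger combined belief in Rain
```

Because neither source assigned mass to "NoRain", the conflict K is zero here; conflicting sources would shrink the denominator.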
Advantages:

✅ Works with incomplete and uncertain information


✅ Does not require prior probabilities
✅ Combines evidence from multiple sources

Limitations:

❌ Computationally complex for large hypothesis spaces


❌ Results depend on how evidence is combined

Applications in AI:

 Sensor fusion in robotics


 Medical diagnosis
 Decision-making under incomplete data
 Fault detection systems
 Information retrieval

Difference between Bayes’ and Dempster–Shafer:


Aspect                     | Bayes’ Inference         | Dempster–Shafer Theory
---------------------------|--------------------------|-----------------------------
Requires prior probability | Yes                      | No
Type of reasoning          | Probabilistic            | Evidential
Deals with                 | Exact probabilities      | Uncertain, partial evidence
Output                     | Posterior probability    | Belief and plausibility
Use case                   | When prior data is known | When evidence is incomplete

------------- xxxx--------
