Overview of AI Concepts and Types
Artificial Intelligence:
"Artificial" means man-made, and "intelligence" refers to the ability to think.
For example, smartphones and electronic gadgets such as smart watches work in a
pre-programmed manner.
With AI, we do not need to pre-program every algorithm: the system can assist
using its own intelligence.
With the help of AI we can create software and devices that solve real-world
problems easily and accurately, for example in healthcare, marketing, and traffic
management.
With the help of AI we can create virtual personal assistants like Google Assistant.
With the help of AI we can create robots that work in environments where human
survival is difficult.
AI opens skill paths to new technologies, new domains, and new opportunities.
Goals of AI:
To explicate (replicate and explain) human intelligence.
To solve knowledge-intensive tasks, making an intelligent connection between perception and action.
To build machines that can perform tasks requiring human intelligence, such as
self-driving cars, surgical operations in medicine, and playing chess.
To create systems that can exhibit intelligent behaviour.
Artificial Intelligence deals with building systems that can solve problems intelligently like
humans. The main types of AI problems are:
1. Search Problems
Many AI tasks can be seen as searching for a solution among possible alternatives.
AI uses search algorithms to find the best path or sequence of actions to reach a
goal.
Techniques:
o Breadth First Search (BFS)
o Depth First Search (DFS)
o Heuristic Search (A*, Hill Climbing)
Examples:
o Google Maps (finding the shortest path)
o Solving puzzles like 8-puzzle, chess moves
o Planning robot actions
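The shortest-path idea behind these examples can be sketched with a minimal Breadth First Search. The road map below is a hypothetical graph used only for illustration:

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search: explores states level by level and
    returns the first (fewest-edges) path from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # no path exists

# Hypothetical road map for illustration
road_map = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
}
print(bfs_path(road_map, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Because BFS expands states in order of depth, the first path it returns uses the fewest actions, which is why it is a natural baseline for route-finding problems.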
2. Learning Problems
Instead of being pre-programmed, AI systems can learn from data and experiences.
This is the core of Machine Learning.
Types of Learning:
o Supervised Learning – learning from labeled data (e.g., spam mail detection).
o Unsupervised Learning – learning patterns without labels (e.g., clustering
customers).
o Reinforcement Learning – learning by trial and error using rewards (e.g.,
training robots, game-playing AI).
Examples:
o Email spam detection
o Fraud detection in banking
o Game AI like AlphaGo
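A toy sketch of supervised learning in the spam-detection spirit above. The word list, messages, and threshold rule are all illustrative assumptions, not a real spam filter:

```python
# Toy supervised learning: learn a spam threshold from labeled messages.
SPAM_WORDS = {"free", "winner", "prize", "offer"}  # assumed keyword list

def score(message):
    # Feature: count of known spam words appearing in the message.
    return sum(word in SPAM_WORDS for word in message.lower().split())

def train(labeled):
    # "Training" here just places a threshold between the two classes.
    spam = [score(m) for m, label in labeled if label == "spam"]
    ham = [score(m) for m, label in labeled if label == "ham"]
    return (min(spam) + max(ham)) / 2

def classify(message, threshold):
    return "spam" if score(message) > threshold else "ham"

data = [("free prize winner", "spam"), ("meeting at noon", "ham"),
        ("free offer today", "spam"), ("lunch tomorrow", "ham")]
t = train(data)
print(classify("claim your free prize now", t))  # spam
print(classify("see you at lunch", t))           # ham
```

The key point is that the decision rule comes from labeled data, not from a hand-written if/else for every message.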
3. Perception Problems
--------- xxxx-------
AI TYPES:
"Artificial" refers to what is made by humans (non-natural things), and "intelligence"
refers to the ability to understand and the power to think.
AI is broken down into two primary categories: types based on capabilities and types
based on functionalities.
I. Based on Capabilities
2. General AI
General AI can perform any intellectual task that a human can do.
It can learn, reason, and adapt to new situations.
Still a theoretical concept, not fully developed yet.
Examples (conceptual):
o A robot doctor diagnosing multiple diseases like a human doctor
o AI that can write, drive, play games, and think like humans
3. Super AI
II. Based on Functionalities
1. Reactive Machines
2. Limited Memory AI
3. Theory of Mind AI
4. Self-Aware AI
---------------- xxxxxxxxx----------------
Agents in AI
Agents are software programs that can sense the environment, make decisions, and
take actions to achieve goals.
Example:
o Google Assistant (agent that takes voice input and gives results)
o Self-driving car (agent that senses road & drives safely)
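Conceptually, an agent is a mapping from percepts to actions. A minimal sketch of a simple reflex agent for a two-square vacuum world (the square names A and B are an assumption for illustration):

```python
def vacuum_agent(percept):
    """Simple reflex agent: maps the current percept directly to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    # Otherwise move to the other square to check it.
    return "Right" if location == "A" else "Left"

print(vacuum_agent(("A", "Dirty")))  # Suck
print(vacuum_agent(("B", "Clean")))  # Left
```

Real agents (like a self-driving car) use far richer percepts and internal state, but the sense-decide-act structure is the same.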
The nature of environments: The task environment is essentially the problem to which
a rational agent is the solution. To judge the rationality of even a simple
vacuum-cleaner agent, we must specify the performance measure, the environment, and
the agent's actuators and sensors. Together these four are called the PEAS description.
For example:
Performance measure: For an automated driver, desirable qualities include getting to
the correct destination; minimizing fuel consumption, travel time, and cost;
minimizing traffic-rule violations and disturbance to other drivers; and maximizing
passenger safety and comfort as well as profit.
Environment: This includes whatever driving environment the taxi will face. Taxi
drivers must deal with a variety of roads, ranging from rural lanes to urban streets,
which may contain traffic, pedestrians, road work, police cars, etc.
Actuators: The actuators for an automated taxi include those available to a human
driver: control over the engine through the accelerator, and control over the
steering and brakes.
Sensors: The basic sensors for a taxi include one or more controllable video cameras
so that it can see the road; these may be augmented with infrared or sonar systems to
detect the distances to other cars and obstacles.
Agent Type | P (Performance Measure) | E (Environment) | A (Actuators) | S (Sensors)
Taxi | Safe, fast, comfortable, stable, top profits | Roads, traffic, pedestrians, customers | Steering, accelerator, brake, signals, display | Camera, GPS, engine sensors, sonar, speedometer
Medical Diagnosis System | Healthy patient, reduce the cost | Patient, hospital, doctor, nurses | Display of questions, tests, treatments, diagnoses | Keyboard entry of symptoms, patient answers
Satellite image analysis system | Correct image character recognition | Satellite image of observed data | Display, categorization | Color, image, focus, zoom
Part-picking robot | Percentage of parts placed in the correct bins | Conveyor belt with parts and bins | Jointed arms and hand | Camera, joint angle sensors
Refinery control | Maximize purity, safety | Refinery | Valves, pumps, displays | Temperature, pressure, chemical sensors
------------- xxxxxxxxxxxx-----------------
UNIT – II
Searching – Searching for solutions, uninformed search strategies – Breadth first search, depth
first search. Search with partial information (Heuristic search) Hill climbing, A*, AO*
Algorithms, Problem reduction, Game Playing – Adversarial search, Games, mini-max
algorithm, optimal decisions in multiplayer games, Problem in Game playing, Alpha-Beta
pruning, Evaluation functions.
SEARCHING: Searching is the process of finding a sequence of actions that leads from the start
state to a goal state within a given state space.
A search algorithm takes a problem as input and returns a solution in the form of an
action sequence. Once a solution is found, the actions can be carried out. This is
known as the execution phase.
A search problem has three components:
1. A state space
2. A start state
3. A goal test
1. A state space: the set of all possible states we can be in.
2. A start state: the state from which the search begins.
3. A goal test: a function that looks at the current state and returns whether or not
it is a goal state.
1. Uninformed Search:
Ex 1: Vacuum world problem
Initial State: The vacuum agent is located in one of two squares (Left or Right). Each
square may be clean or dirty.
Actions:
1. Move Left
2. Move Right
3. Suck (clean the square)
Goal State: All squares must be clean, regardless of the agent’s position.
Path Cost: Each action (move or suck) has a cost of 1. The path cost is the total
number of actions.
Successor Function: Generates new states when the agent applies actions (move or
clean).
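The formulation above can be sketched directly. A state is a triple (location, left-square-dirty, right-square-dirty), and the successor function applies one action:

```python
ACTIONS = ["Left", "Right", "Suck"]

def successor(state, action):
    """Apply one action to a vacuum-world state (location, dirt_left, dirt_right)."""
    loc, dirt_l, dirt_r = state
    if action == "Left":
        return ("L", dirt_l, dirt_r)
    if action == "Right":
        return ("R", dirt_l, dirt_r)
    # Suck: clean the square the agent is currently in.
    if loc == "L":
        return (loc, False, dirt_r)
    return (loc, dirt_l, False)

def is_goal(state):
    return not state[1] and not state[2]  # all squares clean

# Start in the left square with both squares dirty:
s = ("L", True, True)
s = successor(s, "Suck")    # ('L', False, True)
s = successor(s, "Right")   # ('R', False, True)
s = successor(s, "Suck")    # ('R', False, False)
print(is_goal(s))           # True, path cost = 3 actions
```

Each call costs 1, so the three-action sequence above has path cost 3, matching the cost model in the formulation.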
Ex 2: 8 Puzzle problem
The 8 puzzle problem consists of a 3×3 board with 8 number tiles and a blank space.
Initial State:
1 2 3
5 6
7 8 4
Goal State:
1 2 3
5 8 6
7 4
1. Initial State
o The starting arrangement of the tiles on the 3×3 board.
o Example:
o 1 2 3
o 5 6
o 7 8 4
2. State Space
o The set of all possible configurations of the 8 numbered tiles and the blank
space (total = 9! = 362,880 possible states).
3. Successor Function
o Defines all possible moves by sliding a tile into the blank space (Up, Down,
Left, Right).
o Each move creates a new state.
4. Goal State
o The desired arrangement of tiles.
o Standard goal state is:
o 1 2 3
o 4 5 6
o 7 8
5. Path Cost
o Each move costs 1.
o The total cost is the number of moves taken to reach the goal.
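The successor function for the 8-puzzle can be sketched as below. States are 9-tuples read row by row, with 0 marking the blank; the starting board matches the initial state shown above (blank in the centre):

```python
# A sketch of the 8-puzzle successor function: slide a tile into the blank.
def successors(state):
    """state is a 9-tuple read row by row; 0 marks the blank square."""
    moves = []
    i = state.index(0)
    row, col = divmod(i, 3)
    for dr, dc, name in [(-1, 0, "Up"), (1, 0, "Down"),
                         (0, -1, "Left"), (0, 1, "Right")]:
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = r * 3 + c
            new = list(state)
            new[i], new[j] = new[j], new[i]  # swap blank with the neighbor tile
            moves.append((name, tuple(new)))
    return moves

# Blank in the centre of the board: all four moves are possible.
start = (1, 2, 3, 5, 0, 6, 7, 8, 4)
print([name for name, _ in successors(start)])  # ['Up', 'Down', 'Left', 'Right']
```

Plugging `successors` into a BFS over these tuples would search the 9! = 362,880-state space described above, with each move costing 1.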
----------------------------------------------------------------------------------------------------------------
PROPERTIES of searching:
Search algorithms are compared by whether the AI agent can reliably accomplish the
search task, how much time it takes, and how much storage is needed; all of these
depend on the complexity of the task.
PROBLEM REDUCTION
1. OR Nodes:
o Represent choices.
o The problem can be solved by solving any one of its child nodes.
o Example: To reach city C, you can go via A or via B.
2. AND Nodes:
o Represent necessary decompositions.
o The problem can be solved only if all child subproblems are solved
together.
o Example: To pass an exam, you must clear Theory and Practical.
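The OR/AND distinction can be sketched as a solvability check over an AND-OR tree. The tree below encodes the two examples from the text; the leaf statuses are illustrative assumptions:

```python
def solvable(node, tree, leaf_status):
    """A node is solvable if it is a solvable primitive (leaf), or an OR node
    with ANY solvable child, or an AND node with ALL children solvable."""
    if node in leaf_status:  # primitive problem
        return leaf_status[node]
    kind, children = tree[node]
    results = [solvable(c, tree, leaf_status) for c in children]
    return any(results) if kind == "OR" else all(results)

tree = {
    "PassExam": ("AND", ["Theory", "Practical"]),  # must clear both
    "ReachC":   ("OR",  ["ViaA", "ViaB"]),         # either route suffices
}
leaf_status = {"Theory": True, "Practical": False, "ViaA": False, "ViaB": True}
print(solvable("PassExam", tree, leaf_status))  # False: AND needs all children
print(solvable("ReachC", tree, leaf_status))    # True: OR needs only one
```

This is the core test that AO*-style algorithms apply while building a solution graph.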
1. Theorem Proving:
o Reduce a complex proof into proofs of lemmas
2. Planning:
o In planning, a task is broken into sub-steps
3. Natural Language Processing (NLP):
o Sentence → Tokens → Parse Tree
o Navigate → Move → Drive
4. Game Solving:
o In game solving, we have to play the game to beat the opponent and not to
lose the game
5. Game Playing
o Reducing a game position into possible sub-moves.
1. Means-End Analysis:
o It reduces the difference between the current node and the goal node step by
step
2. Goal Trees:
o Used to visualize the decomposition of a problem
3. AO* Algorithm:
o Finds the optimal solution using AND/OR graphs
4. Divide and Conquer:
o A classical approach used in searching and sorting techniques, etc.
-------- x ------
Game Playing in AI
Game playing is an important application of problem reduction in Artificial
Intelligence.
In AI, a game is modelled as a search problem where:
o The system must strategize decision making
o Look ahead at all possible moves
o Respond to the opponent’s moves
o Try to win (or at least not lose)
Adversarial Search
Adversarial Search is a search technique used in competitive environments where two or
more agents (players) have conflicting goals, and each player tries to maximize their own
utility while minimizing the opponent’s utility.
Types of Algorithms
1. Minimax Algorithm
2. Alpha-Beta (α-β) Pruning
Characteristics
1. Players – At least two players play the game (e.g., Player vs Player, Player vs
Computer).
2. Environment – Deterministic, turn-based, with perfect information: both players
can see the board.
3. Opponent – Competes directly to win the game (one player's win = the other
player's loss).
This allows the AI to anticipate the opponent's best move and plan accordingly.
Minimax Algorithm
Working Principle
Assumes that both players play optimally; it searches the possible moves to find
the best outcome.
The AI plans its moves based on the opponent's moves.
It performs a depth-first search (DFS) on the game tree.
At terminal nodes (H, I, J, … O), assign values (e.g., Win = +1, Lose = -1, Draw = 0).
These values are propagated upward:
o MAX nodes choose maximum child value.
o MIN nodes choose minimum child value.
Finally, the root (A) chooses the optimal strategy.
Properties
1. Completeness → Always finds a solution if game tree is finite.
2. Optimality → Always optimal if opponent plays optimally.
3. Time Complexity → O(b^m)
o b = branching factor (legal moves at each point).
o m = maximum depth of the tree.
4. Space Complexity → O(bm) (depth-first search storage).
                 30 (MAX)
              /           \
        12 (MIN)          30 (MIN)
        /      \           /      \
  15 (MAX)  12 (MAX)  30 (MAX)  40 (MAX)
    /  \      /  \      /  \      /  \
   15  11    8   12    12  30    16  40   (terminal values)
In this algorithm, two players play the game: MAX and MIN.
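The example tree can be evaluated with a short recursive sketch. It assumes a complete binary tree and takes the leaf values from the diagram above, left to right:

```python
def minimax(values, depth, index, is_max):
    """Evaluate a complete binary game tree of the given depth;
    `values` holds the terminal (leaf) scores from left to right."""
    if depth == 0:
        return values[index]
    left = minimax(values, depth - 1, 2 * index, not is_max)
    right = minimax(values, depth - 1, 2 * index + 1, not is_max)
    # MAX nodes take the maximum child value, MIN nodes the minimum.
    return max(left, right) if is_max else min(left, right)

# Leaf values from the example tree, left to right:
leaves = [15, 11, 8, 12, 12, 30, 16, 40]
print(minimax(leaves, 3, 0, True))  # 30, matching the root of the tree
```

The MAX level picks 15, 12, 30, 40; the MIN level picks 12 and 30; and the root (MAX) picks 30, exactly as shown in the diagram.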
Applications
Alpha–Beta Pruning
Alpha–Beta pruning is an optimization technique for the Minimax algorithm.
It reduces the number of nodes evaluated in the search tree by pruning (cutting off)
branches that will not affect the final decision.
Minimax is slow in complex games like Chess, because of huge number of moves.
Alpha–Beta pruning reduces the number of nodes that need to be evaluated.
It does not affect the final result, only improves efficiency.
Alpha (α): Best (highest) value that the MAX player can guarantee at that point.
Beta (β): Best (lowest) value that the MIN player can guarantee at that point.
If at any point α ≥ β, further exploration of that node is stopped (pruned).
Action of Alpha–Beta
At a MAX node:
1. Initialize α = –∞
2. If a child value > α → update α
3. If α ≥ β → prune the remaining children
At a MIN node:
1. Initialize β = +∞
2. If a child value < β → update β
3. If β ≤ α → prune the remaining children
Explanation:
In both MIN and MAX nodes, a situation can occur where α ≥ β. This allows pruning
of nodes (i.e., skipping evaluation of certain branches).
Both Minimax and Alpha-Beta follow the same path for the optimal move.
Alpha-Beta pruning is preferred because it finds the same best move in less time.
Notes:
Alpha (α):
o Represents the highest value found so far at any MAX node.
o Acts as a lower bound.
o Initialized to –∞.
Beta (β):
o Represents the lowest value found so far at any MIN node.
o Acts as an upper bound.
o Initialized to +∞.
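A minimal sketch of Minimax extended with α–β pruning, reusing the leaf values of the example tree (again assumed to be a complete binary tree):

```python
import math

def alphabeta(values, depth, index, is_max, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning over a complete binary tree."""
    if depth == 0:
        return values[index]
    if is_max:
        best = -math.inf
        for child in (2 * index, 2 * index + 1):
            best = max(best, alphabeta(values, depth - 1, child, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:  # MIN above will never allow this branch
                break          # prune the remaining children
        return best
    best = math.inf
    for child in (2 * index, 2 * index + 1):
        best = min(best, alphabeta(values, depth - 1, child, True, alpha, beta))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

leaves = [15, 11, 8, 12, 12, 30, 16, 40]
print(alphabeta(leaves, 3, 0, True))  # 30, same result as plain minimax
```

The root value is identical to Minimax; only the number of nodes visited changes, which is the whole point of pruning.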
-----------------------------------------xxxxx-----------------------------------------------------------------------------------
MULTIPLAYER GAMES
In Artificial Intelligence, games can have more than two players; these are known as
multiplayer games, and each player has their own goals.
This is unlike two-player games such as chess, where one player wins and the other
loses.
Optimal Decision
The optimal decision is the one that maximizes your chance of winning (or minimizes loss),
depending on the game's goal.
In multiplayer games, it's not just between you and one opponent — now there are several
opponents, making optimal decision-making more complex due to unpredictable conflicts.
Example:
Player B will choose a move that increases B's score, even if A and C also benefit
from it.
Paranoid Algorithm
In Paranoid Algorithm, a player thinks that all other players are working together
against them.
Because of this assumption, the player plays in a very careful (conservative) way.
This means:
o The player avoids risky moves.
o Always assumes the worst-case scenario (others will choose moves that hurt
me most).
o Plays as if the environment is completely hostile.
In CBR, some players join together (form a group) temporarily to defeat stronger
opponents.
These players cooperate for a short time, but after achieving their goal, they can
split up again.
It is used in games or situations where working together gives better results than
acting alone.
🎮 Examples
1. Board games
2. Monopoly
3. Risk
4. Online games (PUBG, Free Fire)
5. Business competitions
6. Negotiation scenarios
In AI, game playing means designing an agent that can make intelligent decisions in a
competitive environment.
However, there are several problems:
1. Combinatorial Explosion
The number of possible game states grows exponentially with each move.
Example: Chess has over 10¹²⁰ possible states — too many to explore fully.
It’s impossible to evaluate every move in real time.
2. Time Constraints
Real-time games require a move within a limited time, so the search must often be
cut off before it is complete.
3. Incompleteness of Search
You may not reach terminal states due to limited search depth.
This leads to inaccurate evaluations.
The decision may not be truly optimal.
4. Randomness
Some games involve random events (e.g., dice rolls, card draws).
Algorithms like Minimax cannot handle randomness directly.
Need methods like Expectiminimax or Monte Carlo Search.
5. Partial Observability
6. Multiple Players
When there are more than two players, the strategy becomes more complex.
This requires game theory models like the Maxⁿ or Paranoid Algorithm.
7. Evaluation Function Design
An evaluation function is used to estimate the value of non-terminal nodes (not end
states) in a game tree.
If it is poorly designed, it can mislead the AI and produce wrong decisions.
A good evaluation function must consider multiple factors (like position, score,
moves, etc.).
8. Opponent Modelling
The Minimax algorithm assumes the opponent always plays optimally — but this
may not be true.
In practice, AI can use machine learning to understand and adapt to an opponent’s
behavior.
Helps AI to predict strategies and learn from experience.
9. Learning and Adaptation
Many AI game systems do not just follow fixed rules; they learn from experience.
Techniques like reinforcement learning and self-play help the AI improve.
However, learning requires a lot of computation power and time.
-----------------------------------------------------------------------------------------
Evaluation Functions
An evaluation function is denoted Eval(s).
It returns a numerical score representing the desirability of a state s:
o For the MAX player → a higher score is better.
o For the MIN player → a lower score is better.
A common form is a weighted linear sum of features:
Eval(s) = w1*f1(s) + w2*f2(s) + ... + wn*fn(s)
Where:
f1, f2, ..., fn → the features of the game state
(like material, position, control, etc.)
w1, w2, ..., wn → the weights assigned to each feature
(showing their importance).
♟️Examples
Material Advantage
Positional Advantage
Mobility
Control of the center of the board
King’s Safety
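The weighted-sum form can be sketched in a few lines. The feature values and weights below are illustrative assumptions, not a tuned chess evaluation:

```python
# Hypothetical evaluation: a weighted sum of hand-picked features.
def eval_state(features, weights):
    """Eval(s) = w1*f1(s) + w2*f2(s) + ... + wn*fn(s)"""
    return sum(w * f for w, f in zip(weights, features))

# f1 = material balance, f2 = mobility, f3 = king safety (MAX's point of view)
features = [2.0, 5.0, -1.0]   # assumed feature values for some state s
weights  = [9.0, 0.1, 3.0]    # assumed importance of each feature
print(eval_state(features, weights))  # 18.0 + 0.5 - 3.0 = 15.5
```

In practice the art is in choosing features and tuning weights; the sum itself is trivial.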
Representation of knowledge:
Human intelligence is achieved not by mere reflex mechanisms but by processes of
reasoning that operate on internal representations of knowledge. This approach is
implemented by knowledge-based agents. These agents are supported by logics such as
propositional logic and first-order predicate logic. Agents can form representations
of a complex world, use a process of inference to derive new representations of the
world, and use these new representations to deduce what to do.
Knowledge and reasoning are most important to logical agents, because without them
the agents are unable to behave successfully and achieve their goals.
A knowledge base contains a set of sentences expressed in a formal language.
There are two types of functions used by a knowledge-based agent: TELL, which adds
new sentences to the knowledge base, and ASK, which queries what is known.
Knowledge representation deals with how information and knowledge are to be stored
and structured, and how that structured information is utilized by AI systems.
Representation Scheme:
1. Semantic Network:
The semantic network contains graph structures – a graph contains nodes and edges.
In graph nodes represent the concept and the edge represents the relationship.
2. Frames:
A frame is a data structure that stores information about an object, including its
attributes and values.
3. Rules:
If-then statements that define relationships and concepts, which can be used for
reasoning.
4. Logic-Based Representation:
These use formal logic to represent facts and rules.
Natural language often has ambiguities and vagueness, which makes it difficult for AI
systems to interpret the meaning accurately.
Examples:
4. Incompleteness:
5. Dynamic Knowledge:
It means that knowledge can depend on context; the system must interpret the
information in the correct context to make the appropriate decision.
7. Interoperability
8. Reasoning
Once knowledge is stored, AI must be able to reason (think) using that knowledge.
Reasoning is about drawing conclusions or solving problems using the given
information.
9. Predicate Logic / Predicate Knowledge
Logic Programming
1. Syntax
2. Semantics
1. Constants
Represent specific named objects in the domain.
They are fixed and do not change.
Examples:
o John, Mary, India, 5, a
Usage in FOL: Loves(John, Mary) → means John loves Mary.
2. Variables
Represent placeholders for objects in the domain.
They can take different values.
Usually written as: x, y, z
Example:
o ∀x Human(x) → For every object x, if it is a human…
3. Predicates
Predicates express properties of objects or relationships among objects.
They return True/False.
Example: Likes(John, IceCream) → this statement is true when the person John likes ice cream.
Types:
1. Unary Predicate → property of a single object.
Example: Student(x) → x is a student.
2. Binary Predicate → relation between two objects.
Example: Loves(x, y) → x loves y.
3. n-ary Predicate → relation among n objects.
Example: Between(x, y, z) → x is between y and z.
4. Functions
Functions map objects to other objects.
Always return exactly one value.
Example:
o Father(x) → gives the father of x.
o If Father(John) = David, it means David is John’s father.
5. Quantifiers
Quantifiers specify how many objects in the domain satisfy a property.
o Expresses that there is at least one object for which the property holds.
o Example: ∃x Student(x) ∧ Intelligent(x)
(There exists a student who is intelligent)
6. Logical Connectives
Used to combine formulas: logical connectives are used to build complex statements
from simple ones.
7. Equality (=)
Used to denote that two objects are the same.
Example: Father(John) = David → David is father of John.
For example, consider the domain of natural numbers. The predicate GreaterThan(x, y) holds
if x is greater than y.
If x = 5 and y = 3, GreaterThan(5, 3) is true.
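Over a finite domain, FOL-style predicates and quantifiers can be sketched with Python's `all()` and `any()`. The domain `[1, 2, 3]` is an illustrative assumption:

```python
# Evaluating FOL-style predicates over a small finite domain.
def greater_than(x, y):
    return x > y

print(greater_than(5, 3))  # True, as in the example above

numbers = [1, 2, 3]
# forall x exists y: GreaterThan(y, x) within the domain: False (nothing beats 3)
print(all(any(greater_than(y, x) for y in numbers) for x in numbers))  # False
# exists x forall y: x >= y (a maximum element exists): True (x = 3)
print(any(all(x >= y for y in numbers) for x in numbers))  # True
```

This only works because the domain is finite; real FOL reasoning over infinite domains needs inference rules rather than enumeration.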
QUANTIFIERS
Quantifiers specify how many objects in the domain satisfy a property; they
determine the range and scope of a variable in a logical expression. There are two
types of quantifiers:
Universal Quantifier ( ∀ ): It is used when a statement must hold true for every
element in the domain. Symbol: ∀x. Meaning: "For all x".
Existential Quantifier ( ∃ ): It is Used when a statement must hold true for at least one
element in the domain.
Symbol: ∃x
Meaning: "There exists an x" or "For some x"
Meaning: There exists at least one x such that x is a student and intelligent.
---------- X ---------
Example:
3. Logical Connectives
In predicate logic programming, logical connectives are used to build complex statements
from simple ones, such as:
AND ( ∧ )
OR ( ∨ )
NOT ( ¬ )
Implication ( → )
These are essential for combining facts and rules in a logical and consistent manner.
5. Knowledge Representation:
Example:
6. Inference:
Logic programming systems use inference engines to derive new information from facts
and rules.
Example:
Fact: sweet(john)
Rule: likes(X, icecream) :- sweet(X)
Conclusion: likes(john, icecream)
7. Backtracking :
Applications of Backtracking:
1. Expert systems: to get back the rules and facts from the previous conclusions.
2. Natural language processing: it is used to understand the human language.
3. Puzzle solving
4. Game playing
Example:
o Type: Mammal
o Color: Brown, Black
o Size: Small, Medium
o Behavior: Friendly
--------------------- X ------------------------------
Semantic Network
Useful for representing hierarchical knowledge like “is-a” (inheritance) and “has-a”
(attributes).
Example:
Nodes → Dog, Animal, Mammal
Edges → “is a” → represents that Dog is a type of Animal, and Animal is a type of
Mammal.
This structure allows easy visualization of relationships and helps in reasoning
about knowledge.
FRAMES
A frame is a data structure used for representing stereotypical situations, similar
to how objects are represented in object-oriented programming.
(OR) A Frame is a data structure used to represent knowledge about an object, situation, or
concept in AI.
A frame consists of a collection of slots and their associated values.
Each frame represents a concept and may include default values, which can be overridden
by specific information.
Components of a Frame:
Inheritance: Frames support inheritance, where a frame can inherit properties from another
frame (parent frame). This allows for efficient representation of shared properties and
hierarchical relationships.
Example:
Frame: Dog
Slots:
* Type: animal
* Legs: 4
* Sound: bark
Advantages of Frames
Organizes knowledge in a structured and modular way.
Inheritance reduces redundancy.
Handles defaults and exceptions.
Natural for object-oriented representation (similar to OOP).
Supports procedural knowledge via slot-attached procedures.
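A minimal sketch of frames as dictionaries with slot inheritance. The Dog slots follow the example above; the Animal and Bird frames are added assumptions to show defaults and overriding:

```python
# Frames as dicts: each frame has a parent and a set of slot/value pairs.
frames = {
    "Animal": {"parent": None, "slots": {"type": "living thing", "legs": 4}},
    "Dog":    {"parent": "Animal", "slots": {"sound": "bark"}},
    "Bird":   {"parent": "Animal", "slots": {"legs": 2, "sound": "chirp"}},
}

def get_slot(frame_name, slot):
    """Look up a slot, walking up the parent chain (default inheritance)."""
    while frame_name is not None:
        frame = frames[frame_name]
        if slot in frame["slots"]:
            return frame["slots"][slot]
        frame_name = frame["parent"]
    return None

print(get_slot("Dog", "sound"))  # bark  (Dog's own slot)
print(get_slot("Dog", "legs"))   # 4     (inherited default from Animal)
print(get_slot("Bird", "legs"))  # 2     (default overridden by Bird)
```

Dog inherits `legs = 4` from Animal, while Bird overrides it, which is exactly the "defaults can be overridden" behaviour described above.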
Example:
• Superclass: Animal
• Subclass: Dog
Types of Inheritance:
====================xxxxx--------------------
Example: visiting Doctor → worksIn→ Hospital could trigger a lookup procedure that
returns all doctors in a given hospital;
How combined:
Example: a Dog node in a net + a Frame for Dog with slots (color, size) + rules for
barksIfThreatened.
[Link] Networks
Continuously update and expand based on new information or examples.
-------- X -------
(or) CSP = Problem defined by a set of variables, their possible values (domains), and a set
of constraints that specify allowable combinations.
Example:
Map Coloring Problem
Variables: X={A,B,C,D}
Domains: D={Red,Green,Blue}
Constraints: Adjacent regions cannot have the same color.
Example: A≠B,B≠C,C≠D
✅ Solution: Assign colors so that no two neighboring regions share the same color.
Types of Constraints:
1. Unary Constraint:
Applies to a single variable.
Example: X1≠Red
2. Binary Constraint:
Involves two variables.
Example: X1≠X2
3. Global Constraint:
Involves more than two variables.
Example: “AllDifferent(X1, X2, X3)”
Components of CSP:
🔹 Types of CSPs:
1. Discrete CSPs:
Variables have discrete (finite) values.
Example: Sudoku, Map coloring
2. Continuous CSPs:
Variables have continuous domains.
Example: Real-valued equations, scheduling problems
3. Mixed CSPs:
Contain both discrete and continuous variables.
🔹 Applications of CSP:
Sudoku solving
Map coloring
Timetable scheduling
Resource allocation
Vehicle routing
Exam scheduling
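The map-coloring CSP above can be solved with a minimal backtracking sketch. The adjacency relation (A-B, B-C, C-D) is taken from the constraints listed in the example:

```python
# A minimal backtracking solver for the map-coloring CSP above.
VARS = ["A", "B", "C", "D"]
DOMAIN = ["Red", "Green", "Blue"]
# Binary constraints: adjacent regions must differ.
NEIGHBORS = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}

def consistent(var, color, assignment):
    return all(assignment.get(n) != color for n in NEIGHBORS[var])

def backtrack(assignment):
    if len(assignment) == len(VARS):
        return assignment  # every variable assigned: solution found
    var = next(v for v in VARS if v not in assignment)
    for color in DOMAIN:
        if consistent(var, color, assignment):
            assignment[var] = color
            result = backtrack(assignment)
            if result:
                return result
            del assignment[var]  # undo and try the next color
    return None

solution = backtrack({})
print(solution)  # e.g. {'A': 'Red', 'B': 'Green', 'C': 'Red', 'D': 'Green'}
```

The same assign-check-undo skeleton solves Sudoku, timetabling, and the other applications listed, with only the variables, domains, and `consistent` test changed.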
In AI, there are several types of rules commonly used for representing knowledge. One of
them is:
1. Production Rules
“If the temperature is above 100°F, then the system should be alarmed.”
2. Inference Rules
If (A ∧ B) → C
Example:
[Link] Representation
[Link] Network
[Link] Rules
Constraint Rules
These rules specify limitations and constraints within the statements that the system
can operate on.
Example:
If a patient is allergic to penicillin, then that medicine should not be prescribed.
Rule-Based Systems
These systems are designed to solve problems through logical reasoning, by
automatically applying rules to a knowledge base.
Rule-based reasoning is particularly useful for tasks such as decision-making,
expert systems, and knowledge representation.
1. Knowledge Base:
This is the core component that contains all the facts and rules.
o The facts represent the information about the domain, while the rules are
typically in the form of If–Then statements.
Example:
Rule: If it is raining, then the ground will be wet.
2. Inference Engine:
This is the processing unit that applies the rules to the facts in the knowledge base to
draw the conclusion.
It can work in two primary methods, as follows:
1. Forward Chaining (Data-Driven):
It is used in production systems and expert systems where the goal is to explore all
the possibilities from the current state.
Process:
1. Take the initial data/facts.
2. Match them with the IF part of rules.
3. Fire/apply the rule → generate new facts.
4. Repeat until the desired goal is found.
👉 Example:
For example:
Fact – It is raining
Rule – If it is raining, then carry an umbrella
Conclusion → Carry an umbrella.
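The match-fire-repeat loop can be sketched in a few lines. The facts and the first rule come from the umbrella example; the second rule is an added illustration:

```python
# Forward chaining: repeatedly fire rules whose IF-part matches known facts.
facts = {"raining"}
rules = [
    ({"raining"}, "carry_umbrella"),        # if raining then carry an umbrella
    ({"carry_umbrella"}, "stay_dry"),       # assumed extra rule for illustration
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)  # fire the rule: derive a new fact
            changed = True

print(sorted(facts))  # ['carry_umbrella', 'raining', 'stay_dry']
```

The loop keeps deriving new facts until nothing changes, which is exactly the "repeat until the goal is found" step above.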
2. Backward Chaining (Goal-Driven):
Used in: Expert systems like medical diagnosis.
Process:
1. Define a goal to prove.
2. Check rules that can produce the goal.
3. See if the conditions (IF part) are satisfied.
4. If not, set those conditions as new sub-goals and repeat.
👉 Example:
For example:
Fact/Goal – Do I need an umbrella?
Rule – If it is raining, then carry an umbrella.
Applications:
Reasoning under Uncertainty
In AI, uncertainty handling is involved in making decisions or drawing conclusions
when the available information is incomplete, ambiguous, or uncertain.
It is important because real-world problems often involve incomplete, noisy, or
uncertain data, so AI systems must reason probabilistically instead of using strict
logic.
Why Uncertainty Occurs
1. Incomplete knowledge – All facts about the environment are not known.
2. Ambiguous data – Information may be vague or imprecise.
3. Unpredictable outcomes – Actions may have multiple possible results.
4. Sensor noise – Data from sensors may be inaccurate or noisy.
2. Bayesian Networks
3. Fuzzy Logic
Unlike traditional binary logic, which deals only with true or false values, fuzzy
logic allows for degrees of truth. It is useful in situations where concepts are not
black and white.
Example:
Instead of saying the temperature is hot, fuzzy logic finds allowance for the above statement,
e.g., “the temperature is somewhat hot,” which can be more representative for real-world
conditions.
Dempster-Shafer Theory
This is a mathematical theory of evidence that allows for reasoning with uncertain
information. It combines different pieces of evidence to calculate the probability of an event.
This method is particularly used for dealing with conflicting information.
Markov Decision Processes (MDPs)
An MDP is used for modeling and decision-making in situations where the outcomes are
partly random and partly under the control of the decision-maker.
This provides a framework of modeling for the environment, actions, rewards, and
transitions.
This makes allowance for optimal decision-making under uncertainty.
Bayesian Inference
Example:
In spam detection, Bayesian inference can assign probabilities that an email is spam based on
the words it contains, allowing the computer to improve its predictions over time.
Markov Models
Markov Models (including Hidden Markov Models) are widely used in AI for tasks like
speech recognition and natural language processing.
It assumes that the future states depend only on the current state and not on all the
previous events.
This simplifies the modeling of complex systems where uncertainty is present.
Example:
Predicting the next word in a sentence or recognizing spoken words.
Probabilistic Graphical Models
These are models, including Bayesian networks and Markov random fields, which represent
the relationships between a set of variables using graphs.
They allow for efficient computation of joint probability distributions and help in
understanding dependencies among variables.
These models are particularly useful in complex domains such as medical diagnosis and
social network analysis.
Example:
An AI system deciding whether to send a robot for rescue during a storm, based on the
probability of success and risk.
Applications
Bayes’ Probabilistic Inference is a method of reasoning under uncertainty that uses Bayes’
Theorem to update the probability of a hypothesis when new evidence is observed.
It helps AI systems make decisions when the data is incomplete, uncertain, or noisy.
Bayes' Theorem:
P(A|B) = P(B|A) · P(A) / P(B)
Where:
P(A|B) → posterior probability of hypothesis A given evidence B
P(B|A) → likelihood of the evidence if A is true
P(A) → prior probability of A
P(B) → overall probability of the evidence
So, there’s 18% chance that the person actually has the disease after a positive test.
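A worked Bayes computation for a medical test. The prevalence, sensitivity, and false-positive rate below are assumed numbers chosen for illustration (they are not the exact figures behind the quoted result, which are not given here):

```python
# Bayes' theorem with illustrative (assumed) numbers for a medical test:
p_disease = 0.01        # prior: 1% of people have the disease
p_pos_given_d = 0.90    # likelihood: the test detects 90% of true cases
p_pos_given_not = 0.04  # false positive rate: 4%

# P(+) = P(+|D)P(D) + P(+|not D)P(not D)   (law of total probability)
p_pos = p_pos_given_d * p_disease + p_pos_given_not * (1 - p_disease)
# P(D|+) = P(+|D)P(D) / P(+)
posterior = p_pos_given_d * p_disease / p_pos
print(round(posterior, 3))  # 0.185: only ~18.5% despite a positive test
```

Even an accurate test gives a modest posterior when the disease is rare, which is why AI systems update probabilities rather than treat a positive test as proof.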
Advantages:
Limitations:
Applications in AI:
Medical diagnosis
Spam filtering (Naive Bayes Classifier)
Image and speech recognition
Decision support systems
Robotics and autonomous navigation
Instead of assigning probabilities to single hypotheses, DST assigns a degree of belief to sets
of hypotheses.
It is useful when the available evidence does not strongly support one single outcome.
Basic Terms/key concepts:
∑_A m(A) = 1, m(∅) = 0   (m is the basic belief assignment)
Bel(A) = ∑_{B ⊆ A} m(B)   (belief: total support committed to A)
Pl(A) = 1 − Bel(¬A)   (plausibility)
➡️Main formula (Dempster's rule of combination):
m(A) = (1 / (1 − K)) · ∑_{B ∩ C = A} m1(B) · m2(C)
where
m1, m2 = basic belief assignments (BBAs) from two evidence sources
K = ∑_{B ∩ C = ∅} m1(B) · m2(C) = conflict measure (how much the evidence disagrees)
➡️Type:
Evidential reasoning — works with belief and plausibility, not exact probabilities.
➡️Used when:
Information is incomplete or uncertain, and you want to merge evidence from multiple
sensors, experts, or reports.
➡️Example:
Two sensors detect whether it’s raining — each gives partial belief.
DST combines both to get a more confident final belief about "Rain".
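The two-sensor rain example can be sketched with Dempster's rule over the frame {Rain, Dry}. The belief masses (0.6 and 0.7, with the remainder on the whole frame to express ignorance) are assumed numbers:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: m(A) = (1/(1-K)) * sum over B∩C=A of m1(B)*m2(C)."""
    combined, K = {}, 0.0
    for (B, v1), (C, v2) in product(m1.items(), m2.items()):
        A = B & C
        if not A:
            K += v1 * v2  # conflicting evidence (empty intersection)
        else:
            combined[A] = combined.get(A, 0.0) + v1 * v2
    return {A: v / (1 - K) for A, v in combined.items()}

# Hypothesis sets as frozensets; THETA = the whole frame = ignorance.
RAIN, DRY = frozenset({"Rain"}), frozenset({"Dry"})
THETA = RAIN | DRY
m1 = {RAIN: 0.6, THETA: 0.4}  # sensor 1: partial belief in Rain
m2 = {RAIN: 0.7, THETA: 0.3}  # sensor 2: partial belief in Rain
m = combine(m1, m2)
print(round(m[RAIN], 2))  # 0.88: higher than either sensor alone
```

Because the two sources agree, combination concentrates mass on Rain (0.88) and shrinks the ignorance mass on the whole frame, which is exactly the "more confident final belief" described above.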
Advantages:
Limitations:
Applications in AI:
------------- xxxx--------