
Introduction to Artificial Intelligence

The document provides an introduction to Artificial Intelligence (AI), defining it as a field focused on creating systems that can perform tasks requiring human intelligence. It outlines four primary approaches to AI: Acting Humanly, Thinking Humanly, Thinking Rationally, and Acting Rationally, along with various applications in fields such as healthcare, finance, and gaming. Additionally, it discusses the historical development of AI and the interdisciplinary foundations that contribute to its evolution.

Uploaded by

Suhail Shaikh

AI Unit I: Introduction to Artificial Intelligence

Date: 19/09/25

1. Introduction to AI

Definition

Artificial Intelligence (AI) is a broad field of computer science focused on creating systems
that can perform tasks that typically require human intelligence. This includes capabilities
like learning, reasoning, problem-solving, perception, and understanding language.

The ultimate goal of AI is to build machines that can think, act, and learn like humans, or
even surpass human capabilities in specific domains.

2. Approaches to AI

There are four primary approaches to AI, which can be categorized along two dimensions:
thought processes vs. behavior, and human-like vs. rational.

              Human-like             Rational
Thinking      1. Thinking Humanly    2. Thinking Rationally
Acting        3. Acting Humanly      4. Acting Rationally

1. Acting Humanly (The Turing Test Approach)

●​ Core Idea: To create a machine that can exhibit intelligent behavior indistinguishable
from that of a human.
●​ The Turing Test (Proposed by Alan Turing, 1950): This is the classic test for this
approach.
○​ A human interrogator asks a series of written questions to both a human and a
machine, without knowing which is which.
○​ If the interrogator cannot reliably tell the machine from the human, the
machine is said to have passed the test and demonstrated artificial intelligence.
●​ Required Capabilities: To pass the Turing Test, a machine would need:
○​ Natural Language Processing (NLP): To understand and generate human
language.
○​ Knowledge Representation: To store what it knows or hears.
○​ Automated Reasoning: To use the stored information to answer questions
and draw new conclusions.
○​ Machine Learning: To adapt to new circumstances and detect patterns.
○​ For the Total Turing Test (which includes physical interaction), it would also
need Computer Vision (to perceive objects) and Robotics (to manipulate
objects).

2. Thinking Humanly (The Cognitive Modelling Approach)

●​ Core Idea: To build a system that thinks like a human by understanding the actual
workings of the human mind.
●​ Methodology: This approach involves:
1.​ Introspection: Trying to catch our own thoughts as they happen.
2.​ Psychological Experiments: Observing a person in action to understand their
thought processes.
●​ Goal: To create a computer model of the human mind that can be used to test theories
about how humans think. The field of Cognitive Science brings together AI models
and psychological techniques to achieve this.

3. Thinking Rationally (The "Laws of Thought" Approach)

●​ Core Idea: To build a system that thinks based on logic and formal reasoning. This
approach is based on the idea that correct thinking can be codified as a set of logical
rules.
●​ Methodology: This approach uses logic and formal systems (like syllogisms) to
create programs that can reason correctly, given correct premises. For example:
"Socrates is a man; all men are mortal; therefore, Socrates is mortal."
●​ Challenges:
1.​ It's difficult to translate all real-world knowledge into formal logical notation.
2.​ Solving problems with pure logic can be computationally very slow.

4. Acting Rationally (The Rational Agent Approach)

●​ Core Idea: To create an agent that acts to achieve the best possible outcome in a
given situation. This is the most common and modern approach in AI.
●​ What is a Rational Agent? An agent is anything that can perceive its environment
through sensors and act upon that environment through actuators. A rational agent is
one that acts to achieve the best expected outcome.
●​ Advantages over "Laws of Thought":
○​ It's more general, as correct inference is just one way to achieve rationality.
○​ It's more adaptable to situations with uncertainty and incomplete information.
●​ Example: A self-driving car is a rational agent. It uses sensors (cameras, LiDAR) to
perceive its environment and actuators (steering, brakes) to act in a way that safely
reaches its destination.
3. Applications of AI

AI in Astronomy

●​ Data Analysis: AI algorithms can sift through massive datasets from telescopes to
find patterns and anomalies that humans might miss.
●​ Galaxy Classification: Machine learning models are used to automatically classify
galaxies based on their shape and other features.
●​ Exoplanet Discovery: AI helps in identifying planets outside our solar system by
analyzing the light from distant stars for tiny, periodic dips in brightness.
●​ Telescope Automation: AI can automate the control and scheduling of telescopes,
optimizing observation time.

AI in Healthcare

●​ Medical Imaging Analysis: AI is used to analyze X-rays, CT scans, and MRIs to
detect diseases like cancer at earlier stages and with higher accuracy.
●​ Personalized Treatment: AI can analyze a patient's genetic data and medical history
to recommend personalized treatment plans.
●​ Drug Discovery: Machine learning can speed up the process of discovering new
drugs by predicting the effectiveness of different chemical compounds.
●​ Administrative Tasks: AI automates tasks like scheduling appointments and
managing medical records, freeing up time for healthcare professionals.

AI in Gaming

●​ Smarter NPCs: AI creates more realistic and challenging non-player characters
(NPCs) that can learn from and adapt to the player's behavior.
●​ Procedural Content Generation (PCG): AI can automatically generate game levels,
maps, and even storylines, creating unique experiences for each player.
●​ Pathfinding: AI algorithms like A* are used to help characters navigate complex
game environments intelligently.
●​ Player Experience Modeling: AI can analyze player behavior to adjust the game's
difficulty in real-time, keeping the player engaged.

AI in Finance

●​ Algorithmic Trading: AI systems can analyze market data and execute trades at
speeds far beyond human capability.
●​ Fraud Detection: Machine learning algorithms can detect fraudulent transactions in
real-time by identifying unusual patterns in spending behavior.
●​ Credit Scoring: AI models can assess credit risk more accurately by analyzing a
wider range of data points than traditional methods.
●​ Personalized Banking: AI-powered chatbots and virtual assistants provide 24/7
customer service and personalized financial advice.

AI in Data Security

●​ Threat Detection: AI can analyze network traffic to identify and flag suspicious
activities that may indicate a cyberattack.
●​ Phishing Prevention: AI models can analyze emails to detect phishing attempts with
high accuracy by looking for tell-tale signs.
●​ Vulnerability Management: AI can proactively scan systems for weaknesses and
predict where future attacks are most likely to occur.
●​ Authentication: AI enhances security through methods like facial recognition and
behavioral biometrics (analyzing how a user types or moves a mouse).

4. History of AI (1943-2018)

●​ 1943: Warren McCulloch and Walter Pitts propose the first mathematical model of an
artificial neuron.
●​ 1950: Alan Turing publishes "Computing Machinery and Intelligence," introducing
the Turing Test.
●​ 1956: The term "Artificial Intelligence" is coined by John McCarthy at the Dartmouth
Conference, considered the birth of AI as a field.
●​ 1958: John McCarthy develops the AI programming language Lisp.
●​ 1961: The first industrial robot, Unimate, starts work on an assembly line at General
Motors.
●​ 1965: Joseph Weizenbaum creates ELIZA, an early chatbot that could converse with
humans.
●​ 1970s-1980s: The first "AI Winter" occurs, a period of reduced funding and interest
due to slow progress. Expert systems, which encode human expertise, become
popular.
●​ 1986: Backpropagation, a key algorithm for training neural networks, is popularized,
leading to a resurgence of interest in the field.
●​ 1997: IBM's Deep Blue chess computer defeats world champion Garry Kasparov.
●​ 2006: Geoffrey Hinton and others make significant breakthroughs in "deep learning,"
allowing for much more powerful neural networks.
●​ 2011: IBM's Watson computer wins the quiz show "Jeopardy!", demonstrating
remarkable natural language processing abilities.
●​ 2012: AlexNet, a deep neural network, achieves a massive leap in accuracy on the
ImageNet image recognition challenge, sparking the deep learning revolution.
●​ 2016: Google DeepMind's AlphaGo defeats world champion Lee Sedol at the
complex game of Go.
●​ 2018: AI models like Google's BERT achieve near-human level performance on
reading comprehension tasks.

AI Unit I: Part 2

Date: 19/09/25

1. Foundations of AI

Artificial Intelligence is an interdisciplinary field built upon the contributions of several other
disciplines.

●​ Philosophy: Addressed fundamental questions about the nature of knowledge,
reasoning, and consciousness.
○​ Key Questions: Can formal rules be used to draw valid conclusions? How
does the mind arise from a physical brain? Where does knowledge come
from?
○​ Contribution: Provided the basis for logic, reasoning, and the idea that the
mind operates, at some level, like a machine.
●​ Mathematics: Provided the tools for formal representation and manipulation of logic
and uncertainty.
○​ Key Concepts: Formal logic (propositional and first-order logic),
computation, and algorithms.
○​ Contribution: Mathematics allows us to state problems in a precise, formal
language that machines can understand and process. It's the foundation for
automated reasoning and analyzing the computability and complexity of
problems.
●​ Probability & Statistics: Provided the tools to reason under uncertainty and learn
from data.
○​ Key Concepts: Bayesian probability, statistical modeling, and data analysis.
○​ Contribution: Modern AI, especially machine learning, heavily relies on
statistics to make predictions, classify data, and handle the inherent
uncertainty of the real world. For example, a spam filter uses probability to
determine if an email is junk.
●​ Economics: Provided frameworks for rational decision-making.
○​ Key Concepts: Utility theory (the idea that rational agents try to maximize
their expected happiness or "utility"), decision theory, and game theory.
○​ Contribution: Economics helps define what it means to be a rational agent.
AI systems are often designed to make the best possible choice to achieve
their goals, which is a core economic principle.
●​ Neuroscience: Provided insights into the structure and function of the human brain.
○​ Key Concepts: How the brain processes information, how neurons work, and
how different brain regions are connected.
○​ Contribution: Neuroscience is the inspiration for Artificial Neural
Networks (ANNs), which are a cornerstone of modern deep learning. ANNs
are simplified models of the brain's structure.
●​ Psychology / Cognitive Science: Provided models of human thought, perception, and
learning.
○​ Key Concepts: How humans solve problems, learn new skills, and process
language.
○​ Contribution: This field helps AI researchers build systems that think and
act humanly. By studying human cognition, we can design AI that interacts
with people more naturally and solves problems in more human-like ways.

2. Agents

Definition

An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators.

●​ Sensors: Devices that detect changes in the environment (e.g., cameras, microphones,
keyboards, temperature sensors).
●​ Actuators: Devices that affect the environment (e.g., robotic arms, wheels, speakers,
a display screen).
●​ Percept: The agent's perceptual inputs at any given instant.
●​ Percept Sequence: The complete history of everything the agent has ever perceived.
●​ Agent Function: A mapping from any given percept sequence to an action:
f: P* → A
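The agent function can be sketched as a lookup from percept histories to actions (a table-driven agent). The table entries and percept names below are illustrative, not from the notes:

```python
# A minimal sketch of the agent function f: P* -> A. The agent maps its
# entire percept SEQUENCE (not just the latest percept) to an action by
# looking the sequence up in a table. All names here are illustrative.

def table_driven_agent_factory(table):
    """Return an agent that looks up its full percept history in a table."""
    percepts = []  # the percept sequence P*

    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")

    return agent

# Example table: the action depends on the whole history so far.
table = {
    ("dirty",): "suck",
    ("dirty", "clean"): "move_right",
}
agent = table_driven_agent_factory(table)
first = agent("dirty")    # history: ("dirty",)
second = agent("clean")   # history: ("dirty", "clean")
```

The table grows with every possible percept sequence, which is why real agents compute the mapping with a program instead of storing it explicitly.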

Diagram of an Agent

An agent interacts with its environment. It perceives the environment through its sensors and
acts on the environment using its actuators.

```mermaid
graph TD
    subgraph Environment
        A[State]
    end
    subgraph Agent
        B(Sensors) --> C{Agent Function}
        C --> D(Actuators)
    end
    A -- Percepts --> B
    D -- Actions --> A
    style Agent fill:#f9f,stroke:#333,stroke-width:2px
```

3. Types of Agents

Agents are categorized based on their complexity and intelligence.

1. Simple Reflex Agent

●​ Definition: These agents select actions based only on the current percept, ignoring
the rest of the percept history.
●​ Working: They use simple Condition-Action rules (if-then rules). For example, if
the car ahead brakes, then start braking.

Diagram:​
```mermaid
graph TD
    A[Environment] -- Percept --> B(Sensors)
    B -- "What the world is like now" --> C{Condition-Action Rules}
    C -- "What action I should do now" --> D(Actuators)
    D -- Action --> A
```

●​ Limitation: They have very limited intelligence and cannot operate correctly if the
environment is not fully observable. They cannot handle situations where memory is
needed.
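The condition–action idea can be sketched for the familiar two-square vacuum world; the rule set and the (location, status) percept format are assumptions for illustration:

```python
# A minimal sketch of a simple reflex agent for a two-square vacuum world.
# The action depends ONLY on the current percept; no history is kept.
# The percept format and rules are illustrative choices.

def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == "dirty":        # rule: if dirty, then suck
        return "suck"
    elif location == "A":        # rule: if clean and in A, then move right
        return "right"
    else:                        # rule: if clean and in B, then move left
        return "left"
```

Because the agent keeps no state, it cannot tell whether it has already cleaned the other square, which is exactly the limitation described above.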

2. Model-Based Reflex Agent

●​ Definition: These agents maintain an internal state (a model of the world) to keep
track of the part of the world they can't see right now. The state is updated based on
how the world evolves and how the agent's actions affect the world.
●​ Working: It uses its internal model along with the current percept to decide on an
action. For example, a self-driving car needs to know not just that it sees brake lights,
but also where other cars are that it can't currently see.
Diagram:​
```mermaid
graph TD
    A[Environment] -- Percept --> B(Sensors)
    B -- "What the world is like now" --> C(State)
    C -- "How the world evolves / What my actions do" --> C
    C --> D{Condition-Action Rules}
    D -- "What action I should do now" --> E(Actuators)
    E -- Action --> A
```


3. Goal-Based Agent

●​ Definition: These agents expand on model-based agents by having goal information.
They choose actions that will help them achieve a specific goal.
●​ Working: Instead of just reacting, they can think ahead and ask "What will happen if
I do action A? Will it get me closer to my goal?". This often involves search and
planning.
●​ Example: A GPS navigation system finds a route to a destination (the goal). It
considers different paths to see which one will achieve the goal.

4. Utility-Based Agent

●​ Definition: These agents are a more advanced form of goal-based agents. They use a
utility function to choose actions that maximize their expected "happiness" or
"utility," especially when there are multiple goals or when achieving a goal is
uncertain.
●​ Working: A utility function maps a state to a real number, which describes the
associated degree of happiness. If a goal can be achieved in multiple ways, the agent
will choose the path with the highest utility (e.g., the fastest or safest route, not just
any route).
●​ Example: A navigation system that not only finds a route but also considers traffic,
tolls, and travel time to suggest the best route according to the user's preferences.
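The utility idea can be sketched as a function that scores candidate routes; the weights and route data below are invented for illustration:

```python
# A minimal sketch of a utility-based choice: a utility function maps each
# candidate route (a state) to a real number, and the agent picks the
# maximum. The weights and route data are invented for illustration.

def utility(route, w_time=-1.0, w_toll=-0.5, w_traffic=-2.0):
    """Higher is better; negative weights penalize time, tolls, and traffic."""
    return (w_time * route["minutes"]
            + w_toll * route["toll"]
            + w_traffic * route["traffic_delay"])

routes = [
    {"name": "highway", "minutes": 30, "toll": 5, "traffic_delay": 10},
    {"name": "backroad", "minutes": 45, "toll": 0, "traffic_delay": 0},
]
best = max(routes, key=utility)
```

Here the highway is faster but its toll and traffic penalties outweigh the time saved, so the agent prefers the backroad; changing the weights changes the preference, which is how user preferences enter the picture.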

5. Learning Agent

●​ Definition: A learning agent can operate in unknown environments and become more
competent over time. It has a learning element that allows it to improve its
performance through experience.
●​ Working: A learning agent has four main components:
1.​ Learning Element: Responsible for making improvements.
2.​ Performance Element: Responsible for selecting external actions (this is the
"agent" part we've seen so far).
3.​ Critic: Provides feedback to the learning element on how the agent is doing
based on a fixed performance standard.
4.​ Problem Generator: Suggests new, exploratory actions to help the agent
learn.
●​ Example: A spam filter that learns to better classify emails as spam or not spam
based on user feedback.

AI Unit I: Part 3

Date: 19/09/25

1. Types of Intelligent Agents (IA)

While the term "intelligent agent" is used broadly, it's useful to distinguish it from the more
specific "rational agent."

1) Intelligent Agent

An intelligent agent is an autonomous entity that perceives its environment through sensors
and acts upon that environment through actuators to achieve certain goals. The term generally
implies some level of learning or adaptation.

2) Rational Agent

A rational agent is a specific type of intelligent agent that always acts to achieve the best
expected outcome, given its knowledge and the evidence from its percepts. It aims to
maximize its performance measure. All rational agents are intelligent, but not all intelligent
agents are perfectly rational (they might take actions that are good but not optimal).

●​ Key Idea: Rationality depends on the performance measure, prior knowledge, the
sequence of percepts, and the actions available. It is not about being omniscient or
clairvoyant; it's about making the best decision with the information available.

2. PEAS Representation

PEAS is a framework used to define the task environment for a rational agent. It helps in
designing and analyzing agent behavior.
●​ Performance Measure: How is the agent's success evaluated? (The goal/objective)
●​ Environment: Where does the agent operate?
●​ Actuators: How does the agent act on the environment?
●​ Sensors: How does the agent perceive the environment?

Example 1: Self-Driving Car (Automated Taxi) 🚕


●​ Performance Measure: Safety, speed, legality, comfort, maximizing profit.
●​ Environment: Roads, traffic, pedestrians, weather, road signs.
●​ Actuators: Steering wheel, accelerator, brakes, signal lights, horn, display
screen.
●​ Sensors: Cameras, LiDAR, radar, GPS, speedometer, odometer, engine sensors,
microphone.

Example 2: Robotic Vacuum Cleaner 🤖


●​ Performance Measure: Cleanliness (amount of dirt cleaned), battery life, time
taken, coverage area.
●​ Environment: Room, floor, furniture, obstacles, people, pets, dirt.
●​ Actuators: Wheels, brushes, vacuum suction motor, speakers.
●​ Sensors: Infrared sensors (for obstacles and cliffs), dirt sensors, camera, bumper
sensors, wheel encoders (to track distance).

3. Agent Environment

Definition

The environment is everything in the world that surrounds the agent. It is the context in
which the agent perceives, decides, and acts. The nature of the environment is crucial in
determining the appropriate agent design.

Types of Environments

Environments can be classified along several dimensions:

1. Fully Observable vs. Partially Observable

●​ Fully Observable: The agent's sensors give it access to the complete state of the
environment at each point in time. It knows everything it needs to know to make a
choice.
○​ Example: A chess game where the agent can see the entire board.
●​ Partially Observable: The agent cannot see the full state of the environment. It might
be due to noisy or inaccurate sensors or parts of the state being hidden.
○​ Example: A self-driving car cannot see what is around a corner. A
poker-playing agent cannot see the other players' cards.

2. Static vs. Dynamic

●​ Static: The environment does not change while the agent is deliberating or deciding
on its next action.
○​ Example: A crossword puzzle. The puzzle waits for the agent to make a move.
●​ Dynamic: The environment can change while the agent is thinking. The agent must
continuously observe and react.
○​ Example: A self-driving car in traffic. Other cars will move while the agent is
deciding what to do next.

3. Discrete vs. Continuous

●​ Discrete: The environment has a finite or countable number of distinct states,
percepts, and actions.
○​ Example: Chess has a finite number of possible board states and a set number
of moves.
●​ Continuous: The state, time, percepts, or actions can take on a range of continuous
values.
○​ Example: Driving a car involves continuous variables like speed, position, and
steering angle.

4. Deterministic vs. Stochastic

●​ Deterministic: The next state of the environment is completely determined by the
current state and the agent's action. There is no uncertainty.
○​ Example: A simple vacuum cleaner world where the "suck" action is
guaranteed to clean a square.
●​ Stochastic: The next state is not perfectly predictable and involves an element of
randomness.
○​ Example: A self-driving car's braking action might be affected by a wet road,
making the outcome uncertain. A dice game is stochastic.

5. Single-agent vs. Multi-agent

●​ Single-agent: Only one agent is operating in the environment.
○​ Example: Solving a puzzle.
●​ Multi-agent: Two or more agents are operating in the environment. This can be
competitive (like in chess) or cooperative (like in a team of delivery drones).
○​ Example: A game of soccer, online gaming, automated stock trading.

6. Episodic vs. Sequential


●​ Episodic: The agent's experience is divided into atomic, independent episodes. The
action in one episode does not affect the next.
○​ Example: An image classification system that classifies one image at a time.
The classification of one image has no bearing on the next.
●​ Sequential: The agent's current decision can affect all future decisions. The past
matters.
○​ Example: Playing chess or driving a car. Every move or action has long-term
consequences.

7. Known vs. Unknown

●​ Known: The agent knows the "rules" or "physics" of the environment. It
understands the outcomes of its actions.
○​ Example: An agent playing chess knows all the rules of the game.
●​ Unknown: The agent does not know the rules of the environment and must learn
them through exploration and experience.
○​ Example: A rover exploring a new planet must learn about the terrain and its
properties.

8. Accessible vs. Inaccessible

●​ Accessible: The agent can obtain the complete and accurate state of the
environment. This is another term for Fully Observable.
●​ Inaccessible: The agent cannot obtain the complete state. This is another term for
Partially Observable.

AI Unit I: Part 4

Date: 19/09/25

1. Problem-Solving Agents

Definition

A problem-solving agent is a type of goal-based agent that decides what to do by finding a
sequence of actions that leads to a desirable state (a goal). This process of finding the
sequence of actions is called search.

Key Terminology
●​ Search: The process of exploring the state space to find a path from the initial state to
a goal state.
●​ Search Space (or State Space): The set of all possible states reachable from the
initial state.
●​ State: A configuration of the world or problem. For example, in a navigation
problem, a state would be the agent's current city.
●​ Goal Test: A function that determines whether a given state is a goal state.
●​ Search Tree: A representation of the search process. The root is the initial state, and
branches represent actions leading to new states.
●​ Actions: The possible moves an agent can make in a given state.
●​ Transition Model: A function that describes what state results from performing a
given action in a given state. It defines the "physics" of the environment.
●​ Path Cost: A function that assigns a numeric cost to a path. Typically, it's the sum of
the costs of the individual actions along the path.
●​ Solution: A sequence of actions (a path) from the initial state to a goal state.
●​ Optimal Solution: A solution that has the lowest path cost among all possible
solutions.

Problem Formulation

Before an agent can solve a problem, the problem must be precisely formulated. This
involves defining five components:

1.​ Initial State: The state where the agent starts.
2.​ Possible Actions: The set of actions available to the agent.
3.​ Transition Model: A description of what each action does.
4.​ Goal Test: The condition that determines if a state is a goal.
5.​ Path Cost: A function that assigns a cost to a path.

Example Problems

1. 8-Puzzle Problem

The 8-puzzle consists of a 3x3 grid with 8 numbered tiles and one blank space. The goal is to
rearrange the tiles from a random initial configuration to a specified goal configuration by
sliding tiles into the blank space.

A visual representation of the 8-puzzle showing an initial state, a series of moves (sliding
tiles into the blank space), and the final goal state.

●​ States: A specific configuration of the 8 tiles on the board.
●​ Initial State: Any random configuration of the board.
●​ Actions: Move the blank space Left, Right, Up, or Down.
●​ Goal Test: Check if the current board configuration matches the goal configuration.
●​ Path Cost: Each move costs 1.
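The formulation above can be sketched in code. Representing a state as a 9-tuple with 0 for the blank is our own choice, not prescribed by the notes:

```python
# A minimal sketch of the 8-puzzle formulation: states, actions, transition
# model, and goal test. A state is a tuple of 9 entries read row by row,
# with 0 standing for the blank. The layout is an illustrative choice.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def actions(state):
    """Legal moves of the blank in the 3x3 grid."""
    row, col = divmod(state.index(0), 3)
    moves = []
    if row > 0: moves.append("up")
    if row < 2: moves.append("down")
    if col > 0: moves.append("left")
    if col < 2: moves.append("right")
    return moves

def result(state, action):
    """Transition model: the state after sliding the blank one square."""
    i = state.index(0)
    j = i + {"up": -3, "down": 3, "left": -1, "right": 1}[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):
    return state == GOAL
```

With each move costing 1, the path cost of a solution is simply its length.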

2. 8-Queens Problem

The goal of the 8-queens problem is to place eight chess queens on an 8x8 chessboard so that
no two queens threaten each other. This means no two queens can be on the same row,
column, or diagonal.

An 8x8 chessboard with 8 queens placed on it, where no two queens are in the same row,
column, or diagonal, representing a valid solution.

●​ States: Any arrangement of 0 to 8 queens on the board.
●​ Initial State: An empty chessboard.
●​ Actions: Add a queen to any empty square.
●​ Goal Test: Check if 8 queens are on the board and none of them are attacking another.
●​ Path Cost: Not applicable for this formulation, as we only care about the final
configuration.
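The goal test can be sketched using the common one-queen-per-column representation, a simplifying assumption on top of the formulation above:

```python
# A minimal sketch of the 8-queens goal test. We assume one queen per
# column and represent a state as a tuple where entry c is the row of the
# queen in column c (so no two queens can share a column by construction).

def no_attacks(rows):
    """True if no two queens share a row or a diagonal."""
    n = len(rows)
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            if rows[c1] == rows[c2]:                 # same row
                return False
            if abs(rows[c1] - rows[c2]) == c2 - c1:  # same diagonal
                return False
    return True

def goal_test_8q(rows):
    return len(rows) == 8 and no_attacks(rows)
```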

2. Types of Search Algorithms

Uninformed / Blind Search

These algorithms are given no information about the problem other than its definition. They
don't know if one state is "better" or "closer to the goal" than another.

1. Breadth-First Search (BFS)

●​ Strategy: Explores all nodes at the present depth level before moving on to the nodes
at the next depth level. It uses a Queue (First-In, First-Out).
●​ Properties:
○​ Complete? Yes.
○​ Optimal? Yes, if all step costs are equal.
○​ Time Complexity: O(b^d)
○​ Space Complexity: O(b^d)
○​ (where b is the branching factor, d is the depth of the shallowest solution)
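BFS can be sketched on a small hand-made graph: a FIFO queue of nodes, with the goal test applied when a node is first generated:

```python
# A minimal sketch of Breadth-First Search on an explicit graph.
# The FIFO queue guarantees the shallowest (fewest-edges) path is found.
# The graph is invented for illustration.
from collections import deque

def bfs(graph, start, goal):
    """Return the path with the fewest edges from start to goal, or None."""
    if start == goal:
        return [start]
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        node = frontier.popleft()          # FIFO: shallowest node first
        for child in graph.get(node, []):
            if child not in parent:        # skip already-reached states
                parent[child] = node
                if child == goal:          # goal test at generation time
                    path = [child]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                frontier.append(child)
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": ["F"]}
```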

2. Uniform Cost Search (UCS)

●​ Strategy: Expands the node with the lowest path cost (g(n)) from the start. It uses a
Priority Queue. It's like BFS but considers path cost instead of just depth.
●​ Properties:
○​ Complete? Yes.
○​ Optimal? Yes.
○​ Time Complexity: O(b^(1+⌊C*/ε⌋))
○​ Space Complexity: O(b^(1+⌊C*/ε⌋))
○​ (where C* is the cost of the optimal solution and ε is the minimum step cost)
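UCS can be sketched with Python's heapq as the priority queue ordered by path cost g(n); the weighted graph below is invented:

```python
# A minimal sketch of Uniform Cost Search: expand the frontier node with
# the lowest path cost g(n), using a priority queue. The weighted graph
# is invented for illustration.
import heapq

def ucs(graph, start, goal):
    """Return (cost, path) of the cheapest path, or None."""
    frontier = [(0, start, [start])]       # entries are (g(n), node, path)
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:                   # goal test on expansion => optimal
            return cost, path
        for child, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best_cost.get(child, float("inf")):
                best_cost[child] = new_cost
                heapq.heappush(frontier, (new_cost, child, path + [child]))
    return None

graph = {
    "A": [("B", 1), ("C", 5)],
    "B": [("C", 1), ("D", 10)],
    "C": [("D", 2)],
}
```

Note that the cheapest A→D path (cost 4, via B and C) is found even though the direct B→D edge is generated first, because the goal test happens only when a node is expanded.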

3. Depth-First Search (DFS)

●​ Strategy: Explores as far as possible along each branch before backtracking. It uses a
Stack (Last-In, First-Out).
●​ Properties:
○​ Complete? No (can get stuck in infinite loops).
○​ Optimal? No.
○​ Time Complexity: O(b^m)
○​ Space Complexity: O(bm) (i.e., b times m — linear, DFS's main advantage)
○​ (where m is the maximum depth of the state space)

4. Depth-Limited Search (DLS)

●​ Strategy: A variation of DFS that imposes a depth limit to prevent infinite loops.
●​ Properties:
○​ Complete? Yes, if the limit is greater than or equal to the solution depth.
○​ Optimal? No.
○​ Time Complexity: O(b^l)
○​ Space Complexity: O(bl)
○​ (where l is the depth limit)

5. Iterative Deepening Search (IDS)

●​ Strategy: Performs DLS repeatedly with increasing depth limits (0, 1, 2, ...). It
combines the benefits of BFS (completeness, optimality) and DFS (low memory
usage).
●​ Properties:
○​ Complete? Yes.
○​ Optimal? Yes, if all step costs are equal.
○​ Time Complexity: O(b^d)
○​ Space Complexity: O(bd) (linear, like DFS)
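IDS can be sketched as repeated depth-limited DFS with growing limits; the graph below is invented:

```python
# A minimal sketch of Iterative Deepening Search: run depth-limited DFS
# with limits 0, 1, 2, ... until a solution appears. The graph is invented.

def depth_limited(graph, node, goal, limit, path):
    """DFS from node that refuses to go deeper than `limit` more edges."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in graph.get(node, []):
        if child not in path:              # avoid cycles along the current path
            found = depth_limited(graph, child, goal, limit - 1, path + [child])
            if found:
                return found
    return None

def ids(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):     # limits 0, 1, 2, ...
        result = depth_limited(graph, start, goal, limit, [start])
        if result:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "E": ["D"]}
```

Shallow levels are re-explored on every iteration, but since most nodes of a tree live at the deepest level, the repeated work only multiplies the total by a small constant factor.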

6. Bidirectional Search

●​ Strategy: Runs two simultaneous searches—one forward from the initial state and
one backward from the goal state. The search stops when the two meet in the middle.
●​ Properties:
○​ Complete? Yes.
○​ Optimal? Yes.
○​ Time Complexity: O(b^(d/2))
○​ Space Complexity: O(b^(d/2))

(Note: The next two algorithms, Hill Climbing and Simulated Annealing, are often taught
alongside uninformed search but are actually local search algorithms. They don't explore a
search tree but instead try to improve a single current state.)

Local Search Algorithms

These algorithms operate on a single current state and try to improve it, rather than keeping
track of paths. They are not guaranteed to find a solution.

7. Hill Climbing

●​ Strategy: A simple greedy algorithm that continuously moves in the direction of
increasing value (uphill). It picks the best immediate neighbor and moves to it. The
process stops when it reaches a "peak" where no neighbor has a higher value.
●​ Problem: It can get stuck at local maxima, which are peaks that are not the highest
peak (the global maximum) in the state space.
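Hill climbing can be sketched on a small 1-D landscape with a deliberate local maximum (the landscape values below are invented):

```python
# A minimal sketch of hill climbing on an invented 1-D landscape with a
# local maximum at x=2 (value 4) and the global maximum at x=8 (value 9).

def value(x):
    return {0: 0, 1: 2, 2: 4, 3: 3, 4: 1, 5: 2, 6: 5, 7: 8, 8: 9, 9: 7}[x]

def hill_climb(start):
    current = start
    while True:
        neighbors = [n for n in (current - 1, current + 1) if 0 <= n <= 9]
        best = max(neighbors, key=value)
        if value(best) <= value(current):   # no uphill neighbor: a peak
            return current
        current = best

stuck = hill_climb(1)   # climbs to the local maximum at x=2 and stops
found = hill_climb(6)   # climbs to the global maximum at x=8
```

Starting at x=1 strands the agent on the lesser peak, which is exactly the local-maximum problem described above.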

8. Simulated Annealing

●​ Strategy: An improvement over hill climbing. Like hill climbing, it always accepts
"uphill" moves. However, to escape local maxima, it also accepts "downhill" moves
with a certain probability. This probability decreases over time (as the system
"cools"), making it less likely to accept bad moves as it gets closer to a solution.
●​ Analogy: Based on the process of annealing in metallurgy, where a material is heated
and then slowly cooled to strengthen it. The heat allows atoms to move around
randomly, and the slow cooling allows them to settle into a low-energy, stable crystal
structure.
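Simulated annealing can be sketched on the same style of landscape; the landscape and cooling schedule below are invented, while the acceptance rule exp(Δ/T) for downhill moves is the standard one described above:

```python
# A minimal sketch of simulated annealing on an invented 1-D landscape.
# Uphill moves are always accepted; downhill moves are accepted with
# probability exp(delta / T), and T decays as the system "cools".
import math
import random

def value(x):
    return {0: 0, 1: 2, 2: 4, 3: 3, 4: 1, 5: 2, 6: 5, 7: 8, 8: 9, 9: 7}[x]

def simulated_annealing(start, t0=10.0, cooling=0.95, steps=500, seed=0):
    rng = random.Random(seed)
    current, temp = start, t0
    for _ in range(steps):
        neighbor = min(9, max(0, current + rng.choice((-1, 1))))
        delta = value(neighbor) - value(current)
        # accept uphill moves always, downhill moves with prob exp(delta/T)
        if delta > 0 or rng.random() < math.exp(delta / max(temp, 1e-9)):
            current = neighbor
        temp *= cooling                     # cooling schedule
    return current

best = simulated_annealing(1)   # can escape the local maximum at x=2
```

Early on, the high temperature lets the search wander past the lesser peak at x=2; as the temperature drops, it behaves more and more like plain hill climbing.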

AI Unit I: Part 5

Date: 19/09/25

1. Informed (Heuristic) Search

Informed search algorithms use problem-specific knowledge beyond the problem's
definition itself. They have an idea of how "promising" or "close to the goal" a state is, which
helps them search more efficiently than blind, uninformed strategies.

This "problem-specific knowledge" is provided by a heuristic function, denoted as h(n).


●​ Heuristic Function h(n): An estimate of the cost of the cheapest path from the state
at node n to a goal state. A good heuristic helps guide the search toward the goal.

1. Best-First Search

Best-First Search is a general approach to informed search. It's a strategy of expanding the
most promising node chosen according to an evaluation function. The node with the lowest
evaluation is selected for expansion. A priority queue is used to store the nodes.

A common type of Best-First Search is Greedy Best-First Search.

Greedy Best-First Search

●​ Strategy: At each step, it tries to expand the node that is thought to be closest to the
goal. It evaluates nodes using only the heuristic function: f(n)=h(n).
●​ Behavior: It's "greedy" because it follows the single most promising-looking path,
ignoring the cost of the path so far.
●​ Example: In a route-finding problem, the heuristic h(n) might be the straight-line
distance from a city (node n) to the destination city. Greedy BFS would always
expand the city that is closest to the destination in a straight line, even if it leads down
a long and winding road.
●​ Properties:
○​ Complete? No. Can get stuck in loops.
○​ Optimal? No. It can find a solution quickly, but it might not be the shortest
path.
○​ Time Complexity: O(b^m), where b is the branching factor and m is the
maximum depth of the search space (worst case)
○​ Space Complexity: O(b^m) (worst case)

A map showing two paths from a start to a goal. Greedy Best-First Search chooses a path that
looks shorter initially (based on straight-line distance) but ends up being longer overall
compared to the optimal path.
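Greedy best-first search can be sketched with a priority queue ordered by h(n) alone (a minimal illustration; the graph, step costs, and heuristic values below are made up). On this graph it returns the path that looks closest at each step rather than the cheapest one:

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Expand the frontier node with the lowest h(n), ignoring the path
    cost so far (f(n) = h(n)). A visited set guards against loops.
    Returns a path as a list of states, or None if the goal is unreachable."""
    frontier = [(h(start), start, [start])]
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt, _cost in neighbors(state):  # the step cost is never consulted
            if nxt not in visited:
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

# Made-up graph: S-A-G costs 1 + 12 = 13, while S-B-G costs only 4 + 1 = 5.
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 12)], "B": [("G", 1)], "G": []}
h = {"S": 5, "A": 2, "B": 4, "G": 0}

print(greedy_best_first("S", "G", graph.get, h.get))
# ['S', 'A', 'G'] -- looks promising by h at every step, but costs 13, not 5
```

Because A has the lower heuristic value, the search commits to the S-A-G route and never reconsiders the cheaper S-B-G path, illustrating why greedy best-first search is not optimal.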

2. A* Search

A* Search is widely considered the most important informed search algorithm. It combines
the strengths of Uniform Cost Search (which considers the path cost from the start) and
Greedy Best-First Search (which uses a heuristic to estimate the cost to the goal).

●​ Strategy: A* Search evaluates nodes by combining the cost to reach the node, g(n),
and the estimated cost to get from the node to the goal, h(n).
●​ Evaluation Function: f(n)=g(n)+h(n)
○​ g(n): The actual cost of the path from the start node to node n.
○​ h(n): The estimated cost (heuristic) of the cheapest path from node n to the
goal node.
○​ f(n): The estimated cost of the cheapest solution through node n.

A* aims to find the path with the lowest f(n).
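The evaluation f(n) = g(n) + h(n) can be sketched as follows (an illustration under stated assumptions, not a definitive implementation; the graph and heuristic values are made up, with h chosen to be admissible):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Expand the node with the lowest f(n) = g(n) + h(n).
    `neighbors(state)` returns (successor, step_cost) pairs; `h` maps a
    state to its heuristic value. Returns (path, cost), or (None, inf)."""
    frontier = [(h(start), 0, start, [start])]  # entries: (f, g, state, path)
    best_g = {}                                 # cheapest g(n) found per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if best_g.get(state, float("inf")) <= g:
            continue                            # already reached more cheaply
        best_g[state] = g
        for nxt, cost in neighbors(state):
            g2 = g + cost                       # actual cost from the start
            heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Made-up graph where the path that looks closest (via A) is not cheapest.
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 12)], "B": [("G", 1)], "G": []}
h = {"S": 5, "A": 2, "B": 1, "G": 0}  # admissible: never overestimates

print(a_star("S", "G", graph.get, h.get))  # (['S', 'B', 'G'], 5)
```

Unlike a purely greedy search, A* notices that the route through A has already accumulated a large g(n), so the cheaper S-B-G path wins despite A's smaller heuristic value.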

Admissibility and Consistency of Heuristics

For A* to be optimal, the heuristic function h(n) must have certain properties:

1.​ Admissible Heuristic: An admissible heuristic never overestimates the actual cost
to reach the goal. For every node n, h(n)≤h∗(n), where h∗(n) is the true cost from n to
the goal.
○​ Analogy: The straight-line distance between two cities is an admissible
heuristic for a route-finding problem because the shortest road path can never
be shorter than the straight line.
2.​ Consistent Heuristic: A heuristic is consistent if, for every node n and every
successor n′ of n, the estimated cost of reaching the goal from n is no greater than the
step cost to get to n′ plus the estimated cost of reaching the goal from n′. This is a
stricter condition than admissibility.
○​ Formula: h(n)≤c(n,a,n′)+h(n′), where c(n,a,n′) is the cost of taking action a to
get from n to n′.
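Both conditions can be checked mechanically on a small explicit graph. In the sketch below, the edges, step costs, and heuristic values are made up for illustration, and true_cost plays the role of h*(n), the true cheapest cost from n to the goal:

```python
def is_admissible(h, true_cost):
    """Admissible: h(n) <= h*(n) for every node n."""
    return all(h[n] <= true_cost[n] for n in h)

def is_consistent(h, edges):
    """Consistent: h(n) <= c(n, a, n') + h(n') for every edge n -> n'."""
    return all(h[n] <= cost + h[n2] for n, n2, cost in edges)

# Made-up graph with goal G; edges are (n, n', step cost).
edges = [("S", "A", 1), ("S", "B", 4), ("A", "G", 12), ("B", "G", 1)]
true_cost = {"S": 5, "A": 12, "B": 1, "G": 0}  # h*(n) for each node

h1 = {"S": 5, "A": 2, "B": 1, "G": 0}
print(is_admissible(h1, true_cost))  # True
print(is_consistent(h1, edges))      # False: h(S)=5 > c(S,A) + h(A) = 1 + 2
```

Note that h1 is admissible yet inconsistent, which illustrates that consistency is indeed the stricter of the two conditions.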

Properties of A*

●​ Complete? Yes.
●​ Optimal? Yes, provided the heuristic h(n) is admissible (for graph search,
h(n) must also be consistent).
●​ Time Complexity: Exponential in the worst case; the number of nodes
expanded grows with the error of the heuristic, so a more accurate h(n)
expands fewer nodes.
●​ Space Complexity: It keeps all generated nodes in memory, so space complexity is a
major issue.

A map showing how A* search explores paths. It balances the actual distance traveled from
the start with the estimated straight-line distance to the goal, allowing it to find the true
shortest path, unlike Greedy BFS.
