Understanding Artificial Intelligence Basics

What is AI?

Artificial Intelligence (AI) is the science and engineering of building intelligent computer programs that can think, learn, and act.

The term AI describes machines that mimic cognitive functions humans associate with human minds, such as learning and problem solving.

It draws from computer science, mathematics, psychology, neuroscience, linguistics, philosophy, and other fields.

John McCarthy’s Definition (1956)

“AI is the science and engineering of making intelligent machines, especially intelligent
computer programs.”

Three Levels of AI
Narrow AI (Weak AI)

AI is said to be narrow when a machine can perform specific tasks as well as or better than a human.

➢​ Specialized for a single task or narrow range of tasks.


➢​ Simulates intelligence but has no true understanding.

Examples: Google Translate, Siri, Chess AI.

General AI

➢​ It has the ability to understand, learn, and apply knowledge the way humans do.
➢​ Can perform any intellectual task with the same level of accuracy as a human.
➢​ Learns, adapts, and applies knowledge across different domains.
➢​ Still theoretical: no true general AI exists yet.

Strong AI​

➢​ Intelligence is said to be strong when it can outperform humans at many tasks.


➢​ Surpasses human intelligence in all areas, including creativity and emotional
intelligence.
➢​ Hypothetical future stage.
Approaches to AI

1.​ Thinking Humanly – Modeling how humans think (Cognitive modeling).

If we are going to say that a given program thinks like a human, we must have some way of
determining how humans think. We need to get inside the actual workings of human minds.
There are three ways to do this: through introspection—trying to catch our own thoughts as
they go by; through psychological experiments—observing a person in action; and through
brain imaging—observing the brain in action. Once we have a sufficiently precise theory of
the mind, it becomes possible to express the theory as a computer program.

If the program’s input–output behavior matches corresponding human behavior, that is evidence that some of the program’s mechanisms could also be operating in humans.

2.​ Acting Humanly – Making machines that behave like humans (Turing Test approach).

The Turing Test measures a machine’s ability to exhibit human-like intelligence, i.e., how close the machine’s intelligence is to a human’s.

The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory
operational definition of intelligence. A computer passes the test if a human interrogator, after
posing some written questions, cannot tell whether the written responses come from a person
or from a computer.

3.​ Thinking Rationally – Using logic and reasoning to reach conclusions.

Thinking rationally is the “laws of thought” approach: using formal logic to draw correct conclusions from correct premises.


4.​ Acting Rationally – Choosing the best action to achieve goals (Rational Agent approach).

An agent is just something that acts (agent comes from the Latin agere, to do).

All computer programs do something, but computer agents are expected to do more: operate
autonomously, perceive their environment, persist over a prolonged time period, adapt to
change, and create and pursue goals.

A rational agent is one that acts so as to achieve the best outcome or, when there is
uncertainty, the best expected outcome.

5. History of AI (1943–present)

➢​ 1943 – Warren McCulloch & Walter Pitts proposed the first mathematical model of artificial
neurons.

➢​ 1950 – Alan Turing introduced the Turing Test to measure machine intelligence.

➢​ 1955–65 (Great Enthusiasm) – Birth of AI programs like Logic Theorist and General Problem
Solver. Optimism about rapid progress.

➢​ 1956 – Dartmouth Conference (John McCarthy coined the term “Artificial Intelligence”).

➢​ 1966–73 (Reality Dawns) – Limited computing power & data slowed progress. Machine
translation failures reduced funding.

➢​ This period (1966–1973) is also known as the First AI Winter.

➢​ The ALPAC Report (1966) criticized slow progress in machine translation, hardware limitations, and the unrealistic promises of early researchers.
➢​ 1969–85 (Adding Domain Knowledge) – Development of expert systems.

➢​ DENDRAL – An AI system for chemical analysis.

➢​ MYCIN – An AI system for diagnosing bacterial infections.

➢​ 1974–1980 – Revival via Expert Systems

Knowledge-based systems like MYCIN (medical diagnosis) and DENDRAL (chemical analysis)
showed practical benefits.
Rule-based reasoning became the focus.

➢​ 1980–1987 – Commercial AI Boom

Expert systems gained industrial adoption and AI companies attracted heavy investment.

➢​ 1987–1993 – Second AI Winter – Expert systems became costly to maintain, and the specialized AI hardware market (Lisp machines) collapsed. Many AI companies failed, and funding declined sharply.

➢​ 1997 – IBM Deep Blue Defeats Chess Champion

IBM’s Deep Blue defeated Garry Kasparov, marking a symbolic AI achievement in strategic
reasoning.

➢​ 2000s – AI in Everyday Life

AI spread into search engines, recommendation systems, and speech recognition.

➢​ 2011 – IBM Watson Wins Jeopardy!

Watson used natural language processing and vast knowledge databases to defeat human champions.

➢​ 2012 – Google Now Launched

An intelligent personal assistant providing proactive, context-aware information on mobile devices.

➢​ 2014 – Chatbot Eugene

Eugene Goostman, a chatbot simulating a 13-year-old Ukrainian boy, reportedly passed a version of
the Turing Test by convincing 33% of judges of its humanity.

➢​ 2014 – Amazon Echo Introduced

Amazon launched Echo with Alexa, a voice-controlled AI assistant for home automation, music, and
queries.

➢​ 2015–Present – Deep Learning and Generative AI Era

Breakthroughs in computer vision, speech recognition, and game-playing (e.g., AlphaGo, 2016).

Applications of AI
●​ Natural Language Processing – Chatbots, translators.
●​ Computer Vision – Face recognition, medical imaging.
●​ Robotics – Autonomous drones, industrial robots.
●​ Expert Systems – Medical/legal diagnosis.
●​ Gaming – Chess AI, strategy games.
●​ Recommendation Systems – Netflix, YouTube suggestions.
●​ Autonomous Vehicles – Self-driving cars.
●​ Speech Recognition – Siri, Alexa, Google Assistant.
Intelligent Agents

An agent is anything that perceives its environment through sensors and acts upon it through
actuators.

For example, a human agent has eyes, ears, and other organs as sensors, and hands, legs, the vocal tract, and so on as actuators.
A robotic agent might have cameras and infrared range finders for sensors and various motors for
actuators.

A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts
on the environment by displaying on the screen, writing files, and sending network packets.

Example: Self-driving car

●​ Sensors: Cameras, LIDAR, GPS


●​ Actuators: Steering, brakes, accelerator
●​ Environment: Roads with traffic lights, other vehicles, pedestrians, etc.

A rational agent chooses actions that maximize its performance measure, based on its perceptions
and knowledge.

That is, a rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.

Basic Terminologies
We use the term percept to refer to the agent’s perceptual inputs at any given instant.

For example: “Traffic light is red.”

An agent’s percept sequence is the complete history of everything the agent has ever perceived.

Agent Function : A mapping from percept sequences to actions. This function defines what an agent
should do in any given situation.

Environment : The external conditions in which an agent operates. Examples include roads for a
self-driving car, the chessboard for a chess program, or the internet for a search engine.
Example: A robotic vacuum cleaner perceives dirt (sensor), moves to clean it (actuator).

Agent Program: The software implementation that tells an agent how to behave in an environment to achieve its goals, i.e., the actual code or algorithm inside the agent.
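As a minimal sketch (the percept strings and the rule are invented for illustration, not taken from the text), an agent function can be written as a plain mapping from a percept sequence to an action:

```python
# Illustrative sketch: an agent function maps the percept sequence
# (the full history of percepts) to an action.

def agent_function(percept_sequence):
    """Return an action for the given history of percepts."""
    latest = percept_sequence[-1]   # most recent percept
    if latest == "Traffic light is red":
        return "Brake"
    return "Drive"

history = ["Traffic light is green", "Traffic light is red"]
print(agent_function(history))  # -> Brake
```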

How an Agent Operates in an Environment?

1. Perceiving the Environment via Sensors

An agent constantly receives information about its surroundings. These inputs, called
percepts, come through the agent’s sensors, such as cameras for robots, microphones for
voice assistants, or keyboard input for software agents .

2. Building a Percept Sequence

Instead of reacting to just the current input, an agent often keeps a record—a percept
sequence—of its entire history of sensor inputs.
This history helps the agent make decisions

3. Deciding via the Agent Function

The agent function is an abstract mapping that relates each possible percept sequence to an action. This function defines which action the agent should take next.

4. Implementing via Agent Program


The agent function is brought to life by the agent program—actual code running on physical
hardware (the architecture). Together, the program and the hardware define the complete
agent.
The agent processes incoming percepts, feeds them to the program, and executes the
resulting action via actuators .

5. Acting Back into the Environment

The selected action is performed via actuators, which can manipulate the environment—like
moving a robot’s arm, sending a network message, or adjusting a thermostat . This may alter
the environment’s state or directly affect what the agent or others perceive next.

After acting, the environment may change. The agent then receives a new percept (or
percepts), adding to the percept sequence, and the cycle repeats.
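The five-step cycle above can be sketched as a short loop; the percept values and actions here are invented stand-ins, not a real sensor API:

```python
# Illustrative perceive-decide-act loop. Each iteration: sense (1),
# extend the percept sequence (2), decide via the agent function (3-4),
# and record the action that would go to the actuators (5).

def run_agent(agent_function, percept_stream):
    percept_sequence = []
    actions = []
    for percept in percept_stream:                 # 1. sense
        percept_sequence.append(percept)           # 2. build history
        action = agent_function(percept_sequence)  # 3-4. decide
        actions.append(action)                     # 5. act
    return actions

cycle = run_agent(lambda seq: "Brake" if seq[-1] == "red" else "Drive",
                  ["green", "red", "green"])
print(cycle)  # -> ['Drive', 'Brake', 'Drive']
```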

●​ Example: Vacuum Cleaner World – a robot vacuum perceives its location and whether that square is dirty, and acts by sucking or moving.


Omniscience vs. Rationality
Omniscience

Omniscience means knowing the actual outcome of every possible action in advance; that is, knowing everything.

Omniscience is an ideal concept.

A truly omniscient agent would always choose the perfect action because it knows exactly
what will happen.

Not realistic in the real world — agents can only work with the information they have, not
with future events already known.
Rationality: Selecting the best possible action based on the percept sequence, the agent’s knowledge of the environment, and the available actions (the standard practical AI systems are held to).

A rational agent does not need to be omniscient — it only needs to choose the action that
maximizes expected performance given its current information.
Rationality takes uncertainty into account and makes decisions that are “best in expectation,”
not necessarily perfect in outcome.
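As a toy numeric illustration of “best in expectation” (all probabilities and scores are invented), a rational agent compares actions by expected performance:

```python
# Toy example: each action has (probability, score) outcomes; a rational
# agent picks the action with the highest *expected* score, even though
# the chosen action is not guaranteed to turn out best every time.

def expected_value(outcomes):
    return sum(p * score for p, score in outcomes)

actions = {
    "safe_route": [(1.0, 70)],             # certain, modest performance
    "fast_route": [(0.8, 90), (0.2, 0)],   # usually better, sometimes fails
}
best = max(actions, key=lambda a: expected_value(actions[a]))
print(best)  # -> fast_route (expected 72 vs. 70)
```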

Task Environment

The task environment is the external setting in which an agent operates, defined by
everything that can affect its performance.

That is, before implementing an agent, we should specify in advance the environment in which it will work and what action should be taken for each percept received from that environment.
➢​ It is the starting point for designing an AI system.
➢​ It determines what the agent should perceive (what sensors are needed)
➢​ It defines what actions are possible (what actuators are needed)

➢​ It guides how the agent should behave to be considered rational

PEAS Framework
It introduces four terms to describe a task environment:

●​ P – Performance Measure: Criteria for judging success (e.g., speed, accuracy, cost).
●​ E – Environment: External surroundings affecting the agent (e.g., roads for a taxi,
opponents for a chess agent).
●​ A – Actuators: Devices/actions the agent uses to affect the environment (e.g., wheels,
arms, display).
●​ S – Sensors: Devices the agent uses to gather information about the environment
(e.g., camera, GPS, microphone).

Example 1 – Chess Agent

Performance Measure: Win the game.

Environment: Opponent, chessboard, rules of chess.

Actuators: Move pieces.

Sensors: Board position (internal model or camera).

Example 2 – Fully Automated Taxi

Performance Measure: Safe, fast, legal, comfortable trip; maximize profit.

Environment: Roads, traffic, pedestrians, weather, passengers, traffic laws.

Actuators: Steering, accelerator, brake, display, horn.


Sensors: Cameras, GPS, radar, speedometer, microphones.
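A PEAS description is just structured data, so it can be recorded directly; this sketch uses a namedtuple (an illustrative convention, not a standard AI library):

```python
# Illustrative: capture the taxi's PEAS description as plain data.
from collections import namedtuple

PEAS = namedtuple("PEAS", ["performance", "environment", "actuators", "sensors"])

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable", "maximize profit"],
    environment=["roads", "traffic", "pedestrians", "weather", "passengers"],
    actuators=["steering", "accelerator", "brake", "display", "horn"],
    sensors=["cameras", "GPS", "radar", "speedometer", "microphones"],
)
print(taxi.sensors)  # -> ['cameras', 'GPS', 'radar', 'speedometer', 'microphones']
```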

More examples are given below.

Types of Environments in AI
In Artificial Intelligence, an environment is everything outside the agent that it interacts with.​
The performance of an agent depends heavily on the nature of its environment.​
1. Fully Observable vs. Partially Observable
Fully Observable: At any given time, the agent’s sensors provide complete and accurate information
about the state of the environment.

A task environment is effectively fully observable if the sensors detect all aspects that are
relevant to the choice of action.
●​ In fully observable environments, the decision-making process is simpler because no
hidden variables exist — the agent can make decisions based solely on current percepts.
●​ Fully observable environments are convenient because the agent need
not maintain any internal state to keep track of the world.
Partially Observable: The agent’s sensors provide incomplete or noisy data about the environment’s
state.

For example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in
other squares, and an automated taxi cannot see what other drivers are thinking.

●​ In partially observable environments, the agent must maintain an internal state or model
to keep track of unseen or uncertain information.

Examples

●​ Fully Observable: Chess, Sudoku (all positions are visible)


●​ Partially Observable: Poker (opponents’ cards hidden), Driving in fog, Medical diagnosis
(incomplete test results)

2. Single-Agent vs. Multi-Agent


Single-Agent: Only one agent is operating in the environment; other entities are not agents.

Multi-Agent: More than one agent exists in the environment, and they can be cooperative or
competitive.
In multi-agent settings, agents may work together toward a common goal or compete for resources.

For example, in chess, the opponent entity B is trying to maximize its performance measure,
which, by the rules of chess, minimizes agent A’s performance measure. Thus, chess is a
competitive multiagent environment.

In the taxi-driving environment, on the other hand, avoiding collisions maximizes the
performance measure of all agents, so it is a partially cooperative multiagent environment.

●​ Environments with competitive multi-agents often require game theory to model


interactions.

Examples

●​ Single-Agent: Solving a maze alone, Personal assistant software


●​ Multi-Agent Cooperative: Robots collaborating in a warehouse
●​ Multi-Agent Competitive: Soccer, Online multiplayer games, Business market competition

3. Discrete vs. Continuous


Discrete: A finite number of distinct states, actions, and percepts. Discrete environments are easier to simulate and solve with search algorithms.

For example, the chess environment has a finite number of distinct states (excluding the
clock). Chess also has a discrete set of percepts and actions.

Continuous: Infinite possible states, actions, and percepts exist; changes occur in a smooth, measurable way. Continuous environments require calculus, control theory, and differential equations.

For example, taxi driving is a continuous-state and continuous-time problem: the speed and
location of the taxi and of the other vehicles sweep through a range of continuous values and
do so smoothly over time.

More Examples

●​ Discrete: Chess (finite moves), Turn-based games, Digital circuit control


●​ Continuous: Robotic arm control, Autonomous car navigation, Flight control

4. Deterministic vs. Stochastic


Deterministic: The next state of the environment is completely determined by the current state and
the agent’s action.

Deterministic environments are easier to model mathematically since there’s no uncertainty

Stochastic: The next state involves uncertainty — randomness or unpredictability is part of the
environment.

Stochastic environments require probabilistic reasoning, prediction, and planning under uncertainty.

Examples

●​ Deterministic: Solving a mathematical equation, Tic-tac-toe


●​ Stochastic: Weather forecasting, Dice-based board games, Robot movement on slippery
surfaces

5. Static vs. Dynamic


Static: The environment does not change while the agent is deliberating (thinking about the next
move).

Static environments allow agents to take as much time as needed to decide.

If the environment can change while an agent is deliberating, then we say the environment is
dynamic for that agent; otherwise, it is static. Static environments are easy to deal with
because the agent need not keep looking at the world while it is deciding on an action, nor
need it worry about the passage of time.
Dynamic: The environment can change during the agent’s thinking or execution of an action.
Dynamic environments, on the other hand, are continuously asking the agent what it wants to
do; if it hasn’t decided yet, that counts as deciding to do nothing.
Dynamic environments require agents to act quickly, often with real-time responses.

There’s also a semi-dynamic category, where the environment itself doesn’t change but the agent’s
performance score changes over time (e.g., timed quizzes).

Examples

●​ Static: Crossword puzzles, Offline scheduling problems


●​ Dynamic: Stock trading, Autonomous driving, Air traffic control
●​ Semi-dynamic: Timed chess (environment static, but time is running out)

6. Episodic vs. Sequential


Episodic: The agent’s experience is divided into discrete episodes. Each episode consists of the agent
perceiving and then performing a single action. The choice of action in one episode does not depend
on previous episodes.

In an episodic task environment, the agent’s experience is divided into atomic episodes. In each
episode the agent receives a percept and then performs a single action. Crucially, the next episode
does not depend on the actions taken in previous episodes.

●​ Episodic environments are simpler since each episode is independent.

Sequential: The current decision can affect future decisions. The agent must consider the sequence of
actions and their long-term consequences.

●​ Sequential environments require planning and strategy because actions have future
effects.

Examples

●​ Episodic: Image recognition (classify each image independently), Spam email filtering
●​ Sequential: Chess, Self-driving cars, Route planning

7. Known and Unknown


It refers to the knowledge the agent has about its environment’s rules and outcomes — not whether
the environment itself is visible.

Known Environment
The agent knows the laws of the environment — i.e., the rules for how states change when actions are
taken.

●​ The outcomes of each action are predictable, even if the environment is partially
observable.
●​ The agent can plan using a model of the environment.
●​ “Known” here means the agent knows the model — transition rules, possible states, and
outcomes are given.
●​ The environment can still be partially observable, but the agent understands how it works
internally.
●​ Enables use of model-based reasoning and planning.

Examples

●​ Chess: The rules and effects of each move are known.


●​ Physics-based simulations: Laws of motion are predefined.
●​ Solving a maze when you have its complete map.

Unknown Environment
●​ The agent does not know the rules of the environment beforehand.
●​ The results of actions must be learned through trial and error or observation.
●​ Requires learning to build an internal model.
●​ The agent has to explore, gather data, and gradually learn how the environment behaves.
●​ Often solved with reinforcement learning, exploration strategies, or model learning.
●​ Decision-making initially has uncertainty, which reduces as the agent learns.

Examples

●​ Playing a brand-new game without knowing its rules.


●​ Robot navigating an unfamiliar building.
●​ Learning to control a drone without a prior physics model of its aerodynamics.
An agent is defined as anything that can perceive its environment through sensors and act upon it
through actuators.

An agent’s behavior is described by its agent function, which maps percepts to actions.

The implementation of this mapping is called the agent program.


Simple reflex agents

The simplest kind of agent is the simple reflex agent.


These agents select actions on the basis of the current percept, ignoring the rest of the percept
history.

For example, the vacuum agent’s agent function is based only on the current location and on whether that location contains dirt.
●​ Implemented using condition–action rules:​
IF condition THEN action.
●​ Work only in fully observable environments.
●​ No internal state or memory — cannot handle hidden information.
●​ Very fast because they do not involve complex reasoning.

Advantages

●​ Easy to design for well-defined, static environments.


●​ Quick reaction time.

Disadvantages

●​ Fail in partially observable or changing environments.


●​ Cannot adapt to new situations.

Example

●​ Vacuum-cleaner agent: “If dirt is detected, then suck.”
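The vacuum agent’s condition–action rules can be sketched directly (using the usual two-square world with locations “A” and “B”; this is an illustrative version, not verbatim textbook code):

```python
# Simple reflex agent for the two-square vacuum world: it looks only at
# the current percept (location, status), never at the percept history.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":    # IF dirt THEN suck
        return "Suck"
    elif location == "A":    # clean here, so move to the other square
        return "Right"
    else:
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("A", "Clean")))  # -> Right
```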

Model-based reflex agents

The most effective way to handle partial observability is for the agent to keep track of the
part of the world it can’t see now. That is, the agent should maintain some sort of internal
state that depends on the percept history and thereby reflects at least some of the unobserved
aspects of the current state.
●​ Maintain an internal state (a model) to keep track of aspects of the world that are not
immediately observable.
●​ Update the internal model based on percepts and knowledge of how the world works.
●​ Can handle partially observable environments.
●​ Requires a model of the environment’s dynamics — how actions affect states and how
the world evolves.

Advantages

●​ Can make better decisions when information is incomplete.


●​ More flexible than simple reflex agents.
Disadvantages

●​ Requires more computation and memory to maintain the model.

Example

●​ Robot navigation in a building where some corridors are not directly visible.

The structure of the model-based reflex agent with internal state combines the current percept with the old internal state to generate an updated description of the current state, based on the agent’s model of how the world works.
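A minimal sketch of this idea, assuming the same two-square vacuum world (the `state` dict stands in for the internal model; all names are invented):

```python
# Model-based reflex agent: a closed-over `state` dict remembers what has
# been observed about squares the agent cannot currently see.

def model_based_vacuum_agent():
    state = {"A": "Unknown", "B": "Unknown"}   # internal model of the world

    def agent(percept):
        location, status = percept
        state[location] = status               # update model from the percept
        if status == "Dirty":
            return "Suck"
        other = "B" if location == "A" else "A"
        if state[other] == "Clean":            # model: everything already clean
            return "NoOp"
        return "Right" if other == "B" else "Left"

    return agent

agent = model_based_vacuum_agent()
print(agent(("A", "Clean")))  # -> Right (B is still unknown in the model)
print(agent(("B", "Clean")))  # -> NoOp  (model now knows both squares clean)
```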

Goal-Based Agents
●​ Make decisions by considering the desirable states they want to reach (goals).
●​ Choose actions that achieve these goals, often using search and planning.
●​ Goals provide a future-oriented perspective, unlike reflex agents that act only on the
present percept.
●​ Can compare different possible action sequences to find one that achieves the goal.

Advantages

●​ Flexible — can adapt to new goals without redesigning the agent.


●​ Can handle complex decision-making.

Disadvantages

●​ More computationally expensive — requires reasoning and planning.


●​ Needs accurate models for effective planning.

Example

●​ Route-finding agent: for example, find the shortest path from A to B.
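A goal-based route finder can be sketched as search over action sequences; here, breadth-first search on a toy road map (the map data is invented):

```python
# Goal-based agent sketch: search for a sequence of moves that reaches
# the goal state, instead of reacting to the current percept alone.
from collections import deque

roads = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}  # toy map

def shortest_path(start, goal):
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path                        # goal test succeeded
        for nxt in roads[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                                # goal unreachable

print(shortest_path("A", "D"))  # -> ['A', 'B', 'D']
```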

Utility-Based Agents
Definition

●​ Extend goal-based agents by using a utility function to measure the desirability of


different states.
●​ Select the action that maximizes expected utility.
●​ Useful when multiple goals exist or when goals conflict.
●​ Utility allows trade-offs between different possible outcomes.
●​ Can handle uncertainty by maximizing expected utility.
Advantages

●​ Provides a rational basis for decision-making under uncertainty.


●​ Can balance between speed, safety, cost, and other factors.

Disadvantages

●​ Requires defining a correct and complete utility function.


●​ More complex to implement.

Example

●​ Autonomous car deciding between fastest vs. safest route.
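That trade-off can be sketched with an explicit utility function; the routes, times, risks, and weights below are invented for illustration:

```python
# Utility-based agent sketch: score each route by a utility that trades
# off travel time against risk, then pick the maximum-utility action.

routes = {
    "fastest": {"time_min": 20, "risk": 0.30},
    "safest":  {"time_min": 35, "risk": 0.05},
}

def utility(route, time_weight=1.0, risk_weight=100.0):
    # higher utility = better; both time and risk count against a route
    return -time_weight * route["time_min"] - risk_weight * route["risk"]

best = max(routes, key=lambda name: utility(routes[name]))
print(best)  # -> safest (utility about -40 beats the fastest route's -50)
```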

Learning Agents
●​ Improve their performance over time through experience.
●​ Can work in unknown environments by learning the environment model and improving
decision strategies.

A learning agent has four components:

1.​ Learning Element – Responsible for making improvements based on feedback.


2.​ Performance Element – Chooses actions based on current knowledge.
3.​ Critic – Provides feedback on the agent’s performance.
4.​ Problem Generator – Suggests new experiences to explore for learning.

●​ Enable agents to adapt to changing environments.


●​ Can operate with incomplete prior knowledge.
Advantages

●​ Improve over time without manual reprogramming.


●​ Capable of handling unknown or dynamic situations.

Disadvantages

●​ Requires exploration, which may involve making mistakes.


●​ The learning process can be slow or resource-intensive.

Example

●​ Game-playing agent that learns better strategies by playing many games.
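The four components can be sketched as a toy learning agent; the method names map onto the components above, and the actions and rewards are invented:

```python
# Toy learning agent: the performance element exploits current knowledge,
# the problem generator explores, the critic gives feedback, and the
# learning element updates the action-value estimates.
import random

class LearningAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}   # current knowledge
        self.counts = {a: 0 for a in actions}

    def performance_element(self):                # choose best-known action
        return max(self.values, key=self.values.get)

    def problem_generator(self):                  # suggest something to try
        return random.choice(list(self.values))

    def learning_element(self, action, feedback): # improve from feedback
        self.counts[action] += 1
        n = self.counts[action]
        self.values[action] += (feedback - self.values[action]) / n

def critic(action):                               # invented reward signal
    return 1.0 if action == "good_move" else 0.0

agent = LearningAgent(["good_move", "bad_move"])
for _ in range(20):
    tried = agent.problem_generator()             # explore
    agent.learning_element(tried, critic(tried))  # learn from the critic
print(agent.performance_element())  # -> good_move
```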
