Cognitive Robotics
01. Introduction
AbdElMoniem Bayoumi, PhD
Fall 2024
Acknowledgment
▪ These slides have been created by Wolfram
Burgard, Dieter Fox, Cyrill Stachniss and
Maren Bennewitz
A little bit about me
▪ Current Affiliation:
▪ Cairo University, Faculty of Engineering
(Computer Engineering Dept.)
▪ Ph.D., University of Bonn, Germany
▪ Research Interests:
▪ Deep learning
▪ Navigation
Research Interests
LoopNet: Where to Focus? Detecting Loop Closures in Dynamic Scenes
H. Osman, N. Darwish, and A. Bayoumi
In: IEEE Robotics and Automation Letters (RA-L), 2022, presented in ICRA 2022.
Research Interests
LMOT: Efficient Light-Weight Detection and Tracking in Crowds
R. Mostafa, H. Baraka, and A. Bayoumi
In: IEEE Access, 2022.
Administrivia
▪ Contacts:
▪ abayoumi@[Link]
▪ Grading Policy:
▪ Project: 20%
▪ Assignments: 10%
▪ Midterm: 10%
▪ Final Exam: 60% (written & closed book
exam)
▪ Slides:
Content of This Course
▪ Probabilities and Bayes
▪ The Kalman Filter
▪ The Extended Kalman Filter
▪ Probabilistic Motion Models
▪ Probabilistic Sensor Models
▪ Discrete Filters
▪ The Particle Filter, Monte Carlo Localization
▪ Mapping with Known Poses
▪ SLAM: Simultaneous Localization and Mapping
▪ SLAM: Landmark-based FastSLAM
▪ SLAM: Grid-based FastSLAM
▪ Path Planning and Collision Avoidance
Traditional Robotics
▪ Controlled
environment
▪ Well understood
▪ Millions of
robots in mass
production
▪ Not covered in
this lecture
New Application Domains
▪ Flexible automation
▪ Mining, agriculture,…
▪ Logistics
▪ Household
▪ Medicine
▪ Dangerous environments
(Space, under water,
nuclear power plants, …)
▪ Toys, entertainment
Cognitive Robotics
▪ Have cognitive functions normally associated with
people or animals
▪ Interpret various kinds of sensor data
▪ Act purposefully and autonomously towards
achieving goals
▪ Operate in dynamic real-life environments
▪ Exhibit a high degree of robustness in coping with
unpredictable situations
▪ Key challenges
▪ Systematic treatment of uncertainties
▪ Perceiving the environmental state
▪ Coordination of teams of collaborative robots in dynamic
environments
▪ ….
Tour Guide Robot Minerva
(CMU + Univ. Bonn, 1998)
Autonomous Vacuum Cleaners
newer, improved versions with mapping capabilities
and better cleaning strategies
Autonomous Lawn Mowers
not many cognitive capabilities required
DARPA Grand Challenge 2005
[Courtesy of Sebastian Thrun]
The Google Self-Driving Car
[Courtesy of Google]
The Google Self-Driving Car
[Courtesy of Google]
The Google Self-Driving Car
[Courtesy of Google]
Driving in the Google Car
[Courtesy of Google]
Obelix Experiment: Uni Freiburg
[Courtesy of AIS, Freiburg]
Autonomous Quadrotor
Navigation
[Courtesy of AIS, Freiburg]
Stair Climbing (HRL)
Interaction, Object Grasping
[Courtesy of T. Asfour et al.]
Towel Folding
[Courtesy of P. Abbeel et al.]
Cognitive Robot Cosero
AIS Lab Uni Bonn (Sven Behnke)
▪ Manipulation tasks in domestic
environments
▪ Human-robot interaction
Probabilistic Robotics
▪ Authors:
▪ Sebastian Thrun
▪ Wolfram Burgard
▪ Dieter Fox
▪ MIT Press, 2005
[Link]
Probabilistic Robotics
Key Idea
▪ Explicit representation of uncertainty
▪ Using the calculus of probability theory
▪ Perception = state estimation
▪ Action = utility optimization
Axioms of Probability Theory
▪ P(A) denotes the probability that proposition
A is true
▪ 0 ≤ P(A) ≤ 1
▪ P(True) = 1, P(False) = 0
▪ P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
A Closer Look at Axiom 3
𝑃(𝐴∪𝐵)=𝑃(𝐴)+𝑃(𝐵)−𝑃(𝐴∩𝐵)
[Venn diagram: regions A and B inside the universe True; the overlap A ∩ B is counted only once]
Using the Axioms
P(A ∪ ¬A) = P(A) + P(¬A) − P(A ∩ ¬A)
P(True) = P(A) + P(¬A) − P(False)
1 = P(A) + P(¬A) − 0
P(A) = 1 − P(¬A)
Discrete Random Variables
▪ X denotes a random variable
▪ X can take on a countable number of values
in {x1, x2, …, xn}
▪ P(X=xi) or P(xi) is the probability that the
random variable X takes on value xi
▪ P(.) is called the probability mass function
▪ For example:
P(Room) = ⟨0.7, 0.2, 0.08, 0.02⟩
for the values ⟨office, lecture hall, seminar room, kitchen⟩
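The PMF above can be sketched in Python as a dictionary mapping values to probabilities (a minimal sketch; the dict-based representation is an illustration, not part of the lecture):

```python
# P(Room) from the slide, represented as a dict-based PMF.
room_pmf = {
    "office": 0.7,
    "lecture hall": 0.2,
    "seminar room": 0.08,
    "kitchen": 0.02,
}

def prob(pmf, x):
    """Return P(X = x); values outside the support have probability 0."""
    return pmf.get(x, 0.0)

# A valid PMF assigns non-negative mass that sums to one.
assert all(p >= 0 for p in room_pmf.values())
assert abs(sum(room_pmf.values()) - 1.0) < 1e-9
```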
Continuous Random Variables
▪ X takes on values in the continuum
▪ p(X=x) or p(x) is a probability density
function
P(x ∈ [a, b]) = ∫_a^b p(x) dx
▪ For example:
[Plot of an example density p(x)]
The Probability Sums up to One
Discrete case Continuous case
Σ_x P(x) = 1          ∫ p(x) dx = 1
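The continuous case can be checked numerically. Below is a small sketch (the uniform density on [0, 2] is an assumed example) that approximates the integral with a midpoint Riemann sum:

```python
# Check numerically that a density integrates to 1, mirroring the discrete sum.
# Example density (assumed): uniform, p(x) = 0.5 on [0, 2], else 0.
def p(x):
    return 0.5 if 0.0 <= x <= 2.0 else 0.0

def integrate(f, a, b, n=100_000):
    """Midpoint Riemann-sum approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

total = integrate(p, -1.0, 3.0)     # ~ 1.0: the density integrates to one
p_ab = integrate(p, 0.0, 1.0)       # P(x in [0, 1]) = 0.5
```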
Joint and Conditional Probability
▪ P(X=x and Y=y) = P(x,y)
▪ If X and Y are independent then
P(x,y) = P(x) P(y)
▪ P(x | y) is the probability of x given y
P(x | y) = P(x,y) / P(y) conditional probability
P(x,y) = P(x | y) P(y) product rule
▪ If X and Y are independent then
P(x | y) = P(x)
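The product rule can be verified on a small joint table (the weather numbers below are an assumed toy example, not from the lecture):

```python
# Product rule P(x, y) = P(x | y) P(y) checked on a toy joint distribution.
joint = {("sun", "warm"): 0.4, ("sun", "cold"): 0.1,
         ("rain", "warm"): 0.1, ("rain", "cold"): 0.4}

def p_y(y):
    """Marginal P(y), obtained by summing the joint over x."""
    return sum(p for (_, yy), p in joint.items() if yy == y)

def p_x_given_y(x, y):
    """Conditional probability P(x | y) = P(x, y) / P(y)."""
    return joint[(x, y)] / p_y(y)

# Recover the joint entry from conditional times marginal (product rule).
recovered = p_x_given_y("sun", "warm") * p_y("warm")  # = joint[("sun", "warm")]
```

Note that in this table X and Y are not independent: P(sun | warm) = 0.8 differs from the marginal P(sun) = 0.5, so P(x, y) ≠ P(x) P(y) here.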
Law of Total Probability
Discrete case Continuous case
P(x) = Σ_y P(x | y) P(y)          p(x) = ∫ p(x | y) p(y) dy
Marginalization
Discrete case Continuous case
P(x) = Σ_y P(x, y)          p(x) = ∫ p(x, y) dy
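Both identities can be sketched in Python on a discrete example (the toy numbers are assumptions for illustration):

```python
# Law of total probability and marginalization, discrete case:
# P(x) = sum_y P(x | y) P(y)   and   P(x) = sum_y P(x, y).
p_y = {"warm": 0.5, "cold": 0.5}
p_x_given_y = {("sun", "warm"): 0.8, ("sun", "cold"): 0.2,
               ("rain", "warm"): 0.2, ("rain", "cold"): 0.8}

# Total probability: condition on y, then sum y out.
p_sun = sum(p_x_given_y[("sun", y)] * p_y[y] for y in p_y)

# Marginalizing the joint P(x, y) = P(x | y) P(y) gives the same value.
joint = {(x, y): p_x_given_y[(x, y)] * p_y[y] for (x, y) in p_x_given_y}
p_sun_marginal = sum(p for (x, _), p in joint.items() if x == "sun")
```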
Bayes’ Rule
P(x, y) = P(x | y) P(y) = P(y | x) P(x)

⇒ P(x | y) = P(y | x) P(x) / P(y)

posterior = likelihood × prior / evidence
Normalization
P(x | y) = P(y | x) P(x) / P(y)
         = η P(y | x) P(x)

η = P(y)⁻¹ = 1 / Σ_z P(y | z) P(z)
Algorithm:
∀x: aux_{x|y} = P(y | x) P(x)    // compute unnormalized posterior
η = 1 / Σ_x aux_{x|y}            // compute normalization factor
∀x: P(x | y) = η aux_{x|y}       // normalize posterior
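The normalization algorithm above translates directly into Python (a minimal sketch; the dict-based representation is an illustration). Note that the normalizer is computed from the unnormalized posterior, so P(y) never has to be evaluated directly:

```python
# Bayes' rule via normalization: aux_{x|y} = P(y|x) P(x), then divide by the sum.
def bayes_posterior(likelihood, prior):
    """likelihood: dict x -> P(y | x); prior: dict x -> P(x).
    Returns dict x -> P(x | y)."""
    aux = {x: likelihood[x] * prior[x] for x in prior}  # unnormalized posterior
    eta = 1.0 / sum(aux.values())                       # normalization factor
    return {x: eta * aux[x] for x in aux}               # normalized posterior

# Door example from later in the lecture:
# P(z | open) = 0.6, P(z | not open) = 0.3, uniform prior.
posterior = bayes_posterior({"open": 0.6, "not open": 0.3},
                            {"open": 0.5, "not open": 0.5})
# posterior["open"] ≈ 0.67
```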
Bayes’ Rule
with Background Knowledge
P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)
Conditional Independence
P(x, y | z) = P(x | z) P(y | z)

Equivalent to
P(x | z) = P(x | z, y)    (when z is known, y does not tell us anything about x)
and
P(y | z) = P(y | z, x)    (when z is known, x does not tell us anything about y)
Simple Example of State Estimation
▪ Suppose a robot obtains a
measurement z
▪ What is P(open|z)?
Causal vs. Diagnostic Reasoning
▪ P(open|z) is diagnostic
▪ P(z|open) is causal
▪ Often causal knowledge is easier to
obtain (count frequencies!)
▪ Bayes’ rule allows us to use causal
knowledge:
P(open | z) = P(z | open) P(open) / P(z)
Example
▪ P(z | open) = 0.6    P(z | ¬open) = 0.3
▪ P(open) = P(¬open) = 0.5

P(open | z) = P(z | open) P(open) / [P(z | open) P(open) + P(z | ¬open) P(¬open)]
            = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 0.3 / (0.3 + 0.15) = 0.67
▪ z increases the probability that the door
is open
Combining Evidence
▪ Suppose our robot obtains another
observation z2
▪ How can we integrate this new
information?
▪ More generally, how can we estimate
P(x | z1, ..., zn )?
Recursive Bayesian Updating
P(x | z1, …, zn) = P(zn | x, z1, …, zn−1) P(x | z1, …, zn−1) / P(zn | z1, …, zn−1)

Markov assumption:
Last measurement zn is independent of previous
measurements z1, …, zn−1 if we know the state x

P(x | z1, …, zn) = P(zn | x) P(x | z1, …, zn−1) / P(zn | z1, …, zn−1)
                 = η P(zn | x) P(x | z1, …, zn−1)
                 = η_{1…n} [∏_{i=1…n} P(zi | x)] P(x)
Example: Second Measurement
P(open | z2, z1) = P(z2 | open) P(open | z1) / [P(z2 | open) P(open | z1) + P(z2 | ¬open) P(¬open | z1)]
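The recursive update can be sketched in Python by folding in one measurement at a time and renormalizing after each step. The model for z1 matches the earlier door example; the model for the second measurement (P(z2 | open) = 0.5, P(z2 | ¬open) = 0.6) is an assumed example for illustration:

```python
# Recursive Bayesian updating under the Markov assumption.
def update(belief, likelihood):
    """One Bayes step: belief(x) <- eta * P(z | x) * belief(x)."""
    unnormalized = {x: likelihood[x] * belief[x] for x in belief}
    eta = 1.0 / sum(unnormalized.values())
    return {x: eta * p for x, p in unnormalized.items()}

belief = {"open": 0.5, "not open": 0.5}                  # prior P(open)
belief = update(belief, {"open": 0.6, "not open": 0.3})  # after z1: 2/3
belief = update(belief, {"open": 0.5, "not open": 0.6})  # after z2: 0.625
```

Here the second, less informative measurement pulls the belief that the door is open back down from 2/3 toward 0.5.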
Summary
▪ Probabilities allow us to model uncertainties in
a systematic way
▪ Bayes’ rule allows us to compute probabilities
that are hard to assess otherwise
▪ Under the Markov assumption, recursive
Bayesian updating can be used to efficiently
combine evidence