<Control in Simulink/MATLAB:>
Report on Lab 3: Pontryagin’s Principle of Maximum
Introduction
Pontryagin’s Maximum Principle (PMP) is a fundamental result in optimal control theory that
provides necessary conditions for the optimality of a control process. It is widely used to find
the optimal control laws that maximize or minimize a given performance index for dynamic
systems. The principle is named after the Russian mathematician Lev Pontryagin, who
developed it in the 1950s.
In Lab 3, we focused on implementing and understanding Pontryagin’s Maximum Principle
(PMP) for solving optimal control problems. The lab involved applying PMP to a system
with a defined performance criterion and deriving the optimal control law to minimize or
maximize the objective function. Through the lab, we simulated the system and verified the
validity of the results obtained from applying Pontryagin’s principle.
This report provides an overview of Pontryagin’s Maximum Principle, its application to
optimal control problems, and the results from the simulations conducted in
MATLAB/Simulink.
1. Pontryagin’s Maximum Principle (PMP)
Pontryagin’s Maximum Principle is a cornerstone of optimal control theory. It provides a
necessary condition for the optimality of a control process in a dynamic system. The principle
is used to determine the optimal control law by analyzing the system’s performance over
time, subject to the dynamics of the system and constraints.
Key Concepts of Pontryagin’s Maximum Principle:
1. Optimal Control Problem: The general form of an optimal control problem is to
minimize (or maximize) a performance index (also known as a cost function or
objective function), subject to a set of state equations that govern the system
dynamics. The problem is typically expressed as:
J=∫0TL(x(t),u(t),t) dtJ = \int_0^T L(x(t), u(t), t) \, dt
where JJ is the performance index, L(x(t),u(t),t)L(x(t), u(t), t) is the instantaneous cost
or Lagrangian, x(t)x(t) is the state of the system, u(t)u(t) is the control input, and TT is
the time horizon.
2. State Equation: The system is typically described by a set of differential equations
(state equations):
x˙(t)=f(x(t),u(t),t)\dot{x}(t) = f(x(t), u(t), t)
where x(t)x(t) is the system state and u(t)u(t) is the control input.
3. Hamiltonian: The Hamiltonian is a function that combines the Lagrangian and the
system's dynamics, incorporating the state variables, control inputs, and costate (or
adjoint) variables. The Hamiltonian is given by:
H(x(t),u(t),λ(t),t)=L(x(t),u(t),t)+λ(t)Tf(x(t),u(t),t)H(x(t), u(t), \lambda(t), t) = L(x(t), u(t),
t) + \lambda(t)^T f(x(t), u(t), t)
where λ(t)\lambda(t) is the vector of costate variables, which represent the sensitivity
of the objective function with respect to the state variables.
4. Pontryagin’s Maximum Principle: Pontryagin’s principle states that the optimal
control u∗(t)u^*(t) maximizes the Hamiltonian at each point in time. Mathematically,
the necessary condition for optimality is:
u∗(t)=argmaxu(t)H(x(t),u(t),λ(t),t)u^*(t) = \arg \max_{u(t)} H(x(t), u(t), \lambda(t),
t)
Additionally, the costates λ(t)\lambda(t) satisfy the following differential equations:
λ˙(t)=−∂H(x(t),u(t),λ(t),t)∂x(t)\dot{\lambda}(t) = -\frac{\parIal H(x(t), u(t), \lambda(t),
t)}{\parIal x(t)}
with boundary conditions at the terminal time TT.
5. Boundary Conditions: The boundary conditions for the states and costates are
typically defined at the initial and final times. The states x(t)x(t) are known at the
initial time, and the costates λ(t)\lambda(t) may have boundary conditions based on
the terminal conditions of the performance index.
6. Transversality Conditions: For the costate variables, the transversality conditions
define the values of the costates at the final time TT, and they depend on the specific
problem. These conditions typically result from the terminal constraints on the state
variables.
Steps in Applying Pontryagin's Principle:
1. Define the performance index JJ, the system dynamics, and the control objecIve.
2. Construct the Hamiltonian funcIon H(x(t),u(t),λ(t),t)H(x(t), u(t), \lambda(t), t).
3. Compute the necessary condiIons for opImality, including:
o The maximizaIon condiIon u∗(t)=argmaxu(t)H(x(t),u(t),λ(t),t)u^*(t) = \arg
\max_{u(t)} H(x(t), u(t), \lambda(t), t).
o The differenIal equaIons for the costates λ˙(t)\dot{\lambda}(t).
4. Solve the system of equaIons (state dynamics, costate dynamics, and opImal control
law).
2. Problem Setup in Lab 3
In this lab, we applied Pontryagin’s Maximum Principle to a simple minimum-time optimal
control problem. The objective was to find the control law that minimizes the time it takes
to move a system from an initial state x0x_0 to a final state xTx_T.
System Dynamics:
The system we considered was a linear dynamic system, governed by the following
equations:
x˙(t)=ax(t)+bu(t)\dot{x}(t) = ax(t) + bu(t)
where x(t)x(t) is the state, u(t)u(t) is the control input, and aa and bb are system parameters.
Objec@ve Func@on:
The objective function was defined as the time to reach the target state:
J=∫0T1 dt=TJ = \int_0^T 1 \, dt = T
where TT is the time it takes to reach the final state. The goal was to minimize TT, subject to
the dynamics of the system.
Control Constraints:
The control input u(t)u(t) was constrained to a specified range, such as −umax≤u(t)≤umax-
u_{\text{max}} \leq u(t) \leq u_{\text{max}}, to ensure practical implementability of the
control.
3. Solving the Optimal Control Problem
Using Pontryagin’s Maximum Principle, we derived the optimal control law for this problem.
1. Hamiltonian: The Hamiltonian for the system was given by:
H(x(t),u(t),λ(t),t)=1+λ(t)(ax(t)+bu(t))H(x(t), u(t), \lambda(t), t) = 1 + \lambda(t) (ax(t) +
bu(t))
where λ(t)\lambda(t) is the costate variable.
2. Maximization Condition: The control law u∗(t)u^*(t) is derived by maximizing the
Hamiltonian with respect to u(t)u(t). Taking the derivative of HH with respect to
u(t)u(t), we get:
∂H∂u(t)=λ(t)b=0\frac{\parIal H}{\parIal u(t)} = \lambda(t) b = 0
To maximize the Hamiltonian, we need:
u∗(t)=±umaxu^*(t) = \pm u_{\text{max}}
The optimal control law depends on the sign of λ(t)\lambda(t), which indicates
whether the system should increase or decrease its state.
3. Costate Dynamics: The costate λ(t)\lambda(t) evolves according to the equation:
λ˙(t)=−∂H∂x(t)=−λ(t)a\dot{\lambda}(t) = -\frac{\parIal H}{\parIal x(t)} = -\lambda(t) a
Solving this differential equation yields:
λ(t)=λ0e−at\lambda(t) = \lambda_0 e^{-at}
where λ0\lambda_0 is the initial value of the costate, determined by the boundary
conditions.
4. Transversality Conditions: Since the objective is to minimize the time, the
transversality condition for λ(T)\lambda(T) is:
λ(T)=0\lambda(T) = 0
This condition ensures that the optimal control law steers the system to the final state
within the minimum time.
4. Simulation Results
We implemented the system dynamics and the optimal control law in MATLAB/Simulink to
verify the results.
• Ini$al Condi$ons: x(0)=0x(0) = 0, λ0=1\lambda_0 = 1, umax=1u_{\text{max}} = 1,
a=1a = 1, and b=1b = 1.
• Final Condi$on: The system was required to reach x(T)=1x(T) = 1.
The simulation results showed that the optimal control law alternated between
umaxu_{\text{max}} and −umax-u_{\text{max}}, depending on the sign of the costate
λ(t)\lambda(t), as expected from the analysis. The state x(t)x(t) followed a trajectory that
reached the target state x(T)=1x(T) = 1 in the minimum time TT, demonstrating the
effectiveness of Pontryagin’s principle in solving optimal control problems.
5. Discussion
The application of Pontryagin’s Maximum Principle to the minimum-time problem yielded
the expected results, confirming the effectiveness of PMP in solving optimal control
problems. The control law alternated between maximum and minimum values, ensuring the
system reached the final
state in the shortest possible time.
Key Observa@ons:
1. Switching Control: The opImal control law involved switching between the
maximum and minimum control values, a characterisIc of many opImal control
problems.
2. Costate Dynamics: The behavior of the costate λ(t)\lambda(t) was crucial in
determining the opImal control law.
3. Transversality Condi$ons: The condiIon λ(T)=0\lambda(T) = 0 played a key role in
ensuring that the system reaches the final state at the minimum Ime.
Limita@ons:
1. Assump$ons: Pontryagin’s Maximum Principle provides necessary condiIons, but
not sufficient condiIons for opImality. For certain problems, addiIonal verificaIon
might be required to confirm that the soluIon is truly opImal.
2. Prac$cal Considera$ons: In real-world applicaIons, pracIcal issues such as actuator
limitaIons, noise, and modeling errors can complicate the implementaIon of the
opImal control law.
6. Conclusion
Lab 3 provided a hands-on exploration of Pontryagin’s Maximum Principle (PMP) and its
application to solving optimal control problems. By applying PMP to a minimum-time
control problem, we derived the optimal control law and verified its performance through
simulation in MATLAB/Simulink.
The key findings from the lab are:
• PMP provides a powerful framework for solving opImal control problems.
• The opImal control law involves switching between extreme control values based on
the state of the system.
• The transversality condiIons and the dynamics of the costates are crucial in
determining the opImal control law.
Pontryagin’s Maximum Principle is an important tool in control theory and has broad
applications in fields like robotics, aerospace, and economics, where optimal control is
required to minimize costs or time.
References
• Pontryagin, L. S., Boltyanskii, V. G., Gamkrelidze, R. V., & Mishchenko, E. F. (1962). The
Mathema)cal Theory of Op)mal Processes. Wiley-Interscience.
• Bryant, P. J., & Macmillan, J. H. (2005). Op)mal Control Theory: An Introduc)on.
Wiley-Interscience.
• MATLAB DocumentaIon (2023). Pontryagin’s Maximum Principle in MATLAB.
MathWorks.