Friday, April 17, 2026

One may encounter a lot of irritating difficulties when attempting to numerically solve a challenging nonlinear and nonconvex optimal control problem. In this article I’ll consider such a difficult problem: that of finding the shortest path between two points through an obstacle field for a well-known model of a wheeled robot. I’ll examine common issues that arise when trying to solve such a problem numerically (namely, nonsmoothness of the cost and chattering in the control) and how to address them. Examples help to clarify the ideas. Get all the code here: https://github.com/willem-daniel-esterhuizen/car_OCP

1.1 Outline

First, I’ll introduce the car model that we’ll study throughout the article. Then, I’ll state the optimal control problem in full detail. The next section then exposes all the numerical difficulties that arise, ending with a “sensible nonlinear programme” that attempts to deal with them. I’ll then present the details of a homotopy method, which helps guide the solver towards a good solution. I’ll then show you some numerical experiments to clarify everything, and finish off with references for further reading.


2. A car model

We’ll consider the following equations of motion,

\[
\begin{align*}
\dot x_1(t) &= u_1(t)\cos(x_3(t)),\\
\dot x_2(t) &= u_1(t)\sin(x_3(t)),\\
\dot x_3(t) &= u_2(t),
\end{align*}
\]

where \(t \geq 0\) denotes time, \(x_1\in\mathbb{R}\) and \(x_2\in\mathbb{R}\) denote the car’s position, \(x_3\in\mathbb{R}\) denotes its orientation, \(u_1\in\mathbb{R}\) its velocity and \(u_2\in\mathbb{R}\) its rate of turning. This is a common model of a differential-drive robot, which consists of two wheels that can turn independently. This allows it to drive forwards and backwards, rotate when stationary and perform other elaborate driving manoeuvres. Note that, because \(u_1\) can be 0, the model allows the car to stop instantaneously.

A differential drive robot, as modelled by the equations of motion. Image by author.

Throughout the article we’ll let \(\mathbf{x} := (x_1, x_2, x_3)^\top\in\mathbb{R}^3\) denote the state, \(\mathbf{u} := (u_1, u_2)\in\mathbb{R}^2\) denote the control, and define \(f:\mathbb{R}^3\times\mathbb{R}^2 \rightarrow \mathbb{R}^3\) to be,

\[
f(\mathbf{x},\mathbf{u}) :=
\left(
\begin{array}{c}
u_1\cos(x_3)\\
u_1\sin(x_3)\\
u_2
\end{array}
\right),
\]

so that we can succinctly write the system as,

\[
\dot{\mathbf{x}}(t) = f(\mathbf{x}(t), \mathbf{u}(t)).
\]

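
To make the model concrete, here’s a minimal NumPy sketch of \(f\) (my own helper for illustration, not taken from the repo):

```python
import numpy as np

def f(x, u):
    """Differential-drive vector field: x = (x1, x2, x3), u = (u1, u2)."""
    return np.array([
        u[0] * np.cos(x[2]),  # x1_dot: speed projected onto the x-axis
        u[0] * np.sin(x[2]),  # x2_dot: speed projected onto the y-axis
        u[1],                 # x3_dot: rate of turning
    ])

# Facing along the x-axis (x3 = 0), driving forward at speed 1 while turning:
print(f(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.5])))  # [1.  0.  0.5]
```

Note that setting \(u_1 = 0\) makes the first two components vanish regardless of the orientation, which is exactly the instantaneous-stopping behaviour mentioned above.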

3. Optimal path planning problem

Considering the car model in the previous section, we want to find the shortest path connecting an initial and target position while avoiding a number of obstacles. To that end, we’ll consider the following optimal control problem:

\[
\mathrm{OCP}:
\begin{cases}
\min\limits_{\mathbf{x}, \mathbf{u}} \quad & J(\mathbf{x}, \mathbf{u})\\
\mathrm{subject\ to:}\quad & \dot{\mathbf{x}}(t) = f(\mathbf{x}(t), \mathbf{u}(t)), \quad \mathrm{a.e.}\,\, t \in[0,T],\\
\quad & \mathbf{x}(0) = \mathbf{x}^{\mathrm{ini}}, \quad \mathbf{x}(T) = \mathbf{x}^{\mathrm{tar}},\\
\quad & \mathbf{u}(t) \in \mathbb{U}, \quad \forall\, t \in[0,T],\\
\quad & \left( x_1(t) - c_1^i \right)^2 + \left( x_2(t) - c_2^i \right)^2 \geq r_i^2, \quad \forall\, t \in[0,T], \quad \forall\, i\in \mathbb{I},\\
\quad & (\mathbf{x}, \mathbf{u}) \in C_T^3 \times \mathrm{PC}_T^2,
\end{cases}
\]

where \(T\geq 0\) is the finite time horizon, \(J:C_T^3\times \mathrm{PC}_T^2\rightarrow \mathbb{R}_{\geq 0}\) is the cost functional, \(\mathbf{x}^{\mathrm{ini}} \in\mathbb{R}^3\) is the initial state and \(\mathbf{x}^{\mathrm{tar}}\in\mathbb{R}^3\) is the target state. The control is constrained to \(\mathbb{U}:= [\underline{u}_1, \overline{u}_1]\times [\underline{u}_2, \overline{u}_2]\), with \(\underline{u}_j < 0 <\overline{u}_j\), \(j=1,2\). Round obstacles are given by a centre, \((c_1^i, c_2^i)\in\mathbb{R}^2\), and a radius, \(r_i\in\mathbb{R}_{\geq 0}\), with \(\mathbb{I}\) denoting the index set of all obstacles. For the cost functional we’ll consider the arc length of the curve connecting \(\mathbf{x}^{\mathrm{ini}}\) and \(\mathbf{x}^{\mathrm{tar}}\), that is,

\[
J(\mathbf{x},\mathbf{u}) = \int_0^T\left(\dot x_1^2(s) + \dot x_2^2(s)\right)^{\frac{1}{2}}\, \mathrm{d} s.
\]

I’ve used the shorthand \(\mathrm{PC}_T^m\) (respectively \(C_T^m\)) to denote all piecewise continuous functions (respectively continuous functions) mapping the interval \([0,T]\) to \(\mathbb{R}^m\). The abbreviation “\(\mathrm{a.e.}\,\, t \in[0,T]\)” stands for “almost every t in \([0,T]\)”; in other words, the derivative may not exist at a finite number of points in \([0,T]\), for example, where the control is discontinuous.

3.1 Some comments on the OCP

The equations of motion and the presence of obstacles make the optimal control problem nonlinear and nonconvex, which is difficult to solve in general. A solver may converge to locally optimal solutions, which are not necessarily globally optimal, or it may fail to find a feasible solution even though one exists.

For simplicity the obstacles are circles. These are convenient because they’re differentiable, so we can use a gradient-based algorithm to solve the OCP. (We’ll use IPOPT, which implements an interior-point method.) The arc length functional, \(J\), is not differentiable when the car’s speed is zero. However, as we’ll see, we can eliminate this problem by adding a small \(\varepsilon > 0\) under the square root.

Depending on the scheme used to discretise the equations of motion, there may be chattering in the control signal. As we’ll see, this can easily be dealt with by penalising excessive control action in the cost functional.

The horizon length considered in the problem, \(T\), is fixed. Thus, as the problem is posed, we actually want to find the curve of shortest arc length for which the car reaches the target in exactly \(T\) seconds. This is actually not an issue for our car because it can stop instantaneously and rotate on the spot. So, the solutions to the OCP (if the solver can find them) might consist of long idle stretches at the beginning and/or end of the time interval if \(T\) is very large. If we wanted to make the horizon length a decision variable in the problem then we have two options.

First, if we use a direct method to numerically solve the problem (as we will do in the next section) we could vary the horizon length by making the discretisation step size \(h\), which appears in the numerical integration scheme, a decision variable. This is because the dimension of the decision space must be fixed at the time at which you invoke the numerical nonlinear programming solver.

The second option is to resort to an indirect method (in a nutshell, you need to solve a boundary-value problem that you get from an analysis of the problem via Pontryagin’s principle, often via the shooting method). However, for a problem like ours, where you have many state constraints, this can be quite difficult.


If you are finding this article interesting, please consider checking out the blog topicincontrol.com. It offers deep dives into control theory, optimization, and related topics, often with freely accessible code.

4. Deriving a sensible nonlinear programme

We’ll solve the optimal control problem using direct single shooting. More precisely, we’ll take the control to be piecewise constant over a uniform grid of time intervals and propagate the state trajectory from the initial condition using the fourth-order Runge-Kutta (RK4) method. We’ll then form a finite-dimensional nonlinear programme (NLP), where the decision variables consist of the state and control at each discrete time step. This section shows how to form a “sensible” NLP, which deals with the various numerical difficulties.

4.1 The RK4 scheme

Consider the car’s differential equation, with initial condition, \(\mathbf{x}^{\mathrm{ini}}\), over an interval, \([0,T]\),

\[
\dot{\mathbf{x}}(t) = f(\mathbf{x}(t), \mathbf{u}(t)), \quad \mathbf{x}(0) = \mathbf{x}^{\mathrm{ini}}.
\]

Let \(h>0\) denote the constant time step, and let \(K := \mathrm{floor}(T/h)\). Then the RK4 scheme reads,

\[
x[k+1] = x[k] + \frac{h}{6}\mathrm{RK}_4(x[k], u[k]), \quad \forall\,\, k \in[0:K-1], \quad x[0] = \mathbf{x}(0),
\]

where \(\mathrm{RK}_4:\mathbb{R}^3\times \mathbb{R}^2 \rightarrow \mathbb{R}^3\) reads,

\[
\mathrm{RK}_4(x, u) = k_1 + 2k_2 + 2k_3 + k_4,
\]

and

\[
k_1 = f(x, u),\quad k_2 = f(x+ \frac{h}{2}k_1, u), \quad k_3 = f(x + \frac{h}{2}k_2, u),\quad k_4 = f(x + hk_3, u).
\]

The notation \(x[k]\) and \(u[k]\) is meant to denote the discretised state and control, respectively, so as to distinguish them from their continuous-time counterparts.
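
Here’s a self-contained NumPy sketch of the scheme (the repo builds this symbolically for the solver; the function names here are my own):

```python
import numpy as np

def f(x, u):
    # Car dynamics from Section 2
    return np.array([u[0] * np.cos(x[2]), u[0] * np.sin(x[2]), u[1]])

def rk4_step(x, u, h):
    """One RK4 step with the control held constant over the step."""
    k1 = f(x, u)
    k2 = f(x + (h / 2) * k1, u)
    k3 = f(x + (h / 2) * k2, u)
    k4 = f(x + h * k3, u)
    return x + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def propagate(x_ini, u_seq, h):
    """Single shooting: roll the state forward from x_ini under u[0], ..., u[K-1]."""
    xs = [np.asarray(x_ini, dtype=float)]
    for u in u_seq:
        xs.append(rk4_step(xs[-1], u, h))
    return np.array(xs)  # shape (K+1, 3)

# Drive straight (u1 = 1, u2 = 0) for T = 10 with h = 0.1: the car covers 10 units.
T, h = 10.0, 0.1
K = int(np.floor(T / h))
u_seq = np.tile([1.0, 0.0], (K, 1))
xs = propagate([0.0, 0.0, 0.0], u_seq, h)
print(xs[-1])  # ≈ [10, 0, 0]
```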

4.2 Singularity of the cost functional

You might be tempted to consider the polygonal arc length as the cost in the NLP, namely,

\[
\sum_{k=0}^{K-1}\Vert x[k+1] - x[k]\Vert = \sum_{k=0}^{K-1}\left((x_{1}[k+1] - x_1[k])^2 + (x_2[k+1] - x_2[k])^2\right)^{\frac{1}{2}}.
\]

However, this function is not differentiable if for some \(k\in\{0,1,\dots,K-1\}\),

\[
\Vert x[k+1] - x[k]\Vert = 0,
\]

which often leads to the solver failing. You may see the error EXIT: Invalid number in NLP function or derivative detected if you solve a problem with this article’s code (which uses IPOPT, a gradient-based solver).

One solution is to approximate the polygonal arc length with,

\[
\sum_{k=0}^{K-1}\left((x_{1}[k+1] - x_1[k])^2 + (x_2[k+1] - x_2[k])^2 + \varepsilon\right)^{\frac{1}{2}},
\]

with \(\varepsilon > 0\) a small number. We see that, for an arbitrary \(k\in\{1,\dots,K-1\}\) and \(i\in\{1,2\}\),

\[
\begin{align*}
\frac{\partial}{\partial x_i[k]} &\left( \sum_{m=0}^{K-1} \left((x_1[m+1] - x_1[m])^2 + (x_2[m+1] - x_2[m])^2 + \varepsilon\right)^{\frac{1}{2}} \right) \\[6pt]
&= \frac{x_i[k] - x_i[k-1]}{\left((x_1[k] - x_1[k-1])^2 + (x_2[k] - x_2[k-1])^2 + \varepsilon\right)^{\frac{1}{2}}}\\
&\quad + \frac{x_i[k] - x_i[k+1]}{\left((x_1[k+1] - x_1[k])^2 + (x_2[k+1] - x_2[k])^2 + \varepsilon \right)^{\frac{1}{2}}}
\end{align*}
\]

and so this function is continuously differentiable, guaranteeing smooth gradients for IPOPT. (You get similar expressions if you consider \(\frac{\partial}{\partial x_i[K]}\) and \(\frac{\partial}{\partial x_i[0]}\).)
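
To see the singularity concretely, here’s a hypothetical helper (not from the repo) that evaluates the partial-derivative expression above at a point where two consecutive knots coincide, i.e., where the car stands still:

```python
import numpy as np

def seg_grad(p_prev, p, p_next, eps):
    """Partial derivative of the (smoothed) polygonal arc length w.r.t. the middle knot p."""
    d1 = np.sqrt(np.sum((p - p_prev) ** 2) + eps)
    d2 = np.sqrt(np.sum((p_next - p) ** 2) + eps)
    return (p - p_prev) / d1 + (p - p_next) / d2

p = np.array([1.0, 1.0])
# All three knots coincide (zero speed):
with np.errstate(invalid="ignore"):
    print(seg_grad(p, p, p, eps=0.0))   # [nan nan] -- the 0/0 that kills the solver
print(seg_grad(p, p, p, eps=1e-6))      # [0. 0.]  -- well-defined with the epsilon
```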

4.3 Control chattering

Control chattering is the rapid jumping/oscillation/switching of the optimal control signal. There may be parts of the solution that truly chatter (this can occur when the optimal control is bang-bang, for example) or a numerical solver may find a solution that chatters artificially. Delving into this deep topic is beyond this article’s scope but, very briefly, you might encounter this phenomenon in problems where the optimal solution exhibits so-called active arcs. These are portions of the solution along which the state constraints are active, which, in our setting, corresponds to the car travelling along the boundaries of the obstacles along its optimal path. When solving problems exhibiting such arcs via a direct method, as we are doing, the numerical solver may approximate the true solution along these arcs with rapid oscillation.

Fortunately, a simple way to get rid of chattering is to just penalise control action by adding the term:

\[
\delta\sum_{k=0}^{K-1}\Vert u[k] \Vert^2
\]

to the cost functional, for some small \(\delta\). (This at least works well for our problem, even for very small \(\delta\).)

4.4 Scaling for good numerical conditioning

A well-scaled nonlinear programme is one where small changes in the decision variable result in small changes in the cost and in the values of the constraint functions (the so-called constraint residuals). We can check how well our problem is scaled by looking at the magnitude of the cost function, at its gradient and Hessian, as well as the Jacobian of the constraint function at points in the decision space (in particular, at the initial warm start and at points close to the solution). If these quantities are of similar order then the solver will likely be robust, meaning it will usually converge in relatively few steps. A badly-scaled problem may take extremely long to converge (because it might have to take very small steps in the decision variable) or simply fail.

Considering our problem, it makes sense to scale the cost by roughly the length we expect the final path to be. A good choice is the length of the line connecting the initial and target positions; call this \(L\). Moreover, it then makes sense to scale the obstacle constraints by \(1/L^2\) (because we’re squaring quantities there).

4.5 The sensible nonlinear programme

Taking the discussions of the previous subsections into account, the sensible NLP that we’ll consider in the numerics section reads,

\[
\mathrm{NLP}_{\varepsilon, \delta}:
\begin{cases}
\min\limits_{x, u} \quad & J_{\varepsilon,\delta}(x, u) &&\\
\mathrm{subject\ to:}\quad & x[k+1] = x[k] + \frac{h}{6}\mathrm{RK}_4(x[k], u[k]), & \forall\, k \in[0:K-1], &\\
\quad & x[0] = x^{\mathrm{ini}}, &&\\
\quad & x[K] = x^{\mathrm{tar}}, &&\\
\quad & u[k]\in \mathbb{U}, & \forall\, k \in[0:K-1], &\\
\quad & \frac{1}{L^2}\left( x_1[k] - c_1^i \right)^2 + \frac{1}{L^2}\left( x_2[k] - c_2^i \right)^2 \geq \frac{1}{L^2} r_i^2, \quad & \forall\, k \in[0:K], & \forall\, i\in \mathbb{I},\\
\quad & (x, u) \in (\mathbb{R}^{3})^{K+1} \times (\mathbb{R}^{2})^{K}, &&
\end{cases}
\]

where \(J_{\varepsilon,\delta}: (\mathbb{R}^{3})^{K+1} \times (\mathbb{R}^{2})^{K} \rightarrow \mathbb{R}_{>0}\) is defined to be,

\[
J_{\varepsilon,\delta}(x, u) := \frac{1}{L} \left(\sum_{k=0}^{K-1}\left(\Vert x[k+1] - x[k]\Vert^2 + \varepsilon \right)^{\frac{1}{2}} + \delta\sum_{k=0}^{K-1}\Vert u[k] \Vert^2 \right).
\]
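
As a sanity check on the formula, here’s a plain-NumPy version of \(J_{\varepsilon,\delta}\) (a hypothetical mirror of the expression above, not the repo’s symbolic implementation):

```python
import numpy as np

def cost(x, u, L, eps, delta):
    """Scaled, smoothed polygonal arc length plus control penalty.
    x: (K+1, 3) array of states, u: (K, 2) array of controls."""
    diffs = np.diff(x[:, :2], axis=0)                        # position knots x1, x2 only
    arc = np.sum(np.sqrt(np.sum(diffs ** 2, axis=1) + eps))  # smoothed polygonal length
    penalty = delta * np.sum(u ** 2)                         # control-action penalty
    return (arc + penalty) / L

# A straight line from (2, 0) to (-2, 0) has length 4, so with L = 4 the cost is ~1:
K = 100
x = np.zeros((K + 1, 3))
x[:, 0] = np.linspace(2.0, -2.0, K + 1)
u = np.zeros((K, 2))
print(round(cost(x, u, L=4.0, eps=1e-8, delta=1e-4), 3))  # 1.0
```

This also illustrates the scaling argument of Section 4.4: for paths of the expected length, the cost sits near 1.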

To aid the upcoming discussion, define the feasible set, \(\Omega\), to be the set of all pairs \((x,u)\) that satisfy the constraints of \(\mathrm{NLP}_{\varepsilon, \delta}\). A feasible pair \((x^{(\varepsilon,\delta)}, u^{(\varepsilon,\delta)})\) is said to be globally optimal provided that,

\[
J_{\varepsilon,\delta}(x^{(\varepsilon,\delta)}, u^{(\varepsilon,\delta)}) \leq J_{\varepsilon,\delta}(x, u),\quad\forall\,(x, u) \in \Omega.
\]

A feasible pair \((x^{(\varepsilon,\delta)}, u^{(\varepsilon,\delta)})\) is said to be locally optimal provided that there exists a sufficiently small \(\gamma >0\) for which,

\[
J_{\varepsilon,\delta}(x^{(\varepsilon,\delta)}, u^{(\varepsilon,\delta)}) \leq J_{\varepsilon,\delta}(x, u),\quad\forall\,(x, u) \in \Omega\cap \mathbf{B}_{\gamma}(x^{(\varepsilon,\delta)}, u^{(\varepsilon,\delta)}).
\]

Here \(\mathbf{B}_{\gamma}(x^{(\varepsilon,\delta)}, u^{(\varepsilon,\delta)})\) denotes a Euclidean ball of radius \(\gamma>0\) centred about the point \((x^{(\varepsilon,\delta)}, u^{(\varepsilon,\delta)})\).


5. A homotopy method

Recall that the problem is nonlinear and nonconvex. Thus, if we were to just choose a small \(\varepsilon>0\) and \(\delta>0\) and throw \(\mathrm{NLP}_{\varepsilon, \delta}\) at a solver, then it may fail even though feasible pairs exist, or it may find “bad” locally optimal pairs. One way to deal with these issues is to guide the solver towards a solution by successively solving easier problems that converge to the difficult problem.

This is the idea behind using a homotopy method. We’ll guide the solver towards a solution with a simple algorithm that successively gives the solver a good warm start:

Homotopy algorithm

\[
\begin{aligned}
&\textbf{Input:}\ \text{Initial parameters } \varepsilon_{\mathrm{ini}}>0,\ \delta_{\mathrm{ini}}>0.\ \text{Tolerances } \mathrm{tol}_{\varepsilon}>0,\ \mathrm{tol}_{\delta}>0.\\
&\textbf{Output:}\ \text{Solution } (x^{(\mathrm{tol}_{\varepsilon}, \mathrm{tol}_{\delta})}, u^{(\mathrm{tol}_{\varepsilon}, \mathrm{tol}_{\delta})})\ \text{(if it can find one)}\\
&\quad \text{Get initial warm start, } (x^{\mathrm{warm}}, u^{\mathrm{warm}})\ \text{(not necessarily feasible)}\\
&\quad i \gets 0,\quad \varepsilon \gets \varepsilon_{\mathrm{ini}},\quad \delta \gets \delta_{\mathrm{ini}}\\
&\quad \textbf{while } \varepsilon > \mathrm{tol}_{\varepsilon} \text{ or } \delta > \mathrm{tol}_{\delta}:\\
&\quad\quad \varepsilon \gets \max\{ \varepsilon_{\mathrm{ini}} / 2^i,\ \mathrm{tol}_{\varepsilon} \}\\
&\quad\quad \delta \gets \max\{ \delta_{\mathrm{ini}} / 2^i,\ \mathrm{tol}_{\delta} \}\\
&\quad\quad \text{Solve } \mathrm{NLP}_{\varepsilon, \delta} \text{ with warm start } (x^{\mathrm{warm}}, u^{\mathrm{warm}})\\
&\quad\quad \text{Label the solution } (x^{(\varepsilon, \delta)}, u^{(\varepsilon, \delta)})\\
&\quad\quad (x^{\mathrm{warm}}, u^{\mathrm{warm}}) \gets (x^{(\varepsilon, \delta)}, u^{(\varepsilon, \delta)})\\
&\quad\quad i \gets i + 1\\
&\quad \textbf{end while}\\
&\quad \textbf{return } (x^{(\mathrm{tol}_{\varepsilon}, \mathrm{tol}_{\delta})}, u^{(\mathrm{tol}_{\varepsilon}, \mathrm{tol}_{\delta})})
\end{aligned}
\]
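
The algorithm can be sketched in a few lines of Python. The solve_nlp callable here is a stand-in (with the repo, it would wrap solve_OCP), so this shows only the schedule and warm-start logic:

```python
def solve_with_homotopy(solve_nlp, warm_start,
                        eps_ini=1e-2, delta_ini=1e-2, tol_eps=1e-6, tol_delta=1e-4):
    """Halve eps and delta each round, warm-starting each solve with the
    previous round's solution. solve_nlp(warm_start, eps, delta) is assumed
    to return the solution pair (x, u)."""
    i = 0
    while True:
        eps = max(eps_ini / 2 ** i, tol_eps)      # clamp at the tolerances
        delta = max(delta_ini / 2 ** i, tol_delta)
        warm_start = solve_nlp(warm_start, eps, delta)
        i += 1
        if eps <= tol_eps and delta <= tol_delta:  # final solve was at (tol, tol)
            return warm_start

# A stub solver that just records the (eps, delta) schedule:
schedule = []
solve_with_homotopy(lambda w, e, d: schedule.append((e, d)) or w, warm_start=None)
print(schedule[0], schedule[-1])  # first: (0.01, 0.01), last: (1e-06, 0.0001)
```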

NOTE: The homotopy algorithm is meant to help the solver find a good solution. However, there is no guarantee that it will successfully find even a feasible solution, never mind a globally optimal one. For some theory concerning so-called globally convergent homotopies you can consult the papers [1] and [2].


6. Numerical experiments

In this section we’ll consider \(\mathrm{NLP}_{\varepsilon, \delta}\) with initial and target states \(x^{\mathrm{ini}} = (2,0,\frac{\pi}{2})\) and \(x^{\mathrm{tar}} = (-2,0,\mathrm{free})\), respectively (thus \(L = 4\)). We’ll take the horizon as \(T=10\) and keep the discretisation step size in the RK4 scheme constant at \(h=0.1\), thus \(K=100\). As the control constraints we’ll take \(\mathbb{U} = [-1,1] \times [-1, 1]\), and we’ll consider this interesting obstacle field:

6.1 Checking conditioning

First, we’ll check the problem’s scaling at the warm start. Stack the state and control into one long vector, \(\mathbf{d}\in\mathbb{R}^{3(K+1) + 2K}\), as follows:

\[
\mathbf{d} := (x_1[0],x_2[0],x_3[0],x_1[1],x_2[1],x_3[1],\dots,u_1[0],u_2[0],u_1[1],u_2[1],\dots,u_1[K-1],u_2[K-1]),
\]

and write \(\mathrm{NLP}_{\varepsilon, \delta}\) as:

\[
\begin{cases}
\min\limits_{\mathbf{d}} \quad & J_{\varepsilon,\delta}(\mathbf{d})\\
\mathrm{subject\ to:}\quad & g(\mathbf{d}) \leq \mathbf{0},
\end{cases}
\]

where \(g\) collects all the constraints. As a warm start, call this vector \(\mathbf{d}^{\mathrm{ini}}\), we’ll take the line connecting the initial and target positions, with \(u_1[k] \equiv 0.1\) and \(u_2[k] \equiv 0\).
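
For illustration, such a warm start might be built as follows (a hypothetical reconstruction; the repo’s get_initial_warm_start may differ in its details):

```python
import numpy as np

def initial_warm_start(x_init, x_target, T, h):
    """Straight-line state guess with u1 = 0.1, u2 = 0.
    Note: this guess is not necessarily dynamically feasible."""
    K = int(np.floor(T / h))
    x_guess = np.linspace(x_init, x_target, K + 1)  # linearly interpolate all three states
    u_guess = np.tile([0.1, 0.0], (K, 1))
    return x_guess, u_guess

x_guess, u_guess = initial_warm_start(np.array([2.0, 0.0, np.pi / 2]),
                                      np.array([-2.0, 0.0, np.pi / 2]), T=10, h=0.1)
print(x_guess.shape, u_guess.shape)  # (101, 3) (100, 2)
```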

We should take a look at the magnitudes of the following quantities at the point \(\mathbf{d}^{\mathrm{ini}}\):

  • The cost function, \(J_{\varepsilon,\delta}(\mathbf{d}^{\mathrm{ini}})\)
  • Its gradient, \(\nabla J_{\varepsilon,\delta}(\mathbf{d}^{\mathrm{ini}})\)
  • Its Hessian, \(\mathbf{H}(J_{\varepsilon,\delta}(\mathbf{d}^{\mathrm{ini}}))\)
  • The constraint residual, \(g(\mathbf{d}^{\mathrm{ini}})\)
  • Its Jacobian, \(\mathbf{J}(g(\mathbf{d}^{\mathrm{ini}}))\)

We can do this with the code from the repo:

import numpy as np
from car_OCP import get_initial_warm_start, build_and_check_scaling

x_init = np.array([[2],[0],[np.pi/2]])
x_target = np.array([[-2],[0]])

T = 10 # Horizon length
h = 0.1 # RK4 time step

obstacles = [
    {'centre': (0,0), 'radius': 0.5},
    {'centre': (-0.5,-0.5), 'radius': 0.2},
    {'centre': (1,0.5), 'radius': 0.3},
    {'centre': (1,-0.35), 'radius': 0.3},
    {'centre': (-1.5,-0.2), 'radius': 0.2},
    {'centre': (1.5,-0.2), 'radius': 0.2},
    {'centre': (-0.75,0), 'radius': 0.2},
    {'centre': (-1.25,0.2), 'radius': 0.2}
]

constraints = {
    'u1_min' : -1,
    'u1_max' : 1,
    'u2_min' : -1,
    'u2_max' : 1,
}

warm_start = get_initial_warm_start(x_init, x_target, T, h)

build_and_check_scaling(x_init, x_target, obstacles, constraints, T, h, warm_start, eps=1e-4, delta=1e-4)

=== SCALING DIAGNOSTICS ===

Objective J : 1.031e+00

||∇J||_2 : 3.430e-01

||Hess J||_Frob : 1.485e+02

||g||_2 : 7.367e+00

||Jac g||_Frob : 2.896e+01

============================

With small \(\varepsilon\) and \(\delta\), and with our chosen \(\mathbf{d}^{\mathrm{ini}}\), the objective should clearly be close to 1. The size of its gradient and of the constraint residuals at the warm start should be of similar order. In the print-out, \(\Vert \nabla J \Vert_2\) denotes the Euclidean norm. The cost function’s Hessian can be of larger order (this increases with a finer time step). In the print-out, \(\Vert \mathrm{Hess}\, J \Vert_{\mathrm{Frob}}\) is the Frobenius norm of the Hessian matrix, which, intuitively speaking, tells you “how big” the linear transformation is “on average”, so it’s a good measure to consider. The Jacobian of \(g\) can also be of slightly larger order.

The print-out shows that our problem is well-scaled, great. But one last thing we’ll also do is tell IPOPT to scale the problem before we solve it, with "ipopt.nlp_scaling_method":

.... set up problem ....

opts = {"ipopt.print_level": 0, 
        "print_time": 0, 
        "ipopt.sb": "sure",
        "ipopt.nlp_scaling_method": "gradient-based"}

opti.minimize(cost)
opti.solver("ipopt", opts)

6.2 Solving WITHOUT homotopy

This subsection shows the effect of the parameters \(\varepsilon\) and \(\delta\) on the solution, without running the homotopy algorithm. In other words, we fix \(\varepsilon\) and \(\delta\), and solve \(\mathrm{NLP}_{\varepsilon, \delta}\) directly, without even specifying a good initial guess. Here is an example of how you can use the article’s repo to solve a problem:

import numpy as np
from car_OCP import solve_OCP, get_arc_length, plot_solution_in_statespace_and_control

eps=1e-2
delta=1e-2

x_init = np.array([[2],[0],[np.pi/2]])
x_target = np.array([[-2],[0]])

T = 10 # Horizon length
h = 0.1 # RK4 time step

obstacles = [
    {'centre': (0,0), 'radius': 0.5},
    {'centre': (-0.5,-0.5), 'radius': 0.2},
    {'centre': (1,0.5), 'radius': 0.3},
    {'centre': (1,-0.35), 'radius': 0.3},
    {'centre': (-1.5,-0.2), 'radius': 0.2},
    {'centre': (1.5,-0.2), 'radius': 0.2},
    {'centre': (-0.75,0), 'radius': 0.2},
    {'centre': (-1.25,0.2), 'radius': 0.2}
]

constraints = {
    'u1_min' : -1,
    'u1_max' : 1,
    'u2_min' : -1,
    'u2_max' : 1,
}

x_opt, u_opt = solve_OCP(x_init, x_target, obstacles, constraints, T, h, warm_start=None, eps=eps, delta=delta)

plot_solution_in_statespace_and_control(x_opt, u_opt, obstacles, arc_length=np.round(get_arc_length(x_opt), 2), obstacles_only=False, eps=eps, delta=delta)

Note how the solver can find locally optimal solutions that are clearly not so good, like in the case \(\varepsilon\) = \(\delta\) = 1e-4. This is one motivation for using homotopy: you can guide the solver to a better locally optimal solution. Note how \(\delta\) affects the chattering, with it being really bad when \(\delta=0\). The solver simply fails when \(\varepsilon=0\).

The solution with eps = delta = 1e-2. Image by author.
The solution with eps = delta = 1e-4. Note how the solution is locally optimal, and not so good. Image by author.
Solution with eps = 1e-6, and delta = 0. Note how bad the control chattering is. Image by author.

6.3 Solving WITH homotopy

Let’s now solve the problem with the homotopy algorithm. That is, we iterate over \(i\) and feed the solver the previous problem’s solution as a warm start on each iteration. The initial guess we provide (at \(i=0\)) is the same one we used when we checked the conditioning; that is, it’s just the line connecting the initial and target positions, with \(u_1[k] \equiv 0.1\) and \(u_2[k] \equiv 0\).

The figures below show the solution obtained at various steps of the iteration in the homotopy algorithm. The parameter \(\delta\) is kept above 1e-4, so we don’t get chattering.

Solution via homotopy, at eps = delta = 1e-4. Image by author.
Solution via homotopy, at eps = 1e-5, delta = 1e-4. Image by author.
Solution via homotopy, at eps = 1e-6, delta = 1e-4. Image by author.

As one would expect, the arc length of the solution obtained via the homotopy algorithm monotonically decreases with increasing \(i\). Here, \(\delta\) = 1e-4 throughout.


7. Conclusion

I’ve considered a difficult optimal control problem and gone through, in detail, the steps one needs to follow to solve it. I’ve shown how one can practically deal with issues of nondifferentiability and control chattering, and how a simple homotopy method can lead to solutions of better quality.


8. Further reading

A classic book on practical aspects surrounding optimal control is [3]. Consult the papers [4] and [5] for well-cited surveys of numerical methods in optimal control. The book [6] is another excellent reference on numerical methods.


References

[1] Watson, Layne T. 2001. “Theory of Globally Convergent Probability-One Homotopies for Nonlinear Programming.” SIAM Journal on Optimization 11 (3): 761–80. https://doi.org/10.1137/S105262349936121X.

[2] Esterhuizen, Willem, Kathrin Flaßkamp, Matthias Hoffmann, and Karl Worthmann. 2025. “Globally Convergent Homotopies for Discrete-Time Optimal Control.” SIAM Journal on Control and Optimization 63 (4): 2686–2711. https://doi.org/10.1137/23M1579224.

[3] Bryson, Arthur Earl, and Yu-Chi Ho. 1975. Applied Optimal Control: Optimization, Estimation and Control. Taylor & Francis.

[4] Betts, John T. 1998. “Survey of Numerical Methods for Trajectory Optimization.” Journal of Guidance, Control, and Dynamics 21 (2): 193–207.

[5] Rao, Anil V. 2009. “A Survey of Numerical Methods for Optimal Control.” Advances in the Astronautical Sciences 135 (1): 497–528.

[6] Gerdts, Matthias. 2023. Optimal Control of ODEs and DAEs. Walter de Gruyter GmbH & Co KG.
