One of the standard controllers in basic control theory is the linear-quadratic regulator (LQR). There is a finite-horizon case (where you have a limited amount of time), and an infinite-horizon case (where you don’t); in this post, for simplicity, we’re only going to be dealing with the infinite-horizon case.

The LQR is designed to handle a very specific kind of problem. First, it assumes you are controlling a system with linear dynamics, which means you can express them as

$\dot{\mathbf{x}} = \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u},$

where $\mathbf{x}$ and $\dot{\mathbf{x}}$ are the state and its time derivative, $\mathbf{u}$ is the input, and $\mathbf{A}$ and $\mathbf{B}$ capture the effects of the state and input on the derivative. And second, it assumes that the cost function, denoted $J$, is a quadratic of the form

$J = \int_0^{\infty} \left( (\mathbf{x} - \mathbf{x}^*)^T \mathbf{Q} \, (\mathbf{x} - \mathbf{x}^*) + \mathbf{u}^T \mathbf{R} \, \mathbf{u} \right) dt,$

where $\mathbf{x}^*$ is the target state, and $\mathbf{Q}$ and $\mathbf{R}$ are weights on the cost of not being at the target state and of applying a control signal. The higher $\mathbf{Q}$ is, the more important it is to get to the target state asap; the higher $\mathbf{R}$ is, the more important it is to keep the control signal small as you go to the target state.
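To make the weighting concrete, here's a quick sketch (with made-up state, target, and weight values) of evaluating the integrand of the quadratic cost at a single instant:

```python
import numpy as np

# Hypothetical 2D state (position, velocity); the target and the
# Q, R weights below are made-up values for illustration.
x = np.array([1.0, 0.5])
x_target = np.array([0.0, 0.0])
u = np.array([0.2])

Q = np.diag([10.0, 1.0])  # penalize position error more than velocity error
R = np.array([[0.1]])     # keep control effort cheap

err = x - x_target
# instantaneous cost; the full cost J integrates this over time
cost_rate = err @ Q @ err + u @ R @ u
```

Cranking up `Q` relative to `R` makes the resulting controller more aggressive about reaching the target; cranking up `R` makes it stingier with control effort.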

The goal of the LQR is to calculate a feedback gain matrix $\mathbf{K}$ such that the control signal

$\mathbf{u} = \mathbf{K} (\mathbf{x}^* - \mathbf{x})$

drives the system to the target. When the system is a linear system with a quadratic cost function, this can be done optimally. There is lots of discussion elsewhere about LQRs and their derivation, so I’m not going to go into that in this post. Instead, I’m going to talk about applying LQRs to non-linear systems, and using finite differences to do it, which works when you have a readily accessible simulation of the system on hand. The fun part is that by using finite differences you can get this to work without working out the dynamics equations yourself.
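As a concrete (if toy) example of solving for the feedback gain, here's a sketch using SciPy's continuous algebraic Riccati equation solver on a double integrator; the system matrices and weights here are illustrative stand-ins, not anything from the arm simulations below:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: state = [position, velocity], input = acceleration.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)           # state error weight
R = np.array([[1.0]])   # control effort weight

# Solve the continuous ARE for P, then K = R^-1 B^T P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Feedback control toward a target state
x = np.array([1.0, 0.0])
x_target = np.array([0.0, 0.0])
u = K @ (x_target - x)
```

For this particular system and weighting the optimal gain works out to $\mathbf{K} = [1, \sqrt{3}]$, which the solver recovers numerically.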

**Using LQRs on non-linear systems**

As you may have noticed, non-linear systems violate the first assumption of a linear quadratic regulator: that the system is linear. That doesn’t mean that we can’t apply it, it just means that it’s not going to be optimal. How poorly the LQR will perform depends on a few things, two important factors being how non-linear the system dynamics actually are, and how often you’re able to update the feedback gain matrix $\mathbf{K}$. To apply LQR to non-linear systems we’re just going to close our eyes and pretend that the system dynamics *are* linear, i.e. that they fit the form

$\dot{\mathbf{x}} = \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u}.$

We’ll do this by approximating the actual dynamics of the system linearly. We’ll then solve for our gain matrix $\mathbf{K}$, generate our control signal for this timestep, and then re-approximate the dynamics at the next time step and solve for $\mathbf{K}$ from the new state. The more non-linear the system dynamics are, the less appropriate $\mathbf{K}$ will be for generating our control signal as we move away from the state $\mathbf{K}$ was calculated in; this is why the update time of the LQR can become an important factor.

**Using finite-differences to approximate system dynamics**

An important question, then, is how do we find this system approximation? How can we calculate the $\mathbf{A}$ and $\mathbf{B}$ matrices that we then use to solve for $\mathbf{K}$? If we know the dynamics of the system to be

$\dot{\mathbf{x}} = f(\mathbf{x}, \mathbf{u}),$

then we can calculate

$\mathbf{A} = \frac{\partial f(\mathbf{x}, \mathbf{u})}{\partial \mathbf{x}}, \quad \mathbf{B} = \frac{\partial f(\mathbf{x}, \mathbf{u})}{\partial \mathbf{u}}.$

If you’re going to try this for the 3-link arm, though, get out Mathematica. Do not try this by hand. If you disregard my warning and foolhardily attempt such a derivation you will regret, repent, and then appeal to Wolfram Alpha for salvation. These equations quickly become terrible and long even for seemingly not-so-complicated systems.
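If you do want to go the symbolic route, a computer algebra system can do the differentiation for you. Here's a sketch with SymPy on a single pendulum, a hypothetical stand-in that's far simpler than the 3-link arm, where the state is $[\theta, \dot{\theta}]$ and the input is a torque:

```python
import sympy as sp

# Single pendulum: state x = [theta, dtheta], input u = torque.
theta, dtheta, u = sp.symbols('theta dtheta u')
g, l, m = sp.symbols('g l m', positive=True)

# dynamics: dx = f(x, u)
f = sp.Matrix([dtheta,
               -(g / l) * sp.sin(theta) + u / (m * l**2)])

x = sp.Matrix([theta, dtheta])
A = f.jacobian(x)               # df/dx, evaluated symbolically
B = f.jacobian(sp.Matrix([u]))  # df/du
```

Even here the Jacobian picks up a $-\frac{g}{l}\cos(\theta)$ term; for a 3-link arm the corresponding expressions sprawl across pages, which is exactly why the finite-differences route below is attractive.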

There are a few ways to skirt this. Here we’re going to assume that the system under control is a simulation, or that we at least have access to an accurate model, and use the finite differences method to compute these values. The idea behind finite differences is to approximate the rate of change of the function $f$ at the point $x$ by sampling $f$ near $x$ and using the difference to calculate $\dot{f}(x)$. Here’s a picture for a 1D system:

So here, our current state $x$ is the blue dot, and the red dots represent the sample points $x + h$ and $x - h$. We can then calculate

$\dot{f}(x) \approx \frac{f(x + h) - f(x - h)}{2h},$

and you can see the actual rate of change of $f$ at $x$ plotted as the blue dashed line, and the approximated rate of change calculated using finite differences plotted as the red dashed line. We can also see that the approximated derivative is only accurate near $x$ (the blue dot).
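In code, the 1D central-difference estimate is just a couple of lines. Here's a small sketch checked against a function whose derivative we know:

```python
import numpy as np

def approx_derivative(f, x, eps=1e-5):
    """Central-difference approximation of df/dx at the point x."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Sanity check against a known derivative: d/dx sin(x) = cos(x)
x0 = 1.0
approx = approx_derivative(np.sin, x0)
exact = np.cos(x0)
```

The central difference has error on the order of `eps**2`, so even a fairly coarse step gives a good estimate near `x0`, while the approximation degrades as you move away from it.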

Back in our multi-dimensional system, to use finite differences to calculate the derivatives with respect to the state and the input we’re going to vary each dimension of the state and input by some small amount, one at a time, calculating the effect of each perturbation on the output. Here’s a chunk of pseudo-code to hopefully clarify this idea:

```python
eps = 1e-5

# approximate A = df/dx: perturb each state dimension in turn
A = np.zeros((len(current_state), len(current_state)))
for ii in range(len(current_state)):
    x = current_state.copy()
    x[ii] += eps
    x_inc = simulate_system(state=x, input=control_signal)
    x = current_state.copy()
    x[ii] -= eps
    x_dec = simulate_system(state=x, input=control_signal)
    A[:, ii] = (x_inc - x_dec) / (2 * eps)

# approximate B = df/du: perturb each input dimension in turn
B = np.zeros((len(current_state), len(control_signal)))
for ii in range(len(control_signal)):
    u = control_signal.copy()
    u[ii] += eps
    x_inc = simulate_system(state=current_state, input=u)
    u = control_signal.copy()
    u[ii] -= eps
    x_dec = simulate_system(state=current_state, input=u)
    B[:, ii] = (x_inc - x_dec) / (2 * eps)
```

Now that we’re able to generate our $\mathbf{A}$ and $\mathbf{B}$ matrices, we have everything we need to solve for our feedback gain matrix $\mathbf{K}$! Which is great.
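Putting the pieces together, a single control step might look like the sketch below: linearize via finite differences, solve the Riccati equation for $\mathbf{K}$, and generate the control signal. The `Q` and `R` weights are made-up placeholder values, and `simulate_system` is assumed to return the state derivative (the continuous-time setup):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_control(current_state, x_target, simulate_system, n_inputs, eps=1e-5):
    """One control step: linearize the dynamics with finite differences,
    solve for the gain K, and return u = K (x_target - x)."""
    n_states = len(current_state)
    u0 = np.zeros(n_inputs)

    # A = df/dx via central differences
    A = np.zeros((n_states, n_states))
    for ii in range(n_states):
        x = current_state.copy(); x[ii] += eps
        x_inc = simulate_system(state=x, input=u0)
        x = current_state.copy(); x[ii] -= eps
        x_dec = simulate_system(state=x, input=u0)
        A[:, ii] = (x_inc - x_dec) / (2 * eps)

    # B = df/du via central differences
    B = np.zeros((n_states, n_inputs))
    for ii in range(n_inputs):
        u = u0.copy(); u[ii] += eps
        x_inc = simulate_system(state=current_state, input=u)
        u = u0.copy(); u[ii] -= eps
        x_dec = simulate_system(state=current_state, input=u)
        B[:, ii] = (x_inc - x_dec) / (2 * eps)

    Q = np.eye(n_states)          # illustrative weights, tune per system
    R = 0.01 * np.eye(n_inputs)
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    return K @ (x_target - current_state)

# Hypothetical test system: a double integrator, whose state
# derivative is [velocity, acceleration]
def simulate_system(state, input):
    return np.array([state[1], input[0]])

u = lqr_control(np.array([1.0, 0.0]), np.zeros(2), simulate_system, n_inputs=1)
```

In the full non-linear setting you would call something like `lqr_control` at every update step, re-linearizing around the current state each time.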

**Note on using finite differences in continuous vs discrete setup**

Something that’s important to straighten out, too, is what exactly is returned by the `simulate_system` function in the code above. In the continuous case, your system is captured as

$\dot{\mathbf{x}}(t) = f(\mathbf{x}(t), \mathbf{u}(t)),$

whereas in the discrete case your system is defined as

$\mathbf{x}(t+1) = f(\mathbf{x}(t), \mathbf{u}(t)).$

If you are calculating your feedback gain matrix $\mathbf{K}$ using the continuous solution to the algebraic Riccati equation, then `simulate_system` needs to return $\dot{\mathbf{x}}(t)$. If you’re solving for $\mathbf{K}$ using the discrete solution to the algebraic Riccati equation, you need to return $\mathbf{x}(t+1)$. This was just something that I came across as I was coding, so I wanted to mention it here in case anyone else stumbles across it!
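To see the two conventions side by side, here's a sketch using SciPy's two Riccati solvers on the same toy double integrator (the time step and weights are made-up values, and the discretization is a simple Euler step):

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_discrete_are

dt = 0.01
# Continuous form: dx = A_c x + B_c u
A_c = np.array([[0.0, 1.0],
                [0.0, 0.0]])
B_c = np.array([[0.0],
                [1.0]])
# Discrete form via Euler: x[t+1] = A_d x[t] + B_d u[t]
A_d = np.eye(2) + dt * A_c
B_d = dt * B_c

Q = np.eye(2)
R = np.array([[1.0]])

# Continuous ARE: K = R^-1 B^T P
P_c = solve_continuous_are(A_c, B_c, Q, R)
K_c = np.linalg.solve(R, B_c.T @ P_c)

# Discrete ARE: K = (R + B^T P B)^-1 B^T P A
P_d = solve_discrete_are(A_d, B_d, Q, R)
K_d = np.linalg.solve(R + B_d.T @ P_d @ B_d, B_d.T @ P_d @ A_d)
```

Note that not only does `simulate_system` need to return the right thing in each case, the gain formula itself differs between the two solutions; for a small enough time step the two gains end up close to one another.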

**Applying LQR to 2 and 3 link arm control**

Alright! Let’s have a look at how the LQR does controlling non-linear systems. Below we have the control of a 2-link arm compared to a 3-link arm, and you can see the control of the 2-link arm is better. This is a direct result of the dynamics of a 3-link arm being significantly more complex.

**Note on controlling at different timesteps**

When I was first testing the LQR controller I expected the effects of different control update times to be a lot more significant than they turned out to be. As it turns out, for controlling a 3-link arm, there’s not really a visible difference between a controller that updates every .01 seconds, every .001 seconds, or every .0001 seconds. Let’s have a look:

Can’t even tell, eh? Fun fact: the simulation took 1 minute 30 seconds at a .01 second time step and 45 minutes at a .0001 second time step. The left-most animation is the .01 second case and the right-most the .0001 second case. But why is there seemingly so little difference? Well, this boils down to the dynamics of the 3-link arm actually changing pretty slowly. Below I’ve plotted just a few of the elements from the $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{K}$ matrices over .5 seconds of simulation time:

So, there are some obvious points where sampling the dynamics at a .01 second time step is noticeably less accurate, but all in all there’s not a huge difference between sampling at .01 and .0001 seconds. If you’re just watching the end-effector path it’s really not very noticeable. You can see how the elements of $\mathbf{A}$ and $\mathbf{B}$ are changing fairly slowly; this means that $\mathbf{K}$ is going to be an effective feedback gain for a fair chunk of time. And the computational savings you get by sampling the dynamics and regenerating $\mathbf{K}$ every .01 seconds instead of every .0001 seconds are pretty big. This was just another thing I came across when playing around with the LQR, the takeaway being: don’t just assume you need to update your system crazy often. You might get very comparable performance for much less computational cost.

**Conclusions**

All in all, the LQR controller is pretty neat! It’s really simple to set up, and generic. We don’t need any specific information about the system dynamics, like we do for effective operational space control (OSC). When we estimate the dynamics with finite differences, all we need is a decent system model that we can sample. Again, the more non-linear the system, the less effective an LQR will be. If you’re interested in playing around with one, or generating the figures shown above, the code is all up and running on my GitHub (https://github.com/studywolf/control/) for you to explore.
