
Deep learning for control using augmented Hessian-free optimization

Traditionally, deep learning is applied to feed-forward tasks, like classification, where the output of the network doesn’t affect the input to the network. It is a decidedly harder problem when the output is recurrently connected such that the network output affects the input. Because of this, the application of deep learning methods to control was largely unexplored until a few years ago. Recently, however, there’s been a lot of progress and research in this area. In this post I’m going to talk about an implementation of deep learning for control presented by Dr. Ilya Sutskever in his thesis, Training Recurrent Neural Networks.

In his thesis, Dr. Sutskever uses augmented Hessian-free (AHF) optimization for learning. There are a bunch of papers and posts that go into the details of AHF; here’s a good one by Andrew Gibiansky up on his blog that I recommend you check out. I’m not going to say much here about what AHF is specifically, or how it differs from other methods; if you’re unfamiliar there are lots of places you can read up on it. Quickly, though: AHF is a bag of tricks built around a fast method for estimating the curvature of the loss function with respect to the weights of a neural network, as well as the gradient, which allows it to make larger updates in directions where the loss function doesn’t change quickly. So rather than estimating the gradient and then making a small update along each dimension, you can make the size of your update large in directions that change slowly and small along dimensions where things change quickly. And now that’s enough about that.
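
To make that a bit more concrete: the core trick is that you never build the curvature matrix explicitly, you only ever need curvature-vector (Hessian-vector) products, which you can feed into conjugate gradient to solve for an update direction. Here’s a bare-bones sketch of that inner loop, assuming you already have a function for the gradient and one for Hessian-vector products; it leaves out all the ‘augmented’ parts (damping, line searches, structural damping, etc.) that the hessianfree library handles for you:

import numpy as np

def cg_update_direction(hvp, grad, iters=50, tol=1e-10):
    """Approximately solve H d = -grad for the update direction d,
    using only Hessian-vector products hvp(v) = H v."""
    d = np.zeros_like(grad)
    r = -grad - hvp(d)  # residual of the linear system
    p = r.copy()
    rs_old = np.dot(r, r)
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs_old / np.dot(p, Hp)
        d += alpha * p  # step along the current search direction
        r -= alpha * Hp
        rs_new = np.dot(r, r)
        if rs_new < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return d

# the weight update is then w += d (the full method scales this back
# with damping and a line search)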

In this post I’m going to walk through using a Hessian-free optimization library (version 0.3.8) written by my compadre Dr. Daniel Rasmussen to train up a neural network to control a 2-link arm, and talk about the various hellish gauntlets you need to run to get something that works. Whooo! The first thing to do is install this Hessian-free library, linked above.

I’ll be working through code edited a bit for readability, to find the code in full you can check out the files up on my GitHub.

Build the network

Dr. Sutskever specified the structure of the network in his thesis to be 4 layers: 1) a linear input layer, 2) 100 Tanh nodes, 3) 100 Tanh nodes, 4) a linear output layer. The network is connected up with the standard feedforward connections from 1 to 2 to 3 to 4, plus recurrent connections on layers 2 and 3 to themselves, plus a ‘skip’ connection from layer 1 to layer 3. Finally, the input to the network is the target state for the plant and the current state of the plant. So, lots of recurrence! Here’s a picture:

network structure

The output layer connects in to the plant, and, for those unfamiliar with control theory terminology, ‘plant’ just means the system that you’re controlling; in this case, an arm simulation.

Before we can go ahead and set up the network that we want to train, we also need to specify the loss function that we’re going to be using during training. The loss function in Ilya’s thesis is a standard one:

L(\theta) = \sum\limits_{t=0}^{N-1} \ell(\textbf{u}_t) + \ell_f(\textbf{x}_N),
\ell(\textbf{u}_t) = \alpha \frac{||\textbf{u}_t||^2}{2},
\ell_f(\textbf{x}_N) = \frac{||\textbf{x}^* - \textbf{x}_N||^2}{2}

where L(\theta) is the total cost of the trajectory generated with \theta, the set of network parameters, \ell(\textbf{u}) is the immediate state cost, \ell_f(\textbf{x}) is the final state cost, \textbf{x} is the state of the arm, \textbf{x}^* is the target state of the arm, \textbf{u} is the control signal (torques) that drives the arm, and \alpha is a gain value.
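
Just to pin down what that cost looks like in code, here’s a quick standalone version of the same calculation (the variable names here are my own, not from the hessianfree library):

import numpy as np

def trajectory_cost(u, x, x_target, alpha=1e-2):
    """u: (N, num_controls) control signals over the trajectory,
    x: (N+1, num_states) arm states, with x[-1] the final state x_N."""
    # immediate costs: sum over time of alpha * ||u_t||^2 / 2
    l_u = alpha * np.sum(u**2) / 2.0
    # final state cost: ||x* - x_N||^2 / 2
    l_f = np.sum((x_target - x[-1])**2) / 2.0
    return l_u + l_f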

To code this up using the hessianfree library we do:

from hessianfree import RNNet
from hessianfree.nonlinearities import (Tanh, Linear, Plant)
from hessianfree.loss_funcs import SquaredError, SparseL2

l2gain = 10e-3 * dt # gain on control signal loss
rnn = RNNet(
    # specify the number of nodes in each layer
    shape=[num_states * 2, 96, 96, num_states, num_states],
    # specify the function of the nodes in each layer
    layers=[Linear(), Tanh(), Tanh(), Linear(), plant],
    # specify the layers that have recurrent connections
    rec_layers=[1,2],
    # specify the connections between layers
    conns={0:[1, 2], 1:[2], 2:[3], 3:[4]},
    # specify the loss function
    loss_type=[
        # squared error between plant output and targets
        SquaredError(),
        # penalize magnitude of control signal (output of layer 3)
        SparseL2(l2gain, layers=[3])],
    load_weights=W,
    use_GPU=True)

Note that if you want to run it on your GPU you’ll need PyCuda and sklearn installed. And a GPU.

An important thing to note as well is that in Dr. Sutskever’s thesis, when we’re calculating the squared error of the distance from the arm state to the target, this is measured in joint angles. It’s kind of a weird set up to be looking at the movement of the hand but have your cost function in joint space instead of end-effector space, but it definitely simplifies training by making the cost more directly relatable to the control signal. So we need to calculate the joint angles of the arm that will have the hand at different targets around a circle. To do this we’ll take advantage of our inverse kinematics solver from way back when, and use the following code:

def gen_targets(arm, n_targets=8, sig_len=100):
    """Generate target joint angles corresponding to target
    (x,y) coordinates spaced around a circle"""
    import numpy as np
    import scipy.optimize

    x_bias = 0
    if arm.DOF == 2:
        y_bias = .35
        dist = .075
    elif arm.DOF == 3:
        y_bias = .5
        dist = .2

    # set up the reaching trajectories around circle
    targets_x = [dist * np.cos(theta) + x_bias \
                    for theta in np.linspace(0, np.pi*2, 65)][:-1]
    targets_y = [dist * np.sin(theta) + y_bias \
                    for theta in np.linspace(0, np.pi*2, 65)][:-1]

    joint_targets = []
    for ii in range(len(targets_x)):
        joint_targets.append(arm.inv_kinematics(xy=(targets_x[ii],
                                                    targets_y[ii])))
    targs = np.asarray(joint_targets)

    # repeat the targets over time
    for ii in range(targs.shape[1]-1):
        targets = np.concatenate(
            (np.outer(targs[:, ii], np.ones(sig_len))[:, :, None],
             np.outer(targs[:, ii+1], np.ones(sig_len))[:, :, None]), axis=-1)
    targets = np.concatenate((targets, np.zeros(targets.shape)), axis=-1)
    # only want to penalize the system for not being at the
    # target at the final state, set everything before to np.nan
    targets[:, :-1] = np.nan 

    return targets

And you can see in the last couple of lines that to implement the distance to target as a final-state cost penalty only, we just set all of the targets before the final time step equal to np.nan. If we wanted to penalize distance to target throughout the whole trajectory, we would just comment that line out.

Create the plant

You’ll notice in the code that defines our RNN that I set the last layer of the network to be plant, but that it’s not defined anywhere. Let’s talk. There are a couple of things that we’re going to need to incorporate our plant into this network and be able to use any deep learning method to train it. We need to be able to:

  1. Simulate the plant forward; i.e. pass in input and get back the resulting plant state at the next timestep.
  2. Calculate the derivative of the plant state with respect to the input; i.e. how do small changes in the input affect the state.
  3. Calculate the derivative of the plant state with respect to the previous state; i.e. how do small changes in the plant state affect the state at the next timestep.
  4. Calculate the derivative of the plant output with respect to its state; i.e. how do small changes in the current position of the state affect the output of the plant.

So 1 is easy: we already have the arm simulations that we want, they’re up on my GitHub. Number 4 is actually trivial too, because the output of our plant is going to be the state itself, so the derivative of the output with respect to the state is just the identity matrix.
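
In code, that just means handing back an identity matrix for every trial, something along these lines (the sizes here are made up for illustration):

import numpy as np

num_trials, num_states = 32, 4  # example sizes
# derivative of the plant output w.r.t. the plant state: identity,
# repeated once per trial
d_output = np.tile(np.eye(num_states)[None, :, :], (num_trials, 1, 1))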

For 2 and 3, we’re going to need to calculate some derivatives. If you’ve read the last few posts you’ll note that I’m on a finite differences kick. So let’s get that going! Because no one wants to calculate derivatives!

Important note: the notation in these next couple of pieces of code is going to be a bit different from my usual notation, because it’s matching the hessianfree library notation, which comes from the reinforcement learning literature rather than a control theory background. So, s is the state of the plant, and x is the input to the plant. I know, I know. All the same, make sure to keep that in mind.

# calculate ds0/dx0 with finite differences
d_input_FD = np.zeros((x.shape[0], # number of trials
                       x.shape[1], # number of inputs
                       self.state.shape[1])) # number of states
for ii in range(x.shape[1]):
    # calculate state adding eps to x[ii]
    self.reset_plant(self.prev_state)
    inc_x = x.copy()
    inc_x[:, ii] += self.eps
    self.activation(inc_x)
    state_inc = self.state.copy()
    # calculate state subtracting eps from x[ii]
    self.reset_plant(self.prev_state)
    dec_x = x.copy()
    dec_x[:, ii] -= self.eps
    self.activation(dec_x)
    state_dec = self.state.copy()

    d_input_FD[:, :, ii] = \
        (state_inc - state_dec) / (2 * self.eps)
d_input_FD = d_input_FD[..., None]

Alrighty. First we create a tensor to store the results. Why a tensor? Because we’re going to be doing a bunch of runs at once, so our input dimensions are actually trials x size_input. When we then take the partial derivative, we end up with trials many size_input x size_state matrices. Then we increase each of the parameters of the input slightly one at a time and store the results, decrease them all one at a time and store the results, and compute our approximation of the gradient.

Next we’ll do the same for calculating the derivative of the state with respect to the previous state.

# calculate ds1/ds0
d_state_FD = np.zeros((x.shape[0], # number of trials
                       self.state.shape[1], # number of states
                       self.state.shape[1])) # number of states
for ii in range(self.state.shape[1]):
    # calculate state adding eps to self.state[ii]
    state = np.copy(self.prev_state)
    state[:, ii] += self.eps
    self.reset_plant(state)
    self.activation(x)
    state_inc = self.state.copy()
    # calculate state subtracting eps from self.state[ii]
    state = np.copy(self.prev_state)
    state[:, ii] -= self.eps
    self.reset_plant(state)
    self.activation(x)
    state_dec = self.state.copy()

    d_state_FD[:, :, ii] = \
        (state_inc - state_dec) / (2 * self.eps)
d_state_FD = d_state_FD[..., None]

Great! We’re getting closer to having everything we need. Another thing we need is a wrapper for running our arm simulation. It’s going to look like this:

def activation(self, x):
    state = []
    # iterate through and simulate the plant forward
    # for each trial
    for ii in range(x.shape[0]):
        self.arm.reset(q=self.state[ii, :self.arm.DOF],
                       dq=self.state[ii, self.arm.DOF:])
        self.arm.apply_torque(x[ii])
        state.append(np.hstack([self.arm.q, self.arm.dq]))
    state = np.asarray(state)

    self.state = self.squashing(state)

This is definitely not the fastest code to run. Ideally we would put the state and input into vectors and do a single set of computations for each call to activation, rather than having that for loop in there. Unfortunately, though, we’re not assuming that we have access to the dynamics equations, or that we’d be able to pass in vector states and inputs.

Squashing
Looking at the above code, it seems pretty clear what’s going on, except you might notice that last line calling self.squashing. What’s going on there?

The squashing function looks like this:

def squashing(self, x):
    index_below = np.where(x < -2*np.pi)
    x[index_below] = np.tanh(x[index_below]+2*np.pi) - 2*np.pi
    index_above = np.where(x > 2*np.pi)
    x[index_above] = np.tanh(x[index_above]-2*np.pi) + 2*np.pi
    return x

All that’s happening here is that we’re taking our input, and doing nothing to it as long as it doesn’t start to get too positive or too negative. If it does then we just taper it off and prevent it from going off to infinity. So running a 1D vector through this function we get:

squashed
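
If you want to reproduce that plot yourself, here’s a quick snippet that runs a 1D range of values through the same squashing function:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-10, 10, 1000)
y = x.copy()
# taper off anything below -2*pi or above 2*pi, leave the rest alone
below = y < -2 * np.pi
y[below] = np.tanh(y[below] + 2 * np.pi) - 2 * np.pi
above = y > 2 * np.pi
y[above] = np.tanh(y[above] - 2 * np.pi) + 2 * np.pi

plt.plot(x, y)
plt.xlabel('input')
plt.ylabel('squashed output')
plt.show()
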
This ends up being a pretty important piece of the code. Basically it prevents wild changes to the weights during learning from causing the system to break down; the state of the plant can’t go off to infinity and cause an error to be thrown, stopping our simulation. But because the target state is well within the bounds where the squashing function does nothing, post-training we’ll still be able to use the resulting network to control a system that doesn’t have this fail-safe built in. Think of this function as training wheels that catch you only if you start falling.

With that, we now have pretty much all of the parts necessary to begin training our network!

Training the network

We’re going to be training this network on the centre-out reaching task, where you start at a centre point and reach to a bunch of target locations around a circle. I’m just going to be re-implementing the task as it was done in Dr. Sutskever’s thesis, so we’ll have 64 targets around the circle, and train using a 2-link arm. Here’s the code that we’ll use to actually run the training:

for ii in range(last_trial+1, num_batches):
    # train a batch of epochs; the plant is re-run every epoch, so the
    # network input is regenerated with the updated weights each time
    err = rnn.run_batches(plant, targets=None,
              max_epochs=batch_size,
              optimizer=HessianFree(CG_iter=96, init_damping=100))

    # save the weights to file, track trial and error
    # err = rnn.error(inputs)
    err = rnn.best_error
    name = 'weights/rnn_weights-trial%04i-err%.3f'%(ii, err)
    np.savez_compressed(name, rnn.W)

Training your own network

A quick aside: if you want to run this code yourself, get a real good computer, have an arm simulation ready, get the hessianfree Python library installed, and download and run this train_hf.py file. (Note: I used version 0.3.8 of the hessianfree library, which you can install using pip install hessianfree==0.3.8.) This will start training and save the weights into a weights/ folder, so make sure that folder exists in the same directory as train_hf.py. If you want to view the results of the training at any point, run the plot_error.py file, which will load in the most recent version of the weights and plot the error so far. If you want to generate an animated plot like I have below, run gen_animation_plots.py and then the last command from my post on generating animated gifs.

Another means of seeing the results of your trained-up network is to use the controller I’ve implemented in my controls benchmarking suite, which looks for a set of saved weights in the controllers/weights folder, and will load them in and use them to generate command signals for the arm when you run it with

python run.py arm2_python ahf reach --dt=1e-2

where you replace arm2_python with whatever arm model you trained your network on. Note the --dt=1e-2 flag; that’s important because the model was trained with a .01 second timestep, and things get a bit weird if you suddenly change the dynamics on the controller.

OK let’s look at some results!

Results

OK, full disclosure: these results are not optimizing the cost function we discussed above. They’re optimizing a simpler cost function that only looks at the final state, i.e. it doesn’t penalize the magnitude of the control signal. I did this because Dr. Sutskever says in his thesis that he was able to optimize with just the final state cost using much smaller networks. I originally looked at networks with 96 neurons in each layer, and it just took forgoddamnedever to run. So after running for 4 weeks (not joking) and needing to make some more changes I dropped the number of neurons and simplified the task.

The results below are from running a network with 32 neurons in each layer controlling this 2-link arm, and took another 4-5 weeks to train up.
weights32

Hey, that looks good! Not bad, augmented Hessian-free learning, not bad. It had a pretty consistent (if slow) decline in the error rate, with a few crazy bumps from which it quickly recovered. Also take note that each training iteration is actually 32 runs, so it’s not 12,500-ish runs, it’s closer to 400,000 training runs that it took to get here.

One biggish thing that was a pain was that it turns out I only trained the neural network for reaching in one direction, and when you only train it to reach one way it doesn’t generalize to reaching back to the starting point (which, fair enough). But I didn’t realize this until I took the trained network and ran it in the benchmarking code, at which point I was not keen to redo all of the training it took to get the neural network to its current level of accuracy under a more complicated training set. The downside of this is that even though I’ve implemented a controller that takes in the trained network and uses it to control the arm, to do the reaching task I have to just do a hard reset after the arm reaches the target, because it can’t reach back to the centre like all the other controllers can. All the same, here’s an animation of the trained-up AHF controller reaching to 8 targets (it was trained on all 64 above, though):

animation

Things don’t always go so smoothly, though. Here are results from another training run that took around 2-3 weeks, and uses a different 2-link arm model (translated from Matlab code written by Dr. Emo Todorov):

weights32_suts2

What I found frustrating about this was that if you look at the error over time, this arm is doing as well as or better than the previous arm at a lot of points. But the corresponding trajectories look terrible, like something you would see in a horror movie based around getting good machine learning results. This of course comes down to how I specified the cost function: when I looked at the trajectories plotted over time, the velocity of the arm is right at zero at the final time step, which is not quiiiitte the case for the first controller. So this second network has found a workaround that minimizes the cost function I specified in a way I did not intend. To prevent this, something like weighting the distance to target more heavily than non-zero velocity would probably work. Or possibly just rerunning the training with a different random starting point would give a better controller; I don’t have a great feel for how important the random initialization is, but I’m hoping that it’s not all too important and its effects go to zero with enough training. Also, it should be noted that I’ve run the first network for 12,500 iterations and the second for less than 6,000, so I’ll keep letting them run and maybe it will come around. The first one looked pretty messy too until about 4,000 iterations in.

Training regimes

Frustratingly, the way that you train deep networks is very important. So, very much like the naive deep learning network trainer that I am, I tried the first thing that pretty much anyone would try:

  • run the network,
  • update the weights,
  • repeat.

This is what I’ve done in the results above. And it worked well enough in that case.

If you remember back to the iLQR I made a little while ago, I was able to change the cost function to be

L(\theta) = \sum\limits_{t=0}^{N-1} \ell(\textbf{u}_t, \textbf{x}_t) + \ell_f(\textbf{x}_N),
\ell(\textbf{u}_t, \textbf{x}_t) = \alpha \frac{||\textbf{u}_t||^2}{2} + \frac{||\textbf{x}^* - \textbf{x}_t||^2}{2},
\ell_f(\textbf{x}_N) = \frac{||\textbf{x}^* - \textbf{x}_N||^2}{2}

(i.e. to include a penalty for distance to target throughout the trajectory and not just at the final time step) which resulted in straighter trajectories when controlling the 2-link arm. So I thought I would try this here as well. Sadly (incredibly sadly), this was fairly fruitless. The network didn’t really learn or improve much at all.

After much consideration and quandary on my part, I talked with Dr. Dan and he suggested that I try another method:

  • run the network,
  • record the input,
  • hold the input constant for a few batches of weight updating,
  • repeat.

This method gave much better results. BUT WHY? I hear you ask! Good question. Let me give explaining it a go.

Essentially, it’s because the cost function is more complex now. In the first training method, the output from the plant is fed back into the network as input at every time step. When the cost function was simpler this was OK, but now we’re getting very different input to train on at every iteration, so the system is being pulled back and forth in different directions at every iteration. In the second training regime, the same input is given several times in a row, which lets the system follow the same gradient for a few training iterations before things change again. In my head I picture this as giving the algorithm a couple of seconds to catch its breath before dunking it back underwater.

This is a method that’s been used in a bunch of places recently. One of the more high-profile instances is in the results published from DeepMind on deep RL for control and for playing Go. And indeed, it also works well here.

To implement this training regime, we set up the following code:

for ii in range(last_trial+1, num_batches):

    # run the plant forward once
    rnn.forward(input=plant, params=rnn.W)

    # get the input and targets from above rollout
    inputs = plant.get_vecs()[0].astype(np.float32)
    targets = np.asarray(plant.get_vecs()[1], dtype=np.float32)

    # train a bunch of batches using the same input every time
    # to allow the network a chance to minimize things with
    # stable input (speeds up training)
    err = rnn.run_batches(inputs, targets, max_epochs=batch_size,
              optimizer=HessianFree(CG_iter=96, init_damping=100))

    # save the weights to file, track trial and error
    # err = rnn.error(inputs)
    err = rnn.best_error
    name = 'weights/rnn_weights-trial%04i-err%.3f'%(ii, err)
    np.savez_compressed(name, rnn.W)

So you can see that we do one rollout with the current weights, then go in and get the inputs and targets that were used in that rollout, and train the network while holding those constant for batch_size epochs (training sessions). From a little bit of messing around I’ve found batch_size=32 to be a pretty good number. So it runs 32 training iterations where it’s updating the weights, then saves those weights (because we want a loooottttt of check-points), and then restarts the loop.

Embarrassingly, I’ve lost my simulation results from this trial somehow…so I don’t have any nice plots to back up the above, unfortunately. But since this is just a blog post I figured I would at least talk about it a little bit, since people might still find it useful if they’re just getting into the field like me, and I’ll just update this post whenever I re-run them. If I re-run them.

What I do have, however, are results where this method doesn’t work! I tried this with the simpler cost function that only looks at the final state’s distance from the target, and it did not go so well. Let’s look at that one!

weights32_suts

My guess here is basically that the system has gotten to a point where it’s narrowed things down in the parameter space and now when you run 32 batches it’s overshooting. It needs feedback about its updates after every update at this point. That’s my guess, at least. So it could be the case that for more complex cost functions you’d want to train it while holding the input constant for a while, and then when the error starts to plateau switch to updating the input after every parameter update.

Conclusions

All in all, AHF for training neural networks for control is pretty awesome. There are of course still some major drawbacks, mostly related to how long it takes to train up a network, and having to guess at effective training regimes and network structures, etc. But! It was able to train up a relatively small neural network to move an arm model from a centre point to 64 targets around a circle, with no knowledge of the system under control at all. In Dr. Sutskever’s thesis he goes on to use the same set up under more complicated circumstances, such as when there’s a feedback delay, or a delay on the outgoing control signal, or unexpected noise, etc., so it is able to learn under a number of different, fairly complex situations. Which is pretty slick.

Related to the insane training time required, I very easily could be missing some basic thing that would help speed things up. If you, reader, get ambitious and run the code on your own machine and find out useful methods for speeding up the training please let me know! Personally, my plan is to next investigate guided policy search, which seems like it’s found a way around this crazy training time.


Linear-Quadratic Regulation for non-linear systems using finite differences

One of the standard controllers in basic control theory is the linear-quadratic regulator (LQR). There is a finite-horizon case (where you have a limited amount of time), and an infinite-horizon case (where you don’t); in this post, for simplicity, we’re only going to be dealing with the infinite-horizon case.

The LQR is designed to handle a very specific kind of problem. First, it assumes you are controlling a system with linear dynamics, which means you can express them as

\dot{\textbf{x}} = \textbf{A}\textbf{x} + \textbf{B}\textbf{u},

where \textbf{x} and \dot{\textbf{x}} are the state and its time derivative, \textbf{u} is the input, and \textbf{A} and \textbf{B} capture the effects of the state and input on the derivative. And second, it assumes that the cost function, denoted J, is a quadratic of the form

J = \int_0^{\infty} \left( (\textbf{x} - \textbf{x}^*)^T \textbf{Q} (\textbf{x} - \textbf{x}^*) + \textbf{u}^T \textbf{R} \textbf{u} \right) dt

where \textbf{x}^* is the target state, and \textbf{Q} = \textbf{Q}^T \geq 0 and \textbf{R} = \textbf{R}^T > 0 are weights on the cost of not being at the target state and on applying a control signal. The higher \textbf{Q} is, the more important it is to get to the target state asap; the higher \textbf{R} is, the more important it is to keep the control signal small as you go to the target state.

The goal of the LQR is to calculate a feedback gain matrix \textbf{K} such that

\textbf{u} = -\textbf{K} \textbf{x},

drives the system to the target. When the system is a linear system with a quadratic cost function, this can be done optimally. There is lots of discussion elsewhere about LQRs and their derivation, so I’m not going to go into that with this post. Instead, I’m going to talk about applying LQRs to non-linear systems, and using finite differences to do it, which works when you have a readily accessible simulation of the system on hand. The fun part is that by using finite differences you can get this to work without working out the dynamics equations yourself.

Using LQRs on non-linear systems

As you may have noticed, non-linear systems violate the first assumption of the linear quadratic regulator: that the system is linear. That doesn’t mean we can’t apply it, it just means that it’s not going to be optimal. How poorly the LQR performs depends on a few things, two important factors being how non-linear the system dynamics actually are, and how often you’re able to update the feedback gain matrix \textbf{K}. To apply LQR to non-linear systems we’re just going to close our eyes and pretend that the system dynamics are linear, i.e. that they fit the form

\dot{\textbf{x}} = \textbf{A}\textbf{x} + \textbf{B}\textbf{u}.

We’ll do this by approximating the actual dynamics of the system linearly. We’ll then solve for our gain matrix \textbf{K}, generate our control signal for this timestep, and then re-approximate the dynamics again at the next time step and solve for \textbf{K} from the new state. The more non-linear the system dynamics are, the less appropriate \textbf{K} will be for generating our control signal \textbf{u} as we move away from the state \textbf{K} was calculated in; this is why the update time of the LQR can become an important factor.

Using finite-differences to approximate system dynamics

An important question, then, is how do we find this system approximation? How can we calculate the \textbf{A} and \textbf{B} matrices that we then use to solve for \textbf{K}? If we know the dynamics of the system to be

\dot{\textbf{x}} = f(\textbf{x}, \textbf{u}),

then we can calculate

\textbf{A} = \frac{\partial f(\textbf{x}, \textbf{u})}{\partial \textbf{x}}, \;\;\;\; \textbf{B} = \frac{\partial f(\textbf{x}, \textbf{u})}{\partial \textbf{u}}.

If you’re going to try this for the 3-link arm, though, get out Mathematica. Do not try this by hand. If you disregard my warning and foolhardily attempt such a derivation you will regret, repent, and then appeal to Wolfram Alpha for salvation. These equations quickly become terrible and long even for seemingly not-so-complicated systems.

There are a few ways to skirt this. Here we’re going to assume that the system under control is a simulation, or that we at least have access to an accurate model, and use the finite differences method to compute these values. The idea behind finite differences is to approximate the rate of change of the function f at the point x by sampling f near x and using the difference to calculate \dot{f}(x). Here’s a picture for a 1D system:

finitedifferences
So here, our current state x is the blue dot, and the red dots represent the sample points x + \Delta x and x - \Delta x. We can then calculate

\dot{f}(x) \approx \frac{f(x+\Delta x) - f(x-\Delta x)}{2\Delta x},

and you can see the actual rate of change of f at x plotted in the blue dashed line, and the approximated rate of change calculated using finite differences plotted in the red dashed line. We can also see that the approximated derivative is only accurate near x (the blue dot).

Back in our multi-dimensional system, to use finite differences to calculate the derivative with respect to the state and the input we’re going to vary each of the dimensions of the state and input by some small amount one at a time, calculating the effects of each one by one. Here’s a chunk of pseudo-code to hopefully clarify this idea:

eps = 1e-5
A = np.zeros((len(current_state), len(current_state)))
for ii in range(len(current_state)):
    x = current_state.copy()
    x[ii] += eps
    x_inc = simulate_system(state=x, input=control_signal)
    x = current_state.copy()
    x[ii] -= eps
    x_dec = simulate_system(state=x, input=control_signal)
    A[:,ii] = (x_inc - x_dec) / (2 * eps)

B = np.zeros((len(current_state), len(control_signal)))
for ii in range(len(control_signal)):
    u = control_signal.copy()
    u[ii] += eps
    x_inc = simulate_system(state=current_state, input=u)
    u = control_signal.copy()
    u[ii] -= eps
    x_dec = simulate_system(state=current_state, input=u)
    B[:,ii] = (x_inc - x_dec) / (2 * eps)

Now that we’re able to generate our \textbf{A} and \textbf{B} matrices, we have everything we need to solve for our feedback gain matrix \textbf{K}! Which is great.
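
In case it’s useful, here’s one way to do that last step in Python: scipy provides a solver for the continuous-time algebraic Riccati equation, and the gain follows from its solution. This is just a minimal sketch; Q and R are whatever weight matrices you’ve chosen, and A and B come from the finite differences code above:

import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Solve the continuous-time algebraic Riccati equation for P,
    then compute the feedback gain K = R^-1 B^T P."""
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, np.dot(B.T, P))
    return K

# control signal for driving towards the target state:
# u = -np.dot(K, current_state - target_state)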

Note on using finite differences in continuous vs discrete setup

Something that’s important to straighten out too is what exactly is returned by the simulate_system function in the code above. In the continuous case, your system is captured as

\dot{\textbf{x}} = \textbf{A}\textbf{x} + \textbf{B}\textbf{u},

while in the discrete case your system is defined as

\textbf{x}(t+1) = \textbf{A}\textbf{x}(t) + \textbf{B}\textbf{u}(t).

If you are calculating your feedback gain matrix \textbf{K} using the continuous solution to the algebraic Riccati equation, then you need to be returning \dot{\textbf{x}}(t). If you’re solving for \textbf{K} using the discrete solution to the algebraic Riccati equation you need to return \textbf{x}(t+1). This was just something that I came across as I was coding and so I wanted to mention it here in case anyone else stumbled across it!
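
For the discrete case there’s a matching solver in scipy; here’s the discrete counterpart of the gain calculation above, again just a sketch assuming you’ve built A, B, Q, and R yourself:

import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr_gain(A, B, Q, R):
    """Discrete-time version: solve the discrete algebraic Riccati
    equation for P, then K = (B^T P B + R)^-1 B^T P A."""
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(B.T.dot(P).dot(B) + R, B.T.dot(P).dot(A))
    return K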

Applying LQR to 2 and 3 link arm control

Alright! Let’s have a look at how the LQR does controlling non-linear systems. Below we have the control of a 2-link arm compared to a 3-link arm, and you can see the control of the 2-link arm is better. This is a direct result of the dynamics of a 3-link arm being significantly more complex.

2linkarm 3linkarm

Note on controlling at different timesteps

When I was first testing the LQR controller I expected the effects of different control update times to be a lot more significant than they turned out to be. As it turns out, for controlling a 3-link arm, there’s not really a visible difference between a controller that updates every .01 seconds vs every .001 seconds vs every .0001 seconds. Let’s have a look:

3link.01 3link.001 3link.0001

Can’t even tell, eh? Fun fact: the simulation took 1 minute 30 seconds at a .01 second time step and 45 minutes at a .0001 second time step. The left-most animation is the .01 second case and the right-most the .0001 second case. But why is there seemingly so little difference? Well, this boils down to the dynamics of the 3-link arm actually changing pretty slowly. Below I’ve plotted just a few of the elements from the \textbf{A}, \textbf{B}, and \textbf{K} matrices over .5 seconds of simulation time:

A matrix, B matrix, K matrix

So, there are some obvious points where sampling the dynamics at a .01 second time step is noticeably less accurate, but all in all there’s not a huuuggge difference between sampling at .01 and .0001 seconds. If you’re just watching the end-effector path it’s really not very noticeable. You can see how the elements of \textbf{A} and \textbf{B} are changing fairly slowly; this means that \textbf{K} is going to be an effective feedback gain for a fair chunk of time. And the computational savings you get by sampling the dynamics and regenerating \textbf{K} every .01 seconds instead of every .0001 seconds are pretty big. This was just another thing I came across when playing around with the LQR, the take-away being: don’t just assume you need to update your system crazy often. You might get very comparable performance for much less computational cost.

Conclusions

All in all, the LQR controller is pretty neat! It’s really simple to set up, and generic. We don’t need any specific information about the system dynamics, like we do for effective operational space control (OSC). When we estimate the dynamics with finite differences, all we need is a decent system model that we can sample. Again, the more non-linear the system, the less effective an LQR will be. If you’re interested in playing around with one, or generating the figures that I show above, the code is all up and running on my GitHub for you to explore.


Setting up an arm simulation interface in Nengo 2

I got an email the other day asking about how to set up an arm controller in Nengo; the sender had been working from the Spaun code, stripping away things until they had just the motor portion left to play with. I ended up putting together a quick script to get them started and thought I would share it here in case anyone else is interested. It’s kind of fun because it shows off some of the new GUI and node interfacing. Note that you’ll need nengo_gui version .15+ for this to work. In general I recommend getting the dev version installed, as it’s stable and updates are made all the time improving functionality.

Nengo 1.4 core was all written in Java, with Jython and Python scripting thrown in on top, and since then a lot of work has gone into a re-write of the entire code base for Nengo 2. Nengo 2 is now written in Python, all the scripting is in Python, and we have a kickass GUI and support for running neural simulations on CPUs, GPUs, and specialized neuromorphic hardware like SpiNNaker. I super recommend checking it out if you’re at all interested in neural modelling; we’ve got a bunch of tutorials up and a very active support board to help with any questions or problems. You can find the simulator code for installation here: https://github.com/nengo/nengo and the GUI code here: https://github.com/nengo/nengo_gui, where you can also find installation instructions.

And once you have that up and running, to run an arm simulation you can download and run the following code I have up on my GitHub. When you pop it open, at the top is a run_in_GUI boolean, which you can use to open the sim up in the GUI; if you set it to False then it will run in the Nengo simulator and, once finished, pop up some basic graphs. Shout out to Terry Stewart for putting together the arm visualization. It’s a pretty slick little demo of the extensibility of the Nengo GUI; you can see the code for it all in the arm_func function in the nengo_arm.py file.

As it’s set up right now it uses a 2-link arm, but you can simply swap out the Arm.py file with whatever plant you want to control. As for the neural model, there isn’t one implemented in here; it’s just a simple input node that runs through a neural population to apply torque to the two joints of the arm. But! It should be a good start for anyone looking to experiment with arm control in Nengo. Here’s what it looks like when you pull it up in the GUI (also note that the arm visualization only appears once you hit the play button!):

nengoarm
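
If you just want the gist of how the pieces hook together without opening the full file, here’s a rough sketch of the kind of wiring described above. To be clear, this isn’t the actual nengo_arm.py code; the arm object, its apply_torque method, and the sizes here are stand-ins for whatever plant you’re wrapping:

import numpy as np
import nengo

model = nengo.Network(label='arm control')
with model:
    # node supplying the torque command (could be a GUI slider or a
    # proper controller instead of constants)
    torque_in = nengo.Node([0.0, 0.0])

    # neural population that relays the torque signal
    motor = nengo.Ensemble(n_neurons=500, dimensions=2)

    # node wrapping the arm simulation: takes in torque, steps the sim,
    # and returns the joint angles and velocities
    def arm_func(t, u):
        arm.apply_torque(u)  # 'arm' is a stand-in for your simulation
        return np.hstack([arm.q, arm.dq])
    arm_node = nengo.Node(arm_func, size_in=2, size_out=4)

    nengo.Connection(torque_in, motor)
    nengo.Connection(motor, arm_node)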


Arm simulation visualization with Matplotlib

One of the downsides of switching to Python from Matlab is that it can be a pain to plot some kinds of things, and I’ve found animations to be one of those things. In previous posts I’ve done the visualization of my arm simulations through Pyglet, but I recently started playing around with Matplotlib’s animation functions, and the results are pretty smooth. The process is also relatively painless and quick to get up and running, so I thought I would throw up some Matplotlib code for visualizing my previously described 2-link arm MapleSim simulation.

So, let’s look at the code:

#Written by Travis DeWolf (Sept, 2013)
#Based on code by Jake Vanderplas - http://jakevdp.github.com

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import py2LinkArm

class TwoLinkArm:
    """
    :param list u: the torques applied to each joint
    """
    def __init__(self, u = [.1, 0]): 
        self.u = np.asarray(u, dtype='float') # control signal
        self.state = np.zeros(3) # vector for current state
        self.L1=0.37 # length of arm link 1 in m
        self.L2=0.27 # length of arm link 2 in m
        self.time_elapsed = 0

        self.sim = py2LinkArm.pySim()
        self.sim.reset(self.state)
    
    def position(self):
        """Compute x,y position of the hand"""

        x = np.cumsum([0,
                       self.L1 * np.cos(self.state[1]),
                       self.L2 * np.cos(self.state[2])])
        y = np.cumsum([0,
                       self.L1 * np.sin(self.state[1]),
                       self.L2 * np.sin(self.state[2])])
        return (x, y)

    def step(self, dt):
        """Simulate the system and update the state"""
        for i in range(1500):
            self.sim.step(self.state, self.u)
        self.time_elapsed = self.state[0]

#------------------------------------------------------------
# set up initial state and global variables
arm = TwoLinkArm()
dt = 1./30 # 30 fps

#------------------------------------------------------------
# set up figure and animation
fig = plt.figure(figsize=(4,4))
ax = fig.add_subplot(111, aspect='equal', autoscale_on=False,
                     xlim=(-1, 1), ylim=(-1, 1))
ax.grid()

line, = ax.plot([], [], 'o-', lw=4, mew=5)
time_text = ax.text(0.02, 0.95, '', transform=ax.transAxes)

def init():
    """initialize animation"""
    line.set_data([], [])
    time_text.set_text('')
    return line, time_text

def animate(i):
    """perform animation step"""
    global arm, dt
    arm.step(dt)
    
    line.set_data(*arm.position())
    time_text.set_text('time = %.2f' % arm.time_elapsed)
    return line, time_text

# frames=None for matplotlib 1.3
ani = animation.FuncAnimation(fig, animate, frames=None,
                              interval=25, blit=True, 
                              init_func=init)

# uncomment the following line to save the video in mp4 format.
# requires either mencoder or ffmpeg to be installed
#ani.save('2linkarm.mp4', fps=15, 
#         extra_args=['-vcodec', 'libx264'])

plt.show()

There’s not all too much to it, which is nice. I’ve created a class TwoLinkArm that wraps the actual arm simulator, stores the current state of the arm, and has a step function that gets called to move the system forward in time. Then, I created a line and stored a reference to it; this is what is going to be updated in the simulation. I then need functions that specify how the line will be initialized and updated. The init function doesn’t do anything except set the line and text data to nothing, and then the animate function calls the arm simulation’s step function and updates the line and text data.

For more details about it there’s this blog post which steps through the process a bit more. For simple arm simulations the above is all I need though, so I’ll leave it there for now!

Here’s an animation of the resulting sim visualization; I’ve removed a lot of the frames to keep the size down. It’s smoother when running the actual code, which can be found up on my GitHub.
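
As an aside, if you want to save the animation as a gif directly rather than capturing frames, Matplotlib’s save function can do that too, provided you have ImageMagick installed; this would replace the commented-out mp4 save line in the code above:

# save the animation as an animated gif (requires ImageMagick)
ani.save('2linkarm.gif', writer='imagemagick', fps=15)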

Hurray better visualization!
