Tag Archives: Python

ABR Control supports Mujoco!

https://github.com/abr/abr_control

And it has for a while, actually. ‘Oh great’, you may be thinking, ‘but I can never afford the Mujoco license, I’m not a tenured professor!’

Well fear not, friends! Mujoco recently became free for all to use! You can now download their public license and be on your merry way. The associated downside is there’s also no more official support from Dr. Emo Todorov for paid subscribers on the forum. This, I’m sure, is unrelated to my barrage of questions over the years (really I’ve only posted 6 times so it actually probably is unrelated…probably).

And ABR Control has all your favorite examples written up for Mujoco! We've got all the classics:

You can find the full set of examples in the docs/examples folder of the repository.

If you’re not familiar with Mujoco, I highly recommend checking it out. What’s extra nice about Mujoco is that all of the simulation information used to compute the forward dynamics is available (we access it using the mujoco-py package through our mujoco interface file), which means our control is exact. For simulators like CoppeliaSim we don’t have access to this information, so we’re calculating the values ourselves, and there’s often a slight discrepancy. With Mujoco, however, if you set up a floating controller you can bet your bottom dollar that arm will float.
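
If you want to try that floating controller yourself, here’s a minimal sketch of what it looks like with ABR Control’s Mujoco interface. The class and argument names here (MujocoConfig, Mujoco, Floating, the 'jaco2' model name) are my assumptions based on the repository layout and the VREP examples further down, so check the docs/examples folder for the exact API.

import numpy as np

from abr_control.arms.mujoco_config import MujocoConfig  # assumed class name
from abr_control.controllers import Floating
from abr_control.interfaces.mujoco import Mujoco  # assumed module / class names

# load the Jaco2 Mujoco model included with the repo (name assumed)
robot_config = MujocoConfig('jaco2')
interface = Mujoco(robot_config, dt=0.001)
interface.connect()

# a floating controller only compensates for gravity,
# so the arm should hang in place when perturbed
ctrlr = Floating(robot_config)

for _ in range(10000):
    feedback = interface.get_feedback()
    u = ctrlr.generate(q=feedback['q'], dq=feedback['dq'])
    interface.send_forces(u)
interface.disconnect()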

The other benefit of Mujoco, of course, is that it’s super fast, and handles contact dynamics well. Great for robotics applications. Also the visualizations are really slick:

The operational space position and orientation controller running with the path planner on the Kinova Jaco2 arm model. The red rectangle is the final target, and the blue rectangle is the path planner target.

I’m very excited that the Mujoco license is free, and I encourage everyone to download it and start playing around. It’s by far the best robotics simulator I’ve used, and it runs fast enough that you can use it to calculate everything you need for controlling a physical robot arm as well (which we’ve also done with the above Jaco2 arm)! The Jaco2, UR5, and a simple geometric arm are included in the ABR Control package, and if you want to have your own arm in there you can follow this previous post about how to build your own model.


Force control of task-space orientation

So you want to use force control to control the orientation of your end-effector, eh? What a noble endeavour. I, too, wished to control the orientation of the end-effector. While the journey was long and arduous, the resulting code is short and quick to implement. All of the code for reproducing the results shown here is up on my GitHub and in the ABR Control repo. 

Introduction

There are numerous resources that introduce the topic of orientation control, so I’m not going to do a full rehash here. I will link to resources that I found helpful, but a quick Google search will pull up many useful references on the basics.

When describing the orientation of the end-effector there are three primary methods used: Euler angles, rotation matrices, and quaternions. The Euler angles \alpha, \beta and \gamma denote roll, pitch, and yaw, respectively. Rotation matrices specify 3 orthogonal unit vectors, and I describe in detail how to calculate them in this post on forward transforms. And quaternions are 4-dimensional vectors used to describe 3-dimensional orientation; they provide stability and require less memory and compute, but are more complicated to understand.

Most modern robotics control is done using quaternions because they do not have singularities, and it is straightforward to convert them to other representations. Fun fact: Quaternion trajectories also interpolate nicely, where Euler angles and rotation matrices do not, so they are used in computer graphics to generate a trajectory for an object to follow in orientation space.
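
As a quick illustration of that interpolation point, here’s a minimal sketch using the quaternion_slerp helper from the transformations module that shows up in the code below (I’m assuming the module is importable from abr_control.utils, as it is in those examples):

import numpy as np
from abr_control.utils import transformations

# two unit quaternions representing start and end orientations
q0 = transformations.unit_vector([1.0, 0.0, 0.0, 0.0])  # identity orientation
q1 = transformations.quaternion_from_euler(0, 0, np.pi / 2, axes='rxyz')

# spherical linear interpolation gives a smooth, constant-speed
# rotation from q0 to q1
path = [transformations.quaternion_slerp(q0, q1, t)
        for t in np.linspace(0, 1, 10)]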

While you won’t need a full understanding of quaternions to use the orientation control code, it definitely helps if anything goes wrong. If you are looking to learn about or brush up on quaternions, or even if you’re not but you haven’t seen this resource, you should definitely check out these interactive videos by Grant Sanderson and Ben Eater. They have done an incredible job developing modules to give people an intuition into how quaternions work, and I can’t recommend their work enough. There’s also a non-interactive video version that covers the same material.

In control literature, angular velocity and acceleration are denoted \pmb{\omega} and \dot{\pmb{\omega}}. It’s important to remember that \omega is denoting a velocity, and not a position, as we work through things or it could take you a lot longer to understand than it otherwise might…

How to generate task-space angular forces

So, if you recall a post long ago on Jacobians, our task-space Jacobian has 6 rows:

\left[ \begin{array}{c} \dot{\textbf{x}} \\ \pmb{\omega} \end{array} \right] = \textbf{J}(\textbf{q}) \; \dot{\textbf{q}}

In position control, where we’re only concerned about the (x, y, z) position of the hand, the angular velocity dimensions are stripped out of the Jacobian so that it’s a 3 x n_joints matrix rather than a 6 x n_joints matrix. So the orientation of the end-effector is not controlled at all. When we did this, we didn’t need to worry or know anything about the form of the angular velocities.
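
In code that stripping is just a row slice of the full Jacobian; here’s a minimal sketch using the robot_config functions that show up in the control script further down:

# full 6 x n_joints task-space Jacobian for the end-effector
J = robot_config.J('EE', q=feedback['q'])
# keep only the rows mapping joint velocities to (dx, dy, dz)
# for position-only control
J_pos = J[:3]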

Now that we’re interested in orientation control, however, we will need to know their form (and thank you to Yitao Ding for clarifying this point in comments on the original version of this post). The Jacobian for the orientation describes the rotational velocities around each axis (x, y, z) with respect to joint velocities. The rotational velocity of each axis “happens” at the same time. It’s important to note that this is not the same thing as Euler angles, which are applied sequentially.

To generate our task-space angular forces we will have to generate an orientation angle error signal of the appropriate form. To do that, first we’re going to have to get and be able to manipulate the orientation representation of the end-effector and our target.

Transforming orientation between representations

We can get the current end-effector orientation in rotation matrix form quickly, using the transformation matrices for the robot. To get the target end-effector orientation in the examples below we’ll use the VREP remote API, which returns Euler angles.

It’s important to note that Euler angles can come in 12 different formats. You have to know what kind of Euler angles you’re dealing with (e.g. rotate around X then Y then Z, or rotate around X then Y then X, etc) for all of this to work properly. It should be well documented somewhere, for example the VREP API page tells us that it will return angles corresponding to x, y, and then z rotations.

The axes of rotation can be static (extrinsic rotations) or rotating (intrinsic rotations). NOTE: The VREP page says that they rotate around an absolute frame of reference, which I take to mean static, but I believe that’s a typo on their page. If you calculate the orientation of the end-effector of the UR5 using transform matrices, and then convert it to Euler angles with axes='rxyz' you get a match with the displayed Euler angles, but not with axes='sxyz'.
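
For the record, here’s a minimal sketch of that check, using the robot_config and interface objects set up in the code just below (the object name passed to get_orientation is a placeholder):

from abr_control.utils import transformations

# end-effector rotation matrix calculated from our transform matrices
R_e = robot_config.R('EE', q=feedback['q'])

# convert to Euler angles under both conventions
euler_rotating = transformations.euler_from_matrix(R_e, axes='rxyz')
euler_static = transformations.euler_from_matrix(R_e, axes='sxyz')

# compare against the angles VREP reports for the end-effector
print('rxyz:', euler_rotating)
print('sxyz:', euler_static)
print('VREP:', interface.get_orientation('UR5_link6'))  # object name assumed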

Now we’re going to have to be able to transform between Euler angles, rotation matrices, and quaternions. There are well established methods for doing this, and a bunch of people have coded things up to do it efficiently. Here I use the very handy transformations module from Christoph Gohlke at the University of California. Importantly, when converting to quaternions, don’t forget to normalize the quaternions to unit length.

import numpy as np

from abr_control.arms import ur5 as arm
from abr_control.interfaces import VREP
from abr_control.utils import transformations

robot_config = arm.Config()
interface = VREP(robot_config)
interface.connect()

feedback = interface.get_feedback()
# get the end-effector orientation matrix
R_e = robot_config.R('EE', q=feedback['q'])
# calculate the end-effector unit quaternion
q_e = transformations.unit_vector(
    transformations.quaternion_from_matrix(R_e))

# get the target information from VREP
target = np.hstack([
    interface.get_xyz('target'),
    interface.get_orientation('target')])
# calculate the target orientation rotation matrix
R_d = transformations.euler_matrix(
    target[3], target[4], target[5], axes='rxyz')[:3, :3]
# calculate the target orientation unit quaternion
q_d = transformations.unit_vector(
    transformations.quaternion_from_euler(
        target[3], target[4], target[5],
        axes='rxyz'))  # converting angles from 'rotating xyz'

Generating the orientation error

I implemented 4 different methods for calculating the orientation error, from (Caccavale et al, 1998), (Yuan, 1988) and (Nakanishi et al, 2008), and then one based off some code I found on Stack Overflow. I’ll describe each below, and then we’ll look at the results of applying them in VREP.

Method 1 – Based on code from StackOverflow

Given two orientation quaternions \textbf{Q}_A and \textbf{Q}_B, we want to calculate the rotation quaternion \textbf{R} that takes us from \textbf{Q}_A to \textbf{Q}_B:

\textbf{R} \; \textbf{Q}_A = \textbf{Q}_B

To isolate \textbf{R}, we right multiply by the inverse of \textbf{Q}_A. All orientation quaternions are of unit length, and for unit quaternions the inverse is the same as the conjugate. To calculate the conjugate of a quaternion, denoted \bar{\textbf{Q}} here, we negate either the scalar or vector part of the quaternion, but not both.

\textbf{R} \; \textbf{Q}_A \; \bar{\textbf{Q}}_A = \textbf{Q}_B \; \bar{\textbf{Q}}_A
\textbf{R} = \textbf{Q}_B \; \bar{\textbf{Q}}_A
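
If you want to sanity check that “conjugate equals inverse for unit quaternions” claim, here’s a minimal sketch using the transformations module from the previous section:

import numpy as np
from abr_control.utils import transformations

# two random unit quaternions
q_a = transformations.random_quaternion()
q_b = transformations.random_quaternion()

# rotation that takes q_a to q_b, using the conjugate as the inverse
R_q = transformations.quaternion_multiply(
    q_b, transformations.quaternion_conjugate(q_a))

# applying R_q to q_a should recover q_b (up to numerical error)
assert np.allclose(
    transformations.quaternion_multiply(R_q, q_a), q_b)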

Great! Now we know how to calculate the rotation needed to get from the current orientation to the target orientation. Next, we have to get from \textbf{R} a set of target Euler angle forces. In A new method for performing digital control system attitude computations using quaternions (Ickes, 1968), the author mentions that

For control purposes, the last three elements of the quaternion define the roll, pitch, and yaw rotational errors…

So you can just take the vector part of the quaternion \textbf{R} and use that as your desired Euler angle forces.

# calculate the rotation between current and target orientations
q_r = transformations.quaternion_multiply(
    q_d, transformations.quaternion_conjugate(q_e))

# convert rotation quaternion to Euler angle forces
u_task[3:] = ko * q_r[1:] * np.sign(q_r[0])

NOTE: You will run into issues when the angle \pi is crossed where the arm ‘goes the long way around’. To account for this, use q_r[1:] * np.sign(q_r[0]). This will make sure that you always rotate along a trajectory < 180 degrees towards the target angle. The reason that this crops up is because there are multiple different quaternions that can represent the same orientation.

The following figure shows the arm being directed from and to the same orientations, where the one on the left takes the long way around, and the one on the right multiplies by the sign of the scalar component of the \textbf{R} quaternion as specified above.


Method 2 – Quaternion feedback from Resolved-acceleration control of robot manipulators: A critical review with experiments (Caccavale et al, 1998)

In section IV, equation (34) of this paper they specify the orientation error to be calculated as

\pmb{\epsilon}_{de} = \textbf{R}_e \pmb{\epsilon}^e_{de}

where \textbf{R}_e is the rotation matrix for the end-effector, and \pmb{\epsilon}_{de}^e is the vector part of the unit quaternion that can be extracted from the rotation matrix

\textbf{R}_d^e = \textbf{R}^T_e \textbf{R}_d.

To implement this is pretty straight forward using the transforms.py module to handle the representation conversions:

# From (Caccavale et al, 1998)
# Section IV - Quaternion feedback
R_ed = np.dot(R_e.T, R_d)  # eq 24
q_ed = transformations.quaternion_from_matrix(R_ed)
q_ed = transformations.unit_vector(q_ed)
u_task[3:] = -np.dot(R_e, q_ed[1:])  # eq 34

Method 3 – Angle/axis feedback from Resolved-acceleration control of robot manipulators: A critical review with experiments (Caccavale et al, 1998)

In section V of the paper, they present an angle / axis feedback algorithm, which overcomes the singularity issues that classic Euler angle methods suffer from. The algorithm defines the orientation error in equation (45) to be calculated

\textbf{o}_{de} = 2 \eta_{de} \pmb{\epsilon}_{de},

where \eta_{de} and \pmb{\epsilon}_{de} are the scalar and vector part of the quaternion representation of 

\textbf{R}_{de} = \textbf{R}_d \textbf{R}_e^T

Where \textbf{R}_d is the rotation matrix representing the desired orientation and \textbf{R}_e is the rotation matrix representing the end-effector orientation. The code implementation for this looks like:

# From (Caccavale et al, 1998)
# Section V - Angle/axis feedback
R_de = np.dot(R_d, R_e.T)  # eq 44
q_ed = transformations.quaternion_from_matrix(R_de)
q_ed = transformations.unit_vector(q_ed)
u_task[3:] = -2 * q_ed[0] * q_ed[1:]  # eq 45

From playing around with this briefly, it seems like this method also works. The authors note in the discussion that it may “suffer in the case of large orientation errors”, but I wasn’t able to elicit poor behaviour when playing around with it in VREP. The other downside they mention is that the computational burden is heavier with this method than with quaternion feedback.


Method 4 – From Closed-loop manipulator control using quaternion feedback (Yuan, 1988) and Operational space control: A theoretical and empirical comparison (Nakanishi et al, 2008)

This was the one method that I wasn’t able to get implemented / working properly. Originally presented in (Yuan, 1988), and then modified for representing the angular velocity in world and not local coordinates in (Nakanishi et al, 2008), the equation for generating error (Nakanishi eq 72):

\textbf{e}_o = \eta_d \pmb{\epsilon} - \eta \pmb{\epsilon}_d + \textbf{S}(\pmb{\epsilon}_d) \pmb{\epsilon}

where \eta, \pmb{\epsilon} and \eta_d, \pmb{\epsilon}_d are the scalar and vector components of the quaternions representing the end-effector and target orientations, respectively, and \textbf{S}(\textbf{x}) is defined in (Nakanishi eq 73):

\left[ \begin{array}{ccc} 0 & -\textbf{x}[2] & \textbf{x}[1] \\ \textbf{x}[2] & 0 & -\textbf{x}[0] \\ -\textbf{x}[1] & \textbf{x}[0] & 0 \end{array} \right]

My code for this implementation looks like:

S = np.array([
    [0.0, -q_d[2], q_d[1]],
    [q_d[2], 0.0, -q_d[0]],
    [-q_d[1], q_d[0], 0.0]])

u_task[3:] = -(q_d[0] * q_e[1:] - q_e[0] * q_d[1:] +
               np.dot(S, q_e[1:]))

If you understand why this isn’t working, or can provide a working code example in the comments, I would be very grateful.


Generating the full orientation control signal

The above steps generate the task-space control signal, and from here I’m just using standard operational space control methods to take u_task and transform it into joint torques to send out to the arm, with possibly the caveat that I’m accounting for velocity in joint space, not task space. Generating the full control signal looks like:

# which dim to control of [x, y, z, alpha, beta, gamma]
ctrlr_dof = np.array([False, False, False, True, True, True])

feedback = interface.get_feedback()
# get the end-effector orientation matrix
R_e = robot_config.R('EE', q=feedback['q'])
# calculate the end-effector unit quaternion
q_e = transformations.unit_vector(
    transformations.quaternion_from_matrix(R_e))

# get the target information from VREP
target = np.hstack([
    interface.get_xyz('target'),
    interface.get_orientation('target')])
# calculate the target orientation rotation matrix
R_d = transformations.euler_matrix(
    target[3], target[4], target[5], axes='rxyz')[:3, :3]
# calculate the target orientation unit quaternion
q_d = transformations.unit_vector(
    transformations.quaternion_from_euler(
        target[3], target[4], target[5],
        axes='rxyz'))  # converting angles from 'rotating xyz'

# calculate the Jacobian for the end-effector
# and isolate the relevant dimensions
J = robot_config.J('EE', q=feedback['q'])[ctrlr_dof]

# calculate the inertia matrix in joint space
M = robot_config.M(q=feedback['q'])

# calculate the inertia matrix in task space
M_inv = np.linalg.inv(M)
Mx_inv = np.dot(J, np.dot(M_inv, J.T))
if np.linalg.det(Mx_inv) != 0:
    # do the linalg inverse if matrix is non-singular
    # because it's faster and more accurate
    Mx = np.linalg.inv(Mx_inv)
else:
    # using the rcond to set singular values < thresh to 0
    # singular values < (rcond * max(singular_values)) set to 0
    Mx = np.linalg.pinv(Mx_inv, rcond=.005)

u_task = np.zeros(6)  # [x, y, z, alpha, beta, gamma]

# generate orientation error
# CODE FROM ONE OF ABOVE METHODS HERE

# remove uncontrolled dimensions from u_task
u_task = u_task[ctrlr_dof]

# transform from operational space to torques and
# add in velocity and gravity compensation in joint space
u = (np.dot(J.T, np.dot(Mx, u_task)) -
     kv * np.dot(M, feedback['dq']) -
     robot_config.g(q=feedback['q']))

# apply the control signal, step the sim forward
interface.send_forces(u)

The control script in full context is available up on my GitHub along with the corresponding VREP scene. If you download and run both (and have the ABR Control repo installed), then you can generate fun videos like the following:

Here, the green ball is the target, and the end-effector is being controlled to match the orientation of the ball. The blue box is just a visualization aid for displaying the orientation of the end-effector. And that hand is there from another project I was working on and forgot to remove, but I’d already made the videos, so here we are. It’s set to not affect the dynamics, so don’t worry. The target changes orientation once a second. The orientation gain for these trials is ko=200 and kv=np.sqrt(600).

The first three methods all perform relatively similarly to each other, although method 3 seems to be a bit faster to converge to the target orientation after the first movement. But it’s pretty clear something is terribly wrong with the implementation of the Yuan algorithm in method 4; brownie points for whoever figures out what!

Controlling position and orientation

So you want to use force control to control both position and orientation, eh? You are truly reaching for the stars, and I applaud you. For the most part, this is pretty straightforward. But there are a couple of gotchas so I’ll explicitly go through the process here.

How many degrees-of-freedom (DOF) can be controlled?

If you recall from my article on Jacobians, there was a section on analysing the Jacobian. It comes down to two main points: 1) The Jacobian specifies which task-space DOF can be controlled. If there is a row of zeros, for example, the corresponding task-space DOF (i.e. one of (x, y, z, \alpha, \beta, \gamma)) cannot be controlled. 2) The rank of the Jacobian determines how many DOF can be controlled at the same time.

For example, in a two joint planar arm, the (x, y, \gamma) variables can be controlled, but (z, \alpha, \beta) cannot be controlled because their corresponding rows are all zeros. So 3 variables can potentially be controlled, but because the Jacobian is rank 2 only two variables can be controlled at a time. If you try to control more than 2 DOF at a time things are going to go poorly. Here are some animations of trying to control 3 DOF vs 2 DOF in a 2 joint arm:
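
A quick way to run this analysis numerically is with np.linalg.matrix_rank; here’s a minimal sketch for a two joint planar arm (the link lengths and joint angles are made up for the example):

import numpy as np

L1, L2 = 1.0, 1.0  # link lengths
q0, q1 = 0.3, 0.7  # joint angles

# full 6 x 2 task-space Jacobian of a two joint planar arm:
# only the x, y, and gamma rows are non-zero
J = np.array([
    [-L1 * np.sin(q0) - L2 * np.sin(q0 + q1), -L2 * np.sin(q0 + q1)],  # dx
    [L1 * np.cos(q0) + L2 * np.cos(q0 + q1), L2 * np.cos(q0 + q1)],    # dy
    [0.0, 0.0],  # dz
    [0.0, 0.0],  # dalpha
    [0.0, 0.0],  # dbeta
    [1.0, 1.0],  # dgamma
])

print(np.linalg.matrix_rank(J))  # prints 2: only 2 DOF controllable at once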

How to specify which DOF are being controlled?

Okay, so we don’t want to try to control too many DOF at once. Got it. Let’s say we know that our arm has 3 DOF, how do we choose which DOF to control? Simple: You remove the rows from your Jacobian and your control signal that correspond to task-space DOF you don’t want to control.

To implement this in code in a flexible way, I’ve chosen to specify an array with 6 boolean elements, set to True if you want to control the corresponding task space parameter and False if you don’t. For example, if you want to control just the (x, y, z) parameters, you would set ctrlr_dof = [True, True, True, False, False, False].

We then strip the Jacobian and task space control signal down to the relevant rows with J = robot_config.J('EE', q)[ctrlr_dof] and u_task = (current - target)[ctrlr_dof]. This means that both current and target must be 6-dimensional vectors specifying the current and target (x, y, z, \alpha, \beta, \gamma) values, respectively, regardless of how many dimensions we’re actually controlling.

Generating a position + orientation control signal

The UR5 has 6 degrees of freedom, so we’re able to fully control the task space position and orientation. To do this, in the above script just set ctrlr_dof = np.array([True, True, True, True, True, True]), and there you go! In the following animations the gain values used were kp=300, ko=300, and kv=np.sqrt(kp+ko)*1.5. The full script can be found up on my GitHub.

NOTE: Setting the gains properly for this task is pretty critical, and I did it just by trial and error until I got something that was decent for each. For a real comparison, better parameter tuning would have to be undertaken more rigorously. 

NOTE: When implementing this minimal code example script I ran into a problem that was caused by the task-space inertia matrix calculation. It turns out that using np.linalg.pinv gives very different results than np.linalg.inv, and I did not realise this. I’m going to have to explore this more fully later, but basically heads up that you want to be using np.linalg.inv as much as possible. So you’ll notice in the above code I check the determinant of Mx_inv and first try to use np.linalg.inv before resorting to np.linalg.pinv.
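
To see the kind of discrepancy I’m talking about, here’s a minimal sketch comparing the two on a nearly singular matrix:

import numpy as np

# a nearly singular 2 x 2 stand-in for the task-space inertia inverse
Mx_inv = np.array([[1.0, 1.0],
                   [1.0, 1.0 + 1e-5]])

# the exact inverse blows up, but is what the dynamics actually demand
print(np.linalg.inv(Mx_inv))
# pinv with a loose rcond treats the small singular value as zero,
# giving a very different (much smaller) result
print(np.linalg.pinv(Mx_inv, rcond=.005))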

NOTE: If you start playing around with controlling only one or two of the orientation angles, something to keep in mind: Because we’re using rotating axes, if you set up False, False, True then it’s not going to look like \gamma of the end-effector is lining up with the \gamma of the target. This is because \alpha and \beta weren’t set first. If you generate a plot of the target orientations vs the end-effector orientations, however, you’ll see that you are in fact reaching the target orientation for \gamma.

In summary

So that’s that! Lots of caveats, notes, and more work to be done, but hopefully this will be a useful resource for any others embarking on the same journey. You can download the code, try it out, and play around with the gains and targets. Let me know below if you have any questions or enjoyed the post, or want to share any other resources on force control of task-space orientation.

And once again thanks to Yitao Ding for his helpful comments and corrections on the original version of this article.


Matplotlib legends for mean and confidence interval plots

When plotting means and confidence intervals, sometimes the mean lines are hard to see and it’s nice to have included in your legend the color of the confidence interval shading. It turns out this is a bit of a chore in Matplotlib, but building off of their online examples you can get something that looks pretty alright:

[Figure: example mean and confidence interval plot]

So here’s code for getting the above plot, with an option for solid or dashed lines. The sky is the limit!

import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from matplotlib.colors import colorConverter as cc
import numpy as np

def plot_mean_and_CI(mean, lb, ub, color_mean=None, color_shading=None):
    # plot the shaded range of the confidence intervals
    plt.fill_between(range(mean.shape[0]), ub, lb,
                     color=color_shading, alpha=.5)
    # plot the mean on top
    plt.plot(mean, color_mean)

# generate 3 sets of random means and confidence intervals to plot
mean0 = np.random.random(50)
ub0 = mean0 + np.random.random(50) + .5
lb0 = mean0 - np.random.random(50) - .5

mean1 = np.random.random(50) + 2
ub1 = mean1 + np.random.random(50) + .5
lb1 = mean1 - np.random.random(50) - .5

mean2 = np.random.random(50) - 1
ub2 = mean2 + np.random.random(50) + .5
lb2 = mean2 - np.random.random(50) - .5

# plot the data
fig = plt.figure(1, figsize=(7, 2.5))
plot_mean_and_CI(mean0, ub0, lb0, color_mean='k', color_shading='k')
plot_mean_and_CI(mean1, ub1, lb1, color_mean='b', color_shading='b')
plot_mean_and_CI(mean2, ub2, lb2, color_mean='g--', color_shading='g')

class LegendObject(object):
    def __init__(self, facecolor='red', edgecolor='white', dashed=False):
        self.facecolor = facecolor
        self.edgecolor = edgecolor
        self.dashed = dashed

    def legend_artist(self, legend, orig_handle, fontsize, handlebox):
        x0, y0 = handlebox.xdescent, handlebox.ydescent
        width, height = handlebox.width, handlebox.height
        patch = mpatches.Rectangle(
            # create a rectangle that is filled with color
            [x0, y0], width, height, facecolor=self.facecolor,
            # and whose edges are the faded color
            edgecolor=self.edgecolor, lw=3)
        handlebox.add_artist(patch)

        # if we're creating the legend for a dashed line,
        # manually add the dash in to our rectangle
        if self.dashed:
            patch1 = mpatches.Rectangle(
                [x0 + 2*width/5, y0], width/5, height, facecolor=self.edgecolor,
                transform=handlebox.get_transform())
            handlebox.add_artist(patch1)

        return patch

bg = np.array([1, 1, 1])  # background of the legend is white
colors = ['black', 'blue', 'green']
# with alpha = .5, the faded color is the average of the background and color
colors_faded = [(np.array(cc.to_rgb(color)) + bg) / 2.0 for color in colors]

plt.legend([0, 1, 2], ['Data 0', 'Data 1', 'Data 2'],
           handler_map={
               0: LegendObject(colors[0], colors_faded[0]),
               1: LegendObject(colors[1], colors_faded[1]),
               2: LegendObject(colors[2], colors_faded[2], dashed=True),
            })

plt.title('Example mean and confidence interval plot')
plt.tight_layout()
plt.grid()
plt.show()

Side note, really enjoying the default formatting in Matplotlib 2+.


ABR Control repo public release!

https://github.com/abr/abr_control

Last August I started working on a new version of my old control repo with a resolve to make something less hacky, as part of the work for Applied Brain Research, Inc, a startup that I co-founded with others from my lab after most of my cohort graduated. Together with Pawel Jaworski, who comprises the other half of ABR’s motor team, we’ve been building up a library of useful functions for modeling, simulating, interfacing, and controlling robot arms.

Today we’re making the repository public, under the same free-for-non-commercial-use license that we’ve released our Nengo neural modelling software under. You can find it here: ABR_Control

It’s all Python 3, and here’s an overview of some of the features:

  • Automates generation of functions for computing the Jacobians, joint space and task space inertia matrices, centrifugal and Coriolis effects, and Jacobian derivative, provided each link’s inertia matrix and the transformation matrices
  • Option to compile these functions to speedy Cython implementations
  • Operational space, joint space, floating, and sliding controllers provided with PyGame and VREP example scripts
  • Obstacle avoidance and dynamics adaptation add-ons with PyGame example scripts
  • Interfaces with VREP
  • Configuration files for one, two, and three link arms, as well as the UR5 and Jaco2 arms in VREP
  • Provides Python simulations of two and three link arms, with PyGame visualization
  • Path planning using first- and second-order filtering of the target, with example scripts

Structure

The ABR Control library is divided into three sections:

  1. Arm descriptions (and simulations)
  2. Robotic system interfaces
  3. Controllers

The big goal was to make all of these interchangeable, so that to run any permutation of them you just have to change which arm / interface / controller you’re importing.

To support a new arm, the user only needs to create a configuration file specifying the transforms and inertia matrices. Code for calculating the necessary functions of the arm will be symbolically derived using SymPy, and compiled to C using Cython for efficient run-time execution.

Interfaces provide send_forces and send_target_angles functions, to apply torques and put the arm in a target state, as well as a get_feedback function, which returns a dictionary of information about the current state of the arm (joint angles and velocities at a minimum).

Controllers provide a generate function, which take in current system state information and a target, and return a set of joint torques to apply to the robot arm.

VREP example

The easiest way to show it is with some code examples. So, once you’ve cloned and installed the repo, you can open up VREP and the jaco2.ttt model in the abr_control/arms/jaco2 folder, and to control it using an operational space controller you would run the following:

import numpy as np
from abr_control.arms import jaco2 as arm
from abr_control.controllers import OSC
from abr_control.interfaces import VREP

# initialize our robot config for the jaco2
robot_config = arm.Config(use_cython=True, hand_attached=True)

# instantiate controller
ctrlr = OSC(robot_config, kp=200, vmax=0.5)

# create our VREP interface
interface = VREP(robot_config, dt=.001)
interface.connect()

target_xyz = np.array([0.2, 0.2, 0.2])
# set the target object's position in VREP
interface.set_xyz(name='target', xyz=target_xyz)

count = 0.0
while count < 1500:  # run for 1.5 simulated seconds
    # get joint angle and velocity feedback
    feedback = interface.get_feedback()
    # calculate the control signal
    u = ctrlr.generate(
        q=feedback['q'],
        dq=feedback['dq'],
        target_pos=target_xyz)
    # send forces into VREP, step the sim forward
    interface.send_forces(u)

    count += 1
interface.disconnect()

This is a minimal example of the examples/VREP/reaching.py code. To run it with a different arm, you can just change the from abr_control.arms import ... as arm line. The repo comes with the configuration files for the UR5 and a onelink VREP arm model as well.

PyGame example

I’ve also found the PyGame simulations of the 2 and 3 link arms very helpful for quickly testing new controllers and code, as an easy low overhead proof of concept sandbox. To run the threelink arm (which runs in Linux and Windows fine but I’ve heard has issues in Mac OS), with the operational space controller, you can run this script:

import numpy as np
from abr_control.arms import threelink as arm
from abr_control.interfaces import PyGame
from abr_control.controllers import OSC

# initialize our robot config
robot_config = arm.Config(use_cython=True)
# create our arm simulation
arm_sim = arm.ArmSim(robot_config)

# create an operational space controller
ctrlr = OSC(robot_config, kp=300, vmax=100,
            use_dJ=False, use_C=True)

def on_click(self, mouse_x, mouse_y):
    self.target[0] = self.mouse_x
    self.target[1] = self.mouse_y

# create our interface
interface = PyGame(robot_config, arm_sim, dt=.001, 
                   on_click=on_click)
interface.connect()

# create a target
feedback = interface.get_feedback()
target_xyz = robot_config.Tx('EE', feedback['q'])
interface.set_target(target_xyz)

try:
    while 1:
        # get arm feedback
        feedback = interface.get_feedback()
        hand_xyz = robot_config.Tx('EE', feedback['q'])

        # generate an operational space control signal
        u = ctrlr.generate(
            q=feedback['q'],
            dq=feedback['dq'],
            target_pos=target_xyz)

        new_target = interface.get_mousexy()
        if new_target is not None:
            target_xyz[0:2] = new_target
        interface.set_target(target_xyz)

        # apply the control signal, step the sim forward
        interface.send_forces(u)

finally:
    # stop and reset the simulation
    interface.disconnect()

The extra bits of code just set up a hook so that when you click on the PyGame display somewhere the target moves to that point.

So! Hopefully some people find this useful for their research! It should be as easy to set up as cloning the repo, installing the requirements and running the setup file, and then trying out the examples.

If you find a bug please file an issue! If you find a way to improve it please do so and make a PR! And if you’d like to use anything in the repo commercially, please contact me.


Quick calculations with SymPy and Cython

Alrighty! Last time I posted about SymPy we weren’t worrying too much about computation speed. It was worth the time saved by automating the Jacobian, inertia matrix, etc calculations and it didn’t affect simulation speed really all that much. But! When working with a real arm it suddenly becomes critical to have highly efficient calculations.

Turns out that I still don’t want to code those equations by hand. There are probably very nice options for generating optimized code using Mathematica or MapleSim or other paid software, but SymPy is free and already Python-compatible, so my goal is to stick with that.

Options

I did some internet scouring, and looked at installing various packages to help speed things up, including

  1. trying out the Sympy simplify function,
  2. trying out SymEngine,
  3. trying out the Sympy compile to Theano function,
  4. trying out the Sympy autowrap function, and
  5. various combinations of the above.

The SymEngine backend and Theano functions really didn’t give any improvements for the kind of low-dimensional vector calculations performed for control here. Disclaimer, it could be the case that I gave up on these implementations too quickly, but from some basic testing it didn’t seem like they were the way to go for this kind of problem.

So down to the simplify function and compiling to the Cython backend options. First thing you’ll notice when using the simplify is that the generation time for the function can take orders of magnitude longer (up to 12ish hours for inertia matrices for the Kinova Jaco 2 arm with simplify vs about 1-2 minutes without simplify, for example). And then, as a nice kick in the teeth after that, there’s almost no difference between straight Cython of the non-simplified functions and the simplified functions. Here are some graphs:

So you can see how the addition of the simplify drastically increases the compile time in the top half of the graphs there. In the lower half, you can see that the simplify does save some execution time relative to the baseline straight lambdify function, but once you start compiling to Cython with the autowrap the difference is on the order of hundredths to thousandths of milliseconds.

Results

So! My recommendation is

  • Don’t use simplify,
  • do use autowrap with the Cython backend.

To compile using the Cython backend:

from sympy.utilities.autowrap import autowrap
function = autowrap(sympy_expression, backend="cython",
                    args=parameters)

In the last Sympy post I looked at a hard-coded calculation against the Sympy generated function, using the timeit function, which runs a given chunk of code 1,000,000 times and tells you how long it took. To save tracking down the results from that post, here are the timeit results of a Sympy function generated using just the lambdify function vs the hard-coding the function in straight Numpy:
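
If you want to run the same kind of comparison yourself, here’s a minimal sketch of the timing setup (the expression being timed is just a stand-in, not one of the actual arm functions):

import timeit

setup = '''
import numpy as np
import sympy as sp

q0, q1 = sp.symbols('q0 q1')
expr = sp.cos(q0) + sp.sin(q0) * sp.cos(q1)

# SymPy-generated function
f_sympy = sp.lambdify((q0, q1), expr)

# hard-coded NumPy equivalent
def f_numpy(q0, q1):
    return np.cos(q0) + np.sin(q0) * np.cos(q1)
'''

# timeit runs each snippet 1,000,000 times by default
print(timeit.timeit('f_sympy(.1, .2)', setup=setup))
print(timeit.timeit('f_numpy(.1, .2)', setup=setup))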

[Figure: timeit results, SymPy lambdify vs hard-coded NumPy]

And now using the autowrap function to compile to Cython code, compared to hard-coding the function in Numpy:

[Figure: timeit results, Cython autowrap vs hard-coded NumPy]

This is a pretty crazy improvement, and means that we can have the best of both worlds. We can declare our transforms in SymPy and automate the calculation of our Jacobians and inertia matrices, and still have the code execute fast enough that we can use it to control real robots.

That said, this is all from my basic experimenting with some different optimisations, which is a far from thorough exploration of the space. If you know of any other ways to speed up execution, please comment below!

You can find the code I used to generate the above plots up on my GitHub.


Velocity limiting in operational space control

Recently, I was reading through an older paper on effective operational space control, talking specifically about point-to-point control in operational space. The paper mentioned that even if you have a perfect model of the system, you’re going to run into trouble if you use just a basic PD formula to define your control signal in operational space:

u_x = k_p (\textbf{x}^* - \textbf{x}) - k_v \dot{\textbf{x}},

where \textbf{x} and \dot{\textbf{x}} are the system position and velocity in operational space, \textbf{x}^* is the target position, and k_p and k_v are gains.

If you define your operational space control signal like this, and then translate this signal into joint torques (using, for example, methods discussed in other posts), you’re going to see a very non-straight trajectory emerge in larger movements as a result of “actuator saturation, and bandwidth and velocity limitations”. In the example of a standard robot, you might run into issues with your motors not being able to actually generate the torques that have been specified, the frequency of control and feedback might not be sufficient, and you could hit hard constraints on system velocity. The solution to this problem was presented in this 1987 paper by Dr. Oussama Khatib, and is pretty slick and very useful, so I thought I’d write it up here for any other unfortunate souls wandering around in ignorance. First though, here’s what it looks like to move large point to point distances without velocity limiting:

As you can see, the system isn’t moving in a straight line, which can be very aggravating if you’ve worked and reworked out the equations and double checked all your parameters, etc etc. A few things: first, when working with simulations it’s easy to forget how ridiculously fast this actually would be in real-time. Even though it takes a minute to simulate the above movement, in real-time it happens over the course of 200ms. Taking that into account, this is pretty good. Also, there’s an obvious solution here: slow down your movement. The source of this problem is largely that all of the motors are not able to apply the torques specified and move at the required speed. Some of the motors have a lot less mass to throw around and will be able to move at the specified torques, but not all. Hence the non-straight trajectory.

You can of course drop the gains on your PD signal, but that’s not really a great methodical solution. So, what can we do?

Well, if we rearrange the PD control signal specified above into

u_x = k_v (\dot{\textbf{x}}^* - \dot{\textbf{x}}),

where \dot{\textbf{x}}^* is the desired velocity, we see that this signal can be interpreted as a pure velocity servo-control signal, with velocity gain k_v and a desired velocity

\dot{\textbf{x}}^* = \frac{k_p}{k_v}(\textbf{x}^* - \textbf{x}).

When things are in this form, it becomes a bit more clear what we have to do: limit the desired velocity to be at most some specified maximum velocity of the end-effector, V_\textrm{max}. This value should be low enough that the transformation into joint torques doesn’t result in anything larger than the actuators can generate.

Taking V_\textrm{max}, what we want is to clip the magnitude of the control signal to be V_\textrm{max} if it’s ever larger (in positive or negative directions), and to be equal to \frac{k_p}{k_v}(\textbf{x}^* - \textbf{x}) otherwise. The math for this works out such that we can accomplish this with a control signal of the form:

\textbf{u}_\textbf{x} = -k_v (\dot{\textbf{x}} + \textrm{sat}\left(\frac{V_\textrm{max}}{\lambda |\tilde{\textbf{x}}|} \right) \lambda \tilde{\textbf{x}}),

where \lambda = \frac{k_p}{k_v} , \tilde{\textbf{x}} = \textbf{x} - \textbf{x}^*, and \textrm{sat} is the saturation function, such that

\textrm{sat}(y) = \left\{ \begin{array}{cc} |y| \leq 1 & \Rightarrow y \\ |y| > 1 & \Rightarrow 1 \end{array} \right.

where |y| is the absolute value of y, and is applied element wise to the vector \tilde{\textbf{x}} in the control signal.
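
Since the argument going into sat here is always positive (V_\textrm{max} divided by a magnitude), implementing it element-wise in NumPy reduces to an upper clip; a minimal sketch:

import numpy as np

def sat(y):
    # y is element-wise positive here (vmax / (lamb * |x_tilde|)),
    # so saturating at 1 is just an upper clip
    return np.clip(y, 0, 1)

print(sat(np.array([0.25, 3.0])))  # [0.25, 1.0]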

As a result of using this saturation function, the control signal behaves differently depending on whether or not \dot{\textbf{x}}^* > V_\textrm{max}:

\textbf{u}_\textbf{x} = \left\{ \begin{array}{cc} \dot{\textbf{x}}^* \geq V_\textrm{max} & \Rightarrow -k_v (\dot{\textbf{x}} + V_\textrm{max} \; \textrm{sgn}(\tilde{\textbf{x}})) \\ \dot{\textbf{x}}^* < V_\textrm{max} & \Rightarrow -k_v \dot{\textbf{x}} - k_p \tilde{\textbf{x}} \end{array} \right.

where \textrm{sgn}(y) is a function that returns -1 if y < 0 and 1 if y \geq 0, and is again applied element-wise to vectors. Note that the control signal in the second condition is equivalent to our original PD control signal k_p(\textbf{x}^* - \textbf{x}) - k_v \dot{\textbf{x}}. If you’re wondering about negative signs make sure you note that \tilde{\textbf{x}} = \textbf{x} - \textbf{x}^* and not \textbf{x}^* - \textbf{x}, as you might assume.

So now the control signal is behaving exactly as desired! Moves the system towards the target, but no faster than the specified maximum velocity. Now our trajectories look like this:

So those are starting to look a lot better! The first thing you’ll notice is that this is a fair bit slower of a movement. Well, actually, it’s waaaayyyy slower because the playback speed here is 4x faster than in that first animation, and this is a movement over 2s. Which has pros and cons, con: it’s slower, pro: it’s straighter, and you’re less likely to be murdered by it. When you move from simulations to real systems that latter point really moves way up the priority list.

Second thing to notice, the system seems to be minimising the error along one dimension, and then along the next, and then arrives at the target. What’s going on?  Because the error along each of the (x,y,z) dimensions isn’t the same, when speed gets clipped along one of the dimensions you’re no longer going to be moving in a straight line directly to the target. To address this, we’re going to add a scaling term whenever clipping happens, such that you reduce the speed you move along each dimension by the same ratio, so that you’re still moving in a straight line.

It’s a liiiiittle bit more complicated than that, but not much. First, we’ll calculate the values being passed in to the saturation function for each (x,y,z) dimension. We’ll then check to see if any of them are going to get clipped, and if there’s more than one that saturates we’ll find the one that is affected the most. After we’ve identified which dimension it is, we go through and calculate what the control signal would have been without velocity limiting, and what it will be now with velocity limiting. This scaling term tells us how much the control signal was reduced, and we can then use it to reduce the control signals of the other dimensions by the same amount. These other dimensions might still saturate, though, so we have to recalculate the saturation function for them once they’ve been scaled. Here’s what this all looks like in code:

# implement velocity limiting
lamb = kp / kv
x_tilde = xyz - target_xyz
# values passed into the saturation function for each (x, y, z) dimension
sat = vmax / (lamb * np.abs(x_tilde))
scale = np.ones(3)
if np.any(sat < 1):
    # find the dimension that saturates the most
    index = np.argmin(sat)
    # control signal for that dimension without and with velocity limiting
    unclipped = kp * x_tilde[index]
    clipped = kv * vmax * np.sign(x_tilde[index])
    # scale the other dimensions by the same ratio so the movement stays straight
    scale = np.ones(3) * clipped / unclipped
    scale[index] = 1
u_xyz = -kv * (dx + np.clip(sat / scale, 0, 1) *
               scale * lamb * x_tilde)
 

And now, finally, we start getting the trajectories that we’ve been wanting the whole time:

And finally we can rest easy, knowing that our robot is moving at a reasonable speed along a direct path to its goals. Wherever you’d like to use this neato ‘ish you should be able to just paste in the above code, define your vmax, kp, and kv values and be good to go!


Using SymPy’s lambdify for generating transform matrices and Jacobians

I’ve been working in VREP with some of their different robot models and testing out force control, and one of the things that becomes pretty important for efficient workflow is to have a streamlined method for setting up the transform matrices and calculating the Jacobians for different robots. You do not want to be working these all out by hand and then writing them in yourself.

A solution that’s been working well for me (and is fully implemented in Python) is to use SymPy to set up the basic transform matrices, from each joint to the next, and then use its derivative function to calculate the Jacobian. Once this is calculated you can then use SymPy’s lambdify function to parameterize it, and off you go! In this post I’m going to work through an example for controlling VREP’s UR5 arm using force control. And if you’re just looking for code examples, you can find it all up on my GitHub.

Edit: Updated the code to be much nicer, added saving of calculated functions, and a null space controller.
Edit 2: Removed the simplify calls from the code, not worth the generation time increase (if execution is a great concern can generate the functions using cython).

Setting up the transform matrices

This is the part of this process that is unique to each arm. The goal is to set up a system so that you can specify your transforms for each joint (and to each centre-of-mass (COM) too, of course) and then be on your way. So here’s the UR5, cute thing that it is:

[Figure: the UR5 arm]

For the UR5, there are 6 joints, and we’re going to be specifying 9 transform matrices: 6 joints and the COM for three arm segments (the remaining arm segment COMs are centered at their respective joint’s reference frame). The joints are all rotary, with 0 and 4 rotating around the z-axis, and the rest all rotating around x.

So first, we’re going to create a class called robot_config. Then, to create our transform matrices in SymPy first we need to set up the variables we’re going to be using:

# set up our joint angle symbols (6th angle doesn't affect any kinematics)
self.q = [sp.Symbol('q%i'%ii) for ii in range(6)]
# segment lengths associated with each joint
self.L = np.array([0.0935, .13453, .4251, .12, .3921, .0935, .0935, .0935])

where self.q is an array storing all our joint angle symbols, and self.L is an array of all of the offsets for each joint and arm segment lengths.

Using these to create the transform matrices for the joints, we get a set up that looks like this:

# transform matrix from origin to joint 0 reference frame
self.T0org = sp.Matrix([[sp.cos(self.q[0]), -sp.sin(self.q[0]), 0, 0],
                        [sp.sin(self.q[0]), sp.cos(self.q[0]), 0, 0],
                        [0, 0, 1, self.L[0]],
                        [0, 0, 0, 1]])

# transform matrix from joint 0 to joint 1 reference frame
self.T10 = sp.Matrix([[1, 0, 0, -self.L[1]],
                      [0, sp.cos(-self.q[1] + sp.pi/2),
                       -sp.sin(-self.q[1] + sp.pi/2), 0],
                      [0, sp.sin(-self.q[1] + sp.pi/2),
                       sp.cos(-self.q[1] + sp.pi/2), 0],
                      [0, 0, 0, 1]])

# transform matrix from joint 1 to joint 2 reference frame
self.T21 = sp.Matrix([[1, 0, 0, 0],
                      [0, sp.cos(-self.q[2]),
                       -sp.sin(-self.q[2]), self.L[2]],
                      [0, sp.sin(-self.q[2]),
                       sp.cos(-self.q[2]), 0],
                      [0, 0, 0, 1]])

# transform matrix from joint 2 to joint 3
self.T32 = sp.Matrix([[1, 0, 0, self.L[3]],
                      [0, sp.cos(-self.q[3] - sp.pi/2),
                       -sp.sin(-self.q[3] - sp.pi/2), self.L[4]],
                      [0, sp.sin(-self.q[3] - sp.pi/2),
                       sp.cos(-self.q[3] - sp.pi/2), 0],
                      [0, 0, 0, 1]])

# transform matrix from joint 3 to joint 4
self.T43 = sp.Matrix([[sp.sin(-self.q[4] - sp.pi/2),
                       sp.cos(-self.q[4] - sp.pi/2), 0, -self.L[5]],
                      [sp.cos(-self.q[4] - sp.pi/2),
                       -sp.sin(-self.q[4] - sp.pi/2), 0, 0],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]])

# transform matrix from joint 4 to joint 5
self.T54 = sp.Matrix([[1, 0, 0, 0],
                      [0, sp.cos(self.q[5]), -sp.sin(self.q[5]), 0],
                      [0, sp.sin(self.q[5]), sp.cos(self.q[5]), self.L[6]],
                      [0, 0, 0, 1]])

# transform matrix from joint 5 to end-effector
self.TEE5 = sp.Matrix([[1, 0, 0, self.L[7]],
                       [0, 1, 0, 0],
                       [0, 0, 1, 0],
                       [0, 0, 0, 1]])

You can see a bunch of offsetting of the joint angles by -sp.pi/2, and this is to account for the expected 0 angle (straight along the reference frame’s x-axis) at those joints being different than the 0 angle defined in the VREP simulation (at 90 degrees from the x-axis). You can find these either by looking up the joints’ 0 positions in VREP or by trial-and-error empirical analysis.

Once you have your transforms, then you have to specify how to move from the origin to each reference frame of interest (the joints and COMs). For that, I’ve just set up a simple function with a switch statement:

# point of interest in the reference frame (right at the origin)
self.x = sp.Matrix([0,0,0,1])

def _calc_T(self, name, lambdify=True):
    """ Uses Sympy to generate the transform for a joint or link

    name string: name of the joint or link, or end-effector
    lambdify boolean: if True returns a function to calculate
                      the transform. If False returns the Sympy
                      matrix
    """

    # check to see if we have our transformation saved in file
    if os.path.isfile('%s/%s.T' % (self.config_folder, name)):
        Tx = cloudpickle.load(open('%s/%s.T' % (self.config_folder, name),
                                   'rb'))
    else:
        if name == 'joint0' or name == 'link0':
            T = self.T0org
        elif name == 'joint1' or name == 'link1':
            T = self.T0org * self.T10
        elif name == 'joint2':
            T = self.T0org * self.T10 * self.T21
        elif name == 'link2':
            T = self.T0org * self.T10 * self.Tl21
        elif name == 'joint3':
            T = self.T0org * self.T10 * self.T21 * self.T32
        elif name == 'link3':
            T = self.T0org * self.T10 * self.T21 * self.Tl32
        elif name == 'joint4' or name == 'link4':
            T = self.T0org * self.T10 * self.T21 * self.T32 * self.T43
        elif name == 'joint5' or name == 'link5':
            T = self.T0org * self.T10 * self.T21 * self.T32 * self.T43 * \
                self.T54
        elif name == 'link6' or name == 'EE':
            T = self.T0org * self.T10 * self.T21 * self.T32 * self.T43 * \
                self.T54 * self.TEE5
        Tx = T * self.x  # to convert from transform matrix to (x,y,z)

        # save to file
        cloudpickle.dump(Tx, open('%s/%s.T' % (self.config_folder, name),
                                  'wb'))

    if lambdify is False:
        return Tx
    return sp.lambdify(self.q, Tx)

So the first part is pretty straight forward, create the transform matrix, and then at the end to get the (x,y,z) position we just multiply by a vector we created that represents a point at the origin of the last reference frame. Some of the transform matrices (the ones to the arm segments) I didn’t specify above just to cut down on space.

The second part is where we use this awesome lambdify function, which lets us turn the matrix we’ve defined into a function, so that we can pass in joint angles and get back out the resulting (x,y,z) position. There’s also the option to get the straight up SymPy matrix returned, in case you need the symbolic form (which we will!).

NOTE: You can also see that there’s a check built in to look for saved files, and to just load those saved files instead of recalculating things if they’re available. This is because calculating some of these matrices and their derivatives takes a long, long time. I used the cloudpickle module to do this because it’s able to easily handle saving a whole bunch of weird things that make normal pickle sour.

Calculating the Jacobian

So now that we’re able to quickly generate the transform matrix for each point of interest on the UR5, we simply take the derivative of the equation for each (x,y,z) coordinate with respect to each joint angle to generate our Jacobian.

def _calc_J(self, name, lambdify=True):
    """ Uses Sympy to generate the Jacobian for a joint or link

    name string: name of the joint or link, or end-effector
    lambdify boolean: if True returns a function to calculate
                      the Jacobian. If False returns the Sympy
                      matrix
    """

    # check to see if we have our Jacobian saved in file
    if os.path.isfile('%s/%s.J' % (self.config_folder, name)):
        J = cloudpickle.load(open('%s/%s.J' %
                             (self.config_folder, name), 'rb'))
    else:
        Tx = self._calc_T(name, lambdify=False)
        J = []
        # calculate derivative of (x,y,z) wrt to each joint
        for ii in range(self.num_joints):
            J.append([])
            J[ii].append(Tx[0].diff(self.q[ii]))  # dx/dq[ii]
            J[ii].append(Tx[1].diff(self.q[ii]))  # dy/dq[ii]
            J[ii].append(Tx[2].diff(self.q[ii]))  # dz/dq[ii]

Here we retrieve the Tx vector from our _calc_T function, and then calculate the derivatives. When calculating the Jacobian for the end-effector, this is all we need! Huzzah!

But to calculate the Jacobian for transforming the inertia matrices of each arm segment into joint space we’re going to need the orientation information added to our Jacobian as well. This we know ahead of time, for each joint it’s a 3D vector with a 1 on the axis being rotated around. So we can predefine this:

# orientation part of the Jacobian (compensating for orientations)
self.J_orientation = [
    [0, 0, 1], # joint 0 rotates around z axis
    [1, 0, 0], # joint 1 rotates around x axis
    [1, 0, 0], # joint 2 rotates around x axis
    [1, 0, 0], # joint 3 rotates around x axis
    [0, 0, 1], # joint 4 rotates around z axis
    [1, 0, 0]] # joint 5 rotates around x axis

And then we just fill in the Jacobians for each reference frame with self.J_orientation up to the last joint, and fill in the rest of the Jacobian with zeros. So e.g. when we’re calculating the Jacobian for the arm segment just past the second joint we’ll use the first two rows of self.J_orientation and the rest of the rows will be 0.

So this leads us to the second half of the _calc_J function:

        end_point = name.strip('link').strip('joint')
        if end_point != 'EE':
            end_point = min(int(end_point) + 1, self.num_joints)
            # add on the orientation information up to the last joint
            for ii in range(end_point):
                J[ii] = J[ii] + self.J_orientation[ii]
            # fill in the rest of the joints orientation info with 0
            for ii in range(end_point, self.num_joints):
                J[ii] = J[ii] + [0, 0, 0]

        # save to file
        cloudpickle.dump(J, open('%s/%s.J' %
                                 (self.config_folder, name), 'wb'))

    J = sp.Matrix(J).T  # correct the orientation of J
    if lambdify is False:
        return J
    return sp.lambdify(self.q, J)

The orientation information is added in, we save the result to file, and a function that takes in the joint angles and outputs the Jacobian is created (unless lambdify == False in which case the SymPy symbolic form is returned.)

Then finally, two wrapper functions are added in to make creating / accessing these functions easier. First, define a couple of dictionaries

# create function dictionaries
self._T = {}  # for transform calculations
self._J = {}  # for Jacobian calculations

and then our wrapper functions look like this

def T(self, name, q):
    """ Calculates the transform for a joint or link
    name string: name of the joint or link, or end-effector
    q np.array: joint angles
    """
    # check for function in dictionary
    if self._T.get(name, None) is None:
        print('Generating transform function for %s'%name)
        self._T[name] = self._calc_T(name)
    return self._T[name](*q)[:-1].flatten()

def J(self, name, q):
    """ Calculates the Jacobian for a joint or link
    name string: name of the joint or link, or end-effector
    q np.array: joint angles
    """
    # check for function in dictionary
    if self._J.get(name, None) is None:
        print('Generating Jacobian function for %s'%name)
        self._J[name] = self._calc_J(name)
    return np.array(self._J[name](*q)).T

So how you use this class (all of this is in a class) is to call these T and J functions with the current joint angles. They'll check to see if the functions have already been created or stored in file; if they haven't, then the T and / or J functions will be created, then our wrappers do a bit of formatting to get them into the proper form (i.e. transposing or cropping), and return you your (x,y,z) or Jacobian!

NOTE: It’s a bit of a misnomer to have the function be called T and actually return to you Tx, but hey this is a blog. Lay off.

Calculating the inertia matrix in joint-space and force of gravity
Now, since we’re here we might as well also calculate the functions for our inertia matrix in joint space and the effect of gravity. So, define a couple more placeholders in our robot_config class to help us:

self._M = []  # placeholder for (x,y,z) inertia matrices
self._Mq = None  # placeholder for joint space inertia matrix function
self._Mq_g = None  # placeholder for joint space gravity term function

and then add in our inertia matrix information (defined in each link’s centre-of-mass (COM) reference frame)

# create the inertia matrices for each link of the ur5
self._M.append(np.diag([1.0, 1.0, 1.0,
                        0.02, 0.02, 0.02]))  # link0
self._M.append(np.diag([2.5, 2.5, 2.5,
                        0.04, 0.04, 0.04]))  # link1
self._M.append(np.diag([5.7, 5.7, 5.7,
                        0.06, 0.06, 0.04]))  # link2
self._M.append(np.diag([3.9, 3.9, 3.9,
                        0.055, 0.055, 0.04]))  # link3
self._M.append(np.copy(self._M[1]))  # link4
self._M.append(np.copy(self._M[1]))  # link5
self._M.append(np.diag([0.7, 0.7, 0.7,
                        0.01, 0.01, 0.01]))  # link6

and then using our equations for calculating the system’s inertia and gravity we create our _calc_Mq and _calc_Mq_g functions

def _calc_Mq(self, lambdify=True):
    """ Uses Sympy to generate the inertia matrix in
    joint space for the ur5

    lambdify boolean: if True returns a function to calculate
                      the inertia matrix. If False returns the
                      Sympy matrix
    """

    # check to see if we have our inertia matrix saved in file
    if os.path.isfile('%s/Mq' % self.config_folder):
        Mq = cloudpickle.load(open('%s/Mq' % self.config_folder, 'rb'))
    else:
        # get the Jacobians for each link's COM
        J = [self._calc_J('link%s' % ii, lambdify=False)
             for ii in range(self.num_links)]

        # transform each inertia matrix into joint space
        # sum together the effects of arm segments' inertia on each motor
        Mq = sp.zeros(self.num_joints)
        for ii in range(self.num_links):
            Mq += J[ii].T * self._M[ii] * J[ii]

        # save to file
        cloudpickle.dump(Mq, open('%s/Mq' % self.config_folder, 'wb'))

    if lambdify is False:
        return Mq
    return sp.lambdify(self.q, Mq)

def _calc_Mq_g(self, lambdify=True):
    """ Uses Sympy to generate the force of gravity in
    joint space for the ur5

    lambdify boolean: if True returns a function to calculate
                      the gravity term. If False returns the
                      Sympy matrix
    """

    # check to see if we have our gravity term saved in file
    if os.path.isfile('%s/Mq_g' % self.config_folder):
        Mq_g = cloudpickle.load(open('%s/Mq_g' % self.config_folder,
                                     'rb'))
    else:
        # get the Jacobians for each link's COM
        J = [self._calc_J('link%s' % ii, lambdify=False)
             for ii in range(self.num_links)]

        # transform the effect of gravity on each link into joint space
        # and sum together the effects on each motor
        Mq_g = sp.zeros(self.num_joints, 1)
        for ii in range(self.num_links):
            Mq_g += J[ii].T * self._M[ii] * self.gravity

        # save to file
        cloudpickle.dump(Mq_g, open('%s/Mq_g' % self.config_folder,
                                    'wb'))

    if lambdify is False:
        return Mq_g
    return sp.lambdify(self.q, Mq_g)

and wrapper functions

def Mq(self, q):
    """ Calculates the joint space inertia matrix for the ur5

    q np.array: joint angles
    """
    # check for function in dictionary
    if self._Mq is None:
        print('Generating inertia matrix function')
        self._Mq = self._calc_Mq()
    return np.array(self._Mq(*q))

def Mq_g(self, q):
    """ Calculates the force of gravity in joint space for the ur5

    q np.array: joint angles
    """
    # check for function in dictionary
    if self._Mq_g is None:
        print('Generating gravity effects function')
        self._Mq_g = self._calc_Mq_g()
    return np.array(self._Mq_g(*q)).flatten()

and we’re all set!

Putting it all together

Now we have nice clean code to generate everything we need for our controller. Using the controller developed in this post as a base, we can replace those calculations with the following nice compact code (which also includes a secondary null-space controller to keep the arm near resting joint angles):

# calculate position of the end-effector
# derived in the ur5 calc_TnJ class
xyz = robot_config.T('EE', q)
# calculate the Jacobian for the end effector
JEE = robot_config.J('EE', q)
# calculate the inertia matrix in joint space
Mq = robot_config.Mq(q)
# calculate the effect of gravity in joint space
Mq_g = robot_config.Mq_g(q)

# convert the mass compensation into end effector space
Mx_inv = np.dot(JEE, np.dot(np.linalg.inv(Mq), JEE.T))
svd_u, svd_s, svd_v = np.linalg.svd(Mx_inv)
# cut off any singular values that could cause control problems
singularity_thresh = .00025
for i in range(len(svd_s)):
    svd_s[i] = 0 if svd_s[i] < singularity_thresh else \
        1./float(svd_s[i])
# numpy returns U,S,V.T, so have to transpose both here
Mx = np.dot(svd_v.T, np.dot(np.diag(svd_s), svd_u.T))

kp = 100
kv = np.sqrt(kp)
# calculate desired force in (x,y,z) space
u_xyz = np.dot(Mx, target_xyz - xyz)
# transform into joint space, add vel and gravity compensation
u = (kp * np.dot(JEE.T, u_xyz) - np.dot(Mq, kv * dq) - Mq_g)

# calculate our secondary control signal
# calculated desired joint angle acceleration
q_des = (((robot_config.rest_angles - q) + np.pi) %
         (np.pi*2) - np.pi)
u_null = np.dot(Mq, (kp * q_des - kv * dq))

# calculate the null space filter
Jdyn_inv = np.dot(Mx, np.dot(JEE, np.linalg.inv(Mq)))
null_filter = (np.eye(robot_config.num_joints) -
               np.dot(JEE.T, Jdyn_inv))
u_null_filtered = np.dot(null_filter, u_null)

u += u_null_filtered

And there you go!

You can see all of this code up on my GitHub, along with a full example controlling a UR5 VREP model through reaching to a series of targets. It looks pretty much like this (slightly better because this gif was made before adding in the null space controller):

ur5.gif

Overhead of using lambdify instead of hard-coding

This was a big question that I had, because when I’m running simulations the time step is on the order of a few milliseconds, with the controller code called at every time step. So I reaaaally can’t afford a crazy overhead for using lambdify.

To test this, I used the handy Python timeit, which requires a bit of an awkward setup, but quite nicely calls the function a whole bunch of times (1,000,000 by default) and accounts for various other things going on that could affect the execution time.

I tested two sample functions, one simpler than the other. Here’s the code for setting up and testing the first function:

import timeit
import seaborn

# Test 1 ----------------------------------------------------------------------
print('\nTest function 1: ')
time_sympy1 = timeit.timeit(
        stmt = 'f(np.random.random(), np.random.random())',
        setup = 'import numpy as np;\
                import sympy as sp;\
                q0 = sp.Symbol("q0");\
                l0 = sp.Symbol("l0");\
                a = sp.cos(q0) * l0;\
                f = sp.lambdify((q0, l0), a, "numpy")')
print('Sympy lambdify function 1 time: ', time_sympy1)

time_hardcoded1 = timeit.timeit(
        stmt = 'np.cos(np.random.random())*np.random.random()',
        setup = 'import numpy as np')
print('Hard coded function 1 time: ', time_hardcoded1)

Pretty simple, a bit of a pain in the sympy setup, but other than that not bad at all. The second function I tested was just a random collection of cos and sin calls that resemble what gets computed in a Jacobian:

l1*np.sin(q0 - l0*np.sin(q1)*np.cos(q2) - l2*np.sin(q2) - l0*np.sin(q1) + q0*l0)*np.cos(q0) + l2*np.sin(q0)
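
For reference, here's a sketch of how the second comparison can be set up, following the same pattern as the first test (the specific symbols q0-q2 and l0-l2 and the random inputs are just for illustration):

time_sympy2 = timeit.timeit(
        stmt = 'f(np.random.random(), np.random.random(), np.random.random(),\
                  np.random.random(), np.random.random(), np.random.random())',
        setup = 'import numpy as np;\
                import sympy as sp;\
                q0, q1, q2 = sp.symbols("q0, q1, q2");\
                l0, l1, l2 = sp.symbols("l0, l1, l2");\
                a = l1*sp.sin(q0 - l0*sp.sin(q1)*sp.cos(q2) - l2*sp.sin(q2) -\
                    l0*sp.sin(q1) + q0*l0)*sp.cos(q0) + l2*sp.sin(q0);\
                f = sp.lambdify((q0, q1, q2, l0, l1, l2), a, "numpy")')
print('Sympy lambdify function 2 time: ', time_sympy2)

time_hardcoded2 = timeit.timeit(
        stmt = 'l1*np.sin(q0 - l0*np.sin(q1)*np.cos(q2) - l2*np.sin(q2) -\
                l0*np.sin(q1) + q0*l0)*np.cos(q0) + l2*np.sin(q0)',
        setup = 'import numpy as np;\
                q0, q1, q2, l0, l1, l2 = np.random.random(6)')
print('Hard coded function 2 time: ', time_hardcoded2)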

And here’s the results:

simtime

So it’s slower for sure, but again this is the difference in time after 1,000,000 function calls, so until some big optimization needs to be done using the SymPy lambdify function is definitely worth the slight gain in execution time for the insane convenience.

The full code for the timing tests here is also up on my GitHub.

Tagged , , , ,

Using VREP for simulation of force-controlled models

I’ve been playing around a bit with different simulators, and one that we’re a big fan of in the lab is VREP. It’s free for academics and you can talk to them about licences if you’re looking for commercial use. I haven’t actually had much experience with it before myself, so I decided to make a simple force controlled arm model to get experience using it. All in all, there were only a few weird API things that I had to get through, and once you have them down it’s pretty straight forward. This post is going to be about the steps that I needed to take to get things all set up. For a more general start-up on VREP check out All the code in this post and the model I use can be found up on my GitHub.

Getting the right files where you need them

As discussed in the remote API overview, you’ll need three files in whatever folder you want to run your Python script from to be able to hook into VREP remotely:

  • remoteApi.dll, remoteApi.dylib or remoteApi.so (depending on what OS you’re using)
  • vrep.py
  • vrepConst.py

You can find these files inside your VREP_HOME/programming/remoteApiBindings/python/python and VREP_HOME/programming/remoteApiBindings/lib/lib folders. Make sure that these files are in whatever folder you’re running your Python scripts from.

The model

It’s easy to create a new model to mess around with in VREP, so that’s the route I went, rather than importing one of their pre-made models and having some sneaky parameter setting cause me a bunch of grief. You can just right click->add then go at it. There are a bunch of tutorials so I’m not going to go into detail here. The main things are:

  • Make sure joints are in ‘Torque/force’ mode.
  • Make sure that joint ‘Motor enabled’ property is checked. The motor enabled property is in the ‘Show dynamic properties dialogue’ menu, which you find when you double click on the joint in the Scene Hierarchy.
  • Know the names of the joints as shown in the Scene Hierarchy.

So here’s a picture:

VREP
where you can see the names of the objects in the model highlighted in red, the Torque/force selection highlighted in blue, and the Motor enabled option highlighted in green. And of course my beautiful arm model in the background.

Setting up the continuous server

The goal is to connect VREP to Python so that we can send torques to the arm from our Python script and get the feedback necessary to calculate those torques. There are a few ways to set up a remote connection in VREP.

The basic one is they have you add a child script in VREP and attach it to an object in the world that sets up a port and then you hit go on your simulation and can then run your Python script to connect in. This gets old real fast. Fortunately, it’s easy to automate everything from Python so that you can connect in, start the simulation, run it for however long, and then stop the simulation without having to click back and forth.

The first step is to make sure that your remoteApiConnections.txt file in your VREP home directory is set up properly. A continuous server should be set up by default, but it doesn't hurt to double check. And you can take the chance to turn on debugging, which can be pretty helpful. So open up that file and make sure it looks like this:

portIndex1_port                 = 19997
portIndex1_debug                = true
portIndex1_syncSimTrigger       = true

Once that’s set up, when VREP starts we can connect in from Python. In our Python script, first we’ll close any open connections that might be around, and then we’ll set up a new connection in:

import vrep 

# close any open connections
vrep.simxFinish(-1)
# Connect to the V-REP continuous server
clientID = vrep.simxStart('127.0.0.1', 19997, True, True, 500, 5) 

if clientID != -1: # if we connected successfully
    print ('Connected to remote API server')

So once the connection is made, we check the clientID value to make sure that it didn’t fail, and then we carry on with our script.

Synchronizing

By default, VREP will run its simulation in its own thread, and both the simulation and the controller using the remote API will be running simultaneously. This can lead to some weird behaviour as things fall out of synch etc etc, so what we want instead is for the VREP sim to only run one time step for each time step the controller runs. To do that, we need to set VREP to synchronous mode. So the next few lines of our Python script look like:

    # --------------------- Setup the simulation 

    vrep.simxSynchronous(clientID,True)

and then later, once we’ve calculated our control signal, sent it over, and want the simulation to run one time step forward, we do that by calling

    # move simulation ahead one time step
    vrep.simxSynchronousTrigger(clientID)

Get the handles to objects of interest

OK the next chunk of code in our script uses the names of our objects (as specified in the Scene Hierarchy in VREP) to get an ID for each which to identify which object we want to send a command to or receive feedback from:

    joint_names = ['shoulder', 'elbow']
    # joint target velocities discussed below
    joint_target_velocities = np.ones(len(joint_names)) * 10000.0

    # get the handles for each joint and set up streaming
    joint_handles = [vrep.simxGetObjectHandle(clientID,
        name, vrep.simx_opmode_blocking)[1] for name in joint_names]

    # get handle for target and set up streaming
    _, target_handle = vrep.simxGetObjectHandle(clientID,
                    'target', vrep.simx_opmode_blocking)

Set dt and start the simulation

And the final thing that we’re going to do in our simulation set up is specify the timestep that we want to use, and then start the simulation. I found this in a forum post, and I must say whatever VREP lacks in intuitive API their forum moderators are on the ball. NOTE: To use a custom time step you have to also set the dt option in the VREP simulator to ‘custom’. The drop down is to the left of the ‘play’ arrow, and if you don’t have it set to ‘custom’ you won’t be able to set the time step through Python.

    dt = .01
    vrep.simxSetFloatingParameter(clientID,
            vrep.sim_floatparam_simulation_time_step,
            dt, # specify a simulation time step
            vrep.simx_opmode_oneshot)

    # --------------------- Start the simulation

    # start our simulation in lockstep with our code
    vrep.simxStartSimulation(clientID,
            vrep.simx_opmode_blocking)

Main loop

For this next chunk I’m going to cut out everything that’s not VREP, since I have a bunch of posts explaining the control signal derivation and forward transformation matrices.

So, once we’ve started the simulation, I’ve set things up for the arm to be controlled for 1 second and then for the simulation to stop and everything shut down and disconnect.

    while count < 1: # run for 1 simulated second

        # get the (x,y,z) position of the target
        _, target_xyz = vrep.simxGetObjectPosition(clientID,
                target_handle,
                -1, # retrieve absolute, not relative, position
                vrep.simx_opmode_blocking)
        if _ !=0 : raise Exception()
        track_target.append(np.copy(target_xyz)) # store for plotting
        target_xyz = np.asarray(target_xyz)

        q = np.zeros(len(joint_handles))
        dq = np.zeros(len(joint_handles))
        for ii,joint_handle in enumerate(joint_handles):
            # get the joint angles
            _, q[ii] = vrep.simxGetJointPosition(clientID,
                    joint_handle,
                    vrep.simx_opmode_blocking)
            if _ !=0 : raise Exception()
            # get the joint velocity
            _, dq[ii] = vrep.simxGetObjectFloatParameter(clientID,
                    joint_handle,
                    2012, # ID for angular velocity of the joint
                    vrep.simx_opmode_blocking)
            if _ !=0 : raise Exception()

In the above chunk of code, I think the big thing to point out is that I’m using vrep.simx_opmode_blocking in each call, instead of vrep.simx_opmode_buffer. The difference is that you for sure get the current values of the simulation when you use blocking, and you can be behind a time step using buffer.

Aside from that, the other notable things are that I raise an exception if the first parameter (which is the return code) is ever not 0, and that I use simxGetObjectFloatParameter to get the joint velocity instead of simxGetObjectVelocity, which has a rotational velocity component. Zero is the return code for 'everything worked', and if you don't check it and have some silly things going on you can be veeerrrrryyy mystified when things don't work. And what simxGetObjectVelocity returns is the rotational velocity of the joint relative to the world frame, and not the angular velocity of the joint in its own coordinates. That was also briefly confusing.
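
Since that return code check shows up after every call, one convenience (not in the original script, just a sketch) is to wrap it in a little helper:

def check_return(return_code):
    # 0 is the 'everything worked' return code mentioned above
    if return_code != 0:
        raise Exception('Remote API call failed with code %i' % return_code)

# which turns the feedback calls into something like
return_code, q[ii] = vrep.simxGetJointPosition(clientID,
        joint_handle, vrep.simx_opmode_blocking)
check_return(return_code)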

So the next thing I do is calculate u, which we’ll skip, and then we need to set the forces for the joint. This part of the API is real screwy. You can’t set the force applied to the joint directly. Instead, you have to set the target velocity of the joint to some really high value (hence that array we made before), and then modulate the maximum force that can be applied to that joint. Also important: When you want to apply a force in the other direction, you change the sign on the target velocity, but keep the force sign positive.

        # ... calculate u ... 

        for ii,joint_handle in enumerate(joint_handles):
            # get the current joint torque
            _, torque = \
                vrep.simxGetJointForce(clientID,
                        joint_handle,
                        vrep.simx_opmode_blocking)
            if _ !=0 : raise Exception()

            # if force has changed signs,
            # we need to change the target velocity sign
            if np.sign(torque) * np.sign(u[ii]) < 0:
                joint_target_velocities[ii] = \
                        joint_target_velocities[ii] * -1
                _ = vrep.simxSetJointTargetVelocity(clientID,
                        joint_handle,
                        joint_target_velocities[ii], # target velocity
                        vrep.simx_opmode_blocking)
                if _ != 0: raise Exception()

            # and now modulate the force
            _ = vrep.simxSetJointForce(clientID,
                    joint_handle,
                    abs(u[ii]), # force to apply
                    vrep.simx_opmode_blocking)
            if _ != 0: raise Exception()

        # move simulation ahead one time step
        vrep.simxSynchronousTrigger(clientID)
        count += dt

So as you can see we check the current torque, see if we need to change the sign on the target velocity, modulate the maximum allowed force, and then finally step the VREP simulation forward.

Conclusions

And there you go! Here’s an animation of it in action (note this is a super low quality gif and it looks way better / smoother when actually running it yourself):

reach_to_target

All in all, VREP has been enjoyable to work with so far. It didn’t take long to get things moving and off the ground, the visualization is great, and I haven’t even scratched the surface of what you can do with it. Best of all (so far) you can fully automate everything from Python. Hopefully this is enough to help some people get their own models going and save a few hours and headaches! Again, the full code and the model are up on my GitHub.

Nits

  • When you’re applying your control signal, make sure you test each joint in isolation, to make sure your torques push things in the direction you think they do. I had checked the rotation direction in VREP, but the control signal for both joints ended up needing to be multiplied by -1.
  • Another nit when you’re building your model, if you use the rotate button from the VREP toolbar on your model, wherever that joint rotates to is now 0 degrees. If you want to set the joint to start at 45 degrees, instead double click and change Pos[deg] option inside ‘Joint’ in Scene Object Properties.
Tagged , , , ,

Deep learning for control using augmented Hessian-free optimization

Traditionally, deep learning is applied to feed-forward tasks, like classification, where the output of the network doesn't affect the input to the network. It is a decidedly harder problem when the output is recurrently connected such that network output affects the input. Because of this, the application of deep learning methods to control was largely unexplored until a few years ago. Recently, however, there's been a lot of progress and research in this area. In this post I'm going to talk about an implementation of deep learning for control presented by Dr. Ilya Sutskever in his thesis Training Recurrent Neural Networks.

In his thesis, Dr. Sutskever uses augmented Hessian-free (AHF) optimization for learning. There are a bunch of papers and posts that go into details about AHF, here’s a good one by Andrew Gibiansky up on his blog, that I recommend you check out. I’m not going to really talk much here about what AHF is specifically, or how it differs from other methods, if you’re unfamiliar there are lots of places you can read up on it. Quickly, though, AHF is kind of a bag of tricks you can use with a fast method for estimating the curvature of the loss function with respect to the weights of a neural network, as well as the gradient, which allows it to make larger updates in directions where the loss function doesn’t change quickly. So rather than estimating the gradient and then doing a small update along each dimension, you can make the size of your update large in directions that change slowly and small along dimensions where things change quickly. And now that’s enough about that.

In this post I’m going to walk through using a Hessian-free optimization library (version 0.3.8) written by my compadre Dr. Daniel Rasmussen to train up a neural network to train up a 2-link arm, and talk about the various hellish gauntlets you need run to get something that works. Whooo! The first thing to do is install this Hessian-free library, linked above.

I’ll be working through code edited a bit for readability, to find the code in full you can check out the files up on my GitHub.

Build the network

Dr. Sutskever specified the structure of the network in his thesis to be 4 layers: 1) a linear input layer, 2) 100 Tanh nodes, 3) 100 Tanh nodes, 4) linear output layer. The network is connected up with the standard feedforward connections from 1 to 2 to 3 to 4, plus recurrent connections on 2 and 3 to themselves, plus a ‘skip’ connection from layer 1 to layer 3. Finally, the input to the network is the target state for the plant and the current state of the plant. So, lots of recursion! Here’s a picture:

network structure

The output layer connects in to the plant, and, for those unfamiliar with control theory terminology, ‘plant’ just means the system that you’re controlling. In this case an arm simulation.

Before we can go ahead and set up the network that we want to train, we also need to specify the loss function that we’re going to be using during training. The loss function in Ilya’s thesis is a standard one:

L(\theta) = \sum\limits^{N-1}\limits_{t=0} \ell(\textbf{u}_t) + \ell_f(\textbf{x}_N),
\ell(\textbf{u}_t) = \alpha \frac{||\textbf{u}_t||^2}{2},
\ell_f(\textbf{x}_N) = \frac{||\textbf{x}^* - \textbf{x}_N||^2}{2}

where L(\theta) is the total cost of the trajectory generated with \theta, the set of network parameters, \ell(\textbf{u}) is the immediate state cost, \ell_f(\textbf{x}) is the final state cost, \textbf{x} is the state of the arm, \textbf{x}^* is the target state of the arm, \textbf{u} is the control signal (torques) that drives the arm, and \alpha is a gain value.
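
Just to make those terms concrete, here's a small numpy sketch of evaluating this cost for a given trajectory (the array names U, x_N, x_star and the gain value are made up for illustration):

import numpy as np

def trajectory_cost(U, x_N, x_star, alpha=1e-3):
    # immediate costs: alpha * ||u_t||^2 / 2 summed over the trajectory
    control_cost = alpha * np.sum(np.linalg.norm(U, axis=1)**2) / 2.0
    # final state cost: ||x* - x_N||^2 / 2
    final_cost = np.linalg.norm(x_star - x_N)**2 / 2.0
    return control_cost + final_cost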

To code this up using the hessianfree library we do:

from hessianfree import RNNet
from hessianfree.nonlinearities import (Tanh, Linear, Plant)
from hessianfree.loss_funcs import SquaredError, SparseL2

l2gain = 10e-3 * dt # gain on control signal loss
rnn = RNNet(
    # specify the number of nodes in each layer
    shape=[num_states * 2, 96, 96, num_states, num_states],
    # specify the function of the nodes in each layer
    layers=[Linear(), Tanh(), Tanh(), Linear(), plant],
    # specify the layers that have recurrent connections
    rec_layers=[1,2],
    # specify the connections between layers
    conns={0:[1, 2], 1:[2], 2:[3], 3:[4]},
    # specify the loss function
    loss_type=[
        # squared error between plant output and targets
        SquaredError(),
        # penalize magnitude of control signal (output of layer 3)
        SparseL2(l2gain, layers=[3])],
    load_weights=W,
    use_GPU=True)

Note that if you want to run it on your GPU you’ll need PyCuda and sklearn installed. And a GPU.

An important thing to note as well is that in Dr. Sutskever's thesis, when we're calculating the squared error of the distance from the arm state to the target, this is measured in joint angles. It's kind of a weird set up to be looking at the movement of the hand but have your cost function in joint-space instead of end-effector space, but it definitely simplifies training by making the cost more directly relatable to the control signal. So we need to calculate the joint angles of the arm that will have the hand at different targets around a circle. To do this we'll take advantage of our inverse kinematics solver from way back when, and use the following code:

def gen_targets(arm, n_targets=8, sig_len=100):
    #Generate target angles corresponding to target
    #(x,y) coordinates around a circle
    import scipy.optimize

    x_bias = 0
    if arm.DOF == 2:
        y_bias = .35
        dist = .075
    elif arm.DOF == 3:
        y_bias = .5
        dist = .2

    # set up the reaching trajectories around circle
    targets_x = [dist * np.cos(theta) + x_bias \
                    for theta in np.linspace(0, np.pi*2, 65)][:-1]
    targets_y = [dist * np.sin(theta) + y_bias \
                    for theta in np.linspace(0, np.pi*2, 65)][:-1]

    joint_targets = []
    for ii in range(len(targets_x)):
        joint_targets.append(arm.inv_kinematics(xy=(targets_x[ii],
                                                    targets_y[ii])))
    targs = np.asarray(joint_targets)

    # repeat the targets over time
    for ii in range(targs.shape[1]-1):
        targets = np.concatenate(
            (np.outer(targs[:, ii], np.ones(sig_len))[:, :, None],
             np.outer(targs[:, ii+1], np.ones(sig_len))[:, :, None]), axis=-1)
    targets = np.concatenate((targets, np.zeros(targets.shape)), axis=-1)
    # only want to penalize the system for not being at the
    # target at the final state, set everything before to np.nan
    targets[:, :-1] = np.nan 

    return targets

And you can see in the last couple lines that to implement the distance to target as a final state cost penalty only we just set all of the targets before the final time step equal to np.nan. If we wanted to penalize distance to target throughout the whole trajectory we would just comment that line out.

Create the plant

You’ll notice in the code that defines our RNN I set the last layer of the network to be plant, but that that’s not defined anywhere. Let’s talk. There are a couple of things that we’re going to need to incorporate our plant into this network and be able to use any deep learning method to train it. We need to be able to:

  1. Simulate the plant forward; i.e. pass in input and get back the resulting plant state at the next timestep.
  2. Calculate the derivative of the plant state with respect to the input; i.e. how do small changes in the input affect the state.
  3. Calculate the derivative of the plant state with respect to the previous state; i.e. how do small changes in the plant state affect the state at the next timestep.
  4. Calculate the derivative of the plant output with respect to its state; i.e. how do small changes in the current position of the state affect the output of the plant.

So 1 is easy, we have the arm simulations that we want already, they’re up on my GitHub. Number 4 is actually trivial too, because the output of our plant is going to be the state itself, so the derivative of the output with respect to the state is just the identity matrix.
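
In code, number 4 amounts to something like this (a sketch; self.state here is the batch of plant states used in the finite differences code below):

# derivative of the plant output with respect to its state: the output
# is the state itself, so this is just the identity for every trial
# (the extra axis matches the finite difference tensors below)
d_output_FD = np.tile(np.eye(self.state.shape[1]),
                      (self.state.shape[0], 1, 1))[..., None]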

For 2 and 3, we’re going to need to calculate some derivatives. If you’ve read the last few posts you’ll note that I’m on a finite differences kick. So let’s get that going! Because no one wants to calculate derivatives!

Important note, the notation in these next couple pieces of code is going to be a bit different from my normal notation because they’re matching with the hessianfree library notation, which is coming from a reinforcement learning literature background instead of a control theory background. So, s is the state of the plant, and x is the input to the plant. I know, I know. All the same, make sure to keep that in mind.

# calculate ds0/dx0 with finite differences
d_input_FD = np.zeros((x.shape[0], # number of trials
                       x.shape[1], # number of inputs
                       self.state.shape[1])) # number of states
for ii in range(x.shape[1]):
    # calculate state adding eps to x[ii]
    self.reset_plant(self.prev_state)
    inc_x = x.copy()
    inc_x[:, ii] += self.eps
    self.activation(inc_x)
    state_inc = self.state.copy()
    # calculate state subtracting eps from x[ii]
    self.reset_plant(self.prev_state)
    dec_x = x.copy()
    dec_x[:, ii] -= self.eps
    self.activation(dec_x)
    state_dec = self.state.copy()

    d_input_FD[:, :, ii] = \
        (state_inc - state_dec) / (2 * self.eps)
d_input_FD = d_input_FD[..., None]

Alrighty. First we create a tensor to store the results. Why is it a tensor? Because we're going to be doing a bunch of runs at once. So our input dimensions are actually trials x size_input. When we then take the partial derivative, we end up with trials many size_input x size_state matrices. Then we increase each of the parameters of the input slightly one at a time and store the results, decrease them all one at a time and store the results, and compute our approximation of the gradient.

Next we’ll do the same for calculating the derivative of the state with respect to the previous state.

# calculate ds1/ds0
d_state_FD = np.zeros((x.shape[0], # number of trials
                       self.state.shape[1], # number of states
                       self.state.shape[1])) # number of states
for ii in range(self.state.shape[1]):
    # calculate state adding eps to self.state[ii]
    state = np.copy(self.prev_state)
    state[:, ii] += self.eps
    self.reset_plant(state)
    self.activation(x)
    state_inc = self.state.copy()
    # calculate state subtracting eps from self.state[ii]
    state = np.copy(self.prev_state)
    state[:, ii] -= self.eps
    self.reset_plant(state)
    self.activation(x)
    state_dec = self.state.copy()

    d_state_FD[:, :, ii] = \
        (state_inc - state_dec) / (2 * self.eps)
d_state_FD = d_state_FD[..., None]

Great! We’re getting closer to having everything we need. Another thing we need is a wrapper for running our arm simulation. It’s going to look like this:

def activation(self, x):
    state = []
    # iterate through and simulate the plant forward
    # for each trial
    for ii in range(x.shape[0]):
        self.arm.reset(q=self.state[ii, :self.arm.DOF],
                       dq=self.state[ii, self.arm.DOF:])
        self.arm.apply_torque(x[ii])
        state.append(np.hstack([self.arm.q, self.arm.dq]))
    state = np.asarray(state)

    self.state = self.squashing(state)

This is definitely not the fastest code to run. Ideally we would put the state and input into vectors and do a single set of computations for each call to activation, rather than having that for loop in there. Unfortunately, though, we're not assuming that we have access to the dynamics equations / will be able to pass in vector states and inputs.

Squashing
Looking at the above code it seems pretty clear what's going on, except you might notice that last line calling self.squashing. What's going on there?

The squashing function looks like this:

def squashing(self, x):
    index_below = np.where(x < -2*np.pi)
    x[index_below] = np.tanh(x[index_below]+2*np.pi) - 2*np.pi
    index_above = np.where(x > 2*np.pi)
    x[index_above] = np.tanh(x[index_above]-2*np.pi) + 2*np.pi
    return x

All that’s happening here is that we’re taking our input, and doing nothing to it as long as it doesn’t start to get too positive or too negative. If it does then we just taper it off and prevent it from going off to infinity. So running a 1D vector through this function we get:

squashed
This ends up being a pretty important piece of the code here. Basically it prevents wild changes to the weights during learning from breaking the system down. So the state of the plant can't go off to infinity and cause an error to be thrown, stopping our simulation. But because the target state is well within the bounds of where the squashing function does nothing, post-training we'll still be able to use the resulting network to control a system that doesn't have this fail safe built in. Think of this function as training wheels that catch you only if you start falling.
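
If you want to see the shape of the function for yourself, a standalone version (matplotlib assumed) that reproduces the plot above looks like:

import numpy as np
import matplotlib.pyplot as plt

def squashing(x):
    # same function as above, minus the self parameter
    index_below = np.where(x < -2*np.pi)
    x[index_below] = np.tanh(x[index_below] + 2*np.pi) - 2*np.pi
    index_above = np.where(x > 2*np.pi)
    x[index_above] = np.tanh(x[index_above] - 2*np.pi) + 2*np.pi
    return x

x = np.linspace(-4*np.pi, 4*np.pi, 1000)
plt.plot(x, squashing(np.copy(x)))  # pass a copy, squashing modifies in place
plt.show()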

With that, we now have pretty much all of the parts necessary to begin training our network!

Training the network

We’re going to be training this network on the centre-out reaching task, where you start at a centre point and reach to a bunch of target locations around a circle. I’m just going to be re-implementing the task as it was done in Dr. Sutskever’s thesis, so we’ll have 64 targets around the circle, and train using a 2-link arm. Here’s the code that we’ll use to actually run the training:

for ii in range(last_trial+1, num_batches):
    # train the network for batch_size epochs on rollouts
    # generated with the current weights
    err = rnn.run_batches(plant, targets=None,
              max_epochs=batch_size,
              optimizer=HessianFree(CG_iter=96, init_damping=100))

    # save the weights to file, track trial and error
    # err = rnn.error(inputs)
    err = rnn.best_error
    name = 'weights/rnn_weights-trial%04i-err%.3f'%(ii, err)
    np.savez_compressed(name, rnn.W)

Training your own network

A quick aside: if you want to run this code yourself, get a real good computer, have an arm simulation ready, the hessianfree Python library installed, and download and run this train_hf.py file. (Note: I used version 0.3.8 of the hessianfree library, which you can install using pip install hessianfree==0.3.8) This will start training and save the weights into a weights/ folder, so make sure that that exists in the same folder as train_hf.py. If you want to view the results of the training at any point run the plot_error.py file, which will load in the most recent version of the weights and plot the error so far. If you want to generate an animated plot like I have below run gen_animation_plots.py and then the last command from my post on generating animated gifs.

Another means of seeing the results of your trained up network is to use the controller I’ve implemented in my controls benchmarking suite, which looks for a set of saved weights in the controllers/weights folder, and will load it in and use it to generate command signals for the arm by running it with

python run.py arm2_python ahf reach --dt=1e-2

where you replace arm2_python with whatever arm model you trained your model on. Note the --dt=1e-2 flag, that is important because the model was trained with a .01 timestep and things get a bit weird if you suddenly change the dynamics on the controller.

OK let’s look at some results!

Results

OK full discretion, these results are not optimizing the cost function we discussed above. They're implementing a simpler cost function that only looks at the final state, i.e. it doesn't penalize the magnitude of the control signal. I did this because Dr. Sutskever says in his thesis he was able to optimize with just the final state cost using much smaller networks. I originally looked at networks with 96 neurons in each layer, and it just took forgoddamnedever to run. So after running for 4 weeks (not joking) and needing to make some more changes I dropped the number of neurons and simplified the task.

The results below are from running a network with 32 neurons in each layer controlling this 2-link arm, and took another 4-5 weeks to train up.
weights32

Hey that looks good! Not bad, augmented Hessian-free learning, not bad. It had a pretty consistent (if slow) decline in the error rate, with a few crazy bumps from which it quickly recovered. Also take note that each training iteration is actually 32 runs, so it's not 12,500-ish runs, it's closer to 400,000 training runs that it took to get here.

One biggish thing that was a pain was that it turns out that I only trained the neural network for reaching in the one direction, and when you only train it to reach one way it doesn't generalize to reaching back to the starting point (which, fair enough). But I didn't realize this until I took the trained network and ran it in the benchmarking code, at which point I was not keen to redo all of the training it took to get the neural network to the level of accuracy it was at under a more complicated training set. The downside of this is that even though I've implemented a controller that takes in the trained network and uses it to control the arm, to do the reaching task I have to just do a hard reset after the arm reaches the target, because it can't reach back to the center, like all the other controllers. All the same, here's an animation of the trained up AHF controller reaching to 8 targets (it was trained on all 64 above though):

animation

Things don’t always go so smoothly, though. Here’s results from another training run that took around 2-3 weeks, and uses a different 2-link arm model (translated from Matlab code written by Dr. Emo Todorov):

weights32_suts2

What I found frustrating about this was that if you look at the error over time then this arm is doing as well or better than the previous arm at a lot of points. But the corresponding trajectories look terrible, like something you would see in a horror movie based around getting good machine learning results. This of course comes down to how I specified the cost function, and when I looked at the trajectories plotted over time the velocity of the arm is right at zero at the final time step, which is not quiiiitte the case for the first controller. So this second network has found a workaround to minimize the cost function I specified in a way I did not intend. To prevent this, doing something like weighting the distance to target heavier than non-zero velocity would probably work. Or possibly just rerunning the training with a different random starting point would get out a better controller; I don't have a great feel for how important the random initialization is, but I'm hoping that it's not all too important and its effects go to zero with enough training. Also, it should be noted I've run the first network for 12,500 iterations and the second for less than 6,000, so I'll keep letting them run and maybe it will come around. The first one looked pretty messy too until about 4,000 iterations in.

Training regimes

Frustratingly, the way that you train deep networks is very important. So, very much like the naive deep learning network trainer that I am, I tried the first thing that pretty much anyone would try:

  • run the network,
  • update the weights,
  • repeat.

This is what I’ve done in the results above. And it worked well enough in that case.

If you remember back to the iLQR I made a little while ago, I was able to change the cost function to be

L(\theta) = \sum\limits^{N-1}\limits_{t=0} \ell(\textbf{u}_t, \textbf{x}_t) + \ell_f(\textbf{x}_N),
\ell(\textbf{u}_t, \textbf{x}_t) = \alpha \frac{||\textbf{u}_t||^2}{2} + \frac{||\textbf{x}^* - \textbf{x}_t||^2}{2},
\ell_f(\textbf{x}_N) = \frac{||\textbf{x}^* - \textbf{x}_N||^2}{2}

(i.e. to include a penalty for distance to target throughout the trajectory and not just at the final time step) which resulted in straighter trajectories when controlling the 2-link arm. So I thought I would try this here as well. Sadly (incredibly sadly), this was fairly fruitless. The network didn’t really learn or improve much at all.

After much consideration and quandary on my part, I talked with Dr. Dan and he suggested that I try another method:

  • run the network,
  • record the input,
  • hold the input constant for a few batches of weight updating,
  • repeat.

This method gave much better results. BUT WHY? I hear you ask! Good question. Let me give explaining it a go.

Essentially, it’s because the cost function is more complex now. In the first training method, the output from the plant is fed back into the network as input at every time step. When the cost function was simpler this was OK, but now we’re getting very different input to train on at every iteration. So the system is being pulled in different directions back and forth at every iteration. In the second training regime, the same input is given several times in a row, which let’s the system follow the same gradient for a few training iterations before things change again. In my head I picture this as giving the algorithm a couple seconds to catch its breath dunking it back underwater.

This is a method that’s been used in a bunch of places recently. One of the more high-profile instances is in the results published from DeepMind on deep RL for control and for playing Go. And indeed, it also works well here.

To implement this training regime, we set up the following code:

for ii in range(last_trial+1, num_batches):

    # run the plant forward once
    rnn.forward(input=plant, params=rnn.W)

    # get the input and targets from above rollout
    inputs = plant.get_vecs()[0].astype(np.float32)
    targets = np.asarray(plant.get_vecs()[1], dtype=np.float32)

    # train a bunch of batches using the same input every time
    # to allow the network a chance to minimize things with
    # stable input (speeds up training)
    err = rnn.run_batches(inputs, targets, max_epochs=batch_size,
              optimizer=HessianFree(CG_iter=96, init_damping=100))

    # save the weights to file, track trial and error
    # err = rnn.error(inputs)
    err = rnn.best_error
    name = 'weights/rnn_weights-trial%04i-err%.3f'%(ii, err)
    np.savez_compressed(name, rnn.W)

So you can see that we do one rollout with the weights, then go in and get the inputs and targets that were used in that rollout, and start training the network while holding those constant for batch_size epochs (training sessions). From a little bit of messing around I’ve found batch_size=32 to be a pretty good number. So then it runs 32 training iterations where it’s updating the weights, and then saves those weights (because we want a loooottttt of check-points) and then restarts the loop.

Embarrassingly, I’ve lost my simulation results from this trial, somehow…so I don’t have any nice plots to back up the above, unfortunately. But since this is just a blog post I figured I would at least talk about it a little bit, since people might still find it useful if they’re just getting into the field like me. and just update this post whenever I re-run them. If I rerun them.

What I do have, however, are results where this method doesn’t work! I tried this with the simpler cost function, that only looks at the final state distance from the target, and it did not go so well. Let’s look at that one!

weights32_suts

My guess here is basically that the system has gotten to a point where it’s narrowed things down in the parameter space and now when you run 32 batches it’s overshooting. It needs feedback about its updates after every update at this point. That’s my guess, at least. So it could be the case that for more complex cost functions you’d want to train it while holding the input constant for a while, and then when the error starts to plateau switch to updating the input after every parameter update.

Conclusions

All in all, AHF for training neural networks in control is pretty awesome. There are of course still some major hold-backs, mostly related to how long it takes to train up a network, and having to guess at effective training regimes and network structures etc. But! It was able to train up a relatively small neural network to move an arm model from a center point to 64 targets around a circle, with no knowledge of the system under control at all. In Dr. Sutskever’s thesis he goes on to use the same set up under more complicated circumstances, such as when there’s a feedback delay, or a delay on the outgoing control signal, and unexpected noise etc, so it is able to learn under a number of different, fairly complex situations. Which is pretty slick.

Related to the insane training time required, I very easily could be missing some basic thing that would help speed things up. If you, reader, get ambitious and run the code on your own machine and find out useful methods for speeding up the training please let me know! Personally, my plan is to next investigate guided policy search, which seems like it’s found a way around this crazy training time.

Tagged , , , , , ,

The iterative Linear Quadratic Regulator algorithm

A few months ago I posted on Linear Quadratic Regulators (LQRs) for control of non-linear systems using finite-differences. The gist of it was at every time step linearize the dynamics, quadratize (it could be a word) the cost function around the current point in state space and compute your feedback gain off of that, as though the dynamics were both linear and consistent (i.e. didn’t change in different states). And that was pretty cool because you didn’t need all the equations of motion and inertia matrices etc to generate a control signal. You could just use the simulation you had, sample it a bunch to estimate the dynamics and value function, and go off of that.

The LQR, however, operates with maverick disregard for changes in the future. Careless of the consequences, it optimizes assuming the linear dynamics approximated at the current time step hold for all time. It would be really great to have an algorithm that was able to plan out and optimize a sequence, mindful of the changing dynamics of the system.

This is exactly what the iterative Linear Quadratic Regulator method (iLQR) was designed for. iLQR is an extension of LQR control, and the idea here is basically to optimize a whole control sequence rather than just the control signal for the current point in time. The basic flow of the algorithm (a rough code sketch follows the list) is:

  1. Initialize with initial state x_0 and initial control sequence \textbf{U} = [u_{t_0}, u_{t_1}, ..., u_{t_{N-1}}].
  2. Do a forward pass, i.e. simulate the system using (x_0, \textbf{U}) to get the trajectory through state space, \textbf{X}, that results from applying the control sequence \textbf{U} starting in x_0.
  3. Do a backward pass, estimate the value function and dynamics for each (\textbf{x}, \textbf{u}) in the state-space and control signal trajectories.
  4. Calculate an updated control signal \hat{\textbf{U}} and evaluate cost of trajectory resulting from (x_0, \hat{\textbf{U}}).
    1. If |\textrm{cost}(x_0, \hat{\textbf{U}}) - \textrm{cost}(x_0, \textbf{U})| < \textrm{threshold} then we've converged and exit.
    2. If \textrm{cost}(x_0, \hat{\textbf{U}}) < \textrm{cost}(x_0, \textbf{U}), then set \textbf{U} = \hat{\textbf{U}}, and change the update size to be more aggressive. Go back to step 2.
    3. If \textrm{cost}(x_0, \hat{\textbf{U}}) \geq \textrm{cost}(x_0, \textbf{U}) change the update size to be more modest. Go back to step 3.
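
In rough Python-flavoured pseudocode the outer loop looks something like the following (simulate, backward_pass, update_controls, and the variables x0, U_init, max_iter, and threshold are all made-up stand-ins for the pieces described in the rest of this post):

U = U_init.copy()  # initial control sequence guess
lamb = 1.0  # Levenberg-Marquardt regularization term, discussed below

for ii in range(max_iter):
    X, cost = simulate(x0, U)  # step 2: forward pass
    k, K = backward_pass(X, U, lamb)  # step 3: backward pass
    U_new = update_controls(X, U, k, K)  # step 4: updated control sequence
    X_new, cost_new = simulate(x0, U_new)

    if cost_new < cost:  # step 4.2: accept the update, be more aggressive
        U = U_new
        lamb /= 10.0
        if abs(cost_new - cost) < threshold:  # step 4.1: converged
            break
    else:  # step 4.3: reject the update, be more conservative
        lamb *= 10.0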

There are a bunch of descriptions of iLQR, and it also goes by names like ‘the sequential linear quadratic algorithm’. The paper that I’m going to be working off of is by Yuval Tassa out of Emo Todorov’s lab, called Control-limited differential dynamic programming. And the Python implementation of this can be found up on my github in my Control repo. Also, a big thank you to Dr. Emo Todorov who provided Matlab code for the iLQG algorithm, which was super helpful.

Defining things

So let’s dive in. Formally defining things, we have our system \textbf{x}, and dynamics described with the function \textbf{f}, such that

\textbf{x}_{t+1} = \textbf{f}(\textbf{x}_t, \textbf{u}_t),

where \textbf{u} is the input control signal. The trajectory \{\textbf{X}, \textbf{U}\} is the sequence of states \textbf{X} = \{\textbf{x}_0, \textbf{x}_1, ..., \textbf{x}_N\} that result from applying the control sequence \textbf{U} = \{\textbf{u}_0, \textbf{u}_1, ..., \textbf{u}_{N-1}\} starting in the initial state \textbf{x}_0.

Now we need to define all of our cost related equations, so we know exactly what we’re dealing with.

Define the total cost function J, which is the sum of the immediate cost, \ell, from each state in the trajectory plus the final cost, \ell_f:

J(\textbf{x}_0, \textbf{U}) = \sum\limits^{N-1}\limits_{t=0} \ell(\textbf{x}_t, \textbf{u}_t) + \ell_f(\textbf{x}_N).

Letting \textbf{U}_t = \{\textbf{u}_t, \textbf{u}_{t+1}, ..., \textbf{u}_{N-1}\}, we define the cost-to-go as the sum of costs from time t to N:

J_t(\textbf{x}, \textbf{U}_t) = \sum\limits^{N-1}\limits_{i=t} \ell(\textbf{x}_i, \textbf{u}_i) + \ell_f(\textbf{x}_N).

The value function V at time t is the optimal cost-to-go from a given state:

V_t(\textbf{x}) = \min\limits_{\textbf{U}_t} J_t(\textbf{x}, \textbf{U}_t),

where the above equation just says that the optimal cost-to-go is found by using the control sequence \textbf{U}_t that minimizes J_t.

At the final time step, N, the value function is simply

V(\textbf{x}_N) = \ell_f(\textbf{x}_N).

For all preceding time steps, we can write the value function as a function of the immediate cost \ell(\textbf{x}, \textbf{u}) and the value function at the next time step:

V(\textbf{x}) = \min\limits_{\textbf{u}} \left[ \ell(\textbf{x}, \textbf{u}) + V(\textbf{f}(\textbf{x}, \textbf{u})) \right].

NOTE: In the paper, they use the notation V'(\textbf{f}(\textbf{x}, \textbf{u})) to denote the value function at the next time step, which is redundant since \textbf{x}_{t+1} = \textbf{f}(\textbf{x}_t, \textbf{u}_t), but it comes in handy later when they drop the dependencies to simplify notation. So, heads up: V' = V(\textbf{f}(\textbf{x}, \textbf{u})).

Forward rollout

The forward rollout consists of two parts. The first part is simulating things to generate (\textbf{X}, \textbf{U}), from which we can calculate the overall cost of the trajectory and find out the path that the arm will take. To improve things, though, we'll need a lot of information about the partial derivatives of the system; calculating these is the second part of the forward rollout phase.

To calculate all these partial derivatives we’ll use (\textbf{X}, \textbf{U}). For each (\textbf{x}_t, \textbf{u}_t) we’ll calculate the derivatives of \textbf{f}(\textbf{x}_t, \textbf{u}_t) with respect to \textbf{x}_t and \textbf{u}_t, which will give us what we need for our linear approximation of the system dynamics.

To get the information we need about the value function, we'll need the first and second derivatives of \ell(\textbf{x}_t, \textbf{u}_t) and \ell_f(\textbf{x}_N) with respect to \textbf{x}_t and \textbf{u}_t.

So all in all, we need to calculate \textbf{f}_\textbf{x}, \textbf{f}_\textbf{u}, \ell_\textbf{x}, \ell_\textbf{u}, \ell_\textbf{xx}, \ell_\textbf{ux}, \ell_\textbf{uu}, where the subscripts denote a partial derivative, so \ell_\textbf{x} is the partial derivative of \ell with respect to \textbf{x}, \ell_\textbf{xx} is the second derivative of \ell with respect to \textbf{x}, etc. And to calculate all of these partial derivatives, we’re going to use finite differences! Just like in the LQR with finite differences post. Long story short, load up the simulation for every time step, slightly vary one of the parameters, and measure the resulting change.
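
As a concrete example, estimating \textbf{f}_\textbf{x} with central differences might look something like this (simulate here is a made-up stand-in for stepping the plant one time step forward from a given state and control signal), and \textbf{f}_\textbf{u} is the same idea with the perturbation applied to \textbf{u} instead:

import numpy as np

def calc_fx(simulate, x, u, eps=1e-4):
    # df/dx: how the next state changes with small changes to each state variable
    fx = np.zeros((len(x), len(x)))
    for ii in range(len(x)):
        x_inc = np.copy(x)
        x_inc[ii] += eps
        x_dec = np.copy(x)
        x_dec[ii] -= eps
        fx[:, ii] = (simulate(x_inc, u) - simulate(x_dec, u)) / (2 * eps)
    return fx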

Once we have all of these, we’re ready to move on to the backward pass.

Backward pass

Now, we started out with an initial trajectory, but that was just a guess. We want our algorithm to take it and then converge to a local minimum. To do this, we’re going to add some perturbing values and use them to minimize the value function. Specifically, we’re going to compute a local solution to our value function using a quadratic Taylor expansion. So let’s define Q(\delta \textbf{x}, \delta \textbf{u}) to be the change in our value function at (\textbf{x}, \textbf{u}) as a result of small perturbations (\delta \textbf{x}, \delta \textbf{u}):

Q(\delta \textbf{x}, \delta \textbf{u}) = \ell (\textbf{x} + \delta \textbf{x}, \textbf{u} + \delta \textbf{u}) + V(\textbf{f}(\textbf{x} + \delta\textbf{x}, \textbf{u} + \delta \textbf{u})).

The second-order expansion of Q is given by:

Q_\textbf{x} = \ell_\textbf{x} + \textbf{f}_\textbf{x}^T V'_\textbf{x},

Q_\textbf{u} = \ell_\textbf{u} + \textbf{f}_\textbf{u}^T V'_\textbf{x},

Q_\textbf{xx} = \ell_\textbf{xx} + \textbf{f}_\textbf{x}^T V'_\textbf{xx} \textbf{f}_\textbf{x} + V'_\textbf{x} \cdot \textbf{f}_\textbf{xx},

Q_\textbf{ux} = \ell_\textbf{ux} + \textbf{f}_\textbf{u}^T V'_\textbf{xx} \textbf{f}_\textbf{x}+ V'_\textbf{x} \cdot \textbf{f}_\textbf{ux},

Q_\textbf{uu} = \ell_\textbf{uu} + \textbf{f}_\textbf{u}^T V'_\textbf{xx} \textbf{f}_\textbf{u}+ V'_\textbf{x} \cdot \textbf{f}_\textbf{uu}.

Remember that V' = V(\textbf{f}(\textbf{x}, \textbf{u})), which is the value function at the next time step. NOTE: All of the second derivatives of \textbf{f} are zero in the systems we’re controlling here, so when we calculate the second derivatives we don’t need to worry about doing any tensor math, yay!

Given the second-order expansion of Q, we can compute the optimal modification to the control signal, \delta \textbf{u}^*. This control signal update has two parts, a feedforward term, \textbf{k}, and a feedback term \textbf{K} \delta\textbf{x}. The optimal update is the \delta\textbf{u} that minimizes the cost of Q:

\delta\textbf{u}^*(\delta \textbf{x}) = \min\limits_{\delta\textbf{u}}Q(\delta\textbf{x}, \delta\textbf{u}) = \textbf{k} + \textbf{K}\delta\textbf{x},

where \textbf{k} = -Q^{-1}_\textbf{uu} Q_\textbf{u} and \textbf{K} = -Q^{-1}_\textbf{uu} Q_\textbf{ux}.

Derivation can be found in this earlier paper by Li and Todorov. By then substituting this policy into the expansion of Q we get a quadratic model of V. They do some mathamagics and come out with:

V_\textbf{x} = Q_\textbf{x} - \textbf{K}^T Q_\textbf{uu} \textbf{k},

V_\textbf{xx} = Q_\textbf{xx} - \textbf{K}^T Q_\textbf{uu} \textbf{K}.

So now we have all of the terms that we need, and they’re defined in terms of the values at the next time step. We know the value of the value function at the final time step V_N = \ell_f(\textbf{x}_N), and so we’ll simply plug this value in and work backwards in time recursively computing the partial derivatives of Q and V.
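
Written out loosely in numpy, one step of that backwards recursion looks something like the following sketch (with l_x, l_u, f_x, etc. standing in for the arrays of derivatives calculated during the forward rollout, and V_x, V_xx initialized from the final state cost):

# loose sketch of one backward pass step, for t running from N-1 down to 0
Q_x = l_x[t] + np.dot(f_x[t].T, V_x)
Q_u = l_u[t] + np.dot(f_u[t].T, V_x)
Q_xx = l_xx[t] + np.dot(f_x[t].T, np.dot(V_xx, f_x[t]))
Q_ux = l_ux[t] + np.dot(f_u[t].T, np.dot(V_xx, f_x[t]))
Q_uu = l_uu[t] + np.dot(f_u[t].T, np.dot(V_xx, f_u[t]))

Q_uu_inv = np.linalg.pinv(Q_uu)  # replaced by the regularized inverse below
k[t] = -np.dot(Q_uu_inv, Q_u)
K[t] = -np.dot(Q_uu_inv, Q_ux)

V_x = Q_x - np.dot(K[t].T, np.dot(Q_uu, k[t]))
V_xx = Q_xx - np.dot(K[t].T, np.dot(Q_uu, K[t]))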

Calculate control signal update

Once those are all calculated, we can calculate the gain matrices, \textbf{k} and \textbf{K}, for our control signal update. Huzzah! Now all that’s left to do is evaluate this new trajectory. So we set up our system

\hat{\textbf{x}}_0 = \textbf{x}_0,

\hat{\textbf{u}}_t = \textbf{u}_t + \textbf{k}_t + \textbf{K}_t (\hat{\textbf{x}}_t - \textbf{x}_t),

\hat{\textbf{x}}_{t+1} = \textbf{f}(\hat{\textbf{x}}_t, \hat{\textbf{u}}_t),

and record the cost. Now if the cost of the new trajectory (\hat{\textbf{X}}, \hat{\textbf{U}}) is less than the cost of (\textbf{X}, \textbf{U}) then we set \textbf{U} = \hat{\textbf{U}} and go do it all again! And when the relative change in cost from one update to the next drops below a threshold value, call it done. In code the convergence check looks like this (with a sketch of the rollout itself just below):

if costnew < cost:
  sim_new_trajectory = True

  if (abs(costnew - cost)/cost) < self.converge_thresh:
    break
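And for the rollout itself, here's roughly how the forward pass might look as a sketch, again with plant_dynamics, cost, and final_cost as hypothetical stand-ins for the simulation step and the immediate and final state cost functions:

import numpy as np

def forward_pass(plant_dynamics, cost, final_cost, X, U, k, K):
    # X is the old (N+1, n_states) trajectory, U the old (N, n_controls) control sequence
    N = U.shape[0]
    X_new = np.zeros_like(X)
    U_new = np.zeros_like(U)
    X_new[0] = np.copy(X[0])  # start from the same initial state

    cost_new = 0.0
    for t in range(N):
        # feedforward term plus feedback correction around the old trajectory
        U_new[t] = U[t] + k[t] + np.dot(K[t], X_new[t] - X[t])
        X_new[t + 1] = plant_dynamics(X_new[t], U_new[t])
        cost_new += cost(X_new[t], U_new[t])
    cost_new += final_cost(X_new[-1])  # add the final state cost

    return X_new, U_new, cost_new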

Of course, another case we need to account for is when costnew > cost. What do we do then? Our control update hasn't worked; do we just exit?

The Levenberg-Marquardt heuristic
No! Phew.

The control signal update in iLQR is calculated in such a way that it can behave like Gauss-Newton optimization (which uses second-order derivative information) or like gradient descent (which only uses first-order derivative information). The idea is that if the updates are going well, then let's include curvature information in our update to help optimize things faster. If the updates aren't going well, let's dial back towards gradient descent: stick to first-order derivative information and use smaller steps. This wizardry is known as the Levenberg-Marquardt heuristic. So how does it work?

Something we skimmed over in the iLQR description was that we need to calculate Q^{-1}_\textbf{uu} to get the \textbf{k} and \textbf{K} matrices. Instead of using np.linalg.pinv or somesuch, we're going to calculate the inverse ourselves after finding the eigenvalues and eigenvectors, so that we can regularize it. This lets us do a couple of things. First, we can make sure that our estimate of the curvature, Q_\textbf{uu}, stays positive definite, which is important for always having a descent direction. Second, we add a regularization term to the eigenvalues to prevent them from exploding when we take their inverse. Here's our regularization implemented in Python:

Q_uu_evals, Q_uu_evecs = np.linalg.eig(Q_uu)
Q_uu_evals[Q_uu_evals < 0] = 0.0  # clamp negative eigenvalues to keep Q_uu positive definite
Q_uu_evals += lamb  # regularization keeps the inverted eigenvalues below 1/lamb
Q_uu_inv = np.dot(Q_uu_evecs,
    np.dot(np.diag(1.0/Q_uu_evals), Q_uu_evecs.T))

Now, what happens when we change lamb? The eigenvalues tell us how strongly Q_\textbf{uu} scales things along each eigenvector direction, and by taking their reciprocal we flip the contributions of the vectors: the directions that were contributing the least now have the largest inverted eigenvalues, and the ones that contributed the most now have the smallest. By adding a regularization term we ensure that the inverted eigenvalues can never be larger than 1/lamb. So essentially we throw out information.

In the case where we've got a really good approximation of the system dynamics and value function, we don't want to do this. We want to use all of the information available because it's accurate, so we make lamb small and get a more accurate inverse. In the case where we have a bad approximation of the dynamics we want to be more conservative, which means capping those large inverted eigenvalues. Smaller inverted eigenvalues give a smaller Q_\textbf{uu}^{-1} estimate, which in turn gives smaller gain matrices and a smaller control signal update, which is what we want when our control signal updates are going poorly.
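As a quick toy example (the numbers here are made up purely to illustrate), suppose Q_uu has eigenvalues 10 and 0.001: the raw inverse blows the weak direction up to 1000, while adding lamb = 1 caps every inverted eigenvalue below 1/lamb:

import numpy as np

evals = np.array([10.0, 0.001])  # made-up eigenvalues of Q_uu
print(1.0 / evals)               # [0.1, 1000.0] -- the weak direction explodes
lamb = 1.0
print(1.0 / (evals + lamb))      # [0.0909..., 0.999...] -- all safely below 1/lamb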

How do you know if they’re going poorly or not, you now surely ask! Clever as always, we’re going to use the result of the previous iteration to update lamb. So adding to the code from just above, the end of our control update loop is going to look like:

lamb = 1.0 # initial value of lambda
...
if costnew < cost:
  lamb /= self.lamb_factor  # things are going well, lean on the curvature information
  sim_new_trajectory = True

  if (abs(costnew - cost)/cost) < self.converge_thresh:
    break
else:
  lamb *= self.lamb_factor  # the update made things worse, fall back towards gradient descent
  if lamb > self.max_lamb:
    break

And that is pretty much everything! OK let’s see how this runs!

Simulation results

If you want to run this and see for yourself, you can go copy my Control repo, navigate to the main directory, and run

python run.py arm2 reach

or substitute in arm3. If you’re having trouble getting the arm2 simulation to run, try arm2_python, which is a straight Python implementation of the arm dynamics, and should work no sweat for Windows and Mac.

Below you can see results from the iLQR controller controlling the 2 and 3 link arms (click on the figures to see full sized versions, they got distorted a bit in the shrinking to fit on the page), using immediate and final state cost functions defined as:

l = np.sum(u**2)

and

pos_err = np.array([self.arm.x[0] - self.target[0], 
                    self.arm.x[1] - self.target[1]])
l = (wp * np.sum(pos_err**2) + # pos error
    wv * np.sum(x[self.arm.DOF:self.arm.DOF*2]**2)) # vel error

where wp and wv are just gain values, x is the state of the system, and self.arm.x is the (x,y) position of the hand. These read as “during movement, penalize large control signals, and at the final state, have a big penalty on not being at the target.”


So let's give it up for iLQR, this is awesome! How much of a crazy improvement is that over LQR? And all of the knowledge of the system comes through finite differences, with the full movements completed in exactly 1 second! (Note: The simulation speeds look different because of my editing to keep the gif sizes small; they both take the same amount of time for each movement.)

Changing cost functions
Something that you may notice is that the movement of the 3 link arm is actually straighter than the 2 link arm's. I thought that this might just be an issue with the gain values: since the scale of movement is smaller for the 2 link arm than for the 3 link arm, there might have been less of a penalty for not moving in a straight line. BUT this was wrong. You can crank the gains and still get the same movement. The actual reason is that this is what the cost function specifies: if you look in the code, only \ell_f(\textbf{x}_N) penalizes the distance from the target, and the cost function during movement is strictly there to minimize the control signal, i.e. \ell(\textbf{x}_t, \textbf{u}_t) = \|\textbf{u}_t\|^2.

Well that’s a lot of talk, you say, like the incorrigible antagonist we both know you to be, prove it. Alright, fine! Here’s iLQR running with an updated cost function that includes the end-effector’s distance from the target in the immediate cost:

All that I had to do to get this was change the immediate cost from

l = np.sum(u**2)

to

l = np.sum(u**2)
pos_err = np.array([self.arm.x[0] - self.target[0], 
                    self.arm.x[1] - self.target[1]])
l += (wp * np.sum(pos_err**2) + # pos error
    wv * np.sum(x[self.arm.DOF:self.arm.DOF*2]**2)) # vel error

i.e. the position penalty term from the final state cost is now included in the immediate state cost as well.

Changing sequence length
In these simulations the system is simulated with a .01 second time step, and I gave it 100 time steps to reach the target. What if I give it only 50 time steps?


It looks pretty much the same! It's just twice as fast now, which is of course achieved by using larger control signals (which we don't see here), but dang awesome.

What if we try to make it there in 10 time steps??


OK, well, that does not look good. So what's going on in this case? Basically we've given the algorithm an impossible task: it can't make it to the target location in 10 time steps. In the implementation I wrote here, if it hits the end of its control sequence and it hasn't reached the target yet, the control sequence starts over back at t=0. Remember that part of the target state is also velocity, so basically it moves for 10 time steps to try to minimize the (x,y) distance, and then slows down to minimize the final state cost in the velocity term.

In conclusion

This algorithm has been used in a ton of things, for controlling robots and simulations, and is an important part of guided policy search, which has been used to very successfully train deep networks on control problems. It's getting really impressive results for controlling the arm models that I've built here, and using finite differences means it should easily generalize to other systems.

iLQR is very computationally expensive, though, so that's definitely a downside. It's much less expensive if you have the equations of your system, or at least a decent approximation of them, and don't need to use finite differences. But you pay for that efficiency with a loss of generality.

There are also a bunch of parameters to play around with that I haven't explored at all here, like the weights in the cost function penalizing the magnitude of the control signal and the final state position error. I showed a basic example of changing the cost function, which hopefully gets across just how easy changing these things out can be when you're using finite differences, and there's a lot to play around with there too.

Implementation note
In the Tassa and Todorov paper, they talk about using a backtracking line search when generating the control signal, so the algorithm they use when generating the new control signal is actually:

\hat{\textbf{u}}_t = \textbf{u}_t + \alpha\textbf{k}_t + \textbf{K}_t(\hat{\textbf{x}}_t - \textbf{x}_t)

where \alpha is the backtracking search parameter, which gets set to 1 initially and then reduced whenever an update fails to lower the cost. It's very possible I didn't implement it as intended, but I consistently found that \alpha = 1 generated the best results, so it was just adding computation time, and I left it out of my implementation. If anyone has insights on an implementation that improves results, please let me know!
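If you want to experiment with it yourself, a backtracking line search might look something like this sketch, reusing the hypothetical forward_pass from earlier; note that \alpha scales only the feedforward terms, and shrinks until the rollout actually improves on the current cost:

alpha = 1.0
for _ in range(10):  # a handful of backtracking steps
    X_new, U_new, cost_new = forward_pass(
        plant_dynamics, cost, final_cost, X, U, alpha * k, K)
    if cost_new < cost:  # accept the first alpha that improves on the old cost
        break
    alpha *= 0.5  # otherwise shrink the feedforward contribution and try again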

And then finally, another thank you to Dr. Emo Todorov for providing Matlab code for the iLQG algorithm, which was very helpful, especially for getting the Levenberg-Marquardt heuristic implemented properly.
