Improving neural models by compensating for discrete rather than continuous filter dynamics when simulating on digital systems

This is going to be a pretty niche post, but there is some great work by Aaron Voelker from my old lab that has inspired me to do a post. The work is from an upcoming paper, which is all up on Aaron’s GitHub. It applies to building neural models using the Neural Engineering Framework (NEF). There’s a bunch of material on the NEF out there already (e.g. the book How to Build a Brain by Dr. Chris Eliasmith, an online intro, and you can also check out Nengo, which is neural model development software with some good tutorials on the NEF), so I’m going to assume you already know the basics of the NEF for this post.

Additionally, this is applicable when simulating these models on digital systems, which most of you probably are. If you’re not, however, then just use the standard NEF methods.

And one last note before starting: these methods are most relevant for systems with fast dynamics (relative to the simulation time step). If your system dynamics are pretty slow, you can likely get away with the continuous time solution if you resist change and learning. And we’ll see this in the example point attractor system at the end of the post! But even for slowly evolving systems, I would still recommend at least skipping to the end to see how to use the library shortcuts when coding your own models. The example code is also all up on my GitHub.

NEF modeling with continuous lowpass filter dynamics

Basic state space equations for linear time-invariant (LTI) systems (i.e. dynamics can be captured with a matrix and the matrices don’t change over time) are:

\dot{\textbf{x}}(t) = \textbf{A}\textbf{x}(t) + \textbf{B}\textbf{u}(t)

\textbf{y}(t) = \textbf{C}\textbf{x}(t) + \textbf{D}\textbf{u}(t)

where

  • \textbf{x} is the system state,
  • \textbf{y} is the system output,
  • \textbf{u} is the system input,
  • \textbf{A} is called the state matrix,
  • \textbf{B} is called the input matrix,
  • \textbf{C} is called the output matrix, and
  • \textbf{D} is called the feedthrough matrix,

and the system diagram looks like this:

[Figure: diagram of the standard state space model]

and the transfer function, which is written in Laplace space and captures the system output over system input, for the system is

\textbf{F}(s) = \frac{\textbf{Y}(s)}{\textbf{U}(s)} = \textbf{C}(s\textbf{I} - \textbf{A})^{-1} \textbf{B} + \textbf{D}

where s is the Laplace variable.

Now, because it’s a neural system we don’t have a perfect integrator in the middle, we instead have a synaptic filter, H(s), giving:

[Figure: state space model with the integrator replaced by the synaptic filter H(s)]

So our goal is: given some synaptic filter H(s), we want to generate some modified transfer function, \textbf{F}', such that \textbf{F}'(H(s)) has the same dynamics as our desired system, \textbf{F}(s). In other words, find an \textbf{F}' such that

\textbf{F}'\left(\frac{1}{H(s)}\right) = \textbf{F}(s).

Alrighty. Let’s do that.

The transfer function for our neural system is

\textbf{F}(H(s)) = \frac{\textbf{Y}(s)}{\textbf{U}(s)} = \textbf{C}(H(s)^{-1}\textbf{I} - \textbf{A})^{-1} \textbf{B} + \textbf{D}.

The effect of the synapse is well captured by a lowpass filter, H(s) = \frac{1}{\tau s + 1}, making our equation

\textbf{F}(H(s)) = \frac{\textbf{Y}(s)}{\textbf{U}(s)} = \textbf{C}((\tau s + 1)\textbf{I} - \textbf{A})^{-1} \textbf{B} + \textbf{D},

\textbf{F}(H(s)) = \frac{\textbf{Y}(s)}{\textbf{U}(s)} = \textbf{C}(\tau s \textbf{I} + \textbf{I} - \textbf{A})^{-1} \textbf{B} + \textbf{D}.

To get this into a form where we can start to modify the system state matrices to compensate for the filter effects, we have to isolate s\textbf{I}. To do that, we can do the following basic math jujitsu

\textbf{F}(H(s)) = \frac{\textbf{Y}(s)}{\textbf{U}(s)} = \textbf{C}\left(\tau\left(s\textbf{I} + \frac{1}{\tau}(\textbf{I} - \textbf{A})\right)\right)^{-1} \textbf{B} + \textbf{D},

\textbf{F}(H(s)) = \frac{\textbf{Y}(s)}{\textbf{U}(s)} = \textbf{C}\left(s\textbf{I} + \frac{1}{\tau}(\textbf{I} - \textbf{A})\right)^{-1} \frac{1}{\tau}\textbf{B} + \textbf{D}.

OK. Now, we can make \textbf{F}' by substituting for our \textbf{A} and \textbf{B} matrices with

\textbf{A}' = \tau\textbf{A} + \textbf{I}

\textbf{B}' = \tau\textbf{B}

then we get

\textbf{F}'(H(s)) = \frac{\textbf{Y}(s)}{\textbf{U}(s)} = \textbf{C}\left(s\textbf{I} + \frac{1}{\tau}(\textbf{I} - \textbf{A}')\right)^{-1} \frac{1}{\tau}\textbf{B}' + \textbf{D}

= \textbf{C}\left(s\textbf{I} + \frac{1}{\tau}(\textbf{I} - (\tau\textbf{A} + \textbf{I}))\right)^{-1} \frac{1}{\tau}(\tau\textbf{B}) + \textbf{D}

= \textbf{C}\left(s\textbf{I} + \frac{1}{\tau}(-\tau\textbf{A})\right)^{-1}\textbf{B} + \textbf{D}

= \textbf{C}(s\textbf{I} - \textbf{A})^{-1}\textbf{B} + \textbf{D}.

and voila! We have created an \textbf{F}' such that \textbf{F}'(H(s)) = \textbf{F}(s). Said another way, we have created a system (\textbf{A}', \textbf{B}', \textbf{C}'=\textbf{C}, \textbf{D}'=\textbf{D}) that compensates for the synaptic filtering effects to achieve our desired system dynamics!

So, to compensate for the continuous lowpass filter, we use \textbf{A}' = \tau \textbf{A} + \textbf{I} and \textbf{B}' = \tau \textbf{B} when implementing our model and we’re golden.
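
For concreteness, here’s a minimal NumPy sketch of this compensation; the system matrices and time constant are just placeholder values:

import numpy as np

tau = 0.005  # synaptic time constant (placeholder value)

# an example LTI system (placeholder dynamics)
A = np.array([[0.0, 1.0],
              [-100.0, -10.0]])
B = np.array([[0.0],
              [100.0]])

# compensate for the continuous lowpass synaptic filter
A_prime = tau * A + np.eye(2)
B_prime = tau * B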

And so that’s what we’ve been doing for a long time when building our models. Assuming a continuous lowpass filter and going along our merry way. Aaron, however, shrewdly noticed that computers are digital, and thusly that the standard NEF methods are not a fully accurate way of compensating for the filter that is actually being applied in simulation.

To convert our continuous system state equations to discrete state equations we need to make two changes: 1) a variable change to denote that we’re in discrete time, where we’ll use z instead of s, and 2) we need to calculate the discrete version of our system, i.e. transform (\textbf{A}, \textbf{B}, \textbf{C}, \textbf{D}) \rightarrow (\textbf{A}_d, \textbf{B}_d, \textbf{C}_d, \textbf{D}_d).

The first step is easy, the second step more complicated. To discretize the system we’ll use the zero-order hold (ZOH) method (also referred to as discretization assuming zero-order hold).

Zero-order hold discretization

Zero-order hold (ZOH) systems simply hold their input constant over a specified amount of time. The relevance of ZOH here is that during discretization we assume the input control signal stays constant until the next sample time.

There are good write-ups on the derivation of the discretization both on Wikipedia and in these course notes from Purdue. I mostly followed the Wikipedia derivation, but there were a few steps that get glossed over, so I thought I’d just write it out fully here and hopefully save someone some pain. Also, for a general intro I found these slides from Paul Oh at Drexel University really helpful.

OK. First we’ll solve an LTI system, and then we’ll discretize it.

So, you’ve got yourself a continuous LTI system

\dot{\textbf{x}}(t) = \textbf{A}\textbf{x}(t) + \textbf{B}\textbf{u}(t)

and you want to solve for \textbf{x}(t). Rearranging things to put all the \textbf{x} on one side gives

\dot{\textbf{x}}(t) - \textbf{A}\textbf{x}(t) = \textbf{B}\textbf{u}(t).

Looking through our identity library to find something that might help us here (after a long and grueling search) we come across:

\frac{\partial}{\partial t} \textrm{e}^{\textbf{A}t} = \textbf{A} \textrm{e}^{\textbf{A}t} = \textrm{e}^{\textbf{A}t} \textbf{A}.

We now left multiply our system by \textrm{e}^{-\textbf{A}t} (note the negative in the exponent)

\textrm{e}^{-\textbf{A}t}\dot{\textbf{x}}(t) - \textrm{e}^{-\textbf{A}t}\textbf{A}\textbf{x}(t) = \textrm{e}^{-\textbf{A}t}\textbf{B}\textbf{u}(t).

Looking at this carefully, we identify the left-hand side as the result of a product rule, so we can rewrite it as

\frac{\partial}{\partial t} (\textrm{e}^{-\textbf{A}t}\textbf{x}(t)) = \textrm{e}^{-\textbf{A}t}\textbf{B}\textbf{u}(t).

From here we integrate both sides, giving

\textrm{e}^{-\textbf{A}t}\textbf{x}(t) - \textrm{e}^0\textbf{x}(0) = \int_0^t\textrm{e}^{-\textbf{A}\tau}\textbf{B}\textbf{u}(\tau) d \tau,

\textrm{e}^{-\textbf{A}t}\textbf{x}(t) = \int_0^t\textrm{e}^{-\textbf{A}\tau}\textbf{B}\textbf{u}(\tau) d \tau + \textbf{x}(0).

To isolate the \textbf{x}(t) term on the left-hand side now multiply by \textrm{e}^{\textbf{A}t}:

\textrm{e}^{\textbf{A}t}\textrm{e}^{-\textbf{A}t}\textbf{x}(t) = \textrm{e}^{\textbf{A}t}\int_0^t\textrm{e}^{-\textbf{A}\tau}\textbf{B}\textbf{u}(\tau) d \tau + \textrm{e}^{\textbf{A}t}\textbf{x}(0),

\textrm{e}^{\textbf{A}t-\textbf{A}t}\textbf{x}(t) = \textrm{e}^{\textbf{A}t}\int_0^t\textrm{e}^{-\textbf{A}\tau}\textbf{B}\textbf{u}(\tau) d \tau + \textrm{e}^{\textbf{A}t}\textbf{x}(0),

\textbf{x}(t) = \textrm{e}^{\textbf{A}t}\int_0^t\textrm{e}^{-\textbf{A}\tau}\textbf{B}\textbf{u}(\tau) d \tau + \textrm{e}^{\textbf{A}t}\textbf{x}(0).

OK! We solved for \textbf{x}(t).

To discretize our solution we’re going to assume that we’re sampling the system at even intervals, i.e. each sample is at kT for some time step T, and that the input \textbf{u}(t) is constant between samples (this is where the ZOH comes in). To simplify our notation as we go, we also define

\textbf{x}[k] = \textbf{x}(kT).

So using our new notation, we have

\textbf{x}[k] = \textrm{e}^{\textbf{A}kT}\textbf{x}(0) + \textrm{e}^{\textbf{A}kT}\int_0^{kT} \textrm{e}^{-\textbf{A}\tau}\textbf{Bu}(\tau) d\tau.

Now we want to get things back into the form:

\textbf{x}[k+1] = \textbf{A}_d\textbf{x}[k] + \textbf{B}_d\textbf{u}[k].

To start, let’s write out the equation for \textbf{x}[k + 1]

\textbf{x}[k+1] = \textrm{e}^{\textbf{A}(k+1)T}\textbf{x}(0) + \textrm{e}^{\textbf{A}(k+1)T}\int_0^{(k+1)T} \textrm{e}^{-\textbf{A}\tau}\textbf{Bu}(\tau) d\tau.

We want to relate \textbf{x}[k+1] to \textbf{x}[k]. Being incredibly clever, we see that we can left multiply \textbf{x}[k] by \textrm{e}^{\textbf{A}T}, to get

\textrm{e}^{\textbf{A}T}\textbf{x}[k] = \textrm{e}^{\textbf{A}(k+1)T}\textbf{x}(0) + \textrm{e}^{\textbf{A}(k+1)T}\int_0^{kT} \textrm{e}^{-\textbf{A}\tau}\textbf{Bu}(\tau) d\tau,

and can rearrange to solve for a term we saw in \textbf{x}[k+1]:

\textrm{e}^{\textbf{A}(k+1)T}\textbf{x}(0) = \textrm{e}^{\textbf{A}T}\textbf{x}[k] - \textrm{e}^{\textbf{A}(k+1)T}\int_0^{kT} \textrm{e}^{-\textbf{A}\tau}\textbf{Bu}(\tau) d\tau.

Plugging this in, we get

\textbf{x}[k+1] = \textrm{e}^{\textbf{A}T}\textbf{x}[k] + \textrm{e}^{\textbf{A}(k+1)T}\left(\int_0^{(k+1)T} \textrm{e}^{-\textbf{A}\tau}\textbf{Bu}(\tau) d\tau - \int_0^{kT} \textrm{e}^{-\textbf{A}\tau}\textbf{Bu}(\tau) d\tau\right),

\textbf{x}[k+1] = \textrm{e}^{\textbf{A}T}\textbf{x}[k] + \textrm{e}^{\textbf{A}(k+1)T}\int_{kT}^{(k+1)T} \textrm{e}^{-\textbf{A}\tau}\textbf{Bu}(\tau) d\tau.

OK, we’re getting close.

At this point we’ve got things in the right form, but we can still clean up that second term on the right-hand side quite a bit. First, note that using our starting assumption (that \textbf{u}(t) is constant over each interval [kT, (k+1)T)), we can replace \textbf{u}(\tau) with \textbf{u}[k] and move \textbf{Bu}[k] outside the integral.

\textbf{x}[k+1] = \textrm{e}^{\textbf{A}T}\textbf{x}[k] + \textrm{e}^{\textbf{A}(k+1)T}\int_{kT}^{(k+1)T} \textrm{e}^{-\textbf{A}\tau}d\tau \textbf{Bu}[k].

Next, bring that \textrm{e}^{\textbf{A}(k+1)T} term inside the integral:

\textbf{x}[k+1] = \textrm{e}^{\textbf{A}T}\textbf{x}[k] + \int_{kT}^{(k+1)T} \textrm{e}^{\textbf{A}((k+1)T - \tau)}d\tau \textbf{Bu}[k].

And now we’re going to simplify the integral using variable substitution. Let v = (k+1)T - \tau, which means also that \frac{dv}{d\tau} = -1 \rightarrow d\tau = -dv. Evaluating the upper and lower bounds of the integral, when \tau = (k+1)T then v = 0 and when \tau = kT then v = T. With this, we can rewrite our equation:

\textbf{x}[k+1] = \textrm{e}^{\textbf{A}T}\textbf{x}[k] - \int_T^0 \textrm{e}^{\textbf{A}v}dv \textbf{Bu}[k].

The astute will notice that our integral runs from T to 0 instead of 0 to T. Fortunately for us, we know that \int_a^b f(v) dv = -\int_b^a f(v) dv, so we can just swap the bounds and multiply by -1, giving:

\textbf{x}[k+1] = \textrm{e}^{\textbf{A}T}\textbf{x}[k] + \int_0^T \textrm{e}^{\textbf{A}v}dv \textbf{Bu}[k].

And finally, we can evaluate our integral by recalling that \frac{d}{dt}\textrm{e}^{\textbf{A}t} = \textbf{A}\textrm{e}^{\textbf{A}t} and assuming that \textbf{A} is invertible:

\textbf{x}[k+1] = \textrm{e}^{\textbf{A}T}\textbf{x}[k] + \textbf{A}^{-1} \textrm{e}^{\textbf{A}v}|^T_{v=0} \textbf{Bu}[k].

\textbf{x}[k+1] = \textrm{e}^{\textbf{A}T}\textbf{x}[k] + \textbf{A}^{-1} (\textrm{e}^{\textbf{A}T} - \textrm{e}^0) \textbf{Bu}[k].

\textbf{x}[k+1] = \textrm{e}^{\textbf{A}T}\textbf{x}[k] + \textbf{A}^{-1} (\textrm{e}^{\textbf{A}T} - \textbf{I}) \textbf{Bu}[k].

We did it! The state and input matrices for our digital system are:

\textbf{A}_d = \textrm{e}^{\textbf{A}T}

\textbf{B}_d = \textbf{A}^{-1} (\textrm{e}^{\textbf{A}T} - \textbf{I}) \textbf{B}

And that’s the hard part of the discretization. The rest of the system is easy because, fortunately for us,

\textbf{C}_d = \textbf{C},

\textbf{D}_d = \textbf{D}.

This then gives us our discrete system transfer function:

\textbf{F}(z) = \frac{\textbf{Y}_d(z)}{\textbf{U}_d(z)} = \textbf{C}_d(z\textbf{I} - \textbf{A}_d)^{-1} \textbf{B}_d + \textbf{D}_d.
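
By the way, you don’t have to implement the matrix exponential math yourself: this zero-order hold discretization is exactly what SciPy’s cont2discrete function computes with its default method. Here’s a quick sketch checking our derivation against it (the example matrices are placeholders):

import numpy as np
from scipy.linalg import expm
from scipy.signal import cont2discrete

T = 0.001  # sample time
# placeholder dynamics (note A must be invertible for our formula)
A = np.array([[0.0, 1.0],
              [-100.0, -10.0]])
B = np.array([[0.0],
              [100.0]])

# discretize by hand using the ZOH solution derived above
Ad = expm(A * T)
Bd = np.dot(np.linalg.inv(A), np.dot(Ad - np.eye(2), B))

# discretize with SciPy, which uses zero-order hold by default
Ad_scipy, Bd_scipy, _, _, _ = cont2discrete(
    (A, B, np.eye(2), np.zeros((2, 1))), T)

assert np.allclose(Ad, Ad_scipy)
assert np.allclose(Bd, Bd_scipy)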

NEF modeling with discrete lowpass filter dynamics

Now that we know how to discretize our system, we can look at compensating for the lowpass filter dynamics in discrete time. The equation for the discrete time lowpass filter is

H(z) = \frac{1-a}{z-a},

where a = \textrm{e}^{-\frac{dt}{\tau}}.
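
For intuition, this is just the z-domain form of the familiar discrete lowpass update. Cross-multiplying Y(z)(z - a) = (1-a)U(z) and taking the inverse z-transform gives

y[k+1] = ay[k] + (1-a)u[k],

i.e. each time step the filter output moves a fraction (1-a) of the way towards its input.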

Plugging that into the discrete transfer function gets us

\textbf{F}(H(z)) = \textbf{C}_d(H(z)^{-1}\textbf{I} - \textbf{A}_d)^{-1} \textbf{B}_d + \textbf{D}_d,

\textbf{F}(H(z)) = \textbf{C}_d\left(\frac{z-a}{1-a}\textbf{I} - \textbf{A}_d\right)^{-1} \textbf{B}_d + \textbf{D}_d,

\textbf{F}(H(z)) = \textbf{C}_d\left(\frac{z\textbf{I} - a\textbf{I} - (1-a)\textbf{A}_d}{1-a}\right)^{-1} \textbf{B}_d + \textbf{D}_d,

\textbf{F}(H(z)) = \textbf{C}_d(z\textbf{I} - a\textbf{I} - (1-a)\textbf{A}_d)^{-1} (1-a)\textbf{B}_d + \textbf{D}_d.

and we see that if we choose

\textbf{A}'_d = \frac{1}{1-a}(\textbf{A}_d - a\textbf{I}),

\textbf{B}'_d = \frac{1}{1-a}\textbf{B}_d,

then we get

\textbf{F}'(H(z)) = \textbf{C}_d(z\textbf{I} - a\textbf{I} - (1-a)\textbf{A}_d')^{-1} (1-a)\textbf{B}_d' + \textbf{D}_d,

= \textbf{C}_d\left(z\textbf{I} - a\textbf{I} - (1-a)\frac{1}{1-a}(\textbf{A}_d - a\textbf{I})\right)^{-1} (1-a)\frac{1}{1-a}\textbf{B}_d + \textbf{D}_d,

= \textbf{C}_d(z\textbf{I} - a\textbf{I} - \textbf{A}_d + a\textbf{I})^{-1} \textbf{B}_d + \textbf{D}_d,

= \textbf{C}_d(z\textbf{I} - \textbf{A}_d)^{-1} \textbf{B}_d + \textbf{D}_d = \textbf{F}(z).

And now congratulations are in order. Proper compensation for the discrete lowpass filter dynamics has finally been achieved!
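
If you want to convince yourself the algebra works out, you can step the compensated system through one application of the discrete lowpass filter and compare against the target discrete dynamics. A minimal sketch, with placeholder system matrices:

import numpy as np
from scipy.linalg import expm

dt = 0.001  # simulation time step
tau = 0.005  # synaptic time constant
a = np.exp(-dt / tau)

# placeholder dynamics
A = np.array([[0.0, 1.0],
              [-100.0, -10.0]])
B = np.array([[0.0],
              [100.0]])

# the target discrete system
Ad = expm(A * dt)
Bd = np.dot(np.linalg.inv(A), np.dot(Ad - np.eye(2), B))

# the matrices that compensate for the discrete lowpass filter
Ad_prime = (Ad - a * np.eye(2)) / (1.0 - a)
Bd_prime = Bd / (1.0 - a)

x = np.array([[1.0], [0.0]])  # example state
u = np.array([[1.0]])  # example input

# one step through the discrete lowpass filter ...
x_next = a * x + (1 - a) * (np.dot(Ad_prime, x) + np.dot(Bd_prime, u))
# ... matches one step of the target dynamics
assert np.allclose(x_next, np.dot(Ad, x) + np.dot(Bd, u))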

Point attractor example

What difference does this actually make in modelling? Well, everyone likes examples, so let’s have one.

Here are the dynamics for a second-order point attractor system:

\ddot{x} = \alpha(\beta(x^* - x) - \dot{x})

with x, \dot{x}, and \ddot{x} being the system position, velocity, and acceleration, respectively, x^* the target position, and \alpha and \beta gain values. So the acceleration is just set such that it drives the system towards the target position while compensating for the current velocity.

Converting this from a second order system to a first order system we have

\left [ \begin{array}{c} \dot{x} \\ \ddot{x} \end{array} \right ] = \left [ \begin{array}{cc}0 & 1 \\ -\alpha\beta & -\alpha \end{array} \right] \left [ \begin{array}{c} x \\ \dot{x} \end{array} \right ] + \left [ \begin{array}{c}0 \\ \alpha\beta \end{array} \right] x^*

which we’ll rewrite compactly as

\dot{\textbf{x}} = \textbf{A} \textbf{x} + \textbf{B} \textbf{u}

OK, we’ve got our state space equation of the dynamical system we want to implement.

Given a simulation time step dt, we’ll first calculate the discrete state matrices:

\textbf{A}_d = \textrm{e}^{\textbf{A}dt},

\textbf{B}_d = \textbf{A}^{-1} (\textrm{e}^{\textbf{A}dt} - \textbf{I})\textbf{B}.

Great! Easy. Now we can calculate the state matrices that will compensate for the discrete lowpass filter:

\textbf{A}_d' = \frac{1}{1-a}(\textbf{A}_d - a\textbf{I}),

\textbf{B}_d' = \frac{1}{1-a}\textbf{B}_d,

where a = \textrm{e}^{-\frac{dt}{\tau}}.

Alright! So that’s our system. A basic point attractor implementation in Nengo 2.3 looks like this (with some example gain and simulation parameter values filled in up top):

import numpy as np
from scipy.linalg import expm  # matrix exponential
import nengo

# example gain and simulation parameter values
alpha = 10.0
beta = 2.5
dt = 0.001  # simulation time step
analog = False  # if True, compensate for the continuous filter instead
n_neurons = 500

tau = 0.1  # synaptic time constant

# the A matrix for our point attractor
A = np.array([[0.0, 1.0],
              [-alpha*beta, -alpha]])

# the B matrix for our point attractor
B = np.array([[0.0], [alpha*beta]])

# account for discrete lowpass filter
a = np.exp(-dt/tau)
if analog:
    A = tau * A + np.eye(2)
    B = tau * B
else:
    # discretize
    Ad = expm(A*dt)
    Bd = np.dot(np.linalg.inv(A), np.dot((Ad - np.eye(2)), B))
    A = 1.0 / (1.0 - a) * (Ad - a * np.eye(2))
    B = 1.0 / (1.0 - a) * Bd

net = nengo.Network(label='Point Attractor')
net.config[nengo.Connection].synapse = nengo.Lowpass(tau)

with net:
    net.ydy = nengo.Ensemble(n_neurons=n_neurons, dimensions=2,
        # set it up so neurons are tuned to one dimension only
        encoders=nengo.dists.Choice([[1, 0], [-1, 0], [0, 1], [0, -1]]))
    # set up Ax part of point attractor
    nengo.Connection(net.ydy, net.ydy, transform=A)

    # hook up input
    net.input = nengo.Node(size_in=1, size_out=1)
    # set up Bu part of point attractor
    nengo.Connection(net.input, net.ydy, transform=B)

    # hook up output
    net.output = nengo.Node(size_in=1, size_out=1)
    # read out the position (first dimension of the system state)
    nengo.Connection(net.ydy[0], net.output, synapse=None)

Note that for calculating Ad we’re using expm, which is the matrix exponential function from the scipy.linalg package. numpy.exp does an elementwise exp, which is definitely not what we want here, and you will get some confusing bugs if you’re not careful.
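
As a quick illustration of the difference (this example A is nilpotent, so its matrix exponential is exactly I + A):

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
print(np.exp(A))  # elementwise: [[1.0, 2.718...], [1.0, 1.0]]
print(expm(A))  # matrix exponential: [[1.0, 1.0], [0.0, 1.0]]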

Code for implementing and also testing under some different gains is up on my GitHub, and generates the following plots for dt=0.001:

[Figure: point attractor responses for dt=0.001, comparing continuous vs discrete filter compensation across different gains]

In the above results you can see that when the gains are low, and thus the system dynamics are slower, you can’t really tell a difference between the continuous and discrete filter compensation. But! As you get larger gains and faster dynamics, the resulting effects become much more visible.

If you’re building your own system, then I also recommend using the ss2sim function from Aaron’s nengolib library. It automatically handles compensation for any synapses and generates the matrices that account for discrete or continuous implementations automatically. Using the library looks like:

import numpy as np
import nengo
from nengolib.synapses import ss2sim

tau = 0.1  # synaptic time constant
synapse = nengo.Lowpass(tau)

# the A matrix for our point attractor
A = np.array([[0.0, 1.0],
              [-alpha*beta, -alpha]])

# the B matrix for our point attractor
B = np.array([[0.0], [alpha*beta]])

# C and D matrices to complete the state space description
C = np.eye(2)
D = np.zeros((2, 1))
linsys = ss2sim((A, B, C, D),
                synapse=synapse,
                dt=None if analog else dt)
A = linsys.A
B = linsys.B

Conclusions

So there you are! Go forward and model free of error introduced by improperly accounting for discrete simulation! If, like me, you’re doing anything with neural modelling and motor control (i.e. systems with very quickly evolving dynamics), then hopefully you’ve found all this work particularly interesting, as I did.

There are a ton of extensions and different directions that this work can be (and already has been) taken in, with a bunch of really neat systems developed using this more accurate accounting for synaptic filtering as a base. You can read up on this and applications to modelling time delays and time cells and lots lots more up on Aaron’s GitHub, and his recent papers, which are listed on his lab webpage.


Quick calculations with SymPy and Cython

Alrighty! Last time I posted about SymPy we weren’t worrying too much about computation speed. The time saved by automating the Jacobian, inertia matrix, etc. calculations was worth it, and it didn’t really affect simulation speed all that much. But! When working with a real arm it suddenly becomes critical to have highly efficient calculations.

Turns out that I still don’t want to code those equations by hand. There are probably very nice options for generating optimized code using Mathematica or MapleSim or whatever paid software, but SymPy is free and already Python compatible, so my goal is to stick with that.

Options

I did some internet scouring, and looked at installing various packages to help speed things up, including

  1. trying out the Sympy simplify function,
  2. trying out SymEngine,
  3. trying out the Sympy compile to Theano function,
  4. trying out the Sympy autowrap function, and
  5. various combinations of the above.

The SymEngine backend and Theano functions really didn’t give any improvements for the kind of low-dimensional vector calculations performed for control here. Disclaimer, it could be the case that I gave up on these implementations too quickly, but from some basic testing it didn’t seem like they were the way to go for this kind of problem.

So that leaves the simplify function and compiling to the Cython backend. The first thing you’ll notice when using simplify is that the generation time for the function can take orders of magnitude longer (up to 12ish hours for the inertia matrices of the Kinova Jaco 2 arm with simplify vs about 1-2 minutes without, for example). And then, as a nice kick in the teeth after that, there’s almost no difference between straight Cython compilation of the non-simplified functions and the simplified functions. Here are some graphs:

So you can see how the addition of the simplify drastically increases the compile time in the top half of the graphs there. In the lower half, you can see that the simplify does save some execution time relative to the baseline straight lambdify function, but once you start compiling to Cython with the autowrap the difference is on the order of hundredths to thousandths of milliseconds.

Results

So! My recommendation is

  • Don’t use simplify,
  • do use autowrap with the Cython backend.

To compile using the Cython backend:

from sympy.utilities.autowrap import autowrap
function = autowrap(sympy_expression, backend="cython",
                    args=parameters)
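
For example, compiling and calling a simple expression looks like this (the expression here is just for illustration):

import sympy as sp
from sympy.utilities.autowrap import autowrap

q0, l0 = sp.symbols('q0 l0')
expression = sp.cos(q0) * l0

function = autowrap(expression, backend="cython", args=(q0, l0))
print(function(0.5, 2.0))  # 2 * cos(0.5), approximately 1.7552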

In the last Sympy post I looked at a hard-coded calculation against the Sympy generated function, using the timeit function, which runs a given chunk of code 1,000,000 times and tells you how long it took. To save tracking down the results from that post, here are the timeit results of a Sympy function generated using just the lambdify function vs the hard-coding the function in straight Numpy:

[Figure: timeit results, SymPy lambdify function vs hard-coded NumPy function]

And now using the autowrap function to compile to Cython code, compared to hard-coding the function in Numpy:

[Figure: timeit results, Cython autowrap function vs hard-coded NumPy function]

This is a pretty crazy improvement, and means that we can have the best of both worlds. We can declare our transforms in SymPy and automate the calculation of our Jacobians and inertia matrices, and still have the code execute fast enough that we can use it to control real robots.

That said, this is all from my basic experimenting with some different optimisations, which is a far from thorough exploration of the space. If you know of any other ways to speed up execution, please comment below!

You can find the code I used to generate the above plots up on my GitHub.


Using SymPy’s lambdify for generating transform matrices and Jacobians

I’ve been working in VREP with some of their different robot models and testing out force control, and one of the things that becomes pretty important for efficient workflow is to have a streamlined method for setting up the transform matrices and calculating the Jacobians for different robots. You do not want to be working these all out by hand and then writing them in yourself.

A solution that’s been working well for me (and is fully implemented in Python) is to use SymPy to set up the basic transform matrices, from each joint to the next, and then use its derivative function to calculate the Jacobian. Once this is calculated you can then use SymPy’s lambdify function to parameterize this, and off you go! In this post I’m going to work through an example for controlling VREP’s UR5 arm using force control. And if you’re just looking for code examples, you can find it all up on my GitHub.

Edit: Updated the code to be much nicer, added saving of calculated functions, and a null space controller.
Edit 2: Removed the simplify calls from the code, not worth the generation time increase (if execution is a great concern can generate the functions using cython).

Setting up the transform matrices

This is the part of this process that is unique to each arm. The goal is to set up a system so that you can specify your transforms for each joint (and to each centre-of-mass (COM) too, of course) and then be on your way. So here’s the UR5, cute thing that it is:

[Figure: the UR5 arm]

For the UR5, there are 6 joints, and we’re going to be specifying 9 transform matrices: 6 joints and the COMs for three arm segments (the remaining arm segment COMs are centered at their respective joint’s reference frame). The joints are all rotary, with joints 0 and 4 rotating around the z-axis, and the rest rotating around the x-axis.

So first, we’re going to create a class called robot_config. Then, to create our transform matrices in SymPy, we need to set up the variables we’re going to be using:

# set up our joint angle symbols (6th angle doesn't affect any kinematics)
self.q = [sp.Symbol('q%i'%ii) for ii in range(6)]
# segment lengths associated with each joint
self.L = np.array([0.0935, .13453, .4251, .12, .3921, .0935, .0935, .0935])

where self.q is an array storing all our joint angle symbols, and self.L is an array of all of the offsets for each joint and arm segment lengths.

Using these to create the transform matrices for the joints, we get a set up that looks like this:

# transform matrix from origin to joint 0 reference frame
self.T0org = sp.Matrix([[sp.cos(self.q[0]), -sp.sin(self.q[0]), 0, 0],
                        [sp.sin(self.q[0]), sp.cos(self.q[0]), 0, 0],
                        [0, 0, 1, self.L[0]],
                        [0, 0, 0, 1]])

# transform matrix from joint 0 to joint 1 reference frame
self.T10 = sp.Matrix([[1, 0, 0, -self.L[1]],
                      [0, sp.cos(-self.q[1] + sp.pi/2),
                       -sp.sin(-self.q[1] + sp.pi/2), 0],
                      [0, sp.sin(-self.q[1] + sp.pi/2),
                       sp.cos(-self.q[1] + sp.pi/2), 0],
                      [0, 0, 0, 1]])

# transform matrix from joint 1 to joint 2 reference frame
self.T21 = sp.Matrix([[1, 0, 0, 0],
                      [0, sp.cos(-self.q[2]),
                       -sp.sin(-self.q[2]), self.L[2]],
                      [0, sp.sin(-self.q[2]),
                       sp.cos(-self.q[2]), 0],
                      [0, 0, 0, 1]])

# transform matrix from joint 2 to joint 3
self.T32 = sp.Matrix([[1, 0, 0, self.L[3]],
                      [0, sp.cos(-self.q[3] - sp.pi/2),
                       -sp.sin(-self.q[3] - sp.pi/2), self.L[4]],
                      [0, sp.sin(-self.q[3] - sp.pi/2),
                       sp.cos(-self.q[3] - sp.pi/2), 0],
                      [0, 0, 0, 1]])

# transform matrix from joint 3 to joint 4
self.T43 = sp.Matrix([[sp.sin(-self.q[4] - sp.pi/2),
                       sp.cos(-self.q[4] - sp.pi/2), 0, -self.L[5]],
                      [sp.cos(-self.q[4] - sp.pi/2),
                       -sp.sin(-self.q[4] - sp.pi/2), 0, 0],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]])

# transform matrix from joint 4 to joint 5
self.T54 = sp.Matrix([[1, 0, 0, 0],
                      [0, sp.cos(self.q[5]), -sp.sin(self.q[5]), 0],
                      [0, sp.sin(self.q[5]), sp.cos(self.q[5]), self.L[6]],
                      [0, 0, 0, 1]])

# transform matrix from joint 5 to end-effector
self.TEE5 = sp.Matrix([[1, 0, 0, self.L[7]],
                       [0, 1, 0, 0],
                       [0, 0, 1, 0],
                       [0, 0, 0, 1]])

You can see a bunch of offsetting of the joint angles by -sp.pi/2, and this is to account for the expected 0 angle (straight along the reference frame’s x-axis) at those joints being different than the 0 angle defined in the VREP simulation (at 90 degrees from the x-axis). You can find these by either looking at and finding the joints’ 0 position in VREP or by trial-and-error empirical analysis.

Once you have your transforms, then you have to specify how to move from the origin to each reference frame of interest (the joints and COMs). For that, I’ve just set up a simple function with a switch statement:

# point of interest in the reference frame (right at the origin)
self.x = sp.Matrix([0,0,0,1])

def _calc_T(self, name, lambdify=True):
    """ Uses Sympy to generate the transform for a joint or link

    name string: name of the joint or link, or end-effector
    lambdify boolean: if True returns a function to calculate
                      the transform. If False returns the Sympy
                      matrix
    """

    # check to see if we have our transformation saved in file
    if os.path.isfile('%s/%s.T' % (self.config_folder, name)):
        Tx = cloudpickle.load(open('%s/%s.T' % (self.config_folder, name),
                                   'rb'))
    else:
        if name == 'joint0' or name == 'link0':
            T = self.T0org
        elif name == 'joint1' or name == 'link1':
            T = self.T0org * self.T10
        elif name == 'joint2':
            T = self.T0org * self.T10 * self.T21
        elif name == 'link2':
            T = self.T0org * self.T10 * self.Tl21
        elif name == 'joint3':
            T = self.T0org * self.T10 * self.T21 * self.T32
        elif name == 'link3':
            T = self.T0org * self.T10 * self.T21 * self.Tl32
        elif name == 'joint4' or name == 'link4':
            T = self.T0org * self.T10 * self.T21 * self.T32 * self.T43
        elif name == 'joint5' or name == 'link5':
            T = self.T0org * self.T10 * self.T21 * self.T32 * self.T43 * \
                self.T54
        elif name == 'link6' or name == 'EE':
            T = self.T0org * self.T10 * self.T21 * self.T32 * self.T43 * \
                self.T54 * self.TEE5
        Tx = T * self.x  # to convert from transform matrix to (x,y,z)

        # save to file
        cloudpickle.dump(Tx, open('%s/%s.T' % (self.config_folder, name),
                                  'wb'))

    if lambdify is False:
        return Tx
    return sp.lambdify(self.q, Tx)

So the first part is pretty straightforward: create the transform matrix, and then at the end, to get the (x,y,z) position, we just multiply by a vector we created that represents a point at the origin of the last reference frame. Some of the transform matrices (the ones to the arm segments) I didn’t specify above just to cut down on space.

The second part is where we use this awesome lambdify function, which lets us turn the matrix we’ve defined into a function, so that we can pass in joint angles and get back out the resulting (x,y,z) position. There’s also the option to get the straight-up SymPy matrix returned, in case you need the symbolic form (which we will!).

NOTE: You can also see that there’s a check built in to look for saved files, and to just load those saved files instead of recalculating things if they’re available. This is because calculating some of these matrices and their derivatives takes a long, long time. I used the cloudpickle module to do this because it’s able to easily handle saving a whole bunch of weird things that make normal pickle go sour.

Calculating the Jacobian

So now that we’re able to quickly generate the transform matrix for each point of interest on the UR5, we simply take the derivative of the equation for each (x,y,z) coordinate with respect to each joint angle to generate our Jacobian.

def _calc_J(self, name, lambdify=True):
    """ Uses Sympy to generate the Jacobian for a joint or link

    name string: name of the joint or link, or end-effector
    lambdify boolean: if True returns a function to calculate
                      the Jacobian. If False returns the Sympy
                      matrix
    """

    # check to see if we have our Jacobian saved in file
    if os.path.isfile('%s/%s.J' % (self.config_folder, name)):
        J = cloudpickle.load(open('%s/%s.J' %
                             (self.config_folder, name), 'rb'))
    else:
        Tx = self._calc_T(name, lambdify=False)
        J = []
        # calculate derivative of (x,y,z) wrt to each joint
        for ii in range(self.num_joints):
            J.append([])
            J[ii].append(Tx[0].diff(self.q[ii]))  # dx/dq[ii]
            J[ii].append(Tx[1].diff(self.q[ii]))  # dy/dq[ii]
            J[ii].append(Tx[2].diff(self.q[ii]))  # dz/dq[ii]

Here we retrieve the Tx vector from our _calc_T function, and then calculate the derivatives. When calculating the Jacobian for the end-effector, this is all we need! Huzzah!

But to calculate the Jacobian for transforming the inertia matrices of each arm segment into joint space we’re going to need the orientation information added to our Jacobian as well. This we know ahead of time, for each joint it’s a 3D vector with a 1 on the axis being rotated around. So we can predefine this:

# orientation part of the Jacobian (compensating for orientations)
self.J_orientation = [
    [0, 0, 1], # joint 0 rotates around z axis
    [1, 0, 0], # joint 1 rotates around x axis
    [1, 0, 0], # joint 2 rotates around x axis
    [1, 0, 0], # joint 3 rotates around x axis
    [0, 0, 1], # joint 4 rotates around z axis
    [1, 0, 0]] # joint 5 rotates around x axis

And then we just fill in the Jacobians for each reference frame with self.J_orientation up to the last joint, and fill in the rest of the Jacobian with zeros. So e.g. when we’re calculating the Jacobian for the arm segment just past the second joint we’ll use the first two rows of self.J_orientation and the rest of the rows will be 0.

So this leads us to the second half of the _calc_J function:

        end_point = name.strip('link').strip('joint')
        if end_point != 'EE':
            end_point = min(int(end_point) + 1, self.num_joints)
            # add on the orientation information up to the last joint
            for ii in range(end_point):
                J[ii] = J[ii] + self.J_orientation[ii]
            # fill in the rest of the joints orientation info with 0
            for ii in range(end_point, self.num_joints):
                J[ii] = J[ii] + [0, 0, 0]

        # save to file
        cloudpickle.dump(J, open('%s/%s.J' %
                                 (self.config_folder, name), 'wb'))

    J = sp.Matrix(J).T  # correct the orientation of J
    if lambdify is False:
        return J
    return sp.lambdify(self.q, J)

The orientation information is added in, we save the result to file, and a function that takes in the joint angles and outputs the Jacobian is created (unless lambdify == False in which case the SymPy symbolic form is returned.)

Then finally, two wrapper functions are added in to make creating / accessing these functions easier. First, define a couple of dictionaries

# create function dictionaries
self._T = {}  # for transform calculations
self._J = {}  # for Jacobian calculations

and then our wrapper functions look like this

def T(self, name, q):
    """ Calculates the transform for a joint or link
    name string: name of the joint or link, or end-effector
    q np.array: joint angles
    """
    # check for function in dictionary
    if self._T.get(name, None) is None:
        print('Generating transform function for %s'%name)
        self._T[name] = self._calc_T(name)
    return self._T[name](*q)[:-1].flatten()

def J(self, name, q):
    """ Calculates the Jacobian for a joint or link
    name string: name of the joint or link, or end-effector
    q np.array: joint angles
    """
    # check for function in dictionary
    if self._J.get(name, None) is None:
        print('Generating Jacobian function for %s'%name)
        self._J[name] = self._calc_J(name)
    return np.array(self._J[name](*q)).T

So how you use this class (all of this is in a class) is to call these T and J functions with the current joint angles. They’ll check to see if the functions have already been created or stored in a file; if they haven’t, then the T and / or J functions will be created, then our wrappers do a bit of formatting to get them into the proper form (i.e. transposing or cropping), and return you your (x,y,z) position or Jacobian!

NOTE: It’s a bit of a misnomer to have the function be called T and actually return to you Tx, but hey this is a blog. Lay off.
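
So a quick sketch of what calling these looks like, assuming an instance of the robot_config class above (the zero joint angles are just an example):

import numpy as np

rc = robot_config()
q = np.zeros(6)  # example joint angles

xyz = rc.T('EE', q)  # (x, y, z) position of the end-effector
J_EE = rc.J('EE', q)  # Jacobian of the end-effector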

Calculating the inertia matrix in joint-space and force of gravity

Now, since we’re here we might as well also calculate the functions for our inertia matrix in joint space and the effect of gravity. So, define a couple more placeholders in our robot_config class to help us:

self._M = []  # placeholder for (x,y,z) inertia matrices
self._Mq = None  # placeholder for joint space inertia matrix function
self._Mq_g = None  # placeholder for joint space gravity term function

and then add in our inertia matrix information (defined in each link’s centre-of-mass (COM) reference frame)

# create the inertia matrices for each link of the ur5
self._M.append(np.diag([1.0, 1.0, 1.0,
                        0.02, 0.02, 0.02]))  # link0
self._M.append(np.diag([2.5, 2.5, 2.5,
                        0.04, 0.04, 0.04]))  # link1
self._M.append(np.diag([5.7, 5.7, 5.7,
                        0.06, 0.06, 0.04]))  # link2
self._M.append(np.diag([3.9, 3.9, 3.9,
                        0.055, 0.055, 0.04]))  # link3
self._M.append(np.copy(self._M[1]))  # link4
self._M.append(np.copy(self._M[1]))  # link5
self._M.append(np.diag([0.7, 0.7, 0.7,
                        0.01, 0.01, 0.01]))  # link6

and then using our equations for calculating the system’s inertia and gravity we create our _calc_Mq and _calc_Mq_g functions
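
For reference, those equations are the standard sums over each arm segment. The inertia matrix in joint space is

\textbf{M}_q = \sum_i \textbf{J}_i^T \textbf{M}_i \textbf{J}_i

and the force of gravity in joint space is

\textbf{M}_{q_g} = \sum_i \textbf{J}_i^T \textbf{M}_i \textbf{g},

where \textbf{J}_i and \textbf{M}_i are the Jacobian and inertia matrix for arm segment i’s COM, and \textbf{g} is the gravity vector (self.gravity in the code below, which I’m assuming is defined elsewhere in the class, e.g. as [0, 0, -9.81, 0, 0, 0]^T).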

def _calc_Mq(self, lambdify=True):
    """ Uses Sympy to generate the inertia matrix in
    joint space for the ur5

    lambdify boolean: if True returns a function to calculate
                      the inertia matrix. If False returns the
                      Sympy matrix
    """

    # check to see if we have our inertia matrix saved in file
    if os.path.isfile('%s/Mq' % self.config_folder):
        Mq = cloudpickle.load(open('%s/Mq' % self.config_folder, 'rb'))
    else:
        # get the Jacobians for each link's COM
        J = [self._calc_J('link%s' % ii, lambdify=False)
             for ii in range(self.num_links)]

        # transform each inertia matrix into joint space
        # sum together the effects of arm segments' inertia on each motor
        Mq = sp.zeros(self.num_joints)
        for ii in range(self.num_links):
            Mq += J[ii].T * self._M[ii] * J[ii]

        # save to file
        cloudpickle.dump(Mq, open('%s/Mq' % self.config_folder, 'wb'))

    if lambdify is False:
        return Mq
    return sp.lambdify(self.q, Mq)

def _calc_Mq_g(self, lambdify=True):
    """ Uses Sympy to generate the force of gravity in
    joint space for the ur5

    lambdify boolean: if True returns a function to calculate
                      the gravity term. If False returns the
                      Sympy matrix
    """

    # check to see if we have our gravity term saved in file
    if os.path.isfile('%s/Mq_g' % self.config_folder):
        Mq_g = cloudpickle.load(open('%s/Mq_g' % self.config_folder,
                                     'rb'))
    else:
        # get the Jacobians for each link's COM
        J = [self._calc_J('link%s' % ii, lambdify=False)
             for ii in range(self.num_links)]

        # transform the force of gravity on each arm segment's
        # COM into joint space and sum the effects
        Mq_g = sp.zeros(self.num_joints, 1)
        for ii in range(self.num_links):
            Mq_g += J[ii].T * self._M[ii] * self.gravity

        # save to file
        cloudpickle.dump(Mq_g, open('%s/Mq_g' % self.config_folder,
                                    'wb'))

    if lambdify is False:
        return Mq_g
    return sp.lambdify(self.q, Mq_g)

and wrapper functions

def Mq(self, q):
    """ Calculates the joint space inertia matrix for the ur5

    q np.array: joint angles
    """
    # check for function in dictionary
    if self._Mq is None:
        print('Generating inertia matrix function')
        self._Mq = self._calc_Mq()
    return np.array(self._Mq(*q))

def Mq_g(self, q):
    """ Calculates the force of gravity in joint space for the ur5

    q np.array: joint angles
    """
    # check for function in dictionary
    if self._Mq_g is None:
        print('Generating gravity effects function')
        self._Mq_g = self._calc_Mq_g()
    return np.array(self._Mq_g(*q)).flatten()

and we’re all set!

Putting it all together

Now we have nice clean code to generate everything we need for our controller. Using the controller developed in this post as a base, we can replace those calculations with the following nice compact code (which also includes a secondary null-space controller to keep the arm near resting joint angles):

# calculate position of the end-effector
# derived in the ur5 calc_TnJ class
xyz = robot_config.T('EE', q)
# calculate the Jacobian for the end effector
JEE = robot_config.J('EE', q)
# calculate the inertia matrix in joint space
Mq = robot_config.Mq(q)
# calculate the effect of gravity in joint space
Mq_g = robot_config.Mq_g(q)

# convert the mass compensation into end effector space
Mx_inv = np.dot(JEE, np.dot(np.linalg.inv(Mq), JEE.T))
svd_u, svd_s, svd_v = np.linalg.svd(Mx_inv)
# cut off any singular values that could cause control problems
singularity_thresh = .00025
for i in range(len(svd_s)):
    svd_s[i] = 0 if svd_s[i] < singularity_thresh else \
        1./float(svd_s[i])
# numpy returns U,S,V.T, so have to transpose both here
Mx = np.dot(svd_v.T, np.dot(np.diag(svd_s), svd_u.T))

kp = 100
kv = np.sqrt(kp)
# calculate desired force in (x,y,z) space
u_xyz = np.dot(Mx, target_xyz - xyz)
# transform into joint space, add vel and gravity compensation
u = (kp * np.dot(JEE.T, u_xyz) - np.dot(Mq, kv * dq) - Mq_g)

# calculate our secondary control signal
# calculate the desired joint angle acceleration
q_des = (((robot_config.rest_angles - q) + np.pi) %
         (np.pi*2) - np.pi)
u_null = np.dot(Mq, (kp * q_des - kv * dq))

# calculate the null space filter
Jdyn_inv = np.dot(Mx, np.dot(JEE, np.linalg.inv(Mq)))
null_filter = (np.eye(robot_config.num_joints) -
               np.dot(JEE.T, Jdyn_inv))
u_null_filtered = np.dot(null_filter, u_null)

u += u_null_filtered

And there you go!

You can see all of this code up on my GitHub, along with a full example controlling a UR5 VREP model through reaching to a series of targets. It looks pretty much like this (slightly better, actually, because this gif was made before adding in the null space controller):

[Animation: UR5 arm reaching to a series of targets]

Overhead of using lambdify instead of hard-coding

This was a big question that I had, because when I’m running simulations the time step is on the order of a few milliseconds, with the controller code called at every time step. So I reaaaally can’t afford a crazy overhead for using lambdify.

To test this, I used the handy Python timeit, which requires a bit of awkward setup, but quite nicely calls the function a whole bunch of times (1,000,000 by default) and accounts for various other things going on that could affect the execution time.

I tested two sample functions, one simpler than the other. Here’s the code for setting up and testing the first function:

import timeit
import seaborn

# Test 1 ----------------------------------------------------------------------
print('\nTest function 1: ')
time_sympy1 = timeit.timeit(
        stmt = 'f(np.random.random(), np.random.random())',
        setup = 'import numpy as np;\
                import sympy as sp;\
                q0 = sp.Symbol("q0");\
                l0 = sp.Symbol("l0");\
                a = sp.cos(q0) * l0;\
                f = sp.lambdify((q0, l0), a, "numpy")')
print('Sympy lambdify function 1 time: ', time_sympy1)

time_hardcoded1 = timeit.timeit(
        stmt = 'np.cos(np.random.random())*np.random.random()',
        setup = 'import numpy as np')
print('Hard coded function 1 time: ', time_hardcoded1)

Pretty simple, a bit of a pain in the sympy setup, but other than that not bad at all. The second function I tested was just a random collection of cos and sin calls that resemble what gets computed in a Jacobian:

l1*np.sin(q0 - l0*np.sin(q1)*np.cos(q2) - l2*np.sin(q2) - l0*np.sin(q1) + q0*l0)*np.cos(q0) + l2*np.sin(q0)
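
The timing code for this second function follows the same pattern as the first test. A sketch of the setup (mirroring the structure above, not necessarily the exact code used to generate the plots):

# Test 2 ----------------------------------------------------------------------
print('\nTest function 2: ')
time_sympy2 = timeit.timeit(
        stmt = 'f(np.random.random(), np.random.random(),\
                np.random.random(), np.random.random(),\
                np.random.random(), np.random.random())',
        setup = 'import numpy as np;\
                import sympy as sp;\
                q0, q1, q2, l0, l1, l2 = sp.symbols("q0 q1 q2 l0 l1 l2");\
                a = l1*sp.sin(q0 - l0*sp.sin(q1)*sp.cos(q2) - l2*sp.sin(q2)\
                    - l0*sp.sin(q1) + q0*l0)*sp.cos(q0) + l2*sp.sin(q0);\
                f = sp.lambdify((q0, q1, q2, l0, l1, l2), a, "numpy")')
print('Sympy lambdify function 2 time: ', time_sympy2)

time_hardcoded2 = timeit.timeit(
        stmt = 'l1*np.sin(q0 - l0*np.sin(q1)*np.cos(q2) - l2*np.sin(q2)\
                - l0*np.sin(q1) + q0*l0)*np.cos(q0) + l2*np.sin(q0)',
        setup = 'import numpy as np;\
                q0, q1, q2, l0, l1, l2 = np.random.random(6)')
print('Hard coded function 2 time: ', time_hardcoded2)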

And here’s the results:

[Figure: timeit results for both test functions, SymPy lambdify vs hard-coded NumPy]

So it’s slower for sure, but again this is the difference in time after 1,000,000 function calls, so until some big optimization needs to be done, the slight loss in execution time from using the SymPy lambdify function is definitely worth the insane convenience.

The full code for the timing tests here is also up on my GitHub.
