**Introduction**

There are numerous resources that introduce the topic of orientation control, so I’m not going to do a full rehash here. I will link to resources that I found helpful, but a quick google search will pull up many useful references on the basics.

When describing the orientation of the end-effector there are three primary methods used: Euler angles, rotation matrices, and quaternions. The Euler angles α, β, and γ denote roll, pitch, and yaw, respectively. Rotation matrices specify 3 orthogonal unit vectors, and I describe in detail how to calculate them in this post on forward transforms. And quaternions are 4-dimensional vectors used to describe 3-dimensional orientation, which provide stability and require less memory and compute, but are more complicated to understand.

Most modern robotics control is done using quaternions because they do not have singularities, and it is straightforward to convert them to other representations. Fun fact: quaternion trajectories also interpolate nicely, where Euler angles and rotation matrices do not, so they are used in computer graphics to generate a trajectory for an object to follow in orientation space.
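As a quick illustration of that interpolation property, here is a minimal spherical linear interpolation (slerp) sketch in plain numpy; the function name and the (w, x, y, z) storage order are my own choices, not from any particular library:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions (w, x, y, z)."""
    q0 = q0 / np.linalg.norm(q0)
    q1 = q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:  # q and -q are the same rotation: take the short way around
        q1, dot = -q1, -dot
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    if theta < 1e-8:  # nearly identical quaternions: avoid dividing by zero
        return q0
    return (np.sin((1 - t) * theta) * q0 +
            np.sin(t * theta) * q1) / np.sin(theta)

# interpolate between no rotation and a 90 degree rotation about z
q_start = np.array([1.0, 0.0, 0.0, 0.0])
q_end = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
q_half = slerp(q_start, q_end, 0.5)  # a 45 degree rotation about z
```

Every intermediate quaternion stays on the unit sphere, which is exactly why the interpolated trajectory stays a valid orientation at every step.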

While you won’t need a full understanding of quaternions to use the orientation control code, it definitely helps if anything goes wrong. If you are looking to learn about or brush up on quaternions, or even if you’re not but you haven’t seen this resource, you should definitely check out these interactive videos by Grant Sanderson and Ben Eater. They have done an incredible job developing modules to give people an intuition into how quaternions work, and I can’t recommend their work enough. There’s also a non-interactive video version that covers the same material.

In control literature, angular velocity and acceleration are denoted ω and ω̇. It’s important to remember that ω denotes a *velocity*, and not a position, as we work through things, or it could take you a lot longer to understand than it otherwise might…

**How to generate task-space angular forces**

So, if you recall a post long ago on Jacobians, our task-space Jacobian has 6 rows: three corresponding to the linear velocities (x, y, z) of the hand, and three corresponding to its angular velocities.

In position control, where we’re only concerned about the position of the hand, the angular velocity dimensions are stripped out of the Jacobian so that it’s a 3 x n_joints matrix rather than a 6 x n_joints matrix. So the orientation of the end-effector is not controlled at all. When we did this, we didn’t need to worry or know anything about the form of the angular velocities.

Now that we’re interested in orientation control, however, we will need to learn about the form of these angular velocities (and thank you to Yitao Ding for clarifying this point in comments on the original version of this post). The orientation rows of the Jacobian describe the rotational velocities around each axis with respect to joint velocities. The rotational velocity around each axis “happens” at the same time. It’s important to note that this is not the same thing as Euler angles, which are applied sequentially.

To generate our task-space angular forces we will have to generate an orientation angle error signal of the appropriate form. To do that, first we’re going to have to get and be able to manipulate the orientation representation of the end-effector and our target.

**Transforming orientation between representations**

We can get the current end-effector orientation in rotation matrix form quickly, using the transformation matrices for the robot. To get the target end-effector orientation in the examples below we’ll use the VREP remote API, which returns Euler angles.

It’s important to note that Euler angles can come in 12 different formats. You have to know what kind of Euler angles you’re dealing with (e.g. rotate around X then Y then Z, or rotate around X then Y then X, etc) for all of this to work properly. It should be well documented somewhere, for example the VREP API page tells us that it will return angles corresponding to x, y, and then z rotations.

The axes of rotation can be static (extrinsic rotations) or rotating (intrinsic rotations). NOTE: The VREP page says that the angles rotate around an absolute frame of reference, which I take to mean static, but I believe that’s a typo on their page. If you calculate the orientation of the end-effector of the UR5 using transform matrices, and then convert it to Euler angles with `axes='rxyz'`, you get a match with the displayed Euler angles, but not with `axes='sxyz'`.
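To make the static vs. rotating distinction concrete, here's a small numpy sketch (the helper functions are my own, not the transformations module): for the same three angles, composing the elementary rotations in opposite orders gives the extrinsic ('sxyz'-style) and intrinsic ('rxyz'-style) results, and they are genuinely different matrices:

```python
import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

a, b, g = 0.3, -1.1, 0.6
# extrinsic (static axes) x-y-z: each rotation is about the fixed world frame
R_static = Rz(g) @ Ry(b) @ Rx(a)
# intrinsic (rotating axes) x-y-z: each rotation is about the new body frame
R_rotating = Rx(a) @ Ry(b) @ Rz(g)
```

Both results are valid rotation matrices; they just describe different orientations, which is why mixing up the convention silently breaks everything downstream.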

Now we’re going to have to be able to transform between Euler angles, rotation matrices, and quaternions. There are well established methods for doing this, and a bunch of people have coded things up to do it efficiently. Here I use the very handy transformations module from Christoph Gohlke at the University of California. Importantly, when converting to quaternions, don’t forget to normalize the quaternions to unit length.

```python
import numpy as np

from abr_control.arms import ur5 as arm
from abr_control.interfaces import VREP
from abr_control.utils import transformations

robot_config = arm.Config()
interface = VREP(robot_config)
interface.connect()

feedback = interface.get_feedback()
# get the end-effector orientation matrix
R_e = robot_config.R('EE', q=feedback['q'])
# calculate the end-effector unit quaternion
q_e = transformations.unit_vector(
    transformations.quaternion_from_matrix(R_e))

# get the target information from VREP
target = np.hstack([
    interface.get_xyz('target'),
    interface.get_orientation('target')])

# calculate the target orientation rotation matrix
R_d = transformations.euler_matrix(
    target[3], target[4], target[5], axes='rxyz')[:3, :3]

# calculate the target orientation unit quaternion
q_d = transformations.unit_vector(
    transformations.quaternion_from_euler(
        target[3], target[4], target[5],
        axes='rxyz'))  # converting angles from 'rotating xyz'
```

**Generating the orientation error**

I implemented 4 different methods for calculating the orientation error, from (Caccavale et al, 1998), (Yuan, 1988) and (Nakanishi et al, 2008), and then one based off some code I found on Stack Overflow. I’ll describe each below, and then we’ll look at the results of applying them in VREP.

**Method 1 – Based on code from StackOverflow**

Given two orientation quaternions q_e (the current end-effector orientation) and q_d (the target orientation), we want to calculate the rotation quaternion q_r that takes us from q_e to q_d:

q_r q_e = q_d.

To isolate q_r, we right multiply both sides by the inverse of q_e, giving q_r = q_d q_e⁻¹. All orientation quaternions are of unit length, and for unit quaternions the inverse is the same as the conjugate. To calculate the conjugate of a quaternion, denoted q* here, we negate *either* the scalar or vector part of the quaternion, but not both.
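Written out in plain numpy (using the same (w, x, y, z) ordering as the transformations module; the helper names here are mine), the rotation between the two orientations looks like:

```python
import numpy as np

def quat_multiply(q, p):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = p
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1])

def quat_conjugate(q):
    """Negate the vector part; equals the inverse for unit quaternions."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

# a 0.8 rad rotation about the y-axis
q = np.array([np.cos(0.4), 0.0, np.sin(0.4), 0.0])
# rotating there and then back again gives the identity quaternion
identity = quat_multiply(q, quat_conjugate(q))
```

With these helpers, the rotation error is `quat_multiply(q_d, quat_conjugate(q_e))`, which is exactly what `transformations.quaternion_multiply` and `quaternion_conjugate` compute in the snippet below.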

Great! Now we know how to calculate the rotation needed to get from the current orientation to the target orientation. Next, we have to get a set of target Euler angle forces from q_r. In A new method for performing digital control system attitude computations using quaternions (Ickes, 1968), the author mentions that

For control purposes, the last three elements of the quaternion define the roll, pitch, and yaw rotational errors…

So you can just take the vector part of the quaternion and use that as your desired Euler angle forces.

```python
# calculate the rotation between current and target orientations
q_r = transformations.quaternion_multiply(
    q_target, transformations.quaternion_conjugate(q_e))
# convert rotation quaternion to Euler angle forces
u_task[3:] = ko * q_r[1:] * np.sign(q_r[0])
```

NOTE: You will run into issues when the 180 degree point is crossed, where the arm ‘goes the long way around’. To account for this, use `q_r[1:] * np.sign(q_r[0])`. This will make sure that you always rotate along a trajectory of less than 180 degrees towards the target angle. The reason this crops up is that there are multiple quaternions that can represent the same orientation.
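A quick numpy check of why the sign trick works, under the (w, x, y, z) convention: a rotation of 350 degrees and one of -10 degrees about the same axis describe the same orientation with opposite-sign quaternions, and multiplying the vector part by the sign of the scalar part maps both onto the short-way error:

```python
import numpy as np

def error_vec(q_r):
    # vector part of the rotation quaternion, flipped so that the
    # equivalent representation with a non-negative scalar part is used
    return q_r[1:] * np.sign(q_r[0])

def quat_about_z(angle):
    """Unit quaternion (w, x, y, z) for a rotation about the z-axis."""
    return np.array([np.cos(angle / 2), 0.0, 0.0, np.sin(angle / 2)])

q_long = quat_about_z(np.deg2rad(350))   # the 'long way around'
q_short = quat_about_z(np.deg2rad(-10))  # same orientation, short way
```

Both quaternions yield the same (small, negative-z) error vector once the sign trick is applied, so the controller rotates -10 degrees rather than +350.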

The following figure shows the arm being directed from and to the same orientations, where the one on the left takes the long way around, and the one on the right multiplies by the sign of the scalar component of the quaternion as specified above.

**Method 2 – Quaternion feedback from *Resolved-acceleration control of robot manipulators: A critical review with experiments* (Caccavale et al, 1998)**

In section IV, equation (34) of this paper they specify the orientation error to be calculated as

u_o = -R_e ε_ed,

where R_e is the rotation matrix for the end-effector, and ε_ed is the vector part of the unit quaternion that can be extracted from the rotation matrix R_ed = R_eᵀ R_d.

Implementing this is pretty straightforward using the `transformations.py` module to handle the representation conversions:

```python
# From (Caccavale et al, 1997)
# Section IV - Quaternion feedback
R_ed = np.dot(R_e.T, R_d)  # eq 24
q_ed = transformations.quaternion_from_matrix(R_ed)
q_ed = transformations.unit_vector(q_ed)
u_task[3:] = -np.dot(R_e, q_ed[1:])  # eq 34
```

**Method 3 – Angle/axis feedback from *Resolved-acceleration control of robot manipulators: A critical review with experiments* (Caccavale et al, 1998)**

In section V of the paper, they present an angle/axis feedback algorithm, which overcomes the singularity issues that classic Euler angle methods suffer from. The algorithm defines the orientation error in equation (45) to be calculated as

u_o = -2 η_de ε_de,

where η_de and ε_de are the scalar and vector parts of the quaternion representation of R_de = R_d R_eᵀ, with R_d the rotation matrix representing the desired orientation and R_e the rotation matrix representing the end-effector orientation. The code implementation for this looks like:

```python
# From (Caccavale et al, 1997)
# Section V - Angle/axis feedback
R_de = np.dot(R_d, R_e.T)  # eq 44
q_ed = transformations.quaternion_from_matrix(R_de)
q_ed = transformations.unit_vector(q_ed)
u_task[3:] = -2 * q_ed[0] * q_ed[1:]  # eq 45
```

From playing around with this briefly, it seems like this method also works. The authors note in the discussion that it may “suffer in the case of large orientation errors”, but I wasn’t able to elicit poor behaviour when playing around with it in VREP. The other downside they mention is that the computational burden is heavier with this method than with quaternion feedback.

**Method 4 – From *Closed-loop manipulator control using quaternion feedback* (Yuan, 1988) and *Operational space control: A theoretical and empirical comparison* (Nakanishi et al, 2008)**

This was the one method that I wasn’t able to get implemented / working properly. Originally presented in (Yuan, 1988), and then modified to represent the angular velocity in world rather than local coordinates in (Nakanishi et al, 2008), the equation for generating the error, matching the code below, is (Nakanishi eq 72):

u_o = -(η_d ε_e - η_e ε_d + S(ε_d) ε_e),

where η and ε are the scalar and vector components of the quaternions representing the end-effector (subscript e) and target (subscript d) orientations, respectively, and S(ε_d) is the skew-symmetric matrix defined in (Nakanishi eq 73).

My code for this implementation looks like:

```python
S = np.array([
    [0.0, -q_d[2], q_d[1]],
    [q_d[2], 0.0, -q_d[0]],
    [-q_d[1], q_d[0], 0.0]])
u_task[3:] = -(q_d[0] * q_e[1:] - q_e[0] * q_d[1:] +
               np.dot(S, q_e[1:]))
```

If you understand why this isn’t working, or can provide a working code example in the comments, I would be very grateful.
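For what it's worth, here is my best guess at the culprit, offered as an assumption rather than a verified fix: the transformations module returns quaternions in (w, x, y, z) order, so in the snippet above `q_d[0]` is the scalar part, while the skew-symmetric matrix in (Nakanishi eq 73) should be built from the vector part, `q_d[1:]`. A sketch with that indexing:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix S(v), so that S(v) @ u == np.cross(v, u)."""
    return np.array([
        [0.0, -v[2], v[1]],
        [v[2], 0.0, -v[0]],
        [-v[1], v[0], 0.0]])

def orientation_error(q_d, q_e):
    """(Nakanishi eq 72) with quaternions in (w, x, y, z) order."""
    S = skew(q_d[1:])  # vector part only; the snippet above used q_d[:3]
    return -(q_d[0] * q_e[1:] - q_e[0] * q_d[1:] + np.dot(S, q_e[1:]))

# sanity check: target equal to the current orientation
q = np.array([np.cos(0.3), 0.0, np.sin(0.3), 0.0])
```

With this indexing the error goes to zero when the orientations match (ε × ε = 0), which the mixed scalar/vector indexing does not guarantee.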

**Generating the full orientation control signal**

The above steps generate the task-space control signal; from here I’m just using standard operational space control methods to take `u_task` and transform it into joint torques to send out to the arm, with the caveat that I’m accounting for velocity in joint space, not task space. Generating the full control signal looks like:

```python
# which dim to control of [x, y, z, alpha, beta, gamma]
ctrlr_dof = np.array([False, False, False, True, True, True])

feedback = interface.get_feedback()
# get the end-effector orientation matrix
R_e = robot_config.R('EE', q=feedback['q'])
# calculate the end-effector unit quaternion
q_e = transformations.unit_vector(
    transformations.quaternion_from_matrix(R_e))

# get the target information from VREP
target = np.hstack([
    interface.get_xyz('target'),
    interface.get_orientation('target')])

# calculate the target orientation rotation matrix
R_d = transformations.euler_matrix(
    target[3], target[4], target[5], axes='rxyz')[:3, :3]

# calculate the target orientation unit quaternion
q_d = transformations.unit_vector(
    transformations.quaternion_from_euler(
        target[3], target[4], target[5],
        axes='rxyz'))  # converting angles from 'rotating xyz'

# calculate the Jacobian for the end-effector
# and isolate the relevant dimensions
J = robot_config.J('EE', q=feedback['q'])[ctrlr_dof]

# calculate the inertia matrix in joint space
M = robot_config.M(q=feedback['q'])
# calculate the inertia matrix in task space
M_inv = np.linalg.inv(M)
Mx_inv = np.dot(J, np.dot(M_inv, J.T))
if np.linalg.det(Mx_inv) != 0:
    # do the linalg inverse if matrix is non-singular
    # because it's faster and more accurate
    Mx = np.linalg.inv(Mx_inv)
else:
    # using the rcond to set singular values < thresh to 0
    # singular values < (rcond * max(singular_values)) set to 0
    Mx = np.linalg.pinv(Mx_inv, rcond=.005)

u_task = np.zeros(6)  # [x, y, z, alpha, beta, gamma]

# generate orientation error
# CODE FROM ONE OF ABOVE METHODS HERE

# remove uncontrolled dimensions from u_task
u_task = u_task[ctrlr_dof]

# transform from operational space to torques and
# add in velocity and gravity compensation in joint space
u = (np.dot(J.T, np.dot(Mx, u_task)) -
     kv * np.dot(M, feedback['dq']) -
     robot_config.g(q=feedback['q']))

# apply the control signal, step the sim forward
interface.send_forces(u)
```

The control script in full context is available up on my GitHub along with the corresponding VREP scene. If you download and run both (and have the ABR Control repo installed), then you can generate fun videos like the following:

Here, the green ball is the target, and the end-effector is being controlled to match the orientation of the ball. The blue box is just a visualization aid for displaying the orientation of the end-effector. The hand is there from another project I was working on and forgot to remove before making the videos; it’s set to not affect the dynamics, so don’t worry. The target changes orientation once a second. The gains for these trials were `ko=200` and `kv=np.sqrt(600)`.

The first three methods all perform relatively similarly to each other, although method 3 seems to be a bit faster to converge to the target orientation after the first movement. But it’s pretty clear something is terribly wrong with the implementation of the Yuan algorithm in method 4; brownie points for whoever figures out what!

**Controlling position and orientation**

So you want to use force control to control both position *and* orientation, eh? You are truly reaching for the stars, and I applaud you. For the most part, this is pretty straightforward. But there are a couple of gotchas, so I’ll explicitly go through the process here.

__How many degrees-of-freedom (DOF) can be controlled?__

If you recall from my article on Jacobians, there was a section on analysing the Jacobian. It comes down to two main points: 1) The Jacobian specifies which task-space DOF can be controlled. If there is a row of zeros, for example, the corresponding task-space DOF cannot be controlled. 2) The rank of the Jacobian determines how many DOF can be controlled *at the same time.*

For example, in a two joint planar arm, the x, y, and γ (rotation about the z-axis) variables can be controlled, but z, α, and β cannot be controlled because their corresponding rows are all zeros. So 3 variables can potentially be controlled, but because the Jacobian is rank 2 only two variables can be controlled at a time. If you try to control more than 2 DOF at a time things are going to go poorly. Here are some animations of trying to control 3 DOF vs 2 DOF in a 2 joint arm:
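That analysis can be checked directly in numpy, using the standard geometric Jacobian for a two-joint planar arm (link lengths here are my own choice):

```python
import numpy as np

def planar_jacobian(q, l1=1.0, l2=1.0):
    """Full 6 x 2 geometric Jacobian for a 2-joint planar arm."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([
        [-l1*s1 - l2*s12, -l2*s12],   # dx/dq
        [ l1*c1 + l2*c12,  l2*c12],   # dy/dq
        [0.0, 0.0],                   # z: out of the plane, row of zeros
        [0.0, 0.0],                   # alpha: rotation about x, row of zeros
        [0.0, 0.0],                   # beta: rotation about y, row of zeros
        [1.0, 1.0]])                  # gamma: rotation about z

J = planar_jacobian([0.3, 0.7])
```

The three zero rows show which task-space DOF can never be controlled, and the rank of 2 shows how many of the remaining three can be controlled at once.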

__How to specify which DOF are being controlled?__

Okay, so we don’t want to try to control too many DOF at once. Got it. Let’s say we know that our arm has 3 DOF; how do we choose which DOF to control? Simple: you remove the rows from your Jacobian and your control signal that correspond to the task-space DOF you don’t want to control.

To implement this in code in a flexible way, I’ve chosen to specify an array with 6 boolean elements, set to `True` if you want to control the corresponding task-space parameter and `False` if you don’t. For example, if you want to control just the x, y, z position parameters, you would set `ctrlr_dof = [True, True, True, False, False, False]`.

We then strip the Jacobian and task-space control signal down to the relevant rows with `J = robot_config.J('EE', q)[ctrlr_dof]` and `u_task = (current - target)[ctrlr_dof]`. This means that both `current` and `target` must be 6-dimensional vectors specifying the current and target values, respectively, regardless of how many dimensions we’re actually controlling.
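A tiny sketch of what that stripping does to the shapes, using a stand-in Jacobian rather than the real `robot_config` one:

```python
import numpy as np

ctrlr_dof = np.array([True, True, True, False, False, False])  # position only
n_joints = 6
J_full = np.arange(36, dtype=float).reshape(6, n_joints)  # stand-in Jacobian
current = np.ones(6)   # always 6D: [x, y, z, alpha, beta, gamma]
target = np.zeros(6)

J = J_full[ctrlr_dof]                   # rows for controlled DOF only
u_task = (current - target)[ctrlr_dof]  # matching error dimensions
```

Boolean indexing on the first axis keeps exactly the rows flagged `True`, so the Jacobian and error vector always agree on which DOF are being controlled.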

**Generating a position + orientation control signal**

The UR5 has 6 degrees of freedom, so we’re able to fully control the task-space position and orientation. To do this, in the above script just set `ctrlr_dof = np.array([True, True, True, True, True, True])`, and there you go! In the following animations the gain values used were `kp=300`, `ko=300`, and `kv=np.sqrt(kp+ko)*1.5`. The full script can be found up on my GitHub.

NOTE: Setting the gains properly for this task is pretty critical, and I did it just by trial and error until I got something that was decent for each. For a real comparison, the parameter tuning would have to be done more rigorously.

NOTE: When implementing this minimal code example script I ran into a problem caused by the task-space inertia matrix calculation. It turns out that `np.linalg.pinv` can give very different results than `np.linalg.inv`, and I did not realise this. I’m going to have to explore this more fully later, but basically, heads up that you want to be using `np.linalg.inv` as much as possible. So you’ll notice in the above code I check the determinant of `Mx_inv` and first try to use `np.linalg.inv` before resorting to `np.linalg.pinv`.
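The difference between the two inverses only shows up near singularities, but there it is dramatic. A toy example: with `rcond=0.005`, any singular value smaller than 0.5% of the largest is zeroed out by `np.linalg.pinv` rather than inverted:

```python
import numpy as np

# a task-space-inertia-like matrix with one small singular value,
# as happens when the arm approaches a singular configuration
Mx_inv = np.diag([1.0, 1e-4])

exact = np.linalg.inv(Mx_inv)                 # inverts everything: huge entry
damped = np.linalg.pinv(Mx_inv, rcond=0.005)  # small singular value -> 0
```

The exact inverse produces an enormous (and in control terms, violent) entry where the pseudo-inverse simply gives up on that direction, which is why the two can drive an arm very differently.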

NOTE: If you start playing around with controlling only one or two of the orientation angles, something to keep in mind: because we’re using rotating axes, if you set the orientation part to `False, False, True`, then it’s not going to look like the γ of the end-effector is lining up with the γ of the target. This is because α and β weren’t set first. If you generate a plot of the target orientations vs the end-effector orientations, however, you’ll see that you are in fact reaching the target orientation for γ.

**In summary**

So that’s that! Lots of caveats, notes, and more work to be done, but hopefully this will be a useful resource for any others embarking on the same journey. You can download the code, try it out, and play around with the gains and targets. Let me know below if you have any questions or enjoyed the post, or want to share any other resources on force control of task-space orientation.

And once again thanks to Yitao Ding for his helpful comments and corrections on the original version of this article.

If you’re not familiar with policy gradient or various approaches to the cart-pole problem I also suggest that you check out this blog post from KV Frans, which provides the basis for the code I use here. All of the code that I use in this post is available up on my GitHub, and it’s worth noting this code is for TensorFlow 1.7.0.

**Natural policy gradient review**

Let’s informally define our notation:

- s_t and a_t are the system state and chosen action at time t
- θ are the network parameters that we’re learning
- π_θ is the control policy network parameterized by θ
- A^π(s_t, a_t) is the advantage function, which returns the estimated total reward for taking action a_t in state s_t relative to default behaviour (i.e. was this action better or worse than what we normally do)

The regular policy gradient is calculated as:

g = E[ Σ_t ∇_θ log π_θ(a_t | s_t) Â_t ],

where ∇_θ denotes the partial derivative with respect to θ, the hat above A denotes that we’re using an estimation of the advantage function, and π_θ(a_t | s_t) returns a scalar value which is the probability of the chosen action a_t in state s_t.

For the natural policy gradient, we’re going to calculate the Fisher information matrix, F, which estimates the curvature of the control policy:

F = E[ ∇_θ log π_θ(a_t | s_t) ∇_θ log π_θ(a_t | s_t)ᵀ ].

The goal of including curvature information is to be able to move along the steepest ascent direction, which we estimate with F⁻¹g. Including this, the weight update becomes

θ_{k+1} = θ_k + α F⁻¹ g,

where α is a learning rate parameter. In his paper Towards Generalization and Simplicity in Continuous Control, Aravind Rajeswaran notes that empirically it’s difficult to choose an appropriate α value or set an appropriate schedule. To address this, they normalize the update under the Fisher metric:

α = sqrt( δ / (gᵀ F⁻¹ g) ),

where δ is a normalized step size parameter.

Using the above for the learning rate can be thought of as choosing a step size that is normalized with respect to the change in the control policy. This is beneficial because it means that we don’t do any parameter updates that will drastically change the output of the policy network.
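Putting the normalized update together in numpy (a sketch of the formulas above; the variable names are mine):

```python
import numpy as np

def natural_gradient_step(g, F, delta=0.05):
    """Natural gradient update with the Fisher-normalized step size."""
    F_inv = np.linalg.inv(F)
    nat_grad = np.dot(F_inv, g)
    # step size chosen so the update has length sqrt(delta)
    # measured under the Fisher metric
    alpha = np.sqrt(delta / np.dot(g, np.dot(F_inv, g)))
    return alpha * nat_grad

# with an identity Fisher matrix the step keeps the gradient direction,
# but its size depends only on delta, not on the gradient magnitude
g = np.array([3.0, 4.0])
step = natural_gradient_step(g, np.eye(2), delta=0.01)
```

Scaling `g` up or down leaves the step length unchanged, which is exactly the "don't drastically change the policy output" property described above.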

Here’s what the pseudocode for the algorithm looks like:

- Initialize policy parameters θ_0
- for k = 1 to K do:
- collect N trajectories by rolling out the stochastic policy π_{θ_k}
- compute ∇_θ log π_θ(a_t | s_t) for each (s_t, a_t) pair along the trajectories sampled
- compute advantages Â_t based on the sampled trajectories and the estimated value function
- compute the policy gradient g as specified above
- compute the Fisher matrix F and perform the weight update as specified above
- update the value network

OK great! On to implementation.

**Implementation in TensorFlow 1.7.0**

Calculating the gradient at every time step

In TensorFlow, the `tf.gradients(ys, xs)` function, which calculates the partial derivative of `ys` with respect to `xs`, will automatically sum over all elements in `ys`. There is no function for getting the gradient of each individual entry (i.e. the Jacobian) in `ys`, which is less than great. People have complained, but until some update happens we can address this with basic list comprehension, iterating through each entry in the history and calculating the gradient at that point in time. If you have a faster means that you’ve tested, please comment below!

The code for this looks like

```python
g_log_prob = tf.stack([
    tf.gradients(action_log_prob_flat[i], my_variables)[0]
    for i in range(action_log_prob_flat.get_shape()[0])])
```

where the result is called `g_log_prob` because it’s the gradient of the log probability of the chosen action at each time step.

Inverse of a positive definite matrix with rounding errors

One of the problems I ran into when calculating the normalized learning rate was that the square root calculation was dying because negative values were being passed in. This was very confusing, because F is a positive definite matrix, which means that F⁻¹ is also positive definite, which means that gᵀ F⁻¹ g > 0 for any g ≠ 0.

So it was clear that it was a calculation error somewhere, but I couldn’t find the source of the error for a while, until I started doing the inverse calculation myself using SVD to explicitly look at the eigenvalues and make sure they were all > 0. Turns out that there were some very, very small eigenvalues with a negative sign, like -1e-7 kind of range. My guess (and hope) is that these are just rounding errors, but they’re enough to mess up the calculations of the normalized learning rate. So, to handle this I explicitly set those values to zero, and that took care of that.

```python
S, U, V = tf.svd(F)
atol = tf.reduce_max(S) * 1e-6
S_inv = tf.divide(1.0, S)
S_inv = tf.where(S < atol, tf.zeros_like(S), S_inv)
S_inv = tf.diag(S_inv)
F_inv = tf.matmul(S_inv, tf.transpose(U))
F_inv = tf.matmul(V, F_inv)
```
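The same guard written in plain numpy, for anyone not on TensorFlow (a sketch; the tolerance choice mirrors the snippet above):

```python
import numpy as np

def stable_inverse(F, atol_scale=1e-6):
    """Pseudo-inverse via SVD, zeroing out singular values at
    rounding-error scale instead of inverting them."""
    U, S, Vt = np.linalg.svd(F)
    atol = S.max() * atol_scale
    S_inv = np.array([1.0 / s if s > atol else 0.0 for s in S])
    return np.dot(Vt.T, np.dot(np.diag(S_inv), U.T))

F = np.diag([2.0, 1e-12])  # one singular value at rounding-error scale
F_inv = stable_inverse(F)
```

Because the tiny singular value is dropped rather than inverted, quadratic forms like gᵀ F⁻¹ g stay non-negative and the square root in the normalized learning rate is safe.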

Manually adding an update to the learning parameters

Once the parameter update is explicitly calculated, we then need to update them. To implement an explicit parameter update in TensorFlow we do

```python
# update trainable parameters
my_variables[0] = tf.assign_add(my_variables[0], update)
```

So to update the parameters we call `sess.run(my_variables, feed_dict={...})`. It’s worth noting here, too, that any time you run the session with `my_variables` the parameter update will be calculated and applied, so you can’t run the session with `my_variables` just to fetch the current values.

**Results on the CartPole problem**

Now, running the original policy gradient algorithm against the natural policy gradient algorithm (with everything else the same), we can see that using the Fisher information matrix in the update provides some strong benefits. The plot below shows the maximum reward received in a batch of 200 time steps, where the system receives a reward of 1 for every time step that the pole stays upright, and 200 is the maximum reward achievable.

To generate this plot I ran 10 sessions of 300 batches, where each batch runs as many episodes as it takes to get 200 time steps of data. The solid lines are the mean value at each epoch across all sessions, and the shaded areas are the 95% confidence intervals. So we can see that the natural policy gradient starts to hit a reward of 200 inside 100 batches, and that the mean stays higher than normal policy gradient even after 300 batches.

It’s also worth noting that for both the policy gradient and natural policy gradient the average time to run a batch and weight update was about the same on my machine, between 150-180 milliseconds.

The modified code for policy gradient, the natural policy gradient, and plotting code are all up on my GitHub.

**Other notes**

- Had to reduce the normalized step size δ to 0.001 (from 0.05, which was used in the Rajeswaran paper) to get the learning to converge; I suspect this is because of the greatly reduced complexity of the space and control problem.
- The Rajeswaran paper algorithm does gradient ascent instead of descent, which is why the signs are how they are.

To do this, I start by using three methods from control literature (operational space control, dynamic movement primitives, and non-linear adaptive control) to create an algorithms level model of the motor control system that captures behavioural level phenomena of human movement. Then I explore how this functionality could be mapped to the primate brain and implemented in spiking neurons. Finally, I look at the data generated by this model on the behavioural level (e.g. kinematics of movement), the systems level (i.e. analysis of populations of neurons), and the single-cell level (e.g. correlating neural activity with movement parameters) and compare/contrast with experimental data.

The point of having a full model framework (from observable behaviour to neural spikes) is to have a more constrained computational model of the motor control system, adding lower-level biological constraints to behavioural models and higher-level behavioural constraints to neural models.

In general, once you have a model, the critical next step is generating testable predictions that can be used to discriminate between other models with different implementations or underlying algorithms. Discriminative predictions allow us to design experiments that can gather evidence for or against different hypotheses of brain function, and provide clues to useful directions for further research. This is the real contribution of computational modeling.

So that’s a quick overview of the paper; there are quite a few pages of supplementary information that describe the details of the model implementation, and I provided the code and data used to generate the data analysis figures. However, code to explicitly run the model on your own has been missing. As one of the major points of this blog is to provide code for furthering research, this is pretty embarrassing. So, to begin to remedy this, in this post I’m going to work through using the REACH framework to build models that control a two-link arm reaching in a line, tracing a circle, performing the centre-out reaching task, and adapting online to unexpected perturbations during reaching imposed by a joint-velocity-based forcefield.

This post is directed towards those who have already read the paper (although not necessarily the supplementary material). To run the scripts you’ll need Nengo, Nengo GUI, and NengoLib all installed. There’s a description of the theory behind the Neural Engineering Framework, which I use extensively in my Nengo modeling, in the paper. I’m hoping that between that and code readability / my explanations below that most will be comfortable starting to play around with the code. But if you’re not, and would like more resources, you can check out the Getting Started page on the Nengo website, the tutorials from How To Build a Brain, and the examples in the Nengo GUI.

You can find all of the code up on my GitHub.

NOTE: I’m using the two-link arm (which is fully implemented in Python) instead of the three-link arm (which has compile issues for Macs) both so that everyone should be able to run the model arm code and to reduce the number of neurons that are required for control, so that hopefully you can run this on your laptop in the event that you don’t have a super-powered Ubuntu workstation. Scaling this model up to the three-link arm is straightforward though, and I will work on getting code up (for the three-link arm for non-Mac users) as a next project.

**Implementing PMC – the trajectory generation system**

I’ve talked at length about dynamic movement primitives (DMPs) in previous posts, so I won’t describe those again here. Instead I will focus on their implementation in neurons.

```python
def generate(y_des, speed=1, alpha=1000.0):
    beta = alpha / 4.0

    # generate the forcing function
    forces, _, goals = forcing_functions.generate(
        y_des=y_des, rhythmic=False, alpha=alpha, beta=beta)

    # create alpha synapse, which has point attractor dynamics
    tau = np.sqrt(1.0 / (alpha*beta))
    alpha_synapse = nengolib.Alpha(tau)

    net = nengo.Network('PMC')
    with net:
        net.output = nengo.Node(size_in=2)

        # create a start / stop movement signal
        time_func = lambda t: min(max(
            (t * speed) % 4.5 - 2.5, -1), 1)

        def goal_func(t):
            t = time_func(t)
            if t <= -1:
                return goals[0]
            return goals[1]
        net.goal = nengo.Node(output=goal_func, label='goal')

        # -------------------- Ramp ---------------------------
        ramp_node = nengo.Node(output=time_func, label='ramp')
        net.ramp = nengo.Ensemble(
            n_neurons=500, dimensions=1, label='ramp ens')
        nengo.Connection(ramp_node, net.ramp)

        # ------------------- Forcing Functions ---------------
        def relay_func(t, x):
            t = time_func(t)
            if t <= -1:
                return [0, 0]
            return x
        # the relay prevents forces from being sent on reset
        relay = nengo.Node(output=relay_func, size_in=2)

        domain = np.linspace(-1, 1, len(forces[0]))
        x_func = interpolate.interp1d(domain, forces[0])
        y_func = interpolate.interp1d(domain, forces[1])
        transform = 1.0 / alpha / beta
        nengo.Connection(net.ramp, relay[0], transform=transform,
                         function=x_func, synapse=alpha_synapse)
        nengo.Connection(net.ramp, relay[1], transform=transform,
                         function=y_func, synapse=alpha_synapse)
        nengo.Connection(relay, net.output)

        nengo.Connection(net.goal[0], net.output[0],
                         synapse=alpha_synapse)
        nengo.Connection(net.goal[1], net.output[1],
                         synapse=alpha_synapse)

    return net
```

The `generate` method for the PMC takes in a desired trajectory, `y_des`, as a parameter. The first thing we do (on lines 5-6) is calculate the forcing function that will push the DMP point attractor along the desired trajectory.

The next thing (on lines 9-10) is creating an Alpha (second-order low-pass filter) synapse. By writing out the dynamics of a point attractor in Laplace space, one of the lab members, Aaron Voelker, noticed that the dynamics could be fully implemented by creating an Alpha synapse with an appropriate choice of `tau`. I walk through all of the math behind this in this post. Here we’ll use that more compact method and project directly to the `output` node, which improves performance and reduces the number of neurons.

Inside the PMC network we create a `time_func` node, which is the pace-setter during simulation. It will output a linear ramp from -1 to 1 every few seconds, with the pace set by the `speed` parameter, and then pause before restarting.
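You can see the shape of this signal by evaluating the lambda from the PMC code directly (here with `speed=1`): it sits clamped at -1, ramps up to 1 over two seconds, holds at 1, and then wraps around every 4.5 seconds.

```python
# the pace-setting ramp from the PMC code, with speed=1:
# clamped at -1 until t=1.5, ramps to 1 by t=3.5, holds, then repeats
time_func = lambda t: min(max((t * 1) % 4.5 - 2.5, -1), 1)
```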

We also have a `goal` node, which will provide a target starting and ending point for the trajectory. Both the `time_func` and `goal` nodes are created and used as a model simulation convenience, and proposed to be generated elsewhere in the brain (the basal ganglia, why not? #igotreasons #provemewrong).

The `ramp` ensemble is the most important component of the trajectory generation system. It takes the output from the `time_func` node as input, and generates the forcing function that guides our little system through the trajectory that was passed in. The ensemble itself is nothing special; what matters is the function it approximates on its outgoing connection. We set up this function approximation with the following code:

```python
domain = np.linspace(-1, 1, len(forces[0]))
x_func = interpolate.interp1d(domain, forces[0])
y_func = interpolate.interp1d(domain, forces[1])
transform = 1.0 / alpha / beta
nengo.Connection(net.ramp, relay[0], transform=transform,
                 function=x_func, synapse=alpha_synapse)
nengo.Connection(net.ramp, relay[1], transform=transform,
                 function=y_func, synapse=alpha_synapse)
nengo.Connection(relay, net.output)
```

We want the forcing function to be generated as the signal represented in the `ramp` ensemble moves from -1 to 1. To achieve this, we create interpolation functions, `x_func` and `y_func`, which are set up to generate the forcing function values for input values between -1 and 1. We pass these functions into the outgoing connections from the `ramp` population (one for `x` and one for `y`). So now when the `ramp` ensemble is representing -1, 0, and 1 the output along the two connections will be the starting, middle, and ending `x` and `y` points of the forcing function trajectory. The `transform` and `synapse` are set on each connection with the appropriate gain values and Alpha synapse, respectively, to implement point attractor dynamics.
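To make the mapping concrete, here is a toy version of the interpolation step, with made-up forcing function samples and `np.interp` standing in for `scipy.interpolate.interp1d`: a ramp value of -1 returns the first sample, 0 the middle sample, and 1 the last.

```python
import numpy as np

# made-up forcing function samples, one row per dimension
forces = np.array([[0.0, 2.0, 4.0],
                   [1.0, 3.0, 5.0]])
domain = np.linspace(-1, 1, len(forces[0]))  # [-1, 0, 1]

# np.interp stands in for scipy.interpolate.interp1d here
x_func = lambda v: np.interp(v, domain, forces[0])
y_func = lambda v: np.interp(v, domain, forces[1])
```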

NOTE: The above DMP implementation can generate a trajectory signal with as many dimensions as you would like; all that’s required is adding another outgoing `Connection` projecting from the `ramp` ensemble.

The last thing in the code is hooking up the `goal` node to the output, which completes the point attractor implementation.

**Implementing M1 – the kinematics of operational space control**

In REACH, we’ve modelled the primary motor cortex (M1) as responsible for taking in a desired hand movement (i.e. `target_position - current_hand_position`) and calculating a set of joint torques to carry that out. Explicitly, M1 generates the kinematics component of an OSC signal:

\[ \mathbf{u}_{\textrm{kin}} = \mathbf{J}^T \mathbf{M_x}\, k_p\, (\mathbf{x}^* - \mathbf{x}) \]

where \(\mathbf{J}\) is the Jacobian, \(\mathbf{M_x}\) is the task space inertia matrix, \(k_p\) is a proportional gain, and \(\mathbf{x}^* - \mathbf{x}\) is the desired hand movement.

In the paper I did this using several populations of neurons: one to calculate the Jacobian, and then an `EnsembleArray` to perform the multiplication for the dot product of each dimension separately. Since then I’ve had the chance to rework things, and it’s now done entirely in one ensemble.

Now, the set up for the M1 model that computes the above function is to have a single ensemble of neurons that takes in the joint angles, \(\mathbf{q}\), and the control signal \(\mathbf{x}^* - \mathbf{x}\), and outputs the function above. Sounds pretty simple, right? Simple is good.

Let’s look at the code (where I’ve stripped out the comments and some debugging code):

```python
def generate(arm, kp=1, operational_space=True,
             inertia_compensation=True, means=None, scales=None):

    dim = arm.DOF + 2
    means = np.zeros(dim) if means is None else means
    scales = np.ones(dim) if scales is None else scales
    scale_down, scale_up = generate_scaling_functions(
        np.asarray(means), np.asarray(scales))

    net = nengo.Network('M1')
    with net:
        # create / connect up M1 ------------------------------
        net.M1 = nengo.Ensemble(
            n_neurons=1000, dimensions=dim,
            radius=np.sqrt(dim),
            intercepts=AreaIntercepts(
                dimensions=dim,
                base=nengo.dists.Uniform(-1, .1)))

        # expecting input in form [q, x_des]
        net.input = nengo.Node(output=scale_down, size_in=dim)
        net.output = nengo.Node(size_in=arm.DOF)

        def M1_func(x, operational_space, inertia_compensation):
            """ calculate the kinematics of the OSC signal """
            x = scale_up(x)
            q = x[:arm.DOF]
            x_des = kp * x[arm.DOF:]

            # calculate hand (dx, dy)
            if operational_space:
                J = arm.J(q=q)

                if inertia_compensation:
                    # account for inertia
                    Mx = arm.Mx(q=q)
                    x_des = np.dot(Mx, x_des)
                # transform to joint torques
                u = np.dot(J.T, x_des)
            else:
                u = x_des

                if inertia_compensation:
                    # account for mass
                    M = arm.M(q=q)
                    u = np.dot(M, x_des)

            return u

        # send in system feedback and target information
        # don't account for synapses twice
        nengo.Connection(net.input, net.M1, synapse=None)
        nengo.Connection(
            net.M1, net.output,
            function=lambda x: M1_func(
                x, operational_space, inertia_compensation),
            synapse=None)

    return net
```

The ensemble of neurons itself is created with a few interesting parameters:

```python
net.M1 = nengo.Ensemble(
    n_neurons=1000, dimensions=dim,
    radius=np.sqrt(dim),
    intercepts=AreaIntercepts(
        dimensions=dim,
        base=nengo.dists.Uniform(-1, .1)))
```

Specifically, the `radius` and `intercepts` parameters.

**Setting the intercepts**

First we’ll discuss setting the `intercepts` using the `AreaIntercepts` distribution. The intercepts of a neuron determine how much of state space the neuron is active in, which we’ll refer to as the ‘active proportion’. If you don’t know what kind of functions you want to approximate with your neurons, then choosing the active proportions for your ensemble from a uniform distribution is a good starting point. This means that you’ll have roughly the same number of neurons active across all of state space as neurons active over half of state space as neurons active over only very small portions of state space.

By default, Nengo sets the intercepts such that the distribution of active proportions is uniform for lower dimensional spaces. But when you start moving into higher dimensional spaces (into a hypersphere) the default method breaks down, and you get mostly neurons that are either active for all of state space or almost none of it. The `AreaIntercepts` class calculates how the intercepts should be set to achieve the desired active proportion inside a hypersphere. There are a lot more details that you can read up on in this IPython notebook by Dr. Terrence C. Stewart.

What you need to know right now is that we’re setting the intercepts of the neurons such that the percentage of state space for which any given neuron is active is chosen from a uniform distribution between 0% and 55%. In other words, a neuron will maximally be active in 55% of state space, no more. This will let us model more nonlinear functions (such as the kinematics of the OSC signal) with fewer neurons. If this description is clear as mud, I really recommend checking out the IPython notebook linked above to get an intuition for what I’m talking about.
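To get a rough feel for why the defaults break down in high dimensions, here is a hypothetical Monte Carlo sketch (not from the REACH code) that estimates the proportion of the hypersphere surface over which a neuron with a given intercept is active. With the intercept held fixed, that proportion collapses as the dimensionality grows, which is exactly what `AreaIntercepts` corrects for.

```python
import numpy as np

def active_proportion(dim, intercept, n_samples=20000, seed=0):
    """Estimate the fraction of the unit hypersphere surface where a
    neuron with encoder e = [1, 0, ..., 0] fires, i.e. where the dot
    product of a point with e exceeds the intercept."""
    rng = np.random.RandomState(seed)
    # sample uniformly from the hypersphere surface
    x = rng.randn(n_samples, dim)
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    return np.mean(x[:, 0] > intercept)

p_low_dim = active_proportion(dim=2, intercept=0.3)
p_high_dim = active_proportion(dim=32, intercept=0.3)
```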

**Scaling the input signal**

The other parameter we set on the M1 ensemble is the radius. The radius scales the range of input values that the ensemble can represent, which by default is anything inside the unit hypersphere (i.e. a hypersphere with `radius=1`). If the radius is left at this default value, the neural activity will saturate for vectors with magnitude greater than 1, leading to inaccurate vector representation and function approximation for those inputs. For lower dimensions this isn’t a terrible problem, but as the dimensionality of the state space you’re representing grows, it becomes more common for input vectors to have a norm greater than 1. Typically, we’d like to be able to, at a minimum, represent vectors where every element can be anywhere between -1 and 1. To do this, all we have to do is set the radius to the norm of the vector of size `dim` with all elements set to one, which is `np.sqrt(dim)`.
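As a quick check of that claim:

```python
import numpy as np

dim = 4
# the vector of size dim with every element at 1 (or -1) is the worst
# case we want to represent without saturating; its norm is sqrt(dim)
worst_case = np.linalg.norm(np.ones(dim))
radius = np.sqrt(dim)
```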

Now that we’re able to represent vectors where the input values are all between -1 and 1, the last part of this sub-network is scaling the input into that range. We use two scaling functions, `scale_down` and `scale_up`. The `scale_down` function subtracts a mean value and scales the input signal to be between -1 and 1. The `scale_up` function reverts the vector back to its original values so that calculations can be carried out normally on the decoding. In choosing mean and scaling values, there are two ways we can set these functions up:

- Set them generally, based on the upper and lower bounds of the input signal. For M1, the input is \([\mathbf{q}, \mathbf{u}_\mathbf{x}]\), where \(\mathbf{u}_\mathbf{x}\) is the control signal in end-effector space. We know that the joint angles are always in the range 0 to \(2\pi\) (because that’s how the arm simulation is programmed), so we can set the `means` and `scales` to be \(\pi\) for \(\mathbf{q}\). For \(\mathbf{u}_\mathbf{x}\) a mean of zero is reasonable, and we can choose (arbitrarily, empirically, or analytically) the largest task space control signal we want to represent accurately.
- Or, if we know the model will be performing a specific task, we can look at the range of input values encountered during that task and set the `means` and `scales` terms appropriately. For the task of reaching in a straight line, the arm moves through a very limited part of state space, so we can tune these parameters to be very specific: `means=[0.6, 2.2, 0, 0]`, `scales=[.25, .25, .25, .25]`

The benefit of the second method, although one can argue it’s parameter tuning and makes things less biologically plausible, is that it lets us run simulations with fewer neurons. The first method works for all of state space, given enough neurons, but seeing as we don’t always want to be simulating 100k+ neurons we’re using the second method here. By tuning the scaling functions more specifically we’re able to run our model using 1k neurons (and could probably get away with fewer). It’s important to keep in mind though that if the arm moves outside the expected range the control will become unstable.
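For reference, the subtract-mean, divide-by-scale pattern described above is simple to sketch. This is one plausible implementation of `generate_scaling_functions` (the real one lives in the repo; note that `scale_down` takes a `(t, x)` signature because it is used directly as a Nengo Node output):

```python
import numpy as np

def generate_scaling_functions(means, scales):
    """Hypothetical sketch: scale_down maps raw input into roughly
    [-1, 1]; scale_up inverts it so the decoded function can operate
    on the original values."""
    scale_down = lambda t, x: (np.asarray(x) - means) / scales
    scale_up = lambda x: np.asarray(x) * scales + means
    return scale_down, scale_up

means = np.array([0.6, 2.2, 0.0, 0.0])
scales = np.array([.25, .25, .25, .25])
scale_down, scale_up = generate_scaling_functions(means, scales)

raw = np.array([0.7, 2.0, 0.1, -0.1])
scaled = scale_down(0.0, raw)   # into ~[-1, 1]
roundtrip = scale_up(scaled)    # back to the original values
```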

**Implementing CB – the dynamics of operational space control**

The cerebellum (CB) sub-network has two components to it: dynamics compensation and dynamics adaptation. First we’ll discuss the dynamics compensation, by which I mean the \(-\mathbf{M}(\mathbf{q})\, k_v\, \dot{\mathbf{q}}\) term from the OSC signal.

Much like calculating the kinematics term of the OSC signal in M1, we can calculate the entire dynamics compensation term using a single ensemble with an appropriate radius, scaled inputs, and well chosen intercepts.

```python
def generate(arm, kv=1, learning_rate=None, learned_weights=None,
             means=None, scales=None):

    dim = arm.DOF * 2
    means = np.zeros(dim) if means is None else means
    scales = np.ones(dim) if scales is None else scales
    scale_down, scale_up = generate_scaling_functions(
        np.asarray(means), np.asarray(scales))

    net = nengo.Network('CB')
    with net:
        # create / connect up CB --------------------------------
        net.CB = nengo.Ensemble(
            n_neurons=1000, dimensions=dim,
            radius=np.sqrt(dim),
            intercepts=AreaIntercepts(
                dimensions=dim,
                base=nengo.dists.Uniform(-1, .1)))

        # expecting input in form [q, dq, u]
        net.input = nengo.Node(output=scale_down,
                               size_in=dim+arm.DOF+2)
        cb_input = nengo.Node(size_in=dim, label='CB input')
        nengo.Connection(net.input[:dim], cb_input)

        # output is [-Mdq, u_adapt]
        net.output = nengo.Node(size_in=arm.DOF*2)

        def CB_func(x):
            """ calculate the dynamic component of OSC signal """
            x = scale_up(x)
            q = x[:arm.DOF]
            dq = x[arm.DOF:arm.DOF*2]

            # calculate inertia matrix
            M = arm.M(q=q)
            return -np.dot(M, kv * dq)

        # connect up the input and output
        nengo.Connection(net.input[:dim], net.CB)
        nengo.Connection(net.CB, net.output[:arm.DOF],
                         function=CB_func, synapse=None)
```

I don’t think there’s anything noteworthy going on here, most of the relevant details have already been discussed…so we’ll move on to the adaptation!

**Implementing CB – non-linear dynamics adaptation**

The final part of the model is the non-linear dynamics adaptation, modelled as a separate ensemble in the cerebellar sub-network (a separate ensemble so that it’s more modular, the learning connection could also come off of the other CB population). I work through the details and proof of the learning rule in the paper, so I’m not going to discuss that here. But I will restate the learning rule here:

\[ \Delta \mathbf{d} = -\kappa\, \mathbf{a}\, \mathbf{u} \]

where \(\mathbf{d}\) are the neural decoders, \(\kappa\) is the learning rate, \(\mathbf{a}\) is the neural activity of the ensemble, and \(\mathbf{u}\) is the joint space control signal sent to the arm. This is a basic delta learning rule, where the decoders of the active neurons are modified to push the decoded function in a direction that reduces the error.

The adaptive ensemble can be initialized either using saved weights (passed in with the `learned_weights` parameter) or as all zeros. It is important to note that setting the decoders to all zeros does not mean zero neural activity, so learning will not be hindered by this initialization.

```python
        # dynamics adaptation------------------------------------
        if learning_rate is not None:
            net.CB_adapt = nengo.Ensemble(
                n_neurons=1000, dimensions=arm.DOF*2,
                radius=np.sqrt(arm.DOF*2),
                # enforce spiking neurons
                neuron_type=nengo.LIF(),
                intercepts=AreaIntercepts(
                    dimensions=arm.DOF,
                    base=nengo.dists.Uniform(-.5, .2)))

            net.learn_encoders = nengo.Connection(
                net.input[:arm.DOF*2], net.CB_adapt,)

            # if no saved weights were passed in start from zero
            weights = (
                learned_weights if learned_weights is not None
                else np.zeros((arm.DOF, net.CB_adapt.n_neurons)))
            net.learn_conn = nengo.Connection(
                # connect directly to arm so that adaptive signal
                # is not included in the training signal
                net.CB_adapt.neurons, net.output[arm.DOF:],
                transform=weights,
                learning_rule_type=nengo.PES(
                    learning_rate=learning_rate),
                synapse=None)
            nengo.Connection(net.input[dim:dim+2],
                             net.learn_conn.learning_rule,
                             transform=-1, synapse=.01)
    return net
```

We’re able to implement that learning rule using Nengo’s prescribed error-sensitivity (PES) learning on our connection from the adaptive ensemble to the output. With this set up the system will be able to learn to adapt to perturbations that are functions of the input (set here to be \([\mathbf{q}, \dot{\mathbf{q}}]\)).
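Stripped of the neural machinery, the update PES performs each timestep looks like the following delta rule (a plain numpy sketch with made-up numbers; Nengo handles this on the connection itself):

```python
import numpy as np

def pes_update(decoders, activities, error, kappa=0.1):
    """Delta rule sketch: nudge the decoders of the active neurons
    against the error. Shapes: decoders (out_dims, n_neurons),
    activities (n_neurons,), error (out_dims,)."""
    return decoders - kappa * np.outer(error, activities)

decoders = np.zeros((2, 5))                    # start from zero weights
activities = np.array([0.0, 1.0, 2.0, 0.0, 1.0])
error = np.array([1.0, -1.0])                  # the training signal
decoders = pes_update(decoders, activities, error)
decoded = decoders @ activities                # pushed opposite the error
```

Note that only the active neurons (nonzero `activities`) have their decoders changed, which is what localizes the learned adaptation to the visited regions of state space.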

The intercepts in this population are set to values I found worked well for adapting to a few different forces, but it’s definitely a parameter to play with in your own scripts if you’re finding that there’s too much or not enough generalization of the decoded function output across the state space.

One other thing to mention is that we need to have a relay node to amalgamate the control signals output from M1 and the dynamics compensation ensemble in the CB. This signal is used to train the adaptive ensemble, and it’s important that the adaptive ensemble’s output is not included in the training signal, or else the system quickly goes off to positive or negative infinity.

**Implementing S1 – a placeholder**

The last sub-network in the REACH model is a placeholder for a primary sensory cortex (S1) model. This is just a set of ensembles that represent the feedback from the arm and relay it on to the rest of the model.

```python
def generate(arm, direct_mode=False, means=None, scales=None):
    dim = arm.DOF*2 + 2  # represents [q, dq, hand_xy]

    means = np.zeros(dim) if means is None else means
    scales = np.ones(dim) if scales is None else scales
    scale_down, scale_up = generate_scaling_functions(
        np.asarray(means), np.asarray(scales))

    net = nengo.Network('S1')
    with net:
        # create / connect up S1 --------------------------------
        net.S1 = nengo.networks.EnsembleArray(
            n_neurons=50, n_ensembles=dim)

        # expecting input in form [q, x_des]
        net.input = nengo.Node(output=scale_down, size_in=dim)
        net.output = nengo.Node(
            lambda t, x: scale_up(x), size_in=dim)

        # send in system feedback and target information
        # don't account for synapses twice
        nengo.Connection(net.input, net.S1.input, synapse=None)
        nengo.Connection(net.S1.output, net.output, synapse=None)

    return net
```

Since there’s no function that we’re decoding off of the represented variables, we can use separate ensembles to represent each dimension with an `EnsembleArray`. If we were going to decode some function of, for example, `q0` and `dq0`, then we would need an ensemble that represents both variables. But since we’re just decoding out `f(x) = x`, using an `EnsembleArray` is a convenient way to decrease the number of neurons needed to accurately represent the input.

**Creating a model using the framework**

The REACH model has been set up to be as much of a plug and play system as possible. To generate a model you first create the M1, PMC, CB, and S1 networks, and then they’re all hooked up to each other using the `framework.py` file. Here’s an example script that controls the arm to trace a circle:

```python
def generate():
    kp = 200
    kv = np.sqrt(kp) * 1.5

    center = np.array([0, 1.25])
    arm_sim = arm.Arm2Link(dt=1e-3)
    # set the initial position of the arm
    arm_sim.init_q = arm_sim.inv_kinematics(center)
    arm_sim.reset()

    net = nengo.Network(seed=0)
    with net:
        net.dim = arm_sim.DOF
        net.arm_node = arm_sim.create_nengo_node()
        net.error = nengo.Ensemble(1000, 2)
        net.xy = nengo.Node(size_in=2)

        # create an M1 model -------------------------------------
        net.M1 = M1.generate(arm_sim, kp=kp,
                             operational_space=True,
                             inertia_compensation=True,
                             means=[0.6, 2.2, 0, 0],
                             scales=[.5, .5, .25, .25])

        # create an S1 model -------------------------------------
        net.S1 = S1.generate(arm_sim,
                             means=[.6, 2.2, -.5, 0, 0, 1.25],
                             scales=[.5, .5, 1.7, 1.5, .75, .75])

        # subtract current position to get task space direction
        nengo.Connection(net.S1.output[net.dim*2:], net.error,
                         transform=-1)

        # create a trajectory for the hand to follow -------------
        x = np.linspace(0.0, 2.0*np.pi, 100)
        PMC_trajectory = np.vstack([np.cos(x) * .5,
                                    np.sin(x) * .5])
        PMC_trajectory += center[:, None]
        # create / connect up PMC --------------------------------
        net.PMC = PMC.generate(PMC_trajectory, speed=1)
        # send target for calculating control signal
        nengo.Connection(net.PMC.output, net.error)
        # send target (x,y) for plotting
        nengo.Connection(net.PMC.output, net.xy)

        net.CB = CB.generate(arm_sim, kv=kv,
                             means=[0.6, 2.2, -.5, 0],
                             scales=[.5, .5, 1.6, 1.5])

    model = framework.generate(net=net, probes_on=True)
    return model
```

In line 50 you can see the call to the `framework` code, which will hook up the most common connections that don’t vary between the different scripts.

The REACH model has assigned functionality to each area / sub-network, and you can see the expected input / output in the comments at the top of each sub-network file, but the implementations are open. You can create your own M1, PMC, CB, or S1 sub-networks and try them out in the context of a full model that generates high-level movement behaviour.

**Running the model**

To run the model you’ll need Nengo, Nengo GUI, and NengoLib all installed. You can then pull open Nengo GUI and load any of the `a#` scripts. In all of these scripts the S1 model is just an ensemble that represents the output from the `arm_node`. Here’s what each of the scripts does:

- `a01` has a spiking M1 and CB, dynamics adaptation turned off. The model guides the arm in reaching in a straight line to a single target and back.
- `a02` has a spiking M1, PMC, and CB, dynamics adaptation turned off. The PMC generates a path for the hand to follow that traces out a circle.
- `a03` has a spiking M1, PMC, and CB, dynamics adaptation turned off. The PMC generates a path for the joints to follow, which moves the hand in a straight line to a target and back.
- `a04` has a spiking M1 and CB, dynamics adaptation turned off. The model performs the centre-out reaching task, starting at a central point and reaching to 8 points around a circle.
- `a05` has a spiking M1 and CB, and dynamics adaptation turned on. The model performs the centre-out reaching task, starting at a central point and reaching to 8 points around a circle. As the model reaches, a forcefield based on the joint velocities pushes the arm away as it tries to reach the target. After 100-150 seconds of simulation the arm has adapted and learned to reach in a straight line again.

Here’s what it looks like when you pull open `a02` in Nengo GUI:

I’m not going to win any awards for arm animation, but! It’s still a useful visualization, and if anyone proficient in javascript wants to improve it, please do! You can see the network architecture in the top left, the spikes generated by M1 and CB to the right of that, the arm in the bottom left, and the path traced out by the hand just to the right of that. On the top right you can see the `a02` script code, and below that the Nengo console.

**Conclusions**

One of the most immediate extensions (aside from any sort of model of S1) that comes to mind here is implementing a more detailed cerebellar model, as there are several anatomically detailed models which have the same supervised learner functionality (for example, Yamazaki and Nagao, 2012).

Hopefully people find this post and the code useful for their own work, or at least interesting! In the ideal scenario this would be a community project, where researchers add models of different brain areas and we end up with a large library of implementations to build larger models with in a Mr. Potato Head kind of fashion.

You can find all of the code up on my GitHub. And again, all of this code should have been publicly available along with the publication. Hopefully the code proves useful! If you have any questions about it please don’t hesitate to make a comment here or contact me through email, and I’ll get back to you as soon as I can.

So here’s code for getting the above plot, with an option for solid or dashed lines. The sky is the limit!

```python
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from matplotlib.colors import colorConverter as cc
import numpy as np


def plot_mean_and_CI(mean, lb, ub, color_mean=None, color_shading=None):
    # plot the shaded range of the confidence intervals
    plt.fill_between(range(mean.shape[0]), ub, lb,
                     color=color_shading, alpha=.5)
    # plot the mean on top
    plt.plot(mean, color_mean)


# generate 3 sets of random means and confidence intervals to plot
mean0 = np.random.random(50)
ub0 = mean0 + np.random.random(50) + .5
lb0 = mean0 - np.random.random(50) - .5

mean1 = np.random.random(50) + 2
ub1 = mean1 + np.random.random(50) + .5
lb1 = mean1 - np.random.random(50) - .5

mean2 = np.random.random(50) - 1
ub2 = mean2 + np.random.random(50) + .5
lb2 = mean2 - np.random.random(50) - .5

# plot the data
fig = plt.figure(1, figsize=(7, 2.5))
plot_mean_and_CI(mean0, ub0, lb0, color_mean='k', color_shading='k')
plot_mean_and_CI(mean1, ub1, lb1, color_mean='b', color_shading='b')
plot_mean_and_CI(mean2, ub2, lb2, color_mean='g--', color_shading='g')


class LegendObject(object):
    def __init__(self, facecolor='red', edgecolor='white', dashed=False):
        self.facecolor = facecolor
        self.edgecolor = edgecolor
        self.dashed = dashed

    def legend_artist(self, legend, orig_handle, fontsize, handlebox):
        x0, y0 = handlebox.xdescent, handlebox.ydescent
        width, height = handlebox.width, handlebox.height
        patch = mpatches.Rectangle(
            # create a rectangle that is filled with color
            [x0, y0], width, height,
            facecolor=self.facecolor,
            # and whose edges are the faded color
            edgecolor=self.edgecolor, lw=3)
        handlebox.add_artist(patch)

        # if we're creating the legend for a dashed line,
        # manually add the dash in to our rectangle
        if self.dashed:
            patch1 = mpatches.Rectangle(
                [x0 + 2*width/5, y0], width/5, height,
                facecolor=self.edgecolor,
                transform=handlebox.get_transform())
            handlebox.add_artist(patch1)

        return patch


bg = np.array([1, 1, 1])  # background of the legend is white
colors = ['black', 'blue', 'green']
# with alpha = .5, the faded color is the average of the background and color
colors_faded = [(np.array(cc.to_rgb(color)) + bg) / 2.0
                for color in colors]

plt.legend([0, 1, 2], ['Data 0', 'Data 1', 'Data 2'],
           handler_map={
               0: LegendObject(colors[0], colors_faded[0]),
               1: LegendObject(colors[1], colors_faded[1]),
               2: LegendObject(colors[2], colors_faded[2], dashed=True),
           })

plt.title('Example mean and confidence interval plot')
plt.tight_layout()
plt.grid()
plt.show()
```

Side note, really enjoying the default formatting in Matplotlib 2+.

We’ve been working with Kinova’s Jaco arm with joint torque sensors for the last year or so as part of our research at Applied Brain Research, and we put together a fun adaptive control demo and got to show it to Justin Trudeau. As you might have guessed based on previous posts, the demo used force control. Force control is available on the Jaco, but the API that comes with the arm has much too slow an update time for practical use in our application (around 100Hz, if I recall correctly).

So part of the work I did with Pawel Jaworski over the last year was to write an interface for force control to the Jaco arm that had a faster control loop. Using Kinova’s low level API, we managed to get things going at about 250Hz, which was sufficient for our purposes. In the hopes of saving other people the trouble of having to redo all this work to begin to be able to play around with force control on the Kinova, we’ve made the repo public and free for non-commercial use. It’s by no means fully optimized, but it is well tested and hopefully will be found useful!

The interface was designed to plug into our ABR Control repo, so you’ll also need that installed to get things working. Once both repos are installed, you can either use the controllers in the ABR Control repo or your own. The interface has a few options, which are shown in the following demo script:

```python
import numpy as np

import abr_jaco2
from abr_control.controllers import OSC

robot_config = abr_jaco2.Config()
interface = abr_jaco2.Interface(robot_config)
ctrlr = OSC(robot_config)

# instantiate things to avoid creating 200ms delay in main loop
zeros = np.zeros(robot_config.N_LINKS)
ctrlr.generate(q=zeros, dq=zeros, target=np.zeros(3))
# run once outside main loop as well, returns the cartesian
# coordinates of the end effector
robot_config.Tx('EE', q=zeros)

interface.connect()
interface.init_position_mode()
interface.send_target_angles(robot_config.INIT_TORQUE_POSITION)

target_xyz = [.57, .03, .87]  # (x, y, z) target (metres)
interface.init_force_mode()

while 1:
    # returns a dictionary with q, dq
    feedback = interface.get_feedback()
    # ee position
    xyz = robot_config.Tx('EE', q=feedback['q'])
    u = ctrlr.generate(feedback['q'], feedback['dq'], target_xyz)
    interface.send_forces(u, dtype='float32')

    error = np.sqrt(np.sum((xyz - target_xyz)**2))
    if error < 0.02:
        break

# switch back to position mode to move home and disconnect
interface.init_position_mode()
interface.send_target_angles(robot_config.INIT_TORQUE_POSITION)
interface.disconnect()
```

You can see you have the option for position control, but you can also initiate torque control mode and then start sending forces to the arm motors. To get a full feeling of what is available, we’ve got a bunch of example scripts that show off more of the functionality.

Here are some gifs featuring Pawel showing the arm operating under force control. The first just shows compliance of normal operational space control (on the left) and an adaptation example (on the right). In both cases here the arm is moving to and trying to maintain a target location, and Pawel is pushing it away.

You can see that in the adaptive example the arm starts to compensate for the push, and then when Pawel lets go of the arm it overshoots the target because it’s compensating for a force that no longer exists.

So it’s our hope that this will be a useful tool for those with a Kinova Jaco arm with torque sensors exploring force control. If you end up using the library and come across places for improvement (there are many), contributions are very appreciated!

Also a big shout out to the Kinova support team that provided many hours of support during development! It’s an unusual use of the arm, and their engineers and support staff were great in getting back to us quickly and with useful advice and insights.

Last August I started working on a new version of my old control repo with a resolve to make something less hacky, as part of the work for Applied Brain Research, Inc, a startup that I co-founded with others from my lab after most of my cohort graduated. Together with Pawel Jaworski, who comprises the other half of ABR’s motor team, we’ve been building up a library of useful functions for modeling, simulating, interfacing with, and controlling robot arms.

Today we’re making the repository public, under the same free-for-non-commercial-use license that we’ve released our Nengo neural modelling software under. You can find it here: ABR_Control

It’s all Python 3, and here’s an overview of some of the features:

- Automates generation of functions for computing the Jacobians, joint space and task space inertia matrices, centrifugal and Coriolis effects, and Jacobian derivative, provided each link’s inertia matrix and the transformation matrices
- Option to compile these functions to speedy Cython implementations
- Operational space, joint space, floating, and sliding controllers provided with PyGame and VREP example scripts
- Obstacle avoidance and dynamics adaptation add-ons with PyGame example scripts
- Interfaces with VREP
- Configuration files for one, two, and three link arms, as well as the UR5 and Jaco2 arms in VREP
- Provides Python simulations of two and three link arms, with PyGame visualization
- Path planning using first and second order filtering of the target, with example scripts
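The path planning in the last bullet boils down to low-pass filtering the raw target so the controller chases a smoothly moving goal rather than a step change. A first-order version might look like this (a hypothetical sketch, not the repo's exact implementation):

```python
import numpy as np

def filter_step(target, filtered, tau=0.05, dt=0.001):
    """One step of first-order target filtering: move the filtered
    target a fraction (dt / tau) of the way toward the raw target."""
    return filtered + (target - filtered) * dt / tau

raw_target = np.array([1.0, 1.0])
filtered = np.zeros(2)
for _ in range(1000):  # simulate one second at dt=0.001
    filtered = filter_step(raw_target, filtered)
```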

**Structure**

The ABR Control library is divided into three sections:

- Arm descriptions (and simulations)
- Robotic system interfaces
- Controllers

The big goal was to make all of these interchangeable, so that to run any permutation of them you just have to change which arm / interface / controller you’re importing.

To support a new arm, the user only needs to create a configuration file specifying the transforms and inertia matrices. Code for calculating the necessary functions of the arm will be symbolically derived using SymPy, and compiled to C using Cython for efficient run-time execution.
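As an illustration of the kind of symbolic derivation being automated (a standalone toy, not the repo's actual code), here's SymPy deriving the Jacobian of a planar 2-link arm with unit link lengths and compiling it to a numeric function:

```python
import sympy as sp

# forward kinematics of a planar 2-link arm (link lengths = 1)
q1, q2 = sp.symbols('q1 q2')
x = sp.cos(q1) + sp.cos(q1 + q2)  # end-effector x position
y = sp.sin(q1) + sp.sin(q1 + q2)  # end-effector y position

# symbolically differentiate to get the Jacobian
J = sp.Matrix([x, y]).jacobian([q1, q2])

# lambdify compiles the symbolic matrix into a fast numeric function
J_func = sp.lambdify((q1, q2), J)
J_at_zero = J_func(0.0, 0.0)  # Jacobian with the arm fully extended
```

The library goes one step further and compiles these generated functions to Cython for run-time speed.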

Interfaces provide `send_forces` and `send_target_angles` functions, to apply torques and put the arm in a target state, as well as a `get_feedback` function, which returns a dictionary of information about the current state of the arm (joint angles and velocities at a minimum).

Controllers provide a `generate` function, which takes in current system state information and a target, and returns a set of joint torques to apply to the robot arm.
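A minimal controller honouring that interface could be as simple as the following toy joint-space PD controller (a hypothetical sketch with simplified argument names, not a class from the repo):

```python
import numpy as np

class JointSpacePD:
    """Toy controller exposing a `generate` interface: PD control
    in joint space."""
    def __init__(self, kp=10.0, kv=3.0):
        self.kp = kp  # position gain
        self.kv = kv  # velocity (damping) gain

    def generate(self, q, dq, target):
        # joint torques driving q toward target while damping dq
        return (self.kp * (np.asarray(target) - np.asarray(q))
                - self.kv * np.asarray(dq))

ctrlr = JointSpacePD()
u = ctrlr.generate(q=[0.0, 0.0], dq=[0.0, 0.0], target=[1.0, -1.0])
```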

**VREP example**

The easiest way to show it is with some code examples. So, once you’ve cloned and installed the repo, you can open up VREP and the jaco2.ttt model in the `abr_control/arms/jaco2` folder, and to control it using an operational space controller you would run the following:

```python
import numpy as np

from abr_control.arms import jaco2 as arm
from abr_control.controllers import OSC
from abr_control.interfaces import VREP

# initialize our robot config for the jaco2
robot_config = arm.Config(use_cython=True, hand_attached=True)
# instantiate controller
ctrlr = OSC(robot_config, kp=200, vmax=0.5)
# create our VREP interface
interface = VREP(robot_config, dt=.001)
interface.connect()

target_xyz = np.array([0.2, 0.2, 0.2])
# set the target object's position in VREP
interface.set_xyz(name='target', xyz=target_xyz)

count = 0.0
while count < 1500:  # run for 1.5 simulated seconds
    # get joint angle and velocity feedback
    feedback = interface.get_feedback()
    # calculate the control signal
    u = ctrlr.generate(
        q=feedback['q'],
        dq=feedback['dq'],
        target_pos=target_xyz)
    # send forces into VREP, step the sim forward
    interface.send_forces(u)
    count += 1

interface.disconnect()
```

This is a minimal example of the `examples/VREP/reaching.py` code. To run it with a different arm, you can just change the `from abr_control.arms import jaco2 as arm` line. The repo comes with configuration files for the `UR5` and a `onelink` VREP arm model as well.

**PyGame example**

I’ve also found the PyGame simulations of the 2 and 3 link arms very helpful for quickly testing new controllers and code, as an easy low-overhead proof of concept sandbox. To run the `threelink` arm (which runs fine in Linux and Windows, but I’ve heard has issues on Mac OS) with the operational space controller, you can run this script:

```python
import numpy as np

from abr_control.arms import threelink as arm
from abr_control.interfaces import PyGame
from abr_control.controllers import OSC

# initialize our robot config
robot_config = arm.Config(use_cython=True)
# create our arm simulation
arm_sim = arm.ArmSim(robot_config)

# create an operational space controller
ctrlr = OSC(robot_config, kp=300, vmax=100,
            use_dJ=False, use_C=True)

def on_click(self, mouse_x, mouse_y):
    self.target[0] = self.mouse_x
    self.target[1] = self.mouse_y

# create our interface
interface = PyGame(robot_config, arm_sim, dt=.001, on_click=on_click)
interface.connect()

# create a target
feedback = interface.get_feedback()
target_xyz = robot_config.Tx('EE', feedback['q'])
interface.set_target(target_xyz)

try:
    while 1:
        # get arm feedback
        feedback = interface.get_feedback()
        hand_xyz = robot_config.Tx('EE', feedback['q'])

        # generate an operational space control signal
        u = ctrlr.generate(
            q=feedback['q'],
            dq=feedback['dq'],
            target_pos=target_xyz)

        new_target = interface.get_mousexy()
        if new_target is not None:
            target_xyz[0:2] = new_target
        interface.set_target(target_xyz)

        # apply the control signal, step the sim forward
        interface.send_forces(u)

finally:
    # stop and reset the simulation
    interface.disconnect()
```

The extra bits of code just set up a hook so that when you click on the PyGame display somewhere the target moves to that point.

So! Hopefully some people find this useful for their research! It should be as easy to set up as cloning the repo, installing the requirements and running the setup file, and then trying out the examples.

If you find a bug please file an issue! If you find a way to improve it please do so and make a PR! And if you’d like to use anything in the repo commercially, please contact me.

Note: I’m only going to be dealing with revolute joints in this post. Linear joints are easy (just put the joint offset in the translation part of the transform matrix) and spherical joints are ~~horrible death~~ beyond the scope of this post.

First thing’s first, quick recap of what transform matrices actually are. A transform matrix changes the coordinate system (reference frame) that a point is defined in. We’re using the notation $T_1^0$ to denote a transformation matrix that transforms a point from reference frame 1 to reference frame 0. To calculate this matrix, we describe the transformation from reference frame 0 to reference frame 1.

To make things easier, we’re going to break up each transform matrix into two parts. Unfortunately I haven’t found a great way to denote these two matrices, so we’re going to have to use additional subscripts:

- $T_{joint}$: accounting for the joint rotation, and
- $T_{static}$: accounting for static translations and rotations.

So the $T_{joint}$ matrix accounts for transformations that involve joint angles, and the $T_{static}$ matrix accounts for all the static transformations between reference frames.

**Step 1: Account for the joint rotation**

When calculating the transform matrix from a joint to a link’s centre-of-mass (COM), note that the reference frame of the link’s COM rotates along with the angle of the joint. Conversely, the joint’s reference frame does not rotate with the joint angle. Look at this picture!

In these pictures, we’re looking at the transformation from joint i’s reference frame to the COM for link i. The angle $q_i$ denotes the joint angle of joint i. In the first image (on the left) the joint angle is almost 90 degrees, and on the right it’s closer to 45 degrees. Keeping in mind that the reference frame for COM i changes with the joint angle, while the reference frame for joint i *does not*, helps make deriving the transform matrices much easier.

So, to actually account for joint rotation, all you have to do is determine which axis the joint is rotating around and create the appropriate rotation matrix inside the transform.

For rotations around the x axis:

$$T_{joint,x}(q_i) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos q_i & -\sin q_i & 0 \\ 0 & \sin q_i & \cos q_i & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

For rotations around the y axis:

$$T_{joint,y}(q_i) = \begin{bmatrix} \cos q_i & 0 & \sin q_i & 0 \\ 0 & 1 & 0 & 0 \\ -\sin q_i & 0 & \cos q_i & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

For rotations around the z axis:

$$T_{joint,z}(q_i) = \begin{bmatrix} \cos q_i & -\sin q_i & 0 & 0 \\ \sin q_i & \cos q_i & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

And there you go! Easy. The $T_{joint}$ matrix should be one of the above unless something fairly weird is going on. For the arm in the diagram above, you just identify the axis the joint is rotating around and use the corresponding matrix.
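As a quick sketch of what these look like in code (a minimal NumPy version; the function name and interface here are just for illustration):

```python
import numpy as np

def joint_rotation_transform(axis, q):
    """Build the 4x4 transform for a joint rotation of angle q (radians)
    around the given axis ('x', 'y', or 'z')."""
    c, s = np.cos(q), np.sin(q)
    T = np.eye(4)
    if axis == 'x':
        T[:3, :3] = [[1, 0, 0], [0, c, -s], [0, s, c]]
    elif axis == 'y':
        T[:3, :3] = [[c, 0, s], [0, 1, 0], [-s, 0, c]]
    elif axis == 'z':
        T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    else:
        raise ValueError("axis must be 'x', 'y', or 'z'")
    return T
```

For example, a 90 degree rotation around z maps the point (1, 0, 0) to (0, 1, 0).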

**Step 2: Account for translations and any static axes rotations**

Once the joint rotation is accounted for, translations become much simpler to deal with: Just put joint i at 0 degrees and the offset from the origin of reference frame i to i+1 is your translation.

So for the above example, this is what the system looks like when joint i is at 0 degrees:

where I’ve also added labels to the axes for clarity. Say that the COM sits at $(x, y, z)$ relative to the joint in this configuration; those are the values you set the translation part of $T_{static}$ to.

Also we need to note that there is a rotation in the axes between the reference frames (i.e. the axes do not point in the same direction for both axes sets). Here the rotation is 90 degrees, so the rotation part of the transformation matrix should be a 90 degree rotation.

Thus, the transformation matrix $T_{static}$ for our static translations and rotations combines this 90 degree rotation with the translation to the COM.

**Step 3: Putting them together**

This is pretty easy: to compute the full transform from joint i to COM i you just string together the two matrices derived above:

$$T = T_{joint} \, T_{static}$$

where I’m using simplified notation on the left-hand side because we don’t need the full notation for clarity in this simple example.

So when you’re transforming a point, you left multiply by $T$:

$$\mathbf{x}^{joint_i} = T \, \mathbf{x}^{COM_i}$$

which is not great notation, but hopefully conveys the idea. To get our point into the reference frame of joint i, we left multiply the point, as defined in the reference frame of COM i, by $T$.
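To make the bookkeeping concrete, here’s a small NumPy sketch (with made-up numbers) that builds a joint rotation around z, a static translation, strings them together, and uses the result to move the COM origin into the joint’s reference frame:

```python
import numpy as np

q = np.pi / 2  # example joint angle

# T_joint: rotation of q around the z axis
c, s = np.cos(q), np.sin(q)
T_joint = np.array([[c, -s, 0, 0],
                    [s,  c, 0, 0],
                    [0,  0, 1, 0],
                    [0,  0, 0, 1]])

# T_static: translate 0.2 along x (no axes rotation in this example)
T_static = np.eye(4)
T_static[0, 3] = 0.2

# the full transform
T = T_joint @ T_static

# the origin of the COM reference frame, in homogeneous coordinates
point_com = np.array([0.0, 0.0, 0.0, 1.0])

# left multiply to express the point in the joint's reference frame
point_joint = T @ point_com
```

With the joint at 90 degrees, the COM that sits 0.2 out along x ends up at (0, 0.2, 0) in the joint’s frame.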

So there you go! That’s the process for a single joint to COM transform. For transforms from COM to joints it’s even easier because you only need to account for the static offsets and axes rotations.

**Examples**

It’s always nice to have examples, so let’s work through deriving the transform matrices for the Jaco2 arm as it’s set up in VREP. The process is pretty straightforward, with really the only tricky bit being converting from the Euler angles that VREP gives you to a rotation matrix that we can use in our transform.

But first thing’s first, load up VREP and drop in a Jaco2 model.

NOTE: I’ve renamed the joints and links to start at 0, to make indexing in / interfacing with VREP easier.

We’re going to generate the transforms for the whole arm piece by piece, sequentially: origin -> link 0 COM, link 0 COM -> joint 0, joint 0 -> link 1 COM, etc. I’m going to use the notation $l_i$ and $j_i$ to denote link i COM and joint i, respectively, in the transform sub- and superscripts.

The first transform we want then is for origin -> link 0 COM. There’s no joint rotation that we need to account for in this transform, so we only have to look for static translation and rotation. First click on link 0, and then

- click the ‘Object/item shift’ button on the toolbar,
- select the ‘Position’ tab, and
- select the ‘Parent frame’ radio button,

and it will provide you with the translation of link 0’s COM relative to its parent frame, as shown below:

The values shown there give us the translation part of our transformation.

The rotation part we can see by

- clicking the ‘Object/item rotate’ button on the toolbar,
- selecting the ‘Orientation’ tab, and
- selecting the ‘Parent frame’ radio button.

Here, it’s not quite as straightforward to use this info to fill in our transform matrix. So, if we pull up the VREP description of their Euler angles $(\alpha, \beta, \gamma)$, we see that to generate the rotation matrix from them you perform:

$$R = R_x(\alpha) \, R_y(\beta) \, R_z(\gamma)$$

where $R_a$ is a rotation matrix around axis $a$ (as described above). For the angles shown here this works out to no rotation, as you may have guessed. So then our first transform is just the translation found above.
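A quick sketch of that conversion in NumPy (assuming the $R = R_x(\alpha)\,R_y(\beta)\,R_z(\gamma)$ convention above, with angles already converted to radians):

```python
import numpy as np

def euler_to_rotation(alpha, beta, gamma):
    """VREP-style Euler angles to a rotation matrix: R = Rx(a) Ry(b) Rz(g)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz
```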

For the second transform matrix, from link 0 to joint 0, again we only have to account for static translations and rotations. Pulling up the translation information:

we can read off the translation values.

And this time, for our rotation matrix, there’s a flip in the axes so we have something more interesting than no rotation:

we can read off the Euler angles. This is also a good time to note that the angles are provided in degrees, so let’s convert those over to radians. Now, calculating our rotation matrix:

which then gives us the transform matrix

And the last thing I’ll show here is accounting for the joint rotation. In the transform from joint 0 to link 1 we have to account for the joint rotation, so we’ll break it down into two matrices as described above. To generate the first one, all we need to know is which axis the joint rotates around. In VREP, this is almost always the z axis, but it’s good to double check in case someone has built a weird model. To check, one easy way is to make the joint visible, and the surrounding arm parts invisible:

You can do this by

- double clicking on the joint in the scene hierarchy to get to the Scene Object Properties window,
- selecting the ‘Common’ tab,
- selecting or deselecting check boxes from the ‘Visibility’ box.

As you can see in the image, this joint is indeed rotating around the z axis, so the first part of our transformation from joint 0 to link 1 is the z axis rotation matrix from earlier.

The second matrix can be generated as previously described, so I’ll leave that, and the rest of the arm, to you!

If you’d like to see a full arm example, you can check out the transforms for the UR5 model in VREP up on my GitHub.

Additionally, this is applicable to simulating these models on digital systems, which, probably, most of you are. If you’re not, however! Then use standard NEF methods.

And then last note before starting, these methods are most relevant for systems with fast dynamics (relative to simulation time). If your system dynamics are pretty slow, you can likely get away with the continuous time solution if you resist change and learning. And we’ll see this in the example point attractor system at the end of the post! But even for slowly evolving systems, I would still recommend at least skipping to the end and seeing how to use the library shortcuts when coding your own models. The example code is also all up on my GitHub.

**NEF modeling with continuous lowpass filter dynamics**

Basic state space equations for linear time-invariant (LTI) systems (i.e. the dynamics can be captured with matrices, and those matrices don’t change over time) are:

$$\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + B\mathbf{u}(t)$$

$$\mathbf{y}(t) = C\mathbf{x}(t) + D\mathbf{u}(t)$$

where

- $\mathbf{x}$ is the system state,
- $\mathbf{y}$ is the system output,
- $\mathbf{u}$ is the system input,
- $A$ is called the state matrix,
- $B$ is called the input matrix,
- $C$ is called the output matrix, and
- $D$ is called the feedthrough matrix,

and the system diagram looks like this:

and the transfer function, which is written in Laplace space and captures the system output over system input, for the system is

$$F(s) = C\left(sI - A\right)^{-1}B + D$$

where $s$ is the Laplace variable.

Now, because it’s a neural system we don’t have a perfect integrator $\frac{1}{s}$ in the middle; we instead have a synaptic filter, $h(s)$, giving:

So our goal is: given some synaptic filter $h(s)$, we want to generate some modified transfer function, $F'$, such that $F'$ using the synaptic filter has the same dynamics as our desired system, $F$, using a perfect integrator. In other words, find an $F'$ such that

$$F'\left(h(s)\right) = F\left(\frac{1}{s}\right).$$

Alrighty. Let’s do that.

The transfer function for our neural system is the same as above, but with the integrator $\frac{1}{s}$ replaced by the synaptic filter $h(s)$.

The effect of the synapse is well captured by a lowpass filter,

$$h(s) = \frac{1}{\tau s + 1},$$

so the recurrently connected population is computing

$$\mathbf{x} = h(s)\left(A'\mathbf{x} + B'\mathbf{u}\right).$$

To get this into a form where we can start to modify the system state matrices to compensate for the filter effects, we have to isolate $\mathbf{x}$. To do that, we can do the following basic math jujitsu: multiply through by $\frac{1}{h(s)} = \tau s + 1$ and rearrange, giving

$$\tau s\,\mathbf{x} = (A' - I)\,\mathbf{x} + B'\mathbf{u} \quad \Rightarrow \quad s\,\mathbf{x} = \frac{A' - I}{\tau}\,\mathbf{x} + \frac{B'}{\tau}\,\mathbf{u}.$$

OK. Now, we can make this match the desired dynamics $s\,\mathbf{x} = A\mathbf{x} + B\mathbf{u}$ by substituting for our $A$ and $B$ matrices with

$$A' = \tau A + I, \qquad B' = \tau B,$$

then we get

$$s\,\mathbf{x} = \frac{(\tau A + I) - I}{\tau}\,\mathbf{x} + \frac{\tau B}{\tau}\,\mathbf{u} = A\mathbf{x} + B\mathbf{u},$$

and voila! We have created an $F'$ such that $F'(h(s)) = F(\frac{1}{s})$. Said another way, we have created a system that compensates for the synaptic filtering effects to achieve our desired system dynamics!

So, to compensate for the continuous lowpass filter, we use $A' = \tau A + I$ and $B' = \tau B$ when implementing our model and we’re golden.
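As a quick sanity check in NumPy (with made-up gain values): when the recurrent input passes through the lowpass filter, the effective state derivative is $(-\mathbf{x} + A'\mathbf{x} + B'\mathbf{u})/\tau$, which with the substitutions above reduces exactly to $A\mathbf{x} + B\mathbf{u}$:

```python
import numpy as np

tau = 0.1  # synaptic time constant

# an arbitrary desired linear system
A = np.array([[0.0, 1.0], [-5.0, -2.0]])
B = np.array([[0.0], [5.0]])

# compensate for the continuous lowpass filter
Ap = tau * A + np.eye(2)
Bp = tau * B

# state derivative when (A'x + B'u) is passed through the filter
x = np.array([[0.3], [-0.7]])
u = np.array([[1.0]])
dxdt_filtered = (-x + Ap @ x + Bp @ u) / tau
dxdt_desired = A @ x + B @ u
```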

And so that’s what we’ve been doing for a long time when building our models. Assuming a continuous lowpass filter and going along our merry way. Aaron, however, shrewdly noticed that computers are digital, and thusly that the standard NEF methods are not a fully accurate way of compensating for the filter that is actually being applied in simulation.

To convert our continuous system state equations to discrete state equations we need to make two changes: 1) a notation change to denote that we’re in discrete time, using $\mathbf{x}[k]$ instead of $\mathbf{x}(t)$, and 2) calculating the discrete version of our system, i.e. transforming $(A, B) \rightarrow (A_d, B_d)$.

The first step is easy, the second step more complicated. To discretize the system we’ll use the zero-order hold (ZOH) method (also referred to as discretization assuming zero-order hold).

**Zero-order hold discretization**

Zero-order hold (ZOH) systems simply hold their input over a specified amount of time. The use of ZOH here is that during discretization we assume the input control signal stays constant until the next sample time.

There are good write ups on the derivation of the discretization both on wikipedia and in these course notes from Purdue. I mostly followed the wikipedia derivation, but there were a few steps that get glossed over, so I thought I’d just write it out fully here and hopefully save someone some pain. Also for just a general intro I found these slides from Paul Oh at Drexel University really helpful.

OK. First we’ll solve an LTI system, and then we’ll discretize it.

So, you’ve got yourself a continuous LTI system

$$\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + B\mathbf{u}(t)$$

and you want to solve for $\mathbf{x}(t)$. Rearranging things to put all the $\mathbf{x}$ terms on one side gives

$$\dot{\mathbf{x}}(t) - A\mathbf{x}(t) = B\mathbf{u}(t).$$

Looking through our identity library to find something that might help us here (after a long and grueling search) we come across:

$$\frac{d}{dt} e^{At} = A e^{At} = e^{At} A.$$

We now left multiply our system by $e^{-At}$ (note the negative in the exponent):

$$e^{-At}\dot{\mathbf{x}}(t) - e^{-At}A\mathbf{x}(t) = e^{-At}B\mathbf{u}(t).$$

Looking at this carefully, we identify the left-hand side as the result of a chain rule, so we can rewrite it as

$$\frac{d}{dt}\left(e^{-At}\mathbf{x}(t)\right) = e^{-At}B\mathbf{u}(t).$$

From here we integrate both sides, giving

$$e^{-At}\mathbf{x}(t) - \mathbf{x}(0) = \int_0^t e^{-A\tau'}B\mathbf{u}(\tau')\,d\tau'.$$

To isolate the $\mathbf{x}(t)$ term on the left-hand side, now multiply by $e^{At}$:

$$\mathbf{x}(t) = e^{At}\mathbf{x}(0) + \int_0^t e^{A(t-\tau')}B\mathbf{u}(\tau')\,d\tau'.$$

OK! We solved for $\mathbf{x}(t)$.

To discretize our solution we’re going to assume that we’re sampling the system at even intervals, i.e. each sample is at $kT$ for some time step $T$, and that the input $\mathbf{u}$ is constant between samples (this is where the ZOH comes in). To simplify our notation as we go, we also define

$$\mathbf{x}[k] = \mathbf{x}(kT).$$

So using our new notation, we have

$$\mathbf{x}[k] = e^{AkT}\mathbf{x}(0) + \int_0^{kT} e^{A(kT-\tau')}B\mathbf{u}(\tau')\,d\tau'.$$

Now we want to get things back into the form:

$$\mathbf{x}[k+1] = A_d\,\mathbf{x}[k] + B_d\,\mathbf{u}[k].$$

To start, let’s write out the equation for $\mathbf{x}[k+1]$:

$$\mathbf{x}[k+1] = e^{A(k+1)T}\mathbf{x}(0) + \int_0^{(k+1)T} e^{A((k+1)T-\tau')}B\mathbf{u}(\tau')\,d\tau'.$$

We want to relate $\mathbf{x}[k+1]$ to $\mathbf{x}[k]$. Being incredibly clever, we see that we can left multiply $\mathbf{x}[k]$ by $e^{AT}$, to get

$$e^{AT}\mathbf{x}[k] = e^{A(k+1)T}\mathbf{x}(0) + \int_0^{kT} e^{A((k+1)T-\tau')}B\mathbf{u}(\tau')\,d\tau',$$

and can rearrange to solve for a term we saw in $\mathbf{x}[k+1]$:

$$e^{A(k+1)T}\mathbf{x}(0) = e^{AT}\mathbf{x}[k] - \int_0^{kT} e^{A((k+1)T-\tau')}B\mathbf{u}(\tau')\,d\tau'.$$

Plugging this in, we get

$$\mathbf{x}[k+1] = e^{AT}\mathbf{x}[k] + \int_{kT}^{(k+1)T} e^{A((k+1)T-\tau')}B\mathbf{u}(\tau')\,d\tau'.$$

OK, we’re getting close.

At this point we’ve got things in the right form, but we can still clean up that second term on the right-hand side quite a bit. First, note that using our starting assumption (that $\mathbf{u}$ is constant between samples), we can take $B\mathbf{u}[k]$ outside the integral:

$$\mathbf{x}[k+1] = e^{AT}\mathbf{x}[k] + \left(\int_{kT}^{(k+1)T} e^{A((k+1)T-\tau')}\,d\tau'\right) B\,\mathbf{u}[k].$$

And now we’re going to simplify the integral using variable substitution. Let $v = (k+1)T - \tau'$, which means also that $dv = -d\tau'$. Evaluating the upper and lower bounds of the integral, when $\tau' = (k+1)T$ then $v = 0$ and when $\tau' = kT$ then $v = T$. With this, we can rewrite our equation:

$$\mathbf{x}[k+1] = e^{AT}\mathbf{x}[k] - \left(\int_T^0 e^{Av}\,dv\right) B\,\mathbf{u}[k].$$

The astute will notice our integral integrates from $T$ to $0$ instead of $0$ to $T$. Fortunately for us, we know $\int_T^0 f(v)\,dv = -\int_0^T f(v)\,dv$. We can just swap the bounds and multiply by $-1$, giving:

$$\mathbf{x}[k+1] = e^{AT}\mathbf{x}[k] + \left(\int_0^T e^{Av}\,dv\right) B\,\mathbf{u}[k].$$

And finally, we can evaluate our integral by recalling that $\frac{d}{dv}e^{Av} = A e^{Av}$ and assuming that $A$ is invertible:

$$\mathbf{x}[k+1] = e^{AT}\mathbf{x}[k] + A^{-1}\left(e^{AT} - I\right)B\,\mathbf{u}[k].$$

We did it! The state and input matrices for our digital system are:

$$A_d = e^{AT}, \qquad B_d = A^{-1}\left(e^{AT} - I\right)B.$$

And that’s the hard part of discretization; the rest of the system is easy:

$$\mathbf{y}[k] = C_d\,\mathbf{x}[k] + D_d\,\mathbf{u}[k],$$

where, fortunately for us,

$$C_d = C, \qquad D_d = D.$$

This then gives us our discrete system transfer function:

$$F(z) = C\left(zI - A_d\right)^{-1}B_d + D.$$
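Here’s a minimal implementation of that result (assuming $A$ is invertible; `discretize_zoh` is just an illustrative name, and for real use `scipy.signal.cont2discrete` does the same job):

```python
import numpy as np
from scipy.linalg import expm

def discretize_zoh(A, B, T):
    """Zero-order hold discretization: Ad = e^(AT), Bd = A^-1 (Ad - I) B."""
    A, B = np.asarray(A), np.asarray(B)
    Ad = expm(A * T)
    Bd = np.linalg.inv(A) @ (Ad - np.eye(A.shape[0])) @ B
    return Ad, Bd
```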

**NEF modeling with discrete lowpass filter dynamics**

Now that we know how to discretize our system, we can look at compensating for the lowpass filter dynamics in discrete time. The equation for the discrete time lowpass filter is

$$\mathbf{x}[k+1] = a\,\mathbf{x}[k] + (1 - a)\,\mathbf{u}[k],$$

where

$$a = e^{-dt/\tau}.$$

Plugging that in to the discrete transfer function gets us

$$F'(z) = C\left(\frac{z - a}{1 - a} I - A'\right)^{-1} B' + D,$$

and we see that if we choose

$$A' = \frac{1}{1 - a}\left(A_d - aI\right), \qquad B' = \frac{1}{1 - a}B_d,$$

then we get

$$F'(z) = C\left(zI - A_d\right)^{-1}B_d + D = F(z).$$

And now congratulations are in order. Proper compensation for the discrete lowpass filter dynamics has finally been achieved!
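We can verify this numerically with a small NumPy sketch (the gains here are made up): stepping the compensated system through the discrete lowpass filter should reproduce exactly one step of the desired discrete dynamics:

```python
import numpy as np
from scipy.linalg import expm

dt, tau = 0.001, 0.1
alpha, beta = 100.0, 25.0  # example point attractor gains

# continuous system
A = np.array([[0.0, 1.0], [-alpha * beta, -alpha]])
B = np.array([[0.0], [alpha * beta]])

# ZOH discretization
Ad = expm(A * dt)
Bd = np.linalg.inv(A) @ (Ad - np.eye(2)) @ B

# compensate for the discrete lowpass filter
a = np.exp(-dt / tau)
Ap = (Ad - a * np.eye(2)) / (1 - a)
Bp = Bd / (1 - a)

# one step through the filter: x[k+1] = a x[k] + (1 - a)(A'x[k] + B'u[k])
x = np.array([[0.5], [-0.2]])
u = np.array([[1.0]])
x_filtered = a * x + (1 - a) * (Ap @ x + Bp @ u)
x_desired = Ad @ x + Bd @ u
```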

**Point attractor example**

What difference does this actually make in modelling? Well, everyone likes examples, so let’s have one.

Here are the dynamics for a second-order point attractor system:

$$\ddot{y} = \alpha\left(\beta\left(y^* - y\right) - \dot{y}\right)$$

with $y$, $\dot{y}$, and $\ddot{y}$ being the system position, velocity, and acceleration, respectively, $y^*$ is the target position, and $\alpha$ and $\beta$ are gain values. So the acceleration is just going to be set such that it drives the system towards the target position, and compensates for velocity.

Converting this from a second order system to a first order system we have

$$\begin{bmatrix} \dot{y} \\ \ddot{y} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\alpha\beta & -\alpha \end{bmatrix} \begin{bmatrix} y \\ \dot{y} \end{bmatrix} + \begin{bmatrix} 0 \\ \alpha\beta \end{bmatrix} y^*$$

which we’ll rewrite compactly as

$$\dot{\mathbf{x}} = A\mathbf{x} + B\mathbf{u}.$$

OK, we’ve got our state space equation of the dynamical system we want to implement.

Given a simulation time step $dt$, we’ll first calculate the discrete state matrices:

$$A_d = e^{A\,dt}, \qquad B_d = A^{-1}\left(A_d - I\right)B.$$

Great! Easy. Now we can calculate the state matrices that will compensate for the discrete lowpass filter:

$$A' = \frac{1}{1-a}\left(A_d - aI\right), \qquad B' = \frac{1}{1-a}B_d,$$

where $a = e^{-dt/\tau}$.

Alright! So that’s our system now, a basic point attractor implementation in Nengo 2.3 looks like this:

```python
tau = 0.1  # synaptic time constant

# the A matrix for our point attractor
A = np.array([[0.0, 1.0],
              [-alpha*beta, -alpha]])

# the B matrix for our point attractor
B = np.array([[0.0], [alpha*beta]])

# account for discrete lowpass filter
a = np.exp(-dt/tau)
if analog:
    A = tau * A + np.eye(2)
    B = tau * B
else:
    # discretize
    Ad = expm(A*dt)
    Bd = np.dot(np.linalg.inv(A), np.dot((Ad - np.eye(2)), B))
    A = 1.0 / (1.0 - a) * (Ad - a * np.eye(2))
    B = 1.0 / (1.0 - a) * Bd

net = nengo.Network(label='Point Attractor')
net.config[nengo.Connection].synapse = nengo.Lowpass(tau)

with config, net:
    net.ydy = nengo.Ensemble(
        n_neurons=n_neurons, dimensions=2,
        # set it up so neurons are tuned to one dimension only
        encoders=nengo.dists.Choice([[1, 0], [-1, 0], [0, 1], [0, -1]]))
    # set up Ax part of point attractor
    nengo.Connection(net.ydy, net.ydy, transform=A)

    # hook up input
    net.input = nengo.Node(size_in=1, size_out=1)
    # set up Bu part of point attractor
    nengo.Connection(net.input, net.ydy, transform=B)

    # hook up output
    net.output = nengo.Node(size_in=1, size_out=1)
    # add in forcing function
    nengo.Connection(net.ydy[0], net.output, synapse=None)
```

Note that for calculating `Ad` we’re using `expm`, which is the matrix exponential function from the `scipy.linalg` package. `numpy.exp` does an elementwise `exp`, which is definitely not what we want here, and you will get some confusing bugs if you’re not careful.

Code implementing this, and testing it with some different gain values, is up on my GitHub, and generates the following plots for dt=0.001:

In the above results you can see that when the gains are low, and thus the system dynamics are slower, you can’t really tell a difference between the continuous and discrete filter compensation. But! As you get larger gains and faster dynamics, the resulting effects become much more visible.

If you’re building your own system, then I also recommend using the `ss2sim` function from Aaron’s `nengolib` library. It handles compensation for any synapse, and automatically generates the matrices that account for discrete or continuous implementations. Using the library looks like:

```python
tau = 0.1  # synaptic time constant
synapse = nengo.Lowpass(tau)

# the A matrix for our point attractor
A = np.array([[0.0, 1.0],
              [-alpha*beta, -alpha]])

# the B matrix for our point attractor
B = np.array([[0.0], [alpha*beta]])

from nengolib.synapses import ss2sim

C = np.eye(2)
D = np.zeros((2, 2))

linsys = ss2sim((A, B, C, D), synapse=synapse,
                dt=None if analog else dt)
A = linsys.A
B = linsys.B
```

**Conclusions**

So there you are! Go forward and model free of error introduced by improperly accounting for discrete simulation! If, like me, you’re doing anything with neural modelling and motor control (i.e. systems with very quickly evolving dynamics), then hopefully you’ve found all this work particularly interesting, as I did.

There’s a ton of extensions and different directions that this work can be and has already been taken, with a bunch of really neat systems developed using this more accurate accounting for synaptic filtering as a base. You can read up on this and applications to modelling time delays and time cells and lots lots more up on Aaron’s GitHub, and in his recent papers, which are listed on his lab webpage.

Turns out that I still don’t want to code those equations by hand. There are probably very nice options for generating optimized code using Mathematica or MapleSim or whatever pay software, but SymPy is free and already Python compatible so my goal is to stick with that.

**Options**

I did some internet scouring, and looked at installing various packages to help speed things up, including

- trying out the SymPy `simplify` function,
- trying out SymEngine,
- trying out the SymPy compile-to-Theano function,
- trying out the SymPy `autowrap` function, and
- various combinations of the above.

The SymEngine backend and Theano functions really didn’t give any improvements for the kind of low-dimensional vector calculations performed for control here. Disclaimer, it could be the case that I gave up on these implementations too quickly, but from some basic testing it didn’t seem like they were the way to go for this kind of problem.

So we’re down to the `simplify` function and the compile-to-Cython-backend options. The first thing you’ll notice when using `simplify` is that the generation time for the function can take orders of magnitude longer (up to 12ish hours for the inertia matrices of the Kinova Jaco 2 arm with `simplify`, vs about 1-2 minutes without `simplify`, for example). And then, as a nice kick in the teeth after that, there’s almost no difference between the straight Cython compilation of the non-simplified functions and the simplified functions. Here are some graphs:

So you can see how the addition of `simplify` drastically increases the compile time in the top half of the graphs there. In the lower half, you can see that `simplify` does save some execution time relative to the baseline straight `lambdify` function, but once you start compiling to Cython with `autowrap` the difference is on the order of hundredths to thousandths of milliseconds.

**Results**

So! My recommendation is:

- don’t use `simplify`, and
- do use `autowrap` with the Cython backend.

To compile using the Cython backend:

```python
from sympy.utilities.autowrap import autowrap

function = autowrap(sympy_expression, backend="cython", args=parameters)
```

In the last SymPy post I compared a hard-coded calculation against the SymPy generated function, using the `timeit` function, which runs a given chunk of code 1,000,000 times and tells you how long it took. To save tracking down the results from that post, here are the `timeit` results of a SymPy function generated using just the `lambdify` function vs hard-coding the function in straight NumPy:

And now using the `autowrap` function to compile to Cython code, compared to hard-coding the function in NumPy:

This is a pretty crazy improvement, and means that we can have the best of both worlds. We can declare our transforms in SymPy and automate the calculation of our Jacobians and inertia matrices, and still have the code execute fast enough that we can use it to control real robots.

That said, this is all from my basic experimenting with some different optimisations, which is a far from thorough exploration of the space. If you know of any other ways to speed up execution, please comment below!

You can find the code I used to generate the above plots up on my GitHub.

I was doing a quick lit scan on control methods for avoiding joint collisions with obstacles, and was kind of surprised that there wasn’t much in the realm of recent discussion about it. There is, however, a 1986 paper from Dr. Oussama Khatib titled Real-time obstacle avoidance for manipulators and mobile robots that pretty much solves this problem short of getting into explicit path planning methods. Which could be why there aren’t papers since then about it. All the same, I couldn’t find any implementations of it online, and there are a few tricky parts, so in this post I’m going to work through the implementation.

Note: The implementation that I’ve worked out here uses spheres to represent the obstacles. This works pretty well in general by just making the sphere large enough to cover whatever obstacle you’re avoiding. But if you want more precise control around other shapes, you can check out the appendix of Dr. Khatib’s paper, where he provides the math for cubes and cones as well.

Note: You can find all of the code I use here and everything you need for the VREP implementation up on my GitHub. I’ve precalculated the functions that are required, because generating them is quite computationally intensive. Hopefully the file saving doesn’t get weird between different operating systems, but if it does, try deleting all of the files in the `ur5_config` folder and running the code again. This will regenerate those files (on my laptop this takes ~4 hours, though, so beware).

**The general idea**

Since it’s from Dr. Khatib, you might expect that this approach involves task space. And indeed, your possible suspicions are correct! The algorithm is going to work by identifying forces in Cartesian coordinates that will move any point of the arm that’s too close to an obstacle away from it. The algorithm follows the form:

**Setup**

- Specify obstacle location and size
- Specify a threshold distance to the obstacle

**While running**

- Find the closest point of each arm segment to obstacles
- If within threshold of obstacle, generate force to apply to that point
- Transform this force into joint torques
- Add directly to the outgoing control signal

Something that you might notice is that this is similar to the addition of the null space controller that we’ve seen before in operational space control. There’s a distinct difference here, though: the control signal for avoiding obstacles is added directly to the outgoing control signal, and it’s not filtered (like the null space control signal is), so there’s no guarantee that it won’t affect the movement of the end-effector. In fact, it’s very likely to affect the movement of the end-effector, but that’s also desirable, as not ramming the arm through an obstacle is as important as getting to the target.

OK, let’s walk through these steps one at a time.

**Setup**

I mentioned that we’re going to treat all of our obstacles as spheres. It’s actually not much harder to do these calculations for cubes too, but this is already going to be long enough so I’m only addressing spheres here. This algorithm assumes we have a list of every obstacle’s centre-point and radius.

We want the avoidance response of the system to be a function of the distance to the obstacle, such that the closer the arm is to the obstacle the stronger the response. The function that Dr. Khatib provides is of the following form:

$$\mathbf{F} = \begin{cases} \eta \left( \frac{1}{\rho} - \frac{1}{\rho_0} \right) \frac{1}{\rho^2} \frac{\partial \rho}{\partial \mathbf{x}} & \textrm{if } \rho \leq \rho_0 \\ \mathbf{0} & \textrm{if } \rho > \rho_0 \end{cases}$$

where $\rho$ is the distance to the obstacle, $\rho_0$ is the threshold distance to the obstacle at which point the avoidance function activates, $\frac{\partial \rho}{\partial \mathbf{x}}$ is the partial derivative of the distance to the obstacle with respect to a given point on the arm, and $\eta$ is a gain term.

This function looks complicated, but it’s actually pretty intuitive. The partial derivative term in the function works simply to point in the opposite direction of the obstacle, in Cartesian coordinates, i.e. it tells the system how to get away from the obstacle. The rest of the term is just a gain that starts out at zero when $\rho = \rho_0$, and gets huge as the arm nears the obstacle (as $\rho \rightarrow 0$). Plotting this for some example values of $\eta$ and $\rho_0$ gives us a function that looks like

So you can see that very quickly a very, very strong push away from this obstacle is going to be generated once we enter the threshold distance. But how do we know exactly when we’ve entered the threshold distance?
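As a sketch, that piecewise force function might look like this in NumPy (`eta` and `rho_0` here are just example gain values):

```python
import numpy as np

def avoidance_force(rho, drhodx, eta=1.0, rho_0=0.2):
    """Generate the avoidance force for a point at distance rho from an
    obstacle, where drhodx is the direction away from the obstacle."""
    if rho > rho_0:
        # outside the threshold distance, no force
        return np.zeros_like(drhodx)
    return eta * (1.0 / rho - 1.0 / rho_0) * (1.0 / rho**2) * drhodx
```

The force is zero right at the threshold distance and blows up as rho approaches zero.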

**Find the closest point**

We want to avoid the obstacle with our whole body, but it turns out we can reduce the problem to only worrying about the closest point of each arm segment to the obstacle, and move that one point away from the obstacle if threshold distance is hit.

To find the closest point on a given arm segment to the obstacle we’ll do some pretty neat trig. I’ll post the code for it and then discuss it below. In this snippet, `p1` and `p2` are the beginning and ending locations of the arm segment (which we are assuming is a straight line), and `v` is the center of the obstacle.

```python
# the vector of our line
vec_line = p2 - p1
# the vector from the obstacle to the first line point
vec_ob_line = v - p1
# calculate the projection normalized by length of arm segment
projection = (np.dot(vec_ob_line, vec_line) /
              np.dot(vec_line, vec_line))

if projection < 0:
    # then closest point is the start of the segment
    closest = p1
elif projection > 1:
    # then closest point is the end of the segment
    closest = p2
else:
    closest = p1 + projection * vec_line
```

The first thing we do is find the arm segment line, and then the line from the obstacle center to the start point of the arm segment. Once we have these, we calculate:

$$\textrm{projection} = \frac{\textrm{vec\_ob\_line} \cdot \textrm{vec\_line}}{\textrm{vec\_line} \cdot \textrm{vec\_line}}.$$

Using the geometric definition of the dot product of two vectors, we can rewrite the above as

$$\textrm{projection} = \frac{\|\textrm{vec\_ob\_line}\|}{\|\textrm{vec\_line}\|} \cos\theta,$$

which reads as the magnitude of `vec_ob_line` divided by the magnitude of `vec_line` (I know, these are terrible names, sorry) multiplied by the cosine of the angle between the two vectors. If the cosine of the angle between the vectors is < 0 (so `projection` will also be < 0), then right off the bat we know that the start of the arm segment, `p1`, is the closest point. If the `projection` value is > 1, then we know that 1) the distance from the start of the arm segment to the obstacle is longer than the length of the arm segment, and 2) the angle is such that the end of the arm segment, `p2`, is the closest point to the obstacle.

Finally, in the last case we know that the closest point is somewhere along the arm segment. To find where exactly, we do the following

$$\textrm{closest} = \textrm{p1} + \textrm{projection} \cdot \textrm{vec\_line},$$

which can be rewritten

$$\textrm{closest} = \textrm{p1} + \frac{\textrm{vec\_ob\_line} \cdot \textrm{vec\_line}}{\textrm{vec\_line} \cdot \textrm{vec\_line}}\,\textrm{vec\_line}.$$

I find it more intuitive if we rearrange the second term to be

$$\frac{\textrm{vec\_line}}{\|\textrm{vec\_line}\|} \left( \frac{\textrm{vec\_ob\_line} \cdot \textrm{vec\_line}}{\|\textrm{vec\_line}\|} \right).$$

So then what this is all doing is starting us off at the beginning of the arm segment, `p1`, and to that we add this other fun term. The first part of this fun term provides direction, normalized to magnitude 1. The second part provides magnitude, specifically the exact distance along `vec_line` we need to traverse to reach the point that forms a right angle (giving us the shortest distance) pointing at the obstacle. This handy figure from the Wikipedia page helps illustrate exactly what’s going on with the second part, where B is `vec_line` and A is `vec_ob_line`:

Armed with this information, we understand how to find the closest point of each arm segment to the obstacle, and we are now ready to generate a force to move the arm in the opposite direction!

**Check distance, generate force**

To calculate the distance, all we have to do is calculate the Euclidean distance from the closest point of the arm segment to the center of the sphere, and then subtract out the radius of the sphere:

```python
# calculate distance from obstacle vertex to the closest point
dist = np.sqrt(np.sum((v - closest)**2))
# account for size of obstacle
rho = dist - obstacle[3]
```

Once we have this, we can check it against the threshold and generate the force using the equation we defined above. The one part of that equation that wasn’t specified was exactly what $\frac{\partial \rho}{\partial \mathbf{x}}$ is. Since it’s just the partial derivative of the distance to the obstacle with respect to the closest point, we can calculate it as the normalized difference between the two points:

```python
drhodx = (v - closest) / rho
```

Alright! Now we’ve found the closest point and know the force we want to apply; from here it’s standard operational space procedure.

**Transform the force into torques**

As we all remember, the equation for transforming a control signal from operational space to joint space involves two terms aside from the desired force. Namely, the Jacobian and the operational space inertia matrix:

$$\mathbf{u} = \mathbf{J}^T \mathbf{M}_{\mathbf{x}} \mathbf{F}_{\mathbf{x}},$$

where $\mathbf{J}$ is the Jacobian for the point of interest, $\mathbf{M}_{\mathbf{x}}$ is the operational space inertia matrix for the point of interest, and $\mathbf{F}_{\mathbf{x}}$ is the force we defined above.
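A minimal NumPy sketch of that mapping (the function name is illustrative; here $\mathbf{M}_{\mathbf{x}}$ is computed from the joint space inertia matrix `M` as $(\mathbf{J}\mathbf{M}^{-1}\mathbf{J}^T)^{-1}$, pseudo-inverted in case the point’s Jacobian is singular):

```python
import numpy as np

def force_to_torques(J, M, Fx):
    """Transform an operational space force Fx at a point with Jacobian J
    into joint torques, given the joint space inertia matrix M."""
    Minv = np.linalg.inv(M)
    # operational space inertia matrix for the point of interest
    Mx = np.linalg.pinv(J @ Minv @ J.T)
    # u = J^T Mx Fx
    return J.T @ (Mx @ Fx)
```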

Calculating the Jacobian for an unspecified point

So the first thing we need to calculate is the Jacobian for this point on the arm. There are a bunch of ways you could go about this, but the way I’m going to do it here is by building on the post where I used SymPy to automate the Jacobian derivation. The way we did that was by defining the transforms from the origin reference frame to the first link, from the first link to the second, etc, until we reached the end-effector. Then, whenever we needed a Jacobian we could string together the transforms to get the transform from the origin to that point on the arm, and take the partial derivative with respect to the joints (using SymPy’s derivative method).
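To make the chain-transforms-then-differentiate recipe concrete, here's a toy sketch for a planar 2-link arm (the link lengths and symbol names are made up for illustration; the real implementation builds its transforms from the arm's geometry):

```python
import sympy as sp

q0, q1 = sp.symbols('q0 q1')
L0, L1 = 1.0, 1.0  # hypothetical link lengths

def planar_link_T(q, L):
    # rotate by q about z, then translate L along the rotated x-axis
    return sp.Matrix([
        [sp.cos(q), -sp.sin(q), 0, L * sp.cos(q)],
        [sp.sin(q),  sp.cos(q), 0, L * sp.sin(q)],
        [0, 0, 1, 0],
        [0, 0, 0, 1]])

# string together origin -> link 1 -> link 2
T02 = planar_link_T(q0, L0) * planar_link_T(q1, L1)
# position of link 2's origin in world coordinates
x_world = T02 * sp.Matrix([0, 0, 0, 1])
# the Jacobian is the partial derivative with respect to the joint angles
J = x_world[:3, :].jacobian([q0, q1])
J_func = sp.lambdify((q0, q1), J)
```

At q0 = q1 = 0 the arm lies along the x-axis, so the Jacobian's x-row is zero and its y-row is [L0 + L1, L1].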

As an example, say we wanted to calculate the Jacobian for the third joint. We would first calculate:

T_org3 = T_org1 * T_12 * T_23

where T_ab reads as the transform from reference frame a to reference frame b.

Once we have this transformation matrix, T_org3, we multiply it by the point of interest in reference frame 3, which previously has usually been [0, 0, 0, 1]^T. In other words, usually we're just interested in the origin of reference frame 3, so the Jacobian is just the partial derivative of T_org3 * [0, 0, 0, 1]^T with respect to the joint angles q.

What if we're interested in some non-specified point along link 3, though? Well, using SymPy we make the point of interest [x, y, z, 1]^T instead of [0, 0, 0, 1]^T (recall that the 1 at the end of these vectors is just to make the math work), and make the Jacobian function SymPy generates for us dependent on *both* q and x, rather than just q. In code this looks like:

```python
Torg3 = self._calc_T(name="3")
# transform x into world coordinates
Torg3x = sp.simplify(Torg3 * sp.Matrix(x))
# generate a Jacobian function dependent on both q and x
J3_func = sp.lambdify(q + x, Torg3x.jacobian(q))
```

Now it's possible to calculate the Jacobian for any point along link 3 just by changing the parameters that we pass into `J3_func`! Most excellent.

We are getting closer.

NOTE: This parameterization can significantly increase the build time of the function; it took my laptop about 4 hours. To decrease build time you can try commenting out the `simplify` calls in the code, which might slow down run-time a bit but *significantly* drops the generation time.

Where is the closest point in that link’s reference frame?

A sneaky problem comes up when calculating the closest point of each arm segment to the object: we've calculated the closest point of each arm segment in the origin's frame of reference, but we need it relative to each link's own frame of reference. Fortunately, all we have to do is calculate the inverse transform for the link of interest. For example, the inverse transform of T_org3 transforms a point from the origin's frame of reference to the reference frame of the third joint.

I go over how to calculate the inverse transform at the end of my post on forward transformation matrices, but to save you from having to go back and look through that, here’s the code to do it:

```python
Torg3 = self._calc_T(name="3")
rotation_inv = Torg3[:3, :3].T
translation_inv = -rotation_inv * Torg3[:3, 3]
Torg3_inv = rotation_inv.row_join(translation_inv).col_join(
    sp.Matrix([[0, 0, 0, 1]]))
```

And now to find the closest point in the coordinates of reference frame 3 we simply

```python
x = np.dot(Torg3_inv, np.hstack([closest, [1]]))[:3]
```

This `x` value is what we're going to plug in as a parameter to our `J3_func` above to find the Jacobian for the closest point on link 3.
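As a quick numeric sanity check of the inverse transform, here's a self-contained sketch with a made-up transform (a 90-degree rotation about z plus a translation); transforming the closest point into the link's frame and then mapping it back recovers the original:

```python
import numpy as np

# a hypothetical transform from link 3's frame to the origin's frame:
# a 90-degree rotation about z plus a translation
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
Torg3 = np.array([[c, -s, 0, 1.0],
                  [s,  c, 0, 2.0],
                  [0,  0, 1, 0.5],
                  [0,  0, 0, 1.0]])

# invert it as described: transpose the rotation,
# then rotate and negate the translation
rotation_inv = Torg3[:3, :3].T
translation_inv = -np.dot(rotation_inv, Torg3[:3, 3])
Torg3_inv = np.vstack([np.hstack([rotation_inv, translation_inv[:, None]]),
                       [0, 0, 0, 1]])

# express a world-frame closest point in link 3's frame
closest = np.array([1.0, 3.0, 0.5])
x = np.dot(Torg3_inv, np.hstack([closest, [1]]))[:3]
```

Here `x` comes out as (1, 0, 0), and transforming it back through `Torg3` recovers `closest`.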

Calculate the operational space inertia matrix for the closest point

OK. With the Jacobian for the point of interest we are now able to calculate the operational space inertia matrix. This code I’ve explicitly worked through before, and I’ll show it in the full code below, so I won’t go over it again here.

**The whole implementation**

You can run an example of all this code controlling the UR5 arm to avoid obstacles in VREP using this code up on my GitHub. The specific code added to implement obstacle avoidance looks like this:

```python
# find the closest point of each link to the obstacle
for ii in range(robot_config.num_joints):
    # get the start and end-points of the arm segment
    p1 = robot_config.Tx('joint%i' % ii, q=q)
    if ii == robot_config.num_joints - 1:
        p2 = robot_config.Tx('EE', q=q)
    else:
        p2 = robot_config.Tx('joint%i' % (ii + 1), q=q)

    # calculate minimum distance from arm segment to obstacle
    # the vector of our line
    vec_line = p2 - p1
    # the vector from the obstacle to the first line point
    vec_ob_line = v - p1
    # calculate the projection normalized by length of arm segment
    projection = (np.dot(vec_ob_line, vec_line) /
                  np.sum((vec_line)**2))
    if projection < 0:
        # then closest point is the start of the segment
        closest = p1
    elif projection > 1:
        # then closest point is the end of the segment
        closest = p2
    else:
        closest = p1 + projection * vec_line

    # calculate distance from obstacle vertex to the closest point
    dist = np.sqrt(np.sum((v - closest)**2))
    # account for size of obstacle
    rho = dist - obstacle_radius

    if rho < threshold:
        eta = .02
        drhodx = (v - closest) / rho
        Fpsp = (eta * (1.0/rho - 1.0/threshold) *
                1.0/rho**2 * drhodx)

        # get offset of closest point from link's reference frame
        T_inv = robot_config.T_inv('link%i' % ii, q=q)
        m = np.dot(T_inv, np.hstack([closest, [1]]))[:-1]
        # calculate the Jacobian for this point
        Jpsp = robot_config.J('link%i' % ii, x=m, q=q)[:3]

        # calculate the inertia matrix for the
        # point subjected to the potential space
        Mxpsp_inv = np.dot(Jpsp,
                           np.dot(np.linalg.pinv(Mq), Jpsp.T))
        svd_u, svd_s, svd_v = np.linalg.svd(Mxpsp_inv)
        # cut off singular values that could cause problems
        singularity_thresh = .00025
        for jj in range(len(svd_s)):
            svd_s[jj] = 0 if svd_s[jj] < singularity_thresh else \
                1./float(svd_s[jj])
        # numpy returns U,S,V.T, so have to transpose both here
        Mxpsp = np.dot(svd_v.T, np.dot(np.diag(svd_s), svd_u.T))

        u_psp = -np.dot(Jpsp.T, np.dot(Mxpsp, Fpsp))
        if rho < .01:
            u = u_psp
        else:
            u += u_psp
```

The one thing in this code I didn't talk about is that if `rho < .01` then I set `u = u_psp` instead of just adding `u_psp` to `u`. What this does is basically add a fail-safe takeover of the robotic control, saying "if we are about to hit the obstacle, forget about everything else and get out of the way!"

**Results**

And that's it! I really enjoy how this looks when it's running; it's a really effective algorithm. Let's look at some samples of it in action.

First, in a 2D environment, where it's really easy to move the obstacle around and see how the arm changes in response to the new obstacle position. The red circle is the target and the blue circle is the obstacle:

And in 3D in VREP, running the code example that I’ve put up on my GitHub implementing this. The example of it running without obstacle avoidance code is on the left, and running with obstacle avoidance is on the right. It’s kind of hard to see but on the left the robot moves through the far side of the obstacle (the gold sphere) on its way to the target (the red sphere):

And one last example, the arm dodging a moving obstacle on its way to the target.

The implementation is a ton of fun to play around with. It's a really nifty algorithm that works quite well, and I haven't found many alternatives in papers that don't go into path planning (if you know of some and can share, that'd be great!). This post was a bit of a journey, but hopefully you found it useful! I continue to find it impressive how many neat features like this can come about once you have the operational space control framework in place.
