Dynamic movement primitives part 3: Rhythmic movements

So far we’ve looked at using DMPs for discrete movements, but as I briefly mentioned it’s also possible to use them to perform rhythmic movements. In this post we’re going to look at the implementation of rhythmic DMPs and see exactly how it’s done. It’s actually pretty straightforward, which is always nice. We’ll go through everything that has to change and then see a couple of different applications of rhythmic DMPs. Again, all the code for the DMPs and arm control implementations can be found up on my Github.

Obtaining consistent and repeatable basis function activation

We’ll start with the canonical system. In the discrete case our canonical system (which drives the activation of the basis functions) decayed from 1 to 0 and then we were all done. In the rhythmic case we want our system to repeat indefinitely, so we need a reliable way of continuously activating the basis functions in the same order. One function that may come to mind that reliably repeats is the cosine function. But how to use the cosine function, exactly?

The idea is that we’ll lay out our basis functions in the range from 0 to 2\pi, and we’ll set the canonical system to be ever increasing linearly. Then we’ll use the difference between the canonical system state and each of the center points as the value we pass in to our cosine function. Because the cosine function repeats at 2\pi we’ll get a nice repeating spread of basis function activations. We’ll then throw these values through our Gaussian equation (with the gain and bias terms) and we’ll be done! Here’s the equation:

\psi = \textrm{exp}(h (\textrm{cos}(x - c) - 1)),
where x is the state of the canonical system, c is the basis function center point, and h is a gain term controlling the width of the basis function. And here’s a picture of the activations of three basis functions with centers spread out evenly between 0 and 2\pi:

gauss_rhythmic

Here’s a picture of the basis function activations on a longer time scale:

gauss_rhythmic_longer

As long as our canonical system increases at a steady pace the basis functions will continue to activate in a reliable and repeatable way. As mentioned, to get the canonical system to increase at a steady pace we simply give it linear dynamics:

\dot{x} = 1
The placement of the center points of the rhythmic system is a lot easier than in the discrete case because the rhythmic canonical system dynamics are linear.
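As a quick sketch of how these two pieces fit together (parameter values here are made up for illustration, not taken from the actual implementation):

import numpy as np

n_bfs = 3                             # number of basis functions
h = 2.0                               # gain controlling the width of each basis function
dt = 0.01

# center points spread evenly between 0 and 2*pi
c = np.linspace(0, 2 * np.pi, n_bfs, endpoint=False)

# linearly increasing canonical system: dx/dt = 1
x = np.arange(0, 4 * 2 * np.pi, dt)   # run for a few periods

# basis function activations, repeating every 2*pi
psi = np.exp(h * (np.cos(x[:, None] - c[None, :]) - 1))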

Other differences between discrete and rhythmic

In the actual implementation of the rhythmic system there are a couple of other differences that you need to be aware of. The first is that there is no diminishing term in the rhythmic system, whereas there is in the discrete case.

The second is how to go about establishing goal states when imitating a path. We’re going to assume that whatever path is passed in to us is a rhythmic pattern, and the goal state is going to be set as the center point of the desired trajectory. So for rhythmic systems the ‘goal’ is actually more like a center point for the trajectory.
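For example, one simple way to pick that center point from a demonstrated trajectory might be (a sketch, assuming y_des is a dmps x timesteps Numpy array as in the discrete case):

# set the 'goal' of each rhythmic DMP to the midpoint of its demonstrated oscillation
goal = 0.5 * (y_des.min(axis=1) + y_des.max(axis=1))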

And that’s pretty much it! If you look through the code that I have up for the rhythmic DMP subclass you’ll see that only a handful of functions needed to be redefined, the rest is the same! Well that’s great, let’s look at some applications.

Example

I was thinking of different examples of rhythmic DMPs that would be interesting to see, and after a little consideration the most obvious application of rhythmic DMPs is really to a system that does something like walking. In the spinal cord of animals there are circuits known as central pattern generators (CPGs), which when stimulated have been shown to generate rhythmic, repeated movements (as described in Principles of rhythmic motor pattern generation and other papers). Let’s go ahead and build our own CPG here using DMPs. I don’t have a simulation of a set of legs, however, so instead we’ll look at getting a single 3-link arm to reproduce a pattern of behaviour that you’d expect in a leg that is walking.

To do this, I’m going to need to specify the trajectories for the three joints of the arm, and I’ll be controlling the system in joint space, using the generalized coordinates controller, rather than the operational space controller used in the discrete DMP examples for controlling the end-effector position. So that’s one thing. Another thing is that I’m going to need the kinematics of leg joints during motion. After a quick search of the web I found this presentation from Simon Fraser University which has the following images:

leg_rhythms

That’s some very useful information! I just need to transfer this into a Numpy vector that I can feed into the system, convert degrees to radians, and we should be good to go! To get the information out of the graph, use your favorite data stripper; I used http://arohatgi.info/WebPlotDigitizer/app/.

I also had to do a bit of a shuffle with the trajectories provided to make it look right on the arm. The hip to shoulder joint maps fine, but for the knee to elbow the flexion and extension are reversed, and for the wrist to foot the 0 angle for the foot is straight out. Once those have been accounted for, though, everything works great. Here is an animation of a single leg walking along, if you use your imagination you can just picture a happy-go-lucky torso attached to this leg whistling while he gets his very human-like gait on:

leg_walking

And there you go! It’s neat how straightforward that was, once you have the data of the system you want to model. It shows the power of DMPs for coordinating multiple degrees of freedom quickly and easily. Also something that’s worth pointing out once again is that this is a force based controller. We were able to go from a kinematic description of the movement we wanted to a reliable system generating torques. It’s another good example of the dexterity of dynamic movement primitives.

You can see the code for everything up online on my github. To run the code you should run:

python run.py arm3 gc walk

If you have any questions or trouble running the code please contact me.


Dynamic movement primitives part 2: Controlling end-effector trajectories

The dynamic movement primitive (DMP) framework was designed for trajectory control. It so happens that in previous posts we’ve built up to having several arm simulations that are ripe for throwing a trajectory controller on top, and that’s what we’ll do in this post. The system that we will be controlling here is the 3 link arm model with an operational space controller (OSC) that translates end-effector forces into joint torques. The DMPs here will be controlling the (x,y) trajectory of the hand, and the OSC will take care of turning the desired hand forces into torques that can be applied to the arm. All of the code used to generate the animations throughout this post can of course be found up on my github.

Controlling a 3 link arm with DMPs
We have our 3 link arm and its OSC controller; this whole setup we’ll collectively refer to as ‘the plant’ throughout this post. We are going to pass in some (x,y) force signal to the plant, and the plant will carry it out. Additionally we’ll get a feedback signal with the (x,y) position of the hand. At the same time, we also have a DMP system that’s doing its own thing, tracing out a desired trajectory in (x,y) space. We have to tie these two systems together.

Doing this is easy: we’ll generate the control signal for the plant simply by measuring the difference between the state of our DMP system and the state of the plant, and use that difference to drive the plant to the state of the DMP system. Formally,

u = k_p(y_{\textrm{DMP}} - y)
where y_{\textrm{DMP}} is the state of the DMP system, y is the state of the plant, and k_p is the position error gain term.

Once we have this, we just go ahead and step our DMP system forward and make sure the gain values on the control signal are high enough that the plant follows the DMP’s trajectory. And that’s pretty much it, just run the DMP system to the end of the trajectory and then stop your simulation.
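Here’s a sketch of what that loop might look like (dmp and plant are hypothetical interfaces standing in for the actual classes in the code):

def follow_dmp(dmp, plant, n_timesteps, kp=50.0):
    """Drive the plant toward the DMP trajectory with position error gain kp."""
    for _ in range(n_timesteps):
        y_dmp, _, _ = dmp.step()          # advance the DMP system one timestep
        y_plant = plant.get_hand_xy()     # feedback: current (x,y) position of the hand
        u = kp * (y_dmp - y_plant)        # u = k_p (y_DMP - y)
        plant.apply_force(u)              # the OSC converts this force into joint torques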

To give a demonstration of DMP control I’ve set up the DMP system to follow the same number trajectories that the SPAUN arm followed. As you can see the combination of DMPs and operational space control is much more effective than my previous implementation.

Incorporating system feedback

One of the issues in implementing the control above is that we have to be careful about how quickly the DMP trajectory moves, because while the DMP system isn’t constrained by any physical dynamics, the plant is. Depending on the size of the movement the DMP trajectory may be moving a foot a second or an inch a second. You can see above that the arm doesn’t fully draw out the desired trajectories in places where the DMP system moved too quickly in and out of sharp corners. The only way to remedy this without feedback is to have the DMP system move more slowly throughout the entire trajectory. What would be nice, instead, would be to just say ‘go as fast as you can, as long as the plant state is within some threshold distance of you’, and this is where system feedback comes in.

It’s actually very straightforward to implement this using system feedback: if the plant state drifts away from the state of the DMPs, slow down the execution speed of the DMP to allow the plant time to catch up. To do this we just have to multiply the DMP timestep dt by a new term:

1 / (1 + \alpha_{\textrm{err}}(y_{\textrm{plant}} - y_{\textrm{DMP}})).
All this new term does is slow down the canonical system when there’s an error; you can think of it as a scaling on the time step. Additionally, the sensitivity of this term can be modulated by the scaling term \alpha_{\textrm{err}} on the difference between the plant and DMP states.
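As a self-contained toy sketch of that scaling (all names and values here are illustrative):

import numpy as np

alpha_err = 1.0                      # error sensitivity
alpha_x = 1.0                        # canonical system gain
dt = 0.001
x = 1.0                              # canonical system state

# suppose the plant is lagging behind the DMP state
y_dmp = np.array([0.5, 0.2])
y_plant = np.array([0.4, 0.1])

error = np.sqrt(np.sum((y_plant - y_dmp)**2))
error_coupling = 1.0 / (1.0 + alpha_err * error)

# the canonical system (and with it the basis function activations) slows down
# in proportion to how far the plant has fallen behind
x += -alpha_x * x * dt * error_coupling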

We can get an idea of how this affects the system by looking at the dynamics of the canonical system when an error term is introduced mid-run:

CSwitherror
When the error is introduced the dynamics of the system slow down, great! Let’s look at an example comparing execution with and without this feedback term. Here’s the system drawing the number 3 without any feedback incorporation:

and here’s the system drawing the number 3 with the feedback term included:


These two examples are a pretty good case for including the feedback term into your DMP system. You can still see in the second case that the specified trajectory isn’t traced out exactly, but if that’s what you’re shooting for you can just crank up the \alpha_{\textrm{err}} to make the DMP timestep really slow down whenever the DMP gets ahead of the plant at all.

Interpolating trajectories for imitation

When imitating trajectories there can be some issues with having enough sample points and fitting them to the canonical system’s timeframe. They’re not difficult to get around, but I thought I would go over what I did here. The approach I took was to always run the canonical system for 1 second, and whenever a trajectory that should be imitated is passed in, to scale the x-axis of the trajectory such that it’s between 0 and 1. Then I shove it into an interpolator and use the resulting function to generate an abundance of nicely spaced sample points for the DMP imitator to match. Here’s the code for that:

# generate function to interpolate the desired trajectory
import scipy.interpolate

path = np.zeros((self.dmps, self.timesteps))
x = np.linspace(0, self.cs.run_time, y_des.shape[1])

for d in range(self.dmps):

    # create interpolation function
    path_gen = scipy.interpolate.interp1d(x, y_des[d])

    # create evenly spaced sample points of desired trajectory
    for t in range(self.timesteps):  
        path[d, t] = path_gen(t * self.dt)

y_des = path

Direct trajectory control vs DMP based control

Now, using the above described interpolation function we can just directly use its output to guide our system. And, in fact, when we do this we get very precise control of the end-effector, more precise than the DMP control, as it happens. The reason for this is that our DMP system is approximating the desired trajectory with a set of basis functions, and some accuracy is lost in this approximation.

So if we instead use the interpolation function to drive the plant we can get exactly the points that we specified. This comes up especially when the trajectories that you’re trying to imitate are particularly complicated. There are ways to address this with DMPs by placing your basis functions more appropriately, but if you’re just looking for the exact replication of an input trajectory (as people often are) this is a simpler way to go. You can see the execution of this in the control_trajectory.py code up on my github. Here’s a comparison of a single word drawn using the interpolation function:

draw_word_traj

and here’s the same word drawn using a DMP system with 1,000 basis functions per DOF:

draw_word_dmp
We can see that just using the interpolation function here gives us the exact path that we specified, whereas using DMPs we have some error, and this error increases with the size of the desired trajectory. But! That’s for exactly following a given trajectory, which is often not the case. The strength of the DMP framework is that the trajectory is a dynamical system. This lets us do simple things to get really neat performance, like scale the trajectory spatially on the fly simply by changing the goal, rather than rescaling the entire trajectory:

Conclusions

Some basic examples of using DMPs to control the end-effector trajectory of an arm with operational space control were gone over here, and you can see that they work really nicely together. I like when things build like this. We also saw that the power of DMPs in this situation is in their generalizability, and not in exact reproduction of a given path. If I have a single complex trajectory that I only want the end-effector to follow once then I’m going to be better off just interpolating that trajectory and feeding the coordinates into the arm controller rather than going through the whole process of setting up the DMPs.

Drawing words, though, is just one basic example of using the DMP framework. It’s a very simple application and really doesn’t do justice to the flexibility and power of DMPs. Other example applications include things like playing ping pong. This is done by creating a desired trajectory showing the robot how to swing a ping pong paddle, and then using a vision system to track the current location of the incoming ping pong ball and changing the target of the movement to compensate dynamically. There’s also some really awesome stuff with object avoidance, which is implemented by adding another term with some simple dynamics to the DMP (discussed here): basically you just have another system that pushes you away from the object with a strength relative to your distance from it. You can also use DMPs to control gain terms on your PD control signal, which is useful for things like object manipulation.

And of course I haven’t touched on rhythmic DMPs or learning with DMPs at all, and those are both also really interesting topics! But this serves as a decent introduction to the whole area, which has been developed in the Schaal lab over the last decade or so. I recommend further reading with some of these papers if you’re interested, there are a ton of neat ways to apply the DMP framework! And, again, the code for everything here is up on my github.


Dynamic movement primitives part 1: The basics

Dynamic movement primitives (DMPs) are a method of trajectory control / planning from Stefan Schaal’s lab. They were presented way back in 2002 in this paper, and then updated in 2013 by Auke Ijspeert in this paper. This work was motivated by the desire to find a way to represent complex motor actions that can be flexibly adjusted without manual parameter tuning or having to worry about instability.

Complex movements have long been thought to be composed of sets of primitive action ‘building blocks’ executed in sequence and/or in parallel, and DMPs are a proposed mathematical formalization of these primitives. The difference between DMPs and previously proposed building blocks is that each DMP is a nonlinear dynamical system. The basic idea is that you take a dynamical system with well specified, stable behaviour and add another term that makes it follow some interesting trajectory as it goes about its business. There are two kinds of DMPs: discrete and rhythmic. For discrete movements the base system is a point attractor, and for rhythmic movements a limit cycle is used. In this post we’re only going to worry about discrete DMPs, because by the time we get through all the basics this will already be a long post.

Imagine that you have two systems: an imaginary system where you plan trajectories, and a real system where you carry them out. When you use a DMP what you’re doing is planning a trajectory for your real system to follow. A DMP has its own set of dynamics, and by setting up your DMP properly you can get the control signal for your actual system to follow. If our DMP system is planning a path for the hand to follow, then what gets sent to the real system is the set of forces that need to be applied to the hand. It’s up to the real system to take these hand forces and apply them, by converting them down to joint torques or muscle activations (through something like the operational space control framework) or whatever. That’s pretty much all I’ll say here about the real system; what we’re going to focus on here is the DMP system. But keep in mind that the whole DMP framework is for generating a trajectory / control signal to guide the real system.

I’ve got the code for the basic discrete DMP setup and examples I work through in this post up on my github, so if you want to jump straight to that, there’s the link! You can run test code for each class just by executing that file.

Discrete DMPs

Let’s start out with point attractor dynamics:

\ddot{y} = \alpha_y ( \beta_y (g - y) - \dot{y}),
where y is our system state, g is the goal, and \alpha and \beta are gain terms. This should look very familiar: it’s a PD control signal, and all it’s going to do is draw our system to the target. Now what we’ll do is add on a forcing term that will let us modify this trajectory:

\ddot{y} = \alpha_y ( \beta_y (g - y) - \dot{y}) + f.
How to define a nonlinear function f such that you get the desired behaviour is a non-trivial question. The crux of the DMP framework is an additional nonlinear system used to define the forcing function f over time, giving the problem a well defined structure that can be solved in a straightforward way and easily generalizes. The introduced system is called the canonical dynamical system, is denoted x, and has very simple dynamics:

\dot{x} = -\alpha_x x.

The forcing function f is defined as a function of the canonical system:

f(x,g) = \frac{\Sigma_{i=1}^N \psi_i w_i}{\Sigma_{i=1}^N \psi_i} x(g - y_0),
where y_0 is the initial position of the system,

\psi_i = \textrm{exp}\left( -h_i \left( x - c_i\right)^2 \right),
and w_i is a weighting for a given basis function \psi_i. You may recognize that the \psi_i equation above defines a Gaussian centered at c_i, where h_i controls its width. So our forcing function is a set of Gaussians that are ‘activated’ as the canonical system x converges to its target. Their weighted summation is normalized, and then multiplied by the x (g - y_0) term, which is both a ‘diminishing’ and spatial scaling term.
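As a quick sketch of how this forcing term might be computed for a single DMP (the weights here are random placeholders; finding them properly is covered in the imitation section below):

import numpy as np

def forcing_term(x, goal, y0, w, c, h):
    """Evaluate f(x, g) for one DMP, given basis centers c, widths h, and weights w."""
    psi = np.exp(-h * (x - c)**2)               # Gaussian basis function activations
    front_term = np.dot(psi, w) / np.sum(psi)   # normalized weighted summation
    return front_term * x * (goal - y0)         # diminishing and spatial scaling term

# example: 10 basis functions with made up weights, evaluated at x = 0.5
c = np.linspace(0, 1, 10)
h = np.ones(10) * 50.0
w = np.random.randn(10)
print(forcing_term(x=0.5, goal=2.0, y0=0.0, w=w, c=c, h=h))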

Let’s break this down a bit. The canonical system starts at some arbitrary value, throughout this post x_0 = 1, and goes to 0 as time goes to infinity. For right now, let’s pretend that x decays linearly to 0. The first main point is that there are some basis functions which are activated as a function of x; this is displayed in the top figure below. As the value of x decreases from 1 to 0, each of the Gaussians is activated (centered) around a different x value. The second thing is that each of these basis functions is also assigned a weight, w_i. These weights are displayed in the lower figure in the bar plot. The output of the forcing function f is then the summation of the activations of these basis functions multiplied by their weights, also displayed in the lower figure below.

psi
The diminishing term
Incorporating the x term into the forcing function guarantees that the contribution of the forcing term goes to zero over time, as the canonical system does. This means that we can sleep easy at night knowing that our system can trace out some crazy path, and regardless will eventually return to its simpler point attractor dynamics and converge to the target.

Spatial scaling
Spatial scaling means that once we’ve set up the system to follow a desired trajectory to a specific goal we would like to be able to move that goal farther away or closer in and get a scaled version of our trajectory. This is what the (g - y_0) term of the forcing function handles, by scaling the activation of each of these basis functions to be relative to the distance to the target, causing the system to cover more or less distance. For example, let’s say that we have a set of discrete DMPs set up to follow a given trajectory:

DMPimitatedpath
The goals in this case are 1 and .5, which you can see is where the DMPs end up. Now, we’ve specified everything in this case for these particular goals (1 and .5), but let’s say we’d like to now generalize and get a scaled up version of this trajectory for movement to a goal of 2. If we don’t appropriately scale our forcing function, with the (g - y_0) term, then we end up with this:

DMPnogscaling
Basically what’s happened is that for these new goals the same weightings of the basis functions were too weak to get the system to follow our desired trajectory. Once the (g - y_0) term is included in the forcing function, however, we get:

DMPwithgscaling
which is exactly what we want! Our movements now scale spatially. Awesome.

Spreading basis function centers
Alright, now, unfortunately for us, our canonical system does not converge linearly to the target, as we assumed above. Here’s a comparison of a linear decay vs the exponential decay of the actual system:

xvsPD
This is an issue because our basis functions activate dependent on x. If the system was linear then we would be fine and the basis function activations would be well spread out as the system converged to the target. But, with the actual dynamics, x is not a linear function of time. When we plot the basis function activations as a function of time, we see that the majority are activated immediately as x moves quickly at the beginning, and then the activations stretch out as x slows down at the end:

gauss_over_time
In the interest of having the basis functions spaced out more evenly through time (so that our forcing function can still move the system along interesting paths as it nears the target), we need to choose our Gaussian center points more shrewdly. If we look at the values of x over time, we can choose the times that we want the Gaussians to be activated, and then work backwards to find the corresponding x value that will give us activation at that time. So, let’s look at a picture:

des_gauss_centers
The red dots are the times we’d like the Gaussians to be centered around, and the blue line is our canonical system x. Following the dotted lines up to the corresponding x values we see what values of x the Gaussians need to be centered around. Additionally, we need to worry a bit about the width of each of the Gaussians, because those activated later will be activated for longer periods of time. To even it out the later basis function widths should be smaller. Through the very nonanalytical method of trial and error I’ve come to calculate the width term as

h_i = \frac{\#BFs}{c_i}
which reads: the width term of basis function i is equal to the number of basis functions divided by the center point of that basis function. When we do this, we can now generate centers for our basis functions that are well spaced out:

gauss_over_time_spaced_well
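Here’s a sketch of that center placement procedure, following the formulas above (names and values are illustrative):

import numpy as np

n_bfs = 10
alpha_x = 1.0
run_time = 1.0

# times at which we'd like each basis function to be maximally active
des_t = np.linspace(0, run_time, n_bfs)

# work backwards through x(t) = exp(-alpha_x * t) to find the corresponding centers
c = np.exp(-alpha_x * des_t)

# basis functions activated later (smaller c) get narrower widths
h = n_bfs / c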

Temporal scaling

Again, generalizability is one of the really important things that we want out of this system. There are two obvious kinds, temporal and spatial. Spatial scaling we discussed above, in the temporal case we’d like to be able to follow this same trajectory at different speeds. Sometimes quick, sometimes slow, but always tracing out the same path. To do that we’re going to add another term to our system dynamics, \tau, our temporal scaling term. Given that our system dynamics are:

\ddot{y} = \alpha_y ( \beta_y (g - y) - \dot{y}) + f,
\dot{x} = -\alpha_x x,
to give us temporal flexibility we can add the \tau term:

\ddot{y} = \tau^2 (\alpha_y ( \beta_y (g - y) - \dot{y}) + f),
\dot{x} = \tau(-\alpha_x x),
where we use \tau^2 for \ddot{y} because it’s the second derivative, and that’s all we have to do! Now to slow down the system you set \tau between 0 and 1, and to speed it up you set \tau greater than 1.
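Numerically, one simple way to realize this temporal scaling is to treat \tau as a scaling on the simulation timestep, which is equivalent to the equations above (a sketch with illustrative parameter values):

def step(y, yd, x, f, tau, dt=0.001, alpha_y=25.0, beta_y=6.25, alpha_x=1.0, goal=1.0):
    """Euler-integrate one timestep of the tau-scaled DMP system.

    tau > 1 speeds the trajectory up, 0 < tau < 1 slows it down.
    """
    scaled_dt = tau * dt                               # tau acts as a scaling on the timestep
    ydd = alpha_y * (beta_y * (goal - y) - yd) + f     # point attractor dynamics plus forcing term
    yd = yd + ydd * scaled_dt
    y = y + yd * scaled_dt
    x = x + (-alpha_x * x) * scaled_dt                 # the canonical system is scaled identically
    return y, yd, x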

Imitating a desired path

Alright, great. We have a forcing term that can make the system take a weird path as it converges to a target point, and temporal and spatial scalability. How do we set up the system to follow a path that we specify? That would be ideal: show the system the path to follow, have it work backwards and figure out the forces, and then be able to generate that trajectory whenever we want. This ends up being a pretty straightforward process.

We have control over the forcing term, which affects the system acceleration. So we first need to take our desired trajectory, \textbf{y}_d (where bold denotes a vector, in this case the time series of desired points in the trajectory), and differentiate it twice to get the accelerations:

\ddot{\textbf{y}}_d = \frac{\partial}{\partial t} \dot{\textbf{y}}_d = \frac{\partial}{\partial t} \frac{\partial}{\partial t} \textbf{y}_d.
Once we have the desired acceleration trajectory, we need to remove the effect of the base point attractor system. We have the equation above for exactly what the acceleration induced by the point attractor system at each point in time is:

\ddot{y} = \alpha_y ( \beta_y (g - y) - \dot{y}),
so then to calculate what the forcing term needs to be to generate this trajectory we have:

\textbf{f}_d = \ddot{\textbf{y}}_d - \alpha_y ( \beta_y (g - \textbf{y}) - \dot{\textbf{y}}).
From here we know that the forcing term is comprised of a weighted summation of basis functions which are activated through time, so we can use an optimization technique like locally weighted regression to choose the weights over our basis functions such that the forcing function matches the desired trajectory \textbf{f}_d. Locally weighted regression sets up to minimize:

\Sigma_t \psi_i(t)(f_d(t) - w_i (x(t) (g - y_0)))^2
and the solution (which I won’t derive here, but is worked through in Schaal’s 1998 paper) is

w_i = \frac{\textbf{s}^T \pmb{\psi}_i \textbf{f}_d}{\textbf{s}^T \pmb{\psi}_i \textbf{s}},
where

\textbf{s} = \left( \begin{array}{c}x_{t_0}(g - y_0) \\ \vdots \\ x_{t_N}(g - y_0) \end{array} \right), \;\;\; \pmb{\psi}_i = \left( \begin{array}{ccc} \psi_i(t_0) & \dots & 0 \\ 0 & \ddots & 0 \\ 0 & \dots & \psi_i(t_N) \end{array} \right)
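Here’s a sketch of what solving for the weights with that closed form might look like (array names are just illustrative):

import numpy as np

def gen_weights(f_des, x_track, psi_track, goal, y0):
    """Find the basis function weights using the locally weighted regression solution above.

    f_des:     desired forcing term at each timestep, shape (timesteps,)
    x_track:   canonical system state at each timestep, shape (timesteps,)
    psi_track: basis function activations, shape (timesteps, n_bfs)
    """
    s = x_track * (goal - y0)                  # the diminishing / spatial scaling vector
    w = np.zeros(psi_track.shape[1])
    for i in range(psi_track.shape[1]):
        numer = np.sum(s * psi_track[:, i] * f_des)
        denom = np.sum(s**2 * psi_track[:, i])
        w[i] = numer / denom
    return w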
Great! Now we have everything we need to start making some basic discrete DMPs!

Different numbers of basis functions

One of the things you’ll notice right off the bat when imitating paths, is that as the complexity of the trajectory increases, so does the required number of basis functions. For example, below, the system is trying to follow a sine wave and a highly nonlinear piecewise function:

DMPdiffBFnums
We can see in the second case that although the DMP is never able to exactly reproduce the desired trajectory, the approximation continues to get better as the number of basis functions increases. This kind of slow improvement in certain nonlinear areas is to be expected from how the basis functions are being placed. Here an even spreading of the basis function centers through time was used, but for imitation there is another method out of Dr. Schaal’s lab that places the basis functions more strategically: need is determined by how complex the function is in that region, and basis function centers and widths are defined accordingly. In highly nonlinear areas we would expect there to be many narrow basis functions, and in linear areas we would expect fewer basis functions, but ones that are wider. The method is called locally weighted projection regression, which I plan on writing about and applying in a future post!

Conclusions / thoughts

There’s really a lot of power in this framework, and there are a ton of expansions on this basic setup, including things like incorporating system feedback, spatio-temporal coupling of DMPs, using DMPs for gain control as well as trajectory control, incorporating a cost function and reinforcement learning, identifying action types, and other really exciting stuff.

I deviated from the terminology used in the papers here in a couple of places. First, I didn’t see a reason to reduce the second order systems to two first order systems. When working through it I found it more confusing than helpful, so I left the dynamics as second order systems. Second, I also moved the \tau term to the right hand side, and that’s just so that it matches the code; it doesn’t really matter. Neither of these were big changes, but in case you’re reading the papers and wondering.

Something that I kind of skirted above is planning along multiple dimensions. It’s actually very simple; the DMP framework simply assigns one DMP per degree of freedom being controlled. But, it’s definitely worth explicitly stating at some point.

I also mentioned this above, but this is a great trajectory control system to throw on top of the previously discussed operational space control framework. With the DMP framework on top to plan robust, generalizable movements, and the OSCs underneath to carry out those commands we can start to get some really neat applications. For use on real systems the incorporation of feedback and spatio-temporal coupling terms is going to be important, so the next post will likely be working through those and then we can start looking at some exciting implementations!

Speaking of implementations, there’s DMP and canonical system code up on my github; please feel free to explore it, run it, send me questions about it. Whatever. I should also mention that there’s this and a lot more all up and implemented on Stefan Schaal’s lab website.

Tagged , , , ,

Robot control part 7: OSC of a 3-link arm

So we’ve done control for the 2-link arm, and control of the one link arm is trivial (where we control joint angle, or the x or y coordinate of the pendulum), so here I’ll just show an implementation of operational space control for a more interesting arm model, the 3-link arm model. The code can all be found up on my Github.

In theory there’s nothing different or more difficult about controlling a 3-link arm versus a 2-link arm. For the inertia matrix here, what I ended up doing is just jacking up all the values of the matrix to about 100, which causes the controller to way over-control the arm, and you can see the torques are much larger than they would need to be if we had an accurate inertia matrix. But the result is the same super straight trajectories that we’ve come to expect from operational space control:

3link

It’s a little choppy because I cut out a bunch of frames to keep the gif size down. But you get the point, it works. And quite well!

Because this is now a 3-link arm and our task space force signal is 2D, we have a redundant space of solutions, meaning that the task space control signal can be carried out in a number of ways. In other words, a null space exists for this controller. This means that we can throw another controller in our system to operate inside the null space of the first controller. We’ve already worked through all the math for this, so it’s straightforward to implement.

What kind of null space controller should we put in? Well, you may have noticed in the above animation that the arm goes through itself; here’s another example:

3linknonull

Often it’s desirable to avoid this (because of physics or whatever), so what we can do is add a secondary controller that works to keep the arm’s elbow and wrist near some comfortable default angles. When we do this we get the following:

3linknull

And there you have it! Operational space control of a three link arm with a secondary controller in the null space to try to keep the angles near their default / resting positions.

I also added mouse based control to the arm so it will try to follow your mouse when you move over the figure, which makes it pretty fun to explore the power of the controller. It’s interesting to see where the singularities become an issue, and how having a null space controller that’s operating in joint space can actually come to help the system move through those problem points more smoothly. Check it out! It’s all up on my Github.


Robot control part 6: Handling singularities

We’re back! Another exciting post about robotic control theory, but don’t worry, it’s short and ends with simulation code. The subject of today’s post is handling singularities.

What is a singularity

This came up recently when I had built this beautiful controller for a simple two link arm that would occasionally go nuts. After looking at it for a while it became obvious this was happening whenever the elbow angle reached or got close to 0 or \pi. Here’s an animation:

singularity

What’s going on here? Here’s what. The Jacobian has dropped rank and become singular (i.e. non-invertible), and when we try to calculate our mass matrix for operational space

\textbf{M}_\textbf{x} = (\textbf{J} \; \textbf{M}_\textbf{q}^{-1} \textbf{J}^T)^{-1}
the values explode in the inverse calculation. By dropping rank I mean that the rows of the Jacobian are no longer linearly independent. This means that we can rotate the matrix and end up with a matrix with a row of zeros. This row of zeros is the degenerate direction, and the problems come from trying to send forces in that direction.

To determine when the Jacobian becomes singular we can look at its determinant. If the determinant of the matrix is zero, then it’s singular. Let’s look at the Jacobian for the end-effector:

\textbf{J} = \left[ \begin{array}{cc} -L_0 sin(\theta_0) - L_1 sin(\theta_0 + \theta_1) & -L_1 sin(\theta_0 + \theta_1) \\ L_0 cos(\theta_0) + L_1 cos(\theta_0 + \theta_1) & L_1 cos(\theta_0 + \theta_1) \end{array} \right]

And now let’s look at what happens when \theta_1 = 0. We can see that sin(\theta_0 + 0) = sin(\theta_0), so our Jacobian becomes

\textbf{J} = \left[ \begin{array}{cc} -(L_0 + L_1) sin(\theta_0) & -L_1 sin(\theta_0) \\ (L_0 + L_1) cos(\theta_0) & L_1 cos(\theta_0) \end{array} \right]

If we calculate the determinant we get

(L_0 + L_1)(-sin(\theta_0))(L_1)(cos(\theta_0)) - (L_1)(-sin(\theta_0))(L_0 + L_1)(cos(\theta_0)) = 0.

Similarly, if we look at the Jacobian when \theta_1 = \pi, where sin(\theta_0 + \pi) = -sin(\theta_0) and cos(\theta_0 + \pi) = -cos(\theta_0), we get

\textbf{J} = \left[ \begin{array}{cc} -(L_0 - L_1) sin(\theta_0) & L_1 sin(\theta_0) \\ (L_0 - L_1) cos(\theta_0) & - L_1 cos(\theta_0) \end{array} \right]

Calculating the determinant of this we get

(L_0 - L_1)(-sin(\theta_0))(-L_1)(cos(\theta_0)) - (L_1)(sin(\theta_0))(L_0 - L_1)(cos(\theta_0)) = 0.

Note that in these cases the Jacobian is a square matrix, in the event that it’s not a square matrix, no worries, we just find the determinant of \textbf{J}\;\textbf{J}^T instead!
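As a quick numeric sanity check (link lengths and the shoulder angle here are made up):

import numpy as np

def jacobian(q0, q1, L0=1.0, L1=0.8):
    """End-effector Jacobian of the 2-link arm from above."""
    return np.array([
        [-L0 * np.sin(q0) - L1 * np.sin(q0 + q1), -L1 * np.sin(q0 + q1)],
        [ L0 * np.cos(q0) + L1 * np.cos(q0 + q1),  L1 * np.cos(q0 + q1)]])

# the determinant collapses to zero at the singular configurations theta_1 = 0 and pi
for q1 in (0.0, np.pi, np.pi / 2):
    print(q1, np.linalg.det(jacobian(q0=0.3, q1=q1)))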

Fixing the problem

So we can detect when a singularity is occurring; how do we prevent it from destroying our controller? Well, what we can do is identify the degenerate dimensions and set the force in those directions to zero. Fortunately, doing this is straightforward.

The problems come when we try to find the inverse of \textbf{M}_\textbf{x}^{-1} in the regular way. So we’ll amend the process a bit. First we find the SVD decomposition of \textbf{M}_\textbf{x}^{-1}. To get the inverse of this matrix (i.e. to find \textbf{M}_\textbf{x}) from the returned \textbf{V}, \textbf{S} and \textbf{U} matrices we can simply multiply them out like so:

\textbf{M}_\textbf{x} = \textbf{V} \textbf{S}^{-1} \textbf{U}^T,
where \textbf{S} is a diagonal matrix of singular values.

Because \textbf{S} is diagonal it’s very easy to find its inverse, which is calculated just by taking the reciprocal of each of the diagonal elements.

Now, whenever our system approaches a singularity some of the values of \textbf{S} will start to get very small, and when we take the reciprocal of them we start getting huge numbers, which is where the value explosion comes from. Instead, we can check for approaching the singularity, then look at the singular values and set those entries smaller than our threshold (say .005) equal to zero. This cancels out any forces that would be sent in that direction. Then we can just go on with our calculations.
Here’s the code:

import numpy as np

Mx_inv = np.dot(JEE, np.dot(np.linalg.inv(Mq), JEE.T))
if abs(np.linalg.det(np.dot(JEE, JEE.T))) > .005**2:
    # if we're not near a singularity
    Mx = np.linalg.inv(Mx_inv)
else:
    # in the case that the robot is entering a near singular configuration
    # note: np.linalg.svd returns the third matrix already transposed (V^T)
    u, s, vh = np.linalg.svd(Mx_inv)
    for i in range(len(s)):
        # cancel out any forces that would be sent along the degenerate directions
        if s[i] < .005:
            s[i] = 0
        else:
            s[i] = 1.0 / float(s[i])
    Mx = np.dot(vh.T, np.dot(np.diag(s), u.T))

And here’s an animation of the controlled arm now that we’ve accounted for movement when near singular configurations:

fixed

As always, the code for this can be found up on my Github. The default is to run using a two link arm simulator written in Python. To run, simply download everything and run the run_this.py file.

Everything required to run the MapleSim arm simulator is also included. To do this, go into the TwoLinkArm folder, and run python setup.py build_ext -i. This should compile the arm simulation to a shared object library that Python can now access on your system. To use it, edit the run_this.py file to import from TwoLinkArm/arm instead of TwoLinkArm/arm_python and you should be good to go!
More details on getting the MapleSim arm to run can be found in this post.


Arm simulation visualization with Matplotlib

One of the downsides of switching to Python from Matlab is that it can be a pain to plot some kinds of things, and I’ve found animations to be one of those things. In previous posts I’ve done the visualization of my arm simulations through Pyglet, but I recently started playing around with Matplotlib’s animation function, and the results are pretty smooth. The process is also relatively painless and quick to get up and running, so I thought I would throw up some Matplotlib code for visualizing my previously described 2 link arm MapleSim simulation.

So, let’s look at the code:

#Written by Travis DeWolf (Sept, 2013)
#Based on code by Jake Vanderplas - http://jakevdp.github.com

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import py2LinkArm

class TwoLinkArm:
    """
    :param list u: the torque applied to each joint
    """
    def __init__(self, u = [.1, 0]): 
        self.u = np.asarray(u, dtype='float') # control signal
        self.state = np.zeros(3) # vector for current state
        self.L1=0.37 # length of arm link 1 in m
        self.L2=0.27 # length of arm link 2 in m
        self.time_elapsed = 0

        self.sim = py2LinkArm.pySim()
        self.sim.reset(self.state)
    
    def position(self):
        """Compute x,y position of the hand"""

        x = np.cumsum([0,
                       self.L1 * np.cos(self.state[1]),
                       self.L2 * np.cos(self.state[2])])
        y = np.cumsum([0,
                       self.L1 * np.sin(self.state[1]),
                       self.L2 * np.sin(self.state[2])])
        return (x, y)

    def step(self, dt):
        """Simulate the system and update the state"""
        for i in range(1500):
            self.sim.step(self.state, self.u)
        self.time_elapsed = self.state[0]

#------------------------------------------------------------
# set up initial state and global variables
arm = TwoLinkArm()
dt = 1./30 # 30 fps

#------------------------------------------------------------
# set up figure and animation
fig = plt.figure(figsize=(4,4))
ax = fig.add_subplot(111, aspect='equal', autoscale_on=False,
                     xlim=(-1, 1), ylim=(-1, 1))
ax.grid()

line, = ax.plot([], [], 'o-', lw=4, mew=5)
time_text = ax.text(0.02, 0.95, '', transform=ax.transAxes)

def init():
    """initialize animation"""
    line.set_data([], [])
    time_text.set_text('')
    return line, time_text

def animate(i):
    """perform animation step"""
    global arm, dt
    arm.step(dt)
    
    line.set_data(*arm.position())
    time_text.set_text('time = %.2f' % arm.time_elapsed)
    return line, time_text

# frames=None for matplotlib 1.3
ani = animation.FuncAnimation(fig, animate, frames=None,
                              interval=25, blit=True, 
                              init_func=init)

# uncomment the following line to save the video in mp4 format.
# requires either mencoder or ffmpeg to be installed
#ani.save('2linkarm.mp4', fps=15, 
#         extra_args=['-vcodec', 'libx264'])

plt.show()

There’s not all too much to it, which is nice. I’ve created a class TwoLinkArm that wraps the actual arm simulator, stores the current state of the arm, and has a step function that gets called to move the system forward in time. Then, I created a line and stored a reference to it; this is what is going to be updated in the simulation. I then need functions that specify how the line will be initialized and updated. The init function doesn’t do anything except set the line and text data to nothing, and then the animate function calls the arm simulation’s step function and updates the line and text data.

For more details about it there’s this blog post which steps through the process a bit more. For simple arm simulations the above is all I need though, so I’ll leave it there for now!

Here’s an animation of the resulting sim visualization, I’ve removed a lot of the frames to keep the size down. It’s smoother when running the actual code, which can be found up on my github.

Hurray better visualization!


Robot control part 5: Controlling in the null space

In the last post, I went through how to build an operational space controller. It was surprisingly easy after we’d worked through all the other posts. But maybe that was a little too easy for you. Maybe you want to do something more interesting, like implement more than one controller at the same time. In this post we’ll go through how to work inside the null space of a controller to implement several separate controllers simultaneously without interference.
Buckle up.

Null space forces

Basic operational space control works all well and fine, but it’s not uncommon to have several control goals at once. Something like ‘move to this position’, and ‘raise your elbow high’. If our operational space can also serve as generalized coordinates, meaning that the system state specified in operational space constrains all of the degrees of freedom of the robot, then there isn’t anything we can do. In the above example, this is the case. We could have used the end-effector Cartesian space (our chosen operational space) as our generalized coordinates system instead, because a specific (x,y) position fully constrains the position of the arm.

But often when using operational space control for more complex robots, this is not the case. In these situations, the forces controlled in operational space have fewer dimensions than the robot has degrees of freedom, and so it’s possible to accomplish the goals of a given controller in a number of ways. The null space of this controller is the region of state space where there is a redundancy of solutions. The system can move in a number of ways and still not affect the completion of the goals of the first controller (think about all the different places your elbow can be while you move your arm, without affecting your ability to move your hand in a straight line). In these situations, we can create another controller to operate in the null space of the first, and the full torque applied to the system is a sum of the torques from each of the controllers. Here, we’ll look at only two controllers, but note that this process can be applied iteratively.

The second controller’s goals will only be accomplished if it’s possible to do so without affecting the performance of the first controller, which means that it must operate in the null space of the first controller. We denote the operational space torque with \pmb{\tau}_{\textbf{x}}, and the torque from the secondary controller \pmb{\tau}_{\textrm{null}}. Our equation for the overall torque to apply to the system is:

\pmb{\tau} = \pmb{\tau}_{\textbf{x}} + (\textbf{I} - \textbf{J}^T \textbf{J}^{T+}) \pmb{\tau}_{\textrm{null}},
where \textbf{J}^{T+} is the pseudo-inverse of \textbf{J}^T.

Looking at that term we just added,

(\textbf{I} - \textbf{J}^T \textbf{J}^{T+}) \pmb{\tau}_{\textrm{null}},
we see that the Jacobian transpose multiplied by its pseudo-inverse will be 1’s all along the diagonal, except in the null space, and so the idea is that we end up subtracting \pmb{\tau}_{\textrm{null}} from itself everywhere that affects the operational space and leaving it to do whatever it wants where it doesn’t.

There is a problem here, however. The Jacobian is defined over velocities between two spaces, and so operating in its null space ensures that no velocities get applied elsewhere in the system, but forces affect the system’s acceleration, not the velocity directly. This means that with the above null space filter forces affecting the system’s acceleration can still get through. What we really need here is an acceleration null space, not a velocity null space.

Calculating the acceleration in operational space

So we’re interested in finding an equation for the acceleration in operational space. Once we have that we can look at how we can define a null space in operational space accelerations. Let’s start here by relating the Jacobian to acceleration. We can do this simply by taking the time derivative of the Jacobian equation:

\dot{\textbf{x}} = \textbf{J} \; \dot{\textbf{q}}
(take the time derivative, apply the product rule on the right hand side)
\ddot{\textbf{x}} = \dot{\textbf{J}} \; \dot{\textbf{q}} + \textbf{J} \; \ddot{\textbf{q}}
OK. So now at least we have an equation with the Jacobian and accelerations in it that we can work with. Let’s go back and look at our force equation for the torque of the whole system.

\pmb{\tau} = \pmb{\tau}_{\textbf{x}} + (\textbf{I} - \textbf{J}^T \textbf{J}^{T+}) \pmb{\tau}_{\textrm{null}},
what we’re interested in is the part of the system’s acceleration that is due to operational space dynamics. We know that force is equal to mass times acceleration, so let’s rearrange the above equation to isolate \pmb{\tau}_\textbf{x} and see where we can go from there:

\pmb{\tau}_{\textbf{x}} = \pmb{\tau} - (\textbf{I} - \textbf{J}^T \textbf{J}^{T+}) \pmb{\tau}_{\textrm{null}}.
As discussed in the previous post, the sum of forces affecting the system (represented by \pmb{\tau} here) is equal to the system mass times acceleration (both defined in the same coordinate system, here in joint space), plus the Coriolis and centrifugal effects, plus the force of gravity (again, all defined in the joint space). Writing this out formally gives:

\pmb{\tau} = \textbf{M}_{\textbf{q}} \ddot{\textbf{q}} + \textrm{b}(\textbf{q}, \dot{\textbf{q}}) + \pmb{\tau}_g
OK. Let’s sub this into the previous equation:

\pmb{\tau}_{\textbf{x}} = \textbf{M}_{\textbf{q}} \ddot{\textbf{q}} + \textrm{b}(\textbf{q}, \dot{\textbf{q}}) + \pmb{\tau}_g - (\textbf{I} - \textbf{J}^T \textbf{J}^{T+}) \pmb{\tau}_{\textrm{null}}
So we have \pmb{\tau}_{\textbf{x}}, let’s break this term down a bit:

\pmb{\tau}_{\textbf{x}} = \textbf{J}^T \textbf{F}_{\textbf{x}_\textrm{full}} = \textbf{J}^T \textbf{M}_{\textbf{x}} \ddot{\textbf{x}}_\textrm{full}.
There’s our acceleration! I’ve added the subscript \textrm{full} to it to represent that these terms take into account the full dynamics of the system, rather than only the operational space dynamics. Subbing this all in we come out with:

\textbf{J}^T \textbf{M}_{\textbf{x}} \ddot{\textbf{x}}_\textrm{full} = \textbf{M}_{\textbf{q}} \ddot{\textbf{q}} + \textrm{b}(\textbf{q}, \dot{\textbf{q}}) + \pmb{\tau}_g - (\textbf{I} - \textbf{J}^T \textbf{J}^{T+}) \pmb{\tau}_{\textrm{null}}.
We need to get rid of that Jacobian transpose and Cartesian coordinates mass matrix. Let’s get them both at once!

(left multiply by \textbf{J}\textbf{M}_{\textbf{q}}^{-1})
\textbf{J}\textbf{M}_{\textbf{q}}^{-1} \textbf{J}^T \textbf{M}_{\textbf{x}} \ddot{\textbf{x}}_\textrm{full} = \textbf{J}\textbf{M}_{\textbf{q}}^{-1} \textbf{M}_{\textbf{q}} \ddot{\textbf{q}} + \textbf{J}\textbf{M}_{\textbf{q}}^{-1} \textrm{b}(\textbf{q}, \dot{\textbf{q}}) + \textbf{J}\textbf{M}_{\textbf{q}}^{-1} \pmb{\tau}_g - \textbf{J}\textbf{M}_{\textbf{q}}^{-1} (\textbf{I} - \textbf{J}^T \textbf{J}^{T+}) \pmb{\tau}_{\textrm{null}}
(simplify)
\textbf{J}\textbf{M}_{\textbf{q}}^{-1} \textbf{J}^T \textbf{M}_{\textbf{x}} \ddot{\textbf{x}}_\textrm{full} = \textbf{J} \ddot{\textbf{q}} + \textbf{J}\textbf{M}_{\textbf{q}}^{-1} \textrm{b}(\textbf{q}, \dot{\textbf{q}}) + \textbf{J}\textbf{M}_{\textbf{q}}^{-1} \pmb{\tau}_g - \textbf{J}\textbf{M}_{\textbf{q}}^{-1} (\textbf{I} - \textbf{J}^T \textbf{J}^{T+}) \pmb{\tau}_{\textrm{null}}
Now, the first part of that left hand side term is:

\textbf{J}\textbf{M}_{\textbf{q}}^{-1} \textbf{J}^T,
recall that this is exactly how to transform the mass matrix between coordinate systems, so this term is actually the inverse mass matrix in operational space!

\textbf{M}_{\textbf{x}}^{-1} = \textbf{J}\textbf{M}_{\textbf{q}}^{-1} \textbf{J}^T
This gives us an equation that looks like:

\textbf{M}_{\textbf{x}}^{-1} \textbf{M}_{\textbf{x}} \ddot{\textbf{x}}_\textrm{full} = \textbf{J} \ddot{\textbf{q}} + \textbf{J}\textbf{M}_{\textbf{q}}^{-1} \textrm{b}(\textbf{q}, \dot{\textbf{q}}) + \textbf{J}\textbf{M}_{\textbf{q}}^{-1} \pmb{\tau}_g - \textbf{J}\textbf{M}_{\textbf{q}}^{-1} (\textbf{I} - \textbf{J}^T \textbf{J}^{T+}) \pmb{\tau}_{\textrm{null}},
(simplify)
\ddot{\textbf{x}}_\textrm{full} = \textbf{J} \ddot{\textbf{q}} + \textbf{J}\textbf{M}_{\textbf{q}}^{-1} \textrm{b}(\textbf{q}, \dot{\textbf{q}}) + \textbf{J}\textbf{M}_{\textbf{q}}^{-1} \pmb{\tau}_g - \textbf{J}\textbf{M}_{\textbf{q}}^{-1} (\textbf{I} - \textbf{J}^T \textbf{J}^{T+}) \pmb{\tau}_{\textrm{null}}.
Additionally, the first term from the right hand side is something that we recognize from our Jacobian acceleration equation, so let’s substitute in for that as well:

\ddot{\textbf{x}}_\textrm{full} = \ddot{\textbf{x}} + \dot{\textbf{J}}\dot{\textbf{q}} + \textbf{J}\textbf{M}_{\textbf{q}}^{-1} \textrm{b}(\textbf{q}, \dot{\textbf{q}}) + \textbf{J}\textbf{M}_{\textbf{q}}^{-1} \pmb{\tau}_g - \textbf{J}\textbf{M}_{\textbf{q}}^{-1} (\textbf{I} - \textbf{J}^T \textbf{J}^{T+}) \pmb{\tau}_{\textrm{null}}.

The way to read this is that the actual acceleration of the system in operational space is equal to the acceleration according to operational space dynamics, plus a whole bunch of other dynamics coming from the rest of the system, including Coriolis and centrifugal effects and gravity (all transformed into operational space) as well as the forces introduced by any other controllers into the system \pmb{\tau}_{\textrm{null}}.

Holy cow, we did it. We found the equation for acceleration of the system in operational space that takes into account the full dynamics of the system! That was a journey.

Now, what we want to do is to make sure that that last term on the right hand side (this one: \textbf{J}\textbf{M}_{\textbf{q}}^{-1} (\textbf{I} - \textbf{J}^T \textbf{J}^{T+}) \pmb{\tau}_{\textrm{null}}) is always zero, such that we can guarantee that nothing from \pmb{\tau}_{\textrm{null}} ever affects the acceleration of our system in operational space. If you look at that term, there’s only one term that we can play with: the pseudo-inverse.

Choosing a pseudo-inverse for the Jacobian transpose

There are a bunch of different pseudo-inverses that can be chosen for a given situation (infinite, actually). What we want to do here is engineer our pseudo-inverse such that the term multiplying \pmb{\tau}_{\textbf{null}} in the above operational space acceleration equation is guaranteed to go to zero. So let’s set this term to zero and play around with it:

\textbf{J}\textbf{M}_{\textbf{q}}^{-1} (\textbf{I} - \textbf{J}^T \textbf{J}^{T+}) \pmb{\tau}_{\textrm{null}} = 0,
(we want this to be true for all \pmb{\tau}_{\textrm{null}}, so we can remove it)
\textbf{J}\textbf{M}_{\textbf{q}}^{-1} (\textbf{I} - \textbf{J}^T \textbf{J}^{T+}) = 0,
(multiply through and switch sides)
\textbf{J}\textbf{M}_{\textbf{q}}^{-1} = \textbf{J}\textbf{M}_{\textbf{q}}^{-1} \textbf{J}^T \textbf{J}^{T+} ,
(rewrite \textbf{J}\textbf{M}_{\textbf{q}}^{-1} \textbf{J}^T as \textbf{M}_{\textbf{x}}^{-1})
\textbf{J}\textbf{M}_{\textbf{q}}^{-1} = \textbf{M}_{\textbf{x}}^{-1} \textbf{J}^{T+},
(left multiply by \textbf{M}_{\textbf{x}}, rearrange, and simplify)
\textbf{J}^{T+} = \textbf{M}_{\textbf{x}}\textbf{J}\textbf{M}_{\textbf{q}}^{-1}.
And there it is! Now, you might look at this and say that you don’t think that by defining this as the Jacobian pseudo-inverse we’re going to get a bunch of 1’s and 0’s coming out in our matrix multiplication. You would be very correct. But, by setting our Jacobian transpose pseudo-inverse using the above equation we can guarantee that any forces in \pmb{\tau}_{\textrm{null}} that affect acceleration in the operational space will be filtered out and set to zero. Let’s see why.

If we plug this in to the null space filtering term:

\textbf{J}^T \textbf{M}_{\textbf{x}}\textbf{J}\textbf{M}_{\textbf{q}}^{-1}\pmb{\tau}_{\textrm{null}}
(substitute in \pmb{\tau}_{\textrm{null}} = \textbf{M}_\textbf{q} \ddot{\textbf{q}}_{\textrm{null}})

\textbf{J}^T \textbf{M}_{\textbf{x}}\textbf{J}\textbf{M}_{\textbf{q}}^{-1} \textbf{M}_\textbf{q} \ddot{\textbf{q}}_{\textrm{null}}
(simplify)

\textbf{J}^T \textbf{M}_{\textbf{x}}\textbf{J} \ddot{\textbf{q}}_{\textrm{null}}
Now, there’s a term we recognize from above, \textbf{J} \ddot{\textbf{q}}, let’s substitute in for that:

\textbf{J}^T \textbf{M}_{\textbf{x}} (\ddot{\textbf{x}}_{\textrm{null}} - \dot{\textbf{J}}\dot{\textbf{q}}_{\textrm{null}})
(multiply through)

\textbf{J}^T \textbf{M}_{\textbf{x}} \ddot{\textbf{x}}_{\textrm{null}} - \textbf{J}^T \textbf{M}_{\textbf{x}} \dot{\textbf{J}}\dot{\textbf{q}}_{\textrm{null}}
That first term, why, that’s \textbf{F}_{\textrm{null}} = \textbf{M}_{\textbf{x}} \ddot{\textbf{x}}_{\textrm{null}}. That’s exactly the force that’s going to be applied by our null space controller in operational space! Neat! And with the Jacobian transpose in front of a force we have:

\pmb{\tau}_{\textrm{null-in-\textbf{x}}} = \textbf{J}^T \textbf{F}_{\textrm{null}}
Please forgive the terrible notation. But, what we have then is exactly the torques from the null space controller that are going to affect operational space acceleration, and we can just cancel that out and we’re golden.

Then there’s the other term \textbf{J}^T \textbf{M}_{\textbf{x}} \dot{\textbf{J}}\dot{\textbf{q}}, which just joins the abyss of Coriolis and centrifugal terms that we must never speak of. But seriously. For the same reasons we ignored them above. So, that’s easy.

Operational space control of simple robot arm with acceleration null space force filter

Excellent! Now the only thing left to do is to write out the whole control signal. So let’s just go back and grab our controller from the last post:

\pmb{\tau} = \textbf{J}^T \textbf{M}_{\textbf{x}} ( k_p (\textbf{x}_{\textrm{des}} - \textbf{x}) + k_v (\dot{\textbf{x}}_{\textrm{des}} - \dot{\textbf{x}}) + k_i(\ddot{\textbf{x}}_{\textrm{des}} - \ddot{\textbf{x}})) + \pmb{\tau}_g,
and now we’ll add in the null space filtering term:

\pmb{\tau} = \textbf{J}^T \textbf{M}_{\textbf{x}} ( k_p (\textbf{x}_{\textrm{des}} - \textbf{x}) + k_v (\dot{\textbf{x}}_{\textrm{des}} - \dot{\textbf{x}}) + k_i(\ddot{\textbf{x}}_{\textrm{des}} - \ddot{\textbf{x}})) + \pmb{\tau}_g + (\textbf{I} - \textbf{J}^T\textbf{M}_\textbf{x}\textbf{J}\textbf{M}_\textbf{q}^{-1})\pmb{\tau}_{\textrm{null}}

We did it! This will now allow a high-priority operational space controller to execute without interference, while a secondary controller operates in its null space to complete its own set of goals (when possible).
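In case it helps to see the structure of this control signal written out, here’s a rough numpy sketch of it. Everything passed in (the Jacobian, mass matrix, gravity term, gains, and the secondary controller’s torque) is a placeholder you’d compute for your own arm; only the order of operations is the point:

import numpy as np

def osc_with_null_space_filter(J, M_q, tau_g, x, dx, ddx,
                               x_des, dx_des, ddx_des, tau_null,
                               kp=100.0, kv=20.0, ki=0.0):
    """Sketch of the operational space control signal with the
    acceleration null space filter, following the equation above."""
    M_q_inv = np.linalg.inv(M_q)
    # operational space mass matrix: Mx = (J Mq^-1 J^T)^-1
    M_x = np.linalg.inv(np.dot(J, np.dot(M_q_inv, J.T)))
    # desired operational space acceleration
    ddx_star = kp * (x_des - x) + kv * (dx_des - dx) + ki * (ddx_des - ddx)
    # primary controller mapped to joint torques, plus gravity compensation
    tau = np.dot(J.T, np.dot(M_x, ddx_star)) + tau_g
    # filter out the part of tau_null that would cause operational space acceleration
    null_filter = np.eye(M_q.shape[0]) - np.dot(J.T, np.dot(M_x, np.dot(J, M_q_inv)))
    return tau + np.dot(null_filter, tau_null)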

Conclusions

What the null space filter does is introduce a compensatory force term that cancels out the acceleration effects of other controllers in operational space. Here we’ve used it to account for the effects of additional controllers implemented on the system, but actually this can be used to cancel out the effects of any signal that we can model! So if there’s some pesky force interfering with your system but you’ve got its number and know how to characterize it, then you simply add in a null space filtering term to cancel it out, such that your control signal looks like:

\pmb{\tau} = \textbf{J}^T \textbf{M}_\textbf{x} \ddot{\textbf{x}}^* + \pmb{\tau}_g - (\textbf{J}^T\textbf{M}_\textbf{x}\textbf{J}\textbf{M}_\textbf{q}^{-1} \pmb{\tau}_{\textrm{pesky-signal}}),
and you are good to go. So cool.

It’s a lot of math, but once you start to get a feel for it, what’s really awesome is that this is it: we’re describing the whole system, and by working with these equations we can get a super effective controller. Which is pretty cool, especially in relation to other possible controllers.

Alright! We’ve now worked through all the basic theory for operational space control, it is time to get some implementations going.


Robot control part 4: Operational space control

In this post we’ll look at operational space control and how to derive the control equations. I’d like to mention again that these posts have all come about as a result of me reading and working through Samir Menon’s operational space control tutorial, where he works through an implementation example on a revolute-prismatic-prismatic robot arm.

Generalized coordinates vs operational space

The term generalized coordinates refers to a characterization of the system that uniquely defines its configuration. For example, if our robot has 7 degrees of freedom, then there are 7 state variables, such that when all these variables are given we can fully account for the position of the robot. In the previous posts of this series we’ve been describing robotic arms in joint space, and for these systems joint space is an example of generalized coordinates. This means that if we know the angles of all of the joints, we can draw out exactly what position that robot is in. An example of a coordinate system that does not uniquely define the configuration of a robotic arm would be one that describes only the x position of the end-effector.

So generalized coordinates tell us everything we need to know about where the robot is, which is great. The problem with generalized coordinates, though, is that planning trajectories in this space for the tasks that we’re interested in performing tends not to be straightforward. For example, if we have a robotic arm and we want to control the position of its end-effector, it’s not obvious what trajectory each of the arm’s joints should follow through joint space to produce the end-effector movement we want.

The idea behind operational space control is to abstract away from the generalized coordinates of the system and plan a trajectory in a coordinate system that is directly relevant to the task that we wish to perform. Going back to the common end-effector position control situation, we would like to operate our arm in 3D (x,y,z) Cartesian space. In this space, it’s obvious what trajectory to follow to move the end-effector between two positions (most of the time it will just be a straight line in each dimension). So our goal is to build a control system that lets us specify a trajectory in our task space and will transform this signal into generalized coordinates that it can then send out to the system for execution.

Operational space control of simple robot arm

Alright, we’re going to work through an example. The generalized coordinates for this example are going to be joint space, and the operational space is going to be the end-effector Cartesian coordinates relative to a reference frame attached to the base. Recycling the robot from the second post in this series, here’s the set up we’ll be working with:

RR robot arm
Once again, we’re going to need to find the Jacobians for the end-effector of the robot. Fortunately, we’ve already done this:

\textbf{J} = \left[ \begin{array}{cc} -L_0 sin(\theta_0)  - L_1 sin(\theta_0 + \theta_1)  & - L_1 sin(\theta_0 + \theta_1)  \\ L_0 cos(\theta_0)  + L_1 cos(\theta_0 + \theta_1) & L_1 cos(\theta_0 + \theta_1) \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 1 & 1 \end{array} \right]

Great! So now that we have \textbf{J}, we can go ahead and transform forces from end-effector (hand) space to joint space as we discussed in the second post:

\pmb{\tau} = \textbf{J}^T \textbf{F}_{\textbf{x}}.
From here, we rewrite our force term

\textbf{F}_{\textbf{x}} = \textbf{M}_{\textbf{x}} \ddot{\textbf{x}},
where \ddot{\textbf{x}} is end-effector acceleration, and \textbf{M}_\textbf{x} = (\textbf{J} \; \textbf{M}_\textbf{q}^{-1} \textbf{J}^T)^{-1}. Recall that \textbf{M}_{\textbf{x}} is the transformation of the mass matrix from generalized coordinates (joint space) to operational space. I’ll talk about the derivation of this in the next post, but for now take it as is.

With the mass matrix of the system incorporated we now have a control signal that compensates for inertia:

\pmb{\tau} = \textbf{J}^T \textbf{M}_{\textbf{x}} \ddot{\textbf{x}}.
Further, as we also discussed in the third post, we can add to this to account for the effects of gravity in joint space:

\pmb{\tau} = \textbf{J}^T \textbf{M}_{\textbf{x}} \ddot{\textbf{x}} + \pmb{\tau}_g,
and there we are! We have a control signal that converts desired end-effector acceleration to torque commands, and compensates for inertia and gravity. To implement a PD controller in our operational space, all we have to do is replace \ddot{\textbf{x}} with our desired acceleration:

\ddot{\textbf{x}}^* = k_p (\textbf{x}_{\textrm{des}} - \textbf{x}) + k_v (\dot{\textbf{x}}_{\textrm{des}} - \dot{\textbf{x}}),

to give us our final equation for a PD controller that transforms operational space trajectories into joint space torques:

\pmb{\tau} = \textbf{J}^T \textbf{M}_{\textbf{x}} (k_p (\textbf{x}_{\textrm{des}} - \textbf{x}) + k_v (\dot{\textbf{x}}_{\textrm{des}} - \dot{\textbf{x}})) + \pmb{\tau}_g.

Hurray! That was relatively simple. The great thing about this, though, is that it’s the same process for any robot arm! So go out there and start building controllers! Find your robot’s mass matrix and gravity term in generalized coordinates, the Jacobian for the end effector, and you’re in business.
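To make this concrete, here’s a minimal numpy sketch of the controller for the two-joint arm above, keeping only the (x,y) rows of the Jacobian (since those are the variables we’re controlling). The link lengths, gains, mass matrix, and gravity term are made-up placeholders, not values for any particular arm:

import numpy as np

def jacobian_xy(q, L0=1.0, L1=1.0):
    """(x, y) rows of the end-effector Jacobian for the 2-link arm."""
    return np.array([
        [-L0*np.sin(q[0]) - L1*np.sin(q[0]+q[1]), -L1*np.sin(q[0]+q[1])],
        [ L0*np.cos(q[0]) + L1*np.cos(q[0]+q[1]),  L1*np.cos(q[0]+q[1])]])

def osc_pd(q, dq, x_des, dx_des, M_q, tau_g, kp=100.0, kv=20.0, L0=1.0, L1=1.0):
    """tau = J^T Mx (kp (x_des - x) + kv (dx_des - dx)) + tau_g.
    M_q and tau_g are assumed to come from your arm's dynamics model."""
    J = jacobian_xy(q, L0, L1)
    # end-effector position and velocity from forward kinematics
    x = np.array([L0*np.cos(q[0]) + L1*np.cos(q[0]+q[1]),
                  L0*np.sin(q[0]) + L1*np.sin(q[0]+q[1])])
    dx = np.dot(J, dq)
    # operational space mass matrix: Mx = (J Mq^-1 J^T)^-1
    M_x = np.linalg.inv(np.dot(J, np.linalg.solve(M_q, J.T)))
    ddx_star = kp * (x_des - x) + kv * (dx_des - dx)
    return np.dot(J.T, np.dot(M_x, ddx_star)) + tau_g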

Conclusions

So, this feels a little anticlimactic without an actual simulation / implementation of operational space, but don’t worry! As avid readers (haha) will remember, a while back I worked out how to import some very realistic MapleSim arm simulations into Python for use with some Python controllers. This seems like a perfect opportunity to apply that, so that’s next! A good chance to work through writing the controllers for different arms, and also a chance to play with controllers operating in null spaces and the like.

Actual simulation implementations will also be a good chance to play with trying to incorporate those other force terms into the control equation, and to see the results without worrying about breaking an actual robot. On actual robots a lot of the time you leave out anything where your model might be inaccurate, because the last thing you want to do is falsely compensate for some forces and end up injecting energy into your system, making it unstable.

There’s still some more theory to work through though, so I’d like to do that before I get to implementing simulations. One more theory post, and then we’ll get back to code!


Robot control part 3: Accounting for mass and gravity

In the exciting previous post we looked at how to go about generating a Jacobian matrix, which we could use to transform both joint angle velocities into end-effector velocities and desired end-effector forces into joint angle torques. I briefly mentioned right at the end that using just this force transformation to build your control signal was only appropriate for very simple systems that didn’t have to account for things like arm-link mass or gravity. This is easy to see if we look at the equation for acceleration of the robot arm system in joint space:

\ddot{\textbf{q}} = \textbf{M}_\textbf{q}^{-1}(\pmb{\tau} - \textrm{b}(\textbf{q}, \dot{\textbf{q}}) - \pmb{\tau}_g),
where \ddot{\textbf{q}} is joint angle acceleration, \pmb{\tau} is torque, \textrm{b}(\textbf{q}, \dot{\textbf{q}}) is a function describing the Coriolis and centrifugal effects, \pmb{\tau}_g is the effect of gravity in joint space, and \textbf{M}_\textbf{q} is the mass matrix of the system in joint space.

Clearly there are a lot of things going on here, and if we just plug in some control signal for \pmb{\tau} we’re ignoring a lot of dynamics and the system isn’t going to do what we want. In this post we’ll remedy that situation to develop an effective PD controller that operates in joint space. We’re going to do this by calculating the effects of inertia and gravity such that we can cancel them out of the above equation. This is, of course, another step along our path to developing an effective operational space controller!

Accounting for inertia

The fact that systems have mass is a pain in our controller’s side because it introduces inertia into our system, making control of how the system will move at any given point in time more difficult. The heavier something is, the more resistant it is to any change in velocity, and the force required to move a system along a desired trajectory depends on both the object’s mass and its current velocity. We don’t want to have to deal with any of that. We want the controller to account for inertia automatically, such that we can pretend the dynamics are simple when calculating our control signal.

Let’s pull up an example robot arm:

3d 2 link

It’s a crude drawing, but the idea is that you have a two link robot arm operating in the (x,z) plane, with the y axis extending into the picture. The yellow circles represent each link’s center of mass (COM). The position of each link’s COM is defined relative to that link’s reference frame. What we want to do is figure out how much each link’s mass is going to affect the system dynamics, so that we can cancel out its effect. Because the controller we’re developing here operates in joint space, the first step is to calculate the COM positions in terms of joint angles and figure out how the COMs will affect the joint space dynamics. Does this sound familiar? We need to find the Jacobians for each COM.

Let’s say that the COM positions, relative to each link’s coordinate frame, for the links are:

\textrm{com}_0 = \left[ \begin{array}{c} \frac{1}{2}cos(\theta_0) \\ 0 \\ \frac{1}{2}sin(\theta_0) \end{array} \right], \;\;\;\; \textrm{com}_1 = \left[ \begin{array}{c} \frac{1}{4}cos(\theta_1) \\ 0 \\ \frac{1}{4}sin(\theta_1) \end{array} \right]
Following the same procedure as in the previous post on deriving Jacobians, we need to find each COM position in reference to the base coordinate frame. We already have the first link’s COM in base coordinates (since the first link and the base share the same coordinate frame), so we just have to find the position of the second link’s COM, which we can do with our robot arm’s transformation matrix:

\textbf{T} \; \textrm{com}_1 = \left[ \begin{array}{cccc} cos(\theta_0) & 0 & -sin(\theta_0) & L_0 cos(\theta_0) \\ 0 & 1 & 0 & 0 \\ sin(\theta_0) & 0 & cos(\theta_0) & L_0 sin(\theta_0) \\ 0 & 0 & 0 & 1 \end{array} \right] \; \; \left[ \begin{array}{c} \frac{1}{4}cos(\theta_1) \\ 0 \\ \frac{1}{4}sin(\theta_1)  \\ 1 \end{array} \right] = \left[ \begin{array}{c} L_0 cos(\theta_0) + \frac{1}{4}cos(\theta_0 + \theta_1) \\ 0 \\ L_0 sin(\theta_0) + \frac{1}{4} sin(\theta_0 + \theta_1) \\ 1 \end{array} \right].
To see the full computation worked out explicitly please see my previous robot control post.

Now that we have the COM positions in terms of joint angles, we can find the Jacobians for each point through our Jacobian equation:

\textbf{J} = \frac{\partial \textbf{x}}{\partial \textbf{q}}.

Using this for each link gives us:
\textbf{J}_0 = \left[ \begin{array}{cc} -\frac{1}{2}sin(\theta_0)  & 0 \\ 0 & 0 \\ \frac{1}{2} cos(\theta_0)  & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 0 \end{array} \right]
\textbf{J}_1 = \left[ \begin{array}{cc} -L_0sin(\theta_0) -\frac{1}{4}sin(\theta_0 + \theta_1) & -\frac{1}{4} sin(\theta_0 + \theta_1) \\ 0 & 0 \\ L_0 cos(\theta_0) + \frac{1}{4}cos(\theta_0 + \theta_1) & \frac{1}{4} cos(\theta_0 + \theta_1) \\ 0 & 0 \\ 1 & 1 \\ 0 & 0 \end{array} \right].

Kinetic energy

So now that we have the Jacobians for the COM of each link, what’s next? Well, we’re going to use the Jacobians to calculate the kinetic energy that each link generates during motion. Recall that to get the overall energy of the system we can simply sum up the energy introduced from each source. This is a wonderful property of energy that makes our work here a lot easier. So we’re going to do exactly that: calculate each link’s kinetic energy and then sum them up to get the total energy introduced into the system by the mass and configuration of each link. Our goal is to find this and then go ahead and explicitly cancel it out in our control signal.

As we learned in physics class, kinetic energy (KE) is one half of mass times velocity squared. Writing this in vector / matrix notation for the Cartesian coordinate systems we have:

\textrm{KE} = \frac{1}{2} \; \dot{\textbf{x}}^T \textbf{M}_\textbf{x} \; \dot{\textbf{x}},
where \textbf{M}_\textbf{x} is the mass matrix of our system, with the subscript \textbf{x} denoting that it’s defined in Cartesian space, and \dot{\textbf{x}} is the velocity vector.

As a quick reminder, the velocity vector tracks both linear and angular velocities,

\dot{\textbf{x}} = \left[ \begin{array}{c} \dot{x} \\ \dot{y} \\ \dot{z} \\ \omega_x \\ \omega_y \\ \omega_z \end{array} \right]
and the mass matrix is structured

\textbf{M}_\textbf{x} = \left[ \begin{array}{cccccc} m_i & 0 & 0 & 0 & 0 & 0 \\ 0 & m_i & 0 & 0 & 0 & 0 \\ 0 & 0 & m_i & 0 & 0 & 0 \\ 0 & 0 & 0 & I_{xx} & I_{xy} & I_{xz} \\ 0 & 0 & 0 & I_{yx} & I_{yy} & I_{yz} \\ 0 & 0 & 0 & I_{zx} & I_{zy} & I_{zz} \end{array} \right],
where m_i is the mass of the point, and the I_{ij} terms are the moments of inertia, which define the object’s resistance to change in angular velocity about the axes, in the same way that the mass elements define the object’s resistance to change in linear velocity.

For our robot arm, the mass matrix for the COM of each link is defined in terms of that link’s reference frame, in Cartesian coordinates. But what we need is the mass matrix defined in joint angle space, because that’s where our controller operates. Well, maybe we can figure something out using those Jacobians we just calculated.

(start with the equation for KE for the whole system)
\textrm{KE} = \frac{1}{2} \; \Sigma_{i=0}^n ( \dot{\textbf{x}}_i^T \textbf{M}_{\textbf{x}_i} \; \dot{\textbf{x}}_i),
(sub in with \dot{\textbf{x}} = \textbf{J} \; \dot{\textbf{q}})
\textrm{KE} = \frac{1}{2} \; \Sigma_{i=0}^n ( \dot{\textbf{q}}^T \; \textbf{J}_i^T \textbf{M}_{\textbf{x}_i} \textbf{J}_i \; \dot{\textbf{q}}),
(move the \dot{\textbf{q}} terms outside the summation)
\textrm{KE} = \frac{1}{2} \; \dot{\textbf{q}}^T \; \Sigma_{i=0}^n (\textbf{J}_i^T \textbf{M}_{\textbf{x}_i} \textbf{J}_i) \; \dot{\textbf{q}},

Looking at this equation, we see that if we set
\textbf{M}_{\textbf{q}} = \Sigma_{i=0}^n \; \textbf{J}_i^T \textbf{M}_{\textbf{x}_i} \; \textbf{J}_i,
then we get

\textrm{KE} = \frac{1}{2} \; \dot{\textbf{q}}^T \; \textbf{M}_\textbf{q} \; \dot{\textbf{q}},
which is the equation for calculating kinetic energy in joint space. This means that we can transform our mass matrix from one coordinate system to another with a couple of matrix multiplications by the Jacobian. Neat!

Now that we’ve successfully calculated the mass matrix of the system in joint space, we can incorporate it into our control signal and cancel out its effects on the system dynamics! On to the next problem!
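Here’s a rough numpy sketch of that transformation for the two-link arm, using the COM Jacobians derived above. The masses and moments of inertia are made up for the example:

import numpy as np

def com_jacobians(q, L0=1.0):
    """COM Jacobians J_0 and J_1 from the expressions above
    (rows: linear x, y, z, then angular x, y, z)."""
    J0 = np.zeros((6, 2))
    J0[0, 0] = -0.5 * np.sin(q[0])
    J0[2, 0] = 0.5 * np.cos(q[0])
    J0[4, 0] = 1.0                       # link 0 rotates about the y axis
    J1 = np.zeros((6, 2))
    J1[0] = [-L0*np.sin(q[0]) - 0.25*np.sin(q[0]+q[1]), -0.25*np.sin(q[0]+q[1])]
    J1[2] = [ L0*np.cos(q[0]) + 0.25*np.cos(q[0]+q[1]),  0.25*np.cos(q[0]+q[1])]
    J1[4] = [1.0, 1.0]
    return [J0, J1]

def joint_space_mass_matrix(q, masses=(1.0, 1.0), inertias=(0.05, 0.05)):
    """Mq = sum_i J_i^T Mx_i J_i, with placeholder masses and moments
    of inertia (only the y axis moment matters for this planar arm)."""
    M_q = np.zeros((2, 2))
    for J, m, Iyy in zip(com_jacobians(q), masses, inertias):
        M_x = np.diag([m, m, m, 0.0, Iyy, 0.0])   # simplified Cartesian mass matrix
        M_q += np.dot(J.T, np.dot(M_x, J))
    return M_q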

Accounting for gravity

Here we’re going to once again use the wonderful property of energy equivalence, this time to figure out how to compensate for gravity. The work done by gravity is the same in all coordinate systems. Our controller operates by applying torque on joints, so we would like to know the effect of gravity in joint space so that we can cancel it out.

The effect of gravity in joint space isn’t obvious, so we’re going to go with something that is: the work done by gravity in Cartesian space, defined in reference to the base frame of our robot arm. Here, the work done by gravity is simply the summation, over the links, of the distance each link’s center of mass has moved multiplied by the force of gravity. The force of gravity in Cartesian space is very simply the mass of the object multiplied by -9.8m/s^2 along the z axis.

\textbf{F}_{g_i} = m_i \; \textbf{g},
where \textbf{g} = \left[ \begin{array}{c} 0 \\ 0 \\ -9.8 \end{array} \right].
So let’s write out the equation for the rate at which work is done by gravity:

\textbf{W}_g = \Sigma^n_{i=0} (\textbf{F}_{g_i}^T \dot{\textbf{x}}_i),
where the subscript g_i on \textbf{F} denotes the force of gravity on the ith link in Cartesian space. Now, because of energy equivalence, the rate of work is the same if we calculate it in joint space:

\pmb{\tau}_g^T \dot{\textbf{q}} = \Sigma^n_{i=0} (\textbf{F}_{g_i}^T \dot{\textbf{x}}_i),
where the subscript g on \pmb{\tau} denotes the force of gravity in joint space. OK, the force of gravity in joint space, \pmb{\tau}_g, is what we really want, so let’s see what we can do. Well, a first obvious thing is we can replace \dot{\textbf{x}}_i on the right using the Jacobian equation:

(sub in with \dot{\textbf{x}}_i = \textbf{J}_i \; \dot{\textbf{q}})

\pmb{\tau}_g^T \dot{\textbf{q}} = \Sigma^n_{i=0} (\textbf{F}_{g_i}^T \textbf{J}_i \; \dot{\textbf{q}}),
(cancel out the \dot{\textbf{q}} terms on both sides)

\pmb{\tau}_g^T = \Sigma^n_{i=0} (\textbf{F}_{g_i}^T \textbf{J}_i),
(transpose things around)

\pmb{\tau}_g = \Sigma^n_{i=0} (\textbf{J}_i^T \textbf{F}_{g_i}),

This tells us that to find the effect of gravity in joint space, all we have to do is multiply each link’s Jacobian transpose by the force of gravity acting on that link in (x,y,z) space (its mass times \textbf{g}), and sum over the links. So we’re able to use the Jacobian to transform forces once again!
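As a quick numpy sketch, reusing the hypothetical com_jacobians helper from the mass matrix sketch above (with made-up masses):

import numpy as np

def gravity_compensation(q, masses=(1.0, 1.0)):
    """tau_g = sum_i J_i^T F_g_i, with F_g_i the force of gravity on link i."""
    tau_g = np.zeros(2)
    for J, m in zip(com_jacobians(q), masses):
        F_g = np.array([0.0, 0.0, -9.8 * m, 0.0, 0.0, 0.0])  # gravity along the z axis
        tau_g += np.dot(J.T, F_g)
    return tau_g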

Making a PD controller in joint space

We are now able to account for the energy in the system caused by inertia and gravity, great! Let’s use this to build a simple PD controller in joint space. Control should be very straightforward, because once we cancel out the effects of gravity and inertia we can almost pretend that the system behaves linearly. This means that we can also treat control of each of the joints independently, since their movements no longer affect one another. So in our control system we’re actually going to have a PD controller for each joint.

The above-mentioned nonlinearity that’s left in the system dynamics is due to the Coriolis and centrifugal effects. Now, these can be accounted for, but doing so requires a highly accurate model of the moments of inertia. If the moments are incorrect then the controller can actually introduce instability into the system, so it’s better if we just don’t address them.

At any given point in time, the force (torque) in the system can be calculated as

\pmb{\tau} = \textbf{M}_\textbf{q} \; \ddot{\textbf{q}} + \textrm{b}(\textbf{q}, \dot{\textbf{q}}) + \pmb{\tau}_g,
where \ddot{\textbf{q}} is the joint acceleration, and the function \textrm{b}(\textbf{q}, \dot{\textbf{q}}) represents the unmodeled Coriolis and centrifugal dynamics of the system. Rewriting in terms of acceleration:

\ddot{\textbf{q}} = \textbf{M}_\textbf{q}^{-1}(\pmb{\tau} - \textrm{b}(\textbf{q}, \dot{\textbf{q}}) - \pmb{\tau}_g).
Ideally, we would have

\pmb{\tau} = (\textbf{M}_\textbf{q} \; \ddot{\textbf{q}}^* + \textrm{b}(\textbf{q}, \dot{\textbf{q}}) + \pmb{\tau}_g),
where \ddot{\textbf{q}}^* is the desired acceleration of the system. Plugging this in to our equation for acceleration then would give

\ddot{\textbf{q}} = \textbf{M}_\textbf{q}^{-1}((\textbf{M}_\textbf{q} \; \ddot{\textbf{q}}^* + \textrm{b}(\textbf{q}, \dot{\textbf{q}}) + \pmb{\tau}_g) - \textrm{b}(\textbf{q}, \dot{\textbf{q}}) - \pmb{\tau}_g)
\ddot{\textbf{q}} = \textbf{M}_\textbf{q}^{-1} \textbf{M}_\textbf{q} \; \ddot{\textbf{q}}^*
\ddot{\textbf{q}} = \ddot{\textbf{q}}^*,
which would be great. As you may have guessed, however, we can’t account for unmodeled dynamics \textrm{b}(\textbf{q}, \dot{\textbf{q}}), so we can’t quite do this.

We can, however, account for gravity and mass. So our control signal for torque will then be:

\pmb{\tau} = (\textbf{M}_\textbf{q} \; \ddot{\textbf{q}}^* + \pmb{\tau}_g).

Now all we have to do is fill in our desired acceleration using the standard PD control formula:

\ddot{\textbf{q}}^* = k_p \; (\textbf{q}_{\textrm{des}} - \textbf{q}) + k_v \; (\dot{\textbf{q}}_{\textrm{des}} - \dot{\textbf{q}}),
where k_p and k_v are our gain values.

And that’s it! With our controller using the control signal

\pmb{\tau} = (\textbf{M}_\textbf{q} \; (k_p \; (\textbf{q}_{\textrm{des}} - \textbf{q}) + k_v \; (\dot{\textbf{q}}_{\textrm{des}} - \dot{\textbf{q}})) + \pmb{\tau}_g)
we’ve successfully built an effective PD controller in joint space!
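And here’s the whole thing as a small numpy sketch, reusing the hypothetical joint_space_mass_matrix and gravity_compensation helpers from the sketches above (the gains are arbitrary):

import numpy as np

def joint_space_pd(q, dq, q_des, dq_des, kp=50.0, kv=10.0):
    """tau = Mq (kp (q_des - q) + kv (dq_des - dq)) + tau_g."""
    M_q = joint_space_mass_matrix(q)      # hypothetical helper sketched above
    tau_g = gravity_compensation(q)       # hypothetical helper sketched above
    ddq_star = kp * (q_des - q) + kv * (dq_des - dq)
    return np.dot(M_q, ddq_star) + tau_g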

Conclusions

Here we looked at building a PD controller that operates in the joint space of a robotic arm that can cancel out the effects of inertia and gravity. By cancelling out the effects of inertia, we can treat control of each of the joints independently, effectively orthogonalizing their control. This makes PD control super easy, we just set up a simple controller for each joint. Also a neat thing is that all of the required calculations can be performed with algorithms of linear complexity, so it’s not a problem to do all of this super fast.

One of the finer points was that we ignored the Coriolis and centrifugal effects on the robot’s dynamics. This is because the moments of inertia in the mass matrix model are notoriously hard to capture accurately on actual robots. Often you go based off of a CAD model of your robot and then have to do some fine-tuning by hand. So they will be unaccounted for in our control signal, but most of the time, as long as you have a very short feedback loop, you’ll be fine.

I am really enjoying working through this, as things build on each other so well here and we’re starting to be able to do some really interesting things with the relatively simple forward transformation matrices and Jacobians that we learned how to build in the previous posts. This was for a very simple robot, but excitingly the next step after this is moving on to operational space control! At last. From there, we’ll go on to look at more complex robotic situations where things like configuration redundancy are introduced and it’s not quite so straightforward.


Robot control part 2: Jacobians, velocity, and force

Jacobian matrices are a super useful tool, and heavily used throughout robotics and control theory. Basically, a Jacobian defines the dynamic relationship between two different representations of a system. For example, if we have a 2-link robotic arm, there are two obvious ways to describe its current position: 1) the end-effector position and orientation (which we will denote \textbf{x}), and 2) as the set of joint angles (which we will denote \textbf{q}). The Jacobian for this system relates how movement of the elements of \textbf{q} causes movement of the elements of \textbf{x}. You can think of a Jacobian as a transform matrix for velocity.

Formally, a Jacobian is a matrix of partial derivatives: \textbf{J} = \frac{\partial \textbf{x}}{\partial \textbf{q}}.
With a bit of manipulation we can get a neat result:

\textbf{J} = \frac{\partial \textbf{x}}{\partial t} \; \frac{\partial t}{\partial \textbf{q}} \rightarrow \frac{\partial \textbf{x}}{\partial t} = \textbf{J} \frac{\partial \textbf{q}}{\partial t}, or
\dot{\textbf{x}} = \textbf{J} \; \dot{\textbf{q}}, where \dot{\textbf{x}} and \dot{\textbf{q}} represent the time derivatives of \textbf{x} and \textbf{q}.
This tells us that the end-effector velocity is equal to the Jacobian, \textbf{J}, multiplied by the joint angle velocity.

Why is this important? Well, this goes back to our desire to control in operational (or task) space. We’re interested in planning a trajectory in a different space than the one that we can control directly. In our robot arm, control is effected through a set of motors that apply torque to the joint angles, BUT what we’d like is to plan our trajectory in terms of end-effector position (and possibly orientation), generating control signals in terms of forces to apply in (x,y,z) space. Jacobians give us a direct way to calculate what the control signal is in the space that we control (torques), given a control signal in one we don’t (end-effector forces). The above equivalence is a first step along the path to operational space control. As just mentioned, though, what we’re really interested in isn’t relating velocities, but forces. How can we do this?

Energy equivalence and Jacobians
Energy equivalence is a wonderful property of our universe where the amount of energy expended is the same no matter how the system in question is being represented. This is important to us if we have, say, two different ways to characterize the state of the system.

Let’s go back to our robot arm. Say the arm operates on the horizontal plane, and we characterize the system state in two ways, through joint angle positions, \textbf{q} = [\theta_0, \theta_1], and end-effector position (in terms of (x_0, y_0, z_0)), \textbf{x} = [x, y, 0]. Our arm looks like this:

RR robot arm
Work is the application of force over a distance, i.e. \textbf{W} = \textbf{F}^T \textbf{d}. Power is the rate at which work is performed, \textbf{P} = \frac{\textrm{d}\textbf{W}}{\textrm{d}t}. In our system, the rate of work at a given moment by applying a force to the end-effector is \textbf{P} = \textbf{F}^T \dot{\textbf{x}}. Because of energy equivalence, we know that work is being applied at the same rate regardless of the characterization of the system, so: \textbf{P} = \pmb{\tau}^T \dot{\textbf{q}}. By setting the two equations equal to each other and plugging in the Jacobian equation above (this one: \dot{\textbf{x}} = \textbf{J} \; \dot{\textbf{q}}) we can get a neat result:

(set them equal)
\textbf{P} = \textbf{F}^T \dot{\textbf{x}} = \pmb{\tau}^T \dot{\textbf{q}}
(sub in for \dot{\textbf{x}})
\textbf{F}^T \textbf{J} \; \dot{\textbf{q}} = \pmb{\tau}^T \dot{\textbf{q}}
(drop out \dot{\textbf{q}})
\textbf{F}^T \textbf{J} = \pmb{\tau}^T
(take the transpose of both sides, using (\textbf{A}\textbf{B})^T = \textbf{B}^T\textbf{A}^T)
\pmb{\tau} = \textbf{J}^T \textbf{F}

So not only does our Jacobian relate velocity from one characterization to the other, it can also be used to tell us, given a desired set of forces in end-effector space, \textbf{F}, what the corresponding torques, \pmb{\tau}, are in joint angle space! Daaang.

To recap: Jacobians are not only instantaneous transform matrices for velocity (\dot{\textbf{x}} = \textbf{J} \; \dot{\textbf{q}}), but also for force (\pmb{\tau} = \textbf{J}^T \textbf{F})! Well then, if Jacobians are so great, we should probably take a closer look at how to go about deriving them.

Building the Jacobian

First, we need to define the relationship between the (x,y,z) position of the end-effector and the robot’s joint angles, (\theta_0, \theta_1). However will we do it? Well, we know the distances from the shoulder to the elbow, and elbow to the wrist, as well as the joint angles, and we’re interested in finding out where the end-effector is relative to a base coordinate frame…OH MAYBE we should use those forward transformation matrices from the previous post. Let’s do it!

The forward transformation matrix

As you recall, transformation matrices allow us to determine where a given point is in a desired coordinate set when it’s defined in a different frame of reference. In this case, we know the position of the end-effector relative to the second joint of the robot arm, but we’re interested in where it is relative to the base reference frame, which is the first joint reference frame in this case. This means that we just need one transformation matrix, from the reference frame attached to the second joint back to the base.

The rotation part of this matrix is easy, just the same as in the previous post:

\textbf{T}_{rot} = \left[ \begin{array}{ccc} cos(\theta_0) & -sin(\theta_0) & 0 \\ sin(\theta_0) & cos(\theta_0) & 0 \\ 0 & 0 & 1 \end{array} \right]
The translation part of our transformation matrices is a little different than before though, because now reference frame 1 is changing as a function of the previous joint’s angle. So, from trig, we know that if we have a vector of length r and an angle \theta, then the x position of the end point is r\;cos(\theta) and the y position is r\;sin(\theta). As a reminder, our arm is operating in the (x,y) plane, so the z position will always be 0. This means that the translation part of our transformation matrix is going to look like:

\textbf{T}_{pos} = \left[ \begin{array}{c} L_0 cos(\theta_0) \\ L_0 sin(\theta_0) \\ 0 \end{array} \right]
Giving us the forward transformation matrix:

\textbf{T} = \left[ \begin{array}{cc} \textbf{T}_{rot} & \textbf{T}_{pos} \\ \textbf{0} & \textbf{1} \end{array} \right] = \left[ \begin{array}{cccc} cos(\theta_0) & -sin(\theta_0) & 0 & L_0 cos(\theta_0) \\ sin(\theta_0) & cos(\theta_0) & 0 & L_0 sin(\theta_0)\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right]
Which transforms a point from reference frame 1 (elbow joint) to reference frame 0 (shoulder joint / base). The point of interest here is the end-effector, which, similar to the position of reference frame 1, is defined as a function of joint angle and the length of this link of the arm:

\textbf{x} = \left[ \begin{array}{c} L_1 cos(\theta_1) \\ L_1 sin(\theta_1) \\ 0 \\ 1 \end{array} \right].

To find the position of our end-effector in terms of our origin reference frame we multiply our point \textbf{x} by the transformation \textbf{T}, giving us:

\textbf{T} \; \textbf{x} = \left[ \begin{array}{cccc} cos(\theta_0) & -sin(\theta_0) & 0 & L_0 cos(\theta_0) \\ sin(\theta_0) & cos(\theta_0) & 0 & L_0 sin(\theta_0)\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right] \; \left[ \begin{array}{c} L_1 cos(\theta_1) \\ L_1 sin(\theta_1) \\ 0 \\ 1 \end{array} \right] = \left[ \begin{array}{c} L_1 cos(\theta_0) cos(\theta_1) - L_1 sin(\theta_0) sin(\theta_1) + L_0 cos(\theta_0) \\ L_1 sin(\theta_0) cos(\theta_1) + L_1 cos(\theta_0) sin(\theta_1) + L_0 sin(\theta_0) \\ 0 \\ 1 \end{array} \right]
where by pulling out the L_1 term and using the trig identities

cos(\alpha)cos(\beta) - sin(\alpha)sin(\beta) = cos(\alpha + \beta)
and

sin(\alpha)cos(\beta) + cos(\alpha)sin(\beta) = sin(\alpha + \beta)
we get the position of our end-effector to be

\left[ \begin{array}{c} L_0 cos(\theta_0) + L_1 cos(\theta_0 + \theta_1) \\ L_0 sin(\theta_0) + L_1 sin(\theta_0 + \theta_1) \\ 0 \end{array} \right].
Super! We have defined the position of the end-effector in terms of joint angles. But as I mentioned right at the beginning, we’re interested in both the position of the end-effector and its orientation, so we need to also define how it’s rotated relative to the base frame.

Accounting for orientation

Luckily, this is super straightforward, especially when there isn’t anything complicated like spherical joints involved and it’s just revolute and prismatic joints that we’re dealing with. With prismatic joints, identifying the rotation caused is easy: it’s 0. They’re linear, sliding along a single axis, so no rotation.
With revolute joints, the rotation of the end-effector in each axis is simply a sum of rotations of each joint in their respective axes of rotation. In our case, the joints are rotating around the z axis, so the rotation part of our end-effector state [\omega_x, \omega_y, \omega_z]^T = [0, 0, \theta_0 + \theta_1]^T. Easy! If the first joint had been rotating in a different plane, say in the (x, z) plane around the y axis instead, then we would have [\omega_x, \omega_y, \omega_z]^T = [0, \theta_0, \theta_1]^T.

Partial differentiation

We are well on our way to finding the Jacobian for this system. What we need to do now is take the partial differential of the above vectors of equations for the system position and orientation with respect to the elements of \textbf{q}. We’ll break up the Jacobian into two parts to keep things simpler, J_v and J_\omega, for the linear and angular velocity, respectively, of the end-effector.

The linear velocity part of our Jacobian looks like:
\textbf{J}_v = \left[ \begin{array}{cc} \frac{\partial x}{\partial\theta_0} & \frac{\partial x}{\partial \theta_1} \\ \frac{\partial y}{\partial \theta_0} & \frac{\partial y}{\partial \theta_1} \\ \frac{\partial z}{\partial\theta_0} & \frac{\partial z}{\partial \theta_1} \end{array} \right] = \left[ \begin{array}{cc} -L_0 sin(\theta_0) - L_1 sin(\theta_0 + \theta_1) & - L_1 sin(\theta_0 + \theta_1) \\ L_0 cos(\theta_0)  + L_1 cos(\theta_0 + \theta_1) & L_1 cos(\theta_0 + \theta_1) \\ 0 & 0 \end{array} \right].

The angular velocity part of our Jacobian looks like:
\textbf{J}_\omega = \left[ \begin{array}{cc} \frac{\partial \omega_x}{\partial \theta_0} & \frac{\partial \omega_x}{\partial \theta_1} \\ \frac{\partial \omega_y}{\partial \theta_0} & \frac{\partial \omega_y}{\partial \theta_1} \\ \frac{\partial \omega_z}{\partial \theta_0} & \frac{\partial \omega_z}{\partial \theta_1} \end{array} \right] = \left[ \begin{array}{cc} 0 & 0 \\ 0 & 0 \\ 1 & 1 \end{array} \right].

To get our full Jacobian, we simply stack our two parts:

\textbf{J} = \left[ \begin{array}{c} \textbf{J}_v \\ \textbf{J}_\omega \end{array} \right].

And that’s that, Jacobian built!
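For reference, here’s that Jacobian written out as a small numpy function (the link lengths are placeholders):

import numpy as np

def full_jacobian(q, L0=1.0, L1=1.0):
    """Full 6x2 end-effector Jacobian for the two-joint planar arm:
    rows are linear x, y, z, then angular x, y, z."""
    s0, c0 = np.sin(q[0]), np.cos(q[0])
    s01, c01 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    J_v = np.array([[-L0*s0 - L1*s01, -L1*s01],
                    [ L0*c0 + L1*c01,  L1*c01],
                    [0.0, 0.0]])
    J_w = np.array([[0.0, 0.0],
                    [0.0, 0.0],
                    [1.0, 1.0]])
    return np.vstack([J_v, J_w])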

Analyzing the Jacobian

Now that we have our Jacobian, we should take a moment to look at it and see what it’s telling us about the relationship between \dot{\textbf{x}} and \dot{\textbf{q}}.

For example, you may have noticed that there is a large chunk of zeros in the middle of the Jacobian we just defined above, along the row corresponding to linear velocity along the z axis, and the rows corresponding to the angular velocity around the x and y axes. What this means is that our z position and our rotations \omega_x and \omega_y are not controllable. Why is that, you say? It goes back to our first Jacobian equation:

\dot{\textbf{x}} = \textbf{J}\;\dot{\textbf{q}}
No matter what joint angle velocities are occurring, it’s impossible to affect those \textbf{x} elements through \dot{\textbf{q}}, because the Jacobian nullifies them. Because of this, these rows of the Jacobian are referred to as its null space.

What this means for our system is that because we can’t control these variables, we won’t even worry about specifying them in our end-effector force vector. We’ll drop these rows from both \textbf{F} and \textbf{J}.

Looking back at the variables we can affect, we notice that given any two of x, y, \omega_z we can calculate the third, because the robot only has 2 degrees of freedom (the shoulder and elbow joints). This means that we can actually only control two of the end-effector variables. For our purposes here it’s more useful to control the (x,y) coordinates, so we’ll drop \omega_z from our force vector and the Jacobian as well.

So now, our force vector representing the controllable end-effector forces is:

\textbf{F} = \left[ \begin{array}{c}f_x \\ f_y\end{array} \right]
and a Jacobian that looks like:

\textbf{J} = \left[ \begin{array}{cc} -L_0 sin(\theta_0) - L_1 sin(\theta_0 + \theta_1) & - L_1 sin(\theta_0 + \theta_1) \\ L_0 cos(\theta_0) + L_1 cos(\theta_0 + \theta_1) & L_1 cos(\theta_0 + \theta_1) \end{array} \right].

Note that we could have chosen to specify f_{\omega_z}, in which case (assuming we also chose to control force along the x axis) we would have:

\textbf{F} = \left[ \begin{array}{c} f_x \\ f_{\omega_z}\end{array} \right]
\textbf{J} = \left[ \begin{array}{cc} -L_0 sin(\theta_0) - L_1 sin(\theta_0 + \theta_1) & - L_1 sin(\theta_0 + \theta_1) \\ 1 & 1 \end{array} \right].

But we’ll stick with control of the x and y forces instead, as it’s a little more straightforward.

Using the Jacobian

With our Jacobian, we can find out what different joint angle velocities will cause in terms of the end-effector linear and angular velocities, and we can also transform desired (x,y) forces into (\theta_0, \theta_1) torques. Let’s do a couple of examples. Note that in the former case we’ll be using the full Jacobian, while in the latter case we can use the simplified Jacobian specified just above.

Example 1

Let’s say that we know the joint angle velocities of the arm, and we want to make sure that the resulting (x,y,z) velocity is less than a threshold level. The arm configuration is [\theta_0 = \frac{\pi}{4}, \theta_1 = \frac{3\pi}{8}], the joint angle velocities are [\dot{\theta}_0 = \frac{\pi}{10}\frac{rad}{s}, \dot{\theta}_1 = \frac{\pi}{10}\frac{rad}{s}], and the arm lengths (for simplicity) are each L = 1.

To calculate the resulting (x,y) velocities, we simply plug in all the values at the current time (all the positions and velocities) and then multiply through!

\dot{\textbf{x}} = \textbf{J} \; \dot{\textbf{q}}
\dot{\textbf{x}} = \left[ \begin{array}{cc} - sin(\frac{\pi}{4}) - sin(\frac{\pi}{4} + \frac{3\pi}{8}) & - sin(\frac{\pi}{4} + \frac{3\pi}{8})  \\ cos(\frac{\pi}{4})  + cos(\frac{\pi}{4} + \frac{3\pi}{8}) & cos(\frac{\pi}{4} + \frac{3\pi}{8}) \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 1 & 1 \end{array} \right] \; \left[ \begin{array}{c} \frac{\pi}{10} \\ \frac{\pi}{10} \end{array} \right] = \left[ \begin{array}{c} -0.8026 \\ -0.01830 \\ 0 \\ 0 \\ 0 \\ \frac{\pi}{5} \end{array} \right]

This tells us that the end-effector’s linear velocity is (-0.8026, -0.01830, 0)^T, and its angular velocity is (0, 0, \frac{\pi}{5})^T. The result of these joint angle velocities in the current system state is a translation of the end-effector to the left along the x_0 axis, a slight bit of movement downwards along the y_0 axis, and a rotation around the z axis. We could then check to make sure that this is acceptable and adjust accordingly.
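If you’d like to check those numbers yourself, it’s a couple of lines with the hypothetical full_jacobian function from the sketch above:

import numpy as np

q = np.array([np.pi/4, 3*np.pi/8])
dq = np.array([np.pi/10, np.pi/10])
dx = np.dot(full_jacobian(q), dq)
print(dx)   # roughly [-0.8026, -0.0183, 0, 0, 0, 0.6283]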

Example 2

This example is the more likely case, and indeed what we’re going to be using Jacobians for as we carry forward and implement operational space control. Imagine that we have the same system configuration as above, and there is a trajectory that we’ve planned in (x,y) space. From this trajectory we calculate the desired (x,y) force to apply to the end-effector. As mentioned above, we decided to only control the forces along the x and y axes, and so we can use our simplified Jacobian. Let \textbf{F} = [1, 1]^T, so we’d like to apply one Newton along each of the x and y axes. To calculate what this is going to translate into in terms of joint torques:

\pmb{\tau} = \textbf{J}^T \textbf{F} = \left[ \begin{array}{cc} -sin(\frac{\pi}{4}) -sin(\frac{\pi}{4} + \frac{3\pi}{8}) & cos(\frac{\pi}{4}) + cos(\frac{\pi}{4} + \frac{3\pi}{8}) \\ -sin(\frac{\pi}{4} + \frac{3\pi}{8}) & cos(\frac{\pi}{4} + \frac{3\pi}{8}) \end{array} \right] \left[ \begin{array}{c} 1 \\ 1 \end{array} \right]

Calculating this out we find that from our current configuration, to realize the specified end-effector force we need to apply a torque of \pmb{\tau} = [-1.3066, -1.3066]^T.
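And the same sort of check for the force transform, keeping only the controllable (x,y) rows of the hypothetical full_jacobian from above:

import numpy as np

q = np.array([np.pi/4, 3*np.pi/8])
J = full_jacobian(q)[:2]            # just the x and y rows
F = np.array([1.0, 1.0])
print(np.dot(J.T, F))               # roughly [-1.3066, -1.3066]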

And now we are able to transform end-effector forces into torques, and joint angle velocities into end-effector velocities! What a wonderful, wonderful tool to have at our disposal! Hurrah for Jacobians!

Conclusions

In this post I’ve gone through how to use Jacobians to relate the movement of joint angle and end-effector system state characterizations, but Jacobians can be used to relate any two characterizations. All you need to do is define one in terms of the other and do some partial differentiation. The above example scenarios were of course very simple, and didn’t worry about compensating for anything like gravity. But don’t worry, that’s exactly what we’re going to look at in our exciting next chapter!

Something that I found interesting to consider is the need for the orientation of the end-effector and finding the angular velocities. Often in simpler robot arms we’re only interested in the position of the end-effector, so it’s easy to write off orientation. But if we had a situation where there was a gripper attached to the end-effector, then suddenly the orientation becomes very important, often determining whether or not an object can be picked up at all.

And finally, if you’re interested in reading more about all this, I recommend checking out this book chapter available online, it’s a great resource.
