I’m doing a tour of learning down at the Brains in Silicon lab run by Dr. Kwabena Boahen for the next month or so, working on learning a bunch about building and controlling robots, and one of the super interesting things that I’m reading about is effective methods for force control of robots. I’ve mentioned operational space (or task space) control of robotic systems before, in the context of learning the inverse kinematic transformation, but down here the approach is to analytically derive the dynamics of the system (as opposed to learning them) and use these to explicitly calculate control signals that move the system in task space while taking advantage of its passive dynamics.

In case you don’t remember what those words mean, operational (or task) space refers to a different configuration space than the basic / default robot configuration space. For example, if we have a robot arm with three degrees of freedom (DOF) that looks something like this:

where the joints rotate and are referred to as $q_0$, $q_1$, and $q_2$, respectively, then the most obvious representation for the system state, $\textbf{q}$, is the joint-angle space $\textbf{q} = [q_0, q_1, q_2]$. So that’s great, but often when we’re using this robot we’re going to be more interested in controlling the position of the end-effector than the angles of the joints. We would like to *operate* to complete our *task* in terms of $(x, y)$, where $(x, y)$ are the Cartesian coordinates of our hand in 2D space. So then $(x, y)$ is our operational (or task) space.

I also mentioned the phrase ‘passive dynamics’. It’s true, go back and check if you don’t believe me. Passive dynamics refer to how the system moves from a given initial condition when no control signal is applied. For example, passive dynamics incorporate the effects of gravity on the system: if we put our arm up in the air and remove any control signal, it falls down by our side. The reason we’re interested in passive dynamics is that they’re movement for free. So if my goal is to move my arm to be down by my side, I want to take advantage of the fact that the system naturally moves there on its own simply by removing my control signal, rather than using a bunch of energy to force my arm to move down.

There are a bunch of steps leading up to building controllers that can plan trajectories in a task space. As a first step, it’s important that we characterize the relationship of each of the reference coordinate frames of the robot’s links to the origin, or base, of the robot. The characterization of these relationships is done using what are called forward transformation matrices, and they will be the focus of the remainder of this post.

**Forward transformation matrices in 2D**

As I mentioned mere sentences ago, forward transformation matrices capture the relationship between the reference frames of different links of the robot. A good question might be: ‘what is a reference frame?’ A reference frame is basically the point of view of each of the robotic links; if you were an arm joint yourself, it’s what you would consider ‘looking forward’. To get a feel for these frames, and for why it’s necessary to be able to move between them, let’s look at the reference frames of each of the links of the robot drawn above:

We know that our end-effector point $\textbf{p}$ is a length $d_2$ away from reference frame 2’s origin along its x-axis. Similarly, we know that reference frame 2’s origin is a length $d_1$ away from reference frame 1’s origin along reference frame 1’s x-axis, and that reference frame 1’s origin is a length $d_0$ away from the origin along its y-axis (the lengths here are read off of the figure). The question is, then: in terms of the origin’s coordinate frame, where is our point $\textbf{p}$?

In the configuration pictured above it’s pretty straightforward to figure out: just add up the offsets along each axis. So you’re feeling pretty cocky, this stuff is easy. OK hotshot, what about NOW:

It’s not as straightforward once rotations start being introduced. So what we’re looking for is a method of automatically accounting for the rotations and translations of points between different coordinate frames, such that if we know the current angles of the robot joints and the relative positions of the coordinate frames we can quickly calculate the position of the point of interest in terms of the origin coordinate frame.

**Accounting for rotation**

So let’s take a quick look at rotating axes. Here’s a picture:

The above image displays two frames of reference with the same origin, rotated from each other by $\theta$ degrees. Imagine a point $\textbf{p} = (x_1, y_1)$ specified in reference frame 1. To find its coordinates in terms of the origin reference frame, $(x_0, y_0)$, it is necessary to find out the contributions of the $x_1$ and $y_1$ axes to the $x_0$ and $y_0$ axes. The contribution to the $x_0$ axis from $x_1$ is calculated

$x_1 \cos(\theta),$

which takes the position along the $x_1$ axis and maps it onto the $x_0$ axis.

To calculate how $y_1$ affects the position along the $x_0$ axis we calculate

$y_1 \cos(\theta + 90^\circ),$

which is equivalent to measuring the angle between the $x_0$ axis and the $y_1$ axis, as shown above.

This term can be rewritten as $-y_1 \sin(\theta)$, because $\cos$ and $\sin$ are phase shifted 90 degrees from one another. This leads to the total contribution to the $x_0$ axis from a point defined in the $(x_1, y_1)$ axes being

$x_0 = x_1 \cos(\theta) - y_1 \sin(\theta).$

Similarly, for the $y_0$ axis contributions we have

$y_0 = x_1 \sin(\theta) + y_1 \cos(\theta).$

Rewriting the above equations in matrix form gives:

$\left[ \begin{array}{c} x_0 \\ y_0 \end{array} \right] = \left[ \begin{array}{cc} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{array} \right] \left[ \begin{array}{c} x_1 \\ y_1 \end{array} \right] = \textbf{R}^1_0 \left[ \begin{array}{c} x_1 \\ y_1 \end{array} \right],$

where $\textbf{R}^1_0$ is called a rotation matrix.

The notation used here for these matrices is that the reference frame number being rotated *from* is denoted in the superscript before, and the reference frame being rotated *into* is in the subscript: $\textbf{R}^0_1$ denotes a rotation from reference frame 0 into reference frame 1.

To find the location of a point $^1\textbf{p}$ defined in reference frame 1 in reference frame 0 coordinates, we then multiply by the rotation matrix: $^0\textbf{p} = \textbf{R}^1_0 \; {}^1\textbf{p}$.
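
To make the rotation concrete, here’s a minimal sketch in Python; the NumPy implementation and the 90-degree example point are my own, not from the original post:

```python
import numpy as np

def rot_matrix(theta):
    """Rotation matrix R^1_0: maps coordinates in frame 1 into frame 0,
    where frame 1 is rotated theta radians relative to frame 0."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# A point one unit along frame 1's x-axis, with the frames rotated
# 90 degrees apart, should land on frame 0's y-axis.
p1 = np.array([1.0, 0.0])
p0 = rot_matrix(np.pi / 2).dot(p1)
print(np.round(p0, 5))  # [0. 1.]
```

With $\theta = 0$ the matrix reduces to the identity, as you’d expect when the two frames are aligned.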

**Accounting for translation**

Alrighty, rotation is great, but as you may have noticed, our robot’s joints are not all right on top of each other. The second part of the transformation is translation, so it is also necessary to account for the distances between reference frame origins.

Let’s look at the reference frames 1 and 0 shown in the above figure, with point $\textbf{p} = (x_1, y_1)$ defined in reference frame 1. Reference frame 1 is rotated 45 degrees from reference frame 0, and its origin is located at a point $(t_x, t_y)$ in reference frame 0 (read off of the figure). To account for this translation and rotation, a new matrix will be created that includes both. It is generated by appending the distances, denoted $(t_x, t_y)$, to the rotation matrix, along with a row of zeros ending in a 1, to get a transformation matrix:

$\textbf{T}^1_0 = \left[ \begin{array}{ccc} \cos(45^\circ) & -\sin(45^\circ) & t_x \\ \sin(45^\circ) & \cos(45^\circ) & t_y \\ 0 & 0 & 1 \end{array} \right].$

To make the matrix-vector multiplications work out, a homogeneous representation must be used, which adds an extra row with a 1 to the end of the vector to give

$^1\textbf{p} = \left[ \begin{array}{c} x_1 \\ y_1 \\ 1 \end{array} \right].$

When the position vector $^1\textbf{p}$ is multiplied by the transformation matrix $\textbf{T}^1_0$, the result lands right where visual inspection of the figure suggests it should:

$^0\textbf{p} = \textbf{T}^1_0 \; {}^1\textbf{p}.$

To get the coordinates of the point in reference frame 0, now simply take the first two elements of the resulting vector.
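
The same bookkeeping for a combined rotation and translation can be sketched in Python; the helper name and the example values (45 degrees, origin at (1, 1), point (1, 0)) are mine, chosen for illustration:

```python
import numpy as np

def transform_matrix(theta, tx, ty):
    """Homogeneous 2D transform T^1_0: rotation by theta plus the
    translation (tx, ty) of frame 1's origin in frame 0 coordinates."""
    return np.array([[np.cos(theta), -np.sin(theta), tx],
                     [np.sin(theta),  np.cos(theta), ty],
                     [0.0,            0.0,           1.0]])

# Frame 1 rotated 45 degrees with its origin at (1, 1) in frame 0.
T10 = transform_matrix(np.pi / 4, 1.0, 1.0)

# A point one unit along frame 1's x-axis, in homogeneous form:
p1 = np.array([1.0, 0.0, 1.0])
p0 = T10.dot(p1)
print(p0[:2])  # roughly [1.707, 1.707]
```

The final element of the result stays 1, since the last row of the matrix only exists to make the multiplication work out.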

**Applying multiple transformations**

We can also string these things together! What if we have a 3-link, planar (i.e. rotating on the $(x, y)$ plane) robot arm? A setup like this:

We know that our end-effector is at point $^2\textbf{p}$ in reference frame 2, which is at an 80 degree angle from reference frame 1, and whose origin is offset from reference frame 1’s origin by $(t_x, t_y)$ (read off of the figure). That gives us a transformation matrix

$\textbf{T}^2_1 = \left[ \begin{array}{ccc} \cos(80^\circ) & -\sin(80^\circ) & t_x \\ \sin(80^\circ) & \cos(80^\circ) & t_y \\ 0 & 0 & 1 \end{array} \right].$

To get our point in terms of reference frame 0, we account for the transform from reference frame 1 into reference frame 2 with $\textbf{T}^2_1$, and then account for the transform from reference frame 0 into reference frame 1 with our previously defined transformation matrix $\textbf{T}^1_0$:

$^0\textbf{p} = \textbf{T}^1_0 \; \textbf{T}^2_1 \; {}^2\textbf{p}.$

So let’s give it a shot! Eyeballing the figure gives a rough guess of where the answer should land, and multiplying through the transformation matrices gives the exact position.

And it’s a good thing we don’t just eyeball it! Accurate drawings might have helped, but the math gives us an exact answer. Super!

And one more note: if we’re performing this computation often, then instead of performing two matrix multiplications every time we can work out

$\textbf{T}^2_0 = \textbf{T}^1_0 \; \textbf{T}^2_1,$

and simply multiply our point in reference frame 2 by this new transformation matrix to calculate the coordinates in reference frame 0.
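
Chaining transforms is just matrix multiplication, which is easy to check numerically. This sketch (the angles and offsets are made up for illustration) confirms that precomputing the combined matrix gives the same answer as applying the transforms one at a time:

```python
import numpy as np

def transform_matrix(theta, tx, ty):
    # Homogeneous 2D transform: rotation by theta, translation (tx, ty).
    return np.array([[np.cos(theta), -np.sin(theta), tx],
                     [np.sin(theta),  np.cos(theta), ty],
                     [0.0,            0.0,           1.0]])

T10 = transform_matrix(np.pi / 4, 1.0, 1.0)         # frame 1 -> frame 0
T21 = transform_matrix(80 * np.pi / 180, 1.0, 0.0)  # frame 2 -> frame 1
T20 = T10.dot(T21)  # precomputed combined transform, frame 2 -> frame 0

p2 = np.array([1.0, 0.0, 1.0])  # a point in frame 2 (homogeneous form)

# Applying the transforms one at a time and using the combined
# matrix give the same frame 0 coordinates:
assert np.allclose(T10.dot(T21.dot(p2)), T20.dot(p2))
print(T20.dot(p2)[:2])
```

Note the order: the matrix closest to the point takes us out of the innermost frame first, exactly as in the equation above.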

**Forward transform matrices in 3D**

The example here is taken from Samir Menon’s RPP control tutorial.

It turns out it’s trivial to add in the $z$ dimension and start accounting for 3D transformations. Let’s say we have a standard revolute-prismatic-prismatic (RPP) robot, which looks exactly like this, or roughly like this:

where the base rotates around the $z$ axis, and the distance from reference frame 0 to reference frame 1 is 1 unit, also along the $z$ axis. The rotation matrix from reference frame 0 to reference frame 1 is:

$\textbf{R}^1_0 = \left[ \begin{array}{ccc} \cos(q_0) & -\sin(q_0) & 0 \\ \sin(q_0) & \cos(q_0) & 0 \\ 0 & 0 & 1 \end{array} \right],$

and the translation vector is

$\textbf{t}^1_0 = \left[ \begin{array}{ccc} 0 & 0 & 1 \end{array} \right]^T.$

The transformation matrix from reference frame 0 to reference frame 1 is then:

$\textbf{T}^1_0 = \left[ \begin{array}{cccc} \cos(q_0) & -\sin(q_0) & 0 & 0 \\ \sin(q_0) & \cos(q_0) & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{array} \right],$

where the third column indicates that the $z$ axis is unchanged by the rotation (the base rotates *around* $z$), and the fourth (translation) column shows that we move 1 unit along the $z$ axis. The fourth row is again only present to make the multiplications work out, and provides no information.

For the transformation from reference frame 1 to reference frame 2, there is no rotation (because it is a prismatic joint), and there is translation along the $y$ axis of reference frame 1 equal to $0.5 + q_1$ (the $0.5$ being the offset of $q_1$’s origin along that axis). This gives a transformation matrix:

$\textbf{T}^2_1 = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0.5 + q_1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right].$

The final transformation, from the origin of reference frame 2 to the end-effector position, is similarly another transformation with no rotation (because this joint is also prismatic), which translates along the $z$ axis by $-0.2 - q_2$ (the $-0.2$ being the offset of the end-effector along reference frame 2’s $z$ axis):

$\textbf{T}^{ee}_2 = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -0.2 - q_2 \\ 0 & 0 & 0 & 1 \end{array} \right].$

The full transformation from reference frame 0 to the end-effector is found by combining all of the above transformation matrices:

$\textbf{T}^{ee}_0 = \textbf{T}^1_0 \; \textbf{T}^2_1 \; \textbf{T}^{ee}_2 = \left[ \begin{array}{cccc} \cos(q_0) & -\sin(q_0) & 0 & -\sin(q_0)(0.5 + q_1) \\ \sin(q_0) & \cos(q_0) & 0 & \cos(q_0)(0.5 + q_1) \\ 0 & 0 & 1 & 0.8 - q_2 \\ 0 & 0 & 0 & 1 \end{array} \right].$

To transform a point from the end-effector reference frame into terms of the origin reference frame, simply multiply the transformation matrix by the point of interest defined relative to the end-effector. If it is the end-effector position itself that is of interest to us, then $^{ee}\textbf{p} = [0, 0, 0, 1]^T$, and the end-effector location in origin coordinates is just the last column of the transformation matrix:

$^0\textbf{p}_{ee} = \textbf{T}^{ee}_0 \left[ \begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array} \right] = \left[ \begin{array}{c} -\sin(q_0)(0.5 + q_1) \\ \cos(q_0)(0.5 + q_1) \\ 0.8 - q_2 \\ 1 \end{array} \right].$
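
Here’s a quick numerical sketch of the RPP arm’s forward transform described above; the Python implementation is mine, using the 1-unit, 0.5, and -0.2 offsets from this example, and the joint values are chosen just for a sanity check:

```python
import numpy as np

def rpp_forward(q0, q1, q2):
    """Forward transform T^ee_0 for the RPP arm: revolute base about z
    (1 unit up), prismatic joint along frame 1's y-axis (0.5 offset),
    prismatic joint along z (-0.2 offset)."""
    c, s = np.cos(q0), np.sin(q0)
    T10 = np.array([[c, -s, 0, 0.0],
                    [s,  c, 0, 0.0],
                    [0,  0, 1, 1.0],
                    [0,  0, 0, 1.0]])
    T21 = np.array([[1.0, 0, 0, 0],
                    [0, 1.0, 0, 0.5 + q1],
                    [0, 0, 1.0, 0],
                    [0, 0, 0, 1.0]])
    Tee2 = np.array([[1.0, 0, 0, 0],
                     [0, 1.0, 0, 0],
                     [0, 0, 1.0, -0.2 - q2],
                     [0, 0, 0, 1.0]])
    return T10.dot(T21).dot(Tee2)

# End-effector position: apply the transform to [0, 0, 0, 1].
p_ee = rpp_forward(0.0, 0.0, 0.0).dot(np.array([0.0, 0.0, 0.0, 1.0]))
print(p_ee[:3])  # [0, 0.5, 0.8] with all joints at zero
```

With all joints at zero this matches the last column of the combined matrix: $(-\sin(0) \cdot 0.5, \; \cos(0) \cdot 0.5, \; 0.8) = (0, 0.5, 0.8)$.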

**Inverting our transformation matrices**

What if we know where a point is defined in reference frame 0, but we want to know where it is relative to our end-effector’s reference frame? Fortunately this is straightforward, thanks to the way that we’ve defined our transform matrices. Continuing with the same robot example and configuration as above, and denoting the rotation part of the transform matrix $\textbf{R}$ and the translation part $\textbf{t}$, the inverse transform is defined:

$\textbf{T}^{-1} = \left[ \begin{array}{cc} \textbf{R}^T & -\textbf{R}^T \, \textbf{t} \\ \textbf{0} & 1 \end{array} \right].$

If we have a point $^0\textbf{p}$ defined in reference frame 0, then we can calculate its position relative to the end-effector as:

$^{ee}\textbf{p} = \left( \textbf{T}^{ee}_0 \right)^{-1} \; {}^0\textbf{p}.$
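
The inverse-transform formula is easy to verify numerically; in this sketch (the example rotation angle and translation values are made up) it’s checked against a general-purpose matrix inverse:

```python
import numpy as np

def invert_transform(T):
    """Invert a homogeneous transform using [R^T, -R^T t; 0, 1],
    which avoids computing a general matrix inverse."""
    R = T[:3, :3]
    t = T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T
    T_inv[:3, 3] = -R.T.dot(t)
    return T_inv

# Example transform: a 0.3 rad rotation about z plus a translation.
c, s = np.cos(0.3), np.sin(0.3)
T = np.array([[c, -s, 0, 1.0],
              [s,  c, 0, 2.0],
              [0,  0, 1, 3.0],
              [0,  0, 0, 1.0]])

# The closed-form inverse matches np.linalg.inv, and composing the
# two transforms gives back the identity:
assert np.allclose(invert_transform(T), np.linalg.inv(T))
assert np.allclose(invert_transform(T).dot(T), np.eye(4))
```

The trick works because the rotation block is orthonormal, so its inverse is just its transpose.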

**Conclusions**

These are, of course, just the basics of forward transformation matrices. There are numerous ways to go about this, but this method is fairly straightforward. If you’re interested in more, there are a bunch of YouTube videos and detailed tutorials all over the web. There’s a bunch of neat stuff about why the matrices are set up the way they are (search: homogeneous transformations), along with more complex examples.

The robot example for the 3D case here didn’t have any spherical joints; each joint only moved in 2 dimensions. It is also possible to derive the forward transformation matrix for spherical joints, it’s just more complex, and it’s not necessary here since they’re not used in the robots I’ll be looking at. This introduction is enough to get started and move on to some more exciting things, so let’s do that!


[…] Robot control part 1: Forward transformation matrices. […]


Hey, thank you for the great posts! I think there is a technical mistake here: http://bit.ly/19xGQTg , because point p2 has coordinates (1,2)

Good luck! Great job with the blog🙂

also here, where I think it should be -0.2 http://bit.ly/GFjfsW

and here http://bit.ly/GLeaQK, where it should be minus theta 2

You are correct! Fixed. Thanks for the catch, I appreciate it.🙂

Thanks for explaining robot matrices. I was struggling for many days.

You’re welcome! I similarly had quite a time figuring them out when I first came across them.🙂

Hey,

There are some things I still don’t understand. Why do some sources of information say that to go from v to v’ you use

[cos -sin]
[sin cos]

which would mean the above matrix rotation would be 0R1, not 1R0 as you said above.

Also, the Transformation matrix seems to be moving points from frame 0 to frame 1 not frame 1 to frame 0.

The logic seems really counterintuitive because it is backwards from the information posted on sites like wolframalpha

Thanks

Myyyy goodness, that’s a long standing mistake.

Thank you for this catch, you’re correct, I’ll fix this!

I also updated the notation to some that’s hopefully easier to read.

thanks for your post,

I want to ask you about the T12 transformation from reference frame 2 to reference frame 1. Why is theta1 translated along the y axis? I think that theta1 is translated along the x axis.

So, T12 = [1 0 0 theta1;
0 1 0 0.5;
0 0 1 0;
0 0 0 1];

can you explain it to me? thanks

Hi CuPi, if the axes were defined more intuitively you would be correct. But in this example the Z axis is up/down along the picture, the Y axis is left/right, and the X axis comes out/into the picture. It’s just because of this definition of the axes that the transformation matrix isn’t set up how you have above!

edit: I should be more explicit about this in the post, I’ll edit it to make it more clear. Thanks!




Hi, I understand it now, but just a question on how you got the 0.5+q1 in the matrix for T21.

Hi watson, yeah that is not clear! Sorry about that, the .5 is the offset of q1’s origin along the y-axis of q0’s reference frame. Similarly, there is a -.2 offset of q2 along the z-axis of q1 reference frame. Does that help? I’ll update the figures asap to hopefully make it more clear where those come from!

It’s a nice piece of information, I’d like to see more from you.


https://s0.wp.com/latex.php?latex=%5Ctextbf%7BT%7D%5E%7Bee%7D_0+%3D+%5Ctextbf%7BT%7D%5E1_0+%5C%3B+%5Ctextbf%7BT%7D%5E2_1+%5C%3B+%5Ctextbf%7BT%7D%5E%7Bee%7D_2+%3D+%5Cleft%5B+%5Cbegin%7Barray%7D%7Bcccc%7D+cos%28q_0%29+%26+-sin%28q_0%29+%26+0+%26+-sin%28q_0%29%280.5+%2B+q_1%29+%5C%5C+sin%28q_0%29+%26+cos%28q_0%29+%26+0+%26+cos%28q_0%29+%280.5+%2B+q_1%29+%5C%5C+0+%26+0+%26+1+%26+0.8+-+q_2+%5C%5C+0+%26+0+%26+0+%26+1+%5Cend%7Barray%7D+%5Cright%5D.+&bg=ffffff&fg=555555&s=0

Hey😀 Thank you so much for such a clear and concise explanation. I just have one doubt: where did the last column in the matrix (whose link I have sent) come from? I’m a little confused by that.

Howdy! You’re welcome, glad you found it useful!

The last column in the matrix represents the translations along the (x,y,z) axes. They’re calculated by finding the translation between each of the coordinate frames individually, generating the transformation matrices, and then multiplying them all together. The 1 in the last row doesn’t represent anything, it’s just there to make all the calculations work out when you multiply. Does that help?
