I’m doing a tour of learning down at the Brains in Silicon lab run by Dr. Kwabena Boahen for the next month or so, learning a bunch about building and controlling robots and some other things, and one of the super interesting things that I’m reading about is effective methods for force control of robots. I’ve mentioned operational space (or task space) control of robotic systems before, in the context of learning the inverse kinematic transformation, but down here the approach is to analytically derive the dynamics of the system (as opposed to learning them) and use them to explicitly calculate control signals that move the system in task space while taking advantage of its passive dynamics.
In case you don’t remember what those words mean, operational (or task) space refers to a different configuration space than the basic / default robot configuration space. For example, if we have a robot arm with two rotational degrees of freedom (DOF), that looks something like this:

where the two joints rotate and are referred to as $q_1$ and $q_2$, respectively, then the most obvious representation for the system state, $\mathbf{q}$, is the joint-angle space $\mathbf{q} = [q_1, q_2]^T$. So that’s great, but often when we’re using this robot we’re going to be more interested in controlling the position of the end-effector than the angles of the joints. We would like to operate to complete our task in terms of $\mathbf{x} = [x, y]$, where $x$ and $y$ are the Cartesian coordinates of our hand in 2D space. So then $\mathbf{x}$ is our operational (or task) space.
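To make the joint space / task space distinction concrete, the map from joint angles to hand position for this arm can be written out directly. This is a sketch assuming my own labels: a fixed base link of height $d_0$, rotating link lengths $d_1$ and $d_2$, $q_1$ measured from the horizontal, and $q_2$ measured relative to the first link:

$$x = d_1 \cos(q_1) + d_2 \cos(q_1 + q_2), \qquad y = d_0 + d_1 \sin(q_1) + d_2 \sin(q_1 + q_2).$$

Note that more than one joint configuration $[q_1, q_2]$ can place the hand at the same $(x, y)$, which is part of why it’s worth treating the two spaces separately.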
I also mentioned the phrase ‘passive dynamics’. It’s true, go back and check if you don’t believe me. Passive dynamics refer to how the system moves from a given initial condition when no control signal is applied. For example, passive dynamics incorporate the effects of gravity on the system. If we put our arm up in the air and remove any control signal, it falls down by our side. The reason that we’re interested in passive dynamics is that they’re movement for free. So if my goal is to move my arm down by my side, I want to take advantage of the fact that the system naturally moves there on its own simply by removing my control signal, rather than using a bunch of energy to force my arm down.
There are a bunch of steps leading up to building controllers that can plan trajectories in task space. As a first step, it’s important that we characterize the relationship of each of the robot links’ reference coordinate frames to the origin, or base, of the robot. The characterization of these relationships is done using what are called forward transformation matrices, and they will be the focus of the remainder of this post.
Forward transformation matrices in 2D
As I mentioned mere sentences ago, forward transformation matrices capture the relationship between the reference frames of different links of the robot. A good question might be ‘what is a reference frame?’ A reference frame is basically the point of view of each of the robotic links; if you were an arm link yourself, it’s what you would consider ‘looking forward’. To get a feel for these frames and why it’s necessary to be able to move between them, let’s look at the reference frames of each of the links of the robot drawn above:
We know that our end-effector point $\mathbf{p}$ is length $d_2$ away from reference frame 2 along its x-axis. Similarly, we know that reference frame 2 is length $d_1$ away from reference frame 1 along its x-axis, and that reference frame 1 is length $d_0$ away from the origin along its y-axis. The question is, then: in terms of the origin’s coordinate frame, where is our point $\mathbf{p}$?

In the configuration pictured above it’s pretty straightforward to figure out: it’s simply $\mathbf{p} = (d_1 + d_2, \; d_0)$. So you’re feeling pretty cocky, this stuff is easy. OK hotshot, what about NOW:
It’s not as straightforward once rotations start being introduced. So what we’re looking for is a method of automatically accounting for the rotations and translations of points between different coordinate frames, such that if we know the current angles of the robot joints and the relative positions of the coordinate frames we can quickly calculate the position of the point of interest in terms of the origin coordinate frame.
Accounting for rotation
So let’s just take a quick look at rotating axes. Here’s a picture:
The above image displays two reference frames sharing the same origin, rotated from each other by $q$ degrees. Imagine a point $\mathbf{p} = (x_1, y_1)$ specified in reference frame 1. To find its coordinates in terms of the origin reference frame, or its $(x_0, y_0)$ coordinates, it is necessary to find out the contributions of the $x_1$ and $y_1$ axes to the $x_0$ and $y_0$ axes. The contribution to the $x_0$ axis from $x_1$ is calculated

$$x_1 \cos(q).$$

To include $y_1$’s effect on position, we add the term

$$y_1 \cos(q + 90°),$$

which is equivalent to the projection of $y_1$ onto the $x_0$ axis, as shown above. This term can be rewritten as

$$-y_1 \sin(q),$$

because $\sin$ and $\cos$ are phase shifted 90 degrees from one another.

The total contribution of a point defined in the $(x_1, y_1)$ axes to the $x_0$ axis is then

$$x_0 = x_1 \cos(q) - y_1 \sin(q).$$

Similarly, for the $y_0$ axis contributions we have

$$y_0 = x_1 \sin(q) + y_1 \cos(q).$$

Rewriting the above equations in matrix form gives:

$$\begin{bmatrix} x_0 \\ y_0 \end{bmatrix} = \begin{bmatrix} \cos(q) & -\sin(q) \\ \sin(q) & \cos(q) \end{bmatrix} \begin{bmatrix} x_1 \\ y_1 \end{bmatrix},$$

where ${}^1_0\mathbf{R}$ is called a rotation matrix.
The notation used here for these matrices is that the number of the reference frame being rotated from is denoted in the superscript before the matrix, and the number of the reference frame being rotated into is in the subscript: ${}^1_0\mathbf{R}$ denotes a rotation from reference frame 1 into reference frame 0.

To find the location of a point defined in reference frame 1 in terms of reference frame 0 coordinates, we then multiply it by the rotation matrix: $\mathbf{p}_0 = {}^1_0\mathbf{R} \, \mathbf{p}_1$.
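If it helps to see the rotation step as code, here’s a minimal NumPy sketch (the function name and the 45 degree test angle are my own choices for illustration, not anything from the derivation above):

```python
import numpy as np

def rotation_matrix(q):
    # 2D rotation matrix taking points from reference frame 1
    # into reference frame 0, where frame 1 is rotated by q radians
    return np.array([[np.cos(q), -np.sin(q)],
                     [np.sin(q),  np.cos(q)]])

p1 = np.array([1.0, 0.0])           # a point defined in reference frame 1
R10 = rotation_matrix(np.pi / 4.0)  # frame 1 rotated 45 degrees from frame 0
p0 = R10 @ p1                       # the same point in frame 0 coordinates
print(p0)                           # ~[0.707 0.707]
```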
Accounting for translation
Alrighty, rotation is great, but as you may have noticed, our robot’s joints are not all right on top of each other. The second part of transformation is translation, so it is also necessary to account for the distances between reference frame origins.
Let’s look at the reference frames 1 and 0 shown in the above figure, where the point $\mathbf{p} = (2, 2)$ in reference frame 1. Reference frame 1 is rotated 45 degrees from reference frame 0 and located at $(1, 1)$ in reference frame 0. To account for this rotation and translation, a new matrix will be created that includes both. It is generated by appending the distances, denoted $\mathbf{D}$, to the rotation matrix, along with a row of zeros ending in a 1, to get a transformation matrix:

$${}^1_0\mathbf{T} = \begin{bmatrix} {}^1_0\mathbf{R} & {}^1_0\mathbf{D} \\ \mathbf{0} & 1 \end{bmatrix} = \begin{bmatrix} \cos(45°) & -\sin(45°) & 1 \\ \sin(45°) & \cos(45°) & 1 \\ 0 & 0 & 1 \end{bmatrix}.$$

To make the matrix-vector multiplications work out, a homogeneous representation must be used, which adds an extra row with a 1 to the end of the vector to give

$$\mathbf{p} = \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}.$$

When the position vector $\mathbf{p}$ is multiplied by the transformation matrix, the answer should be somewhere around $(1, 3.8)$ from visual inspection, and indeed:

$${}^1_0\mathbf{T} \, \mathbf{p} = \begin{bmatrix} \cos(45°) & -\sin(45°) & 1 \\ \sin(45°) & \cos(45°) & 1 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 2 \\ 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 3.83 \\ 1 \end{bmatrix}.$$

To get the coordinates of $\mathbf{p}$ in reference frame 0, now simply take the first two elements of the resulting vector: $\mathbf{p}_0 = (1, 3.83)$.
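The worked example above can also be checked numerically; here’s a small sketch assuming NumPy, with the 45 degree rotation and the $(1, 1)$ offset from the example:

```python
import numpy as np

def transformation_matrix(q, dx, dy):
    # 2D homogeneous transformation matrix: rotation by q radians
    # followed by a translation of (dx, dy)
    return np.array([[np.cos(q), -np.sin(q), dx],
                     [np.sin(q),  np.cos(q), dy],
                     [0.0,        0.0,       1.0]])

p1 = np.array([2.0, 2.0, 1.0])  # the point (2, 2) in frame 1, homogeneous form
T10 = transformation_matrix(np.pi / 4.0, 1.0, 1.0)  # frame 1 relative to frame 0
p0 = T10 @ p1
print(p0[:2])  # ~[1.    3.828]
```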
Applying multiple transformations
We can also string these things together! What if we have a 3-link planar (i.e. rotating in the $(x, y)$ plane) robot arm? A setup like this:
We know that our end-effector is at point $\mathbf{p} = (1, 0)$ in reference frame 2, which is at an 80 degree angle from reference frame 1 and located at $(1, 1)$ in reference frame 1. That gives us a transformation matrix

$${}^2_1\mathbf{T} = \begin{bmatrix} \cos(80°) & -\sin(80°) & 1 \\ \sin(80°) & \cos(80°) & 1 \\ 0 & 0 & 1 \end{bmatrix}.$$

To get our point in terms of reference frame 0, we account for the transform from reference frame 2 into reference frame 1 with ${}^2_1\mathbf{T}$, and then account for the transform from reference frame 1 into reference frame 0 with our previously defined transformation matrix ${}^1_0\mathbf{T}$.

So let’s give it a shot! By eyeballing it, we should expect our answer to be somewhere around $(0.5, 3)$ or so, I would say.

$${}^1_0\mathbf{T} \; {}^2_1\mathbf{T} \, \mathbf{p} = \begin{bmatrix} \cos(45°) & -\sin(45°) & 1 \\ \sin(45°) & \cos(45°) & 1 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos(80°) & -\sin(80°) & 1 \\ \sin(80°) & \cos(80°) & 1 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0.43 \\ 3.23 \\ 1 \end{bmatrix}.$$
And it’s a good thing we didn’t just eyeball it! Accurate drawings might have helped but the math gives us an exact answer. Super!
And one more note: if we’re often performing this computation, then instead of performing two matrix multiplications every time, we can work out

$${}^2_0\mathbf{T} = {}^1_0\mathbf{T} \; {}^2_1\mathbf{T}$$

and simply multiply our point in reference frame 2 by this new transformation matrix to calculate its coordinates in reference frame 0.
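Here’s that composition as a sketch, reusing the same hypothetical transformation_matrix helper from before (angles and offsets match the example above):

```python
import numpy as np

def transformation_matrix(q, dx, dy):
    # 2D homogeneous transformation: rotation by q radians, translation (dx, dy)
    return np.array([[np.cos(q), -np.sin(q), dx],
                     [np.sin(q),  np.cos(q), dy],
                     [0.0,        0.0,       1.0]])

T10 = transformation_matrix(np.deg2rad(45.0), 1.0, 1.0)  # frame 1 in frame 0
T21 = transformation_matrix(np.deg2rad(80.0), 1.0, 1.0)  # frame 2 in frame 1
T20 = T10 @ T21  # a single matrix taking points from frame 2 straight to frame 0

p2 = np.array([1.0, 0.0, 1.0])  # end-effector point in frame 2, homogeneous form
print((T20 @ p2)[:2])           # ~[0.43 3.23]
```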
Forward transformation matrices in 3D
The example here is taken from Samir Menon’s RPP control tutorial.
It turns out it’s trivial to add in the $z$ dimension and start accounting for 3D transformations. Let’s say we have a standard revolute-prismatic-prismatic robot, which looks exactly like this, or roughly like this:

where the base rotates around the $z$ axis, and the distance from reference frame 0 to reference frame 1 is 1 unit, also along the $z$ axis. The rotation matrix from reference frame 0 to reference frame 1 is:

$${}^1_0\mathbf{R} = \begin{bmatrix} \cos(q_0) & -\sin(q_0) & 0 \\ \sin(q_0) & \cos(q_0) & 0 \\ 0 & 0 & 1 \end{bmatrix},$$

and the translation vector is

$${}^1_0\mathbf{D} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.$$

The transformation matrix from reference frame 0 to reference frame 1 is then:

$${}^1_0\mathbf{T} = \begin{bmatrix} \cos(q_0) & -\sin(q_0) & 0 & 0 \\ \sin(q_0) & \cos(q_0) & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix},$$

where the third column indicates that the $z$ axis is unaffected by the rotation between reference frames (the base rotates around it), and the fourth (translation) column shows that we move 1 unit along the $z$ axis. The fourth row is again only present to make the multiplications work out, and provides no information.
For the transformation from reference frame 1 to reference frame 2, there is no rotation (because it is a prismatic joint), and there is translation along the x-axis of reference frame 1 equal to the joint position $q_1$. This gives a transformation matrix:

$${}^2_1\mathbf{T} = \begin{bmatrix} 1 & 0 & 0 & q_1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
The final transformation, from the origin of reference frame 2 to the end-effector position, is similarly a transformation with no rotation (because this joint is also prismatic) that translates along the z-axis:

$${}^{ee}_2\mathbf{T} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & q_2 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
The full transformation from reference frame 0 to the end-effector is found by combining all of the above transformation matrices:

$${}^{ee}_0\mathbf{T} = {}^1_0\mathbf{T} \; {}^2_1\mathbf{T} \; {}^{ee}_2\mathbf{T} = \begin{bmatrix} \cos(q_0) & -\sin(q_0) & 0 & q_1 \cos(q_0) \\ \sin(q_0) & \cos(q_0) & 0 & q_1 \sin(q_0) \\ 0 & 0 & 1 & q_2 + 1 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
To transform a point from the end-effector reference frame into terms of the origin reference frame, simply multiply the transformation matrix by the point of interest relative to the end-effector. If it is the end-effector position itself that is of interest to us, then $\mathbf{p} = [0, 0, 0, 1]^T$. For example, let $q_0 = \frac{\pi}{4}$, $q_1 = 1$, and $q_2 = 1$; then the end-effector location is:

$${}^{ee}_0\mathbf{T} \, \mathbf{p} = \begin{bmatrix} \cos(q_0) & -\sin(q_0) & 0 & q_1 \cos(q_0) \\ \sin(q_0) & \cos(q_0) & 0 & q_1 \sin(q_0) \\ 0 & 0 & 1 & q_2 + 1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} \\ 2 \\ 1 \end{bmatrix}.$$
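The whole 3D chain can also be sketched in NumPy, under the same assumptions I used above (the first prismatic joint translating along the x-axis of frame 1, the second along the z-axis of frame 2; the helper names are mine):

```python
import numpy as np

def revolute_z(q, dz):
    # rotation by q radians about the z axis, plus a translation dz along z
    return np.array([[np.cos(q), -np.sin(q), 0.0, 0.0],
                     [np.sin(q),  np.cos(q), 0.0, 0.0],
                     [0.0,        0.0,       1.0, dz],
                     [0.0,        0.0,       0.0, 1.0]])

def prismatic(dx, dy, dz):
    # pure translation with no rotation, as for a prismatic joint
    T = np.eye(4)
    T[:3, 3] = [dx, dy, dz]
    return T

q0, q1, q2 = np.pi / 4.0, 1.0, 1.0
T10 = revolute_z(q0, 1.0)       # revolute base, 1 unit up the z axis
T21 = prismatic(q1, 0.0, 0.0)   # prismatic joint along x of frame 1
Tee2 = prismatic(0.0, 0.0, q2)  # prismatic joint along z of frame 2
Tee0 = T10 @ T21 @ Tee2         # full transform from end-effector to frame 0

p = np.array([0.0, 0.0, 0.0, 1.0])  # end-effector origin, homogeneous form
print(Tee0 @ p)                     # ~[0.707 0.707 2.    1.   ]
```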
Inverting our transformation matrices
What if we know where a point is defined in reference frame 0, but we want to know where it is relative to our end-effector’s reference frame? Fortunately this is straightforward thanks to the way that we’ve defined our transform matrices. Denoting the rotation part of the transform matrix $\mathbf{R}$ and the translation part $\mathbf{D}$, the inverse transform is defined:

$$({}^{ee}_0\mathbf{T})^{-1} = \begin{bmatrix} \mathbf{R}^T & -\mathbf{R}^T \mathbf{D} \\ \mathbf{0} & 1 \end{bmatrix},$$

where the rotation part inverts with a simple transpose because rotation matrices are orthonormal. Continuing the same robot example and configuration as above, if we have a point that’s at $\mathbf{p}_0 = (1, 1, 0)$ in reference frame 0, then we can calculate that relative to the end-effector it is at:

$$({}^{ee}_0\mathbf{T})^{-1} \, \mathbf{p}_0 = \begin{bmatrix} \sqrt{2} - 1 \\ 0 \\ -2 \\ 1 \end{bmatrix} \approx \begin{bmatrix} 0.41 \\ 0 \\ -2 \\ 1 \end{bmatrix}.$$
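And the inverse as a sketch, continuing with the same configuration (the point $(1, 1, 0)$ matches the example above). Building the inverse from $\mathbf{R}^T$ and $-\mathbf{R}^T \mathbf{D}$ avoids calling a general-purpose matrix inverse:

```python
import numpy as np

# ^ee_0 T for the RPP example above, with q0 = pi/4 and q1 = q2 = 1
c, s = np.cos(np.pi / 4.0), np.sin(np.pi / 4.0)
Tee0 = np.array([[c,   -s,   0.0, c],
                 [s,    c,   0.0, s],
                 [0.0,  0.0, 1.0, 2.0],
                 [0.0,  0.0, 0.0, 1.0]])

# build the inverse from the structure of the transform
R, D = Tee0[:3, :3], Tee0[:3, 3]
Tinv = np.eye(4)
Tinv[:3, :3] = R.T
Tinv[:3, 3] = -R.T @ D

p0 = np.array([1.0, 1.0, 0.0, 1.0])  # a point in reference frame 0
print(Tinv @ p0)  # ~[0.414 0.   -2.    1.  ]
```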
Conclusions
These are, of course, just the basics of forward transformation matrices. There are numerous ways to go about this, but this method is fairly straightforward. If you’re interested in more, there are a bunch of YouTube videos and detailed tutorials all over the web. There’s a bunch of neat stuff out there about why the matrices are set up the way they are (search: homogeneous transformations) and more complex examples.
The robot example for the 3D case here didn’t have any spherical joints; each joint only moved in 2 dimensions. It’s possible to derive the forward transformation matrices for spherical joints too, it’s just more complex, and it’s not necessary here since they’re not used in the robots I’ll be looking at. This introduction is enough to get started and move on to some more exciting things, so let’s do that!