## Operational space control of 6DOF robot arm with spiking cameras part 2: Deriving the Jacobian

In the previous exciting post in this series I outlined the project, which is in the title, and we worked through getting access to the arm through Python. The next step was deriving the Jacobian, and that’s what we’re going to be talking about in this post!

Alright.
This was a time I was very glad to have a previous post talking about generating transformation matrices, because deriving the Jacobian for a 6DOF arm in 3D space comes off as a little daunting when you're used to 3DOF in 2D space, and I needed a reminder of the derivation process. The first step here was finding out which motor was which, so I went through and checked how each motor moved with something like the following code:

for ii in range(7):
    target_angles = np.zeros(7, dtype='float32')
    target_angles[ii] = np.pi / 4.0
    rob.move(target_angles)
    time.sleep(1)


and I found that the robot is set up as shown in the figures below (this is me trying my hand at making things clearer using Inkscape, hopefully it's worked). Displayed are the first 6 joints and their angles of rotation, $q_0$ through $q_5$. The 7th joint, $q_6$, opens and closes the gripper, so we're safe to ignore it in deriving our Jacobian. The arm segment lengths $l_1, l_3,$ and $l_5$ are named based on the nearest joint angles (makes easier reading in the Jacobian derivation).

### Find the transformation matrix from end-effector to origin

So first things first, let's find the transformation matrices. Our first joint, $q_0$, rotates around the $z$ axis, so the rotational part of our transformation matrix $^0_\textrm{org}\textbf{T}$ is $^0_\textrm{org}\textbf{R} = \left[ \begin{array}{ccc} \textrm{cos}(q_0) & -\textrm{sin}(q_0) & 0 \\ \textrm{sin}(q_0) & \textrm{cos}(q_0) & 0 \\ 0 & 0 & 1 \end{array} \right],$

and $q_0$ and our origin frame of reference are on top of each other, so we don't need to account for translation, which means the translation component of $^0_\textrm{org}\textbf{T}$ is $^0_\textrm{org}\textbf{D} = \left[ \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right].$

Stacking these together to form our first transformation matrix we have $^0_\textrm{org}\textbf{T} = \left[ \begin{array}{cc} ^0_\textrm{org}\textbf{R} & ^0_\textrm{org}\textbf{D} \\ 0 & 1 \end{array} \right] = \left[ \begin{array}{cccc} \textrm{cos}(q_0) & -\textrm{sin}(q_0) & 0 & 0\\ \textrm{sin}(q_0) & \textrm{cos}(q_0) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right] .$
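Stacking $\textbf{R}$ and $\textbf{D}$ this way is also easy to do programmatically. Here's a minimal numpy sketch (the function name `transform_z` is just for illustration, not part of the arm code):

```python
import numpy as np

def transform_z(q, d=np.zeros(3)):
    """Build a 4x4 homogeneous transform for a rotation of q radians
    about the z axis plus a translation d."""
    c, s = np.cos(q), np.sin(q)
    T = np.eye(4)
    # rotation block R in the upper-left 3x3
    T[:3, :3] = np.array([[c, -s, 0],
                          [s,  c, 0],
                          [0,  0, 1]])
    # translation block D in the last column
    T[:3, 3] = d
    return T

# rotate the point (1, 0, 0) by 90 degrees about z
point = np.array([1.0, 0.0, 0.0, 1.0])  # homogeneous coordinates
print(transform_z(np.pi / 2).dot(point))  # ~ [0, 1, 0, 1]
```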

So now we are able to convert a position in 3D space from the reference frame of joint $q_0$ back to our origin frame of reference. Let's keep going.

Joint $q_1$ rotates around the $x$ axis, and there is a translation along the arm segment $l_1$. Our transformation matrix looks like $^1_0\textbf{T} = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & \textrm{cos}(q_1) & -\textrm{sin}(q_1) & l_1 \textrm{cos}(q_1) \\ 0 & \textrm{sin}(q_1) & \textrm{cos}(q_1) & l_1 \textrm{sin}(q_1) \\ 0 & 0 & 0 & 1 \end{array} \right] .$

Joint $q_2$ also rotates around the $x$ axis, but there is no translation from $q_2$ to $q_3$. So our transformation matrix looks like $^2_1\textbf{T} = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & \textrm{cos}(q_2) & -\textrm{sin}(q_2) & 0 \\ 0 & \textrm{sin}(q_2) & \textrm{cos}(q_2) & 0 \\ 0 & 0 & 0 & 1 \end{array} \right] .$

The next transformation matrix is a little tricky, because you might be tempted to say that it's rotating around the $z$ axis, but actually it's rotating around the $y$ axis. This is determined by where $q_3$ is mounted relative to $q_2$. If it were mounted at 90 degrees from $q_2$ then it would be rotating around the $z$ axis, but it's not. For translation, there's a translation along the $y$ axis up to the next joint, so all in all the transformation matrix looks like: $^3_2\textbf{T} = \left[ \begin{array}{cccc} \textrm{cos}(q_3) & 0 & \textrm{sin}(q_3) & 0\\ 0 & 1 & 0 & l_3 \\ -\textrm{sin}(q_3) & 0 & \textrm{cos}(q_3) & 0\\ 0 & 0 & 0 & 1 \end{array} \right] .$

And then the transformation matrices from $q_4$ to $q_3$ and from $q_5$ to $q_4$ have the same form as the previous pair, so we have $^4_3\textbf{T} = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & \textrm{cos}(q_4) & -\textrm{sin}(q_4) & 0 \\ 0 & \textrm{sin}(q_4) & \textrm{cos}(q_4) & 0 \\ 0 & 0 & 0 & 1 \end{array} \right] .$

and $^5_4\textbf{T} = \left[ \begin{array}{cccc} \textrm{cos}(q_5) & 0 & \textrm{sin}(q_5) & 0 \\ 0 & 1 & 0 & l_5 \\ -\textrm{sin}(q_5) & 0 & \textrm{cos}(q_5) & 0\\ 0 & 0 & 0 & 1 \end{array} \right] .$

Alright! Now that we have all of the transformation matrices, we can put them together to get the transformation from end-effector coordinates to our reference frame coordinates! $^\textrm{ee}_\textrm{org}\textbf{T} = ^0_\textrm{org}\textbf{T} \; ^1_0\textbf{T} \; ^2_1\textbf{T} \; ^3_2\textbf{T} \; ^4_3\textbf{T} \; ^5_4\textbf{T}.$

At this point I went and tested this with some sample points to make sure that everything was being transformed properly, but we won't go through that here.
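That kind of test can be sketched along these lines: a plain numpy version of the transforms above, checking that with all joints at zero the end-effector lands at $(0, l_1 + l_3 + l_5, 0)$, straight out along the $y$ axis. The segment lengths below are made-up placeholders, not the real arm's measurements.

```python
import numpy as np

def fk(q, l1, l3, l5):
    """Forward kinematics using the transforms derived above: returns
    the (x, y, z) of the end-effector in origin frame coordinates."""
    c, s = np.cos, np.sin
    Torg0 = np.array([[c(q[0]), -s(q[0]), 0, 0],
                      [s(q[0]),  c(q[0]), 0, 0],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]])
    T01 = np.array([[1, 0, 0, 0],
                    [0, c(q[1]), -s(q[1]), l1 * c(q[1])],
                    [0, s(q[1]),  c(q[1]), l1 * s(q[1])],
                    [0, 0, 0, 1]])
    T12 = np.array([[1, 0, 0, 0],
                    [0, c(q[2]), -s(q[2]), 0],
                    [0, s(q[2]),  c(q[2]), 0],
                    [0, 0, 0, 1]])
    T23 = np.array([[c(q[3]), 0, s(q[3]), 0],
                    [0, 1, 0, l3],
                    [-s(q[3]), 0, c(q[3]), 0],
                    [0, 0, 0, 1]])
    T34 = np.array([[1, 0, 0, 0],
                    [0, c(q[4]), -s(q[4]), 0],
                    [0, s(q[4]),  c(q[4]), 0],
                    [0, 0, 0, 1]])
    T45 = np.array([[c(q[5]), 0, s(q[5]), 0],
                    [0, 1, 0, l5],
                    [-s(q[5]), 0, c(q[5]), 0],
                    [0, 0, 0, 1]])
    T = Torg0 @ T01 @ T12 @ T23 @ T34 @ T45
    # transform the point at the origin of the last joint's frame
    return (T @ np.array([0, 0, 0, 1]))[:3]

# with all joint angles at zero the end-effector should sit at
# (0, l1 + l3 + l5, 0)
print(fk(np.zeros(6), l1=0.1, l3=0.12, l5=0.08))
```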

### Calculate the derivative of the transform with respect to each joint

The next step in calculating the Jacobian is getting the derivative of $^\textrm{ee}_\textrm{org}\textbf{T}$. Doing this by hand could be a big ol' headache, OR we could use SymPy, the symbolic computation package for Python. Which is exactly what we'll do. So after a quick

sudo pip install sympy


I wrote up the following script to perform the derivation for us

import sympy as sp

def calc_transform():
    # set up our joint angle symbols (the 7th angle is the gripper,
    # which doesn't affect any kinematics)
    q = [sp.Symbol('q0'), sp.Symbol('q1'), sp.Symbol('q2'),
         sp.Symbol('q3'), sp.Symbol('q4'), sp.Symbol('q5')]
    # set up our arm segment length symbols
    l1 = sp.Symbol('l1')
    l3 = sp.Symbol('l3')
    l5 = sp.Symbol('l5')

    Torg0 = sp.Matrix([[sp.cos(q[0]), -sp.sin(q[0]), 0, 0],
                       [sp.sin(q[0]), sp.cos(q[0]), 0, 0],
                       [0, 0, 1, 0],
                       [0, 0, 0, 1]])

    T01 = sp.Matrix([[1, 0, 0, 0],
                     [0, sp.cos(q[1]), -sp.sin(q[1]), l1*sp.cos(q[1])],
                     [0, sp.sin(q[1]), sp.cos(q[1]), l1*sp.sin(q[1])],
                     [0, 0, 0, 1]])

    T12 = sp.Matrix([[1, 0, 0, 0],
                     [0, sp.cos(q[2]), -sp.sin(q[2]), 0],
                     [0, sp.sin(q[2]), sp.cos(q[2]), 0],
                     [0, 0, 0, 1]])

    T23 = sp.Matrix([[sp.cos(q[3]), 0, sp.sin(q[3]), 0],
                     [0, 1, 0, l3],
                     [-sp.sin(q[3]), 0, sp.cos(q[3]), 0],
                     [0, 0, 0, 1]])

    T34 = sp.Matrix([[1, 0, 0, 0],
                     [0, sp.cos(q[4]), -sp.sin(q[4]), 0],
                     [0, sp.sin(q[4]), sp.cos(q[4]), 0],
                     [0, 0, 0, 1]])

    T45 = sp.Matrix([[sp.cos(q[5]), 0, sp.sin(q[5]), 0],
                     [0, 1, 0, l5],
                     [-sp.sin(q[5]), 0, sp.cos(q[5]), 0],
                     [0, 0, 0, 1]])

    T = Torg0 * T01 * T12 * T23 * T34 * T45

    # position of the end-effector relative to joint 6's reference
    # frame (right at the origin)
    x = sp.Matrix([0, 0, 0, 1])

    Tx = T * x

    for ii in range(6):
        print(q[ii])
        print(sp.simplify(Tx[0].diff(q[ii])))
        print(sp.simplify(Tx[1].diff(q[ii])))
        print(sp.simplify(Tx[2].diff(q[ii])))


And then consolidated the output using some variable shorthand to write a method that takes the current joint angles and generates the Jacobian:

def calc_jacobian(self):
    # use the current set of joint angles
    q = self.q
    J = np.zeros((3, 7))

    c0 = np.cos(q[0])
    s0 = np.sin(q[0])
    c1 = np.cos(q[1])
    s1 = np.sin(q[1])
    c3 = np.cos(q[3])
    s3 = np.sin(q[3])
    c4 = np.cos(q[4])
    s4 = np.sin(q[4])

    c12 = np.cos(q[1] + q[2])
    s12 = np.sin(q[1] + q[2])

    l1 = self.l1
    l3 = self.l3
    l5 = self.l5

    J[0,0] = -l1*c0*c1 - l3*c0*c12 - l5*((s0*s3 - s12*c0*c3)*s4 + c0*c4*c12)
    J[1,0] = -l1*s0*c1 - l3*s0*c12 + l5*((s0*s12*c3 + s3*c0)*s4 - s0*c4*c12)
    J[2,0] = 0

    J[0,1] = (l1*s1 + l3*s12 + l5*(s4*c3*c12 + s12*c4))*s0
    J[1,1] = -(l1*s1 + l3*s12 + l5*s4*c3*c12 + l5*s12*c4)*c0
    J[2,1] = l1*c1 + l3*c12 - l5*(s4*s12*c3 - c4*c12)

    J[0,2] = (l3*s12 + l5*(s4*c3*c12 + s12*c4))*s0
    J[1,2] = -(l3*s12 + l5*s4*c3*c12 + l5*s12*c4)*c0
    J[2,2] = l3*c12 - l5*(s4*s12*c3 - c4*c12)

    J[0,3] = -l5*(s0*s3*s12 - c0*c3)*s4
    J[1,3] = l5*(s0*c3 + s3*s12*c0)*s4
    J[2,3] = -l5*s3*s4*c12

    J[0,4] = l5*((s0*s12*c3 + s3*c0)*c4 + s0*s4*c12)
    J[1,4] = l5*((s0*s3 - s12*c0*c3)*c4 - s4*c0*c12)
    J[2,4] = -l5*(s4*s12 - c3*c4*c12)

    # the last two columns stay zero: q5 doesn't move the end-effector
    # point and q6 is the gripper
    return J


Alright! Now we have our Jacobian! Really the only time-consuming part here was calculating our end-effector to origin transformation matrix; generating the Jacobian was super easy using SymPy once we had that.

### Hack position control using the Jacobian

Great! So now that we have our Jacobian we'll be able to translate forces that we want to apply to the end-effector into joint torques that we want to apply at the arm motors. Since we can't control the force applied by the motors, though, and instead have to pass in desired angle positions, we're going to do a hack approximation. Let's first transform our forces from end-effector space into a set of joint torques: $\textbf{u} = \textbf{J}^T \; \textbf{u}_\textbf{x}.$

To approximate the control then we’re simply going to take the current set of joint angles (which we know because it’s whatever angles we last told the system to move to) and add a scaled down version of $\textbf{u}$ to approximate applying torque that affects acceleration and then velocity. $\textbf{q}_\textrm{des} = \textbf{q} + \alpha \; \textbf{u},$

where $\alpha$ is the gain term. I used .001 here because it kept things nice and slow, so no crazy commands that could break the servos would be sent out before I could react and hit the cancel button.
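Putting those two equations together, one step of this hack looks something like the following sketch (the function name `hack_position_step` and the toy Jacobian are made up for illustration):

```python
import numpy as np

def hack_position_step(q, J, ux, alpha=0.001):
    """One step of the hacked control: u = J^T u_x maps the desired
    end-effector force to joint torques, and q_des = q + alpha * u
    nudges the target joint angles in that direction."""
    u = np.dot(J.T, ux)  # u = J^T u_x
    return q + alpha * u  # q_des = q + alpha * u

# toy example: a 3x7 Jacobian of ones and a force purely along x
q = np.zeros(7)
J = np.ones((3, 7))
ux = np.array([1.0, 0.0, 0.0])
print(hack_position_step(q, J, ux))  # every joint nudged by 0.001
```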

What we want to do to implement operational space control here, then, is find the current $(x,y,z)$ position of the end-effector, calculate the difference between it and the target end-effector position, use that difference to generate the end-effector control signal $\textbf{u}_\textbf{x}$, get the Jacobian for the current state of the arm using the function above, find the set of joint torques to apply, approximate this control by generating a set of target joint angles to move to, and then repeat this whole loop until we're within some threshold of the target position. Whew.

So, a lot of steps, but pretty straightforward to implement. The method I wrote to do it looks something like:

def move_to_xyz(self, xyz_d):
    """
    np.array xyz_d: 3D target (x_d, y_d, z_d)
    """
    count = 0
    while True:
        count += 1
        # get control signal in 3D space
        xyz = self.calc_xyz()
        delta_xyz = xyz_d - xyz
        ux = self.kp * delta_xyz

        # transform to joint space
        J = self.calc_jacobian()
        u = np.dot(J.T, ux)

        # target joint angles are current + u (scaled)
        self.q[...] += u * .001
        self.robot.move(np.asarray(self.q.copy(), 'float32'))

        if np.sqrt(np.sum(delta_xyz**2)) < .1 or count > 1e4:
            break


And that is it! We have successfully hacked together a system that can perform operational space control of a 6DOF robot arm. Here is a very choppy video of it moving around to some target points in a grid on a cube. Granted, I had to drop a lot of frames from the video to bring its size down to something close to reasonable, but you can still see that it moves to target locations super fast!

Alright this is sweet, but we’re not done yet. We don’t want to have to tell the arm where to move ourselves. Instead we’d like the robot to perform target tracking for some target LED we’re moving around, because that’s way more fun and interactive. To do this, we’re going to use spiking cameras! So stay tuned, we’ll talk about what the hell spiking cameras are and how to use them for a super quick-to-setup and foolproof target tracking system in the next exciting post!

## 17 thoughts on “Operational space control of 6DOF robot arm with spiking cameras part 2: Deriving the Jacobian”

1. […] In the main part of the code, we create another Nengo network. We call this one model because that’s convention for the top-level network in Nengo. We then create two networks using the eyeNet function to hook up to the two cameras. At this point we create a node called eyes, and the role of this node is simply to amalgamate the information from the two cameras from and into . This node is then hooked up to another node called armNode, and all armNode does is call the robot arm’s move_to_xyz function, which we defined in the last post. […]

2. Mike Smorto says:

I came across these posts as I was trying to understand inverse kinematics and I have to tell you it is the clearest explanation I have found. Kudos to you Travis. Wish I had the SymPy lib before, it would have saved me a lot of time 🙂

Anyway, I do have a couple of questions for you that maybe you can help me with on your code:
1. What is the kp term at line 11 used for and what is the value you using?
2. You state that the u term is scaled and it looks like the scale factor is 0.001? Is this the same as the alpha you have in the associated equation q + alpha*u?

Mike

• travisdewolf says:

Hi Mike,

thanks! The kp term is just the proportional gain term for the control signal (https://en.wikipedia.org/wiki/PID_controller), I just used 1 here, actually. But the idea is just that you scale it up to move to your target faster. And yup the .001 is the alpha gain term in the above equation!

3. Mike Smorto says:

Hi Travis

I have another question for you. In the SymPy example you have:
# position of the end-effector relative to joint axes 6 (right at the origin)
x = sp.Matrix([0,0,0,1])

can you tell me what the components mean in the Matrix.

I converted your sample to Processing (JAVA based) and it does converge to the correct position but the angles are way off and am starting the debug process.

Thanks
Mike

• travisdewolf says:

Hi Mike,

yup those are just the (x,y,z) coordinates of the point of interest in the 6th joint reference frame, set up as a homogeneous transform (http://www.springer.com/cda/content/document/cda_downloaddocument/9789048137756-c2.pdf?SGWID=0-0-45-1123955-p173940737). Hope the debugging goes quick! I’m trying to find the manual for you for the little robot arm, it’s a little tricky to track down it turns out.

• Mike Smorto says:

Hi Travis.

Got it all debugged and works like a charm. As usual it was user error (a few typos). Will be adding a routine for constraints on the joints. Thanks for your great work. Was driving me crazy until I found your posts and simple code.

Mike

• travisdewolf says:

Oh great! Glad to hear. If you could let me know how the joint constraints go I’d be interested to see the results! 🙂

4. CyberMerln says:

Can you tell me what the DH parameters are for the Veo26a. I can not seem to find any information on the web on the veo26a.

Thanks
Mike

• travisdewolf says:

Hi Mike,

I checked again and it’s actually a “VE026A DENSO training robot”, which is much more searchable on Google! I found the company page but didn’t see the specs in their archive, so i’ve sent them an email requesting info, i’ll let you know if I hear back!

• travisdewolf says:

They just wrote me back and sent along the PDF, i’ve put it up on my github. Here’s the link! https://github.com/studywolf/blog/blob/master/VE026a/Academic%20Robot%20Leaflet.pdf

5. Fernando Figueiredo says:

Hello Travis,
I’ve been learning the process of generating and using the Jacobian matrix in non Cartesian robots, and you have made some easy to understand posts about the subject! thank you for it!!!
I just didn’t understand the last step… When you calculate a scalar value u and add it with a factor to each coordinate of q (self.q[…] += u * .001), doesn’t this just equally increment the angle of each joint? Shouldn’t we be using a J^(-1) for reverse kinematics?

Once again, thank you for making your work public!!!
Fernando.

• travisdewolf says:

Hi Fernando,

oop actually u is a vector (u = np.dot(J.T, ux), where ux is the desired forces to apply to the hand)! And we’re using vector transpose control to approximate the joint torques required to apply the desired forces to the hand. You can find out more information on that in some of my previous posts: https://studywolf.wordpress.com/2013/09/02/robot-control-jacobians-velocity-and-force/

So it’s definitely a hacked together thing, rather than using reverse kinematics to take a desired hand position and find the joint angles that get us there we’re taking a desired hand force, and transforming that into a desired joint torque. We then simulate the application of this joint torque by adding u * small_gain_value to the current position.

• Fernando Figueiredo says:

Hi Travis,
It definitely is a hacked solution 🙂 I guess I’ll try to understand it later on…
Anyway at least I understood that u is a vector :)!
By the way I guess I finally got to understand the modeling of a 6 DoF robotic arm and I kind of documented it in a blog, please check it out, you are referenced there and I’m sure you will recognize some of your work, I hope you don’t mind that. Your source code for SymPy and the position direct and inverse calculations where my starting point, then I went on and added the orientation calculations. Thank you.

http://dagmarobotix.blogspot.pt/2016/09/6-dof-robotic-arm-modelation_81.html

• travisdewolf says:

Oh neat! I appreciate the mention, it looks like a great post! I’m looking forward to reading it!

6. Jundith Alterado says:

Hi Travis,
I just have a question, where do I input the values for my x, y and z positions? And also what method is used in the calc_xyz()? Thank you for making your work available for public.

Best Regards,
Jundith

• travisdewolf says:

Hi Jundith, you'll input the target x,y,z locations into a task space PD controller, which generates $\textbf{u}_\textbf{x}$: $\textbf{u}_\textbf{x} = k_p * (\textbf{x}^* - \textbf{x}) - k_v * \dot{\textbf{x}}$

and calc_xyz() will be a function that uses the transform matrices given above and the current set of joint angles to work out what the (x,y,z) position of the hand is. You can read more about how that works in this post on transformation matrices.