
Operational space control of 6DOF robot arm with spiking cameras part 3: Tracking a target using spiking cameras

Alright. Previously we got our arm all set up to perform operational space control, accepting commands through Python. In this post we’re going to set it up with a set of spiking cameras for eyes, train it to learn the mapping between camera coordinates and end-effector coordinates, and have it track an LED target.

What is a spiking camera?

Good question! Spiking cameras are awesome, and they come from Dr. Jorg Conradt’s lab. Basically what they do is return you information about movement from the environment. They’re event-driven, instead of clock-driven like most hardware, which means that they have no internal clock that’s dictating when they send information (i.e. they’re asynchronous). They send information out as soon as they receive it. Additionally, they only send out information about the part of the image that has changed. This means that they have super fast response times and their output bandwidth is really low. Dr. Terry Stewart of our lab has written a bunch of code that can be used for interfacing with spiking cameras, which can all be found up on his GitHub.

Let’s use his code to see through a spiking camera’s eye. After cloning his repo and running python setup.py you can plug in a spiking camera through USB, and with the following code have a Matplotlib figure pop-up with the camera output:

import nstbot
import nstbot.connection
import time

eye = nstbot.RetinaBot()
eye.connect(nstbot.connection.Serial('/dev/ttyUSB0', baud=4000000))

time.sleep(1)

eye.retina(True)
eye.show_image()

while True:
    time.sleep(1)

The important parts here are the creation of an instance of the RetinaBot, connecting it to the proper USB port, and calling the show_image function. Pretty easy, right? Here’s some example output, this is me waving my hand and snapping my fingers:
[animation: spikingcameratest]

How cool is that? Now, you may be wondering how or why we’re going to use a spiking camera instead of a regular camera. The main reason that I’m using it here is because it makes tracking targets super easy. We just set up an LED that blinks at say 100Hz, and then we look for that frequency in the spiking camera output by recording the rate of change of each of the pixels and averaging over all pixel locations changing at the target frequency. So, to do this with the above code we simply add

eye.track_frequencies(freqs=[100])

And now we can track the location of an LED blinking at 100Hz! The visualization code places a blue dot at the estimated target location, and this all looks like:
[animation: tracking-100hz]
Alright! Easily decoded target location complete.

Transforming between camera coordinates and end-effector coordinates

Now that we have a system that can track a target location, we need to transform that position information into end-effector coordinates for the arm to move to. There are a few ways to go about this. One is to very carefully position the camera, measure the distances between it and the robot’s origin reference frame, and work through the trig etc etc. Another, much less pain-in-the-neck way is to instead record some sample points of the robot end-effector at different positions in both end-effector and camera coordinates, and then use a function approximator to generalize over the rest of space.

We’ll do the latter, because it’s exactly the kind of thing that neurons are great for. We have some weird function, and we want to learn to approximate it. Populations of neurons are awesome function approximators. Think of all the crazy mappings your brain learns. To perform function approximation with neurons we’re going to use the Neural Engineering Framework (NEF). If you’re not familiar with the NEF, the basic idea is to use the response curves of neurons as a big set of basis functions to decode some signal in some vector space. So we look at the responses of the neurons in the population as we vary our input signal, and then determine a set of decoders (using least-squares or somesuch) that specify the contribution of each neuron to the different dimensions of the function we want to approximate.
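To give a rough sense of what solving for decoders looks like, here’s a minimal sketch with made-up data (this is just the gist of the least-squares solve, not how Nengo implements it internally):

import numpy as np

# A holds the activities of 50 neurons sampled at 100 input points, and
# targets holds the value of the function we want to decode at those same
# points -- both are placeholder data here
A = np.random.uniform(0, 200, size=(100, 50))
targets = np.random.uniform(-1, 1, size=(100, 2))

# least-squares solve for the decoders
decoders = np.linalg.lstsq(A, targets)[0]

# the decoded estimate of the function is activities times decoders
estimate = np.dot(A, decoders)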

Here’s how this is going to work.

  1. We’re going to attach the LED to the head of the robot,
  2. we specify a set of (x,y,z) coordinates that we send to the robot’s controller,
  3. when the robot moves to each point, we record the LED location from the camera as well as the end-effector’s (x,y,z) coordinate,
  4. we create a population of neurons that we train up to learn the mapping from camera locations to end-effector (x,y,z) locations,
  5. and we use this information to tell the robot where to move.

A detail that should be mentioned here is that a single camera only provides 2D output. To get a 3D location we’re going to use two separate cameras. One will provide (x,z) information, and the other will provide (y,z) information.

Once we’ve taped (expertly) the LED onto the robot arm, we can use the following script to generate the information we need to approximate the function transforming from camera to end-effector space:

import robot
from eye import Eye # this is just a spiking camera wrapper class

import numpy as np
import time

# connect to the spiking cameras
eye0 = Eye(port='/dev/ttyUSB2')
eye1 = Eye(port='/dev/ttyUSB1')
eyes = [eye0, eye1]
# connect to the robot
rob = robot.robotArm()

# define the range of values to test
min_x = -10.0
max_x = 10.0
x_interval = 5.0
min_y = -15.0
max_y = -5.0
y_interval = 5.0
min_z = 10.0
max_z = 20.0
z_interval = 5.0

x_space = np.arange(min_x, max_x, x_interval)
y_space = np.arange(min_y, max_y, y_interval)
z_space = np.arange(min_z, max_z, z_interval)

num_samples = 10 # how many camera samples to average over

try: 
    out_file0 = open('eye_map_0.csv', 'w')
    out_file1 = open('eye_map_1.csv', 'w')

    for x_val in x_space:
        for y_val in y_space:
            for z_val in z_space:

                # send the robot to the next point in the sample grid
                target = np.array([x_val, y_val, z_val])
                rob.move_to_xyz(target)
                time.sleep(2) # time for the robot to move

                # take a bunch of samples and average the input to get
                # the approximation of the LED in camera coordinates
                eye_data0 = np.zeros(2)
                for _ in range(num_samples):
                    eye_data0 += eye0.position(0)[:2]
                eye_data0 /= num_samples
                out_file0.write('%0.2f, %0.2f, %0.2f, %0.2f\n' %
                                (y_val, z_val, eye_data0[0], eye_data0[1]))

                eye_data1 = np.zeros(2)
                for _ in range(num_samples):
                    eye_data1 += eye1.position(0)[:2]
                eye_data1 /= num_samples
                out_file1.write('%0.2f, %0.2f, %0.2f, %0.2f\n' %
                                (x_val, z_val, eye_data1[0], eye_data1[1]))

    out_file0.close()
    out_file1.close()
except: 
    import sys
    import traceback
    traceback.print_exc(file=sys.stdout)
finally:
    rob.robot.disconnect()

This script connects to the cameras, defines some rectangle in end-effector space to sample, and then works through each of the points writing the data to file. The results of this code can be seen in the animation posted in part 2 of this series.

OK! So now we have all the information we need to train up our neural population. It’s worth noting that we’re only using 36 sample points to train up our neurons; I did this mostly because I didn’t want to wait around. You can of course use more, though, and the more sample points you have the more accurate your function approximation will be.

Implementing a controller using Nengo

The neural simulation software (which implements the NEF) that we’re going to be using to generate and train our neural population is called Nengo. It’s free for non-commercial use, and I highly recommend checking out the introduction and tutorials if you have any interest in neural modeling.

What we need to do now is generate two neural populations, one for each camera, each of which receives input from its spiking camera and transforms the target’s location information into end-effector coordinates. We will then combine the estimates from the two populations, and send that information out to the robot to tell it where to move. I’ll paste the code in here, and then we’ll step through it below.

from eye import Eye
import nengo
from nengo.utils.connection import target_function
import robot 

import numpy as np
import sys
import traceback

# connect to robot
rob = robot.robotArm()

model = nengo.Network()

try: 
    def eyeNet(port='/dev/ttyUSB0', filename='eye_map.csv', n_neurons=1000, 
               label='eye'):

        # connect to eye
        spiking_cam = Eye(port=port)

        # read in eval points and target output
        eval_points = []
        targets = []

        file_obj = open(filename, 'r')
        file_data = file_obj.readlines()
        for line in file_data:
            line_data = map(float, line.strip().split(','))
            targets.append(line_data[:2])
            eval_points.append(line_data[2:])
        file_obj.close()

        eval_points = np.array(eval_points)
        targets = np.array(targets)

        # create subnetwork for eye
        net = nengo.Network(label=label)
        with net:
            def eye_input(t): 
                return spiking_cam.position(0)[:2]
            net.input = nengo.Node(output=eye_input, size_out=2)
            net.map_ens = nengo.Ensemble(n_neurons, dimensions=2)
            net.output = nengo.Node(size_in=2)

            nengo.Connection(net.input, net.map_ens, synapse=None)
            nengo.Connection(net.map_ens, net.output, synapse=None,
                                    **target_function(eval_points, targets))

        return net

    with model: 
        # create network for spiking camera 0
        eye0 = eyeNet(port='/dev/ttyUSB2', filename='eye_map_0.csv', label='eye0')
        # create network for spiking camera 1 
        eye1 = eyeNet(port='/dev/ttyUSB1', filename='eye_map_1.csv', label='eye1')

        def eyes_func(t, yzxz):
            x = yzxz[2] # x coordinate coded from eye1
            y = yzxz[0] # y coordinate coded from eye0
            z = (yzxz[1] + yzxz[3]) / 2.0 # z coordinate average from eye0 and eye1
            return [x,y,z]
        eyes = nengo.Node(output=eyes_func, size_in=4)
        nengo.Connection(eye0.output, eyes[:2])
        nengo.Connection(eye1.output, eyes[2:])

        # create output node for sending instructions to arm
        def arm_func(t, x):
            if t < .05: return # don't move arm during startup (avoid transients)
            rob.move_to_xyz(np.array(x, dtype='float32'))
        armNode = nengo.Node(output=arm_func, size_in=3, size_out=0)
        nengo.Connection(eyes, armNode)

    sim = nengo.Simulator(model)
    sim.run(10, progress_bar=False)

except: 
    traceback.print_exc(file=sys.stdout)
finally:
    print 'disconnecting'
    rob.robot.disconnect()

The first thing we’re doing is defining a function (eyeNet) to create our neural population that takes input from a spiking camera and decodes out an end-effector location. In here, we read in from the file the information we just recorded about the camera positions that will serve as the input signal to the neurons (eval_points) and the corresponding set of function outputs (targets). We create a Nengo network, net, and then a couple of nodes for connecting the input (net.input) and projecting the output (net.output). The population of neurons that we’ll use to approximate our function is called net.map_ens. To specify the function we want to approximate using the eval_points and targets arrays, we create a connection from net.map_ens to net.output and use **target_function(eval_points, targets). So this is probably a little weird to parse if you haven’t used Nengo before, but hopefully it’s clear enough that you can get the gist of what’s going on.

In the main part of the code, we create another Nengo network. We call this one model because that’s convention for the top-level network in Nengo. We then create two networks using the eyeNet function to hook up to the two cameras. At this point we create a node called eyes, and the role of this node is simply to amalgamate the information from the two cameras from (x,z) and (y,z) into (x,y,z). This node is then hooked up to another node called armNode, and all armNode does is call the robot arm’s move_to_xyz function, which we defined in the last post.

Finally, we create a Simulator from model, which compiles the neural network we just specified above, and we run the simulation. The result of all of this then looks something like the following:

[animation: the arm tracking the LED target]

And there we go! Project complete! We have a controller for a 6DOF arm that uses spiking cameras to train up a neural population and track an LED, and that requires almost no set up time. I gave a demo of this at the end of the summer school, and there’s no real positioning of the cameras relative to the arm required: you just have to tape the cameras up somewhere, run the training script, and go!

Future work

From here there are a bunch of fun ways to go about extending this. We could add another LED blinking at a different frequency that the arm needs to avoid, using an obstacle avoidance algorithm like the one in this post, add in another dimension of the task involving the gripper, implement a null-space controller to keep the arm near resting joint angles as it tracks the target, and on and on!

Another thing that I’ve looked at is including learning on the system to fine tune our function approximation online. As is, the controller is able to extrapolate and move the arm to target locations that are outside of the range of space sampled during training, but it’s not super accurate. It would be much better to be constantly refining the estimate using learning. I was able to implement a basic version that works, but getting the learning and the tracking running at the same time turns out to be a bit trickier, so I haven’t had the chance to get it all running yet. Hopefully there will be some more down-time in the near future, and I’ll be able to finish implementing it.

For now, though, we still have a pretty neat target tracker for our robot arm!


Operational space control of 6DOF robot arm with spiking cameras part 2: Deriving the Jacobian

In the previous exciting post in this series I outlined the project, which is in the title, and we worked through getting access to the arm through Python. The next step was deriving the Jacobian, and that’s what we’re going to be talking about in this post!

Alright.
This was a time I was very glad to have a previous post talking about generating transformation matrices, because deriving the Jacobian for a 6DOF arm in 3D space comes off as a little daunting when you’re used to 3DOF in 2D space, and I needed a reminder of the derivation process. The first step here was finding out which motors were what, so I went through and found out how each motor moved with something like the following code:

for ii in range(7):
    target_angles = np.zeros(7, dtype='float32')
    target_angles[ii] = np.pi / 4.0
    rob.move(target_angles)
    time.sleep(1)

and I found that the robot is set up as shown in the figures below:

[figure: armangles]
[figure: armlengths]
This is me trying my hand at making things clearer using Inkscape; hopefully it’s worked. Displayed are the first 6 joints and their angles of rotation, q_0 through q_5. The 7th joint, q_6, opens and closes the gripper, so we’re safe to ignore it in deriving our Jacobian. The arm segment lengths l_1, l_3, and l_5 are named based on the nearest joint angles (which makes for easier reading in the Jacobian derivation).

Find the transformation matrix from end-effector to origin

So first things first, let’s find the transformation matrices. Our first joint, q_0, rotates around the z axis, so the rotational part of our transformation matrix ^0_\textrm{org}\textbf{T} is

^0_\textrm{org}\textbf{R} = \left[ \begin{array}{ccc} \textrm{cos}(q_0) & -\textrm{sin}(q_0) & 0 \\ \textrm{sin}(q_0) & \textrm{cos}(q_0) & 0 \\ 0 & 0 & 1 \end{array} \right],

and q_0 and our origin frame of reference are on top of each other so we don’t need to account for translation, so our translation component of ^0_\textrm{org}\textbf{T} is

^0_\textrm{org}\textbf{D} = \left[ \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right].

Stacking these together to form our first transformation matrix we have

^0_\textrm{org}\textbf{T} = \left[ \begin{array}{cc} ^0_\textrm{org}\textbf{R} & ^0_\textrm{org}\textbf{D} \\ 0 & 1 \end{array} \right] = \left[ \begin{array}{cccc} \textrm{cos}(q_0) & -\textrm{sin}(q_0) & 0 & 0\\ \textrm{sin}(q_0) & \textrm{cos}(q_0) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right] .

So now we are able to convert a position in 3D space from the reference frame of joint q_0 back to our origin frame of reference. Let’s keep going.

Joint q_1 rotates around the x axis, and there is a translation along the arm segment l_1. Our transformation matrix looks like

^1_0\textbf{T} = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & \textrm{cos}(q_1) & -\textrm{sin}(q_1) & l_1 \textrm{cos}(q_1) \\ 0 & \textrm{sin}(q_1) & \textrm{cos}(q_1) & l_1 \textrm{sin}(q_1) \\ 0 & 0 & 0 & 1 \end{array} \right] .

Joint q_2 also rotates around the x axis, but there is no translation from q_2 to q_3. So our transformation matrix looks like

^2_1\textbf{T} = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & \textrm{cos}(q_2) & -\textrm{sin}(q_2) & 0 \\ 0 & \textrm{sin}(q_2) & \textrm{cos}(q_2) & 0 \\ 0 & 0 & 0 & 1 \end{array} \right] .

The next transformation matrix is a little tricky, because you might be tempted to say that it’s rotating around the z axis, but actually it’s rotating around the y axis. This is determined by where q_3 is mounted relative to q_2. If it was mounted at 90 degrees from q_2 then it would be rotating around the z axis, but it’s not. For translation, there’s a translation along the y axis up to the next joint, so all in all the transformation matrix looks like:

^3_2\textbf{T} = \left[ \begin{array}{cccc} \textrm{cos}(q_3) & 0 & -\textrm{sin}(q_3) & 0\\ 0 & 1 & 0 & l_3 \\ \textrm{sin}(q_3) & 0 & \textrm{cos}(q_3) & 0\\ 0 & 0 & 0 & 1 \end{array} \right] .

And then the transformation matrices for coming from q_4 to q_3 and q_5 to q_4 are the same as the previous set, so we have

^4_3\textbf{T} = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & \textrm{cos}(q_4) & -\textrm{sin}(q_4) & 0 \\ 0 & \textrm{sin}(q_4) & \textrm{cos}(q_4) & 0 \\ 0 & 0 & 0 & 1 \end{array} \right] .

and

^5_4\textbf{T} = \left[ \begin{array}{cccc} \textrm{cos}(q_5) & 0 & -\textrm{sin}(q_5) & 0 \\ 0 & 1 & 0 & l_5 \\ \textrm{sin}(q_5) & 0 & \textrm{cos}(q_5) & 0\\ 0 & 0 & 0 & 1 \end{array} \right] .

Alright! Now that we have all of the transformation matrices, we can put them together to get the transformation from end-effector coordinates to our reference frame coordinates!

^\textrm{ee}_\textrm{org}\textbf{T} = ^0_\textrm{org}\textbf{T} \; ^1_0\textbf{T} \; ^2_1\textbf{T} \; ^3_2\textbf{T} \; ^4_3\textbf{T} \; ^5_4\textbf{T}.

At this point I went and tested this with some sample points to make sure that everything was being transformed properly, but we won’t go through that here.
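For reference, that kind of sanity check only takes a few lines of NumPy; here’s a sketch (the segment lengths below are placeholder values, not the real arm’s measurements):

import numpy as np

def forward_kinematics(q, l1=1.0, l3=1.0, l5=1.0):
    # build the six transformation matrices above numerically and
    # return the (x,y,z) position of the end-effector
    c, s = np.cos, np.sin
    Torg0 = np.array([[c(q[0]), -s(q[0]), 0, 0],
                      [s(q[0]),  c(q[0]), 0, 0],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]])
    T01 = np.array([[1, 0, 0, 0],
                    [0, c(q[1]), -s(q[1]), l1*c(q[1])],
                    [0, s(q[1]),  c(q[1]), l1*s(q[1])],
                    [0, 0, 0, 1]])
    T12 = np.array([[1, 0, 0, 0],
                    [0, c(q[2]), -s(q[2]), 0],
                    [0, s(q[2]),  c(q[2]), 0],
                    [0, 0, 0, 1]])
    T23 = np.array([[c(q[3]), 0, -s(q[3]), 0],
                    [0, 1, 0, l3],
                    [s(q[3]), 0,  c(q[3]), 0],
                    [0, 0, 0, 1]])
    T34 = np.array([[1, 0, 0, 0],
                    [0, c(q[4]), -s(q[4]), 0],
                    [0, s(q[4]),  c(q[4]), 0],
                    [0, 0, 0, 1]])
    T45 = np.array([[c(q[5]), 0, -s(q[5]), 0],
                    [0, 1, 0, l5],
                    [s(q[5]), 0,  c(q[5]), 0],
                    [0, 0, 0, 1]])
    T = Torg0.dot(T01).dot(T12).dot(T23).dot(T34).dot(T45)
    return T.dot(np.array([0, 0, 0, 1]))[:3]

# with all joint angles at zero the end-effector should sit straight out
# along the y axis at a distance of l1 + l3 + l5
print(forward_kinematics(np.zeros(6)))  # approximately [0, 3, 0]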

Calculate the derivative of the transform with respect to each joint

The next step in calculating the Jacobian is getting the derivative of ^\textrm{ee}_\textrm{org}T. This could be a big ol’ headache to do by hand, OR we could use SymPy, the symbolic computation package for Python. Which is exactly what we’ll do. So after a quick

sudo pip install sympy

I wrote up the following script to perform the derivation for us

import sympy as sp

def calc_transform():
    # set up our joint angle symbols (6th angle doesn't affect any kinematics)
    q = [sp.Symbol('q0'), sp.Symbol('q1'), sp.Symbol('q2'), sp.Symbol('q3'),
            sp.Symbol('q4'), sp.Symbol('q5')]
    # set up our arm segment length symbols
    l1 = sp.Symbol('l1')
    l3 = sp.Symbol('l3')
    l5 = sp.Symbol('l5')

    Torg0 = sp.Matrix([[sp.cos(q[0]), -sp.sin(q[0]), 0, 0,],
                       [sp.sin(q[0]), sp.cos(q[0]), 0, 0],
                       [0, 0, 1, 0],
                       [0, 0, 0, 1]])

    T01 = sp.Matrix([[1, 0, 0, 0],
                     [0, sp.cos(q[1]), -sp.sin(q[1]), l1*sp.cos(q[1])],
                     [0, sp.sin(q[1]), sp.cos(q[1]), l1*sp.sin(q[1])],
                     [0, 0, 0, 1]])

    T12 = sp.Matrix([[1, 0, 0, 0],
                     [0, sp.cos(q[2]), -sp.sin(q[2]), 0],
                     [0, sp.sin(q[2]), sp.cos(q[2]), 0],
                     [0, 0, 0, 1]])

    T23 = sp.Matrix([[sp.cos(q[3]), 0, sp.sin(q[3]), 0],
                     [0, 1, 0, l3],
                     [-sp.sin(q[3]), 0, sp.cos(q[3]), 0],
                     [0, 0, 0, 1]])

    T34 = sp.Matrix([[1, 0, 0, 0],
                     [0, sp.cos(q[4]), -sp.sin(q[4]), 0],
                     [0, sp.sin(q[4]), sp.cos(q[4]), 0],
                     [0, 0, 0, 1]])

    T45 = sp.Matrix([[sp.cos(q[5]), 0, sp.sin(q[5]), 0],
                     [0, 1, 0, l5],
                     [-sp.sin(q[5]), 0, sp.cos(q[5]), 0],
                     [0, 0, 0, 1]])

    T = Torg0 * T01 * T12 * T23 * T34 * T45

    # position of the end-effector relative to joint axes 6 (right at the origin)
    x = sp.Matrix([0,0,0,1])

    Tx = T * x

    for ii in range(6):
        print q[ii]
        print sp.simplify(Tx[0].diff(q[ii]))
        print sp.simplify(Tx[1].diff(q[ii]))
        print sp.simplify(Tx[2].diff(q[ii]))

And then consolidated the output using some variable shorthand to write a function that accepts joint angles and generates the Jacobian:

def calc_jacobian(self, q):
    # this is a method on the arm class, so the segment lengths
    # self.l1, self.l3, and self.l5 used below are available
    J = np.zeros((3, 7))

    c0 = np.cos(q[0])
    s0 = np.sin(q[0])
    c1 = np.cos(q[1])
    s1 = np.sin(q[1])
    c3 = np.cos(q[3])
    s3 = np.sin(q[3])
    c4 = np.cos(q[4])
    s4 = np.sin(q[4])

    c12 = np.cos(q[1] + q[2])
    s12 = np.sin(q[1] + q[2])

    l1 = self.l1
    l3 = self.l3
    l5 = self.l5

    J[0,0] = -l1*c0*c1 - l3*c0*c12 - l5*((s0*s3 - s12*c0*c3)*s4 + c0*c4*c12)
    J[1,0] = -l1*s0*c1 - l3*s0*c12 + l5*((s0*s12*c3 + s3*c0)*s4 - s0*c4*c12)
    J[2,0] = 0

    J[0,1] = (l1*s1 + l3*s12 + l5*(s4*c3*c12 + s12*c4))*s0
    J[1,1] = -(l1*s1 + l3*s12 + l5*s4*c3*c12 + l5*s12*c4)*c0
    J[2,1] = l1*c1 + l3*c12 - l5*(s4*s12*c3 - c4*c12)

    J[0,2] = (l3*s12 + l5*(s4*c3*c12 + s12*c4))*s0
    J[1,2] = -(l3*s12 + l5*s4*c3*c12 + l5*s12*c4)*c0
    J[2,2] = l3*c12 - l5*(s4*s12*c3 - c4*c12)

    J[0,3] = -l5*(s0*s3*s12 - c0*c3)*s4
    J[1,3] = l5*(s0*c3 + s3*s12*c0)*s4
    J[2,3] = -l5*s3*s4*c12

    J[0,4] = l5*((s0*s12*c3 + s3*c0)*c4 + s0*s4*c12)
    J[1,4] = l5*((s0*s3 - s12*c0*c3)*c4 - s4*c0*c12)
    J[2,4] = -l5*(s4*s12 - c3*c4*c12)

    return J

Alright! Now we have our Jacobian! Really the only time consuming part here was calculating our end-effector to origin transformation matrix; generating the Jacobian was super easy using SymPy once we had that.

Hack position control using the Jacobian

Great! So now that we have our Jacobian we’ll be able to translate forces that we want to apply to the end-effector into joint torques that we want to apply to the arm motors. Since we can’t control applied force to the motors though, and have to pass in desired angle positions, we’re going to do a hack approximation. Let’s first transform our forces from end-effector space into a set of joint angle torques:

\textbf{u} = \textbf{J}^T \; \textbf{u}_\textbf{x}.

To approximate the control then we’re simply going to take the current set of joint angles (which we know because it’s whatever angles we last told the system to move to) and add a scaled down version of \textbf{u} to approximate applying torque that affects acceleration and then velocity.

\textbf{q}_\textrm{des} = \textbf{q} + \alpha \; \textbf{u},

where \alpha is the gain term. I used .001 here because it was nice and slow, so no crazy commands that could break the servos would be sent out before I could react and hit the cancel button.

What we want to do to implement operational space control here, then, is: find the current (x,y,z) position of the end-effector, calculate the difference between it and the target end-effector position, use that to generate the end-effector control signal u_x, get the Jacobian for the current state of the arm using the function above, find the set of joint torques to apply, approximate this control by generating a set of target joint angles to move to, and then repeat this whole loop until we’re within some threshold of the target position. Whew.

So, a lot of steps, but pretty straight forward to implement. The method I wrote to do it looks something like:

def move_to_xyz(self, xyz_d):
    """
    np.array xyz_d: 3D target (x_d, y_d, z_d)
    """
    count = 0
    while (1):
        count += 1
        # get control signal in 3D space
        xyz = self.calc_xyz()
        delta_xyz = xyz_d - xyz
        ux = self.kp * delta_xyz

        # transform to joint space
        J = self.calc_jacobian(self.q)
        u = np.dot(J.T, ux)

        # target joint angles are the current angles + u (scaled)
        self.q[...] += u * .001
        self.robot.move(np.asarray(self.q.copy(), 'float32'))

        if np.sqrt(np.sum(delta_xyz**2)) < .1 or count > 1e4:
            break

And that is it! We have successfully hacked together a system that can perform operational space control of a 6DOF robot arm. Here is a very choppy video of it moving around to some target points in a grid on a cube.
[animation: 6dof-operational-space]
So, granted I had to drop a lot of frames from the video to bring its size down to something close to reasonable, but still you can see that it moves to target locations super fast!

Alright this is sweet, but we’re not done yet. We don’t want to have to tell the arm where to move ourselves. Instead we’d like the robot to perform target tracking for some target LED we’re moving around, because that’s way more fun and interactive. To do this, we’re going to use spiking cameras! So stay tuned, we’ll talk about what the hell spiking cameras are and how to use them for a super quick-to-setup and foolproof target tracking system in the next exciting post!


Operational space control of 6DOF robot arm with spiking cameras part 1: Getting access to the arm through Python

From June 9th to the 19th we ran a summer school (brain camp) in our lab with people from all over to come and learn how to use our neural modeling software, Nengo, and then work on a project with others in the school and people from the lab. We had a bunch of fun hardware available, from neuromorphic hardware from the SpiNNaker group over at Manchester to lego robots on omni-wheels to spiking cameras (I’ll discuss what a spiking camera is exactly in part 3 of this series) and little robot arms. There were a bunch of awesome projects: people built neural models capable of performing a simplified version of the Wisconsin card sorting task that they then got running on the SpiNNaker boards, a neural controller built to move a robot leech, a spiking neuron reinforcement-learning system implemented on SpiNNaker with spiking cameras to control the lego robot that learned to move towards green LEDs and avoid red LEDs, and a bunch of others. If you’re interested in participating in these kinds of projects and learning more about this I highly suggest you apply to the summer school for next year!

I took the summer school as a chance to break from other projects and hack together a project. The idea was to take the robot arm, implement an operational space controller (i.e. find the Jacobian), and then use spiking cameras to track an LED and have the robot arm learn how to move to the target, no matter where the cameras were placed, by learning the relationship between where the target is in camera-centric coordinates and arm-centric coordinates. This broke down into several steps: 1) Get access to the arm through Python, 2) derive the Jacobian to implement operational space control, 3) sample state space to get an approximation of the camera-centric to arm-centric function, 4) implement the control system to track the target LED.

Here’s a picture of the set-up during step 3, training.
[photo: spiking-cameras-6DOF-arm]
So in the center is the 6DOF robot arm with a little LED attached to the head, and highlighted in orange circles are the two spiking cameras, expertly taped to the wall with office-grade scotch tape. You can also see the SpiNNaker board in the top left as a bonus, but I didn’t have enough time to involve it in this project.

I was originally going to write this all up as one post, because the first two parts are re-implementing previous posts, but even skimming through those steps it was getting long and I’m sure no one minds having a few different examples to look through for generating Cython code or deriving a Jacobian. So I’m going to break this into a few parts. Here in this post we’ll work through the first step (Cython) of our journey.

Get access to the arm through Python

The arm we had was the VE026A Denso training robot, on loan from Dr. Bryan Tripp of the neuromorphic robotics lab at UW. Previously an interface had been built up by one of Dr. Tripp’s summer students, written in C. C is great and all but Python is much easier to work with, and the rest of the software I wanted to work with… you know what, I’m done justifying it, Python is just great, so Python is what I wanted to use. The end.

So to get access to the arm in Python I used the great ol’ C++ wrapper hack described in a previous post. Looking at Murphy-the-summer-student’s C code there were basically three functions I needed access to:

// initialize threads, connect to robot
void start_robot(int *semid, int32_t *ctrlid, int32_t *robotid, float **angles)
// send a set of joint angles to the robot servos
void Robot_Execute_slvMove(int32_t robotid, float j1, float j2, float j3, float j4, float j5, float j6, float j7, float j8)
// kill threads, disconnect from robot
void stop_robot(int semid, int32_t ctrlid, int32_t robotid)

So I changed the extension of the file to ‘.cpp’ (I know, I know, I said this was a hack!), fixed some compiler errors that popped up, and then appended the following to the end of the file:

/* A class to contain all the information that needs to
be passed around between these functions, and can
encapsulate it and hide it from the Python interface.

Written by Travis DeWolf (June, 2015)
*/

class Robot {
    /* Very simple class, just stores the variables
     * needed for talking to the robot, and gives access
     * to the functions for moving it around */

    int semid;
    int32_t ctrlid;
    int32_t robotid;
    float* angles;

public:
    Robot();
    ~Robot();
    void connect();
    void move(float* angles_d);
    void disconnect();
};

Robot::Robot() { }

Robot::~Robot() { free(angles); }

/* Connect to the robot, get the ids and current joint angles */
void Robot::connect()
{
    start_robot(&semid, &ctrlid, &robotid, &angles);
    printf("%i %i %i", robotid, ctrlid, semid);
}

/* Move the robot to the specified angles */
/* float* angles: the target joint angles (radians) */
void Robot::move(float* angles)
{
    // convert from radians to degrees
    Robot_Execute_slvMove(robotid,
                          angles[0] * 180.0 / 3.14,
                          angles[1] * 180.0 / 3.14,
                          angles[2] * 180.0 / 3.14,
                          angles[3] * 180.0 / 3.14,
                          angles[4] * 180.0 / 3.14,
                          angles[5] * 180.0 / 3.14,
                          angles[6] * 180.0 / 3.14,
                          angles[7] * 180.0 / 3.14);
}

/* Disconnect from the robot */
void Robot::disconnect()
{
    stop_robot(semid, ctrlid, robotid);
}

int main()
{
    Robot robot = Robot();
    // connect to robot
    robot.connect();

    // move robot to some random target angles
    float target_angles[8] = {0, 3.14 / 2.0, 0, 0, 0, 0, 0, 0};
    robot.move(target_angles);

    sleep(1);

    // disconnect
    robot.disconnect();

    return 0;
}

So very simple class. Basically just wanted to create a set of functions to hide some of the unnecessary parameters from Python, do the conversion from radians to degrees (who works in degrees? honestly), and have a short little main function to test the creation of the class, and connection, movement, and disconnection of the robot. Like I said, there were a few compiler errors when switching from C to C++, but really that was the only thing that took any time on this part. A few casts and everything was gravy.

The next part was writing the Cython pyRobot.pyx file (I describe the steps involved in this in more detail in this post):

import numpy as np
cimport numpy as np

cdef extern from "bcap.cpp":
    cdef cppclass Robot:
        Robot()
        void connect()
        void move(float* angles)
        void disconnect()

cdef class pyRobot:
    cdef Robot* thisptr

    def __cinit__(self):
        self.thisptr = new Robot()

    def __dealloc__(self):
        del self.thisptr

    def connect(self):
        """
        Set up the connection to the robot.
        """
        self.thisptr.connect()

    def move(self, np.ndarray[float, mode="c"] angles):
        """
        Send a set of target joint angles out to the robot servos.
        param np.ndarray angles: 8D target angle vector (radians)
        """
        self.thisptr.move(&angles[0])

    def disconnect(self):
        """
        Disconnect from the robot.
        """
        self.thisptr.disconnect()

the setup.py file:

from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

setup(
    name = 'Demos',
    ext_modules=[
        Extension("test",
                  sources=["pyRobot.pyx"],
                  language="c++"),
        ],
    cmdclass = {'build_ext': build_ext},
)

and then compiling!

run setup.py build_ext -i

With all of this working, a nice test.so file was created, and it was now possible to connect to the robot in Python with

import numpy as np
import time

import test

rob = test.pyRobot()
rob.connect()
target_angles = np.array([0, np.pi/2, 0, np.pi/4, 0, 0, 0, 0], dtype='float32')
rob.move(target_angles)
time.sleep(1)
rob.disconnect()

In the above code we’re instantiating the pyRobot class, connecting to the robot, defining a set of target angles and telling the robot to move there, waiting for 1 second to give the robot time to actually move, and then disconnecting from the robot. Upon connection we have to pass in a set of joint angles for the servos, so you see the robot arm jerk into position and then move to the target set of joint angles. It looks something exactly like this:

[animation: 6dof-connect-move]

Neat, phase 1 complete.

At the end of phase 1 we are able to connect to the robot arm through Python and send commands in terms of joint angles. But we don’t want to send commands in terms of joint angles, we want to just specify the end-effector position and have the robot work out the angles! I’ve implemented an inverse kinematics solver using constrained optimization before for a 3-link planar arm, but we’re not going to go that route. Find out what we’ll do by joining us next time! or by remembering what I said we’d do at the beginning of this post.


Dynamic movement primitives part 2: Controlling end-effector trajectories

The dynamic movement primitive (DMP) framework was designed for trajectory control. It so happens that in previous posts we’ve built up to having several arm simulations that are ripe for throwing a trajectory controller on top, and that’s what we’ll do in this post. The system that we will be controlling here is the 3 link arm model with an operational space controller (OSC) that translates end-effector forces into joint torques. The DMPs here will be controlling the (x,y) trajectory of the hand, and the OSC will take care of turning the desired hand forces into torques that can be applied to the arm. All of the code used to generate the animations throughout this post can of course be found up on my github (to run it, play around with variants of python run.py arm3 dmp write).

Controlling a 3 link arm with DMPs
We have our 3 link arm and its OSC controller; this whole setup we’ll collectively refer to as ‘the plant’ throughout this post. We are going to pass in some (x,y) force signal to the plant, and the plant will carry it out. Additionally we’ll get a feedback signal with the (x,y) position of the hand. At the same time, we also have a DMP system that’s doing its own thing, tracing out a desired trajectory in (x,y) space. We have to tie these two systems together.

Doing this is easy: we’ll generate the control signal for the plant from our DMP system simply by measuring the difference between the state of the DMP system and the plant state, and use that difference to drive the plant to the state of the DMP system. Formally,

u = k_p(y_{\textrm{DMP}} - y)

where y_{\textrm{DMP}} is the state of the DMP system, y is the state of the plant, and k_p is the position error gain term.

Once we have this, we just go ahead and step our DMP system forward and make sure the gain values on the control signal are high enough that the plant follows the DMP’s trajectory. And that’s pretty much it, just run the DMP system to the end of the trajectory and then stop your simulation.
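As a rough sketch of what that loop looks like, here’s a toy version with a unit point mass standing in for the plant and a straight line standing in for the DMP trajectory (the real code uses the OSC and DMP implementations from the repo; I’ve also added a damping term here, which the OSC’s velocity gain normally provides):

import numpy as np

dt = 0.001
y_dmp_track = np.linspace(0.0, 1.0, 1000)  # stand-in for the DMP output
y = 0.0   # plant position
dy = 0.0  # plant velocity
kp = 100.0
kv = np.sqrt(kp)

for y_dmp in y_dmp_track:
    # drive the plant toward the current DMP state
    u = kp * (y_dmp - y) - kv * dy
    # unit point mass: force == acceleration
    dy += u * dt
    y += dy * dt

print(y)  # ends up near 1.0, trailing the DMP state slightly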

To give a demonstration of DMP control I’ve set up the DMP system to follow the same number trajectories that the SPAUN arm followed. As you can see the combination of DMPs and operational space control is much more effective than my previous implementation.

Incorporating system feedback

One of the issues in implementing the control above is that we have to be careful about how quickly the DMP trajectory moves, because while the DMP system isn’t constrained by any physical dynamics, the plant is. Depending on the size of the movement the DMP trajectory may be moving a foot a second or an inch a second. You can see above that the arm doesn’t fully draw out the desired trajectories in places where the DMP system moved too quickly in and out of sharp corners. The only way to remedy this without feedback is to have the DMP system move more slowly throughout the entire trajectory. What would be nice, instead, would be to just say ‘go as fast as you can, as long as the plant state is within some threshold distance of you’, and this is where system feedback comes in.

It’s actually very straightforward to implement this using system feedback: if the plant state drifts away from the state of the DMPs, slow down the execution speed of the DMP to allow the plant time to catch up. To do this we just have to multiply the DMP timestep dt by a new term:

1 / (1 + \alpha_{\textrm{err}}(y_{\textrm{plant}} - y_{\textrm{DMP}})).

All this new term does is slow down the canonical system when there’s an error; you can think of it as a scaling on the time step. Additionally, the sensitivity of this term can be modulated with the scaling term \alpha_{\textrm{err}} on the difference between the plant and DMP states.
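In code this is just a rescaling of the timestep, something like the following sketch (the numbers are placeholders, and for multi-dimensional systems I’m taking the norm of the difference):

import numpy as np

dt = 0.001
alpha_err = 100.0
y_plant = np.array([0.50, 0.20])  # current hand position (placeholder)
y_dmp = np.array([0.60, 0.25])    # current DMP state (placeholder)

error = np.linalg.norm(y_plant - y_dmp)
dt_scaled = dt / (1.0 + alpha_err * error)  # bigger error --> slower DMP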

We can get an idea of how this affects the system by looking at the dynamics of the canonical system when an error term is introduced mid-run:

[plot: CSwitherror]
When the error is introduced the dynamics of the system slow down, great! Let’s look at an example comparing execution with and without this feedback term. Here’s the system drawing the number 3 without any feedback incorporation:

and here’s the system drawing the number 3 with the feedback term included:


These two examples are a pretty good case for including the feedback term into your DMP system. You can still see in the second case that the specified trajectory isn’t traced out exactly, but if that’s what you’re shooting for you can just crank up the \alpha_{\textrm{err}} to make the DMP timestep really slow down whenever the DMP gets ahead of the plant at all.

Interpolating trajectories for imitation

When imitating trajectories there can be some issues with having enough sample points and fitting them to the canonical system’s timeframe. They’re not difficult to get around, but I thought I would go over what I did here. The approach I took was to always run the canonical system for 1 second, and whenever a trajectory that should be imitated is passed in, to scale the x-axis of the trajectory such that it’s between 0 and 1. Then I shove it into an interpolator and use the resulting function to generate an abundance of nicely spaced sample points for the DMP imitator to match. Here’s the code for that:

# generate function to interpolate the desired trajectory
import scipy.interpolate

path = np.zeros((self.dmps, self.timesteps))
x = np.linspace(0, self.cs.run_time, y_des.shape[1])

for d in range(self.dmps):

    # create interpolation function
    path_gen = scipy.interpolate.interp1d(x, y_des[d])

    # create evenly spaced sample points of desired trajectory
    for t in range(self.timesteps):
        path[d, t] = path_gen(t * self.dt)

y_des = path

Direct trajectory control vs DMP based control

Now, using the above described interpolation function we can just directly use its output to guide our system. And, in fact, when we do this we get very precise control of the end-effector, more precise than the DMP control, as it happens. The reason for this is that our DMP system is approximating the desired trajectory with a set of basis functions, and some accuracy is being lost in this approximation.

So if we instead use the interpolation function to drive the plant we can get exactly the points that we specified. The times when this comes up especially are when the trajectories that you’re trying to imitate are especially complicated. There are ways to address this with DMPs by placing your basis functions more appropriately, but if you’re just looking for the exact replication of an input trajectory (as often people are) this is a simpler way to go. You can see the execution of this in the trace.py code up on my github. Here’s a comparison of a single word drawn using the interpolation function:

[figure: draw_word_traj]

and here’s the same word drawn using a DMP system with 1,000 basis functions per DOF:

[figure: draw_word_dmp]
We can see that just using the interpolation function here gives us the exact path that we specified, whereas using DMPs we have some error, and this error increases with the size of the desired trajectory. But! That’s for exactly following a given trajectory, which is often not the case. The strength of the DMP framework is that the trajectory is a dynamical system. This lets us do simple things to get really neat performance, like scale the trajectory spatially on the fly simply by changing the goal, rather than rescaling the entire trajectory:
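As a rough sketch of what that looks like in code, using the pydmps library mentioned below (hedging here: the exact interface may differ between versions):

import numpy as np
from pydmps.dmp_discrete import DMPs_discrete

# a simple 2D path to imitate
path = np.vstack([np.linspace(0, 1, 100),
                  np.sin(np.linspace(0, np.pi, 100))])

dmp = DMPs_discrete(n_dmps=2, n_bfs=100)
dmp.imitate_path(y_des=path)

y1, _, _ = dmp.rollout()  # original trajectory
dmp.goal *= 2.0           # rescale the movement just by moving the goal
y2, _, _ = dmp.rollout()  # same shape, stretched out toward the new goal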

Conclusions

Some basic examples of using DMPs to control the end-effector trajectory of an arm with operational space control were gone over here, and you can see that they work really nicely together. I like when things build like this. We also saw that the power of DMPs in this situation is in their generalizability, and not in exact reproduction of a given path. If I have a single complex trajectory that I only want the end-effector to follow once then I’m going to be better off just interpolating that trajectory and feeding the coordinates into the arm controller rather than go through the whole process of setting up the DMPs.

Drawing words, though, is just one basic example of using the DMP framework. It’s a very simple application and really doesn’t do justice to the flexibility and power of DMPs. Other example applications include things like playing ping pong. This is done by creating a desired trajectory showing the robot how to swing a ping pong paddle, and then using a vision system to track the current location of the incoming ping pong ball and changing the target of the movement to compensate dynamically. There’s also some really awesome stuff with object avoidance, which is implemented by adding another term with some simple dynamics to the DMP. As discussed here, basically you just have another system that moves you away from the object with a strength relative to your distance from the object. You can also use DMPs to control gain terms on your PD control signal, which is useful for things like object manipulation.
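To give a flavour of the obstacle avoidance idea, here’s a very rough stand-in for that extra term (not the formulation from the paper linked above, just a simple repulsion that grows as you approach the obstacle):

import numpy as np

def avoid_obstacle(y, obstacle, gain=100.0):
    # push the DMP state away from the obstacle, harder the closer we get
    d = y - obstacle
    return gain * d / (np.linalg.norm(d)**3 + 1e-5)

# this term just gets added to the DMP's acceleration at every timestep, e.g.
# ddy += avoid_obstacle(y, obstacle)
print(avoid_obstacle(np.array([0.0, 0.0]), np.array([0.1, 0.0])))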

And of course I haven’t touched on rhythmic DMPs or learning with DMPs at all, and those are both also really interesting topics! But this serves as a decent introduction to the whole area, which has been developed in the Schaal lab over the last decade or so. I recommend further reading with some of these papers if you’re interested, there are a ton of neat ways to apply the DMP framework! And, again, the code for everything here is up on my github in the control and pydmps repositories.


Robot control part 7: OSC of a 3-link arm

So we’ve done control for the 2-link arm, and control of the one link arm is trivial (where we control joint angle, or the x or y coordinate of the pendulum), so here I’ll just show an implementation of operational space control for a more interesting arm model, the 3-link arm model. The code can all be found up on my Github.

In theory there’s nothing different or more difficult about controlling a 3-link arm versus a 2-link arm. For the inertia matrix, what I ended up doing here is just jacking up all the values of the matrix to about 100, which causes the controller to way over control the arm, and you can see the torques are much larger than they would need to be if we had an accurate inertia matrix. But the result is the same super straight trajectories that we’ve come to expect from operational space control:

[animation: 3link]

It’s a little choppy because I cut out a bunch of frames to keep the gif size down. But you get the point, it works. And quite well!

Because this is a 3-link arm and our task space force signal is 2D, we have a redundant space of solutions, meaning that the task space control signal can be carried out in a number of ways. In other words, a null space exists for this controller. This means that we can throw another controller into our system to operate inside the null space of the first controller. We’ve already worked through all the math for this, so it’s straightforward to implement.

What kind of null space controller should we put in? Well, you may have noticed in the above animation that the arm goes through itself; here’s another example:

[animation: 3linknonull]

Often it’s desirable to avoid this (because of physics or whatever), so what we can do is add a secondary controller that works to keep the arm’s elbow and wrist near some comfortable default angles. When we do this we get the following:

[animation: 3linknull]

And there you have it! Operational space control of a three link arm with a secondary controller in the null space to try to keep the angles near their default / resting positions.

I also added mouse based control to the arm so it will try to follow your mouse when you move over the figure, which makes it pretty fun to explore the power of the controller. It’s interesting to see where the singularities become an issue, and how having a null space controller that’s operating in joint space can actually come to help the system move through those problem points more smoothly. Check it out! It’s all up on my Github.


Robot control part 5: Controlling in the null space

In the last post, I went through how to build an operational space controller. It was surprisingly easy after we’d worked through all the other posts. But maybe that was a little too easy for you. Maybe you want to do something more interesting, like implement more than one controller at the same time. In this post we’ll go through how to work inside the null space of a controller to implement several separate controllers simultaneously without interference.
Buckle up.

Null space forces

The last example comprises the basics of operational space control; describe the system, calculate the system dynamics, transform desired forces from an operational space to the generalized coordinates, and build the control signal to cancel out the undesired system dynamics. Basic operational space control works quite well, but it is not uncommon to have several control goals at once; such as `move the end-effector to this position’ (primary goal), and `keep the elbow raised high’ (secondary goal) in the control of a robot arm.

If the operational space can also serve as generalized coordinates, i.e. if the system state specified in operational space constrains all of the degrees of freedom of the robot, then there is nothing that can be done without affecting the performance of the primary controller. Controlling a two-link robot arm is one such case: the end-effector Cartesian space (chosen as the operational space) could also be a generalized coordinate system, because a specific (x,y) position fully constrains the position of the arm.

But often when using operational space control for more complex robots this is not the case. In these situations, the forces controlled in operational space have fewer dimensions than the robot has degrees of freedom, and so it is possible to accomplish the primary goal in a number of ways. The null space of this primary controller is the region of state space where there is a redundancy of solutions; the system can move in a number of ways and still not affect the completion of the goals of the primary controller. An example of this is all the different configurations the elbow can be in while a person moves their hand in a straight line. In these situations, a secondary controller can be created to operate in the null space of the primary controller, and the full control signal sent to the system is a sum of the primary control signal and a filtered version of the secondary control signal. In this section the derivation of the null-space filter will be worked through for a system with only a primary and secondary controller, but note that the process can be applied iteratively for systems with further controllers.

The filtering of the secondary control signal means that the secondary controller’s goals will only be accomplished if it is possible to do so without affecting the performance of the first controller. In other words, the secondary controller must operate in the null space of the first controller. Denote the primary operational space control signal, e.g. the control signal defined in the previous post, as \textbf{u}_{\textbf{x}} and the control signal from the secondary controller \textbf{u}_{\textrm{null}}. Define the force to apply to the system

\textbf{u} = \textbf{u}_{\textbf{x}} + (\textbf{I} - \textbf{J}_{ee}^T(\textbf{q}) \; \textbf{J}_{ee}^{T+}(\textbf{q})) \textbf{u}_{\textrm{null}},

where \textbf{J}_{ee}^{T+}(\textbf{q}) is the pseudo-inverse of \textbf{J}_{ee}^T(\textbf{q}).

Examining the filtering term that was added,

(\textbf{I} - \textbf{J}_{ee}^T(\textbf{q}) \textbf{J}_{ee}^{T+}(\textbf{q})) \textbf{u}_{\textrm{null}},

it can be seen that the Jacobian transpose multiplied by its pseudo-inverse will be 1’s all along the diagonal, except in the null space. This means that \textbf{u}_{\textrm{null}} is subtracted from itself everywhere that affects the operational space movement and is left to apply any arbitrary control signal in the null space of the primary controller.

Unfortunately, this initial set up does not adequately filter out the effects of forces that might be generated by the secondary controller. The Jacobian is defined as a relationship between the velocities of two spaces, and so operating in the null space defined by the Jacobian ensures that no velocities are applied in operational space, but the required filter must also prevent any accelerations from affecting movement in operational space. The standard Jacobian pseudo-inverse null space is a velocity null space, and so a filter built using it will allow forces affecting the system’s acceleration to still get through. What is required is a pseudo-inverse Jacobian defined to filter signals through an acceleration null space.

To acquire this acceleration filter, our control signal will be substituted into the equation for acceleration in the operational space, which, after cancelling out gravity effects with the control signal and removing the unmodeled dynamics, gives

\ddot{\textbf{x}} = \textbf{J}_{ee}(\textbf{q}) \textbf{M}^{-1}(\textbf{q}) [\textbf{J}_{ee}^T(\textbf{q}) \; \textbf{M}_{\textbf{x}_{ee}}(\textbf{q}) \; \ddot{\textbf{x}}_\textrm{des} - (\textbf{I} - \textbf{J}_{ee}^T(\textbf{q})\;\textbf{J}_{ee}^{T+}(\textbf{q}))\;\textbf{u}_{\textrm{null}}].

Rewriting this to separate the secondary controller into its own term

\ddot{\textbf{x}} = \textbf{J}_{ee}(\textbf{q}) \textbf{M}^{-1}(\textbf{q}) \textbf{J}_{ee}^T(\textbf{q}) \; \textbf{M}_{\textbf{x}_{ee}}(\textbf{q}) \; \ddot{\textbf{x}}_\textrm{des} - \textbf{J}_{ee}(\textbf{q}) \textbf{M}^{-1}(\textbf{q})[\textbf{I} - \textbf{J}_{ee}^T(\textbf{q})\;\textbf{J}_{ee}^{T+}(\textbf{q})]\;\textbf{u}_{\textrm{null}},

it becomes clear that to not cause any unwanted movement in operational space the second term must be zero.

There is only one free term left in the second term, and that is the pseudo-inverse. There are numerous different pseudo-inverses that can be chosen for a given situation, and here what is required is to engineer a pseudo-inverse such that the term multiplying \textbf{u}_{\textrm{null}} in the above operational space acceleration equation is guaranteed to go to zero.

\textbf{J}_{ee}(\textbf{q})\textbf{M}^{-1}(\textbf{q}) [\textbf{I} - \textbf{J}_{ee}^T (\textbf{q})\textbf{J}_{ee}^{T+}(\textbf{q})] \textbf{u}_{\textrm{null}} = \textbf{0},

this needs to be true for all \textbf{u}_{\textrm{null}}, so it can be removed,

\textbf{J}_{ee} (\textbf{q}) \; \textbf{M}^{-1}(\textbf{q}) [\textbf{I} - \textbf{J}_{ee}^T (\textbf{q}) \; \textbf{J}_{ee}^{T+} (\textbf{q})] = \textbf{0},

\textbf{J}_{ee}(\textbf{q}) \; \textbf{M}^{-1}(\textbf{q}) = \textbf{J}_{ee}(\textbf{q}) \; \textbf{M}^{-1}(\textbf{q}) \; \textbf{J}_{ee}^T(\textbf{q})\; \textbf{J}_{ee}^{T+}(\textbf{q}),

substituting in our inertia matrix for operational space, which defines

\textbf{J}_{ee} (\textbf{q}) \textbf{M}^{-1} (\textbf{q}) = \textbf{M}_{\textbf{x}_{ee}}^{-1} (\textbf{q}) \textbf{J}_{ee}^{T+} (\textbf{q}),

\textbf{J}_{ee}^{T+}(\textbf{q}) = \textbf{M}_{\textbf{x}_{ee}} (\textbf{q}) \; \textbf{J}_{ee}(\textbf{q}) \; \textbf{M}^{-1}(\textbf{q}).

This specific Jacobian inverse was presented in this 1987 paper by Dr. Oussama Khatib and is called the ‘dynamically consistent generalized inverse’. Using this pseudo-inverse guarantees that any signal coming from the secondary controller will not affect movement in the primary controller’s operational space. Just as a side-note, the name ‘pseudo-inverse’ is a bit of a misnomer here, since it doesn’t try to produce the identity when multiplied by the original Jacobian transpose, but hey. That’s what they’re calling it.
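In code, building this dynamically consistent generalized inverse and the corresponding null space filter looks something like the following sketch (J, M, and Mx stand in for the end-effector Jacobian, the joint-space inertia matrix, and the operational space inertia matrix at the current joint angles):

import numpy as np

def null_space_filter(J, M, Mx):
    # dynamically consistent generalized inverse of J^T
    Jbar_T = np.dot(Mx, np.dot(J, np.linalg.inv(M)))
    # (I - J^T Jbar^T): filters out anything that would cause
    # accelerations in operational space
    return np.eye(M.shape[0]) - np.dot(J.T, Jbar_T)

# the full control signal is then something like
# u = u_x + np.dot(null_space_filter(J, M, Mx), u_null)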

The null space filter cancels out the acceleration effects of forces in operational space from a signal that is being applied as part of the control system. But it can also be used to cancel out the effects of any unwanted signal that can be modeled. Given some undesirable force signal interfering with the system that can be effectively modeled, a null space filtering term can be implemented to cancel it out. The control signal in this case, with one primary operational space controller and a null space filter for the undesired force, looks like:

\textbf{u} = \textbf{J}^T_{ee}(\textbf{q}) \; \textbf{M}_\textbf{x}(\textbf{q}) \; \ddot{\textbf{x}}_\textrm{des} - \textbf{g}(\textbf{q}) - \textbf{J}^T_{ee}(\textbf{q}) \;\textbf{J}^{T+}_{ee}(\textbf{q}) \; \textbf{u}_{\textrm{undesired force}}.

We did it! This will now allow a high-priority operational space controller to execute without interference from a secondary controller operating in its null space to complete its own set of goals (when possible).

Example:

Given a three link arm (revolute-revolute-revolute) operating in the (x,z) plane, shown below:

[figure: rotation and distance2]

this example will construct the control system for a primary controller controlling the end-effector and a secondary controller working to keep the arm near its joint angles’ default resting positions.

Let the system state be \textbf{q} = [q_0, q_1, q_2]^T with default positions \textbf{q}^0 = \left[\frac{\pi}{3}, \frac{\pi}{4}, \frac{\pi}{4} \right]^T. The control signal of the secondary controller is the difference between the target state and the current system state

\textbf{u}_{\textrm{null}} = k_{p_{\textrm{null}}}(\textbf{q}^0 - \textbf{q}),

where k_{p_\textrm{null}} is a gain term.

Let the centres of mass be

\textrm{com}_0 = \left[ \begin{array}{c} \frac{1}{2}cos(q_0) \\ 0 \\ \frac{1}{2}sin(q_0) \end{array} \right], \;\;\;\; \textrm{com}_1 = \left[ \begin{array}{c} \frac{1}{4}cos(q_1) \\ 0 \\ \frac{1}{4}sin(q_1) \end{array} \right] \;\;\;\; \textrm{com}_2 = \left[ \begin{array}{c} \frac{1}{2}cos(q_2) \\ 0 \\ \frac{1}{4} sin (q_2) \end{array} \right],

the Jacobians for the COMs are

\textbf{J}_0(\textbf{q}) = \left[ \begin{array}{ccc} -\frac{1}{2} sin(q_0) & 0 & 0 \\ 0 & 0 & 0 \\ \frac{1}{2} cos(q_0) & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right],

\textbf{J}_1(\textbf{q}) = \left[ \begin{array}{ccc} -L_0sin(q_0) - \frac{1}{4}sin(q_{01}) & -\frac{1}{4}sin(q_{01}) & 0 \\ 0 & 0 & 0 \\ L_0 cos(q_0) + \frac{1}{4} cos(q_{01})& \frac{1}{4} cos(q_{01}) & 0 \\ 0 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 0 \end{array} \right]

\textbf{J}_2(\textbf{q}) = \left[ \begin{array}{ccc} -L_0sin(q_0) - L_1sin(q_{01}) - \frac{1}{2}sin(q_{012}) & -L_1sin(q_{01}) - \frac{1}{2}sin(q_{012}) & - \frac{1}{2}sin(q_{012}) \\ 0 & 0 & 0 \\ L_0 cos(q_0) + L_1 cos(q_{01}) + \frac{1}{4}cos(q_{012}) & L_1 cos(q_{01}) + \frac{1}{4} cos(q_{012}) & \frac{1}{4}cos(q_{012}) \\ 0 & 0 & 0 \\ 1 & 1 & 1 \\ 0 & 0 & 0 \end{array} \right].

The Jacobian for the end-effector of this three link arm is

\textbf{J}_{ee} = \left[ \begin{array}{ccc} -L_0 sin(q_0) - L_1 sin(q_{01}) - L_2 sin(q_{012}) & - L_1 sin(q_{01}) - L_2 sin(q_{012}) & - L_2 sin(q_{012}) \\ L_0 cos(q_0) + L_1 cos(q_{01}) + L_2 cos(q_{012}) & L_1 cos(q_{01}) + L_2 cos(q_{012}) & L_2 cos(q_{012}) \end{array} \right],

where q_{01} = q_0 + q_1 and q_{012} = q_0 + q_1 + q_2.

Taking the operational space control signal developed in the previous post,

\textbf{u} = \textbf{J}^T_{ee}(\textbf{q}) \; \textbf{M}_{\textbf{x}_{ee}}(\textbf{q}) [k_p (\textbf{x}_{\textrm{des}} - \textbf{x}) + k_v (\dot{\textbf{x}}_{\textrm{des}} - \dot{\textbf{x}})] - \textbf{g}(\textbf{q}),

where \textbf{M}_{\textbf{x}_{ee}}(\textbf{q}) was defined in the previous post, \textbf{g}(\textbf{q}) was defined two posts ago, and k_p and k_v are gain terms, usually set such that k_v = \sqrt{k_p}. Adding in the null space control signal and filter gives

\textbf{u} = \textbf{J}^T_{ee}(\textbf{q}) \; \textbf{M}_{\textbf{x}_{ee}}(\textbf{q}) [k_p (\textbf{x}_{\textrm{des}} - \textbf{x}) + k_v (\dot{\textbf{x}}_{\textrm{des}} - \dot{\textbf{x}})] - (\textbf{I} - \textbf{J}^T_{ee}(\textbf{q}) \textbf{J}^{T+}_{ee}(\textbf{q})) \textbf{u}_{\textrm{null}} - \textbf{g}(\textbf{q}),

where \textbf{J}^{T+}(\textbf{q}) is the dynamically consistent generalized inverse defined above, and \textbf{u}_{\textrm{null}} is our null space signal!
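
To make this concrete, here’s a minimal sketch of the whole control signal for this example in NumPy. The arguments J_ee, Mq, Mx, and g are placeholders for \textbf{J}_{ee}(\textbf{q}), \textbf{M}(\textbf{q}), \textbf{M}_{\textbf{x}_{ee}}(\textbf{q}), and \textbf{g}(\textbf{q}) evaluated at the current state (how you calculate them depends on your arm model), and the signs follow the control signal as written above:

import numpy as np

def null_space_control(q, x, dx, x_des, dx_des,
                       J_ee, Mq, Mx, g, q_default,
                       kp=100.0, kv=10.0, kp_null=10.0):
    # primary controller: PD in operational space
    u_task = np.dot(J_ee.T, np.dot(Mx, kp * (x_des - x) + kv * (dx_des - dx)))

    # secondary controller: pull the joints toward their default angles
    u_null = kp_null * (q_default - q)

    # dynamically consistent generalized inverse J^T+ = Mx J M^-1
    JTpinv = np.dot(Mx, np.dot(J_ee, np.linalg.inv(Mq)))

    # null space filter (I - J^T J^T+) applied to the secondary signal
    null_filter = np.eye(len(q)) - np.dot(J_ee.T, JTpinv)

    return u_task - np.dot(null_filter, u_null) - g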

Conclusions

It’s a lot of math, but when you start to get a feel for it, what’s really awesome is that this is it. We’re describing the whole system, so by working with these equations we can get a super effective controller, which is pretty cool, especially in comparison to other possible controllers.

Alright! We’ve now worked through all the basic theory for operational space control, so it’s time to get some implementations going.


Robot control part 4: Operational space control

In this post we’ll look at operational space control and how to derive the control equations. I’d like to mention again that these posts have all come about as a result of me reading and working through Samir Menon’s operational space control tutorial, where he works through an implementation example on a revolute-prismatic-prismatic robot arm.

Generalized coordinates vs operational space

The term generalized coordinates refers to a characterization of the system that uniquely defines its configuration. For example, if our robot has 7 degrees of freedom, then there are 7 state variables, such that when all these variables are given we can fully account for the position of the robot. In the previous posts of this series we’ve been describing robotic arms in joint space, and for these systems joint space is an example of generalized coordinates. This means that if we know the angles of all of the joints, we can draw out exactly what position that robot is in. An example of a coordinate system that does not uniquely define the configuration of a robotic arm would be one that describes only the x position of the end-effector.

So generalized coordinates tell us everything we need to know about where the robot is, that’s great. The problem with generalized coordinates, though, is that planning trajectories in this space for tasks that we’re interested in performing tends not to be straightforward. For example, if we have a robotic arm and we want to control the position of the end-effector, it’s not obvious what trajectory to specify for each of the arm’s joints to follow through joint space to produce the desired end-effector movement.

The idea behind operational space control is to abstract away from the generalized coordinates of the system and plan a trajectory in a coordinate system that is directly relevant to the task that we wish to perform. Going back to the common end-effector position control situation, we would like to operate our arm in 3D (x,y,z) Cartesian space. In this space, it’s obvious what trajectory to follow to move the end-effector between two positions (most of the time it will just be a straight line in each dimension). So our goal is to build a control system that lets us specify a trajectory in our task space and will transform this signal into generalized coordinates that it can then send out to the system for execution.

Operational space control of simple robot arm

Alright, we’re going to work through an example. The generalized coordinates for this example are going to be joint space, and the operational space is going to be the end-effector Cartesian coordinates relative to a reference frame attached to the base. Recycling the robot from the second post in this series, here’s the set up we’ll be working with:

RR robot arm

Once again, we’re going to need to find the Jacobians for the end-effector of the robot. Fortunately, we’ve already done this:

\textbf{J} = \left[ \begin{array}{cc} -L_0 sin(\theta_0) - L_1 sin(\theta_0 + \theta_1) & - L_1 sin(\theta_0 + \theta_1) \\ L_0 cos(\theta_0) + L_1 cos(\theta_0 + \theta_1) & L_1 cos(\theta_0 + \theta_1) \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 1 & 1 \end{array} \right]

Great! So now that we have \textbf{J}, we can go ahead and transform forces from end-effector (hand) space to joint space as we discussed in the second post:

\textbf{u} = \textbf{J}_{ee}^T(\textbf{q}) \; \textbf{F}_{\textbf{x}}.

Rewriting \textbf{F}_\textbf{x} as its component parts

\textbf{F}_{\textbf{x}} = \textbf{M}_{\textbf{x}_{ee}}(\textbf{q}) \; \ddot{\textbf{x}}_\textrm{des},

where \ddot{\textbf{x}}_\textrm{des} is the desired end-effector acceleration, and \textbf{M}_{\textbf{x}_{ee}}(\textbf{q}) is the inertia matrix in operational space. Unfortunately, this isn’t just the normal inertia matrix, so let’s take a look here at how to go about deriving it.

Inertia in operational space

Being able to calculate \textbf{M}(\textbf{q}) allows inertia to be cancelled out in joint-space by incorporating it into the control signal, but to cancel out the inertia of the system in operational space more work is still required. The first step will be calculating the acceleration in operational space. This can be found by taking the time derivative of our original Jacobian equation.

\frac{d}{d t}\dot{\textbf{x}} = \frac{d}{d t} (\textbf{J}_{ee}(\textbf{q}) \; \dot{\textbf{q}}),

\ddot{\textbf{x}} = \dot{\textbf{J}}_{ee}(\textbf{q}) \; \dot{\textbf{q}} + \textbf{J}_{ee} (\textbf{q})\; \ddot{\textbf{q}}.

Substituting in the dynamics of the system, as defined in the previous post, but ignoring the effects of gravity for now, gives:

\ddot{\textbf{x}} = \dot{\textbf{J}}_{ee}(\textbf{q}) \; \dot{\textbf{q}} + \textbf{J}_{ee} (\textbf{q})\; \textbf{M}^{-1}(\textbf{q}) [ \textbf{u} - \textbf{C}(\textbf{q}, \dot{\textbf{q}})].

Define the control signal

\textbf{u} = \textbf{J}_{ee}^T(\textbf{q}) \textbf{F}_\textbf{x},

where substituting in for \textbf{F}_\textbf{x}, the desired end-effector force, gives

\textbf{u} = \textbf{J}_{ee}^T(\textbf{q})\; \textbf{M}_{\textbf{x}_{ee}}(\textbf{q})\; \ddot{\textbf{x}}_\textrm{des},

where \ddot{\textbf{x}}_\textrm{des} denotes the desired end-effector acceleration. Substituting the above equation into our equation for acceleration in operational space gives

\ddot{\textbf{x}} = \dot{\textbf{J}}_{ee}(\textbf{q}) \; \dot{\textbf{q}} + \textbf{J}_{ee} (\textbf{q})\; \textbf{M}^{-1}(\textbf{q}) [ \textbf{J}_{ee}^T(\textbf{q})\; \textbf{M}_{\textbf{x}_{ee}}(\textbf{q})\; \ddot{\textbf{x}}_\textrm{des} - \textbf{C}(\textbf{q}, \dot{\textbf{q}})].

Rearranging terms leads to

\ddot{\textbf{x}} = \textbf{J}_{ee}(\textbf{q})\; \textbf{M}^{-1}(\textbf{q}) \; \textbf{J}_{ee}^T(\textbf{q})\; \textbf{M}_{\textbf{x}_{ee}}(\textbf{q})\; \ddot{\textbf{x}}_\textrm{des} + [\dot{\textbf{J}}_{ee}(\textbf{q}) \; \dot{\textbf{q}} - \textbf{J}_{ee}(\textbf{q})\textbf{M}^{-1}(\textbf{q}) \; \textbf{C}(\textbf{q}, \dot{\textbf{q}})],

the last term is ignored due to the complexity of modeling it, resulting in

\ddot{\textbf{x}} = \textbf{J}_{ee}(\textbf{q})\; \textbf{M}^{-1}(\textbf{q}) \textbf{J}_{ee}^T(\textbf{q})\; \textbf{M}_{\textbf{x}_{ee}}(\textbf{q})\; \ddot{\textbf{x}}_\textrm{des}.

At this point, to get the dynamics \ddot{\textbf{x}} to be equal to the desired acceleration \ddot{\textbf{x}}_\textrm{des}, the end-effector inertia matrix \textbf{M}_{\textbf{x}_{ee}} needs to be chosen carefully. By setting

\textbf{M}_{\textbf{x}_{ee}}(\textbf{q}) = [\textbf{J}_{ee}(\textbf{q}) \; \textbf{M}^{-1}(\textbf{q}) \; \textbf{J}_{ee}^T(\textbf{q})]^{-1},

we now get

\ddot{\textbf{x}} = \textbf{J}_{ee}(\textbf{q})\; \textbf{M}^{-1}(\textbf{q}) \textbf{J}_{ee}^T(\textbf{q})\; [\textbf{J}_{ee}(\textbf{q}) \; \textbf{M}^{-1}(\textbf{q}) \; \textbf{J}_{ee}^T(\textbf{q})]^{-1} \; \ddot{\textbf{x}}_\textrm{des},

\ddot{\textbf{x}} = \ddot{\textbf{x}}_\textrm{des}.

And that’s why and how the inertia matrix in operational space is defined!
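
In code the operational space inertia matrix is only a couple of lines. A minimal sketch (J_ee and Mq are stand-ins for the end-effector Jacobian and joint space inertia matrix at the current configuration):

import numpy as np

def gen_Mx(J_ee, Mq):
    """ Operational space inertia matrix, Mx = (J M^-1 J^T)^-1. """
    Mx_inv = np.dot(J_ee, np.dot(np.linalg.inv(Mq), J_ee.T))
    return np.linalg.inv(Mx_inv)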

The whole signal

Going back to the control signal we were building, let’s now add in a term to cancel the effects of gravity in joint space. This gives

\textbf{u} = \textbf{J}_{ee}^T(\textbf{q}) \textbf{M}_{\textbf{x}_{ee}}(\textbf{q}) \ddot{\textbf{x}}_\textrm{des} + \textbf{g}(\textbf{q}),

where \textbf{g}(\textbf{q}) is the same as defined in the previous post. This controller converts desired end-effector acceleration into torque commands, and compensates for inertia and gravity.

Defining a basic PD controller in operational space

\ddot{\textbf{x}}_\textrm{des} = k_p (\textbf{x}_{\textrm{des}} - \textbf{x}) + k_v (\dot{\textbf{x}}_{\textrm{des}} - \dot{\textbf{x}}),

and the full equation for the operational space control signal in joint space is:

\textbf{u} = \textbf{J}_{ee}^T(\textbf{q}) \; \textbf{M}_{\textbf{x}_{ee}}(\textbf{q}) [k_p (\textbf{x}_{\textrm{des}} - \textbf{x}) + k_v (\dot{\textbf{x}}_{\textrm{des}} - \dot{\textbf{x}})] + \textbf{g}(\textbf{q}).

Hurray! That was relatively simple. The great thing about this, though, is that it’s the same process for any robot arm! So go out there and start building controllers! Find your robot’s mass matrix and gravity term in generalized coordinates, the Jacobian for the end effector, and you’re in business.
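
To make that concrete, here’s a minimal sketch of the full control signal in NumPy (J_ee, Mq, and g are placeholders for your arm’s end-effector Jacobian, joint space inertia matrix, and gravity term evaluated at the current state):

import numpy as np

def osc_control(x, dx, x_des, dx_des, J_ee, Mq, g, kp=100.0):
    """ u = J^T Mx [kp (x_des - x) + kv (dx_des - dx)] + g(q). """
    kv = np.sqrt(kp)

    # inertia matrix in operational space, Mx = (J M^-1 J^T)^-1
    Mx = np.linalg.inv(np.dot(J_ee, np.dot(np.linalg.inv(Mq), J_ee.T)))

    # desired end-effector acceleration from the PD terms
    ddx_des = kp * (x_des - x) + kv * (dx_des - dx)

    return np.dot(J_ee.T, np.dot(Mx, ddx_des)) + g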

Conclusions

So, this feels a little anticlimactic without an actual simulation / implementation of operational space, but don’t worry! As avid readers (haha) will remember, a while back I worked out how to import some very realistic MapleSim arm simulations into Python for use with some Python controllers. This seems like a perfect application opportunity, so that’s next! A good chance to work through writing the controllers for different arms, and also a chance to play with controllers operating in null spaces and the like.

Actual simulation implementations will also be a good chance to play with trying to incorporate those other force terms into the control equation, and get to see the results without worrying about breaking an actual robot. In actual robots a lot of the time you leave out anything where your model might be inaccurate, because the last thing you want to do is falsely compensate for some forces and end up injecting energy into your system, making it unstable.

There’s still some more theory to work through though, so I’d like to do that before I get to implementing simulations. One more theory post, and then we’ll get back to code!


Robot control part 3: Accounting for mass and gravity

In the exciting previous post we looked at how to go about generating a Jacobian matrix, which we could use to transform both joint angle velocities into end-effector velocities, and desired end-effector forces into joint angle torques. I briefly mentioned right at the end that using just this force transformation to build your control signal was only appropriate for very simple systems that didn’t have to account for things like arm-link mass or gravity.

In general, however, mass and gravity must be accounted for and cancelled out. The full dynamics of a robot arm are

\textbf{M}(\textbf{q}) \ddot{\textbf{q}} = (\textbf{u} - \textbf{C}(\textbf{q}, \dot{\textbf{q}}) - \textbf{g}(\textbf{q})) ,

where \ddot{\textbf{q}} is joint angle acceleration, \textbf{u} is the control signal (specifying torque), \textbf{C}(\textbf{q}, \dot{\textbf{q}}) is a function describing the Coriolis and centrifugal effects, \textbf{g}(\textbf{q}) is the effect of gravity in joint space, and \textbf{M} is the mass matrix of the system in joint space.

There are a lot of terms involved in the system acceleration, so while the Jacobian can be used to transform forces between coordinate systems it is clear that just setting the control signal \textbf{u} = \textbf{J}_{ee}^T (\textbf{q})\textbf{F}_\textbf{x} is not sufficient, because a lot of the dynamics affecting acceleration aren’t accounted for. In this section an effective PD controller operating in joint space will be developed that will allow for more precise control by cancelling out unwanted acceleration terms. To do this the effects of inertia and gravity need to be calculated.

Accounting for inertia

The fact that systems have mass is a pain in our controller’s side because it introduces inertia into our system, making control of how the system will move at any given point in time more difficult. Mass can be thought of as an object’s unwillingness to respond to applied forces. The heavier something is, the more resistant it is to acceleration, and the force required to move a system along a desired trajectory depends on both the object’s mass and its current acceleration.

To effectively control a system, the system inertia needs to be calculated so that it can be included in the control signal and cancelled out.


3d_2_link
Given the robot arm above, operating in the (x,z) plane with the y axis extending into the picture, the yellow circles represent each link’s centre-of-mass (COM). The position of each link’s COM is defined relative to that link’s reference frame, and the goal is to figure out how much each link’s mass will affect the system dynamics.

The first step is to transform the representation of each of the COM from Cartesian coordinates in the reference frame of their respective arm segments into terms of joint angles, such that the Jacobian for each COM can be calculated.

Let the COM positions relative to each segment’s coordinate frame be

\textrm{com}_0 = \left[ \begin{array}{c} \frac{1}{2}cos(q_0) \\ 0 \\ \frac{1}{2}sin(q_0) \end{array} \right], \;\;\;\; \textrm{com}_1 = \left[ \begin{array}{c} \frac{1}{4}cos(q_1) \\ 0 \\ \frac{1}{4}sin(q_1) \end{array} \right].

The first segment’s COM is already in base coordinates (since the first link and the base share the same coordinate frame), so all that is required is the position of the second link’s COM in the base reference frame, which can be done with the transformation matrix

^1_0\textbf{T} = \left[ \begin{array}{cccc} cos(q_0) & 0 & -sin(q_0) & L_0 cos(q_0) \\ 0 & 1 & 0 & 0 \\ sin(q_0) & 0 & cos(q_0) & L_0 sin(q_0) \\ 0 & 0 & 0 & 1 \end{array} \right].

Using ^1_0\textbf{T} to transform the \textrm{com}_1 gives

^1_0\textbf{T} \; \textrm{com}_1 = \left[ \begin{array}{cccc} cos(q_0) & 0 & -sin(q_0) & L_0 cos(q_0) \\ 0 & 1 & 0 & 0 \\ sin(q_0) & 0 & cos(q_0) & L_0 sin(q_0) \\ 0 & 0 & 0 & 1 \end{array} \right] \; \; \left[ \begin{array}{c} \frac{1}{4}cos(q_1) \\ 0 \\ \frac{1}{4}sin(q_1) \\ 1 \end{array} \right]

^1_0\textbf{T} \; \textrm{com}_1 = \left[ \begin{array}{c} L_0 cos(q_0) + \frac{1}{4}cos(q_0 + q_1) \\ 0 \\ L_0 sin(q_0) + \frac{1}{4} sin(q_0 + q_1) \\ 1 \end{array} \right].

To see the full computation worked out explicitly please see my previous robot control post.

Now that we have the COM positions in terms of joint angles, we can find the Jacobians for each point through our Jacobian equation:

\textbf{J} = \frac{\partial \textbf{x}}{\partial \textbf{q}}.

Using this for each link gives us:

\textbf{J}_0 = \left[ \begin{array}{cc} -\frac{1}{2}sin(q_0) & 0 \\ 0 & 0 \\ \frac{1}{2} cos(q_0) & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 0 \end{array} \right]
\textbf{J}_1 = \left[ \begin{array}{cc} -L_0sin(q_0) -\frac{1}{4}sin(q_0 + q_1) & -\frac{1}{4} sin(q_0 + q_1) \\ 0 & 0 \\ L_0 cos(q_0) + \frac{1}{4}cos(q_0 + q_1) & \frac{1}{4} cos(q_0 +q_1) \\ 0 & 0 \\ 1 & 1 \\ 0 & 0 \end{array} \right].
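
If you’d rather not grind out these partial derivatives by hand, a symbolic math package will happily double-check them. Here’s a minimal sketch using SymPy (the symbols L0, q0, q1 and the COM position are just re-declarations of the terms above) that recovers the linear velocity rows of \textbf{J}_1:

import sympy as sp

q0, q1, L0 = sp.symbols('q0 q1 L0')

# position of the second link's COM in the base frame, from above
com1 = sp.Matrix([L0*sp.cos(q0) + sp.Rational(1, 4)*sp.cos(q0 + q1),
                  0,
                  L0*sp.sin(q0) + sp.Rational(1, 4)*sp.sin(q0 + q1)])

# J = dx/dq, giving the linear velocity rows of the COM Jacobian
J1_linear = com1.jacobian(sp.Matrix([q0, q1]))
print(sp.simplify(J1_linear))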

Kinetic energy

The total energy of a system can be calculated as a sum of the energy introduced from each source. The Jacobians just derived will be used to calculate the kinetic energy each link generates during motion. Each link’s kinetic energy will be calculated and summed to get the total energy introduced into the system by the mass and configuration of each link.

Kinetic energy (KE) is one half of mass times velocity squared:

\textrm{KE} = \frac{1}{2} \; \dot{\textbf{x}}^T \textbf{M}_\textbf{x}(\textbf{q}) \; \dot{\textbf{x}},

where \textbf{M}_\textbf{x} is the mass matrix of the system, with the subscript \textbf{x} denoting that it is defined in Cartesian space, and \dot{\textbf{x}} is a velocity vector of the form

\dot{\textbf{x}} = \left[ \begin{array}{c} \dot{x} \\ \dot{y} \\ \dot{z} \\ \dot{\omega_x} \\ \dot{\omega_y} \\ \dot{\omega_z} \end{array} \right],

and the mass matrix is structured

\textbf{M}_{\textbf{x}_i} (\textbf{q})= \left[ \begin{array}{cccccc} m_i & 0 & 0 & 0 & 0 & 0 \\ 0 & m_i & 0 & 0 & 0 & 0 \\ 0 & 0 & m_i & 0 & 0 & 0 \\ 0 & 0 & 0 & I_{xx} & I_{xy} & I_{xz} \\ 0 & 0 & 0 & I_{yx} & I_{yy} & I_{yz} \\ 0 & 0 & 0 & I_{zx} & I_{zy} & I_{zz} \end{array} \right],

where m_i is the mass of COM i, and the I_{ij} terms are the moments of inertia, which define the object’s resistance to change in angular velocity about the axes, the same way that the mass element defines the object’s resistance to changes in linear velocity.

As mentioned above, the mass matrix for the COM of each link is defined in Cartesian coordinates in its respective arm segment’s reference frame. The effects of mass need to be found in joint angle space, however, because that is where the controller operates. Looking at the summation of the KE introduced by each COM:

\textrm{KE} = \frac{1}{2} \; \Sigma_{i=0}^n ( \dot{\textbf{x}}_i^T \textbf{M}_{\textbf{x}_i}(\textbf{q}) \; \dot{\textbf{x}}_i),

and substituting in \dot{\textbf{x}} = \textbf{J} \; \dot{\textbf{q}},

\textrm{KE} = \frac{1}{2} \; \Sigma_{i=0}^n (\dot{\textbf{q}}^T \; \textbf{J}_i^T \textbf{M}_{\textbf{x}_i}(\textbf{q})\textbf{J}_i \; \dot{\textbf{q}}),

and moving the \dot{\textbf{q}} terms outside the summation,

\textrm{KE} = \frac{1}{2} \; \dot{\textbf{q}}^T \; \Sigma_{i=0}^n (\textbf{J}_i^T \textbf{M}_{\textbf{x}_i}(\textbf{q}) \textbf{J}_i) \; \dot{\textbf{q}}.

Defining

\textbf{M}(\textbf{q}) = \Sigma_{i=0}^n \; \textbf{J}_i^T(\textbf{q}) \textbf{M}_{\textbf{x}_i}(\textbf{q}) \; \textbf{J}_i(\textbf{q}),

gives

\textrm{KE} = \frac{1}{2} \; \dot{\textbf{q}}^T \; \textbf{M}(\textbf{q}) \; \dot{\textbf{q}},

which is the equation for calculating kinetic energy in joint space. Thus, \textbf{M}(\textbf{q}) denotes the inertia matrix in joint space.
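
In code this sum is just a few lines. Here’s a minimal sketch (assuming you’ve already built lists of each link’s COM Jacobian and Cartesian mass matrix at the current configuration; gen_Mq is just a placeholder name):

import numpy as np

def gen_Mq(J_links, M_links):
    """ Joint space inertia matrix M(q) = sum_i J_i^T Mx_i J_i. """
    n_dof = J_links[0].shape[1]
    Mq = np.zeros((n_dof, n_dof))
    for J_i, Mx_i in zip(J_links, M_links):
        Mq += np.dot(J_i.T, np.dot(Mx_i, J_i))
    return Mq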

Now that we’ve successfully calculated the mass matrix of the system in joint space, we can incorporate it into our control signal and cancel out its effects on the system dynamics! On to the next problem!

Accounting for gravity

With the forces of inertia accounted for, we can now address the problem of gravity. To compensate for gravity the concept of conservation of energy (i.e. the work done by gravity is the same in all coordinate systems) will once again be pulled out. The controller operates by applying torque on joints, so it is necessary to be able to calculate the effect of gravity in joint space to cancel it out.

While the effect of gravity in joint space isn’t obvious, it is quite easily defined in Cartesian coordinates in the base frame of reference. Here, the work done by gravity is simply the summation of the distance each link’s center of mass has moved multiplied by the force of gravity. Since the force of gravity in Cartesian space is the mass of the object multiplied by -9.8m/s^2 along the z axis, the equation for the work done by gravity is written:

\textbf{W}_g = \Sigma^n_{i=0} (\textbf{F}_{g_i}^T \dot{\textbf{x}}_i),

where \textbf{F}_{g_i} is the force of gravity on the ith arm segment. Because of the conservation of energy, the work done is the same when calculated in joint space, so setting the two expressions for work equal to each other gives:

\textbf{F}_\textbf{q}^T \dot{\textbf{q}} = \Sigma^n_{i=0} (\textbf{F}_{g_i}^T \dot{\textbf{x}}_i),

and then substitute in using \dot{\textbf{x}}_i = \textbf{J}_i(\textbf{q}) \; \dot{\textbf{q}},

\textbf{F}_\textbf{q}^T \dot{\textbf{q}} = \Sigma^n_{i=0} (\textbf{F}_{g_i}^T \textbf{J}_i(\textbf{q}) \; \dot{\textbf{q}}),

and cancelling out the \dot{\textbf{q}} terms on both sides,

\textbf{F}_\textbf{q}^T = \Sigma^n_{i=0} (\textbf{F}_{g_i}^T \textbf{J}_i(\textbf{q})),

\textbf{F}_\textbf{q} = \Sigma^n_{i=0} (\textbf{J}_i^T(\textbf{q}) \textbf{F}_{g_i}) = \textbf{g}(\textbf{q}),

which says that to find the effect of gravity in joint space, simply multiply the force of gravity acting on each link’s COM in (x,y,z) space by the transpose of that link’s Jacobian, and sum over each link. This summation gives the total effect of gravity on the system.
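
The gravity term is a similarly small computation. A minimal sketch, assuming 6-row COM Jacobians like the ones above and gravity acting along the z axis as described (gen_g and the argument names are just placeholders):

import numpy as np

def gen_g(J_links, masses):
    """ Joint space gravity term g(q) = sum_i J_i^T F_g_i. """
    n_dof = J_links[0].shape[1]
    g = np.zeros(n_dof)
    for J_i, m_i in zip(J_links, masses):
        # force of gravity on this link's COM (the angular rows are zero)
        F_g = np.array([0.0, 0.0, -9.8 * m_i, 0.0, 0.0, 0.0])
        g += np.dot(J_i.T, F_g)
    return g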

Making a PD controller in joint space

We are now able to account for the energy in the system caused by inertia and gravity, great! Let’s use this to build a simple PD controller in joint space. Control should be very straightforward, because once we cancel out the effects of gravity and inertia then we can almost pretend that the system behaves linearly. This means that we can also treat control of each of the joints independently, since their movements no longer affect one another. So in our control system we’re actually going to have a PD controller for each joint.

The above-mentioned nonlinearity that’s left in the system dynamics is due to the Coriolis and centrifugal effects. Now, these can be accounted for, but they require a highly accurate model of the moments of inertia. If the moments are incorrect then the controller can actually introduce instability into the system, so it’s better if we just don’t address them.

Rewriting the system dynamics presented at the very top, in terms of acceleration gives

\ddot{\textbf{q}} = \textbf{M}^{-1}(\textbf{q}) (\textbf{u} - \textbf{C}(\textbf{q}, \dot{\textbf{q}}) - \textbf{g}(\textbf{q})).

Ideally, the control signal would be constructed

\textbf{u} = (\textbf{M}(\textbf{q}) \; \ddot{\textbf{q}}_\textrm{des} + \textbf{C}(\textbf{q}, \dot{\textbf{q}}) + \textbf{g}(\textbf{q})),

where \ddot{\textbf{q}}_\textrm{des} is the desired acceleration of the system. This would result in system acceleration

\ddot{\textbf{q}} = \textbf{M}^{-1}(\textbf{q})((\textbf{M}(\textbf{q}) \; \ddot{\textbf{q}}_\textrm{des} + \textbf{C}(\textbf{q}, \dot{\textbf{q}}) + \textbf{g}(\textbf{q})) - \textbf{C}(\textbf{q}, \dot{\textbf{q}}) - \textbf{g}(\textbf{q})),

\ddot{\textbf{q}} = \textbf{M}^{-1}(\textbf{q}) \textbf{M}(\textbf{q}) \; \ddot{\textbf{q}}_\textrm{des} ,

\ddot{\textbf{q}} = \ddot{\textbf{q}}_\textrm{des},

which would be ideal. As mentioned, because the Coriolis and centrifugal effects are tricky to account for we’ll leave them out, so the instead the control signal is

\textbf{u} = (\textbf{M}(\textbf{q}) \; \ddot{\textbf{q}}_\textrm{des} + \textbf{g}(\textbf{q})).

Using a standard PD control formula to generate the desired acceleration:

\ddot{\textbf{q}}_\textrm{des} = k_p \; (\textbf{q}_{\textrm{des}} - \textbf{q}) + k_v \; (\dot{\textbf{q}}_{\textrm{des}} - \dot{\textbf{q}}),

where k_p and k_v are our gain values, and the control signal has been fully defined:

\textbf{u} = (\textbf{M}(\textbf{q}) \; (k_p \; (\textbf{q}_{\textrm{des}} - \textbf{q}) + k_v \; (\dot{\textbf{q}}_{\textrm{des}} - \dot{\textbf{q}})) + \textbf{g}(\textbf{q})),

and we’ve successfully built an effective PD controller in joint space!
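
As a sketch of how little code this is (Mq and g standing in for \textbf{M}(\textbf{q}) and \textbf{g}(\textbf{q}) calculated as above), the whole joint space controller is just:

import numpy as np

def joint_pd_control(q, dq, q_des, dq_des, Mq, g, kp=100.0, kv=10.0):
    """ u = M(q) (kp (q_des - q) + kv (dq_des - dq)) + g(q). """
    ddq_des = kp * (q_des - q) + kv * (dq_des - dq)
    return np.dot(Mq, ddq_des) + g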

Conclusions

Here we looked at building a PD controller that operates in the joint space of a robotic arm that can cancel out the effects of inertia and gravity. By cancelling out the effects of inertia, we can treat control of each of the joints independently, effectively orthogonalizing their control. This makes PD control super easy, we just set up a simple controller for each joint. Also a neat thing is that all of the required calculations can be performed with algorithms of linear complexity, so it’s not a problem to do all of this super fast.

One of the finer points was that we ignored the Coriolis and centrifugal effects on the robot’s dynamics. This is because the moments of inertia in the mass matrix model are notoriously hard to accurately capture on actual robots. Often you go based off of a CAD model of your robot and then have to do some fine-tuning by hand. So they will be unaccounted for in our control signal, but most of the time, as long as you have a very short feedback loop, you’ll be fine.

I am really enjoying working through this, as things build on each other so well here and we’re starting to be able to do some really interesting things with the relatively forward transformation matrices and Jacobians that we learned how to build in the previous posts. This was for a very simple robot, but excitingly the next step after this is moving on to operational space control! At last. From there, we’ll go on to look at more complex robotic situations where things like configuration redundancy are introduced and it’s not quite so straightforward.


Robot control part 2: Jacobians, velocity, and force

Jacobian matrices are a super useful tool, and heavily used throughout robotics and control theory. Basically, a Jacobian defines the dynamic relationship between two different representations of a system. For example, if we have a 2-link robotic arm, there are two obvious ways to describe its current position: 1) the end-effector position and orientation (which we will denote \textbf{x}), and 2) as the set of joint angles (which we will denote \textbf{q}). The Jacobian for this system relates how movement of the elements of \textbf{q} causes movement of the elements of \textbf{x}. You can think of a Jacobian as a transform matrix for velocity.

Formally, a Jacobian is a matrix of partial derivatives:

\textbf{J} = \frac{\partial \textbf{x}}{\partial \textbf{q}}.

With a bit of manipulation we can get a neat result:

\textbf{J} = \frac{\partial \textbf{x}}{\partial t} \; \frac{\partial t}{\partial \textbf{q}} \rightarrow \frac{\partial \textbf{x}}{\partial t} = \textbf{J} \frac{\partial \textbf{q}}{\partial t},

or

\dot{\textbf{x}} = \textbf{J} \; \dot{\textbf{q}},

where \dot{\textbf{x}} and \dot{\textbf{q}} represent the time derivatives of \textbf{x} and \textbf{q}. This tells us that the end-effector velocity is equal to the Jacobian, \textbf{J}, multiplied by the joint angle velocity.

Why is this important? Well, this goes back to our desire to control in operational (or task) space. We’re interested in planning a trajectory in a different space than the one that we can control directly. In our robot arm, control is effected through a set of motors that apply torque to the joint angles, BUT what we’d like is to plan our trajectory in terms of end-effector position (and possibly orientation), generating control signals in terms of forces to apply in (x,y,z) space. Jacobians allow us a direct way to calculate what the control signal is in the space that we control (torques), given a control signal in one we don’t (end-effector forces). The above equivalence is a first step along the path to operational space control. As just mentioned, though, what we’re really interested in isn’t relating velocities, but forces. How can we do this?

Energy equivalence and Jacobians
Conservation of energy is a property of all physical systems where the amount of energy expended is the same no matter how the system in question is being represented. The planar two-link robot arm shown below will be used for illustration.

RR robot arm

Let the joint angle positions be denoted \textbf{q} = [q_0, q_1]^T, and end-effector position be denoted \textbf{x} = [x, y, 0]^T.

Work is the application of force over a distance

\textbf{W} = \int \textbf{F}^T \textbf{v} \; dt,

where \textbf{W} is work, \textbf{F} is force, and \textbf{v} is velocity.

Power is the rate at which work is performed

\textbf{P} = \frac{\textbf{W}}{t},

where \textbf{P} is power.
Substituting in the equation for work into the equation for power gives:

\textbf{P} = \frac{\textbf{W}}{t} = \frac{\textbf{F}^T \textbf{d}}{t} = \textbf{F}^T \frac{\textbf{d}}{t} = \textbf{F}^T\textbf{v}.

Because of energy equivalence, work is performed at the same rate regardless of the characterization of the system. Rewriting this in terms of end-effector space gives:

\textbf{P} = \textbf{F}_\textbf{x}^T \dot{\textbf{x}},

where \textbf{F}_\textbf{x} is the force applied to the hand, and \dot{\textbf{x}} is the velocity of the hand. Rewriting the above in terms of joint-space gives:

\textbf{P} = \textbf{F}_\textbf{q}^T \dot{\textbf{q}},

where \textbf{F}_\textbf{q} is the torque applied to the joints, and \dot{\textbf{q}} is the angular velocity of the joints. Setting these two equations (in end-effector and joint space) equal to each other and substituting in our equation for the Jacobian gives:

\textbf{F}_{q_{hand}}^T \dot{\textbf{q}} = \textbf{F}_\textbf{x}^T \dot{\textbf{x}},

\textbf{F}_{q_{hand}}^T \dot{\textbf{q}} = \textbf{F}_\textbf{x}^T \textbf{J}_{ee}(\textbf{q}) \; \dot{\textbf{q}},

\textbf{F}_{q_{hand}}^T = \textbf{F}_\textbf{x}^T \textbf{J}_{ee}(\textbf{q}),

\textbf{F}_{q_{hand}} = \textbf{J}_{ee}^T(\textbf{q}) \textbf{F}_\textbf{x}.

where \textbf{J}_{ee}(\textbf{q}) is the Jacobian for the end-effector of the robot, and \textbf{F}_{q_{hand}} represents the forces in joint-space that affect movement of the hand. This says that not only does the Jacobian relate velocity from one state-space representation to another, it can also be used to calculate what the forces in joint space should be to effect a desired set of forces in end-effector space.

Building the Jacobian

First, we need to define the relationship between the (x,y,z) position of the end-effector and the robot’s joint angles, (q_0, q_1). However will we do it? Well, we know the distances from the shoulder to the elbow, and elbow to the wrist, as well as the joint angles, and we’re interested in finding out where the end-effector is relative to a base coordinate frame…OH MAYBE we should use those forward transformation matrices from the previous post. Let’s do it!

The forward transformation matrix

Recall that transformation matrices allow a given point to be transformed between different reference frames. In this case, the position of the end-effector relative to the second joint of the robot arm is known, but where it is relative to the base reference frame (the first joint reference frame in this case) is of interest. This means that only one transformation matrix is needed, transforming from the reference frame attached to the second joint back to the base.

The rotation part of this matrix is straight-forward to define, as in the previous section:

^1_0\textbf{R} = \left[ \begin{array}{ccc} cos(q_0) & -sin(q_0) & 0 \\ sin(q_0) & cos(q_0) & 0 \\ 0 & 0 & 1 \end{array} \right].

The translation part of the transformation matrix is a little different than before because reference frame 1 changes as a function of the previous joint’s angle. From trigonometry, given a vector of length r and an angle q the x position of the end point is defined r \; cos(q), and the y position is r \; sin(q). The arm is operating in the (x,y) plane, so the z position will always be 0.

Using this knowledge, the translation part of the transformation matrix is defined:

^1_0\textbf{D} = \left[ \begin{array}{c} L_0 cos(q_0) \\ L_0 sin(q_0) \\ 0 \end{array} \right].

Giving the forward transformation matrix:

^1_0\textbf{T} = \left[ \begin{array}{cc} ^1_0\textbf{R} & ^1_0\textbf{D} \\ \textbf{0} & \textbf{1} \end{array} \right] = \left[ \begin{array}{cccc} cos(q_0) & -sin(q_0) & 0 & L_0 cos(q_0) \\ sin(q_0) & cos(q_0) & 0 & L_0 sin(q_0)\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right],

which transforms a point from reference frame 1 (elbow joint) to reference frame 0 (shoulder joint / base).

The point of interest is the end-effector which is defined in reference frame 1 as a function of joint angle, q_1 and the length of second arm segment, L_1:

\textbf{x} = \left[ \begin{array}{c} L_1 cos(q_1) \\ L_1 sin(q_1) \\ 0 \\ 1 \end{array} \right].

To find the position of our end-effector in terms of the origin reference frame multiply the point \textbf{x} by the transformation ^1_0\textbf{T}:

^1_0\textbf{T} \; \textbf{x} = \left[ \begin{array}{cccc} cos(q_0) & -sin(q_0) & 0 & L_0 cos(q_0) \\ sin(q_0) & cos(q_0) & 0 & L_0 sin(q_0)\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right] \; \left[ \begin{array}{c} L_1 cos(q_1) \\ L_1 sin(q_1) \\ 0 \\ 1 \end{array} \right],

^1_0\textbf{T} \textbf{x} = \left[ \begin{array}{c} L_1 cos(q_0) cos(q_1) - L_1 sin(q_0) sin(q_1) + L_0 cos(q_0) \\ L_1 sin(q_0) cos(q_1) + L_1 cos(q_0) sin(q_1) + L_0 sin(q_0) \\ 0 \\ 1 \end{array} \right]

where, by pulling out the L_1 term and using the trig identities

cos(\alpha)cos(\beta) - sin(\alpha)sin(\beta) = cos(\alpha + \beta),

and

sin(\alpha)cos(\beta) + cos(\alpha)sin(\beta) = sin(\alpha + \beta),

the position of our end-effector can be rewritten:

\left[ \begin{array}{c} L_0 cos(q_0) + L_1 cos(q_0 + q_1) \\ L_0 sin(q_0) + L_1 sin(q_0 + q_1) \\ 0 \end{array} \right],

which is the position of the end-effector in terms of joint angles. As mentioned above, however, both the position of the end-effector and its orientation are needed; the rotation of the end-effector relative to the base frame must also be defined.

Accounting for orientation

Fortunately, defining orientation is simple, especially for systems with only revolute and prismatic joints (spherical joints will not be considered here). With prismatic joints, which are linear and move in a single plane, the rotation introduced is 0. With revolute joints, the rotation of the end-effector in each axis is simply a sum of rotations of each joint in their respective axes of rotation.

In the example case, the joints are rotating around the z axis, so the rotation part of our end-effector state is

\left[ \begin{array}{c} \omega_x \\ \omega_y \\ \omega_z \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \\ q_0 + q_1 \end{array} \right],

where \omega denotes angular rotation. If the first joint had been rotating in a different plane, e.g. in the (x, z) plane around the y axis instead, then the orientation would be

\left[ \begin{array}{c} \omega_x \\ \omega_y \\ \omega_z \end{array} \right] = \left[ \begin{array}{c} 0 \\ q_0 \\ q_1 \end{array} \right].

Partial differentiation

Once the position and orientation of the end-effector have been calculated, the partial derivative of these equations need to be calculated with respect to the elements of \textbf{q}. For simplicity, the Jacobian will be broken up into two parts, J_v and J_\omega, representing the linear and angular velocity, respectively, of the end-effector.

The linear velocity part of our Jacobian is:

\textbf{J}_v(\textbf{q}) = \left[ \begin{array}{cc} \frac{\partial x}{\partial q_0} & \frac{\partial x}{\partial q_1} \\ \frac{\partial y}{\partial q_0} & \frac{\partial y}{\partial q_1} \\ \frac{\partial z}{\partial q_0} & \frac{\partial z}{\partial q_1} \end{array} \right] = \left[ \begin{array}{cc} -L_0 sin(q_0) - L_1 sin(q_0 + q_1) & - L_1 sin(q_0 + q_1) \\ L_0 cos(q_0) + L_1 cos(q_0 + q_1) & L_1 cos(q_0 + q_1) \\ 0 & 0 \end{array} \right].

The angular velocity part of our Jacobian is:

\textbf{J}_\omega(\textbf{q}) = \left[ \begin{array}{cc} \frac{\partial \omega_x}{\partial q_0} & \frac{\partial \omega_x}{\partial q_1} \\ \frac{\partial \omega_y}{\partial q_0} & \frac{\partial \omega_y}{\partial q_1} \\ \frac{\partial \omega_z}{\partial q_0} & \frac{\partial \omega_z}{\partial q_1} \end{array} \right] = \left[ \begin{array}{cc} 0 & 0 \\ 0 & 0 \\ 1 & 1 \end{array} \right].

The full Jacobian for the end-effector is then:

\textbf{J}_{ee}(\textbf{q}) = \left[ \begin{array}{c} \textbf{J}_v(\textbf{q}) \\ \textbf{J}_\omega(\textbf{q}) \end{array} \right] = \left[ \begin{array}{cc} -L_0 sin(q_0) - L_1 sin(q_0 + q_1) & - L_1 sin(q_0 + q_1) \\ L_0 cos(q_0) + L_1 cos(q_0 + q_1) & L_1 cos(q_0 + q_1) \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 1 & 1 \end{array} \right].
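
Here’s a minimal sketch of this Jacobian as a Python function (NumPy, with the link lengths L0 and L1 as parameters; gen_jacobian_ee is just a name I’m using for illustration):

import numpy as np

def gen_jacobian_ee(q, L0=1.0, L1=1.0):
    """ Full 6 x 2 end-effector Jacobian for the planar 2-link arm. """
    Jv = np.array([
        [-L0*np.sin(q[0]) - L1*np.sin(q[0]+q[1]), -L1*np.sin(q[0]+q[1])],
        [ L0*np.cos(q[0]) + L1*np.cos(q[0]+q[1]),  L1*np.cos(q[0]+q[1])],
        [0.0, 0.0]])
    Jw = np.array([[0.0, 0.0],
                   [0.0, 0.0],
                   [1.0, 1.0]])
    return np.vstack([Jv, Jw])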

Analyzing the Jacobian

Once the Jacobian is built, it can be analysed for insight about the relationship between \dot{\textbf{x}} and \dot{\textbf{q}}.

For example, there is a large block of zeros in the middle of the Jacobian defined above, along the row corresponding to linear velocity along the z axis, and the rows corresponding to the angular velocity around the x and y axes. This means that the z position, and rotations \omega_x and \omega_y are not controllable. This can be seen by going back to the first Jacobian equation:

\dot{\textbf{x}} = \textbf{J}_{ee}(\textbf{q})\;\dot{\textbf{q}}.

No matter what the values of \dot{\textbf{q}}, it is impossible to affect \omega_x, \omega_y, or z, because the corresponding rows during the above multiplication with the Jacobian are \textbf{0}. These rows of zeros in the Jacobian are referred to as its `null space’. Because these variables can’t be controlled, they will be dropped from both \textbf{F}_\textbf{x} and \textbf{J}(\textbf{q}).

Looking at the variables that can be affected it can be seen that given any two of x, y, \omega_z the third can be calculated because the robot only has 2 degrees of freedom (the shoulder and elbow). This means that only two of the end-effector variables can actually be controlled. In the situation of controlling a robot arm, it is most useful to control the (x,y) coordinates, so \omega_z will be dropped from the force vector and Jacobian.

After removing the redundant term, the force vector representing the controllable end-effector forces is

\textbf{F}_\textbf{x} = \left[ \begin{array}{c}f_x \\ f_y\end{array} \right],

where f_x is force along the x axis, f_y is force along the y axis, and the Jacobian is written

\textbf{J}_{ee}(\textbf{q}) = \left[ \begin{array}{cc} -L_0 sin(q_0) - L_1 sin(q_0 + q_1) & - L_1 sin(q_0 + q_1) \\ L_0 cos(q_0) + L_1 cos(q_0 + q_1) & L_1 cos(q_0 + q_1) \end{array} \right].

If instead f_{\omega_z}, i.e. torque around the z axis, were chosen as a controlled force then the force vector and Jacobian would be (assuming force along the x axis was also chosen):

\textbf{F}_\textbf{x} = \left[ \begin{array}{c} f_x \\ f_{\omega_z}\end{array} \right],
\textbf{J}_{ee}(\textbf{q}) = \left[ \begin{array}{cc} -L_0 sin(q_0) - L_1 sin(q_0 + q_1) & - L_1 sin(q_0 + q_1) \\ 1 & 1 \end{array} \right].

But we’ll stick with control of the x and y forces instead, as it’s a little more straightforward.

Using the Jacobian

With our Jacobian, we can find out what different joint angle velocities will cause in terms of the end-effector linear and angular velocities, and we can also transform desired (x,y) forces into (q_0, q_1) torques. Let’s do a couple of examples. Note that in the former case we’ll be using the full Jacobian, while in the latter case we can use the simplified Jacobian specified just above.

Example 1

Given known joint angle velocities with arm configuration

\textbf{q} = \left[ \begin{array}{c} \frac{\pi}{4} \\ \frac{3 \pi}{8} \end{array}\right] \;\;\;\; \dot{\textbf{q}} = \left[ \begin{array}{c} \frac{\pi}{10} \\ \frac{\pi}{10} \end{array} \right]

and arm segment lengths L_i = 1, the (x,y) velocities of the end-effector can be calculated by substituting in the system state at the current time into the equation for the Jacobian:

\dot{\textbf{x}} = \textbf{J}_{ee}(\textbf{q}) \; \dot{\textbf{q}},

\dot{\textbf{x}} = \left[ \begin{array}{cc} - sin(\frac{\pi}{4}) - sin(\frac{\pi}{4} + \frac{3\pi}{8}) & - sin(\frac{\pi}{4} + \frac{3\pi}{8}) \\ cos(\frac{\pi}{4}) + cos(\frac{\pi}{4} + \frac{3\pi}{8}) & cos(\frac{\pi}{4} + \frac{3\pi}{8}) \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 1 & 1 \end{array} \right] \; \left[ \begin{array}{c} \frac{\pi}{10} \\ \frac{\pi}{10} \end{array} \right],

\dot{\textbf{x}} = \left[ -0.8026, -0.01830, 0, 0, 0, \frac{\pi}{5} \right]^T.

And so the end-effector’s linear velocity is (-0.8026, -0.01830, 0)^T, and its angular velocity is (0, 0, \frac{\pi}{5})^T.

Example 2

Given the same system and configuration as in the previous example as well as a trajectory planned in (x,y) space, the goal is to calculate the torques required to get the end-effector to move as desired. The controlled variables will be forces along the x and y axes, and so the reduced Jacobian from the previous section will be used. Let the desired (x,y) forces be

\textbf{F}_\textbf{x} = \left[ \begin{array}{c} 1 \\ 1 \end{array}\right],

to calculate the corresponding joint torques, the desired end-effector forces and current system state parameters are substituted into the equation relating forces in end-effector and joint space:

\textbf{F}_\textbf{q} = \textbf{J}^T_{ee}(\textbf{q}) \textbf{F}_\textbf{x},

\textbf{F}_\textbf{q} = \left[ \begin{array}{cc} -sin(\frac{\pi}{4}) -sin(\frac{\pi}{4} + \frac{3\pi}{8}) & cos(\frac{\pi}{4}) + cos(\frac{\pi}{4} + \frac{3\pi}{8}) \\ -sin(\frac{\pi}{4} + \frac{3\pi}{8}) & cos(\frac{\pi}{4} + \frac{3\pi}{8}) \end{array} \right] \left[ \begin{array}{c} 1 \\ 1 \end{array} \right],

\textbf{F}_\textbf{q} = \left[ \begin{array}{c} -1.3066 \\ -1.3066 \end{array}\right].

So given the current configuration to get the end-effector to move as desired, without accounting for the effects of inertia and gravity, the torques to apply to the system are \textbf{F}_\textbf{q} = [-1.3066, -1.3066]^T.
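
If you want to double-check both examples numerically, here’s a quick sketch (assuming L_0 = L_1 = 1 as above):

import numpy as np

q = np.array([np.pi/4, 3*np.pi/8])
dq = np.array([np.pi/10, np.pi/10])
L0 = L1 = 1.0

# full end-effector Jacobian, as defined above
J_ee = np.array([
    [-L0*np.sin(q[0]) - L1*np.sin(q[0]+q[1]), -L1*np.sin(q[0]+q[1])],
    [ L0*np.cos(q[0]) + L1*np.cos(q[0]+q[1]),  L1*np.cos(q[0]+q[1])],
    [0.0, 0.0], [0.0, 0.0], [0.0, 0.0], [1.0, 1.0]])

# Example 1: end-effector velocities from joint angle velocities
print(np.dot(J_ee, dq))  # ~[-0.8026, -0.0183, 0, 0, 0, 0.6283]

# Example 2: joint torques from desired (x,y) end-effector forces,
# using the reduced (x,y only) Jacobian
Fx = np.array([1.0, 1.0])
print(np.dot(J_ee[:2].T, Fx))  # ~[-1.3066, -1.3066]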

And now we are able to transform end-effector forces into torques, and joint angle velocities into end-effector velocities! What a wonderful, wonderful tool to have at our disposal! Hurrah for Jacobians!

Conclusions

In this post I’ve gone through how to use Jacobians to relate the movement of joint angle and end-effector system state characterizations, but Jacobians can be used to relate any two characterizations. All you need to do is define one in terms of the other and do some partial differentiation. The above example scenarios were of course very simple, and didn’t worry about compensating for anything like gravity. But don’t worry, that’s exactly what we’re going to look at in our exciting next chapter!

Something that I found interesting to consider is the need for the orientation of the end-effector and finding the angular velocities. Often in simpler robot arms, we’re only interested in the position of the end-effector, so it’s easy to write off orientation. But if we had a situation where there was a gripper attached to the end-effector, then suddenly the orientation becomes very important, often determining whether or not an object can be picked up or not.

And finally, if you’re interested in reading more about all this, I recommend checking out ‘Velocity kinematics – The manipulator Jacobian’ available online, it’s a great resource.


Robot control part 1: Forward transformation matrices

I’m doing a tour of learning down at the Brains in Silicon lab run by Dr. Kwabena Boahen for the next month or so working on learning a bunch about building and controlling robots and some other things, and one of the super interesting things that I’m reading about is effective methods for force control of robots. I’ve mentioned operational space (or task space) control of robotic systems before, in the context of learning the inverse kinematic transformation, but down here the approach is to analytically derive the dynamics of the system (as opposed to learning them) and use these to explicitly calculate control signals to move in task space that take advantage of the passive dynamics of the system.

In case you don’t remember what those words mean, operational (or task) space refers to a different configuration space than the basic / default robot configuration space. FOR EXAMPLE: If we have a robot arm with three degrees of freedom (DOF), that looks something like this:

robot-arm1
where two joints rotate, and are referred to as q_1 and q_2, respectively, then the most obvious representation for the system state, \textbf{q}, is the joint-angle space \textbf{q} = [q_1, q_2]. So that’s great, but often when we’re using this robot we’re going to be more interested in controlling the position of the end-effector rather than the angles of the joints. We would like to operate to complete our task in terms of \textbf{x} = [x, y], where x, y are the Cartesian coordinates of our hand in 2D space. So then \textbf{x} = [x, y] is our operational (or task) space.

I also mentioned the phrase ‘passive dynamics’. It’s true, go back and check if you don’t believe me. Passive dynamics refer to how the system moves from a given initial condition when no control signal is applied. For example, passive dynamics incorporate the effects of gravity on the system. If we put our arm up in the air and remove any control signal, it falls down by our side. The reason that we’re interested in passive dynamics is because they’re movement for free. So if my goal is to move my arm to be down by my side, I want to take advantage of the fact that the system naturally moves there on its own simply by removing my control signal, rather than using a bunch of energy to force my arm to move down.

There are a bunch of steps leading up to building controllers that can plan trajectories in a task space. As a first step, it’s important that we characterize the relationship of each of the reference coordinate frames of the robot’s links to the origin, or base, of the robot. The characterization of these relationships is done using what are called forward transformation matrices, and they will be the focus of the remainder of this post.

Forward transformation matrices in 2D

As I mentioned mere sentences ago, forward transformation matrices capture the relationship between the reference frames of different links of the robot. A good question might be ‘what is a reference frame?’ A reference frame is basically the point of view of each of the robotic links, where if you were an arm joint yourself what you would consider ‘looking forward’. To get a feel for these and why it’s necessary to be able to move between them, let’s look at the reference frames of each of the links from the above drawn robot:

robot_coordinate_frames1
We know that from q_2 our end-effector point \textbf{p} is length d_2 away along its x-axis. Similarly, we know that q_2 is length d_1 away from q_1 along its x-axis, and that q_1 is length d_0 away from the origin along its y-axis. The question is, then, in terms of the origin’s coordinate frame, where is our point \textbf{p}?

In this configuration pictured above it’s pretty straightforward to figure out, it’s simply (x = d_2, y = d_0 + d_1). So you’re feeling pretty cocky, this stuff is easy. OK hotshot, what about NOW:

rotated_robot_coordinate_frames1
It’s not as straightforward once rotations start being introduced. So what we’re looking for is a method of automatically accounting for the rotations and translations of points between different coordinate frames, such that if we know the current angles of the robot joints and the relative positions of the coordinate frames we can quickly calculate the position of the point of interest in terms of the origin coordinate frame.

Accounting for rotation

So let’s just look quick at rotating axes. Here’s a picture:

rotation

The above image displays two frames of reference with the same origin rotated from each other by q degrees. Imagine a point \textbf{p} = (p_{x_1}, p_{y_1}) specified in reference frame 1; to find its coordinates in terms of the origin reference frame, or (x_0, y_0) coordinates, it is necessary to find out the contributions of the x_1 and y_1 axes to the x_0 and y_0 axes. The contributions to the x_0 axis from p_{x_1} are calculated

p_{x_0} = cos(q) p_{x_1},

which takes the x_1 position of p_{x_1} and maps it to x_0.
To calculate how p_{y_1} affects the position in x_0 we calculate

p_{y_0} = cos(90 + q) p_{y_1},

which is equivalent to -cos(90 - q), as shown above.

This term can be rewritten as -sin(q) because sin and cos are phase shifted 90 degrees from one another. This leads to the total contributions of a point defined in the (x_1, y_1) axes to the x_0 axis being

p_{0_x} = cos(q) p_{x_1} - sin(q) p_{y_1}.

Similarly for the y_0 axis contributions we have

p_{0_y} = sin(q) p_{x_1} + sin(90 - q) p_{y_1},

p_{0_y} = sin(q) p_{x_1} + cos(q) p_{y_1}.

Rewriting the above equations in matrix form gives:

^1_0\textbf{R} \; \textbf{p}_1 = \left[ \begin{array}{cc} cos(q_0) & -sin(q_0) \\ sin(q_0) & cos(q_0) \end{array} \right] \left[ \begin{array}{c} p_{x_1} \\ p_{y_1} \end{array} \right],

where ^1_0\textbf{R} is called a rotation matrix.
The notation used here for these matrices is that the reference frame number being rotated from is denoted in the superscript before, and the reference frame being rotated into is in the subscript. ^1_0\textbf{R} denotes a rotation from reference frame 1 into reference frame 0 (using the same notation as described here).

To find the location of a point defined in reference frame 1 in reference frame 0 coordinates, we then multiply by the rotation matrix ^1_0\textbf{R}.

Accounting for translation

Alrighty, rotation is great, but as you may have noticed our robots joints are not all right on top of each other. The second part of transformation is translation, and so it is also necessary to account for distances between reference frame origins.

rotation_and_distance
Let’s look at the reference frames 1 and 0 shown in the above figure, where point \textbf{p} = (2,2) in reference frame 1. Reference frame 1 is rotated 45 degrees from and located at (3, 2) in reference frame 0. To account for this translation and rotation a new matrix will be created that includes both rotation and translation. It is generated by appending distances, denoted \textbf{D}, to the rotation matrix ^1_0\textbf{R} along with a row of zeros ending in a 1 to get a transformation matrix:

^1_0\textbf{T} = \left[ \begin{array}{cc} ^1_0\textbf{R} & ^1_0\textbf{D} \\ \textbf{0} & \textbf{1} \end{array} \right],
^1_0\textbf{T} = \left[ \begin{array}{ccc} cos(q_0) & -sin(q_0) & d_{x_0} \\ sin(q_0) & cos(q_0) & d_{y_0} \\ 0 & 0 & 1 \end{array} \right].

To make the matrix-vector multiplications work out, a homogeneous representation must be used, which adds an extra row with a 1 to the end of the vector \textbf{p} to give

\textbf{p} = \left[ \begin{array}{c} p_x \\ p_y \\ 1 \end{array} \right].

When position vector \textbf{p} is multiplied by the transformation matrix ^1_0\textbf{T} the answer should be somewhere around (3, 5) from visual inspection, and indeed:

^1_0\textbf{T} \; \textbf{p} = \left[ \begin{array}{ccc} cos(45) & -sin(45) & 3 \\ sin(45) & cos(45) & 2 \\ 0 & 0 & 1 \end{array} \right] \left[ \begin{array}{c} 2 \\ 2 \\ 1 \end{array} \right] = \left[ \begin{array}{c} 3 \\ 4.8285 \\ 1 \end{array} \right].

To get the coordinates of \textbf{p} in reference frame 0 now simply take the first two elements of the resulting vector \textbf{p} = (3, 4.8285).

Applying multiple transformations

We can also string these things together! What if we have a 3 link, planar (i.e. rotating on the (x,y) plane) robot arm? A setup like this:

rotation and distance2
We know that our end-effector is at point (1,2) in reference frame 2, which is at an 80 degree angle from reference frame 1 and located at (x_1 = 2.5, y_1 = 4). That gives us a transformation matrix

^2_1\textbf{T} = \left[ \begin{array}{ccc} cos(80) & -sin(80) & 2.5 \\ sin(80) & cos(80) & 4 \\ 0 & 0 & 1 \end{array} \right].

To get our point in terms of reference frame 0 we account for the transform from reference frame 1 into reference frame 2 with ^2_1\textbf{T} and then account for the transform from reference frame 0 into reference frame 1 with our previously defined transformation matrix

^1_0\textbf{T} = \left[ \begin{array}{ccc} cos(45) & -sin(45) & 3 \\ sin(45) & cos(45) & 2 \\ 0 & 0 & 1 \end{array} \right].

So let’s give it a shot! By eyeballing it we should expect our answer to be somewhere around (7,0) or so, I would say.

\textbf{p}_0 = ^1_0\textbf{T} \; ^2_1\textbf{T} \; \textbf{p}_2 = \\ \\ \left[ \begin{array}{ccc} cos(45) & -sin(45) & 3 \\ sin(45) & cos(45) & 2 \\ 0 & 0 & 1 \end{array} \right] \left[ \begin{array}{ccc} cos(80) & -sin(80) & 2.5 \\ sin(80) & cos(80) & 4 \\ 0 & 0 & 1 \end{array} \right] \left[ \begin{array}{c} 1 \\ 2 \\ 1 \end{array} \right] = \left[ \begin{array}{c} -0.273 \\ 6.268 \\ 1 \end{array} \right].

And it’s a good thing we didn’t just eyeball it! Accurate drawings might have helped but the math gives us an exact answer. Super!

And one more note, if we’re often performing this computation, then instead of performing 2 matrix multiplications every time we can work out

^2_0\textbf{T} = ^1_0\textbf{T} \; ^2_1\textbf{T}

and simply multiply our point in reference frame 2 by this new transformation matrix ^2_0\textbf{T} to calculate the coordinates in reference frame 0.
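
Here’s a small sketch of this chaining in NumPy (transform_2d is just a helper name I’m using for building the homogeneous matrices from above):

import numpy as np

def transform_2d(theta_deg, dx, dy):
    """ 2D homogeneous transform: rotation (degrees) plus translation. """
    t = np.deg2rad(theta_deg)
    return np.array([[np.cos(t), -np.sin(t), dx],
                     [np.sin(t),  np.cos(t), dy],
                     [0.0, 0.0, 1.0]])

T10 = transform_2d(45.0, 3.0, 2.0)   # reference frame 1 -> 0
T21 = transform_2d(80.0, 2.5, 4.0)   # reference frame 2 -> 1
T20 = np.dot(T10, T21)               # combined: reference frame 2 -> 0

p2 = np.array([1.0, 2.0, 1.0])       # end-effector in frame 2 (homogeneous)
print(np.dot(T20, p2))               # ~[-0.273, 6.268, 1]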

Forward transform matrices in 3D

The example here is taken from Samir Menon’s RPP control tutorial.

It turns out it’s trivial to add in the z dimension and start accounting for 3D transformations. Let’s say we have a standard revolute-prismatic-prismatic robot, which looks exactly like this, or roughly like this:

robot_coordinate_frames_3D
where the base rotates around the z axis, and the distance from reference frame 0 to reference frame 1 is 1 unit, also along the z axis. The rotation matrix from reference frame 0 to reference frame 1 is:

^1_0\textbf{R} = \left[ \begin{array}{ccc} cos(q_0) & -sin(q_0) & 0 \\ sin(q_0) & cos(q_0) & 0 \\ 0 & 0 & 1 \end{array} \right]

and the translation vector is

^1_0\textbf{D} = [0, 0, 1]^T.

The transformation matrix from reference frame 0 to reference frame 1 is then:

^1_0\textbf{T} = \left[ \begin{array}{cccc} cos(q_0) & -sin(q_0) & 0 & 0 \\ sin(q_0) & cos(q_0) & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{array} \right],

where the third column indicates that the z axis is unaffected by the rotation (the rotation happens around the z axis), and the fourth (translation) column shows that we move 1 unit along the z axis. The fourth row is again then only present to make the multiplications work out and provides no information.

For transformation from the reference frame 1 to reference frame 2, there is no rotation (because it is a prismatic joint), and there is translation along the y axis of reference frame 1 equal to .5 + q_1. This gives a transformation matrix:

^2_1\textbf{T} = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0.5 + q_1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right].

The final transformation, from the origin of reference frame 2 to the end-effector position is similarly another transformation with no rotation (because this joint is also prismatic), that translates along the z axis:

^{ee}_2\textbf{T} = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -0.2 - q_2 \\ 0 & 0 & 0 & 1 \end{array} \right].

The full transformation from reference frame 0 to the end-effector is found by combining all of the above transformation matrices:

^{ee}_0\textbf{T} = ^1_0\textbf{T} \; ^2_1\textbf{T} \; ^{ee}_2\textbf{T} = \left[ \begin{array}{cccc} cos(q_0) & -sin(q_0) & 0 & -sin(q_0)(0.5 + q_1) \\ sin(q_0) & cos(q_0) & 0 & cos(q_0) (0.5 + q_1) \\ 0 & 0 & 1 & 0.8 - q_2 \\ 0 & 0 & 0 & 1 \end{array} \right].

To transform a point from the end-effector reference frame into terms of the origin reference frame, simply multiply the transformation matrix by the point of interest relative to the end-effector. If it is the end-effector position that is of interest to us, p = [0, 0, 0, 1]^T. For example, let q_0 = \frac{\pi}{3}, q_1 = .3, and q_2 = .4, then the end-effector location is:

^{ee}_0\textbf{T} \; \textbf{p} = \left[ \begin{array}{cccc} cos(q_0) & -sin(q_0) & 0 & -sin(q_0)(0.5 + q_1) \\ sin(q_0) & cos(q_0) & 0 & cos(q_0) (0.5 + q_1) \\ 0 & 0 & 1 & 0.8 - q_2 \\ 0 & 0 & 0 & 1 \end{array} \right] \; \left[ \begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array} \right] = \left[ \begin{array}{c} -0.693 \\ 0.4 \\ 0.4 \\ 1 \end{array} \right].

Inverting our transformation matrices

What if we know where a point is defined in reference frame 0, but we want to know where it is relative to our end-effector’s reference frame? Fortunately this is straightforward thanks to the way that we’ve defined our transform matrices. Continuing the same robot example and configuration as above, and denoting the rotation part of the transform matrix \textbf{R} and the translation part \textbf{D}, the inverse transform is defined:

(^{ee}_0\textbf{T})^{-1} = \left[ \begin{array}{cc} (^{ee}_0\textbf{R})^T & -(^{ee}_0\textbf{R})^T \; ^{ee}_0\textbf{D} \\ 0 & 1 \end{array} \right].

If we have a point that’s at \textbf{p}_0 = [1, 1, .5, 1]^T in reference frame 0, then we can calculate that relative to the end-effector it is at:

\textbf{p} = (^{ee}_0\textbf{T})^{-1} \; \textbf{p}_0 = [1.37, -1.17, 0.1, 1]^T.
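
Here’s a small sketch that builds these matrices in NumPy, then calculates both the end-effector position and the inverse transform of a point, using the joint values from the example above:

import numpy as np

q0, q1, q2 = np.pi/3, 0.3, 0.4

T10 = np.array([[np.cos(q0), -np.sin(q0), 0, 0],
                [np.sin(q0),  np.cos(q0), 0, 0],
                [0, 0, 1, 1],
                [0, 0, 0, 1]])
T21 = np.array([[1, 0, 0, 0],
                [0, 1, 0, 0.5 + q1],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])
Tee2 = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 1, -0.2 - q2],
                 [0, 0, 0, 1]])
Tee0 = np.dot(T10, np.dot(T21, Tee2))

# end-effector position in the origin reference frame
print(np.dot(Tee0, np.array([0, 0, 0, 1])))

# inverse transform: a point given in frame 0, relative to the end-effector
R, D = Tee0[:3, :3], Tee0[:3, 3]
Tee0_inv = np.eye(4)
Tee0_inv[:3, :3] = R.T
Tee0_inv[:3, 3] = -np.dot(R.T, D)
print(np.dot(Tee0_inv, np.array([1, 1, 0.5, 1])))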

Conclusions

These are, of course, just the basics with forward transformation matrices. There are numerous ways to go about this, but this method is fairly straightforward. If you’re interested in more, there are a bunch of youtube videos and detailed tutorials all over the web. There’s a bunch of neat stuff about why the matrices are set up like they are (search: homogeneous transformations) and more complex examples.

The robot example for the 3D case here didn’t have any spherical joints (each joint only moved in 2 dimensions), but it is also possible to derive the forward transformation matrices in that case; it’s just more complex, and not necessary here since spherical joints aren’t used in the robots I’ll be looking at. This introduction is enough to get started and move on to some more exciting things, so let’s do that!
