Inverse kinematics is a common topic in robotics control; one that almost anyone working with a robotic arm needs to address at some point. The use case I’ll be looking at here is the situation where we want to specify a coordinate for the end-effector and solve for the joint angle configuration that best achieves it. It’s surprisingly straightforward to handle with the right approach, so it’s a nice little exercise in Python optimization (and we’ll even work in some graphical display!). And per usual, the code is all up on my GitHub.

Most of the time the servos anyone is working with are position controlled, meaning that you provide them with a set of desired angles to move to, rather than force controlled, where you provide a torque signal that directly moves them. In this case, inverse kinematics is all you need: you simply work out the set of joint angles you want the servos to move to and send those out directly. In biological systems this clearly isn’t the end of the work, because even if you know where you want to go you still have to provide a force (muscle activation) signal that will move your system there. But! As mentioned in the motor babbling paper review post I wrote a while back, having the target specified in the most basic space (i.e. joint angles instead of coordinates for a servo-based arm, or muscle lengths instead of joint angles for a muscle-based arm) can make calculating, or learning, the appropriate force signal much easier; so something that can do the inverse kinematics calculation is worth having.

**Inverse kinematics**

Now then, how do we go about finding the joint angle configuration that best achieves a desired coordinate? Constrained minimization! Our constraint is that the tip of the arm must be at the specified position, and since many different configurations can satisfy that, we need to specify some cost function for the system to minimize to choose among them. The cost function that we’re going to use here is a very intuitive one, that I first encountered in the paper ‘Learning operational space control’ by Jan Peters. What we’re going to do is specify a default, or resting, joint state configuration, and minimize our distance from that. Which makes a lot of sense.
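Written out (my notation: `q0` is the resting configuration, `(x_d, y_d)` the desired hand position, and the `w_i` are optional per-joint weights that show up in the code below), the problem is:

```latex
\begin{aligned}
\min_{q} \quad & \sqrt{\textstyle\sum_i w_i \,(q_i - q_{0,i})^2} \\
\text{s.t.} \quad & f_x(q) - x_d = 0, \\
                  & f_y(q) - y_d = 0,
\end{aligned}
```

where \(f_x(q)\) and \(f_y(q)\) are the forward kinematics of the arm, i.e. the hand position produced by joint angles \(q\).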

I’m going to post the code for calculating the inverse kinematics of a 3-link arm, and then we’ll work through it.

```python
import math

import numpy as np
import scipy.optimize


class Arm3Link:

    def __init__(self, q=None, q0=None, L=None):
        """Set up the basic parameters of the arm.
        All lists are in order [shoulder, elbow, wrist].

        :param list q: the initial joint angles of the arm
        :param list q0: the default (resting state) joint configuration
        :param list L: the arm segment lengths
        """
        # initial joint angles
        if q is None:
            q = [.3, .3, 0]
        self.q = q
        # some default arm positions
        if q0 is None:
            q0 = np.array([math.pi/4, math.pi/4, math.pi/4])
        self.q0 = q0
        # arm segment lengths
        if L is None:
            L = np.array([1, 1, 1])
        self.L = L

        self.max_angles = [math.pi, math.pi, math.pi/4]
        self.min_angles = [0, 0, -math.pi/4]

    def get_xy(self, q=None):
        """Return the (x, y) position of the end-effector
        for joint angles q (defaults to the current angles)."""
        if q is None:
            q = self.q
        x = self.L[0]*np.cos(q[0]) + \
            self.L[1]*np.cos(q[0]+q[1]) + \
            self.L[2]*np.cos(np.sum(q))
        y = self.L[0]*np.sin(q[0]) + \
            self.L[1]*np.sin(q[0]+q[1]) + \
            self.L[2]*np.sin(np.sum(q))
        return [x, y]

    def inv_kin(self, xy):
        """Find the joint angles that place the end-effector at xy
        while staying as close as possible to q0."""

        def distance_to_default(q, *args):
            # weights found with trial and error,
            # get some wrist bend, but not much
            weight = [1, 1, 1.3]
            return np.sqrt(np.sum([(qi - q0i)**2 * wi
                                   for qi, q0i, wi in zip(q, self.q0, weight)]))

        def x_constraint(q, xy):
            x = (self.L[0]*np.cos(q[0]) + self.L[1]*np.cos(q[0]+q[1]) +
                 self.L[2]*np.cos(np.sum(q))) - xy[0]
            return x

        def y_constraint(q, xy):
            y = (self.L[0]*np.sin(q[0]) + self.L[1]*np.sin(q[0]+q[1]) +
                 self.L[2]*np.sin(np.sum(q))) - xy[1]
            return y

        return scipy.optimize.fmin_slsqp(
            func=distance_to_default,
            x0=self.q,
            eqcons=[x_constraint, y_constraint],
            args=(xy,),
            iprint=0)  # iprint=0 suppresses output
```

I’ve taken out most of the comments for compactness, but there are plenty in the code on GitHub, don’t you worry. Alright, let’s take a look!

First, we go through and set up the parameters of our arm. If nothing was passed in during construction, then we make an arm with initial shoulder and elbow joint angles of 0.3 rad and a wrist angle of 0, and a resting configuration of π/4 at each joint. We also set up the arm segment lengths to all be 1, and the minimum and maximum joint angles.

Next, there’s the `get_xy` function, which just uses some simple trig to calculate the current position of the end-effector. This is going to be a handy thing to have when we want to see how well the joint configurations we calculate achieve our desired hand position.
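As a quick sanity check on that trig, here’s a standalone version of the same calculation (pulled out of the class, so the function signature here is mine):

```python
import numpy as np

def get_xy(q, L):
    """End-effector (x, y) of a planar 3-link arm: link i's absolute
    angle is the sum of the first i+1 joint angles."""
    x = L[0]*np.cos(q[0]) + L[1]*np.cos(q[0]+q[1]) + L[2]*np.cos(np.sum(q))
    y = L[0]*np.sin(q[0]) + L[1]*np.sin(q[0]+q[1]) + L[2]*np.sin(np.sum(q))
    return x, y

# all joints at zero: the arm lies stretched out along the x-axis
print(get_xy(np.array([0., 0., 0.]), np.array([1., 1., 1.])))
```

With unit segment lengths and all angles at zero the hand sits at (3, 0), and bending just the elbow to π/2 moves it to (1, 2), as you can verify by hand.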

And then after this, we have the actual inverse kinematics function. `inv_kin` takes in a desired position, and returns to us a list of the joint angles that will best achieve it. We define three methods inside `inv_kin` that define our constrained optimization problem: `distance_to_default`, which is the function we are minimizing, and `x_constraint` and `y_constraint`, which are the constraints that must be met for a solution to be valid. These are all pretty straightforward functions: the `distance_to_default` function calculates the (weighted) Euclidean distance in joint space to the resting state, and the constraint functions calculate the difference between the actual and desired end-effector positions for a given set of joint angles.

There is one point of interest in the `distance_to_default` method, and that is the use of the weights therein. What is the point of applying a gain to these values, you ask? Why, simply that it lets us weight the relative importance of each joint remaining close to its default configuration! If you think about moving your arm on a horizontal plane, the wrist actually bends very little. To mimic this we can set the weight applied to the distance of the wrist from its resting state higher than those of the other joints, so that keeping the wrist near its resting state becomes a higher priority. I suggest playing around with the weights once the graphical display (below) is built up; it’s fun to see the effects different weightings have on the joint configurations.

With these three functions defined, we can now call up the `scipy.optimize.fmin_slsqp` function, which performs sequential least squares programming to arrive at a solution.
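To see the whole pipeline without the class scaffolding, here’s a stripped-down sketch (the link lengths, weights, and target are just illustrative values; I’ve also used `f_eqcons` to pass both constraints as a single vector-valued function, which `fmin_slsqp` accepts as an alternative to the `eqcons` list):

```python
import numpy as np
import scipy.optimize

L = np.array([1., 1., 1.])                  # arm segment lengths
q0 = np.array([np.pi/4, np.pi/4, np.pi/4])  # resting configuration
weight = np.array([1., 1., 1.3])            # extra penalty on wrist bend

def hand_xy(q):
    """Forward kinematics: (x, y) of the end-effector."""
    x = L[0]*np.cos(q[0]) + L[1]*np.cos(q[0]+q[1]) + L[2]*np.cos(np.sum(q))
    y = L[0]*np.sin(q[0]) + L[1]*np.sin(q[0]+q[1]) + L[2]*np.sin(np.sum(q))
    return np.array([x, y])

def distance_to_default(q):
    """Weighted joint-space distance from the resting configuration."""
    return np.sqrt(np.sum(weight * (q - q0)**2))

target = np.array([1., 1.5])
q = scipy.optimize.fmin_slsqp(
    func=distance_to_default,
    x0=np.array([.3, .3, 0.]),
    f_eqcons=lambda q: hand_xy(q) - target,  # must be zero at a solution
    iprint=0)

print(hand_xy(q))  # should land (numerically) on the target
```

Running the forward kinematics on the returned angles gives back the target position to within the solver’s tolerance.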

Really, that’s all there is to it!

To test this now, we have the following method:

```python
def test():
    # ###########Test it!##################
    arm = Arm3Link()

    # set of desired (x, y) hand positions
    x = np.arange(-.75, .75, .05)
    y = np.arange(0, .75, .05)

    # threshold for printing out information, to find trouble spots
    thresh = .025

    count = 0
    total_error = 0
    # test the inv_kin function across the range of specified x and y values
    for xi in range(len(x)):
        for yi in range(len(y)):
            # test the inv_kin function on a range of different targets
            xy = [x[xi], y[yi]]
            # run the inv_kin function, get the optimal joint angles
            q = arm.inv_kin(xy=xy)
            # find the (x, y) position of the hand given these angles
            actual_xy = arm.get_xy(q)
            # calculate the root squared error
            error = np.sqrt((np.array(xy) - np.array(actual_xy))**2)
            # total the error
            total_error += error

            # if the error was high, print out more information
            if np.sum(error) > thresh:
                print('-------------------------')
                print('Initial joint angles', arm.q)
                print('Final joint angles: ', q)
                print('Desired hand position: ', xy)
                print('Actual hand position: ', actual_xy)
                print('Error: ', error)
                print('-------------------------')

            count += 1

    print('\n---------Results---------')
    print('Total number of trials: ', count)
    print('Total error: ', total_error)
    print('-------------------------')
```

This simply works through a set of target points and calculates the total error across all of them. If any particular point gives an error above the threshold level, the information for that point is printed out so we can take a closer look if so desired. Fortunately, the threshold error isn’t even approached in this test, and for output we get:

```
---------Results---------
Total number of trials:  450
Total error:  [  3.33831421e-05   2.89667496e-05]
-------------------------
```

Not bad!

**Visualization**

So that’s all great and dandy, but it’s always nice to be able to really visualize these things, so I wrote another method that uses Pyglet to help visualize. This is a really easy-to-use graphics package that I tested out as a means here, and I was very happy with it. I ended up writing a method that pops up a window with a 3-link arm drawn in it; the arm uses the inverse kinematics function of the `Arm3Link` class written above to calculate the appropriate joint angles for following the mouse. Once the joint angles are calculated, the joint locations are also calculated, and the arm is drawn. Pleasingly, the `inv_kin` function above is fast enough to work in real time, so it’s a kind of fun toy example. Again, I’ll show the code and then we’ll work through it below.

```python
import numpy as np
import pyglet

import Arm


def plot():
    """A function for plotting an arm, and having it calculate the
    inverse kinematics such that given the mouse (x, y) position it
    finds the appropriate joint angles to reach that point."""

    # create an instance of the arm
    arm = Arm.Arm3Link(L=np.array([300, 200, 100]))

    # make our window for drawin'
    window = pyglet.window.Window()

    label = pyglet.text.Label('Mouse (x,y)', font_name='Times New Roman',
                              font_size=36, x=window.width//2,
                              y=window.height//2,
                              anchor_x='center', anchor_y='center')

    def get_joint_positions():
        """This method finds the (x, y) coordinates of each joint"""
        x = np.array([
            0,
            arm.L[0]*np.cos(arm.q[0]),
            arm.L[0]*np.cos(arm.q[0]) + arm.L[1]*np.cos(arm.q[0]+arm.q[1]),
            arm.L[0]*np.cos(arm.q[0]) + arm.L[1]*np.cos(arm.q[0]+arm.q[1]) +
            arm.L[2]*np.cos(np.sum(arm.q))]) + window.width/2

        y = np.array([
            0,
            arm.L[0]*np.sin(arm.q[0]),
            arm.L[0]*np.sin(arm.q[0]) + arm.L[1]*np.sin(arm.q[0]+arm.q[1]),
            arm.L[0]*np.sin(arm.q[0]) + arm.L[1]*np.sin(arm.q[0]+arm.q[1]) +
            arm.L[2]*np.sin(np.sum(arm.q))])

        return np.array([x, y]).astype('int')

    window.jps = get_joint_positions()

    @window.event
    def on_draw():
        window.clear()
        label.draw()
        for i in range(3):
            pyglet.graphics.draw(
                2, pyglet.gl.GL_LINES,
                ('v2i', (window.jps[0][i], window.jps[1][i],
                         window.jps[0][i+1], window.jps[1][i+1])))

    @window.event
    def on_mouse_motion(x, y, dx, dy):
        # call the inverse kinematics function of the arm
        # to find the joint angles optimal for pointing at
        # this position of the mouse
        label.text = '(x,y) = (%.3f, %.3f)' % (x, y)
        arm.q = arm.inv_kin([x - window.width/2, y])  # get new arm angles
        window.jps = get_joint_positions()  # get new joint (x,y) positions

    pyglet.app.run()

plot()
```

There are really only a few things that we’re doing in this method: we create an arm with somewhat normal segment lengths and a window for plotting; we have a function for calculating the coordinates of each of the joints for plotting; we have a function that tracks the position of the mouse and updates our arm joint angles by calling the `arm.inv_kin` function we wrote above; and lastly we plot it! Easy. The functions aren’t quite in that order, and that’s because I chose alphabetical ordering over conceptual. Here’s a picture!
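The joint-coordinate calculation is just the forward kinematics again, keeping the intermediate points this time. Here’s a standalone version (names mine) that does the same bookkeeping with a cumulative sum instead of writing each partial sum out:

```python
import numpy as np

def joint_positions(q, L):
    """(x, y) of the arm's base, elbow, wrist, and fingertip, in order,
    for drawing the three segments between consecutive points."""
    angles = np.cumsum(q)  # absolute angle of each link
    xs = np.concatenate(([0.], np.cumsum(L * np.cos(angles))))
    ys = np.concatenate(([0.], np.cumsum(L * np.sin(angles))))
    return xs, ys

# arm stretched along the x-axis: joints at x = 0, 300, 500, 600
print(joint_positions(np.array([0., 0., 0.]), np.array([300., 200., 100.])))
```

The last point returned is exactly the end-effector position that `get_xy` computes.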

I think the only kind of weird thing going on in here is that the variables used in both this method and the overridden `window` event handlers are defined as belonging to `window`. Aside from that, it’s straightforward trig to calculate the joints’ Cartesian coordinates, and then there’s also a label that displays the mouse position for the hell of it. Honestly, the label was in a Pyglet tutorial example and I just left it in.

It’s not going to win any awards for graphics by any means, but it’s simple and it works! And, assuming you have Pyglet installed (which is very easy to do), you should straight up be able to run it and be amazed. Or a little impressed. Or not, whatever. But in any case, now you have your very own inverse kinematics example on a 3-link arm! Moving to any other number of joints is exactly the same; you just have to add the corresponding number of default joint configurations and everything should work out as is.
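For the “any number of joints” claim, the forward kinematics side generalizes with the same cumulative-sum trick; here’s a hypothetical n-link version of `get_xy` (the optimization code needs no changes beyond sizing `q0` and the weights to match):

```python
import numpy as np

def get_xy_nlink(q, L):
    """End-effector (x, y) of a planar arm with any number of links:
    link i's absolute angle is the cumulative sum of joint angles 0..i."""
    angles = np.cumsum(q)
    return np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))

# 5-link arm, all joints at zero: hand at (sum of lengths, 0)
print(get_xy_nlink(np.zeros(5), np.array([1., 1., 1., .5, .5])))
```

For three links this reduces term by term to the explicit sums in `get_xy` above.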

From here, the plan is to look at bringing an actual force-driven arm model into the mix, and then building up a system that can track the mouse, but with a control signal that specifies forces to apply to the arm, rather than positions. Which I think will be pretty neat. And in case you missed it at the top, here’s a link to all the code on my GitHub.