Full body obstacle collision avoidance

Previously I’ve discussed how to avoid obstacles using DMPs in the end-effector trajectory. This is good when you’re controlling a single disconnected point-mass, like a mobile robot navigating around an environment. But if you want to use this system to control a robotic manipulator, then pretty quickly you run into the problem that your end-effector is not a disconnected point-mass moving around the environment. It’s attached to the rest of the arm, and moving such that the arm segments and joints also avoid the obstacle is a whole other deal.

I was doing a quick lit scan of control methods for avoiding collisions between the arm’s joints and links and obstacles, and was kind of surprised that there wasn’t much in the way of recent discussion about it. There is, however, a 1986 paper from Dr. Oussama Khatib titled Real-time obstacle avoidance for manipulators and mobile robots that pretty much solves this problem, short of getting into explicit path planning methods. Which could be why there aren’t papers since then about it. All the same, I couldn’t find any implementations of it online, and there are a few tricky parts, so in this post I’m going to work through the implementation.

Note: The implementation that I’ve worked out here uses spheres to represent the obstacles. This works pretty well in general: you just make the sphere large enough to cover whatever obstacle you’re avoiding. But if you want more precise control around other shapes, you can check out the appendix of Dr. Khatib’s paper, where he provides the math for cubes and cones as well.

Note: You can find all of the code I use here and everything you need for the VREP implementation up on my GitHub. I’ve precalculated the functions that are required, because generating them is quite computationally intensive. Hopefully the file saving doesn’t get weird between different operating systems, but if it does, try deleting all of the files in the ur5_config folder and running the code again. This will regenerate those files (on my laptop this takes ~4 hours, though, so beware).

The general idea

Since it’s from Dr. Khatib, you might expect that this approach involves task space. And indeed, your suspicions are correct! The algorithm works by identifying forces in Cartesian coordinates that will move any point of the arm that’s too close to an obstacle away from it. The algorithm follows the form:

Setup

  • Specify obstacle location and size
  • Specify a threshold distance to the obstacle

While running

  • Find the closest point of each arm segment to obstacles
  • If within threshold of obstacle, generate force to apply to that point
  • Transform this force into joint torques
  • Add directly to the outgoing control signal

Something you might notice is that this is similar to the addition of the null space controller that we’ve seen before in operational space control. There’s a distinct difference here, though: the obstacle avoidance signal is added directly to the outgoing control signal, and it’s not filtered (like the null space control signal is), so there’s no guarantee that it won’t affect the movement of the end-effector. In fact, it’s very likely to affect the movement of the end-effector, but that’s also desirable, because not ramming the arm through an obstacle is as important as getting to the target.

OK, let’s walk through these steps one at a time.

Setup

I mentioned that we’re going to treat all of our obstacles as spheres. It’s actually not much harder to do these calculations for cubes too, but this is already going to be long enough, so I’m only addressing spheres here. This algorithm assumes we have a list of every obstacle’s centre-point and radius.

We want the avoidance response of the system to be a function of the distance to the obstacle, such that the closer the arm is to the obstacle the stronger the response. The function that Dr. Khatib provides is of the following form:

\textbf{F}_{psp} = \left\{ \begin{array}{cc} \eta \left( \frac{1}{\rho} - \frac{1}{\rho_0} \right) \frac{1}{\rho^2} \frac{\partial \rho}{\partial \textbf{x}} & \rho \leq \rho_0 \\ \textbf{0} & \rho > \rho_0 \end{array} \right. ,

where \rho is the distance from the point on the arm to the obstacle, \rho_0 is the threshold distance at which the avoidance function activates, \frac{\partial \rho}{\partial \textbf{x}} is the partial derivative of that distance with respect to the given point on the arm, and \eta is a gain term.

This function looks complicated, but it’s actually pretty intuitive. The partial derivative term simply points away from the obstacle, in Cartesian coordinates, i.e. it tells the system which direction to move to get away from the obstacle. The rest of the term is just a gain that starts out at zero when \rho = \rho_0, and gets huge as the arm nears the obstacle (as \rho \to 0 \Rightarrow \frac{1}{\rho} \to \infty). Using \eta = .2 and \rho_0 = .2 gives us a function that looks like

[Figure: the avoidance gain as a function of distance to the obstacle]
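If you’d like to reproduce that curve yourself, here’s a minimal sketch (assuming matplotlib is available) that evaluates just the scalar part of the avoidance function:

import numpy as np
import matplotlib.pyplot as plt

eta = .2  # gain term
rho_0 = .2  # threshold distance at which the avoidance activates

rho = np.linspace(.001, .3, 1000)  # distance to the obstacle
# the scalar part of Fpsp, which drops to zero outside the threshold
gain = np.where(rho <= rho_0,
                eta * (1.0/rho - 1.0/rho_0) / rho**2,
                0.0)

plt.plot(rho, gain)
plt.xlabel('rho (distance to the obstacle)')
plt.ylabel('avoidance gain')
plt.show()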

So you can see that very quickly a very, very strong push away from this obstacle is going to be generated once we enter the threshold distance. But how do we know exactly when we’ve entered the threshold distance?

Find the closest point

We want to avoid the obstacle with the whole body of the arm, but it turns out we can reduce the problem to worrying only about the closest point of each arm segment to the obstacle, and moving that one point away from the obstacle if it comes within the threshold distance.

To find the closest point on a given arm segment to the obstacle we’ll do some pretty neat trig. I’ll post the code for it and then discuss it below. In this snippet, p1 and p2 are the beginning and ending (x, y, z) locations of the arm segment (which we’re assuming is a straight line), and v is the centre of the obstacle.

import numpy as np

# the vector along our arm segment line
vec_line = p2 - p1
# the vector from the first line point to the obstacle centre
vec_ob_line = v - p1
# calculate the projection, normalized by the length of the arm segment
projection = (np.dot(vec_ob_line, vec_line) /
              np.dot(vec_line, vec_line))
if projection < 0:
    # then the closest point is the start of the segment
    closest = p1
elif projection > 1:
    # then the closest point is the end of the segment
    closest = p2
else:
    closest = p1 + projection * vec_line

The first thing we do is find the vector along the arm segment, and then the vector from the start point of the arm segment to the obstacle centre. Once we have these, we calculate:

\frac{\textbf{v}_\textrm{ob\_line} \; \cdot \; \textbf{v}_\textrm{line}}{\textbf{v}_\textrm{line} \; \cdot \; \textbf{v}_\textrm{line}},

Using the geometric definition of the dot product of two vectors, we can rewrite the above as

\frac{||\textbf{v}_\textrm{ob\_line}|| \; || \textbf{v}_\textrm{line} || \; \textrm{cos}(\theta)}{||\textbf{v}_\textrm{line}||^2} = \frac{||\textbf{v}_\textrm{ob\_line}||} {||\textbf{v}_\textrm{line}||} \textrm{cos}(\theta)

which reads as the magnitude of vec_ob_line divided by the magnitude of vec_line (I know, these are terrible names, sorry) multiplied by the cosine of the angle between the two vectors. If the cosine of the angle is < 0 (in which case projection will also be < 0), then right off the bat we know that the start of the arm segment, p1, is the closest point. If the projection value is > 1, then we know that 1) the projection of the obstacle vector onto the segment is longer than the segment itself, and 2) the angle is such that the end of the arm segment, p2, is the closest point to the obstacle.

Finally, in the last case we know that the closest point is somewhere along the arm segment. To find where exactly, we do the following

\textbf{p}_1 + \textrm{projection} \; \textbf{v}_\textrm{line},

which can be rewritten

\textbf{p}_1 + \frac{||\textbf{v}_\textrm{ob\_line}||} {||\textbf{v}_\textrm{line}||} \textrm{cos}(\theta) \; \textbf{v}_\textrm{line}.

I find it more intuitive if we rearrange the second term to be

\textbf{p}_1 + \frac{\textbf{v}_\textrm{line}} {||\textbf{v}_\textrm{line}||} \; ||\textbf{v}_\textrm{ob\_line} || \; \textrm{cos}(\theta).

So what this is all doing is starting us off at the beginning of the arm segment, p1, and to that we add this other fun term. The first part of this term provides direction, normalized to magnitude 1. The second part provides the magnitude, specifically the exact distance along vec_line we need to travel so that the line from that point to the obstacle centre meets vec_line at a right angle (giving us the shortest distance). This handy figure from the Wikipedia page helps illustrate exactly what’s going on with the second part, where B is vec_line and A is vec_ob_line:

[Figure: the projection of A onto B, from the Wikipedia page on the dot product]
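As a quick sanity check of the snippet above, here’s a toy example with arbitrarily chosen numbers: a horizontal arm segment with the obstacle centre directly above its midpoint.

import numpy as np

p1 = np.array([0., 0., 0.])  # start of the arm segment
p2 = np.array([1., 0., 0.])  # end of the arm segment
v = np.array([.5, 1., 0.])   # obstacle centre

vec_line = p2 - p1
vec_ob_line = v - p1
projection = np.dot(vec_ob_line, vec_line) / np.dot(vec_line, vec_line)
# projection = 0.5, which lies between 0 and 1, so the closest point
# is along the segment, directly below the obstacle: [0.5, 0, 0]
closest = p1 + projection * vec_line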
Armed with this information, we understand how to find the closest point of each arm segment to the obstacle, and we are now ready to generate a force to move the arm in the opposite direction!

Check distance, generate force

To calculate the distance, all we have to do is calculate the Euclidean distance from the closest point of the arm segment to the center of the sphere, and then subtract out the radius of the sphere:

# calculate the distance from the obstacle centre to the closest point
dist = np.sqrt(np.sum((v - closest)**2))
# account for the size of the obstacle (obstacle[3] is the sphere's radius)
rho = dist - obstacle[3]

Once we have this, we can check it and generate F_{psp} using the equation we defined above. The one part of that equation that wasn’t specified was exactly what \frac{\partial \rho}{\partial \textbf{x}} is. Since it’s just the partial derivative of the distance to the obstacle with respect to the closest point, we can calculate it as the normalized difference between the two points (note that the code below computes the difference as (v - closest), i.e. pointing from the arm toward the obstacle; the sign gets flipped later, when the force is mapped into joint torques):

# difference between the obstacle centre and the closest point, normalized by rho
drhodx = (v - closest) / rho
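Putting those pieces together, generating the force is just the equation from the setup section written out in code. This is a minimal sketch; eta is the gain and rho_0 the threshold distance (named threshold in the full example at the end of the post):

if rho < rho_0:
    Fpsp = eta * (1.0/rho - 1.0/rho_0) * 1.0/rho**2 * drhodx
else:
    Fpsp = np.zeros(3)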

Alright! Now that we’ve found the closest point and know the force we want to apply, from here on it’s standard operational space procedure.

Transform the force into torques

As we all remember, the equation for transforming a control signal from operational space into joint space involves two terms aside from the desired force, namely the Jacobian and the operational space inertia matrix:

\textbf{u}_\textrm{psp} = \textbf{J}^T_{psp} \textbf{M}_{psp} \textbf{F}_{psp},

where \textbf{J}_{psp} is the Jacobian for the point of interest, \textbf{M}_{psp} is the operational space inertia matrix for the point of interest, and \textbf{F}_{psp} is the force we defined above.
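In code this ends up as a single line, which you’ll see again in the full example at the end of the post (the variable names here are taken from that example). Note the minus sign: because drhodx was computed as (v - closest), i.e. pointing from the arm toward the obstacle, the negation is what actually pushes the arm away, in agreement with the equation above.

# map the avoidance force at the closest point into joint torques
u_psp = -np.dot(Jpsp.T, np.dot(Mxpsp, Fpsp))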

Calculating the Jacobian for an unspecified point

So the first thing we need to calculate is the Jacobian for this point on the arm. There are a bunch of ways you could go about this, but the way I’m going to do it here is by building on the post where I used SymPy to automate the Jacobian derivation. The way we did that was by defining the transforms from the origin reference frame to the first link, from the first link to the second, etc, until we reached the end-effector. Then, whenever we needed a Jacobian we could string together the transforms to get the transform from the origin to that point on the arm, and take the partial derivative with respect to the joints (using SymPy’s derivative method).

As an example, say we wanted to calculate the Jacobian for the third joint, we would first calculate:

^{\textrm{org}}\textbf{T}_3 = ^{\textrm{org}}\textbf{T}_0 \; ^0\textbf{T}_1 \; ^1\textbf{T}_2 \; ^2\textbf{T}_3,

where ^n\textbf{T}_m reads the transform from reference frame n to reference frame m.

Once we have this transformation matrix, ^\textrm{org}\textbf{T}_3, we multiply it by the point of interest in reference frame 3, which, previously, has usually been \textbf{x} = [0, 0, 0]. In other words, usually we’re just interested in the origin of reference frame 3. So the Jacobian is just

\frac{\partial \; ^\textrm{org}\textbf{T}_3 \textbf{x}}{\partial \textbf{q}}.

What if we’re interested in some non-specified point along link 3, though? Well, using SymPy we make \textbf{x} = [x_0, x_1, x_2, 1] instead of \textbf{x} = [0, 0, 0, 1] (recall that the 1 at the end of these vectors is just to make the homogeneous transform math work), and make the Jacobian function SymPy generates for us dependent on both \textbf{q} and \textbf{x}, rather than just \textbf{q}. In code this looks like:

Torg3 = self._calc_T(name="3")
# transform the point of interest x into world coordinates
Torg3x = sp.simplify(Torg3 * sp.Matrix(x))
# differentiate the (x, y, z) position with respect to the joint angles
J3 = Torg3x[:3, :].jacobian(q)
J3_func = sp.lambdify(q + x, J3)

Now it’s possible to calculate the Jacobian for any point along link 3 just by changing the parameters that we pass into J3_func! Most excellent.
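If you’d like to play with the idea outside of the UR5 config code, here’s a self-contained toy sketch for a 2-link planar arm. Everything here (the arm, the names, the link length) is made up purely for illustration, not taken from the UR5 implementation:

import sympy as sp

q = [sp.Symbol('q0'), sp.Symbol('q1')]  # joint angles
x = [sp.Symbol('x0'), sp.Symbol('x1')]  # point of interest in link 2's frame
L1 = 1.0  # length of link 1

# homogeneous transforms: origin -> link 1 frame -> link 2 frame
T01 = sp.Matrix([[sp.cos(q[0]), -sp.sin(q[0]), 0],
                 [sp.sin(q[0]),  sp.cos(q[0]), 0],
                 [0, 0, 1]])
T12 = sp.Matrix([[sp.cos(q[1]), -sp.sin(q[1]), L1],
                 [sp.sin(q[1]),  sp.cos(q[1]), 0],
                 [0, 0, 1]])
Torg2 = T01 * T12

# the point of interest in world coordinates
Torg2x = Torg2 * sp.Matrix([x[0], x[1], 1])
# Jacobian of that point's (x, y) position with respect to the joints
J2 = Torg2x[:2, :].jacobian(q)
J2_func = sp.lambdify(q + x, J2)

# Jacobian for a point 0.5 along link 2's x-axis, at q = [.3, .5]
print(J2_func(.3, .5, .5, 0))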

We are getting closer.

NOTE: This parameterization can significantly increase the build time of the function; on my laptop it took about 4 hours. To decrease the build time you can try commenting out the simplify calls in the code, which might slow down run-time a bit but significantly drops the generation time.

Where is the closest point in that link’s reference frame?

A sneaky problem comes up when calculating the closest point of each arm segment to the obstacle: we’ve calculated the closest point of each arm segment in the origin’s frame of reference, but we need it relative to each link’s own frame of reference. Fortunately, all we have to do is calculate the inverse transform for the link of interest. For example, the inverse of ^\textrm{org}\textbf{T}_3 transforms a point from the origin’s frame of reference to the reference frame of the 3rd joint.

I go over how to calculate the inverse transform at the end of my post on forward transformation matrices, but to save you from having to go back and look through that, here’s the code to do it:

Torg3 = self._calc_T(name="3")
rotation_inv = Torg3[:3, :3].T
translation_inv = -rotation_inv * Torg3[:3, 3]
Torg3_inv = rotation_inv.row_join(translation_inv).col_join(
    sp.Matrix([[0, 0, 0, 1]]))
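If you want to convince yourself the inverse is correct, a quick (if symbolically slow) check is that composing it with the original transform gives back the identity:

# should simplify down to the 4x4 identity matrix
print(sp.simplify(Torg3_inv * Torg3))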

And now to find the closest point in the coordinates of reference frame 3, we append a 1 to the point (to make it homogeneous) and apply the inverse transform:

x = np.dot(Torg3_inv, np.hstack([closest, [1]]))[:-1]

This x value is what we’re going to plug in as parameters to our J3_func above to find the Jacobian for the closest point on link 3.

Calculate the operational space inertia matrix for the closest point

OK. With the Jacobian for the point of interest we are now able to calculate the operational space inertia matrix. This code I’ve explicitly worked through before, and I’ll show it in the full code below, so I won’t go over it again here.
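For reference, the relationship being used is

\textbf{M}_{psp} = \left( \textbf{J}_{psp} \textbf{M}^{-1} \textbf{J}^T_{psp} \right)^{-1},

where \textbf{M} is the joint space inertia matrix. In the code below this inverse is taken through an SVD, with any singular values below a small threshold zeroed out so that the pseudo-inverse doesn’t blow up near singular configurations.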

The whole implementation

You can run an example of all this code controlling the UR5 arm to avoid obstacles in VREP using this code up on my GitHub. The specific code added to implement obstacle avoidance looks like this:

# find the closest point of each link to the obstacle
for ii in range(robot_config.num_joints):
    # get the start and end-points of the arm segment
    p1 = robot_config.Tx('joint%i' % ii, q=q)
    if ii == robot_config.num_joints - 1:
        p2 = robot_config.Tx('EE', q=q)
    else:
        p2 = robot_config.Tx('joint%i' % (ii + 1), q=q)

    # calculate minimum distance from arm segment to obstacle
    # the vector of our line
    vec_line = p2 - p1
    # the vector from the first line point to the obstacle centre
    vec_ob_line = v - p1
    # calculate the projection normalized by length of arm segment
    projection = (np.dot(vec_ob_line, vec_line) /
                  np.sum((vec_line)**2))
    if projection < 0:
        # then closest point is the start of the segment
        closest = p1
    elif projection > 1:
        # then closest point is the end of the segment
        closest = p2
    else:
        closest = p1 + projection * vec_line
    # calculate distance from the obstacle centre to the closest point
    dist = np.sqrt(np.sum((v - closest)**2))
    # account for size of obstacle
    rho = dist - obstacle_radius

    if rho < threshold:
        eta = .02
        drhodx = (v - closest) / rho
        Fpsp = (eta * (1.0/rho - 1.0/threshold) *
                1.0/rho**2 * drhodx)

        # get offset of closest point from link's reference frame
        T_inv = robot_config.T_inv('link%i' % ii, q=q)
        m = np.dot(T_inv, np.hstack([closest, [1]]))[:-1]
        # calculate the Jacobian for this point
        Jpsp = robot_config.J('link%i' % ii, x=m, q=q)[:3]

        # calculate the inertia matrix for the
        # point subjected to the potential space
        Mxpsp_inv = np.dot(Jpsp,
                        np.dot(np.linalg.pinv(Mq), Jpsp.T))
        svd_u, svd_s, svd_v = np.linalg.svd(Mxpsp_inv)
        # cut off singular values that could cause problems
        singularity_thresh = .00025
        for jj in range(len(svd_s)):
            svd_s[jj] = 0 if svd_s[jj] < singularity_thresh else \
                1./float(svd_s[jj])
        # numpy returns U,S,V.T, so have to transpose both here
        Mxpsp = np.dot(svd_v.T, np.dot(np.diag(svd_s), svd_u.T))

        u_psp = -np.dot(Jpsp.T, np.dot(Mxpsp, Fpsp))
        if rho < .01:
            u = u_psp
        else:
            u += u_psp

The one thing in this code I didn’t talk about above is that if rho < .01 I set u = u_psp instead of just adding u_psp to u. This acts as a fail-safe takeover of the control signal, saying “if we are about to hit the obstacle, forget about everything else and get out of the way!”

Results

And that’s it! I really enjoy how this looks when it’s running; it’s a really effective algorithm. Let’s look at some samples of it in action.

First, in a 2D environment, where it’s easy to move the obstacle around and see how the arm changes its behaviour in response to the new obstacle position. The red circle is the target and the blue circle is the obstacle:
[Animation: 2D arm avoiding the obstacle while reaching to the target]

And in 3D in VREP, running the code example that I’ve put up on my GitHub implementing this. The example of it running without obstacle avoidance code is on the left, and running with obstacle avoidance is on the right. It’s kind of hard to see but on the left the robot moves through the far side of the obstacle (the gold sphere) on its way to the target (the red sphere):

And one last example, the arm dodging a moving obstacle on its way to the target.

[Animation: the UR5 in VREP dodging a moving obstacle on its way to the target]

The implementation is a ton of fun to play around with. It’s a really nifty algorithm that works quite well, and I haven’t found many alternatives in papers that don’t go into path planning (if you know of some and can share, that’d be great!). This post was a bit of a journey, but hopefully you found it useful! I continue to find it impressive how many neat features like this come about once you have the operational space control framework in place.
