Dynamic movement primitives part 3: Rhythmic movements

So far we’ve looked at using DMPs for discrete movements, but as I briefly mentioned it’s also possible to use them to perform rhythmic movements. In this post we’re going to look at the implementation of rhythmic DMPs and see exactly how it’s done. It’s actually pretty straightforward, which is always nice. We’ll go through everything that has to change and then see a couple of different applications of rhythmic DMPs. Again, all the code for the DMP and arm control implementations can be found up on my GitHub.

Obtaining consistent and repeatable basis function activation

We’ll start with the canonical system. In the discrete case our canonical system (which drives the activation of the basis functions) decayed from 1 to 0 and then we were all done. In the rhythmic case we want our system to repeat indefinitely, so we need a reliable way of continuously activating the basis functions in the same order. One function that may come to mind that reliably repeats is the cosine function. But how to use the cosine function, exactly?

The idea is that we’ll lay out our basis functions in the range from 0 to 2\pi, and set the state of the canonical system to increase linearly forever. Then we’ll use the difference between the canonical system state and each of the center points as the value we pass in to our cosine function. Because the cosine function repeats every 2\pi we’ll get a nice repeating spread of basis function activations. We’ll then throw these values through our Gaussian equation (with the gain and bias terms) and we’ll be done! Here’s the equation:

\psi = \textrm{exp}\left(h \left(\textrm{cos}(x - c) - 1\right)\right),
where x is the state of the canonical system, c is the basis function center point, and h is a gain term controlling the variance of the Gaussian. And here’s a picture of the activations of three basis functions with centers spread out evenly between 0 and 2\pi:

[Figure: gauss_rhythmic]

Here’s a picture of the basis function activations on a longer time scale:

[Figure: gauss_rhythmic_longer]
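To make this concrete, here’s a minimal sketch in NumPy of how these activations can be computed (the number of basis functions, the gain values, and the exact center spacing are arbitrary choices for illustration, not necessarily what the code in the repo uses):

import numpy as np

n_bfs = 3  # number of basis functions, as in the plots above
# centers spread evenly over one period (2*pi wraps back around to 0, so drop it)
c = np.linspace(0, 2 * np.pi, n_bfs + 1)[:-1]
h = np.ones(n_bfs) * 2.0  # gain term controlling the width of each basis function

def gen_psi(x, c, h):
    """Rhythmic basis function activations for canonical system state x."""
    return np.exp(h * (np.cos(x - c) - 1))

# evaluate the activations over a few periods of the canonical system
x_track = np.arange(0, 3 * 2 * np.pi, 0.01)
psi_track = np.array([gen_psi(x, c, h) for x in x_track])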

As long as our canonical system increases at a steady pace the basis functions will continue to activate in a reliable and repeatable way. As mentioned, to get the canonical system to increase at a steady pace we simply give it linear dynamics:

\dot{x} = 1
Because the rhythmic canonical system dynamics are linear, the placement of the basis function center points is also a lot easier than in the discrete case: they can simply be spaced evenly between 0 and 2\pi.
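As a quick sketch (the time step and run time here are arbitrary), stepping this system forward is just Euler integration of a constant derivative:

import numpy as np

dt = 0.01  # simulation time step
x = 0.0  # canonical system state
x_track = []
for _ in range(int(2 * 2 * np.pi / dt)):  # roll out for two periods
    x += 1.0 * dt  # Euler integration of dx/dt = 1
    x_track.append(x)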

Other differences between discrete and rhythmic

In the actual implementation of the rhythmic system there are a couple of other differences that you need to be aware of. The first is that there is no diminishing term in the rhythmic system. In the discrete case the forcing term is weighted by the canonical system state, which decays to zero, so the forcing term dies out and the system is guaranteed to converge to the goal. In the rhythmic case we want the forcing term to keep contributing for as long as the system runs, so that weighting is dropped.

The second is how to go about establishing the goal state when imitating a path. We’re going to assume that whatever path is passed in to us is a rhythmic pattern, and the goal state is going to be set as the center point of the desired trajectory. So for rhythmic systems the ‘goal’ is actually more like a center point for the trajectory to oscillate around.
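As an example, here’s a minimal sketch of one way that center point could be computed, assuming the desired path comes in as a NumPy array with one row per DMP:

import numpy as np

def gen_goal(y_des):
    """Set the 'goal' of each rhythmic DMP to the center of its desired path.
    y_des: np.array of shape (n_dmps, n_timesteps)
    """
    # midpoint between the minimum and maximum of each trajectory
    return 0.5 * (y_des.min(axis=1) + y_des.max(axis=1))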

And that’s pretty much it! If you look through the code that I have up for the rhythmic DMP subclass you’ll see that only a handful of functions needed to be redefined; the rest is the same! Well, that’s great, so let’s look at some applications.

Example

I was thinking of different examples that would be interesting to see with rhythmic DMPs, and after a little consideration the most obvious application is to a system that does something like walking. In the spinal cord of animals there are circuits known as central pattern generators (CPGs), which, when stimulated, have been shown to generate rhythmic, repeated movements (as described in Principles of rhythmic motor pattern generation and other papers). Let’s go ahead and build our own CPG here using DMPs. I don’t have a simulation of a set of legs, however, so instead we’ll look at getting a single 3-link arm to reproduce the pattern of behaviour you’d expect from a leg that is walking.

To do this, I’m going to need to specify the trajectories for the three joints of the arm, and I’ll be controlling the system in joint space using the generalized coordinates controller, rather than with the operational space controller used in the discrete DMP examples to control the end-effector position. I’m also going to need the kinematics of the leg joints during walking. After a quick search of the web I found this presentation from Simon Fraser University, which has the following images:

[Figure: leg_rhythms — leg joint kinematics during walking]

This is some very useful information! I just need to transfer it into a NumPy array that I can feed into the system, convert the angles from degrees to radians, and we should be good to go! To get the information out of the graph, use your favorite data stripper; I used http://arohatgi.info/WebPlotDigitizer/app/.
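Here’s a rough sketch of that step. The file name and column layout are made up, and I’m assuming a rhythmic DMP class with the same imitate_path / rollout interface as the discrete DMPs from the earlier posts (imported here assuming the pydmps package layout, which may differ from how the control repo is laid out):

import numpy as np
from pydmps.dmp_rhythmic import DMPs_rhythmic  # rhythmic DMP class (import path assumed)

# hypothetical file holding the digitized gait data: one column per joint
# (hip, knee, ankle), angles in degrees over one stride
gait_deg = np.genfromtxt('gait_angles.csv', delimiter=',')
gait_rad = gait_deg * np.pi / 180.0  # convert degrees to radians

# fit one rhythmic DMP per joint, then roll out the learned joint angle pattern
dmps = DMPs_rhythmic(n_dmps=3, n_bfs=100)
dmps.imitate_path(y_des=gait_rad.T)  # imitate_path expects one row per DMP
y_track, dy_track, ddy_track = dmps.rollout()
# y_track can then be passed as the desired joint angles to the generalized
# coordinates controller (after the sign/offset shuffling described below)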

I also had to do a bit of a shuffle with the trajectories provided to make the movement look right on the arm. The hip-to-shoulder joint maps over fine, but for the knee-to-elbow mapping the flexion and extension are reversed, and for the wrist-to-foot mapping the 0 angle of the foot is straight out. Once those have been accounted for, though, everything works great. Here is an animation of a single leg walking along; if you use your imagination you can just picture a happy-go-lucky torso attached to this leg, whistling while he gets his very human-like gait on:

[Animation: leg_walking — the 3-link arm reproducing the walking gait]

And there you go! It’s neat how straightforward that was once you have the data for the system you want to model, and it shows the power of DMPs for quickly and easily coordinating multiple degrees of freedom. Something else worth pointing out once again is that this is a force-based controller: we were able to go from a kinematic description of the desired movement to a reliable system generating torques. It’s another good example of the versatility of dynamic movement primitives.

You can find the code for all of this up on my GitHub, in the control repo on the walking_demo branch. To run the code:

python run.py

If you have any questions or trouble running the code please contact me.


5 thoughts on “Dynamic movement primitives part 3: Rhythmic movements”

  1. Ron DeSpain says:

    Hello Travis
    I just viewed your Maplesoft brain model presentation and I would love to share your work with my LinkedIn group “ImageJ Users”. I’m the guy who asked about Java. I’d love to talk to Goosens regarding MapleSim as well, but I didn’t see a link for him. Please contact me at ron_despain@hotmail.com. I started my LinkedIn group to promote and share research on intelligent software as we (humans) grope our way toward self-aware machines. I would be particularly interested in the nonlinear adaptive learning/control you discussed.
    Very nice presentation.
    Ron DeSpain

  2. Arne says:

    Hi,
    I am thinking about using DMPs to teach the NAO robot some simple movements for my master’s thesis. However, I do not fully understand DMPs yet, so I would like to know why you switched from task space to joint space for the walking example. Was it just out of convenience, or is there a ‘real’ reason for not using task space? Thanks🙂

    • travisdewolf says:

      Hello!
      Oh neat, that sounds really interesting, please keep me posted on progress and results! For the walking example I switched to joint space because that’s where the trajectories I wanted to follow were defined in the experimental data. I might have been able to get something similar by tracking the ‘toe’ position in task space, with a secondary controller trying to keep the joints near default leg angles, but the paths I needed were already in joint space so I just used that! Also, I think that for a leg and walking it makes more sense for paths to be planned in terms of joint angles rather than toe position.

  3. danangcity45 says:

    Hello Travis,

    I found your post very useful. I just ran your control code with “python run.py” but I get an error like:

    “/Users/LTPHAN/Documents/Coding/DynamicMovementPrimitives/control-walking_demo/Control/Arms/three_link/py3LinkArm.so: unknown file type, first eight bytes: 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x00”

    It seems like it cannot read the py3LinkArm.so file. How can I solve this error?

    Thank you so much for this nice post!

    • travisdewolf says:

      Hello! Glad to hear you’ve found it useful!

      Hmm, which version of Python and operating system are you using? I’ve not encountered this error before. I can try to help debug it though! The command you ran was just python run.py? Have you tried going into the arms/three_link folder and running python setup.py build_ext -i?
