
Robot control part 3: Accounting for mass and gravity

In the exciting previous post we looked at how to go about generating a Jacobian matrix, which we could use to transform both from joint angle velocities to end-effector velocities, and from desired end-effector forces to joint angle torques. I briefly mentioned right at the end that using just this force transformation to build your control signal was only appropriate for very simple systems that didn’t have to account for things like arm-link mass or gravity.

In general, however, mass and gravity must be accounted for and cancelled out. The full dynamics of a robot arm are

\textbf{M}(\textbf{q}) \ddot{\textbf{q}} = (\textbf{u} - \textbf{C}(\textbf{q}, \dot{\textbf{q}}) - \textbf{g}(\textbf{q})) ,

where \ddot{\textbf{q}} is joint angle acceleration, \textbf{u} is the control signal (specifying torque), \textbf{C}(\textbf{q}, \dot{\textbf{q}}) is a function describing the Coriolis and centrifugal effects, \textbf{g}(\textbf{q}) is the effect of gravity in joint space, and \textbf{M} is the mass matrix of the system in joint space.

There are a lot of terms involved in the system acceleration, so while the Jacobian can be used to transform forces between coordinate systems it is clear that just setting the control signal \textbf{u} = \textbf{J}_{ee}^T (\textbf{q})\textbf{F}_\textbf{x} is not sufficient, because a lot of the dynamics affecting acceleration aren’t accounted for. In this section an effective PD controller operating in joint space will be developed that will allow for more precise control by cancelling out unwanted acceleration terms. To do this the effects of inertia and gravity need to be calculated.

Accounting for inertia

The fact that systems have mass is a pain in our controller’s side because it introduces inertia into our system, making control of how the system will move at any given point in time more difficult. Mass can be thought of as an object’s unwillingness to respond to applied forces. The heavier something is, the more resistant it is to acceleration, and the force required to move a system along a desired trajectory depends on both the object’s mass and its current acceleration.

To effectively control a system, the system inertia needs to be calculated so that it can be included in the control signal and cancelled out.


3d_2_link

Consider the robot arm above, operating in the (x,z) plane, with the y axis extending into the picture, where the yellow circles represent each link’s centre-of-mass (COM). The position of each link’s COM is defined relative to that link’s reference frame, and the goal is to figure out how much each link’s mass will affect the system dynamics.

The first step is to transform the representation of each of the COM from Cartesian coordinates in the reference frame of their respective arm segments into terms of joint angles, such that the Jacobian for each COM can be calculated.

Let the COM positions relative to each segment’s coordinate frame be

\textrm{com}_0 = \left[ \begin{array}{c} \frac{1}{2}cos(q_0) \\ 0 \\ \frac{1}{2}sin(q_0) \end{array} \right], \;\;\;\; \textrm{com}_1 = \left[ \begin{array}{c} \frac{1}{4}cos(q_1) \\ 0 \\ \frac{1}{4}sin(q_1) \end{array} \right].

The first segment’s COM is already in base coordinates (since the first link and the base share the same coordinate frame), so all that is required is the position of the second link’s COM in the base reference frame, which can be done with the transformation matrix

^1_0\textbf{T} = \left[ \begin{array}{cccc} cos(q_1) & 0 & -sin(q_1) & L_0 cos(q_0) \\ 0 & 1 & 0 & 0 \\ sin(q_1) & 0 & cos(q_1) & L_0 sin(q_0) \\ 0 & 0 & 0 & 1 \end{array} \right].

Using ^1_0\textbf{T} to transform the \textrm{com}_1 gives

^1_0\textbf{T} \; \textrm{com}_1 = \left[ \begin{array}{cccc} cos(q_1) & 0 & -sin(q_1) & L_0 cos(q_0) \\ 0 & 1 & 0 & 0 \\ sin(q_1) & 0 & cos(q_1) & L_0 sin(q_0) \\ 0 & 0 & 0 & 1 \end{array} \right] \; \; \left[ \begin{array}{c} \frac{1}{4}cos(q_1) \\ 0 \\ \frac{1}{4}sin(q_1) \\ 1 \end{array} \right]

^1_0\textbf{T} \; \textrm{com}_1 = \left[ \begin{array}{c} L_0 cos(q_0) + \frac{1}{4}cos(q_0 + q_1) \\ 0 \\ L_0 sin(q_0) + \frac{1}{4} sin(q_0 + q_1) \\ 1 \end{array} \right].

To see the full computation worked out explicitly please see my previous robot control post.

Now that we have the COM positions in terms of joint angles, we can find the Jacobians for each point through our Jacobian equation:

\textbf{J} = \frac{\partial \textbf{x}}{\partial \textbf{q}}.

Using this for each link gives us:

\textbf{J}_0 = \left[ \begin{array}{cc} -\frac{1}{2}sin(q_0) & 0 \\ 0 & 0 \\ \frac{1}{2} cos(q_0) & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 0 \end{array} \right]
\textbf{J}_1 = \left[ \begin{array}{cc} -L_0 sin(q_0) -\frac{1}{4}sin(q_0 + q_1) & -\frac{1}{4} sin(q_0 + q_1) \\ 0 & 0 \\ L_0 cos(q_0) + \frac{1}{4}cos(q_0 + q_1) & \frac{1}{4} cos(q_0 + q_1) \\ 0 & 0 \\ 1 & 1 \\ 0 & 0 \end{array} \right].

Kinetic energy

The total energy of a system can be calculated as a sum of the energy introduced from each source. The Jacobians just derived will be used to calculate the kinetic energy each link generates during motion. Each link’s kinetic energy will be calculated and summed to get the total energy introduced into the system by the mass and configuration of each link.

Kinetic energy (KE) is one half of mass times velocity squared, which in generalized matrix form is written:

\textrm{KE} = \frac{1}{2} \; \dot{\textbf{x}}^T \textbf{M}_\textbf{x}(\textbf{q}) \; \dot{\textbf{x}},

where \textbf{M}_\textbf{x} is the mass matrix of the system, with the subscript \textbf{x} denoting that it is defined in Cartesian space, and \dot{\textbf{x}} is a velocity vector of the form

\dot{\textbf{x}} = \left[ \begin{array}{c} \dot{x} \\ \dot{y} \\ \dot{z} \\ \dot{\omega_x} \\ \dot{\omega_y} \\ \dot{\omega_z} \end{array} \right],

and the mass matrix is structured

\textbf{M}_{\textbf{x}_i} (\textbf{q})= \left[ \begin{array}{cccccc} m_i & 0 & 0 & 0 & 0 & 0 \\ 0 & m_i & 0 & 0 & 0 & 0 \\ 0 & 0 & m_i & 0 & 0 & 0 \\ 0 & 0 & 0 & I_{xx} & I_{xy} & I_{xz} \\ 0 & 0 & 0 & I_{yx} & I_{yy} & I_{yz} \\ 0 & 0 & 0 & I_{zx} & I_{zy} & I_{zz} \end{array} \right],

where m_i is the mass of COM i, and the I_{ij} terms are the moments of inertia, which define the object’s resistance to change in angular velocity about the axes, the same way that the mass element defines the object’s resistance to changes in linear velocity.

As mentioned above, the mass matrix for the COM of each link is defined in Cartesian coordinates in its respective arm segment’s reference frame. The effects of mass need to be found in joint angle space, however, because that is where the controller operates. Looking at the summation of the KE introduced by each COM:

\textrm{KE} = \frac{1}{2} \; \Sigma_{i=0}^n ( \dot{\textbf{x}}_i^T \textbf{M}_{\textbf{x}_i}(\textbf{q}) \; \dot{\textbf{x}}_i),

and substituting in \dot{\textbf{x}} = \textbf{J} \; \dot{\textbf{q}},

\textrm{KE} = \frac{1}{2} \; \Sigma_{i=0}^n (\dot{\textbf{q}}^T \; \textbf{J}_i^T \textbf{M}_{\textbf{x}_i}(\textbf{q})\textbf{J}_i \; \dot{\textbf{q}}),

and moving the \dot{\textbf{q}} terms outside the summation,

\textrm{KE} = \frac{1}{2} \; \dot{\textbf{q}}^T \; \Sigma_{i=0}^n (\textbf{J}_i^T \textbf{M}_{\textbf{x}_i}(\textbf{q}) \textbf{J}_i) \; \dot{\textbf{q}}.

Defining

\textbf{M}(\textbf{q}) = \Sigma_{i=0}^n \; \textbf{J}_i^T(\textbf{q}) \textbf{M}_{\textbf{x}_i}(\textbf{q}) \; \textbf{J}_i(\textbf{q}),

gives

\textrm{KE} = \frac{1}{2} \; \dot{\textbf{q}}^T \; \textbf{M}(\textbf{q}) \; \dot{\textbf{q}},

which is the equation for calculating kinetic energy in joint space. Thus, \textbf{M}(\textbf{q}) denotes the inertia matrix in joint space.
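Putting this into code, here’s a minimal NumPy sketch of \textbf{M}(\textbf{q}) for the two-link arm above. The function names are my own, and the masses and moments of inertia are arbitrary placeholder values (getting accurate values for these is its own problem, as discussed below); the Jacobians are the two derived earlier.

import numpy as np

def calc_J_coms(q, L0=1.0):
    """The two COM Jacobians derived above,
    rows ordered [dx, dy, dz, wx, wy, wz]."""
    q0, q1 = q
    J0 = np.zeros((6, 2))
    J0[0, 0] = -.5 * np.sin(q0)
    J0[2, 0] = .5 * np.cos(q0)
    J0[4, 0] = 1.
    J1 = np.zeros((6, 2))
    J1[0] = [-L0 * np.sin(q0) - .25 * np.sin(q0 + q1),
             -.25 * np.sin(q0 + q1)]
    J1[2] = [L0 * np.cos(q0) + .25 * np.cos(q0 + q1),
             .25 * np.cos(q0 + q1)]
    J1[4] = [1., 1.]
    return [J0, J1]

def calc_Mq(q, masses=[1., 1.], inertias=[.02, .02]):
    """M(q) = sum_i J_i^T M_xi J_i. Masses and moments of
    inertia are placeholders (the same moment is used about
    each axis for simplicity)."""
    Mq = np.zeros((2, 2))
    for J, m, I in zip(calc_J_coms(q), masses, inertias):
        Mx = np.diag([m, m, m, I, I, I])
        Mq += np.dot(J.T, np.dot(Mx, J))
    return Mq

print(calc_Mq([np.pi / 4., np.pi / 4.]))

With this, \textbf{M}(\textbf{q}) can be re-evaluated from the current joint angles at every timestep.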

Now that we’ve successfully calculated the mass matrix of the system in joint space, we can incorporate it into our control signal and cancel out its effects on the system dynamics! On to the next problem!

Accounting for gravity

With the forces of inertia accounted for, we can now address the problem of gravity. To compensate for gravity the concept of conservation of energy (i.e. the work done by gravity is the same in all coordinate systems) will once again be pulled out. The controller operates by applying torque on joints, so it is necessary to be able to calculate the effect of gravity in joint space to cancel it out.

While the effect of gravity in joint space isn’t obvious, it is quite easily defined in Cartesian coordinates in the base frame of reference. Here, the work done by gravity is simply the force of gravity applied over the distance each link’s centre of mass moves. With the force of gravity in Cartesian space being the mass of the object multiplied by -9.8m/s^2 along the z axis, the work done by gravity per unit time is written:

\textbf{W}_g = \Sigma^n_{i=0} (\textbf{F}_{g_i}^T \dot{\textbf{x}}_i),

where \textbf{F}_{g_i} is the force of gravity on the ith arm segment. Because of the conservation of energy, the rate of work is the same when calculated in joint space, so setting the joint-space expression for work equal to the above gives:

\textbf{F}_\textbf{q}^T \dot{\textbf{q}} = \Sigma^n_{i=0} (\textbf{F}_{g_i}^T \dot{\textbf{x}}_i),

and then substitute in using \dot{\textbf{x}}_i = \textbf{J}_i(\textbf{q}) \; \dot{\textbf{q}},

\textbf{F}_\textbf{q}^T \dot{\textbf{q}} = \Sigma^n_{i=0} (\textbf{F}_{g_i}^T \textbf{J}_i(\textbf{q}) \; \dot{\textbf{q}}),

and cancelling out the \dot{\textbf{q}} terms on both sides,

\textbf{F}_\textbf{q}^T = \Sigma^n_{i=0} (\textbf{F}_{g_i}^T \textbf{J}_i(\textbf{q})),

\textbf{F}_\textbf{q} = \Sigma^n_{i=0} (\textbf{J}_i^T(\textbf{q}) \textbf{F}_{g_i}) = \textbf{g}(\textbf{q}),

which says that to find the effect of gravity in joint space, simply multiply the force of gravity acting on each link’s COM by the transpose of that link’s Jacobian, and sum over all the links. This summation gives the total effect of gravity on the system.
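Continuing the sketch from the inertia section (reusing calc_J_coms, with the same made-up placeholder masses), the gravity term might look like:

def calc_gq(q, masses=[1., 1.]):
    """g(q) = sum_i J_i^T F_gi, where gravity acts on each COM
    along the -z axis (masses are placeholder values again)."""
    gq = np.zeros(2)
    for J, m in zip(calc_J_coms(q), masses):
        Fg = np.array([0., 0., -9.8 * m, 0., 0., 0.])
        gq += np.dot(J.T, Fg)
    return gq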

Making a PD controller in joint space

We are now able to account for the energy in the system caused by inertia and gravity, great! Let’s use this to build a simple PD controller in joint space. Control should be very straightforward, because once we cancel out the effects of gravity and inertia we can almost pretend that the system behaves linearly. This means that we can also treat control of each of the joints independently, since their movements no longer affect one another. So in our control system we’re actually going to have a PD controller for each joint.

The above-mentioned nonlinearity that’s left in the system dynamics is due to the Coriolis and centrifugal effects. These can be accounted for, but they require a highly accurate model of the moments of inertia. If the moments are incorrect then the controller can actually introduce instability into the system, so it’s better if we just don’t address them here.

Rewriting the system dynamics presented at the very top, in terms of acceleration gives

\ddot{\textbf{q}} = \textbf{M}^{-1}(\textbf{q}) (\textbf{u} - \textbf{C}(\textbf{q}, \dot{\textbf{q}}) - \textbf{g}(\textbf{q})).

Ideally, the control signal would be constructed

\textbf{u} = (\textbf{M}(\textbf{q}) \; \ddot{\textbf{q}}_\textrm{des} + \textbf{C}(\textbf{q}, \dot{\textbf{q}}) + \textbf{g}(\textbf{q})),

where \ddot{\textbf{q}}_\textrm{des} is the desired acceleration of the system. This would result in system acceleration

\ddot{\textbf{q}} = \textbf{M}^{-1}(\textbf{q})((\textbf{M}(\textbf{q}) \; \ddot{\textbf{q}}_\textrm{des} + \textbf{C}(\textbf{q}, \dot{\textbf{q}}) + \textbf{g}(\textbf{q})) - \textbf{C}(\textbf{q}, \dot{\textbf{q}}) - \textbf{g}(\textbf{q})),

\ddot{\textbf{q}} = \textbf{M}^{-1}(\textbf{q}) \textbf{M}(\textbf{q}) \; \ddot{\textbf{q}}_\textrm{des} ,

\ddot{\textbf{q}} = \ddot{\textbf{q}}_\textrm{des},

which would be ideal. As mentioned, because the Coriolis and centrifugal effects are tricky to account for we’ll leave them out, so instead the control signal is

\textbf{u} = (\textbf{M}(\textbf{q}) \; \ddot{\textbf{q}}_\textrm{des} + \textbf{g}(\textbf{q})).

Using a standard PD control formula to generate the desired acceleration:

\ddot{\textbf{q}}_\textrm{des} = k_p \; (\textbf{q}_{\textrm{des}} - \textbf{q}) + k_v \; (\dot{\textbf{q}}_{\textrm{des}} - \dot{\textbf{q}}),

where k_p and k_v are our gain values, and the control signal has been fully defined:

\textbf{u} = (\textbf{M}(\textbf{q}) \; (k_p \; (\textbf{q}_{\textrm{des}} - \textbf{q}) + k_v \; (\dot{\textbf{q}}_{\textrm{des}} - \dot{\textbf{q}})) + \textbf{g}(\textbf{q})),

and we’ve successfully built an effective PD controller in joint space!
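Pulled together in code, using the calc_Mq and calc_gq sketches from above (the gain values here are arbitrary; on a real system they’d need tuning):

def control_signal(q, dq, q_des, dq_des, kp=10., kv=3.):
    """u = M(q) (kp (q_des - q) + kv (dq_des - dq)) + g(q)."""
    q, dq = np.asarray(q), np.asarray(dq)
    ddq_des = (kp * (np.asarray(q_des) - q) +
               kv * (np.asarray(dq_des) - dq))
    return np.dot(calc_Mq(q), ddq_des) + calc_gq(q)

At each timestep the current \textbf{q} and \dot{\textbf{q}} are read off the system, and the resulting \textbf{u} is the vector of torques sent to the motors.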

Conclusions

Here we looked at building a PD controller that operates in the joint space of a robotic arm and cancels out the effects of inertia and gravity. By cancelling out the effects of inertia, we can treat control of each of the joints independently, effectively orthogonalizing their control. This makes PD control super easy: we just set up a simple controller for each joint. Another neat thing is that all of the required calculations can be performed with algorithms of linear complexity, so it’s not a problem to do all of this super fast.

One of the finer points was that we ignored the Coriolis and centrifugal effects on the robot’s dynamics. This is because the moments of inertia in the mass matrix model are notoriously hard to capture accurately on actual robots. Often you go based off of a CAD model of your robot and then have to do some fine-tuning by hand. So these effects will be unaccounted for in our control signal, but most of the time, as long as you have a very short feedback loop, you’ll be fine.

I am really enjoying working through this, as things build on each other so well here and we’re starting to be able to do some really interesting things with the relatively straightforward transformation matrices and Jacobians that we learned how to build in the previous posts. This was for a very simple robot, but excitingly the next step after this is moving on to operational space control! At last. From there, we’ll go on to look at more complex robotic situations where things like configuration redundancy are introduced and it’s not quite so straightforward.


Robot control part 2: Jacobians, velocity, and force

Jacobian matrices are a super useful tool, and heavily used throughout robotics and control theory. Basically, a Jacobian defines the dynamic relationship between two different representations of a system. For example, if we have a 2-link robotic arm, there are two obvious ways to describe its current position: 1) the end-effector position and orientation (which we will denote \textbf{x}), and 2) as the set of joint angles (which we will denote \textbf{q}). The Jacobian for this system relates how movement of the elements of \textbf{q} causes movement of the elements of \textbf{x}. You can think of a Jacobian as a transform matrix for velocity.

Formally, a Jacobian is a set of partial derivatives:

\textbf{J} = \frac{\partial \textbf{x}}{\partial \textbf{q}}.

With a bit of manipulation we can get a neat result:

\textbf{J} = \frac{\partial \textbf{x}}{\partial t} \; \frac{\partial t}{\partial \textbf{q}} \rightarrow \frac{\partial \textbf{x}}{\partial t} = \textbf{J} \frac{\partial \textbf{q}}{\partial t},

or

\dot{\textbf{x}} = \textbf{J} \; \dot{\textbf{q}},

where \dot{\textbf{x}} and \dot{\textbf{q}} represent the time derivatives of \textbf{x} and \textbf{q}. This tells us that the end-effector velocity is equal to the Jacobian, \textbf{J}, multiplied by the joint angle velocity.

Why is this important? Well, this goes back to our desire to control in operational (or task) space. We’re interested in planning a trajectory in a different space than the one that we can control directly. In our robot arm, control is effected through a set of motors that apply torque to the joint angles, BUT what we’d like is to plan our trajectory in terms of end-effector position (and possibly orientation), generating control signals in terms of forces to apply in (x,y,z) space. Jacobians give us a direct way to calculate what the control signal is in the space that we control (torques), given a control signal in one we don’t (end-effector forces). The above equivalence is a first step along the path to operational space control. As just mentioned, though, what we’re really interested in isn’t relating velocities, but forces. How can we do this?

Energy equivalence and Jacobians

Conservation of energy is a property of all physical systems where the amount of energy expended is the same no matter how the system in question is being represented. The planar two-link robot arm shown below will be used for illustration.

RR robot arm

Let the joint angle positions be denoted \textbf{q} = [q_0, q_1]^T, and end-effector position be denoted \textbf{x} = [x, y, 0]^T.

Work is the application of force over a distance

\textbf{W} = \int \textbf{F}^T \textbf{v} \; dt,

where \textbf{W} is work, \textbf{F} is force, and \textbf{v} is velocity.

Power is the rate at which work is performed

\textbf{P} = \frac{\textbf{W}}{t},

where \textbf{P} is power.
Substituting the equation for work into the equation for power (for a constant force \textbf{F} applied over a displacement \textbf{d}) gives:

\textbf{P} = \frac{\textbf{W}}{t} = \frac{\textbf{F}^T \textbf{d}}{t} = \textbf{F}^T \frac{\textbf{d}}{t} = \textbf{F}^T\textbf{v}.

Because of energy equivalence, work is performed at the same rate regardless of the characterization of the system. Rewriting this in terms of end-effector space gives:

\textbf{P} = \textbf{F}_\textbf{x}^T \dot{\textbf{x}},

where \textbf{F}_\textbf{x} is the force applied to the hand, and \dot{\textbf{x}} is the velocity of the hand. Rewriting the above in terms of joint-space gives:

\textbf{P} = \textbf{F}_\textbf{q}^T \dot{\textbf{q}},

where \textbf{F}_\textbf{q} is the torque applied to the joints, and \dot{\textbf{q}} is the angular velocity of the joints. Setting these two equations (in end-effector and joint space) equal to each other and substituting in our equation for the Jacobian gives:

\textbf{F}_{q_{hand}}^T \dot{\textbf{q}} = \textbf{F}_\textbf{x}^T \dot{\textbf{x}},

\textbf{F}_{q_{hand}}^T \dot{\textbf{q}} = \textbf{F}_\textbf{x}^T \textbf{J}_{ee}(\textbf{q}) \; \dot{\textbf{q}},

\textbf{F}_{q_{hand}}^T = \textbf{F}_\textbf{x}^T \textbf{J}_{ee}(\textbf{q}),

\textbf{F}_{q_{hand}} = \textbf{J}_{ee}^T(\textbf{q}) \textbf{F}_\textbf{x},

where \textbf{J}_{ee}(\textbf{q}) is the Jacobian for the end-effector of the robot, and \textbf{F}_{q_{hand}} represents the forces in joint-space that affect movement of the hand. This says that not only does the Jacobian relate velocity from one state-space representation to another, it can also be used to calculate what the forces in joint space should be to effect a desired set of forces in end-effector space.
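As a quick numerical sanity check of this result, a few NumPy lines with arbitrary made-up values confirm that the power computed in either space is identical:

import numpy as np

J = np.array([[.5, -.2],
              [1., .3]])        # arbitrary example Jacobian
dq = np.array([.1, -.4])        # arbitrary joint velocities
Fx = np.array([1., 2.])         # arbitrary end-effector forces

dx = np.dot(J, dq)              # end-effector velocities
Fq = np.dot(J.T, Fx)            # joint torques

print(np.dot(Fx, dx))           # power in end-effector space: 0.09
print(np.dot(Fq, dq))           # power in joint space: 0.09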

Building the Jacobian

First, we need to define the relationship between the (x,y,z) position of the end-effector and the robot’s joint angles, (q_0, q_1). However will we do it? Well, we know the distances from the shoulder to the elbow, and elbow to the wrist, as well as the joint angles, and we’re interested in finding out where the end-effector is relative to a base coordinate frame…OH MAYBE we should use those forward transformation matrices from the previous post. Let’s do it!

The forward transformation matrix

Recall that transformation matrices allow a given point to be transformed between different reference frames. In this case, the position of the end-effector relative to the second joint of the robot arm is known, but where it is relative to the base reference frame (the first joint reference frame in this case) is of interest. This means that only one transformation matrix is needed, transforming from the reference frame attached to the second joint back to the base.

The rotation part of this matrix is straightforward to define, as in the previous post:

^1_0\textbf{R} = \left[ \begin{array}{ccc} cos(q_0) & -sin(q_0) & 0 \\ sin(q_0) & cos(q_0) & 0 \\ 0 & 0 & 1 \end{array} \right].

The translation part of the transformation matrix is a little different than before, because reference frame 1 changes as a function of the previous joint’s angle. From trigonometry, given a vector of length r and an angle q, the x position of the end point is defined r \; cos(q), and the y position is r \; sin(q). The arm is operating in the (x,y) plane, so the z position will always be 0.

Using this knowledge, the translation part of the transformation matrix is defined:

^1_0\textbf{D} = \left[ \begin{array}{c} L_0 cos(q_0) \\ L_0 sin(q_0) \\ 0 \end{array} \right].

Giving the forward transformation matrix:

^1_0\textbf{T} = \left[ \begin{array}{cc} ^1_0\textbf{R} & ^1_0\textbf{D} \\ \textbf{0} & \textbf{1} \end{array} \right] = \left[ \begin{array}{cccc} cos(q_0) & -sin(q_0) & 0 & L_0 cos(q_0) \\ sin(q_0) & cos(q_0) & 0 & L_0 sin(q_0)\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right],

which transforms a point from reference frame 1 (elbow joint) to reference frame 0 (shoulder joint / base).

The point of interest is the end-effector, which is defined in reference frame 1 as a function of the joint angle q_1 and the length of the second arm segment, L_1:

\textbf{x} = \left[ \begin{array}{c} L_1 cos(q_1) \\ L_1 sin(q_1) \\ 0 \\ 1 \end{array} \right].

To find the position of our end-effector in terms of the origin reference frame multiply the point \textbf{x} by the transformation ^1_0\textbf{T}:

^1_0\textbf{T} \; \textbf{x} = \left[ \begin{array}{cccc} cos(q_0) & -sin(q_0) & 0 & L_0 cos(q_0) \\ sin(q_0) & cos(q_0) & 0 & L_0 sin(q_0)\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right] \; \left[ \begin{array}{c} L_1 cos(q_1) \\ L_1 sin(q_1) \\ 0 \\ 1 \end{array} \right],

^1_0\textbf{T} \textbf{x} = \left[ \begin{array}{c} L_1 cos(q_0) cos(q_1) - L_1 sin(q_0) sin(q_1) + L_0 cos(q_0) \\ L_1 sin(q_0) cos(q_1) + L_1 cos(q_0) sin(q_1) + L_0 sin(q_0) \\ 0 \\ 1 \end{array} \right]

where, by pulling out the L_1 term and using the trig identities

cos(\alpha)cos(\beta) - sin(\alpha)sin(\beta) = cos(\alpha + \beta),

and

sin(\alpha)cos(\beta) + cos(\alpha)sin(\beta) = sin(\alpha + \beta),

the position of our end-effector can be rewritten:

\left[ \begin{array}{c} L_0 cos(q_0) + L_1 cos(q_0 + q_1) \\ L_0 sin(q_0) + L_1 sin(q_0 + q_1) \\ 0 \end{array} \right],

which is the position of the end-effector in terms of joint angles. As mentioned above, however, both the position of the end-effector and its orientation are needed; the rotation of the end-effector relative to the base frame must also be defined.

Accounting for orientation

Fortunately, defining orientation is simple, especially for systems with only revolute and prismatic joints (spherical joints will not be considered here). With prismatic joints, which are linear and move in a single plane, the rotation introduced is 0. With revolute joints, the rotation of the end-effector in each axis is simply a sum of rotations of each joint in their respective axes of rotation.

In the example case, the joints are rotating around the z axis, so the rotation part of our end-effector state is

\left[ \begin{array}{c} \omega_x \\ \omega_y \\ \omega_z \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \\ q_0 + q_1 \end{array} \right],

where \omega denotes angular rotation. If the first joint had been rotating in a different plane, e.g. in the (x, z) plane around the y axis instead, then the orientation would be

\left[ \begin{array}{c} \omega_x \\ \omega_y \\ \omega_z \end{array} \right] = \left[ \begin{array}{c} 0 \\ q_0 \\ q_1 \end{array} \right].

Partial differentiation

Once the position and orientation of the end-effector have been calculated, the partial derivatives of these equations need to be calculated with respect to the elements of \textbf{q}. For simplicity, the Jacobian will be broken up into two parts, \textbf{J}_v and \textbf{J}_\omega, representing the linear and angular velocity, respectively, of the end-effector.

The linear velocity part of our Jacobian is:

\textbf{J}_v(\textbf{q}) = \left[ \begin{array}{cc} \frac{\partial x}{\partial q_0} & \frac{\partial x}{\partial q_1} \\ \frac{\partial y}{\partial q_0} & \frac{\partial y}{\partial q_1} \\ \frac{\partial z}{\partial q_0} & \frac{\partial z}{\partial q_1} \end{array} \right] = \left[ \begin{array}{cc} -L_0 sin(q_0) - L_1 sin(q_0 + q_1) & - L_1 sin(q_0 + q_1) \\ L_0 cos(q_0) + L_1 cos(q_0 + q_1) & L_1 cos(q_0 + q_1) \\ 0 & 0 \end{array} \right].

The angular velocity part of our Jacobian is:

\textbf{J}_\omega(\textbf{q}) = \left[ \begin{array}{cc} \frac{\partial \omega_x}{\partial q_0} & \frac{\partial \omega_x}{\partial q_1} \\ \frac{\partial \omega_y}{\partial q_0} & \frac{\partial \omega_y}{\partial q_1} \\ \frac{\partial \omega_z}{\partial q_0} & \frac{\partial \omega_z}{\partial q_1} \end{array} \right] = \left[ \begin{array}{cc} 0 & 0 \\ 0 & 0 \\ 1 & 1 \end{array} \right].

The full Jacobian for the end-effector is then:

\textbf{J}_{ee}(\textbf{q}) = \left[ \begin{array}{c} \textbf{J}_v(\textbf{q}) \\ \textbf{J}_\omega(\textbf{q}) \end{array} \right] = \left[ \begin{array}{cc} -L_0 sin(q_0) - L_1 sin(q_0 + q_1) & - L_1 sin(q_0 + q_1) \\ L_0 cos(q_0) + L_1 cos(q_0 + q_1) & L_1 cos(q_0 + q_1) \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 1 & 1 \end{array} \right].
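As a sketch in code (the function name and default segment lengths are my own choices), the full end-effector Jacobian is a direct transcription of the above:

import numpy as np

def jacobian_ee(q, L=[1., 1.]):
    """The 6x2 end-effector Jacobian of the planar 2-link arm,
    rows ordered [dx, dy, dz, wx, wy, wz]."""
    q0, q1 = q
    L0, L1 = L
    return np.array([
        [-L0 * np.sin(q0) - L1 * np.sin(q0 + q1), -L1 * np.sin(q0 + q1)],
        [L0 * np.cos(q0) + L1 * np.cos(q0 + q1), L1 * np.cos(q0 + q1)],
        [0., 0.],
        [0., 0.],
        [0., 0.],
        [1., 1.]])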

Analyzing the Jacobian

Once the Jacobian is built, it can be analysed for insight about the relationship between \dot{\textbf{x}} and \dot{\textbf{q}}.

For example, there is a large block of zeros in the middle of the Jacobian defined above, along the row corresponding to linear velocity along the z axis, and the rows corresponding to the angular velocity around the x and y axes. This means that the z position, and rotations \omega_x and \omega_y are not controllable. This can be seen by going back to the first Jacobian equation:

\dot{\textbf{x}} = \textbf{J}_{ee}(\textbf{q})\;\dot{\textbf{q}}.

No matter what the values of \dot{\textbf{q}}, it is impossible to affect \omega_x, \omega_y, or z, because the corresponding rows during the above multiplication with the Jacobian are \textbf{0}. (These uncontrollable directions are sometimes loosely referred to as the Jacobian’s ‘null space’.) Because these variables can’t be controlled, they will be dropped from both \textbf{F}_\textbf{x} and \textbf{J}(\textbf{q}).

Looking at the variables that can be affected it can be seen that given any two of x, y, \omega_z the third can be calculated because the robot only has 2 degrees of freedom (the shoulder and elbow). This means that only two of the end-effector variables can actually be controlled. In the situation of controlling a robot arm, it is most useful to control the (x,y) coordinates, so \omega_z will be dropped from the force vector and Jacobian.

After removing the redundant term, the force vector representing the controllable end-effector forces is

\textbf{F}_\textbf{x} = \left[ \begin{array}{c}f_x \\ f_y\end{array} \right],

where f_x is force along the x axis, f_y is force along the y axis, and the Jacobian is written

\textbf{J}_{ee}(\textbf{q}) = \left[ \begin{array}{cc} -L_0 sin(q_0) - L_1 sin(q_0 + q_1) & - L_1 sin(q_0 + q_1) \\ L_0 cos(q_0) + L_1 cos(q_0 + q_1) & L_1 cos(q_0 + q_1) \end{array} \right].

If instead f_{\omega_z}, i.e. torque around the z axis, were chosen as a controlled force then the force vector and Jacobian would be (assuming force along the x axis was also chosen):

\textbf{F}_\textbf{x} = \left[ \begin{array}{c} f_x \\ f_{\omega_z}\end{array} \right],
\textbf{J}_{ee}(\textbf{q}) = \left[ \begin{array}{cc} -L_0 sin(q_0) - L_1 sin(q_0 + q_1) & - L_1 sin(q_0 + q_1) \\ 1 & 1 \end{array} \right].

But we’ll stick with control of the x and y forces instead, as it’s a little more straightforward.

Using the Jacobian

With our Jacobian, we can find out what end-effector linear and angular velocities different joint angle velocities will cause, and we can also transform desired (x,y) forces into (q_0, q_1) torques. Let’s do a couple of examples. Note that in the former case we’ll be using the full Jacobian, while in the latter case we can use the simplified Jacobian specified just above.

Example 1

Given known joint angle velocities with arm configuration

\textbf{q} = \left[ \begin{array}{c} \frac{\pi}{4} \\ \frac{3 \pi}{8} \end{array}\right] \;\;\;\; \dot{\textbf{q}} = \left[ \begin{array}{c} \frac{\pi}{10} \\ \frac{\pi}{10} \end{array} \right]

and arm segment lengths L_i = 1, the (x,y) velocities of the end-effector can be calculated by substituting in the system state at the current time into the equation for the Jacobian:

\dot{\textbf{x}} = \textbf{J}_{ee}(\textbf{q}) \; \dot{\textbf{q}},

\dot{\textbf{x}} = \left[ \begin{array}{cc} - sin(\frac{\pi}{4}) - sin(\frac{\pi}{4} + \frac{3\pi}{8}) & - sin(\frac{\pi}{4} + \frac{3\pi}{8}) \\ cos(\frac{\pi}{4}) + cos(\frac{\pi}{4} + \frac{3\pi}{8}) & cos(\frac{\pi}{4} + \frac{3\pi}{8}) \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 1 & 1 \end{array} \right] \; \left[ \begin{array}{c} \frac{\pi}{10} \\ \frac{\pi}{10} \end{array} \right],

\dot{\textbf{x}} = \left[ -0.8026, -0.01830, 0, 0, 0, \frac{\pi}{5} \right]^T.

And so the end-effector’s linear velocity is (-0.8026, -0.01830, 0)^T, and its angular velocity is (0, 0, \frac{\pi}{5})^T.

Example 2

Given the same system and configuration as in the previous example as well as a trajectory planned in (x,y) space, the goal is to calculate the torques required to get the end-effector to move as desired. The controlled variables will be forces along the x and y axes, and so the reduced Jacobian from the previous section will be used. Let the desired (x,y) forces be

\textbf{F}_\textbf{x} = \left[ \begin{array}{c} 1 \\ 1 \end{array}\right],

to calculate the corresponding joint torques, the desired end-effector forces and current system state parameters are substituted into the equation relating forces in end-effector and joint space:

\textbf{F}_\textbf{q} = \textbf{J}^T_{ee}(\textbf{q}) \textbf{F}_\textbf{x},

\textbf{F}_\textbf{q} = \left[ \begin{array}{cc} -sin(\frac{\pi}{4}) -sin(\frac{\pi}{4} + \frac{3\pi}{8}) & cos(\frac{\pi}{4}) + cos(\frac{\pi}{4} + \frac{3\pi}{8}) \\ -sin(\frac{\pi}{4} + \frac{3\pi}{8}) & cos(\frac{\pi}{4} + \frac{3\pi}{8}) \end{array} \right] \left[ \begin{array}{c} 1 \\ 1 \end{array} \right],

\textbf{F}_\textbf{q} = \left[ \begin{array}{c} -1.3066 \\ -1.3066 \end{array}\right].

So given the current configuration to get the end-effector to move as desired, without accounting for the effects of inertia and gravity, the torques to apply to the system are \textbf{F}_\textbf{q} = [-1.3066, -1.3066]^T.
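Both examples can be checked numerically using the jacobian_ee sketch from above:

import numpy as np

q = np.array([np.pi / 4., 3. * np.pi / 8.])
dq = np.array([np.pi / 10., np.pi / 10.])

J = jacobian_ee(q)  # from the sketch above, with L_0 = L_1 = 1

# example 1: [-0.8026, -0.0183, 0, 0, 0, 0.6283]
print(np.dot(J, dq))
# example 2, using the reduced (x,y) Jacobian: [-1.3066, -1.3066]
print(np.dot(J[[0, 1]].T, [1., 1.]))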

And now we are able to transform end-effector forces into torques, and joint angle velocities into end-effector velocities! What a wonderful, wonderful tool to have at our disposal! Hurrah for Jacobians!

Conclusions

In this post I’ve gone through how to use Jacobians to relate the movement of joint angle and end-effector system state characterizations, but Jacobians can be used to relate any two characterizations. All you need to do is define one in terms of the other and do some partial differentiation. The above example scenarios were of course very simple, and didn’t worry about compensating for anything like gravity. But don’t worry, that’s exactly what we’re going to look at in our exciting next chapter!

Something that I found interesting to consider is the need for the orientation of the end-effector and finding the angular velocities. Often in simpler robot arms, we’re only interested in the position of the end-effector, so it’s easy to write off orientation. But if we had a gripper attached to the end-effector, then suddenly the orientation becomes very important, often determining whether or not an object can be picked up.

And finally, if you’re interested in reading more about all this, I recommend checking out ‘Velocity kinematics – The manipulator Jacobian’ available online, it’s a great resource.


Robot control part 1: Forward transformation matrices

I’m doing a tour of learning down at the Brains in Silicon lab run by Dr. Kwabena Boahen for the next month or so working on learning a bunch about building and controlling robots and some other things, and one of the super interesting things that I’m reading about is effective methods for force control of robots. I’ve mentioned operational space (or task space) control of robotic systems before, in the context of learning the inverse kinematic transformation, but down here the approach is to analytically derive the dynamics of the system (as opposed to learning them) and use these to explicitly calculate control signals to move in task space that take advantage of the passive dynamics of the system.

In case you don’t remember what those words mean, operational (or task) space refers to a different configuration space than the basic / default robot configuration space. FOR EXAMPLE: If we have a robot arm with two degrees of freedom (DOF), that looks something like this:

robot-arm1
where two joints rotate, and are referred to as q_1 and q_2, respectively, then the most obvious representation for the system state, \textbf{q}, is the joint-angle space \textbf{q} = [q_1, q_2]. So that’s great, but often when we’re using this robot we’re going to be more interested in controlling the position of the end-effector rather than the angles of the joints. We would like to complete our task in terms of \textbf{x} = [x, y], where x, y are the Cartesian coordinates of our hand in 2D space. So then \textbf{x} = [x, y] is our operational (or task) space.

I also mentioned the phrase ‘passive dynamics’. It’s true, go back and check if you don’t believe me. Passive dynamics refer to how the system moves from a given initial condition when no control signal is applied. For example, passive dynamics incorporate the effects of gravity on the system. If we put our arm up in the air and remove any control signal, it falls down by our side. The reason that we’re interested in passive dynamics is because they’re movement for free. So if my goal is to move my arm to be down by my side, I want to take advantage of the fact that the system naturally moves there on its own simply by removing my control signal, rather than using a bunch of energy to force my arm to move down.

There are a bunch of steps leading up to building controllers that can plan trajectories in a task space. As a first step, it’s important that we characterize the relationship of each of the robot links’ reference coordinate frames to the origin, or base, of the robot. The characterization of these relationships is done using what are called forward transformation matrices, and they will be the focus of the remainder of this post.

Forward transformation matrices in 2D

As I mentioned mere sentences ago, forward transformation matrices capture the relationship between the reference frames of different links of the robot. A good question might be ‘what is a reference frame?’ A reference frame is basically the point of view of each of the robotic links: if you were an arm link yourself, it is what you would consider ‘looking forward’. To get a feel for these and why it’s necessary to be able to move between them, let’s look at the reference frames of each of the links from the above drawn robot:

robot_coordinate_frames1
We know that from q_2 our end-effector point \textbf{p} is length d_2 away along its x-axis. Similarly, we know that q_2 is length d_1 away from q_1 along its x-axis, and that q_1 is length d_0 away from the origin along its y-axis. The question is, then, in terms of the origin’s coordinate frame, where is our point \textbf{p}?

In the configuration pictured above it’s pretty straightforward to figure out: it’s simply (x = d_2, y = d_0 + d_1). So you’re feeling pretty cocky, this stuff is easy. OK hotshot, what about NOW:

rotated_robot_coordinate_frames1
It’s not as straightforward once rotations start being introduced. So what we’re looking for is a method of automatically accounting for the rotations and translations of points between different coordinate frames, such that if we know the current angles of the robot joints and the relative positions of the coordinate frames we can quickly calculate the position of the point of interest in terms of the origin coordinate frame.

Accounting for rotation

So let’s just look quick at rotating axes. Here’s a picture:

rotation

The above image displays two frames of reference with the same origin rotated from each other by q degrees. Imagine a point \textbf{p} = (p_{x_1}, p_{y_1}) specified in reference frame 1; to find its coordinates in terms of the origin reference frame, or (x_0, y_0) coordinates, it is necessary to find out the contributions of the x_1 and y_1 axes to the x_0 and y_0 axes. The contributions to the x_0 axis from p_{x_1} are calculated

cos(q) p_{x_1}.

To include p_{y_1}‘s effect on position, we add the term

cos(90 + q) p_{y_1},

which is equivalent to -cos(90 - q), as shown above. This term can be rewritten as -sin(q) because sin and cos are phase shifted 90 degrees from one another.

The total contributions of a point defined in the (x_1, y_1) axes to the x_0 axis are

p_{0_x} = cos(q) p_{x_1} - sin(q) p_{y_1}.

Similarly for the y_0 axis contributions we have

p_{0_y} = sin(q) p_{x_1} + sin(90 - q) p_{y_1},

p_{0_y} = sin(q) p_{x_1} + cos(q) p_{y_1}.

Rewriting the above equations in matrix form gives:

^1_0\textbf{R} \; \textbf{p}_1 = \left[ \begin{array}{cc} cos(q) & -sin(q) \\ sin(q) & cos(q) \end{array} \right] \left[ \begin{array}{c} p_{x_1} \\ p_{y_1} \end{array} \right],

where ^1_0\textbf{R} is called a rotation matrix.
The notation used here for these matrices is that the reference frame number being rotated from is denoted in the superscript before, and the reference frame being rotated into is in the subscript. ^1_0\textbf{R} denotes a rotation from reference frame 1 into reference frame 0.

To find the location of a point defined in reference frame 1 in reference frame 0 coordinates, we then multiply by the rotation matrix ^1_0\textbf{R}.
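For a quick numerical check (using an arbitrary 30 degree rotation and a point sitting on frame 1’s x axis):

import numpy as np

q = np.radians(30.)                  # arbitrary example rotation
R = np.array([[np.cos(q), -np.sin(q)],
              [np.sin(q), np.cos(q)]])
p1 = np.array([1., 0.])              # a point on frame 1's x axis

print(np.dot(R, p1))                 # [0.866, 0.5] in frame 0 coordinates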

Accounting for translation

Alrighty, rotation is great, but as you may have noticed our robot’s joints are not all right on top of each other. The second part of transformation is translation, and so it is also necessary to account for distances between reference frame origins.

rotation_and_distance
Let’s look at the reference frames 1 and 0 shown in the above figure, where point \textbf{p} = (2,2) in reference frame 1. Reference frame 1 is rotated 45 degrees relative to reference frame 0, and its origin is located at (3, 2) in reference frame 0 coordinates. To account for this translation and rotation a new matrix will be created that includes both rotation and translation. It is generated by appending distances, denoted \textbf{D}, to the rotation matrix ^1_0\textbf{R} along with a row of zeros ending in a 1 to get a transformation matrix:

^1_0\textbf{T} = \left[ \begin{array}{cc} ^1_0\textbf{R} & ^1_0\textbf{D} \\ \textbf{0} & \textbf{1} \end{array} \right],
^1_0\textbf{T} = \left[ \begin{array}{ccc} cos(q_0) & -sin(q_0) & d_{x_0} \\ sin(q_0) & cos(q_0) & d_{y_0} \\ 0 & 0 & 1 \end{array} \right].

To make the matrix-vector multiplications work out, a homogeneous representation must be used, which adds an extra row with a 1 to the end of the vector \textbf{p} to give

\textbf{p} = \left[ \begin{array}{c} p_x \\ p_y \\ 1 \end{array} \right].

When position vector \textbf{p} is multiplied by the transformation matrix ^1_0\textbf{T} the answer should be somewhere around (3, 5) from visual inspection, and indeed:

^1_0\textbf{T} \; \textbf{p} = \left[ \begin{array}{ccc} cos(45) & -sin(45) & 3 \\ sin(45) & cos(45) & 2 \\ 0 & 0 & 1 \end{array} \right] \left[ \begin{array}{c} 2 \\ 2 \\ 1 \end{array} \right] = \left[ \begin{array}{c} 3 \\ 4.8284 \\ 1 \end{array} \right].

To get the coordinates of \textbf{p} in reference frame 0 now simply take the first two elements of the resulting vector, \textbf{p} = (3, 4.8284).

Applying multiple transformations

We can also string these things together! What if we have a 3 link, planar (i.e. rotating on the (x,y) plane) robot arm? A setup like this:

rotation and distance2
We know that our end-effector is at point (1,2) in reference frame 2, which is at an 80 degree angle from reference frame 1 and located at (x_1 = 2.5, y_1 = 4). That gives us a transformation matrix

^2_1\textbf{T} = \left[ \begin{array}{ccc} cos(80) & -sin(80) & 2.5 \\ sin(80) & cos(80) & 4 \\ 0 & 0 & 1 \end{array} \right].

To get our point in terms of reference frame 0 we account for the transform from reference frame 1 into reference frame 2 with ^2_1\textbf{T} and then account for the transform from reference frame 0 into reference frame 1 with our previously defined transformation matrix

^1_0\textbf{T} = \left[ \begin{array}{ccc} cos(45) & -sin(45) & 3 \\ sin(45) & cos(45) & 2 \\ 0 & 0 & 1 \end{array} \right].

So let’s give it a shot! By eyeballing it we should expect our answer to be somewhere around (7,0) or so, I would say.

\textbf{p}_0 = ^1_0\textbf{T} \; ^2_1\textbf{T} \; \textbf{p}_2 = \\ \\ \left[ \begin{array}{ccc} cos(45) & -sin(45) & 3 \\ sin(45) & cos(45) & 2 \\ 0 & 0 & 1 \end{array} \right] \left[ \begin{array}{ccc} cos(80) & -sin(80) & 2.5 \\ sin(80) & cos(80) & 4 \\ 0 & 0 & 1 \end{array} \right] \left[ \begin{array}{c} 1 \\ 2 \\ 1 \end{array} \right] = \left[ \begin{array}{c} -0.273 \\ 6.268 \\ 1 \end{array} \right].

And it’s a good thing we didn’t just eyeball it! Accurate drawings might have helped but the math gives us an exact answer. Super!

And one more note, if we’re often performing this computation, then instead of performing 2 matrix multiplications every time we can work out

^2_0\textbf{T} = ^1_0\textbf{T} \; ^2_1\textbf{T}

and simply multiply our point in reference frame 2 by this new transformation matrix ^2_0\textbf{T} to calculate the coordinates in reference frame 0.
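Here’s what the chained computation looks like in NumPy (the helper function is my own, just to avoid writing the matrices out twice):

import numpy as np

def transform(q_deg, dx, dy):
    """Build a 2D homogeneous transformation matrix."""
    q = np.radians(q_deg)
    return np.array([[np.cos(q), -np.sin(q), dx],
                     [np.sin(q), np.cos(q), dy],
                     [0., 0., 1.]])

T10 = transform(45., 3., 2.)     # frame 1 -> frame 0
T21 = transform(80., 2.5, 4.)    # frame 2 -> frame 1
T20 = np.dot(T10, T21)           # frame 2 -> frame 0

p2 = np.array([1., 2., 1.])      # homogeneous point in frame 2
print(np.dot(T20, p2))           # [-0.273, 6.268, 1.]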

Forward transform matrices in 3D

The example here is taken from Samir Menon’s RPP control tutorial.

It turns out it’s trivial to add in the z dimension and start accounting for 3D transformations. Let’s say we have a standard revolute-prismatic-prismatic robot, which looks exactly like this, or roughly like this:

robot_coordinate_frames_3D

where the base rotates around the z axis, and the distance from reference frame 0 to reference frame 1 is 1 unit, also along the z axis. The rotation matrix from reference frame 0 to reference frame 1 is:

^1_0\textbf{R} = \left[ \begin{array}{ccc} cos(q_0) & -sin(q_0) & 0 \\ sin(q_0) & cos(q_0) & 0 \\ 0 & 0 & 1 \end{array} \right]

and the translation vector is

^1_0\textbf{D} = [0, 0, 1]^T.

The transformation matrix from reference frame 0 to reference frame 1 is then:

^1_0\textbf{T} = \left[ \begin{array}{cccc} cos(q_0) & -sin(q_0) & 0 & 0 \\ sin(q_0) & cos(q_0) & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{array} \right],

where the third column indicates that the z axis is unchanged by this rotation (the rotation is about the z axis itself), and the fourth (translation) column shows that we move 1 unit along the z axis. The fourth row is again only present to make the multiplications work out and provides no information.

For transformation from the reference frame 1 to reference frame 2, there is no rotation (because it is a prismatic joint), and there is translation along the y axis of reference frame 1 equal to .5 + q_1. This gives a transformation matrix:

^2_1\textbf{T} = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0.5 + q_1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right].

The final transformation, from the origin of reference frame 2 to the end-effector position is similarly another transformation with no rotation (because this joint is also prismatic), that translates along the z axis:

^{ee}_2\textbf{T} = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -0.2 - q_2 \\ 0 & 0 & 0 & 1 \end{array} \right].

The full transformation from reference frame 0 to the end-effector is found by combining all of the above transformation matrices:

^{ee}_0\textbf{T} = ^1_0\textbf{T} \; ^2_1\textbf{T} \; ^{ee}_2\textbf{T} = \left[ \begin{array}{cccc} cos(q_0) & -sin(q_0) & 0 & -sin(q_0)(0.5 + q_1) \\ sin(q_0) & cos(q_0) & 0 & cos(q_0) (0.5 + q_1) \\ 0 & 0 & 1 & 0.8 - q_2 \\ 0 & 0 & 0 & 1 \end{array} \right].

To transform a point from the end-effector reference frame into terms of the origin reference frame, simply multiply the transformation matrix by the point of interest relative to the end-effector. If it is the end-effector position that is of interest to us, p = [0, 0, 0, 1]^T. For example, let q_0 = \frac{\pi}{3}, q_1 = .3, and q_2 = .4, then the end-effector location is:

^{ee}_0\textbf{T} \; \textbf{p} = \left[ \begin{array}{cccc} cos(q_0) & -sin(q_0) & 0 & -sin(q_0)(0.5 + q_1) \\ sin(q_0) & cos(q_0) & 0 & cos(q_0) (0.5 + q_1) \\ 0 & 0 & 1 & 0.8 - q_2 \\ 0 & 0 & 0 & 1 \end{array} \right] \; \left[ \begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array} \right] = \left[ \begin{array}{c} -0.693 \\ 0.4 \\ 0.4 \\ 1 \end{array} \right].

Inverting our transformation matrices

What if we know where a point is defined in reference frame 0, but we want to know where it is relative to our end-effector’s reference frame? Fortunately this is straightforward thanks to the way that we’ve defined our transform matrices. Continuing the same robot example and configuration as above, and denoting the rotation part of the transform matrix \textbf{R} and the translation part \textbf{D}, the inverse transform is defined:

(^{ee}_0\textbf{T})^{-1} = \left[ \begin{array}{cc} (^{ee}_0\textbf{R})^T & -(^{ee}_0\textbf{R})^T \; ^{ee}_0\textbf{D} \\ 0 & 1 \end{array} \right].

If we have a point that’s at \textbf{p}_0 = [1, 1, .5, 1]^T in reference frame 0, then we can calculate that relative to the end-effector it is at:

\textbf{p} = (^{ee}_0\textbf{T})^{-1} \; \textbf{p}_0 = [1.37, -1.17, 0.1, 1]^T.
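Both of these calculations are easy to verify in NumPy; here’s a sketch that builds the full transform from the entries derived above, finds the end-effector position, and then inverts:

import numpy as np

q0, q1, q2 = np.pi / 3., .3, .4

# the full transform ee -> 0 derived above
T = np.array([
    [np.cos(q0), -np.sin(q0), 0., -np.sin(q0) * (.5 + q1)],
    [np.sin(q0), np.cos(q0), 0., np.cos(q0) * (.5 + q1)],
    [0., 0., 1., .8 - q2],
    [0., 0., 0., 1.]])

print(np.dot(T, [0., 0., 0., 1.]))    # end-effector position in frame 0

# the inverse transform [R^T, -R^T D; 0, 1]
R, D = T[:3, :3], T[:3, 3]
T_inv = np.eye(4)
T_inv[:3, :3] = R.T
T_inv[:3, 3] = -np.dot(R.T, D)

print(np.dot(T_inv, [1., 1., .5, 1.]))  # p_0 in end-effector coordinates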

Conclusions

These are, of course, just the basics with forward transformation matrices. There are numerous ways to go about this, but this method is fairly straightforward. If you’re interested in more, there are a bunch of youtube videos and detailed tutorials all over the web. There’s a bunch of neat stuff about why the matrices are set up like they are (search: homogeneous transformations) and more complex examples.

The robot example for the 3D case here didn’t have any spherical joints (each joint only moved in 2 dimensions), but it is also possible to derive the forward transformation matrix in that case; it’s just more complex, and not necessary here since spherical joints aren’t used in the robots I’ll be looking at. This introduction is enough to get started and move on to some more exciting things, so let’s do that!


Inverse kinematics of 3-link arm with constrained minimization in Python

Inverse kinematics is a common topic in robotics control; one that most anyone working with a robotic arm needs to address at some point. The use case I’ll be looking at here is the situation where we want to specify an (x,y) coordinate for the end-effector, and solve for the appropriate joint angle configuration that best achieves this. It’s surprisingly straightforward to handle with the right approach, so it’s a nice little exercise in Python optimization (and we’ll even work in some graphical display!) And per usual, the code is all up on my github.

Most of the time the servos that anyone is working with are position controlled, meaning that you provide them with a set of desired angles for them to move to, rather than force controlled, where you provide them with a torque signal that directly moves them. In this case, inverse kinematics is all you need. You simply work out the set of joint angles you want the servos to move to and send those out directly. In biological systems this clearly isn’t the end of the work, because even if you know where you want to go you still have to provide a force (muscle activation) signal that will move your system there. But! as mentioned in the motor babbling paper review post I wrote a while back, having the target specified in the most basic space (i.e. joint angles instead of (x,y) coordinates for a servo based arm, or muscle lengths instead of joint angles for a muscle based arm) can make calculating, or learning, the appropriate force signal much easier; so something that can do the inverse kinematics calculation is worth having.

Inverse kinematics

Now then, how do we go about finding the joint angle configuration that best achieves a desired (x,y) coordinate? Constrained minimization! Our constraint is that the tip of the arm must be at the specified (x,y) position, and we need to specify some cost function for the system to minimize to choose among all the possible configurations that accomplish this. The cost function that we’re going to use here is a very intuitive one, that I first encountered in the paper ‘Learning operational space control’ by Jan Peters. What we’re going to do is specify a default, or resting, joint state configuration, and minimize our distance from that. Which makes a lot of sense.

I’m going to post the code for calculating the inverse kinematics of a 3-link arm, and then we’ll work through it.

import math
import numpy as np
import scipy.optimize  

class Arm3Link:
    
    def __init__(self, q=None, q0=None, L=None):
        """Set up the basic parameters of the arm.
        All lists are in order [shoulder, elbow, wrist].
        
        :param list q: the initial joint angles of the arm
        :param list q0: the default (resting state) joint configuration
        :param list L: the arm segment lengths
        """
        # initial joint angles
        if q is None: q = [.3, .3, 0]
        self.q = q
        # some default arm positions
        if q0 is None: q0 = np.array([math.pi/4, math.pi/4, math.pi/4]) 
        self.q0 = q0
        # arm segment lengths
        if L is None: L = np.array([1, 1, 1]) 
        self.L = L
        
        self.max_angles = [math.pi, math.pi, math.pi/4]
        self.min_angles = [0, 0, -math.pi/4]

    def get_xy(self, q=None):
        if q is None: q = self.q

        x = self.L[0]*np.cos(q[0]) + \
            self.L[1]*np.cos(q[0]+q[1]) + \
            self.L[2]*np.cos(np.sum(q))

        y = self.L[0]*np.sin(q[0]) + \
            self.L[1]*np.sin(q[0]+q[1]) + \
            self.L[2]*np.sin(np.sum(q))

        return [x, y]

    def inv_kin(self, xy):

        def distance_to_default(q, *args): 
            # weights found with trial and error, get some wrist bend, but not much
            weight = [1, 1, 1.3] 
            return np.sqrt(np.sum([(qi - q0i)**2 * wi
                for qi,q0i,wi in zip(q, self.q0, weight)]))

        def x_constraint(q, xy):
            x = ( self.L[0]*np.cos(q[0]) + self.L[1]*np.cos(q[0]+q[1]) + 
                self.L[2]*np.cos(np.sum(q)) ) - xy[0]
            return x

        def y_constraint(q, xy): 
            y = ( self.L[0]*np.sin(q[0]) + self.L[1]*np.sin(q[0]+q[1]) + 
                self.L[2]*np.sin(np.sum(q)) ) - xy[1]
            return y
        
        return scipy.optimize.fmin_slsqp( func=distance_to_default, 
            x0=self.q, eqcons=[x_constraint, y_constraint], 
            args=[xy], iprint=0) # iprint=0 suppresses output


I’ve taken out most of the comments for compactness, but there are plenty in the code on github, don’t you worry. Alright, let’s take a look!

First, we go through and set up the parameters of our arm. If nothing was passed in during construction, then we make an arm with initial shoulder and elbow joint angles of .3 and wrist angle of 0, and a resting configuration with all joint angles at \frac{\pi}{4}. We also set up the arm segment lengths to all be 1, and the minimum and maximum joint angles.

Next, there’s the get_xy function, which just uses some simple trig to calculate the current (x,y) position of the end-effector. This is going to be a handy thing to have when we want to see how well the joint configurations we calculate achieve our desired hand position.

And then after this, we have the actual inverse kinematics function. inv_kin takes in a desired (x,y) position, and returns to us a list of the joint angles that will best achieve this. We define three methods inside inv_kin that define our constrained optimization problem: distance_to_default, which is the function we are minimizing, and x_constraint and y_constraint, which are the constraints that must be met for a solution to be valid. These are all pretty straightforward functions; the distance_to_default function calculates the (weighted) Euclidean distance in joint space to the resting state, and the constraint functions calculate the difference between the actual and desired end-effector positions for a given set of joint angles.

There is one point of interest in the distance_to_default method, and that is the use of the weights therein. What is the point of applying a gain to these values, you ask? Why, simply that it lets us weight the relative importance of each of these joints remaining close to their default configuration! If you think about moving your arm on a horizontal plane, the wrist actually bends very little. To mimic this we can set the weight applied to the distance of the wrist from its resting state higher than those of the other joints, so that it becomes a higher priority to keep the wrist near its resting state. I suggest playing around with the weights once the graphical display (below) is built up; it’s fun to see the effects different weightings can have on the joint configurations.

With these three functions defined, we can now call up the scipy.optimize.fmin_slsqp function, which performs sequential least squares programming (SLSQP) to arrive at a solution.

Really, that’s all there is to it!
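For example, to solve for a (reachable, arbitrarily chosen) target:

arm = Arm3Link()
q = arm.inv_kin(xy=[1.5, 1.])    # target (x,y), chosen arbitrarily
print(q)                          # the optimal joint angles
print(arm.get_xy(q))              # should be very close to [1.5, 1.]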

To test this now, we have the following method:

def test():
    ############Test it!##################

    arm = Arm3Link()

    # set of desired (x,y) hand positions
    x = np.arange(-.75, .75, .05)
    y = np.arange(0, .75, .05)

    # threshold for printing out information, to find trouble spots
    thresh = .025

    count = 0
    total_error = 0
    # test it across the range of specified x and y values
    for xi in range(len(x)):
        for yi in range(len(y)):
            # test the inv_kin function on a range of different targets
            xy = [x[xi], y[yi]]
            # run the inv_kin function, get the optimal joint angles
            q = arm.inv_kin(xy=xy)
            # find the (x,y) position of the hand given these angles
            actual_xy = arm.get_xy(q)
            # calculate the root squared error
            error = np.sqrt((np.array(xy) - np.array(actual_xy))**2)
            # total the error 
            total_error += error
            
            # if the error was high, print out more information
            if np.sum(error) > thresh:
                print '-------------------------'
                print 'Initial joint angles', arm.q 
                print 'Final joint angles: ', q
                print 'Desired hand position: ', xy
                print 'Actual hand position: ', actual_xy
                print 'Error: ', error
                print '-------------------------'
            
            count += 1

    print '\n---------Results---------'
    print 'Total number of trials: ', count
    print 'Total error: ', total_error
    print '-------------------------'

Which is simply working through a set of target (x,y) points, and calculating the total error across all of them. If any particular point gives an error above the threshold level, print out the information for that point so we can take a closer look if so desired. Fortunately, the threshold error isn’t even approached in this test, and for output we get:

---------Results---------
Total number of trials:  450
Total error:  [  3.33831421e-05   2.89667496e-05]
-------------------------

Not bad!

Visualization

So that’s all great and dandy, but it’s always nice to be able to really visualize these things, so I wrote another method that uses Pyglet to help visualize. Pyglet is a really easy-to-use graphics library, which I tested out as a means here, and I was very happy with it. I ended up writing a method that pops up a window with a 3-link arm drawn on it; the arm uses the inverse kinematics function of the Arm3Link class written above to calculate the joint angles appropriate for following the mouse. Once the joint angles are calculated, their (x,y) locations are also calculated, and the arm is drawn. Pleasingly, inv_kin above is fast enough to work in real time, so it’s a kind of fun toy example. Again, I’ll show the code and then we’ll work through it below.

import numpy as np
import pyglet

import Arm

def plot(): 
    """A function for plotting an arm, and having it calculate the 
    inverse kinematics such that given the mouse (x, y) position it 
    finds the appropriate joint angles to reach that point."""

    # create an instance of the arm
    arm = Arm.Arm3Link(L = np.array([300,200,100]))

    # make our window for drawin'
    window = pyglet.window.Window()

    label = pyglet.text.Label('Mouse (x,y)', font_name='Times New Roman', 
        font_size=36, x=window.width//2, y=window.height//2,
        anchor_x='center', anchor_y='center')

    def get_joint_positions():
        """This method finds the (x,y) coordinates of each joint"""

        x = np.array([ 0, 
            arm.L[0]*np.cos(arm.q[0]),
            arm.L[0]*np.cos(arm.q[0]) + arm.L[1]*np.cos(arm.q[0]+arm.q[1]),
            arm.L[0]*np.cos(arm.q[0]) + arm.L[1]*np.cos(arm.q[0]+arm.q[1]) + 
                arm.L[2]*np.cos(np.sum(arm.q)) ]) + window.width/2

        y = np.array([ 0, 
            arm.L[0]*np.sin(arm.q[0]),
            arm.L[0]*np.sin(arm.q[0]) + arm.L[1]*np.sin(arm.q[0]+arm.q[1]),
            arm.L[0]*np.sin(arm.q[0]) + arm.L[1]*np.sin(arm.q[0]+arm.q[1]) + 
                arm.L[2]*np.sin(np.sum(arm.q)) ])

        return np.array([x, y]).astype('int')
    
    window.jps = get_joint_positions()

    @window.event
    def on_draw():
        window.clear()
        label.draw()
        for i in range(3): 
            pyglet.graphics.draw(2, pyglet.gl.GL_LINES, ('v2i', 
                (window.jps[0][i], window.jps[1][i], 
                 window.jps[0][i+1], window.jps[1][i+1])))

    @window.event
    def on_mouse_motion(x, y, dx, dy):
        # call the inverse kinematics function of the arm
        # to find the joint angles optimal for pointing at 
        # this position of the mouse 
        label.text = '(x,y) = (%.3f, %.3f)'%(x,y)
        arm.q = arm.inv_kin([x - window.width/2, y]) # get new arm angles
        window.jps = get_joint_positions() # get new joint (x,y) positions

    pyglet.app.run()

plot()

There are really only a few things that we're doing in this method: we create an arm with somewhat normal segment lengths and a window for plotting; we have a function for calculating the (x,y) coordinates of each of the joints for plotting; we have a function that tracks the position of the mouse and updates the arm's joint angles by calling the arm.inv_kin function we wrote above; and lastly we plot it! Easy. The functions aren't quite in that order, and that's because I chose alphabetical ordering over conceptual. Here's a picture:

[Image: armplot]

I think the only kind of weird thing going on in here is that the variables used both in this method and in the overridden window methods are defined as belonging to window. Aside from that, it's straightforward trig to calculate the Cartesian coordinates of each joint, and then there's also a label that displays the mouse (x,y) position for the hell of it. Honestly, the label was in a Pyglet tutorial example and I just left it in.

It’s not going to win any awards for graphics by any means, but it’s simple and it works! And, assuming you have Pyglet installed (which is very easy to do), you should straight up be able to run it and be amazed. Or a little impressed. Or not, whatever. But in any case now you have your very own inverse kinematics example on a 3-link arm! Moving to any other number of joints is exactly the same, you just have to add the corresponding number of default joint configurations and everything should work out as is.

From here, the plan is to bring an actual force-driven arm model into the mix, and then build up a system that can track the mouse, but with a control signal that specifies forces to apply to the arm, rather than positions. Which I think will be pretty neat. And in case you missed it at the top, here's a link to all the code on my github.


Likelihood calculus paper series review part 3 – Distributed control of uncertain systems using superpositions of linear operators

The third (and final, at the moment) paper in the likelihood calculus series from Dr. Terrence Sanger is Distributed control of uncertain systems using superpositions of linear operators. Carrying the torch for the series right along, here Dr. Sanger continues investigating the development of an effective, general method of controlling systems operating under uncertainty. This is the paper that delivers on all the promises of building a controller out of a system described by the stochastic differential operators we've been learning about in the previous papers. In addition to describing the theory, there are examples of system simulation with code provided, which is a wonderful, and sadly uncommon, thing in academic papers, so I'm excited. We'll go through a comparison of Bayes' rule and Markov processes (described by our stochastic differential equations), go quickly over the stochastic differential operator description, and then dive into the control of systems. The examples and code run-through I'm going to have to save for another post, though, just to keep the size of this post reasonable.

The form our system model equation will take is the very general

dx = f(x)dt + \sum g_i(x)u_i dt + \sum h_i(x, u_i)dB_i,

where f(x) represents the environment dynamics, previously also referred to as the unforced or passive dynamics of the system, g_i(x) describes how the control signal u_i affects the system state x, h_i(x, u_i) describes the state and control dependent additive noise, and dB_i is a set of independent noise processes such as Brownian motion. Our control goal is to find a set of functions of time, u_i(t), such that the dynamics of our system behave according to a desired set of dynamics that we have specified.

In prevalent methods in control theory, uncertainty is a difficult problem that often can only be handled effectively under a number of simplifications, such as linear systems with additive Gaussian noise. In the biological systems that we want to model, however, uncertainty is ubiquitous. There is noise on outgoing and incoming signals, there are unobserved controllers simultaneously exerting influence over the body, there are complicated and often unmodeled highly non-linear dynamics of the system and its interactions with the environment, etc. In the brain especially, the effect of unobserved controllers is a particular problem. Multiple areas of the brain will simultaneously be sending out control signals to the body, and the results of these signals tend to be only partially known, or known only after a delay, to the other areas. So for modeling, we need an effective means of controlling distributed systems operating under uncertainty. And that's exactly what this paper presents: 'a mathematical framework that allows modeling of the effect of actions deep within a system on the stochastic behaviour of the global system.' Importantly, the stochastic differential operators that Dr. Sanger uses to do this are linear operators, which opens up a whole world of linear methods to us.

Bayes’ rule and Markov processes

Bayesian estimation is often used in sensory processing to model the effects of state uncertainty, combining prior knowledge of state with a measurement update. Because we’re dealing with modeling various types of system uncertainty, it seems like a good idea to consider Bayes’ rule. Here, Dr. Sanger shows that Bayes’ rule is in fact insufficient in this application, and Markov processes must be used. There are a number of neat insights that come from this comparison, so it’s worth going through.

Let’s start by writing Bayes’ rule and Markov processes using matrix equations. Bayes’ rule is the familiar equation

p(x|y) = \frac{p(y|x)}{p(y)}p(x),

where p(x) represents our probability density or distribution, so p(x) \geq 0 and \sum_i p(x = i) = 1. This equation maps a prior density p(x) to the posterior density p(x|y). Essentially, this tells us the effect a given measurement of y has on the probability of being in state x. To write this in matrix notation, we assume that x takes on a finite number of states, so p(x) is a vector, which gives

p_x' = Ap_x,

where p_x and p_x' are the prior and posterior distributions, respectively, and A is a diagonal matrix with elements A_{ii} = \frac{p(y|x=i)}{p(y)}.

Now, the matrix equation for a discrete-time, finite-state Markov process is written

p_x(t+1) = Mp_x(t).

So where in Bayes’ rule the matrix (our linear operator) A maps the prior distribution into the posterior distribution, in Markov processes the linear operator M maps the probability density of the state at time t into the density at time t+1. The differences come in through the form of the linear operators. The major difference being that A is a diagonal matrix, while there is no such restriction for M. The implication of this is that in Bayes’ rule the update of a given state x depends only on the prior likelihood of being in that state, whereas in Markov processes the likelihood of a given state at time t+1 can depend on the probability of being in other states at time t. The off diagonal elements of M allow us to represent the probability of state transition, which is critical for capturing the behavior of dynamical systems. This is the reason why Bayes’ method is insufficient for our current application.

Stochastic differential operators

This derivation of stochastic differential operators is by now relatively familiar ground, so I'll be quick here. We start with a stochastic differential equation

dx = f(x)dt + g(x)dB,

where dB is our noise source, the differential of unit variance Brownian motion. The noise term introduces randomness into the state variable x, so we describe x with a probability density p(x) that evolves through time. This change of the probability density through time is captured by the Fokker-Planck partial differential equation

\frac{\partial}{\partial t}p(x,t) = - \frac{\partial}{\partial x}(f(x)p(x,t)) + \frac{1}{2} \frac{\partial^2}{\partial x^2}(g(x)g^T(x)p(x,t)),

which can be rewritten as

\dot{p} = \mathcal{L}p,

where \mathcal{L} is the linear operator

\mathcal{L} = - \frac{\partial}{\partial x}f(x) + \frac{1}{2} \frac{\partial^2}{\partial x^2}g(x)g^T(x).

\mathcal{L} is referred to as our stochastic differential operator. Because the Fokker-Planck equation is proven to preserve probability densities (non-negativity and sum to 1), applying \mathcal{L} to update our density will maintain its validity.

What is so exciting about these operators is that even though they describe nonlinear systems, they themselves are linear operators. What this means is that if we have two independent components of a system that affect its dynamics, described by \mathcal{L}_1 and \mathcal{L}_2, we can determine their combined effect on the overall system dynamics through a simple summation, i.e. \dot{p} = (\mathcal{L}_1 + \mathcal{L}_2)p.
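
To get a feel for this, here's a rough finite-difference discretization of these operators (my own sketch, not code from the paper): on a grid the operator \mathcal{L} becomes a matrix, and the effects of two subsystems combine through simple matrix addition.

import numpy as np

n, dx = 101, .1  # number of grid cells and cell width
x = np.linspace(-5, 5, n)

def operator(f, g):
    """Finite-difference approximation of the stochastic differential
    operator for dx = f(x)dt + g(x)dB. The boundary rows are crude, so
    this is only trustworthy while the density stays away from the edges."""
    D1 = (np.diag(np.ones(n-1), 1) - np.diag(np.ones(n-1), -1)) / (2*dx)
    D2 = (np.diag(np.ones(n-1), 1) - 2*np.eye(n)
          + np.diag(np.ones(n-1), -1)) / dx**2
    return -np.dot(D1, np.diag(f(x))) + .5*np.dot(D2, np.diag(g(x)**2))

# two made-up subsystems: one drifts the state toward x=1, one toward x=-2
L1 = operator(lambda x: -(x - 1.), lambda x: .5*np.ones(n))
L2 = operator(lambda x: -(x + 2.), lambda x: .5*np.ones(n))

# initial density: a Gaussian bump around x=0, normalized to sum to 1
p = np.exp(-x**2)
p /= np.sum(p)

# the combined dynamics are just the sum of the two linear operators
dt = .001
for _ in range(1000):
    p += dt * np.dot(L1 + L2, p)

print(np.sum(p))         # ~1: the update preserves total probability
print(x[np.argmax(p)])   # the density settles between the two attractors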

Controlling with stochastic differential operators

Last time, we saw that control can be introduced by attaching a weighting term to the superposition of controller dynamics, giving

\dot{p} = \sum_i u_i \mathcal{L}_i p,

where \mathcal{L}_i is the stochastic differential operator of controller i, and u_i is the input control signal to that controller. In the context of a neural system, this equation describes a set of subsystems whose weighted dynamics give rise to the overall behavior of the system. By introducing our control signals u_i, we’ve made the dynamics of the overall system flexible. As mentioned in the previous review post, our control goal is to drive the system to behave according to a desired set of dynamics. Formally, we want to specify u_i such that the actual system dynamics, \hat{\mathcal{L}}, match some desired set of dynamics, \mathcal{L}^*. In equation form, we want u_i such that

\mathcal{L}^* \approx \hat{\mathcal{L}} = \sum_i u_i \mathcal{L}_i.

It’s worth noting also here that the resulting \hat{\mathcal{L}} still preserves the validity of densities that it is applied to.

How well can the system approximate a set of desired dynamics?

In this next section of the paper, Dr. Sanger talks about reworking the stochastic operators of a system into an orthogonal set, which can then be used to easily approximate a desired set of dynamics. My guess is that the motivation for doing this is to see how close the given system is able to come to reproducing the desired dynamics. I say guess because this exercise doesn't really generate control information that can be used to directly control the system, unless we translate the weights calculated here back into terms of the actual set of actions that we have available. But it can help you understand what your system is capable of.

To do this, we’ll use Gram-Schmidt orthogonalization, which I describe in a recent post. To actually follow this orthogonalization process we’ll need to define an inner product and normalization operator appropriate for our task. A suitable inner product will be one that lets us compare the similarity between two of our operators, L_1 and L_2, in terms of their effects on an initial state probability density, p_0. So define

\langle L_1, L_2 \rangle_{p_0} = \langle L_1 p_0, L_2 p_0 \rangle = \langle \dot{p}_{L_1} , \dot{p}_{L_2} \rangle

for the discrete-space case, and similarly

\langle \mathcal{L}_1, \mathcal{L}_2 \rangle_{p_0} = \int (\mathcal{L}_1 p_0)(\mathcal{L}_2 p_0)dx = \int \dot{p}_{L_1} \dot{p}_{L_2} dx

in the continuous-state space. So this inner product calculates the change in probability density resulting from applying each of the two operators to the same initial condition, and uses the degree to which they move the system in the same direction as the measure of similarity.
The induced norm that we'll use is based on the 2-norm,

||L||_{p_0} = \frac{||L p_0||_2}{||p_0||_2}.

With the above inner product and normalization operators, we can now take our initial state, p_0, and create a new orthogonal set of stochastic differential operators that spans the same space as the original set, through the Gram-Schmidt orthogonalization method. Let's denote the orthonormal basis set vectors as \Lambda_i. Now, to approximate a desired operator L^*, generate a set of weights, \alpha, over our orthonormal basis set using a standard inner product: \alpha_i = \langle L^*, \Lambda_i \rangle. Once we have the \alpha_i, the desired operator can be recreated (as well as possible given this basis set),

L^* \approx \hat{L} = \sum \alpha_i \Lambda_i.

This can then be used as a measure of the best approximation to a desired set of dynamics that a given system can achieve with its set of operators.
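
Here's a rough numerical sketch of what this could look like in the discrete-state case (my own construction, not code from the paper; I use the p_0-weighted inner product for the projection weights as well, and normalize by ||U p_0||, which differs from the norm above only by the constant factor ||p_0||):

import numpy as np

np.random.seed(0)
n = 4
p0 = np.ones(n) / n                             # initial state density
Ls = [np.random.randn(n, n) for _ in range(3)]  # made-up operator set
L_star = np.random.randn(n, n)                  # made-up desired operator

def inner(A, B):
    """<A, B>_p0 = <A p0, B p0>: compare operators by how they
    change the density from the initial condition p0."""
    return np.dot(np.dot(A, p0), np.dot(B, p0))

# Gram-Schmidt over the operators, using the p0-weighted inner product
basis = []
for L in Ls:
    U = L - sum(inner(W, L) * W for W in basis)
    basis.append(U / np.sqrt(inner(U, U)))

# project the desired operator onto the orthonormal set
alphas = [inner(L_star, W) for W in basis]
L_hat = sum(a * W for a, W in zip(alphas, basis))

# compare the desired and best-achievable change in the density
print(np.dot(L_star, p0))
print(np.dot(L_hat, p0))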

Calculating a control signal using feedback

Up to now, there’s been a lot of dancing around the control signal, including it in equations and discussing the types of control a neural system could plausibly implement. Here, finally, we actually get into how to go about generating this control signal. Let’s start with a basic case where we have the system

\dot{p} = (\mathcal{L}_1 + u\mathcal{L}_2)p,

where \mathcal{L}_1 describes the unforced/passive dynamics of the system, \mathcal{L}_2 describes the control-dependent dynamics, and u is our control signal.

Define a cost function V(x) that we wish to minimize. The expected value of this cost function at a given point in time is

E[V] = \int V(x)p(x)dx,

which can be read as the cost of each state weighted by the current likelihood of being in that state.
To reduce the cost over time, we want the derivative of our expected value with respect to time to be negative. Written as an equation, we want

\frac{d}{dt}E[V] = \int V(x)\dot{p}(x)dx < 0.

Note that we can sub in for \dot{p} to give

\frac{d}{dt}E[V] = \int V(x)[\mathcal{L}_1p(x) + u\mathcal{L}_2p(x)]dx.

Since our control is effected through u, at a given point in time where we have a fixed and known p(x,t), we can calculate the effect of our control signal on the change in expected value over time, \frac{d}{dt}E[V], by taking the partial differential with respect to u. This gives

\frac{\partial}{\partial u}\left[\frac{d}{dt}E[V]\right] = \int V(x)\mathcal{L}_2p(x)dx,

which is intuitively read: The effect that the control signal has on the instantaneous change in expected value over time is equal to the change in probability of each state x weighted by the cost of that state. To reduce \frac{d}{dt}E[V], all we need to know now is the sign of the right-hand side of this equation, which tells us if we should increase or decrease u. Neat!

Although we only need to know the sign, it's nice to include slope information that gives some idea of how far away the minimum might be. At this point, we can simply calculate our control signal in a gradient descent fashion, by setting u = - k \int V(x)\mathcal{L}_2p(x)dx. The standard gradient descent interpretation of this is that we're calculating the effect that u has on our function \frac{d}{dt}E[V], and assuming that our function is approximately linear over some small range, we can follow the negative of its slope at that point, with step size k, to find a new point on the function that evaluates to a smaller value.

This similarly extends to multiple controllers, where if there is a system of a large number of controllers, described by

\dot{p} = \sum_i u_i \mathcal{L}_i p,

then we can set

u_i = - k_i \int V(x)\mathcal{L}_ip(x)dx.

Dr. Sanger notes that, as mentioned before, neural systems often do not permit negative control signals, so where u_i < 0 we set u_i = 0. The value of u_i for a given controller is proportional to the ability of that controller to reduce the expected cost at that point in time. If all of the u_i = 0, then it is not possible for the available controllers to reduce the expected cost of the system.
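
To make this concrete, here's a toy discrete-state simulation of this feedback rule (entirely my own sketch: two made-up controllers that push probability mass left or right along a line, a quadratic cost, and the integral replaced by a dot product, with the grid cell width absorbed into the gain):

import numpy as np

n = 50
x = np.linspace(-1, 1, n)
V = x**2                       # made-up cost function: minimum at x = 0

def shift_generator(direction):
    """Markov-process generator that moves probability mass one grid
    cell left or right (columns sum to zero, so densities stay valid)."""
    L = -np.eye(n)
    for i in range(n):
        j = min(max(i + direction, 0), n - 1)
        L[j, i] += 1.
    return L

L1 = shift_generator(+1)       # controller that pushes the state right
L2 = shift_generator(-1)       # controller that pushes the state left

p = np.zeros(n)
p[5] = 1.                      # all probability mass starts near x = -0.8

k, dt = 50., .01
for _ in range(2000):
    # u_i is proportional to controller i's ability to reduce the expected
    # cost right now, rectified because neural signals can't go negative
    u1 = max(0., -k * np.dot(V, np.dot(L1, p)))
    u2 = max(0., -k * np.dot(V, np.dot(L2, p)))
    p += dt * (u1 * np.dot(L1, p) + u2 * np.dot(L2, p))

print(x[np.argmax(p)])   # the mass ends up concentrated near x = 0
print(np.dot(V, p))      # expected cost has been driven down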

Comparing classical control and stochastic operator control

Now that we finally have a means of generating a control signal with our stochastic differential operators, let’s compare the structure of our stochastic operator control with classical control. Here’s a picture:

[Figure: comparison of classical control and stochastic operator control]

The most obvious differences are that a cost function V(x) has replaced the desired trajectory \dot{x}, and the state x has been replaced by a probability density over the possible states, p(x). Additionally, the feedback in classical control is used to find the difference between the desired and actual state, which is then multiplied by a gain to generate a corrective signal, i.e. u = k(x^* - x), whereas in stochastic operator control the signal is calculated as specified above, by following the gradient of the expected value of the cost function, i.e. u = -k \int V(x)\mathcal{L}_2p(x)dx.

Right away there is a major difference between our two control systems. In the classical control case, where we're following a desired trajectory, several things are implied. First, we've somehow come up with a desired trajectory. Second, we're assuming that regardless of what is actually going on in the system, this is the trajectory that we want to follow, which means the system is not robust to changes in the dynamics, such as outside forces or perturbations applied during movement. In the stochastic operator control case, we're not following a desired path; instead the system looks to minimize the cost function at every point in time. This means that it doesn't matter where we start from or where we're going: the stochastic operator controller looks at the effect that each controller will have on the expected value of the cost function and generates a control signal accordingly. If the system is thrown off course on the way to the target, it will recover by itself, making it far more robust than classical control. Additionally, we can easily change the cost function input to the system and see a change in the behaviour of the system, whereas in classical control a change in the cost function requires that we regenerate our desired trajectory \dot{x} before our controller will act appropriately.

While these are impressive points, it should also be noted that stochastic operator controllers are not the first to attack these issues. The robustness and behaviour business is handled very well, for specific system forms (either linear or affine, meaning linear in terms of the dynamics of the control signal applied) and cost function forms (usually quadratic), by optimal feedback controllers. Optimal feedback controllers regenerate the desired trajectory online, based on system feedback, which leads to far more robust control than classical control provides. However, as mentioned, this only holds for specific system and cost function forms. In stochastic operator control any type of cost function can be applied, and the controller dynamics described by the \mathcal{L}_i can be linear or nonlinear. This is a very important difference, making stochastic operator control far more powerful.

Additionally, stochastic operator controllers operate under uncertainty, by employing a probability density to generate a control signal. All in all, stochastic operator controllers provide an impressive and novel amount of flexibility in control.

Conclusions

Here, Dr. Sanger has taken stochastic differential operators, looked at their relation to Bayes' rule, and developed a method for controlling uncertain systems robustly. This is done through the observation that these operators are linear, which lets the effects of distributed controllers on a stochastic system be described through a simple superposition of terms. By introducing a cost function, the effect of each controller on the expected cost of the system at each point in time can be calculated, and this can then be used to generate a control signal that robustly minimizes the cost of the system through time.

Stochastic differential operators can often be inferred from the problem description; their generation is something that I'll examine more closely in the next post going through examples. Using these operators, time-varying expected costs associated with state-dependent cost functions can be calculated. Stochastic operator controllers allow significantly more freedom in the choice of cost function than has previously been available in control. Dr. Sanger notes that an important area for future research will be the development of methods for optimal control of systems described by stochastic differential operators.

The downside of stochastic operator controllers is that they are very computationally intensive, because they must propagate a joint density function forward in time at each timestep, rather than a single system state. One of the areas Dr. Sanger notes as particularly important for future work is the development of parallel algorithms, both for increasing simulation speed and for examining possible neural implementations of such algorithms.

And finally, stochastic differential operators demand a paradigm shift from classical control in the way control is considered. Instead of driving the system to a certain target state, the goal is to have the system behave according to a desired set of dynamics. The effect of a controller is then the difference between the behavior of the system with and without the activity of the controller.

Comments and thoughts

This paper was particularly exciting because it discussed the calculation of the control signals for systems described through the stochastic differential operators that have been developed over the last several papers. I admit confusion regarding the aside about developing an orthogonal equivalent set of operators; it seemed a bit of a red herring in the middle of the paper. I left out the example and code discussion from this post because it's already very long, but I'm looking forward to working through them. Also worth pointing out is that I've been playing fast and loose, moving back and forth between the continuous and discrete cases just in the interest of simplifying for understanding, but Dr. Sanger explicitly handles each case.

I’m excited to explore the potential applications and implementations of this technique in neural systems, especially in models of areas of the brain that perform a ‘look-ahead’ type function. The example that comes to mind is that of the rat reaching an intersection in a T-maze, and the neural activity recorded from place cells in the hippocampus shows the rat simulating the result of going left of going right. This seems a particularly apt application of these stochastic differential operators, as a sequence of actions and the resulting state can then be simulated and evaluated, providing that you have an accurate representation of the system dynamics in your stochastic operators.

To that end, I’m also very interested by possible means of learning stochastic differential operators for an action set. Internal models are an integral parts of motor control system models, and this seems like a potentially plausible analogue. Additionally, for modeling biological systems, the complexity of dynamics is something that is often infeasible to determine analytically. All in all, I think this is a really exciting road for exploring the neural control of movement, and I’m looking forward to seeing where it leads.

Sanger, T. (2011). Distributed control of uncertain systems using superpositions of linear operators. Neural Computation, 23(8), 1911-1934. DOI: 10.1162/NECO_a_00151

Gram-Schmidt orthogonalization

The context here is that we have some desired vector v^* that we want to build out of a set of basis vectors v_i through weighted summation. The case where this is easiest is when all of our vectors v_i are orthogonal with respect to each other. Recalling that a dot product of two vectors gives us a measure of their similarity, two vectors are orthogonal if their dot product is 0. A basic example of this is the set [1,0],[0,1], or the rotation of these vectors 45 degrees, [.7071, .7071],[-.7071, .7071].

If we have an orthogonal basis set of vectors, then to generate the weights for each of the basis vectors we simply take the dot product between each v_i and our desired vector v^*. For example, with our basis sets from above, the weights to generate the vector [.45 -.8] can be found as

w_1 = \langle [.45, -.8] , [1, 0] \rangle = .45 \\ w_2 = \langle [.45, -.8] , [0, 1] \rangle = -.8,

where \langle , \rangle denotes the dot (or inner) product. For the rotated basis set, the same calculation gives

w_1 = \langle [.45, -.8] , [.7071, .7071] \rangle = -0.2475 \\ w_2 = \langle [.45, -.8] , [-.7071, .7071] \rangle = -0.8839.

And now we have weights w_i such that for each basis set \sum_i w_i v_i = v^*. Written generally, to find the weights we have w_i = \frac{\langle v_i, v^* \rangle}{||v_i||^2}. The denominator here is the squared norm of v_i, introduced for generality. In the examples above the basis sets were composed of unit vectors (vectors with magnitude 1), so the normalization had no effect, but in general it is required.

Now, what if we don’t have an orthogonal basis set? Trouble, that’s what. With a non-orthogonal basis set, such as [1, .4], [-.1, 1], when we try our dot product business to find our coefficients looks what happens

w_1 = \frac{\langle [.45, -.8] , [1, .4] \rangle}{||[1,.4]||^2} = .1121 \\ w_2 = \frac{\langle [.45, -.8] , [-.1, 1] \rangle}{||[-.1, 1]||^2} = -.8366,

and

.1121 \cdot [1,.4] + -.8366 \cdot [-.1, 1] = [0.1957, -0.7918],

which is not a good reconstruction of our desired vector, [.45, -.8]. And the more the different basis vectors overlap along the same dimensions, the worse this becomes. Of course, we could use a least squares method to find the coefficients, but that involves matrix multiplications and inverses, and generally becomes more complex than we want.
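(For reference, numpy will happily do that least squares solve; here's what it looks like for the example above:)

import numpy as np

# columns are the non-orthogonal basis vectors; solve B w = v* for w
B = np.array([[1., -.1], [.4, 1.]])
w = np.linalg.lstsq(B, np.array([.45, -.8]))[0]
print(np.dot(B, w))   # recovers [.45, -.8]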

So, let’s say we have a basis set of two different, but non-orthogonal vectors, v_1 and v_2. We instead want two vectors u_1 and u_2 which describe the same space, but are orthogonal. By describing the same space, I mean that their span is the same. And by span I mean the set of values that can be generated through weighted summation of the two vectors. So we set u_1 = v_1, and the task is now to find the appropriate u_2. As a conceptual description, we want u_2 to be equal to v_2, but only covering the area of space that u_1 isn’t covering already. To do this, we can calculate at the overlap between u_1 and v_2, then subtract out that area from v_2. The result should then give us the same area of state space covered by v_1 and v_2, but in a set of orthogonal vectors u_1 and u_2.

Mathematically, we calculate the projection of v_2 onto u_1, which is the dot product \langle u_1, v_2 \rangle normalized by the squared magnitude of u_1 and multiplied by u_1, and then subtract that projection from v_2. All together we have

u_2 = v_2 - \frac{\langle u_1, v_2 \rangle}{\langle u_1, u_1 \rangle}u_1.

Using our non-orthogonal example above,

u_1 = [1, .4]

u_2 = [-.1, 1] - \frac{\langle [1, .4] , [-.1, 1] \rangle}{\langle [1, .4] , [1, .4] \rangle}[1, .4] = [-0.3586, 0.8966].

Due to roundoff during the calculation of u_2, the dot product between u_1 and u_2 isn’t exactly zero, but it’s close enough for our purposes.

OK, great. But what about if we have 3 vectors in our basis set and want 3 orthogonal vectors (assuming we've moved to a 3-dimensional space) that span the same space? In this case, how do we calculate u_3? Well, carrying on with our intuitive description, you might assume that the calculation would be the same as for u_2, except that now you must subtract out from v_3 everything that is already covered by u_1 and u_2. And you would be correct:

u_3 = v_3 - \frac{\langle u_1, v_3 \rangle}{\langle u_1, u_1 \rangle}u_1 - \frac{\langle u_2, v_3 \rangle}{\langle u_2, u_2 \rangle}u_2.

In general, we have

u_i = v_i - \sum_{j < i} \frac{\langle u_j, v_i \rangle}{\langle u_j, u_j \rangle}u_j.

And now we know how to take a given set of basis vectors and generate a set of orthogonal vectors that span the same space, which we can then use to easily calculate the weights over our new basis set of u_i vectors to generate the desired vector, v^*.
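
To tie it all together, here's a short numpy implementation of the procedure, run on the example vectors from above:

import numpy as np

def gram_schmidt(vs):
    """Build an orthogonal set spanning the same space as vs by
    subtracting from each vector its projection onto the ones kept so far."""
    us = []
    for v in vs:
        u = v - sum(np.dot(uj, v) / np.dot(uj, uj) * uj for uj in us)
        us.append(u)
    return us

v_star = np.array([.45, -.8])
us = gram_schmidt([np.array([1., .4]), np.array([-.1, 1.])])
print(us[1])    # ~[-0.3586, 0.8966], orthogonal to us[0]

# weights over the orthogonal basis, then the reconstruction
ws = [np.dot(u, v_star) / np.dot(u, u) for u in us]
print(sum(w * u for w, u in zip(ws, us)))   # recovers [.45, -.8]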
