If you just want the code, you can copy / paste it from below or get it from my github: low_pass_derivative_filter.py.

In the course of building models in Nengo, I recently found I needed a neural implementation of a low pass derivative filter. I scripted up a sub-network in Nengo (www.nengo.ca) that does this, and until we get the model database / repository up and running for Nengo scripts I'll keep working through building these things here, because doing so covers some basic methods that can be useful when you're building up your own models.

Basically there are three parts to this model: Derivative calculation, absolute value calculation, and an inhibitory projection with a threshold activation that projects to the output population. Here’s a picture:

Here’s the idea: The population input projects both directly to the output population and to a population that calculates the derivative of the sum across dimensions of the input signal. The derivative population passes on the derivative of the input signal to an absolute value calculating population, which passes the absolute value of the derivative on to a population that isn’t activated for values under a threshold level. This threshold population then projects very strong inhibition to the output population, so that when the absolute value of the derivative is above the threshold level, no output is projected, and otherwise the system just relays the input signal straight through. Here’s the code:

```python
def make_dlowpass(name, neurons, dimensions, radius=10, tau_inhib=0.005, inhib_scale=10):

    dlowpass = nef.Network(name)

    dlowpass.make('input', neurons=1, dimensions=dimensions, mode='direct') # create input relay
    output = dlowpass.make('output', neurons=dimensions*neurons, dimensions=dimensions) # create output relay

    # now we track the derivative of sum, and only let output relay the input
    # if the derivative is below a given threshold
    dlowpass.make('derivative', neurons=radius*neurons, dimensions=2, radius=radius) # create population to calculate the derivative
    dlowpass.connect('derivative', 'derivative', index_pre=0, index_post=1, pstc=0.1) # set up recurrent connection

    dlowpass.add(make_abs_val(name='abs_val', neurons=neurons, dimensions=1, intercept=(.2,1))) # create a subnetwork to calculate the absolute value

    # connect it up!
    dlowpass.connect('input', 'output') # set up communication channel
    dlowpass.connect('input', 'derivative', index_post=0)

    def sub(x):
        return [x[0] - x[1]]
    dlowpass.connect('derivative', 'abs_val.input', func=sub)

    # set up inhibitory matrix
    inhib_matrix = [[-inhib_scale]] * neurons * dimensions
    output.addTermination('inhibition', inhib_matrix, tau_inhib, False)
    dlowpass.connect('abs_val.output', output.getTermination('inhibition'))

    return dlowpass.network
```

First off, this code is taking advantage of the absolute value function that was written a couple blog posts ago. You can either go check out that post or I’ll also have that function included in the code at the end for completeness. Aside from that, there are a lot of things going on that you won’t come across if you’re just coding up simple examples, so let’s look at them.

At the top, we assign our `output` network a handle, which in general I try to avoid for neatness, since most of the time you can reference a population simply by its name, `'output'`. The reason I assign it a handle here is that we're going to be calling on some Java API features that (to my knowledge) aren't handled in the Python API yet, and although we could retrieve the `output` node as a Java object with `dlowpass.get('output')` using only its assigned name, it will just be cleaner in the end to have a handle for it. We'll come back to this.

The next interesting thing that happens is that when we're creating our `derivative` population, we set the number of neurons to `radius*neurons` (i.e. `10*neurons` by default), and `radius=10`. This is because in `derivative` we're representing the sum of all of the input dimensions. How are all the input dimensions being summed up in `derivative`, you ask? Just below, on line 17. When we connect the `input` relay to `derivative` we set `index_post=0`, which means that all of the `input` dimensions are going to project to the same dimension of the `derivative` population. The default weight for this connection is 1, so dimension 0 of `derivative` is equal to `sum(1 * value for value in input_dimensions)`. Super.
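To make the summing behaviour of that connection concrete, here's a minimal NumPy sketch (not Nengo code) of what projecting every input dimension onto dimension 0 with weight 1 works out to; the example input vector is made up for illustration:

```python
import numpy as np

# hypothetical 3-dimensional input signal, just for illustration
input_signal = np.array([0.5, -0.2, 1.3])

# connecting every input dimension to dimension 0 with weight 1
# is equivalent to multiplying by a 1 x D matrix of ones
weights = np.ones((1, len(input_signal)))
derivative_dim0 = weights.dot(input_signal)[0]

print(derivative_dim0)  # ~1.6, the sum of the input dimensions
```

So dimension 0 of `derivative` simply carries the sum of whatever the input relay is representing.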

But why do we set `radius=10`? This is because the `radius` parameter specifies the range of values represented by this population. The default is `(-1,1)`, but when we specify `radius`, the range of represented values becomes `(-radius, radius)`. We're making a bit of an assumption that this value won't go outside the range `(-10,10)` here, but that should be OK for most of the situations we're going to come across. In the specific model I'm using this for it's definitely the case, which is why I've set the default value to 10. And because we don't want the accuracy of the representation to decrease, we also scale up the number of neurons in this population to `radius*neurons`.
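As an idealized (non-neural) sketch of what `radius` buys us, you can think of the represented value as being clipped to the represented range; values inside `(-radius, radius)` come through, values outside saturate:

```python
import numpy as np

def represent(x, radius=10.0):
    """Idealized sketch of the radius parameter: values inside
    (-radius, radius) are represented; values outside saturate."""
    return float(np.clip(x, -radius, radius))

print(represent(7.5))   # 7.5  -- within the default radius of 10
print(represent(14.0))  # 10.0 -- outside the range, representation saturates
```

In the real network the saturation is soft and noisy rather than a hard clip, but the intuition is the same.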

And there's still more happening in this `derivative` population! On line 11 I specify a recurrent connection that projects into a second dimension represented in `derivative`. So now what's going to happen is that the sum of the input signals is projected into the first dimension of `derivative`, and through the recurrent connection the value of the sum of the input signals from time `t-pstc` will be represented in the second dimension of the `derivative` population.
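Here's a discrete-time, non-neural sketch of that idea, assuming a simple first-order low-pass filter for the recurrent connection and a 1 ms timestep (both assumptions for illustration, not values pulled from the Nengo internals):

```python
import numpy as np

dt = 0.001         # simulation timestep (assumed)
pstc = 0.1         # post-synaptic time constant on the recurrent connection
alpha = dt / pstc  # coefficient of a first-order low-pass filter

t = np.arange(0, 1, dt)
signal = np.sin(2 * np.pi * t)  # stand-in for the summed input signal

delayed = 0.0                   # dimension 1: the low-passed copy of dimension 0
deriv = np.zeros_like(signal)
for i in range(len(signal)):
    deriv[i] = signal[i] - delayed            # dimension 0 minus dimension 1
    delayed += alpha * (signal[i] - delayed)  # recurrent connection smooths dim 0 into dim 1
```

For slowly changing signals, `x - lowpass(x)` is approximately `pstc * dx/dt`, so what comes out is a scaled (and slightly lagged) estimate of the derivative, which is all we need for thresholding.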

To calculate the derivative, then, it's a simple matter of subtracting the previous signal from the current signal, which is what happens in the function `sub` that I define on line 19. To implement this function, when connecting `derivative` to `abs_val`, we just set the parameter `func=sub`. Easy.

Now, when the `abs_val` population is made, we set `intercept=(.2,1)`. This is the same trick we used in the previous absolute value function model, but it's acting as a threshold here: this population won't respond if the value being projected into it is between `(-.2, .2)`.
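In idealized form, the behaviour we get from setting the intercepts this way is a thresholded absolute value; here's a plain-Python sketch of that (the neurons will of course approximate this curve noisily rather than compute it exactly):

```python
def thresholded_abs(x, threshold=0.2):
    """Idealized behaviour of the abs_val population with intercept=(.2, 1):
    silent below the threshold, passes on abs(x) above it."""
    return abs(x) if abs(x) > threshold else 0.0

print(thresholded_abs(0.1))   # 0.0 -- below threshold, population stays silent
print(thresholded_abs(-0.5))  # 0.5 -- above threshold, absolute value passed on
```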

So, up to this point, what we have is a summation of the input dimensions, the derivative being calculated and passed to an absolute value function, and this population only responds if the derivative is greater than .2.

The last part is hooking up this `abs_val` population to the output relay, to suppress output whenever it's activated (i.e. whenever the absolute value of the derivative is greater than .2). This is where we need to pull in the Java API, and why we specified a handle for our `output` population. In lines 25-26, instead of using the NEF neural compiler functionality to set up our connection weights to compute some function, we're specifying them ourselves, and we're specifying them to prevent the neurons in `output` from firing. Now, the activation of the neurons in `abs_val` reduces the voltage values being sent into the `output` population, inhibiting their activity.
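Putting the whole pipeline together, here's an idealized (non-neural) scalar sketch of what one step of the network does; the function name and the single-step framing are mine, for illustration only:

```python
def dlowpass_step(x, x_prev, threshold=0.2):
    """Idealized sketch of the full network for a scalar input:
    relay x unless it has changed too much since the last value."""
    deriv = x - x_prev                   # 'derivative' population
    gate_open = abs(deriv) <= threshold  # 'abs_val' + threshold population
    return x if gate_open else 0.0       # strong inhibition zeroes the output

print(dlowpass_step(0.55, 0.5))  # 0.55 -- slow change passes through
print(dlowpass_step(0.5, 0.0))   # 0.0  -- fast change is suppressed
```

In the actual network the inhibition isn't a clean zero, just a strong enough negative input that the `output` neurons stop firing, but the effect is the same gating behaviour.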

And that’s it! Here’s the complete code to run an example of this network (which can also be found on my github low_pass_derivative_filter.py):

```python
import nef

# constants / parameter setup etc
N = 50 # number of neurons
D = 3 # number of dimensions

def make_abs_val(name, neurons, dimensions, intercept=[0]):

    def mult_neg_one(x):
        return x[0] * -1

    abs_val = nef.Network(name)

    abs_val.make('input', neurons=1, dimensions=dimensions, mode='direct') # create input relay
    abs_val.make('output', neurons=1, dimensions=dimensions, mode='direct') # create output relay

    for d in range(dimensions): # create a positive and negative population for each dimension in the input signal
        abs_val.make('abs_pos%d'%d, neurons=neurons, dimensions=1, encoders=[[1]], intercept=intercept)
        abs_val.make('abs_neg%d'%d, neurons=neurons, dimensions=1, encoders=[[-1]], intercept=intercept)

        abs_val.connect('input', 'abs_pos%d'%d, index_pre=d)
        abs_val.connect('input', 'abs_neg%d'%d, index_pre=d)

        abs_val.connect('abs_pos%d'%d, 'output', index_post=d)
        abs_val.connect('abs_neg%d'%d, 'output', index_post=d, func=mult_neg_one)

    return abs_val.network

def make_dlowpass(name, neurons, dimensions, radius=10, tau_inhib=0.005, inhib_scale=10):

    dlowpass = nef.Network(name)

    dlowpass.make('input', neurons=1, dimensions=dimensions, mode='direct') # create input relay
    output = dlowpass.make('output', neurons=dimensions*neurons, dimensions=dimensions) # create output relay

    # now we track the derivative of sum, and only let output relay the input
    # if the derivative is below a given threshold
    dlowpass.make('derivative', neurons=radius*neurons, dimensions=2, radius=radius) # create population to calculate the derivative
    dlowpass.connect('derivative', 'derivative', index_pre=0, index_post=1, pstc=0.1) # set up recurrent connection

    dlowpass.add(make_abs_val(name='abs_val', neurons=neurons, dimensions=1, intercept=(.2,1))) # create a subnetwork to calculate the absolute value

    # connect it up!
    dlowpass.connect('input', 'output') # set up communication channel
    dlowpass.connect('input', 'derivative', index_post=0)

    def sub(x):
        return [x[0] - x[1]]
    dlowpass.connect('derivative', 'abs_val.input', func=sub)

    # set up inhibitory matrix
    inhib_matrix = [[-inhib_scale]] * neurons * dimensions
    output.addTermination('inhibition', inhib_matrix, tau_inhib, False)
    dlowpass.connect('abs_val.output', output.getTermination('inhibition'))

    return dlowpass.network

# Create network
net = nef.Network('net')

# Create / add low pass derivative filter
net.add(make_dlowpass(name='dlowpass', neurons=N, dimensions=D))

# Make function input
net.make_input('input_function', values=[0]*D)

# Connect up function input to filter
net.connect('input_function', 'dlowpass.input')

# Add it all to Nengo
net.add_to_nengo()
```

And here's a picture of it running. What you can see is that any time the input changes quickly, the system output drops to zero, but when the input is holding constant or is changing slowly the output is allowed to pass through. Great! Just what we wanted.

Do you have a tutorial that introduces Dyna with Q-Learning?

I do not, sorry! You’re referring to Dyna-Q from Sutton? I’ve been wanting to do another post on RL, this would be an interesting topic. I’ll let you know if I write one!