Nengo scripting: absolute value

If you just want the code, you can copy / paste it from below or grab it from my GitHub: absolute_value.py

This is a simple script for computing the absolute value function in Nengo. The most efficient way I've found to implement this is to use two separate populations for each dimension of the input signal: one represents the signal when it's greater than zero and simply relays it to the output node, and the other represents the signal when it's less than zero and projects x * -1 to the output node. Here's the code; I'll step through it below.

def make_abs_val(name, neurons, dimensions, intercept=[0]):
    def mult_neg_one(x):
        return x[0] * -1 

    abs_val = nef.Network(name)

    abs_val.make('input', neurons=1, dimensions=dimensions, mode='direct') # create input relay
    abs_val.make('output', neurons=1, dimensions=dimensions, mode='direct') # create output relay
    
    for d in range(dimensions): # create a positive and negative population for each dimension in the input signal
        abs_val.make('abs_pos%d'%d, neurons=neurons, dimensions=1, encoders=[[1]], intercept=intercept)
        abs_val.make('abs_neg%d'%d, neurons=neurons, dimensions=1, encoders=[[-1]], intercept=intercept)

        abs_val.connect('input', 'abs_pos%d'%d, index_pre=d)
        abs_val.connect('input', 'abs_neg%d'%d, index_pre=d)
    
        abs_val.connect('abs_pos%d'%d, 'output', index_post=d)
        abs_val.connect('abs_neg%d'%d, 'output', index_post=d, func=mult_neg_one)

    return abs_val.network

First off, the function takes in parameters specifying a name, the number of neurons for each population generated, the number of dimensions of the input signal, and optionally an intercept value. I'll come back to why the intercept value is an option in a bit.
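
For reference, here is what a call looks like. The first line matches what the full script further down does (with N = 50 and D = 3); the second line is a hypothetical sketch of passing an explicit intercept, which, going by how intercepts are described below, would shift where the populations start responding so that values smaller than 0.1 get treated as zero (not something the script below actually uses):

abs_net = make_abs_val(name='abs_val', neurons=50, dimensions=3)
# hypothetical tweak: raise the intercept so values below 0.1 are ignored (not used in the script below)
abs_net = make_abs_val(name='abs_val', neurons=50, dimensions=3, intercept=[0.1])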

Inside the make_abs_val function, another function is defined that multiplies the first dimension of its input by -1. This mult_neg_one function will be used on the connections from the populations representing negative values of the input signal.

Next, we create the network and call it abs_val. Input and output relay nodes are then created, each with a single neuron and the specified number of dimensions, and both set to direct mode, so their values are computed directly rather than represented by spiking neurons. These relays are the populations that anything outside the abs_val network will connect to.

Now there is a loop over each dimension of the input signal. Inside it, two populations are created whose only difference is their encoder values. A population's intercept specifies the start of the range of values it represents; the default is 0, so when it's not specified these populations represent values from 0 to 1 (1 being the default end of the range). For abs_neg, setting encoders=[[-1]] flips the represented range from (0, 1) to (-1, 0). Now we have two populations for dimension d: one that represents only positive values (between 0 and 1), and one that represents only negative values (between -1 and 0). And we're almost done!
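
To make the arithmetic concrete, here is a plain Python sketch (not Nengo code, just the reference computation) of what the two half-populations compute together for a single dimension: abs_pos contributes x when x > 0, abs_neg represents x when x < 0 and has its contribution multiplied by -1 on the way to the output, and the sum of the two is |x|.

def abs_reference(x):
    pos_part = x if x > 0 else 0.0   # what abs_pos contributes to the output
    neg_part = x if x < 0 else 0.0   # what abs_neg represents
    return pos_part + neg_part * -1  # mult_neg_one is applied on the abs_neg connection

print(abs_reference(0.6))   # 0.6
print(abs_reference(-0.6))  # 0.6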

The only thing left to do is to hook the populations up to the input and output appropriately, and to incorporate the mult_neg_one function into the connection between each abs_neg population and the output relay node. We want each pair of populations representing a single dimension to receive from, and project back into, the corresponding dimension of the input and output relays, so we use the index_pre and index_post parameters. Because each pair should receive only dimension d of the input, we set index_pre=d on the connection from the input. When setting up the projections to the output relay node, we similarly want each population to project only into output dimension d, so we set index_post=d.

By default, the connect call sets up a communication channel, that is to say, no computation is performed on the signal passed from the pre population to the post population. This is what we want for the abs_pos populations, but on the connections from the abs_neg populations we want the mult_neg_one function applied, so that negative values are multiplied by -1 and come out positive. This is done with the func parameter, so we set func=mult_neg_one. Now the signal sent from each abs_neg population to the output node is transformed by the mult_neg_one function.
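
This func mechanism isn't specific to absolute value; it's how any function is computed on a connection with this scripting interface. As a minimal sketch (assuming two one-dimensional populations named 'A' and 'B' that already exist in a network called net, which are not part of the script in this post), computing a square on a connection looks exactly the same way:

def square(x):
    return x[0] * x[0]

# decode an estimate of x squared from 'A' and project it into 'B'
net.connect('A', 'B', func=square)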

And that's it! Here is a script that gets it running (which can also be found on my GitHub: absolute_value.py):

import nef
import random

# constants / parameter setup etc
N = 50 # number of neurons
D = 3 # number of dimensions

def make_abs_val(name, neurons, dimensions, intercept=[0]):
    def mult_neg_one(x):
        return x[0] * -1 

    abs_val = nef.Network(name)

    abs_val.make('input', neurons=1, dimensions=dimensions, mode='direct') # create input relay
    abs_val.make('output', neurons=1, dimensions=dimensions, mode='direct') # create output relay
    
    for d in range(dimensions): # create a positive and negative population for each dimension in the input signal
        abs_val.make('abs_pos%d'%d, neurons=neurons, dimensions=1, encoders=[[1]], intercept=intercept)
        abs_val.make('abs_neg%d'%d, neurons=neurons, dimensions=1, encoders=[[-1]], intercept=intercept)

        abs_val.connect('input', 'abs_pos%d'%d, index_pre=d)
        abs_val.connect('input', 'abs_neg%d'%d, index_pre=d)
    
        abs_val.connect('abs_pos%d'%d, 'output', index_post=d)
        abs_val.connect('abs_neg%d'%d, 'output', index_post=d, func=mult_neg_one)

    return abs_val.network

net = nef.Network('network')

# Create absolute value subnetwork and add it to net
net.add(make_abs_val(name='abs_val', dimensions=D, neurons=N))

# Create function input
net.make_input('input', values=[random.random() for d in range(D)])

# Connect things up
net.connect('input', 'abs_val.input')

# Add it all to the Nengo world
net.add_to_nengo()

And here’s a picture of it running.
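
One note on the test input: random.random() only generates values between 0 and 1, so with this input the abs_neg populations never have anything to do. To see the negative half of the network in action, you can draw the input values from the full (-1, 1) range instead:

net.make_input('input', values=[random.uniform(-1, 1) for d in range(D)])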

6 thoughts on "Nengo scripting: absolute value"

  1. JaRda says:

    Hi, nice script you have here. I've tried to run it, but Nengo says:

    Traceback (most recent call last):
    File "../../models/nengo/absolute_value.py", line 38, in
    net.connect('input', 'abs_val.input')
    File "__pyclasspath__/nef/nef_core.py", line 579, in connect
    at ca.nengo.model.impl.NetworkImpl.getNode(NetworkImpl.java:348)
    at sun.reflect.GeneratedMethodAccessor45.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)

    ca.nengo.model.StructuralException: ca.nengo.model.StructuralException: No Node named abs_val.input in this Network

    It seems that the subnetwork's input node cannot be found; might it be necessary to "expose the input outside the network"?

    Thanks for advice
    Jarda

  2. travisdewolf says:

    Hmm, are you using Nengo v1.4? http://ctnsrv.uwaterloo.ca:8080/jenkins/job/Nengo%201.4/lastSuccessfulBuild/artifact/nengo-1.4.zip

    Let me know if that doesn't work; hopefully it will! One of the updates to Nengo in the last year allows for accessing nodes in subnetworks without having to expose them, a feature I find very useful!

    • JaRda says:

      You are right; I just updated Nengo to the latest dev version and both of your scripts now work.
      I also find this feature useful; it's actually how I found your blog.
      Anyway, thank you for your reply!

      • travisdewolf says:

        Glad to hear that it works! I’ll be putting up some more Nengo posts in the next couple months, if there’s anything specific that would be helpful don’t hesitate to suggest topics!

  3. JaRda says:

    Hi, OK. For example, recently I've tried to simulate a fully-connected (classical, non-NEF) ANN defined by a weight matrix (the recurrent, input, and output weights are all tuned). Inspired by your scripts, I managed to make something like this: http://goo.gl/FGl1u . Two things I am not sure about:
    - Is this the simplest way to make independent IO connections?
    - I noticed that the connection weight visualizers also take into account the gain and bias of the target ensemble. Is there a way to visualize the absolute values of the connection weights?

    So, this could be one interesting topic, and I will try to come up with something else. Thanks for your blog.

    • travisdewolf says:

      Hi Jarda,

      sorry about the delay in responding. Could you rephrase your questions for me? I’m afraid I’m not quite sure what you’re asking, but I think we just need to iron out definitions for the terms we’re using.

      For visualizing the connection weights: in Interactive Mode in Nengo, if you right-click on the post-synaptic population there is an option to show the connection weights, which might be what you're looking for. What exactly do you mean by independent IO connections?

      If you can send me your script to look at I might be able to be more help!
