Reinforcement Learning part 3: Egocentric learning

When an agent is learning how to move, it’s learning a set of values to associate with the possible actions in each situation it encounters. Because these values are tied to taking a particular action in a particular state, they are called the state-action values, also referred to as the Q-values. How the agent identifies ‘a situation’ can be handled in a couple of different ways.
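As a concrete picture of what’s being stored, you can think of the Q-values as a lookup table keyed first by situation and then by action. Here’s a minimal Python sketch; the state labels, actions, and numbers are just made up for illustration:

    # a tiny sketch of a Q-value table: Q[situation][action] -> estimated value
    # (state labels, actions, and numbers are made up for illustration)
    Q = {'cliff-above': {'E': 0.5, 'W': -0.2, 'S': -1.0},
         'nothing-around': {'E': 0.7, 'W': -0.1, 'S': -1.0}}

    def greedy_action(Q, state):
        # pick the action with the highest learned value in this situation
        return max(Q[state], key=Q[state].get)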

If the agent is performing egocentric learning, then it distinguishes situations based on its immediate surroundings. If the agent is performing allocentric learning, then it has an internal map of the environment, and each different position on the map is considered a different situation. Whether an agent is using allocentric or egocentric learning ends up influencing its behavior quite a bit, and both have their pros and cons. We’ve seen a bit of both egocentric and allocentric learning in the previous RL posts; in this post we’re going to take a closer look at egocentric learning specifically. The code for this post can be found up on my GitHub.

Difference between egocentric and allocentric learning

Let’s look at when a new set of action values is learned in these two cases. Going back to the ever wonderful mouse and cheese scenario, we’ll create a map for our mouse to learn to navigate.

[Figure: map]
In egocentric learning, each situation is determined by what is near the mouse, and so new action values for each of the possible movements ({W,NW,N,NE,E,SE,S,SW}) have to be learned whenever the immediate environment is not something the agent has seen before. For example, the mouse keeps a different set of action values for each of the following environments:

[Figure: mouse states]
On the other hand, in the case of allocentric learning, each situation is determined by the (x,y) coordinates of the mouse’s current position. That means that the mouse keeps a different set of action values for every possible (x,y) position:

[Figure: mouse allocentric states]
so even if the environment in each position is the same:

[Figure: mouse states 2]
with different (x,y) coordinates the mouse maintains different action value maps for all of them.
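To make the distinction concrete, here’s a minimal sketch of how the two learners might build their state keys. The world representation and helper function here are assumptions for illustration, not the actual code from the repo:

    # sketch: how each learner might identify 'the current situation'
    # world is assumed to be a dict mapping (x, y) -> 'wall', 'cliff', 'cheese', ...
    def get_cell(world, x, y):
        return world.get((x, y), 'empty')

    def egocentric_state(world, x, y):
        # the situation is the contents of the 8 surrounding cells, so identical
        # surroundings at different positions collapse into the same state
        offsets = [(-1, -1), (0, -1), (1, -1), (-1, 0),
                   (1, 0), (-1, 1), (0, 1), (1, 1)]
        return tuple(get_cell(world, x + dx, y + dy) for dx, dy in offsets)

    def allocentric_state(world, x, y):
        # the situation is just the position, so every square on the map
        # gets its own independent set of action values
        return (x, y)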

With egocentric learning, the agent tends to develop immediately beneficial behavior more quickly, learning to avoid running off cliffs and to move towards cheese when it’s in sight. The agent can even learn to move towards cheese that’s outside the range of its sensors in simple maps, essentially by learning heuristics like ‘always moving right leads to reward.’ But this falls apart if there are several areas of the map where the immediate environment is the same, as in the case above with states 2 and 3.

With allocentric learning, the agent develops action value mappings that are much better suited to the overall environment and long-term goals (e.g. if the cheese is always in the center of the map: if you are in the bottom-right move up and left, and if you are in the top-left move down and right). But it can’t generalize across local situations (for example, that there is a cliff one space to the right): the consequences of movement at each (x,y) position must be learned independently.

Clearly a combination of these two learning types is desirable, but that is for a future post! There is still a lot to examine in egocentric learning, so here we’re going to dig deeper.

Thinking about the future

In allocentric learning, you don’t often perform an action, successfully move, and then still wind up in the same state. This makes propagating information about rewards received in the future backwards to states that are still far away fairly stable, because a given state is always the same distance from the reward. In egocentric learning, however, this isn’t the case: you can move and land in a state identical to the one you just left. So when you start trying to propagate reward information backwards you can get some unstable behavior. Let’s look at a very basic scenario:

[Figure: cliffs map]
The mouse is in blue on the left, the reward is in green on the right, and red represents cliffs / certain death. Now, whenever the mouse isn’t beside one of the walls it will be in one of three states: cliffs above, nothing around it, or cliffs below:

[Figure: the three states]

OK. Now let’s look at the update formula for Q-learning again:

Q(s, a) += alpha * (reward(s') + gamma * max(Q(s')) - Q(s, a))

where s is the previous state, a is the action just taken, s' is the current state, reward(s') is the reward for being in the current state, and max(Q(s')) is the largest of the Q-values for the actions available in the current state, i.e. the best return the agent expects to be able to get from s'.

Informally, it says that the change to make to the Q-value for the action just taken, Q(s, a), is equal to your learning rate, alpha, multiplied by: the reward for the state that you moved into, reward(s'), plus the discounted lookahead reward, gamma * max(Q(s')), which relays the reward expected now that you’re in state s', minus the previous Q-value, Q(s, a).
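In code, the whole update is only a few lines. Here’s a minimal sketch (the table layout and parameter values are assumptions, not the exact implementation from the GitHub code):

    # sketch of the Q-learning update described above
    # Q is a dict of dicts: Q[state][action] -> value
    def update_Q(Q, s, a, reward, s_new, alpha=0.2, gamma=0.8):
        lookahead = max(Q[s_new].values())   # best value available from the new state
        target = reward + gamma * lookahead  # what we want Q[s][a] to move towards
        Q[s][a] += alpha * (target - Q[s][a])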

Let’s break this down a bit more, because it can still be complicated to think about. If we set gamma = 0, then our update is

Q(s, a) += alpha * (reward(s') - Q(s, a)).

Here it’s more apparent what the dynamics are doing: they’re driving the Q-value to the reward of the state you end up in. So you take an action a in state s, you find the reward for where you ended up, and you learn the reward for performing action a in state s. Now, if gamma is not equal to 0, then you can think of the Q-value dynamics as trying to drive the Q-values to

reward(s') + gamma * max(Q(s'))

The larger gamma is, the farther into the future you look. This can lead to unstable / non-convergent behavior when you repeatedly move into the same state again and again. For example, say you’re in State 2 from above. If you move right often enough you’ll eventually end up in a state that leads to getting a reward, which boosts the value of the state just before the reward, and the next time you happen to move from State 2 into a state where the reward is right beside you, the Q-values of State 2 get updated too. So you get your reward and reset. Now you move into State 2 again and hey! Moving right leads to a reward. So you move right, but you land in State 2 again, and the value of moving right drops a bit. It’s still the best decision, though, so you move right, end up in State 2 again, and the value of moving right drops again, etc. etc. Eventually moving right does bring you to the target and the Q-value for moving right gets a boost. So it’s going to be fluctuating forever.
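Here’s a toy version of that fluctuation in code. One state (‘corridor’) usually transitions back into itself when the mouse moves right, and occasionally into a state next to the cheese whose value we just assume is already high; the transition probability and all the numbers are made up for illustration:

    import random

    # toy illustration of the limit cycle: moving right usually lands the mouse
    # back in the same corridor state, and only occasionally next to the cheese
    alpha, gamma = 0.2, 0.8
    Q = {'corridor': {'right': 0.0, 'left': 0.0},
         'cheese-next': {'right': 10.0, 'left': 0.0}}   # assumed already learned

    for step in range(20):
        s_new = 'cheese-next' if random.random() < 0.2 else 'corridor'
        target = -1 + gamma * max(Q[s_new].values())
        Q['corridor']['right'] += alpha * (target - Q['corridor']['right'])
        print(step, round(Q['corridor']['right'], 3))   # watch it rise and fall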

In this example, it doesn’t seem like a big deal, because the mouse still learns to move towards the cheese and the Q-values have entered a kind of rhythmic limit cycle. There are a few problems that arise, however, when weighting the future too heavily in egocentric learning.

Problem: Death-wish

Basically, the mouse decides that certain death is a better option than wandering around in uncertainty forever, and starts choosing to jump off the cliffs. Let’s say that to encourage exploration, the reward for being in an unoccupied square is -1, and to discourage dying the reward for jumping off a cliff is -10. The problem is that jumping off the cliff is a constant punishment of -10: it’s always -10 and it will never be any worse than -10, because there is no further future past death to influence the value. When the mouse is just exploring, however, it will explore, and get -1, and then explore more, and get -1, etc. etc. This causes the expected value of actions that lead back into the same state to keep dropping. If gamma is too large, then without all that much exploring the value of continuing to explore drops below -10, and the mouse decides that exploring is worse than death, and jumps.
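Here are some toy numbers showing how quickly that can happen. The sketch below deliberately simplifies things down to a single value for ‘keep wandering’ that chases the target reward + gamma * max(Q); the parameter values are just for illustration:

    # why a large gamma makes wandering look worse than death: jumping is always
    # worth exactly -10, but the value of wandering chases -1 + gamma * max(Q),
    # which plummets once the Q-values go negative and gamma is large
    alpha, gamma = 0.2, 5.0
    q_explore = 0.0    # value of stepping into yet another empty square
    q_jump = -10.0     # value of jumping off the cliff: fixed, nothing comes after
    for step in range(30):
        target = -1 + gamma * q_explore   # wandering leads back into the same state
        q_explore += alpha * (target - q_explore)
        if q_explore < q_jump:
            print('after', step + 1, 'updates the mouse prefers the cliff')
            break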

By setting gamma = 5 (which is pretty high) we get this behavior very quickly. Here are some simulation results: the first column is the timestep, the second column (W) counts how many times the mouse found the cheese, the third column (L) counts how many times the mouse chose to jump off a cliff (as dictated by its Q-values), and the last column (A) counts how many times random exploration led the mouse to its accidental death:

10000, W: 0, L: 3138, A: 51
20000, W: 0, L: 3207, A: 40
30000, W: 0, L: 3204, A: 40
40000, W: 0, L: 3205, A: 45
50000, W: 0, L: 3208, A: 43
60000, W: 0, L: 3198, A: 49
70000, W: 0, L: 3200, A: 49
80000, W: 0, L: 3207, A: 46
90000, W: 0, L: 3204, A: 46
100000, W: 0, L: 3214, A: 38
110000, W: 0, L: 3211, A: 37
120000, W: 0, L: 3203, A: 44
130000, W: 0, L: 3199, A: 45
140000, W: 0, L: 3225, A: 28
150000, W: 0, L: 3211, A: 39
160000, W: 0, L: 3196, A: 45
170000, W: 0, L: 3208, A: 46
180000, W: 0, L: 3203, A: 47
190000, W: 0, L: 3209, A: 38
200000, W: 0, L: 3216, A: 38

So pretty immediately the mouse gets into a dark place, and starts choosing to jump, without ever having found the cheese even once.

Problem: Confusion

Another problem is that by weighting the future heavily the mouse can easily get confused about what was actually good, i.e. how it found the cheese and got the big reward in the first place. Once moving right in State 2 has become valuable, then whenever the mouse moves left and ends up back in State 2, there is a large max(Q(s')) term waiting for it (the value of moving right), and so the value of moving left gets pulled up too. At the same time, the value of moving right is dropping a little every time it moves right and doesn’t get rewarded (as described above). The same goes for moving up and down, so very quickly the Q-values can be flooded with high values and the optimal direction of movement is lost.
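A quick worked example of that pull-up effect, with made-up numbers: say moving right in State 2 is already valued at 6, and moving left is at -1. Moving left just lands the mouse back in State 2, but the lookahead term sees the value of moving right:

    # toy numbers for the 'confusion' effect: the lookahead term drags the value
    # of a useless action (left) up towards the value of the good one (right)
    alpha, gamma = 0.2, 0.8
    Q = {'state2': {'right': 6.0, 'left': -1.0}}
    # move left: reward -1, but we land back in state2 where max(Q) is 6.0
    target = -1 + gamma * max(Q['state2'].values())     # -1 + 0.8 * 6.0 = 3.8
    Q['state2']['left'] += alpha * (target - Q['state2']['left'])
    print(Q['state2']['left'])                          # -1.0 -> -0.04

So the value of moving left jumps up substantially even though moving left did nothing useful at all.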

We can see this case by setting gamma = .8. Now the mouse finds the cheese and learns to move right, but there are also these blips of confusion where it forgets how to find the cheese:

10000, W: 0, L: 4, A: 78
20000, W: 0, L: 0, A: 78
30000, W: 87, L: 0, A: 67
40000, W: 810, L: 0, A: 16
50000, W: 841, L: 0, A: 1
60000, W: 843, L: 0, A: 2
70000, W: 847, L: 0, A: 3
80000, W: 816, L: 0, A: 10
90000, W: 667, L: 0, A: 17
100000, W: 0, L: 0, A: 68
110000, W: 0, L: 0, A: 68
120000, W: 34, L: 0, A: 68
130000, W: 748, L: 0, A: 43
140000, W: 853, L: 0, A: 0
150000, W: 846, L: 0, A: 2
160000, W: 847, L: 0, A: 3
170000, W: 856, L: 0, A: 2
180000, W: 853, L: 0, A: 2
190000, W: 120, L: 0, A: 62
200000, W: 0, L: 0, A: 70

So it’s getting there, but then it gets confused and forgets how to find it, then it remembers, then it forgets…etc.

Solution: Have a healthy fear of death

If instead we set gamma = .05, the mouse has a much stronger repulsion to imminent death than to ennui for a much longer time, and will go around exploring things (this has much the same effect as making the penalty for dying much larger). As a result the mouse finds the cheese faster. As a bonus, we also don’t have the fits of forgetfulness seen in the previous example:

10000, W: 313, L: 4, A: 50
20000, W: 845, L: 0, A: 6
30000, W: 848, L: 0, A: 1
40000, W: 850, L: 0, A: 1
50000, W: 850, L: 0, A: 1
60000, W: 846, L: 0, A: 2
70000, W: 844, L: 0, A: 6
80000, W: 843, L: 0, A: 2
90000, W: 846, L: 0, A: 2
100000, W: 847, L: 0, A: 5
110000, W: 844, L: 0, A: 4
120000, W: 849, L: 0, A: 3
130000, W: 846, L: 0, A: 5
140000, W: 842, L: 0, A: 4
150000, W: 845, L: 0, A: 4
160000, W: 847, L: 0, A: 5
170000, W: 848, L: 0, A: 10
180000, W: 844, L: 0, A: 7
190000, W: 840, L: 0, A: 8
200000, W: 848, L: 0, A: 1

And so now we’ve developed a pretty good, stable egocentric RL mouse that learns to avoid jumping off cliffs and to find the cheese in this very simple map.

We can still, of course, drop the gamma value down to the extreme of 0, such that our egocentric learner has no knowledge of any future beyond the reward it gets for taking the next step. In that case it would know that going off a cliff is bad, but it wouldn’t know that moving to the right will eventually lead to cheese / a reward.

However, that can be OK! Warding off immediate danger and capitalizing on immediate reward is the perfect role for this kind of short-sighted system. Trying to make it learn how to run through a complicated maze without a lot of very specific landmarks is just not something that’s going to happen.

Conclusions

So, recapping, the egocentric system learns Q-values for states defined by the environment relative to the agent. This has some important consequences: it lets the agent learn very quickly not to move forward when there’s a cliff in front of it, and to go get any reward that’s in its immediate vicinity. Weighting the expected future reward too heavily in the Q-value update can lead to unstable behavior, so it’s important to have a good sense of how the agent’s behavior changes with different gamma values. These undesirable behaviors can be avoided by setting gamma = 0, but then the agent only learns the consequences of its immediate action, and is unable to look into the future at all to see how to bring about a reward or avoid getting trapped.

The egocentric learner has strengths where the allocentric learner fails, and the allocentric learner has strengths where the egocentric learner fails. The natural thought is to combine these two learners, and that will be the subject of the next RL post!
