Spatial Pooling

Spatial Pooling is a process that extracts semantic information from input and provides a controlled space in which to perform further operations. Additionally, input is converted into a sparse distributed representation (SDR), which provides further computational benefits (citation needed). Even though information is lost during this transformation, stability is gained and semantics are preserved through redundancy.

The neocortex is a homogeneous sheet of neurons, separated into individual processing units called "cortical columns". Each cortical column performs essentially the same computations and is divided into many layers of different types of neurons. Different layers perform different processes, and they can be wired to receive input from other locations in the brain, or sensory input.

Spatial Pooling is a process that occurs in at least one of these cortical layers, in every cortical column throughout the neocortex. A layer performing Spatial Pooling receives feed forward input into a population of neurons. This feed forward input may be sensory input or input from other cortical areas, and it drives neurons in the layer to activate.

Minicolumns

In cortical layers performing Spatial Pooling, there are structures called minicolumns. These structures group neurons together and force them to pay attention to the same subset of the input. The feed forward input space for a layer of cortex performing Spatial Pooling is the complete set of neurons that it may be connected to. This input space contains a massive amount of information, and each minicolumn receives a unique subset of it. We'll refer to this subset of the input as a minicolumn's potential pool.

Neurons simulated by Spatial Pooling can be either excitatory or inhibitory. Excitatory neurons activate to represent semantic information. Inhibitory neurons enforce minicolumn groupings for the Spatial Pooling process.

There may be thousands of minicolumn structures within a layer of a cortical column. Spatial Pooling is a competition between minicolumns to represent the information in the input space. As neuronal activations in the input space change, different minicolumns represent different input.

Input Space

Let's imagine a single scalar value changing over time. Based on previous examples of encodings, we might encode this value in different semantic ways. For example, the scalar value could be encoded separately from the time semantics, as visualized below.

These semantics can be combined into one encoding that spans the entire input space for a population of neurons performing Spatial Pooling.

Combined Encoding

Figure 1: Combined encoding.
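As an illustrative sketch (not necessarily the encoders behind Figure 1; all sizes and values here are hypothetical), a simple scalar encoder plus concatenation can build such a combined input space:

// A simple linear scalar encoder: place a block of on bits whose
// position within the output reflects the value's position in [min, max].
function encodeScalar(value, min, max, outputBits, onBits) {
	const encoding = new Array(outputBits).fill(0)
	const fraction = (value - min) / (max - min)
	const start = Math.floor(fraction * (outputBits - onBits))
	for (let i = start; i < start + onBits; i++) {
		encoding[i] = 1
	}
	return encoding
}

// Combine separate semantics by concatenating their encodings into one
// input space (here, a scalar value plus an hour of the day).
const combined = encodeScalar(42, 0, 100, 200, 20)
	.concat(encodeScalar(13, 0, 24, 100, 10))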

As you can see by toggling the individual encodings in Figure 1, many different semantics of information are being encoded into the input space. However, the Spatial Pooling operation has no knowledge of these semantics or of where the input comes from. Spatial Pooling uses the overlapping potential pools of different minicolumns to extract the semantics of the input without prior knowledge of its structure.

Potential Pools

Each minicolumn has a unique potential pool of connections to the input space. Its neurons will only ever connect to input cells that fall within this potential pool, so each minicolumn can potentially connect to only a percentage of the input space. In the diagram below, click on different minicolumns on the left to display their potential pools of connections on the right. As input passes through the input space, you can see how each minicolumn is restricted to seeing only a portion of the input information. Notice the green checkmarks and white x's in the input space: these indicate input that is observed and ignored by the selected minicolumn, respectively. As you decrease the potential percentage, you should notice that more input is ignored by each minicolumn.

Figure 2: Potential Pools.

Setting up minicolumn potential pools is not complicated. Upon initialization, each minicolumn's potential pool of connections is established using a simple random number generator: each cell in the input space either joins the pool or does not, with a fixed probability. The code below defines a minicolumn's potential pool as an array of indices of the input space to which it might connect.

Code Example 1: Establishing minicolumn potential pools.
function createPotentialPools(minicolumnCount, inputCount, connectedPercent) {
	let pools = []
	for (let i = 0; i < minicolumnCount; i++) {
		let pool = []
		for (let inputIndex = 0; inputIndex < inputCount; inputIndex++) {
			// Each input cell joins this minicolumn's potential
			// pool with probability connectedPercent.
			if (Math.random() < connectedPercent) {
				pool.push(inputIndex)
			}
		}
		pools.push(pool)
	}
	return pools
}
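
For example, the function above might be called with hypothetical sizes like these:

// 2048 minicolumns over a 400-bit input space, where each input cell
// has an 85% chance of joining a given minicolumn's potential pool.
const pools = createPotentialPools(2048, 400, 0.85)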

Permanences

The memory of all neural networks is stored in the connections between cells, called synapses. We model each synapse as a scalar permanence value. If a synapse's permanence breaches a connection threshold, the synapse is connected.

Within each minicolumn's potential pool, we must establish an initial permanence for each connection, representing the strength of the synapse. In the diagram below, connection permanences are displayed in a "heat map" where green is less connected and red is more connected.

Figure 3.1: Permanence values.

If a permanence breaches the connection threshold, we say that the connection is established, and the neuron is "connected" to the input cell.

Figure 3.2: Permanence values and connections.
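As a minimal sketch (with connectionThreshold as an assumed parameter, and perms as one minicolumn's array of permanence values), listing which of a minicolumn's potential synapses are currently connected might look like this:

// Returns indices (into the potential pool) of connected synapses,
// i.e. those whose permanence breaches the connection threshold.
function getConnectedIndices(perms, connectionThreshold) {
	const connected = []
	perms.forEach((perm, i) => {
		if (perm >= connectionThreshold) {
			connected.push(i)
		}
	})
	return connected
}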

In the diagrams shown above, connections are initially established in a normal distribution around a center point. For the initial permanences, the connection threshold should be near the distribution center. This ensures that synapses are primed to either connect or disconnect quickly when learning, ensuring more entropy in the initial state of the system.

There are many ways we might establish initial permanences, but the important thing is to establish most of the permanences close to the connection threshold. We will use a Bates distribution, which gives us a variable to change the intensity of the distribution curve. As you increase the number of "independent variables" in the Bates distribution, the peak of the curve becomes more prominent. You can see this in action by varying that value in Figure 3.3 below. (See also kurtosis.)

Initial permanence values are established once, when the Spatial Pooler is initialized. These values will only change if learning is enabled (more on this later). The logic below uses the D3JS randomBates function to establish the values.

Code Example 2: Establishing minicolumn initial permanence values using a Random Bates distribution.
// randomBates is assumed to be a pre-built generator, e.g.
// const randomBates = d3.randomBates(independentVariables)
function initializePermanences() {
	const pools = getPotentialPools()
	const allPerms = []
	// Permanence arrays match the potential pool
	pools.forEach(pool => {
		const perms = []
		pool.forEach(_ => {
			let perm = randomBates()
			// Permanence must be between 0 and 1
			if (perm > 1) perm = 1
			if (perm < 0) perm = 0
			perms.push(perm)
		})
		allPerms.push(perms)
	})
	return allPerms
}

This function returns an array of permanence values for each minicolumn. Each array defines how strongly that minicolumn is connected to the input space. These arrays do not cover the entire input space, only the minicolumn's potential pool. To find the input cell a permanence value describes, look up the same index in the minicolumn's potential pool.
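
Concretely, the permanence and potential pool arrays are parallel, so the permanence at index i of minicolumn c describes the synapse to input cell pools[c][i]:

// Hypothetical indices, reusing pools and allPerms from the examples above.
const c = 0  // a minicolumn
const i = 0  // a position within that minicolumn's potential pool
const inputCellIndex = pools[c][i]
const permanence = allPerms[c][i]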

Figure 3.3: Permanence values, connections, and permanence distributions.

Minicolumn Competition

For each input, minicolumns compete to represent the semantics of the input. They do this by comparing their overlap scores.

An overlap score denotes how strongly a minicolumn matches the current input. Pause the diagram below and inspect it by selecting minicolumns in the left chart. The redder minicolumns have more overlap with the current input value than the green minicolumns. As you click around the space, notice the overlap score changing. This is the number of connected synapses that overlap the on bits in the input space at this time step. You can verify this score is correct by counting the number of solid blue circles in the input space. Notice they are all on top of a grey input box, which represents an on bit. Connected synapses that do not overlap the current input (empty circles) are not counted in the overlap score.

Figure 4.1: Minicolumn competition.
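A sketch of the overlap computation described above, assuming input is the current binary input array (1 for on bits) and reusing one minicolumn's pool and perms arrays from earlier:

// Overlap score: the number of connected synapses whose input cell
// is on at this time step.
function getOverlapScore(pool, perms, input, connectionThreshold) {
	let overlap = 0
	perms.forEach((perm, i) => {
		if (perm >= connectionThreshold && input[pool[i]] === 1) {
			overlap++
		}
	})
	return overlap
}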

Here is another view of the minicolumns, ordered by their overlap scores. Those with higher overlap scores are at the left of the diagram. During the competition, minicolumns with the highest overlap scores should represent the input data. To choose the "winners" of the competition, we decide how many minicolumns we want to represent the data, and cut the stack at that point. In machine learning terms, this is called a k-winners-take-all operation. We can easily control the sparsity of this new representation by changing k; try changing it in the figure below.

Figure 4.2: Stack ranking of minicolumns by overlap score.
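A minimal sketch of the k-winners-take-all selection, assuming overlaps holds one overlap score per minicolumn:

// Choose the k minicolumns with the highest overlap scores.
function getWinningColumnIndices(overlaps, k) {
	return overlaps
		.map((overlap, index) => ({ overlap, index }))
		.sort((a, b) => b.overlap - a.overlap)
		.slice(0, k)
		.map(winner => winner.index)
}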
Figure 4.3: Minicolumn competition, with adjustable parameters: learning on/off, permanence increment, permanence decrement, connection threshold, connection distribution, center of distribution, and duty cycle period. Selecting a minicolumn displays its overlap score.
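
The learning parameters in Figure 4.3 hint at how learning works. In one common formulation (a sketch, with permanenceIncrement and permanenceDecrement as assumed parameters), a winning minicolumn strengthens synapses aligned with on bits and weakens the rest:

// Hebbian-style reinforcement for one winning minicolumn: strengthen
// synapses overlapping on bits, weaken the rest (clamped to [0, 1]).
function learn(pool, perms, input, permanenceIncrement, permanenceDecrement) {
	perms.forEach((perm, i) => {
		if (input[pool[i]] === 1) {
			perms[i] = Math.min(1, perm + permanenceIncrement)
		} else {
			perms[i] = Math.max(0, perm - permanenceDecrement)
		}
	})
}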

Duty Cycles
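
Both duty cycles shown below are moving averages maintained per minicolumn over a sliding window of recent time steps (the duty cycle period from Figure 4.3). A minimal sketch of such a moving average, assuming a per-minicolumn history array:

// Moving-average duty cycle over the last dutyCyclePeriod time steps.
// `history` holds one 0/1 flag per step: whether the minicolumn was
// active (active duty cycle) or had sufficient overlap (overlap duty cycle).
function updateDutyCycle(history, flag, dutyCyclePeriod) {
	history.push(flag ? 1 : 0)
	if (history.length > dutyCyclePeriod) {
		history.shift()
	}
	return history.reduce((sum, f) => sum + f, 0) / history.length
}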

Active Duty Cycles

Figure 5: Active duty cycles.

Overlap Duty Cycles

Figure 6: Overlap duty cycles.