A blog of language, neuroscience, and deep learning

Do layers of a network of neurons reduce neuron-wise entropy?

- Higher entropy means that the probability mass is spread more evenly across the possibilities
- The entropy of a binary choice between **a** and **b** is given by E(a, b) = −P(a)·log₂ P(a) − P(b)·log₂ P(b)
- The table below shows the **entropy** of some situations where there are two possibilities, **a** and **b**
- When we are more confident in one possibility, the entropy is lower

| P(a) | P(b) | E(a, b) |
|---|---|---|
| 50% | 50% | 1.00 |
| 75% | 25% | 0.81 |
| 25% | 75% | 0.81 |
| 10% | 90% | 0.47 |
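The values in the table can be reproduced with a few lines of Python (a minimal sketch; the function name `binary_entropy` is just a convenient label, not something from the blog's simulation):

```python
import math

def binary_entropy(p):
    """Entropy (in bits) of a binary variable with P(a) = p, P(b) = 1 - p."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no entropy
    q = 1.0 - p
    return -(p * math.log2(p) + q * math.log2(q))

for p in (0.50, 0.75, 0.25, 0.10):
    print(f"P(a) = {p:.2f}  ->  E = {binary_entropy(p):.2f}")
# P(a) = 0.50  ->  E = 1.00
# P(a) = 0.75  ->  E = 0.81
# P(a) = 0.25  ->  E = 0.81
# P(a) = 0.10  ->  E = 0.47
```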

- Let’s consider the network below (click on each neuron to see the network’s response):
- The excitatory connections from S to A and S to B have a 75% chance of making A or B spike when S spikes
- The excitatory connections from A to C and B to D have a 100% chance of making C spike when A spikes and making D spike when B spikes
- The inhibitory connections from A to D and B to C have a 33% chance of stopping D from spiking when A spikes or stopping C from spiking when B spikes (if they would have spiked)
- The excitatory connections from C to O and D to O make O spike whenever C or D spikes
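One trial of this network can be sketched in Python as follows (an illustrative sketch using the connection probabilities listed above, not the blog's actual interactive simulation; here S is assumed to spike on every trial):

```python
import random

def run_trial(rng):
    """One pass through the network; S always spikes in this sketch."""
    s = True
    # S -> A and S -> B: excitatory, 75% chance of causing a spike
    a = s and rng.random() < 0.75
    b = s and rng.random() < 0.75
    # A -> C and B -> D: excitatory, 100% reliable.
    # B -> C and A -> D: inhibitory, 33% chance of blocking the spike.
    c = a and not (b and rng.random() < 1/3)
    d = b and not (a and rng.random() < 1/3)
    # C -> O and D -> O: O spikes whenever C or D spikes
    o = c or d
    return {"S": s, "A": a, "B": b, "C": c, "D": d, "O": o}

print(run_trial(random.Random(42)))
```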

- Run the simulation to collect data, and compute experimentally the entropy of each neuron:

| Neuron | Count | Entropy | Expected |
|---|---|---|---|
| S | 0 | 1 | 0.00 |
| A | 0 | 1 | 0.81 |
| B | 0 | 1 | 0.81 |
| C | 0 | 1 | 1.00 |
| D | 0 | 1 | 1.00 |
| O | 0 | 1 | 0.54 |
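The expected values can be checked offline with a self-contained Monte Carlo sketch (again illustrative, assuming S spikes on every trial and using the connection probabilities from the description above):

```python
import math
import random

def entropy(p):
    """Binary entropy in bits for a neuron that spikes with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

rng = random.Random(0)
trials = 100_000
counts = {name: 0 for name in "SABCDO"}

for _ in range(trials):
    s = True
    a = s and rng.random() < 0.75            # S -> A, 75% excitatory
    b = s and rng.random() < 0.75            # S -> B, 75% excitatory
    c = a and not (b and rng.random() < 1/3) # A -> C reliable, B -| C 33% inhibition
    d = b and not (a and rng.random() < 1/3) # B -> D reliable, A -| D 33% inhibition
    o = c or d                               # O spikes whenever C or D spikes
    for name, spiked in zip("SABCDO", (s, a, b, c, d, o)):
        counts[name] += spiked

for name in "SABCDO":
    p = counts[name] / trials
    print(f"{name}: count={counts[name]:6d}  entropy={entropy(p):.2f}")
```

With enough trials, S comes out at 0.00 (it always spikes), A and B near 0.81, C and D near 1.00, and O near 0.54, matching the expected column.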

- The entropy of C and D is *larger* than the entropy of A, B, and O
- Additionally, the entropy of O is lower than the entropy of A and B

- Even though we *know* that the stimulus is causing the neurons to activate in a characteristic way, if we just looked at the mutual information between neuron C or D and the stimulus, we would conclude that there is none: since S spikes on every trial, its entropy is zero, and mutual information can never exceed the entropy of either variable
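This can be verified numerically. A minimal sketch of a plug-in mutual-information estimate over paired spike trains (the 0.56 spike rate for C is an assumption, roughly its rate in this network; the function name is hypothetical):

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X; Y) in bits from two paired lists of spike booleans."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), with the 1/n factors folded in
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi

# S always spikes, so H(S) = 0 and I(S; C) is exactly 0,
# no matter how strongly S drives C:
rng = random.Random(0)
S = [True] * 10_000
C = [rng.random() < 0.56 for _ in S]  # 0.56 ~ C's spike rate (assumed)
print(f"I(S; C) = {mutual_information(S, C):.3f}")  # I(S; C) = 0.000
```

A variable with zero entropy has nothing to share, which is exactly why mutual information with a constant stimulus is blind to the causal structure we built into the network.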