Abstract
In this article we introduce a novel stochastic Hebb-like learning rule for neural networks that is neurobiologically motivated. This learning rule combines features of unsupervised (Hebbian) and supervised (reinforcement) learning and is stochastic with respect to the selection of the time points at which a synapse is modified. Moreover, the learning rule affects not only the synapse between the pre- and postsynaptic neuron, which is called homosynaptic plasticity, but also further remote synapses of the pre- and postsynaptic neurons. This more complex form of synaptic plasticity has recently come under investigation in neurobiology and is called heterosynaptic plasticity. We demonstrate that this learning rule is useful for training neural networks by learning parity functions, including the exclusive-or (XOR) mapping, in a multilayer feed-forward network. We find that our stochastic learning rule works well, even in the presence of noise. Importantly, the mean learning time increases polynomially with the number of patterns to be learned, indicating efficient learning.
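The abstract does not spell out the update rule itself. The minimal Python sketch below only illustrates the ingredients named above: a Hebb-like weight change modulated by a global reinforcement signal, a stochastic choice of when a synapse is updated, and an additional heterosynaptic change applied to the other synapses of the same postsynaptic neuron. The network size, parameter values, and the specific form of the update are assumptions made for illustration, not the rule defined in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-2-1 feed-forward network for the XOR task (sizes assumed).
W_hidden = rng.normal(0.0, 0.5, size=(2, 2))   # input -> hidden weights
W_out = rng.normal(0.0, 0.5, size=(1, 2))      # hidden -> output weights

def step(x):
    return (x > 0).astype(float)

def forward(x):
    h = step(W_hidden @ x)
    y = step(W_out @ h)
    return h, y

def stochastic_hebb_update(x, target, eta=0.1, hetero=0.3, p_update=0.5):
    """Hypothetical stochastic, reinforcement-modulated Hebb-like update.

    With probability p_update a synapse is modified at this time step; the sign
    of the change is set by a global reward signal r (+1 for a correct output,
    -1 otherwise), and a scaled 'heterosynaptic' change is applied to the other
    synapses of the same postsynaptic neuron.
    """
    global W_out
    h, y = forward(x)
    r = 1.0 if np.allclose(y, target) else -1.0
    for i in range(W_out.shape[0]):          # postsynaptic (output) neurons
        for j in range(W_out.shape[1]):      # presynaptic (hidden) neurons
            if rng.random() < p_update:      # stochastic choice of update time
                dw = eta * r * h[j] * y[i]   # homosynaptic Hebb-like term
                W_out[i, j] += dw
                # heterosynaptic term: smaller change to the neuron's other synapses
                others = [k for k in range(W_out.shape[1]) if k != j]
                W_out[i, others] += hetero * dw

# Example: present the four XOR patterns once.
patterns = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
for x, t in patterns:
    stochastic_hebb_update(np.array(x, float), np.array(t, float))
```

The sketch only updates the output layer and makes no claim about convergence; it is meant to show the structure of a homosynaptic plus heterosynaptic update under a reward signal, not to reproduce the learning-time results reported in the article.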
| Original language | English |
| --- | --- |
| Pages (from-to) | 1501-1520 |
| Number of pages | 20 |
| Journal | International Journal of Modern Physics C |
| Volume | 17 |
| Issue number | 10 |
| DOIs | |
| Publication status | Published - Oct 2006 |
| Externally published | Yes |
| Publication type | A1 Journal article-refereed |
Keywords
- Biological reinforcement learning
- Hebb-like learning
- Heterosynaptic plasticity
- Neural networks
ASJC Scopus subject areas
- Computational Theory and Mathematics
- Computer Science Applications
- Mathematical Physics
- General Physics and Astronomy
- Statistical and Nonlinear Physics