Devlog #11: Hebbian learning and no gradient descent

Published:

A sensor neuron has a receptive field and some dendrites that come back down from the higher neurons. When something appears in the receptive field, the neuron starts firing at a high rate. The signal is modified as it crosses the layers above, and then some blocks try to find a match for it in memory. If no plasticity is needed, an inhibitory signal comes back down to the sensor, and with this slow-down command the firing rate drops. The signal then stops going to memory, so no extra plasticity is activated over and over.
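Here is a minimal sketch of that feedback loop in Python. All the names (Sensor, memory_match, transform_through_layers) and the rate values are illustrative assumptions, not the project's actual code.

```python
# Sketch of the feedback-inhibition loop described above.
# Names and numbers are illustrative, not the project's real API.

HIGH_RATE = 100.0   # firing rate when a stimulus appears (assumed value)
LOW_RATE = 5.0      # rate after the inhibitory "slow down" command (assumed)


def transform_through_layers(stimulus):
    # Placeholder for the layer-by-layer signal modification.
    return stimulus


def memory_match(signal, memory):
    # Placeholder: is the incoming signal already stored in memory?
    return signal in memory


class Sensor:
    def __init__(self):
        self.rate = 0.0

    def on_stimulus(self, stimulus, memory):
        # Stimulus in the receptive field -> fire at a high rate.
        self.rate = HIGH_RATE
        signal = transform_through_layers(stimulus)

        if memory_match(signal, memory):
            # Known pattern: inhibitory feedback slows the sensor,
            # and the signal is not forwarded to memory, so no
            # extra plasticity is triggered.
            self.rate = LOW_RATE
            return None
        # Unknown pattern: keep firing and forward it for storage.
        return signal
```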


Hebbian learning and gradient descent are two distinct mechanisms used in neural networks. Here's an overview of how they differ:

Hebbian learning is a biologically inspired rule that suggests that synaptic connections between neurons are strengthened when those neurons are activated simultaneously. It is associated with processes such as long-term potentiation (LTP) and long-term depression (LTD), which are believed to be involved in memory formation and synaptic modification.
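As a concrete illustration, a minimal Hebbian update can be written as dW = lr * post * pre^T: connections grow when pre- and post-synaptic neurons are active together. The learning rate and vector shapes below are assumptions for the example, not values from this project.

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01):
    """Strengthen connections between co-active neurons: dW = lr * outer(post, pre)."""
    return weights + lr * np.outer(post, pre)

# Illustrative usage: pre- and post-synaptic activity vectors.
pre = np.array([1.0, 0.0, 1.0])          # presynaptic activations
post = np.array([0.5, 1.0])              # postsynaptic activations
W = np.zeros((post.size, pre.size))      # weight matrix (post x pre)
W = hebbian_update(W, pre, post)
```

Note that the rule is purely local: each weight changes based only on the activity of the two neurons it connects, with no global error signal.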

Gradient descent is an optimization algorithm used to train neural networks by iteratively adjusting the parameters to minimize a specific cost or loss function. It is employed to find the optimal set of parameters that result in the best network performance.
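For contrast, a bare-bones gradient descent loop on a mean-squared-error loss might look like the sketch below. The linear model, data, learning rate, and step count are made up for illustration.

```python
import numpy as np

def gradient_descent(X, y, lr=0.1, steps=100):
    """Fit a linear model by repeatedly stepping against the loss gradient."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        pred = X @ w
        grad = 2 * X.T @ (pred - y) / len(y)   # gradient of mean squared error
        w -= lr * grad                          # move opposite the gradient
    return w

# Illustrative usage on a tiny synthetic dataset.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w = gradient_descent(X, y)
```

Unlike the Hebbian rule, every weight update here depends on a global loss computed over the whole output, which is what makes gradient descent an optimization procedure rather than a local plasticity rule.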
