In the most general definition, a GL network consists of a countable number of elements (idealized neurons) that interact by sporadic, nearly instantaneous discrete events (spikes or firings). At each moment, each neuron N fires independently, with a probability that depends on the firing history of all neurons since N itself last fired. Thus each neuron "forgets" all previous spikes, including its own, whenever it fires. This memory-reset property is a defining feature of the GL model.
In specific versions of the GL model, the past network spike history since the last firing of a neuron N may be summarized by an internal variable, the potential of that neuron, that is a weighted sum of those spikes. The potential may include the spikes of only a finite subset of other neurons, thus modeling arbitrary synapse topologies. In particular, the GL model includes as a special case the general leaky integrate-and-fire neuron model.
The GL model has been formalized in several different ways. The notations below are borrowed from several of those sources.
The GL network model consists of a countable set of neurons with some set of indices $I$. The state is defined only at discrete sampling times, represented by integers, with some fixed time step $\Delta > 0$. For simplicity, one assumes that these times extend to infinity in both directions, implying that the network has existed forever.
In the GL model, all neurons are assumed to evolve synchronously and atomically between successive sampling times. In particular, within each time step, each neuron may fire at most once. A Boolean variable $X_i[t]$ denotes whether neuron $i \in I$ fired ($X_i[t] = 1$) or not ($X_i[t] = 0$) between sampling times $t$ and $t+1$.
Let $X[t'{:}t]$ denote the matrix whose rows are the histories of all neuron firings from time $t'$ to time $t$ inclusive, that is

$$X[t'{:}t] \;=\; \bigl(X_i[s]\bigr)_{i \in I,\; t' \le s \le t}$$

and let $X[-\infty{:}t]$ be defined similarly, but extending infinitely in the past. Let $\tau_i[t]$ be the time of the last firing of neuron $i$ before time $t$, that is

$$\tau_i[t] \;=\; \max\{\, s < t \;:\; X_i[s] = 1 \,\}$$
Then the general GL model says that

$$\operatorname{Prob}\bigl(X_i[t] = 1 \,\big|\, X[-\infty{:}t-1]\bigr) \;=\; \Phi_i\bigl(X[\tau_i[t]{:}t-1]\bigr)$$

where $\Phi_i$ is a function, specific to neuron $i$, of the firing history of the whole network since that neuron's own last firing.
Moreover, the firings in the same time step are conditionally independent, given the past network history, with the above probabilities. That is, for each finite subset $K \subseteq I$ and any configuration $a_i \in \{0,1\}$, $i \in K$, we have

$$\operatorname{Prob}\Bigl(\bigwedge_{i \in K} X_i[t] = a_i \,\Big|\, X[-\infty{:}t-1]\Bigr) \;=\; \prod_{i \in K} \operatorname{Prob}\bigl(X_i[t] = a_i \,\big|\, X[-\infty{:}t-1]\bigr)$$
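For example, for two neurons $1, 2 \in I$, the probability that neuron 1 fires while neuron 2 stays silent in the same step factorizes, by the two displayed formulas, as

$$\operatorname{Prob}\bigl(X_1[t]=1 \wedge X_2[t]=0 \,\big|\, X[-\infty{:}t-1]\bigr) \;=\; \Phi_1\bigl(X[\tau_1[t]{:}t-1]\bigr)\,\Bigl(1 - \Phi_2\bigl(X[\tau_2[t]{:}t-1]\bigr)\Bigr)$$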
In a common special case of the GL model, the part of the past firing history that is relevant to each neuron $i$ at each sampling time $t$ is summarized by a real-valued internal state variable or potential $V_i[t]$ (that corresponds to the membrane potential of a biological neuron), which is basically a weighted sum of the past spike indicators since the last firing of neuron $i$. That is,

$$V_i[t] \;=\; \biggl(\sum_{j \in I} w_{j \to i} \sum_{s = \tau_i[t]}^{t-1} X_j[s]\;\alpha_i\bigl[\,s - \tau_i[t],\; t - 1 - s\,\bigr]\biggr) \;+\; E_i[t]$$
In this formula, $w_{j \to i}$ is a numeric weight that corresponds to the total weight or strength of the synapses from the axon of neuron $j$ to the dendrites of neuron $i$. The term $E_i[t]$, the external input, represents some additional contribution to the potential that may arrive between times $\tau_i[t]$ and $t$ from sources other than the firings of other neurons. The factor $\alpha_i[r,\,s]$ is a history weight function that modulates the contributions of firings that happened $r$ whole steps after the last firing of neuron $i$ and $s$ whole steps before the current time.
Then one defines

$$\operatorname{Prob}\bigl(X_i[t] = 1 \,\big|\, X[-\infty{:}t-1]\bigr) \;=\; \phi_i\bigl(V_i[t]\bigr)$$

where $\phi_i$ is a monotonically non-decreasing function from $\mathbb{R}$ into the interval $[0,1]$.
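As a concrete illustration, the following Python sketch simulates one synchronous step of this potential-based variant for a finite network. It is a minimal sketch, not code from the cited sources: the logistic choice of `phi`, the function and variable names, and the bookkeeping conventions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(v):
    # Illustrative firing function: logistic, monotonically
    # non-decreasing from R into [0, 1] (any such function would do).
    return 1.0 / (1.0 + np.exp(-v))

def gl_step(X_hist, tau, W, E, alpha):
    """One synchronous step of the potential-based GL variant.

    X_hist : list of length-n 0/1 arrays; X_hist[s][j] = X_j[s]
    tau    : length-n int array; tau[i] = time of the last firing of neuron i
    W      : n-by-n weight matrix; W[j, i] = w_{j->i}
    E      : length-n array of external inputs for this step
    alpha  : history weight function alpha(r, s)
    """
    t = len(X_hist)
    n = len(tau)
    V = np.asarray(E, dtype=float).copy()
    for i in range(n):
        # Sum the spikes since neuron i last fired, weighted by alpha.
        for s in range(tau[i], t):
            V[i] += alpha(s - tau[i], t - 1 - s) * W[:, i].dot(X_hist[s])
    # Given the past, firings in the same step are independent Bernoulli draws.
    X_new = (rng.random(n) < phi(V)).astype(int)
    tau = np.where(X_new == 1, t, tau)   # firing erases the neuron's memory
    X_hist.append(X_new)
    return X_hist, tau
```

Starting, for example, from `X_hist = [np.ones(n, dtype=int)]` and `tau = np.zeros(n, dtype=int)`, repeated calls simulate the network; a practical implementation would also discard history rows older than `min(tau)`, since spikes before a neuron's last firing can never matter again.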
If the synaptic weight $w_{j \to i}$ is negative, each firing of neuron $j$ causes the potential $V_i$ to decrease. This is the way inhibitory synapses are approximated in the GL model. The absence of a synapse between neurons $j$ and $i$ is modeled by setting $w_{j \to i} = 0$.
Leaky integrate-and-fire variants
In an even more specific case of the GL model, the potential $V_i$ is defined to be a decaying weighted sum of the firings of other neurons. Namely, when a neuron $i$ fires, its potential is reset to zero. Until its next firing, a spike from any neuron $j$ increments $V_i$ by the constant amount $w_{j \to i}$. Apart from those contributions, during each time step the potential decays by a fixed recharge factor $\mu$ towards zero.
In this variant, the evolution of the potential can be expressed by the recurrence formula

$$V_i[t+1] \;=\; \begin{cases} \;E_i[t] + \displaystyle\sum_{j \in I} w_{j \to i}\,X_j[t] & \text{if } X_i[t] = 1\\[2ex] \;\mu\,V_i[t] + E_i[t] + \displaystyle\sum_{j \in I} w_{j \to i}\,X_j[t] & \text{if } X_i[t] = 0 \end{cases}$$

Or, more compactly,

$$V_i[t+1] \;=\; \mu\,V_i[t]\,\bigl(1 - X_i[t]\bigr) \;+\; E_i[t] \;+\; \sum_{j \in I} w_{j \to i}\,X_j[t]$$
This special case results from taking the history weight factor of the general potential-based variant to be $\alpha_i[r,\,s] = \mu^{s}$. It is very similar to the leaky integrate-and-fire model.
If, between times $t$ and $t+1$, neuron $i$ fires (that is, $X_i[t] = 1$), no other neuron fires ($X_j[t] = 0$ for all $j \neq i$), and there is no external input ($E_i[t] = 0$), then $V_i[t+1]$ will be $w_{i \to i}$. This self-weight therefore represents the reset potential that the neuron assumes just after firing, apart from other contributions. The potential evolution formula can therefore also be written as

$$V_i[t+1] \;=\; \begin{cases} \;V_R + E_i[t] + \displaystyle\sum_{j \neq i} w_{j \to i}\,X_j[t] & \text{if } X_i[t] = 1\\[2ex] \;\mu\,V_i[t] + E_i[t] + \displaystyle\sum_{j \neq i} w_{j \to i}\,X_j[t] & \text{if } X_i[t] = 0 \end{cases}$$

where $V_R = w_{i \to i}$ is the reset potential. Or, more compactly,

$$V_i[t+1] \;=\; \mu\,V_i[t]\,\bigl(1 - X_i[t]\bigr) \;+\; V_R\,X_i[t] \;+\; E_i[t] \;+\; \sum_{j \neq i} w_{j \to i}\,X_j[t]$$
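The compact form with reset potential translates directly into a vectorized update. The following is a minimal Python sketch under assumed names (`W_ext` for the zero-diagonal weight matrix, `phi` for the firing function, `rng` for a NumPy random generator); it is an illustration, not code from the sources.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_gl_step(V, X, W_ext, V_R, mu, E, phi):
    """One step of the leaky integrate-and-fire GL variant (compact form above).

    V     : length-n array of potentials V_i[t]
    X     : length-n 0/1 array of firings X_i[t]
    W_ext : n-by-n matrix with W_ext[j, i] = w_{j->i} for j != i, zero diagonal
    V_R   : reset potential (the self-weight w_{i->i})
    mu    : recharge factor
    E     : length-n array of external inputs E_i[t]
    phi   : firing function, maps potentials to probabilities in [0, 1]
    """
    # Decay if the neuron did not fire, reset if it did, then add all inputs.
    V_next = mu * V * (1 - X) + V_R * X + E + W_ext.T @ X
    # Sample the next firings: Prob(X_i[t+1] = 1 | past) = phi(V_i[t+1]).
    X_next = (rng.random(V.size) < phi(V_next)).astype(int)
    return V_next, X_next
```

Iterating this map from any initial pair `(V, X)` simulates the network, one synchronous time step per call.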
These formulas imply that the potential decays towards zero with time, when there are no external or synaptic inputs and the neuron itself does not fire. By contrast, under these conditions the membrane potential of a biological neuron tends towards some negative value, the resting or baseline potential $V_B$, on the order of −40 to −80 millivolts.
However, this apparent discrepancy exists only because it is customary in neurobiology to measure electric potentials relative to that of the extracellular medium. That discrepancy disappears if one chooses the baseline potential of the neuron as the reference for potential measurements. Since the potential has no influence outside of the neuron, its zero level can be chosen independently for each neuron.
Variant with refractory period
Some authors use a slightly different refractory variant of the integrate-and-fire GL neuron, which ignores all external and synaptic inputs (except possibly the self-synapse $w_{i \to i}$) during the time step immediately after its own firing. The equation for this variant is

$$V_i[t+1] \;=\; \begin{cases} \;V_R & \text{if } X_i[t] = 1\\[1ex] \;\mu\,V_i[t] + E_i[t] + \displaystyle\sum_{j \neq i} w_{j \to i}\,X_j[t] & \text{if } X_i[t] = 0 \end{cases}$$

or, more compactly,

$$V_i[t+1] \;=\; \mu\,V_i[t]\,\bigl(1 - X_i[t]\bigr) \;+\; V_R\,X_i[t] \;+\; \Bigl(E_i[t] + \sum_{j \neq i} w_{j \to i}\,X_j[t]\Bigr)\bigl(1 - X_i[t]\bigr)$$
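Reusing the assumed names (and `rng`) of the previous sketch, only the update line changes: the external and synaptic terms are gated by the factor $(1 - X_i[t])$.

```python
def refractory_lif_gl_step(V, X, W_ext, V_R, mu, E, phi):
    """Refractory variant: all inputs are ignored in the step right after a firing."""
    V_next = mu * V * (1 - X) + V_R * X + (E + W_ext.T @ X) * (1 - X)
    X_next = (rng.random(V.size) < phi(V_next)).astype(int)
    return V_next, X_next
```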
Even more specific sub-variants of the integrate-and-fire GL neuron are obtained by setting the recharge factor $\mu$ to zero. In the resulting neuron model, the potential (and hence the firing probability) depends only on the inputs of the previous time step; all earlier firings of the network, including those of the same neuron, are ignored. That is, the neuron has no internal state and is essentially a (stochastic) function block.
The evolution equations then simplify to

$$V_i[t+1] \;=\; V_R\,X_i[t] \;+\; E_i[t] \;+\; \sum_{j \neq i} w_{j \to i}\,X_j[t]$$

for the variant without refractory step, and

$$V_i[t+1] \;=\; V_R\,X_i[t] \;+\; \Bigl(E_i[t] + \sum_{j \neq i} w_{j \to i}\,X_j[t]\Bigr)\bigl(1 - X_i[t]\bigr)$$

for the variant with refractory step.
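With $\mu = 0$ no stored potential is needed at all; a short sketch under the same assumed names makes the "stochastic function block" character explicit, since the next firings depend only on the previous step's firing vector:

```python
def memoryless_gl_step(X, W_ext, V_R, E, phi):
    """mu = 0, no refractory step: the firing probability is a function of X[t] alone."""
    V_next = V_R * X + E + W_ext.T @ X
    return (rng.random(X.size) < phi(V_next)).astype(int)
```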
In these sub-variants, while the individual neurons do not store any information from one step to the next, the network as a whole can still have persistent memory, because of the implicit one-step delay between the synaptic inputs and the resulting firing of the neuron. In other words, the state of a network with $n$ neurons is a list of $n$ bits, namely the value of $X_i[t]$ for each neuron $i$, which can be assumed to be stored in its axon in the form of a traveling depolarization zone.
The GL model was defined in 2013 by mathematicians Antonio Galves and Eva Löcherbach. Its inspirations included Frank Spitzer's interacting particle systems and Jorma Rissanen's notion of stochastic chains with memory of variable length. Another influence was Bruno Cessac's study of the leaky integrate-and-fire model, which was itself influenced by Hédi Soula. Galves and Löcherbach referred to the process that Cessac described as "a version in a finite dimension" of their own probabilistic model.
Prior integrate-and-fire models with stochastic characteristics relied on adding a noise term to simulate stochasticity. The Galves–Löcherbach model distinguishes itself because it is inherently stochastic, incorporating probabilistic measures directly into the calculation of spikes. It is also a model that may be applied relatively easily, from a computational standpoint, with a good ratio between cost and efficiency. It remains a non-Markovian model, since the probability of a given neuronal spike depends on the accumulated activity of the system since the last spike.
Further contributions to the model include studies of the hydrodynamic limit of the interacting neuronal system, of its long-range behavior, of the prediction and classification of behaviors as a function of the model's parameters, and of the generalization of the model to continuous time.
- A. Galves, E. Löcherbach (2013): "Infinite Systems of Interacting Chains with Memory of Variable Length — A Stochastic Model for Biological Neural Nets". Journal of Statistical Physics, vol. 151, no. 5, pp. 896–921.
- F. Baccelli, T. Taillefumier (2019): "Replica-mean-field limits for intensity-based neural networks". arXiv preprint arXiv:1902.03504 [math.DS].
- L. Brochini, A. de Andrade Costa, M. Abadi, A. C. Roque, J. Stolfi, O. Kinouchi (2016): "Phase transitions and self-organized criticality in networks of stochastic spiking neurons". Scientific Reports, vol. 6, article 35831. doi:10.1038/srep35831
- B. Cessac (2011): "A discrete time neural network model with spiking neurons: II: Dynamics with noise". Journal of Mathematical Biology, vol. 62, no. 6, pp. 863–900.
- H. E. Plesser, W. Gerstner (2000): "Noise in Integrate-and-Fire Neurons: From Stochastic Input to Escape Rates". Neural Computation, vol. 12, no. 2, pp. 367–384.
- A. De Masi, A. Galves, E. Löcherbach, E. Presutti (2015): "Hydrodynamic limit for interacting neurons". Journal of Statistical Physics, vol. 158, no. 4, pp. 866–902.
- A. Duarte, G. Ost (2014): "A model for neural activity in the absence of external stimuli". arXiv preprint arXiv:1410.6086.
- N. Fournier, E. Löcherbach (2014): "On a toy model of interacting neurons". arXiv preprint arXiv:1410.3263.
- K. Yaginuma (2015): "A stochastic system with infinite interacting components to model the time evolution of the membrane potentials of a population of neurons". arXiv preprint arXiv:1505.00045.
- F. Teixeira Ribeiro (2014): "Modelos matemáticos do cérebro" [Mathematical models of the brain]. Mente e Cérebro, June 2014.