## Abstract

Dynamical criticality has been shown to enhance information processing in dynamical systems, and there is evidence for self-organized criticality in neural networks. A plausible mechanism for such self-organization is activity-dependent synaptic plasticity. Here, we model neurons as discrete-state nodes on an adaptive network following stochastic dynamics. At a threshold connectivity, this system undergoes a dynamical phase transition at which persistent activity sets in. In a low-dimensional representation of the macroscopic dynamics, this corresponds to a transcritical bifurcation. We show analytically that adding activity-dependent rewiring rules, inspired by homeostatic plasticity, leads to the emergence of an attractive steady state at criticality and present numerical evidence for the system's evolution to such a state.

## 1. Introduction

Information-processing systems often exhibit optimal computational capabilities when their parameters are tuned to *critical states* associated with phase transitions [1–4]. It is therefore plausible that our brains operate at criticality [5]. Although still hotly debated in neuroscience, the hypothesis of neural criticality is supported by recent experiments. Power-law distributions indicative of critical behaviour were observed in slices of rat cortex [6–8] as well as in electroencephalography [9–11], functional magnetic resonance imaging [12] and electrocorticography [11] measurements in humans.

In the light of the experimental corroboration of neural criticality, it is interesting to ask how a biological system can robustly self-tune its parameters to a critical state. A plausible answer is found in the study of adaptive networks [13,14], a class of models in which the dynamics on a network coevolve with the network structure. In 1998, it was noted that adaptive networks with slowly evolving topology can self-organize to a state where the dynamics on the network are critical [15]. Adaptive self-organized criticality (aSOC) was subsequently demonstrated conclusively in a simple Boolean network model [16] and then studied in detail in neural models [17–23].

We note that insights into aSOC may not only advance our understanding of natural neural networks, but also reveal an important design principle for electronic computers: within the next 10 years, current manufacturing processes will hit fundamental boundaries [24]. Continued progress will require using nanoscale components (e.g. nanotubes, nanowires, biomolecules) that cannot be positioned precisely with present-day techniques for manufacturing large-scale integrated systems. It is thus conceivable that future computers may consist of active (nano-)elements that are deposited randomly and then left to self-tune to a critical state, where meaningful information processing is possible. A promising mechanism for such self-tuning is provided by aSOC. Importantly, this mechanism does not require the rewiring of physical interconnections, but can be achieved by local changes of conductivity between elements [22], which have recently been demonstrated experimentally [25].

Despite the prominent role of aSOC for information-processing systems in both biology and technology, our understanding of the phenomenon is still limited. In many aSOC models, the critical state is identified by showing that certain quantities follow power-law distributions. However, power laws can appear as a result of other mechanisms besides criticality. It is thus desirable to make the corresponding dynamical phase transition directly accessible to analytical investigation.

The aim of this paper is to develop a better understanding of aSOC by means of a simple conceptual model. We start in §2 with a brief review of the biological background. In §3, we introduce a simple neuron model. In §4, we use a moment closure approximation [26] for deriving a low-dimensional system of ordinary differential equations (ODEs) that capture the emergent dynamics. By analysing the bifurcation structure of these ODEs, we show in §5 that the model exhibits a non-equilibrium phase transition. In §6, we discuss the conditions under which the chosen topological update rules drive the system towards the phase transition. Finally, in §7, we show analytically and numerically that the model indeed exhibits aSOC.

## 2. Biological background

In biology, transmission of information between neurons occurs via cell contacts known as synapses. Up to the order of 10^{5} such connections (with different partners) can exist on a single neuron. Synapses allow a pre-synaptic neuron to depolarize a post-synaptic neuron, similar to polar devices that transmit current only in one direction. The topology of interconnections can thus be captured by a directed network, in which the nodes correspond to neurons and the directed links correspond to synapses.

On a short time scale, neurons encode information in an electric potential across their cell membrane. In the absence of inputs, a neuron approaches a *resting state* with a characteristic membrane voltage. Depolarization of the membrane owing to input from other neurons can lead to a *firing state*, in which active mechanisms are used to emit a strong voltage pulse, which in turn excites connected neurons. After firing, the neuron enters a *refractory state* in which no excitation is possible before it finally returns to the resting state.

On a longer time scale, the strength of synapses changes depending on the activity of the connected neurons. Thus, from an engineering perspective the synapse is a memristive element [27]. In biological terms the processes affecting synaptic strength are collectively known as synaptic plasticity. Here, we specifically consider *homeostatic synaptic plasticity* (HSP) [28,29], which decreases the strength of synapses if the activity of a neuron is high, and increases the strength if the activity is low.

## 3. Discrete neural model

In this paper, we consider a directed network of *N* nodes. At any time, a given node is in one of three states: resting (inactive, *I*), firing (*F*) or refractory (*R*). We start from a random network in which the majority of the nodes are in the inactive state, whereas a small fraction are in the firing state. The node states are then evolved according to the following rules: firing nodes become refractory at rate *i*, refractory nodes become inactive at rate *r*, and, for every link pointing from a firing node to an inactive node, the target node is set to the firing state at rate *p*.

The network topology is evolved according to a rule modelling synaptic plasticity: a firing node loses an incoming link at rate *l*, whereas new links are established between nodes at rate *g*. The creation and deletion of links is reminiscent of the formation of synaptic contacts in the developing brain, but is used here as a discretized model of the continuous changes of synaptic weight in the adult brain. By formulating the model in terms of discrete linking and unlinking events, we avoid additional complications caused by real-valued link dynamics.

The conceptual model described above is one of the simplest conceivable settings that allows the interplay between the spreading of excitation and homeostatic topological evolution to be studied. Excitation dynamics are modelled by stochastic transitions with constant rates. As a consequence, a node can be excited by a single active neighbour, whereas, in real neural systems, multiple inputs are needed. Moreover, the time a node spends in the firing state is exponentially distributed, while real action potentials have a well-defined length. However, as known from epidemiological modelling, none of these simplifications impair the validity of the model predictions [30].
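To make the stochastic rules concrete, they can be sketched as a minimal Gillespie-type simulation. This is an illustrative Python sketch, not the code used for the results in the paper; the function name, data structures and parameter values are our own choices:

```python
import random

F, I, R = "F", "I", "R"  # firing, inactive (resting), refractory

def gillespie_static(adj, state, p, i, r, t_max, rng):
    """Simulate the static-network dynamics of section 3.
    adj[u]: list of nodes that u points to; state[u] in {F, I, R}."""
    t = 0.0
    while True:
        firing = [u for u in state if state[u] == F]
        refractory = [u for u in state if state[u] == R]
        # every link from a firing node to an inactive node is an excitation channel
        fi_links = [(u, v) for u in firing for v in adj[u] if state[v] == I]
        rates = [i * len(firing), r * len(refractory), p * len(fi_links)]
        total = sum(rates)
        if total == 0.0:
            return state  # absorbing state: no firing or refractory nodes left
        t += rng.expovariate(total)
        if t > t_max:
            return state
        x = rng.uniform(0.0, total)
        if x < rates[0]:                    # firing -> refractory at rate i
            state[rng.choice(firing)] = R
        elif x < rates[0] + rates[1]:       # refractory -> inactive at rate r
            state[rng.choice(refractory)] = I
        else:                               # excitation along a random F->I link
            state[rng.choice(fi_links)[1]] = F
```

Starting from a sparse random digraph with a few firing seeds, activity either dies out or persists depending on the connectivity, which is exactly the transition analysed in §5.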

## 4. Low-dimensional approximation

Let us now derive a low-dimensional description of the time evolution of macroscopic quantities in the limit of infinite *N*. To this end, we employ a moment closure approximation (MCA) [26,31,32]. We denote the densities of nodes in the firing, inactive and refractory states by [*F*], [*I*] and [*R*], respectively. Similarly, we denote the per-neuron density of links from a node in state X to a node in state Y by [*XY*].
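As an illustration of these definitions, the node and link densities can be measured from a single network snapshot. The following Python sketch (with hypothetical names `adj`, `state` and `moments`) is our own illustration, not part of the model:

```python
from collections import Counter

def moments(adj, state):
    """Node densities [X] and per-node link densities [XY] from one snapshot.
    adj[u]: list of nodes that u points to; state[u] in {"F", "I", "R"}."""
    n = len(state)
    node = Counter(state.values())
    link = Counter((state[u], state[v]) for u in adj for v in adj[u])
    dens = {s: node[s] / n for s in "FIR"}                           # [F], [I], [R]
    pair = {(a, b): link[(a, b)] / n for a in "FIR" for b in "FIR"}  # [XY]
    return dens, pair
```

By construction, the node densities sum to one and the link densities sum to the mean degree *k*, which are exactly the conservation relations used below.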

Consider [*F*], the density of nodes in the firing state: it increases when a firing neuron excites an inactive node, which happens at rate *p*[*FI*], and decreases when a firing node becomes refractory, which happens at rate *i*[*F*]. By such reasoning, we obtain equations for the time evolution of [*F*], [*I*] and [*R*], which, however, do not form a closed system as they depend on the link density [*FI*].

One possibility to close the system of equations is to approximate the link density by the densities of the nodes involved, usually by assuming [*FI*] ≈ *k*[*F*][*I*], where *k* is the mean degree of the network. Note that, in such a *mean-field approximation*, correlations between neighbouring nodes are neglected.
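A minimal sketch of this mean-field closure, integrated with a forward-Euler step, looks as follows (Python; parameter values, step sizes and the function name are illustrative choices of ours):

```python
def mean_field(p, i, r, k, F=0.05, R=0.0, dt=0.01, T=200.0):
    """Forward-Euler integration of the mean-field closure [FI] ~= k [F][I].
    Returns the late-time firing density [F]; all values are illustrative."""
    for _ in range(int(T / dt)):
        I = 1.0 - F - R                 # node conservation: [F] + [I] + [R] = 1
        dF = p * k * F * I - i * F      # excitation gain minus F -> R loss
        dR = i * F - r * R              # F -> R gain minus R -> I loss
        F += dt * dF
        R += dt * dR
    return F
```

For *p* = *i* = *r* = 1, activity persists only above the mean-field threshold *k* = *i*/*p* = 1; for example, `mean_field(1, 1, 1, 2)` settles near 0.25, matching the closed-form mean-field branch quoted in §5.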

Alternatively, it is possible to treat the link densities themselves as dynamical variables and derive evolution equations for them. In general, these equations depend on triplet densities, which can then be approximated by link and node densities. This approach is often referred to as the *pair approximation*; it is described in detail in appendix A.

Using the pair approximation and assuming a Poissonian degree distribution, we obtain the following ODE description of the system:
4.1a
4.1b
4.1c
4.1d
4.1e
4.1f
4.1g
4.1h
4.1i
4.1j
and
d*k*/d*t* = *g* − *l*[*F*], (4.1k)

where the last equation explicitly captures the change of the mean degree *k* of the network. Writing the equation for *k* saves us from writing the longer equation for [*IF*], owing to the conservation relation for links
[*FF*] + [*FI*] + [*FR*] + [*IF*] + [*II*] + [*IR*] + [*RF*] + [*RI*] + [*RR*] = *k*. (4.2)
Similarly, [*I*] follows from the conservation relation for nodes

[*F*] + [*I*] + [*R*] = 1. (4.3)

## 5. Non-equilibrium phase transition

Before we address the dynamics of the full system, let us first focus on the case *l* = *g* = 0, where the network is static. In this case, the right-hand side of the equation of motion for *k* vanishes such that *k* becomes a control parameter. Below, we will refer to the system without topological evolution as the *static network model* and to the complete system as the *adaptive network model*.

The static network has a trivial steady state **x**^{0} = ([*F*]^{0}, …, [*RR*]^{0}) in which all nodes are inactive, i.e. [*I*]^{0} = 1, [*II*]^{0} = *k* and [*F*]^{0} = [*R*]^{0} = [*FF*]^{0} = … = [*RR*]^{0} = 0. Depending on parameters, it may moreover have a non-trivial, active steady state, in which [*I*] < 1. The stability of the trivial steady state is determined by the spectrum of the Jacobian matrix **J**|_{x0} ∈ ℝ^{10×10}, where *J*_{mn} = ∂ẋ_{m}/∂*x*_{n}. The steady state is asymptotically stable if all eigenvalues of **J**|_{x0} have negative real parts [33].

If the variables *x*_{i} are ordered as in equations (4.1), the non-vanishing entries of **J**|_{x0} are

In the following, we assume *i* > 0, *r* > 0 and *p* > 0. The characteristic polynomial can then be factored into seven linear factors with negative real roots and a remaining third-order polynomial
5.1
where
5.2

In order to assess the stability of **x**^{0} without explicitly calculating the roots of *f*(*λ*), we use the Routh–Hurwitz theorem [34]. It states that the roots of a polynomial *p*(*x*) = *x*^{n} + *b*_{1}*x*^{n−1} + … + *b*_{n−1}*x* + *b*_{n} all have negative real parts if the *Hurwitz determinants*

$$H_m = \det\begin{pmatrix} b_1 & b_3 & b_5 & \cdots \\ 1 & b_2 & b_4 & \cdots \\ 0 & b_1 & b_3 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}_{m\times m}, \qquad(5.3)$$

with *m* = 1 … *n* and *b*_{j} ≔ 0 for *j* > *n*, are all positive. Calculating the Hurwitz determinants of the Jacobian matrix **J**|_{x0} and evaluating the positivity conditions reveals that the trivial steady state is stable if and only if
5.4
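The Hurwitz-determinant test described above can be sketched numerically for a generic monic polynomial (rather than for the paper's specific *f*(*λ*)). In this Python illustration, `hurwitz_determinants` is a hypothetical helper of our own:

```python
import numpy as np

def hurwitz_determinants(b):
    """Hurwitz determinants H_1..H_n for p(x) = x^n + b[0] x^{n-1} + ... + b[-1].
    Entry (m, j) of the matrix built here is b_{2m-j} (1-indexed), with b_0 = 1
    and b_j = 0 outside 0 <= j <= n; this is the transpose of the textbook
    convention, which leaves all leading principal minors unchanged."""
    n = len(b)
    coeff = {0: 1.0}
    coeff.update({j + 1: float(c) for j, c in enumerate(b)})
    H = np.array([[coeff.get(2 * (m + 1) - (j + 1), 0.0)
                   for j in range(n)] for m in range(n)])
    return [float(np.linalg.det(H[:m + 1, :m + 1])) for m in range(n)]
```

For the stable cubic (*x* + 1)^{3} = *x*^{3} + 3*x*^{2} + 3*x* + 1, all three determinants (3, 8, 8) are positive, whereas a polynomial with a root in the right half-plane yields a non-positive determinant.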

At *k* = *k*_{c} the trivial steady state becomes unstable as the system undergoes a transcritical bifurcation. In this bifurcation, a non-trivial steady state enters the positive cone of the state space, becoming a physical solution. The transcritical bifurcation thus marks a transition between the trivial inactive state and an active state in which ongoing activity is observed. This transition is in the same universality class as directed percolation.

The role of the transcritical bifurcation is illustrated by a representative bifurcation diagram in figure 1. The diagram shows that the analytically predicted bifurcation point is in very good agreement with numerical results from agent-based simulations of the system. Furthermore, numerical continuation of the ODEs (4.1) agrees well with the simulations close to the bifurcation point. By contrast, closing the MCA at the mean-field level yields [*F*]_{mf}^{0} = *r*(1 − *i*/(*pk*))/(*i* + *r*) for the non-trivial steady state. This is a much poorer approximation and underestimates the bifurcation point.
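For reference, the mean-field branch quoted above can be evaluated directly (a Python sketch; the function name and parameter values are our own):

```python
def F_mf(p, i, r, k):
    """Mean-field non-trivial steady state [F]^0_mf = r (1 - i/(p k)) / (i + r).
    It crosses zero at k = i/p, below which the branch is unphysical (negative),
    which is the transcritical scenario at the mean-field level."""
    return r * (1.0 - i / (p * k)) / (i + r)
```

For *p* = *i* = *r* = 1, the branch vanishes at *k* = 1 and reaches [*F*] = 0.25 at *k* = 2, while for *k* < 1 it is negative and hence unphysical.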

## 6. Local information and time-scale separation

The previous section showed that the static network model exhibits a transcritical bifurcation. In the language of statistical physics, this bifurcation constitutes a phase transition. To establish that the adaptive network model shows SOC, we have to show that the evolution of the connectivity drives the system to the critical point.

In the discussion leading up to figure 1, we treated *k* as a parameter of the system. For studying the evolution of the connectivity, we now consider *k* as a dynamical variable that evolves according to equation (4.1k). By introducing dynamics in *k*, we change the dynamical system, which can potentially lead to a changed bifurcation diagram. Indeed, in general, a diagram analogous to figure 1 cannot be drawn for the adaptive network model because *k* is no longer available as a parameter axis.

The pitfall described above is inherent in the concept of SOC: criticality can only be cleanly defined in a different system, in which the self-organization is absent. To circumvent this pitfall, one has to demand that the self-organization acts on a slower time scale than the node dynamics. In the example studied here, this implies
*l*, *g* ≪ *i*, *r*, *p*. (6.1)

In this case, the bifurcation diagram of the static network model reappears as the bifurcation diagram of the fast subsystem of the adaptive network model, in which *k* can again be treated as a parameter.

The genesis of aSOC in our model additionally requires a second time-scale separation. To see this, let us revisit a widely held plausibility argument for aSOC [13,18]. It is argued that robust SOC is possible in adaptive networks because the dynamics on the network make information on the global topology available in every node. On the basis of this information, the nodes can then infer the global phase and adjust the local topology accordingly. Indeed, in previous publications [16,18], the network nodes extracted the global phase information by integrating their dynamics over a long time. While some biological processes are known to enable such temporal averaging [36,37], we explore an alternative explanation.

In our model, the local accessibility of the global phase information is limited, as both the dynamics on the static network and the topological updates are memoryless processes that depend only on the current state of a single link or node. Hence, neurons in the resting state do not possess any information about the global phase, as they occur in both phases. By contrast, neurons in the firing state can infer the global phase from their local state, as their occurrence is restricted to the active phase. To achieve aSOC despite the limited local accessibility of the global phase, links have to be created *tentatively* as long as no definitive information is known, but destroyed *decisively* once information about the global phase is available. This corresponds to a separation of time scales between the link creation and the link deletion process
*ε* ≔ *g*/*l* ≪ 1. (6.2)

Note that the twofold separation of time scales constituted by equations (6.1) and (6.2) is typical for SOC models [18,19,22], although it is sometimes hidden in a non-local model definition [38,39]. Our analysis reveals that it is indeed a necessary ingredient for achieving criticality without centralized control.

## 7. Self-organized criticality

Let us now show that the adaptive network model has a stable steady state, which in the double limit *l* → 0, *ε* → 0 coincides with the bifurcation of the static network model. We start by defining **x*** = ([*F*]*, … , [*RR*]*, *k**) as a steady state of the adaptive network model (4.1). Evaluating the stationarity conditions d*k*/d*t* = 0 and d[*F*]/d*t* = 0, we can immediately read off the steady-state values

[*F*]* = *ε* and [*FI*]* = *iε*/*p*. (7.1)

The remaining equations then become linear and can be solved using a computer algebra system. Owing to the time-scale separation, higher-order terms in *ε* and *l* can be dropped, and we obtain
7.2a
7.2b
7.2c
7.2d
7.2e
7.2f
and
7.2g

For *ε*, *l* ≪ 1, the steady state **x*** is always stable, which can be shown by calculating the characteristic polynomial of **J**|_{x*} and then checking the Hurwitz determinants for positivity (again dropping higher-order terms in *ε* and *l*). Moreover, equations (7.2) reveal that, in the double limit *l* → 0, *ε* → 0, [*II*]* → *k*_{c}, while [*F*]*, …, [*RR*]* → 0, i.e. **x*** converges towards the transcritical bifurcation of the static network model.

Recall that the ODE system describes the model system in the limit *N* → ∞; as observed recently, this is also the limit in which criticality may be expected for non-conserving dynamics [40,41]. We thus conclude that for *N* → ∞, and *l* → 0, *ε* → 0, the HSP-inspired update rule self-organizes the adaptive system to criticality.

To confirm aSOC in the discussed model, we ran individual-based simulations. We start with a network of *N* nodes, each of which has an outgoing connection to every other node with probability *k*_{0}/*N*. We initialize 5 per cent of all nodes in the firing state and all others in the inactive state. Then, nodes and links are evolved according to the rules described in §3 using the Gillespie algorithm [42].

For finite *N*, the simulations tend towards the absorbing inactive state owing to demographic stochasticity. To compensate for this finite-size effect, we include an additional process: inactive nodes fire spontaneously at rate *s*. This process has an immediate biological interpretation, as it resembles the spontaneous activity observed in real neurons. To reconcile the simulations with the low-dimensional description (4.1), *s* has to vanish in the *N* → ∞ limit. A plausible assumption is *s* = *c*/*N*, where *c* can be chosen arbitrarily. We have verified that, for sufficiently large system sizes, the particular choice of *c* does not significantly influence the evolving connectivity (figure 2).
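One step of the adaptive simulation can be sketched as follows. This Python fragment is our own illustration with hypothetical names and placeholder rates, not the code behind the figures; in particular, the reading of the deletion rule as acting per link into a firing node is one of several possible implementations:

```python
import random

def step_adaptive(nodes, links, state, rates, rng):
    """One Gillespie event: node dynamics, rewiring and spontaneous firing.
    links is a set of directed (source, target) pairs; returns the waiting time."""
    p, i, r, s, l, g = (rates[x] for x in "pirslg")
    firing = [u for u in nodes if state[u] == "F"]
    refract = [u for u in nodes if state[u] == "R"]
    inactive = [u for u in nodes if state[u] == "I"]
    fi = [(u, v) for (u, v) in links if state[u] == "F" and state[v] == "I"]
    f_in = [(u, v) for (u, v) in links if state[v] == "F"]  # links into firing nodes
    ev = [(i * len(firing), "retire"), (r * len(refract), "recover"),
          (p * len(fi), "excite"), (s * len(inactive), "spont"),
          (l * len(f_in), "unlink"), (g * len(nodes), "link")]
    total = sum(w for w, _ in ev)
    if total == 0.0:
        return 0.0
    x, dt = rng.uniform(0.0, total), rng.expovariate(total)
    for w, kind in ev:          # pick one event with probability proportional to its rate
        if x < w:
            break
        x -= w
    if kind == "retire":
        state[rng.choice(firing)] = "R"
    elif kind == "recover":
        state[rng.choice(refract)] = "I"
    elif kind == "excite":
        state[rng.choice(fi)[1]] = "F"
    elif kind == "spont":       # spontaneous firing at rate s = c/N per inactive node
        state[rng.choice(inactive)] = "F"
    elif kind == "unlink":      # homeostatic deletion of a link into a firing node
        links.discard(rng.choice(f_in))
    else:                       # creation of a random new directed link
        u, v = rng.sample(nodes, 2)
        links.add((u, v))
    return dt
```

Iterating this step and recording the mean degree len(links)/*N* over time produces trajectories of the kind shown in figure 3.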

We start in figure 3 by plotting the time evolution of the mean degree *k* in networks with different initial configurations. All networks approach the same connectivity, which corroborates that the adaptive network model has exactly one stable steady state to which it converges irrespective of initial conditions.

The inset of figure 3 shows that the in-degree distribution stays Poissonian throughout topological evolution and thus retrospectively corroborates the assumption made in the derivation of the MCA.

So far, we have shown that the network self-organizes to a unique value of the connectivity. It remains to be tested whether this value is indeed the critical connectivity. As a first test, we compare the connectivity of the networks evolved in numerical simulations with the critical connectivity *k*_{c} predicted by the analytical approximation from equation (5.4). As shown in figure 4, the numerical result agrees with the analytical estimate of the critical state except for a small but systematic deviation. This discrepancy can be understood by considering equation (7.2*g*). The second and third terms of this equation are positive for all *i*, *r*, *k*_{c}, *l*, *ε* > 0. Hence, the systematic deviation *k** > *k*_{c} is a consequence of the networks' topologies being evolved at small but finite rates *ε* and *l*.

As a second test, we directly probe the dynamics on the evolved networks. For this purpose, we first evolve the topology until the mean degree has reached a stationary state. Then, we deactivate the adaptive addition and deletion of links but manually add (delete) links from the network. After every addition (deletion), we let the dynamics on the network reach a stationary level and record the average number of active neurons. As shown in figure 5, this procedure recreates the phase diagram of the system numerically. It thereby provides direct evidence of the criticality of the evolved state. The slight displacement from the critical point can be understood by recalling the analytical results given above: according to equation (7.1), [*F*]* = *ε* in the evolved network. Hence, the displacement towards the active regime can be attributed to the small but finite rates *ε* and *l* used in the simulations.

## 8. Discussion

In summary, we have shown that activity-dependent synaptic plasticity self-organizes a neural network to criticality, provided that driving is slow and there is an appropriate separation of time scales between potentiation and depression.

We have considered a simple discrete neural network model and used MCA to derive a low-dimensional ODE representation, in which the dynamical phase transition becomes manifest as a transcritical bifurcation. By adding activity-dependent rewiring rules, we obtained an adaptive network model that has one attractive steady state. We have shown that, in the limit of infinitesimally slow topological adaptation, this steady state coincides with the bifurcation of the static network model, and, thus, that the adaptive model displays aSOC.

We emphasize that the particular choice of the parameters *p*, *r* and *i* does not affect the qualitative model behaviour but only the quantitative predictions, in particular the evolved connectivity: the probability that a firing node excites a specific inactive neighbour before becoming refractory is *p*/(*p* + *i*). It is apparent from equation (5.4) that choosing higher values for *p*/*i* results in lower evolved connectivities *k*_{c}. In order to keep simulations computationally feasible, which specifically means maintaining a low number of links, we have chosen unphysiologically high values for *p*/*i*. It can be seen from our analytical treatment that this affects neither the existence of a steady state at criticality nor the stability of this state.

Our work provides a conceptual angle on SOC of neurally inspired models. Although the phenomenon has been demonstrated in several earlier works [19,43], the simplified model studied here is the first system in which it is analytically tractable. Using dynamical systems and bifurcation theory, aSOC can be established more rigorously than in more realistic models and experiments, where evidence for criticality is mainly provided by the numerical observation of power laws. We believe that this approach will prove to be useful in the context of other, more complex dynamical phase transitions. For example, it is conceivable that a similar strategy may help us to elucidate the recently observed self-organization to the onset of synchronous activity [22]. We hope that the approach can thus contribute to settling an ongoing debate regarding the existence and function of criticality in the brain [5] and potentially to the development of self-organizing electronic circuits.

## Appendix A. Moment-closure approximation

As described in §4, the time evolution of the node densities is given by

d[*F*]/d*t* = *p*[*FI*] − *i*[*F*] (A 1a)

and

d[*R*]/d*t* = *i*[*F*] − *r*[*R*]. (A 1b)

For an expansion beyond the mean-field level, we need to derive equations for the time evolution of the link densities. There are four ways in which a link of a certain type can be created or destroyed: (i) one of the link's nodes changes state independently of the others, (ii) the link is added or removed as a result of topological evolution, (iii) one of the nodes is excited by the other via the link in question, or (iv) one of the nodes is excited via a *different* link. Processes of types (i), (ii) and (iii) can be understood on the level of nodes and links. To understand (iv), consider that activity in a given focal link, say excitation along an FI link, affects not only the link itself but also the links connecting to it; here, for instance, the other links connected to the I node. Thus, processes of type (iv) can only be captured if the densities of subgraphs with two links, so-called *triplets*, are taken into account.

Consider the change in the density of FI links,
A 2
where [*X* > *Y* > *Z*] denotes the density of directed triplets, with the directionality of links indicated by '<' and '>', respectively. The three rightmost terms are of type (i) and (ii). The others can be understood as follows: an F > I > I triplet turns into an F > F > I triplet at rate *p*, increasing the number of FI links by one (the underline denotes the link on which the update rule is applied). At the same rate, the F in FI links excites the I, which decreases the number of FI links by 1. Finally, F > I < F triplets change to F > F < F at rate *p*, destroying the FI link *not* underlined. In the last transition, there are actually *two* FI links being destroyed, but we have already counted one of them as a process of type (iii). Evolution equations for the other types of links are obtained by the same reasoning.

The pair approximation closes the system of equations by approximating the occurring triplet densities in terms of link densities. Let us start with a triplet of the type X > Y > Z. It consists of one XY link, which occurs with density [*XY*]. If we assume that the states of next-nearest neighbours are uncorrelated, we can calculate the expected number of links from the Y node to a Z node as [*YZ*]/[*Y*]. We can thus approximate the triplet density as

[*X* > *Y* > *Z*] ≈ [*XY*][*YZ*]/[*Y*]. (A 3)

Triplets of the type X > Y < Z are approximated similarly: again, the triplet contains an XY link, which occurs with density [*XY*]. To calculate the expected number of ZY links connected to the central Y node, we now need to take into account that we already know one incoming link to be an XY link. Thus, we need to consider the *mean excess degree* *q*, i.e. the expected number of incoming links that the Y node has *in addition* to the XY link. Assuming that each of these links is a ZY link with probability [*ZY*]/(*k*[*Y*]), we can write

[*X* > *Y* < *Z*] ≈ *q*[*XY*][*ZY*]/(*k*[*Y*]). (A 4)

Note that, for symmetric triplets (X > Y < X), equation (A 4) has to be modified to avoid double-counting,

[*X* > *Y* < *X*] ≈ (*q*/2)[*XY*]^{2}/(*k*[*Y*]). (A 5)

Assuming that the in-degree distribution of the evolving network is Poissonian, we can simplify the triplet approximations further. For Poissonian degree distributions, *q* = *k* [44], which leads to the expressions in equation (4.1). Note that the assumption is confirmed by the numerical results in §7.
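The closures (A 3)–(A 5) translate directly into code. The following Python sketch (with hypothetical function names of our own) makes the Poissonian simplification *q* = *k* easy to check:

```python
def triplet_chain(XY, YZ, Y):
    """[X > Y > Z] ~= [XY][YZ]/[Y], equation (A 3)."""
    return XY * YZ / Y

def triplet_in(XY, ZY, Y, k, q):
    """[X > Y < Z] ~= q [XY][ZY]/(k [Y]) for distinct X, Z, equation (A 4)."""
    return q * XY * ZY / (k * Y)

def triplet_in_sym(XY, Y, k, q):
    """[X > Y < X]: halve (A 4) to avoid double-counting, equation (A 5)."""
    return 0.5 * q * XY * XY / (k * Y)
```

For a Poissonian in-degree distribution, *q* = *k*, so the prefactor *q*/*k* in (A 4) drops out and the in-triplet closure coincides numerically with the chain closure for equal arguments.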

- Received July 13, 2012.
- Accepted August 22, 2012.

- © 2012 The Author(s) Published by the Royal Society. All rights reserved.