
Boolean network models of cellular regulation: prospects and limitations

Stefan Bornholdt

Abstract

Computer models are valuable tools towards an understanding of the cell's biochemical regulatory machinery. Possible levels of description of such models range from modelling the underlying biochemical details to top-down approaches using tools from the theory of complex networks. The latter, coarse-grained approach classifies regulatory circuits in graph-theoretical terms, reducing the elements of the regulatory networks to simple nodes and links, in order to obtain architectural information about the network. Considering dynamics on networks at such an abstract level seems rather unlikely to match the dynamical regulatory activity of biological cells. It therefore came as a surprise when, recently, examples of discrete dynamical network models based on very simplistic dynamical elements emerged that in fact do match sequences of regulatory patterns of their biological counterparts. Here I review such discrete dynamical network models, or Boolean networks, of biological regulatory networks. Further, we will take a look at such models extended with stochastic noise, which allow one to study the role of network topology in providing robustness against noise. We will then discuss the interesting question of why such simple models can describe aspects of biology at all. Finally, prospects of Boolean models as exploratory dynamical models for biological circuits and their mutants will be discussed.


1. Introduction

When, as a theoretical physicist by training, I became interested in modelling biological phenomena, I was fascinated watching biologists at the blackboard discussing a particular signal transduction network. The circles and boxes on the blackboard, connected by arrows and lines, were much simpler than what I had learned as mathematical models of the dynamics of an actual biochemical network. A full mathematical differential equations model, with a large number of indispensable kinetic constants and parameters, would predict the time course of a certain regulatory pattern, in accordance with experiment. Yet, over the seemingly simple draft on the blackboard, the biologists were confidently discussing the dynamics going on in the network. This contrast was most fascinating to me, raising the question: what is the minimal model one needs to get a meaningful idea about the dynamics in a regulatory network? If such a model were also simple to use, without requiring much mathematical knowledge, it would fill the gap of an easy-to-use tool for exploratory dynamical modelling of regulatory circuits (Lazebnik 2002).

To follow this line of thought in a systematic way, let us here consider the regulatory machinery of the living cell from a computational perspective. How do cells compute? And what can we learn from this exercise for how to model the relevant dynamics of cellular control circuits? We focus on the remarkable fact that cells compute reliably, despite the massive presence of molecular stochasticity, and ask whether this may give us hints for modelling. If sloppy machinery is at work in the cell, can we not use ‘sloppy’ modelling techniques as well? This could provide a path towards simplifying computer models of regulatory networks, provided one knows which aspects of the system are relevant and must be kept in the model, and which are irrelevant and can be neglected. In the light of our knowledge of vast amounts of molecular detail, this may be a difficult task in itself.

As one class of much simplified models for cellular regulation, we discuss discrete dynamical (or Boolean) networks. As a biological example, we summarize their application to modelling yeast cell cycle control. Adding stochastic noise to these models allows one to discuss questions of when network dynamics is robust against noise—and against simplifications in computer model implementations. A perspective that helps us in identifying the relevant aspects of computation in the cell is the analogy of a computer.

2. Computers and the living cell

A living cell is as different as it could be from what comes to mind when we think of a computer; nevertheless, a living cell has ample need for computation and control in its routine processes. A prominent property is the continuous adaptation and reaction to environmental inputs such as stress, food or damage, by movement, growth or repair. Adaptation and regulation can be called analogue computation, computation with real numbers, as opposed to the digital computation used in modern computers. Some aspects of computation in the cell, however, are of digital character, for example the control of sequences of events, as in the cell cycle or in multicellular development. Such critical processes have to be controlled in a highly reliable way, and while this would be a trivial task for a computer made of deterministic silicon switching elements, it surely is a much harder task to realize with the cellular components of molecules and water. However, many molecular regulatory elements indeed show binary characteristics, and even bistable switches are frequently observed in molecular circuits (Tyson et al. 2003). Therefore, digital variables can be represented in the cell and elements for digital computation exist, and it is natural to ask whether there is any digital computation in the cell.

Considering the main architecture of a digital computer, there are few similarities with a regulatory network in a living cell. A foremost feature of a digital computer is that it works in subsequent steps, where the desired sequence of actions is controlled by a program. In early computers, for example, the sequence was stored on a punched tape with the commands recorded in the varying combinations of punched holes. Sequences of events in the cell, in contrast, are not as easily controlled. There is no simple memory for a time sequence in the cell and, most importantly, cellular processes are not controlled by a centralized clock as in a digital computer. Therefore, while for the engineer a punched tape would be the easiest option for generating a (pre-recorded) sequence, this is not a feasible option if all that a system has to work with are molecules and water. The alternative is the dynamical systems approach, where a network (or circuit) of molecular elements generates a dynamical output signal that then serves as the desired control sequence. Thus, the sequence emerges as a dynamical trajectory of the system, determined by the circuitry of the system. Only, if all you have are noisy, and thus unreliable, molecular building blocks, how can you generate a reliable sequence of actions from them?

To illustrate this analogy between a cellular molecular circuit and a computer, let us briefly consider the engineering example of controlling washing machines, which like many other everyday appliances have little computers (or microcontrollers) that control the sequences of their functions. Switches, temperature probes, water level probes, etc. provide input signals to the control circuit, which from these data generates an output, i.e. a sequence of events in response to the input parameters, such as the selected program, temperature, water level, etc. The software of the computer determines the sequence of switching events, controlling pumps, valves, motors, heaters and so on. The hardware of such a control circuit is rather similar to a punched tape computer (early washing machines had switching discs mounted on a common axle, synchronously driven by a motor).

The biological side of our analogy is the task of controlling the cell cycle in the living cell. From a sequence control perspective, this problem is on a similar scale, with a similar number of variables and dynamical stages as in our engineering example above. Only, the ‘hardware’ is radically different: a small control circuit of genes and proteins generates the central timing and control sequence in the cell. The ‘software’ here is the desired sequence of gene/protein activation states along the cell cycle. The output of the control circuit is a sequence of molecular activation patterns, in response to external and internal signals such as cell size, temperature, food supply, etc. The hardware is a molecular network, with an analogue, autonomous dynamics. It is continuously updated (there is no computer clock cycle), with many elements that have a tendency towards binary states.

While these two systems are fundamentally different in almost all aspects of their hardware, they share the central requirement of sequence generation and coordination to keep their system running. In the following let us consider this core problem for a biochemical system from this engineering perspective and ask how dynamical networks of switching elements can generate a dynamical sequence of signals. For this purpose we first consider the dynamics of networks of simple switches as an extreme simplification of biochemical networks, which may teach us basic principles about control pattern generation. Our central idea is to drop the requirement that a model predict the exact timing of the biochemical network's dynamics (which state-of-the-art differential equation models usually aim to do), and to keep only the requirement that it predict ordered sequences of activation patterns. This is the software in the analogy picture. An interesting question now is whether engineering knowledge is applicable to this ‘software layer’ of biochemical networks. Can we construct models of the ‘digital’ aspects of molecular network dynamics, possibly even simple models?

3. Discrete networks as models for cellular computation

One of the most condensed, and impressive, windows into the digital character of a cell is granted by the relatively recent experimental technique of microarrays. Providing a snapshot of most gene states in a cell at a given time, they allow one to watch the cellular machinery at work, e.g. under changing external conditions. Considering, for example, a simple Escherichia coli heat shock experiment, it is striking how deterministic the changes of genetic activity under temperature change are (Richmond et al. 1999); they appear invariably through the layer of experimental noise of the method.

The projection of gene states to a simple ON/OFF pattern of binary states, as often derived from microarray data, is encouraged by such experiments and often catches well the invariable aspects of repeated experiments. From the modelling perspective, this encourages the use of binary variables for representing gene activity. On the micro-level, this picture is also supported by the available accurate mathematical models of single pathways, where, for example, gene activities often operate with expressed protein concentrations varying over many orders of magnitude, mostly either in a saturation regime or in a regime of insignificantly small concentrations, again suggesting the binary simplification of states. When constructing a dynamical mathematical model, the choice of the type of state variables is one aspect, the second being the character of their dynamics in time. A prominent feature of molecular concentrations is that they often change rapidly, compared with their typically metastable character in between changes. In combination, the typical observation of steep flanks and plateaus in cellular protein concentrations suggests that it may not be too unrealistic to represent gene or protein activity by switch-like dynamics. An example of this extreme simplification of states and interactions in a physicist's view of the yeast transcriptional regulatory network (Maslov et al. 2003) is shown in figure 1.

Figure 1

Simplified representation of the yeast regulatory network. Interactions are classified into the two types of activating (green) or repressing (red), and the dynamical elements representing the gene states are taken to be binary with values ON (1) or OFF (0). Adapted from Maslov et al. (2003).

Let us now, in this framework, follow the idea of a mathematical model that keeps the requirement to predict ordered sequences of activation patterns, without predicting the exact timing of a biomolecular network. Discrete dynamical networks (also called switching networks, or Boolean networks in a general mathematical terminology) have long been discussed as models for genetic regulation (Kauffman 1969; Thomas 1973). However, as until recently full architectural information about gene regulation networks was scarce, it was mostly random Boolean networks that served as surrogate models for gene regulation (Kauffman 1990, 1993; Aldana-Gonzalez et al. 2003; Drossel 2008). Simulating such dynamical networks with an architecture identical to natural regulatory networks has entered the scene only recently.

Let us, as an example, consider a particularly simple subset of Boolean networks, the so-called threshold networks (or threshold Boolean networks; Derrida 1987; Kürten 1988a; Rohlf & Bornholdt 2002). They are a subset of all Boolean networks with the Boolean function of each node depending only on the sum of its input signals. They are particularly simple variants of the full Boolean networks and can easily be implemented in the computer. Nevertheless, they are very well suited for representing regulatory networks, as we will see below. Furthermore, the characteristic dynamical features of Boolean networks are found in threshold networks as well (Kürten 1988b). In these networks each node takes one of two discrete values, S_i = 0 or 1, which at each time step is a function of the values of a fixed set of other nodes. The links that provide input to node i take discrete values J_ij = ±1 for activating (+) and repressing (−) links, and J_ij = 0 if i does not receive any signal from j. The dynamics of the network of N nodes is then defined by a simple sum rule for every node, synchronously iterated in discrete time steps t,

\[ S_i(t+1) = 1 \quad \text{if} \quad \sum_j J_{ij} S_j(t) + h > 0, \tag{3.1} \]
\[ S_i(t+1) = 0 \quad \text{if} \quad \sum_j J_{ij} S_j(t) + h \le 0, \tag{3.2} \]

with some threshold parameter h. The natural choice is a threshold of h = 0, such that the genetic switch is inactive if there is no input signal, and switches on when activating signals are present. When a node needs more than one incoming signal to be activated, a corresponding value of h can represent this fact in the model. Starting from a given initial condition, the network then produces a dynamical sequence of network states, eventually reaching a periodic attractor (limit cycle) or a fixed point (figure 2). The attractor length depends on the topology of the network. Earlier studies on random networks found that below a critical connectivity K < K_c (average number of incoming links per node), the network decouples into many disconnected regions, resulting in short transients and attractors. Above K_c, any local signal will initiate an avalanche of activity that may propagate throughout most of the system, and transients as well as attractor cycles tend to become quite long. While the notion of criticality is only well defined for random networks, it has long been argued that the intermediate range of activity is particularly suitable for efficient information processing. A prominent feature of the dynamics of such networks is the relatively small number of attractors compared with the 2^N possible states of the network (figure 3). This feature motivated the hypothesis that a similar mechanism could potentially stabilize macrostates of cellular regulation such as, for example, cell types (Kauffman 1993).
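To make the synchronous threshold dynamics concrete, the following minimal Python sketch implements the update rule of equations (3.1) and (3.2) and searches for an attractor by iterating until a network state repeats. The three-node circuit at the end is a purely hypothetical toy example, not one of the biological networks discussed in this article.

```python
import numpy as np

def step(S, J, h=0):
    """One synchronous update of a threshold Boolean network, eqs (3.1)/(3.2)."""
    return ((J @ S + h) > 0).astype(int)

def find_attractor(S0, J, h=0):
    """Iterate from an initial state until a state repeats.

    Returns (transient_length, attractor_states); the attractor is a fixed
    point if it contains a single state, otherwise a limit cycle.
    """
    S = np.asarray(S0, dtype=int)
    seen, trajectory = {}, []
    while tuple(S) not in seen:
        seen[tuple(S)] = len(trajectory)
        trajectory.append(S.copy())
        S = step(S, J, h)
    first = seen[tuple(S)]              # index of the first recurring state
    return first, trajectory[first:]

# Hypothetical three-node toy circuit: node 0 activates 1, 1 activates 2, 2 represses 0.
J = np.array([[0, 0, -1],
              [1, 0,  0],
              [0, 1,  0]])
transient, attractor = find_attractor([1, 0, 0], J)
print(f"transient of {transient} steps, attractor of length {len(attractor)}")
```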

Figure 2

Basin of attraction of a dynamical attractor of a random Boolean network. Network states (circles) and transitions between them are shown, which eventually reach a periodic attractor cycle. Some network states do not have any precursor state (garden-of-Eden states). Most states are transient states and form tree-like patterns of transient flows towards the attractor (adapted from Wuensche (1994)).

Figure 3

The full state space of a random Boolean network with N=13 nodes: all 2^13 = 8192 initial states each flow into one of 15 attractors (adapted from Wuensche (1994)). The basin of attraction marked with an arrow is the one shown in figure 2.

Boolean models for regulatory networks remained at this speculative level for many years and have become more than anecdotal only very recently when applied to modelling actual biological regulatory networks.

Thieffry and co-workers constructed early logical models of regulatory circuits in Drosophila development (Sanchez et al. 1997; Sanchez & Thieffry 2001). Albert & Othmer developed a Boolean network that accurately predicts the dynamics of a developmental module in Drosophila (Albert & Othmer 2003). This came as a true surprise, as nobody expected such a dramatically simplified dynamical system to predict anything close to the dynamics of the original biological counterpart. It turned out, however, that essential features of the dynamics remain intact, allowing one to predict the developmental pattern formation, while only details of the dynamics are lost as, for example, the exact timing. In a subsequent model by Li et al. (2004), a Boolean network of 11 nodes is used to predict the Saccharomyces cerevisiae cell-cycle dynamics, yielding accurate predictions of the sequential events of the cell cycle.

Further applications of this model class to modelling real biological genetic circuits show that they can predict sequence patterns of protein and gene activity with much less input (e.g. parameters) to the model than the classical differential equations approach requires. Examples are models of the genetic network underlying flower development in Arabidopsis thaliana (Mendoza et al. 1999; Espinosa-Soto et al. 2004), the signal transduction network for abscisic acid-induced stomatal closure (Li et al. 2006), the mammalian cell cycle (Faure et al. 2006) and the Schizosaccharomyces pombe cell cycle network (Davidich & Bornholdt 2008). Let us take a closer look at the Boolean cell cycle model of S. cerevisiae as one prototypical example.

4. A biological example: the yeast cell cycle

The cell cycle of budding yeast (S. cerevisiae) is a widely studied example of a robust dynamical process in the cell (Mendenhall & Hodge 1998; Chen et al. 2004). The yeast cell cycle control circuit is probably one of the best understood molecular control networks, with accurate biochemical kinetic models available (Chen et al. 2005). It thus provides an ideal test bed for validating a Boolean network model version of it. Such a model has been proposed by Li et al. (2004), modelling a network of 11 proteins or genes as binary nodes Si, each with two possible states Si∈{0,1} (figure 4).

Figure 4

Boolean network model for the yeast (S. cerevisiae) cell cycle control network as defined by Li et al. (2004).

Their states depend on signals they receive from each other via discrete links J_ij = ±1 for activating (+) and repressing (−) links (green/red arrows in the figure). The dynamics of the network is then given by a slightly modified threshold network sum rule for every node, again synchronously iterated in discrete time steps t,

\[ S_i(t+1) = 1 \quad \text{if} \quad \sum_j J_{ij} S_j(t) > h, \tag{4.1} \]
\[ S_i(t+1) = 0 \quad \text{if} \quad \sum_j J_{ij} S_j(t) < h, \tag{4.2} \]
\[ S_i(t+1) = S_i(t) \quad \text{if} \quad \sum_j J_{ij} S_j(t) = h, \tag{4.3} \]

with threshold parameter h = 0. The only difference from standard threshold networks is the separate rule applied when a node receives no net input: it keeps its current state unless it is actively regulated. Only so-called self-degrading nodes (indicated by loops in figure 4) go to the inactive state in this case, according to

\[ S_i(t+1) = 0 \quad \text{if} \quad \sum_j J_{ij} S_j(t) = h. \tag{4.4} \]

Note that there are no kinetic constants or other continuous variables entering this model. It is based solely on the wiring diagram of the network, defined by the interaction links J_ij and their sign (+/−). This ‘wiring’ diagram is inferred from the qualitative knowledge about who interacts with whom in this regulatory module. Accordingly, the predictive power of this model does not lie in accurate quantitative predictions of concentrations and timings. Instead, it is able to provide a bird's eye view of the space of all possible network states, and of how they are related through dynamical transitions. This is the attractor picture of dynamical flows in the network.
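As an illustration, here is a minimal Python sketch of this modified update rule. It assumes that the interaction matrix J and the set of self-degrading nodes are supplied from the wiring diagram of figure 4 (not reproduced here); the variable names are illustrative only.

```python
import numpy as np

def yeast_step(S, J, self_degrading, h=0):
    """One synchronous update with the modified threshold rule of Li et al. (2004).

    S               : binary state vector (0/1), one entry per node
    J               : interaction matrix, J[i, j] = +1 (activating), -1 (repressing), 0 (no link)
    self_degrading  : boolean mask of nodes that decay to OFF when they receive no net input
    """
    total = J @ S                              # summed input of every node
    S_new = S.copy()                           # eq. (4.3): by default, keep the current state
    S_new[total > h] = 1                       # eq. (4.1): net activation switches the node ON
    S_new[total < h] = 0                       # eq. (4.2): net repression switches the node OFF
    S_new[(total == h) & self_degrading] = 0   # eq. (4.4): self-degrading nodes decay when unregulated
    return S_new
```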

In this example, any dynamics on the network eventually gets stuck in one of seven fixed points, one of which has a large basin of attraction; in fact, 1764 of the 2^11 = 2048 possible initial states of the network end up in this state (figure 5). Surprisingly, this dominant fixed point corresponds to the biologically stable final state (G1) at the end of the cell cycle. Furthermore, when preparing the network with the known protein states at the start of the cell cycle, the dynamical trajectory of the network follows the exact sequence of 12 subsequent phases known from the yeast cell cycle before reaching the G1 fixed point (arrows). This is remarkable, as it is extremely unlikely to obtain such a perfect match by chance. No previous knowledge about the actual dynamics of the cell cycle has been put in.
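State-space pictures such as figure 5 can be obtained by brute force, since all 2^11 = 2048 initial states can easily be enumerated. A minimal sketch of such an exhaustive basin count, assuming a synchronous update function like the one sketched above is available, might look as follows.

```python
from collections import Counter
from itertools import product

def basin_sizes(step_fn, n_nodes):
    """Follow every initial state to its attractor and count basin sizes.

    step_fn : synchronous update mapping a state tuple to the next state tuple.
    Returns a Counter mapping an attractor state to the size of its basin.
    (All attractors of the yeast model are fixed points; for limit cycles one
    would store a canonical state of the cycle instead.)
    """
    basins = Counter()
    for state in product((0, 1), repeat=n_nodes):
        seen = set()
        while state not in seen:       # iterate until the trajectory revisits a state
            seen.add(state)
            state = step_fn(state)
        basins[state] += 1             # the first recurring state labels the attractor
    return basins

# Example (assuming J and self_degrading encode the wiring of figure 4 and
# yeast_step is the update rule sketched above):
#   import numpy as np
#   basins = basin_sizes(lambda s: tuple(yeast_step(np.array(s), J, self_degrading)), 11)
#   basins.most_common(1)   # should return the G1 fixed point with its basin of 1764 states
```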

Figure 5

Every dot is a state of the network (with a specific ON or OFF state for every node), and the arrows denote the sequence of network states in time. Of seven attractors in total, the largest attractor has a basin of 1764 states, which all flow into the G1 fixed point.

Beyond the prediction of the biological trajectory, the attractor map provides further information about the dynamical flows in the space of possible states. A fan-like convergence of the non-biological dynamical paths towards the correct trajectory may be interpreted as some form of error-correction ability, and artificial knockout experiments in the computer point towards an unusual stability of the correct biological trajectory (Li et al. 2004).

This is an example for using a simple threshold Boolean network to predict the sequence of states of a small biological regulatory network. But how general is this method? A recent independent study on the different regulatory module of the fission yeast cell cycle network indicates that application of the method is quite straightforward and does not require tuning of any sort, at least in this example (Davidich & Bornholdt 2008). Again, a network of interactions (figure 6) has been constructed from known interaction data, yielding a state space map, which again shows one prominent attractor that corresponds to the biological trajectory and fixed point of the cell cycle (figure 7).

Figure 6

Boolean network model for the fission yeast (S. pombe) cell cycle control network (Davidich & Bornholdt 2008).

Figure 7

Fission yeast model state space with 2^10 = 1024 states in 18 attractors (fixed points). The largest attractor (722 states) corresponds to the biologically stable final state. The trajectory after the start signal follows the biological time sequence (adapted from Davidich & Bornholdt (2008)).

Let us step back for a moment and view these models in our sequencing-computer perspective. We now have a picture of a switching network representation that generates a sequence of actions with computer-like reliability. What is different is that the sequence is not stored on a tape, but generated intrinsically by the dynamics of the network.

What can we learn from the fact that the major course of the dynamics of a real biochemical network can be represented in such a simple way? A simple interpretation is that the sequence of actions can be viewed as the ‘blueprint of the dynamics’ of the control of the cell cycle. The inner dynamical workings of a cellular sequence control network could be this simple, if it did not have to be implemented by biochemical means. However, as the elements of the network are of a biochemical nature, with signals transmitted by small and fluctuating numbers of molecules, the observed dynamics in the cell is more complicated than the underlying digital layer as modelled with the Boolean network.

The question remains as to when and under what conditions biochemical networks can in fact implement a ‘dynamical blueprint’. Proteins and genes are ‘noisy’, with fluctuating activity (McAdams & Arkin 1997), which sometimes even shows up in the macroscopic phenotype (Pedraza & van Oudenaarden 2005). How does the molecular network achieve a clockwork-like reliability despite its fluctuating molecular building blocks (Rao et al. 2002)? These questions can be explored in an extended version of Boolean networks, with added stochasticity.

5. Discrete network models and stochastic dynamics

The fundamental question of how to achieve reliable computation by means of unreliable elements dates back to the time when the first computers were built (von Neumann 1956). In the context of noisy dynamical networks, it is an important question as well. However, adding noise to Boolean networks is not straightforward, in particular if one wishes to model arbitrarily small noise levels. Commonly, a whole node is flipped to its opposite state (Qu et al. 2002; Aldana & Cluzel 2003; Kauffman et al. 2003, 2004; Shmulevich et al. 2003), which is not very realistic if one wants to mimic stochastic fluctuations. Let us therefore expand the Boolean model to be time-continuous and stochastic, such that it can account for these effects.

A simple extension is to add a (protein) concentration dynamics inside a node, while keeping Boolean ON/OFF states on the outside, for communication between the nodes. At each node, the input signals from neighbour nodes are summed up and drive growth or decay of a concentration variable c_i(t) (motivated by the dynamics of proteins in regulatory processes of the cell). We here extend the basic model with an explicit time delay t_d, accounting for the transmission time of the incoming signals. Similar models for regulatory networks have been discussed by Glass (1975). We here in addition allow the transmission delays to fluctuate (in order to account for biochemical noise). Depending on whether the summed input falls below or exceeds a threshold, a decay or growth process results, most easily described by a simple differential equation driven by a binary (0 or 1) input,

\[ \frac{\mathrm{d}c_i(t)}{\mathrm{d}t} = +1 \quad \text{if} \quad \sum_j J_{ij} S_j(t - t_d) > h_i, \tag{5.1} \]
\[ \frac{\mathrm{d}c_i(t)}{\mathrm{d}t} = -1 \quad \text{if} \quad \sum_j J_{ij} S_j(t - t_d) \le h_i, \tag{5.2} \]

with a suitably defined threshold h_i for each node and the concentration c_i(t) confined to the interval [0, 1]. The binary output of the node is derived from the concentration by a simple threshold rule:

\[ S_i(t) = 1 \quad \text{if} \quad c_i(t) > 1/2, \tag{5.3} \]
\[ S_i(t) = 0 \quad \text{if} \quad c_i(t) \le 1/2. \tag{5.4} \]

Noise can now conveniently be added to the transmission delay times, t_d → t_d + Χ_ij, with Χ_ij a uniformly distributed random number Χ ∈ [0, Χ_max] chosen independently for each single link J_ij. Randomness is not quenched in this model, which means that each Χ_ij is freshly drawn whenever a new signal enters the link. With this technique of noisy transmission times, the effect of arbitrarily small levels of noise can be examined.
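For illustration, here is a sketch of how such a delayed, noise-perturbed dynamics might be simulated numerically with simple Euler integration. The growth/decay rates of ±1, the output threshold of 1/2 and the bookkeeping for redrawing a link's delay whenever its source node switches are assumptions of this illustration, chosen to match the description above rather than taken from the cited references. The two-node loop at the end is a hypothetical usage example of the kind of oscillator motif discussed below.

```python
import numpy as np

def simulate_noisy_threshold_net(J, h, c0, t_max=100.0, dt=0.01,
                                 t_d=1.0, noise=0.2, rng=None):
    """Euler integration of a delayed, noise-perturbed threshold network.

    J[i, j] : +1 / -1 / 0 for an activating / repressing / absent link j -> i
    h       : activation threshold of each node (array of length n)
    c0      : initial concentrations in [0, 1]
    t_d     : mean signal transmission delay
    noise   : width of the uniform jitter added to each link's delay
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(c0)
    steps = int(t_max / dt)
    c = np.array(c0, dtype=float)
    S = (c > 0.5).astype(int)                  # Boolean output, eqs (5.3)/(5.4)
    history = np.zeros((steps, n), dtype=int)  # record of past Boolean outputs
    delay = t_d + noise * rng.random((n, n))   # per-link transmission delays
    traj = []
    for step in range(steps):
        history[step] = S
        inp = np.zeros(n)                      # delayed, summed input of every node
        for i in range(n):
            for j in range(n):
                if J[i, j] != 0:
                    lag = int(delay[i, j] / dt)
                    inp[i] += J[i, j] * history[max(step - lag, 0), j]
        dc = np.where(inp > h, 1.0, -1.0)      # growth or decay, eqs (5.1)/(5.2)
        c = np.clip(c + dc * dt, 0.0, 1.0)
        S_new = (c > 0.5).astype(int)
        for j in np.nonzero(S_new != S)[0]:    # a switching node sends new signals:
            delay[:, j] = t_d + noise * rng.random(n)  # redraw its outgoing delays
        S = S_new
        traj.append(c.copy())
    return np.array(traj)

# Hypothetical two-node loop (A activates B, B represses A); thresholds chosen
# so that A is ON by default and B follows A, which yields oscillations:
J = np.array([[0, -1],
              [1,  0]])
traj = simulate_noisy_threshold_net(J, h=np.array([-0.5, 0.5]), c0=[1.0, 0.0], noise=0.3)
```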

The most significant consequence of extending a Boolean network in this way is that the nodes are no longer synchronously updated in discrete time steps. Instead, each node obeys its own, autonomous dynamics (only when noise fluctuations are turned down to zero is the original synchronized dynamics restored). With noise in the system, however, processes in the network may desynchronize and become unstable, and the question of how a reproducible time sequence can be generated by the network can be studied in this setting. This can be viewed as a toy model for how robustness against noise from biochemical stochasticity can be achieved in cellular regulation. In fact, when adding noise to Boolean networks, it was found that most attractors in Boolean networks are artefacts of the synchronous update mode and disappear in the presence of noise (Greil & Drossel 2005; Klemm & Bornholdt 2005a). Therefore, not every dynamics of a deterministic Boolean network can be reproduced in a noisy Boolean network without a central update clock, and presumably not in the wet analogue of a biochemical network either.

Let us call those networks ‘reliable’ whose dynamical attractors of the deterministic Boolean network are correctly reproduced in the noisy Boolean network version. We can then look for the conditions that a network architecture has to fulfil in order to exhibit reliable dynamics. This is the model version of the question of how a biochemical system manages to produce a reliable time sequence of protein states, despite the lack of a central update clock as in a computer. It turns out that even a network as simple as the extended Boolean network above is able to produce reproducible dynamics despite noise and the lack of a central clock (Klemm & Bornholdt 2003). The low-pass filter characteristics of the smooth loading curve, as well as the signal transmission time delay, help the formation of a self-organized internal clock.

Specific circuit motifs, however, exhibit dynamics that are not reproducible (see figure 8 for the simplest example of a two-node oscillator (Braunewell & Bornholdt submitted)). In general, certain circuit patterns are unreliable in the presence of noise, another example being feedback loops with an even number of inhibiting interactions (Klemm & Bornholdt 2005b). A practical example of a reliable circuit is the three-node feedback loop with inhibitory couplings, also called the repressilator (Elowitz & Leibler 2000). A four-node version of it, on the contrary, is unreliable and would not exhibit stable oscillations. A central requirement is that the time ordering of flips (state changes of nodes) has to be robust against noise for the network to stay within a given attractor. This leads to conditions on the circuitry similar to known rules in electrical engineering (Klemm & Bornholdt 2005b). Related criteria for dynamics in feedback loops have been worked out for non-delayed networks (Glass & Pasternack 1978).

Figure 8

Simple examples of small oscillator networks: in the unreliable case, oscillations persist under synchronous update or noiseless autonomous dynamics, but adding noise or timing fluctuations to the autonomous nodes makes them disappear. (a) Unreliable dynamics; (b) reliable dynamics; (c) concentration variables of the two nodes A and B decay with time in the unreliable scenario; (d) concentration variables of the two nodes A and B show stable oscillations in the reliable case.

So, how about the budding yeast cell cycle network: is it reliable? Clearly, this is a rather philosophical question because, as we all know, yeast functions very well. However, from the modelling side, we so far only know that deterministic models reproduce the biological sequence. On the other hand, phases with multiple flips among the nodes can in principle desynchronize the system. With noisy Boolean networks at hand, we are now able to make a double check, which indeed has been done (Braunewell & Bornholdt 2006) by reformulating the model by Li et al. (2004) in terms of noisy Boolean nets. The result of this test is that the correct control sequence emerges from the network, even in the presence of strong noise. Therefore, the yeast cell cycle network is reliably controlled, with the order of switching events being stable against timing fluctuations.

6. Summary and outlook

We started out with a comparison of computers with the principles of computation in the cell and discussed the fundamental difference between central clocking and emergent sequence in the autonomous dynamical system of a regulatory network in the cell. The dynamical systems analogy turns out to be fruitful as it seems to provide a tool for simple exploratory modelling of regulatory networks where kinetic details for precise modelling are not yet known, or where even part of the circuit might still be unknown.

Why do Boolean networks work as models for regulatory network sequences? In a sense, we can view them as coarse simplifications of the successful differential equation models in the yeast example. The detailed yeast models (Chen et al. 2000) rely on the well-founded assumption that the regulatory dynamics largely consists of transitions between stationary states. These stationary states are the basis for the Boolean states of the network model, with the Boolean dynamics modelling the transitions between them. A second point addresses the noise aspect: at least from our model perspective, we can say that an attractor that is stable in the noisy Boolean network is also present when the noise is turned down to zero; thus it can be represented in a deterministic Boolean network. This is simplicity for free, unless noise is active on the macroscopic level.

Where could these models fail? A clear limitation is where stochastic effects propagate from the micro- to the macro-level. While in general this is a rather exotic phenomenon in regulatory networks (Acar et al. 2008), it may be relevant in specific circumstances (for example, cell differentiation). The simplest network model failure, and probably the most frequent, is insufficient knowledge of the network architecture.

An interesting outlook is the application of the Boolean approach to exploratory, predictive modelling of a system where indeed kinetic constants are not sufficiently known for constructing a predictive differential equations model.

Knowledge about the network architecture of the regulatory module one wants to simulate, however, has to be rather complete: as the knockout experiments on the budding yeast model network of Li et al. (2004) have shown, a single change in the wiring diagram changes the dynamical trajectory with a 50% probability. Therefore, accurate knowledge of the circuitry of the biological module is the most important asset of this approach if a dynamical simulation is expected to match the biological system. If the network structure is not fully known, on the other hand, exploratory modelling may be a valuable guide towards completing the wiring of the network model, for example by creating several variants of a network and then comparing each of them with the real system. In addition, the universal requirement of reliability against biochemical stochasticity may provide valuable hints and further constrain the set of possible topologies.

Boolean networks thus show a way to start modelling the dynamics of molecular networks at an earlier stage than we are used to today. The simple steps to apply this technique are the following: (i) identify the interaction network, making sure you have full knowledge of the network; where unsure, make several variants of the network. (ii) Translate it into a switching network. (iii) Simulate. (iv) Compare with known dynamical sequence data. Is this not what we do in our minds when drawing signalling networks on the blackboard?

Acknowledgments

The author thanks the organizers R. Albert, A. Goldbeter, P. Ruoff, J. Sible and J.J. Tyson and the participants of the workshop ‘Biological Switches and Clocks’ at KITP, Santa Barbara, for creating a truly inspiring meeting. Two anonymous referees contributed to this article with their valuable comments. This research was supported in part by the National Science Foundation under grant no. PHY05-51164.

Footnotes

  • One contribution of 10 to a Theme Supplement ‘Biological switches and clocks’.

    • Received April 4, 2008.
    • Accepted May 6, 2008.

References
