## Abstract

The dynamical systems arising from gene regulatory, signalling and metabolic networks are strongly nonlinear, have high-dimensional state spaces and depend on large numbers of parameters. Understanding the relation between structure and function in such systems is a considerable challenge. We need tools to identify key points of regulation, illuminate such issues as robustness and control, and aid in the design of experiments. Here, I tackle this by developing new techniques for sensitivity analysis. In particular, I show how to globally analyse the sensitivity of a complex system by means of two new graphical objects: the sensitivity heat map and the parameter sensitivity spectrum. The approach to sensitivity analysis is global in the sense that it studies the variation in the whole of the model's solution rather than focusing on output variables one at a time, as in classical sensitivity analysis. This viewpoint relies on the discovery of local geometric rigidity for such systems, the mathematical insight that makes a practicable approach to such problems feasible for highly complex systems. In addition, I demonstrate a new summation theorem that substantially generalizes previous results for oscillatory and other dynamical phenomena. This theorem can be interpreted as a mathematical law stating the need for a balance between fragility and robustness in such systems.

## 1. Introduction

It has recently been emphasized that uncovering the design principles behind complex regulatory and signalling systems requires an analysis of degrees of complexity that cannot be grasped by intuition alone (Csete & Doyle 2002; Kitano 2002, 2004; Stelling *et al*. 2004*b*). This task requires some form of mathematical analysis and the discovery of some more universal principles. In particular, this is true of two related key aspects of the design principles problem: firstly, determining how such systems address the need for robustness and trade-off robustness of some aspects against fragility of others; and, secondly, determining the key points of regulation in such systems, aspects of the network that are crucial to its behaviour and control.

Because it identifies which parameters a given aspect of the system is most sensitive to, classical sensitivity analysis (Hwang *et al*. 1978; Kacser *et al*. 1995; Heinrich & Schuster 1996; Campolongo *et al*. 2000; Stelling *et al*. 2004*a*) is a very useful tool that has been used to address both of these aspects. However, apart from some summation theorems about the control coefficients for period and amplitude of free-running oscillators that are analogous to those derived in metabolic control analysis (Kacser *et al*. 1995; Heinrich & Schuster 1996; Fell 1997), there is currently rather little general theory for non-equilibrium networks. There is a great need to develop tools that give a more integrated picture of all the sensitivities of a system and to develop coherent, widely applicable principles underlying these sensitivities. To this end, we provide a compact and easily comprehensible representation of all the sensitivities and a precise statement of the robustness–fragility balance (a global summation theorem).

Control coefficients have been widely used particularly in the engineering sciences and metabolic control theory. In such applications, it is natural to fix a particular observable or performance measure *Q* of interest and then ask how sensitive this is to the various parameters. However, in many systems biology applications there are multiple performance measures of interest. For example, in the study of circadian oscillators one is interested in many aspects such as the free-running period, the strength of entrainment and the consequent phases of the various molecular products, the phase response curves, period and phase as a function of temperature and the response to different day lengths. For signalling systems such as that of the NF-κB system, one is interested in multiple aspects of the response to a signal related to its strength, timing, persistence, decay and transient, equilibrium or oscillatory structure. Moreover, in the search for key points of regulation there may be aspects, where the system is particularly sensitive, that do not correspond to obvious performance measures. Therefore, it would be extremely useful to have an effective approach that will find the sensitivity of all the performance measures and operating aspects of a given model.

The approach to sensitivity analysis developed here is a global one that studies the variation of the whole solution rather than focusing on just one output variable. In addition, this more global approach allows us to address which observable variables *Q* (henceforth called observables) are affected by which parameters *k*_{j} without having to choose the variable or parameter in advance. The results of this analysis can be summarized in

- *the sensitivity heat map (SHM)*, from which one is able to effectively identify those observables *Q* that are sensitive to some parameter, and
- *the parameter sensitivity spectrum (PSS)*, which characterizes the sensitivity of these observables and the system as a whole with respect to each parameter.

The crucial observation that makes the theory applicable in practice, by ensuring that for a given tolerance the above objects are compact and manageable, is that such network systems are rigid in the following sense. The map from parameters to the corresponding solutions of interest (a map from a high-dimensional space ℝ^{s} to an infinite-dimensional solution space) locally maps round balls to ellipsoids with axis lengths *σ*_{1} ≥ *σ*_{2} ≥ … ≥ *σ*_{s}, where the lengths *σ*_{i} decrease very rapidly. This is rigidity because random jiggling of the parameter vector in the high-dimensional parameter space results in variation of the solution of interest that effectively occupies a space of much lower dimension.

The sensitivity principal components (PCs) *U*_{i}(*t*) that we present in §2 are key components of our theory. These are multidimensional time series from which all system derivatives can be calculated and whose importance rapidly decreases as *i* increases. We show that, when the parameters being varied are a full set of linear parameters, the sensitivity PCs satisfy a global summation theorem which says that a certain linear combination of them sums to a function that is simply related to the original differential equation. This global summation theorem contains within it the other known simple summation theorems for dynamic systems such as those for the period and amplitude of an oscillatory solution of an unforced oscillator. However, it is a substantial generalization because it relates a set of functions rather than a set of numbers and thus is effectively an infinite number of simple summation conditions. Moreover, unlike the classical summation theorems, it applies to non-autonomous systems such as forced oscillators as well as to autonomous systems.

I have applied the theory to a broad range of examples (table 1), but for the purposes of discussion and illustration in this paper we will mainly consider two representative examples: a model of the mammalian circadian oscillator (Leloup & Goldbeter 2003) and a version of the Hoffmann model for the NF-κB signalling system (Hoffmann *et al*. 2002). The former is a reasonably representative example of a periodically oscillating system and for the latter the solution of interest is a transient solution produced by an incoming signal.

## 2. Results

Suppose we are considering a regulatory or signalling system modelled by the differential equation

d*x*/d*t* = *f*(*x*; **k**, *t*), (2.1)

where *t* is time; *x* = (*x*_{1}, …, *x*_{n}) are the state variables of the system; and **k** = (*k*_{1}, …, *k*_{s}) is a vector of parameters. The vector **k** may contain all the parameters, but we will also consider the case where it only contains some of them, the others being held fixed while only those in **k** are varied. For example, **k** may consist of just those parameters that the system is particularly sensitive to, or of just the linear parameters as defined in §2.5.

In general, there will be a solution or a class of solutions *g*(*t*), defined for a specific time range 0 ≤ *t* ≤ *T*, that are of particular interest. For example, for circadian oscillations the primary object of interest is an attracting periodic orbit of equation (2.1) and *T* will be the period of this orbit. On the other hand, for models of signalling systems, one is often interested in a solution that is not periodic but is defined by a given initial condition *x*_{0}. Such a signalling system is usually also subject to a given perturbation caused by an incoming signal and this will typically be modelled by a sudden change in a system parameter or by the time dependence of the r.h.s. of equation (2.1).

In regulatory and signalling systems, the values of two parameters may differ by an order of magnitude or more. Therefore, it is usually not appropriate to consider the absolute changes in the parameters *k*_{j}, but instead to consider the relative changes. A good way to do this is to introduce new parameters *η*_{j} = log *k*_{j}, because absolute changes in *η*_{j} correspond to relative changes in *k*_{j}. Then, for small changes *δk* to the parameters, *δη*_{j} = *δk*_{j}/*k*_{j}, so the changes *δη*_{j} are scaled and non-dimensional.
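As a minimal sketch (with hypothetical parameter values, not taken from any model discussed here), the scaling *η*_{j} = log *k*_{j} turns equal relative changes in parameters of very different magnitude into equal absolute changes in the scaled parameters:

```python
import numpy as np

# Hypothetical parameter vector spanning several orders of magnitude.
k = np.array([0.05, 1.2, 300.0])
eta = np.log(k)                      # scaled parameters eta_j = log k_j

# A 1% relative change in every k_j ...
delta_eta = np.log(1.01 * k) - eta   # resulting change in the scaled parameters

# ... gives the same non-dimensional change log(1.01) in each component of eta,
# regardless of the magnitude of the corresponding k_j.
print(delta_eta)
```

Each entry of `delta_eta` equals log(1.01), illustrating why perturbations expressed in the *η*_{j} are directly comparable across parameters.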

### 2.1 Fundamental observation

There are two aspects to the fundamental observation behind the tools and analysis presented here. The first is that for such regulatory and signalling systems there are the following easily computable objects:

- a set of *s* *n*-dimensional time series *U*_{1}(*t*), …, *U*_{s}(*t*), defined for 0 ≤ *t* ≤ *T*, which are of unit length and orthogonal to each other in the sense of equation (2.3),
- a decreasing sequence of *s* positive numbers *σ*_{1} ≥ *σ*_{2} ≥ … ≥ *σ*_{s} > 0 called *sensitivity singular values*, and
- a special set of new parameters *λ*_{1}, …, *λ*_{s} that are related to the original (scaled) parameter variations *δη* by an orthogonal linear transformation *W* (i.e. *λ* = *Wδη*),

with the following key property that connects them: if *δη* is any change in the (scaled) parameter vector then the change *δg* in the solution *g* of interest is given by

*δg*(*t*) = *σ*_{1}*λ*_{1}*U*_{1}(*t*) + *σ*_{2}*λ*_{2}*U*_{2}(*t*) + ⋯ + *σ*_{s}*λ*_{s}*U*_{s}(*t*) + *O*(|*δη*|²). (2.2)

The second aspect is that for a broad range of networks such as those in table 1, the amplitudes *σ*_{i} actually decrease very rapidly, usually exponentially in the sense that log *σ*_{i} decreases linearly with *i*, i.e. *σ*_{i} ≈ *C*e^{−*αi*} with *α* > 0.

The *U*_{i} are of unit length and orthogonal to each other in the following sense:

(1/*T*) ∫_{0}^{*T*} *U*_{i}(*t*)·*U*_{j}(*t*) d*t* = *δ*_{ij}, (2.3)

where *δ*_{ij} = 0 if *i* ≠ *j* and equals 1 if *i* = *j*. These are called *sensitivity PCs*.

We stress two points here: (i) that the given system and solution of interest determine the *U*_{i}, the *σ*_{i}, *W* and the *λ*_{i} and (ii) that the change *δg* is described by (2.2) in terms of these for any parameter perturbations *δη*.

It can easily be shown (see the electronic supplementary material) that the derivatives of the solution *g* with respect to the parameters *η*_{j} are given by

∂*g*/∂*η*_{j}(*t*) = Σ_{i} *S*_{ij}*U*_{i}(*t*), (2.4)

where *S*_{ij} = *σ*_{i}*W*_{ij}.

One can regard equation (2.3) as saying that *U*_{i} and *U*_{j} (*i*≠*j*) are uncorrelated as functions of time *t*. The derivatives ∂*g*/∂*η*_{j} and ∂*g*/∂*η*_{j′} will in general be correlated with each other, and writing them as in equation (2.4) is a decomposition of them into uncorrelated time series. Since the *σ*_{i} decay rapidly from a significant value we see that, in fact, the derivatives are highly correlated and their correlation is concentrated in a few components *U*_{i} with low values of *i*.

The usefulness of the *U*_{i}, the *σ*_{i} and the *S*_{ij} arises from a combination of the following facts:

- they are straightforward to compute (see §3), even for very complex models,
- classical sensitivity coefficients can be expressed in terms of them,
- when represented in a heat map (see below), one can rapidly map out all the sensitivities of a complex model, and
- since the amplitudes *σ*_{i} get small very quickly, for a broad class of network models it is usually necessary to consider only a small number of the dominant *U*_{i}.

Let us illustrate the fundamental observation by considering the two models mentioned above. For the modified Hoffmann model (Hoffmann *et al*. 2002), there are *n*=10 state variables corresponding to the concentrations of nuclear and cytoplasmic NF-κB and IκBα and their complexes plus IKK, and *s*=42 parameters most of which are rate constants. This is a simplified version of the model in Hoffmann *et al*. (2002) in which, of the IκB's, only IκBα is included and not IκBβ and IκBϵ. The solution *g*(*t*) considered is the transient orbit produced when an incoming signal at *t*=0 increases the level of IKK above the equilibrium level. The IKK is washed out at *t*=600 min. The mammalian clock model (Leloup & Goldbeter 2003) has *n*=16 state variables and *s*=53 parameters. Both have rapidly decreasing sensitivity spectra as is shown in figure 1. The *σ*_{1}-scaled sensitivity PC *σ*_{1}*U*_{1} for the above model of the mammalian oscillator is shown in figure 2 as a heat map. Although these two models have a large number of state variables and parameters, to study all their sensitivities that are no smaller than 5% of the biggest it is enough to consider only the first five *U*_{i}.

The results behind this observation about the rapid decay of the *σ*_{i} were first developed independently in Brown & Sethna (2003), Brown *et al*. (2004) and Rand *et al*. (2004, 2006). In the former work, the *σ*_{i} appeared as the square roots of the eigenvalues of the Hessian of the function that has to be minimized when doing least-squares fitting of parameters to data for such models. In the latter references they arose as part of an argument that the complex structure of circadian clocks is a result of the inflexibility of such systems despite the large number of parameters. The link between these two approaches is provided by the matrix *S* defined above (see equation (2.4)) by *S*_{ij} = *σ*_{i}*W*_{ij}. The square *F* = *S*^{t}*S* is an example of a Fisher information matrix and its eigenvalues are the *σ*_{i}². It can be shown that under certain conditions it is the mean value of the abovementioned Hessians (see the electronic supplementary material). In Waterfall *et al*. (2006), it is argued that such systems form a universality class and it will be important to determine whether this is true or whether there is a more mundane reason for this decay. More evidence for the seemingly universal rapid decay of the *σ*_{i} in tightly coupled systems biology models, and the consequences for parameter estimation, are discussed in Gutenkunst *et al*. (2007).

### 2.2 Classical sensitivity coefficients from the *U*_{i}

Typical classical sensitivity coefficients can be written in terms of the *U*_{i} and *S*_{ij}. As explained in the electronic supplementary material, many can be written either as a sum

*C*_{j}^{*Q*} = Σ_{ℓ,i,m} *a*_{ℓ,i,m} *E*_{i,m,j}(*t*_{ℓ}), (2.5)

for some finite set of times *t*_{ℓ} and coefficients *a*_{ℓ,i,m}, or as an integral over an interval of times, where *E*_{i,m,j}(*t*) is either

*S*_{ij}*U*_{i,m}(*t*) or *S*_{ij}*U̇*_{i,m}(*t*), (2.6)

and *U̇*_{i,m} is the derivative of *U*_{i,m} with respect to time *t*. Examples involving a sum include the control coefficients of phase and amplitude for forced oscillators and the time for signals to peak in signalling systems. Examples involving an integral include the Fourier transforms of the components of the solution (reflecting changes in the shapes of the time series). Thus, we conclude that the control coefficients of interest are all linear sums or integrals over *t*_{ℓ} of terms that are of the form given in (2.6). This fact is key to understanding the use of the SHMs.

In the electronic supplementary material the reader will find a table listing some key observables for oscillators and signalling systems and giving the expressions for their control coefficients in terms of the *U*_{i,m} using formula (2.5).

### 2.3 SHM and parameters sensitivities graphically summarize all the system's sensitivities

We now discuss how to analyse the sensitivity of such a complex dynamical system globally using the SHM and the PSS (figures 3 and 6). They allow us to graphically analyse what observables are significantly changed by what parameters. We do not have to fix the observable or parameter in advance but let the model decide what the most salient observables are. The SHM and PSS are intrinsic to the system and characterize its sensitivity in a global fashion.

The strategy is to

(i) use the SHM to identify all those times *t*_{ℓ} and indices *i* and *m* that correspond to the significantly large terms of the form given in equation (2.6), and thus to effectively determine which observables *Q* have a significantly large control coefficient *C*_{j}^{*Q*} for some parameters *k*_{j}, and then to

(ii) use the PSS to identify, for those *Q* from (i), which of the parameter indices *j* have significantly large |*S*_{ij}|.

#### 2.3.1 Sensitivity heat map

Suppose

*f*_{i,m}(*t*) = *σ*_{i}*U*_{i,m}(*t*) (2.7)

and

*ḟ*_{i,m}(*t*) = *σ*_{i}*U̇*_{i,m}(*t*). (2.8)

(Note that *S*_{ij}*U*_{i,m}(*t*) = *W*_{ij}*f*_{i,m}(*t*), |*W*_{ij}| ≤ 1, and the *σ*_{i} are decreasing rapidly for the systems of interest.)

Then |*S*_{ij}*U*_{i,m}(*t*)| ≤ |*f*_{i,m}(*t*)| and |*S*_{ij}*U̇*_{i,m}(*t*)| ≤ |*ḟ*_{i,m}(*t*)|. Thus, if *C*_{j}^{*Q*} is a linear combination of terms as in equation (2.6) using a given *m* and a given set of times *t*_{ℓ}, the following is true: if *f*_{i,m} and *ḟ*_{i,m} are small for all those values of *t*_{ℓ}, then *C*_{j}^{*Q*} must be small.

Therefore, to determine which observables can have a significant control coefficient we need to determine all *i*, *m* and *t*_{ℓ} such that either *f*_{i,m}(*t*_{ℓ}) or *ḟ*_{i,m}(*t*_{ℓ}) has a significant value. To do this we fix a small threshold *τ* (e.g. 1% of the maximum value achieved by all the *f*_{i,m} and *ḟ*_{i,m}) and identify all pairs (*i*,*m*) such that either |*f*_{i,m}(*t*)| or |*ḟ*_{i,m}(*t*)| exceeds *τ* for some *t*. Luckily, since these sizes are comparable to *σ*_{i}, there are relatively few pairs (*i*,*m*) for which *f*_{i,m} or *ḟ*_{i,m} have to be plotted: in the examples studied so far about twice the number of state variables.

These *f*_{i,m} and *ḟ*_{i,m} are then plotted in the SHM. Since relatively few *f*_{i,m} or *ḟ*_{i,m} have to be plotted, the heat map is compact and therefore convenient. For each such pair (*i*,*m*) we inspect the plots in the SHM to determine the set *T*_{i,m} of times such that *f*_{i,m}(*t*) or *ḟ*_{i,m}(*t*) is significantly large. This achieves step (i) above.
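The thresholding step just described can be sketched in a few lines. The arrays below are synthetic placeholders (random PCs and an artificially decaying *σ* spectrum), not output from a real model; the point is only the selection logic: keep the pairs (*i*,*m*) whose scaled component *f*_{i,m} = *σ*_{i}*U*_{i,m}, or its time derivative, ever exceeds the tolerance *τ*.

```python
import numpy as np

# Synthetic stand-ins for the sensitivity PCs and singular values.
rng = np.random.default_rng(0)
s, n, N = 5, 3, 100                      # no. of PCs, state variables, time points
dt = 0.1
sigma = 10.0 ** -np.arange(s)            # rapidly (exponentially) decaying sigma_i
U = rng.standard_normal((s, n, N))       # U[i, m, :] ~ component m of PC i over time

f = sigma[:, None, None] * U             # f_im(t) = sigma_i * U_im(t)
fdot = np.gradient(f, dt, axis=2)        # time derivative of f_im

# Threshold tau: 1% of the largest value attained by any f_im or fdot_im.
tau = 0.01 * max(np.abs(f).max(), np.abs(fdot).max())

# Keep only the pairs (i, m) that are ever significant; only these need plotting.
hot_pairs = [(i, m) for i in range(s) for m in range(n)
             if np.abs(f[i, m]).max() > tau or np.abs(fdot[i, m]).max() > tau]
print(hot_pairs)
```

Because the *σ*_{i} decay exponentially, only pairs with small *i* survive the test, which is why the resulting heat map stays compact.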

SHMs for the mammalian clock model and the NF-κB signalling systems are shown in figures 3 and 6.

#### 2.3.2 Parameter sensitivity spectrum

The matrix *S* characterizes the sensitivity of the system with respect to each parameter. Recall that, up to second-order terms that are *O*(|*δη*|²), the variation *δg* produced by a parameter variation *δη* is

*δg*(*t*) = Σ_{i,j} *S*_{ij}*δη*_{j}*U*_{i}(*t*),

since *S*_{ij} = *σ*_{i}*W*_{ij} and *λ* = *Wδη*. We see that *S*_{ij} completely determines the effect of small changes *δη*_{j} in the *j*th parameter *η*_{j}. Moreover, since the *U*_{i}(*t*) are orthogonal in time-series space, the *S*_{ij} act in independent directions and efficiently parametrize the derivatives ∂*g*/∂*η*_{j}. In fact, the *S*_{ij} give a representation of the derivatives that is optimal in that it maximizes the effect of terms with low *i* (for a precise statement see the electronic supplementary material).

In figure 4 we see that, for the model of the mammalian circadian clock, the magnitude of the *S*_{ij} decreases rapidly with *i* and relatively few of them are significant. In figures 3 and 6, the *S*_{ij} are plotted as a grouped bar chart with the parameters *k*_{j} reordered according to the size of their sensitivity. Using this, for a given value of *i* we can immediately identify the strength of each parameter in moving the solution *g* in the direction of *U*_{i}. Although not monotonically decreasing in *i*, the *S*_{ij} nevertheless rapidly get small as *i* increases. This can be seen in figure 4, where very few of the *S*_{ij} have a magnitude greater than one per cent of the largest |*S*_{ij}|. Therefore, we only have to consider the *S*_{ij} for a few values of *i* and the grouped bar chart can be restricted to these.

Thus, if we (i) use the SHM to determine the set *T*_{i,m} of times *t* such that either *f*_{i,m}(*t*) or is significantly large, and (ii) use the PSS to identify those parameters *η*_{j} such that |*S*_{ij}| is significantly large we obtain a set of triples (*i*,*m*,*j*) that give the significant terms of the form in equation (2.6). These are called hot. We can then conclude that the control coefficients that are significant are those which involve terms of the form given in equation (2.6) where (*i*,*m*,*j*) is hot and the times *t*_{ℓ} are in *T*_{i,m}.

### 2.4 Signalling: the NF-κB system

There are now a number of models of the NF-κB system (see the references in Tiana *et al*. 2007). For illustrative purposes, we consider a modified version of the model due to Hoffmann *et al*. (2002), although a similar analysis can be, and in most cases has been, applied to the other models. There are *n*=10 state variables corresponding to the concentrations of nuclear and cytoplasmic NF-κB and IκBα and their complexes plus IKK, and *s*=42 parameters *k*_{j} most of which are rate constants. The solution *g*(*t*) considered is the transient orbit produced when at *t*=0 an incoming signal increases the level of IKK above the equilibrium level. The IKK is washed out at *t*=600 min. A conventional sensitivity analysis of a related model was carried out in Ihekwaba *et al*. (2004) and Cox (2005) and an analysis based on the variation in the full solution was carried out in Yue *et al*. (2006).

The solution of interest is shown in figure 5. For oscillators it is clear how to choose the length *T* of the time period under consideration. For signalling systems like this, the choice of *T* depends upon the problem being considered. For example, if one is only interested in the initial response, then *T* will be chosen small, while if one is interested in the full response, then a longer period will be chosen.

For the purposes of illustration let us suppose that we are interested in the first two oscillations (i.e. until *t*=200) and in the full trajectory (i.e. until *t*=1000).

We see from figure 5 that the different components *g*_{i}(*t*) of the solution have very different amplitudes. This raises the problem that parameter changes will tend to produce larger absolute changes in those variables with larger magnitudes. Therefore, it will usually be the case that, in situations like this, relative changes in the *g*_{i} are more appropriate than absolute ones. One way to allow for this is to use *δg*_{i}/*g*_{i} instead of *δg*_{i}. However, this is not sensible in this case as for some times *t*, *g*_{i}(*t*) is very close to 0. When this is the case it is usually more appropriate to normalize and non-dimensionalize the *g*_{i} by dividing by the mean value or some other appropriate measure to obtain a scaled solution and then to consider the control coefficients of this scaled solution.
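The mean-value normalization suggested above can be sketched as follows. The two components here are synthetic illustrations (not NF-κB model output): one is large in amplitude, the other is small and repeatedly touches zero, which is exactly the case where dividing by *g*_{i}(*t*) pointwise would fail but dividing by the mean remains well defined.

```python
import numpy as np

# Synthetic solution with components of very different magnitudes.
t = np.linspace(0.0, 10.0, 101)
g = np.stack([100.0 * np.exp(-0.2 * t),   # large-amplitude component
              0.01 * np.sin(t) ** 2])     # small component that touches 0

# Normalize and non-dimensionalize each component by its mean over the window.
g_mean = g.mean(axis=1, keepdims=True)
g_scaled = g / g_mean                     # scaled, non-dimensional solution
```

After scaling, each component has mean 1 over the time window, so parameter-induced changes in the two components are directly comparable.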

Using the software described in the electronic supplementary material the analysis of this system takes a few seconds and in figure 6 we show the SHM and the appropriate rows of the sensitivity matrix. We apply this to discuss the sensitivity of the peak values and their timing for the sequence of oscillations (figure 7).

### 2.5 Summation law

Like certain metabolic control coefficients, the sensitivity PCs satisfy a summation law. This law can be interpreted as a mathematical statement of the idea (e.g. Csete & Doyle 2002; Kitano 2002, 2004) that there is a balance between fragility and robustness in systems like those we study and that increasing robustness in parts will increase fragility in others.

This result holds when the parameters being considered are a full set of linear parameters, i.e. they are the parameters in front of the terms which make up *f*, with such a parameter in front of every term. A precise definition of such a set is as follows: it satisfies *f*(*x*; *ρ***k**, *t*) = *ρf*(*x*; **k**, *t*) for all *ρ* > 0. There may be other parameters but we consider here the case where these are held fixed and only the linear parameters are varied so that the parameter vector **k** just consists of these parameters.

We first consider (i) autonomous systems (i.e. when *f* does not depend explicitly on *t*) and (ii) non-autonomous systems where the solution of interest *g*(*t*) is defined by its initial condition as in signalling systems (i.e. is the solution of the differential equation with a given fixed initial condition). Then the summation law is

Σ_{j} ∂*g*/∂*η*_{j}(*t*) = *Φ*(*t*). (2.9)

The function *Φ* is given by

*Φ*(*t*) = ∫_{0}^{*t*} *X*(*s*,*t*) *f*(*g*(*s*), *s*) d*s*, (2.10)

where *X*(*s*,*t*) is the *n*×*n* matrix solution of the variational equation d*X*/d*t* = *J*(*t*)*X* with initial condition given by *X*(*s*,*s*) being the identity matrix. In the above equation, *J*(*t*) is the Jacobian of *f* evaluated at *x* = *g*(*t*).

In the remaining case where the solution of interest is a periodic solution of a non-autonomous system, the summation law is(2.11)where *Φ* is as above and *τ* is the period.

When the system is autonomous then *X*(*s*,*t*)*f*(*g*(*s*)) = *ġ*(*t*) and therefore *Φ*(*t*) = *tġ*(*t*), so the summation law reduces to

Σ_{j} ∂*g*/∂*η*_{j}(*t*) = *tġ*(*t*). (2.12)

From equation (2.12), one can deduce the following known summation laws for free-running oscillations with period *τ* and amplitude *A*_{m} for the *m*th state variable (see §5.8.5 of Heinrich & Schuster 1996):

Σ_{j} *C*_{j}^{*τ*} = −1 and Σ_{j} *C*_{j}^{*A*_{m}} = 0.

However, these can also be proved in a much easier manner using the fact that scaling the linear parameters corresponds to scaling time. Again the sums are over just the linear parameters.

Note that for autonomous systems Σ_{i,j} *σ*_{i}*W*_{ij}*U*_{i}(*t*) = *tġ*(*t*) (sum over all *i* and *j*). Thus, if we have exponential decay of the *σ*_{i}, *σ*_{1}(Σ_{j} *W*_{1j})*U*_{1}(*t*) ≈ *tġ*(*t*) because the terms with *i* > 1 are negligible. Therefore, if *σ*_{2} is small compared to *σ*_{1}, as is often the case, we deduce that *U*_{1}(*t*) is approximately proportional to *tġ*(*t*). But, for oscillators, *tġ*(*t*) is the infinitesimal generator of a change in the period of *g*, i.e. *tġ*(*t*) is the derivative at *φ*=1 of *g*(*φt*). Thus, in this case *U*_{1}(*t*) is roughly proportional to an infinitesimal period change.
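The autonomous summation law can be verified numerically on a toy example. The system below, d*x*/d*t* = −*kx* with *k* the only (linear) parameter, is a hypothetical illustration chosen because everything is available in closed form: *g*(*t*) = *x*_{0}e^{−*kt*}, and the sum over linear parameters of ∂*g*/∂*η*_{j} should equal *t* d*g*/d*t*.

```python
import numpy as np

# Toy check of the autonomous summation law for dx/dt = -k x, where k is the
# single linear parameter and eta = log k. (Hypothetical example, not one of
# the models analysed in the paper.)
k, x0 = 0.7, 2.0
t = np.linspace(0.0, 5.0, 50)

g = x0 * np.exp(-k * t)                   # solution of interest
dg_deta = k * (-t * x0 * np.exp(-k * t))  # dg/deta = k * dg/dk
t_gdot = t * (-k * g)                     # t * dg/dt

# The two sides of the summation law agree at every time point.
assert np.allclose(dg_deta, t_gdot)
```

Here the agreement is exact because there is only one parameter; for a model with many linear parameters the left-hand side becomes the sum of the ∂*g*/∂*η*_{j} over all of them.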

## 3. Methods

The mathematical object underlying this analysis is a matrix *M* that is made up from the partial derivatives ∂*g*_{m}/∂*η*_{j}(*t*), where *η*_{j} = log *k*_{j}. We restrict time *t* to a discrete set of *N* equally spaced values *t*_{1}, …, *t*_{N} and for each parameter *k*_{j} and each state variable *x*_{m} consider the column vectors *r*_{m,j} = (∂*g*_{m}/∂*η*_{j}(*t*_{1}), …, ∂*g*_{m}/∂*η*_{j}(*t*_{N}))^{t}. For each *j* we concatenate the *r*_{m,j} into a single column vector *r*_{j} and then consider the matrix *M* whose *j*th column is *r*_{j}.

This matrix is a time-discretized version of the linear operator that associates with each change of scaled parameters the linearized change *δg* in the solution of interest *g* that is in the infinite-dimensional space of appropriate *n*-dimensional time series.

In order to ensure that in the limit Δ*t* → 0 the singular value decomposition (SVD) of *M* is independent of the choice of the time discretization (assumed for the moment to be independent of *i*), we normalize *M* by (Δ*t*/*T*)^{1/2} and consider instead *M*_{1} = (Δ*t*/*T*)^{1/2}*M*.

We use the version of SVD that is often called thin SVD. Since *M*_{1} has *nN* rows and *s* columns (i.e. is *nN*×*s*), this thin SVD is a decomposition into a product of the form *M*_{1} = *U*Σ*V*^{t} (superscript 't' denotes transpose), where *U* is an *nN*×*s* orthonormal matrix (i.e. *U*^{t}*U* = *I*_{s}), *V* is an *s*×*s* orthonormal matrix and Σ = diag(*σ*_{1}, …, *σ*_{s}) is a diagonal matrix. The elements *σ*_{1} ≥ *σ*_{2} ≥ … ≥ *σ*_{s} ≥ 0 are the *singular values* of *M*_{1}. The matrix *W* is the inverse of *V* and since *V* is orthogonal *W* = *V*^{t}.

The columns *U*_{j} of *U* are orthogonal unit vectors and can be augmented to provide an orthonormal basis for the space of discretized time series. Like the *r*_{j}, they are in concatenated form. To restore them to their form as time series in *n*-dimensional space, the concatenation must be undone, but this is straightforward. The *U*_{j}(*t*_{i}) then approximate the sensitivity PCs *U*_{j}(*t*). From *M*_{1} = *U*Σ*V*^{t} one immediately deduces that *M*_{1}*V*_{j} = *σ*_{j}*U*_{j}, where *V*_{j} is the *j*th column of *V*. The fundamental equation (2.2) follows directly from this.
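The construction described in this section can be sketched end to end in a few lines of NumPy. The model below is a hypothetical toy (two state variables, three parameters, with a closed-form "solution of interest"), not one of the models in the paper, and the derivatives ∂*g*_{m}/∂*η*_{j} are estimated by central finite differences in *η*_{j} = log *k*_{j} rather than by solving variational equations.

```python
import numpy as np

def g(k, t):
    # Toy "solution of interest": n = 2 state variables, s = 3 parameters.
    return np.stack([k[2] * np.exp(-k[0] * t),
                     np.exp(-k[1] * t) + np.exp(-k[0] * t)])

T, N = 10.0, 200
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]
k0 = np.array([0.3, 1.1, 2.0])

# Build M column by column: central differences in eta_j = log k_j.
h = 1e-5
cols = []
for j in range(len(k0)):
    kp, km = k0.copy(), k0.copy()
    kp[j] *= np.exp(h)                # eta_j -> eta_j + h
    km[j] *= np.exp(-h)               # eta_j -> eta_j - h
    dg_deta = (g(kp, t) - g(km, t)) / (2.0 * h)
    cols.append(dg_deta.ravel())      # concatenate the n time series into r_j
M = np.column_stack(cols)             # nN x s matrix of sensitivities

M1 = np.sqrt(dt / T) * M              # normalization so the SVD has a dt -> 0 limit
U, sigma, Vt = np.linalg.svd(M1, full_matrices=False)  # thin SVD: M1 = U diag(sigma) V^t

# sigma: decreasing sensitivity singular values; columns of U: discretized,
# concatenated sensitivity PCs; S = diag(sigma) W with W = V^t.
S = np.diag(sigma) @ Vt
```

Undoing the concatenation (reshaping each column of `U` to shape `(n, N)`) recovers the PCs as *n*-dimensional time series, and plotting `sigma` on a log scale exhibits the rapid decay discussed in §2.1.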

If it is appropriate to scale the solution *g* as above in §2.4 then, in the definition of *M*, we use the derivatives for the scaled solution rather than those for *g*. If it is preferred to use the original variables *k*_{j} instead of the scaled ones *η*_{j} = log *k*_{j}, then we just use the derivatives with respect to *k*_{j} instead of *η*_{j} in the definition of *M*.

## 4. Discussion

There is a pressing need for effective tools with which to probe how a network's function depends upon its structure and parameters. The development of such tools presents many challenges because typically (even when they have relatively few components) these networks have significant complexity, are highly nonlinear and the states of interest are dynamical and non-equilibrium. In particular, they involve large numbers of state variables and even larger numbers of parameters. Kitano (2007) points out that a solid theoretical foundation for biological robustness is yet to be established, that this represents a key challenge in systems biology, and begins a discussion of how it can be achieved.

However, Brown & Sethna (2003), Brown *et al*. (2004) and Rand *et al*. (2004, 2006) uncovered a surprising property of such systems that aids the construction of such tools. This is the local geometric rigidity described in §2.1: variation in the high-dimensional parameter space causes variations of the solution of interest that effectively occupy a space of much lower dimension.

We have shown that the fundamental observation enables a more global approach to sensitivity analysis in which we do not have to fix an observable function in advance but can instead effectively consider the effect of all the parameters on all reasonable observables. As a result we are able to represent all the sensitivities of these complex dynamical systems in terms of a pair of relatively simple graphical objects, the SHM and the PSS.

Since *F* = *S*^{t}*S* is intimately related to the Fisher information matrix for such systems, it is clear that the approach presented here will be useful in developing techniques for experimental optimization (Brown *et al*. 2004).

Our approach is local in phase space, estimating the structure of the model in a small neighbourhood of a given set of parameter values. An important task for the future is to extend this to a theory that is more global in parameter space. This will require the development of tools that allow one to sew together the local domains. Luckily, all the computations used in this paper are very fast and can be carried out on relatively small computers. Moreover, many of the computations can be effectively parallelized. Thus, it is probable that this task is quite practical from a computational point of view. This more global approach will be of relevance to algorithms that search parameter or structure space. These spaces are very high dimensional and one needs help in determining in which direction to move. The current theory suggests how to do this since only movement in the directions of the dominant PCs produces substantial changes in the system.

Another limitation is that the approach presented here, being deterministic in nature, does not make any use of the significant amount of information contained in the stochastic fluctuations in data. The ability to incorporate this into the approach would be a significant addition.

We mentioned above that in order to allow for the fact that different parameters and different components *g*_{i}(*t*) of the solution may differ in size by an order of magnitude or more, it is usually appropriate to scale the parameters (i.e. take as the parameters) and/or to scale the components *g*_{i}(*t*). The choice of whether to scale one or other or both of these depends upon the context. It is also sometimes natural not to scale either; for example, this is sometimes the case when using this approach for experimental optimization.

## Acknowledgments

The software used in my analysis was developed with Paul Brown and much of it was originally developed in collaboration with Boris Shulgin. I am very grateful to both of them and also to Nigel Burroughs and How Sun Jow for their discussions on the experimental optimizations that are related to some aspects discussed here. I had some very useful discussions with David Broomhead and Mark Muldoon about how to prove the fundamental observation. I am very grateful to Hugo van den Berg for a critical reading of a draft manuscript. I also thank Sanyi Tang, Andrew Millar, Bärbel Finkenstadt, Isabelle Carré and John Tyson for their useful discussions on these topics, the KITP for its hospitality and the BBSRC, EPSRC and EU (BioSim Network Contract no. 005137) for funding. I currently hold an EPSRC Senior Research Fellowship.

## Footnotes

Electronic supplementary material is available at http://dx.doi.org/10.1098/rsif.2008.0084.focus or via http://journals.royalsociety.org.

One contribution of 10 to a Theme Supplement ‘Biological switches and clocks’.

- Received February 27, 2008.
- Accepted April 16, 2008.

- © 2008 The Royal Society