A geometric method for eigenvalue problems with low-rank perturbations

We consider the problem of finding the spectrum of an operator taking the form of a low-rank (rank one or two) non-normal perturbation of a well-understood operator, motivated by a number of problems of applied interest which take this form. We use the fact that the system is a low-rank perturbation of a solved problem, together with a simple idea of classical differential geometry (the envelope of a family of curves) to completely analyse the spectrum. We use these techniques to analyse three problems of this form: a model of the oculomotor integrator due to Anastasio & Gad (2007 J. Comput. Neurosci. 22, 239–254. (doi:10.1007/s10827-006-0010-x)), a continuum integrator model, and a non-local model of phase separation due to Rubinstein & Sternberg (1992 IMA J. Appl. Math. 48, 249–264. (doi:10.1093/imamat/48.3.249)).

- Stability of stationary solutions of a model for phytoplankton growth [9].
Four of the models take the form of a reaction-diffusion equation with a non-local term. The study of the stability of stationary solutions of such models naturally leads to an eigenvalue problem in the form of a self-adjoint Sturm-Liouville operator plus a finite-rank perturbation coming from the non-local term. Freitas [3] has considered a similar problem and has some related results for a single perturbation (rather than a two-parameter family); Bose & Kriegsmann [8] have some other related results, mainly in the case f_i = g_i, where the problem is self-adjoint. We refer the interested reader to the review paper of Freitas [10], which details a number of models whose stability problems take the form of a 'nice' operator with a low-rank (typically rank-one) perturbation.
In all of these problems, the eigenvalue problem arises in the study of the stability of a particular steady state. In this situation, one is typically interested in understanding qualitative properties of the spectrum as a function of the parameters (ρ1, ρ2). In particular, for any particular pair (ρ1, ρ2), one might wish to understand
- how many eigenvalues are in the right half-plane, and
- how many eigenvalues are real (versus complex).
Here we give a direct way to construct a phase diagram in the (ρ1, ρ2) plane which answers these questions.
The main approach that we take here is to exploit the low-rank nature of the perturbations, along with some geometric constructions for the quantities of interest. Since the eigenvalues vary continuously as a function of the parameters (ρ1, ρ2), the quantities noted above are constant on open sets, with the boundaries of these sets being, respectively,
- the set of (ρ1, ρ2) for which M̂ has a purely imaginary eigenvalue λ = iω (including λ = 0), and
- the set of (ρ1, ρ2) for which M̂ has a double real eigenvalue.
Knowledge of these boundary sets, together with knowledge of the spectrum of the unperturbed operator M, would therefore enable us to study stability in the entire plane.
In passing we note that, while the unperturbed operator M is self-adjoint in some of the examples presented here, self-adjointness is not strictly necessary. What is necessary is that the spectrum of the unperturbed operator M is known (at least qualitatively), so that we have a baseline with which to compare the perturbed operator (much as a constant of integration fixes a particular solution of a differential equation). In each example studied here, the unperturbed operator M will have a purely real spectrum.
Our main result in this paper will be to identify, and give a recipe for computing, a set of geometric quantities associated with the spectrum of a rank-two perturbation of a well-known operator, which can be used to analyse the perturbed operator in the entire plane: this is done in §2. We will then apply our technique to three specific problems which can be written in the form of equation (1.1). The first is a model of a coupled brainstem-cerebellum neuronal network called the oculomotor integrator (§3); the second is a continuum version of that model, in which the (relatively numerous) brainstem neurons are replaced by a neural 'line' (§4). Finally, we analyse a stability problem that arises in a non-local reaction-diffusion equation [2] (§5). In this last problem, we also use an intermediate result from §2 (the Aronszajn-Krein formula and its consequences) to prove a new theorem about stability of stationary solutions.

Basic calculations
We begin with some general dimension-counting arguments. The matrix M̂ is assumed to be a real N × N matrix. Real non-symmetric matrices generically have a real eigenvalue of multiplicity higher than one on a set of codimension one. In a two-parameter model such as we are considering here, this codimension-one set divides the parameter space into open sets having a constant number of real eigenvalues. As one crosses this set, the number of real eigenvalues changes by (generically) two. This motivates the following definition.

Definition 2.1. We define the bifurcation curve V to be the locus of points (ρ1, ρ2) for which M̂ has a real eigenvalue of multiplicity two or higher.
For matrix problems, of course, there exists an algebraic procedure for determining the values in the (ρ1, ρ2) plane where the matrix has multiple eigenvalues. One can simply compute the discriminant (in λ) of the characteristic polynomial of the matrix M̂, disc_λ(det(M̂ − λI)) = P(ρ1, ρ2), which gives a polynomial in the parameters (ρ1, ρ2). The variety defined by the zero set of this polynomial, V = {(ρ1, ρ2) | P(ρ1, ρ2) = 0}, is the bifurcation curve. Unfortunately, this computation is not practical to carry out analytically for real problems: for a large matrix, P(ρ1, ρ2) will be a polynomial of large degree and the zero set will be difficult to compute. For the case of operators, even very nice ones, it is not clear that the discriminant makes sense at all.
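For a small matrix the discriminant computation is entirely mechanical; the following sympy sketch carries it out for a 2 × 2 toy example of our own (not one of the models from the text), where the bifurcation curve can be read off exactly:

```python
import sympy as sp

rho1, rho2, lam = sp.symbols('rho1 rho2 lambda')

# Toy rank-two perturbation of M = diag(-1, -2):
# Mhat = M + rho1*e1*e2^T + rho2*e2*e1^T
Mhat = sp.Matrix([[-1, rho1],
                  [rho2, -2]])

# Characteristic polynomial: lambda**2 + 3*lambda + 2 - rho1*rho2
charpoly = sp.expand((Mhat - lam * sp.eye(2)).det())

# Discriminant in lambda: P(rho1, rho2) = 1 + 4*rho1*rho2
P = sp.discriminant(charpoly, lam)
print(P)

# The bifurcation curve V is the zero set, here the hyperbola rho1*rho2 = -1/4.
```

For an N × N matrix the analogous `P` has degree growing with N, which is the impracticality referred to above.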
Instead, we use the fact that the perturbations are of finite rank to give an explicit rational or algebraic parametrization of the bifurcation curve. We begin by stating a preliminary lemma, which is essentially the Aronszajn-Krein formula for rank-one perturbations. In the case where g1 and g2 (or f1 and f2) are linearly dependent, Q(λ) ≡ 0.
Proof. Owing to the rank-two nature of the perturbation, the characteristic polynomial can contain no powers of ρ1 or ρ2 above the first. The easiest way to see this is via multilinear algebra. The determinant is clearly polynomial in λ, ρ1, ρ2. A general term of the form ρ1^j ρ2^k comes from the wedge product of j factors of g1, k factors of g2 and N − (j + k) columns from M − λI. Any term with more than one factor of g1 or more than one factor of g2 must therefore vanish. Hence the determinant is of the form given in equation (2.1).
Finally, if g1 and g2 are linearly dependent, then any wedge product including both g1 and g2 will vanish, so that Q(λ) ≡ 0.
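The lemma's structure, det(M̂ − λI) = D(λ) + ρ1 P1(λ) + ρ2 P2(λ) + ρ1ρ2 Q(λ), and its resolvent reformulation can both be verified symbolically. The sketch below uses an arbitrary 4 × 4 integer matrix and perturbation vectors of our own choosing:

```python
import sympy as sp

rho1, rho2, lam = sp.symbols('rho1 rho2 lambda')

# Arbitrary toy data: M and rank-one factors f_i g_i^T.
M = sp.Matrix([[-3, 1, 0, 2],
               [ 0,-1, 1, 0],
               [ 1, 0,-2, 1],
               [ 0, 2, 0,-4]])
f1 = sp.Matrix([1, 0, 2, 0]); g1 = sp.Matrix([0, 1, 0, 1])
f2 = sp.Matrix([0, 1, 0, 3]); g2 = sp.Matrix([1, 0, 1, 0])

Mhat = M + rho1 * f1 * g1.T + rho2 * f2 * g2.T
chi = sp.expand((Mhat - lam * sp.eye(4)).det())

# chi is affine in each of rho1, rho2 separately (lemma above).
assert sp.degree(chi, rho1) <= 1 and sp.degree(chi, rho2) <= 1

p = sp.Poly(chi, rho1, rho2)
D  = p.coeff_monomial(1)
P1 = p.coeff_monomial(rho1)
P2 = p.coeff_monomial(rho2)
Q  = p.coeff_monomial(rho1 * rho2)
assert sp.expand(D + rho1*P1 + rho2*P2 + rho1*rho2*Q - chi) == 0

# Resolvent form: chi / det(M - lam*I) = det(I_2 + diag(rho) G^T R_lam F),
# with R_lam = (M - lam*I)^{-1}, F = [f1 f2], G = [g1 g2]; check at a point.
vals = {rho1: sp.Rational(1, 3), rho2: -2, lam: sp.Rational(1, 2)}
R = (M - vals[lam] * sp.eye(4)).inv()
F = f1.row_join(f2); G = g1.row_join(g2)
W = sp.eye(2) + sp.diag(vals[rho1], vals[rho2]) * G.T * R * F
assert sp.simplify(chi.subs(vals) - (M - vals[lam]*sp.eye(4)).det() * W.det()) == 0
```

The final check is Sylvester's determinant identity; it is the matrix version of the resolvent formula used below for operators.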
The explicit form of the polynomials P_i(λ), Q(λ) is easy to compute from the above construction (a detailed derivation is provided in appendix A). If we let cof^t(M − λI) denote the transpose cofactor matrix of M − λI, then one has the following formulae: If in addition M is self-adjoint, we have alternative formulae, in which the polynomial form of Q(λ), etc., is even more evident: and where λ_i and φ_i are the eigenvalues and eigenvectors of the unperturbed matrix M.

It is often more convenient to divide equation (2.1) by det(M − λI) to put the eigenvalue condition in the form (2.4), or, even more simply when Q(λ) = 0, This form has the advantage that it is expressed in terms of resolvents (i.e. R_λ ≡ (M − λI)^(−1)), which are defined very generally for operators, rather than determinants and cofactors, which are not.

One geometric way to interpret this characteristic polynomial is as defining a one-parameter family of rational curves, the curves of constant eigenvalue. For each value of λ, equation (2.1) defines a curve in the (ρ1, ρ2) plane along which λ is an eigenvalue. For instance, the matrix has zero as an eigenvalue along the curve in the (ρ1, ρ2) plane. In the special case where Q(λ) ≡ 0, equation (2.1) defines a one-parameter family of lines. Given a family of curves, it is always fruitful to consider its envelope, which is the simultaneous solution to (2.5) and (2.6). Whereas each curve in the family encodes information on the location of a particular eigenvalue, the envelope of the family encodes information on eigenvalue coincidence: this is the content of the next lemma. To simplify notation, we use the wedge notation for Wronskians: f ∧ g = fg′ − f′g.

Lemma 2.3. The solutions to (2.5), (2.6) give the bifurcation curve V.
If Q(λ) is not identically zero, the bifurcation curve V is generically given by the union of a pair of parametric curves; if Q(λ) is identically zero, the bifurcation curve is generically given by a single parametric curve.

Proof. The equivalence of the envelope and the discriminant is standard; see, for instance, Bruce & Giblin [11] or Spivak [12]. Generically, one just has to solve equations (2.5) and (2.6) for ρ1 and ρ2, respectively. This amounts to solving a linear and a quadratic equation in the general case, and a pair of linear equations in the special case Q(λ) ≡ 0.
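The envelope construction in lemma 2.3 can be traced through explicitly on a 2 × 2 toy family of our own (not one of the models from the text), and compared against the discriminant:

```python
import sympy as sp

rho1, rho2, lam = sp.symbols('rho1 rho2 lambda')

# Constant-eigenvalue family F = det(Mhat - lam*I) for the toy matrix
# Mhat = [[-1, rho1], [rho2, -2]]; here P1 = P2 = 0 and Q = -1.
F = lam**2 + 3*lam + 2 - rho1*rho2

# Envelope: solve F = 0 and dF/dlam = 0 simultaneously.
lam_env = sp.solve(sp.diff(F, lam), lam)[0]     # 2*lambda + 3 = 0
s = sp.Symbol('s')                              # s stands in for rho1*rho2
env = sp.solve(F.subs(lam, lam_env).subs(rho1 * rho2, s), s)[0]
print(lam_env, env)   # -3/2 and -1/4: envelope is the hyperbola rho1*rho2 = -1/4

# Cross-check: the envelope coincides with the discriminant's zero set.
disc = sp.discriminant(F, lam)
assert sp.simplify(disc - (1 + 4 * rho1 * rho2)) == 0
```

In this toy case, the double eigenvalue λ = −3/2 occurs exactly on the curve ρ1ρ2 = −1/4, which is also where the discriminant 1 + 4ρ1ρ2 vanishes.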
For certain values of λ the system (2.5), (2.6) may be inconsistent, or consistent but underdetermined. Inconsistency indicates that this eigenvalue cannot be achieved by any choice of ρ1, ρ2. If the system is consistent but underdetermined, then there may be a curve in the (ρ1, ρ2) plane along which λ is a multiple eigenvalue; this happens when one of the following conditions C1, C2, C3 holds. If any of these genericity conditions is satisfied for some value of λ, then (generically) there exists a curve in the (ρ1, ρ2) plane along which that value of λ is a multiple eigenvalue. If they are not satisfied for any λ, then the bifurcation curve is equal to the envelope curve. Note that one can always check whether or not a pair of polynomials has a common root by computing the resultant of the polynomials: one need not be able to factor the polynomials explicitly to test this condition. Thus, the genericity condition is readily checkable.

Lemma 2.4. As (ρ1, ρ2) are varied so as to cross the envelope, the number of real eigenvalues generically changes by two.
Proof. We assume that most readers are familiar with this phenomenon from the theory of first-order quasi-linear partial differential equations, where the characteristic curves form a one-parameter family of curves and the envelope (caustic) marks the transition between regions which are (typically) singly and triply covered by characteristics, but we give a short proof.
For more details on the envelope, see the text of Bruce & Giblin [11].
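The resultant test for common roots mentioned above is a one-line computation; the sketch below uses toy polynomials of our own choosing:

```python
import sympy as sp

lam = sp.Symbol('lambda')

# Two polynomials share a root iff their resultant vanishes; no factoring needed.
p = lam**2 - 3*lam + 2      # roots 1, 2
q = lam**2 - 5*lam + 6      # roots 2, 3  (common root: 2)
r = lam**2 + 1              # roots +-i   (no common root with p)

print(sp.resultant(p, q))   # 0  -> common root exists
print(sp.resultant(p, r))   # 10 -> no common root
```

This is exactly how the genericity conditions can be checked in practice: form the relevant pair of polynomials in λ and test whether their resultant is identically zero.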
We would next like to consider the possibility of eigenvalues of higher multiplicity. The envelope curve is, in general, a well-behaved curve and admits a parametrization by arc length. However, this may fail at an isolated set of points in the (ρ1, ρ2) plane. The next lemma says that (modulo some genericity assumptions) the following are all equivalent, and occur on a codimension-two set (isolated points in the (ρ1, ρ2) plane):
- points in the (ρ1, ρ2) plane where the model has a real eigenvalue of multiplicity at least three.

Alternatively, the model has a triple eigenvalue if and only if either
Proof. The conditions for an eigenvalue of multiplicity (at least) two are given by (2.5) and (2.6). Differentiating each of these with respect to λ gives a pair of linear equations for ρ1′, ρ2′. The conditions for an eigenvalue of multiplicity at least three are given by (2.5) and (2.6) together with the condition (2.15). Thus, it is clear that ρ1′ = 0, ρ2′ = 0 implies that the eigenvalue is of multiplicity (at least) three. Further, if the determinant of the matrix with rows (P1 + ρ2 Q, P2 + ρ1 Q) and (P1′ + ρ2 Q′, P2′ + ρ1 Q′) is non-zero, then the above system can be solved uniquely for ρ1′, ρ2′, and the existence of an eigenvalue of multiplicity three implies ρ1′ = 0, ρ2′ = 0. A bit more algebra gives another characterization of points where the eigenvalue has multiplicity three or higher. Equations (2.5), (2.6) and (2.15) form a system of three equations in the three unknowns ρ1, ρ2 and ρ3 = ρ1ρ2. Solving these three equations for (ρ1, ρ2, ρ3) and imposing the consistency condition ρ3 = ρ1ρ2, we find that either the resulting condition holds at particular values of λ, or all of the Wronskians vanish. The first possibility is typically of codimension two: it is expected to occur at isolated values of λ corresponding to isolated values of (ρ1, ρ2). The second does not typically happen at all, since it requires the simultaneous vanishing of several polynomials. However, in example 2.7, this case occurs because it is forced by a symmetry of the model. Finally, recall that a simple zero of the tangent vector represents a cusp; since an eigenvalue of multiplicity at least three generically has multiplicity exactly three, cusps in the envelope are typically equivalent to triple eigenvalues.
The geometry of a bifurcation in the neighbourhood of a triple eigenvalue is illustrated in figure 4b. In a neighbourhood of this point, there are three dominant eigenvalues which participate in the bifurcation. The cusp of the envelope represents a transition between a bifurcation between the intermediate and the smallest eigenvalue in the trio, and a bifurcation between the intermediate and the largest eigenvalue in the trio. Emerging from the cusp-point is a curve that represents an exchange of dominance phenomenon, with a complex conjugate pair of eigenvalues crossing a single real one.
When considering questions of stability and the behaviour of the dominant eigenvalue, it is also important to understand the behaviour of the complex eigenvalues. In particular, one would like to understand the locus of points at which the matrix has purely imaginary eigenvalues, as this curve indicates where the model loses stability through a Hopf bifurcation.

Definition 2.6. The Hopf curve is the locus of points in the (ρ1, ρ2) plane where M̂ has a pair of purely imaginary eigenvalues. Generically, this curve is given parametrically by

Re(D(iω)) + ρ1 Re(P1(iω)) + ρ2 Re(P2(iω)) + ρ1ρ2 Re(Q(iω)) = 0 (2.16)

and

Im(D(iω)) + ρ1 Im(P1(iω)) + ρ2 Im(P2(iω)) + ρ1ρ2 Im(Q(iω)) = 0, (2.17)

where Re and Im denote the real and imaginary parts, respectively. The genericity conditions are the same as in lemma 2.3, with the Wronskians f ∧ g replaced by the quantities Re(f(iω)) Im(g(iω)) − Re(g(iω)) Im(f(iω)). The Hopf curve, the envelope and the λ = 0 eigenvalue curve will all intersect at a single point, the point at which there is a zero eigenvalue of higher multiplicity. We now present an illustrative example.
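As a quick numerical check of definition 2.6, independent of the example that follows, consider a hypothetical rank-one illustration of our own: a 3-neuron feedback loop M̂ = M + ρ1 e3 e1^T. Setting λ = iω in the characteristic polynomial and separating real and imaginary parts pins down both ω and the Hopf value of ρ1:

```python
import numpy as np

# M = -I + superdiagonal ones; det(Mhat - lam*I) = -((lam+1)^3 - rho1).
# lam = i*omega requires (1 + i*omega)^3 = rho1:
#   Im: 3*omega - omega^3 = 0  ->  omega = sqrt(3)
#   Re: 1 - 3*omega^2 = rho1   ->  rho1 = -8
rho1 = -8.0
Mhat = np.array([[-1.0, 1.0, 0.0],
                 [ 0.0,-1.0, 1.0],
                 [rho1, 0.0,-1.0]])

eig = np.linalg.eigvals(Mhat)
print(np.sort_complex(eig))
# spectrum: -3 together with the purely imaginary pair +- i*sqrt(3)
assert np.min(np.abs(eig - 1j * np.sqrt(3))) < 1e-9
assert np.min(np.abs(eig + 3.0)) < 1e-9
```

In the two-parameter setting the same separation into real and imaginary parts yields the pair of conditions (2.16), (2.17), traced out as ω varies.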

Example 2.7.
We consider the following model: It is straightforward to compute that the characteristic polynomial of this matrix is given by The zero eigenvalue curve is given by The bifurcation curve is given by the envelope together with the singular piece {ρ1 = −1/2} ∪ {ρ2 = −1/2}, which is associated with the value λ = −2 + √2/2, where the equations defining the envelope fail to have full rank. The envelope and the singular piece of the bifurcation curve meet tangentially at (ρ1, ρ2) = (−3/2, −1/2) and at (ρ1, ρ2) = (−1/2, −3/2). Because of the symmetry we have P1 = P2, and thus P1 ∧ P2 ∧ Q ≡ 0, so the condition for a triple eigenvalue reduces to the simultaneous vanishing of P1 ∧ D ∧ Q and D ∧ P2 ∧ Q. Note that since P1 = P2 these are not independent: P1 ∧ D ∧ Q = −D ∧ P2 ∧ Q. Calculating, we find that the triple eigenvalue condition becomes a cubic in λ. This cubic has three real roots: a double root at λ = −2 + √2/2 and a simple root at λ = −1/2 + √2/4. The envelope is not defined for λ ∈ (−5/2, −3/2), so the root at λ = −1/2 + √2/4 does not correspond to a real multiple eigenvalue. Thus, the only real eigenvalue of multiplicity higher than two is λ = −2 + √2/2. Since this eigenvalue is associated with the singular piece of the bifurcation curve, we could potentially have many points where it is a triple eigenvalue. Along the curve ρ1 = −1/2 the eigenvalues can be computed explicitly; the only triple eigenvalue is at ρ2 = −3/2, the point of intersection with the envelope curve. A similar calculation holds along ρ2 = −1/2. The Hopf curve is given parametrically by a pair of expressions in which, as always, the signs are not independent. Note that the argument of the square root is strictly positive, so there exist purely imaginary eigenvalues corresponding to oscillations. The most interesting region of the stability diagram is depicted in figure 2.
The zero eigenvalue curve is depicted in dashed red, the envelope in blue (including a dot at the origin), the singular piece of the bifurcation curve in dot-dashed magenta and the Hopf curve in dotted green. From this information, it is easy to derive the stability diagram. At the origin, the eigenvalues are λ = −3, λ = −2, λ = −2, λ = −1. As there is a degenerate eigenvalue, one needs to do a local perturbation analysis near λ = −2, ρ1 = 0, ρ2 = 0 to determine whether in the neighbourhood of this point one has a real pair of eigenvalues or a complex conjugate pair. Letting λ = −2 + δ shows that near this point one has where O(3) denotes terms of order three or higher in δ, ρi. The discriminant of the above is (√2 ρ1 + √2 ρ2)² − 4ρ1ρ2 = 2ρ1² + 2ρ2² > 0, indicating that in a neighbourhood of the origin the double eigenvalue splits into a pair of distinct real eigenvalues. Thus, in the region containing the origin and bounded by the singular pieces of the bifurcation curve and the upper branch of the envelope (labelled A), there are four real eigenvalues in the left half-plane (LHP). As ρ2 is decreased so as to cross the line ρ2 = −1/2, the first bifurcation occurs. Since this line corresponds to the eigenvalue λ = −2 + √2/2, and −2 < −2 + √2/2 < −1, the bifurcation consists of the two dominant real eigenvalues bifurcating to a complex conjugate pair. Thus, in region B we have two real eigenvalues and two complex eigenvalues, all in the left half-plane. As one leaves region B across the Hopf curve into the region labelled E, the complex conjugate pair moves into the right half-plane (RHP), giving two complex eigenvalues in the right half-plane and two real eigenvalues in the left half-plane. Proceeding in this manner, the stability diagram can be labelled as follows:
- Region A: four real eigenvalues in the left half-plane.
- Region G: one real eigenvalue in the right half-plane; one real and two complex eigenvalues in the left half-plane.
Additionally, there is a narrow region between the regions labelled E and F (to the left of ρ2 = −4 − √2 − √30 ≈ −10.9, above the zero eigenvalue curve and below the envelope curve) where there are two real eigenvalues in the left half-plane and two real eigenvalues in the right half-plane. This region is not labelled, since it is not visible at this scale.
One feature which we have not labelled is the set of points where a real eigenvalue and a complex conjugate pair have the same real part, corresponding to an exchange of dominance between the real eigenvalue and the complex pair. Although it is easy to write down an implicit equation satisfied by these curves, it is generally not possible to find an explicit formula as for the Hopf curve, envelope, etc.
As the characteristic polynomial of this problem is of degree four, it would, in principle, be possible to extract the above information directly from the solution formula for the quartic. In practice, it would be exceedingly difficult to recover such detailed information. In the next section, we consider a model that arises from a differential equation of order eight; in this situation, it is no longer possible, even in principle, to state a direct formula for the roots.
In this section, we identified, and gave a recipe for computing, a set of geometric quantities associated with the spectrum of a rank-two perturbation of a well-known operator. Because some of these quantities (specifically the bifurcation curve, the λ = 0 eigenvalue curve and the Hopf curve) form curves which together partition the (ρ1, ρ2) plane into open sets with qualitatively similar behaviour, this information allows us to analyse the perturbed operator in the entire plane. We now apply this procedure to three specific problems which can be written in the form of equation (1.1): a model of a coupled brainstem-cerebellum neuronal network called the oculomotor integrator (§3); a continuum version of that model, in which the (relatively numerous) brainstem neurons are replaced by a neural 'line' (§4); and a stability problem that arises in a non-local reaction-diffusion equation [2] (§5).

Example: a model for the oculomotor integrator
The oculomotor integrator is a neural network that is essential for eye movement control. This network holds your gaze steady despite transient body motions, by using cues it receives from oculomotor subsystems such as the vestibular system (which processes sensory input from the semi-circular canals of the inner ear) [13,14]. Here, we present a model for the oculomotor integrator which, with a few small changes, can simulate either normal integrator function or a common eye movement disorder, congenital or infantile nystagmus (IN). We are able to make this determination by analysis of the envelope, constant eigenvalue curves and Hopf curve as described in §2. Some of this analysis was presented in [15], which focused on using the model to simulate various eye movements associated with IN; here, we focus on analysing the full phase space generated by the low-rank perturbations.
We first summarize how the integrator works. The neurons that compose the integrator are located mainly in a brainstem region known as the vestibular nucleus. These neurons must integrate velocity signals into a desired position; thus, they perform the operation of integration with respect to time (or temporal integration). Single neurons produce a small amount of temporal integration, in that a firing rate increase due to a transient input will decay with a time constant of about 5 ms. The observed integrator time constant is closer to 20 s, so the integrator network must lengthen the single-neuron time constant by about 4000 times [14]; for a linear system, this corresponds to having an eigenvalue near zero [16]. Furthermore, the integrator should have the right gain in response to velocity signals; the ratio of its response to the input should be appropriate. It should also be plastic; i.e. it should be able to adjust, if injury or some other change occurs (for example, you adjust your oculomotor integrator gain when you get a new pair of glasses) [17][18][19][20].
In order for the oculomotor integrator to function, the vestibular nucleus must be connected to a second major brain region, the cerebellum [21][22][23]. These connections are essential both for normal operation and for plasticity [24][25][26][27]. Neuroanatomical research has shown that these connections are asymmetric in an important way: while the connections from the vestibular nucleus to the cerebellum are numerous and excitatory, the feedback connections from the cerebellum are sparse and inhibitory [28][29][30][31][32][33]. Anastasio & Gad [1] showed previously that this asymmetry permits the cerebellum to sensitively control both the time constant and the gain of the oculomotor integrator, despite (or perhaps because of) the sparseness of the projections from the cerebellum back to the vestibular nuclei. These sparse cerebellar-to-vestibular connections, which can plastically change their strength, are precisely the low-rank perturbations we analyse in detail here.
Anastasio & Gad [1] proposed a linear differential equation model that combined a vestibular network with sparse, asymmetric feedback connections from cerebellar Purkinje cells; i.e. the evolution of the system is given by where v represents the response of the system (vestibular neurons and Purkinje cells together), M̂ represents a matrix of connections, s(t) is a scalar input (velocity) signal to the integrator and b a fixed vector representing the pattern in which the vestibular neurons receive the input signal. The connection matrix M̂ proposed by [1] was The vestibular neuron-to-Purkinje cell coupling vectors are given by the w_i; the Purkinje cell-to-vestibular neuron coupling vectors are given by (…). We state the remaining details for completeness; for a full explanation of the biophysical motivation, see [1,15]. The matrix T of effective connections between vestibular neurons is given by The parameter α is set to 200 s⁻¹ (corresponding to a typical 5 ms membrane time constant); the parameter β is fixed so that the vestibular sub-network has a time constant of 0.2 s in the absence of cerebellar interaction (ρ1 = ρ2 = 0). The largest eigenvalue of T is given by λ1/α = −1 + β(1 + 2 cos(π/(N + 1))), where N is the number of vestibular cells (on each side of the network); therefore, we choose β = (λ1/α + 1)/(1 + 2 cos(π/(N + 1))), where λ1 = 5 s⁻¹.
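The stated formula for the largest eigenvalue of T is the standard spectrum of a symmetric tridiagonal Toeplitz matrix, and is easy to confirm numerically (β = 0.3 and N = 6 below are illustrative values, not taken from the text):

```python
import numpy as np

# T/alpha is tridiagonal with diagonal (-1 + beta) and off-diagonals beta,
# so its eigenvalues are -1 + beta + 2*beta*cos(k*pi/(N+1)), k = 1..N.
N, beta = 6, 0.3
T_over_alpha = ((-1 + beta) * np.eye(N)
                + beta * (np.eye(N, k=1) + np.eye(N, k=-1)))

largest = np.max(np.linalg.eigvalsh(T_over_alpha))
formula = -1 + beta * (1 + 2 * np.cos(np.pi / (N + 1)))
print(largest, formula)
assert abs(largest - formula) < 1e-12
```

Inverting this relation at the desired λ1/α gives exactly the recipe for β quoted above.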
As we have noted, we will treat the Purkinje-to-vestibular feedback connections as our low-rank perturbations; to relate this model to equation (1.1), take where N is the number of vestibular neurons (therefore M is (N + 2) × (N + 2), and f_i, g_i are in R^(N+2)). We will assume that the output of the integrator is a linear readout of the vestibular neuron responses, which for simplicity we take to be equal to b, i.e. ⟨b, v(t)⟩. Defining the eigenvectors e_i and adjoint eigenvectors f_i of M̂, when there is a separation of time scales (Re(λ2) ≪ Re(λ1)) the response is largely due to the dominant eigenvalue; we define the gain, γ, to be the ratio of this response to the magnitude of the filtered input. Thus, γ captures how the circuit amplifies, or suppresses, the incoming signal. Setting this ratio correctly allows the organism to respond appropriately to its environment. However, injury or normal growth can alter the sensory systems that supply inputs to the integrator, and thus create a mismatch between input and desired output. To counteract this, the system must change this gain to compensate. We will allow our network to change by adjusting the Purkinje-to-vestibular weights ρ1 and ρ2; this is biologically plausible, since one of the major functions of the cerebellum is to regulate motor plasticity and learning.
The asymmetry of the matrix M̂ is crucial in allowing the gain to be adjusted freely; for a symmetric (more generally, normal) matrix, −1 ≤ γ ≤ 1. When can we get big changes in gain with relatively small changes in ρ1 and ρ2? It would be ideal for the denominator of γ to be near zero (assuming both f1 and e1 are normalized to unit length): ⟨f1, e1⟩ ≪ 1. However, this corresponds to near-orthogonality of the right and left eigenvectors, which occurs near a double eigenvalue (perfect orthogonality, ⟨f1, e1⟩ = 0, can occur only when there is a degenerate double eigenvalue).
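The connection between approaching a double eigenvalue and ⟨f1, e1⟩ → 0 can be seen on a 2 × 2 toy sketch of our own (not the integrator matrix itself): A(ε) = [[0, 1], [ε, 0]] has eigenvalues ±√ε that collide in a Jordan block as ε → 0.

```python
import numpy as np

for eps in [1e-2, 1e-4, 1e-6]:
    A = np.array([[0.0, 1.0], [eps, 0.0]])
    lam = np.sqrt(eps)                               # dominant eigenvalue
    e1 = np.array([1.0, lam]); e1 /= np.linalg.norm(e1)   # right: A e1 = lam e1
    f1 = np.array([lam, 1.0]); f1 /= np.linalg.norm(f1)   # left:  f1 A = lam f1
    assert np.allclose(A @ e1, lam * e1) and np.allclose(f1 @ A, lam * f1)
    # normalized inner product ~ 2*sqrt(eps) -> 0 at the double eigenvalue
    print(eps, f1 @ e1)
```

A gain scaling like 1/⟨f1, e1⟩ therefore blows up precisely as the parameters approach the bifurcation curve, which is why operating near it permits large gain changes.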
We now show an example of an integrator that can perform normal integration with arbitrary adjustment of gain: let T be 6 × 6 and choose where e_j is the jth standard basis vector (in this case e_j ∈ R⁶).
With this choice, M̂ is the reduced matrix of a model with six vestibular neurons and two Purkinje cells on each side of the bilaterally symmetric network. The only parameters that may vary are ρ1 and ρ2, the strengths of the Purkinje-to-vestibular connections. In order to fix a certain time constant, ρ1 and ρ2 should be constrained to lie on the appropriate constant eigenvalue curve. The biologically appropriate time constant is in the neighbourhood of 20 s, so the appropriate eigenvalue is λ = −1/20. From the results of §2, we know that this holds along the curve The basic picture of integrator operation is as follows: let us suppose that ρ2 is allowed to vary but that ρ1 is given by equation (3.9), so that λ = −1/20 is always an eigenvalue. As ρ2 is increased from zero, the dominant eigenvalue is fixed at λ = −1/20 and the (in this case real) subdominant eigenvalue increases. Because we are nearing the envelope curve (and thus a degenerate double eigenvalue λ2 = λ1 = −1/20), we expect the gain to increase. At ρ2 = 1.22, where the constant eigenvalue curve is tangent to the envelope, there is a collision of eigenvalues and the dominant eigenvalue is degenerate. As ρ2 is further increased, the formerly subdominant eigenvalue becomes dominant: it is real and larger than −1/20. Furthermore, we can compute the gain associated with the dominant mode along the λ = −1/20 curve, and find gains of 2.52, 5.92 and 12.88. The corresponding measurements from the impulse response (given by dividing the maximum response by ‖b‖2) yielded 2.54, 5.87 and 12.53, respectively; we see excellent agreement. Figure 3b displays the interaction of the two dominant eigenvalues of the network in the vicinity of the current operating region. In order to increase gain, the network must climb up the λ = −0.05 curve in the vicinity of the double eigenvalue point.
At the intersection of this curve with the envelope, the integrating eigenvalue exchanges dominance with another real eigenvalue, producing an unstable integrator. Note that the larger two gain cases straddle the point where the λ = −1/20 s⁻¹ curve crosses the envelope, indicating an eigenvalue bifurcation in a subdominant mode; in this case, it is the mode(s) with the next largest real part. At about ρ2 ≈ 1.02, the subdominant complex conjugate pair collides at the real axis, and for ρ2 above this value the three most dominant eigenvalues are all real. As ρ2 increases, the subdominant eigenvalue increases until ρ2 ≈ 1.22, where there is an eigenvalue collision and exchange of dominance. Figure 3b also uses letter labels to show the character of the dominant eigenvalue. Normal operation requires the network to remain in regions A or G. If the network is in error, it may wander into a region where the two eigenvalues are complex (F), or into a region where one or both eigenvalues are in the right half-plane (B, C, D or E). In neither case is normal operation possible.
Infantile nystagmus (IN) is a hereditary disorder characterized by involuntary, periodic eye movements. These movements (or waveforms) can be broadly classified into two forms, jerk and pendular. In jerk waveforms, the eye moves outward from the central position with increasing speed until interrupted by a sudden saccade; a pendular waveform resembles a sinusoidal oscillation. The presence of a jerk waveform suggests an unstable eigenvalue (λ > 0); as noted in many integrator models, the need to maintain an eigenvalue very near zero implies that an unstable eigenvalue is a natural consequence of imprecision.
In our model, the ability to modulate gain is enhanced near the bifurcation curve; this suggests that the network is also near a point where the dominant eigenvalue is complex, which would generate the sinusoidal oscillations characteristic of pendular nystagmus. For example, consider the network specified in equation (3.11). This is very similar to equation (3.8); the only change is that the vestibular-to-Purkinje input from a handful of neurons has been altered. Here, as the cerebellum attempts to increase gain by adjusting ρ1 and ρ2, it enters an oscillating regime in which an oscillation is superimposed on normal integration. Figure 4b illustrates why this behaviour occurs. As we follow the λ = −1/20 s⁻¹ curve from left to right through region A, gain will increase. However, the eigenvalue curve intersects the Hopf curve far to the left of its intersection with the envelope. At this point, the integrating eigenvalue exchanges dominance with a pair of purely imaginary eigenvalues. Beyond this point, the response of the network contains both the normal integrating mode and a superimposed oscillation (region F).

Example: a continuum model of the oculomotor integrator
We next consider a continuum model that can be derived from the model of Anastasio and Gad by replacing the (relatively numerous) vestibular neurons with a continuum vestibular 'line' denoted by ψ(x), while the relatively few Purkinje cells remain discrete.
Suppose the network has N vestibular cells on each side of the integrator, arranged in a row of length L. Recall that the vestibular interaction matrix T has nearest-neighbour structure; each row of T takes the form βv_{j−1} + (−1 + β)v_j + βv_{j+1} (equation (3.3)). Writing this as (−1 + 3β)v_j + β(v_{j+1} − 2v_j + v_{j−1}), we see that the second term converges, in the continuum limit (N → ∞), to a second derivative:

The sum over vestibular cells in the equation for the Purkinje cells (the final two rows of equation (3.2)) can similarly be replaced by an integral:
where ψ(jΔx) = v_j and φ₁(jΔx) = (w₁ᵀ)_j. This leads to the following integro-differential eigenvalue problem on [0, L], where we have replaced L/N with Δx. The delta-function coupling reflects the sparseness of the Purkinje-to-vestibular connections, with x₁, x₂ denoting the points on the vestibular line innervated by the Purkinje cells, and the functions φ₁(x), φ₂(x) representing the density of vestibular-to-Purkinje connections.
Eliminating P₁, P₂ from equation (4.1) algebraically, we arrive at a single equation for ψ, which maps to our original problem, equation (1.1). This model can be solved in much the same way as the discrete model analysed in §3. To illustrate, we take L = 1 and the vestibular-to-Purkinje connections to be φ₁(x) = φ₂(x) = 1. Note that when ρ₁ = 0, ρ₂ = 0, the vestibular neurons decouple from the Purkinje cells, and the eigenvectors are ψₙ(x) = sin(nπx), after which P₁, P₂ are obtained using equations (4.2). The corresponding eigenvalues are λₙ = −1 + 3β − β(n²π²/N²), together with λ = −1, an eigenvalue of multiplicity two corresponding to the Purkinje cells. Note that the even modes (n = 2k) do not excite a Purkinje cell response (i.e. P₁ = P₂ = 0), and thus the even modes are eigenfunctions of this problem for all values of ρ₁, ρ₂: these modes do not change under perturbation by the Purkinje cells.
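The continuum limit above can be checked directly: the eigenvalues of the discrete nearest-neighbour matrix should approach the formula λₙ = −1 + 3β − β(n²π²/N²). The following sketch does so numerically; the value β = 0.3 is an illustrative assumption, not a parameter taken from the model.

```python
import numpy as np

# Discrete vestibular chain: each row acts as beta*v_{j-1} + (-1+beta)*v_j + beta*v_{j+1}
# (equation (3.3)), with Dirichlet-type ends. beta = 0.3 is illustrative only.
beta, N = 0.3, 200
main = (-1 + beta) * np.ones(N - 1)
off = beta * np.ones(N - 2)
T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Largest few eigenvalues of T versus the continuum prediction
# lambda_n = -1 + 3*beta - beta*n^2*pi^2/N^2.
eigs = np.sort(np.linalg.eigvalsh(T))[::-1]
n = np.arange(1, 4)
pred = -1 + 3 * beta - beta * n**2 * np.pi**2 / N**2
err = np.max(np.abs(eigs[:3] - pred))
print(err)
```

The agreement is to O(N⁻⁴), consistent with Taylor-expanding the exact tridiagonal eigenvalues −1 + β + 2β cos(nπ/N).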
We now apply the techniques developed earlier in the paper to find the lines of constant eigenvalue. We will not reproduce the entire calculation, but merely note a few salient points. The first step is to act on equation (4.1) with the appropriate resolvent (inverse) operator (here, R_λ = (β(Δx)²∂ₓₓ + 3β − (1 + λ))⁻¹). The terms R_λδ(x − xᵢ) are simply the Green's function for the operator β(Δx)²∂ₓₓ + 3β − (1 + λ) acting on L²[0, L] with Dirichlet boundary conditions, which can be easily calculated; see (for example) the text of Keener [34]. We also supply some details in appendix A.
The main conclusion is that the perturbed problem has piecewise trigonometric eigenfunctions of the form sin(ωx), etc. (with appropriate continuity conditions; see equation (A 16)), where ω satisfies λ = −1 + 3β − β(Δx)²ω². We now fix the remaining constants in the model, x₁ = 1/3 and x₂ = 1/2, and perform the calculation thus described. We find that the resulting envelope curves have asymptotes when ω is a multiple of 4π or 6π; this is a consequence of the location of the Purkinje cell innervation (i.e. x₁ and x₂).
One issue that becomes apparent is that, as the network becomes large, we are unable to find an integrating eigenvalue using equation (4.7) in this way: observe that λ is bounded above, λ = −1 + 3β − β(Δx)²ω² ≤ −1 + 3β ≈ −1/40 + (39/120)(π/(N + 1))², which tends to −1/40 as N → ∞. However, the desired eigenvalue for a 20 s time constant (recall that we have scaled out the 5 ms single-neuron time constant, α = 200) is λ = −0.05/200, which is much closer to zero than −1/40; as N increases, the integrating eigenvalue moves out of reach. One way to view this observation is that, as the ratio of vestibular to cerebellar (Purkinje) cells increases, sparse cerebellar innervation becomes less able to alter the time constant of the vestibular network.
The alternative is to consider piecewise exponential eigenfunctions, i.e. functions built from e^{±ωx},
where now λ = −1 + 3β + β(Δx)²ω². Although such functions cannot match the boundary conditions of the unperturbed problem, the discontinuities in the perturbed problem bring them back into consideration. We can plot the corresponding bifurcation curve; it begins near (ρ₂, ρ₁) ≈ (−0.09, 0.09) (for N = 12) and increases without bound into the second quadrant. In figure 5, we plot the phase plane for several values of N: N = 12, 24, 50 and 100. In each panel, we show several pieces of the bifurcation curve in different colours: (0, 4π) (purple), (4π, 6π) (blue), (6π, 8π) (cyan), (8π, 12π) (green), and the curve for exponential eigenfunctions (yellow). We note that the structure is stable as N increases; however, the corresponding values of λ are 'compressed' (see equation (4.7)). This is to be expected: in the finite-dimensional problem, the eigenvalues cluster together as N increases; therefore, we expect an increasing density of constant-eigenvalue curves as N increases.
We now return to the question that originally motivated our analysis in §2: can this network behave as an integrator; i.e. can it achieve the correct time constant and gain by adjusting its Purkinje-to-vestibular weights ρ₁, ρ₂?
Recall that we characterized the performance of an integrator by two quantities: the dominant eigenvalue λ₁ and the gain γ. In §3, we established that in order to achieve high gain, the parameters (ρ₁, ρ₂) must be chosen near the bifurcation point. We now take one further step in generality, by asking how the network behaves as the Purkinje-to-vestibular connection locations, x₁ and x₂, change. This would reflect a different source of network plasticity. We will show that, for the vestibular-to-Purkinje projection patterns chosen here, the (ρ₁, ρ₂) coordinates of the bifurcation point will always be of opposite sign, and therefore the bifurcation point will never occur in the first quadrant.

Lemma 4.1. Consider the continuum integrator model defined in equation (4.3) with L = 1. Suppose φ₁ = φ₂ = 1; then, for sufficiently large N, the bifurcation point corresponding to the integrating eigenvalue λ₁ will not be found in the first quadrant (ρ₁, ρ₂ > 0), for any 0 < x₁, x₂ < 1.
Proof. We assume that N is large enough that the eigenvalue of interest requires an exponential eigenfunction; i.e. λ₁ > −1 + 3β − β(Δx)²ω². We begin by directly computing the required terms in the characteristic polynomial (refer to equation (2.4) for definitions and appendix A for a more detailed calculation in terms of resolvent operators). Note that both terms can be expressed through the same function f(ω, x). Then, by equations (2.9) and (2.10), the coordinates (ρ₁(ω₀), ρ₂(ω₀)) of the bifurcation point corresponding to any desired ω = ω₀ can be computed. We will show that (∂f/∂ω)(ω, x) > 0 for ω > 0 and 0 < x < 1; therefore, the ratio of ρ₁(ω₀) and ρ₂(ω₀) must be negative.
In conclusion, we cannot make this network act as an integrator by adjusting its Purkinje-to-vestibular connections. Instead, we would have to adjust the underlying vestibular-to-Purkinje projection pattern φ₁.

Example: the Rubinstein-Sternberg model
In this section, we give a third, quite different, application of the technique based on low-rank perturbations. Rubinstein & Sternberg [2] introduced a non-local model for phase separation (equation (5.1)). We will compute the stability of a standing-front-type solution of this model, where the stability operator takes the form of a rank-one perturbation of a standard Sturm-Liouville operator. This model has been analysed by a number of authors, most notably Freitas [3,10,35,36], and the closely related problem of the coarsening rate for the Cahn-Hilliard equation has been analysed in the classic paper of Bates & Fife [37]. The main goal of this example is to illustrate the utility of treating the problem using the rank-one perturbation formula: while the final result appears to be new, several of the intermediary results have analogues in the work of Freitas, and we will point these out where germane. Equation (5.1) always admits a constant solution, and for sufficiently large widths it admits front-type solutions. At L = nπ/(2√f′(0)), a bifurcation occurs, giving rise to a solution containing n fronts. In the absence of the non-local term, these front solutions are (for L finite) always unstable. The most unstable mode has non-vanishing mean, so the instability is connected with non-conservation of mass. Rubinstein and Sternberg introduced the non-local term as a Lagrange multiplier to enforce mass conservation and remove this instability mechanism. It is easy to see that, if u represents a stationary solution to equation (5.1), then the linearized evolution equation is given by equation (5.2). Therefore, the associated eigenvalue problem takes the form of a rank-one perturbation of a self-adjoint operator, and we can explicitly relate equation (5.2) to equation (1.1) by identifying the corresponding operators and vectors. The problem with a bistable cubic nonlinearity, f(u) = u − u³, is the simplest and most natural from a physical perspective, and can be analysed rather explicitly.
Assuming L > π/2, there is a front solution which can be expressed in terms of elliptic functions. After a simple rescaling u = (1 + k²)⁻¹v, x = (1 + k²)^{−1/2}y and t = (√2 k/(1 + k²))s, the equation can be written in the form

v_s = v_yy + (1 + k²)v − 2k²v³ − (1/2K(k)) ∫_{−K(k)}^{K(k)} ((1 + k²)v − 2k²v³) dy.

Here the quantity k ∈ (0, 1) denotes the elliptic modulus, and K(k) denotes the complete elliptic integral of the first kind; the elliptic modulus k is determined implicitly from L. A well-known identity for elliptic functions states that a stationary solution to this equation is given by v(y) = sn(y, k), where sn is the Jacobi elliptic sine function. Note that, as the function sn(x, k) is odd, the non-local term (1/2K(k)) ∫_{−K(k)}^{K(k)} ((1 + k²)v − 2k²v³) dy vanishes; thus, this function solves both the perturbed and unperturbed problem (i.e. both equations (5.4) and (5.5)). However, the non-local term changes the stability problem; we will find that the stability properties of the two problems are not the same.
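These facts are easy to confirm numerically with standard routines for Jacobi elliptic functions. The sketch below checks, at one illustrative value of k, that sn solves the rescaled stationary equation and that the non-local term vanishes by oddness:

```python
import numpy as np
from scipy.special import ellipj, ellipk

k = 0.8                          # illustrative elliptic modulus
m = k * k                        # scipy's elliptic routines take the parameter m = k^2
K = ellipk(m)
y = np.linspace(-K, K, 4001)     # symmetric grid on [-K(k), K(k)]
h = y[1] - y[0]
sn = ellipj(y, m)[0]

# Residual of the rescaled stationary equation v'' + (1+k^2) v - 2 k^2 v^3 = 0.
d2 = (sn[2:] - 2.0 * sn[1:-1] + sn[:-2]) / h**2
res = d2 + (1 + k * k) * sn[1:-1] - 2 * k * k * sn[1:-1] ** 3
max_res = np.max(np.abs(res))

# The non-local term (1/2K) * integral of (1+k^2) v - 2 k^2 v^3: odd integrand,
# symmetric interval, so the trapezoid sum cancels to roundoff.
f = (1 + k * k) * sn - 2 * k * k * sn**3
nonlocal_term = np.sum((f[:-1] + f[1:]) * h / 2) / (2 * K)
print(max_res, nonlocal_term)
```

The residual is at the level of the O(h²) finite-difference error, confirming the identity sn″ = −(1 + k²)sn + 2k²sn³.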
Linearizing around the elliptic-function solution gives a non-local evolution equation (equation (5.7)), where we have identified H̃ as a perturbation of a 'simpler' operator, H. The unperturbed operator H is a two-gap Lamé operator, for which the spectral problem can be solved exactly [38,39]. When subject to periodic boundary conditions on [−2K(k), 2K(k)], the largest five eigenvalues are simple and are given by

λ₀^(N) = −(1 + k² − 2a(k)), λ₀^(D) = 0, λ₁^(N) = −3k², λ₁^(D) = −3 and λ₂^(N) = −(1 + k² + 2a(k)),

where a(k) = √(1 − k² + k⁴). Here the superscript indicates whether the function satisfies a Neumann or a Dirichlet condition at ±K(k). Thus, the unperturbed operator subject to Neumann boundary conditions has one positive eigenvalue, λ₀ = −(1 + k² − 2a(k)), and the remainder of its eigenvalues on the negative real line.
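The Lamé spectral facts quoted here can be verified numerically. The sketch below checks, at one illustrative k and with the pairing a(k) = √(1 − k² + k⁴) assumed above, that k²sn² − (1/3)(1 + k² + a) is a Neumann eigenfunction of H with the positive eigenvalue −(1 + k² − 2a), and that cn·dn (the translation mode, since d(sn)/dy = cn·dn) is annihilated by H:

```python
import numpy as np
from scipy.special import ellipj, ellipk

k = 0.8
m = k * k                                    # scipy parameter m = k^2
K = ellipk(m)
a = np.sqrt(1 - k**2 + k**4)
y = np.linspace(-K, K, 4001)
h = y[1] - y[0]
sn, cn, dn = ellipj(y, m)[:3]

def apply_H(v):
    """H v = v'' + (1+k^2) v - 6 k^2 sn^2 v, by central differences at interior nodes."""
    vxx = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / h**2
    return vxx + ((1 + k * k) - 6 * k * k * sn[1:-1] ** 2) * v[1:-1]

# Neumann eigenfunction carrying the single positive eigenvalue of H.
phi0 = k * k * sn**2 - (1 + k * k + a) / 3.0
lam0 = -(1 + k * k - 2 * a)
res0 = np.max(np.abs(apply_H(phi0) - lam0 * phi0[1:-1]))

# cn*dn = d(sn)/dy is the translation mode in the kernel of H.
res1 = np.max(np.abs(apply_H(cn * dn)))
print(lam0, res0, res1)
```

Both residuals sit at the O(h²) discretization level, and lam0 > 0 as claimed for the unperturbed operator.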
To summarize, the stability problem (equation (5.7), plus boundary conditions v_x(±K(k)) = 0) takes the form of a rank-one perturbation of a self-adjoint problem, with G = −(1/2K(k))·1 and g = (1 + k²) − 6k²sn²(x, k). While neither G nor g is an eigenvector of H, both G and g lie in the span of the zeroth and second Neumann eigenfunctions φ^(N)_{0/2}(x) = k²sn²(x, k) − (1/3)(1 + k² ∓ a(k)). Since the eigenfunctions of a self-adjoint operator are orthogonal, this implies that the rank-one piece G⟨g, v⟩ vanishes on the span of all the remaining eigenfunctions. This implies that the perturbed operator decomposes as a direct sum of two operators, one that is self-adjoint and negative definite and one that is rank two: H̃ = H̃|span(φ₀,φ₂) ⊕ H|span(φ₀,φ₂)⊥. We emphasize that the perturbation term here is very special, in that G and g are actually given by linear combinations of just two of the eigenfunctions of the unperturbed operator. In the generic case (i.e. for a different nonlinearity f(u)), one expects G and g to have non-trivial projections onto all eigenfunctions.
From the exact eigenvalues in equation (5.8), the second term satisfies the coercivity estimate H̃|span(φ₀,φ₂)⊥ ≤ −3k²I, and the entire stability problem is reduced to understanding the two-by-two matrix eigenvalue problem defined by H̃|span(φ₀,φ₂). Furthermore, since the range of H̃ consists of mean-zero functions, it follows from the Fredholm alternative that H̃|span(φ₀,φ₂) must have a zero eigenvalue. This zero eigenvalue is connected with mass conservation: there is a one-parameter family of solutions to this equation of fixed spatial period L, which can be thought of as being related to the total mass of the stationary solution. Here we have considered only the simplest solutions, those with zero mass. Since we have a one-parameter family of solutions, by Noether's theorem there must be an element in the kernel of the linearized operator corresponding to the generator of this family.
It is straightforward to compute the restriction H̃|span(φ₀,φ₂) of the linearized operator, in the (independent but non-orthogonal) basis {1, sn²(x, k)}, in terms of complete elliptic integrals of the first and second kinds. Using standard identities for elliptic integrals, we obtain an explicit two-by-two matrix for this restriction, whose eigenvalues can be written in terms of K(k) and E(k). It is easy to check directly from the definition of the elliptic integrals that the quantity (3 − 3k²)K(k) − 6E(k) appearing there is strictly negative for k ∈ [0, 1). This follows, for instance, from the Taylor series representations of K and E, from which it is easy to see that the quantity (1 − k²)K(k) − 2E(k) is even, with only negative terms in its Taylor series, implying that it is strictly negative. Since the equation conserves mass, it makes sense to consider only perturbations with zero net mass (those orthogonal to the kernel). In this case, all eigenvalues lie strictly in the left half-plane and the stationary solution is nonlinearly stable. The problem for a general nonlinearity is slightly more complicated, since we do not have explicit formulae for the eigenvalues and eigenfunctions, but it can essentially be completely solved. Assuming that the equation has a stationary solution u(x), we can compute the linearized operator in terms of the unperturbed operator H = ∂ₓₓ + f′(u). Applying the Aronszajn-Krein formula gives an eigenvalue condition, and the spectrum again decomposes into two pieces. Any eigenvectors of H which happen to be orthogonal to f′(u) (⟨f′(u), v⟩ = 0) remain eigenvectors of the perturbed problem. Eigenvectors which are not orthogonal to f′(u) must satisfy the Aronszajn-Krein eigenvalue condition. (This condition was also identified by Freitas [3], who referred to the eigenvalues which satisfy equation (5.12) as 'moving eigenvalues'.)
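The negativity claim is immediate to confirm numerically with standard elliptic-integral routines (a quick check, not a proof):

```python
import numpy as np
from scipy.special import ellipk, ellipe

# (1 - k^2) K(k) - 2 E(k) over k in [0, 1); scipy takes the parameter m = k^2.
# (3 - 3k^2) K(k) - 6 E(k) is three times this quantity, so it shares the sign.
k = np.linspace(0.0, 0.999, 1000)
val = (1 - k**2) * ellipk(k**2) - 2 * ellipe(k**2)
print(val.max())
```

At k = 0 the value is π/2 − π = −π/2, and the quantity stays bounded away from zero across the range.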
Now note that one has the identity H·1 = f′(u), and thus we can write the eigenvalue condition in terms of the spectral data of the unperturbed operator: using the identity 1 = Σᵢ⟨1, vᵢ⟩vᵢ, we have (H − λI)⁻¹1 = Σᵢ (⟨1, vᵢ⟩/(λᵢ − λ))vᵢ, where λᵢ and vᵢ are the eigenvalues and (orthonormal) eigenvectors of the unperturbed operator H. In the resulting expression, h(λ) is a Herglotz function, an analytic function that is real on the real axis and maps the open upper half-plane to itself, for ρ ∈ [0, 1]. It is well known that the zeros and poles of a Herglotz function are real, implying that the eigenvalues of the Rubinstein-Sternberg model for ρ ∈ [0, 1] are real (see Simon [40], p. 920, for a more detailed discussion of Herglotz functions). This observation is analogous to lemma 5.2 in the paper of Freitas [36], where reality of the eigenvalues for ρ ∈ [0, 1] is established using a combination of identities derived from the original equation. In later work, Freitas [10] considers a general rank-one perturbation of a self-adjoint operator, H̃ = H + |a⟩⟨b|, and associates with each eigenvalue λᵢ a signature given by sgn(⟨b, vᵢ⟩⟨vᵢ, a⟩), where vᵢ is the eigenfunction of the unperturbed problem; see, in particular, section 3 of [10]. The condition that all of these signatures are the same is equivalent to requiring that the Aronszajn-Krein function is Herglotz.
This calculation shows that, despite the fact that the linearized operator is not self-adjoint, the spectrum is purely real for ρ ∈ [0, 1]. Now let us assume that we understand the spectrum of H₀ = H, the unperturbed operator; in particular, that we know n₊(H), the number of positive eigenvalues of the unperturbed operator. Let us consider performing a homotopy in the parameter ρ. Since we have shown that the eigenvalues are real, the only way that an eigenvalue can move from the right half-plane to the left half-plane (or vice versa) is by passing through the origin. We can detect when this occurs by taking λ = 0 in equation (5.12). Here we use the fact that H·1 = f′(u), so that 1 = H⁻¹f′(u). (We are also assuming that H is invertible. Minor changes are required in the case that H has a kernel; see remark 5.2.) This calculation shows that the unique value of ρ for which H_ρ has a zero eigenvalue is ρ = 1.
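A finite-dimensional analogue makes the homotopy argument concrete. In the sketch below (an illustration, not the paper's construction), H is a symmetric invertible matrix with an assumed spectrum, 1 is the all-ones vector, and the rank-one term is scaled so that H·1 plays the role of f′(u); the eigenvalues of H_ρ then stay real on ρ ∈ [0, 1], and the kernel appears exactly at ρ = 1 with kernel vector H⁻¹1:

```python
import numpy as np

n = 6
H = np.diag([-3.0, -2.0, -1.0, 1.0, 2.0, 4.0])   # assumed spectrum; n_+(H) = 3
one = np.ones(n)
b = H @ one                                       # finite-dimensional stand-in for f'(u) = H 1

def H_rho(rho):
    # Rank-one, mean-type perturbation of H coupling against H 1.
    return H - (rho / n) * np.outer(one, b)

# Eigenvalues remain real throughout the homotopy rho in [0, 1].
max_imag = max(np.max(np.abs(np.linalg.eigvals(H_rho(r)).imag))
               for r in np.linspace(0.0, 1.0, 11))

# At rho = 1, v = H^{-1} 1 is a kernel vector: H v = 1 and <b, v> = <H 1, H^{-1} 1> = <1, 1> = n.
v = np.linalg.solve(H, one)
kernel_res = np.max(np.abs(H_rho(1.0) @ v))

# The crossing direction at rho = 1 is set by the sign of <1, H^{-1} 1> (here negative,
# so the zero eigenvalue is moving from the negative to the positive half-line).
s = one @ v
print(max_imag, kernel_res, s)
```

The algebra behind the kernel check mirrors the text: ⟨f′(u), H⁻¹1⟩ = ⟨H1, H⁻¹1⟩ = ⟨1, 1⟩ forces the zero crossing to occur at ρ = 1.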
We are now in a position to count the number of positive eigenvalues of H̃ using a continuation argument. The operator H̃ is bounded above and, by standard arguments, has a finite number of positive eigenvalues. We assume that the number of positive eigenvalues of the unperturbed operator is given by n₊(H) = k: at ρ = 0, there are k positive eigenvalues and the remaining eigenvalues are negative. For ρ ∈ (0, 1), the kernel of H_ρ is empty, so no eigenvalues cross from the left half-line to the right. At ρ = 1, there is an eigenvalue at λ = 0, which came either from the left half-line or from the right half-line. We can determine which by computing dλ/dρ and evaluating at λ = 0. Doing so, we find that the sign is determined by ⟨1, H⁻¹1⟩: if ⟨1, H⁻¹1⟩ > 0, the eigenvalue is moving from the positive half-line to the negative, while if ⟨1, H⁻¹1⟩ < 0, it is moving from the negative to the positive half-line. Finally, the assumption that the unperturbed operator H is invertible implies that the perturbed operator H̃ has at most a one-dimensional kernel, since if H̃ had a higher-dimensional kernel, one of the eigenfunctions could be chosen to be orthogonal to f′(u) and would thus lie in the kernel of the unperturbed operator H. This completes the proof of the following theorem: a necessary and sufficient condition for spectral stability is that either n₊(H) = 0 (from which it follows that ⟨1, H⁻¹1⟩ < 0), or n₊(H) = 1 and ⟨1, H⁻¹1⟩ > 0.
Remark 5.2. The case where the unperturbed operator H has a kernel is somewhat more involved, but can be addressed similarly. There one must do another perturbation calculation near ρ = 0 to understand how the zero eigenvalue(s) move with ρ, in order to compute n₊(H_ρ) for ρ small but non-zero. From there, the calculation is the same: the number of positive eigenvalues can stay the same or decrease by one, and this is determined by the sign of ⟨1, H⁻¹1⟩. Here H is singular but 1 is in the range of H, so 'H⁻¹1' may be interpreted in the sense of the Moore-Penrose pseudo-inverse.
As we remarked earlier, there is actually a one-parameter family of stationary solutions to the non-local equation (5.1). We now give an alternative characterization of the stability criterion by expressing it in terms of the dependence of the integrated reaction rate on the mass of the solution. Assume that the family of stationary solutions can locally be parametrized by the total mass M, and that the number of positive eigenvalues of the unperturbed operator is given by n₊(H) = k; the dimension of the unstable manifold can then be expressed in terms of k and the derivative of the total reaction rate with respect to M. In particular, a necessary condition for stability is that the total reaction rate must be an increasing function of the total mass. Suppose further that the stationary solution is given by the quadrature below, for appropriate constants E, κ, and let μ₋ and μ₊ be two turning points for the quadrature: i.e. μ± are simple roots of 2E + 2κu − 2F(u), where F′(u) = f(u) is the antiderivative of the reaction rate. One can then define period-type integrals over a simple closed contour Γ containing the branch cut along the real axis from μ₋ to μ₊, and express the dimension of the unstable manifold in terms of them. First, we show that we can find a one-parameter family of stationary solutions through quadrature: see, for instance, the text of Landau & Lifshitz [41]. A stationary solution u must satisfy u_xx + f(u) = κ for some suitable κ; multiplying by u_x and integrating, we find that (1/2)u_x² = E + κu − F(u) for some integration constant E, where F′(u) = f(u). Solving for u_x yields

du/√(2E + 2κu − 2F(u)) = dx. (5.13)

The Neumann boundary condition du/dx = 0 is satisfied at a turning point of the function 2E + 2κu − 2F(u); therefore, we integrate along the real axis between two points μ₋ = u(−L), μ₊ = u(L), at which 2E + 2κu − 2F(u) = 0. The resulting integrals can also be written as contour integrals, where Γ is a simple closed contour that loops around the segment between μ₋ and μ₊.
Also note that one has the identity κP(E, κ) = R(E, κ), so one could in principle eliminate one of these quantities, although we have chosen not to do so here.
Having found a family of stationary solutions parametrized by arc length, u(x; s), we now proceed to compute the quantity ⟨1, H⁻¹1⟩ in terms of M and R, starting from the equation for the stationary solution. While we are not aware of previous results of this type for dissipative equations, there are a number of results for the stability of nonlinear dispersive waves that relate the index of some linearized operator to the sign of the derivative of some conserved quantity with respect to a parameter, analogous to the result presented here. The classical Vakhitov-Kolokolov criterion, relating the stability of solitary-wave solutions ψ(x, t) = e^{iωt}φ(x; ω) of the nonlinear Schrödinger equation iψ_t = −ψ_xx + g(|ψ|²)ψ to the sign of (d/dω)∫|φ|²(x; ω) dx, falls into this category [42-45]. The problem of the stability of periodic solutions of nonlinear dispersive waves is perhaps even more similar to the present case: in this situation, the index is determined by the signs of certain determinants of derivatives of period integrals (see [46-48] for examples).
It is worth discussing the relationship of this result to the results of Freitas [36], who relates the dimension of the unstable manifold to the lap number of the underlying solution. Specifically, Freitas shows (see theorems 5.1 and 5.13 of [36]) that the dimension of the unstable manifold is related to m, the number of extrema of the underlying solution u in (−L, L), via m ≤ n₊(H̃) ≤ m + 2.
Thus, the dimension of the unstable manifold can differ from the number of extrema by up to two. To summarize the reasoning: the unstable manifold of the unperturbed operator has dimension equal to the lap number, l(w) = m + 1; the corresponding dimension for the perturbed operator can then go up (to m + 2) or down (to m) by at most one.
Here, we determine this direction (whether up to m + 2 or down to m) by computing the index of the unperturbed operator; in terms of m, our result makes the count explicit.

Discussion
In conclusion, we have presented a general method for analysing low-rank perturbations of self-adjoint operators. We show how to use a simple idea of classical differential geometry (the envelope of a family of curves) to completely analyse the spectrum. When the rank of the perturbation is two, this allows us to view the system in a geometric way through a phase diagram in the perturbation strengths (ρ₁, ρ₂). By locating constant-eigenvalue and eigenvalue-coincidence curves (both computable through simple formulas), we can determine where the perturbed operator is stable, and where double real eigenvalues bifurcate into complex pairs. This latter situation (bifurcation into a complex pair) coincides with a poorly conditioned eigenvalue, which in turn signals that small changes in the perturbation parameter will yield large changes in the operator behaviour. We used these techniques to analyse three problems of this form: a model of the oculomotor integrator due to Anastasio & Gad [1], a continuum version of the oculomotor integrator model, and a non-local model of phase separation due to Rubinstein & Sternberg [2]. In the first two problems, the physical interpretation of our model (a neural network that must maintain a steady eye position in the absence of input) required that we identify (a) where the perturbed system had a specific eigenvalue and (b) where this particular eigenvalue would be poorly conditioned. Our results in §2 show that both (a) and (b) can only occur in proximity to a specific point in the (ρ₁, ρ₂) plane, which was then easy to visualize. In §§3 and 4, some portions of the model are not completely specified by biology (such as the vestibular-to-Purkinje connections), but must be chosen arbitrarily (even randomly). In this paper, we analysed a few carefully chosen examples.
But the geometric method we describe here also gives us a rapid way to survey a large family of such models; using such a survey to draw conclusions about the vestibular-to-cerebellar pathway is an area for future work. The problem analysed in §5 involves a rank-one (rather than rank-two) perturbation, and so a phase-plane approach is not applicable. Instead, we systematically exploit the rank-one nature of the perturbation to characterize the stability of stationary solutions in terms of the unperturbed operator. We further show how to construct a one-parameter family of stationary solutions, and relate the stability condition to the relative change in two integrated quantities (mass and reaction rate) as one travels along this family.
In analysing these three problems, we have by no means exhausted the possible applications. For example, the eigenvalue problem in equation (4.3) is very similar to the stability problem for spike solutions to activator-inhibitor models in the limit of slow activator diffusion [3,4] (although the problem we study here differs because the eigenvalue enters in a nonlinear way). Similar models of reaction-diffusion equations with non-local interactions have arisen in a number of other contexts, including population dynamics [49], runaway ohmic heating [5-7] and microwave heating [8]. Therefore, we anticipate that the techniques presented here should be applicable to understanding these problems.