Evans function and Fredholm determinants

We explore the relationship between the Evans function, transmission coefficient and Fredholm determinant for systems of first-order linear differential operators on the real line. The applications we have in mind include linear stability problems associated with travelling wave solutions to nonlinear partial differential equations, for example reaction–diffusion or solitary wave equations. The Evans function and transmission coefficient, which are both finite determinants, are natural tools for both analytic and numerical determination of eigenvalues of such linear operators. However, inverting the eigenvalue problem by the free-state operator generates a natural linear integral eigenvalue problem whose solvability is determined through the corresponding infinite Fredholm determinant. The relationship between all three determinants has received a lot of recent attention. We focus on the case when the underlying Fredholm operator is a trace class perturbation of the identity. Our new results include (i) clarification of the sense in which the Evans function and transmission coefficient are equivalent and (ii) proof of the equivalence of the transmission coefficient and Fredholm determinant, in particular in the case of distinct far fields.


Introduction
Our goal is to establish the connection between the Evans function, transmission coefficient and Fredholm determinant associated with linear nth order eigenvalue problems on R of the form (∂ − A0 − V)Y = O. Here, ∂ is the derivative operator, i.e. ∂Y = Y′, and A0 : R × C → C^{n×n} and V : R → C^{n×n} are bounded multiplicative operators. We suppose that V represents a perturbative potential function that decays to zero in the far field of the domain R, while A0 generates a background or free state. It is constant in the far field, though the limits are not necessarily the same. We suppose further that A0 depends linearly on a spectral parameter λ ∈ C. Indeed, large classes of eigenvalue problems can be couched in the form above. The problem is to determine those values of λ, eigenvalues, for which suitable integrable solutions Y ∈ C^n exist to the equation above. The Evans function and transmission coefficient are standard tools in this endeavour. Away from the essential spectrum, and suitably scaled, they are analytic functions of the spectral parameter λ whose zeros coincide with eigenvalues. The multiplicity of the zeros coincides with the algebraic multiplicity of the eigenvalues. Modulo a non-zero scalar factor that renders it domain independent, the Evans function is the determinant of the square matrix whose left block is Y− and right block Y+. The columns of these two matrices are solutions to the differential equation above that decay to zero exponentially fast in the left and right far fields, respectively. The Evans function measures the 'distance from intersection' of the subspaces spanned by the columns of Y− and Y+. The transmission coefficient, which is also a determinant, measures the degree to which the solutions Y−, which decay to zero in the left far field, are orthogonal to the subspace that is itself orthogonal to the subspace of solutions decaying to zero in the right far field.
Unwrapping the two orthogonality conditions explains why the Evans function and transmission coefficient are essentially equivalent. We assume that away from the essential spectrum (∂ − A0)^{-1} exists. Then our eigenvalue problem can be expressed in the form (id − (∂ − A0)^{-1}V)Y = O, or, with V = U|V| representing the polar decomposition of V and setting ϕ := |V|^{1/2}Y, in Birman-Schwinger form (id − |V|^{1/2}(∂ − A0)^{-1}U|V|^{1/2})ϕ = O. From this perspective, we again seek values of the spectral parameter λ ∈ C for which solutions to this problem that decay to zero in the far field exist. The natural underlying Hilbert space is L²(R; C^n). For the applications we have in mind, establishing that |V|^{1/2}(∂ − A0)^{-1}U|V|^{1/2} is a compact operator of Hilbert-Schmidt class on this space is relatively straightforward. However, herein we focus on the case when it is a trace class operator, i.e. a nuclear operator. With this property, zeros of the Fredholm determinant of id − |V|^{1/2}(∂ − A0)^{-1}U|V|^{1/2} coincide with eigenvalues. Thus, we come to the central issue. In what sense are the Evans function, transmission coefficient and Fredholm determinant related? Let us briefly outline what has already been established.
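The vanishing of the Fredholm determinant at eigenvalues can be seen numerically. The following sketch (our own illustration, not part of the analysis; the grid, domain truncation and trapezoidal Nyström rule are assumptions of ours) discretizes the Birman-Schwinger kernel for the classical scalar Schrödinger operator −∂² − 2 sech²x with eigenvalue parameter λ = −κ². Its single bound state sits at κ = 1, and for this potential the determinant is classically known in closed form as (κ − 1)/(κ + 1).

```python
import numpy as np

def fredholm_det(kappa, L=14.0, N=1001):
    """det(id - K) for the Birman-Schwinger kernel of -d^2/dx^2 - 2 sech^2(x)."""
    x = np.linspace(-L, L, N)
    w = np.full(N, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5   # trapezoidal weights
    sv = np.sqrt(2.0) / np.cosh(x)                            # sqrt of the potential
    # kernel of |v|^(1/2) (-d^2/dx^2 + kappa^2)^(-1) |v|^(1/2)
    K = sv[:, None] * np.exp(-kappa * np.abs(np.subtract.outer(x, x))) / (2 * kappa) * sv[None, :]
    M = np.sqrt(w)[:, None] * K * np.sqrt(w)[None, :]         # symmetrized Nystrom matrix
    return np.linalg.det(np.eye(N) - M)

# determinant vanishes at the bound state kappa = 1 (lambda = -1) and is
# approximately (kappa-1)/(kappa+1) = 0.2 at kappa = 1.5
assert abs(fredholm_det(1.0)) < 5e-3
assert abs(fredholm_det(1.5) - 0.2) < 1e-2
```

The symmetrized weighting diag(w)^{1/2} K diag(w)^{1/2} keeps the discretized operator self-adjoint whenever the kernel is, which stabilizes the determinant evaluation.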
The Evans function was first proposed by Evans [1], while Alexander et al. [2] established it as a geometric tool for stability analysis. Subsequently, it has become a standard tool in analytical and numerical studies of the stability of travelling waves; see the review papers featuring the Evans function by Sandstede [3] and Kapitula [4]. The Evans function is also called the miss-distance function [5]. It is also a generalization of the Wronskian and the Jost function. The transmission coefficient has its origins much further back in the mathematical literature. Its connection to the Evans function, though trivial in the scalar case, can be found in Swinton [6] and Bridges & Derks [7] for higher order problems. The Fredholm determinant for determining the solvability of linear integral equations was introduced by Fredholm [8]. It has been given recent impetus by Bornemann [9]. Its connection to the transmission coefficient goes back to Jost & Pais [10]. Simon [11,12] computes the explicit relationship between the Fredholm determinant and Wronskian for some example scalar Schrödinger operators; also see Kapitula & Sandstede [13]. More generally, Gesztesy & Makarov [14] showed that for operators with semi-separable kernels, their Fredholm and 2-modified Fredholm determinants can be reduced to the determinants of finite rank operators, potentially useful for the evaluation of such Fredholm determinants. Gesztesy et al. [15] then established the connection between the Evans function and a 2-modified Fredholm determinant. They also gave a coordinate-free definition of the Evans function as a ratio of the perturbed and unperturbed versions of the function. The 2-modified Fredholm determinant is relevant for their equivalence results as systems of first-order operators generate operators |V|^{1/2}(∂ − A0)^{-1}U|V|^{1/2} which are of Hilbert-Schmidt class and in general not trace class.
When such operators are trace class, the Fredholm determinant is the natural object in the equivalence result. Indeed, systems of Schrödinger operators represent an explicit example [16, Section 4].
Our goal herein is to establish a unified picture of the relationship between the Evans function, transmission coefficient and Fredholm determinant. We focus on those systems of first-order operators for which |V|^{1/2}(∂ − A0)^{-1}U|V|^{1/2} is trace class and the matrix trace of the matrix perturbation potential V is zero. By considering this subclass of first-order operators, we gain a degree of clarity and directness. To begin with, we assume A0 is constant, but in our final main §7 we assume distinct far field limits for A0, which is therefore no longer constant. The free Evans function and free transmission coefficient are the corresponding quantities associated with the operator ∂ − A0. What we achieve in this paper is as follows. We: (i) provide practical tests to determine when |V|^{1/2}(∂ − A0)^{-1}U|V|^{1/2} is trace class; these follow results in Simon [12, ch. 4]. Items (ii) to (v) above are a self-contained collection of new results. We bookend the sections above with §§2 and 8. In §2, we provide preliminary results characterizing the spaces of trace class and Hilbert-Schmidt class operators and their relation. We include some important inequalities required in subsequent sections. In §8, we summarize our results, discuss conclusions we can draw from them and outline possible future projects.

Characterizations
To be self-contained, we record a few basic facts on compact operators that we shall need. We refer to Reed & Simon [17,18], Simon [12] and Gohberg et al. [19]. The Schatten-von Neumann class I_p consists of those compact operators K ∈ I_∞ for which tr |K|^p is finite; the set I_p equipped with the norm ‖K‖_{I_p} := (tr |K|^p)^{1/p} is a Banach space. An operator K ∈ I_∞ is trace class if it belongs to I_1 and Hilbert-Schmidt class if it belongs to I_2. The latter class I_2 is a Hilbert space with inner product ⟨K_1, K_2⟩_{I_2} := tr K_1†K_2. A crucial property of the trace is that the trace of a product composition, of any bounded operator with a trace class operator, is invariant to their permutation. We can also characterize the Schatten-von Neumann classes of compact operators I_p as follows. The eigenvalues {λ_m}_{m≥1} of any compact operator K ∈ I_∞ are finite in number away from the origin and the origin itself is the only possible accumulation point. The singular values {s_m}_{m≥1} of K ∈ I_∞ are the eigenvalues of (K†K)^{1/2}. Then we can equivalently characterize tr K^p = Σ_{m≥1} λ_m^p and tr |K|^p = Σ_{m≥1} s_m^p. The former is bounded in modulus by the latter. There is a natural ordering of the Schatten-von Neumann classes as follows: I_p ↪ I_q for any p ≤ q. Fundamentally, for any trace class operator K ∈ I_1, the Fredholm determinant det_1(id − K) := Π_{m≥1}(1 − λ_m) is well defined. Using the relation det exp = exp tr, we can also characterize it (for p = 1) by det_p(id − K) = exp(−Σ_{ℓ≥p} (1/ℓ) tr K^ℓ), valid for ‖K‖_op < 1 and extended by analytic continuation. When p is an integer greater than one, we define the p-modified or regularized Fredholm determinants for compact operators K ∈ I_p by this last formula as well, knocking out the lower order non-convergent traces. Three further results will prove very useful to us. First, if A, B ∈ I_2, then AB ∈ I_1. Second, if A : H → H is a bounded operator and B ∈ I_1, then AB ∈ I_1 and BA ∈ I_1. This is the trace class ideal property. Third, if A : H → H is a bounded operator and B ∈ I_2, then AB ∈ I_2 and BA ∈ I_2. This is the Hilbert-Schmidt ideal property. Indeed, we have ‖AB‖_{I_1} ≤ ‖A‖_{I_2}‖B‖_{I_2}, ‖AB‖_{I_1} ≤ ‖A‖_op‖B‖_{I_1} and ‖AB‖_{I_2} ≤ ‖A‖_op‖B‖_{I_2}, which also hold for BA and where ‖·‖_op denotes the operator norm.
The proof of these three results can be found for example in Conway [20, Section 18].
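For finite rank operators, the determinant characterizations above reduce to familiar matrix identities, which makes them easy to sanity-check. A minimal NumPy illustration of our own, with a small random matrix standing in for a trace class operator:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 0.1 * rng.standard_normal((6, 6))
lam = np.linalg.eigvals(K)                  # eigenvalues {lambda_m}
s = np.linalg.svd(K, compute_uv=False)      # singular values {s_m}
det1 = np.prod(1 - lam)                     # det_1(id - K) = prod (1 - lambda_m)
det2 = np.prod((1 - lam) * np.exp(lam))     # det_2: lowest order trace knocked out
# det_1 agrees with the ordinary determinant of id - K
assert abs(det1 - np.linalg.det(np.eye(6) - K)) < 1e-12
# det_2(id - K) = det(id - K) exp(tr K), since sum of eigenvalues = tr K
assert abs(det2 - np.linalg.det(np.eye(6) - K) * np.exp(np.trace(K))) < 1e-12
# |tr K^p| is bounded by tr |K|^p = sum of s_m^p (here p = 3, Weyl's inequality)
assert abs(np.trace(np.linalg.matrix_power(K, 3))) <= np.sum(s**3) + 1e-12
```

The last assertion is the finite dimensional shadow of the statement that Σ λ_m^p is bounded in modulus by Σ s_m^p.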

Practical tests
The natural setting we require, and which we assume hereafter, is the separable Hilbert space of C^n-valued square integrable functions H = L²(R; C^n); see Reed & Simon [18, p. 121] for an example basis. As we will be concerned with kernel functions, we also require the separable Hilbert space L²(R²; C^{n×n}) with inner product, for any G, H ∈ L²(R²; C^{n×n}), given by ⟨G, H⟩_{L²(R²;C^{n×n})} := ∫_{R²} tr(G†(x; y)H(x; y)) dx dy.
The following fundamental lemma is proved in appendix A.

Lemma 3.1 (Hilbert-Schmidt class operators). The operator K ∈ I_∞ is Hilbert-Schmidt if and only if there is a function G ∈ L²(R²; C^{n×n}) such that
(Kϕ)(x) = ∫_R G(x; y)ϕ(y) dy
for all ϕ ∈ L²(R; C^n). In addition, we have ‖K‖_{I_2} = ‖G‖_{L²(R²;C^{n×n})}.
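The identity ‖K‖_{I_2} = ‖G‖_{L²} survives discretization exactly, which gives a quick numerical sanity check. A minimal sketch (the Gaussian kernel and grid below are illustrative choices of ours, with n = 1):

```python
import numpy as np

N = 300
x = np.linspace(-6.0, 6.0, N)
w = np.full(N, x[1] - x[0])                  # quadrature weights
# an illustrative smooth, rapidly decaying scalar kernel G(x; y)
G = np.exp(-np.subtract.outer(x, x)**2 - 0.5 * x[None, :]**2)
# Nystrom discretization K_ij = sqrt(w_i) G(x_i; x_j) sqrt(w_j)
K = np.sqrt(w)[:, None] * G * np.sqrt(w)[None, :]
hs_norm = np.linalg.norm(K)                  # discrete ||K||_{I_2} (Frobenius norm)
l2_norm = np.sqrt(np.sum(w[:, None] * w[None, :] * G**2))   # discrete ||G||_{L^2}
assert abs(hs_norm - l2_norm) < 1e-12
```

The two quantities agree to machine precision because the weighted Frobenius norm of the Nyström matrix is, by construction, the quadrature approximation of the double integral of |G|².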

Lemma 3.2 (Practical test for Hilbert-Schmidt class).
Suppose K̂, H ∈ L²(R; C^{n×n}); then K∗H ∈ I_2, and indeed we have ‖K∗H‖_{I_2} ≤ (2π)^{-1/2}‖K̂‖_{L²(R;C^{n×n})}‖H‖_{L²(R;C^{n×n})}.

Proof. By direct computation, line by line we successively use the following results: (i) the kernel of K∗H is (2π)^{-1/2}G(x − y)H(y), where G is the inverse Fourier transform of K̂, and the trace of a product of two operators is invariant to their permutation; (ii) the Cauchy-Bunyakovski-Schwarz inequality in the form tr A†B ≤ (tr A†A)^{1/2}(tr B†B)^{1/2} for any two matrices A and B [21, p. 289]; we use this inequality with A = G†G and B = HH†, and also that tr (HH†)†HH† ≡ tr (H†H)†H†H; (iii) that the sum of the squares of the singular values is less than the square of their sum, i.e. (tr A†A)^{1/2} ≤ tr (A†A)^{1/2}; (iv) the Young inequality; (v) that ‖tr G†G‖_{L¹(R;C)} = ‖G‖²_{L²(R;C^{n×n})}; and (vi) the Plancherel Theorem. Combining these steps establishes the bound displayed above.

Lemma 3.2 provides us with a test to determine whether a given bounded operator is of Hilbert-Schmidt class. In practice, suppose we know the Fourier transform K̂ = K̂(ξ) of an operator and we have established it lies in L²(R; C^{n×n}). Further suppose J = J(x) and H = H(x) are bounded multiplicative operators from L²(R; C^n) to L²(R; C^n) with J ∈ L^∞(R; C^{n×n}) and H ∈ L²(R; C^{n×n}).
We would like an analogous practical test of when an operator such as JK∗H from L²(R; C^n) to L²(R; C^n) is of trace class. To achieve this, we require the two classical results mentioned above: that the product of two Hilbert-Schmidt class operators is of trace class, and the trace class ideal property. To establish that K∗H is trace class, for example, where H = H(x) is a bounded multiplicative operator and K∗ is the operator corresponding to K̂ = K̂(ξ) in Fourier space, we naturally require the stronger conditions K̂ ∈ L²_w(R; C^{n×n}) and H ∈ L²_w(R; C^{n×n}), i.e. membership of a weighted square integrable space. More precisely, we define L²_w(R; C^{n×n}) to be the space of functions F for which wF ∈ L²(R; C^{n×n}), where w(x) := (1 + x²)^{1/2} is the weight function.

Lemma 3.3 (Practical test for trace class).
Suppose K̂, H ∈ L²_w(R; C^{n×n}); then K∗H ∈ I_1, and there exists a constant c > 0 such that ‖K∗H‖_{I_1} ≤ c‖K̂‖_{L²_w(R;C^{n×n})}‖H‖_{L²_w(R;C^{n×n})}.

Proof. We adapt the proof for the scalar case given in Reed & Simon [22, Appendix 2]. Using the practical test for Hilbert-Schmidt class lemma 3.2 and that L²_w(R; C^{n×n}) ↪ L²(R; C^{n×n}), our assumptions on K̂ and H imply that K∗H is of Hilbert-Schmidt class and has integral kernel (2π)^{-1/2}G(x − y)H(y). We decompose K∗H = AB into the product of two Hilbert-Schmidt operators A and B constructed using the weight function w. Then, using ‖AB‖_{I_1} ≤ ‖A‖_{I_2}‖B‖_{I_2} and slightly adapting the proof of lemma 3.2 to take into account that w = w(x) is scalar, we can bound each Hilbert-Schmidt factor by the corresponding weighted norm. We now insert this into the estimate above.
The trace class ideal property of I_1 implies ‖JK∗H‖_{I_1} ≤ ‖J‖_op‖K∗H‖_{I_1}, and thus JK∗H is also trace class for any bounded multiplicative operator J : L²(R; C^n) → L²(R; C^n). We end this section with a trace formula and an immediate corollary. A proof is provided in appendix B.

Lemma 3.4 (Trace formula). If the operator K ∈ I_1 has a continuous integral kernel G = G(x; y), then
tr K = ∫_R tr G(x; x) dx.

Corollary 3.5 (Trace formula: separable kernel).
If a Hilbert-Schmidt operator K has a separable kernel G(x; y) = G_1(x)G_2(y) for all (x, y) ∈ R², then for any integer ℓ ≥ 1, we have
tr K^ℓ = tr((∫_R G_2(x)G_1(x) dx)^ℓ).

Remark 3.6. Some additional comments on the material above are as follows: (i) the map K∗ above corresponds to the operator K(−i∇) in Reed & Simon [18, p. 57] and Simon [12, p. 37]; (ii) the sufficiency conditions in lemma 3.3 on K̂ and H can be weakened; for example, a result of Birman-Solomjak requires only that the sequences of L²_w(I_k; C^{n×n})-norms of K̂ and H, where I_k is the unit interval with centre k ∈ Z, be ℓ¹-summable; this space contains L²_w(R; C^{n×n}); see Simon [12, ch. 4] for more details. However, the sufficient conditions quoted above are adequate for the applications we have in mind; (iii) the proof of the trace class lemma 3.3 for scalar operators by Reed & Simon [22, Appendix 2] holds for any spatial dimension d, with weights w^α and α > d/2; and (iv) in the trace formula lemma 3.4, we relax the continuity condition on G in §6 and allow G to have a jump across its diagonal; the formula for tr K remains meaningful as the kernels we consider have continuous matrix traces along the diagonal; see also remark 6.3 where we discuss results by Brislawn [23,24] for discontinuous kernels.
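The separable-kernel trace formula of corollary 3.5 can be checked numerically: discretizing the operator with kernel G_1(x)G_2(y) by a Nyström rule produces a finite rank matrix for which tr K^ℓ = tr((∫ G_2G_1)^ℓ) holds exactly at the discrete level, since K factors as a product whose reversed composition is the quadrature sum. A sketch with illustrative 2 × 2 matrix profiles of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 80
x = np.linspace(-5.0, 5.0, N)
w = np.full(N, x[1] - x[0])
A1, A2 = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
# separable matrix-valued kernel G(x; y) = G1(x) G2(y) with Gaussian decay
G1 = np.array([np.exp(-xi**2) * A1 for xi in x])
G2 = np.array([np.exp(-xi**2) * A2 for xi in x])
# Nystrom matrix of the integral operator (quadrature weights absorbed)
K = np.block([[G1[i] @ G2[j] * w[j] for j in range(N)] for i in range(N)])
# corollary 3.5: tr K^l = tr( (int G2(x) G1(x) dx)^l )
C = sum(w[i] * G2[i] @ G1[i] for i in range(N))
for l in (1, 2, 3):
    t_op = np.trace(np.linalg.matrix_power(K, l))
    t_mat = np.trace(np.linalg.matrix_power(C, l))
    assert abs(t_op - t_mat) < 1e-8 * max(1.0, abs(t_mat))
```

The reduction from the large Nyström matrix K to the small matrix C is precisely the finite rank reduction of Gesztesy & Makarov [14] in miniature.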

Examples
Anticipating our main results in §§6 and 7, we will be concerned with establishing whether operators from L²(R; C^n) to L²(R; C^n) of the form |V|^{1/2}(∂ − A0)^{-1}U|V|^{1/2} are of Hilbert-Schmidt or trace class, where U is the unitary matrix obtained from the polar decomposition of V. As we saw in §3, the practical tests for Hilbert-Schmidt or trace class operators at our disposal rely on testing the integrability properties of the multiplicative operator (iξ id − A0)^{-1} in Fourier space corresponding to the operator (∂ − A0)^{-1}, as well as the integrability properties of |V|^{1/2} and U|V|^{1/2}. We assume in this section that the eigenvalues of A0 are non-zero and never purely imaginary. Hence, the integrability properties of the operator (iξ id − A0)^{-1} rely on the rate of its asymptotic decay as |ξ| → ∞. We observe (iξ id − A0)^{-1} is square integrable. We assume hereafter that V ∈ L¹_{w²}(R; C^{n×n}) ∩ L^∞(R; C^{n×n}) ∩ C(R; C^{n×n}), where C(R; C^{n×n}) represents the space of continuous C^{n×n}-valued matrix functions on R.
Recall the sufficient conditions for |V|^{1/2}(∂ − A0)^{-1}U|V|^{1/2} to pass the practical test for trace class lemma 3.3. These are first that |V|^{1/2} have bounded operator norm, and second that U|V|^{1/2} ∈ L²_w(R; C^{n×n}), which is equivalent to |V|^{1/2} ∈ L²_w(R; C^{n×n}), which is in turn equivalent to V ∈ L¹_{w²}(R; C^{n×n}). These two conditions are satisfied by our assumptions on the potential function stated above. However, the third condition is not satisfied, as (iξ id − A0)^{-1} ∉ L²_w(R; C^{n×n}). Hence in general, without further insight and knowledge, the operator |V|^{1/2}(∂ − A0)^{-1}U|V|^{1/2} fails the practical test for trace class lemma 3.3. However, we now consider two examples of operators of the form above which pass the practical test for trace class. This is achieved by taking advantage of the structure of (∂ − A0)^{-1} and V, as will be apparent.

Example 4.1 (System of elliptic operators). Consider the coupled Schrödinger operator of the form ∂² − d0∂ − c0 − v,
where c0 and d0 are constant square matrices and v is a matrix potential on R. Such operators, for example, arise in the study of the linear stability of travelling wave solutions to systems of nonlinear reaction-diffusion equations. Linearizing the system of equations about the travelling wave in a co-moving frame and assuming an exponential time dependence with growth λ (the spectral parameter) generates an eigenvalue problem of the form (∂² − d0∂ − (c0 + λa0^{-1}) − v)Y = O. Here, a0 is the matrix of diffusion coefficients. Replacing c0 + λa0^{-1} → c0, the determination of eigenvalues reduces to finding the zeros of the determinant of the operator above. In phase space, the first-order constant coefficient operator corresponding to ∂² − d0∂ − c0 has coefficient matrix A0, and the matrix potential V corresponding to v has the form
A0 = [O, id; c0, d0] and V = [O, O; v, O],
where O represents the zero matrix. The matrices |V|^{1/2} and U have the form
|V|^{1/2} = [|v|^{1/2}, O; O, O] and U = [O, id; u, O],
where v = u|v| is the polar decomposition of v; this can be verified by direct inspection. Additional direct calculation reveals that the operator |V|^{1/2}(∂ − A0)^{-1}U|V|^{1/2} only involves the top right block of (∂ − A0)^{-1}, whose Fourier transform decays like |ξ|^{-2} as |ξ| → ∞ and thus lies in L²_w(R; C^{n×n}), and the properties we assume for V transfer to v, and vice versa. Hence, by the practical test for trace class lemma 3.3, the operator |V|^{1/2}(∂ − A0)^{-1}U|V|^{1/2} is trace class.
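The block identities for |V|^{1/2} and U can be verified numerically at a frozen point x, with a random 2 × 2 block standing in for v(x) (an illustrative sketch of ours; the polar factors are computed via the SVD, and u is unitary since a random v is almost surely invertible):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 2
v = rng.standard_normal((m, m))           # illustrative potential block v(x), fixed x
P, s, Qh = np.linalg.svd(v)
u = P @ Qh                                # unitary polar factor, v = u |v|
absv = Qh.conj().T @ np.diag(s) @ Qh      # |v| = (v^dagger v)^(1/2)
O, I = np.zeros((m, m)), np.eye(m)
V = np.block([[O, O], [v, O]])
absV = np.block([[absv, O], [O, O]])      # claimed block form of |V|
U = np.block([[O, I], [u, O]])            # claimed unitary polar factor of V
assert np.allclose(absV @ absV, V.conj().T @ V)    # |V|^2 = V^dagger V
assert np.allclose(U @ absV, V)                    # U |V| = V
assert np.allclose(U.conj().T @ U, np.eye(2 * m))  # U is unitary
```

The three assertions together confirm by direct inspection that the displayed block matrices realize the polar decomposition V = U|V|.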

Example 4.2 (High-order scalar operator).
Consider the scalar nth order linear operator given by ∂^n + a_{n−1}∂^{n−1} + · · · + a_1∂ + a_0 + v, where the a_i, i = 0, . . . , n − 1, are scalar constants and v is a scalar potential function on R. If we rewrite this as a first-order system in phase space, then the corresponding matrix potential V has the same form as that in the last example, the only non-zero entry being the scalar v in the lower left position; the rest of the n × n matrix V has zero entries. Similarly, the matrices |V|^{1/2} and U have the same form as in the last example, but the entries shown are scalar, with the rest of the matrix entries being zero. Direct calculation reveals that the only non-zero entry in the matrix operator |V|^{1/2}(∂ − A0)^{-1}U|V|^{1/2} involves the (1, n) entry of (∂ − A0)^{-1}, i.e. its top right-hand scalar entry. Direct calculation reveals the corresponding entry in the Fourier transform of (∂ − A0)^{-1} is given by [(iξ id − A0)^{-1}]_{1,n} = ((iξ)^n + (iξ)^{n−1}a_{n−1} + · · · + (iξ)a_1 + a_0)^{-1}. This is square integrable in Fourier space with respect to the weight w. Hence, as in the last example, by the practical test for trace class lemma 3.3, the operator |V|^{1/2}(∂ − A0)^{-1}U|V|^{1/2} is trace class.
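The resolvent identity [(iξ id − A0)^{-1}]_{1,n} = ((iξ)^n + · · · + (iξ)a_1 + a_0)^{-1} for the companion (phase space) matrix A0 is easy to confirm numerically. A sketch with illustrative coefficients of our own choosing:

```python
import numpy as np

# illustrative coefficients: p(z) = z^4 + 2 z^3 + 5 z^2 + 4 z + 3
a = np.array([3.0, 4.0, 5.0, 2.0])        # a_0, ..., a_{n-1}
n = len(a)
A0 = np.zeros((n, n))
A0[:-1, 1:] = np.eye(n - 1)               # companion matrix of the free operator
A0[-1, :] = -a
for xi in (0.3, 0.7, 2.0):
    z = 1j * xi
    entry = np.linalg.inv(z * np.eye(n) - A0)[0, n - 1]   # top right scalar entry
    p = z**n + sum(a[k] * z**k for k in range(n))         # symbol of the operator
    assert np.allclose(entry, 1.0 / p)
```

The |ξ|^{-n} decay of 1/p(iξ) visible here is what rescues the practical test for trace class in this example, despite the full resolvent only decaying like |ξ|^{-1}.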

Evans function and transmission coefficient
Our main practical concern is with eigenvalue problems associated with the stability of travelling pulses. A wide class of such eigenvalue problems can be expressed in the following form on R: (∂ − A0 − V)Y = O. Here, A0 = A0(λ) is a constant C^{n×n}-valued matrix which depends linearly on the spectral parameter λ ∈ C. The C^{n×n}-valued matrix function V = V(x) with x ∈ R represents a potential perturbation. We assume throughout that V is w²-integrable, uniformly bounded and continuous, exactly as outlined at the beginning of the last examples (§4). A comprehensive reference at this stage for the present material is Sandstede [3]. Our goal is to determine the values of λ for which the operator ∂ − A0 − V is not invertible, i.e. the spectrum of this operator. The complement of the spectrum in C is the resolvent set. Within the spectrum, we are specifically interested in determining the pure-point spectrum of this operator rather than its complement, the essential spectrum. To achieve this, we can construct determinant discriminants which are analytic in the spectral parameter away from the essential spectrum and zero only at pure-point spectrum values. Contour integration using such determinants in the spectral parameter plane then provides a global and local location strategy for the pure-point spectrum. We will assume that away from the essential spectrum, the matrix A0 = A0(λ) is strictly hyperbolic; this is characteristic of a wide class of travelling pulse stability problems. Hence away from the essential spectrum, the eigenvalue equation above has an exponential dichotomy and the unbounded operator ∂ − A0 − V is Fredholm with index zero. At pure-point spectrum values, the kernel of ∂ − A0 − V is non-trivial.
The existence of the exponential dichotomy for the eigenvalue equation above, and thus the Fredholm property of ∂ − A0 − V, as well as the locale of the essential spectrum, is determined by the classification of the solutions to the following associated constant coefficient equation, which does not exhibit a pure-point spectrum: (∂ − A0)Y = O. The existence of an exponential dichotomy for the eigenvalue equation (with potential V) implies that there is a, say, k-dimensional subspace of solutions to the eigenvalue equation that decay exponentially as x → −∞, and an (n − k)-dimensional subspace of solutions that decay exponentially as x → +∞. We denote by Y− = Y−(x; λ) the C^{n×k}-valued function whose column span coincides with the subspace decaying as x → −∞ and by Y+ = Y+(x; λ) the C^{n×(n−k)}-valued function whose column span coincides with the subspace decaying as x → +∞. Corresponding subspaces of commensurate dimension exist for the constant coefficient equation; we denote by Y±0 = Y±0(x; λ) the corresponding matrix valued functions whose columns span the respective exponentially decaying subspaces. Asymptotically as x → ±∞, we have Y± ∼ Y±0. From this perspective, the pure-point spectrum is determined by the values of λ ∈ C for which the two subspaces spanned by the columns of Y− and Y+ intersect. In other words, a solution to the eigenvalue problem exists that decays exponentially to zero in both far fields. This is equivalent to the condition that the combined columns of Y− and Y+ are linearly dependent, and a test for that is whether the determinant of the n × n matrix, whose columns are the columns of Y− and Y+, is zero. This intersection property should be x-independent, and by Liouville's Theorem an appropriate scalar factor achieves this. This is the Evans function, first introduced by Evans [1]. A comprehensive study is provided in Alexander et al. [2].

Definition 5.1 (Evans function). The Evans function is the λ-dependent complex scalar quantity
D(λ) := exp(−∫_0^x tr(A0(λ) + V(y)) dy) det( Y−(x; λ)  Y+(x; λ) ),
which by Liouville's Theorem is independent of x.
The free Evans function is that corresponding to the constant coefficient operator ∂ − A0, i.e. corresponding to the free state. If we replace Y± by Y±0 and set V ≡ 0 in the Evans function, we get the free Evans function defined as exp(−tr A0(λ)x) det( Y−0(x; λ)  Y+0(x; λ) ).
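The Evans function is straightforward to evaluate by shooting. The following sketch (our own illustration, using SciPy's initial value solver; the far field truncation L and the normalization of the two solutions at x = 0 are assumptions of ours) computes it for the scalar Schrödinger operator with potential 2 sech², for which λ = −1, i.e. κ = 1, is the known eigenvalue:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(kappa):
    # first order system for -y'' - 2 sech^2(x) y = -kappa^2 y
    return lambda x, Y: [Y[1], (kappa**2 - 2.0 / np.cosh(x)**2) * Y[0]]

def evans(kappa, L=12.0):
    # solution decaying as x -> -inf: shoot from direction (1, kappa) at x = -L
    ym = solve_ivp(rhs(kappa), [-L, 0.0], [1.0, kappa], rtol=1e-10, atol=1e-12).y[:, -1]
    # solution decaying as x -> +inf: shoot from direction (1, -kappa) at x = +L
    yp = solve_ivp(rhs(kappa), [L, 0.0], [1.0, -kappa], rtol=1e-10, atol=1e-12).y[:, -1]
    ym, yp = ym / np.linalg.norm(ym), yp / np.linalg.norm(yp)
    return ym[0] * yp[1] - ym[1] * yp[0]   # det of the 2 x 2 solution matrix at x = 0

assert abs(evans(1.0)) < 1e-3    # kappa = 1 (lambda = -1) is the eigenvalue
assert abs(evans(1.5)) > 0.1     # no eigenvalue at kappa = 1.5
```

Both shoots are numerically stable because the mode excited by truncation error decays in the direction of integration; this is the standard reason for shooting each decaying solution towards the matching point.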
Associated with the eigenvalue problem above is the adjoint eigenvalue problem given by ∂Z = −Z(A0 + V). We denote by Z− = Z−(x; λ) and Z+ = Z+(x; λ) the, respectively, C^{(n−k)×n}-valued and C^{k×n}-valued functions whose respective row spans determine solution subspaces of the adjoint eigenvalue problem that decay exponentially as x → −∞ and x → +∞. In addition, we denote by Z±0 = Z±0(x; λ) the corresponding matrices for the adjoint constant coefficient equation ∂Z = −ZA0. The solutions Y±0 and Z±0 to the constant coefficient problems above satisfy a diagonal relation that will be helpful in our subsequent analysis.

Lemma 5.2 (Diagonal relation). The solutions Y±0 and Z±0 satisfy the relation
( Z+0 ; Z−0 ) ( Y−0  Y+0 ) = D,
where ( Z+0 ; Z−0 ) denotes the n × n matrix with upper block Z+0 and lower block Z−0, ( Y−0  Y+0 ) denotes the n × n matrix with left block Y−0 and right block Y+0, and D is a constant diagonal matrix with non-zero entries.

Proof.
Consider any pair of solutions Y0 ∈ C^{n×1} and Z0 ∈ C^{1×n} to their respective constant coefficient problems. Then we see ∂(Z0Y0) = (∂Z0)Y0 + Z0(∂Y0) = −Z0A0Y0 + Z0A0Y0 = 0, so the scalar Z0Y0 is constant. Each solution Y0 has the form U exp(μx), where μ is an eigenvalue and U ∈ C^{n×1} a corresponding right eigenvector of A0. By our strict hyperbolicity assumption, there are n independent solutions, and k of the eigenvalues have a positive real part. Each solution Z0, corresponding to an adjoint solution, has the form W exp(−νx), where ν is an eigenvalue and W ∈ C^{1×n} a corresponding left eigenvector of A0. Classically, if μ ≠ ν then WU = 0, while if μ = ν we have WU ≠ 0 [21, p. 405, 523].
With all this in hand, we can now motivate and define the transmission coefficient. Starting with the Evans function, we can factor out the free Evans function, which is x-independent and λ-dependent; what remains is an exponential factor multiplying a ratio of determinants associated with the perturbed and free problems. We could define a generalized transmission coefficient as the product of the exponential term and the numerator in the ratio. The denominator would correspond to a free generalized transmission coefficient. However, classically, we take the limit as x → +∞ in these latter terms. The diagonal relation in lemma 5.2 implies the determinants in the ratio collapse as x → +∞ (using that the limit and determinant operations commute). Cancelling off the determinant of Z−0Y+0, which appears in both the numerator and denominator, we find that the ratio of the Evans function to the free Evans function equals the limit as x → +∞ of det(Z+0Y−)/det(Z+0Y−0).
Note by the diagonal relation in lemma 5.2, the quantity Z + 0 Y − 0 is constant. For completeness, we now define the transmission coefficient and free transmission coefficient.

Definition 5.3 (Transmission coefficient). This is defined as the λ-dependent complex scalar quantity
det(Z+0Y−)(λ) := lim_{x→+∞} det(Z+0(x; λ)Y−(x; λ)).
The free transmission coefficient is simply the quantity det(Z+0Y−0(λ)). The relation we derived above establishes that, away from the essential spectrum, the Evans function can be decomposed as the product of the free Evans function and the ratio of the transmission coefficient to the free transmission coefficient. In other words, schematically, we have
Evans function / free Evans function = transmission coefficient / free transmission coefficient.
We end this section with three important observations. First, the solutions Y−0 ∈ C^{n×k}, Y+0 ∈ C^{n×(n−k)}, Z−0 ∈ C^{(n−k)×n} and Z+0 ∈ C^{k×n} to the constant coefficient problems above have the following explicit special forms. Let U− ∈ C^{n×k} and U+ ∈ C^{n×(n−k)} denote the matrices whose columns are the eigenvectors of A0, respectively, corresponding to the k eigenvalues with positive real part and the n − k eigenvalues with negative real part. Also let W− ∈ C^{(n−k)×n} and W+ ∈ C^{k×n} denote the matrices whose rows are the left eigenvectors of A0, respectively, corresponding to the n − k eigenvalues with negative real part and the k eigenvalues with positive real part. Then we have the identifications Y±0(x; λ) = U± exp(Λ±x) and Z±0(x; λ) = exp(−Λ∓x)W±, where Λ− and Λ+ denote the diagonal matrices of the eigenvalues of A0 with positive and negative real parts, respectively. Second, we can rescale the free state solutions Y±0 and Z±0 to the constant coefficient problems so they satisfy some unitary relations that are helpful for our subsequent analysis. We choose to rescale the adjoint solutions Z±0 by rescaling W± as follows.

Definition 5.4 (Unitarily scaled solutions).
Suppose D is the constant diagonal matrix from the diagonal relation lemma 5.2. Let D− denote the upper k × k block of D and D+ the lower (n − k) × (n − k) block. We define rescaled solutions Ŵ± and correspondingly Ẑ±0 by
Ŵ± := (D∓)^{-1}W± and Ẑ±0(x; λ) := exp(−Λ∓x)Ŵ±.
Note the solutions Ẑ±0 have the same exponential form as Z±0, but with W± replaced by Ŵ±. The nomination of Ẑ±0 as unitarily scaled solutions is justified in the following.

Lemma 5.5 (Unitary relations). The solutions Y±0 and Ẑ±0 satisfy the relations
( Ẑ+0 ; Ẑ−0 ) ( Y−0  Y+0 ) = id and ( Y−0  Y+0 ) ( Ẑ+0 ; Ẑ−0 ) = id.
These are generated by corresponding unitary relations satisfied by U± and Ŵ±.

Proof. From the proof of the diagonal relation lemma 5.2, we already know that W+U− = D−, W−U+ = D+, W+U+ = O and W−U− = O. Substituting W± = D∓Ŵ± into these relations generates the corresponding unitary relation for Ŵ± and U±, i.e. with the identity matrix on the right. Hence the inverse of the n × n matrix ( U−  U+ ) exists and is given by the corresponding n × n matrix with Ŵ+ and Ŵ− in the upper and lower block positions, respectively, as shown above. The second unitary relation, with the order of the two block matrices with Ŵ± and U± swapped round, now follows [21, p. 117]. The unitary relations for Y±0 and Ẑ±0 now follow using their explicit exponential forms.
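At the level of the eigenvector matrices, the diagonal and unitary relations of lemmas 5.2 and 5.5 reduce to biorthogonality of left and right eigenvectors and the rescaling W± = D∓Ŵ±. A numerical sketch with an illustrative strictly hyperbolic matrix of our own construction:

```python
import numpy as np

rng = np.random.default_rng(3)
# an illustrative strictly hyperbolic A0: real, distinct, non-imaginary eigenvalues
S = rng.standard_normal((4, 4))
A0 = S @ np.diag([-2.0, -1.0, 1.0, 3.0]) @ np.linalg.inv(S)
mu, U = np.linalg.eig(A0)            # right eigenvectors in the columns of U
nu, X = np.linalg.eig(A0.T)
U = U[:, np.argsort(mu.real)]        # order both eigenbases identically
W = X[:, np.argsort(nu.real)].T      # left eigenvectors in the rows of W
D = W @ U
# lemma 5.2: the left/right eigenvector pairings form a diagonal matrix D
assert np.allclose(D, np.diag(np.diag(D)))
assert np.min(np.abs(np.diag(D))) > 1e-6
# lemma 5.5: rescaling What := D^(-1) W gives What U = id = U What
What = np.diag(1.0 / np.diag(D)) @ W
assert np.allclose(What @ U, np.eye(4))
assert np.allclose(U @ What, np.eye(4))
```

The final two assertions mirror the two unitary relations: the rescaled left eigenvector matrix is exactly the inverse of the right eigenvector matrix, in either order of composition.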
Third, in the next section, the unitarily scaled solutions Ẑ±0 are the natural choice for constructing the Green's kernel associated with (∂ − A0)^{-1}. They are also natural for establishing the equivalence between the transmission coefficient and corresponding Fredholm determinant, as the corresponding free transmission coefficient is unity with these scaled solutions. Any result we prove using the unitarily scaled solutions we can recover for the original solutions Z±0 by substituting the relation between the two. This is important, as an edifying feature of the Evans function is that it is analytic in λ away from the essential spectrum and its zeros correspond to eigenvalues of ∂ − A0 − V with coincident multiplicity. The solutions Y±0 and Z±0 can be chosen to be analytic in λ from the start. The Evans function and free Evans function are independent of the unitary rescaling as they are defined only using Y± and Y±0, respectively. And, as we should expect, the ratio of the transmission coefficient and free transmission coefficient is also invariant to the unitary rescaling. We return to these points in our Conclusion (§8).
Asymptotically as x → +∞, we have Y− ∼ Y−0 a + Y+0 b, where the constant k × k and (n − k) × k matrices a = a(λ) and b = b(λ) are the transmission and reflection matrix coefficients, respectively. For an element of the span of Y− to be an eigenfunction, we require it to be asymptotically in the span of Y+0, as x → +∞. Equivalently in this limit, we require an element of the span of Y− to be orthogonal to the subspace of C^n that is orthogonal to the subspace spanned by the columns of Y+0. In other words, we require an element of the span of Y− to be orthogonal to the subspace spanned by the rows of Z+0. The existence of a non-trivial linear combination of columns of Y− orthogonal to each row of Z+0 amounts to requiring det(Z+0Y−) to be zero in the limit. Modulo the constant non-zero exponential factor, we define the transmission coefficient as this determinant. However, we note, using the diagonal relation lemma 5.2, that lim_{x→+∞} det(Z+0Y−) = det(D−a(λ)) = det(D−) det(a(λ)). In other words, det(a(λ)) equals the ratio of the transmission coefficient to the free transmission coefficient above. For the classical example of the transmission coefficient for the scalar Schrödinger operator, see Kapitula & Sandstede [13], for which tr V ≡ 0. They show in this case the connection between the Evans function, transmission coefficient and Fredholm determinant (in the following sections). Bridges & Derks [7] show the transmission coefficient equals the Evans function up to a non-zero analytic factor.

Equivalence theorem
Our goal in this section is to show that the transmission coefficient equals the Fredholm determinant for trace class operators, associated with eigenvalue problems on R of the form (∂ − A0 − V)Y = O. The setting is precisely that outlined in §5. Let us now specify which Fredholm determinant we mean. We have already seen that away from the essential spectrum (∂ − A0)^{-1} exists. Hence our eigenvalue problem can be expressed in the form (id − (∂ − A0)^{-1}V)Y = O. As in §§4 and 5, we assume throughout the potential perturbation V is w²-integrable, uniformly bounded and continuous on R. An equivalent formulation is that of Birman-Schwinger. If we use the polar decomposition V = U|V| and set ϕ := |V|^{1/2}Y, we see (id − |V|^{1/2}(∂ − A0)^{-1}U|V|^{1/2})ϕ = O. Here, |V|^{1/2}(∂ − A0)^{-1}U|V|^{1/2} is the Birman-Schwinger operator as considered in §4. To proceed, we establish Green's kernel corresponding to (∂ − A0)^{-1}, i.e. a function G ∈ L²(R²; C^{n×n}) such that ((∂ − A0)^{-1}ϕ)(x) = ∫_R G(x; y)ϕ(y) dy, which in this context, without loss of generality, we will also assume is continuously differentiable everywhere except along the diagonal y = x. Note that G also depends on λ through A0 = A0(λ). However, we suppress the dependence on λ in all the relevant variables for the moment. Classical theory implies that we require G = G(x; y) to satisfy the pair of differential equations ∂ₓG − A0G = δ(x − y)id and −∂_yG − GA0 = δ(x − y)id. Two formal calculations that can retrospectively be made rigorous reveal this is the correct prescription for G = G(x; y).
Green's function G = G(x; y) with the correct decay properties in the far field is given by the following semi-separable form, which can be confirmed by direct substitution: G(x; y) = H(x − y) Y_0^+(x)Ẑ_0^-(y) − H(y − x) Y_0^-(x)Ẑ_0^+(y). The second of the unitary relations in lemma 5.5, applied to the unitarily scaled solutions Ẑ_0^± in Green's kernel, guarantees that the natural jump condition G(x^+; x) − G(x^-; x) = id across the diagonal y = x is satisfied.
The solutions Y_0^± and Ẑ_0^± have simple exponential forms, as indicated at the end of §5. Indeed, using these, we can express G in the form G = G(x − y), consistent with the kernel result for Hilbert-Schmidt operators stated prior to the practical test for the Hilbert-Schmidt class, lemma 3.2. Green's kernel corresponding to |V|^{1/2}(∂ − A_0)^{-1}U|V|^{1/2} has the semi-separable form |V(x)|^{1/2}G(x; y)U(y)|V(y)|^{1/2}. Upon closer inspection, the unitary relations imply that the diagonal elements of the Green's kernel matrix G = G(x; y) have a unit jump at the diagonal y = x, while the off-diagonal elements are continuous. In addition, the unitary relations imply that along the diagonal y = x, the jump in the matrix trace of the kernel |V(x)|^{1/2}G(x; y)U(y)|V(y)|^{1/2} equals tr(|V(x)|^{1/2}U(x)|V(x)|^{1/2}) = tr V(x). Hence, if we assume tr V ≡ 0, then the matrix trace of the kernel |V(x)|^{1/2}G(x; y)U(y)|V(y)|^{1/2} is continuous at the diagonal y = x. We can thus unambiguously define the trace of |V|^{1/2}(∂ − A_0)^{-1}U|V|^{1/2}.
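To make the semi-separable structure concrete, here is a small numerical sketch of our own, with a diagonal constant coefficient matrix A_0 = diag(μ, ν) rather than the general A_0 of the text: building the kernel from the solution decaying in each channel and applying it to a test function does invert ∂ − A_0.

```python
import numpy as np

# A0 = diag(mu, nu) with hyperbolic splitting Re(mu) > 0 > Re(nu).
# The unstable channel is integrated from the right, the stable channel
# from the left, giving the decaying semi-separable kernel
#   G(x; y) = H(x - y) e^{nu (x - y)} E_s - H(y - x) e^{mu (x - y)} E_u.
mu, nu = 1.0, -1.0
h = 0.005
x = np.arange(-10.0, 10.0 + h/2, h)
f = np.exp(-x**2)                       # smooth test forcing in each channel

def trap(g, h):
    # composite trapezoid rule (avoids the deprecated np.trapz)
    return h * (np.sum(g) - 0.5*g[0] - 0.5*g[-1])

u1 = np.empty_like(x); u2 = np.empty_like(x)
for i, xi in enumerate(x):
    u1[i] = -trap(np.exp(mu*(xi - x[i:])) * f[i:], h)     # y >= x branch
    u2[i] = trap(np.exp(nu*(xi - x[:i+1])) * f[:i+1], h)  # y <= x branch

# check that u = G f solves u' - A0 u = f away from the boundaries
du1, du2 = np.gradient(u1, h), np.gradient(u2, h)
r1 = np.abs(du1 - mu*u1 - f)[100:-100]
r2 = np.abs(du2 - nu*u2 - f)[100:-100]
print(r1.max(), r2.max())    # small residuals: the kernel inverts d/dx - A0
```

The jump of the kernel across y = x is exactly the identity here, which is what makes the residual vanish: differentiating each branch reproduces A_0 u plus the delta contribution f.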

Definition 6.2 (Trace for discontinuous kernels).
For potential perturbations V that are w²-integrable, uniformly bounded and continuous on R, and with tr V ≡ 0, we define the trace of the linear operator |V|^{1/2}(∂ − A_0)^{-1}U|V|^{1/2}, for which (∂ − A_0)^{-1} has a kernel with a jump along the diagonal, by tr := ∫_R tr(|V(x)|^{1/2}G(x; x^±)U(x)|V(x)|^{1/2}) dx, where, by the argument above, either one-sided limit along the diagonal gives the same value.

Remark 6.3.
We note the following: (i) consider the examples of trace class operators in §4. Green's kernels corresponding to |V|^{1/2}(∂ − A_0)^{-1}U|V|^{1/2} are given by |v(x)|^{1/2}G_{12}(x; y)v(y)|v(y)|^{-1/2} and |v(x)|^{1/2}G_{1n}(x; y)v(y)|v(y)|^{-1/2} in the first and second examples, respectively. They both involve off-diagonal elements of G only and hence these kernels are continuous; (ii) by analogous arguments to those above, if tr V ≡ 0, then along the diagonal y = x the matrix trace of the kernel G(x; y)V(y) corresponding to (∂ − A_0)^{-1}V is also continuous. Indeed, using the invariance of the trace to the order of a product of two operators, the matrix traces of G(x; y)V(y) and |V(x)|^{1/2}G(x; y)U(y)|V(y)|^{1/2} are equal along the diagonal y = x; (iii) consider the scalar Schrödinger operator ∂² − c_0 − v, which is a special case of both examples in §4. Rewriting it as a first-order operator and focusing on (∂ − A_0)^{-1}V, we observe that if G = G(x − y) is the 2 × 2 Green's kernel corresponding to (∂ − A_0)^{-1}, then Green's kernel corresponding to (∂ − A_0)^{-1}V is the 2 × 2 matrix whose left column contains G_{12}v and G_{22}v and whose right column is zero. The Fourier transforms of G_{12} and G_{22} are Ĝ_{12}(ξ) = −(ξ² − c_0)^{-1} and Ĝ_{22}(ξ) = −iξ(ξ² − c_0)^{-1}. We note Ĝ_{12} is square integrable with respect to the weight function w, but Ĝ_{22} is not. Indeed, both can be computed explicitly by inverse Fourier transform; the kernel functions either side of the diagonal y = x are continuous, up to and including the diagonal. If the operator is trace class, then the kernel function is continuous on R². Our results in (i) for the corresponding Birman-Schwinger operator are consistent with this. Further, as the scalar operator corresponding to G_{22} has a jump along the diagonal y = x, we conclude it cannot be trace class; and (iv) Brislawn [23,24] has shown how to define the trace of a trace class operator with a kernel that is only square integrable and thus not necessarily continuous.
This is achieved by averaging on cubes via the Hardy-Littlewood maximal function. In particular, the plain Volterra operator on L²([0, 1]; R), with kernel equal to 1 below the diagonal and 0 above, has trace equal to 1/2 in this averaged sense. However, its singular values are 2/(π(2n + 1)), whose sum diverges, and thus it is not trace class [23, Example 3.2].
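A quick finite-dimensional check of this example, using one natural midpoint discretization of our own choosing rather than anything in [23]: giving the diagonal cells half weight (the Hardy-Littlewood average of the jump between 0 and 1) reproduces both the averaged trace 1/2 and the singular values 2/(π(2n + 1)).

```python
import numpy as np

N = 400
h = 1.0 / N
# Midpoint discretization of the Volterra kernel H(x - y) on [0, 1]:
# full weight h strictly below the diagonal, half weight on the diagonal
# cell, averaging the jump between the values 0 above and 1 below.
K = h * np.tril(np.ones((N, N)), k=-1) + (h / 2) * np.eye(N)

print(np.trace(K))                       # averaged trace: exactly 1/2
s = np.linalg.svd(K, compute_uv=False)
print(s[:3])                             # approx 2/(pi(2n+1)) for n = 0, 1, 2
print([2/(np.pi*(2*n + 1)) for n in range(3)])
```

The partial sums of the computed singular values grow like the harmonic series, which is the discrete shadow of the operator failing to be trace class.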
We now establish the main result of this section, our equivalence theorem. Before we state and prove it, we require the following key lemma. It concerns the solutions Y^- of the eigenvalue problem (∂ − A_0 − V)Y = O whose column span coincides with the subspace of solutions decaying as x → −∞. Indeed, the crucial insight is that, by the variation of constants formula, the solutions Y^- satisfy the Volterra integral equation

Second, an equivalent expression for the kernel of J is given by
We can also establish this result by premultiplying the integral equation for Y^- by Ẑ_0^+, using the unitary relations, and then applying the commuting operations of the large-x limit and the determinant.
Second, we focus on the Fredholm determinant. The key observation is that we can decompose the operator |V|^{1/2}(∂ − A_0)^{-1}U|V|^{1/2} as follows. Direct computation using the kernel |V(x)|^{1/2}G(x; y)U(y)|V(y)|^{1/2}, where G is the kernel corresponding to (∂ − A_0)^{-1}, reveals that id − |V|^{1/2}(∂ − A_0)^{-1}U|V|^{1/2} = id − J + R, where J is the Volterra integral operator given in the Volterra integral equation lemma 6.4 and R is the integral operator with kernel φ_0^-(x)Ẑ_0^+(y)U(y)|V(y)|^{1/2}. Note R has separable kernel and is thus a finite rank operator and trace class. As we have ‖J‖_{I_1} ≤ ‖|V|^{1/2}(∂ − A_0)^{-1}U|V|^{1/2}‖_{I_1} + ‖R‖_{I_1} and |V|^{1/2}(∂ − A_0)^{-1}U|V|^{1/2} is trace class by assumption, we deduce J is also trace class. Using the product decomposition id − J + R = (id − J)(id + (id − J)^{-1}R), and provided J and (id − J)^{-1}R are trace class, we have det_1(id − J + R) = det_1(id − J) det_1(id + (id − J)^{-1}R). As we see presently, (id − J)^{-1}R is trace class as it has a separable kernel. Third, we compute det_1(id − J). Using the relation det exp = exp tr, we have det_1(id − J) = exp tr log(id − J) = exp(−Σ_{ℓ≥1} tr J^ℓ/ℓ). As J is a Volterra integral operator involving the Heaviside function, using the trace power formula lemma 3.4, we observe that tr J^ℓ will involve an integral over R^ℓ of an integrand with a factor H(y_1 − y_2)H(y_2 − y_3) ⋯ H(y_{ℓ−1} − y_ℓ)H(y_ℓ − y_1) of a product of Heaviside functions. Hence for all ℓ ≥ 2, the quantity tr J^ℓ is zero, as it is an integral of an integrand which is zero everywhere except on a subset of ℓ-dimensional measure zero. We also observe tr J = 0. This follows using the unitary relation in lemma 5.5 and the assumption tr V ≡ 0, which imply the matrix trace of the kernel of J is continuous on the diagonal. Hence, we deduce det_1(id − J) = 1. Fourth, we establish that (id − J)^{-1}R is trace class and compute det_1(id + (id − J)^{-1}R). Using that φ_0^- = (id − J)φ^- from the Volterra integral equation lemma 6.4, we see that ((id − J)^{-1}Rφ)(x) = φ^-(x) ∫_R Ẑ_0^+(y)U(y)|V(y)|^{1/2}φ(y) dy.
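The vanishing of tr J^ℓ for every ℓ ≥ 1, and hence det_1(id − J) = 1, has a transparent finite-dimensional analogue; the following is an illustration only, not part of the proof. A discretized Volterra operator with trace-free diagonal is a strictly lower-triangular matrix, all of whose power traces vanish, so det(I − J) = 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
# Strictly lower-triangular matrix: the discrete analogue of a Volterra
# kernel H(x - y) K(x; y) whose matrix trace vanishes on the diagonal.
J = np.tril(rng.standard_normal((n, n)), k=-1)

# tr J^l = 0 for every l >= 1: powers of J stay strictly lower triangular
P = J.copy()
for l in range(1, 6):
    print(l, np.trace(P))   # 0.0 each time
    P = P @ J

# hence det(I - J) is the product of the unit diagonal entries
print(np.linalg.det(np.eye(n) - J))   # approx 1
```

The infinite-dimensional argument in the text replaces triangularity by the cycle of Heaviside factors H(y_1 − y_2) ⋯ H(y_ℓ − y_1), which cannot all be non-zero off a null set.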
Hence, (id − J)^{-1}R has separable kernel φ^-(x)Ẑ_0^+(y)U(y)|V(y)|^{1/2} and is thus trace class. Then, using the result for separable kernels in corollary 3.5, we compute det_1(id + (id − J)^{-1}R) for any sufficiently small value of the coupling; by analytic continuation, we can extend this result to unit coupling. Then removing the logarithms, this equals our expression above for the transmission coefficient. (iii) in essence, in the final calculation in the proof of theorem 6.5, we demonstrate that the trace of any power of (id − J)^{-1}R equals the trace of the corresponding power of the k × k matrix ∫_R (Ẑ_0^+VY^-)(y) dy. Hence, we could prove the equivalence of the determinants using the Plemelj-Smithies formula for det_1(id + (id − J)^{-1}R), which is analytic in C [25, Theorem 6.8]; (iv) as we can swap the order of two arguments under the trace, using the trace formula lemma 3.4, we observe the traces of all powers of |V|^{1/2}(∂ − A_0)^{-1}U|V|^{1/2} and (∂ − A_0)^{-1}V coincide. Hence, the Fredholm determinants of the two operators coincide, assuming the Fredholm determinant of id − (∂ − A_0)^{-1}V exists; (v) if tr V ≢ 0, then we need to include the factor exp(−tr J) in the evaluation of the Fredholm determinant; (vi) the approach we used in the proof of the equivalence theorem is based on the standard approach, decomposing the given operator with semi-separable kernel into the sum of a Volterra operator and a finite rank operator, which is given in Gesztesy et al. [15] for Hilbert-Schmidt operators. They use results from Gesztesy & Makarov [14], Gohberg et al. [26] and Gohberg et al. [19]; and (vii) Gesztesy et al. [15] do not assume strict hyperbolicity, which we have done to keep the arguments as succinct as possible.
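The reduction of det_1(id + (id − J)^{-1}R) to a finite k × k determinant is an instance of the familiar finite-rank principle behind corollary 3.5. A finite-dimensional sketch, purely illustrative: for matrices U ∈ R^{n×k} and W ∈ R^{k×n}, Sylvester's determinant identity gives det(I_n + UW) = det(I_k + WU), so a rank-k perturbation of the identity has a k × k determinant.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 40, 3
U = rng.standard_normal((n, k))   # analogue of the separable factor phi^-(x)
W = rng.standard_normal((k, n))   # analogue of Z^+_0(y) U(y) |V(y)|^{1/2}

big = np.linalg.det(np.eye(n) + U @ W)    # n x n determinant
small = np.linalg.det(np.eye(k) + W @ U)  # k x k determinant
print(big, small)                          # equal: Sylvester's identity
```

In the operator setting, the role of WU is played by the k × k matrix of pairings between the separable factors, which is exactly why the Fredholm determinant collapses to a finite determinant.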

Distinct far fields
Our goal in this section is to show that the equivalence of the transmission coefficient and Fredholm determinant carries over to the case when the far field limits of the eigenvalue problem (∂ − A_0 − V)Y = O are distinct. The Evans function and transmission coefficient are well defined in this instance with only minor modification to their construction in §5, which is the standard approach. What underlies our approach here is that we decompose A_0 + V in such a way that A_0 = A_0(x) carries the distinct far field limits, so that V → O as x → ±∞. We assume this decomposition in this section, as it also gives us the natural framework to show the equivalence of the transmission coefficient and a Fredholm determinant with only a slight modification to the arguments in §6. We also assume that in each far field A_0 is constant and strictly hyperbolic, with the same hyperbolic splitting (the number of eigenvalues with positive real parts, say k, and negative real parts, consequently n − k). We can thus also assume V → O as x → ±∞. Further, we assume both A_0 and V are continuous and uniformly bounded. To construct the Evans function or transmission coefficient, we do not need to perform this decomposition; however, the properties we derive for the solutions of the equations ∂Y_0 = A_0Y_0 and ∂Z_0 = −Z_0A_0 are crucial to our equivalence proof. We now have two separate constant coefficient equations in the far field corresponding to ∂Y_0 = A_0Y_0. However, as before in §5, given the identical hyperbolic splitting of the two far field limits of A_0, there is a k-dimensional subspace of solutions that decays exponentially as x → −∞, and an (n − k)-dimensional subspace of solutions that decays exponentially as x → +∞. We suppose these subspaces are given by the column spans of the solutions Y_0^- ∈ C^{n×k} and Y_0^+ ∈ C^{n×(n−k)}, respectively.
For the adjoint equation ∂Z_0 = −Z_0A_0, there exist solutions collected as rows in the matrices Z_0^- ∈ C^{(n−k)×n} and Z_0^+ ∈ C^{k×n}, which decay as x → −∞ and x → +∞, respectively. Solutions Y^± of commensurate dimension exist to the full problem, decaying to zero as x → ±∞, respectively. We can recover the unitary relations of lemma 5.5 for Y_0^± and suitably defined scaled solutions Ẑ_0^±, despite the distinct far fields. The diagonal relation for Y_0^± and Z_0^± becomes the following.

Lemma 7.1 (Diagonal block relation).
The solutions Y_0^± and Z_0^± satisfy the relation

Proof.
We observe, by direct computation, for any pair of solutions Y_0 ∈ C^{n×1} and Z_0 ∈ C^{1×n}, that ∂(Z_0Y_0) = −Z_0A_0Y_0 + Z_0A_0Y_0 = 0, so that the product Z_0Y_0 is constant. Asymptotically, Y_0^- ∼ exp(A_0(−∞)x)U^- and Z_0^- ∼ W^-exp(−A_0(−∞)x), where the columns of U^- and rows of W^- are the right and left eigenvectors of A_0(−∞). Similarly, we have Y_0^+ ∼ exp(A_0(+∞)x)U^+ and Z_0^+ ∼ W^+exp(−A_0(+∞)x), where the columns of U^+ and rows of W^+ are the right and left eigenvectors of A_0(+∞). Then, using that each such product is constant, we can evaluate Z_0^+Y_0^- and Z_0^-Y_0^+ in either far field. Note that the constant matrices D^± are in general no longer diagonal, as Y_0^- and Z_0^-, and Y_0^+ and Z_0^+, satisfy different asymptotic problems. We must keep in mind that Y_0^± and Z_0^± are the solutions generated by A_0 = A_0(x). We assume the constant matrices Z_0^+Y_0^- = D^- and Z_0^-Y_0^+ = D^+ are non-singular. Generically, this will be the case; see the discussion in §8. This guarantees the free transmission coefficient is non-zero. Using the diagonal block relation lemma 7.1, it also guarantees the free Evans function is non-zero. This guarantees (∂ − A_0)^{-1} exists (this was automatic in the equal far field case under the strict hyperbolicity assumption on the constant matrix A_0). With this proviso, the block diagonal relation implies that the framework we provided in §5 for establishing the relation between the Evans function and the transmission coefficient carries through essentially unchanged. We can mirror all the results in §5 up to and including the schematic formula relating the Evans function and transmission coefficient. Then, again with the same proviso, the scaled unitary solutions are defined as for the equal far field case in §5 by
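The first step of the proof, that x ↦ Z_0(x)Y_0(x) is constant whenever ∂Y_0 = A_0Y_0 and ∂Z_0 = −Z_0A_0, is easy to check numerically. A small sketch of our own, with an arbitrary illustrative choice of variable coefficient matrix A_0(x):

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 3
def A0(x):
    # arbitrary smooth, bounded, x-dependent coefficient matrix
    return np.array([[np.tanh(x), 1.0, 0.0],
                     [0.0, -np.tanh(x), np.sin(x)],
                     [0.2, 0.0, 0.5*np.tanh(x)]])

def rhs(x, q):
    # evolve Y' = A0 Y and Z' = -Z A0 together
    Y = q[:n*n].reshape(n, n)
    Z = q[n*n:].reshape(n, n)
    return np.concatenate(((A0(x) @ Y).ravel(), (-Z @ A0(x)).ravel()))

q0 = np.concatenate((np.eye(n).ravel(), np.eye(n).ravel()))
sol = solve_ivp(rhs, [0.0, 4.0], q0, rtol=1e-11, atol=1e-12)
Y = sol.y[:n*n, -1].reshape(n, n)
Z = sol.y[n*n:, -1].reshape(n, n)
print(np.max(np.abs(Z @ Y - np.eye(n))))   # tiny: Z(x) Y(x) stays equal to I
```

Since Z(0)Y(0) = I here, the product remains the identity for all x, up to integration tolerance; this is the mechanism that makes the far field evaluations of Z_0^+Y_0^- and Z_0^-Y_0^+ well defined constants.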

Lemma 7.2 (Unitary relations reprise).
Unitary relations for Y_0^± and Ẑ_0^± hold for distinct far fields.

Proof.
Making the corresponding change of variables in the diagonal block relation above generates the same diagonal block relation, but with Ẑ_0^± in place of Z_0^± and the appropriate identity matrices in place of D^± on the right. The two matrices on the left of the new diagonal block relation are then inverses of each other, and the unitary relation in reverse order follows.
The framework we provided in §6 for establishing the relation between the transmission coefficient and the Fredholm determinant det_1(id − |V|^{1/2}(∂ − A_0)^{-1}U|V|^{1/2}), where now A_0 = A_0(x), carries through with only a couple of modifications, which we now outline. Green's function G = G(x; y) associated with (∂ − A_0)^{-1} has the same semi-separable form, except that Y_0^± and Ẑ_0^± are now the solutions generated by A_0 = A_0(x). Again, the unitary relations imply the jump condition for G across the diagonal y = x is satisfied. If we assume tr V ≡ 0, then the trace of |V|^{1/2}(∂ − A_0)^{-1}U|V|^{1/2} or (∂ − A_0)^{-1}V is defined in the same way. The Volterra integral equation φ^- = φ_0^- + Jφ^- of lemma 6.4 still applies, but now only for J defined via the kernel H(x − y)(φ_0^-(x)Ẑ_0^+(y) + φ_0^+(x)Ẑ_0^-(y))U(y)|V(y)|^{1/2}. This requires independent proof, which does not rely on A_0 being constant, as follows. Direct computation using the unitary relations establishes the corresponding Volterra integral equation for Y^-; premultiplying this Volterra equation for Y^- by |V|^{1/2} and using the definitions for φ^- and φ_0^- gives the result. The proof of the equivalence theorem 6.5 now follows step by step, with the solutions Y_0^± and Ẑ_0^± those generated by A_0 = A_0(x). We summarize these conclusions as follows.

Conclusion
We have established the equivalence of the Evans function and transmission coefficient for the eigenvalue problem (∂ − A_0 − V)Y = O, in the sense that the ratio of the Evans function to the free Evans function is equal to the ratio of the transmission coefficient to the free transmission coefficient. As we remarked at the end of §5, the Evans function and free Evans function are invariant to the unitary rescaling, as is the ratio of the transmission coefficient to the free transmission coefficient. Hence, the Fredholm determinant det_1(id − |V|^{1/2}(∂ − A_0)^{-1}U|V|^{1/2}), which equals the transmission coefficient with unitarily scaled solutions for which the free transmission coefficient is unity, equals the ratio of the transmission coefficient to the free transmission coefficient, whether the unitary scaling is used or not. These statements hold for equal or distinct far fields. In other words, we have established that the ratio of the Evans function to the free Evans function, the ratio of the transmission coefficient to the free transmission coefficient, and the Fredholm determinant all coincide. Let us now pull back our perspective to our original problem of determining the values of λ ∈ C for which there exist solutions to (∂ − A_0 − V)Y = O, where A_0 = A_0(x; λ) in general. The locale of the essential spectrum is determined by the values of λ for which the far field limits of A_0 are no longer strictly hyperbolic (at least one eigenvalue of either limit becomes pure imaginary, characterizing the continuous spectrum) or for which the hyperbolic splitting no longer matches. Away from the essential spectrum, we must choose A_0 = A_0(x; λ) suitably so that (∂ − A_0)^{-1} exists; as mentioned previously, this is not an issue in the equal far field case. Hence, away from the essential spectrum, the free Evans function and free transmission coefficient are bounded and non-zero. As the Evans function is analytic in that region, the product of the free Evans function and the ratio of the transmission and free transmission coefficients is analytic there, as is the product of the free Evans function and the Fredholm determinant.
Zeros of these product quantities must thus coincide in this region, and they coincide with the pure point eigenvalues, with matching multiplicity. We remark that for eigenvalue problems of the form (∂ − A_0 − V)Y = O that arise in the study of the stability of travelling waves, the origin is an eigenvalue associated with translation invariance. In some cases, the origin is embedded in the essential spectrum, within which further analysis of all the determinants above is required. Some instructive explicit examples can be found as follows: for the relation between the Evans function and transmission coefficient, see Kapitula & Sandstede [13] and Kapitula [4]; between the transmission coefficient and Fredholm determinant, see Simon [12, p. 51]; and between the Evans function and the 2-modified Fredholm determinant, see Gesztesy et al. [15] and Gesztesy et al. [16].
For the eigenvalue problem (∂ − A_0 − V)Y = O, where A_0 = A_0(x; λ) and λ ∈ C is the spectral parameter, our interest surrounds families of operators ∂ − A_0 − V parametrized by λ. For example, the analyticity of the Evans function means we can conduct a global search for eigenvalues via contour integration and the residue theorem in any subregion away from the essential spectrum. Here, the family of operators consists of those associated with the values of λ parametrizing the boundary contour of the subregion. Hence, the object of interest is a determinant line bundle. See Quillen [27], Jost [28, Section 6.9], Deng & Nii [29] and Alexander et al. [2] for constructions of such line bundles. In Alexander et al.'s line bundle construction, the fibres are explicitly the Evans function. These constructions suggest we define such a determinant line bundle away from the essential spectrum.

Σ_{m≥1} ∫_{R³} (G(x, ξ)ϕ_m(ξ))† (G(x, η)ϕ_m(η)) dξ dη dx = Σ_{m≥1} ∫_{R³} ϕ_m†(ξ) G†(x, ξ) G(x, η) ϕ_m(η) dξ dη dx.
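The contour-integration eigenvalue search just described rests on the argument principle: for an analytic function D(λ), the integral (1/2πi)∮ D′(λ)/D(λ) dλ counts the zeros of D enclosed by the contour, with multiplicity. A minimal numerical sketch with a stand-in analytic function, not the Evans function of any particular problem:

```python
import numpy as np

# stand-in "Evans function": a double zero at 0.5 and a simple zero at -0.3+0.2i
D = lambda z: (z - 0.5)**2 * (z + 0.3 - 0.2j)
Dp = lambda z: 2*(z - 0.5)*(z + 0.3 - 0.2j) + (z - 0.5)**2

# winding number (1/2 pi i) \oint D'/D dz around |z| = 1, computed with the
# trapezoid rule, which converges spectrally fast for periodic integrands
t = np.linspace(0.0, 2*np.pi, 2000, endpoint=False)
z = np.exp(1j*t)
count = np.mean(z * Dp(z) / D(z))
print(count.real)   # approx 3: zeros inside, counted with multiplicity
```

Both zeros lie inside the unit circle, so the count is 2 + 1 = 3; in practice one runs the same computation with the numerically evaluated Evans function and its derivative around the boundary of the subregion of interest.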
The proof for the case of a single trace class operator K with continuous kernel G exactly follows the argument above, with a single term in the sum.