Quantifying evolutionary constraints on B-cell affinity maturation

The antibody repertoire of each individual is continuously updated by the evolutionary process of B-cell receptor (BCR) mutation and selection. It has recently become possible to gain detailed information concerning this process through high-throughput sequencing. Here, we develop modern statistical molecular evolution methods for the analysis of B-cell sequence data, and then apply them to a very deep short-read dataset of BCRs. We find that the substitution process is conserved across individuals but varies significantly across gene segments. We investigate selection on BCRs using a novel method that side-steps the difficulties encountered by previous work in differentiating between selection and motif-driven mutation; this is done through stochastic mapping and empirical Bayes estimators that compare the evolution of in-frame and out-of-frame rearrangements. We use this new method to derive a per-residue map of selection, which provides a more nuanced view of the constraints on framework and variable regions.


Introduction
Antibodies encoded by somatically modified human B-cell receptor (BCR) genes bind a vast array of antigens, initiating an immune response or directly neutralizing their target. This diversity is made possible by the processes of VDJ recombination, in which random joining of V, D and J genes generates an initial combinatorial diversity of BCR sequences, and affinity maturation, which further modifies these sequences. The affinity maturation process, in which antibodies increase binding affinity for their cognate antigens, is essential to mounting a precise humoral immune response. Affinity maturation proceeds via a nucleotide substitution process that combines Darwinian mutation and selection processes. Mutational diversity is generated by somatic hypermutation (SHM), in which a targeted molecular mechanism mutates the BCR sequence. This diversity is then passed through a selective sieve in which B cells that bind well to antigen are stimulated to divide, whereas those that do not bind well or bind to self are marked for destruction. The combination of VDJ recombination and affinity maturation enables B cells to respond to an almost limitless diversity of antigens. Understanding the substitution process and selective forces shaping the diversity of the memory B-cell repertoire thus has implications for disease prophylaxis and treatment.
It has recently become possible to gain detailed information about the B-cell repertoire using high-throughput sequencing [1][2][3][4][5]. Recent reviews have highlighted the need for new computational tools that make use of BCR sequence data to bring new insight, including the need for reproducible computational pipelines [6][7][8][9]. Rigorous analysis of the B-cell repertoire will require statistical analysis of how evolutionary processes define affinity maturation. Statistical nucleotide molecular evolution models are often described in terms of three interrelated processes: mutation, the process generating diversity; selection, the process determining survival or loss of mutations; and substitution, the observed process of evolution that follows from the first two. One major vein of research has focused on how nucleotide mutation rates depend on the identity of surrounding nucleotides (reviewed in [10]; see also [11,12]), but little has been done concerning other aspects of the process, such as how the substitution process differs between gene segments.
Along with mutation, selection owing to competition for antigen binding forms the other key part of the affinity maturation process. Inference of selective pressures in this context is complicated by nucleotide context-dependent mutation, leading some authors to proclaim that such selection inference is not possible [13]. Indeed, if one does not correct for context-dependent mutation bias, interactions between those motifs and the genetic code can lead to false-positive identification of selective pressure. Previous work has developed methodology to analyse selection on sequence tracts in this context (reviewed in §3b), but no methods have yet achieved the goal of statistical per-residue selection estimates. This has, however, been recently identified as an important goal [11]. Such selection estimates could be used to better direct generation of synthetic libraries of antibodies for high-throughput screening. Another application would be to the engineering of antibody Fc regions with specific properties, such as for bispecific monoclonal antibodies or antibody-derived fragments, while preserving overall stability.
The ensemble of germline V, D and J genes that rearrange to encode antibodies (equivalently: immunoglobulins) is divided into nested sets. Genes can first be identified by their locus: IGH, denoting the heavy chain; IGK, denoting the kappa light chain; or IGL, denoting the lambda light chain. Our dataset contains solely the IGH locus, so we will frequently omit the locus prefix for simplicity. Genes within a locus can be further subdivided by their segment, that is, whether they are a V, D or J gene. IGHV genes are further divided into subgroups, which share at least 75% nucleotide identity. Finally, polymorphisms of a gene between individuals are grouped into alleles [14].
VDJ recombination does not always produce a functional antibody, such as when the V and J segments are not in the same reading frame after recombination (an out-of-frame rearrangement) or when the receptor sequence contains a premature stop codon. However, each B cell carries two copies of the IGH locus, one on each chromosome. If the rearrangement on the first locus fails to produce a viable antibody, the second locus will rearrange; if this second rearrangement is successful, the antibody it encodes will be produced by the cell [15]. If this second rearrangement does not produce a viable antibody, the cell dies.
When assaying the BCR repertoire through sequencing, some of the sequences will be from cells for which the first rearrangement was successful, while others will be from cells with one productive and one out-of-frame rearrangement. Although the out-of-frame rearrangements from the second type of cell do not produce viable antibodies, their DNA gets sequenced along with the productive rearrangements. As SHM rarely introduces insertions or deletions (we observe whole-codon insertion/deletion events in 0.013–0.014% of memory sequences within templated segments), it is appropriate to assume that observed frameshifts in sequences are dominated by out-of-frame rearrangement events. However, because these rearrangements are not expressed, but rather are carried along in cells with a separate functional rearrangement, they are under no selective constraints. For this reason, we use sequences from out-of-frame rearrangements as a proxy for the neutral mutation process in affinity maturation.
In this paper, we develop modern statistical molecular evolution methods for the analysis of high-throughput B-cell sequence data, and then apply them to a very deep short-read dataset of BCRs. Specifically, we first apply model selection criteria to identify patterns in the single-nucleotide substitution process that occurs during affinity maturation and find that they are similar across individuals but vary significantly across gene segments. Next, we investigate how substitution processes vary between V genes and find that the primary source of variation is whether a sequence produces a functional receptor. Finally, we develop the first statistical methodology and corresponding software for comprehensive per-residue selection estimates for BCRs. We leverage out-of-frame rearrangements carried along in B cells with a productively rearranged receptor on the second chromosome to estimate evolutionary rates under neutrality, thus avoiding difficulties encountered by previous work in differentiating between selection and motif-driven mutation. A key part of our method is our extension of the 'counting renaissance' method for selection inference [16] for non-constant sequencing coverage and a star-tree phylogeny. Using this modified method, we are able to efficiently derive a per-residue map of selection on more than 15 million BCR sequences; we find that selection is dominated by negative selection with patterns that are consistent among individuals in our study.

(a) Substitution model inference and testing
We evaluated the fit of nested models with varying complexity, ranging from a simple model with shared branch lengths and substitution processes for the three independent segments of the BCR, to a complex model with completely separate substitution processes and branch lengths for each segment (table 1). For the underlying nucleotide substitution model, we fit a general time-reversible (GTR) nucleotide model [17] with instantaneous rate matrix Q to subsets of the data, using 20 000 unique sequences from each individual. The choice of a stationary and reversible model, rather than a more general model, was based on the similarity of base frequencies between the germline and observed sequences (electronic supplementary material, table S3). We modelled substitution rate heterogeneity across sites using a four-category discretized Gamma distribution [18] with fixed mean 1.0.
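As a concrete illustration of the model components just described, the following sketch (in Python, not the Bio++/BEAGLE machinery actually used) builds a GTR instantaneous rate matrix Q from six exchangeability parameters and stationary base frequencies, normalized so that the expected substitution rate at stationarity is 1.0; in any real fit the parameter values would come from maximum likelihood.

```python
# Minimal sketch of constructing a normalized GTR rate matrix. The
# exchangeabilities and frequencies here are illustrative inputs, not
# fitted values from the paper.

def gtr_matrix(exch, pi):
    """exch: dict of exchangeabilities keyed by unordered base pair;
    pi: stationary frequencies in A, C, G, T order."""
    bases = "ACGT"
    q = [[0.0] * 4 for _ in range(4)]
    for i, a in enumerate(bases):
        for j, b in enumerate(bases):
            if i != j:
                q[i][j] = exch[frozenset((a, b))] * pi[j]
        q[i][i] = -sum(q[i])  # each row sums to zero
    # normalize so the mean rate at stationarity, -sum_i pi_i q_ii, is 1.0
    scale = -sum(pi[i] * q[i][i] for i in range(4))
    return [[x / scale for x in row] for row in q]
```

With all exchangeabilities equal and uniform frequencies this reduces to the Jukes-Cantor matrix, a useful sanity check on the normalization.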
We find that the best performing model (denoted t_r Q_i G_i, table 2) is one in which the branch length separating a sequenced BCR from its germline counterpart is estimated independently for each observed sequence, but in which V, D and J regions differ systematically in their relative amounts of sequence change (denoted t_r). Additionally, this model uses separate GTR transition matrices for V, D and J regions (denoted Q_i) and separate distributions for across-site rate variation for V, D and J regions (denoted G_i). Looking across models, both the Akaike information criterion (AIC) [19] (table 2) and the Bayesian information criterion [20] (data not shown) identified the same rank order of support; this ordering was also identical for each of the three individuals. Other than the t_i Q_i G_i model, in which branch length is estimated independently across gene segments, models are ranked in terms of decreasing complexity. The finding that a complex model fits better than simpler models is probably aided by the large volume of sequence data available.
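The model ranking rests on information criteria; a minimal sketch of the AIC bookkeeping (the log-likelihoods and parameter counts below are made up purely for illustration) is:

```python
# Illustrative AIC comparison of nested substitution models.

def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * n_params - 2 * log_likelihood

def rank_models(models):
    """models: dict name -> (log_likelihood, n_params).
    Returns model names sorted from best (lowest AIC) to worst."""
    return sorted(models, key=lambda m: aic(*models[m]))
```

The same comparison with BIC would simply replace the 2k penalty by k ln N for N observations.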
Next, we fit the best-scoring model (t_r Q_i G_i) to the full dataset for each individual. The median distance to germline was 0.063, 0.030 and 0.039 substitutions per site for individuals A, B and C, respectively. The distribution of branch lengths appears nearly exponential for individuals B and C, with many sequences close to germline and few far from it (figure 1). Individual A displayed a higher substitution load and a non-zero mode. Despite these differences in evolutionary distance, the relative rate of substitution between the V, D and J segments for each individual was very similar. We note that the sorting procedure used to separate memory from naive B cells provided memory cells at approximately 97% purity, so these divergence estimates may be conservative because of low levels of contamination from the naive repertoire.
Coefficients from the GTR models for the same gene segment across individuals were quite similar to one another, while models for different gene segments within an individual showed striking differences (electronic supplementary material, figures S1 and S2). However, overall correlations of GTR parameters between individuals were very high, yielding correlation coefficients between r = 0.988 and r = 0.994. We observe an enrichment of transitions relative to transversions in all segments, as previously described [21].
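The between-individual agreement of GTR coefficients can be quantified with an ordinary Pearson correlation over the parameter vectors; a self-contained sketch (the vectors compared in practice would be the fitted GTR parameters for a pair of individuals) is:

```python
import math

# Pearson correlation between two parameter vectors, e.g. the GTR
# coefficients fitted for the same segment in two individuals.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)
```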
Next, we compared the evolutionary process between various groupings of sequences to learn what determines the characteristics of this evolutionary process. We focused on the V gene segment, as it had the most coverage in our dataset, and partitioned the sequences by whether they were in-frame, then by individual and then by gene subgroup. We fit the t_r Q_i G_i model to 1000 sequences from each set of the partition and calculated the transition probability matrix (P) associated with the median branch length across all sequences given an equiprobable starting state. These matrices were then analysed with a variant of compositional principal components analysis [22] (see §4 Material and methods). We find that substitution models are influenced by in-frame versus out-of-frame sequence status, find no evidence for models clustering by individual, and see some limited evidence for clustering by gene subgroup (figure 2). The Euclidean distance between these transformed discrete probability distributions and the Hamming distance between germline V genes showed significant, but moderate, correlation (Spearman's r = 0.20, p < 10^-15; electronic supplementary material, figure S3).
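A minimal sketch of comparing transition probability matrices in this compositional spirit, assuming a centered log-ratio (clr) transform of each row followed by Euclidean distance on the flattened result (an illustration of the general approach, not the exact procedure of [22]):

```python
import math

# Compare two row-stochastic transition matrices after a centered
# log-ratio transform, a standard move in compositional data analysis.

def clr(row):
    """Centered log-ratio: log of each entry over the geometric mean."""
    g = math.exp(sum(math.log(p) for p in row) / len(row))
    return [math.log(p / g) for p in row]

def matrix_distance(p1, p2):
    """Euclidean distance between clr-transformed, flattened matrices."""
    v1 = [x for row in p1 for x in clr(row)]
    v2 = [x for row in p2 for x in clr(row)]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))
```

The clr transform maps probability rows to an unconstrained Euclidean space, so distances (and downstream PCA) are not distorted by the rows-sum-to-one constraint.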

(b) Natural selection
The primary challenge for BCR selection inference is that nucleotide context is known to have a very strong impact on mutation rates (reviewed in [21]). These context-specific mutations, combined with the structure of the genetic code, can produce extreme dN/dS ratios under the classical definition that are not attributable to selection. To address this problem, we infer the selection coefficient ω using a non-synonymous/synonymous ratio that controls for the background mutation rate via out-of-frame sequences (3). We continue the tradition of calling the selection coefficient ω in this context, even though it is defined slightly differently than in previous work.
Applying this method to our dataset results in the first per-site and per-gene maps quantifying selection in the B-cell repertoire [23,24]. Sites were classified as negatively or positively selected based on whether the 95% Bayesian credible interval (BCI) excludes 1.0: sites for which the lower endpoint of the ω BCI is greater than 1.0 are classified as being under positive selection, whereas sites for which the upper endpoint of their ω BCI is less than 1.0 are classified as being under negative selection. We employ site numbering according to the IMGT unique numbering for the V domain [25]. IGHV3-23*01 is the most frequent V gene/allele combination in our dataset, and it displays patterns that are consistent with the other genes. Specifically, we see significant variation in the synonymous substitution rate (right panels, figure 3a) even in out-of-frame sequences, which is presumably because of motif-driven mutation. Thus, if we had directly applied traditional means of estimating selection by comparing the rate of non-synonymous and synonymous substitutions, we would have falsely identified sites as being under strong selection. By contrast, the selection inferences made using out-of-frame sequences stay much closer to neutral (figure 3b).
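The site classification rule can be sketched directly from posterior samples of ω; assuming equal-tailed credible intervals, a minimal version is:

```python
# Classify a site as positively selected, negatively selected, or
# neutral by whether the equal-tailed 95% Bayesian credible interval of
# its omega posterior excludes 1.0. Posterior samples would come from
# the empirical Bayes machinery described in the text; here they are
# just a sorted list of draws.

def classify_site(omega_samples, alpha=0.05):
    s = sorted(omega_samples)
    lo = s[int(alpha / 2 * len(s))]
    hi = s[min(int((1 - alpha / 2) * len(s)), len(s) - 1)]
    if lo > 1.0:
        return "positive"   # entire BCI above 1.0
    if hi < 1.0:
        return "negative"   # entire BCI below 1.0
    return "neutral"        # BCI straddles 1.0
```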
We note extensive negative selection in the residues immediately preceding the third heavy chain complementarity-determining region (CDR3; figure 4). The amino acid profile for these sites shows a distinct preference for a tyrosine, or rarely a phenylalanine, two residues before the start of the CDR3 at site 102 (electronic supplementary material, figure S4). It shows a preference for a tyrosine, or more rarely a phenylalanine or a histidine, in the residue just before the start of the CDR3 at site 103. These aromatic positions likely play important structural roles in the antibody complex: site 102 is buried in the core of the heavy chain and makes extensive van der Waals interactions as well as a sidechain-backbone hydrogen bond, while site 103 forms part of the interface between the heavy and light chains (see further description of structural results below).
Overall, we see extensive selection in our sequenced region (figure 5). The mean ω estimate across sites with at least 100 productive and out-of-frame sequences aligned was 0.907; 65.6% of sites had a median ω < 1, with a wide distribution of median ω values and credible interval widths. Many sites were confidently classified as positively, negatively or neutrally evolving, with narrow credible intervals (figure 5, left column); 30.6% of sites were confidently classified as being under negative selection (figure 5, right column).
Because amino acids interior to the protein could be more important for protein stability than exposed ones, we hypothesized that residues under negative selection would be more internal to the antibody protein than those under neutral or positive selection, and that the inverse would be true for residues under positive selection. To test this, we mapped our ω estimates onto antibody structures (figure 6) and calculated the exposure of each amino acid position in the structure via the solvent-accessible surface area (SASA), computed using ROSETTA3 [27]. The normalized SASA was well correlated with the classification of each site: sites classified as being under positive selection were most exposed in the protein structure, followed by neutral sites, then negatively selected sites (electronic supplementary material, figure S7). Differences in surface accessibility were significant between the three groups, with p-values ranging from less than 0.002 for the comparison of positive versus neutral sites to less than 10^-15 for the comparison of negative versus neutral sites (Wilcoxon rank-sum test [28]). Despite the three individuals surveyed here presumably having quite different immune histories, we observe remarkable consistency in substitution and selection within the memory B-cell repertoire. Indeed, we see a very strong correlation of median selection estimates between individuals (electronic supplementary material, figure S5), with between-individual coefficients of determination R^2 between 0.628 and 0.687 for site-specific ω values.
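For reference, the rank-sum comparison of SASA values between two groups of sites can be sketched in a few lines using the normal approximation to the Mann-Whitney U statistic (average ranks for ties, but no tie correction in the variance; the real analysis presumably used a standard statistics package):

```python
import math
from statistics import NormalDist

# Two-sided Wilcoxon rank-sum (Mann-Whitney) p-value via the normal
# approximation; a stand-in for a library implementation.

def rank_sum_p(xs, ys):
    pooled = sorted((v, grp) for grp, vals in ((0, xs), (1, ys)) for v in vals)
    # assign average 1-based ranks, handling ties
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + j + 1) / 2
        i = j
    r1 = sum(ranks[k] for k, (v, g) in enumerate(pooled) if g == 0)
    n1, n2 = len(xs), len(ys)
    u = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    return 2 * (1 - NormalDist().cdf(abs(z)))
```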

Discussion
We find different patterns of substitution across the V, D and J regions that are consistent among individuals (electronic supplementary material, figure S1), even though those individuals have differing levels of substitution (figure 1). We find that the dominant factor determining the V segment substitution process is whether a sequence is out-of-frame or productive, with gene identity being a contributing factor. The pattern of selective pressure is consistent across individuals, and shows especially strong pressure near the boundary between the V gene and the CDR3. Selection estimates for BCRs are still high, with an average ω of approximately 0.9, compared with common examples of Darwinian evolution, such as those seen in Drosophila [29] and mammals [30], where most genes show ω less than 0.1. However, we note that although our estimates of ω are comparable with more traditional estimates, we calculate ω slightly differently, using out-of-frame sequences as a control for motif-driven evolution. Finally, the patterns of selective pressure we observed correlated with levels of surface exposure in published antibody structures: highly conserved sites were more frequently found internally, while residues we classified as positively selected were more exposed.
We note that our analyses are based on data from only three individuals. It is possible that including more individuals would reveal variation in the mutation process. However, we note that these three unrelated individuals had an extraordinary level of agreement, which cannot be explained by sequencing error.

(a) Substitution process
We closely examine the substitution and selection processes in a context-independent manner, not to make a full description of this clearly context-dependent process, but rather to provide a solid framework for future study and to enable downstream comparative analyses (figure 2). Our model selection shows that the best-fitting model allows for a single branch length per sequence, a global multiplier for per-segment differences, a per-segment substitution model and a per-segment rate variation model across sites (table 2). These between-segment differences are certainly due in part to base composition, which also differs significantly between segments and is similar between individuals (electronic supplementary material, table S3). Another contributing factor is probably similarity of local nucleotide context between the genes of a given segment compared with between segments; these nucleotide contexts are known to impact AID-induced somatic hypermutation (reviewed in [21]). We also note that the entirety of the D segment lies within the CDR3 region, and is thus more likely to directly contact an epitope; not surprisingly, we observe higher substitution rates within that segment. By analysing distances between GTR substitution rate matrices, we find that the most important difference between them is determined by whether they are productive or non-productive (figure 2), which is presumably because of the impact of natural selection. We also find a significant correlation between sequence identity and substitution matrix (cf. [31]). In a related though distinct vein, Mirsky et al. [32] develop an amino acid substitution model for BCR sequences, which analogously aggregates information across positions.

(b) Selection process
The role of selection in BCR development has stimulated continuous interest since the pioneering 1985 paper of Clarke et al. [33]. Much of the work on the selection process for BCRs has focused on aggregate statistics to infer selection for entire sequences or sequence tracts, and there has been a lively debate about the relative merits of these tests [34][35][36][37][38]. Recent work has offered methods that evaluate selection on a per-sequence basis [38]. There have also been efforts to infer selection based on lineage shape [39][40][41][42][43][44], which has been a common approach in macroevolutionary studies (reviewed in [45]) and more recently in population genetics [46][47][48][49].
In this work, we develop the first means of inferring per-residue selection using high-throughput sequence data with non-uniform coverage. Our method side-steps the difficulties encountered by previous work in differentiating between selection and motif-driven mutation in BCRs [11,13,[34][35][36][37][38] by developing statistical means to compare in-frame and out-of-frame rearrangements. An alternative means of estimating selection was recently developed by Kepler et al. [12], in which a regression model incorporating a detailed model of motif preferences was used to infer selection coefficients for the framework and CDR regions in aggregate. In contrast to the previous work on B-cell selection, our methods provide a per-residue selection map for a contiguous stretch of BCR sequence.
We use out-of-frame rearrangements as our selection-free control population. These sequences do not create functional IGH proteins, but may be carried in heterozygous B cells which do have a productively rearranged IGH allele. Thus they undergo SHM, but any selection occurs on the level of the productively rearranged allele, not on the residues in the unproductive allele. We observe a similarly high proportion of germline-identical sequences for in-frame and out-of-frame subsets in naive cells (electronic supplementary material, table S2); differences from germline derive in part from sequencing and other platform errors that do not depend on frame. For memory cells, we see extensive action of somatic hypermutation, but with a higher proportion of germline-identical out-of-frame sequences than in-frame (electronic supplementary material, table S2). We interpret these additional mutations for in-frame memory sequences as following from the process of affinity maturation for a specific antigen. We acknowledge that some out-of-frame sequences could still feel the impact of selection, which would occur if the sequences accrue frameshift mutations in the process of affinity maturation. However, it is thought that SHM is primarily a process of point mutation [21], and indeed, we observe whole-codon indels in only 0.013–0.014% of memory sequences within templated segments. Still, if a weaker version of selection were occurring on the out-of-frame sequences compared with the productive ones, this would simply make our estimates of selection conservative, pulling estimates of ω closer to unity; yet our selection estimates are confidently classified as non-neutral for a substantial fraction of sites (figure 5).
In applying our methodology to IGHV sequences, we gain a high resolution per-gene map of selective forces on BCRs for part of the V gene. This part is primarily in the framework region, which is thought to be under substantial evolutionary constraint to preserve structure. Indeed, we see a pattern of quite strong negative selection in the region around the beginning of the CDR3 (figure 4), agreeing with recent work that found strong negative selection in one site near the beginning of the CDR3 [11]. However, other sites in this section of framework have substantially relaxed selection (figure 4). These results thus provide a more nuanced view of the constraints on BCR sequences than the traditional framework/variable designations, as also noted by Yaari et al. [11].
This work points the way towards future directions. In this work, we assumed that the size of individual lineages is small compared with the size of the overall repertoire, and thus that lineage structure could be ignored for the purpose of evolutionary model analysis. Ideally, we would reconstruct lineages and then do evolutionary analysis on the tree corresponding to each lineage. However, reconstructing groups of sequences forming a lineage is a challenging prospect on its own, to say nothing of doing phylogenetics on sequences in the presence of strong context-specific mutation-selection patterns, so we defer incorporating those aspects until the necessary methods have been developed. We have recently developed an HMM framework to analyse VDJ rearrangements [50] and are currently developing and validating ways to use this framework for likelihood-based (as opposed to procedure-based [51,52]) lineage group inference.

Material and methods

(a) Dataset
The complete description of the experiment will be published separately [53]. However, here we give a brief overview of the data in order to facilitate understanding of our analysis and to emphasize that the experimental design has a number of features that greatly reduce errors in sequencing and quantification. A volume of 400 ml of blood was drawn from three healthy volunteers under IRB protocol at the Fred Hutchinson Cancer Research Center. CD19+ cells were obtained by bead purification then flow sorted to isolate over 10 million naive (CD19+ CD27− IgD+ IgM+) and over 10 million memory (CD19+ CD27+) B cells, with greater than 97% purity. Genomic DNA was extracted and the ImmunoSeq assay described in [3] was performed on the six samples at Adaptive Biotechnologies in Seattle, WA, USA.
The experiments and preprocessing were carefully designed to give an accurate quantification of error-corrected observed sequences. To mitigate preferential amplification of some V/J pairs through primer bias, the PCR amplification was performed using primers optimized via a large collection of synthetic templates [54]. To reduce sequencing errors and provide accurate quantification, each sample was divided among the wells on two 96-well plates and bar-coded individually. These templates were then amplified and 'over-sequenced' (electronic supplementary material, table S1), so that an average of almost six reads were present for each template. Following Robins et al. [55], reads matching the same template were collapsed into a consensus sequence with reduced sequencing error. Here, we grouped reads from within a well into consensus sequences by joining reads with Hamming distance less than or equal to two, and inferred the consensus sequence in each group using parsimony. Groups with only one member were discarded. This procedure protects against collapsing distinct sequences, as the probability that nearly identical distinct sequences co-occur exclusively in the same wells is small. We acknowledge this procedure may eliminate low frequency variants, but we prioritized accuracy over sensitivity towards these variants; despite this conservative analysis pipeline we observed substantial signal in the data.
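A simplified re-implementation of the collapsing step conveys the idea; the greedy single-linkage grouping and per-position majority-vote consensus below are assumptions standing in for the actual pipeline's parsimony-based procedure:

```python
from collections import Counter

# Sketch of within-well read collapsing: reads within Hamming distance 2
# of a group's first read are merged, singleton groups are discarded, and
# each surviving group is reduced to a per-position majority consensus.
# Greedy grouping and majority vote are simplifications of the real
# pipeline, which used parsimony to infer the consensus.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def collapse_reads(reads, max_dist=2):
    groups = []
    for r in reads:
        for g in groups:
            if hamming(r, g[0]) <= max_dist:
                g.append(r)
                break
        else:
            groups.append([r])
    consensuses = []
    for g in groups:
        if len(g) < 2:
            continue  # groups with only one member are discarded
        cons = "".join(Counter(col).most_common(1)[0][0] for col in zip(*g))
        consensuses.append(cons)
    return consensuses
```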
Deep sequencing these B-cell populations resulted in 15 023 951 (electronic supplementary material, table S1) unique 130 bp observed sequences after preprocessing that spanned the third heavy chain complementarity-determining region (CDR3) (figure 7). The full dataset is available at http://adaptivebiotech.com/link/mat2015. Each sequence was first aligned to each V gene using the Smith–Waterman algorithm with an affine gap penalty [56]. The 3′ portion of the sequence not included in the best V gene alignment was next aligned to all D and J genes available from the IMGT database [14]. The best-scoring V, D and J alignment for each sequence was taken to be the germline alignment, and the corresponding germline sequence was taken to be the ancestral sequence for that observed sequence; in the case of ties, one germline sequence was chosen randomly among those alleles present at abundance greater than or equal to 10%. Sequences were classified as productive or out-of-frame based on whether the V and J segments were in the same frame; all sequences with stop codons were removed, as these sequences could result from either an unproductive rearrangement event or inactivation due to a lethal mutation. The 18 V gene polymorphisms present at the highest frequency in the naive populations of the individuals surveyed which were not represented in the IMGT database were added to the list of candidates for alignment. In contrast to naive sequences, which have no mutations across almost all sites, the alleles we added to the germline collection were all present at greater than 30% frequency for the IGHV gene in question.
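The productive/out-of-frame classification and stop-codon filter can be sketched as follows, with hypothetical coordinates standing in for the real alignment output:

```python
# Hedged sketch of sequence classification: a rearrangement is in-frame
# when the J segment falls in the same reading frame as the V segment,
# i.e. when the offset between the V frame start and the J start is a
# multiple of 3; sequences with a stop codon are removed entirely. The
# coordinate arguments are illustrative simplifications.

STOPS = {"TAA", "TAG", "TGA"}

def has_stop(seq, frame_start=0):
    codons = (seq[i:i + 3] for i in range(frame_start, len(seq) - 2, 3))
    return any(c in STOPS for c in codons)

def classify(seq, v_frame_start, j_start):
    if has_stop(seq, v_frame_start):
        return "discard"          # stop codon: ambiguous cause, removed
    if (j_start - v_frame_start) % 3 == 0:
        return "productive"       # V and J in the same reading frame
    return "out-of-frame"
```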

(c) Substitution models, fitting and analysis
The setting of B-cell affinity maturation is substantially different from that typically encountered in molecular evolution studies, and hence there are some differences between our model fitting procedure and common practice. For BCRs outside non-templated insertions, the root state is the V, D and J genes encoded in the germline from which a sequenced BCR derives. Thus, we analyse substitutions that have occurred in evolution away from the germline-encoded segments of observed BCR sequences, ignoring sites comprising non-templated insertions. The CDR3 region of an antibody is generally sufficient to uniquely identify its specificity [57]. Although there are certainly some clones in our dataset that derive from a single rearrangement event but differ due to somatic hypermutation, the probability that a given pair of distinct sequences derives from a single common ancestor is small: targeted searches for clonally related antibodies during infection have identified them at 0.003 to 0.5% [58]. It is a substantial challenge to statistically infer which sequences sit together in a clonal lineage and then to perform phylogenetic analysis on such a large dataset (see work by [12,59,60]), and performing this analysis incorrectly could bias our results. Additionally, we encountered significant computational barriers analysing the volume of sequences available, and adding phylogenetic structure to our analysis may have made the analysis computationally prohibitive even if we had the lineage structure in hand (we believe this is the largest number of sequences from a single dataset analysed in a selection study to date).
For these reasons, our analyses were performed on a set of pairwise alignments, each representing a two-taxon tree containing an observed sequence and its best-scoring germline sequence according to Smith–Waterman alignment. This is equivalent to using a rooted 'star' tree where the root state is known. This assumption allowed us to focus our attention on the selection inference problem.
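Concretely, with the root state known, the likelihood of each pairwise alignment reduces to a product of transition probabilities over sites. A minimal sketch in Python, using a Jukes–Cantor rate matrix as a stand-in for the fitted GTR models (the function names here are ours, not from the pipeline):

```python
import numpy as np
from scipy.linalg import expm

NT = {"A": 0, "C": 1, "G": 2, "T": 3}

def pairwise_log_likelihood(germline, observed, Q, t):
    # With the root (germline) state known, the likelihood is a product
    # over sites of entries of the transition matrix P(t) = exp(Qt).
    P = expm(Q * t)
    return sum(np.log(P[NT[g], NT[o]]) for g, o in zip(germline, observed))

# Jukes-Cantor rate matrix, normalized to one expected substitution per
# unit branch length, as a stand-in for a fitted GTR model.
Q_jc = np.full((4, 4), 1.0 / 3.0)
np.fill_diagonal(Q_jc, -1.0)

ll = pairwise_log_likelihood("ACGT", "ACGA", Q_jc, t=0.1)
```

Maximizing this quantity over t for each pair yields the per-sequence branch lengths described below.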
Substitution models are summarized in table 1 and described in detail here. We will use n for the number of observed sequences. Our models are characterized by three components. First, the subscript of t describes how branch length assignments are allowed to vary across segments of a single sequence. The t_i model allows branch lengths to vary independently, resulting in 3n parameters. The t_r model has two global per-segment multipliers to define the branch lengths (e.g. figure 1), with the V segment rate fixed at unity, resulting in n + 2 parameters. The subscript of Q describes how rate matrices are fit. The Q_i model allows an independent global GTR rate matrix for each segment, with a total of 24 parameters. The Q_r model has just one GTR rate matrix overall, with eight parameters. The subscript of G denotes how across-site substitution rate variation is modelled in terms of a four-category discrete Gamma distribution [18]. The G_i model allows an independent rates-across-sites parameter for each segment, with three parameters. The G_s model has a global rates-across-sites parameter, with one parameter. Given these choices concerning how the data were partitioned and parametrized, the standard phylogenetic likelihood function was used as described in the original literature [18,61,62] and in books (e.g. [63,64]).
Maximum-likelihood values of substitution model parameters and branch lengths were estimated using a combination of Bio++ [65] and BEAGLE [66], with model optimization via the BOBYQA algorithm [67] as implemented in NLopt [68], and branch length optimization via Brent's method [69]. Optimization alternated between substitution model parameters and branch lengths until the change in log-likelihood at a given iteration was less than 0.001. Our software to perform this optimization is available from https://github.com/cmccoy/fit-star.
For the principal components analysis on substitution matrices, we first obtained the median branch length t across all sequences for all individuals. We then calculated the corresponding transition matrix for each model given an equiprobable starting state: exp(Qt) diag(0.25). These were then projected onto the first two principal components, adapting suggestions for doing PCA in the simplex [22]. Specifically, each row of these matrices, as a discrete probability distribution, is a point in the simplex. Hence, we applied a centred log-ratio transformation to each row of this matrix using the clr function of the R package compositions [70], and followed with standard principal components analysis.
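The transform-then-project step can be sketched as follows (an illustration in Python rather than the R `compositions`/PCA calls used in the paper; function names are ours). Note that clr is invariant to a constant row scaling, so the diag(0.25) factor does not affect the projection:

```python
import numpy as np
from scipy.linalg import expm

def clr(rows):
    # Centred log-ratio transform: each row is a point in the simplex.
    logs = np.log(rows)
    return logs - logs.mean(axis=1, keepdims=True)

def model_points(rate_matrices, t):
    # Map each rate matrix Q to exp(Qt), clr-transform each row,
    # and flatten to a single vector per model.
    return np.array([clr(expm(Q * t)).ravel() for Q in rate_matrices])

def project_2d(X):
    # Principal components via SVD of the centred data matrix.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

Q_jc = np.full((4, 4), 1.0 / 3.0)
np.fill_diagonal(Q_jc, -1.0)
# Three toy models differing only in overall rate.
proj = project_2d(model_points([Q_jc, 2 * Q_jc, 3 * Q_jc], t=0.1))
```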
To compare distance between inferred models and sequence distance, we calculated the Hamming distance between all pairs of V genes using the alignment available from the IMGT database [14]. To obtain distances between models, we calculated the Euclidean distance between pairs of the transformed probability vectors.

Figure 7. B-cell receptor schematic showing variable (V), diversity (D) and joining (J) gene segments as well as framework (FW) and complementarity-determining regions (CDRs). In VDJ recombination, individual V, D and J gene segments are randomly selected and joined together via a process that deletes some randomly distributed number of nucleotides on their boundaries, then joins them together with random 'non-templated insertions' (N). The specificity of an antibody is primarily determined by the region defined by the heavy chain recombination site, referred to as the third complementarity-determining region (CDR3). The sequence data for this study started in the fourth framework (FW) region and continued into the third. Amplification was via a forward primer in the FW2 region and a reverse primer in the FW4 region.

Because of the large volume of sequences to analyse, we also needed a mechanism to detect selection that could be run on over 15 million sequences, most of which did not share evolutionary history. Classical means of estimating selection by codon model fitting [71,72] could not be used, even in their most recent and much more efficient incarnation [73]. Instead, we used the renaissance counting approach [16], which we modified to work under varying levels of coverage. A key part of the renaissance counting approach is an empirical Bayes regularization procedure [74]. This procedure uses the entire collection of sites to inform substitution rate estimation at each site individually, effectively sharing data across sites and allowing inference at sites which either display few substitutions or have lower sequencing coverage.
We note that obtaining precise per-site selection estimates for hundreds of genes requires a large quantity of sequence data like that which we have here: the decrease in read coverage at the 5′ end of the V gene correspondingly increases the width of the error bar (figure 3, [23]), resulting in a decrease of power for selection regime classification (figure 4).

(i) Bayesian inference of selection on a star-shaped phylogeny
To determine the site-specific selection pressure for each V gene, we extended the renaissance counting method, described in [16], to accommodate pairwise analyses of a large number of sequences with a known ancestral sequence and non-constant site coverage. The renaissance counting method starts by assuming a separate HKY substitution model [75] for each of the three codon positions and uses Markov chain Monte Carlo (MCMC) to approximate the posterior distribution of model parameters, which include substitution rates and a phylogenetic tree with branch lengths. Since in our analyses we assumed that query sequences are related by a star-shaped phylogeny, our model parameters included only the HKY model parameters and the branch lengths leading to the query sequences. Moreover, we fixed the parameters of the HKY model, along with the relative rates between codon positions, to the maximum-likelihood estimates produced using the whole dataset. We note that we could have fit per-codon-position GTR models and used them for stochastic mapping; however, such a model would still be substantially mis-specified compared with a codon model, and thus we decided to follow [16] and use HKY for the mapping. A priori, we assumed that branch lengths leading to the query sequences independently follow an exponential distribution with mean 0.1. We performed 20 000 iterations of MCMC, scaling the branch length leading to the observed sequence at each iteration, and sampling every 40 iterations to generate a total of 500 samples. Given each posterior sample of query branch lengths, the renaissance counting method draws a sample of ancestral substitutions conditional on the observed data using a simple per-codon-position nucleotide model; the resulting sampled ancestral sequences are then used to count synonymous and non-synonymous mutations.
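The per-sequence branch-length update can be sketched as a simple Metropolis–Hastings scheme. The following is our own illustrative code, substituting a Jukes–Cantor model for HKY for brevity; the scaling proposal and exponential prior with mean 0.1 follow the description above:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
NT = {"A": 0, "C": 1, "G": 2, "T": 3}

def log_lik(germ, obs, Q, t):
    P = expm(Q * t)
    return sum(np.log(P[NT[g], NT[o]]) for g, o in zip(germ, obs))

def sample_branch_length(germ, obs, Q, n_iter=20000, thin=40, prior_mean=0.1):
    # Metropolis-Hastings over the branch length leading to one query,
    # with an exponential prior and a multiplicative scaling proposal.
    t = prior_mean
    lp = log_lik(germ, obs, Q, t) - t / prior_mean
    samples = []
    for i in range(n_iter):
        t_new = t * np.exp(rng.uniform(-0.5, 0.5))
        lp_new = log_lik(germ, obs, Q, t_new) - t_new / prior_mean
        # The scaling move contributes log(t_new / t) to the Hastings ratio.
        if np.log(rng.uniform()) < lp_new - lp + np.log(t_new / t):
            t, lp = t_new, lp_new
        if (i + 1) % thin == 0:
            samples.append(t)
    return samples

Q_jc = np.full((4, 4), 1.0 / 3.0)
np.fill_diagonal(Q_jc, -1.0)
# Germline and query differing at two of thirty sites.
draws = sample_branch_length("ACGTACGTAC" * 3,
                             "ACGTACGTAC" * 2 + "ACGTACTTAG",
                             Q_jc, n_iter=4000, thin=40)
```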

(ii) Sampling codon substitutions
For each unique read, for each codon position l and posterior sample j, counts of synonymous (C_jl^(S)) and non-synonymous (C_jl^(N)) substitutions at each site were imputed using stochastic mapping as described above in §4d(i).
For N MCMC iterations based on an alignment of L codons, the result of this procedure was two N × L matrices, containing the number of synonymous and non-synonymous events, respectively, at each codon position in each posterior sample. Counts of each substitution type, along with the total branch length for each site, were aggregated across sequences from the same gene by elementwise addition. This took about 5 days on a 194-core cluster launched on Amazon Web Services using starcluster (http://star.mit.edu/cluster/).
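The aggregation step is plain elementwise addition keyed by germline gene; a minimal single-machine sketch of the reduce (gene names and array shapes here are purely illustrative):

```python
import numpy as np
from collections import defaultdict

def aggregate_by_gene(per_read_results):
    # per_read_results: iterable of (gene, syn_counts, nonsyn_counts),
    # where the count arrays share a common codon coordinate system.
    # In the full pipeline this reduce was distributed with Spark.
    totals = defaultdict(lambda: [None, None])
    for gene, syn, nonsyn in per_read_results:
        syn, nonsyn = np.asarray(syn), np.asarray(nonsyn)
        if totals[gene][0] is None:
            totals[gene] = [syn.copy(), nonsyn.copy()]
        else:
            totals[gene][0] += syn
            totals[gene][1] += nonsyn
    return dict(totals)

reads = [("IGHV1-2", [1, 0], [0, 1]),
         ("IGHV1-2", [0, 2], [1, 0]),
         ("IGHV3-23", [1, 1], [2, 0])]
agg = aggregate_by_gene(reads)
```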

(iii) Empirical Bayes regularization
The varying length of the CDR3, combined with short observed sequences, leads to quite skewed coverage of sites stratified by gene. We modified the empirical Bayes regularization procedure of the original renaissance counting method [16] to account for varying depth of observation as follows. First, we define the branch length leading to query sequence i for site l as

t_il = t_i if any residues in observed sequence i align to codon position l, and t_il = 0 otherwise.

We assume that substitution counts for site l come from a Poisson process with rate λ_l t_l, where t_l = Σ_{i=1}^{n} t_il. As in the original renaissance counting, we assume that the site-specific rates λ_l come from a Gamma distribution with shape α and rate β. We fix α and β to their maximum-likelihood estimates α̂ and β̂ by treating sampled branch lengths and counts as fixed and maximizing the marginal likelihood

L(α, β) = ∏_{l=1}^{L} [Γ(C_l + α) / (Γ(α) C_l!)] (β / (t_l + β))^α (t_l / (t_l + β))^{C_l},   (4.1)

We provide a derivation of this likelihood function in the electronic supplementary material. In contrast to [16], we do not have closed-form solutions for the maximum-likelihood or method-of-moments estimators of α and β in this slightly more complex setting. However, it does not add a substantial computational burden to estimate these parameters numerically via the BOBYQA optimizer [67]. Given α̂ and β̂, we draw rates λ_l from the posterior

λ_l | C_l ~ Gamma(C_l + α̂, t_l + β̂),   (4.2)

derived in the electronic supplementary material. Estimation of α and β by maximizing likelihood (4.1) fails when the sample variance of the observed counts C_1, ..., C_L, weighted by the site-specific branch length sums t_1, ..., t_L, is less than the corresponding weighted sample mean.
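The estimation step can be sketched as follows. This is an illustration under the stated Gamma-Poisson model, with scipy's general-purpose optimizer standing in for the BOBYQA/NLopt combination used in the paper; all function names are ours:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln
from scipy.stats import gamma as gamma_dist

def neg_log_marginal(log_params, counts, t):
    # Negative log of the Gamma-Poisson marginal likelihood (4.1),
    # parametrized on the log scale to keep alpha and beta positive.
    a, b = np.exp(log_params)
    ll = (gammaln(counts + a) - gammaln(a) - gammaln(counts + 1)
          + a * np.log(b / (t + b)) + counts * np.log(t / (t + b)))
    return -ll.sum()

def fit_empirical_bayes(counts, t):
    counts, t = np.asarray(counts, float), np.asarray(t, float)
    res = minimize(neg_log_marginal, x0=[0.0, 0.0], args=(counts, t))
    a_hat, b_hat = np.exp(res.x)
    # Site-specific posteriors (4.2): Gamma(C_l + a_hat, rate t_l + b_hat).
    posterior = gamma_dist(counts + a_hat, scale=1.0 / (t + b_hat))
    return a_hat, b_hat, posterior

rng = np.random.default_rng(1)
t = np.full(200, 5.0)                    # per-site branch length sums
lam = rng.gamma(2.0, 1.0, size=200)      # true rates: shape 2, rate 1
counts = rng.poisson(lam * t)
a_hat, b_hat, posterior = fit_empirical_bayes(counts, t)
```

The posterior mean (C_l + α̂)/(t_l + β̂) shrinks low-coverage sites toward the shared prior, which is the data-sharing behaviour described above.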
In these cases, we assume that the observed counts are drawn from Poisson distributions with site-specific rate λ t_l, where λ is shared across sites and is estimated from the data by maximizing the corresponding Poisson likelihood (which has the closed-form solution λ̂ = Σ_l C_l / Σ_l t_l).

(iv) Simulations
To validate this method, we simulated 1000 sequences of 100 codon sites each under the GY94 model and a star-like phylogeny with branch lengths fixed to 0.05 using piBUSS [76]. We varied ω over the alignment, with 85 sites having ω = 0.1, 5 sites having ω = 1, and 10 sites under positive selection (ω = 10). We next introduced varying coverage over the alignment: sequences were truncated such that no sequences covered the first 10 codons, only half of the sequences had coverage over the next 40 codons, and all sequences covered the remaining 50 codons (electronic supplementary material, figure S6, bottom panel). Estimates of ω were more accurate with higher site coverage (electronic supplementary material, figure S6, top panel). Of note, as a result of the empirical Bayes regularization, even some sites with no coverage were classified as being under purifying selection. In all other analyses, we only report ω estimates for sites covered by at least 100 sequences. As the starting state is always the germline amino acid, no classifications can be made for sites which are tryptophan or methionine in the germline, as all mutations are non-synonymous for codons encoding those amino acids.
(v) Site-specific estimates of ω
In [16], the authors arrive at site-specific estimates of ω_l by comparing data-conditioned (C) rates λ_l of non-synonymous (N) and synonymous (S) substitutions, each normalized by an 'unconditional' rate (U):

ω_l^RC = (λ_l^(C,N) / λ_l^(U,N)) / (λ_l^(C,S) / λ_l^(U,S)).

As SHM is highly context-specific, we chose to use rates inferred from out-of-frame rearrangements in place of the unconditional rates, as these more accurately represent the mutation rates in the absence of selection:

ω_l = (λ_l^(P,N) / λ_l^(O,N)) / (λ_l^(P,S) / λ_l^(O,S)),

where P and O refer to productive and out-of-frame rearrangements, respectively.
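Given rate estimates for the four classes (productive/out-of-frame crossed with non-synonymous/synonymous), the per-site estimate is a ratio of ratios; a short sketch (array names are ours):

```python
import numpy as np

def omega(rate_PN, rate_PS, rate_ON, rate_OS):
    # Non-synonymous/synonymous contrast in productive (P) rearrangements,
    # normalized by the same contrast in out-of-frame (O) rearrangements,
    # which reflect mutation in the absence of selection.
    rate_PN, rate_PS = np.asarray(rate_PN, float), np.asarray(rate_PS, float)
    rate_ON, rate_OS = np.asarray(rate_ON, float), np.asarray(rate_OS, float)
    return (rate_PN / rate_ON) / (rate_PS / rate_OS)

# A site where non-synonymous substitutions are depleted in productive
# rearrangements relative to out-of-frame ones: purifying selection.
w = omega([0.5], [1.0], [1.0], [1.0])
```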

(vi) Implementation
We used the BEAST [77] implementation of the renaissance counting procedure to sample counts for both synonymous and non-synonymous substitutions at each site. We extended BEAST v. 1.8.0 to generate 'unconditional' counts using the germline state as the starting state for simulating along the edge to the query, as described above in §4d(v). This process (sampling substitutions for each sequence, then combining counts from sequences mapping to the same IGHV) provides a natural setting for parallelization via the map-reduce model of computation; we used the Apache Spark [78] framework to distribute work across a cluster running on Amazon EC2. Our software to perform this analysis is available from https://github.com/cmccoy/startreerenaissance.

(e) Structural analysis
For each of the eleven most frequently occurring V genes, we identified the closest structure in the Protein Data Bank (PDB) [79] using BLAST [80]. Structures were visualized using PyMOL [81]. We calculated the solvent-accessible surface area (SASA) for each amino acid position using ROSETTA3 [27] and normalized these values by dividing them by the fully exposed SASA of the given residue type in an extended chain. Wilcoxon rank-sum tests [28] between all pairs of selection classifications (negative, neutral, positive) were used to assess whether the normalized SASA differed between groups. p-values were Bonferroni-corrected [82] to account for multiple comparisons. The details of our computational methods are available in the electronic supplementary material.
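The comparison can be sketched as follows (illustrative data; `scipy.stats.ranksums` implements the Wilcoxon rank-sum test, and the Bonferroni step simply multiplies each p-value by the number of comparisons):

```python
import numpy as np
from itertools import combinations
from scipy.stats import ranksums

def compare_sasa_by_class(sasa_by_class):
    # Pairwise Wilcoxon rank-sum tests of normalized SASA between
    # selection classes, Bonferroni-corrected for multiple comparisons.
    pairs = list(combinations(sorted(sasa_by_class), 2))
    corrected = {}
    for a, b in pairs:
        _, p = ranksums(sasa_by_class[a], sasa_by_class[b])
        corrected[(a, b)] = min(1.0, p * len(pairs))
    return corrected

rng = np.random.default_rng(2)
sasa = {"negative": rng.uniform(0.0, 0.2, 30),   # mostly buried residues
        "neutral":  rng.uniform(0.2, 0.8, 30),
        "positive": rng.uniform(0.7, 1.0, 30)}   # mostly exposed residues
pvals = compare_sasa_by_class(sasa)
```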
Data accessibility. Data are available at http://adaptivebiotech.com/link/mat2015. We have made an Amazon Machine Image (AMI) [83] available pre-loaded with our analysis pipeline and some example data. To use it, launch an instance of AMI ami-ab295b9b in the us-west-2 region and log in as user ubuntu (no password needed: authentication is by public key).