A statistical model of COVID-19 testing in populations: effects of sampling bias and testing errors
Abstract
We develop a statistical model for the testing of disease prevalence in a population. The model assumes a binary test result, positive or negative, but allows for biases in sample selection and both type I (false positive) and type II (false negative) testing errors. Our model also incorporates multiple test types and is able to distinguish between retesting and exclusion after testing. Our quantitative framework allows us to directly interpret testing results as a function of errors and biases. By applying our testing model to COVID-19 testing data and actual case data from specific jurisdictions, we are able to estimate and provide uncertainty quantification of indices that are crucial in a pandemic, such as disease prevalence and fatality ratios.
This article is part of the theme issue ‘Data science approach to infectious disease surveillance’.
1. Introduction
Real-time estimation of the level of infection in a population is important for assessing the severity of an epidemic as well as for guiding mitigation strategies. However, inferring disease prevalence via patient testing is challenging due to testing inaccuracies, testing biases, and heterogeneous, dynamically evolving populations and disease severity.
There are two major classes of tests that are used to detect previous and current SARS-CoV-2 infections [1]. Serological, or antibody, tests measure the concentration of antibodies in infected and recovered individuals. Since antibodies are generated as a part of the adaptive immune system response, it takes time for detectable antibody concentrations to develop. Serological tests should thus not be used as the only method to detect acute SARS-CoV-2 infections. An alternative testing method is provided by viral-load or antigen tests, such as reverse transcription polymerase chain reaction (RT-PCR), enzyme-linked immunosorbent assay (ELISA) and rapid antigen tests, which are able to identify ongoing SARS-CoV-2 infections by directly detecting SARS-CoV-2 nucleic acid or antigen.
Test results are mainly reported as binary values (0 or 1, negative or positive) and often do not include further information such as the cycle threshold (C_t) for RT-PCR tests. The cycle threshold is the minimum number of PCR cycles at which amplified viral RNA becomes detectable; large values of C_t therefore indicate low viral loads in the specimen. An increase in C_t of about 3.3 corresponds to a viral load that is about one order of magnitude lower [2]. Cycle threshold cutoffs are not standardized across jurisdictions and range between 37 and 40, making it difficult to compare RT-PCR test results [3]. Lower cutoffs in the range of 30–35 may be more reasonable to avoid classifying individuals with insignificant viral loads as positive [3].
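As a rough quantitative aside (a sketch of the standard amplification argument, not a quotation from the original), each PCR cycle approximately doubles the amount of amplified material, so the viral load V in a specimen scales roughly as

$$ V \propto 2^{-C_t}, \qquad \Delta C_t = \log_2 10 \approx 3.3 \ \text{per tenfold change in viral load}. $$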
Further uncertainty in COVID-19 test results arises from the different type I errors (false positives) and type II errors (false negatives) associated with different assays. Note that, inherent to any test, the threshold (such as the C_t cutoff mentioned above) may be tunable. Therefore, besides intrinsic physical limitations, binary classification of ‘continuous-valued’ readouts (e.g. viral load) may also lead to an overall error of either type [4]. In this work, we will assume that there is a standardized threshold and that the test readout is binary; if any virus is detected, the test subject is positive. We will not explicitly model the underlying statistics of the errors but instead assume that the binary test readouts can be erroneous at specified rates. Some uninfected individuals will be wrongly classified as infected at the false-positive rate (FPR) and some infected individuals will be wrongly classified as uninfected at the false-negative rate (FNR). For serological COVID-19 tests, the estimated proportions of false positives and false negatives are relatively low [5–8]. The FNRs of RT-PCR tests depend strongly on the actual assay method [9,10] and may be significantly larger than those of serological tests. Typical FNR values for RT-PCR tests lie between 0.1 and 0.3 [11,12] but might be even higher if throat swabs are used [7,12]. False-negative rates may also vary significantly depending on the time delay between initial infection and testing [8]. According to a systematic review [13] that was conducted worldwide, the initial FNR is about 0.54, underlining the importance of retesting. Similar to those of serological tests, reported false-positive rates of RT-PCR tests are small [7].
Estimates of disease prevalence and other surveillance metrics [14,15] need to account for FPRs and FNRs, in particular if reported positive-testing rates [16] are in the few per cent range and potentially dominated by type I errors. In addition to type I/II testing errors, another confounding effect is biased testing [17], that is, preferential testing of individuals who are expected to carry a high viral load (e.g. symptomatic and hospitalized individuals). Biasing testing towards certain demographic and risk groups leads to additional errors in disease prevalence estimates that need to be corrected for.
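To illustrate how type I errors can dominate at low prevalence, consider the expected apparent positive fraction under random testing (a back-of-the-envelope sketch with illustrative numbers, not values taken from the studies cited above):

$$ \tilde f = (1-\mathrm{FPR}-\mathrm{FNR})\,f + \mathrm{FPR}\,; \qquad f = 0.01,\ \mathrm{FPR} = 0.01,\ \mathrm{FNR} = 0.2 \ \Rightarrow\ \tilde f \approx 0.018, $$

so that more than half of the apparent positives in this example would be false positives.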
In §2, we discuss related studies that developed statistical methods to correct for erroneous and biased testing. To account for type I/II errors, bias, retesting and exclusion after testing, we develop a corresponding framework for disease testing in §3. We apply our testing model to COVID-19 testing and case data in §4 and estimate testing bias by comparing random-sampling testing data [18] with officially reported, biased COVID-19 case data in §5. We conclude our study in §6.
2. Related work
Several previous studies have addressed the issue of correcting for errors and testing biases. In the random-sampling study [18], specificity- and sensitivity-corrected ELISA results are reported without specifying the actual statistical correction method. In another work [19], corrected case numbers for different European countries are derived based on the assumption that the infection fatality ratio (IFR) is independent of geographical location. If the IFR were known exactly, this method could be used to estimate the sampling bias by comparing the reported number of cases with the corresponding reported number of deaths divided by the IFR. However, the framework in [19] does not account for false negatives and false positives. In addition, there are geographical variations of the IFR that may be attributed to significant differences in incidence rates, population density, preparedness of public health systems and age structure [14,20,32]. Therefore, the assumption of a time- and location-independent IFR may yield inaccurate results.
In [21], a semi-Bayesian probabilistic bias analysis is used to estimate the cumulative number of SARS-CoV-2 infections in the United States. The employed corrections for erroneous testing are similar to the results that we derive in §4. Corrections for incomplete testing are based on distributions associated with random sampling studies similar to [18], which we use in §5.
One major difference between [21] and our work is that we derive the full probability distributions of positive test counts, with and without retesting, and explicitly account for test-type-dependent specificities and sensitivities.
3. Statistical testing model
Here, and in the following subsections, we develop a general statistical model for estimating the number of infected individuals in a jurisdiction by testing a sample population. The relevant variables and parameters used in our derivations are listed and defined in table 1. Suppose we randomly administer Q tests within a given short time period (e.g. within 1 day or 1 week) to a total effective population of N previously untested individuals. This population comprises S susceptible, I infected and R removed (i.e. recovered or deceased) individuals, all of which are unknown. S, I and R can dynamically change from one testing period to another due to transmission and recovery dynamics, as well as removal from the untested pool by virtue of being tested. The total population N can also change through intrinsic population dynamics (birth, death and immigration), but can be assumed to be constant over the typical time scale of an epidemic that does not cause mass death.
Table 1. Definitions of the main model variables and parameters.

symbol | definition
---|---
N | population in jurisdiction
Q | number of tests administered
Q_+ | recorded positives under error-free testing
Q̃_+ | recorded positives under error-prone testing
f | true proportion of infected individuals
f_b | fraction of positives under biased, error-free testing
f̃_b | fraction of positives under biased, error-prone testing
b | testing bias parameter
b̂, f̂ | estimates of bias and underlying infection fraction
FPR | false-positive rate
FNR | false-negative rate
We start the derivation of our statistical model by first fixing S, I and R, assuming perfect, error-free testing and considering a ‘testing with replacement’ scenario, in which tested individuals can be retested within the same time window. Under these conditions, the probability that Q_+ tests are returned positive and Q_- = Q − Q_+ tests are returned negative is
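The displayed equation is not reproduced here; a minimal reconstruction consistent with the description above (error-free testing with replacement, true infected fraction f, notation of table 1) is the binomial form

$$ P(Q_+ \mid Q, f) \;=\; \binom{Q}{Q_+}\, f^{\,Q_+}\,(1-f)^{\,Q_-}, \qquad Q_- = Q - Q_+ . $$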
Equation (3.1) describes perfect, error-free and random testing. However, if there is some prior suspicion of being infected, the administration of testing may be biased. For example, certain jurisdictions focus testing primarily on hospitalized patients and people with significant symptoms [17], thus biasing the tests towards those who are infected. We quantify such testing biases through a biased-testing weight that depends on the bias parameter b, leading to the following modification of equation (3.1):
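A sketch of the bias-reweighted analogue, under the assumption that the bias enters as a multiplicative weight b on each positive outcome (the normalization then follows from the binomial theorem):

$$ P_b(Q_+ \mid Q, f) \;=\; \binom{Q}{Q_+}\, \frac{(b f)^{\,Q_+}\,(1-f)^{\,Q_-}}{(1 - f + b f)^{\,Q}} , $$

which reduces to the unbiased form for b = 1.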
The expected value of the observed positive fraction Q_+/Q, which we denote f_b, can be understood as the product of the true underlying infected fraction f and a bias function that depends on f and the bias parameter b, i.e. f_b = f B(b, f), where B(b, f) is given by
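The displayed form (3.9) is not reproduced here; a reconstruction consistent with the reweighted distribution sketched above and with the limiting behaviour described next is

$$ f_b \;=\; \mathbb{E}\!\left[\frac{Q_+}{Q}\right] \;=\; f\,B(b, f), \qquad B(b, f) \;=\; \frac{b}{1 - f + b f}, $$

so that B = 1 for unbiased testing (b = 1), B → 0 as b → 0 and B → 1/f (i.e. f_b → 1) as b → ∞.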
Figure 1a shows the bias function (3.9) as a function of b for different infection fractions f. For b > 1, the biased-testing fraction f_b is larger than the unbiased-testing fraction f; the opposite holds for b < 1. If only susceptible individuals are tested (i.e. b → 0), the bias function and the expected observed positive testing fraction approach zero. For a complete bias towards infected individuals (i.e. b → ∞), the bias function approaches 1/f and f_b → 1.
Figure 1. Illustration of the bias function. (a) The bias function (equation (3.9)) for three different fractions f of currently (and previously) infected individuals. Grey dashed lines indicate the asymptotic value 1/f. A value b > 1 indicates a testing bias towards currently and/or previously infected individuals, while susceptible and/or non-infectious individuals are preferentially tested for b < 1. Unbiased testing corresponds to b = 1 and f_b = f. (b) The variance of the observed positive fraction exhibits a maximum at an intermediate value of the bias b. (Online version in colour.)
The variance of the Gaussian approximation (3.7) is plotted as a function of b in figure 1b and exhibits a maximum at an intermediate value of the bias.
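Under the binomial sketch above, the variance of the observed positive fraction and the location of its maximum follow directly (values inferred from that sketch rather than quoted from equation (3.7)):

$$ \sigma^2 \;=\; \frac{f_b\,(1-f_b)}{Q}, \qquad \max_b \sigma^2 \;=\; \frac{1}{4Q} \ \ \text{at}\ \ f_b = \tfrac12,\ \text{i.e.}\ b^{*} = \frac{1-f}{f}. $$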
The probabilities derived in equations (3.1) and (3.3) correspond to ‘testing with replacement’. The opposite limit is ‘testing without replacement’: once an individual is tested, they are labelled as such and removed from the pool of test targets, at least within the specified testing period. This concept of sampling with and without replacement commonly arises in the measurement of diversity in ecological settings [24]. Without replacement, and still under conditions of perfect random testing, two slightly different forms of the testing probability arise for the different types of tests (e.g. antibody versus PCR/viral load). For antibody tests that perfectly identify recovered (or deceased) individuals as being previously infected, equation (3.1) is replaced by
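A minimal reconstruction of this without-replacement counterpart, in which antibody tests count both currently infected (I) and removed (R) individuals as positive, is the hypergeometric distribution

$$ P(Q_+ \mid Q) \;=\; \frac{\binom{I+R}{Q_+}\binom{S}{Q - Q_+}}{\binom{N}{Q}} , $$

with S = N − I − R the number of uninfected individuals.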
To incorporate testing bias into the probabilities for testing without replacement, we first consider equation (3.11), in which the product of binomial coefficients can be interpreted as the number of ways of distributing Q_+ positive tests among the infected individuals and Q − Q_+ negative tests among the uninfected individuals. As in the biased-testing formulation of equation (3.3), we interpret the bias as a factor that assigns more weight to tests in the infected (I + R) or uninfected (S) pools.
The above choice of weighting also allows us to explicitly evaluate the denominator (the normalization) in equation (3.12).
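Because the closed form of that normalization is not reproduced above, the following short Python sketch (illustrative, not the authors' code) builds the biased without-replacement distribution numerically by reweighting each hypergeometric counting term with an assumed factor b^{Q_+} and normalizing:

```python
from math import comb

def biased_no_replacement_pmf(Q, n_infected, n_uninfected, b):
    """Distribution of the number of positive tests when Q tests are drawn
    without replacement from n_infected (currently or previously) infected
    and n_uninfected uninfected individuals, with each positive test
    reweighted by the bias factor b (assumed weight b**q_plus)."""
    weights = {}
    for q_plus in range(max(0, Q - n_uninfected), min(Q, n_infected) + 1):
        # number of ways to distribute q_plus positives among the infected
        # pool and Q - q_plus negatives among the uninfected pool
        ways = comb(n_infected, q_plus) * comb(n_uninfected, Q - q_plus)
        weights[q_plus] = (b ** q_plus) * ways
    norm = sum(weights.values())  # plays the role of the denominator in (3.12)
    return {q_plus: w / norm for q_plus, w in weights.items()}

# example: 100 tests, 1000 infected and 9000 uninfected individuals, bias b = 2
pmf = biased_no_replacement_pmf(Q=100, n_infected=1000, n_uninfected=9000, b=2.0)
print(sum(q * p for q, p in pmf.items()))  # mean number of positive tests
```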
(a) Testing errors
The probability distributions that we derived in equation (3.3) and in equations (3.10)–(3.11) assume that testing is error-free, i.e. that the false-negative rate FNR = 0 and the false-positive rate FPR = 0, or equivalently that the true-positive rate 1 − FNR = 1 and the true-negative rate 1 − FPR = 1. To incorporate erroneous testing, we now construct the probability distribution of the number Q̃_+ of ‘apparent’ positives arising from tests that carry non-zero FPRs and FNRs, given that Q_+ positives would be recorded if the tests were perfect. If Q̃_+ apparent positive tests are tallied, some number j of them might have been true positives drawn from the Q_+ perfect-test positives, while the remaining Q̃_+ − j apparent positives might have been erroneously counted as positives drawn from the Q − Q_+ true negatives. The remaining Q_+ − j true positive tests might have been erroneously tallied as false negatives, while the remaining negative tests might have been correctly tallied as true negatives. Assuming non-zero FPR and FNR, we find that the probability distribution of finding Q̃_+ apparent positive tests is
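The displayed distribution is not reproduced here; following the counting argument above, a consistent reconstruction (with j the number of correctly tallied true positives, notation assumed) is the convolution of two binomials,

$$ P(\tilde Q_+ \mid Q_+, Q) \;=\; \sum_{j} \binom{Q_+}{j} (1-\mathrm{FNR})^{j}\,\mathrm{FNR}^{\,Q_+-j}\, \binom{Q-Q_+}{\tilde Q_+ - j} \mathrm{FPR}^{\,\tilde Q_+ - j}\,(1-\mathrm{FPR})^{\,Q - Q_+ - \tilde Q_+ + j}, $$

where the sum runs over max(0, Q̃_+ − (Q − Q_+)) ≤ j ≤ min(Q_+, Q̃_+).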
Equation (3.20) reveals that the mean number of apparent positive tests is given by the sum of the expected number of correctly identified true positives and the expected number of false positives. Based on the derived expressions for the mean and variance of Q̃_+, we define the random variable f̃_b = Q̃_+/Q as the fraction of observed positive tests under biased and error-prone testing and obtain
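Written out explicitly (a sketch consistent with the statement above, in the notation of table 1):

$$ \mathbb{E}[\tilde Q_+] \;=\; (1-\mathrm{FNR})\,\mathbb{E}[Q_+] \;+\; \mathrm{FPR}\,\bigl(Q - \mathbb{E}[Q_+]\bigr), \qquad \mathbb{E}[\tilde f_b] \;=\; (1-\mathrm{FNR})\,f_b \;+\; \mathrm{FPR}\,(1-f_b), $$

and, for large Q, the distribution of f̃_b is well approximated by a Gaussian with this mean (figure 2).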

Figure 2. Distribution of apparently positive tests. Plots of the distribution of apparent positives for fixed values of the remaining parameters and different (a) underlying infected fractions, (b) testing biases b, (c) FNRs and (d) FPRs. The Gaussian approximation (solid light blue lines) of equation (3.22) provides an accurate approximation of the full distribution. Dashed black lines correspond to distributions with replacement and the remaining thicker solid coloured lines correspond to those without replacement. (Online version in colour.)
(b) Temporal variations and test heterogeneity
Up to now, we have discussed single viral-load and antibody tests (with and without replacement) but have not considered temporal variations in the number of tests and the number of returned positives, or the heterogeneity in FPR and FNR associated with different classes (types, manufacturing batches, etc.) of assays. To make our model applicable to empirical time-varying testing data, we use S(t), I(t) and R(t) to denote the numbers of susceptible, infected and removed individuals at time t (or in successive time windows labelled by t), respectively. If multiple test classes are present, we also include an additional test-class index in all relevant model parameters. The testing bias and the total number of tests may both be test-class- and time-dependent. Test specificity and sensitivity mainly depend on the assay type and not on time; we thus take the FPR and FNR of each test class to be constant in time.
4. Inference of prevalence and application to COVID-19 data
One often wishes to infer the evolution of the underlying infected fraction and the testing bias, or of the numbers of infected and removed individuals, over a given time period from the reported numbers of tests and apparent positives together with the FPRs and FNRs of the assays used. Since the bias b is difficult to independently ascertain, one may only be able to infer the bias-modified prevalence f_b. For a single test result Q̃_+ (or f̃_b = Q̃_+/Q), we can generate the maximum likelihood estimate (MLE) of the bias-modified prevalence by setting the measured value of f̃_b equal to its expected value to find
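The displayed expression (4.1) is not reproduced above. A minimal sketch of the implied two-step correction, assuming the expectation relation f̃_b = (1 − FNR) f_b + FPR (1 − f_b) and the bias relation f_b = b f/(1 − f + b f) reconstructed earlier (function names and the numerical example are illustrative):

```python
def correct_for_errors(f_obs, fpr, fnr):
    """MLE of the bias-modified prevalence f_b from the observed positive
    fraction f_obs, assuming f_obs = (1 - FNR) * f_b + FPR * (1 - f_b)."""
    f_b = (f_obs - fpr) / (1.0 - fpr - fnr)
    return min(max(f_b, 0.0), 1.0)  # clip to the physically allowed range

def correct_for_bias(f_b, b):
    """Invert the assumed bias relation f_b = b*f / (1 - f + b*f) to recover
    the underlying infected fraction f (requires an external estimate of b)."""
    return f_b / (b + f_b * (1.0 - b))

# illustrative numbers: observed positivity 8%, FPR = 1%, FNR = 20%, bias b = 2
f_b_hat = correct_for_errors(0.08, fpr=0.01, fnr=0.20)
f_hat = correct_for_bias(f_b_hat, b=2.0)
print(f"error-corrected: {f_b_hat:.4f}, bias- and error-corrected: {f_hat:.4f}")
```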
As an example, we collected US testing data [25] from March 2020 to March 2021. Figure 3a shows the daily number of observed positive tests (red bars) and the corresponding total daily number of tests (blue bars). The 7-day average of the observed positive testing rate is indicated by the solid black line. The first drop in the observed positive testing rate in March 2020 was associated with the initially very limited SARS-CoV-2 testing infrastructure, followed by a ramping up of testing capacity. After new cases surged at the end of March and in April 2020, different types of stay-at-home orders and distancing policies with different durations were implemented across the USA [26]. In June and July 2020, reopening plans were halted and reversed by various jurisdictions to limit the resurgence of COVID-19 [27].
Figure 3. Observed and corrected proportions of positive tests in the USA. (a) The solid black line represents the 7-day average of the proportion of positive tests in the United States. Blue and red bars show the corresponding total numbers of daily tests and apparent positive tests, respectively. (b) The corrected proportion of positive tests, found by inverting equation (4.1), for different FPR, FNR and bias combinations. (Online version in colour.)
In figure 3b, we show the corrected proportion of positive tests, found by numerically inverting equation (4.1) for different FPR, FNR and bias combinations. We observe that even a small FPR shifts the corrected values towards zero, an effect that is most pronounced when the observed positive testing rate is low. Reducing the FNR has only a small effect on the corrected proportion of positive tests (solid black and dashed lines in figure 3b). Accounting for a positive testing bias b > 1 (i.e. preferential testing of infected and symptomatic individuals), however, markedly changes the inferred prevalence (dashed-dotted black line in figure 3b). Since the 7-day average of the number of daily tests in the USA is large (figure 3a), the variance terms are very small compared with the corrected values themselves.
5. Inference of bias b
One way to estimate the testing bias is to identify a smaller subset of control tests within a jurisdiction that is believed to be unbiased and to compare it with the reported fraction of positive tests obtained via standard (potentially biased) testing procedures. Given this scenario, we can derive a rather complete methodology for estimating the bias by formally comparing the statistics of two sets of tests applied to the same population. The first set of control tests, with known testing parameters, is assumed to be unbiased (b = 1), while the second set is taken with known error rates but unknown testing bias b. For example, the control set may consist of a smaller number of tests that are administered completely randomly, while the second set may be the scaled-up set of standard tests with unknown bias. Since both sets of tests are applied roughly at the same time to the same overall population, the underlying positive fraction is assumed to be the same in both test sets. We can then use Bayes’ rule on the first, unbiased test set to infer
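As a sketch of this step (the superscript c for the control set is assumed notation): with Q_+^c positives among the Q^c unbiased control tests,

$$ P\bigl(f \mid Q^{c}_{+}, Q^{c}\bigr) \;\propto\; P\bigl(Q^{c}_{+} \mid Q^{c}, f, b=1\bigr)\, P(f), $$

where P(f) is the prior on the underlying positive fraction; the biased main test set is then analysed conditional on this posterior.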
Of course, a simpler MLE approach can also be applied to the data by first inferring the most likely value of the underlying positive fraction from the control test set. We can use the number of positive tests in the control sample to define the control positive fraction, maximize the corresponding likelihood with respect to the underlying positive fraction, and use this value in the likelihood of the larger, biased test set. Maximizing the latter with respect to b then gives the MLE estimate b̂. We can use the random and unbiased sampling results obtained in the German jurisdiction of Gangelt, North Rhine–Westphalia [18]. A total of 600 adults with different last names were randomly selected from a population of 12 597 and asked to participate in the study together with their household members. The resulting study population underwent serological and PCR testing between 31 March and 6 April 2020. The specificity- and sensitivity-corrected, unbiased positive test fraction was determined to be 15.53% (95% CI 12.31–18.96%). We thus use this value as an estimate of the true underlying positivity rate. The positive testing rate measured in the larger sample taken across North Rhine–Westphalia between 30 March and 5 April 2020 was about 10% [28]. Assuming that this value is also error-corrected, an estimate of the bias in this main testing set can be found by solving f_b = 0.10 for b with the underlying fraction set to 15.53%. The difference between the unbiased positive testing rate of 15.53% and the biased rate of 10% corresponds to a bias b̂ < 1. This negative bias likely arises because Gangelt was an infection hotspot within the entire North Rhine–Westphalia region, so the control sample was probably not unbiased. For comparison, a higher biased positive testing rate of 20% would lead to an estimated testing bias b̂ > 1.
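A numerical sketch of this bias estimate under the reconstructed relation f_b = b f/(1 − f + b f); the printed values are illustrative consequences of that assumed form, not figures quoted from [18] or [28]:

```python
def estimate_bias(f_unbiased, f_biased):
    """Solve f_biased = b * f / (1 - f + b * f) for b, where f = f_unbiased
    is the prevalence estimated from the unbiased control sample
    (assumed functional form)."""
    return f_biased * (1.0 - f_unbiased) / (f_unbiased * (1.0 - f_biased))

# unbiased Gangelt estimate of 15.53% versus the biased positive testing
# rates of 10% and 20% discussed in the text
print(estimate_bias(0.1553, 0.10))  # b < 1: bias towards uninfected individuals
print(estimate_bias(0.1553, 0.20))  # b > 1: bias towards infected individuals
```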
The total number of deaths on 6 April 2020 amounted to 7. Hence, the corresponding estimate of the IFR, the number of disease-induced deaths divided by the total number of cases at a given time, in this jurisdiction on 6 April 2020 had a 95% CI of 0.29–0.45% [18]. If only a biased estimate of the proportion of positive cases is known, and not the true value, we can use our framework to distinguish between the true and the observed infection fatality ratio
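A reconstruction consistent with the bias relation used above (equation (5.4) itself is not reproduced): assuming the observed case count scales with f_b while the death count does not depend on how tests are allocated,

$$ \mathrm{IFR}_{\mathrm{obs}} \;=\; \mathrm{IFR}\,\frac{f}{f_b} \;=\; \mathrm{IFR}\,\frac{1 - f + b f}{b}, $$

so IFR_obs = IFR for unbiased testing (b = 1), the observed IFR underestimates the true IFR for b > 1 and overestimates it for b < 1.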

Figure 4. Dependence of the observed IFR on the testing bias. The observed infection fatality ratio (equation (5.4)) as a function of the testing bias b, for the example of the German jurisdiction of Gangelt with the underlying positive fraction and IFR set to the values discussed in §5. A value b > 1 indicates a testing bias towards currently and/or previously infected individuals, while susceptible and/or non-infectious individuals are preferentially tested for b < 1. Unbiased testing corresponds to b = 1.
6. Summary and conclusion
Radiological testing methods such as chest computed tomography are used sporadically to identify COVID-19-induced pneumonia in patients with negative tests [29]. However, the overwhelming majority of COVID-19 tests are serological (antibody) tests, rapid antigen tests, ELISA and RT-PCR assays [1]. These tests are designed to output a binary signal: infected or not. The population statistics of this output are affected by testing errors and bias. False-positive and false-negative rates of serological tests are generally smaller than those of rapid antigen tests and RT-PCR tests. However, serological tests are unable to identify early-stage infections since they measure antibody titres, which usually develop from a few days up to a few weeks after infection. In addition to the occurrence of false positives and false negatives (i.e. type I and type II errors), certain demographic groups (e.g. elderly people or those with comorbidities such as heart and lung diseases) may be overrepresented in testing statistics.
To quantify the impact of both type I/II errors and testing bias on reported COVID-19 case and death numbers, we developed a mathematical framework that describes erroneous and biased sampling (both with and without replacement) from a population of susceptible, infected and removed (i.e. recovered or deceased) individuals. We identify a positive testing bias with an overrepresentation of previously or currently infected individuals in the study population. Conversely, a negative testing bias corresponds to an overrepresentation of susceptible and/or non-infectious individuals in the study population. We derived MLEs of the testing-error- and testing-bias-corrected fraction of positive tests. Our methods can also be applied to infer the full distribution of corrected positive testing rates over time and for different types of tests across different jurisdictions.
The mathematical quantity that underlies most of our analysis is the proportion of apparent positive tests. As pointed out in [30], the absolute number of positive tests may not capture the actual growth of an epidemic due to limitations in testing capacity. Still, many jurisdictions report absolute case numbers without specifying the total number of tests or additional information about test type, date of test and duplicate tests [31], rendering interpretation and application to epidemic surveillance challenging. For a reliable picture of COVID-19 case numbers, more complete testing data, including the total number of tests, the number of positive tests, the test type and the date of test, need to be reported and made publicly available at online data repositories. To correct for false positives, false negatives and testing bias in testing statistics (figure 3), it will also be important to further improve estimates of FPRs, FNRs and the testing bias through field studies. In particular, estimating the testing bias requires random-sampling studies similar to that carried out in [18]. Finally, while we have presented our analysis in the context of the COVID-19 pandemic, the general results presented in this paper apply to testing and estimation of the severity of any infectious disease afflicting a population.
Data accessibility
Our source codes are publicly available at https://github.com/lubo93/disease-testing.
Authors' contributions
L.B., M.R.D. and T.C. contributed equally to the study design, data analyses and manuscript writing.
Competing interests
The authors declare no competing interests.
Funding
L.B. acknowledges financial support from the Swiss National Fund (P2EZP2_191888). The authors also acknowledge financial support from the Army Research Office (W911NF-18-1-0345), the NIH (R01HL146552) and the National Science Foundation (DMS-1814364, DMS-1814090).