90 research outputs found
Prevalence Estimation and Optimal Classification Methods to Account for Time Dependence in Antibody Levels
Serology testing can identify past infection by quantifying the immune
response of an infected individual, providing important public health guidance.
Individual immune responses are time-dependent, which is reflected in antibody
measurements. Moreover, the probability of obtaining a particular measurement
changes with prevalence as the epidemic progresses. Accounting for these
personal and population-level effects, we develop a mathematical model that
suggests a natural adaptive scheme for estimating prevalence as a function of
time. We then combine the estimated prevalence with optimal decision theory to
develop a time-dependent probabilistic classification scheme that minimizes
error. We validate this analysis by using a combination of real-world and
synthetic SARS-CoV-2 data and discuss the type of longitudinal studies needed
to execute this scheme in real-world settings.
Comment: 29 pages, 11 figures
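The prevalence-weighted decision rule described above can be sketched as follows. This is a minimal illustration, not the paper's model: the Gaussian densities, the decay rate, and the constant placeholder prevalence function are all hypothetical choices made here for concreteness.

```python
import numpy as np

def optimal_classify(x, t, prevalence, p_pos, p_neg):
    """Bayes-optimal rule: label a sample positive when the prevalence-weighted
    positive-class density exceeds the negative-class one (minimizes expected
    classification error)."""
    q = prevalence(t)
    return q * p_pos(x, t) > (1 - q) * p_neg(x)

# Hypothetical class-conditional densities: the mean antibody level of
# positives wanes with time since infection; negatives are stationary.
def p_pos(x, t, decay=0.1):
    mu = 3.0 * np.exp(-decay * t)  # waning mean antibody level
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

def p_neg(x):
    return np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)

prevalence = lambda t: 0.2  # placeholder for the estimated q(t)

# A high antibody measurement shortly after the epidemic starts
print(optimal_classify(2.5, 0.0, prevalence, p_pos, p_neg))  # True
print(optimal_classify(0.0, 0.0, prevalence, p_pos, p_neg))  # False
```

Because the positive-class density depends on `t`, the decision boundary shifts over time, which is the effect the adaptive scheme is designed to track.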
Minimal Assumptions for Optimal Serology Classification: Theory and Implications for Multidimensional Settings and Impure Training Data
Minimizing error in prevalence estimates and diagnostic classifiers remains a
challenging task in serology. In theory, these problems can be reduced to
modeling class-conditional probability densities (PDFs) of measurement
outcomes, which control all downstream analyses. However, this task quickly
succumbs to the curse of dimensionality, even for assay outputs with only a few
dimensions (e.g. target antigens). To address this problem, we propose a
technique that uses empirical training data to classify samples and estimate
prevalence in arbitrary dimension without direct access to the conditional
PDFs. We motivate this method via a lemma that relates relative conditional
probabilities to minimum-error classification boundaries. This leads us to
formulate an optimization problem that: (i) embeds the data in a parameterized,
curved space; (ii) classifies samples based on their position relative to a
coordinate axis; and (iii) subsequently optimizes the space by minimizing the
empirical classification error of pure training data, for which the classes are
known. Interestingly, the solution to this problem requires use of a
homotopy-type method to stabilize the optimization. We then extend the analysis
to the case of impure training data, for which the classes are unknown. We find
that two impure datasets suffice for both prevalence estimation and
classification, provided they satisfy a linear independence property. Lastly,
we discuss how our analysis unifies discriminative and generative learning
techniques in a common framework based on ideas from set and measure theory.
Throughout, we validate our methods in the context of synthetic data and a
research-use SARS-CoV-2 enzyme-linked immunosorbent assay (ELISA).
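Steps (i)-(iii) of the optimization can be sketched in a toy form: parameterize a family of coordinate systems, classify each point by its position relative to one coordinate axis, and choose the parameters minimizing empirical error on labeled ("pure") data. Everything here is a simplification invented for illustration: the "curved space" is reduced to a planar rotation, the synthetic two-antigen data are arbitrary, and a coarse grid search stands in for the paper's homotopy-stabilized optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class 2D data standing in for a two-antigen assay.
neg = rng.normal([0.0, 0.0], 0.5, size=(200, 2))
pos = rng.normal([1.5, 1.0], 0.5, size=(200, 2))
X = np.vstack([neg, pos])
y = np.concatenate([np.zeros(200), np.ones(200)])  # known ("pure") labels

def empirical_error(theta, b):
    """(i) re-coordinatize the data by a rotation theta (a toy stand-in for a
    parameterized curved space); (ii) classify by which side of the offset
    first coordinate axis each point falls on; return the training error."""
    c, s = np.cos(theta), np.sin(theta)
    first_coord = X @ np.array([c, s])  # position along the rotated axis
    pred = (first_coord > b).astype(float)
    return np.mean(pred != y)

# (iii) minimize the empirical error over the space's parameters; a coarse
# grid search replaces the homotopy-type method used in the paper.
thetas = np.linspace(0, np.pi, 90)
bs = np.linspace(-1.0, 3.0, 80)
best_err, best_t, best_b = min(
    (empirical_error(t, b), t, b) for t in thetas for b in bs
)
print(f"training error {best_err:.3f} at theta={best_t:.2f}, b={best_b:.2f}")
```

Since the two classes are well separated here, the optimized axis achieves a small training error; the lemma cited in the abstract is what guarantees that minimizing this empirical error targets the minimum-error boundary.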
- …