How a well-adapting immune system remembers
An adaptive agent predicting the future state of an environment must weigh
trust in new observations against prior experiences. In this light, we propose
a view of the adaptive immune system as a dynamic Bayesian machinery that
updates its memory repertoire by balancing evidence from new pathogen
encounters against past experience of infection to predict and prepare for
future threats. This framework links the observed initial rapid increase of the
memory pool early in life followed by a mid-life plateau to the ease of
learning salient features of sparse environments. We also derive a modulated
memory pool update rule in agreement with current vaccine response experiments.
Our results suggest that pathogenic environments are sparse and that memory
repertoires significantly decrease infection costs even with moderate sampling.
The predicted optimal update scheme maps onto commonly considered competitive
dynamics for antigen receptors.
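As a concrete illustration of the balancing act the abstract describes (this is a generic conjugate-Bayesian sketch, not the paper's model), a Beta-Binomial update shows how a weak prior lets new observations dominate early on, while an accumulated strong prior produces the mid-life plateau. All function names and parameter values below are illustrative:

```python
# Hedged sketch: conjugate Beta-Binomial updating as a toy model of
# weighing new pathogen encounters against past experience of infection.

def update_memory(prior_a, prior_b, encounters, total):
    """Posterior Beta(a, b) parameters after observing `encounters`
    matches out of `total` new exposures (conjugate update)."""
    return prior_a + encounters, prior_b + (total - encounters)

def posterior_mean(a, b):
    """Point estimate of the encounter frequency under Beta(a, b)."""
    return a / (a + b)

# Early in life: a weak Beta(1, 1) prior, so new evidence moves the
# estimate strongly (rapid initial growth of the memory pool).
a, b = update_memory(1, 1, encounters=8, total=10)
early = posterior_mean(a, b)   # 9 / 12 = 0.75

# Later in life: a strong prior accumulated from many past infections,
# so the same data barely shifts the estimate (the plateau).
a, b = update_memory(80, 20, encounters=8, total=10)
late = posterior_mean(a, b)    # 88 / 110 = 0.80
```

The qualitative point is that the effective learning rate shrinks as prior pseudo-counts grow, which is one standard way a Bayesian machinery trades trust in new observations against prior experience.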
Asymptotic Bayes-optimality under sparsity of some multiple testing procedures
Within a Bayesian decision theoretic framework we investigate some asymptotic
optimality properties of a large class of multiple testing rules. A parametric
setup is considered, in which observations come from a normal scale mixture
model and the total loss is assumed to be the sum of losses for individual
tests. Our model can be used for testing point null hypotheses, as well as to
distinguish large signals from a multitude of very small effects. A rule is
defined to be asymptotically Bayes optimal under sparsity (ABOS), if within our
chosen asymptotic framework the ratio of its Bayes risk and that of the Bayes
oracle (a rule which minimizes the Bayes risk) converges to one. Our main
interest is in the asymptotic scheme where the proportion p of "true"
alternatives converges to zero. We fully characterize the class of fixed
threshold multiple testing rules which are ABOS, and hence derive conditions
for the asymptotic optimality of rules controlling the Bayesian False Discovery
Rate (BFDR). We finally provide conditions under which the popular
Benjamini-Hochberg (BH) and Bonferroni procedures are ABOS and show that for a
wide class of sparsity levels, the threshold of the former can be approximated
by a nonrandom threshold.

Comment: Published at http://dx.doi.org/10.1214/10-AOS869 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
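The two classical procedures the abstract analyzes can be sketched directly (a minimal illustration, not the paper's asymptotic analysis; the p-values in the example are made up):

```python
# Hedged sketch of the Benjamini-Hochberg (BH) step-up rule and the
# Bonferroni rule for multiple testing at level alpha.

def benjamini_hochberg(pvals, alpha=0.05):
    """Return indices of hypotheses rejected by the BH step-up rule:
    find the largest k with p_(k) <= (k / m) * alpha and reject the
    k smallest p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    return sorted(order[:k_max])

def bonferroni(pvals, alpha=0.05):
    """Reject every hypothesis whose p-value is below alpha / m."""
    m = len(pvals)
    return [i for i, p in enumerate(pvals) if p <= alpha / m]

# Illustrative p-values (m = 8 tests, a sparse signal regime):
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.60, 0.074, 0.205]
print(benjamini_hochberg(pvals))  # [0, 1]
print(bonferroni(pvals))          # [0]
```

In this toy example BH rejects two hypotheses while Bonferroni rejects one, reflecting BH's less conservative, FDR-controlling threshold; the abstract's result is that under sparsity (proportion p of true alternatives tending to zero), BH's data-dependent threshold can be approximated by a nonrandom one.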