Computational strategies for dissecting the high-dimensional complexity of adaptive immune repertoires
The adaptive immune system recognizes antigens via an immense array of
antigen-binding antibodies and T-cell receptors, the immune repertoire. The
interrogation of immune repertoires is of high relevance for understanding the
adaptive immune response in disease and infection (e.g., autoimmunity, cancer,
HIV). Adaptive immune receptor repertoire sequencing (AIRR-seq) has driven the
quantitative and molecular-level profiling of immune repertoires, thereby
revealing the high-dimensional complexity of the immune receptor sequence
landscape. Several methods for the computational and statistical analysis of
large-scale AIRR-seq data have been developed to resolve immune repertoire
complexity in order to understand the dynamics of adaptive immunity. Here, we
review the current research on (i) diversity, (ii) clustering and network,
(iii) phylogenetic and (iv) machine learning methods applied to dissect,
quantify and compare the architecture, evolution, and specificity of immune
repertoires. We summarize outstanding questions in computational immunology and
propose future directions for systems immunology towards coupling AIRR-seq with
the computational discovery of immunotherapeutics, vaccines, and
immunodiagnostics.

Comment: 27 pages, 2 figures
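Diversity analysis of a repertoire typically starts from the clone-size distribution. A minimal sketch of one common index, Shannon diversity, in Python — the receptor sequences and counts below are invented for illustration, and Shannon entropy is only one of the diversity measures such reviews cover:

```python
import math
from collections import Counter

def shannon_diversity(clone_counts):
    """Shannon entropy (in nats) of a clonal frequency distribution."""
    total = sum(clone_counts)
    freqs = [c / total for c in clone_counts]
    return -sum(p * math.log(p) for p in freqs if p > 0)

# Hypothetical repertoire sample: CDR3-like strings, repeated per clone size
clones = Counter(["CARDY", "CARDY", "CASSL", "CARGG", "CARDY", "CASSL"])
h = shannon_diversity(list(clones.values()))
```

Higher values indicate a more even distribution of clone sizes; a repertoire dominated by a single expanded clone would score near zero.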
Activity Recognition and Prediction in Real Homes
In this paper, we present work in progress on activity recognition and
prediction in real homes using either binary sensor data or depth video data.
We present our field trial and set-up for collecting and storing the data, our
methods, and our current results. We compare the accuracy of predicting the
next binary sensor event using probabilistic methods and Long Short-Term Memory
(LSTM) networks, incorporate time information to improve prediction accuracy,
and predict both the next sensor event and its mean time of occurrence
using a single LSTM model. We investigate transfer learning between apartments and
show that it is possible to pre-train the model with data from other apartments
and achieve good accuracy in a new apartment straight away. In addition, we
present preliminary results from activity recognition using low-resolution
depth video data from seven apartments, and classify four activities - no
movement, standing up, sitting down, and TV interaction - by using a relatively
simple processing method where we apply an Infinite Impulse Response (IIR)
filter to extract movements from the frames prior to feeding them to a
convolutional LSTM network for the classification.

Comment: 12 pages, Symposium of the Norwegian AI Society NAIS 201
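The movement-extraction step described above can be sketched with a first-order IIR filter that maintains a running background estimate and flags deviations from it. This is a minimal illustration, not the paper's implementation; the smoothing constant `alpha` and the toy 4x4 "depth frames" are assumptions:

```python
import numpy as np

def extract_movement(frames, alpha=0.05):
    """First-order IIR background model: movement = |frame - running background|.

    alpha is an illustrative smoothing constant, not a value from the paper.
    """
    background = frames[0].astype(float)
    movement = []
    for f in frames:
        # IIR update: background slowly tracks the scene
        background = (1 - alpha) * background + alpha * f
        movement.append(np.abs(f - background))
    return movement

# Toy depth sequence: a static scene with a brief change in frame 2
frames = [np.zeros((4, 4)) for _ in range(4)]
frames[2][1:3, 1:3] = 10.0
mv = extract_movement(frames)
```

Because the background adapts slowly, a sudden change produces a large movement response that decays once the scene is static again; the resulting movement maps would then be fed to a classifier such as a convolutional LSTM.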
ADAM: Analysis of Discrete Models of Biological Systems Using Computer Algebra
Background: Many biological systems are modeled qualitatively with discrete
models, such as probabilistic Boolean networks, logical models, Petri nets, and
agent-based models, with the goal to gain a better understanding of the system.
The computational complexity to analyze the complete dynamics of these models
grows exponentially in the number of variables, which impedes working with
complex models. Although there exist sophisticated algorithms to determine the
dynamics of discrete models, their implementations usually require
labor-intensive formatting of the model formulation, and they are oftentimes
not accessible to users without programming skills. Efficient analysis methods
are needed that are accessible to modelers and easy to use. Method: By
converting discrete models into algebraic models, tools from computational
algebra can be used to analyze their dynamics. Specifically, we propose a
method to identify attractors of a discrete model that is equivalent to solving
a system of polynomial equations, a long-studied problem in computer algebra.
Results: We present a method for efficiently identifying attractors, together
with the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which
provides this and other analysis methods for discrete models. ADAM converts
several discrete model
types automatically into polynomial dynamical systems and analyzes their
dynamics using tools from computer algebra. Based on extensive experimentation
with both discrete models arising in systems biology and randomly generated
networks, we found that the algebraic algorithms presented in this manuscript
are fast for systems with the structure exhibited by most biological systems,
namely sparseness (while the number of nodes in a biological network may be
quite large, each node is affected by only a small number of other nodes) and
robustness (a small number of attractors).
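The core idea — recasting attractor finding as solving a polynomial system — can be illustrated on a tiny Boolean network encoded over GF(2), where AND is multiplication, XOR is addition, OR is x + y + xy, and NOT is 1 + x. The three-node network below is invented for illustration, and the enumeration stands in for the computer-algebra solving that a tool like ADAM performs:

```python
from itertools import product

# Toy 3-node Boolean network written as polynomials over GF(2):
# AND -> x*y, OR -> x + y + x*y, NOT -> 1 + x (all arithmetic mod 2).
def f(x):
    x1, x2, x3 = x
    return (
        (x2 * x3) % 2,            # x1' = x2 AND x3
        (x1 + x3 + x1 * x3) % 2,  # x2' = x1 OR x3
        x2,                       # x3' = x2
    )

# Steady states (fixed-point attractors) are the solutions of the
# polynomial system f(x) = x; for a tiny network we can simply enumerate.
fixed_points = [x for x in product((0, 1), repeat=3) if f(x) == x]
```

For realistic networks the state space (2^n states) is far too large to enumerate, which is exactly where Groebner-basis and related computer-algebra methods replace brute force.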
Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines
Recent studies have shown that synaptic unreliability is a robust and
sufficient mechanism for inducing the stochasticity observed in cortex. Here,
we introduce Synaptic Sampling Machines, a class of neural network models that
uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised
learning. Similar to the original formulation of Boltzmann machines, these
models can be viewed as a stochastic counterpart of Hopfield networks, but
where stochasticity is induced by a random mask over the connections. Synaptic
stochasticity plays the dual role of an efficient mechanism for sampling, and a
regularizer during learning akin to DropConnect. A local synaptic plasticity
rule implementing an event-driven form of contrastive divergence enables the
learning of generative models in an on-line fashion. Synaptic sampling machines
perform equally well using discrete-time artificial units (as in Hopfield
networks) or continuous-time leaky integrate-and-fire neurons. The learned
representations are remarkably sparse and robust to reductions in bit precision
and synapse pruning: removal of more than 75% of the weakest connections
followed by cursory re-learning causes a negligible performance loss on
benchmark classification tasks. The spiking neuron-based synaptic sampling
machines outperform existing spike-based unsupervised learners, while
potentially offering substantial advantages in terms of power and complexity,
and are thus promising models for on-line learning in brain-inspired hardware.
A response to “Likelihood ratio as weight of evidence: a closer look” by Lund and Iyer
Recently, Lund and Iyer (L&I) raised an argument regarding the use of likelihood ratios in court. In our view, their argument is based on a lack of understanding of the paradigm. L&I argue that the decision maker should not accept the expert’s likelihood ratio without further consideration. All parties agree on this point. In normal practice, there is often considerable and proper exploration in court of the basis for any probabilistic statement. We conclude that L&I argue against a practice that does not exist and which no one advocates. Further, we conclude that the most informative summary of evidential weight is the likelihood ratio. We state that this is the summary that should be presented to a court in every scientific assessment of evidential weight, with supporting information about how it was constructed and on what it was based.
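The likelihood-ratio framework the abstract defends can be stated in one line: LR = P(evidence | prosecution hypothesis) / P(evidence | defence hypothesis), which updates prior odds to posterior odds by Bayes' rule. A minimal numeric sketch — all probabilities below are invented for illustration:

```python
# Hypothetical weight-of-evidence calculation.
p_e_given_hp = 0.99   # assumed P(evidence | Hp), prosecution hypothesis
p_e_given_hd = 0.001  # assumed P(evidence | Hd), defence hypothesis

lr = p_e_given_hp / p_e_given_hd  # likelihood ratio: 990

# Bayes' rule in odds form: posterior odds = LR * prior odds.
prior_odds = 1 / 1000             # assumed prior odds on Hp
posterior_odds = lr * prior_odds  # 0.99
```

The expert reports the LR (how much the evidence shifts the odds); the prior and posterior odds remain the province of the court, which is the division of labour the abstract's paradigm prescribes.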