Inclusive Flavour Tagging Algorithm
Identifying the production flavour of neutral mesons is one of the most
important components needed in the study of time-dependent CP violation. The
harsh environment of the Large Hadron Collider makes it particularly hard to
succeed in this task. We present an inclusive flavour-tagging algorithm as an
upgrade of the algorithms currently used by the LHCb experiment. Specifically,
a probabilistic model which efficiently combines information from reconstructed
vertices and tracks using machine learning is proposed. The algorithm does not
use information about the underlying physics process. It reduces the dependence on
the performance of lower-level identification capabilities and thus increases the
overall performance. The proposed inclusive flavour-tagging algorithm is
applicable to tag the flavour of mesons in any proton-proton experiment.
Comment: 5 pages, 5 figures, 17th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT-2016)
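The event-level combination step that flavour taggers typically perform can be sketched as follows. This is a hypothetical illustration assuming independent per-track probabilities combined by summing log-odds; the paper's actual model is learned from reconstructed vertices and tracks, so the function below is not its method:

```python
import numpy as np

# Hypothetical sketch: combine per-track flavour probabilities into one
# event-level tag probability, assuming the tracks are independent.
# This generic log-odds combination is NOT the paper's learned model.
def combine(track_probs):
    """track_probs: per-track probabilities that the meson was produced in
    one flavour state (rather than its conjugate). Returns the combined
    event-level probability for that flavour."""
    p = np.asarray(track_probs, dtype=float)
    log_odds = np.sum(np.log(p / (1.0 - p)))   # sum of per-track log-odds
    return 1.0 / (1.0 + np.exp(-log_odds))    # back to a probability

p = combine([0.6, 0.7, 0.45])  # two tracks lean one way, one the other
```

Two mildly confident tracks outweigh one mildly opposing track, so the combined probability ends up above each individual input on the majority side.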
Bayesian Dark Knowledge
We consider the problem of Bayesian parameter estimation for deep neural
networks, which is important in problem settings where we may have little data,
and/or where we need accurate posterior predictive densities, e.g., for
applications involving bandits or active learning. One simple approach to this
is to use online Monte Carlo methods, such as SGLD (stochastic gradient
Langevin dynamics). Unfortunately, such a method needs to store many copies of
the parameters (which wastes memory), and needs to make predictions using many
versions of the model (which wastes time).
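The SGLD update itself is simple to state: take a stochastic-gradient step on the log posterior, rescaled to the full data set, and inject Gaussian noise whose variance matches the step size. The toy below (a Gaussian-mean posterior, not the paper's deep-network setting) is an illustrative sketch of that update rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: posterior over the mean of a Gaussian with known variance.
N, sigma, mu0, sigma0 = 1000, 1.0, 0.0, 10.0
data = rng.normal(2.0, sigma, size=N)

def grad_log_prior(theta):
    return -(theta - mu0) / sigma0**2

def grad_log_lik(theta, batch):
    return np.sum(batch - theta) / sigma**2

theta, eps, batch_size = 0.0, 1e-4, 100
samples = []
for t in range(5000):
    batch = rng.choice(data, size=batch_size, replace=False)
    # Minibatch gradient of the log posterior, rescaled to the full data set.
    grad = grad_log_prior(theta) + (N / batch_size) * grad_log_lik(theta, batch)
    # Langevin update: half-step-size gradient ascent plus N(0, eps) noise.
    theta = theta + 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps))
    if t > 1000:              # discard burn-in
        samples.append(theta)

post_mean = np.mean(samples)  # Monte Carlo estimate of the posterior mean
```

With a weak prior, the posterior mean is essentially the sample mean of the data, and the retained SGLD samples concentrate around it. Note the memory cost the abstract objects to: making predictions requires keeping all the retained samples.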
We describe a method for "distilling" a Monte Carlo approximation to the
posterior predictive density into a more compact form, namely a single deep
neural network. We compare to two very recent approaches to Bayesian neural
networks, namely an approach based on expectation propagation [Hernandez-Lobato
and Adams, 2015] and an approach based on variational Bayes [Blundell et al.,
2015]. Our method performs better than both of these, is much simpler to
implement, and uses less computation at test time.
Comment: final version submitted to NIPS 2015
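The distillation idea can be sketched on a toy logistic-regression problem: a set of posterior weight samples stands in for the Monte Carlo teacher, and a single compact model is fit to the averaged predictive probabilities. This is an illustrative reconstruction of the general technique, not the paper's network architecture or training procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-D binary classification data.
X = rng.normal(size=(500, 2))
w_true = np.array([1.5, -2.0])
y = (X @ w_true + rng.normal(0, 1, 500) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Teacher": Monte Carlo posterior samples of the weights (stand-ins drawn
# around w_true here; in the paper they would come from SGLD). The posterior
# predictive is the average of the per-sample predictions.
w_samples = w_true + 0.3 * rng.normal(size=(50, 2))
soft_targets = sigmoid(X @ w_samples.T).mean(axis=1)

# "Student": one compact model distilled onto the soft targets by
# minimising cross-entropy against them with plain gradient descent.
w_student = np.zeros(2)
for _ in range(2000):
    p = sigmoid(X @ w_student)
    grad = X.T @ (p - soft_targets) / len(X)
    w_student -= 0.5 * grad

student_pred = sigmoid(X @ w_student)
```

At test time only `w_student` is needed: one forward pass replaces averaging over all fifty teacher samples, which is exactly the memory and time saving the abstract claims.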
Approaching Utopia: Strong Truthfulness and Externality-Resistant Mechanisms
We introduce and study strongly truthful mechanisms and their applications.
We use strongly truthful mechanisms as a tool for implementation in undominated
strategies for several problems, including the design of externality-resistant
auctions and a variant of multi-dimensional scheduling.
A categorical characterization of relative entropy on standard Borel spaces
We give a categorical treatment, in the spirit of Baez and Fritz, of relative
entropy for probability distributions defined on standard Borel spaces. We
define a category suitable for reasoning about statistical inference on
standard Borel spaces. We define relative entropy as a functor into Lawvere's
category and we show convexity, lower semicontinuity and uniqueness.
Comment: 16 pages
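For reference, the quantity being characterized is the standard relative entropy (Kullback-Leibler divergence), which for probability measures $\mu \ll \nu$ on a standard Borel space reads

```latex
D(\mu \,\|\, \nu) \;=\; \int \log \frac{d\mu}{d\nu} \, d\mu ,
```

with $D(\mu \,\|\, \nu) = +\infty$ when $\mu$ is not absolutely continuous with respect to $\nu$. Convexity and lower semicontinuity, the properties shown in the paper, are classical facts about this functional.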
The design and implementation of a visual analytics task to support experimental research on human reasoning with uncertain knowledge
This research project involved designing and implementing a web-based application to support research using visual analytics, that is, interactive visualizations that support human cognition. The interactive visualization was motivated by the problem that humans often express overconfidence in both judgments and predictions based on uncertain knowledge. It presents experimental participants with a series of binary (yes/no, true/false, etc.) general-knowledge or prediction questions and requires participants both to answer each question and to provide a probability or confidence estimate between 50% and 100%. The output of the software is a quantitative measure of human performance in terms of both accuracy and latency. This web-based application, which can also be used in stand-alone (non-networked) mode, is expected to pave the way for a set of additional future research projects involving experiments with human participants, with the eventual goal of producing interface-design approaches and guidelines for eliciting unbiased information from knowledgeable people when either their subjective knowledge, or the judgment or prediction task itself, is characterized by uncertainty.
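The abstract does not specify which accuracy measure the software computes; one standard choice for scoring confidence-weighted binary answers is the Brier score, sketched below as a hypothetical illustration:

```python
# Hypothetical illustration: the Brier score is one standard accuracy
# measure for probability estimates on binary questions. The abstract does
# not say which measure the application actually implements.
def brier_score(confidences, correct):
    """confidences: reported probabilities in [0.5, 1.0] that the chosen
    answer is right; correct: 1 if the chosen answer was right, else 0.
    Lower is better; 0.25 is the score of always answering at 50%."""
    return sum((c - o) ** 2 for c, o in zip(confidences, correct)) / len(correct)

score = brier_score([0.9, 0.6, 0.8], [1, 0, 1])
```

The score penalizes overconfident wrong answers most heavily, which makes it a natural fit for measuring the overconfidence effect the project targets.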
Comparing in-person and online modes of expert elicitation
Expert elicitation, a method of developing probability distributions over unknown parameters, traditionally involves in-person interviews by a trained analyst. There is growing interest in using the internet to enable participation of larger, more distributed groups of experts. However, analysts have questioned the quality of judgements elicited online rather than in person. We systematically compare online and in-person elicitation modes, finding no significant difference between them across multiple measures: the two modes are similar in accuracy, uncertainty ranges, number of surprises, fatigue, and the substance of qualitative comments. These findings have an important caveat: many elicitation questions were subject to problems in online administration that made it impossible to compare to in-person results. We conclude that, although online elicitations represent a less resource-intensive option for large expert elicitations, they may require a higher level of testing and quality control, since there is no analyst to catch errors or clarify small misunderstandings.
Truthful Linear Regression
We consider the problem of fitting a linear model to data held by individuals
who are concerned about their privacy. Incentivizing most players to truthfully
report their data to the analyst constrains our design to mechanisms that
provide a privacy guarantee to the participants; we use differential privacy to
model individuals' privacy losses. This immediately poses a problem, as
differentially private computation of a linear model necessarily produces a
biased estimation, and existing approaches to design mechanisms to elicit data
from privacy-sensitive individuals do not generalize well to biased estimators.
We overcome this challenge through an appropriate design of the computation and
payment scheme.
Comment: To appear in Proceedings of the 28th Annual Conference on Learning Theory (COLT 2015)
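The bias issue the abstract refers to can be seen in the most basic differentially private release of a regression estimate: output perturbation, where noise calibrated to a sensitivity bound is added to the fitted coefficients. The sketch below is a generic illustration under an assumed sensitivity bound, not the paper's mechanism or payment scheme:

```python
import numpy as np

rng = np.random.default_rng(42)

# Generic output-perturbation sketch: fit OLS, then add Laplace noise
# calibrated to an ASSUMED per-individual sensitivity bound. This is not
# the paper's mechanism; it only illustrates that the privately released
# estimator is no longer the exact least-squares solution.
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -0.5, 2.0]) + rng.normal(0, 0.1, 200)

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

epsilon = 1.0       # privacy budget
sensitivity = 0.5   # assumed L1 sensitivity of beta_ols to one individual's
                    # data (a real analysis needs bounded/clipped data)
noise = rng.laplace(0.0, sensitivity / epsilon, size=beta_ols.shape)
beta_private = beta_ols + noise
```

The gap between `beta_private` and `beta_ols` is exactly the distortion that breaks existing elicitation mechanisms, which assume the analyst computes an unbiased estimator from the reported data.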
Robust Semantic Segmentation: Strong Adversarial Attacks and Fast Training of Robust Models
While a large amount of work has focused on designing adversarial attacks
against image classifiers, only a few methods exist to attack semantic
segmentation models. We show that attacking segmentation models presents
task-specific challenges, for which we propose novel solutions. Our final
evaluation protocol outperforms existing methods and shows that they can
overestimate the robustness of the models. Additionally, adversarial
training, the most successful approach for obtaining robust image classifiers,
has so far not been successfully applied to semantic segmentation. We argue that this is
because the task to be learned is more challenging, and requires significantly
higher computational effort than for image classification. As a remedy, we show
that by taking advantage of recent advances in robust ImageNet classifiers, one
can train adversarially robust segmentation models at limited computational
cost by fine-tuning robust backbones.
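The attack template that segmentation attacks build on is projected gradient descent (PGD) within an L-infinity ball; for segmentation the loss is summed over all pixels of the prediction. The sketch below shows the PGD loop on a toy linear classifier with an analytic gradient; it is a generic illustration, not the paper's proposed attack:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": logits = W @ x. For segmentation the loss would be a sum of
# per-pixel cross-entropies, but the PGD loop is the same.
W = rng.normal(size=(10, 64))
x = rng.normal(size=64)
y = 3  # true label

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_grad(x_adv):
    p = softmax(W @ x_adv)
    p[y] -= 1.0        # d(cross-entropy) / d(logits)
    return W.T @ p     # gradient with respect to the input

eps, alpha = 0.1, 0.02  # L-inf budget and step size
x_adv = x.copy()
for _ in range(20):
    x_adv = x_adv + alpha * np.sign(loss_grad(x_adv))  # ascend the loss
    x_adv = x + np.clip(x_adv - x, -eps, eps)          # project to the ball
```

Each iteration takes a signed gradient step to increase the loss and then projects the perturbation back into the allowed L-infinity ball around the clean input, so the final `x_adv` stays within the budget `eps`.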