Offline to Online Conversion
We consider the problem of converting offline estimators into an online
predictor or estimator with small extra regret. Formally this is the problem of
merging a collection of probability measures over strings of length 1,2,3,...
into a single probability measure over infinite sequences. We describe various
approaches and their pros and cons on various examples. As a side result we
give an elementary, non-heuristic, purely combinatorial derivation of Turing's
famous estimator. Our main technical contribution is to determine the
computational complexity of online estimators with good guarantees in general.
Comment: 20 LaTeX pages
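"Turing's famous estimator" above is commonly known as the Good–Turing estimator, which assigns total probability mass N1/N to as-yet-unseen outcomes, where N1 is the number of species observed exactly once and N is the sample size. A minimal sketch of that standard formulation (an illustration, not the paper's derivation):

```python
from collections import Counter

def good_turing_unseen_mass(sample):
    """Good-Turing estimate of the total probability of unseen species:
    the fraction of observations that are singletons, N1 / N."""
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(sample)

# Example: 'a' seen 3 times, 'b' twice, 'c' and 'd' once each,
# so N1 = 2 singletons out of N = 7 observations.
mass = good_turing_unseen_mass(list("aaabbcd"))  # 2/7
```

The paper's contribution is a combinatorial derivation of this estimator; the snippet only shows what the estimator computes.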
On Offline Evaluation of Vision-based Driving Models
Autonomous driving models should ideally be evaluated by deploying them on a
fleet of physical vehicles in the real world. Unfortunately, this approach is
not practical for the vast majority of researchers. An attractive alternative
is to evaluate models offline, on a pre-collected validation dataset with
ground truth annotation. In this paper, we investigate the relation between
various online and offline metrics for evaluation of autonomous driving models.
We find that offline prediction error is not necessarily correlated with
driving quality, and two models with identical prediction error can differ
dramatically in their driving performance. We show that the correlation of
offline evaluation with driving quality can be significantly improved by
selecting an appropriate validation dataset and suitable offline metrics. The
supplementary video can be viewed at
https://www.youtube.com/watch?v=P8K8Z-iF0cY
Comment: Published at the ECCV 2018 conference
Dual enhancement mechanisms for overnight motor memory consolidation
Our brains are constantly processing past events [1]. These offline processes consolidate memories, leading in the case of motor skill memories to an enhancement in performance between training sessions. A similar magnitude of enhancement develops over a night of sleep following an implicit task, in which a sequence of movements is acquired unintentionally, or following an explicit task, in which the same sequence is acquired intentionally [2]. What remains poorly understood, however, is whether these similar offline improvements are supported by similar circuits, or through distinct circuits. We set out to distinguish between these possibilities by applying transcranial magnetic stimulation over the primary motor cortex (M1) or the inferior parietal lobule (IPL) immediately after learning in either the explicit or implicit task. These brain areas have both been implicated in encoding aspects of a motor sequence and subsequently supporting offline improvements over sleep [3-5]. Here we show that offline improvements following the explicit task are dependent on a circuit that includes M1 but not IPL. In contrast, offline improvements following the implicit task are dependent on a circuit that includes IPL but not M1. Our work establishes the critical contribution made by M1 and IPL circuits to offline memory processing, and reveals that distinct circuits support similar offline improvements.
Estimating the efficiency turn-on curve for a constant-threshold trigger without a calibration dataset
Many particle physics experiments use constant-threshold triggers, where the
threshold is applied to an online estimator that can be calculated quickly by
the trigger module. Offline data analysis then calculates a more precise
offline estimator for the same quantity, for example the event energy. The
efficiency curve is a step function in the online estimator, but not in the
offline estimator.
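The smearing of a sharp online threshold into a gradual offline turn-on can be illustrated with a toy simulation (all numbers and distributions below are illustrative assumptions, not from the paper): each event carries an offline estimate, the online estimator adds resolution noise, and the trigger fires when the online value exceeds a fixed threshold. Binning the triggered fraction against the offline estimator then yields an S-shaped curve rather than a step.

```python
import random

random.seed(0)

THRESHOLD = 10.0  # fixed online trigger threshold (arbitrary units)
SMEAR = 1.0       # assumed online resolution relative to offline

# Simulate events: offline estimator uniform in [5, 15];
# online estimator = offline value plus Gaussian smearing.
events = []
for _ in range(100_000):
    offline = random.uniform(5.0, 15.0)
    online = offline + random.gauss(0.0, SMEAR)
    events.append((offline, online >= THRESHOLD))

def efficiency(lo, hi):
    """Triggered fraction among events whose offline estimator lies in [lo, hi)."""
    sel = [fired for off, fired in events if lo <= off < hi]
    return sum(sel) / len(sel)

low = efficiency(6.0, 7.0)     # well below threshold: efficiency near 0
mid = efficiency(9.5, 10.5)    # around threshold: efficiency near 0.5
high = efficiency(13.0, 14.0)  # well above threshold: efficiency near 1
```

The step function in the online variable thus appears, in the offline variable, as a turn-on curve whose width reflects the online resolution.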
One typically obtains the shape of the efficiency curve in the offline
estimator by way of a calibration dataset, where the true rate of events at
each value of the offline estimator is measured once and compared to the rate
observed in the physics dataset. For triggers with a fixed threshold condition,
it is sometimes possible to bootstrap the trigger efficiency curve without use
of a calibration dataset. This is useful to verify stability of a calibration
over time when calibration data cannot be taken often enough. It also makes it
possible to use datasets for which no calibration is available. This paper
describes the method and the conditions that must be met for it to be
applicable.