Active classification with comparison queries
We study an extension of active learning in which the learning algorithm may
ask the annotator to compare the distances of two examples from the boundary of
their label-class. For example, in a recommendation system application (say for
restaurants), the annotator may be asked whether she liked or disliked a
specific restaurant (a label query), or which of two restaurants she liked
more (a comparison query).
We focus on the class of half spaces, and show that under natural
assumptions, such as large margin or bounded bit-description of the input
examples, it is possible to reveal all the labels of a sample of size $n$ using
approximately $O(\log n)$ queries. This implies an exponential improvement over
classical active learning, where only label queries are allowed. We complement
these results by showing that if any of these assumptions is removed then, in
the worst case, $\Omega(n)$ queries are required.
Our results follow from a new general framework of active learning with
additional queries. We identify a combinatorial dimension, called the
\emph{inference dimension}, that captures the query complexity when each
additional query is determined by $q$ examples (such as comparison queries,
each of which is determined by the two compared examples). Our results for half
spaces follow by bounding the inference dimension in the cases discussed above.
Comment: 23 pages (not including references), 1 figure. The new version
contains a minor fix in the proof of Lemma 4.
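The halfspace result above can be illustrated with a small sketch (this is a simplified one-dimensional-ordering illustration, not the paper's full algorithm; the hidden weights W, B and both oracle functions are invented for the demo): comparison queries let the learner sort the sample by signed margin, after which the labels are monotone along the sorted order, so a binary search with roughly log n label queries reveals every label.

```python
# Illustrative sketch (not the paper's exact algorithm): for a halfspace
# sign(w.x - b), a comparison query "which of x, y has the larger signed
# margin?" lets us sort the sample by w.x using comparisons only; labels
# are then monotone along the sorted order, so a binary search with
# O(log n) label queries reveals every label.
from functools import cmp_to_key
import random

random.seed(0)
W, B = [0.6, -0.8], 0.1        # hidden halfspace, known only to the "annotator"

def margin(x):                  # annotator-side helper, not visible to learner
    return sum(wi * xi for wi, xi in zip(W, x)) - B

label_queries = 0

def label_query(x):             # oracle: +1 / -1 label
    global label_queries
    label_queries += 1
    return 1 if margin(x) >= 0 else -1

def comparison_query(x, y):     # oracle: which point has the larger margin?
    return -1 if margin(x) < margin(y) else 1

sample = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(1000)]

# Sort by signed margin using comparison queries only.
order = sorted(sample, key=cmp_to_key(comparison_query))

# Binary search for the first +1 label: labels read -1...-1, +1...+1 in order.
lo, hi = 0, len(order)
while lo < hi:
    mid = (lo + hi) // 2
    if label_query(order[mid]) == 1:
        hi = mid
    else:
        lo = mid + 1

labels = [-1] * lo + [1] * (len(order) - lo)   # all 1000 labels inferred
print(label_queries)                            # roughly log2(1000), about 10
```

Note that the comparison queries still cost O(n log n), but in this model only label queries are counted as expensive annotation.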
Samplers and Extractors for Unbounded Functions
Blasiok (SODA'18) recently introduced the notion of a subgaussian sampler, defined as an averaging sampler for approximating the mean of functions f from {0,1}^m to the real numbers such that f(U_m) has subgaussian tails, and asked for explicit constructions. In this work, we give the first explicit constructions of subgaussian samplers (and in fact averaging samplers for the broader class of subexponential functions) that match the best known constructions of averaging samplers for [0,1]-bounded functions in the regime of parameters where the approximation error epsilon and failure probability delta are subconstant. Our constructions are established via an extension of the standard notion of randomness extractor (Nisan and Zuckerman, JCSS'96), where the error is measured by an arbitrary divergence rather than total variation distance, and a generalization of Zuckerman's equivalence (Random Struct. Alg.'97) between extractors and samplers. We believe that the framework we develop, and specifically the notion of an extractor for the Kullback-Leibler (KL) divergence, are of independent interest. In particular, KL-extractors are stronger than both standard extractors and subgaussian samplers, but we show that they exist with essentially the same parameters (constructively and non-constructively) as standard extractors.
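To make the estimation task concrete, the following sketch shows one standard way to approximate the mean of an unbounded function with subgaussian tails using independent random samples: a median-of-means estimator. This is only an illustration of the problem an averaging sampler solves; it is not the paper's construction, which is derandomized and extractor-based, and the test function f is invented for the demo.

```python
# Not the paper's extractor-based construction: a simple median-of-means
# estimator, shown only to illustrate the estimation task an averaging
# sampler solves for unbounded (here, subgaussian) functions f on {0,1}^M.
import random
import statistics

random.seed(1)
M = 16                                    # f maps {0,1}^M to the reals

def f(bits):                              # subgaussian test function:
    return sum(bits) - M / 2              # centered binomial, true mean 0

def median_of_means(num_groups, group_size):
    """Estimate E[f(U_M)]: average within each group, take the median."""
    means = []
    for _ in range(num_groups):
        total = 0.0
        for _ in range(group_size):
            total += f([random.randint(0, 1) for _ in range(M)])
        means.append(total / group_size)
    return statistics.median(means)

estimate = median_of_means(num_groups=9, group_size=200)
print(abs(estimate))                      # close to the true mean 0
```

Taking the median over groups is what tames the unbounded tails: a single group average can be thrown off by rare large values of f, but the median of several independent group means concentrates sharply.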
The IceCube Realtime Alert System
High-energy astrophysical neutrinos were detected in 2013, but their origin
is still unknown. Aiming to identify the electromagnetic counterpart of a
rapidly fading source, we have implemented a realtime analysis
framework for the IceCube neutrino observatory. Several analyses selecting
neutrinos of astrophysical origin are now operating in realtime at the detector
site in Antarctica and are producing alerts to the community to enable rapid
follow-up observations. The goal of these observations is to locate the
astrophysical objects responsible for these neutrino signals. This paper
highlights the infrastructure in place both at the South Pole detector site and
at IceCube facilities in the north that has enabled this fast follow-up
program to be developed. Additionally, this paper presents the first realtime
analyses to be activated within this framework, highlights their sensitivities
to astrophysical neutrinos and background event rates, and presents an outlook
for future discoveries.
Comment: 33 pages, 9 figures. Published in Astroparticle Physics.
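The shape of such a realtime selection can be sketched as follows. Every name, field, and threshold here is invented for illustration and does not reflect IceCube's actual cuts; the sketch only shows the general pattern of filtering a reconstructed-event stream and emitting alert records for follow-up observatories.

```python
# Hypothetical sketch (event fields and thresholds are invented, not
# IceCube's actual selection): a realtime pipeline keeps only events whose
# reconstructed quantities suggest an astrophysical origin, then emits an
# alert record carrying the sky position for follow-up telescopes.
from dataclasses import dataclass

@dataclass
class Event:
    energy_tev: float      # reconstructed muon energy (TeV)
    signalness: float      # estimated probability of astrophysical origin
    ra_deg: float          # reconstructed right ascension (degrees)
    dec_deg: float         # reconstructed declination (degrees)

def select_alerts(events, min_energy_tev=100.0, min_signalness=0.5):
    """Keep events passing the (invented) energy and signalness cuts."""
    return [e for e in events
            if e.energy_tev >= min_energy_tev and e.signalness >= min_signalness]

stream = [
    Event(energy_tev=30.0,  signalness=0.2, ra_deg=77.4,  dec_deg=5.7),
    Event(energy_tev=290.0, signalness=0.6, ra_deg=110.6, dec_deg=11.4),
]
alerts = select_alerts(stream)
for e in alerts:
    print(f"ALERT ra={e.ra_deg} dec={e.dec_deg} E={e.energy_tev} TeV")
```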
On the Complexity of Bounded Context Switching
Bounded context switching (BCS) is an under-approximate method for finding violations of safety properties in shared-memory concurrent programs. Technically, BCS is a reachability problem that is known to be NP-complete. Our contribution is a parameterized analysis of BCS.
The first result is an algorithm that solves BCS when parameterized by the number of context switches (cs) and the size of the memory (m) in O*(m^(cs)2^(cs)). This is achieved by creating instances of the easier problem Shuff which we solve via fast subset convolution. We also present a lower bound for BCS of the form m^o(cs / log(cs)), based on the exponential time hypothesis. Interestingly, the gap is closely related to a conjecture that has been open since FOCS'07. Further, we prove that BCS admits no polynomial kernel.
Next, we introduce a measure, called scheduling dimension, that captures the complexity of schedules. We study BCS parameterized by the scheduling dimension (sdim) and show that it can be solved in O*((2m)^(4sdim)4^t), where t is the number of threads. We consider variants of the problem for which we obtain (matching) upper and lower bounds.
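The fast-subset-convolution subroutine mentioned above can be sketched in a few lines. This is a minimal textbook implementation (in the style of Björklund, Husfeldt, Kaski, and Koivisto), not the paper's BCS algorithm itself: it computes h(S) = Σ_{T ⊆ S} f(T)·g(S∖T) over all 2^n subsets in O(2^n · n^2) ring operations via ranked zeta and Möbius transforms, instead of the naive O(3^n).

```python
# Minimal fast subset convolution: h(S) = sum over T subset of S of
# f(T) * g(S \ T), in O(2^n * n^2) via ranked zeta/Moebius transforms.

def zeta(a, n):
    """Zeta transform: a[S] := sum of a[T] over all subsets T of S."""
    a = a[:]
    for i in range(n):
        for S in range(1 << n):
            if S & (1 << i):
                a[S] += a[S ^ (1 << i)]
    return a

def moebius(a, n):
    """Inverse of the zeta transform."""
    a = a[:]
    for i in range(n):
        for S in range(1 << n):
            if S & (1 << i):
                a[S] -= a[S ^ (1 << i)]
    return a

def subset_convolution(f, g, n):
    popcount = [bin(S).count("1") for S in range(1 << n)]
    # Split f and g by subset size ("rank"), then zeta-transform each rank.
    fz = [zeta([f[S] if popcount[S] == k else 0 for S in range(1 << n)], n)
          for k in range(n + 1)]
    gz = [zeta([g[S] if popcount[S] == k else 0 for S in range(1 << n)], n)
          for k in range(n + 1)]
    h = [0] * (1 << n)
    for k in range(n + 1):
        # Pointwise products of matching ranks, then invert the transform.
        hz = [sum(fz[i][S] * gz[k - i][S] for i in range(k + 1))
              for S in range(1 << n)]
        hk = moebius(hz, n)
        for S in range(1 << n):
            if popcount[S] == k:
                h[S] = hk[S]
    return h

n = 3
f = [1, 2, 3, 4, 5, 6, 7, 8]
g = [8, 7, 6, 5, 4, 3, 2, 1]
h = subset_convolution(f, g, n)
# Brute-force check against the definition (T subset of S iff S & T == T):
brute = [sum(f[T] * g[S ^ T] for T in range(1 << n) if S & T == T)
         for S in range(1 << n)]
print(h == brute)                        # True
```

The ranking by subset size is what makes the transforms compose correctly: after the zeta transform, pointwise products mix contributions from overlapping T and S∖T, but those spurious terms all land in ranks below |T| + |S∖T| and are discarded when h is read off rank by rank.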
Reliability of existing reinforced concrete slabs exposed to punching shear
Selected standardised models for the verification of punching shear in reinforced concrete structures are applied for the probabilistic assessment of their reliability level. It appears that the models given in EN 1992-1-1 and prEN 1992-1-1 lead to more realistic estimates of the reliability level of existing reinforced concrete members with respect to punching shear than the models recommended in some national codes. The control perimeter has a significant influence on the results and should be harmonized in prescriptive documents.
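The kind of probabilistic assessment described above can be sketched with a crude Monte Carlo reliability analysis. All distributions and parameters below are invented for illustration (they are not the paper's models or data): the sketch estimates the failure probability P_f of a limit state R − E (punching resistance minus load effect) and converts it to a reliability index β = −Φ⁻¹(P_f).

```python
# Hypothetical numbers throughout: a minimal Monte Carlo sketch of a
# probabilistic reliability assessment, estimating beta = -Phi^{-1}(P_f)
# for the limit state R - E (punching resistance minus load effect).
# Distributions and parameters are illustrative only.
import random
from statistics import NormalDist

random.seed(2)
N = 200_000                       # number of Monte Carlo realisations

def resistance():                 # lognormal punching resistance (invented)
    return random.lognormvariate(mu=0.0, sigma=0.15) * 1.8

def load_effect():                # normal load effect (invented)
    return random.gauss(1.0, 0.10)

failures = sum(1 for _ in range(N) if resistance() - load_effect() < 0)
p_f = failures / N
beta = -NormalDist().inv_cdf(p_f) if 0 < p_f < 1 else float("inf")
print(p_f, beta)                  # small failure probability, beta around 3
```

A code-style comparison would then check such a β estimate against the target reliability index prescribed for the relevant consequence class and reference period.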