Sorting and Selection with Imprecise Comparisons
In experimental psychology, the method of paired comparisons was proposed as a means for ranking preferences amongst n elements of a human subject. The method requires performing all n(n-1)/2 comparisons and then sorting elements according to the number of wins. The large number of comparisons is performed to counter the potentially faulty decision-making of the human subject, who acts as an imprecise comparator.
We consider a simple model of the imprecise comparisons: there exists some δ > 0 such that when a subject is given two elements to compare, if the values of those elements (as perceived by the subject) differ by at least δ, then the comparison will be made correctly; when the two elements have values that are within δ, the outcome of the comparison is unpredictable. This δ corresponds to the just noticeable difference (JND) or difference threshold in the psychophysics literature, but does not require the statistical assumptions used to define this value.
In this model, the standard method of paired comparisons minimizes the errors introduced by the imprecise comparisons at the cost of n(n-1)/2 comparisons. We show that the same optimal guarantees can be achieved using 4n^{3/2} comparisons, and we prove the optimality of our method. We then explore the general tradeoff between the guarantees on the error that can be made and the number of comparisons for the problems of sorting, max-finding, and selection. Our results provide close-to-optimal solutions for each of these problems.
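The standard method described in the abstract can be sketched as follows. The δ-comparator model and the count-wins-then-sort step come from the text; the function names and the coin-flip behavior for ties within δ are illustrative assumptions.

```python
import random

def imprecise_compare(x, y, delta, rng):
    """Return True if x is judged larger than y.
    Correct whenever |x - y| >= delta; otherwise the outcome is
    unpredictable, modeled here as a fair coin flip (an assumption)."""
    if abs(x - y) >= delta:
        return x > y
    return rng.random() < 0.5

def rank_by_paired_comparisons(values, delta, rng):
    """Standard method of paired comparisons: perform all n(n-1)/2
    comparisons and order elements by their number of wins."""
    n = len(values)
    wins = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if imprecise_compare(values[i], values[j], delta, rng):
                wins[i] += 1
            else:
                wins[j] += 1
    # Indices ordered from most wins to fewest
    return sorted(range(n), key=lambda i: -wins[i])

rng = random.Random(0)
# All gaps are 1.0 >= delta, so every comparison is made correctly
order = rank_by_paired_comparisons([0.0, 1.0, 2.0, 3.0, 4.0], delta=0.5, rng=rng)
```

When every pairwise gap exceeds δ the method recovers the exact order; the paper's contribution is achieving the same error guarantee with far fewer (4n^{3/2}) comparisons.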
Dark Matter as a Guide Toward a Light Gluino at the LHC
Motivated by specific connections to dark matter signatures, we study the
prospects of observing the presence of a relatively light gluino whose mass is
in the range ~(500-900) GeV with a wino-like lightest supersymmetric particle
with mass in the range of ~(170-210) GeV. The light gaugino spectra studied
here are generally different from those of other models, in particular models
with a wino-dominated LSP, in that here the gluinos can be significantly lighter. The
positron excess reported by the PAMELA satellite data is accounted for by
annihilations of the wino LSP, whose relic abundance can generally be
brought near the WMAP constraints due to the late decay of a modulus field
re-populating the density of relic dark matter. We also mention the recent
FERMI photon constraints on annihilating dark matter in this class of models
and implications for direct detection experiments including CDMS and XENON. We
study these signatures in models of supersymmetry with non-minimal soft
breaking terms derived from both string compactifications and related
supergravity models which generally lead to non-universal gaugino masses. At
the LHC, large event rates from the three-body decays of the gluino in certain
parts of the parameter space are found to give rise to early discovery
prospects for the gaugino sector. Excess events at the 5 sigma level can arise
with luminosity as low as order 100 pb^{-1} at a center of mass energy of 10
TeV and less than ~ 1 fb^{-1} at a center of mass energy of 7 TeV.
Fast Optimal Locally Private Mean Estimation via Random Projections
We study the problem of locally private mean estimation of high-dimensional
vectors in the Euclidean ball. Existing algorithms for this problem either
incur sub-optimal error or have high communication and/or run-time complexity.
We propose a new algorithmic framework, ProjUnit, for private mean estimation
that yields algorithms that are computationally efficient, have low
communication complexity, and incur optimal error up to a (1+o(1))-factor. Our
framework is deceptively simple: each randomizer projects its input to a random
low-dimensional subspace, normalizes the result, and then runs an optimal
algorithm such as PrivUnitG in the lower-dimensional space. In addition, we
show that, by appropriately correlating the random projection matrices across
devices, we can achieve fast server run-time. We mathematically analyze the
error of the algorithm in terms of properties of the random projections, and
study two instantiations. Lastly, our experiments for private mean estimation
and private federated learning demonstrate that our algorithms empirically
obtain nearly the same utility as optimal ones while having significantly lower
communication and computational cost.
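The client-side step the abstract describes (project to a random low-dimensional subspace, then normalize) can be sketched as below. This is an illustrative assumption-laden sketch, not the paper's code: the low-dimensional randomizer (PrivUnitG) is omitted, the projection scaling is a common convention, and correlating projections across devices is modeled simply by sharing a seed.

```python
import numpy as np

def projunit_client_step(x, k, seed):
    """Sketch of a ProjUnit-style randomizer's preprocessing:
    project the unit vector x to a random k-dimensional subspace,
    then renormalize. The result would then be passed to an optimal
    low-dimensional randomizer such as PrivUnitG (not shown here)."""
    d = x.shape[0]
    # Shared seed -> identical (correlated) projection matrices across
    # devices, which the paper uses to speed up the server's work.
    proj_rng = np.random.default_rng(seed)
    W = proj_rng.standard_normal((k, d)) / np.sqrt(k)  # random projection
    y = W @ x
    return y / np.linalg.norm(y)  # normalize onto the k-dim unit sphere

x = np.array([1.0, 0.0, 0.0, 0.0])  # a point on the Euclidean unit ball
y = projunit_client_step(x, k=2, seed=42)
```

The normalization step is what lets an off-the-shelf unit-sphere mechanism run in the k-dimensional space, which is the source of the computational and communication savings.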
Does belief aim (only) at the truth?
It is common to hear talk of the aim of belief and to find philosophers appealing to that aim for numerous explanatory purposes. What belief’s aim explains depends, of course, on what that aim is. Many hold that it is somehow related to truth, but there are various ways in which one might specify belief’s aim using the notion of truth. In this paper, by considering whether they can account for belief’s standard of correctness and the epistemic norms governing belief, I argue against certain prominent specifications of belief’s aim given in terms of truth and advance a neglected alternative.