Indistinguishability Obfuscation from Well-Founded Assumptions
In this work, we show how to construct indistinguishability obfuscation from
subexponential hardness of four well-founded assumptions. We prove:
Let τ ∈ (0,∞), δ ∈ (0,1), ε ∈ (0,1) be arbitrary
constants. Assume sub-exponential security of the following assumptions, where
λ is a security parameter, and the parameters ℓ, k, n below are
large enough polynomials in λ:
- The SXDH assumption on asymmetric bilinear groups of a prime order p = 2^Θ(λ),
- The LWE assumption over Z_p with subexponential
modulus-to-noise ratio 2^(k^ε), where k is the dimension of the LWE
secret,
- The LPN assumption over Z_p with polynomially many LPN samples
and error rate 1/ℓ^δ, where ℓ is the dimension of the LPN
secret,
- The existence of a Boolean PRG in NC^0 with stretch
n^(1+τ).
Then, (subexponentially secure) indistinguishability obfuscation for all
polynomial-size circuits exists.
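To make the LPN assumption above concrete, the sketch below generates LPN samples over Z_p with error rate ℓ^(-δ). All parameter values are illustrative toys, far below anything cryptographically meaningful, and the code is only a sketch of the sample distribution, not of the paper's construction:

```python
import numpy as np

def lpn_samples(p, ell, m, delta, rng):
    """Toy LPN sample generator over Z_p (illustrative parameters only).

    Returns m samples (A, b) with b = A s + e mod p, where each
    coordinate of e is nonzero with probability ell**(-delta).
    """
    s = rng.integers(0, p, size=ell)          # secret of dimension ell
    A = rng.integers(0, p, size=(m, ell))     # uniform sample matrix
    noisy = rng.random(m) < ell ** (-delta)   # error rate 1/ell^delta
    e = np.where(noisy, rng.integers(1, p, size=m), 0)
    b = (A @ s + e) % p
    return A, b, s, e

rng = np.random.default_rng(0)
A, b, s, e = lpn_samples(p=97, ell=16, m=2000, delta=0.5, rng=rng)
```

With ℓ = 16 and δ = 0.5, roughly a quarter of the coordinates carry a nonzero error; the assumption is that (A, b) is hard to distinguish from uniform.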
The natural flow wing-design concept
A wing-design study was conducted on a 65° swept leading-edge delta wing, in which the wing geometry was modified to take advantage of the naturally occurring flow that forms over a slender wing in a supersonic flow field. Three-dimensional nonlinear analysis methods were used in the study, which was divided into three parts: preliminary design, initial design, and final design. In the preliminary design, the wing planform, the design conditions, and the near-conical wing-design concept were derived, and a baseline standard wing (conventional airfoil distribution) and a baseline near-conical wing were chosen. During the initial design, a full-potential flow solver was employed to determine the aerodynamic characteristics of the baseline standard delta wing and to investigate the effects of modifications to the airfoil thickness, leading-edge radius, airfoil maximum-thickness position, and upper-to-lower-surface asymmetry on the baseline near-conical wing. The final design employed an Euler solver to analyze the best wing configurations found in the initial design and to extend the study of wing asymmetry to develop a more refined wing. Benefits resulting from each modification are discussed, and a final 'natural flow' wing geometry was designed that provides improved aerodynamic performance compared with that of a baseline conventional uncambered wing, a linear-theory cambered wing, and a near-conical wing.
Prefixless q-ary balanced codes with fast syndrome-based error correction
Abstract: We investigate a Knuth-like scheme for balancing q-ary codewords, which has the virtue that look-up tables for encoding and decoding the prefix are avoided by using precoding and error-correction techniques. We show how the scheme can be extended to allow for correction of single channel errors using a fast decoding algorithm that depends on syndromes only, making it considerably faster than the prior-art exhaustive decoding strategy. A comparison between the new and prior-art schemes, in terms of both redundancy and error performance, completes the study.
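For the binary case (q = 2), the core of Knuth's balancing idea can be sketched as follows: inverting the first k bits of a word changes its weight by ±1 per step, so for even length m some k always yields a balanced word, and only the index k needs to be communicated. This toy sketch illustrates only that idea; the paper's q-ary, syndrome-based construction is considerably more involved:

```python
def knuth_balance(bits):
    """Find k such that flipping the first k bits balances the word.

    Knuth's argument: as k grows from 0 to m, the weight of the flipped
    word moves by +/-1 per step, from w (k=0) to m-w (k=m), so it must
    pass through m/2 whenever m is even.
    """
    m = len(bits)
    assert m % 2 == 0
    for k in range(m + 1):
        flipped = [1 - b for b in bits[:k]] + list(bits[k:])
        if sum(flipped) == m // 2:
            return k, flipped
    raise AssertionError("unreachable for even m")

k, balanced = knuth_balance([1, 1, 1, 1, 0, 1, 0, 1])  # k == 2, weight 4
```

The decoder recovers the original word from the balanced word together with k; the schemes in the paper differ in how k itself is encoded and protected against channel errors.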
A Physiologically-Inspired Model of Numerical Classification Based on Graded Stimulus Coding
In most natural decision contexts, the process of selecting among competing actions takes place in the presence of informative, but potentially ambiguous, stimuli. Decisions about magnitudes – quantities like time, length, and brightness that are linearly ordered – constitute an important subclass of such decisions. It has long been known that perceptual judgments about such quantities obey Weber's Law, wherein the just-noticeable difference in a magnitude is proportional to the magnitude itself. Current physiologically inspired models of numerical classification assume discriminations are made via a labeled line code of neurons selectively tuned for numerosity, a pattern observed in the firing rates of neurons in the ventral intraparietal area (VIP) of the macaque. By contrast, neurons in the contiguous lateral intraparietal area (LIP) signal numerosity in a graded fashion, suggesting the possibility that numerical classification could be achieved in the absence of neurons tuned for number. Here, we consider the performance of a decision model based on this analog coding scheme in a paradigmatic discrimination task – numerosity bisection. We demonstrate that a basic two-neuron classifier model, derived from experimentally measured monotonic responses of LIP neurons, is sufficient to reproduce the numerosity bisection behavior of monkeys, and that the threshold of the classifier can be set by reward maximization via a simple learning rule. In addition, our model predicts deviations from Weber's Law scaling of choice behavior at high numerosity. Together, these results suggest both a generic neuronal framework for magnitude-based decisions and a role for reward contingency in the classification of such stimuli.
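A minimal sketch of such a graded two-neuron classifier might look like the following. All parameters here are hypothetical, not fitted to the LIP data: two units respond monotonically (one increasing, one decreasing) in log numerosity, the model reports "large" when the increasing unit wins, and noise on the log scale produces Weber-like scaling:

```python
import numpy as np

def p_choose_large(n, trials=4000, noise_sd=8.0, rng=None):
    """Fraction of 'large' choices for numerosity n in a toy bisection task.

    Two hypothetical units code log(n) monotonically, one increasing and
    one decreasing; their crossover sits at n = 4, the geometric mean of
    anchors 2 and 8, mimicking the indifference point in monkey bisection.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    r_inc = 20 * np.log(n) + noise_sd * rng.standard_normal(trials)
    r_dec = 20 * np.log(16.0 / n) + noise_sd * rng.standard_normal(trials)
    return float(np.mean(r_inc > r_dec))

probs = {n: p_choose_large(n) for n in (2, 3, 4, 6, 8)}
```

Because the noise acts on a logarithmic response, the resulting psychometric curve is (approximately) a function of the ratio n/4, which is the Weber-scaling signature; the paper's point is that such behavior needs no number-tuned units at all.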
A re-analysis of the three-year WMAP temperature power spectrum and likelihood
We analyze the three-year WMAP temperature anisotropy data seeking to confirm
the power spectrum and likelihoods published by the WMAP team. We apply five
independent implementations of four algorithms to the power spectrum estimation
and two implementations to the parameter estimation. Our single most important
result is that we broadly confirm the WMAP power spectrum and analysis. Still,
we do find two small but potentially important discrepancies: On large angular
scales there is a small power excess in the WMAP spectrum (5-10% at l<~30)
primarily due to likelihood approximation issues between 13 <= l <~30. On small
angular scales there is a systematic difference between the V- and W-band
spectra (few percent at l>~300). Recently, the latter discrepancy was explained
by Huffenberger et al. (2006) in terms of over-subtraction of unresolved point
sources. As far as the low-l bias is concerned, most parameters are affected by
a few tenths of a sigma. The most important effect is seen in n_s. For the
combination of WMAP, Acbar and BOOMERanG, the significance of n_s ≠ 1 drops
from ~2.7 sigma to ~2.3 sigma when correcting for this bias. We propose a few
simple improvements to the low-l WMAP likelihood code, and introduce two
important extensions to the Gibbs sampling method that allow for proper
sampling of the low signal-to-noise regime. Finally, we make the products from
the Gibbs sampling analysis publicly available, thereby providing a fast and
simple route to the exact likelihood without the need for expensive matrix
inversions.

Comment: 14 pages, 7 figures. Accepted for publication in ApJ. Numerical
results unchanged, but interpretation sharpened: likelihood approximation
issues at l = 13-30 are far more important than potential foreground issues at
l <= 12. Gibbs products (spectrum and sky samples, and "easy-to-use" likelihood
module) available from http://www.astro.uio.no/~hke/ under "Research - …"
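The Gibbs-sampling approach referred to here alternates between sampling the sky signal given the power spectrum and the power spectrum given the signal. A deliberately simplified one-dimensional toy version (white signal and noise, a single scalar variance C standing in for the full spectrum, everything else hypothetical) can illustrate the two conditional steps:

```python
import numpy as np

# Toy Gibbs sampler: data d = s + n, with s_i ~ N(0, C) and n_i ~ N(0, sigma2).
# Alternates s | C, d (Wiener-filter mean plus fluctuation) and C | s
# (inverse-gamma), mimicking the structure of CMB power-spectrum Gibbs sampling.
rng = np.random.default_rng(1)
N, C_true, sigma2 = 2000, 4.0, 1.0
d = rng.normal(0, np.sqrt(C_true), N) + rng.normal(0, np.sqrt(sigma2), N)

C = 1.0                      # arbitrary starting point
chain = []
for step in range(600):
    # s | C, d: Gaussian with Wiener-filter mean and shrunken variance
    var = C * sigma2 / (C + sigma2)
    s = (C / (C + sigma2)) * d + np.sqrt(var) * rng.standard_normal(N)
    # C | s: inverse-gamma under a Jeffreys prior 1/C
    C = 0.5 * np.sum(s ** 2) / rng.gamma(N / 2)
    chain.append(C)

C_est = np.mean(chain[100:])  # posterior mean after burn-in, close to C_true
```

The chain of C samples then gives direct access to the exact marginal posterior of the variance without ever inverting a large covariance matrix, which is the point the abstract makes about avoiding expensive matrix inversions.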