Onset of criticality and transport in a driven diffusive system
We study transport properties in a slowly driven diffusive system in which the
transport is externally controlled by a tuning parameter. Three types of
behavior are found: below a lower threshold of the parameter, the system does
not conduct at all; for intermediate values, a finite fraction of the external
excitations propagates through the system; above an upper threshold, the system
becomes completely conducting. Throughout the intermediate regime the system
exhibits self-organized critical behavior. In the middle of this regime, at a
critical value of the parameter, the system undergoes a continuous phase
transition described by critical exponents.

Comment: 4 latex/revtex pages; 4 figures
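The abstract's control parameter and threshold values were lost in extraction. As a purely illustrative toy (not the paper's model), the following sketch simulates a slowly driven one-dimensional stochastic sandpile in which a hypothetical bulk-dissipation probability `gamma` stands in for the transport-control parameter, and the conducted fraction is the share of added excitations that leave through the open boundaries:

```python
import random

def driven_pile(L=50, grains=2000, gamma=0.1, seed=0):
    """Slowly driven 1-D stochastic sandpile (illustrative toy).
    Grains are added one at a time at random sites; any site holding >= 2
    grains topples, sending each of two grains to a random neighbour.
    A grain stepping past an open boundary is 'conducted' out; with
    probability gamma a moved grain is instead dissipated in the bulk,
    so gamma plays the role of a transport-control parameter.
    Returns the conducted fraction of the added grains."""
    rng = random.Random(seed)
    z = [0] * L
    conducted = 0
    for _ in range(grains):
        z[rng.randrange(L)] += 1
        active = [i for i in range(L) if z[i] >= 2]
        while active:
            i = active.pop()
            if z[i] < 2:
                continue  # stale entry left over from earlier topplings
            z[i] -= 2
            for _ in range(2):
                if rng.random() < gamma:
                    continue  # grain dissipated in the bulk
                j = i + rng.choice((-1, 1))
                if j < 0 or j >= L:
                    conducted += 1  # grain conducted out of the system
                else:
                    z[j] += 1
                    if z[j] >= 2:
                        active.append(j)
            if z[i] >= 2:
                active.append(i)
    return conducted / grains
```

At small `gamma` nearly every added grain is eventually conducted out, while at large `gamma` almost none are, giving the conducting/non-conducting extremes described in the abstract.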
High rate locally-correctable and locally-testable codes with sub-polynomial query complexity
In this work, we construct the first locally correctable codes (LCCs) and
locally testable codes (LTCs) with constant rate, constant relative distance,
and sub-polynomial query complexity. Specifically, we show that there exist
binary LCCs and LTCs with block length $n$, constant rate (which can even be
taken arbitrarily close to 1), constant relative distance, and query complexity
$n^{o(1)}$. Previously, such codes were known to exist only with query
complexity $n^{\epsilon}$ (for constant $\epsilon > 0$), and there were
several quite different constructions known.
Our codes are based on a general distance-amplification method of Alon and
Luby~\cite{AL96_codes}. We show that this method interacts well with local
correctors and testers, and obtain our main results by applying it to suitably
constructed LCCs and LTCs in the non-standard regime of \emph{sub-constant
relative distance}.
Along the way, we also construct LCCs and LTCs over large alphabets with the
same sub-polynomial query complexity, which additionally have the property of
approaching the Singleton bound: they have almost the best-possible
relationship between their rate and distance. This has the surprising
consequence that asking for a large-alphabet error-correcting code to further
be an LCC or LTC with sub-polynomial query complexity does not require any
sacrifice in terms of rate and distance! Such a result was previously not
known for any query complexity.
Our results on LCCs also immediately give locally decodable codes (LDCs) with
the same parameters.
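To fix what "local correction" means, here is the classic 2-query local corrector for the Hadamard code, which has the query-access pattern (though nothing like the constant rate) of the codes in the abstract. A minimal sketch with our own function names:

```python
import random

def bits(x, k):
    """Little-endian bit vector of x, viewed as an element of F_2^k."""
    return [(x >> i) & 1 for i in range(k)]

def hadamard_encode(m):
    """Hadamard codeword of a k-bit message: position x holds <m, x> over F_2.
    The rate is terrible (k bits -> 2^k bits), but correction needs 2 queries."""
    k = len(m)
    return [sum(mi & xi for mi, xi in zip(m, bits(x, k))) % 2
            for x in range(2 ** k)]

def local_correct(word, x, k):
    """2-query local correction: since <m, x XOR r> = <m, x> XOR <m, r>,
    querying positions x^r and r for a uniformly random mask r recovers
    position x with probability >= 1 - 2*delta when a delta-fraction of
    the word is corrupted."""
    r = random.randrange(2 ** k)
    return word[x ^ r] ^ word[r]
```

Repeating `local_correct` and taking a majority vote drives the error probability down exponentially; the paper's construction is different, but exposes the same kind of query access.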
Impacts of Aspen and Conifer Vegetation on Predation Risk and Distributions of Bird Species
Aspen forests are in decline around the globe and are largely being replaced by conifers. Associated with this shift in forest composition, we document an increase in nest predation risk and a decrease in abundance of bird species that breed in aspens. These observational data from 5 years across 19 forest stands in western Montana were verified with an adaptive management experiment removing all conifers from three large aspen stands in the Mt. Haggin WMA. This landscape-scale approach strongly supports the active management of aspen stands, by such methods as removing conifers, to improve breeding bird habitat. Our results also suggest that vegetation-mediated effects of predation are associated with avian distributions and species turnover.
A Noninvasive Optical Probe for Detecting Electrical Signals in Silicon IC’s
We report using a 1.3 µm (silicon sub-bandgap) optical probing system to detect electrical signals in silicon integrated circuits. Free carriers within integrated active devices perturb the index of refraction of the material, and we have used a Nomarski interferometer to sense this perturbation. Typical charge-density modulation in active devices produces a substantial index perturbation, and because of this, we have used an InGaAsP semiconductor laser to experimentally observe real-time 0.8 V digital signals applied to a bipolar transistor. These signals were detected with a signal-to-noise ratio of 20 dB in a system detection bandwidth of over 200 MHz.
Since the free-carrier-induced refractive-index perturbation is present in all semiconductor materials, we expect in the future to be able to detect signals in integrated circuits fabricated in GaAs or any other semiconductor material, and by taking advantage of the high spatial and temporal resolution of this system, we should be able to observe free-carrier dynamics within most active devices.
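The free-carrier index perturbation such a probe senses is, in the standard Drude (plasma-dispersion) approximation, a textbook expression rather than necessarily the authors' exact analysis:

```latex
\Delta n \;=\; -\,\frac{e^{2}\lambda^{2}}{8\pi^{2}c^{2}\varepsilon_{0}\,n_{0}}
\left( \frac{\Delta N_{e}}{m_{e}^{*}} + \frac{\Delta N_{h}}{m_{h}^{*}} \right)
```

where $\lambda$ is the probe wavelength (1.3 µm here), $n_{0}$ the unperturbed refractive index, and $\Delta N_{e}$, $\Delta N_{h}$ the injected electron and hole densities. A sub-bandgap wavelength avoids absorption in silicon, and the $\lambda^{2}$ factor favors longer probe wavelengths.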
On optimal entanglement assisted one-shot classical communication
The one-shot success probability of a noisy classical channel for
transmitting one classical bit is the optimal probability with which the bit
can be sent via a single use of the channel. Prevedel et al. (PRL 106, 110505
(2011)) recently showed that for a specific channel, this quantity can be
increased if the parties using the channel share an entangled quantum state. We
completely characterize the optimal entanglement-assisted protocols in terms of
the radius of a set of operators associated with the channel. This
characterization can be used to construct optimal entanglement-assisted
protocols from a given classical channel and to prove limits on such
protocols. As an example, we show that the Prevedel et al. protocol is optimal
for two-qubit entanglement. We also prove some simple upper bounds on the
improvement that can be obtained from quantum and no-signaling correlations.

Comment: 5 pages, plus 7 pages of supplementary material. v2 is significantly
expanded and contains a new result (Theorem 2).
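For comparison with the entanglement-assisted case, the unassisted one-shot success probability for a uniformly random bit reduces to a total-variation distance: encode the bit into the pair of channel inputs whose output distributions are farthest apart. A minimal sketch (the channel representation and function names are ours):

```python
from itertools import combinations

def tv_distance(p, q):
    """Total variation distance between two finite output distributions,
    given as dicts mapping outputs to probabilities."""
    outs = set(p) | set(q)
    return 0.5 * sum(abs(p.get(o, 0.0) - q.get(o, 0.0)) for o in outs)

def one_shot_bit_success(channel):
    """Optimal unassisted success probability for sending one uniform bit
    through a classical channel (dict: input -> output distribution).
    The sender encodes the bit into the two inputs with maximal TV
    distance; the optimal decoder then succeeds with (1 + TV)/2."""
    best = max(tv_distance(channel[a], channel[b])
               for a, b in combinations(channel, 2))
    return 0.5 * (1.0 + best)
```

For a binary symmetric channel with flip probability $p < 1/2$ this gives $1 - p$, the baseline against which entanglement assistance is measured.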
Constant Ciphertext-Rate Non-committing Encryption from Standard Assumptions
Non-committing encryption (NCE) is a type of public key encryption which comes with the ability to equivocate ciphertexts to encryptions of arbitrary messages, i.e., it allows one to find coins for key generation and encryption which “explain” a given ciphertext as an encryption of any message. NCE is the cornerstone to construct adaptively secure multiparty computation [Canetti et al. STOC’96] and can be seen as the quintessential notion of security for public key encryption to realize ideal communication channels.
A large body of literature investigates the best message-to-ciphertext ratio (i.e., the rate) that one can hope to achieve for NCE. In this work we propose a near-complete resolution of this question: we show how to construct NCE with constant rate in the plain model from a variety of assumptions, such as the hardness of the learning with errors (LWE), decisional Diffie-Hellman (DDH), or quadratic residuosity (QR) problems. Prior to our work, constructing NCE with constant rate required a trusted setup and indistinguishability obfuscation [Canetti et al. ASIACRYPT’17].
Bird species turnover is related to changing predation risk along a vegetation gradient
Turnover in animal species along vegetation gradients is often assumed to reflect adaptive habitat preferences that are narrower than the full gradient. Specifically, animals may decline in abundance where their reproductive success is low, and these poor-quality locations differ among species. Yet habitat use does not always appear adaptive, and crucial tests of how the abundances and demographic costs of animals vary along experimentally manipulated vegetation gradients are lacking. We examined habitat use and nest predation rates for 16 bird species that exhibited turnover with shifts in deciduous and coniferous vegetation. For most bird species, decreasing abundance was associated with increasing predation rates along both natural and experimentally modified vegetation gradients. This landscape-scale approach strongly supports the idea that vegetation-mediated effects of predation are associated with animal distributions and species turnover.
The Distance To The Hyades Cluster Based On Hubble Space Telescope Fine Guidance Sensor Parallaxes
Trigonometric parallax observations made with the Hubble Space Telescope (HST) Fine Guidance Sensor (FGS) 3 of seven Hyades members in six fields of view have been analyzed along with their proper motions to determine the distance to the cluster. Knowledge of the convergent point and mean proper motion of the Hyades is critical to the derivation of the distance to the center of the cluster. Depending on the choice of the proper-motion system, the derived cluster center distance varies by 9%. Adopting a reference distance of 46.1 pc or m - M = 3.32, which is derived from the ground-based parallaxes in the General Catalogue of Trigonometric Stellar Parallaxes (1995 edition), the FK5/PPM proper-motion system yields a distance 4% larger, while the Hanson system yields a distance 2% smaller. The HST FGS parallaxes reported here yield either a 14% or 5% larger distance, depending on the choice of the proper-motion system. Orbital parallaxes (Torres et al.) yield an average distance 4% larger than the reference distance. The variation in the distance derived from the HST data illustrates the importance of the proper-motion system and the individual proper motions to the derivation of the distance to the Hyades center; therefore, full utilization of the HST FGS parallaxes awaits the establishment of an accurate and consistent proper-motion system.
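The quoted reference values are tied together by the standard distance-modulus relation:

```latex
m - M = 5\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)
\quad\Longrightarrow\quad
d = 10^{\,(m-M)/5 + 1}\ \mathrm{pc},
\qquad
m - M = 3.32 \ \Rightarrow\ d \approx 46.1\ \mathrm{pc}.
```

The quoted percentage offsets in distance translate directly into offsets in the distance modulus via this relation.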
Atmospheric Density Uncertainty Quantification for Satellite Conjunction Assessment
Conjunction assessment requires knowledge of the uncertainty in the predicted
orbit. Errors in the atmospheric density are a major source of error in the
prediction of low Earth orbits. Therefore, accurate estimation of the density
and quantification of the uncertainty in the density is required. Most
atmospheric density models, however, do not provide an estimate of the
uncertainty in the density. In this work, we present a new approach to quantify
uncertainties in the density and to include these for calculating the
probability of collision Pc. For this, we employ a recently developed dynamic
reduced-order density model that enables efficient prediction of the
thermospheric density. First, the model is used to obtain accurate estimates of
the density and of the uncertainty in the estimates. Second, the density
uncertainties are propagated forward simultaneously with orbit propagation to
include the density uncertainties for Pc calculation. For this, we account for
the effect of cross-correlation in position uncertainties due to density errors
on the Pc. Finally, the effect of density uncertainties and cross-correlation
on the Pc is assessed. The presented approach provides the distinctive
capability to quantify the uncertainty in atmospheric density and to include
this uncertainty for conjunction assessment while taking into account the
dependence of the density errors on location and time. In addition, the results
show that it is important to consider the effect of cross-correlation on the
Pc, because ignoring this effect can result in severe underestimation of the
collision probability.

Comment: 15 pages, 6 figures, 5 tables
Adaptively Indistinguishable Garbled Circuits
A garbling scheme is used to garble a circuit and an input in a way that reveals the output but hides everything else. An adaptively secure scheme allows the adversary to specify the input after seeing the garbled circuit. Applebaum et al. (CRYPTO '13) showed that in any garbling scheme with adaptive simulation-based security, the size of the garbled input must exceed the output size of the circuit. Here we show how to circumvent this lower bound and achieve significantly better efficiency, under the minimal assumption that one-way functions exist, by relaxing the security notion from simulation-based to indistinguishability-based.
We rely on the recent work of Hemenway et al. (CRYPTO '16), which constructed an adaptive simulation-based garbling scheme under one-way functions. The size of the garbled input in their scheme is as large as the output size of the circuit plus a certain pebble complexity of the circuit, where the latter is (for example) bounded by the space complexity of the computation. By building on top of their construction and adapting their proof technique, we show how to remove the output-size dependence in their result when considering indistinguishability-based security.
As an application of the above result, we get a symmetric-key functional encryption scheme based on one-way functions, with indistinguishability-based security, where the adversary can obtain an unbounded number of function secret keys and then adaptively a single challenge ciphertext. The size of the ciphertext only depends on the maximal pebble complexity of each of the functions, but not on the number of functions or their circuit size.
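The objects in this abstract can be made concrete with the classical (non-adaptive) Yao construction: the "garbled input" is one random label per input wire, and each gate is a small table of double encryptions. A toy single-AND-gate sketch using SHA-256 as a PRF stand-in (illustrative only; not the Hemenway et al. scheme):

```python
import os, random, hashlib

ZERO_TAG = bytes(16)  # all-zero tag marks a successful decryption

def prf(key):
    """Hash-based stand-in for a PRF evaluated at a fixed gate tweak."""
    return hashlib.sha256(key + b'gate0').digest()  # 32 bytes

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and_gate():
    """Garble one AND gate: a random 16-byte label per (wire, bit), plus a
    shuffled 4-row table; row (i, j) double-encrypts output label c[i & j]
    followed by a zero tag, under input labels a[i] and b[j]."""
    labels = {w: (os.urandom(16), os.urandom(16)) for w in 'abc'}
    table = [xor(xor(prf(labels['a'][i]), prf(labels['b'][j])),
                 labels['c'][i & j] + ZERO_TAG)
             for i in (0, 1) for j in (0, 1)]
    random.shuffle(table)  # hide which row corresponds to which inputs
    return labels, table

def evaluate(table, la, lb):
    """The 'garbled input' is one label per input wire; with overwhelming
    probability exactly one row decrypts to a plaintext ending in the
    zero tag, and that row holds the correct output label."""
    pad = xor(prf(la), prf(lb))
    for row in table:
        plain = xor(pad, row)
        if plain[16:] == ZERO_TAG:
            return plain[:16]
```

In the adaptive setting of the abstract, the adversary sees the table(s) before choosing the input, i.e., before the input labels are fixed, which is precisely what makes the garbled-input size lower bound (and its circumvention) interesting.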