Not Just a Theory - The Utility of Mathematical Models in Evolutionary Biology
Models have made numerous contributions to evolutionary biology, but misunderstandings persist regarding their purpose. By formally testing the logic of verbal hypotheses, proof-of-concept models clarify thinking, uncover hidden assumptions, and spur new directions of study.
Fast Quantum Modular Exponentiation
We present a detailed analysis of the impact on modular exponentiation of
architectural features and possible concurrent gate execution. Various
arithmetic algorithms are evaluated for execution time, potential concurrency,
and space tradeoffs. We find that, to exponentiate an n-bit number, for storage
space 100n (twenty times the minimum 5n), we can execute modular exponentiation
two hundred to seven hundred times faster than optimized versions of the basic
algorithms, depending on architecture, for n=128. Addition on a neighbor-only
architecture is limited to O(n) time when non-neighbor architectures can reach
O(log n), demonstrating that physical characteristics of a computing device
have an important impact on both real-world running time and asymptotic
behavior. Our results will help guide experimental implementations of quantum
algorithms and devices.
Comment: to appear in PRA 71(5); RevTeX, 12 pages, 12 figures; v2 revision is substantial, with new algorithmic variants, much shorter and clearer text, and revised equation formatting.
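The modular exponentiation being accelerated here is, classically, the square-and-multiply primitive. The following minimal Python sketch shows that classical algorithm (not the quantum circuit itself), which quantum arithmetic must implement reversibly:

```python
def mod_exp(base, exponent, modulus):
    """Classical square-and-multiply modular exponentiation.

    Uses O(log exponent) modular multiplications; this is the classical
    primitive that quantum circuits for exponentiating an n-bit number
    must reproduce with reversible arithmetic.
    """
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                      # multiply when the current bit is 1
            result = (result * base) % modulus
        base = (base * base) % modulus        # square for the next bit
        exponent >>= 1
    return result

# Agrees with Python's built-in three-argument pow():
assert mod_exp(7, 128, 2**13 - 1) == pow(7, 128, 2**13 - 1)
```

The quantum versions analyzed in the paper trade space (extra work qubits) against depth in exactly this multiply-and-square loop.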
Kepler Presearch Data Conditioning II - A Bayesian Approach to Systematic Error Correction
With the unprecedented photometric precision of the Kepler Spacecraft,
significant systematic and stochastic errors on transit signal levels are
observable in the Kepler photometric data. These errors, which include
discontinuities, outliers, systematic trends and other instrumental signatures,
obscure astrophysical signals. The Presearch Data Conditioning (PDC) module of
the Kepler data analysis pipeline tries to remove these errors while preserving
planet transits and other astrophysically interesting signals. The completely
new noise and stellar variability regime observed in Kepler data poses a
significant problem to standard cotrending methods such as SYSREM and TFA.
Variable stars are often of particular astrophysical interest, so the
preservation of their signals is of significant importance to the astrophysical
community. We present a Bayesian Maximum A Posteriori (MAP) approach where a
subset of highly correlated and quiet stars is used to generate a cotrending
basis vector set which is in turn used to establish a range of "reasonable"
robust fit parameters. These robust fit parameters are then used to generate a
Bayesian prior and a Bayesian posterior probability distribution function (PDF) which, when maximized, finds the best fit that simultaneously removes systematic effects while reducing the signal distortion and noise injection that commonly afflict simple least-squares (LS) fitting. A numerical and empirical approach
is taken where the Bayesian Prior PDFs are generated from fits to the light
curve distributions themselves.
Comment: 43 pages, 21 figures. Submitted for publication in PASP. Also see companion paper "Kepler Presearch Data Conditioning I - Architecture and Algorithms for Error Correction in Kepler Light Curves" by Martin C. Stumpe et al.
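Under Gaussian prior and noise assumptions, the MAP step described above reduces to a regularized least-squares solve. A conceptual NumPy sketch of that idea (all names illustrative; this is not the pipeline implementation):

```python
import numpy as np

def map_cotrend(flux, basis, theta_prior, prior_var, noise_var):
    """MAP fit of cotrending basis vectors to a light curve.

    With a Gaussian prior N(theta_prior, prior_var * I) on the fit
    coefficients and Gaussian noise of variance noise_var, maximizing
    the posterior is a generalized ridge solve: the prior pulls the
    coefficients toward values derived from the quiet-star ensemble.
    """
    B = np.asarray(basis)            # shape (n_samples, n_vectors)
    y = np.asarray(flux)
    lam = noise_var / prior_var      # prior weight relative to the noise
    A = B.T @ B + lam * np.eye(B.shape[1])
    b = B.T @ y + lam * np.asarray(theta_prior)
    theta = np.linalg.solve(A, b)    # posterior mode of the coefficients
    return y - B @ theta             # systematics-removed light curve
```

As prior_var grows large the solve reduces to ordinary least squares; a tight prior pins the coefficients near the ensemble-derived values, which is what limits noise injection and distortion of intrinsic stellar variability.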
Localization, Coulomb interactions and electrical heating in single-wall carbon nanotubes/polymer composites
Low field and high field transport properties of carbon nanotubes/polymer
composites are investigated for different tube fractions. Above the percolation
threshold f_c=0.33%, transport is due to hopping of localized charge carriers
with a localization length xi=10-30 nm. Coulomb interactions associated with a
soft gap Delta_CG=2.5 meV are present at low temperature close to f_c. We argue
that it originates from the Coulomb charging energy effect which is partly
screened by adjacent bundles. The high field conductivity is described within
an electrical heating scheme. All the results suggest that using composites
close to the percolation threshold may be a way to access intrinsic properties
of the nanotubes by experiments at a macroscopic scale.
Comment: 4 pages, 5 figures, Submitted to Phys. Rev.
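For context, hopping transport of localized carriers in the presence of a soft Coulomb gap is conventionally described by the Efros-Shklovskii law. The following is a sketch of that standard form (with $C \approx 2.8$ a numerical constant and $\xi$ the localization length), not necessarily the exact fitting functions used in the paper:

```latex
\sigma(T) = \sigma_0 \exp\!\left[-\left(\frac{T_{ES}}{T}\right)^{1/2}\right],
\qquad
k_B T_{ES} \simeq \frac{C\, e^2}{4\pi \varepsilon_0 \varepsilon\, \xi}.
```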
Spitzer Infrared Spectrograph Observations of M, L, and T Dwarfs
We present the first mid-infrared spectra of brown dwarfs, together with
observations of a low-mass star. Our targets are the M3.5 dwarf GJ 1001A, the
L8 dwarf DENIS-P J0255-4700, and the T1/T6 binary system epsilon Indi Ba/Bb. As
expected, the mid-infrared spectral morphology of these objects changes rapidly
with spectral class due to the changes in atmospheric chemistry resulting from
their differing effective temperatures and atmospheric structures. By taking
advantage of the unprecedented sensitivity of the Infrared Spectrograph on the
Spitzer Space Telescope we have detected the 7.8 micron methane and 10 micron
ammonia bands for the first time in brown dwarf spectra.
Comment: 4 pages, 2 figures
Initial Characteristics of Kepler Short Cadence Data
The Kepler Mission offers two options for observations -- either Long Cadence
(LC) used for the bulk of core mission science, or Short Cadence (SC) which is
used for applications such as asteroseismology of solar-like stars and transit
timing measurements of exoplanets where the 1-minute sampling is critical. We
discuss the characteristics of SC data obtained in the 33.5-day-long Quarter 1 (Q1) observations with Kepler, which were completed on 15 June 2009. The time series precisions are truly excellent, nearly Poisson limited at 11th magnitude, providing per-point measurement errors of 200 parts-per-million per minute. For
extremely saturated stars near 7th magnitude precisions of 40 ppm are reached,
while for background limited measurements at 17th magnitude precisions of 7
mmag are maintained. We note the presence of two additive artifacts, one that
generates regularly spaced peaks in frequency, and one that involves additive
offsets in the time domain inversely proportional to stellar brightness. The
difference between LC and SC sampling is illustrated for transit observations
of TrES-2.
Comment: 5 pages, 4 figures, ApJ Letters, in press
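The "nearly Poisson limited" figure can be sanity-checked from counting statistics alone, since a Poisson process has fractional uncertainty 1/sqrt(N). An illustrative back-of-envelope (not the mission noise model, which also includes readout, background, and instrumental terms):

```python
import math

def poisson_limit_ppm(photons_per_cadence):
    """Photon-noise floor of one flux measurement, in parts per million.

    The 200 ppm per-minute precision quoted above corresponds to roughly
    2.5e7 detected photons per one-minute cadence, since
    1e6 / sqrt(2.5e7) = 200.
    """
    return 1e6 / math.sqrt(photons_per_cadence)
```

Doubling the photon count improves the floor only by sqrt(2), which is why saturated 7th-magnitude stars reach 40 ppm rather than scaling linearly with brightness.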
Information-theoretic interpretation of quantum error-correcting codes
Quantum error-correcting codes are analyzed from an information-theoretic
perspective centered on quantum conditional and mutual entropies. This approach
parallels the description of classical error correction in Shannon theory,
while clarifying the differences between classical and quantum codes. More
specifically, it is shown how quantum information theory accounts for the fact
that "redundant" information can be distributed over quantum bits even though
this does not violate the quantum "no-cloning" theorem. Such a remarkable
feature, which has no counterpart for classical codes, is related to the
property that the ternary mutual entropy vanishes for a tripartite system in a
pure state. This information-theoretic description of quantum coding is used to
derive the quantum analogue of the Singleton bound on the number of logical
bits that can be preserved by a code of fixed length which can recover a given
number of errors.
Comment: 14 pages RevTeX, 8 Postscript figures. Added appendix. To appear in Phys. Rev.
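The quantum Singleton bound referred to above has a compact standard statement for an $[[n,k,d]]$ code:

```latex
k \le n - 2(d-1),
\qquad\text{so correcting } t \text{ errors } (d \ge 2t+1) \text{ requires } n - k \ge 4t,
```

twice the redundancy of the classical Singleton bound $k \le n - d + 1$, consistent with the paper's entropic account of how "redundant" quantum information is distributed without cloning.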
Extending scientific computing system with structural quantum programming capabilities
We present basic high-level structures used for developing quantum
programming languages. These structures are common to many existing quantum
programming languages, and we describe them using quantum pseudo-code based on
the QCL quantum programming language. We also present an implementation of the
introduced structures in the GNU Octave language for scientific computing. The
procedures used in the implementation are available as the quantum-octave
package, a library of functions that facilitates the simulation of quantum
computing. The package also allows high-level programming concepts to be
incorporated into simulations in GNU Octave and Matlab. As such, it combines
features unique to high-level quantum programming languages with the full
palette of efficient computational routines available in modern scientific
computing systems. To present the major features of the package we provide
implementations of selected quantum algorithms. We also show how quantum errors
can be taken into account during the simulation of quantum algorithms using the
quantum-octave package; this is possible thanks to its ability to operate on
density matrices.
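Operating on density matrices is what lets a simulator include errors, since noise channels are not unitary. A minimal Python/NumPy sketch of the same idea (illustrative only; quantum-octave's actual function names and Octave syntax differ):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)          # Pauli X
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard

def apply_unitary(rho, U):
    """Unitary evolution of a density matrix: rho -> U rho U^dagger."""
    return U @ rho @ U.conj().T

def depolarize(rho, p):
    """Single-qubit depolarizing channel: with probability p the state
    is replaced by the maximally mixed state I/2."""
    return (1 - p) * rho + p * I2 / 2

# Prepare |0><0|, apply a Hadamard, then 10% depolarizing noise.
rho = np.array([[1, 0], [0, 0]], dtype=complex)
rho = depolarize(apply_unitary(rho, H), 0.1)

# The X expectation value drops from 1 (pure |+>) to 1 - p = 0.9.
x_expect = np.trace(rho @ X).real
```

A pure-state (state-vector) simulator cannot represent the mixed state produced by the channel; the density-matrix formalism handles it with one extra matrix dimension.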
Characterizing two solar-type Kepler subgiants with asteroseismology: KIC10920273 and KIC11395018
Determining fundamental properties of stars through stellar modeling has
improved substantially due to recent advances in asteroseismology. Thanks to
the unprecedented data quality obtained by space missions, particularly CoRoT
and Kepler, invaluable information is extracted from the high-precision stellar
oscillation frequencies, which provide very strong constraints on possible
stellar models for a given set of classical observations. In this work, we have
characterized two relatively faint stars, KIC10920273 and KIC11395018, using
oscillation data from Kepler photometry and atmospheric constraints from
ground-based spectroscopy. Both stars have very similar atmospheric properties;
however, using the individual frequencies extracted from the Kepler data, we
have determined quite distinct global properties, with increased precision
compared to that of earlier results. We found that both stars have left the
main sequence and characterized them as follows: KIC10920273 is a
one-solar-mass star (M=1.00 +/- 0.04 M_sun), but much older than our Sun
(t=7.12 +/- 0.47 Gyr), while KIC11395018 is significantly more massive than the
Sun (M=1.27 +/- 0.04 M_sun) with an age close to that of the Sun (t=4.57 +/-
0.23 Gyr). We confirm that the high lithium abundance reported for these stars should not be taken as an indication of youth, as we have precisely determined both to be evolved subgiants. We discuss the use of surface lithium abundance, rotation, and activity relations as potential age diagnostics.
Comment: 12 pages, 3 figures, 5 tables. Accepted by Ap
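The individual-frequency fits used here are far more constraining than global seismic quantities, but the standard solar-calibrated scaling relations show schematically how oscillation observables tie down mass and radius:

```latex
\frac{\Delta\nu}{\Delta\nu_\odot} \simeq
\left(\frac{M}{M_\odot}\right)^{1/2}\left(\frac{R}{R_\odot}\right)^{-3/2},
\qquad
\frac{\nu_{\max}}{\nu_{\max,\odot}} \simeq
\frac{M}{M_\odot}\left(\frac{R}{R_\odot}\right)^{-2}
\left(\frac{T_{\rm eff}}{T_{{\rm eff},\odot}}\right)^{-1/2},
```

where $\Delta\nu$ is the large frequency separation and $\nu_{\max}$ the frequency of maximum oscillation power.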
Non-adaptive Measurement-based Quantum Computation and Multi-party Bell Inequalities
Quantum correlations exhibit behaviour that cannot be resolved with a local
hidden variable picture of the world. In quantum information, they are also
used as resources for information processing tasks, such as Measurement-based
Quantum Computation (MQC). In MQC, universal quantum computation can be
achieved via adaptive measurements on a suitable entangled resource state. In
this paper, we look at a version of MQC in which we remove the adaptivity of
measurements and aim to understand what computational abilities still remain in
the resource. We show that there are explicit connections between this model of
computation and the question of non-classicality in quantum correlations. We
demonstrate this by focussing on deterministic computation of Boolean
functions, in which natural generalisations of the Greenberger-Horne-Zeilinger
(GHZ) paradox emerge; we then explore probabilistic computation, via which
multipartite Bell Inequalities can be defined. We use this correspondence to
define families of multi-party Bell inequalities, which we show to have a
number of interesting contrasting properties.
Comment: 13 pages, 4 figures, final version accepted for publication
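The GHZ paradox underlying the deterministic part of the paper can be checked directly with a small state-vector computation (a standard illustration, not the paper's notation):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

def kron3(a, b, c):
    """Three-qubit tensor product operator."""
    return np.kron(a, np.kron(b, c))

# Three-qubit GHZ state (|000> + |111>)/sqrt(2)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def expect(op):
    return (ghz.conj() @ op @ ghz).real

# Quantum mechanics gives <XXX> = +1 but <XYY> = <YXY> = <YYX> = -1.
# Any local assignment of fixed +/-1 outcomes forces the product of these
# four quantities to be +1, while the quantum product is -1: the paradox.
vals = [expect(kron3(X, X, X)),
        expect(kron3(X, Y, Y)),
        expect(kron3(Y, X, Y)),
        expect(kron3(Y, Y, X))]
```

In the non-adaptive MQC picture, these four sign constraints are exactly a Boolean function that no local-hidden-variable resource can compute deterministically, which is the bridge to the Bell inequalities defined in the paper.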