Quantum communication via a continuously monitored dual spin chain
We analyze a recent protocol for the transmission of quantum states via a
dual spin chain [Burgarth and Bose, Phys. Rev. A 71, 052315 (2005)] under the
constraint that the receiver's measurement strength is finite. That is, we
consider the channel where the ideal, instantaneous and complete von Neumann
measurements are replaced with a more realistic continuous measurement. We show
that for optimal performance the measurement strength must be "tuned" to the
channel spin-spin coupling, and once this is done, one is able to achieve a
similar transmission rate to that obtained with ideal measurements. The spin
chain protocol thus remains effective under measurement constraints.
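A minimal numerical sketch of the tuning effect described above (an illustrative single-chain toy model, not the paper's dual-rail protocol; the values of N, J, T, and kappa are arbitrary choices): the receiver's continuous measurement is folded into a non-Hermitian sink term on the end site, so the norm lost by the wavefunction equals the cumulative detection probability. Sweeping kappa/J shows that detection degrades both for weak monitoring and for strong, Zeno-like monitoring, with a broad optimum when kappa is comparable to the coupling.

    # Single excitation hopping on an N-site XX chain; the last site is
    # continuously monitored at strength kappa, modeled as -i*kappa/2.
    import numpy as np
    from scipy.linalg import expm

    N, J, T = 10, 1.0, 40.0          # sites, spin-spin coupling, total time

    def detection_probability(kappa):
        # Hopping Hamiltonian restricted to the single-excitation sector.
        H = np.zeros((N, N), dtype=complex)
        for i in range(N - 1):
            H[i, i + 1] = H[i + 1, i] = J
        H[-1, -1] -= 0.5j * kappa    # continuous measurement on receiver site
        psi0 = np.zeros(N, dtype=complex)
        psi0[0] = 1.0                # excitation injected at the sender's end
        psi_T = expm(-1j * H * T) @ psi0
        # Norm loss under the non-Hermitian evolution = detected fraction.
        return 1.0 - np.vdot(psi_T, psi_T).real

    for kappa in [0.1, 0.5, 1.0, 2.0, 10.0]:
        print(f"kappa/J = {kappa:4.1f}: P_detect = {detection_probability(kappa):.3f}")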
A Two-Process Model for Control of Legato Articulation Across a Wide Range of Tempos During Piano Performance
Prior reports indicated a non-linear increase in key overlap times (KOTs) as tempo slows for scales and arpeggios performed at internote intervals (INIs) of 100-1000 ms. Simulations illustrate that this function can be explained by a two-process model. An oscillating neural network, based on the dynamics of the vector-integration-to-endpoint model for central generation of voluntary actions, allows performers to compute an estimate of the time remaining before the oscillator's next cycle onset. At fixed successive threshold values of this estimate, they first launch keystroke n+1 and then lift keystroke n. As tempo slows, the time required to pass between threshold crossings lengthens, and KOT increases. If only this process prevailed, performers would produce longer-than-observed KOTs at the slowest tempo. The full data set is explicable if subjects lift keystroke n whenever they cross the second threshold or receive sensory feedback from stroke n+1, whichever comes earlier.
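A deliberately simplified caricature of the two-process account (all parameter values below are illustrative, not fitted to the reported data): thresholds fixed as fractions of the cycle make the interval between the "launch n+1" and "lift n" crossings stretch as the INI grows, while sensory feedback from stroke n+1, arriving a fixed latency after its launch, caps the overlap at the slowest tempos.

    def key_overlap_time(ini_ms, launch_frac=0.85, lift_frac=0.95,
                         fb_latency_ms=80.0):
        t_launch = launch_frac * ini_ms             # keystroke n+1 launched here
        t_lift_threshold = lift_frac * ini_ms       # second threshold crossing
        t_lift_feedback = t_launch + fb_latency_ms  # feedback from stroke n+1
        # Lift keystroke n at whichever event comes earlier.
        t_lift = min(t_lift_threshold, t_lift_feedback)
        return t_lift - t_launch                    # key overlap time (KOT)

    for ini in [100, 250, 500, 750, 1000]:
        print(f"INI = {ini:4d} ms -> KOT = {key_overlap_time(ini):5.1f} ms")

The threshold process alone would make KOT grow without bound as tempo slows; the feedback cap flattens the function at long INIs, mirroring the account given in the abstract.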
Complete Genome Sequences of Lactobacillus Phages J-1 and PL-1
Lactobacillus phages J-1 and PL-1 were isolated during the 1960s from abnormal fermentations of Yakult. The genomes are almost identical, but PL-1 has a deletion in the genetic switch region and also differs in a gene coding for a putative tail protein.
Upon accounting for the impact of isoenzyme loss, gene deletion costs anticorrelate with their evolutionary rates
System-level metabolic network models enable the computation of growth and metabolic phenotypes from an organism’s genome. In particular, flux balance approaches have been used to estimate the contribution of individual metabolic genes to organismal fitness, offering the opportunity to test whether such contributions carry information about the evolutionary pressure on the corresponding genes. Previous failure to identify the expected negative correlation between such computed gene-loss cost and sequence-derived evolutionary rates in Saccharomyces cerevisiae has been ascribed to a real biological gap between a gene’s fitness contribution to an organism “here and now” and the same gene’s historical importance as evidenced by its accumulated mutations over millions of years of evolution. Here we show that this negative correlation does exist, and can be exposed by revisiting a broadly employed assumption of flux balance models. In particular, we introduce a new metric that we call “function-loss cost”, which estimates the cost of a gene loss event as the total potential functional impairment caused by that loss. This new metric displays significant negative correlation with evolutionary rate, across several thousand minimal environments. We demonstrate that the improvement gained using function-loss cost over gene-loss cost is explained by replacing the base assumption that isoenzymes provide unlimited capacity for backup with the assumption that isoenzymes are completely non-redundant. We further show that this change of the assumption regarding isoenzymes increases the recall of epistatic interactions predicted by the flux balance model at the cost of a reduction in the precision of the predictions. In addition to suggesting that the gene-to-reaction mapping in genome-scale flux balance models should be used with caution, our analysis provides new evidence that evolutionary gene importance captures much more than strict essentiality.
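A toy flux balance example of why gene-loss cost and function-loss cost diverge for isoenzymes (a four-reaction network invented for illustration, not the paper's genome-scale model): metabolite A is produced by uptake and converted to a biomass precursor by two isoenzyme-catalyzed copies of the same reaction, R1 (gene g1) and R2 (gene g2). Under the usual gene-to-reaction mapping, deleting g1 leaves R2 as an unlimited backup, so the gene-loss cost is zero; the function-loss view treats the isoenzymes as non-redundant, so losing g1 disables the reaction outright.

    import numpy as np
    from scipy.optimize import linprog

    # Columns: v_uptake, v_R1, v_R2, v_biomass; rows balance metabolites A, B.
    S = np.array([[1.0, -1.0, -1.0,  0.0],   # A: made by uptake, used by R1/R2
                  [0.0,  1.0,  1.0, -1.0]])  # B: made by R1/R2, used by biomass

    def max_growth(r1_ub, r2_ub, uptake_ub=10.0):
        c = [0.0, 0.0, 0.0, -1.0]            # maximize biomass flux
        bounds = [(0, uptake_ub), (0, r1_ub), (0, r2_ub), (0, None)]
        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
        return -res.fun

    wt = max_growth(10.0, 10.0)
    # Gene-loss view: g1 deleted, isoenzyme R2 backs up the reaction fully.
    print(f"gene-loss cost (g1):     {wt - max_growth(0.0, 10.0):.2f}")
    # Function-loss view: isoenzymes non-redundant, so the reaction is lost.
    print(f"function-loss cost (g1): {wt - max_growth(0.0, 0.0):.2f}")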
A Sensitivity and Array-Configuration Study for Measuring the Power Spectrum of 21cm Emission from Reionization
Telescopes aiming to measure 21cm emission from the Epoch of Reionization
must toe a careful line, balancing the need for raw sensitivity against the
stringent calibration requirements for removing bright foregrounds. It is
unclear what the optimal design is for achieving both of these goals. Via a
pedagogical derivation of an interferometer's response to the power spectrum of
21cm reionization fluctuations, we show that even under optimistic scenarios,
first-generation arrays will yield low-SNR detections, and that different
compact array configurations can substantially alter sensitivity. We explore
the sensitivity gains of array configurations that yield high redundancy in the
uv-plane -- configurations that have been largely ignored since the advent of
self-calibration for high-dynamic-range imaging. We first introduce a
mathematical framework to generate optimal minimum-redundancy configurations
for imaging. We contrast the sensitivity of such configurations with
high-redundancy configurations, finding that high-redundancy configurations can
improve power-spectrum sensitivity by more than an order of magnitude. We
explore how high-redundancy array configurations can be tuned to various
angular scales, enabling array sensitivity to be directed away from regions of
the uv-plane (such as the origin) where foregrounds are brighter and where
instrumental systematics are more problematic. We demonstrate that a
132-antenna deployment of the Precision Array for Probing the Epoch of
Reionization (PAPER) observing for 120 days in a high-redundancy configuration
will, under ideal conditions, have the requisite sensitivity to detect the
power spectrum of the 21cm signal from reionization at a 3\sigma level at
k<0.25h Mpc^{-1} in a bin of \Delta ln k=1. We discuss the tradeoffs of low-
versus high-redundancy configurations.
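A back-of-the-envelope sketch of the redundancy counting that underlies this comparison (the antenna layouts below are invented for illustration, not the paper's optimized configurations): redundant baselines sample the same uv mode, so their visibilities can be combined coherently, which is the source of the power-spectrum sensitivity gain over a layout that spreads baselines across unique uv points.

    import itertools
    from collections import Counter

    def redundancy_histogram(positions):
        """Count how many antenna pairs share each integer baseline vector."""
        baselines = Counter()
        for (x1, y1), (x2, y2) in itertools.combinations(positions, 2):
            baselines[(x2 - x1, y2 - y1)] += 1
        return baselines

    grid = [(i, j) for i in range(4) for j in range(4)]       # 16 antennas
    scatter = [(i * 7 % 23, i * 11 % 19) for i in range(16)]  # quasi-random

    for name, pos in [("4x4 grid", grid), ("scattered", scatter)]:
        hist = redundancy_histogram(pos)
        print(f"{name:9s}: {sum(hist.values())} pairs, "
              f"{len(hist)} unique uv points, "
              f"max redundancy {max(hist.values())}")

The compact grid concentrates its 120 pairs onto far fewer uv points than the scattered layout, illustrating how layout alone redirects sensitivity toward a handful of heavily sampled modes.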
Psychological Effects of Thought Acceleration
Six experiments found that manipulations that increase thought speed also yield positive affect. These experiments varied in both the methods used for accelerating thought (i.e., instructions to brainstorm freely, exposure to multiple ideas, encouragement to plagiarize others’ ideas, performance of easy cognitive tasks, narration of a silent video in fast-forward, and experimentally controlled reading speed) and the contents of the thoughts that were induced (from thoughts about money-making schemes to thoughts of five-letter words). The results suggested that effects of thought speed on mood are partially rooted in the subjective experience of thought speed. The results also suggested that these effects can be attributed to the joy-enhancing effects of fast thinking (rather than only to the joy-killing effects of slow thinking). This work is inspired by observations of a link between “racing thoughts” and euphoria in cases of clinical mania, and potential implications of that observed link are discussed.
Gait Velocity Estimation Using Time Intervals Between Consecutive Passive IR Sensor Activations
Gait velocity has been consistently shown to be an important indicator and
predictor of health status, especially in older adults. It is often assessed
clinically, but the assessments occur infrequently and do not allow optimal
detection of key health changes when they occur. In this paper, we show that
the time gap between activations of a pair of Passive Infrared (PIR) motion
sensors installed in consecutively visited rooms carries rich latent
information about a person's gait velocity. We call this time gap the
transition time and show that, despite the six-second refractory period of
the PIR sensors, the transition time can be used to obtain an accurate
representation of gait velocity.
Using a Support Vector Regression (SVR) approach to model the relationship
between transition time and gait velocity, we show that gait velocity can be
estimated with an average error less than 2.5 cm/sec. This is demonstrated with
data collected over a 5 year period from 74 older adults monitored in their own
homes.
This method is simple and cost-effective, and it has advantages over
competing approaches: it yields 20 to 100 times more gait velocity
measurements per day, and it fuses location-specific information with
time-stamped gait estimates. These advantages allow stable estimates of gait
parameters (maximum or average speed, variability) at shorter time scales
than current approaches. The method also provides a pervasive in-home means
of context-aware gait velocity sensing that allows gait trajectories to be
monitored in space and time.
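A hedged sketch of the regression step (synthetic data; the study's actual features, kernel, and hyperparameters are not given in the abstract): fit a Support Vector Regression mapping transition time to gait velocity and evaluate the held-out error.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Synthetic stand-in: velocity (cm/s) roughly inversely related to the
    # transition time (s) between the two rooms' PIR activations, plus noise.
    transition_time = rng.uniform(1.0, 10.0, size=(500, 1))
    velocity = 300.0 / transition_time[:, 0] + rng.normal(0, 3.0, 500)

    model = make_pipeline(StandardScaler(),
                          SVR(kernel="rbf", C=10.0, epsilon=1.0))
    model.fit(transition_time[:400], velocity[:400])

    pred = model.predict(transition_time[400:])
    mae = np.mean(np.abs(pred - velocity[400:]))
    print(f"held-out mean absolute error: {mae:.1f} cm/s")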
