4,688 research outputs found

    A Two-Process Model for Control of Legato Articulation Across a Wide Range of Tempos During Piano Performance

    Full text link
    Prior reports indicated a non-linear increase in key overlap times (KOTs) as tempo slows for scales and arpeggios performed at internote intervals (INIs) of 100-1000 ms. Simulations illustrate that this function can be explained by a two-process model. An oscillating neural network, based on the dynamics of the vector-integration-to-endpoint model for central generation of voluntary actions, allows performers to compute an estimate of the time remaining before the oscillator's next cycle onset. At fixed successive threshold values of this estimate, they first launch keystroke n+1 and then lift keystroke n. As tempo slows, the time required to pass between threshold crossings elongates, and KOT increases. If only this process prevailed, performers would produce longer-than-observed KOTs at the slowest tempo. The full data set is explicable if subjects lift keystroke n whenever they cross the second threshold or receive sensory feedback from stroke n+1, whichever comes earlier. Fulbright grant; Office of Naval Research (N00014-92-J-1309, N00014-95-1-0409)
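
    The threshold-and-feedback logic above lends itself to a few lines of simulation. The sketch below is a toy rendering, not the paper's model: it assumes (hypothetically) that both thresholds correspond to fixed fractions of the oscillator cycle, so the inter-threshold interval stretches linearly with the INI, and all numeric values are illustrative.

    ```python
    # Toy sketch of the two-process account of key-overlap time (KOT).
    # All numbers are illustrative assumptions, not fitted values from the paper.
    import numpy as np

    INIS = np.arange(100, 1001, 50)   # internote intervals in ms (fast -> slow tempo)
    PHASE_LAUNCH = 0.15               # time-remaining threshold: launch keystroke n+1
    PHASE_LIFT = 0.05                 # later threshold: lift keystroke n
    FEEDBACK_DELAY = 60.0             # ms from stroke n+1 contact to sensory feedback

    def kot(ini):
        # Threshold crossings expressed in real time before the next cycle onset;
        # treating thresholds as cycle fractions makes the gap scale with INI.
        t_launch = PHASE_LAUNCH * ini
        t_lift = PHASE_LIFT * ini
        # Process 1: overlap grows with the inter-threshold interval as tempo slows.
        overlap_threshold = t_launch - t_lift
        # Process 2: feedback from stroke n+1 triggers the lift if it arrives first,
        # capping the overlap at slow tempos as the data require.
        return min(overlap_threshold, FEEDBACK_DELAY)

    for ini in INIS:
        print(f"INI = {ini:4d} ms  ->  KOT = {kot(ini):5.1f} ms")
    ```

    The printed curve rises roughly linearly with INI and then saturates once the feedback path wins, the qualitative shape the abstract describes.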

    Quantum communication via a continuously monitored dual spin chain

    Full text link
    We analyze a recent protocol for the transmission of quantum states via a dual spin chain [Burgarth and Bose, Phys. Rev. A 71, 052315 (2005)] under the constraint that the receiver's measurement strength is finite. That is, we consider the channel where the ideal, instantaneous, and complete von Neumann measurements are replaced with a more realistic continuous measurement. We show that for optimal performance the measurement strength must be "tuned" to the channel spin-spin coupling, and once this is done, one is able to achieve a transmission rate similar to that obtained with ideal measurements. The spin chain protocol thus remains effective under measurement constraints. Comment: 5 pages, RevTeX 4, 3 EPS figures
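
    As a rough illustration of what "finite measurement strength" means here, the sketch below integrates a stochastic master equation for a single qubit under continuous measurement, with the strength k as the tunable knob relative to a coupling scale J. This is a hypothetical single-qubit stand-in for the dual-spin-chain channel, not the paper's model; operator choices and parameter values are assumptions.

    ```python
    # Minimal sketch: continuous (finite-strength) measurement of one qubit,
    # integrated with Euler-Maruyama. Illustrates the kind of stochastic master
    # equation that replaces ideal von Neumann measurements.
    import numpy as np

    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)

    def dissipator(c, rho):
        # D[c]rho = c rho c^dag - (1/2){c^dag c, rho}
        return c @ rho @ c.conj().T - 0.5 * (c.conj().T @ c @ rho + rho @ c.conj().T @ c)

    def innovation(c, rho):
        # H[c]rho = c rho + rho c^dag - tr(c rho + rho c^dag) rho
        s = c @ rho + rho @ c.conj().T
        return s - np.trace(s).real * rho

    rng = np.random.default_rng(0)
    J = 1.0                  # stand-in for the spin-spin coupling (Hamiltonian scale)
    k = 0.5 * J              # measurement strength: the knob to be "tuned" to J
    H = 0.5 * J * sx
    dt = 1e-3
    rho = np.array([[1, 0], [0, 0]], dtype=complex)   # start in |0><0|

    for _ in range(5000):
        dW = rng.normal(0.0, np.sqrt(dt))
        rho = rho + (-1j * (H @ rho - rho @ H) + k * dissipator(sz, rho)) * dt \
                  + np.sqrt(k) * innovation(sz, rho) * dW
        rho = 0.5 * (rho + rho.conj().T)              # keep Hermitian against roundoff
        rho /= np.trace(rho).real                     # renormalize

    print("final <sigma_z> =", np.trace(rho @ sz).real)
    ```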

    Upon accounting for the impact of isoenzyme loss, gene deletion costs anticorrelate with their evolutionary rates

    Get PDF
    System-level metabolic network models enable the computation of growth and metabolic phenotypes from an organism’s genome. In particular, flux balance approaches have been used to estimate the contribution of individual metabolic genes to organismal fitness, offering the opportunity to test whether such contributions carry information about the evolutionary pressure on the corresponding genes. Previous failure to identify the expected negative correlation between such computed gene-loss cost and sequence-derived evolutionary rates in Saccharomyces cerevisiae has been ascribed to a real biological gap between a gene’s fitness contribution to an organism “here and now” and the same gene’s historical importance as evidenced by its accumulated mutations over millions of years of evolution. Here we show that this negative correlation does exist, and can be exposed by revisiting a broadly employed assumption of flux balance models. In particular, we introduce a new metric that we call “function-loss cost”, which estimates the cost of a gene loss event as the total potential functional impairment caused by that loss. This new metric displays significant negative correlation with evolutionary rate across several thousand minimal environments. We demonstrate that the improvement gained using function-loss cost over gene-loss cost is explained by replacing the base assumption that isoenzymes provide unlimited capacity for backup with the assumption that isoenzymes are completely non-redundant. We further show that this change of assumption regarding isoenzymes increases the recall of epistatic interactions predicted by the flux balance model, at the cost of a reduction in the precision of the predictions. In addition to suggesting that the gene-to-reaction mapping in genome-scale flux balance models should be used with caution, our analysis provides new evidence that evolutionary gene importance captures much more than strict essentiality. This work was supported by the National Science Foundation, grant CCF-1219007 to YX; the Natural Sciences and Engineering Research Council of Canada, grant RGPIN-2014-03892 to YX; the National Institutes of Health, grants 5R01GM089978 and 5R01GM103502 to DS; the Army Research Office - Multidisciplinary University Research Initiative, grant W911NF-12-1-0390 to DS; the US Department of Energy, grant DE-SC0012627 to DS; and by the Canada Research Chairs Program (YX). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Published version
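
    The contrast between the two metrics can be made concrete on a toy network. The sketch below (an assumed two-metabolite example, not taken from the paper) computes a standard FBA gene-loss cost, where the "g1 OR g2" isoenzyme rule leaves the reaction active, alongside a function-loss cost that treats the isoenzyme as non-redundant and sums the impairment over the gene's reactions.

    ```python
    # Toy illustration (assumed network) of gene-loss cost vs. function-loss cost
    # in a flux balance model. Gene g1 is an isoenzyme for reaction v1 ("g1 OR g2"):
    # the standard GPR evaluation says its loss is free, while function-loss cost
    # charges the full potential impairment of v1.
    import numpy as np
    from scipy.optimize import linprog

    # Columns: v0 uptake->A, v1 A->B (g1 OR g2), v2 A->B (g3), v3 B->biomass
    S = np.array([[ 1, -1, -1,  0],    # metabolite A balance
                  [ 0,  1,  1, -1]])   # metabolite B balance
    BOUNDS = [(0, 10), (0, 10), (0, 4), (0, None)]

    def max_growth(disabled=()):
        # Maximize biomass flux v3 at steady state (S v = 0), with the
        # listed reactions clamped to zero.
        bounds = [(0, 0) if j in disabled else b for j, b in enumerate(BOUNDS)]
        res = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
        return -res.fun

    wt = max_growth()

    # Gene-loss cost of g1: the GPR "g1 OR g2" is still satisfied by g2, so no
    # reaction is disabled and the computed cost is zero.
    gene_loss_cost = wt - max_growth(disabled=())

    # Function-loss cost of g1: treat the isoenzyme as non-redundant and sum the
    # potential impairment over every reaction g1 maps to (here just v1).
    function_loss_cost = sum(wt - max_growth(disabled=(j,)) for j in [1])

    print(f"wild-type growth        = {wt:.1f}")                 # 10.0
    print(f"gene-loss cost (g1)     = {gene_loss_cost:.1f}")     # 0.0
    print(f"function-loss cost (g1) = {function_loss_cost:.1f}") # 6.0
    ```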

    Comcast Corp. v. Behrend: Common Questions Versus Individual Answers—Which Will Predominate?

    Get PDF

    A Sensitivity and Array-Configuration Study for Measuring the Power Spectrum of 21cm Emission from Reionization

    Full text link
    Telescopes aiming to measure 21cm emission from the Epoch of Reionization must walk a fine line, balancing the need for raw sensitivity against the stringent calibration requirements for removing bright foregrounds. It is unclear what the optimal design is for achieving both of these goals. Via a pedagogical derivation of an interferometer's response to the power spectrum of 21cm reionization fluctuations, we show that even under optimistic scenarios, first-generation arrays will yield low-SNR detections, and that different compact array configurations can substantially alter sensitivity. We explore the sensitivity gains of array configurations that yield high redundancy in the uv-plane -- configurations that have been largely ignored since the advent of self-calibration for high-dynamic-range imaging. We first introduce a mathematical framework to generate optimal minimum-redundancy configurations for imaging. We contrast the sensitivity of such configurations with high-redundancy configurations, finding that high-redundancy configurations can improve power-spectrum sensitivity by more than an order of magnitude. We explore how high-redundancy array configurations can be tuned to various angular scales, enabling array sensitivity to be directed away from regions of the uv-plane (such as the origin) where foregrounds are brighter and where instrumental systematics are more problematic. We demonstrate that a 132-antenna deployment of the Precision Array for Probing the Epoch of Reionization (PAPER) observing for 120 days in a high-redundancy configuration will, under ideal conditions, have the requisite sensitivity to detect the power spectrum of the 21cm signal from reionization at a 3σ level at k < 0.25 h Mpc^-1 in a bin of Δ ln k = 1. We discuss the tradeoffs of low- versus high-redundancy configurations. Comment: 34 pages, 5 figures, 2 appendices. Version accepted to ApJ
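
    The sensitivity advantage of redundancy can be seen in a back-of-envelope Monte Carlo: visibilities from baselines that sample the same uv mode can be averaged coherently before squaring, beating the noise power down as 1/N, whereas distinct baselines only average independent power estimates. The numbers below are illustrative assumptions, not PAPER parameters.

    ```python
    # Back-of-envelope sketch (assumed numbers) of the redundancy argument for
    # power-spectrum sensitivity.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 16            # baselines per group
    TRIALS = 20000    # Monte Carlo realizations
    sigma = 1.0       # per-baseline visibility noise (arbitrary units)

    # Redundant: average N noisy copies of one uv mode, then form the power
    # estimate. Expected noise power ~ 2 sigma^2 / N.
    vis = rng.normal(0, sigma, (TRIALS, N)) + 1j * rng.normal(0, sigma, (TRIALS, N))
    p_redundant = np.abs(vis.mean(axis=1)) ** 2

    # Non-redundant: form N independent power estimates, then average them.
    # The noise-power floor stays at ~ 2 sigma^2; only its scatter shrinks.
    p_nonredundant = (np.abs(vis) ** 2).mean(axis=1)

    print(f"mean noise power, redundant     : {p_redundant.mean():.3f}")    # ~0.125
    print(f"mean noise power, non-redundant : {p_nonredundant.mean():.3f}") # ~2.000
    ```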

    Feedback cooling of atomic motion in cavity QED

    Get PDF
    We consider the problem of controlling the motion of an atom trapped in an optical cavity using continuous feedback. In order to realize such a scheme experimentally, one must be able to perform state estimation of the atomic motion in real time. While in theory this estimate may be provided by a stochastic master equation describing the full dynamics of the observed system, integrating this equation in real time is impractical. Here we derive an approximate estimation equation for this purpose, and use it as a drive in a feedback algorithm designed to cool the motion of the atom. We examine the effectiveness of such a procedure using full simulations of the cavity QED system, including the quantized motion of the atom in one dimension. Comment: 22 pages, 17 figures
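
    To show the structure of estimation-based feedback cooling without the full stochastic master equation, the sketch below runs a Kalman-style Gaussian estimator on a noisy classical harmonic oscillator and feeds back a force that damps the estimated momentum. It is a hypothetical classical stand-in for the cavity QED simulations; gains and noise strengths are assumed, not derived.

    ```python
    # Minimal sketch of estimation-based feedback cooling on a classical noisy
    # harmonic oscillator. All gains and noise levels are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(2)
    dt, omega = 1e-3, 1.0
    steps = 100_000
    meas_noise, force_noise = 0.05, 0.02     # assumed noise strengths
    L1, L2 = 0.8, 0.3                        # assumed (fixed) estimator gains
    g = 0.5                                  # feedback gain on estimated momentum

    x, p = 1.0, 0.0                          # true state
    xe, pe = 0.0, 0.0                        # estimator state

    energy = []
    for i in range(steps):
        u = -g * pe                          # feedback force from the *estimate*
        # True dynamics: a driven, noisy oscillator.
        x += p * dt
        p += (-omega**2 * x + u) * dt + force_noise * rng.normal(0, np.sqrt(dt))
        # Continuous measurement record of position (white measurement noise).
        y = x + meas_noise * rng.normal(0, 1 / np.sqrt(dt))
        # Approximate (Kalman-style) estimator driven by the innovation y - xe.
        innov = y - xe
        xe += (pe + L1 * innov) * dt
        pe += (-omega**2 * xe + u + L2 * innov) * dt
        if i > steps // 2:
            energy.append(0.5 * (p**2 + omega**2 * x**2))

    print(f"mean late-time energy with feedback: {np.mean(energy):.4f}")
    ```

    Dropping the feedback (g = 0) leaves the force noise unchecked, so the late-time energy grows instead of settling, which is the cooling effect in miniature.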

    Paper Session II-B - The International Space Station: Background and Current Status

    Get PDF
    The International Space Station, as the largest international civil program in history, features unprecedented technical, cost, scheduling, managerial, and international complexity. A number of major milestones have been accomplished to date, including the construction of major elements of flight hardware, the development of operations and sustaining engineering centers, astronaut training, and several Space Shuttle/Mir docking missions. Negotiations with all International Partners on initial terms and conditions and Memoranda of Understanding (MOU) have been largely completed, and discussions on bartering arrangements for services and new hardware are ongoing. When the International Space Station is successfully completed, it will pave the way for even bigger, more far-reaching, and more inspiring cooperative achievements in the future.