
    The Safety Challenges of Deep Learning in Real-World Type 1 Diabetes Management

    Blood glucose simulation allows the effectiveness of type 1 diabetes (T1D) management strategies to be evaluated without patient harm. Deep learning algorithms provide a promising avenue for extending simulator capabilities; however, these algorithms are limited in that they do not necessarily learn physiologically correct glucose dynamics and can learn incorrect and potentially dangerous relationships from confounders in the training data. This is likely to matter more in real-world scenarios, where data is not collected under a strict research protocol. This work explores the implications of using deep learning algorithms trained on real-world data to model glucose dynamics. Free-living data was processed from the OpenAPS Data Commons and supplemented with patient-reported tags of challenging diabetes events, constituting one of the most detailed real-world T1D datasets. This dataset was used to train and evaluate state-of-the-art glucose simulators, comparing their prediction error across safety-critical scenarios and assessing the physiological appropriateness of the learned dynamics using Shapley Additive Explanations (SHAP). While deep learning prediction accuracy surpassed the widely used mathematical-simulator approach, the model deteriorated in safety-critical scenarios and struggled to leverage self-reported meal and exercise information. SHAP value analysis also indicated that the model had fundamentally confused the roles of insulin and carbohydrates, one of the most basic T1D management principles. This work highlights the importance of considering physiological appropriateness when using deep learning to model real-world systems in T1D and healthcare more broadly, and provides recommendations for building models that are robust to real-world data constraints. Comment: 15 pages, 3 figures
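    To make the SHAP check concrete, here is a minimal sketch of the kind of directional sanity test the abstract describes: fit a model, compute SHAP values, and verify that insulin pushes predicted glucose down while carbohydrates push it up. The toy model, feature names, and synthetic data are illustrative assumptions, not the paper's simulator or dataset.

        # Hedged sketch of a SHAP sanity check on learned glucose dynamics; the
        # model, feature names, and synthetic data are illustrative assumptions.
        import numpy as np
        import shap
        from sklearn.ensemble import GradientBoostingRegressor

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 3))   # columns: insulin, carbs, recent_glucose
        # Physiologically, insulin should lower and carbs should raise glucose.
        y = -1.5 * X[:, 0] + 2.0 * X[:, 1] + 0.8 * X[:, 2] \
            + 0.1 * rng.normal(size=500)

        model = GradientBoostingRegressor().fit(X, y)
        sv = shap.Explainer(model, X)(X)

        # Directional check: a positive feature/SHAP correlation means the model
        # predicts higher glucose as the feature grows (and vice versa).
        for i, name in enumerate(["insulin", "carbs", "recent_glucose"]):
            r = np.corrcoef(X[:, i], sv.values[:, i])[0, 1]
            print(f"{name:15s} feature/SHAP correlation = {r:+.2f}")

    A model that had confused insulin and carbohydrates, as the paper reports, would show these correlations with flipped signs.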

    Evolution of C2H2-zinc finger genes revisited

    Background: A recent study by Tadepally et al. describes the clustering of zinc finger (ZF) genes in the human genome and traces their evolutionary history among several placental mammals with complete or draft genome sequences. One of the main conclusions from the paper is that there is a dramatic rate of gene duplication and gene loss, including the surprising result that 118 human ZF genes are absent in chimpanzee. The authors also present evidence concerning the ancestral order in which the ZF-associated KRAB and SCAN domains were recruited to ZF proteins.
    Results: Based on our analysis of two of the largest human ZF gene clusters, we find that nearly all of the human genes have plausible orthologs in chimpanzee. The one exception may be a result of the incomplete sequence coverage in the draft chimpanzee genome. The discrepancy in gene content analysis may result from the authors' dependence on the preliminary NCBI gene prediction set for chimpanzee, which appears to either fail to predict or to mispredict many chimpanzee ZF genes. Similar problems may affect the authors' interpretation of the more divergent dog, mouse, and rat ZF gene complements. In addition, we present evidence that the KRAB domain was recruited to ZF genes before the SCAN domain, rather than the reverse as the authors suggest. This discrepancy appears to result from the fact that the SCAN domain did indeed arise before the KRAB domain but is present only in non-ZF genes until a much later date.
    Conclusion: When comparing gene content among species, especially when using draft genome assemblies, dependence on preliminary gene prediction sets can be seriously misleading. In such studies, genic sequences must be identified in a manner that is as independent as possible of prediction sets. In addition, we present evidence that provides a more parsimonious explanation for the large proportion of mammalian KRAB-ZF genes without a SCAN domain.
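    The paper's methodological recommendation, identifying genic sequences independently of prediction sets, can be sketched as a direct homology search against the assembly. The sketch below assumes NCBI BLAST+ is installed; the file names are placeholders, not the study's data.

        # Hedged sketch of prediction-set-independent gene checking: search a
        # protein query directly against a draft assembly with TBLASTN instead
        # of trusting a predicted gene set. Requires NCBI BLAST+ on the PATH;
        # the file names below are placeholders, not the study's data.
        import subprocess

        def search_genome(protein_fasta, genome_fasta, evalue="1e-10"):
            """Return tabular TBLASTN hits (query, subject, %identity, ...)."""
            result = subprocess.run(
                ["tblastn", "-query", protein_fasta, "-subject", genome_fasta,
                 "-outfmt", "6", "-evalue", evalue],
                capture_output=True, text=True, check=True)
            return result.stdout.strip().splitlines()

        for hit in search_genome("human_ZF_cluster.faa", "panTro_draft.fa")[:5]:
            print(hit)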

    Characterization of complex quantum dynamics with a scalable NMR information processor

    We present experimental results on the measurement of fidelity decay under contrasting system dynamics using a nuclear magnetic resonance quantum information processor. The measurements were performed by implementing a scalable circuit in the model of deterministic quantum computation with only one quantum bit. The results show measurable differences between regular and complex behaviour and, for complex dynamics, are faithful to the expected theoretical decay rate. Moreover, we illustrate how the experimental method can be seen as an efficient way of either extracting coarse-grained information about the dynamics of a large system or measuring the decoherence rate from engineered environments. Comment: 4 pages, 3 figures, revtex4, updated with version closer to that published
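    As a numerical companion to the quantity being measured, the following sketch contrasts fidelity decay under regular and complex dynamics for a toy unitary perturbed by a small Hermitian kick. It illustrates the fidelity-decay signature only; it is not the NMR implementation or the DQC1 circuit.

        # Hedged toy of fidelity decay under contrasting dynamics (not the
        # NMR/DQC1 experiment): compare F(n) = |Tr(V^n (U^n)^dag)| / d for a
        # regular (equally spaced phases) and a complex (random) unitary U,
        # each perturbed by the same small Hermitian kick to give V.
        import numpy as np
        from scipy.linalg import expm
        from scipy.stats import unitary_group

        d = 32
        U_reg = np.diag(np.exp(2j * np.pi * np.arange(d) / d))  # regular
        U_cpx = unitary_group.rvs(d, random_state=1)            # complex

        G = unitary_group.rvs(d, random_state=2)
        H = (G + G.conj().T) / 2                # random Hermitian perturbation
        kick = expm(-0.03j * H)

        for label, U in [("regular", U_reg), ("complex", U_cpx)]:
            V = kick @ U
            Un = np.eye(d, dtype=complex); Vn = Un.copy()
            for n in range(1, 21):
                Un, Vn = Un @ U, Vn @ V
                if n in (1, 10, 20):
                    F = abs(np.trace(Vn @ Un.conj().T)) / d
                    print(f"{label:8s} n={n:2d}  F={F:.3f}")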

    Scalable coordination of distributed in-memory transactions

    PhD thesis. Coordinating transactions involves ensuring serializability in the presence of concurrent data accesses. Accomplishing this in a scalable manner for distributed in-memory transactions is the aim of this thesis. To this end, the work makes three contributions. It first experimentally demonstrates that transaction latency and throughput scale considerably well when an atomic multicast service is offered to transaction nodes by a crash-tolerant ensemble of dedicated nodes, and that using such a service is the most scalable approach compared to practices advocated in the literature. Secondly, we design, implement and evaluate a crash-tolerant and non-blocking atomic broadcast protocol, called ABcast, which is then used as the foundation for building the aforementioned multicast service. ABcast is a hybrid protocol consisting of a pair of primary and backup protocols executing in parallel. The primary protocol is a deterministic atomic broadcast protocol that provides high performance when node crashes are absent, but blocks in their presence until a group membership service detects such failures. The backup protocol, Aramis, is a probabilistic protocol that does not block in the event of node crashes and allows message delivery to continue post-crash until the primary protocol is able to resume. Aramis avoids blocking by assuming that message delays remain within a known bound with a high probability that can be estimated in advance, provided that recent delay estimates are used to (i) continually adjust that bound and (ii) regulate flow control. Aramis delivers broadcasts in total order with a probability that can be tuned to be close to 1; comprehensive evaluations show that this probability can be 99.99% or more. Finally, we assess the effect of low-probability order violations on implementing various isolation levels commonly considered in transaction systems. These three contributions together advance the state of the art in two major ways: (i) identifying a service-based approach to transactional scalability and (ii) establishing a practical alternative to the complex Paxos-style approach to building such a service, by using novel but simple protocols and open-source software frameworks. (Funding: Red Hat.)
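    The delay-bound idea behind Aramis can be sketched as follows: hold each timestamped message until no earlier-stamped message can still arrive under the assumed bound, then deliver in timestamp order. The class and names below are hypothetical illustrations of that one mechanism, not the thesis's protocol code.

        # Hedged sketch of delay-bound total ordering (illustrative, not the
        # thesis's ABcast/Aramis code): a message stamped t is held until local
        # time passes t + delta, then delivered in (timestamp, sender) order.
        # Total order holds whenever real delays stay below delta.
        import heapq

        class DelayBoundOrderer:
            def __init__(self, delta: float):
                self.delta = delta      # assumed high-probability delay bound
                self.pending = []       # min-heap of (timestamp, sender, payload)

            def on_receive(self, timestamp: float, sender: str, payload: str):
                heapq.heappush(self.pending, (timestamp, sender, payload))

            def deliverable(self, now: float):
                """Pop messages that no earlier-stamped arrival can now precede."""
                out = []
                while self.pending and self.pending[0][0] + self.delta <= now:
                    out.append(heapq.heappop(self.pending))
                return out

        orderer = DelayBoundOrderer(delta=0.05)
        orderer.on_receive(1.000, "B", "tx2")
        orderer.on_receive(0.990, "A", "tx1")   # arrived later, stamped earlier
        print(orderer.deliverable(now=1.10))    # tx1 is delivered before tx2

    A message delayed beyond delta can break the order, which is why the bound is continually re-estimated and why the thesis quantifies the effect of low-probability violations on isolation levels.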

    Randomized benchmarking of single and multi-qubit control in liquid-state NMR quantum information processing

    Being able to quantify the level of coherent control in a proposed device implementing a quantum information processor (QIP) is an important task, both for comparing different devices and for assessing a device's prospects with regard to achieving fault-tolerant quantum control. We implement in a liquid-state nuclear magnetic resonance QIP the randomized benchmarking protocol presented by Knill et al (PRA 77: 012307 (2008)). We report an error per randomized π/2 pulse of (1.3 ± 0.1) × 10^-4 with a single-qubit QIP, and show an experimentally relevant error model where randomized benchmarking gives a signature fidelity decay which is not possible to interpret as a single error per gate. We explore and experimentally investigate multi-qubit extensions of this protocol and report an average error rate for one- and two-qubit gates of (4.7 ± 0.3) × 10^-3 for a three-qubit QIP. We estimate that these error rates are still not decoherence limited and thus can be improved with modifications to the control hardware and software. Comment: 10 pages, 6 figures, submitted version
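    The analysis behind quoted error rates of this kind follows the standard randomized-benchmarking recipe: fit the average sequence fidelity to F(m) = A·p^m + B and convert the decay parameter p to an error per gate r = (1 − p)(d − 1)/d. A sketch with synthetic single-qubit data (d = 2), generated to mimic the paper's 1.3 × 10^-4 rate rather than taken from it:

        # Hedged sketch of the standard randomized-benchmarking fit; the data
        # here are synthetic, not the paper's measurements.
        import numpy as np
        from scipy.optimize import curve_fit

        def rb_model(m, A, p, B):
            return A * p**m + B     # average sequence fidelity vs. length m

        lengths = np.array([2, 4, 8, 16, 32, 64, 128, 256], dtype=float)
        true_p = 1 - 2 * 1.3e-4     # single qubit (d = 2): r = (1 - p) / 2
        rng = np.random.default_rng(0)
        fidelity = rb_model(lengths, 0.5, true_p, 0.5) \
                   + rng.normal(0, 1e-4, lengths.size)

        (A, p, B), _ = curve_fit(rb_model, lengths, fidelity,
                                 p0=[0.5, 0.999, 0.5])
        r = (1 - p) * (2 - 1) / 2   # error per gate for d = 2
        print(f"fitted p = {p:.6f}, error per gate r = {r:.2e}")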

    Symmetrised Characterisation of Noisy Quantum Processes

    A major goal of developing high-precision control of many-body quantum systems is to realise their potential as quantum computers. Probably the most significant obstacle in this direction is the problem of "decoherence": the extreme fragility of quantum systems to environmental noise and other control limitations. The theory of fault-tolerant quantum error correction has shown that quantum computation is possible even in the presence of decoherence, provided that the noise affecting the quantum system satisfies certain well-defined theoretical conditions. However, existing methods for noise characterisation have already become intractable for the systems that are controlled in today's labs. In this paper we introduce a technique based on symmetrisation that enables direct experimental characterisation of key properties of the decoherence affecting a multi-body quantum system. Our method reduces the number of experiments required by existing methods from exponential to polynomial in the number of subsystems. We demonstrate the application of this technique to the optimisation of control over nuclear spins in the solid state. Comment: About 12 pages, 5 figures
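    One concrete instance of symmetrisation is Pauli twirling: averaging a noise channel over conjugation by Pauli operators leaves a Pauli channel described by a handful of probabilities, which is what makes the characterisation efficient. The single-qubit toy below, using an amplitude-damping channel, illustrates that principle only; it is not the paper's multi-body protocol.

        # Hedged single-qubit toy of symmetrisation via Pauli twirling (not the
        # paper's multi-body protocol): twirling any channel yields a Pauli
        # channel, whose four probabilities fully describe the averaged noise.
        import numpy as np

        I = np.eye(2, dtype=complex)
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Y = np.array([[0, -1j], [1j, 0]])
        Z = np.diag([1, -1]).astype(complex)
        paulis = [I, X, Y, Z]

        gamma = 0.1                 # amplitude damping: a non-Pauli channel
        K = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex),
             np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)]

        def channel(rho):
            return sum(k @ rho @ k.conj().T for k in K)

        def twirled(rho):           # E'(rho) = (1/4) sum_P P^dag E(P rho P^dag) P
            return sum(P.conj().T @ channel(P @ rho @ P.conj().T) @ P
                       for P in paulis) / 4

        # Pauli-channel eigenvalues lambda_Q = Tr(Q E'(Q)) / 2, then invert the
        # Walsh-Hadamard relation to get the (I, X, Y, Z) error probabilities.
        lams = np.array([np.trace(Q @ twirled(Q)).real / 2 for Q in paulis])
        W = np.array([[1, 1, 1, 1], [1, 1, -1, -1],
                      [1, -1, 1, -1], [1, -1, -1, 1]])
        print("Pauli probabilities:", np.round(W @ lams / 4, 4))

    Estimating these few probabilities per subsystem, rather than a full process matrix, is what turns the exponential experiment count into a polynomial one.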

    Combined action observation and motor imagery therapy: a novel method for post-stroke motor rehabilitation

    Cerebral vascular accidents (strokes) are a leading cause of motor deficiency in millions of people worldwide. While a complex range of biological systems is affected following a stroke, in this paper we focus primarily on impairments of the motor system and the recovery of motor skills. We briefly review research that has assessed two types of mental practice currently recommended in stroke rehabilitation: action observation (AO) therapy and motor imagery (MI) training. We highlight the strengths and limitations of both techniques, before making the case for combined action observation and motor imagery (AO + MI) therapy as a potentially more effective method. This is based on a growing body of multimodal brain imaging research showing advantages for combined AO + MI instructions over the two separate methods of AO and MI. Finally, we offer a series of suggestions and considerations for how combined AO + MI therapy could be employed in neurorehabilitation.