
    The diagnostic role of T wave morphology biomarkers in congenital and acquired long QT syndrome: A systematic review

    Introduction: QTc prolongation is key in diagnosing long QT syndrome (LQTS); however, 25%–50% of patients with congenital LQTS (cLQTS) demonstrate a normal resting QTc. T wave morphology (TWM) can distinguish cLQTS subtypes, but its role in acquired LQTS (aLQTS) is unclear. Methods: Electronic databases were searched using the terms “LQTS,” “long QT syndrome,” “QTc prolongation,” and “prolonged QT,” combined with “T wave,” “T wave morphology,” “T wave pattern,” and “T wave biomarkers.” Full-text articles assessing TWM, independent of QTc, were included. Results: Seventeen studies met criteria. TWM measurements included T-wave amplitude, duration, magnitude, Tpeak-Tend, QTpeak, left and right slope, center of gravity (COG), sigmoidal and polynomial classifiers, repolarizing integral, morphology combination score (MCS), principal component analysis (PCA), and vectorcardiographic biomarkers. cLQTS were distinguished from controls by sigmoidal and polynomial classifiers, MCS, QTpeak, Tpeak-Tend, left slope, and COG x axis. MCS detected aLQTS with greater statistical significance than QTc. Flatness, asymmetry and notching, J-Tpeak, and Tpeak-Tend correlated with QTc in aLQTS. Multichannel block in aLQTS was identified by early repolarization (ERD30%) and late repolarization (LRD30%), with ERD reflecting hERG-specific blockade. Cardiac events were predicted in cLQTS by T wave flatness, notching, and inversion in leads II and V5, left slope in lead V6, and COG last 25% in lead I. T wave right slope in lead I and T-roundness achieved this in aLQTS. Conclusion: Numerous TWM biomarkers that supplement QTc assessment were identified. Their diagnostic capabilities include differentiation of genotypes, identification of concealed LQTS, differentiation of aLQTS from cLQTS, and determination of multichannel versus hERG channel blockade.
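As a toy illustration of two of the biomarkers listed above, T-wave amplitude and the Tpeak-Tend interval, the sketch below measures them on a synthetic, idealized T wave. The sampling rate, the Gaussian waveform, and the 5% offset threshold used to locate T-end are all assumptions for illustration, not methods taken from the review.

```python
import numpy as np

fs = 500                       # assumed sampling rate (Hz)
t = np.arange(0, 0.4, 1 / fs)  # 400 ms window containing one T wave
# idealized Gaussian T wave, peak 0.3 mV at t = 150 ms (illustrative)
t_wave = 0.3 * np.exp(-((t - 0.15) ** 2) / (2 * 0.04 ** 2))

peak_idx = int(np.argmax(t_wave))    # Tpeak location
amplitude = float(t_wave[peak_idx])  # T-wave amplitude biomarker
# T-end: first sample after the peak falling below 5% of peak amplitude
end_rel = int(np.argmax(t_wave[peak_idx:] < 0.05 * amplitude))
tpeak_tend = t[peak_idx + end_rel] - t[peak_idx]  # Tpeak-Tend interval (s)

print(f"amplitude={amplitude:.2f} mV, Tpeak-Tend={tpeak_tend * 1000:.0f} ms")
```

Real ECG pipelines require filtering, beat delineation, and validated T-end algorithms; this only shows what the two measurements mean geometrically.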

    A non-autonomous stochastic discrete time system with uniform disturbances

    The main objective of this article is to present Bayesian optimal control for a class of non-autonomous linear stochastic discrete-time systems with disturbances belonging to a family of one-parameter uniform distributions. It is proved that the Bayes control for Pareto priors is the solution of a linear system of algebraic equations. For the case where this linear system is singular, optimization techniques are applied to obtain the Bayesian optimal control. These results are extended to generalized linear stochastic systems of difference equations, providing the Bayesian optimal control for the case where the coefficients of such systems are non-square matrices. The paper extends earlier results of the authors, developed for systems with disturbances belonging to the exponential family.
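The computational core the abstract describes, solving a linear algebraic system when it is regular and falling back to an optimization-based solution when it is singular or non-square, can be sketched as follows. The matrices here are illustrative placeholders, not quantities from the paper's Pareto-prior derivation.

```python
import numpy as np

def bayes_control(A, b):
    # Bayes control as the solution u of the linear system A u = b;
    # when A is singular or non-square, fall back to the minimum-norm
    # least-squares solution (an optimization-based substitute).
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    if A.shape[0] == A.shape[1] and np.linalg.matrix_rank(A) == A.shape[0]:
        return np.linalg.solve(A, b)  # unique solution of the regular system
    return np.linalg.lstsq(A, b, rcond=None)[0]

print(bayes_control([[2.0, 0.0], [0.0, 4.0]], [4.0, 8.0]))  # [2. 2.]
```

The same call handles the non-square case the abstract mentions, since `lstsq` accepts rectangular coefficient matrices.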

    Determining Principal Component Cardinality through the Principle of Minimum Description Length

    PCA (Principal Component Analysis) and its variants are ubiquitous techniques for matrix dimension reduction and reduced-dimension latent-factor extraction. One significant challenge in using PCA is the choice of the number of principal components. The information-theoretic MDL (Minimum Description Length) principle gives objective, compression-based criteria for model selection, but it is difficult to analytically apply its modern definition, NML (Normalized Maximum Likelihood), to the problem of PCA. This work shows a general reduction of NML problems to lower-dimension problems. Applying this reduction, it bounds the NML of PCA in terms of the NML of linear regression, which is known.Comment: LOD 201
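The component-count problem the abstract addresses can be made concrete with a classical MDL-flavored criterion. The sketch below scores candidate counts with BIC applied to the probabilistic-PCA profile likelihood on synthetic rank-2 data; this is a hedged stand-in for two-part description length, not the paper's NML bound.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 300, 10
# synthetic data of true latent dimension 2, plus isotropic noise
X = rng.normal(size=(n, 2)) @ rng.normal(size=(2, d)) * 3
X += 0.1 * rng.normal(size=(n, d))
X -= X.mean(axis=0)

# eigenvalues of the sample covariance, largest first
evals = np.linalg.eigvalsh(X.T @ X / n)[::-1]

def bic_score(k):
    # profile log-likelihood of probabilistic PCA with k components
    sigma2 = evals[k:].mean()  # residual variance from discarded eigenvalues
    loglik = -0.5 * n * (np.log(evals[:k]).sum() + (d - k) * np.log(sigma2))
    n_params = d * k - k * (k - 1) / 2 + 1  # free parameters of the model
    return -2 * loglik + n_params * np.log(n)

best_k = min(range(1, d), key=bic_score)
print(best_k)
```

The penalty term is what an NML-style code length replaces with a minimax-optimal quantity; the selection loop itself is the same in either case.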

    R-process enrichment from a single event in an ancient dwarf galaxy

    Elements heavier than zinc are synthesized through the (r)apid and (s)low neutron-capture processes. The main site of production of the r-process elements (such as europium) has been debated for nearly 60 years. Initial studies of chemical abundance trends in old Milky Way halo stars suggested continual r-process production, in sites like core-collapse supernovae. But evidence from the local Universe favors r-process production mainly during rare events, such as neutron star mergers. The appearance of a europium abundance plateau in some dwarf spheroidal galaxies has been suggested as evidence for rare r-process enrichment in the early Universe, but only under the assumption of no gas accretion into the dwarf galaxies. Cosmologically motivated gas accretion favors continual r-process enrichment in these systems. Furthermore, the universal r-process pattern has not been cleanly identified in dwarf spheroidals. The smaller, chemically simpler, and more ancient ultra-faint dwarf galaxies assembled shortly after the first stars formed, and are ideal systems with which to study nucleosynthesis events such as the r-process. Reticulum II is one such galaxy. The abundances of non-neutron-capture elements in this galaxy (and others like it) are similar to those of other old stars. Here, we report that seven of nine stars in Reticulum II observed with high-resolution spectroscopy show strong enhancements in heavy neutron-capture elements, with abundances that follow the universal r-process pattern above barium. The enhancement in this "r-process galaxy" is 2-3 orders of magnitude higher than that detected in any other ultra-faint dwarf galaxy. This implies that a single rare event produced the r-process material in Reticulum II. The r-process yield and event rate are incompatible with ordinary core-collapse supernovae, but consistent with other possible sites, such as neutron star mergers.Comment: Published in Nature, 21 Mar 2016: http://dx.doi.org/10.1038/nature1742

    The first maps of κd - the dust mass absorption coefficient - in nearby galaxies, with DustPedia

    The dust mass absorption coefficient, κd, is the conversion function used to infer physical dust masses from observations of dust emission. However, it is notoriously poorly constrained, and it is highly uncertain how it varies, either between or within galaxies. Here we present the results of a proof-of-concept study, using the DustPedia data for two nearby face-on spiral galaxies, M 74 (NGC 628) and M 83 (NGC 5236), to create the first ever maps of κd in galaxies. We determine κd using an empirical method that exploits the fact that the dust-to-metals ratio of the interstellar medium is constrained by direct measurements of the depletion of gas-phase metals. We apply this method pixel-by-pixel within M 74 and M 83 to create maps of κd. We also demonstrate a novel method of producing metallicity maps for galaxies with irregularly sampled measurements, using the machine-learning technique of Gaussian process regression. We find strong evidence for significant variation in κd, with values of κd at 500 μm spanning the range 0.11-0.25 m^2 kg^-1 in M 74 and 0.15-0.80 m^2 kg^-1 in M 83. Surprisingly, we find that κd shows a distinct inverse correlation with the local density of the interstellar medium. This inverse correlation is the opposite of what is predicted by standard dust models. However, we find this relationship to be robust against a large range of changes to our method - only the adoption of unphysical or highly unusual assumptions would be able to suppress it.
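The interpolation step the abstract mentions, turning irregularly sampled measurements into a smooth map with Gaussian process regression, can be sketched with a hand-rolled GP posterior mean. The kernel, its length scale, and the stand-in "metallicity" function are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rbf(a, b, length=0.3):
    # squared-exponential kernel between two point sets (length is assumed)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, size=(30, 2))        # irregular sample positions
z = np.sin(3 * pts[:, 0]) + 0.5 * pts[:, 1]  # stand-in "metallicity" values

K = rbf(pts, pts) + 1e-6 * np.eye(len(pts))  # jitter for numerical stability
alpha = np.linalg.solve(K, z)

# evaluate the GP posterior mean on a regular grid -> the interpolated "map"
gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z_map = rbf(grid, pts) @ alpha

print(z_map.shape)
```

A production analysis would also fit the kernel hyperparameters and propagate the GP posterior variance into the κd uncertainty budget.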

    N-player quantum games in an EPR setting

    The N-player quantum game is analyzed in the context of an Einstein-Podolsky-Rosen (EPR) experiment. In this setting, a player's strategies are not unitary transformations, as in alternate quantum game-theoretic frameworks, but a classical choice between two directions along which spin or polarization measurements are made. The players' strategies thus remain identical to their strategies in the mixed-strategy version of the classical game. In the EPR setting the quantum game reduces to the corresponding classical game when the shared quantum state reaches zero entanglement. We find the relations for the probability distribution for N-qubit GHZ and W-type states, subject to general measurement directions, from which the expressions for the mixed Nash equilibrium and the payoffs are determined. Players' payoffs are then defined with linear functions so that common two-player games can be easily extended to the N-player case and permit analytic expressions for the Nash equilibrium. As a specific example, we solve the Prisoners' Dilemma game for general N ≥ 2. We find a new property of the game: for an even number of players the payoffs at the Nash equilibrium are equal, whereas for an odd number of players the cooperating players receive higher payoffs.Comment: 26 pages, 2 figures
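The measurement statistics underlying such EPR-setting games can be checked numerically. The sketch below builds an N-qubit GHZ state and verifies the standard full-correlation result E = cos(φ1 + … + φN) for measurements along angles φk in the equatorial plane; it illustrates the probability structure only, not the paper's payoff or equilibrium analysis.

```python
import numpy as np

def ghz(n):
    # N-qubit GHZ state (|0...0> + |1...1>)/sqrt(2)
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = psi[-1] = 1 / np.sqrt(2)
    return psi

def correlation(phis):
    # spin measurement along angle phi in the x-y plane:
    # sigma(phi) = cos(phi)*sigma_x + sin(phi)*sigma_y
    M = np.array([[1.0]], dtype=complex)
    for p in phis:
        M = np.kron(M, np.array([[0, np.exp(-1j * p)],
                                 [np.exp(1j * p), 0]]))
    psi = ghz(len(phis))
    return float(np.real(psi.conj() @ M @ psi))

phis = [0.3, 0.5, 0.7]
print(correlation(phis), np.cos(sum(phis)))  # the two agree
```

Because the operator flips every qubit with a phase e^{iφk}, only the two GHZ basis terms contribute, which is what makes the cosine form exact for any N.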

    The impacts of Hong Kong Disneyland in the Pearl River Delta

    2001-2002 > Academic research: refereed > Publication in refereed journal; Version of Record; Published

    Projective simulation for artificial intelligence

    We propose a model of a learning agent whose interaction with the environment is governed by a simulation-based projection, which allows the agent to project itself into future situations before it takes real action. Projective simulation is based on a random walk through a network of clips, which are elementary patches of episodic memory. The network of clips changes dynamically, both due to new perceptual input and due to certain compositional principles of the simulation process. During simulation, the clips are screened for specific features which trigger factual action of the agent. The scheme is different from other, computational, notions of simulation, and it provides a new element in an embodied cognitive science approach to intelligent action and learning. Our model provides a natural route for generalization to quantum-mechanical operation and connects the fields of reinforcement learning and quantum computation.Comment: 22 pages, 18 figures. Close to published version, with footnotes retained
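The clip-network dynamics described above can be reduced to a minimal two-layer sketch: percept clips wired directly to action clips, a one-step random walk over edge weights (h-values) selecting the action, and rewards strengthening the traversed edge. The damping rate, reward scheme, and toy task are illustrative choices, not the paper's full model (which also allows composing new clips).

```python
import random

class PSAgent:
    def __init__(self, percepts, actions, gamma=0.0):
        # h-values: edge weights of the clip network, initialized flat
        self.h = {(s, a): 1.0 for s in percepts for a in actions}
        self.actions = actions
        self.gamma = gamma  # forgetting / damping rate

    def act(self, percept):
        # one-step random walk from the percept clip to an action clip
        weights = [self.h[(percept, a)] for a in self.actions]
        return random.choices(self.actions, weights=weights)[0]

    def learn(self, percept, action, reward):
        for key in self.h:  # damping pulls h-values back toward 1
            self.h[key] -= self.gamma * (self.h[key] - 1.0)
        self.h[(percept, action)] += reward  # reinforce the used edge

# toy task: the rewarded action is the one matching the percept
random.seed(0)
agent = PSAgent(percepts=[0, 1], actions=[0, 1])
for _ in range(500):
    s = random.choice([0, 1])
    a = agent.act(s)
    agent.learn(s, a, reward=1.0 if a == s else 0.0)
print(agent.h[(0, 0)] > agent.h[(0, 1)], agent.h[(1, 1)] > agent.h[(1, 0)])
```

After training, the h-values along the rewarded edges dominate, so the walk almost always routes each percept to its matching action.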