
    Evidence that conflict regarding size of haemodynamic response to interventricular delay optimization of cardiac resynchronization therapy may arise from differences in how atrioventricular delay is kept constant.

    Aims: Whether adjusting interventricular (VV) delay changes the haemodynamic efficacy of cardiac resynchronization therapy (CRT) is controversial, with conflicting results. This study addresses whether the convention for keeping atrioventricular (AV) delay constant during VV optimization might explain these conflicts.
    Method and results: Twenty-two patients in sinus rhythm with existing CRT underwent VV optimization using non-invasive systolic blood pressure. Interventricular optimization was performed with four methods for keeping the AV delay constant: (i) atrium-to-left-ventricle delay kept constant, (ii) atrium-to-right-ventricle delay kept constant, (iii) time to the first-activated ventricle kept constant, and (iv) time to the second-activated ventricle kept constant. In 11 patients this was performed at an AV delay of 120 ms, and in 11 at the AV optimum. At AV 120 ms, time to the first ventricular lead (left or right) was the overwhelming determinant of haemodynamics (13.75 mmHg at ±80 ms, P < 0.001), with no significant effect of time to the second lead (0.47 mmHg, P = 0.50), P < 0.001 for the difference. At the AV optimum, time to the first ventricular lead again had a larger effect (5.03 mmHg, P < 0.001) than time to the second (2.92 mmHg, P = 0.001), P = 0.02 for the difference.
    Conclusion: Time to first ventricular activation is the overwhelming determinant of circulatory function, regardless of whether this is the left or right ventricular lead. If this is kept constant, the effect of changing the time to the second ventricle is small or nil, and is not beneficial. In practice, it may be advisable to leave the VV delay at zero. Specifying how the AV delay is kept fixed might make future VV delay research more enlightening.
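
    For intuition, the four conventions can be written as simple timing arithmetic. The sketch below is our own illustration, not from the paper: it assumes VV > 0 denotes LV pre-excitation in milliseconds and shows how each convention maps a programmed AV and VV delay onto the actual atrium-to-LV and atrium-to-RV pacing times.

```python
# Illustrative sketch (not from the paper): the four conventions for
# "keeping AV delay constant" during VV optimization.
# Assumption: vv > 0 means the LV lead fires vv ms before the RV lead.

def lead_times(av: float, vv: float, convention: str) -> tuple[float, float]:
    """Return (atrium-to-LV, atrium-to-RV) pacing times in ms."""
    if convention == "fix_A_LV":    # (i) atrium-to-LV delay kept constant
        return av, av + vv
    if convention == "fix_A_RV":    # (ii) atrium-to-RV delay kept constant
        return av - vv, av
    if convention == "fix_first":   # (iii) time to first-activated ventricle constant
        return (av, av + vv) if vv >= 0 else (av - vv, av)
    if convention == "fix_second":  # (iv) time to second-activated ventricle constant
        return (av - vv, av) if vv >= 0 else (av, av + vv)
    raise ValueError(f"unknown convention: {convention}")

for conv in ("fix_A_LV", "fix_A_RV", "fix_first", "fix_second"):
    print(conv, lead_times(120, 40, conv), lead_times(120, -40, conv))
```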

    The authors reply.


    A Candidate Sub-Parsec Supermassive Binary Black Hole System

    We identify SDSS J153636.22+044127.0, a QSO discovered in the Sloan Digital Sky Survey, as a promising candidate for a binary black hole system. This QSO has two broad-line emission systems separated by 3500 km/s. The redder system at z=0.3889 also has a typical set of narrow forbidden lines. The bluer system (z=0.3727) shows only broad Balmer lines and UV Fe II emission, making it highly unusual in its lack of narrow lines. A third system, which includes only unresolved absorption lines, is seen at a redshift, z=0.3878, intermediate between the two emission-line systems. While the observational signatures of binary nuclear black holes remain unclear, J1536+0441 is unique among all known QSOs in having two broad-line regions, indicative of two separate black holes presently accreting gas. Interpreting this as a bound binary system of two black holes with masses of 10^8.9 and 10^7.3 solar masses yields a separation of ~0.1 parsec and an orbital period of ~100 years. The separation implies that the two black holes are orbiting within a single narrow-line region, consistent with the characteristics of the spectrum. This object was identified as an extreme outlier of a Karhunen-Loève transform of 17,500 z < 0.7 QSO spectra from the SDSS. The probability of the spectrum resulting from a chance superposition of two QSOs with similar redshifts is estimated at 2×10^-7, leading to the expectation of 0.003 such objects in the sample studied; however, even in this case, the spectrum of the lower-redshift QSO remains highly unusual.
    Comment: 8 pages, 2 figures, Nature in press.
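
    The quoted ~100-year period is consistent with Kepler's third law for the stated masses and separation. A quick back-of-the-envelope check (our own, using standard constants, not a calculation from the paper):

```python
import math

# Back-of-envelope check (not from the paper): Kepler's third law,
# P = 2*pi*sqrt(a^3 / (G*M_total)), for the quoted masses and separation.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
PC = 3.086e16      # parsec, m
YEAR = 3.156e7     # year, s

m_total = (10**8.9 + 10**7.3) * M_SUN   # ~8.1e8 solar masses in total
a = 0.1 * PC                            # quoted ~0.1 parsec separation
period = 2 * math.pi * math.sqrt(a**3 / (G * m_total))
print(f"Orbital period ~ {period / YEAR:.0f} years")  # ~100 years
```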

    Hierarchical statistical techniques are necessary to draw reliable conclusions from analysis of isolated cardiomyocyte studies

    Aims: It is generally accepted that post-MI heart failure (HF) changes a variety of aspects of sarcoplasmic reticular Ca2+ fluxes, but for some aspects there is disagreement over whether there is an increase or decrease. The commonest statistical approach is to treat data collected from each cell as independent, even though the cells are really clustered, with multiple likely-similar cells from each heart. In this study, we test whether this statistical assumption of independence can lead the investigator to draw conclusions that would be considered erroneous if the analysis handled clustering with specific statistical techniques (hierarchical tests).
    Methods and results: Ca2+ transients were recorded in cells loaded with Fura-2AM and sparks were recorded in cells loaded with Fluo-4AM. Data were analysed twice, once with the common statistical approach (assumption of independence) and once with hierarchical statistical methodologies designed to allow for any clustering. The statistical tests found significant hierarchical clustering. This caused the common statistical approach to underestimate the standard error and report artificially small P values. For example, this would have led to the erroneous conclusion that time to 50% peak transient amplitude was significantly prolonged in HF. Spark analysis showed clustering, both within each cell and within each rat, for morphological variables. This means that a three-level hierarchical model is sometimes required for such measures. Standard statistical methodologies, if used instead, erroneously suggest that spark amplitude is significantly greater and spark duration reduced in HF.
    Conclusion: Ca2+ fluxes in isolated cardiomyocytes show so much clustering that the common statistical approach, which assumes independence of each data point, will frequently give the false appearance of statistically significant changes. Hierarchical statistical methodologies need a little more effort, but are necessary for reliable conclusions. We present cost-free, simple tools for performing these analyses.
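
    A minimal sketch of the contrast the paper draws, using statsmodels; this is our own illustration, not the authors' cost-free tools, and the file and column names are assumptions:

```python
# Sketch (our own, not the paper's tool): the same group comparison run
# naively (cells treated as independent) and hierarchically (random
# intercept per heart). File and column names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("transients.csv")  # hypothetical file: one row per cell
# assumed columns: amplitude (response), group ("HF" or "sham"), heart (ID)

# Naive analysis: ordinary least squares ignores clustering by heart,
# so its standard errors can be too small and its P values too optimistic.
naive = smf.ols("amplitude ~ group", data=df).fit()

# Hierarchical analysis: a linear mixed model with a random intercept per
# heart absorbs between-heart variability before testing the group effect.
mixed = smf.mixedlm("amplitude ~ group", data=df, groups=df["heart"]).fit()

print(naive.summary())
print(mixed.summary())
```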

    Providing Self-Aware Systems with Reflexivity

    We propose a new type of self-aware system inspired by ideas from higher-order theories of consciousness. First, we discuss the crucial distinction between introspection and reflexion. Then, we focus on computational reflexion as a mechanism by which a computer program can inspect its own code at every stage of the computation. Finally, we provide a formal definition and a proof-of-concept implementation of computational reflexion, viewed as an enriched form of program interpretation and a way to dynamically "augment" a computational process.
    Comment: 12 pages plus bibliography, appendices with code description, code of the proof-of-concept implementation, and examples of execution.
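
    As a loose, language-level analogue only (our own sketch, not the paper's formal definition or proof-of-concept code), a Python decorator can hand a running function its own source code on every call:

```python
# Crude analogue of "computational reflexion" (our own illustration):
# a decorator that gives a function access to its own source code each
# time it executes. Run from a file, not an interactive session, so that
# inspect can locate the source.
import functools
import inspect

def reflexive(fn):
    source = inspect.getsource(fn)  # capture the function's own definition
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        # the running computation can examine its own code at this point
        print(f"{fn.__name__} running; its source is {len(source)} chars")
        return fn(*args, **kwargs)
    return wrapper

@reflexive
def add(a, b):
    return a + b

print(add(2, 3))
```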

    Efficient labelling for efficient deep learning: the benefit of a multiple-image-ranking method to generate high volume training data applied to ventricular slice level classification in cardiac MRI

    BACKGROUND: Getting the most value from expert clinicians' limited labelling time is a major challenge for artificial intelligence (AI) development in clinical imaging. We present a novel method for ground-truth labelling of cardiac magnetic resonance imaging (CMR) data in which multiple clinician experts rank multiple images along a single ordinal axis, rather than labelling one image at a time. We apply this strategy to train a deep learning (DL) model to classify the anatomical position of CMR images, allowing the automated removal of slices that do not contain left ventricular (LV) myocardium.
    METHODS: Anonymised LV short-axis slices from 300 random scans (3,552 individual images) were extracted. Each image's anatomical position relative to the LV was labelled using two different strategies, performed for 5 hours each: (I) 'one-image-at-a-time': each image labelled individually as 'too basal', 'LV', or 'too apical' by one of three experts; and (II) 'multiple-image-ranking': three independent experts ordered slices according to their relative position from 'most basal' to 'most apical' in batches of eight until each image had been viewed at least 3 times. Two convolutional neural networks were trained for a three-way classification task, each model using data from one labelling strategy. The models' performance was evaluated by accuracy, F1-score, and area under the receiver operating characteristic curve (ROC AUC).
    RESULTS: After excluding images with artefact, 3,323 images were labelled by both strategies. The model trained using labels from the 'multiple-image-ranking' strategy performed better than the model using the 'one-image-at-a-time' labels (accuracy 86% vs. 72%, P=0.02; F1-score 0.86 vs. 0.75; ROC AUC 0.95 vs. 0.86). For expert clinicians performing this task manually, intra-observer variability was low (Cohen's κ=0.90), but inter-observer variability was higher (Cohen's κ=0.77).
    CONCLUSIONS: We present proof of concept that, for the same clinician labelling effort, comparing multiple images side by side in a 'multiple-image-ranking' strategy achieves more accurate ground-truth labels for DL than classifying images individually. We demonstrate a potential clinical application, the automatic removal of unrequired CMR images, which increases efficiency by focussing human and machine attention on the images needed to answer clinical questions.
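
    One simple way such ranked batches could be merged into a single ordinal axis (our own illustration; the abstract does not detail the authors' aggregation method) is to average each image's normalised rank across the batches in which it appeared:

```python
# Illustrative sketch (not the authors' pipeline): merge ranked batches of
# images into one consensus ordering by averaging normalised ranks.
from collections import defaultdict

def consensus_order(batches):
    """batches: lists of image IDs, each ordered most-basal -> most-apical."""
    ranks = defaultdict(list)
    for batch in batches:
        for pos, image_id in enumerate(batch):
            # normalise position to [0, 1]: 0 ~ most basal, 1 ~ most apical
            ranks[image_id].append(pos / (len(batch) - 1))
    # sort images by their mean normalised rank across all batches
    return sorted(ranks, key=lambda i: sum(ranks[i]) / len(ranks[i]))

batches = [["a", "b", "c", "d"], ["b", "a", "d", "c"], ["a", "c", "b", "d"]]
print(consensus_order(batches))  # -> ['a', 'b', 'c', 'd']
```

    Class boundaries ('too basal' / 'LV' / 'too apical') could then be placed along this consensus axis to yield the three-way labels used for training.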