
    Data Fusion for QRS Complex Detection in Multi-Lead Electrocardiogram Recordings

    Heart diseases are the leading cause of death worldwide. The first step in the diagnosis of these diseases is the analysis of the electrocardiographic (ECG) signal, and ECG analysis in turn begins with the detection of the QRS complex, the highest-energy waveform in the cardiac cycle. Numerous methods for QRS complex detection have been proposed in the literature, but few authors have explored exploiting the information redundancy present in multiple, simultaneously acquired ECG leads to produce more accurate QRS detection. In our previous work we presented such an approach, proposing various data fusion techniques to combine the detections made by an algorithm on multiple ECG leads. In this paper we present further studies that show the advantages of this multi-lead detection approach, analyzing how many leads are necessary to observe an improvement in detection performance. A well-known QRS detection algorithm was used to test the fusion techniques on the St. Petersburg Institute of Cardiological Technics database. Results, evaluated using the detection error rate (DER), show an improvement with as few as three leads, although the improvement becomes reliable only when seven or more leads are used. The multi-lead detection approach reduces the DER from 3.04% to 1.88%. Further work will implement additional fusion steps to improve detection performance.
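    The abstract's fusion idea can be sketched as follows. This is a minimal illustration, not the paper's exact method: the majority-vote rule, the clustering tolerance, and the toy inputs are all assumptions; only the DER definition (missed plus spurious detections over total annotated beats) follows standard usage.

```python
# Hypothetical sketch: majority-vote fusion of per-lead QRS detections,
# plus the detection error rate (DER) metric used in the abstract.

def fuse_detections(per_lead_beats, tolerance=0.1):
    """Cluster beat times (seconds) from all leads; keep clusters
    detected in a majority of leads, returning the cluster mean."""
    n_leads = len(per_lead_beats)
    all_beats = sorted(t for lead in per_lead_beats for t in lead)
    fused, cluster = [], []
    for t in all_beats:
        if cluster and t - cluster[-1] > tolerance:
            if len(cluster) > n_leads // 2:   # majority vote across leads
                fused.append(sum(cluster) / len(cluster))
            cluster = []
        cluster.append(t)
    if cluster and len(cluster) > n_leads // 2:
        fused.append(sum(cluster) / len(cluster))
    return fused

def detection_error_rate(false_pos, false_neg, total_beats):
    """DER = (FP + FN) / total annotated beats, in percent."""
    return 100.0 * (false_pos + false_neg) / total_beats
```

    With three leads, a beat seen by two of them survives fusion while a spurious detection on a single lead is discarded, which is the mechanism behind the reported DER improvement.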

    Coordinated optimization of visual cortical maps (I) Symmetry-based analysis

    In the primary visual cortex of primates and carnivores, functional architecture can be characterized by maps of various stimulus features such as orientation preference (OP), ocular dominance (OD), and spatial frequency. It is a long-standing question in theoretical neuroscience whether the observed maps should be interpreted as optima of a specific energy functional that summarizes the design principles of cortical functional architecture. A rigorous evaluation of this optimization hypothesis is particularly demanded by recent evidence that the functional architecture of OP columns precisely follows species-invariant quantitative laws. Because it would be desirable to infer the form of such an optimization principle from the biological data, the optimization approach to explaining cortical functional architecture raises the following questions: i) What are the genuine ground states of candidate energy functionals, and how can they be calculated with precision and rigor? ii) How do differences in candidate optimization principles affect the predicted map structure, and conversely, what can be learned about a hypothetical underlying optimization principle from observations of map structure? iii) Is there a way to analyze the coordinated organization of cortical maps predicted by optimization principles in general? To answer these questions we developed a general dynamical systems approach to the combined optimization of visual cortical maps of OP and another scalar feature such as OD or spatial frequency preference. (90 pages, 16 figures)

    Coordinated optimization of visual cortical maps (II) Numerical studies

    It is an attractive hypothesis that the spatial structure of visual cortical architecture can be explained by the coordinated optimization of multiple visual cortical maps representing orientation preference (OP), ocular dominance (OD), spatial frequency, or direction preference. In part (I) of this study we defined a class of analytically tractable coordinated optimization models and solved representative examples in which a spatially complex organization of the orientation preference map is induced by inter-map interactions. We found that attractor solutions near the symmetry-breaking threshold predict a highly ordered map layout and require a substantial OD bias for OP pinwheel stabilization. Here we examine in numerical simulations whether such models exhibit biologically more realistic, spatially irregular solutions at a finite distance from threshold and when transients towards attractor states are considered. We also examine whether model behavior qualitatively changes when the spatial periodicities of the two maps are detuned and when considering more than two feature dimensions. Our numerical results support the view that neither minimal energy states nor intermediate transient states of our coordinated optimization models successfully explain the spatially irregular architecture of the visual cortex. We discuss several alternative scenarios and additional factors that may improve the agreement between model solutions and biological observations. (55 pages, 11 figures; arXiv admin note: substantial text overlap with arXiv:1102.335)

    High-Dimensional Similarity Search with Quantum-Assisted Variational Autoencoder

    Recent progress in quantum algorithms and hardware indicates the potential importance of quantum computing in the near future. However, finding suitable application areas remains an active area of research. Quantum machine learning is touted as a potential approach to demonstrate quantum advantage in both the gate-model and adiabatic schemes. For instance, the Quantum-assisted Variational Autoencoder (QVAE) has been proposed as a quantum enhancement to the discrete VAE. We build on previous work and study the real-world applicability of a QVAE by presenting a proof-of-concept for similarity search in large-scale high-dimensional datasets. While exact and fast similarity search algorithms are available for low-dimensional datasets, scaling to high-dimensional data is non-trivial. We show how to construct a space-efficient search index based on the latent space representation of a QVAE. Our experiments show a correlation between the Hamming distance in the embedded space and the Euclidean distance in the original space on the Moderate Resolution Imaging Spectroradiometer (MODIS) dataset. Further, we find real-world speedups compared to linear search and demonstrate memory-efficient scaling to half a billion data points.
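    The search-index idea can be sketched without any quantum hardware. In this illustration, a random-hyperplane binarization stands in for the QVAE's discrete latent encoder (an assumption; the paper's encoder is learned), but the index structure, compact binary codes queried by Hamming distance, is the same.

```python
# Illustrative sketch: similarity search over binary codes, standing in
# for the discrete latent space a QVAE would produce.
import numpy as np

def encode(x, proj):
    """Binarize data via random hyperplanes: a hypothetical stand-in
    for a learned discrete latent code."""
    return (x @ proj > 0).astype(np.uint8)

def hamming_search(query_code, index_codes, k=5):
    """Return indices of the k codes closest to the query in
    Hamming distance (a linear scan over compact codes)."""
    dists = np.count_nonzero(index_codes != query_code, axis=1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 32))   # original high-dimensional points
proj = rng.normal(size=(32, 64))     # hyperplanes giving 64-bit codes
codes = encode(data, proj)           # the space-efficient search index

q = data[123]
neighbors = hamming_search(encode(q, proj), codes, k=5)
```

    Sign random projection approximately preserves angles, so Hamming distance between codes correlates with distance in the original space, which is the property the abstract reports for the QVAE embedding; storing one bit per latent dimension is what makes scaling to half a billion points memory-efficient.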

    Spatial lipidomics reveals sphingolipid metabolism as anti-fibrotic target in the liver

    © 2025 The Authors. Background and aims: Steatotic liver disease (SLD), which encompasses various causes of fat accumulation in the liver, is a major cause of liver fibrosis. Understanding the specific mechanisms of lipotoxicity, dysregulated lipid metabolism, and the role of different hepatic cell types involved in fibrogenesis is crucial for therapy development. Methods: We analysed liver tissue from SLD patients and three mouse models. We combined bulk/spatial lipidomics, transcriptomics, imaging mass cytometry (IMC) and analysis of published spatial and single-cell RNA sequencing (scRNA-seq) data to explore the metabolic microenvironment in fibrosis. Pharmacological inhibition of sphingolipid metabolism with myriocin, fumonisin B1, miglustat and D-PDMP was carried out in hepatic stellate cells (HSCs) and human precision-cut liver slices (hPCLSs). Results: Bulk lipidomics revealed increased glycosphingolipids, ether lipids and saturated phosphatidylcholines in fibrotic samples. Spatial lipidomics detected >40 lipid species enriched within fibrotic regions, notably sphingomyelin (SM) 34:1. Using bulk transcriptomics (mouse) and analysis of published spatial transcriptomics data (human), we found that sphingolipid metabolism was also dysregulated in fibrosis at the transcriptome level, with increased gene expression for ceramide and glycosphingolipid synthesis. Analysis of human scRNA-seq data showed that sphingolipid-related genes were widely expressed in non-parenchymal cells. By integrating spatial lipidomics with IMC of hepatic cell markers, we found excellent spatial correlation between sphingolipids, such as SM(34:1), and myofibroblasts. Inhibiting sphingolipid metabolism resulted in anti-fibrotic effects in HSCs and hPCLSs. Conclusions: Our spatial multi-omics approach suggests cell type-specific mechanisms of fibrogenesis involving sphingolipid metabolism. Importantly, sphingolipid metabolic pathways are modifiable targets, which may have potential as an anti-fibrotic therapeutic strategy.

    The additional value of patient-reported health status in predicting 1-year mortality after invasive coronary procedures: A report from the Euro Heart Survey on Coronary Revascularisation

    Objective: Self-perceived health status may be helpful in identifying patients at high risk for adverse outcomes. The Euro Heart Survey on Coronary Revascularization (EHS-CR) provided an opportunity to explore whether impaired health status was a predictor of 1-year mortality in patients with coronary artery disease (CAD) undergoing angiographic procedures. Methods: Data from the EHS-CR, which included 5619 patients from 31 member countries of the European Society of Cardiology, were used. Inclusion criteria for the current study were completion of a self-report measure of health status, the EuroQol Questionnaire (EQ-5D), at discharge and availability of 1-year follow-up information, resulting in a study population of 3786 patients. Results: The 1-year mortality was 3.2% (n = 120). Survivors reported fewer problems on the five dimensions of the EQ-5D as compared with non-survivors. We adjusted for a broad range of potential confounders, namely those that reached p<0.10 in the unadjusted analyses. In the adjusted analyses, problems with self-care (OR 3.45; 95% CI 2.14 to 5.59) and a low rating (≤60) on health status (OR 2.41; 95% CI 1.47 to 3.94) were the most powerful independent predictors of mortality among the 22 clinical variables included in the analysis. Furthermore, patients who reported no problems on all five dimensions had significantly lower 1-year mortality rates (OR 0.47; 95% CI 0.28 to 0.81). Conclusions: This analysis shows that impaired health status is associated with a 2- to 3-fold increased risk of all-cause mortality in patients with CAD, independent of other conventional risk factors. These results highlight the importance of including patients' subjective experience of their own health status in the evaluation strategy to optimise risk stratification and management in clinical practice.
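    For readers unfamiliar with how the reported figures relate, an adjusted odds ratio and its 95% confidence interval come from exponentiating a logistic-regression coefficient and its Wald interval. The coefficient and standard error below are illustrative values chosen to roughly reproduce the abstract's self-care result, not the survey data.

```python
# How OR 3.45 (95% CI 2.14-5.59) arises from a log-odds coefficient.
# beta and se are illustrative stand-ins, not fitted values.
import math

beta = math.log(3.45)   # logistic-regression coefficient (log odds ratio)
se = 0.246              # standard error of beta (illustrative)

odds_ratio = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)   # lower 95% Wald limit
ci_high = math.exp(beta + 1.96 * se)  # upper 95% Wald limit
```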

    On the Origins of Suboptimality in Human Probabilistic Inference

    Humans have been shown to combine noisy sensory information with previous experience (priors), in qualitative and sometimes quantitative agreement with the statistically optimal predictions of Bayesian integration. However, when the prior distribution becomes more complex than a simple Gaussian, such as skewed or bimodal, training takes much longer and performance appears suboptimal. It is unclear whether such suboptimality arises from an imprecise internal representation of the complex prior, or from additional constraints in performing probabilistic computations on complex distributions, even when they are accurately represented. Here we probe the sources of suboptimality in probabilistic inference using a novel estimation task in which subjects are exposed to an explicitly provided distribution, thereby removing the need to remember the prior. Subjects had to estimate the location of a target given a noisy cue and a visual representation of the prior probability density over locations, which changed on each trial. Different classes of priors were examined (Gaussian, unimodal, bimodal). Subjects' performance was in qualitative agreement with the predictions of Bayesian Decision Theory, although generally suboptimal. The degree of suboptimality was modulated by statistical features of the priors but was largely independent of the class of the prior and the level of noise in the cue, suggesting that suboptimality in dealing with complex statistical features, such as bimodality, may be due to a problem of acquiring the priors rather than computing with them. We performed a factorial model comparison across a large set of Bayesian observer models to identify additional sources of noise and suboptimality. Our analysis rejects several models of stochastic behavior, including probability matching and sample-averaging strategies. 
    Instead, we show that subjects' response variability was mainly driven by a combination of a noisy estimation of the parameters of the priors and variability in the decision process, which we represent as a noisy or stochastic posterior.
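    The ideal observer that subjects are compared against can be written down in a few lines: multiply the explicitly shown prior by a Gaussian likelihood centered on the noisy cue, normalize, and report a posterior summary. The bimodal-prior parameters, cue value, and noise level below are illustrative assumptions, not the task's actual settings.

```python
# Minimal sketch of a Bayesian observer on a grid: explicit (bimodal)
# prior x Gaussian likelihood around the cue -> posterior-mean estimate.
import numpy as np

x = np.linspace(-10, 10, 2001)          # candidate target locations

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Bimodal prior: equal mixture of two Gaussian modes (illustrative)
prior = 0.5 * gauss(x, -3.0, 1.0) + 0.5 * gauss(x, 3.0, 1.0)
prior /= prior.sum()

cue, cue_noise = 2.0, 2.0               # noisy cue and its std (assumed)
likelihood = gauss(x, cue, cue_noise)

posterior = prior * likelihood
posterior /= posterior.sum()
estimate = float(np.sum(x * posterior)) # posterior-mean estimate
```

    With the cue near the right-hand mode, the likelihood suppresses the left mode and the optimal estimate lands between the cue and the nearby prior peak; the "stochastic posterior" account in the abstract amounts to adding noise to this posterior before the estimate is read out.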

    On Contrastive Divergence Learning

    Maximum-likelihood (ML) learning of Markov random fields is challenging because it requires estimates of averages that have an exponential number of terms. Markov chain Monte Carlo methods typically take a long time to converge on unbiased estimates, but Hinton (2002) showed that if the Markov chain is only run for a few steps, the learning can still work well and it approximately minimizes a different function called "contrastive divergence" (CD). CD learning has been successfully applied to various types of random fields. Here, we study the properties of CD learning and show that it provides biased estimates in general, but that the bias is typically very small. Fast CD learning can therefore be used to get close to an ML solution, and slow ML learning can then be used to fine-tune the CD solution.
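    The few-step chain the abstract describes is easiest to see in the standard CD-1 update for a binary restricted Boltzmann machine: one Gibbs step from the data replaces the intractable model expectation. The network sizes, learning rate, and toy data below are arbitrary choices for the sketch, and biases are omitted for brevity.

```python
# Illustrative CD-1 weight update for a tiny binary RBM.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(W, v0, lr=0.1):
    """One CD-1 step: a single Gibbs step from the data, then follow
    the approximate gradient <v h>_data - <v h>_1step."""
    h0_prob = sigmoid(v0 @ W)                              # positive phase
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    v1_prob = sigmoid(h0 @ W.T)                            # reconstruct
    v1 = (rng.random(v1_prob.shape) < v1_prob).astype(float)
    h1_prob = sigmoid(v1 @ W)                              # negative phase
    grad = v0.T @ h0_prob - v1.T @ h1_prob
    return W + lr * grad / v0.shape[0]

# Toy data: two repeated binary patterns
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 50, dtype=float)
W = 0.01 * rng.normal(size=(4, 3))   # 4 visible units, 3 hidden units
for _ in range(200):
    W = cd1_update(W, data)
```

    Running the chain for only one step is what introduces the small bias the paper analyzes; the abstract's proposed remedy is to use cheap updates like this one to approach the ML solution, then switch to slower unbiased estimates for fine-tuning.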