35 research outputs found

    A Multiscale Cardiac Model for Fast Personalisation and Exploitation

    Get PDF
    Computer models of the heart are of increasing interest for clinical applications due to their discriminative and predictive abilities. However, a single 3D simulation can be long and computationally expensive, which can make practical applications such as the personalisation phase, or a sensitivity analysis of mechanical parameters on the simulated behaviour, quite slow. In this manuscript we present a multiscale 0D/3D model which gives a reliable (and extremely fast) approximation of the behaviour of the 3D model under a few simplifying assumptions. We first detail the two models, then explain their coupling, which yields fast 0D approximations of 3D simulations. Finally we demonstrate how the multiscale model can speed up an efficient optimisation algorithm, enabling a fast personalisation of the 3D simulations by leveraging the advantages of each scale.
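
    To make the coupling idea concrete, here is a minimal sketch (not the authors' implementation) of one way a fast 0D model could be calibrated against a handful of paired 3D runs, assuming a simple per-output affine correction; outputs_0d and outputs_3d are hypothetical arrays of matched simulation outputs.

    import numpy as np

    def fit_coupling(outputs_0d: np.ndarray, outputs_3d: np.ndarray):
        """Fit a per-output affine map 0D -> 3D from paired simulations.

        outputs_0d, outputs_3d: arrays of shape (n_runs, n_outputs) holding
        matched quantities of interest (e.g. volumes, pressures).
        """
        slopes, intercepts = [], []
        for y0, y3 in zip(outputs_0d.T, outputs_3d.T):
            a, b = np.polyfit(y0, y3, 1)  # least-squares line per output
            slopes.append(a)
            intercepts.append(b)
        return np.array(slopes), np.array(intercepts)

    def approximate_3d(y_0d, slopes, intercepts):
        """Predict 3D-model outputs from a cheap 0D simulation."""
        return slopes * y_0d + intercepts

    Once fitted, every subsequent evaluation costs only a 0D run, which is what makes sensitivity analyses and personalisation loops tractable.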

    Multifidelity-CMA: a multifidelity approach for efficient personalisation of 3D cardiac electromechanical models

    Get PDF
    Personalised computational models of the heart are of increasing interest for clinical applications due to their discriminative and predictive abilities. However, the simulation of a single heartbeat with a 3D cardiac electromechanical model can be long and computationally expensive, which makes some practical applications, such as the estimation of model parameters from clinical data (the personalisation), very slow. Here we introduce an original multifidelity approach between a 3D cardiac model and a simplified "0D" version of this model, which yields reliable (and extremely fast) approximations of the global behaviour of the 3D model using 0D simulations. We then use this multifidelity approximation to speed up an efficient parameter estimation algorithm, leading to a fast and computationally efficient personalisation method for the 3D model. In particular, we show results on a cohort of 121 different heart geometries and measurements. Finally, an exploitable code of the 0D model with scripts to perform parameter estimation will be released to the community.
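
    As an illustration of how such a surrogate plugs into CMA-ES, here is a hedged sketch using the pycma package; cost_0d is a placeholder for a misfit function evaluated with the fast 0D model (the paper's actual multifidelity schedule, which mixes 0D and 3D evaluations, is more elaborate).

    # pip install cma  -- Nikolaus Hansen's pycma package
    import cma

    def personalise(cost_0d, x0, sigma0=0.3):
        """Run CMA-ES with the cheap 0D surrogate as the objective.

        cost_0d: callable mapping a parameter vector to a misfit value
        x0: initial guess for the model parameters
        """
        es = cma.CMAEvolutionStrategy(x0, sigma0)
        while not es.stop():
            candidates = es.ask()  # sample a population of parameter vectors
            es.tell(candidates, [cost_0d(x) for x in candidates])
        return es.result.xbest

    In a multifidelity setting the surrogate would be periodically re-anchored against full 3D simulations, so that the optimum found in 0D remains meaningful for the 3D model.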

    Longitudinal Parameter Estimation in 3D Electromechanical Models: Application to Cardiovascular Changes in Digestion

    Get PDF
    Computer models of the heart are of increasing interest for clinical applications due to their discriminative and predictive abilities. However, the number of simulation parameters in these models can be high, and expert knowledge is required to properly design studies involving these models and to analyse the results. In particular it is important to know how the parameters vary in various clinical or physiological settings. In this paper we build a data-driven model of cardiovascular parameter evolution during digestion, from a clinical study involving more than 80 patients. We first present a method for longitudinal parameter estimation in 3D cardiac models, which we apply to 21 patient-specific heart geometries at two instants of the study, for 6 parameters (two fixed and four time-varying). From these personalised hearts, we then extract and validate a law which links the changes of cardiac output and heart rate under constant arterial pressure to the evolution of these parameters, thus enabling the fast simulation of hearts during digestion for future patients.
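
    The extracted law is not spelled out in the abstract; as a purely illustrative sketch, a linear law linking measured changes to parameter changes could be fitted as below, where d_measures (changes in cardiac output and heart rate) and d_params (changes in the four time-varying parameters) are hypothetical arrays built from the personalised hearts.

    import numpy as np

    def fit_parameter_law(d_measures: np.ndarray, d_params: np.ndarray) -> np.ndarray:
        """Least-squares linear law: d_params ~ d_measures @ W.

        d_measures: shape (n_patients, 2) -- changes in cardiac output and heart rate
        d_params:   shape (n_patients, 4) -- changes in the time-varying parameters
        """
        W, *_ = np.linalg.lstsq(d_measures, d_params, rcond=None)
        return W  # shape (2, 4)

    A new patient's parameter evolution during digestion would then be predicted as d_measures_new @ W, enabling the fast simulation the abstract describes.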

    Policy needs and options for a common approach towards modelling and simulation of human physiology and diseases with a focus on the virtual physiological human.

    Get PDF
    Life is the result of an intricate systemic interaction between many processes occurring at radically different spatial and temporal scales. Every day, worldwide biomedical research and clinical practice produce a huge amount of information on such processes. However, this information is highly fragmented, and its integration is largely left to human actors, who find the task ever more demanding in a context where the available information continues to increase exponentially. Investments in Virtual Physiological Human (VPH) research are largely motivated by the need for integration in healthcare. As all health information becomes digital, the complexity of health care will continue to grow, and growing demand combined with limited budgets will translate into ever-increasing pressure. Hence, the best way to achieve the dream of personalised, preventive, and participative medicine at sustainable cost will be through the integration of all available data, information and knowledge.

    Inference of ventricular activation properties from non-invasive electrocardiography

    Full text link
    The realisation of precision cardiology requires novel techniques for the non-invasive characterisation of individual patients' cardiac function to inform therapeutic and diagnostic decision-making. The electrocardiogram (ECG) is the most widely used clinical tool for cardiac diagnosis. Its interpretation is, however, confounded by functional and anatomical variability in heart and torso. In this study, we develop new computational techniques to estimate key ventricular activation properties for individual subjects by exploiting the synergy between non-invasive electrocardiography and image-based torso-biventricular modelling and simulation. More precisely, we present an efficient sequential Monte Carlo approximate Bayesian computation-based inference method, integrated with Eikonal simulations and torso-biventricular models constructed from clinical cardiac magnetic resonance (CMR) imaging. The method also includes a novel strategy to treat combined continuous (conduction speeds) and discrete (earliest activation sites) parameter spaces, and an efficient dynamic time warping-based ECG comparison algorithm. We demonstrate results from our inference method on a cohort of twenty virtual subjects with cardiac volumes ranging from 74 cm³ to 171 cm³, considering low versus high resolution for the endocardial discretisation (which determines the possible locations of the earliest activation sites). Results show that our method can successfully infer the ventricular activation properties from non-invasive data, with higher accuracy for earliest activation sites, endocardial speed, and sheet (transmural) speed in sinus rhythm than for the fibre or sheet-normal speeds. (Comment: submitted to Medical Image Analysis.)
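
    The abstract mentions an efficient dynamic time warping-based ECG comparison; the sketch below is the plain textbook DTW distance (without the authors' efficiency refinements), assuming two 1-D ECG traces sampled on comparable grids.

    import numpy as np

    def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
        """Classic O(len(a) * len(b)) dynamic time warping distance."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                # best of insertion, deletion, and match moves
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return float(D[n, m])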

    Image-based personalisation of cardiac electrophysiological models for ventricular tachycardia treatment planning

    Get PDF
    Acute infarct survival rates have drastically improved over the last decades, mechanically increasing the prevalence of chronic infarct-related conditions. Among these conditions, ischaemic ventricular tachycardia (VT) is a particularly serious arrhythmia that can lead to the often lethal ventricular fibrillation. VT can be treated by radio-frequency ablation of the arrhythmogenic substrate. The first phase of this long and risky interventional cardiology procedure is an electrophysiological (EP) exploration of the heart. This phase aims at localising the ablation targets, notably by inducing the arrhythmia in a controlled setting. In this work we propose to re-create this exploration phase in silico, by personalising cardiac EP models. We show that key information about infarct scar location and heterogeneity can be obtained by deep learning-based automated segmentation of the myocardium on computed tomography (CT) images. Our goal is to use this information to run patient-specific simulations of depolarisation wave propagation in the myocardium, mimicking the interventional cardiology exploration phase. We start by studying the relationship between the depolarisation wave propagation velocity and the left ventricular wall thickness in order to personalise an Eikonal model, an approach that can successfully reproduce periodic activation maps of the left ventricle recorded during VT. We then propose efficient algorithms to detect the repolarisation wave on unipolar electrograms (UEGs), which we use to analyse the UEGs embedded in such intra-cardiac recordings. Thanks to a multimodal registration between these recordings and CT images, we establish relationships between action potential durations/restitution properties and left ventricular wall thickness. These relationships are finally used to parametrise a reaction-diffusion model able to reproduce interventional cardiologists' induction protocols that trigger realistic and documented VTs.
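    As a rough illustration of the Eikonal personalisation step, the sketch below computes first-arrival activation times on a 2-D grid with a Dijkstra-style solver, assuming a hypothetical monotone mapping from wall thickness to conduction velocity (the thesis establishes the actual relationship from data).

    import heapq
    import numpy as np

    def activation_times(speed, sources, h=1.0):
        """First-arrival times on a 2-D grid (Dijkstra approximation of the
        Eikonal equation); speed[i, j] is the local conduction velocity."""
        t = np.full(speed.shape, np.inf)
        heap = []
        for s in sources:
            t[s] = 0.0
            heapq.heappush(heap, (0.0, s))
        while heap:
            ti, (i, j) = heapq.heappop(heap)
            if ti > t[i, j]:
                continue  # stale heap entry
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < speed.shape[0] and 0 <= nj < speed.shape[1]:
                    cand = ti + h / speed[ni, nj]
                    if cand < t[ni, nj]:
                        t[ni, nj] = cand
                        heapq.heappush(heap, (cand, (ni, nj)))
        return t

    # hypothetical thickness-to-velocity law: thin (scarred) wall conducts slowly
    thickness = np.random.uniform(2.0, 10.0, size=(64, 64))   # mm
    speed = 0.1 * np.clip(thickness / 10.0, 0.2, 1.0)         # arbitrary units
    times = activation_times(speed, sources=[(32, 32)])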

    Measuring trustworthiness of image data in the internet of things environment

    Get PDF
    Internet of Things (IoT) image sensors generate huge volumes of digital images every day. However, the easy availability and usability of photo-editing tools, vulnerabilities in communication channels and malicious software have made forgery attacks on image sensor data effortless, exposing IoT systems to cyberattacks. In IoT applications such as smart cities and surveillance systems, smooth operation depends on sensors sharing data with other sensors of identical or different types. Therefore, a sensor must be able to rely on the data it receives from other sensors; in other words, the data must be trustworthy. Sensors deployed in IoT applications are usually limited in processing and battery power, which prohibits the use of complex cryptography and security mechanisms and the adoption of universal security standards by IoT device manufacturers. Hence, estimating the trust of image sensor data is a defensive solution, as these data are used for critical decision-making processes. To our knowledge, only one published work has estimated the trustworthiness of digital images in forensic applications. However, that study's method depends on machine-learning prediction scores returned by existing forensic models, which limits its usage where the underlying forensic models require different approaches (e.g., machine-learning predictions, statistical methods, digital signatures, perceptual image hashes). Multi-type sensor data correlation and context awareness, both absent from that study's model, can improve the trust measurement. To address these issues, novel techniques are introduced to accurately estimate the trustworthiness of IoT image sensor data with the aid of complementary non-imagery (numeric) data-generating sensors monitoring the same environment. The trust estimation models run on edge devices, relieving sensors of computationally intensive tasks.

    First, to detect local image forgery (splicing and copy-move attacks), an innovative image forgery detection method is proposed based on Discrete Cosine Transformation (DCT), Local Binary Pattern (LBP) and a new feature extraction method using the mean operator. Using a Support Vector Machine (SVM), the proposed method is extensively tested on four well-known publicly available greyscale and colour image forgery datasets and on an IoT-based image forgery dataset that we built. Experimental results reveal the superiority of the proposed method over recent state-of-the-art methods in terms of widely used performance metrics and computational time, and demonstrate robustness against low availability of forged training samples.

    Second, a robust trust estimation framework for IoT image data is proposed, leveraging numeric data-generating sensors deployed in the same area of interest (AoI) in an indoor environment. As low-cost sensors allow many IoT applications to use multiple types of sensors to observe the same AoI, the complementary numeric data of one sensor can be exploited to measure the trust value of another image sensor's data. A theoretical model is developed using Shannon's entropy to derive the uncertainty associated with an observed event, and Dempster-Shafer theory (DST) for decision fusion. The proposed model's efficacy in estimating the trust score of image sensor data is analysed by observing a fire event using IoT image and temperature sensor data in an indoor residential setup under different scenarios. The proposed model produces highly accurate trust scores in all scenarios with authentic and forged image data.

    Finally, as the outdoor environment varies dynamically due to natural factors (e.g., lighting variations between day and night, and the presence of different objects, smoke, fog, rain, or shadow in the scene), a novel trust framework is proposed that suits outdoor environments with these contextual variations. A transfer learning approach is adopted to derive the decision about an observation from image sensor data, while a statistical approach derives the decision about the same observation from numeric data generated by other sensors deployed in the same AoI. These decisions are then fused using CertainLogic and compared with DST-based fusion. A testbed was set up using a Raspberry Pi microprocessor, an image sensor, a temperature sensor, an edge device, LoRa nodes, a LoRaWAN gateway and servers to evaluate the proposed techniques. The results show that CertainLogic is more suitable for measuring the trustworthiness of image sensor data in an outdoor environment.
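
    To illustrate the decision-fusion step, here is a minimal sketch of Dempster's rule of combination over the two-hypothesis frame {trustworthy, forged}; the mass assignments are made-up examples, not values from the thesis.

    from itertools import product

    T, F = 'trustworthy', 'forged'

    def dempster_combine(m1, m2):
        """Dempster's rule of combination; focal elements are frozensets."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass falling on the empty set
        if conflict >= 1.0:
            raise ValueError("total conflict: Dempster's rule is undefined")
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    # example: evidence from the image sensor vs. the temperature sensor
    m_image = {frozenset({T}): 0.6, frozenset({F}): 0.1, frozenset({T, F}): 0.3}
    m_temp  = {frozenset({T}): 0.5, frozenset({F}): 0.2, frozenset({T, F}): 0.3}
    fused = dempster_combine(m_image, m_temp)

    The normalised mass on {trustworthy} can then serve as a trust score; the thesis additionally weights each source's evidence with a Shannon-entropy-based uncertainty measure.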

    Affective Computing for Emotion Detection using Vision and Wearable Sensors

    Get PDF
    The research explores the opportunities, challenges and limitations of, and presents advancements in, computing that relates to, arises from, or deliberately influences emotions (Picard, 1997). The field is referred to as Affective Computing (AC) and is expected to play a major role in the engineering and development of computationally and cognitively intelligent systems, processors and applications in the future. Today the field of AC is bolstered by the emergence of multiple sources of affective data and is fuelled by developments under various Internet of Things (IoT) projects and the fusion potential of multiple sensory affective data streams. The core focus of this thesis is to investigate whether the sensitivity and specificity (predictive performance) of AC, based on the fusion of multi-sensor data streams, is fit for purpose: can such AC-powered technologies and techniques truly deliver increasingly accurate emotion predictions of subjects in the real world?

    The thesis begins by presenting a number of research justifications and AC research questions that are used to formulate the original thesis hypothesis and objectives. As part of the research conducted, a detailed state-of-the-art investigation explored many aspects of AC from both a scientific and a technological perspective. The complexity of AC as a multi-sensor, multi-modality data fusion problem unfolded during this investigation and ultimately led to novel thinking in the form of an AC conceptual architecture that acts as a practical and theoretical foundation for the engineering of future AC platforms and solutions. This architecture was applied to the engineering of a series of software artifacts that were combined to create a prototypical AC multi-sensor platform, the Emotion Fusion Server (EFS), used in the AC experimentation phases of the research.

    The research used the EFS platform to conduct a detailed series of AC experiments investigating whether the fusion of multiple sensory sources of affective data can significantly increase the accuracy of emotion prediction by computationally intelligent means. Numerous controlled experiments were conducted, along with statistical analysis of sensor performance for the purposes of AC; the findings serve to assess the feasibility of AC in various domains and point to future directions for the field. The results, analytics and evaluations are presented throughout the two thesis volumes. The thesis concludes with a detailed set of formal findings, conclusions and decisions in relation to the overarching research hypothesis on the sensitivity and specificity of the fusion of vision and wearable sensor modalities, and offers foresight and guidance on the many problems, challenges and projections for the AC field into the future.
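
    As a concrete (and deliberately simplified) illustration of the fusion question the thesis asks, the sketch below late-fuses per-class probabilities from a vision model and a wearable-sensor model and scores the result by sensitivity and specificity; it is not the Emotion Fusion Server, whose design is far richer.

    import numpy as np

    def late_fuse(p_vision, p_wearable, w=0.5):
        """Weighted average of per-class probabilities from two modalities."""
        return w * np.asarray(p_vision) + (1.0 - w) * np.asarray(p_wearable)

    def sensitivity_specificity(y_true, y_pred):
        """Binary sensitivity (recall on positives) and specificity."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        tp = np.sum((y_true == 1) & (y_pred == 1))
        tn = np.sum((y_true == 0) & (y_pred == 0))
        fp = np.sum((y_true == 0) & (y_pred == 1))
        fn = np.sum((y_true == 1) & (y_pred == 0))
        return tp / (tp + fn), tn / (tn + fp)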