16 research outputs found

    DiffECG: A Generalized Probabilistic Diffusion Model for ECG Signals Synthesis

    In recent years, deep generative models have gained attention as a promising data augmentation solution for heart disease detection using deep learning approaches applied to ECG signals. In this paper, we introduce a novel approach based on denoising diffusion probabilistic models for ECG synthesis that covers three scenarios: heartbeat generation, partial signal completion, and full heartbeat forecasting. Our approach represents the first generalized conditional approach for ECG synthesis, and our experimental results demonstrate its effectiveness for various ECG-related tasks. Moreover, we show that our approach outperforms other state-of-the-art ECG generative models and can enhance the performance of state-of-the-art classifiers.
    Comment: under review
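    As a rough illustration of the kind of model the abstract refers to, the sketch below implements a conditional denoising diffusion training step for fixed-length 1-D ECG segments. The tiny epsilon-prediction network, the linear noise schedule, and the class-label conditioning are assumptions made for the example; the actual DiffECG architecture and conditioning scheme may differ.

```python
# Minimal sketch of a conditional DDPM training step for 1-D ECG segments.
# Assumptions (not from the paper): a simple 1-D conv epsilon-network,
# a fixed linear beta schedule, and class-label conditioning.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)             # linear noise schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)    # cumulative \bar{alpha}_t

class EpsNet(nn.Module):
    """Tiny 1-D conv network predicting the added noise epsilon."""
    def __init__(self, n_classes=5, emb_dim=32):
        super().__init__()
        self.t_emb = nn.Embedding(T, emb_dim)
        self.c_emb = nn.Embedding(n_classes, emb_dim)
        self.net = nn.Sequential(
            nn.Conv1d(1 + 2 * emb_dim, 64, 3, padding=1), nn.SiLU(),
            nn.Conv1d(64, 64, 3, padding=1), nn.SiLU(),
            nn.Conv1d(64, 1, 3, padding=1),
        )

    def forward(self, x_t, t, c):
        # Broadcast time/class embeddings along the signal length.
        e = torch.cat([self.t_emb(t), self.c_emb(c)], dim=-1)        # (B, 2*emb)
        e = e.unsqueeze(-1).expand(-1, -1, x_t.shape[-1])            # (B, 2*emb, L)
        return self.net(torch.cat([x_t, e], dim=1))

def ddpm_loss(model, x0, c):
    """Epsilon-prediction objective on a batch of heartbeats x0: (B, 1, L)."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    eps = torch.randn_like(x0)
    a_bar = alpha_bars[t].view(b, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps   # forward diffusion
    return nn.functional.mse_loss(model(x_t, t, c), eps)

# usage: loss = ddpm_loss(EpsNet(), torch.randn(8, 1, 256), torch.zeros(8, dtype=torch.long))
```

    Sampling would then run the usual reverse diffusion loop, optionally clamping the observed part of the signal at every step to obtain completion or forecasting behaviour.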

    Leveraging Statistical Shape Priors in GAN-based ECG Synthesis

    Because collecting electrocardiogram (ECG) data during emergency situations is difficult, ECG data generation is an efficient solution for dealing with highly imbalanced ECG training datasets. However, the complex dynamics of ECG signals make their synthesis a challenging task. In this paper, we present a novel approach for ECG signal generation based on Generative Adversarial Networks (GANs). Our approach combines GANs with statistical ECG data modeling to leverage prior knowledge about ECG dynamics in the generation process. To validate the proposed approach, we present experiments using ECG signals from the MIT-BIH arrhythmia database. The obtained results show the benefits of modeling temporal and amplitude variations of ECG signals as 2-D shapes for generating realistic signals and for improving the performance of state-of-the-art arrhythmia classification baselines.
    Comment: 6 figures, 26 pages
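    The sketch below illustrates the general idea of combining a GAN objective with a statistical prior on heartbeat shape: the generator is trained both to fool the discriminator and to stay close to precomputed beat statistics. The MLP networks, the diagonal-variance prior, and the Mahalanobis-style penalty are placeholders chosen for the example; the paper's 2-D shape modeling of temporal and amplitude variations is richer.

```python
# Minimal sketch of a GAN for fixed-length heartbeats with a shape-prior
# penalty. Assumptions (not from the paper): MLP generator/discriminator,
# a prior given as a mean beat plus a per-sample variance, and a simple
# Mahalanobis-style regularizer on generated beats.
import torch
import torch.nn as nn

L = 256  # samples per heartbeat

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, L), nn.Tanh())
D = nn.Sequential(nn.Linear(L, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

mean_beat = torch.zeros(L)   # placeholder statistics; estimate from real beats
var_beat = torch.ones(L)

def shape_prior_penalty(fake):
    """Penalize generated beats that drift far from the statistical prior."""
    return (((fake - mean_beat) ** 2) / var_beat).mean()

def gan_step(real, opt_g, opt_d, lam=0.1):
    """One training step on a batch of real heartbeats, shape (B, L)."""
    bce = nn.functional.binary_cross_entropy_with_logits
    z = torch.randn(real.shape[0], 64)
    fake = G(z)

    # Discriminator update: real -> 1, fake -> 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(real.shape[0], 1)) + \
             bce(D(fake.detach()), torch.zeros(real.shape[0], 1))
    d_loss.backward()
    opt_d.step()

    # Generator update: fool D while staying close to the shape prior.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(real.shape[0], 1)) + lam * shape_prior_penalty(fake)
    g_loss.backward()
    opt_g.step()

# usage: opt_g = torch.optim.Adam(G.parameters(), 2e-4); opt_d = torch.optim.Adam(D.parameters(), 2e-4)
```

    In practice the prior statistics would be estimated from real beats (for instance per heartbeat class) before training, rather than left as the placeholders above.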

    Phrase-Based Language Model in Statistical Machine Translation

    As one of the most important modules in statistical machine translation (SMT), the language model measures whether one translation hypothesis is more grammatically correct than other hypotheses. Current state-of-the-art SMT systems use standard word n-gram models, whereas the translation model is phrase-based. In this paper, the idea is to use a phrase-based language model. To that end, the target portions of the translation table are retrieved and used to rewrite the training corpus and to calculate a phrase n-gram language model. In this work, we perform experiments with two language models: word-based (WBLM) and phrase-based (PBLM). The different SMT systems are trained with three optimization algorithms: MERT, MIRA, and PRO. The PBLM systems are then compared to the baseline system in terms of BLEU and TER. The experimental results show that using a phrase-based language model in SMT can improve results and is especially able to reduce the error rate.
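    A minimal sketch of the corpus-rewriting step described above, assuming a toy phrase table and a greedy longest-match segmentation: the target-side corpus is rewritten into phrase tokens, after which ordinary n-gram counts can be collected. A real system would use the actual phrase table of the SMT system and hand the rewritten corpus to a toolkit such as SRILM or KenLM rather than counting raw bigrams.

```python
# Sketch: rewrite the target-side corpus into phrase tokens using the phrase
# table, then count phrase bigrams. Greedy longest-match segmentation and
# unsmoothed counts are simplifications made for the example.
from collections import Counter

phrase_table = {("the", "european", "union"), ("of", "the"), ("we", "believe")}
max_len = max(len(p) for p in phrase_table)

def segment(sentence):
    """Greedily merge the longest known phrases into single tokens."""
    words, out, i = sentence.split(), [], 0
    while i < len(words):
        for n in range(min(max_len, len(words) - i), 0, -1):
            cand = tuple(words[i:i + n])
            if n == 1 or cand in phrase_table:
                out.append("_".join(cand))   # phrase becomes one LM token
                i += n
                break
    return out

def phrase_bigrams(corpus):
    """Collect bigram counts over phrase tokens, with sentence boundary markers."""
    counts = Counter()
    for sent in corpus:
        toks = ["<s>"] + segment(sent) + ["</s>"]
        counts.update(zip(toks, toks[1:]))
    return counts

# usage: phrase_bigrams(["we believe the european union acts", "members of the union"])
```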

    3DTeethSeg'22: 3D Teeth Scan Segmentation and Labeling Challenge

    Teeth localization, segmentation, and labeling from intra-oral 3D scans are essential tasks in modern dentistry to enhance dental diagnostics, treatment planning, and population-based studies on oral health. However, developing automated algorithms for teeth analysis presents significant challenges due to variations in dental anatomy, imaging protocols, and limited availability of publicly accessible data. To address these challenges, the 3DTeethSeg'22 challenge was organized in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2022, with a call for algorithms tackling teeth localization, segmentation, and labeling from intra-oral 3D scans. A dataset comprising a total of 1800 scans from 900 patients was prepared, and each tooth was individually annotated by a human-machine hybrid algorithm. A total of 6 algorithms were evaluated on this dataset. In this study, we present the evaluation results of the 3DTeethSeg'22 challenge. The 3DTeethSeg'22 challenge code can be accessed at: https://github.com/abenhamadou/3DTeethSeg22_challenge
    Comment: 29 pages, MICCAI 2022 Singapore, Satellite Event, Challenge
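    The abstract does not spell out the evaluation protocol, so the snippet below is only one plausible illustration of how a teeth-labeling prediction could be scored on an annotated scan, using per-tooth IoU over mesh vertices. The official 3DTeethSeg'22 metrics are defined in the challenge repository linked above.

```python
# Hedged illustration: per-label IoU over mesh vertices, averaged over the
# teeth present in the ground truth. Not the official challenge metric.
import numpy as np

def mean_tooth_iou(pred_labels, gt_labels):
    """pred_labels, gt_labels: integer tooth labels per vertex (0 = gingiva/background)."""
    ious = []
    for tooth in np.unique(gt_labels):
        if tooth == 0:
            continue
        pred_mask, gt_mask = pred_labels == tooth, gt_labels == tooth
        union = np.logical_or(pred_mask, gt_mask).sum()
        if union:
            ious.append(np.logical_and(pred_mask, gt_mask).sum() / union)
    return float(np.mean(ious)) if ious else 0.0
```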

    MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision

    Prior to the deep learning era, shape was commonly used to describe objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is seen from numerous shape-related publications in premier vision conferences as well as the growing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models). For the medical domain, we present a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, called MedShapeNet, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt SOTA vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. As of today, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. As examples, we present use cases in the fields of classification of brain tumors, facial and skull reconstructions, multi-class anatomy completion, education, and 3D printing. In the future, we will extend the data and improve the interfaces. The project pages are: https://medshapenet.ikim.nrw/ and https://github.com/Jianningli/medshapenet-feedback
    Comment: 16 pages
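    As a hedged illustration of feeding such shapes into a discriminative benchmark, the snippet below loads a mesh file with the generic trimesh library and converts it into a normalized point cloud for a point-based classifier. The file name is hypothetical and the snippet does not use MedShapeNet's own Python API; see the project pages above for the actual access interface.

```python
# Illustration only: turn a downloaded mesh into a fixed-size point cloud
# suitable for a PointNet-style classifier. The file name is hypothetical.
import numpy as np
import trimesh

def mesh_to_point_cloud(path, n_points=1024):
    mesh = trimesh.load(path, force="mesh")
    verts = np.asarray(mesh.vertices)
    idx = np.random.choice(len(verts), n_points, replace=len(verts) < n_points)
    pts = verts[idx]
    pts -= pts.mean(axis=0)            # center the shape
    pts /= np.abs(pts).max() + 1e-8    # scale into the unit cube
    return pts.astype(np.float32)      # (n_points, 3)

# usage: x = mesh_to_point_cloud("skull_0001.stl")  # then feed x to a point-based classifier
```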

    Contribution to the 3D mapping of internal walls of the bladder by active vision cystoscopy

    Cystoscopy is currently the reference clinical examination for visual exploration of the inner walls of the bladder. A cystoscope (the instrument used in this examination) allows for video acquisition of the bladder epithelium. Nonetheless, each frame of the video displays only a small area of a few square centimeters. This work aims to build 3D maps representing the 3D shape and the texture of the inner walls of the bladder. Such maps should improve and facilitate the interpretation of cystoscopic data. To reach this purpose, a new flexible algorithm is proposed for the calibration of cystoscopic active vision systems. This algorithm provides the parameters required for accurate reconstruction of 3D points on the surface part imaged at each given moment of the video cystoscopy. Thus, the data available for each acquisition are a small set of 3D points (with their corresponding 2D projections) and a 2D image. The aim of the second algorithm described in this work is to place all the data obtained for a sequence in a global coordinate system to generate a 3D point cloud and a 2D panoramic image representing, respectively, the 3D shape and the texture of the bladder wall imaged in the video. This 3D cartography method allows for the simultaneous estimation of 3D rigid transformations and 2D perspective transformations, which give, respectively, the link between cystoscope positions and between images of consecutive acquisitions. The results obtained on realistic bladder phantoms show that the proposed method generates 3D surfaces recovering the ground truth shapes.
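    One building block of such a mosaicing pipeline can be sketched as follows: estimate the 2D perspective transform (homography) between consecutive frames from feature matches, then chain the pairwise estimates to express every frame in the coordinate system of the first one. The thesis estimates the 3D rigid and 2D perspective transforms jointly from calibrated active-vision data; this OpenCV-based sketch covers only the 2D image side and is an illustration, not the thesis's algorithm.

```python
# Sketch: pairwise homographies from ORB matches + RANSAC, chained into the
# coordinate system of the first frame. Illustration of the 2D part only.
import cv2
import numpy as np

def pairwise_homography(img_a, img_b):
    """Homography mapping img_b into img_a, estimated from ORB matches + RANSAC."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_b, des_a)
    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

def chain_to_global(frames):
    """Return, for each frame, the homography into the first frame's coordinates."""
    globals_ = [np.eye(3)]
    for prev, cur in zip(frames, frames[1:]):
        globals_.append(globals_[-1] @ pairwise_homography(prev, cur))
    return globals_
```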

    Contribution to the 3D mapping of internal walls of the bladder by active vision cystoscopy

    Cystoscopy is currently the reference clinical examination for visual exploration of the inner walls of the bladder. A cystoscope (the instrument used in this examination) allows for video acquisition of the bladder epithelium. Nonetheless, each frame of the video displays only a small area of a few square centimeters. This work aims to build 3D maps representing the 3D shape and the texture of the inner walls of the bladder. Such maps should improve and facilitate the interpretation of cystoscopic data. To reach this purpose, a new flexible algorithm is proposed for the calibration of cystoscopic active vision systems. This algorithm provides the parameters required for accurate reconstruction of 3D points on the surface part imaged at each given moment of the video cystoscopy. Thus, the data available for each acquisition are a small set of 3D points (with their corresponding 2D projections) and a 2D image. The aim of the second algorithm described in this work is to place all the data obtained for a sequence in a global coordinate system to generate a 3D point cloud and a 2D panoramic image representing, respectively, the 3D shape and the texture of the bladder wall imaged in the video. This 3D cartography method allows for the simultaneous estimation of 3D rigid transformations and 2D perspective transformations, which give, respectively, the link between cystoscope positions and between images of consecutive acquisitions. The results obtained on realistic bladder phantoms show that the proposed method generates 3D surfaces recovering the ground truth shapes.
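    A complementary sketch for the 3D side of the same pipeline: given pairwise rigid transforms between consecutive cystoscope positions (here simply assumed known, whereas the thesis estimates them jointly with the 2D perspective transforms), compose them and accumulate the per-acquisition 3D points into one global point cloud.

```python
# Sketch: compose pairwise 4x4 rigid transforms and merge per-acquisition
# 3D points into the coordinate system of the first acquisition.
import numpy as np

def compose(T_a, T_b):
    """Compose two 4x4 homogeneous rigid transforms (apply T_b, then T_a)."""
    return T_a @ T_b

def accumulate_cloud(point_sets, pairwise_T):
    """point_sets[k]: (N_k, 3) points in frame k; pairwise_T[k]: frame k+1 -> frame k."""
    cloud, T_global = [point_sets[0]], np.eye(4)
    for pts, T in zip(point_sets[1:], pairwise_T):
        T_global = compose(T_global, T)                   # frame k+1 -> frame 0
        homog = np.hstack([pts, np.ones((len(pts), 1))])
        cloud.append((homog @ T_global.T)[:, :3])
    return np.vstack(cloud)
```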