
    OnUVS: Online Feature Decoupling Framework for High-Fidelity Ultrasound Video Synthesis

    Ultrasound (US) imaging is indispensable in clinical practice. To diagnose certain diseases, sonographers must observe the corresponding dynamic anatomic structures to gather comprehensive information. However, the limited availability of specific US video cases makes it difficult to teach the identification of the corresponding diseases, which potentially impacts their detection rate. The synthesis of US videos may represent a promising solution to this issue. Nevertheless, it is challenging to accurately animate the intricate motion of dynamic anatomic structures while preserving image fidelity. To address this, we present a novel online feature-decoupling framework called OnUVS for high-fidelity US video synthesis. Our highlights can be summarized in four aspects. First, we introduced anatomic information into keypoint learning through a weakly-supervised training strategy, resulting in improved preservation of anatomical integrity and motion while minimizing the labeling burden. Second, to better preserve the integrity and textural information of US images, we implemented a dual decoder that decouples the content and textural features in the generator. Third, we adopted a multiple-feature discriminator to extract a comprehensive range of visual cues, thereby enhancing the sharpness and fine details of the generated videos. Fourth, we constrained the motion trajectories of keypoints during online learning to enhance the fluidity of the generated videos. Our validation and user studies on in-house echocardiographic and pelvic floor US videos showed that OnUVS synthesizes US videos with high fidelity. (Comment: 14 pages, 13 figures, and 6 tables)
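
    The fourth point, constraining keypoint motion trajectories during online learning, can be illustrated with a small sketch. The paper's exact formulation is not reproduced here; the second-order (acceleration) smoothness penalty and the tensor names below are assumptions for illustration only.

    # Hypothetical sketch of a temporal-smoothness constraint on keypoint
    # trajectories, in the spirit of OnUVS's fourth contribution; the actual
    # loss used by the paper may differ.
    import torch

    def trajectory_smoothness_loss(keypoints: torch.Tensor) -> torch.Tensor:
        """keypoints: (T, K, 2) tensor of K 2-D keypoints over T frames."""
        velocity = keypoints[1:] - keypoints[:-1]      # (T-1, K, 2)
        acceleration = velocity[1:] - velocity[:-1]    # (T-2, K, 2)
        # Penalise abrupt changes in keypoint motion between consecutive frames.
        return acceleration.pow(2).mean()

    # Example: 16 frames, 10 keypoints; gradients flow back to the keypoint detector.
    kps = torch.randn(16, 10, 2, requires_grad=True)
    trajectory_smoothness_loss(kps).backward()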

    Feature-Conditioned Cascaded Video Diffusion Models for Precise Echocardiogram Synthesis

    Image synthesis is expected to provide value for the translation of machine learning methods into clinical practice. Fundamental problems like model robustness, domain transfer, causal modelling, and operator training become approachable through synthetic data. In particular, heavily operator-dependent modalities like ultrasound imaging require robust frameworks for image and video generation. So far, video generation has only been possible by providing input data that is as rich as the output data, e.g., image sequence plus conditioning in, video out. However, clinical documentation is usually scarce and only single images are reported and stored, so retrospective patient-specific analysis or the generation of rich training data becomes impossible with current approaches. In this paper, we extend elucidated diffusion models for video modelling to generate plausible video sequences from single images and arbitrary conditioning with clinical parameters. We explore this idea within the context of echocardiograms by looking into the variation of the Left Ventricle Ejection Fraction, the most essential clinical metric gained from these examinations. We use the publicly available EchoNet-Dynamic dataset for all our experiments. Our image-to-sequence approach achieves an R^2 score of 93%, which is 38 points higher than recently proposed sequence-to-sequence generation methods. Code and models will be available at: https://github.com/HReynaud/EchoDiffusion. (Comment: Published in MICCAI 2023 proceedings: https://link.springer.com/chapter/10.1007/978-3-031-43999-5_1)
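
    The reported R^2 score presumably compares the ejection fraction used for conditioning with the value re-estimated from the synthesized sequence. The sketch below only illustrates that metric; the arrays are placeholders, not values from the paper or the EchoNet-Dynamic dataset.

    # Minimal sketch of an R^2 evaluation between target and regenerated
    # ejection fractions; the numbers are placeholders.
    import numpy as np
    from sklearn.metrics import r2_score

    ef_target = np.array([55.0, 62.0, 38.0, 70.0, 45.0])       # conditioning LVEF (%)
    ef_from_synth = np.array([54.1, 60.5, 40.2, 68.9, 46.3])   # LVEF estimated on generated videos

    print(f"R^2 = {r2_score(ef_target, ef_from_synth):.3f}")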

    Software for Modeling Ultrasound Breast Cancer Imaging

    Computer-based models are increasingly used in biomedical imaging research to clarify links between anatomical structure, imaging physics, and the information content of medical images. A few three-dimensional breast tissue software models have been developed for mammography simulations to optimize current mammography systems or to test novel systems. It would be beneficial in the development of ultrasound breast imaging to have a similar computational model for simulation. A three-dimensional breast anatomy model with the lobular ducts, periductal and intralobular loose fibrous tissue, interlobular dense fibrous tissue, fat, and skin has been implemented. The parenchymal density of the model can be varied from about 20% to 75% to represent a range of clinically relevant densities. The anatomical model was used as a foundation for a three-dimensional breast tumour model. The tumour model was designed to mimic the ultrasound appearance of features used in tumour classification. Simulated two-dimensional ultrasound images were synthesized from the models using a first-order k-space propagation simulator. Similar to clinical ultrasound images, the simulated images of normal breast tissue exhibited non-Rayleigh speckle in regions of interest consisting of primarily fatty, primarily fibroglandular, and mixed tissue types. The simulated images of tumours reproduced several shape and margin features used in breast tumour diagnosis. The ultrasound wavefront distortion produced in simulations using the anatomical model was evaluated, and a second method of modeling wavefront distortion was also proposed in which 10 to 12 irregularly shaped, strongly scattering inclusions were superimposed on multiple parallel time-shift screens to create the screen-inclusion model. Simulations of planar pulsed wave propagation through the two proposed models, a conventional parallel time-shift screen model, and digitized breast tissue specimens were compared. The anatomical model and screen-inclusion model were able to produce arrival-time fluctuation and energy-level fluctuation characteristics comparable to those of the digitized tissue specimens, which the parallel-screen model was unable to reproduce. This software is expected to be valuable for imaging simulations that require accurate and detailed representation of the ultrasound characteristics of breast tumours.
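
    One way to make the non-Rayleigh speckle observation concrete is to compare the envelope signal-to-noise ratio of a region of interest against the value of roughly 1.91 expected for fully developed Rayleigh speckle. The NumPy sketch below is an illustrative check only, not the thesis's actual analysis pipeline.

    # Illustrative speckle-statistics check: for fully developed (Rayleigh)
    # speckle the envelope SNR (mean / std) is about 1.91; clear deviations
    # indicate non-Rayleigh statistics.
    import numpy as np

    def envelope_snr(roi: np.ndarray) -> float:
        """roi: envelope (pre-log-compression) amplitudes in a region of interest."""
        return roi.mean() / roi.std()

    rng = np.random.default_rng(0)
    rayleigh_roi = rng.rayleigh(scale=1.0, size=(64, 64))
    print(f"Rayleigh ROI SNR: {envelope_snr(rayleigh_roi):.2f}   (expected ~1.91)")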

    Appearance Modelling and Reconstruction for Navigation in Minimally Invasive Surgery

    Minimally invasive surgery is playing an increasingly important role in patient care. Whilst its direct patient benefit in terms of reduced trauma, improved recovery and shortened hospitalisation has been well established, there is a sustained need for improved training of the existing procedures and for the development of new smart instruments to tackle issues of visualisation, ergonomic control, and haptic and tactile feedback. For endoscopic intervention, the small field of view in the presence of complex anatomy can easily disorient the operator, as the tortuous access pathway is not always easy to predict and control with standard endoscopes. Effective training through simulation devices, based on either virtual-reality or mixed-reality simulators, can help to improve the spatial awareness, consistency and safety of these procedures. This thesis examines the use of endoscopic videos for both simulation and navigation purposes. More specifically, it addresses the challenging problem of how to build high-fidelity subject-specific simulation environments for improved training and skills assessment. Issues related to mesh parameterisation and texture blending are investigated. With the maturity of computer vision in terms of both 3D shape reconstruction and localisation and mapping, vision-based techniques have enjoyed significant interest in recent years for surgical navigation. The thesis also tackles the problem of how to use vision-based techniques to provide a detailed 3D map and a dynamically expanded field of view to improve spatial awareness and avoid operator disorientation. The key advantage of this approach is that it does not require additional hardware, and thus introduces minimal interference to the existing surgical workflow. The derived 3D map can be effectively integrated with pre-operative data, allowing both global and local 3D navigation by taking into account tissue structural and appearance changes. Both simulation and laboratory-based experiments are conducted throughout this research to assess the practical value of the proposed methods.
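
    The dynamically expanded field of view described above depends on registering successive endoscopic frames against each other. The OpenCV sketch below shows one generic building block, pairwise registration via ORB features and a RANSAC homography; it is an illustrative stand-in rather than the reconstruction and mapping pipeline developed in the thesis.

    # Generic pairwise registration of two endoscopic frames with ORB features
    # and a homography, as one building block of field-of-view expansion.
    # Illustrative only; not the thesis's actual pipeline.
    import cv2
    import numpy as np

    def register_pair(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
        orb = cv2.ORB_create(1000)
        kp_a, des_a = orb.detectAndCompute(frame_a, None)
        kp_b, des_b = orb.detectAndCompute(frame_b, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]
        pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        # Homography mapping frame_b into frame_a's coordinates (RANSAC-robust);
        # warping frame_b with it extends the visible canvas around frame_a.
        homography, _ = cv2.findHomography(pts_b, pts_a, cv2.RANSAC, 5.0)
        return homography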

    Constrained CycleGAN for Effective Generation of Ultrasound Sector Images of Improved Spatial Resolution

    Objective. A phased or a curvilinear array produces ultrasound (US) images with a sector field of view (FOV), which inherently exhibits spatially varying image resolution, with inferior quality in the far zone and towards the two sides azimuthally. Sector US images with improved spatial resolution are favorable for accurate quantitative analysis of large and dynamic organs, such as the heart. Therefore, this study aims to translate US images with spatially varying resolution into ones with less spatially varying resolution. CycleGAN has been a prominent choice for unpaired medical image translation; however, it neither guarantees structural consistency nor preserves backscattering patterns between input and generated images for unpaired US images. Approach. To circumvent this limitation, we propose a constrained CycleGAN (CCycleGAN), which directly performs US image generation with unpaired images acquired by different ultrasound array probes. In addition to the conventional adversarial and cycle-consistency losses of CycleGAN, CCycleGAN introduces an identical loss and a correlation coefficient loss based on intrinsic US backscattered signal properties to constrain structural consistency and backscattering patterns, respectively. Instead of post-processed B-mode images, CCycleGAN uses envelope data directly obtained from beamformed radio-frequency signals without any other non-linear post-processing. Main Results. In vitro phantom results demonstrate that CCycleGAN successfully generates images with improved spatial resolution as well as higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) compared with benchmarks. Significance. CCycleGAN-generated US images of the in vivo human beating heart further facilitate higher-quality heart wall motion estimation than those generated by the benchmarks, particularly in deep regions.
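
    The correlation coefficient loss mentioned in the Approach is intended to keep the backscattering pattern of the generated envelope image correlated with that of the input. A small PyTorch sketch of a Pearson-correlation-based loss is given below; the exact formulation used by CCycleGAN may differ.

    # Sketch of a Pearson correlation coefficient loss between input and
    # generated ultrasound envelope data; the exact CCycleGAN formulation
    # may differ.
    import torch

    def correlation_loss(envelope_in: torch.Tensor, envelope_gen: torch.Tensor) -> torch.Tensor:
        x = envelope_in.flatten(start_dim=1)   # (batch, pixels)
        y = envelope_gen.flatten(start_dim=1)
        x = x - x.mean(dim=1, keepdim=True)
        y = y - y.mean(dim=1, keepdim=True)
        corr = (x * y).sum(dim=1) / (x.norm(dim=1) * y.norm(dim=1) + 1e-8)
        # A perfectly preserved backscattering pattern gives corr = 1 and loss = 0.
        return (1.0 - corr).mean()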