
    Searching Collaborative Agents for Multi-plane Localization in 3D Ultrasound

    Full text link
    3D ultrasound (US) is widely used due to its rich diagnostic information, portability, and low cost. Automated standard plane (SP) localization in a US volume not only improves efficiency and reduces operator dependence, but also aids 3D US interpretation. In this study, we propose a novel Multi-Agent Reinforcement Learning (MARL) framework to localize multiple uterine SPs in 3D US simultaneously. Our contribution is two-fold. First, we equip the MARL framework with a one-shot neural architecture search (NAS) module to obtain the optimal agent for each plane. Specifically, Gradient-based search using Differentiable Architecture Sampler (GDAS) is employed to accelerate and stabilize the training process. Second, we propose a novel collaborative strategy to strengthen communication among agents. Our strategy uses a recurrent neural network (RNN) to learn the spatial relationship among SPs effectively. Extensively validated on a large dataset, our approach achieves accuracies of 7.05 degrees/2.21 mm, 8.62 degrees/2.36 mm, and 5.93 degrees/0.89 mm for mid-sagittal, transverse, and coronal plane localization, respectively. The proposed MARL framework significantly increases plane localization accuracy while reducing computational cost and model size.
    Comment: Early accepted by MICCAI 202
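As a rough illustration of the agent collaboration the abstract describes, the sketch below wires several plane-localization agents together through a shared RNN that passes messages between them. All weights, dimensions, and the discrete action space are invented placeholders, not the paper's trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, OBS, HID, N_ACTIONS = 3, 16, 8, 6  # 3 planes; 6 plane-adjustment actions

# Per-agent encoder and Q-head (random weights stand in for trained networks).
W_enc = rng.normal(0, 0.1, (N_AGENTS, OBS, HID))
W_q = rng.normal(0, 0.1, (N_AGENTS, 2 * HID, N_ACTIONS))
# Shared RNN cell used to pass messages between the agents.
W_h, W_x = rng.normal(0, 0.1, (HID, HID)), rng.normal(0, 0.1, (HID, HID))

def step(observations):
    """One decision step: encode, exchange messages via the RNN, pick actions."""
    enc = [np.tanh(obs @ W_enc[i]) for i, obs in enumerate(observations)]
    # Roll the RNN over the agents so each message summarises the preceding ones.
    h = np.zeros(HID)
    messages = []
    for e in enc:
        h = np.tanh(h @ W_h + e @ W_x)
        messages.append(h)
    # Each agent conditions its Q-values on its own encoding plus the message.
    return [int(np.argmax(np.concatenate([enc[i], messages[i]]) @ W_q[i]))
            for i in range(N_AGENTS)]

actions = step([rng.normal(size=OBS) for _ in range(N_AGENTS)])
print(actions)  # one discrete plane-adjustment action per agent
```

In the paper the agents act iteratively on the volume until the planes converge; the single step above only shows how a shared recurrent state can carry inter-agent information.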

    Recent Advances in Artificial Intelligence-Assisted Ultrasound Scanning

    Get PDF
    Funded by the Spanish Ministry of Economic Affairs and Digital Transformation (Project MIA.2021.M02.0005 TARTAGLIA, from the Recovery, Resilience, and Transformation Plan financed by the European Union through Next Generation EU funds). TARTAGLIA takes place under the R&D Missions in Artificial Intelligence program, which is part of the Spain Digital 2025 Agenda and the Spanish National Artificial Intelligence Strategy.
    Ultrasound (US) is a flexible imaging modality used globally as a first-line medical examination in many different clinical settings. It benefits from the continued evolution of ultrasonic technologies and a well-established US-based digital health system. Nevertheless, its diagnostic performance still presents challenges due to the inherent characteristics of US imaging, such as manual operation and significant operator dependence. Artificial intelligence (AI) has proven able to recognize complicated scan patterns and provide quantitative assessments of imaging data. Therefore, AI technology has the potential to help physicians obtain more accurate and repeatable outcomes in US examinations. In this article, we review recent advances in AI-assisted US scanning. We have identified the main areas where AI is being used to facilitate US scanning, such as standard plane recognition and organ identification, the extraction of standard clinical planes from 3D US volumes, and the scanning guidance of US acquisitions performed by humans or robots. In general, the lack of standardization and reference datasets in this field makes it difficult to perform comparative studies among the different proposed methods. More open-access repositories of large US datasets with detailed information about the acquisition are needed to facilitate the development of this very active research field, which is expected to have a very positive impact on US imaging.

    Intelligent Robotic Sonographer: Mutual Information-based Disentangled Reward Learning from Few Demonstrations

    Full text link
    Ultrasound (US) imaging is widely used for biometric measurement and diagnosis of internal organs due to the advantages of being real-time and radiation-free. However, due to high inter-operator variability, the resulting images depend heavily on operators' experience. In this work, an intelligent robotic sonographer is proposed to autonomously "explore" target anatomies and navigate a US probe to a relevant 2D plane by learning from experts. The underlying high-level physiological knowledge of experts is inferred by a neural reward function, using a ranked pairwise image comparison approach in a self-supervised fashion. This process can be referred to as understanding the "language of sonography". To ensure generalization across inter-patient variations, mutual information is estimated by a network to explicitly separate task-related features from domain features in the latent space. In addition, a Gaussian distribution-based filter is developed to automatically evaluate and account for the quality of the expert's demonstrations. The robotic localization is carried out in a coarse-to-fine mode based on the predicted reward associated with B-mode images. To demonstrate the performance of the proposed approach, representative experiments for the "line" target and "point" target are performed on a vascular phantom and two ex-vivo animal organ phantoms (chicken heart and lamb kidney), respectively. The results demonstrate that the proposed framework works robustly on different kinds of known and unseen phantoms.
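Reward learning from ranked pairwise image comparisons can be illustrated with a Bradley-Terry-style ranking loss, which drives the reward of the preferred image above that of the other. The sketch below fits a deliberately simple linear reward to synthetic preference pairs by gradient descent; the features, preference rule, and hyperparameters are all invented for illustration, and the paper's neural reward and disentangled latent features are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 5  # image-feature dimension (placeholder)

# Synthetic "expert" preference: images closer to a target feature vector rank higher.
target = rng.normal(size=D)
feats = rng.normal(size=(200, D))
quality = -np.linalg.norm(feats - target, axis=1)

# Ranked pairs (preferred, other) derived from the synthetic quality score.
idx = rng.integers(0, 200, size=(500, 2))
pref = np.where(quality[idx[:, 0]] >= quality[idx[:, 1]], idx[:, 0], idx[:, 1])
other = np.where(quality[idx[:, 0]] >= quality[idx[:, 1]], idx[:, 1], idx[:, 0])

w = np.zeros(D)  # linear reward r(x) = w . x

def loss(w):
    margin = feats[pref] @ w - feats[other] @ w
    return np.mean(np.log1p(np.exp(-margin)))  # -log sigmoid(margin)

for _ in range(300):
    margin = feats[pref] @ w - feats[other] @ w
    g = -(1.0 / (1.0 + np.exp(margin)))[:, None] * (feats[pref] - feats[other])
    w -= 0.1 * g.mean(axis=0)

print(round(loss(w), 3))  # lower than the initial log(2) at w = 0
```

The learned reward can then score candidate probe poses, with the robot moving toward higher-reward B-mode views, which mirrors the coarse-to-fine localization described in the abstract.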

    FetusMapV2: Enhanced Fetal Pose Estimation in 3D Ultrasound

    Full text link
    Fetal pose estimation in 3D ultrasound (US) involves identifying a set of associated fetal anatomical landmarks. Its primary objective is to provide comprehensive information about the fetus through landmark connections, thus benefiting various critical applications such as biometric measurements, plane localization, and fetal movement monitoring. However, accurately estimating the 3D fetal pose in a US volume faces several challenges, including poor image quality, limited GPU memory for handling high-dimensional data, symmetrical or ambiguous anatomical structures, and considerable variation in fetal poses. In this study, we propose a novel 3D fetal pose estimation framework (called FetusMapV2) to overcome these challenges. Our contribution is three-fold. First, we propose a heuristic scheme that explores complementary, network-structure-unconstrained and activation-unreserved GPU memory management approaches, which can enlarge the input image resolution for better results under limited GPU memory. Second, we design a novel Pair Loss to mitigate confusion caused by symmetrical and similar anatomical structures. It separates the hidden classification task from the landmark localization task and thus progressively eases model learning. Last, we propose shape-prior-based self-supervised learning that selects the relatively stable landmarks to refine the pose online. Extensive experiments and diverse applications on a large-scale fetal US dataset comprising 1000 volumes with 22 landmarks per volume demonstrate that our method outperforms other strong competitors.
    Comment: 16 pages, 11 figures, accepted by Medical Image Analysis (2023)
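The abstract does not give the exact Pair Loss formulation, so the hinge-style penalty below is only a hedged illustration of the general idea: pushing apart the model's responses for symmetric landmark pairs so that left/right confusion is explicitly penalised. The embeddings, pair list, and margin are invented:

```python
import numpy as np

def pair_loss(pred, pairs, margin=1.0):
    """Illustrative pair loss (not the paper's exact formulation): penalise
    symmetric landmark pairs whose predicted embeddings are too similar,
    since near-identical responses are where left/right confusion occurs.
    pred: (L, D) per-landmark embedding; pairs: list of (i, j) symmetric pairs."""
    total = 0.0
    for i, j in pairs:
        d = np.linalg.norm(pred[i] - pred[j])
        total += max(0.0, margin - d) ** 2  # hinge: only too-close pairs are penalised
    return total / len(pairs)

# Pair (0, 1) is nearly collapsed and gets penalised; pair (2, 3) is well separated.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 1.0], [0.0, 3.0]])
print(pair_loss(emb, [(0, 1), (2, 3)]))
```

In the paper this idea is tied to separating a hidden classification task from localization; the sketch only captures the separation pressure itself.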

    Towards autonomous diagnostic systems with medical imaging

    Get PDF
    Democratizing access to high-quality healthcare has highlighted the need for autonomous diagnostic systems that a non-expert can use. Remote communities, first responders, and even deep space explorers will come to rely on medical imaging systems that provide them with point-of-care diagnostic capabilities. This thesis introduces the building blocks that would enable the creation of such a system. First, we present a case study to further motivate the need for and requirements of autonomous diagnostic systems. This case study primarily concerns deep space exploration, where astronauts cannot rely on communication with Earth-bound doctors to help them through diagnosis, nor can they make the trip back to Earth for treatment. Requirements and possible solutions for the major challenges faced in such an application are discussed. Moreover, this work describes how a system can explore its perceived environment by developing a Multi-Agent Reinforcement Learning method that allows for implicit communication between the agents. Under this regime, agents can share knowledge that benefits them all in achieving their individual tasks. Furthermore, we explore how systems can understand the 3D properties of 2D-depicted objects in a probabilistic way. In Part II, this work explores how to reason about the extracted information in a causally enabled manner. A critical view of the applications of causality in medical imaging, and its potential uses, is provided. The focus is then narrowed to estimating possible future outcomes and reasoning about counterfactual outcomes by embedding data on a pseudo-Riemannian manifold and constraining the latent space using the relativistic concept of light cones. By formalizing an approach to estimating counterfactuals, a computationally lighter alternative to the abduction-action-prediction paradigm is presented through the introduction of Deep Twin Networks. Appropriate partial identifiability constraints for categorical variables are derived, and the method is applied in a series of medical tasks involving structured data, images, and videos. All methods are evaluated in a wide array of synthetic and real-life tasks that showcase their abilities, often achieving state-of-the-art performance or matching the existing best performance while requiring a fraction of the computational cost.
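The twin-network idea behind Deep Twin Networks can be illustrated on a toy structural causal model: instead of running explicit abduction-action-prediction, the factual and counterfactual worlds are treated as two copies of the causal graph that share the same exogenous noise. The linear SCM below is an invented example, not one of the thesis's models:

```python
# Toy linear SCM: X := U_X,  Y := 2*X + U_Y  (structural equations assumed known).
def scm_y(x, u_y):
    return 2.0 * x + u_y

def counterfactual_y(x_fact, y_fact, x_cf):
    """Twin-network-style counterfactual: the factual and counterfactual
    branches are two copies of the graph sharing the exogenous noise U_Y."""
    u_y = y_fact - 2.0 * x_fact   # noise consistent with the factual branch
    return scm_y(x_cf, u_y)       # same noise, intervened cause on the twin branch

# Factual world: x = 1 with noise 0.5 gives y = 2.5.
# Counterfactual question: what would y have been had x been 3?
print(counterfactual_y(1.0, 2.5, 3.0))  # → 6.5
```

In Deep Twin Networks both branches are learned neural networks evaluated in a single forward pass, which is what makes the approach computationally lighter than explicit abduction followed by re-simulation.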

    GUARDIANS final report

    Get PDF
    Emergencies in industrial warehouses are a major concern for firefighters. The large dimensions, together with the development of dense smoke that drastically reduces visibility, represent major challenges. The GUARDIANS robot swarm is designed to assist firefighters in searching a large warehouse. In this report we discuss the technology developed for a swarm of robots searching alongside and assisting firefighters. We explain the swarming algorithms which provide the functionality by which the robots react to and follow humans while requiring no communication. Next we discuss the wireless communication system, a so-called mobile ad-hoc network. The communication network also provides one of the means to locate the robots and humans. Thus the robot swarm is able to locate itself and provide guidance information to the humans. Together with the firefighters, we explored how the robot swarm should feed information back to the human firefighter. We have designed and experimented with interfaces for presenting swarm-based information to human beings.

    Ultrasound Imaging

    Get PDF
    In this book, we present a dozen state-of-the-art developments in ultrasound imaging, covering, for example, hardware implementation, transducers, beamforming, signal processing, elasticity measurement, and diagnosis. The editors would like to thank all the chapter authors, who focused on the publication of this book.

    Acoustic Lens Design Using Machine Learning

    Get PDF
    This thesis aims to contribute a novel, efficient approach to the inverse design of acoustic metamaterial lenses using machine learning, specifically deep learning, generative modeling, and reinforcement learning. Acoustic lenses focus incident plane waves at a focal point, enabling non-intrusive detection of structures. These lenses can be utilized in biomedical engineering, medical devices, structural engineering, ultrasound imaging, health monitoring, and more. Finding the global optimum through a traditional iterative optimization process for designing an acoustic lens is challenging, and may become infeasible due to the high-dimensional parameter space and the compute resources needed. Machine learning techniques have shown promise for finding the global optimum. Generative modeling is a powerful technique enabling recent advances in drug discovery, organic molecule development, and photonics. We combine generative modeling with global optimization and an analytical form of the gradients computed by means of multiple scattering theory. In addition, reinforcement learning can potentially outperform traditional optimization algorithms. Thus, in this thesis, the acoustic lens is modeled using two machine learning techniques: generative modeling, using 2D-Global Topology Optimization Networks (2D-GLOnets), and reinforcement learning, using the Deep Deterministic Policy Gradient (DDPG) algorithm. Results from these methods are compared with traditional optimization algorithms.
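As a hedged sketch of the gradient-based inverse-design loop described above, the toy example below runs multi-start gradient ascent on an invented surrogate objective standing in for the focal pressure that multiple scattering theory would compute; the `target` radii and the quadratic surrogate are illustrative assumptions only, not the thesis's forward model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented surrogate for focal pressure as a function of scatterer radii `r`,
# standing in for the analytic multiple-scattering objective and gradient.
target = np.array([0.4, 0.7, 0.3, 0.6])

def focal_intensity(r):
    return -np.sum((r - target) ** 2)   # peaks when the radii match `target`

def grad(r):
    return -2.0 * (r - target)          # analytic gradient of the surrogate

# Multi-start gradient ascent: a simple stand-in for global optimisation
# over a design space with possibly many local optima.
best, best_val = None, -np.inf
for _ in range(5):
    r = rng.uniform(0.1, 0.9, size=4)   # random initial lens design
    for _ in range(200):
        r += 0.05 * grad(r)             # ascend the surrogate objective
    if focal_intensity(r) > best_val:
        best, best_val = r, focal_intensity(r)

print(np.round(best, 2), round(best_val, 4))
```

In the 2D-GLOnets formulation a generator network proposes designs and is trained against such a gradient signal, while the DDPG alternative treats design changes as actions; both replace the inner hand-written loop above with learned components.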