
    Unsupervised Domain Transfer with Conditional Invertible Neural Networks

    Synthetic medical image generation has evolved into a key technique for neural network training and validation. A core challenge, however, remains the domain gap between simulations and real data. While deep learning-based domain transfer using Cycle Generative Adversarial Networks (CycleGANs) and similar architectures has led to substantial progress in the field, there are use cases in which state-of-the-art approaches still fail to generate training images that produce convincing results on relevant downstream tasks. Here, we address this issue with a domain transfer approach based on conditional invertible neural networks (cINNs). As a particular advantage, our method inherently guarantees cycle consistency through its invertible architecture, and network training can be conducted efficiently via maximum likelihood. To showcase our method's generic applicability, we apply it to two spectral imaging modalities at different scales, namely hyperspectral imaging (pixel-level) and photoacoustic tomography (image-level). According to comprehensive experiments, our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks (binary and multi-class). cINN-based domain transfer could thus evolve into an important method for realistic synthetic data generation in the field of spectral imaging and beyond.
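The cycle-consistency guarantee mentioned above follows from the building block of INNs: affine coupling layers, which are invertible by construction. A minimal NumPy sketch of one conditional coupling block (with an untrained toy subnetwork; all names and dimensions are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

class ConditionalAffineCoupling:
    """One toy cINN block: splits the input, predicts a scale and shift
    for one half from the other half plus a condition vector
    (e.g. a one-hot domain label)."""

    def __init__(self, dim, cond_dim, hidden=32):
        self.half = dim // 2
        # toy "subnetwork": a fixed random 2-layer MLP (no training here)
        self.W1 = rng.normal(0, 0.1, (self.half + cond_dim, hidden))
        self.W2 = rng.normal(0, 0.1, (hidden, 2 * (dim - self.half)))

    def _scale_shift(self, x1, c):
        h = np.tanh(np.concatenate([x1, c], axis=-1) @ self.W1)
        s, t = np.split(h @ self.W2, 2, axis=-1)
        return np.tanh(s), t  # bounded log-scale for numerical stability

    def forward(self, x, c):
        x1, x2 = x[..., :self.half], x[..., self.half:]
        s, t = self._scale_shift(x1, c)
        return np.concatenate([x1, x2 * np.exp(s) + t], axis=-1)

    def inverse(self, y, c):
        y1, y2 = y[..., :self.half], y[..., self.half:]
        s, t = self._scale_shift(y1, c)
        return np.concatenate([y1, (y2 - t) * np.exp(-s)], axis=-1)

x = rng.normal(size=(4, 8))       # batch of toy "spectra"
c = np.eye(2)[[0, 1, 0, 1]]       # one-hot domain condition per sample
block = ConditionalAffineCoupling(8, 2)
y = block.forward(x, c)
x_rec = block.inverse(y, c)       # recovers x exactly, by construction
```

Because the inverse is exact (not learned), mapping a sample to the other domain and back returns the original input without a separate cycle-consistency loss term.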

    Development and evaluation of a new image-based user interface for robot-assisted needle placements with the robopsy system

    The main challenges of Computed Tomography (CT)-guided organ puncture are the mental registration of the medical imaging data with the patient anatomy, required when planning a trajectory, and the subsequent precise insertion of a needle along it. An interventional telerobotic system, such as Robopsy, enables precise needle insertion; however, in order to minimize procedure time and the number of CT scans, this system should be driven by an interface that is directly integrated with the medical imaging data. In this study we developed and evaluated such an interface, which provides the user with point-and-click functionality for specifying the desired trajectory, segmenting the needle, and automatically calculating the insertion parameters (angles and depth). To highlight the advantages of such an interface, we compared robot-assisted targeting using the old interface (non-image-based), where the path planning was performed on the CT console and transferred manually to the interface, with the targeting procedure using the new interface (image-based). We found that the mean procedure time (n=5) was 22±5 min (non-image-based) and 19±1 min (image-based), with a mean number of CT scans of 6±1 (non-image-based) and 5±1 (image-based). Although the targeting experiments were performed in gelatin with homogeneous properties, our results indicate that an image-based interface can reduce procedure time as well as the number of CT scans for percutaneous needle biopsies.
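The automatic calculation of insertion parameters from a point-and-click trajectory reduces to simple geometry once entry and target points are chosen in CT coordinates. A minimal sketch (the function name and angle conventions are assumptions for illustration, not the Robopsy interface's actual API):

```python
import math

def insertion_parameters(entry, target):
    """Hypothetical helper: derive needle insertion parameters from an
    entry point and a target point picked in CT image coordinates (mm)."""
    dx, dy, dz = (t - e for e, t in zip(entry, target))
    depth = math.sqrt(dx * dx + dy * dy + dz * dz)      # insertion depth (mm)
    # angle conventions below are assumptions: rotation within the axial
    # slice, and tilt out of the slice plane
    in_plane = math.degrees(math.atan2(dy, dx))
    out_of_plane = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return depth, in_plane, out_of_plane

depth, a_in, a_out = insertion_parameters((0.0, 0.0, 0.0), (30.0, 40.0, 0.0))
print(depth, a_in, a_out)   # → 50.0 mm, ≈53.13°, 0.0°
```

Computing these values directly from clicked image points is what removes the error-prone manual transfer step from the CT console to the robot interface.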

    The SPECTRAL Perfusion Arm Clamping dAtaset (SPECTRALPACA) for video-rate functional imaging of the skin

    Abstract Spectral imaging has the potential to become a key technique in interventional medicine, as it unveils much richer optical information than conventional RGB (red, green, and blue) imaging, thus allowing for high-resolution functional tissue analysis in real time. Its higher information density particularly shows promise for the development of powerful perfusion monitoring methods for clinical use. However, even though in vivo validation of such methods is crucial for their clinical translation, the biomedical field suffers from a lack of publicly available datasets for this purpose. Closing this gap, we generated the SPECTRAL Perfusion Arm Clamping dAtaset (SPECTRALPACA). It comprises ten spectral videos (∼20 Hz, approx. 20,000 frames each) systematically recorded of the hands of ten healthy human participants in different functional states. We paired each spectral video with concisely tracked regions of interest and corresponding diffuse reflectance measurements recorded with a spectrometer. Providing the first openly accessible in-human spectral video dataset for perfusion monitoring, our work facilitates the development and validation of new functional imaging methods.

    Tattoo tomography: Freehand 3D photoacoustic image reconstruction with an optical pattern

    Purpose: Photoacoustic tomography (PAT) is a novel imaging technique that can spatially resolve both morphological and functional tissue properties, such as vessel topology and tissue oxygenation. While this capacity makes PAT a promising modality for the diagnosis, treatment, and follow-up of various diseases, a current drawback is the limited field of view provided by the conventionally applied 2D probes. Methods: In this paper, we present a novel approach to 3D reconstruction of PAT data (Tattoo tomography) that does not require an external tracking system and can smoothly be integrated into clinical workflows. It is based on an optical pattern placed on the region of interest prior to image acquisition. This pattern is designed in a way that a single tomographic image of it enables the recovery of the probe pose relative to the coordinate system of the pattern, which serves as a global coordinate system for image compounding. Results: To investigate the feasibility of Tattoo tomography, we assessed the quality of 3D image reconstruction with experimental phantom data and in vivo forearm data. The results obtained with our prototype indicate that the Tattoo method enables the accurate and precise 3D reconstruction of PAT data and may be better suited for this task than the baseline method using optical tracking. Conclusions: In contrast to previous approaches to 3D ultrasound (US) or PAT reconstruction, the Tattoo approach neither requires complex external hardware nor training data acquired for a specific application. It could thus become a valuable tool for clinical freehand PAT.
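Once the probe pose is recovered from a single image of the pattern, compounding amounts to mapping each 2D frame's points through a rigid transform into the pattern's global coordinate system. A minimal sketch (the pose values are toy assumptions; the pose-estimation step itself is method-specific and omitted):

```python
import numpy as np

def compound_to_global(points_probe, R, t):
    """Map PAT image points (N x 3, probe frame) into the pattern's
    global frame, given the probe pose (rotation R, translation t)
    recovered from the imaged pattern."""
    return points_probe @ R.T + t

# toy pose: 90° rotation about the z-axis plus a translation (assumed values)
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([10.0, 0.0, 5.0])

frame_points = np.array([[1.0, 0.0, 0.0]])   # one pixel position, probe frame
global_points = compound_to_global(frame_points, R, t)
print(global_points)   # → [[10., 1., 5.]]
```

Because every frame is expressed in the same pattern-defined frame, frames acquired at arbitrary freehand poses can be accumulated into one consistent 3D volume without an external tracker.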

    MITK-OpenIGTLink for combining open-source toolkits in real-time computer-assisted interventions

    PURPOSE: Due to rapid developments in the research areas of medical imaging, medical image processing and robotics, computer-assisted interventions (CAI) are becoming an integral part of modern patient care. From a software engineering point of view, these systems are highly complex and research can benefit greatly from reusing software components. This is supported by a number of open-source toolkits for medical imaging and CAI such as the medical imaging interaction toolkit (MITK), the public software library for ultrasound imaging research (PLUS) and 3D Slicer. An independent inter-toolkit communication protocol such as the open image-guided therapy link (OpenIGTLink) can be used to combine the advantages of these toolkits and enable an easier realization of a clinical CAI workflow. METHODS: MITK-OpenIGTLink is presented as a network interface within MITK that allows easy-to-use, asynchronous two-way messaging between MITK and clinical devices or other toolkits. Performance and interoperability tests with MITK-OpenIGTLink were carried out considering the whole CAI workflow from data acquisition over processing to visualization. RESULTS: We present how MITK-OpenIGTLink can be applied in different usage scenarios. In performance tests, tracking data were transmitted with a frame rate of up to 1000 Hz and a latency of 2.81 ms. Transmission of images with typical ultrasound (US) and greyscale high-definition (HD) resolutions of [Formula: see text] and [Formula: see text] is possible at up to 512 and 128 Hz, respectively. CONCLUSION: With the integration of OpenIGTLink into MITK, this protocol is now supported by all established open-source toolkits in the field. This eases interoperability between MITK and toolkits such as PLUS or 3D Slicer and facilitates cross-toolkit research collaborations. MITK and its submodule MITK-OpenIGTLink are provided open source under a BSD-style licence (http://mitk.org).
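The inter-toolkit messaging described above relies on OpenIGTLink's fixed-size, big-endian message header, which lets any toolkit parse an incoming message before reading its body. A sketch of packing such a header in Python (field layout follows the published protocol specification to the best of my understanding; the CRC computation is omitted, so this is illustrative rather than wire-ready):

```python
import struct

def igtl_header(msg_type, device, body_size, timestamp=0, crc=0):
    """Pack a 58-byte OpenIGTLink v1 message header:
    uint16 version, 12-byte type name, 20-byte device name,
    uint64 timestamp, uint64 body size, uint64 CRC64 (big-endian).
    CRC64 is left as 0 here; a real sender must compute it."""
    return struct.pack(
        ">H12s20sQQQ",
        1,                          # protocol version
        msg_type.encode("ascii"),   # e.g. "TRANSFORM", null-padded to 12
        device.encode("ascii"),     # sending device name, padded to 20
        timestamp,
        body_size,
        crc,
    )

# e.g. a TRANSFORM message whose body is a 4x3 float32 matrix (48 bytes)
hdr = igtl_header("TRANSFORM", "Tracker", 48)
print(len(hdr))   # → 58
```

The small, fixed header is one reason the protocol sustains the high tracking-data rates reported above: a receiver can read exactly 58 bytes, learn the body size, and then read the body in a single further operation.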