
    DiffusionCT: Latent Diffusion Model for CT Image Standardization

    Computed tomography (CT) imaging is a widely used modality for early lung cancer diagnosis, treatment, and prognosis. Features extracted from CT images are now accepted as quantifying spatial and temporal variations in tumor architecture and function. However, CT images are often acquired with scanners from different vendors using customized acquisition standards, resulting in significantly different texture features even for the same patient and posing a fundamental challenge to downstream studies. Existing CT image harmonization models rely on supervised or semi-supervised techniques and achieve limited performance. In this paper, we propose a diffusion-based CT image standardization model, DiffusionCT, which operates in latent space by mapping the latent distribution onto a standard distribution. DiffusionCT combines a Unet-based encoder-decoder with a diffusion model embedded in its bottleneck. The Unet is first trained without the diffusion model to learn a latent representation of the input data; the diffusion model is trained in a second phase, and the trained components then work together for image standardization. The representation produced by the Unet encoder passes through the diffusion model, which maps its distribution onto the target standard image domain; finally, the decoder synthesizes a standardized image from the transformed latent representation. Experimental results show that DiffusionCT significantly improves performance on the standardization task. Comment: 6 pages, 3 figures and 1 table
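The core idea of standardizing in latent space can be illustrated without the paper's learned diffusion model. The following is a minimal numpy stand-in (not the authors' method): simple per-dimension moment matching that pulls a source-scanner latent distribution onto a target "standard" distribution; all names and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "latent" features from two scanners with different statistics.
latent_a = rng.normal(loc=2.0, scale=3.0, size=(500, 8))   # source scanner
latent_b = rng.normal(loc=0.0, scale=1.0, size=(500, 8))   # target/standard scanner

def standardize_latent(z, target_mean, target_std):
    """Map a latent batch onto target first/second moments (moment matching)."""
    z_std = (z - z.mean(axis=0)) / z.std(axis=0)
    return z_std * target_std + target_mean

mapped = standardize_latent(latent_a, latent_b.mean(axis=0), latent_b.std(axis=0))
```

DiffusionCT replaces this fixed affine map with a diffusion model trained at the bottleneck, which can reshape the full distribution rather than only its first two moments.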

    Unsupervised Medical Image Translation with Adversarial Diffusion Models

    Imputation of missing images via source-to-target modality translation can improve diversity in medical imaging protocols. A pervasive approach for synthesizing target images involves one-shot mapping through generative adversarial networks (GANs). Yet, GAN models that implicitly characterize the image distribution can suffer from limited sample fidelity. Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved performance in medical image translation. To capture a direct correlate of the image distribution, SynDiff leverages a conditional diffusion process that progressively maps noise and source images onto the target image. For fast and accurate image sampling during inference, large diffusion steps are taken with adversarial projections in the reverse diffusion direction. To enable training on unpaired datasets, a cycle-consistent architecture is devised with coupled diffusive and non-diffusive modules that bilaterally translate between two modalities. Extensive assessments are reported on the utility of SynDiff against competing GAN and diffusion models in multi-contrast MRI and MRI-CT translation. Our demonstrations indicate that SynDiff offers quantitatively and qualitatively superior performance against competing baselines. Comment: M. Ozbey and O. Dalmaz contributed equally to this study
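The forward (noising) process that SynDiff's conditional diffusion builds on can be sketched in a few lines. This is only the standard variance-preserving forward process, not SynDiff's adversarial reverse sampler; the schedule values below are common defaults, assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Variance-preserving forward diffusion:
#   x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps,  eps ~ N(0, 1)
T = 1000
betas = np.linspace(1e-4, 0.02, T)          # assumed linear noise schedule
alphabar = np.cumprod(1.0 - betas)

def q_sample(x0, t, eps):
    """Sample x_t directly from x_0 at timestep t."""
    return np.sqrt(alphabar[t]) * x0 + np.sqrt(1.0 - alphabar[t]) * eps

x0 = rng.normal(size=100_000)               # stand-in for image intensities
eps = rng.normal(size=100_000)
x_end = q_sample(x0, T - 1, eps)            # nearly pure noise at the final step
```

SynDiff's contribution is on the reverse side: taking few, large reverse steps whose denoising distributions are modeled adversarially instead of assuming small Gaussian steps.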

    Image Encryption Based on Diffusion and Multiple Chaotic Maps

    In today's world, security is an issue of prime importance, and encryption is one of the best ways to ensure it. Many image encryption schemes have been proposed, each with its own strengths and weaknesses. This paper presents a new algorithm for image encryption/decryption: a secured image encryption technique using multiple chaos-based circular mappings. First, a pair of subkeys is generated using chaotic logistic maps. Second, the image is encrypted using a logistic-map subkey, and its transformation drives the diffusion process. Third, subkeys are generated by four different chaotic maps. Depending on the initial conditions, each map can produce random numbers from various orbits; among these, a particular number from a particular orbit is selected as the key for the encryption algorithm, and a binary sequence generated from the key controls the algorithm. The 2-D input image is transformed into a 1-D array using two different scanning patterns (raster and zigzag) and then divided into sub-blocks. Position permutation and value permutation are then applied to each binary matrix based on multiple chaotic maps. Finally, the receiver uses the same subkeys to decrypt the encrypted images. The salient features of the proposed method are lossless encryption, good peak signal-to-noise ratio (PSNR), symmetric-key encryption, low cross-correlation, a very large number of secret keys, and key-dependent pixel-value replacement. Comment: 14 pages, 9 figures and 5 tables; http://airccse.org/journal/jnsa11_current.html, 201
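A minimal sketch of the building block behind such schemes: a keystream derived from the logistic map x → r·x·(1−x), applied as a symmetric XOR diffusion stage. This is a deliberately simplified single-map toy, not the paper's full multi-map permutation/diffusion pipeline, and the parameter values are assumptions.

```python
import numpy as np

def logistic_keystream(x0, r, n, burn_in=100):
    """Generate n key bytes from the chaotic logistic map x -> r*x*(1-x)."""
    x = x0
    for _ in range(burn_in):          # discard the transient so output is deep in the orbit
        x = r * x * (1.0 - x)
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256   # quantize orbit value to one byte
    return out

def xor_cipher(data, key_bytes):
    """Symmetric XOR stage: the same call encrypts and decrypts."""
    return np.bitwise_xor(data, key_bytes)

image = np.arange(64, dtype=np.uint8)                  # stand-in for a flattened image
key = logistic_keystream(x0=0.3579, r=3.99, n=image.size)
cipher = xor_cipher(image, key)
plain = xor_cipher(cipher, key)                        # round-trip recovers the image
```

Because decryption reuses the same subkeys, the scheme is lossless by construction, which is one of the salient features the abstract claims.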

    Multi-scale and multi-spectral shape analysis: from 2d to 3d

    Shape analysis is a fundamental aspect of many problems in computer graphics and computer vision, including shape matching, shape registration, object recognition and classification. Since SIFT achieves excellent matching results in the 2D image domain, it inspires us to convert 3D shape analysis into 2D image analysis using geometric maps. The major disadvantage of geometric maps, however, is that they introduce inevitable, large distortions when mapping large, complex and topologically complicated surfaces to a canonical domain, which motivates constructing the scale space directly on the 3D shape. To address these research issues, this dissertation first develops multiscale processing for 3D shapes through a shape vector image diffusion framework based on geometric mapping. We then investigate the shape spectrum field by introducing the implementation and application of the Laplacian shape spectrum. To construct the scale space directly on the 3D shape, we present a novel approach that solves the diffusion equation using manifold harmonics from a spectral point of view. Not confined to meshes, we use point-based manifold harmonics to rigorously derive our solution from the diffusion equation, which is the essence of scale-space processing on a manifold. Built upon the point-based manifold harmonics transform, we generalize the diffusion function directly to point clouds to create the scale space. Using the multiscale structure of the scale space, we detect feature points and construct descriptors based on local neighborhoods. As a result, multiscale shape analysis directly on the 3D shape can be achieved.
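Solving the diffusion equation in a harmonic (spectral) basis, as the dissertation describes, amounts to applying exp(−tL) through the eigenvectors of a Laplacian. A small sketch with a path-graph Laplacian standing in for a mesh or point-cloud Laplacian (the graph and time scales are assumptions, not the dissertation's operator):

```python
import numpy as np

# Path-graph Laplacian as a stand-in for a mesh/point-cloud Laplacian.
n = 20
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0                     # boundary rows keep zero row sums

evals, evecs = np.linalg.eigh(L)              # "manifold harmonics" basis

def diffuse(signal, t):
    """Heat diffusion exp(-t*L) applied in the harmonic basis."""
    coeffs = evecs.T @ signal                 # forward harmonic transform
    return evecs @ (np.exp(-t * evals) * coeffs)

f = np.zeros(n); f[n // 2] = 1.0              # impulse to be smoothed
f_small = diffuse(f, 0.5)                     # fine scale
f_large = diffuse(f, 5.0)                     # coarse scale: flatter, wider
```

Evaluating the signal at several diffusion times t builds exactly the kind of scale space from which multiscale feature points and descriptors can then be extracted.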

    Building connectomes using diffusion MRI: why, how and but

    Why has diffusion MRI become a principal modality for mapping connectomes in vivo? How do different image acquisition parameters, fiber tracking algorithms and other methodological choices affect connectome estimation? What are the main factors that dictate the success and failure of connectome reconstruction? These are some of the key questions that we aim to address in this review. We provide an overview of the key methods that can be used to estimate the nodes and edges of macroscale connectomes, and we discuss open problems and inherent limitations. We argue that diffusion MRI-based connectome mapping methods are still in their infancy and caution against blind application of deep white matter tractography due to the challenges inherent to connectome reconstruction. We review a number of studies that provide evidence of useful microstructural and network properties that can be extracted in various independent and biologically-relevant contexts. Finally, we highlight some of the key deficiencies of current macroscale connectome mapping methodologies and motivate future developments
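The "edges from tractography" step the review discusses can be made concrete with a toy example: once each streamline's endpoints have been assigned to parcellation regions, the connectome is just a weighted adjacency matrix of streamline counts. The streamline pairs below are hypothetical.

```python
import numpy as np

n_nodes = 4
# Hypothetical streamlines, each reduced to a (start_region, end_region) pair
# after intersecting tractography endpoints with a cortical parcellation.
streamlines = [(0, 1), (1, 0), (0, 1), (2, 3), (3, 2), (1, 2)]

def build_connectome(endpoint_pairs, n):
    """Undirected weighted adjacency: edge weight = streamline count."""
    A = np.zeros((n, n))
    for i, j in endpoint_pairs:
        A[i, j] += 1
        A[j, i] += 1
    np.fill_diagonal(A, 0)   # discard self-connections
    return A

A = build_connectome(streamlines, n_nodes)
```

Every methodological choice the review warns about (parcellation, tracking algorithm, thresholding) changes the contents of this matrix, which is why the authors caution against treating such counts as direct measures of connection strength.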

    Scanner Invariant Representations for Diffusion MRI Harmonization

    Purpose: In the present work we describe the correction of diffusion-weighted MRI for site and scanner biases using a novel method based on invariant representation. Theory and Methods: Pooled imaging data from multiple sources are subject to variation between the sources. Correcting for these biases has become very important as imaging studies increase in size and multi-site cases become more common. We propose learning an intermediate representation invariant to site/protocol variables, a technique adapted from information theory-based algorithmic fairness; by leveraging the data processing inequality, such a representation can then be used to create an image reconstruction that is uninformative of its original source, yet still faithful to underlying structures. To implement this, we use a deep learning method based on variational auto-encoders (VAE) to construct scanner invariant encodings of the imaging data. Results: To evaluate our method, we use training data from the 2018 MICCAI Computational Diffusion MRI (CDMRI) Challenge Harmonization dataset. Our proposed method shows improvements on independent test data relative to a recently published baseline method on each subtask, mapping data from three different scanning contexts to and from one separate target scanning context. Conclusion: As imaging studies continue to grow, the use of pooled multi-site imaging will similarly increase. Invariant representation presents a strong candidate for the harmonization of these data
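For contrast with the paper's learned VAE-based invariant encodings, the classical baseline it improves upon can be sketched directly: a per-site location/scale correction that removes the simplest form of site information from pooled features. This is not the authors' method, only an illustrative toy with assumed data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy diffusion-MRI-derived features from two sites; site A has an additive bias.
true_signal = rng.normal(size=(200, 5))
site_a = true_signal[:100] + np.array([0.8, -0.5, 0.3, 0.0, 1.2])
site_b = true_signal[100:]

def harmonize_per_site(features_by_site):
    """Remove per-site mean and scale: a location/scale harmonization baseline."""
    return [(X - X.mean(axis=0)) / X.std(axis=0) for X in features_by_site]

ha, hb = harmonize_per_site([site_a, site_b])
```

After this step a classifier can no longer separate sites by first or second moments alone; the paper's invariant-representation approach generalizes this idea, learning an encoding from which the site variable is statistically unrecoverable while the reconstruction stays faithful.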

    Effective Real Image Editing with Accelerated Iterative Diffusion Inversion

    Despite all recent progress, it is still challenging to edit and manipulate natural images with modern generative models. When using a Generative Adversarial Network (GAN), one major hurdle is the inversion process that maps a real image to its corresponding noise vector in the latent space, since it is necessary to reconstruct an image in order to edit its contents. Likewise, for Denoising Diffusion Implicit Models (DDIM), the linearization assumption made in each inversion step renders the whole deterministic inversion process unreliable. Existing approaches that tackle inversion stability often incur significant trade-offs in computational efficiency. In this work we propose an Accelerated Iterative Diffusion Inversion method, dubbed AIDI, that significantly improves reconstruction accuracy with minimal additional overhead in space and time complexity. By using a novel blended guidance technique, we show that effective results can be obtained on a wide range of image editing tasks without large classifier-free guidance in inversion. Furthermore, compared with other diffusion-inversion-based works, our proposed process is shown to be more robust for fast image editing in the 10- and 20-diffusion-step regimes. Comment: Accepted to ICCV 2023 (Oral)
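The linearization problem the abstract describes, and the iterative fix, can be shown on a toy implicit equation. Plain DDIM inversion evaluates the update function once at the previous point; an AIDI-style refinement instead iterates to the fixed point of the implicit step, removing the linearization error. The function g below is a stand-in, not the actual noise-prediction update.

```python
import math

def fixed_point(g, x0, n_iter=50):
    """Iterate x <- g(x): fixed-point refinement of an implicit inversion step."""
    x = x0
    for _ in range(n_iter):
        x = g(x)
    return x

# Toy implicit equation x = g(x) standing in for one DDIM inversion step.
g = lambda x: 0.5 * math.cos(x)     # contraction, so the iteration converges

naive = g(0.0)                      # one evaluation, as in plain linearized inversion
refined = fixed_point(g, 0.0)       # iterated until self-consistent
```

Because g is a contraction, a handful of iterations already makes the residual |x − g(x)| negligible, which mirrors the paper's claim of better reconstruction at minimal extra cost.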

    Doctor of Philosophy

    Magnetic resonance imaging (MRI) techniques are widely applied as noninvasive methods in disease diagnosis and scientific research. However, low signal-to-noise ratio (SNR), B1 inhomogeneity, motion-related artifacts, susceptibility artifacts, chemical-shift artifacts and Gibbs ringing still degrade image quality, and various techniques have been developed to minimize and remove this degradation. In the first part of this dissertation, a motion-artifact reduction technique based on a novel real-time self-gated pulse sequence is presented. Diffusion-weighted and diffusion tensor MRI are generally performed with signal averaging over multiple measurements to improve the signal-to-noise ratio and the accuracy of the diffusion measurement; any discrepancy between images from different averages causes errors that reduce that accuracy. The new scheme detects a subject's motion and reacquires motion-corrupted data in real time, which helps improve the accuracy of diffusion MRI measurements. In the second part, a rapid T1 mapping technique (two-dimensional single-shot spin-echo stimulated echo planar imaging, 2D ss-SESTEPI) is discussed: an EPI-based single-shot technique that simultaneously acquires a spin-EPI (SEPI) and a stimulated-EPI (STEPI) after a single RF excitation. The magnitudes of SEPI and STEPI differ by T1 decay for perfect 90° RF pulses and can be used to rapidly measure the T1 relaxation time. However, spatial variation of the B1 amplitude induces uneven splitting of the transverse magnetization between SEPI and STEPI within the imaging FOV, so correction for B1 inhomogeneity is critical if 2D ss-SESTEPI is to be used for T1 measurement. In general, EPI-based pulse sequences suffer from geometric distortion near air-tissue and bone-tissue boundaries. In the third part, a novel pulse sequence is discussed, developed from three-dimensional single-shot diffusion-weighted stimulated echo planar imaging (3D ss-DWSTEPI). A parallel imaging technique is combined with 3D ss-DWSTEPI to reduce image distortion, and the secondary spin echo formed by the three RF pulses (90°-180°-90°) is used to improve the SNR and image quality.
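The T1 estimate from an SEPI/STEPI pair can be written in closed form. Assuming ideal 90° pulses and the standard stimulated-echo signal model S_STE = (1/2)·S_SE·exp(−TM/T1) (the 1/2 splitting factor and the TM value below are assumptions for illustration, not taken from the dissertation):

```python
import numpy as np

def t1_from_se_ste(s_se, s_ste, tm):
    """T1 from a spin-echo / stimulated-echo pair, assuming ideal 90-degree
    pulses so that S_STE = 0.5 * S_SE * exp(-TM / T1)."""
    return tm / np.log(s_se / (2.0 * s_ste))

# Simulate signals for a known T1 and check the formula recovers it.
t1_true, tm, s_se = 1200.0, 300.0, 1.0                 # ms, ms, arbitrary units
s_ste = 0.5 * s_se * np.exp(-tm / t1_true)             # forward signal model
t1_est = t1_from_se_ste(s_se, s_ste, tm)
```

Spatial B1 variation breaks the ideal 50/50 magnetization split assumed here, biasing the ratio S_SE/S_STE voxel by voxel, which is exactly why the dissertation emphasizes B1-inhomogeneity correction before using 2D ss-SESTEPI for T1 mapping.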