1,485 research outputs found

    TranSMS: Transformers for Super-Resolution Calibration in Magnetic Particle Imaging

    Magnetic particle imaging (MPI) offers exceptional contrast for magnetic nanoparticles (MNPs) at high spatio-temporal resolution. A common procedure in MPI starts with a calibration scan to measure the system matrix (SM), which is then used to set up an inverse problem for reconstructing images of the MNP distribution during subsequent scans. This calibration enables the reconstruction to sensitively account for various system imperfections. Yet the time-consuming SM measurements must be repeated whenever system properties change notably. Here, we introduce a novel deep learning approach for accelerated MPI calibration based on Transformers for SM super-resolution (TranSMS). Low-resolution SM measurements are performed using large MNP samples for improved signal-to-noise ratio efficiency, and the high-resolution SM is super-resolved via model-based deep learning. TranSMS leverages a vision transformer module to capture contextual relationships in low-resolution input images, a dense convolutional module for localizing high-resolution image features, and a data-consistency module to ensure measurement fidelity. Demonstrations on simulated and experimental data indicate that TranSMS significantly improves SM recovery and MPI reconstruction for up to 64-fold acceleration in two-dimensional imaging.
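Once the system matrix is calibrated, the reconstruction step the abstract refers to amounts to solving a linear inverse problem u = S c for the concentration c. A minimal numpy sketch of that step, with toy sizes and a simple Tikhonov-regularized solve standing in for the paper's far more elaborate pipeline (all names and dimensions here are illustrative assumptions, not the authors' code):

```python
import numpy as np

# Toy MPI-style inverse problem: measurements u = S @ c, with S the
# (complex-valued) system matrix and c the nonnegative MNP concentration.
rng = np.random.default_rng(0)
m, n = 40, 16                                 # frequency components x pixels (toy sizes)
S = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
c_true = rng.random(n)                        # ground-truth concentration
noise = 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))
u = S @ c_true + noise                        # noisy measurement vector

# Tikhonov-regularized least-squares reconstruction via the normal equations.
lam = 1e-3
c_hat = np.linalg.solve(S.conj().T @ S + lam * np.eye(n), S.conj().T @ u).real

err = np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true)
```

The point of TranSMS is to make the calibration of S itself cheaper; the reconstruction solve above is unchanged by that acceleration.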

    L1 data fitting for robust reconstruction in magnetic particle imaging: quantitative evaluation on Open MPI dataset

    Magnetic particle imaging is an emerging quantitative imaging modality that exploits the unique nonlinear magnetization of superparamagnetic iron oxide nanoparticles to recover their concentration. Traditionally, the reconstruction is formulated as a penalized least-squares problem with a nonnegativity constraint and solved using a variant of the Kaczmarz method, which is often stopped early after a small number of iterations. Besides the phantom signal, the measurements also include a background signal and a noise signal. To obtain good reconstructions, a preprocessing step of frequency selection is often adopted to remove the deleterious influence of the noise. In this work, we propose a complementary, purely variational approach to noise treatment: we view highly noisy measurements as outliers and employ ℓ1 data fitting, a popular approach from robust statistics. Compared with the standard approach, it is easy to implement and has comparable computational complexity. Experiments with a public-domain dataset, the Open MPI dataset, show that it gives accurate reconstructions and is less prone to noisy measurements, as illustrated by quantitative (PSNR / SSIM) and qualitative comparisons with the Kaczmarz method. We also quantitatively investigate the performance of the Kaczmarz method for small iteration numbers.
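The standard baseline described above, a few sweeps of the Kaczmarz method with a nonnegativity projection, can be sketched in a few lines of numpy (illustrative sizes and noise-free toy data; real MPI reconstructions use regularized variants, frequency selection, and early stopping):

```python
import numpy as np

def kaczmarz_nonneg(A, b, sweeps=200):
    """Row-action Kaczmarz sweeps with a nonnegativity projection per sweep."""
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.einsum('ij,ij->i', A, A)      # squared row norms, precomputed
    for _ in range(sweeps):
        for i in range(m):
            # Project the current iterate onto the hyperplane A[i] @ x = b[i].
            x = x + ((b[i] - A[i] @ x) / row_norms[i]) * A[i]
        x = np.maximum(x, 0.0)                   # enforce nonnegative concentration
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 20))                # toy "system matrix"
x_true = np.abs(rng.standard_normal(20))         # nonnegative phantom
b = A @ x_true                                   # noise-free toy measurements
x_rec = kaczmarz_nonneg(A, b)
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

The paper's proposal swaps the implicit least-squares data term of this scheme for an ℓ1 data-fitting term, which downweights the highly noisy rows instead of requiring them to be filtered out beforehand.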

    High Performance Reconstruction Framework for Straight Ray Tomography: from Micro to Nano Resolution Imaging

    We develop a high-performance scheme to reconstruct straight-ray tomographic scans. We preserve the quality of the state-of-the-art schemes typically found in traditional computed tomography while substantially reducing the computational cost. Our approach is based on 1) a rigorous discretization of the forward model using a generalized sampling scheme; 2) a variational formulation of the reconstruction problem; and 3) iterative reconstruction algorithms that use the alternating-direction method of multipliers. To improve the quality of the reconstruction, we take advantage of total-variation regularization and its higher-order variants. In addition, prior information on the support and the positivity of the refractive index is also considered, which yields significant improvements. The two challenging applications to which we apply the methods of our framework are grating-based x-ray imaging (GI) and single-particle analysis (SPA). In the context of micro-resolution GI, three complementary characteristics are measured: the conventional absorption contrast, the differential phase contrast, and the small-angle scattering contrast. While these three measurements provide powerful insights on biological samples, until now they have required a large dose deposition that could potentially harm the specimens (e.g., in small-rodent scanners). As it turns out, we are able to preserve the image quality of filtered-back-projection-type methods despite the fewer acquisition angles and the lower signal-to-noise ratio implied by a reduction in the total dose of in-vivo grating interferometry. To achieve this, we first apply our reconstruction framework to differential phase-contrast imaging (DPCI). We then add Jacobian-type regularization to simultaneously reconstruct phase and absorption. The experimental results confirm the power of our method. This is a crucial step toward the deployment of DPCI in medicine and biology.
    Our algorithms have been implemented in the TOMCAT laboratory of the Paul Scherrer Institute. In the context of near-atomic-resolution SPA, we need to cope with hundreds or thousands of noisy projections of macromolecules onto different micrographs. Moreover, each projection has an unknown orientation and is blurred by some space-dependent point-spread function of the microscope. Consequently, the determination of the structure of a macromolecule involves not only a reconstruction task but also the deconvolution of each projection image. We formulate this problem as a constrained regularized reconstruction. We are able to directly include the contrast transfer function in the system matrix without any extra computational cost. The experimental results suggest that our approach brings a significant improvement in the quality of the reconstruction. Our framework also provides an important step toward the application of SPA for the de novo generation of macromolecular models. The corresponding algorithms have been implemented in Xmipp.
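The ADMM-plus-TV machinery at the heart of this framework can be illustrated on the simplest possible instance, 1D total-variation denoising, where the same splitting appears in miniature: a quadratic x-update, a soft-thresholding z-update on the gradient, and a dual update. This is a hedged toy sketch under those standard definitions, not the authors' tomographic implementation:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_denoise_admm(y, lam=1.0, rho=1.0, iters=200):
    """Minimize 0.5*||x - y||^2 + lam*||D x||_1 via ADMM with the split z = D x."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)            # forward-difference (discrete gradient)
    x = y.copy()
    z = D @ x
    u = np.zeros(n - 1)                       # scaled dual variable
    M = np.eye(n) + rho * D.T @ D             # factor once in a serious implementation
    for _ in range(iters):
        x = np.linalg.solve(M, y + rho * D.T @ (z - u))   # quadratic x-update
        z = soft(D @ x + u, lam / rho)                    # shrinkage z-update
        u += D @ x - z                                    # dual ascent
    return x

# Noisy piecewise-constant signal: TV regularization should recover flat pieces.
rng = np.random.default_rng(2)
y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
x = tv_denoise_admm(y, lam=0.5)
```

In the tomographic setting the quadratic term becomes a projector-based data term and the gradient becomes two- or three-dimensional, but the alternating structure is identical.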

    Using the ℓ1-norm for Image-based tomographic reconstruction

    This paper introduces an ℓ1-norm model based on total-variation minimization for tomographic reconstruction. The reconstructions produced by the proposed model are more accurate than those obtained with classical reconstruction models based on the ℓ2-norm. The model can be linearized and solved with linear-programming techniques. Furthermore, the complementary slackness conditions can be exploited to reduce the dimension of the resulting formulation by removing unnecessary variables and constraints. Since the efficacy of the reduced formulation strongly depends on the quality of the dual multipliers used when applying the reduction method, Lagrangian relaxation is used to obtain near-optimal multipliers. This allows larger instances to be solved efficiently.
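The linearization mentioned in the abstract is the classical LP reformulation of an ℓ1 fit: minimize the sum of slack variables t_i subject to -t <= Ax - b <= t. A small sketch of that reformulation using scipy.optimize.linprog, on illustrative random data with a few gross outliers that an ℓ1 fit tolerates (the paper additionally applies the dimension-reduction and Lagrangian-relaxation steps it describes, which are not shown here):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
m, n = 30, 5
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true
b[::10] += 5.0                      # a few gross outliers in the data

# LP variables are [x; t]: minimize sum(t) subject to -t <= A x - b <= t.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)],   #  A x - t <= b
                 [-A, -np.eye(m)]]) # -A x - t <= -b
b_ub = np.concatenate([b, -b])
bounds = [(None, None)] * n + [(0, None)] * m   # x free, t nonnegative
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x_l1 = res.x[:n]
```

Because most equations are outlier-free, the ℓ1 solution sits on the clean equations and ignores the corrupted ones, which is exactly the robustness argument the paper builds on.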