Correlation between geometrically induced oxygen octahedral tilts and multiferroic behaviors in BiFeO3 films
The equilibrium positions of atoms in a unit cell are directly connected to crystal functionalities, e.g., ferroelectricity, ferromagnetism, and piezoelectricity. Artificially tuning the energy landscape can reposition atoms and thereby manipulate the functionalities of perovskites (ABO3), which are good model systems for testing this principle. Mechanical energy from external sources, transmitted through various clamping substrates, is used to perturb the energy state of perovskite films fabricated on those substrates and consequently change their functionalities; however, this approach yields complex and often undesired responses of the perovskite crystal, such as lattice distortion, displacement of B-site atoms, and/or tilting of the oxygen octahedra. Owing to complementary experimental and theoretical studies, the effects of both lattice distortion and B-site displacement are now well understood, which leaves a simple question: can we exclusively control the positions of the oxygen atoms in perovskites to manipulate functionality? Here, the artificial manipulation of oxygen octahedral tilt angles within multiferroic BiFeO3 thin films, using strong oxygen octahedral coupling with bottom SrRuO3 layers, is reported, opening up new possibilities for oxygen octahedral engineering.
Likelihood-based bilateral filters for pre-estimated basis sinograms using photon-counting CT
Background: Noise amplification in material decomposition is an obstacle to exploiting photon-counting computed tomography (PCCT). Regularization techniques and neighborhood filters have been widely used, but degraded spatial resolution and bias are concerns. Purpose: This paper proposes likelihood-based bilateral filters that can be applied to pre-estimated basis sinograms to reduce noise while minimally affecting spatial resolution and accuracy. Methods: The proposed method requires system models (e.g., incident spectrum, detector response) to calculate the likelihood. First, it performs maximum likelihood (ML)-based estimation in the projection domain to obtain basis sinograms. The estimated basis sinograms suffer from severe noise but are asymptotically unbiased and do not degrade spatial resolution. The method then calculates the neighborhood likelihoods of a given measurement at the center pixel using the neighborhood estimates and designs the weights based on the distance between likelihoods. The filter is also analyzed in terms of statistical inference, and two variations are introduced: one requires a significance level instead of an empirical hyperparameter; the other is a measurement-based filter that can be applied without the system models when accurate estimates are available. The proposed methods were validated by analyzing the local properties of noise and spatial resolution and the global trends of noise and bias using numerical thorax and abdominal phantoms for a two-material decomposition (water and bone). They were compared to conventional neighborhood filters and to model-based iterative reconstruction with an edge-preserving penalty applied in the basis images. Results: The proposed method showed comparable or superior performance in the local and global properties relative to conventional methods in many cases. For the thorax phantom, the full width at half maximum (FWHM) decreased by −2%–31% (a negative value indicates an increase relative to the best-performing conventional method), and the global bias was reduced by 2%–19% compared to other methods at similar noise levels (local: 51% of the ML, global: 49%) in the water basis image; the FWHM decreased by 8%–31%, and the global bias was reduced by 9%–44% at similar noise levels (local: 44% of the ML, global: 36%) in the CT image at 65 keV. For the abdominal phantom, the FWHM decreased by 10%–32%, and the global bias was reduced by 3%–35% compared to other methods at similar noise levels (local: 66% of the ML, global: 67%) in the water basis image; the FWHM decreased by −11%–47%, and the global bias was reduced by 13%–35% at similar noise levels (local: 71% of the ML, global: 70%) in the CT image at 60 keV. Conclusions: This paper introduced likelihood-based bilateral filters as a post-processing method applied to ML-based estimates of basis sinograms. The proposed filters effectively reduced noise in the basis images and in the synthesized monochromatic CT images, showing the potential of likelihood-based filters in the projection domain as a substitute for conventional regularization or filtering methods. © 2023 American Association of Physicists in Medicine.
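To make the projection-domain ML step concrete, below is a minimal Python sketch of per-ray maximum likelihood basis estimation under a Poisson counting model. The spectrum, bin sensitivities, attenuation curves, photon count, and function names are crude illustrative placeholders, not the calibrated system models the paper assumes.

```python
import numpy as np
from scipy.optimize import minimize

# Crude placeholder physics (NOT calibrated): keV grid, two basis attenuation
# curves ("water", "bone"), and 4 rectangular energy-bin sensitivities.
E = np.arange(20.0, 121.0)
mu = np.stack([0.35 * np.exp(-E / 30.0) + 0.02,     # toy "water" curve
               1.20 * np.exp(-E / 25.0) + 0.05])    # toy "bone" curve
spectrum = np.exp(-0.5 * ((E - 60.0) / 20.0) ** 2)  # toy incident spectrum
edges = [20, 45, 65, 85, 121]
S = np.stack([spectrum * ((E >= lo) & (E < hi))     # bin response S_b(E)
              for lo, hi in zip(edges[:-1], edges[1:])])
N0 = 1e5                                            # photons per ray per bin

def expected_counts(A):
    """Mean PCD bin counts lambda_b(A) for basis line-integrals A."""
    atten = np.exp(-(A[:, None] * mu).sum(axis=0))  # exp(-sum_m A_m mu_m(E))
    return N0 * S @ atten / S.sum(axis=1).clip(1e-12)

def neg_log_likelihood(A, y):
    """Poisson negative log-likelihood of measured bin counts y given A."""
    lam = expected_counts(A).clip(1e-12)
    return np.sum(lam - y * np.log(lam))

def ml_estimate(y, A0=np.array([1.0, 0.1])):
    """Per-pixel ML basis estimate; applied ray-by-ray over the sinogram."""
    return minimize(neg_log_likelihood, A0, args=(y,),
                    method="Nelder-Mead").x

# Example: simulate one ray through 20 cm water + 1 cm bone and re-estimate.
rng = np.random.default_rng(0)
y = rng.poisson(expected_counts(np.array([20.0, 1.0])))
print(ml_estimate(y))
```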
Likelihood-based bilateral filtration in material decomposition for photon counting CT
The maximum likelihood (ML) principle has been a gold standard for estimating basis line-integrals due to its optimal statistical properties. However, the estimates are sensitive to noise under large attenuation or low dose levels. One may apply filtering to the estimated basis sinograms or use model-based iterative reconstruction; both approaches effectively reduce noise, but the degraded spatial resolution is a concern. In this study, we propose a likelihood-based bilateral filter (LBF) for the estimated basis sinograms that reduces noise while preserving spatial resolution. It is a post-processing filter applied to the ML-based basis line-integrals, which are noisy but exhibit minimal degradation of spatial resolution. The proposed filter weights neighbors by their likelihoods instead of by pixel-value differences as in the original bilateral filtration. Two-material decomposition (water and bone) results demonstrate that the proposed method achieves an improved noise-versus-spatial-resolution trade-off compared to conventional methods. © 2022 SPIE
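A minimal sketch of the likelihood-weighted bilateral filter idea follows. The Gaussian kernels, the `sigma_s`/`sigma_l` hyperparameters, and all names are assumptions for illustration; the user-supplied `loglik` callable stands in for the system-model likelihood described above.

```python
import numpy as np

def likelihood_bilateral_filter(A_hat, counts, loglik, radius=3,
                                sigma_s=1.5, sigma_l=1.0):
    """One 2-D pass of a likelihood-weighted bilateral filter (sketch).

    A_hat  : (H, W, M) ML basis-sinogram estimates (M basis materials)
    counts : (H, W, B) measured PCD bin counts (B energy bins)
    loglik : callable loglik(y, A) -> log-likelihood of bin counts y
             under basis line-integrals A (system model supplied by user)
    """
    H, W, M = A_hat.shape
    out = np.copy(A_hat)
    for i in range(H):
        for j in range(W):
            y = counts[i, j]
            l_c = loglik(y, A_hat[i, j])          # likelihood at the center
            num, den = np.zeros(M), 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < H and 0 <= jj < W):
                        continue
                    # Likelihood of the *center* measurement under the
                    # neighbor's estimate, per the papers' description.
                    l_n = loglik(y, A_hat[ii, jj])
                    w = (np.exp(-(di**2 + dj**2) / (2 * sigma_s**2)) *
                         np.exp(-((l_c - l_n) ** 2) / (2 * sigma_l**2)))
                    num += w * A_hat[ii, jj]
                    den += w
            out[i, j] = num / den
    return out
```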
A data-driven maximum likelihood classification for nanoparticle agent identification in photon-counting CT
A nanoparticle agent, combined with a targeting factor that reacts with lesions, enables lesion-specific CT imaging; identifying such agents therefore has the potential to improve clinical diagnosis. Thanks to the energy sensitivity of the photon-counting detector (PCD), the K-edges of nanoparticle agents within the clinical x-ray energy range can be exploited to identify the agents. In this paper, we propose a novel data-driven approach for nanoparticle agent identification using the PCD. We generate two sets of training data consisting of PCD measurements from calibration phantoms, one in the presence of the nanoparticle agent and the other in its absence. For a given sinogram of PCD counts, the proposed method calculates the normalized log-likelihood sinogram for each class (class 1: with the agent; class 2: without the agent) using the K-nearest-neighbors (KNN) estimator, backprojects the sinograms, and compares the backprojection images to identify the agent. We also prove that the proposed algorithm is equivalent to maximum likelihood-based classification. We studied robustness to dose reduction with gold nanoparticles as the K-edge contrast medium and demonstrated that the proposed method identifies targets with different concentrations of the agents without background noise.
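The per-class likelihood computation via KNN can be sketched as below, assuming the standard k-NN density estimate log p(y) ≈ log(k/n) − D·log r_k (unit-ball constant dropped). The backprojection and image-comparison steps are omitted, and all names are hypothetical.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_log_likelihood(train, query, k=10):
    """k-NN density estimate of log p(y) from calibration samples (sketch)."""
    n, D = train.shape
    nn = NearestNeighbors(n_neighbors=k).fit(train)
    dists, _ = nn.kneighbors(query)          # distances to k nearest samples
    r_k = dists[:, -1].clip(1e-12)           # radius enclosing k samples
    return np.log(k / n) - D * np.log(r_k)   # constant terms dropped

def agent_log_likelihood_ratio(counts_sino, train_with, train_without, k=10):
    """Per-ray class evidence for a test sinogram of PCD bin counts.

    counts_sino : (n_rays, B) bin counts; train_with / train_without are
    calibration measurements with and without the nanoparticle agent.
    The papers backproject the two likelihood sinograms and compare the
    images; here we return the per-ray log-likelihood ratio instead.
    """
    return (knn_log_likelihood(train_with, counts_sino, k) -
            knn_log_likelihood(train_without, counts_sino, k))
```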
Technical Note: The nearest neighborhood-based approach for estimating basis line-integrals using photon-counting detector
Purpose: This study aims to develop a calibration-based estimator for photon-counting detector (PCD)-based x-ray computed tomography. Methods: We propose the nearest neighborhood (NN)-based estimator, which searches for the nearest calibration data point for a given PCD output and takes the associated basis line-integrals as the estimate. The search can be accelerated using a pre-calculated k-d tree over the calibration data. Results: The proposed method is compared to the model-based maximum likelihood (ML) estimator. In a slab phantom study, both the ML and NN-based methods achieve the Cramér-Rao lower bound and are unbiased for various combinations of three basis materials (water, bone, and gold). The proposed method is also validated for K-edge imaging and yields almost unbiased Au concentrations in the region of interest. Conclusions: The proposed NN-based method is demonstrated to be as accurate as the model-based ML estimator while being computationally efficient and requiring only calibration measurements. © 2021 American Association of Physicists in Medicine
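A minimal sketch of such a calibration lookup using SciPy's k-d tree follows; the class name and the raw use of bin counts as search coordinates are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

class NNBasisEstimator:
    """Calibration-based nearest-neighbor estimator (sketch).

    cal_counts : (N, B) PCD bin counts measured for N calibration slabs
    cal_basis  : (N, M) known basis line-integrals of those slabs
    """
    def __init__(self, cal_counts, cal_basis):
        self.tree = cKDTree(cal_counts)   # pre-built k-d tree over counts
        self.cal_basis = np.asarray(cal_basis)

    def estimate(self, counts):
        """Return the basis line-integrals of the nearest calibration point.

        counts may be one measurement (B,) or a batch (n, B); query time is
        logarithmic in N thanks to the pre-calculated tree.
        """
        _, idx = self.tree.query(counts)
        return self.cal_basis[idx]
```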
Iodine-enhanced Liver Vessel Segmentation in Photon Counting Detector-based Computed Tomography using Deep Learning
Liver vessel segmentation is important in diagnosing and treating liver diseases. Iodine-based contrast agents are typically used to improve liver vessel segmentation by enhancing vascular contrast. However, conventional computed tomography (CT) remains limited by low contrast due to its energy-integrating detectors. Photon counting detector-based computed tomography (PCD-CT) provides high vascular contrast in CT images by using multi-energy information, thereby enabling accurate liver vessel segmentation. In this paper, we propose a deep learning-based liver vessel segmentation method that takes advantage of the multi-energy information from PCD-CT. We develop a 3D UNet that segments vascular structures within the liver from four multi-energy bin images, which separate the iodine contrast agent. Experimental results on a simulated abdominal phantom dataset demonstrate that our proposed method for PCD-CT outperforms the standard deep learning segmentation method with conventional CT in terms of Dice overlap score and 3D vascular structure visualization. © 2022 SPIE
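As a rough illustration of the input/output structure (not the authors' network), below is a minimal two-level 3D U-Net in PyTorch that accepts the four energy-bin volumes as input channels and emits per-voxel vessel logits; depth, width, and names are assumptions.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    """Two 3x3x3 conv + BN + ReLU layers, the basic 3D U-Net unit."""
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, 3, padding=1),
        nn.BatchNorm3d(c_out), nn.ReLU(inplace=True),
        nn.Conv3d(c_out, c_out, 3, padding=1),
        nn.BatchNorm3d(c_out), nn.ReLU(inplace=True))

class TinyUNet3D(nn.Module):
    """Two-level 3D U-Net taking the 4 PCD energy-bin volumes as channels."""
    def __init__(self, in_ch=4, base=16):
        super().__init__()
        self.enc1 = block(in_ch, base)
        self.enc2 = block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.up = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = block(base * 2, base)
        self.head = nn.Conv3d(base, 1, 1)    # binary vessel-mask logits

    def forward(self, x):                    # x: (N, 4, D, H, W)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)                 # train with Dice/BCE loss

# Smoke test on a toy 4-bin volume (sizes divisible by 2 for the pooling).
logits = TinyUNet3D()(torch.zeros(1, 4, 32, 64, 64))
print(logits.shape)  # torch.Size([1, 1, 32, 64, 64])
```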
Deep Learning-based Prior toward Normalized Metal Artifact Reduction in Computed Tomography
X-ray computed tomography (CT) often suffers from scatter and beam-hardening artifacts in the presence of metal. These metal artifacts are problematic because severe distortions in the CT images deteriorate diagnostic quality in clinical applications such as orthopedic arthroplasty. The normalized metal artifact reduction (NMAR) method effectively reduces the artifacts by normalizing the sinogram within the metal traces using the forward projection of a prior image. Because the prior image is a thresholded CT image with the air and soft-tissue values replaced, it differs noticeably from the ideal CT image, leaving the normalized sinogram not completely flat. In this paper, we propose a novel NMAR method with a deep learning-enhanced prior image, denoised by learning the relationship between NMAR outputs and clean images without metal artifacts. The denoised prior image is then forward projected to correct the sinogram within the metal trace. Experimental results on a simulated rat phantom dataset demonstrate that our proposed deep-prior NMAR achieves a higher structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) than the original NMAR. © 2022 SPIE
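The sinogram-domain NMAR correction that the denoised prior feeds into can be sketched as follows. The forward projection of the prior, the denoising network itself, and the final reconstruction are outside the sketch, and all names are hypothetical.

```python
import numpy as np

def nmar_correct(sino, prior_sino, metal_trace, eps=1e-6):
    """Normalized metal artifact reduction in the sinogram domain (sketch).

    sino        : (n_views, n_dets) measured sinogram with metal artifacts
    prior_sino  : forward projection of the (deep-learning-denoised) prior
    metal_trace : boolean mask of detector bins shadowed by metal, per view
    """
    norm = sino / (prior_sino + eps)            # flatten the sinogram
    out = norm.copy()
    cols = np.arange(sino.shape[1])
    for v in range(sino.shape[0]):              # interpolate each view
        bad = metal_trace[v]
        if bad.any() and (~bad).any():
            # Linearly bridge the metal trace using the flattened values.
            out[v, bad] = np.interp(cols[bad], cols[~bad], norm[v, ~bad])
    return out * (prior_sino + eps)             # undo the normalization
```

A flatter normalized sinogram makes this interpolation step more accurate, which is why improving the prior image (here via the learned denoiser) directly improves the corrected result.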