
    Bringing Grayscale to Ghost Translation

    In recent decades, physicists developed ghost imaging, an alternative to the conventional imaging used everywhere by cameras. Ghost imaging earned its name because none of the light that reaches the camera-like detector ever interacts with the subject of the image, yet the technique still produces an image of that object. The method initially used two correlated beams of light: one beam interacted with the object and was collected by a bucket detector without spatial resolution, while the other was simply collected by a spatially resolved detector. A single-beam method, known as computational ghost imaging, was later developed, in which a device modulates the spatial pattern of the lone beam. Because the propagation of the light can be computed, the object can be imaged without any spatially resolved detector at all. Deep learning techniques were then adopted from computer science to improve computational ghost imaging. In the past few years, researchers have begun applying a type of deep learning network called a Transformer, resulting in a regime known as ghost translation. This regime appears robust to noise and could enable major computational shortcuts compared to approaches that use other types of neural networks. However, ghost translation has so far been developed only for simple binary images, which do not resemble the images found in most real-life or laboratory applications. I build on this recent work, exploring the feasibility of extending the regime from binary images to grayscale ones. I take concrete steps toward a network that can handle such images and identify promising new directions, suggesting that such a network is within near-term reach. Grayscale images appear in common imaging applications, and they are the step immediately preceding the full-color images that are the hallmark of conventional imaging. Hence, making this regime compatible with grayscale images opens the door to useful laboratory applications and promotes the discovery of further uses for computational ghost imaging. While a fully capable grayscale Transformer network is not yet here, I bring it several steps closer in this work.
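    The single-beam scheme described above can be sketched with a standard correlation-based reconstruction: illuminate the object with known patterns, record only a bucket (total-intensity) signal, and correlate. The object, pattern count, and detector model below are illustrative assumptions for the sketch, not the author's setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical grayscale ground-truth object, used here only to
    # simulate the bucket-detector measurements.
    size = 16
    obj = np.zeros((size, size))
    obj[4:12, 4:12] = np.linspace(0.2, 1.0, 8)  # simple grayscale gradient

    # Random binary illumination patterns projected onto the object
    # (computational ghost imaging: the patterns are known, not measured).
    n_patterns = 4000
    patterns = rng.integers(0, 2, size=(n_patterns, size, size)).astype(float)

    # Bucket detector: one number per pattern, total light collected
    # after the patterned beam hits the object -- no spatial resolution.
    bucket = np.einsum('nij,ij->n', patterns, obj)

    # Correlation reconstruction: weight each known pattern by the
    # fluctuation of its bucket signal about the mean.
    recon = np.einsum('n,nij->ij', bucket - bucket.mean(), patterns) / n_patterns

    # The reconstruction correlates with the object even though no
    # spatially resolved detector ever saw it.
    corr = np.corrcoef(recon.ravel(), obj.ravel())[0, 1]
    print(f"correlation with object: {corr:.2f}")
    ```

    A deep network (and, in the ghost-translation regime, a Transformer) replaces this linear correlation step, reconstructing the image from far fewer bucket measurements than the correlation estimate needs.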

    k-Space Deep Learning for Reference-free EPI Ghost Correction

    Nyquist ghost artifacts in EPI originate from phase mismatch between the even and odd echoes. Conventional correction methods using reference scans, however, often produce erroneous results, especially in high-field MRI, due to non-linear and time-varying local magnetic field changes. Recently, it was shown that ghost correction can be reformulated as a k-space interpolation problem solvable with structured low-rank Hankel matrix approaches. Another recent work showed that data-driven Hankel matrix decomposition can be reformulated to exhibit a structure similar to a deep convolutional neural network. By synergistically combining these findings, we propose a k-space deep learning approach that immediately corrects the phase mismatch without a reference scan in both accelerated and non-accelerated EPI acquisitions. To take advantage of the even- and odd-phase directional redundancy, the k-space data is divided into two channels configured with even and odd phase encodings. The redundancies between coils are also exploited by stacking the multi-coil k-space data into additional input channels. Our k-space ghost correction network is then trained to learn the interpolation kernel that estimates the missing virtual k-space data. For accelerated EPI data, the same neural network is trained to directly estimate the interpolation kernels for the k-space data missing from both ghosting and subsampling. Reconstruction results using 3T and 7T in-vivo data showed that the proposed method produced better image quality than existing methods, with much faster computing time. The proposed k-space deep learning for EPI ghost correction is highly robust and fast, can be combined with acceleration, and can therefore serve as a promising correction tool for high-field MRI without changing the current acquisition protocol.

    Comment: To appear in Magnetic Resonance in Medicine
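    The even/odd channel split and multi-coil stacking described above can be sketched as follows; the array sizes, coil count, and random data are illustrative assumptions, and the real method feeds the resulting channels to a trained interpolation network rather than stopping here.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical multi-coil EPI k-space block: (coils, phase-encodes, readout).
    n_coils, n_pe, n_ro = 4, 32, 64
    kspace = (rng.standard_normal((n_coils, n_pe, n_ro))
              + 1j * rng.standard_normal((n_coils, n_pe, n_ro)))

    # Split each coil's data into even- and odd-echo phase encodings,
    # zero-filling the complementary lines so both channels keep full size.
    even = np.zeros_like(kspace)
    odd = np.zeros_like(kspace)
    even[:, 0::2, :] = kspace[:, 0::2, :]
    odd[:, 1::2, :] = kspace[:, 1::2, :]

    # Stack the even/odd split across all coils into the channel dimension,
    # then separate real and imaginary parts for a real-valued CNN input.
    channels = np.concatenate([even, odd], axis=0)           # (2*coils, pe, ro)
    net_input = np.stack([channels.real, channels.imag], 0)  # (2, 2*coils, pe, ro)
    print(net_input.shape)
    ```

    Splitting rather than discarding lines keeps both echo polarities visible to the network, which is what lets it learn the interpolation kernel that fills each channel's missing lines from the other.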

    Quantum-inspired computational imaging

    Computational imaging combines measurement and computational methods with the aim of forming images even when the measurement conditions are weak, few in number, or highly indirect. The recent surge in quantum-inspired imaging sensors, together with a new wave of algorithms allowing on-chip, scalable and robust data processing, has spurred increased activity with notable results in the domain of low-light flux imaging and sensing. We provide an overview of the major challenges encountered in low-illumination (e.g., ultrafast) imaging and how these problems have recently been addressed for imaging applications in extreme conditions. These methods provide examples of the future imaging solutions to be developed, for which the best results are expected to arise from an efficient codesign of the sensors and data analysis tools.

    Y.A. acknowledges support from the UK Royal Academy of Engineering under the Research Fellowship Scheme (RF201617/16/31). S.McL. acknowledges financial support from the UK Engineering and Physical Sciences Research Council (grant EP/J015180/1). V.G. acknowledges support from the U.S. Defense Advanced Research Projects Agency (DARPA) InPho program through U.S. Army Research Office award W911NF-10-1-0404, the U.S. DARPA REVEAL program through contract HR0011-16-C-0030, and the U.S. National Science Foundation through grants 1161413 and 1422034. A.H. acknowledges support from U.S. Army Research Office award W911NF-15-1-0479, U.S. Department of the Air Force grant FA8650-15-D-1845, and U.S. Department of Energy National Nuclear Security Administration grant DE-NA0002534. D.F. acknowledges financial support from the UK Engineering and Physical Sciences Research Council (grants EP/M006514/1 and EP/M01326X/1).