41 research outputs found

    Learning to Distill Global Representation for Sparse-View CT

    Sparse-view computed tomography (CT) -- using a small number of projections for tomographic reconstruction -- enables a much lower radiation dose to patients and accelerated data acquisition. The reconstructed images, however, suffer from strong artifacts, greatly limiting their diagnostic value. Current trends for sparse-view CT turn to the raw data for better information recovery. The resulting dual-domain methods, nonetheless, suffer from secondary artifacts, especially in ultra-sparse-view scenarios, and their generalization to other scanners/protocols is greatly limited. A crucial question arises: have image post-processing methods reached their limit? Our answer is: not yet. In this paper, we stick to image post-processing methods due to their great flexibility and propose a global representation (GloRe) distillation framework for sparse-view CT, termed GloReDi. First, we propose to learn GloRe with Fourier convolution, so each element in GloRe has an image-wide receptive field. Second, unlike methods that only use the full-view images for supervision, we propose to distill GloRe from intermediate-view reconstructed images that are readily available but unexplored in previous literature. The success of GloRe distillation is attributed to two key components: representation directional distillation to align the GloRe directions, and band-pass-specific contrastive distillation to gain clinically important details. Extensive experiments demonstrate the superiority of the proposed GloReDi over state-of-the-art methods, including dual-domain ones. The source code is available at https://github.com/longzilicart/GloReDi.
    Comment: ICCV 202
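    The image-wide receptive field of a Fourier-domain convolution can be illustrated with a minimal NumPy sketch (not the paper's actual layer; the function name and weight shape are illustrative assumptions): multiplying an image's spectrum by a learned per-frequency filter makes every output pixel depend on every input pixel.

```python
import numpy as np

def spectral_conv(image, weights):
    """Pointwise multiplication in the Fourier domain.

    Each frequency coefficient depends on every pixel of the input,
    so every element of the output has an image-wide receptive field.
    """
    spec = np.fft.rfft2(image)              # (H, W//2 + 1) complex spectrum
    spec = spec * weights                   # learned per-frequency filter
    return np.fft.irfft2(spec, s=image.shape)

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
w = rng.standard_normal((64, 33)) + 1j * rng.standard_normal((64, 33))
out = spectral_conv(img, w)
```

    Perturbing a single input pixel changes essentially every output pixel, which is the property a 3x3 spatial convolution lacks.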

    Structural texture similarity metric based on intra-class variances

    Accepted version, peer reviewed

    Fast Single Image Super-Resolution Using a New Analytical Solution for l2–l2 Problems

    This paper addresses the problem of single image super-resolution (SR), which consists of recovering a high-resolution image from its blurred, decimated, and noisy version. Existing algorithms for single image SR use different strategies to handle the decimation and blurring operators. In addition to the traditional first-order gradient methods, recent techniques investigate splitting-based methods that divide the SR problem into up-sampling and deconvolution steps that can be easily solved. Instead of following this splitting strategy, we propose to deal with the decimation and blurring operators simultaneously by taking advantage of their particular properties in the frequency domain, leading to a new fast SR approach. Specifically, an analytical solution is derived and implemented efficiently for the Gaussian prior or any other regularization that can be formulated as an l2-regularized quadratic model, i.e., an l2–l2 optimization problem. The flexibility of the proposed SR scheme is shown through the use of various priors/regularizations, ranging from generic image priors to learning-based approaches. In the case of non-Gaussian priors, we show how the analytical solution derived for the Gaussian case can be embedded into traditional splitting frameworks, allowing the computational cost of existing algorithms to be decreased significantly. Simulation results conducted on several images with different priors illustrate the effectiveness of our fast SR approach compared with existing techniques.

    Image Analysis via Applied Harmonic Analysis : Perceptual Image Quality Assessment, Visual Servoing, and Feature Detection

    Certain systems of analyzing functions developed in the field of applied harmonic analysis are specifically designed to yield efficient representations of structures which are characteristic of common classes of two-dimensional signals, like images. In particular, functions in these systems are typically sensitive to features that define the geometry of a signal, like edges and curves in the case of images. These properties make them ideal candidates for a wide variety of tasks in image processing and image analysis. This thesis discusses three recently developed approaches to utilizing systems of wavelets, shearlets, and alpha-molecules in specific image analysis tasks. First, a perceptual image similarity measure is introduced that is solely based on the coefficients obtained from six discrete Haar wavelet filters but yields state-of-the-art correlations with human opinion scores on large benchmark databases. The second application concerns visual servoing, which is a technique for controlling the motion of a robot by using feedback from a visual sensor. In particular, it will be investigated how the coefficients yielded by discrete wavelet and shearlet transforms can be used as the visual features that control the motion of a robot with six degrees of freedom. Finally, a novel framework for the detection and characterization of features such as edges, ridges, and blobs in two-dimensional images is presented and evaluated in extensive numerical experiments. Here, versatile and robust feature detectors are obtained by exploiting the special symmetry properties of directionally sensitive analyzing functions in systems created within the recently introduced alpha-molecule framework.
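    A crude NumPy sketch of the Haar-wavelet similarity idea (a simplified stand-in for the thesis's actual measure: only two first-scale 2x2 Haar filters and a plain similarity ratio, with illustrative names and constants): filter responses of the two images are compared locally, so the score is driven by edge structure rather than raw pixel differences.

```python
import numpy as np

def haar_responses(img):
    """Responses of the two first-scale 2x2 Haar filters
    (horizontal- and vertical-edge detectors)."""
    a, b = img[:-1, :-1], img[:-1, 1:]
    c, d = img[1:, :-1], img[1:, 1:]
    horiz = (a + b - c - d) / 2.0           # responds to horizontal edges
    vert = (a - b + c - d) / 2.0            # responds to vertical edges
    return horiz, vert

def haar_similarity(x, y, c=0.01):
    """Mean of the classic ratio 2ab / (a^2 + b^2) between the
    magnitudes of the Haar responses of two images; equals 1 for
    identical images and decreases as edge structure diverges."""
    sims = []
    for rx, ry in zip(haar_responses(x), haar_responses(y)):
        ax, ay = np.abs(rx), np.abs(ry)
        sims.append((2 * ax * ay + c) / (ax ** 2 + ay ** 2 + c))
    return float(np.mean(sims))

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
score_self = haar_similarity(img, img)
score_noisy = haar_similarity(img, img + rng.standard_normal((64, 64)))
```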

    Image Restoration for Remote Sensing: Overview and Toolbox

    Remote sensing provides valuable information about objects or areas from a distance in either active (e.g., RADAR and LiDAR) or passive (e.g., multispectral and hyperspectral) modes. The quality of data acquired by remotely sensed imaging sensors (both active and passive) is often degraded by a variety of noise types and artifacts. Image restoration, which is a vibrant field of research in the remote sensing community, is the task of recovering the true unknown image from the degraded observed image. Each imaging sensor induces unique noise types and artifacts into the observed image. This fact has led to the expansion of restoration techniques in different paths according to each sensor type. This review paper brings together the advances of image restoration techniques with a particular focus on synthetic aperture radar and hyperspectral images as the most active sub-fields of image restoration in the remote sensing community. We, therefore, provide a comprehensive, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) willing to investigate the vibrant topic of data restoration by supplying sufficient detail and references. Additionally, this review paper is accompanied by a toolbox that provides a platform to encourage interested students and researchers in the field to further explore the restoration techniques and fast-forward the community. The toolbox is provided at https://github.com/ImageRestorationToolbox.
    Comment: This paper is under review in GRS

    Learned Interferometric Imaging for the SPIDER Instrument

    The Segmented Planar Imaging Detector for Electro-Optical Reconnaissance (SPIDER) is an optical interferometric imaging device that aims to offer an alternative to the large space telescope designs of today with reduced size, weight and power consumption. This is achieved through interferometric imaging. State-of-the-art methods for reconstructing images from interferometric measurements adopt proximal optimization techniques, which are computationally expensive and require handcrafted priors. In this work we present two data-driven approaches for reconstructing images from measurements made by the SPIDER instrument. These approaches use deep learning to learn prior information from training data, increasing the reconstruction quality, and significantly reducing the computation time required to recover images by orders of magnitude. Reconstruction time is reduced to ∼10 milliseconds, opening up the possibility of real-time imaging with SPIDER for the first time. Furthermore, we show that these methods can also be applied in domains where training data is scarce, such as astronomical imaging, by leveraging transfer learning from domains where plenty of training data are available.
    Comment: 21 pages, 14 figures

    Dimensionality reduction and sparse representations in computer vision

    The proliferation of camera-equipped devices, such as netbooks, smartphones and game stations, has led to a significant increase in the production of visual content. This visual information could be used for understanding the environment and offering a natural interface between the users and their surroundings. However, the massive amounts of data and the high computational cost associated with them encumber the transfer of sophisticated vision algorithms to real life systems, especially ones that exhibit resource limitations such as restrictions in available memory, processing power and bandwidth. One approach for tackling these issues is to generate compact and descriptive representations of image data by exploiting inherent redundancies. We propose the investigation of dimensionality reduction and sparse representations in order to accomplish this task. In dimensionality reduction, the aim is to reduce the dimensions of the space where image data reside in order to allow resource constrained systems to handle them and, ideally, provide a more insightful description. This goal is achieved by exploiting the inherent redundancies that many classes of images, such as faces under different illumination conditions and objects from different viewpoints, exhibit. We explore the description of natural images by low dimensional non-linear models called image manifolds and investigate the performance of computer vision tasks such as recognition and classification using these low dimensional models. In addition to dimensionality reduction, we study a novel approach in representing images as a sparse linear combination of dictionary examples. We investigate how sparse image representations can be used for a variety of tasks including low level image modeling and higher level semantic information extraction.
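    The idea of representing a signal as a sparse linear combination of dictionary examples can be sketched with orthogonal matching pursuit, one standard sparse-coding algorithm, used here purely as an illustration (the dictionary, sparsity level and variable names are toy assumptions, not the thesis's setup):

```python
import numpy as np

def omp(D, y, k):
    """Greedy sparse coding: select k dictionary atoms (columns of D)
    whose linear combination best approximates the signal y."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs        # re-fit, update residual
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((32, 128))
D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [1.0, -2.0, 0.5]               # 3-sparse ground truth
y = D @ x_true
x_hat = omp(D, y, k=3)
```

    The code vector `x_hat` has at most k nonzero entries, so the 32-dimensional signal is described by a handful of (index, coefficient) pairs.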
Using tools from dimensionality reduction and sparse representation, we propose the application of these methods in three hierarchical image layers, namely low-level features, mid-level structures and high-level attributes. Low-level features are image descriptors that can be extracted directly from the raw image pixels and include pixel intensities, histograms, and gradients. In the first part of this work, we explore how various techniques in dimensionality reduction, ranging from traditional image compression to the recently proposed Random Projections method, affect the performance of computer vision algorithms such as face detection and face recognition. In addition, we discuss a method that is able to increase the spatial resolution of a single image, without using any training examples, according to the sparse representations framework. In the second part, we explore mid-level structures, including image manifolds and sparse models, which are produced by abstracting information from low-level features and offer compact modeling of high-dimensional data. We propose novel techniques for generating more descriptive image representations and investigate their application in face recognition and object tracking. In the third part of this work, we propose the investigation of a novel framework for representing the semantic contents of images. This framework employs high-level semantic attributes that aim to bridge the gap between the visual information of an image and its textual description by utilizing low-level features and mid-level structures. This innovative paradigm offers revolutionary possibilities, including recognizing the category of an object from purely textual information without providing any explicit visual examples.
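The Random Projections method mentioned above rests on the Johnson-Lindenstrauss lemma: projecting high-dimensional vectors through a scaled random Gaussian matrix approximately preserves pairwise distances. A minimal sketch, assuming a plain Gaussian projection (the dimensions and names are illustrative, not taken from this thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 4096, 512                 # 200 signals of 4096 pixels -> 512 dims
X = rng.standard_normal((n, d))          # stand-in for vectorized images
R = rng.standard_normal((d, k)) / np.sqrt(k)   # random projection matrix
Y = X @ R                                # project all signals at once

# pairwise distances are approximately preserved after projection
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
ratio = proj / orig                      # should be close to 1
```

Unlike learned reductions such as PCA, the projection matrix is data-independent, which is what makes the method attractive for resource-constrained systems.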