
    Sparsity-based stereoscopic image quality assessment

    In this work, we present a full-reference stereo image quality assessment algorithm based on the sparse representations of luminance images and depth maps. The primary challenge lies in handling the sparsity of disparity maps in conjunction with the sparsity of luminance images. Although analysing sparsity is sufficient to capture the quality of luminance images, the effectiveness of sparsity in quantifying depth quality is not yet fully understood. We present a full-reference Sparsity-based Quality Assessment of Stereo Images (SQASI) aimed at this understanding.
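    The abstract does not give implementation details, but the core ingredients of sparse-representation quality assessment are generic: encode patches of the reference and distorted images over a dictionary, then compare the resulting coefficient vectors. A minimal sketch (not the authors' SQASI algorithm; the greedy pursuit and the similarity formula below are illustrative choices) might look like:

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate signal x with
    at most k atoms (columns) of the dictionary D."""
    residual = x.copy()
    idx = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-correlated atom
        idx.append(j)
        sub = D[:, idx]
        coef, *_ = np.linalg.lstsq(sub, x, rcond=None)
        residual = x - sub @ coef
    codes = np.zeros(D.shape[1])
    codes[idx] = coef
    return codes

def sparsity_similarity(c_ref, c_dst, eps=1e-8):
    """Illustrative similarity between the sparse codes of a reference
    patch and a distorted patch; 1.0 means identical codes."""
    return (2 * np.abs(c_ref @ c_dst) + eps) / (c_ref @ c_ref + c_dst @ c_dst + eps)
```

    In a full pipeline, such per-patch similarities over both luminance and disparity codes would be pooled into a single quality score.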

    No-reference quality assessment of stereo video based on saliency and sparsity

    With the growing popularity of video technology, stereoscopic video quality assessment (SVQA) has become increasingly important. Existing SVQA methods do not achieve good performance because the information in the videos is not fully utilized. In this paper, we consider diverse information in the videos together and construct a simple model, based on saliency and sparsity, to combine and analyze the resulting features. First, we use the 3-D saliency map of the sum map, which retains the basic information of the stereoscopic video, as a valid tool for evaluating video quality. Second, we use sparse representation to decompose the sum map of the 3-D saliency into coefficients, then compute features from the sparse coefficients to obtain an effective expression of the video's content. Next, to reduce correlation between the features, we feed them into a stacked auto-encoder, which maps the vectors to a higher-dimensional space under a sparsity constraint, and subsequently input them into a support vector machine to obtain the quality assessment scores. Throughout this process, we exploit saliency and sparsity to extract and simplify features. Experiments show that the proposed method agrees well with subjective scores.
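    The feature-extraction stage described above (decompose the saliency sum map over a dictionary, then summarize the coefficients) can be sketched as follows. This is a simplified stand-in, not the paper's method: the dictionary, the hard-thresholding step, and the three summary statistics are assumptions for illustration, and the subsequent auto-encoder and SVM stages are omitted.

```python
import numpy as np

def patchify(img, p=8):
    """Split a 2-D map into non-overlapping p x p patches, flattened as columns."""
    h, w = img.shape
    patches = [img[i:i + p, j:j + p].ravel()
               for i in range(0, h - p + 1, p)
               for j in range(0, w - p + 1, p)]
    return np.stack(patches, axis=1)

def sparse_features(saliency_map, D, thresh=0.1):
    """Project patches of the saliency sum map onto dictionary D,
    hard-threshold to obtain sparse codes, and summarize the codes
    as a small feature vector (illustrative statistics)."""
    p = int(np.sqrt(D.shape[0]))
    X = patchify(saliency_map, p=p)
    C = D.T @ X                       # correlation with each atom
    C[np.abs(C) < thresh] = 0.0       # hard thresholding -> sparse codes
    return np.array([
        np.mean(np.abs(C)),                 # average coefficient energy
        np.count_nonzero(C) / C.size,       # sparsity ratio
        np.std(C),                          # spread of coefficients
    ])
```

    In the paper, such features would then be decorrelated by the stacked auto-encoder before regression.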

    Single-image Tomography: 3D Volumes from 2D Cranial X-Rays

    As many different 3D volumes could produce the same 2D x-ray image, inverting this process is challenging. We show that recent deep convolutional neural networks can solve this task. As the main challenge in learning is the sheer amount of data created when extending a 2D image into a 3D volume, we propose first learning a coarse, fixed-resolution volume, which is then fused in a second step with the input x-ray into a high-resolution volume. To train and validate our approach, we introduce a new dataset that comprises close to half a million computer-simulated 2D x-ray images of 3D volumes scanned from 175 mammalian species. Applications of our approach include stereoscopic rendering of legacy x-ray images and re-rendering of x-rays with changes of illumination, view pose, or geometry. Our evaluation includes comparisons to previous tomography work and to previous learning methods using our data, a user study, and application to a set of real x-rays.
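    The two-step structure (coarse volume first, then fusion with the high-resolution x-ray) can be illustrated with a toy sketch. The learned networks are replaced here by assumptions: nearest-neighbour upsampling stands in for the learned super-resolution, and a per-slice modulation stands in for the learned fusion, purely to show how the shapes fit together.

```python
import numpy as np

def upsample3d(vol, f):
    """Nearest-neighbour upsampling of a coarse volume by integer factor f."""
    return vol.repeat(f, axis=0).repeat(f, axis=1).repeat(f, axis=2)

def fuse(coarse_vol, xray, f):
    """Toy stand-in for the second-stage fusion: upsample the coarse
    volume and modulate every depth slice by the high-resolution x-ray,
    reinjecting fine 2-D detail along the projection axis."""
    hi = upsample3d(coarse_vol, f)
    return hi * xray[np.newaxis, :, :]   # broadcast x-ray over the depth axis
```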

    Patch-based Denoising Algorithms for Single and Multi-view Images

    In general, all single and multi-view digital images are captured using sensors and are often contaminated with noise, an undesired random signal. Such noise can also be introduced during transmission or by lossy image compression. Reducing this noise and enhancing the images is among the fundamental digital image processing tasks, and improving the performance of image denoising methods would greatly benefit single- and multi-view image processing techniques such as segmentation and disparity map computation. Patch-based denoising methods have recently emerged as the state-of-the-art approaches for various additive noise levels. This thesis proposes two patch-based denoising methods, for single and multi-view images respectively. For single image denoising, a modification to the block matching 3D algorithm is proposed: an adaptive collaborative thresholding filter consisting of a classification map and a set of thresholding levels and operators, exploited when the collaborative hard-thresholding step is applied. Moreover, the collaborative Wiener filtering is improved by assigning greater weight to more similar patches. For the denoising of multi-view images, this thesis proposes an algorithm that takes a pair of noisy images captured simultaneously from two different viewpoints (stereoscopic images). The structural similarity, maximum difference, or singular value decomposition-based similarity metric is used to identify the locations of similar search windows in the input images, and the non-local means algorithm is adapted to filter these noisy multi-view images. The performance of both methods has been evaluated quantitatively and qualitatively through a number of experiments using the peak signal-to-noise ratio and the mean structural similarity measure. Experimental results show that the proposed algorithm for single image denoising outperforms the original block matching 3D algorithm at various noise levels. Moreover, the proposed algorithm for multi-view image denoising can effectively reduce noise and help estimate more accurate disparity maps at various noise levels.
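    The adaptation of non-local means to a stereo pair (averaging over candidate pixels drawn from both views) can be sketched as follows. This is a deliberately simplified illustration, not the thesis algorithm: the thesis selects matching search windows via SSIM/MD/SVD similarity metrics, whereas here the windows are simply co-located in the two views, and the filter parameters are arbitrary.

```python
import numpy as np

def nl_means_pair(noisy, other, patch=3, search=5, h=0.5):
    """Simplified non-local means over a stereo pair: each output pixel of
    `noisy` is a patch-similarity-weighted average of pixels drawn from
    BOTH views' search windows."""
    pad = patch // 2
    r = search // 2
    H, W = noisy.shape
    out = np.zeros_like(noisy)
    imgs = [np.pad(noisy, pad, mode="reflect"),
            np.pad(other, pad, mode="reflect")]
    for i in range(H):
        for j in range(W):
            ref = imgs[0][i:i + patch, j:j + patch]   # reference patch
            num = den = 0.0
            for img in imgs:                          # candidates from both views
                for di in range(-r, r + 1):
                    for dj in range(-r, r + 1):
                        ii, jj = i + di, j + dj
                        if 0 <= ii < H and 0 <= jj < W:
                            cand = img[ii:ii + patch, jj:jj + patch]
                            w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
                            num += w * img[ii + pad, jj + pad]
                            den += w
            out[i, j] = num / den
    return out
```

    On smooth regions the patch weights are nearly uniform, so the stereo pair effectively doubles the number of samples averaged per pixel, which is the motivation for borrowing candidates from the second view.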

    Sparse-representation-based stereoscopic image quality assessment accounting for the perceptual cognitive process

    In this paper, we propose a sparse-representation-based Reduced-Reference Image Quality Assessment (RR-IQA) index for stereoscopic images from the following two perspectives: 1) the human visual system (HVS) always tries to infer meaningful information and reduce uncertainty from visual stimuli, and the entropy of primitive (EoP) can well describe this visual cognitive process when perceiving natural images; 2) ocular dominance (also known as binocularity), which represents the interaction between the two eyes, is quantified by the sparse representation coefficients. Inspired by previous research, the perception and understanding of an image is considered an active inference process driven by the level of "surprise", which can be described by EoP. Therefore, primitives learnt from natural images can be used to evaluate visual information by computing entropy. Meanwhile, to account for binocularity in stereo image quality assessment, a feasible way is proposed to characterize this binocular process from the sparse representation coefficients of each view. Experimental results on the LIVE 3D image databases and the MCL database further demonstrate that the proposed algorithm achieves high consistency with subjective evaluation.
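    The two quantities the abstract names, an entropy over sparse codes and a binocular weighting derived from per-view coefficients, can be sketched generically. This is an illustrative proxy, not the paper's exact formulation: the histogram-based entropy and the energy-ratio weighting below are assumptions.

```python
import numpy as np

def entropy_of_primitives(coeffs, bins=32):
    """EoP proxy: Shannon entropy of the sparse-coefficient distribution,
    as a measure of the 'surprise' an image carries under a learnt
    dictionary of primitives."""
    hist, _ = np.histogram(coeffs, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before log
    return float(-np.sum(p * np.log2(p)))

def binocular_weight(c_left, c_right, eps=1e-8):
    """Ocular-dominance weights from the coefficient energy of each view;
    the weights sum to (nearly) one."""
    e_l = np.sum(c_left ** 2)
    e_r = np.sum(c_right ** 2)
    total = e_l + e_r + eps
    return e_l / total, e_r / total
```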