
    Adaptively Quadratic (AQua) Image Interpolation


    New Digital Images Magnification Algorithm Based on Integration of Mapping and Synthesis Concept

    Image magnification is the process of reconstructing a High Resolution (HR) image from its Low Resolution (LR) version. It is one of the most important processes used to fulfill human needs, with applications such as medical imaging, remote sensing, image detail enhancement and printing. In general, common magnification algorithms employ the interpolation concept. However, these interpolation-based magnification algorithms suffer from the appearance of undesirable artifacts in magnified images, such as edge blocking and edge blurring. These artifacts mostly appear around strong edges. Therefore, instead of employing the interpolation concept, this study focuses on introducing a new magnification algorithm based on the synthesis concept. As the synthesis concept has been used in patch-based texture synthesis algorithms, a modification to these algorithms has to be carried out in order to use them for image magnification. The proposed modification produces a new magnification algorithm called the Mapping Based Magnification Algorithm (MBMA). The proposed MBMA replaces each pixel in the LR image with a two-dimensional HR block to reconstruct the HR image, and is designed primarily to preserve strong edges. Two variants of the proposed MBMA are introduced, namely MBMA_Average and MBMA_Direct. The proposed MBMA variants have been compared with other state-of-the-art magnification algorithms using 100 standard images and 200 Malaysian License Car Plate (MLCP) images. The proposed MBMA_Average produces the best magnified images with fewer undesirable artifacts (i.e. less edge blurring and edge blocking) compared with the other state-of-the-art algorithms. Furthermore, quantitative analyses show that the proposed MBMA_Average also produces the best PSNR, MSE, SSIM and FSIM values compared to those algorithms.
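The quantitative comparison above relies on full-reference quality metrics. As a minimal sketch of the two simplest of them (PSNR and MSE, assuming 8-bit images; SSIM and FSIM are more involved and omitted here):

```python
import numpy as np

def mse(reference: np.ndarray, magnified: np.ndarray) -> float:
    """Mean squared error between a reference HR image and a magnified result."""
    diff = reference.astype(np.float64) - magnified.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(reference: np.ndarray, magnified: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(reference, magnified)
    if err == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / err)
```

For 8-bit images `peak` is 255; for floating-point images in [0, 1] it would be 1.0.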

    Fast artifacts-free image interpolation

    In this paper we describe a novel general-purpose image interpolation method based on the combination of two different procedures. First, an adaptive algorithm is applied that interpolates pixel values locally along the direction where the second-order image derivative is lower. The interpolated values are then modified using an iterative refinement that minimizes differences in second-order image derivatives, maximizes second-order derivative values and smooths isolevel curves. The first algorithm by itself provides edge-preserving images that are measurably better than those obtained with similarly fast methods presented in the literature. The full method provides interpolated images with a "natural" appearance that do not present the artifacts affecting linear and nonlinear methods. Objective and subjective tests on a wide series of natural images clearly show the advantages of the proposed technique over existing approaches.
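The core idea of directional interpolation can be illustrated on the simplest case: estimating a new pixel at the centre of four known neighbours. This is only a sketch of the general principle, not the paper's exact algorithm; it uses the absolute intensity difference along each diagonal as a cheap stand-in for the second-derivative comparison:

```python
def interpolate_center(nw: float, ne: float, sw: float, se: float) -> float:
    """Estimate the new pixel at the centre of four known diagonal neighbours
    by averaging along the diagonal with the smaller intensity variation
    (a proxy for the direction of lower second-order derivative)."""
    d1 = abs(nw - se)  # variation along the NW-SE diagonal
    d2 = abs(ne - sw)  # variation along the NE-SW diagonal
    if d1 <= d2:
        return (nw + se) / 2.0  # NW-SE is smoother: interpolate along it
    return (ne + sw) / 2.0      # otherwise interpolate along NE-SW
```

On an edge, one diagonal runs along the edge (small variation) and the other crosses it (large variation), so averaging along the smoother diagonal avoids blurring the edge.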

    A Block-Based Regularized Approach for Image Interpolation

    This paper presents a new efficient algorithm for image interpolation based on regularization theory. To render a high-resolution (HR) image from a low-resolution (LR) image, classical interpolation techniques estimate the missing pixels from the surrounding pixels on a pixel-by-pixel basis. In contrast, the proposed approach formulates the interpolation problem as the optimization of a cost function. The proposed cost function consists of a data fidelity term and a regularization functional. The closed-form solution to the optimization problem is derived using the framework of constrained least squares minimization, incorporating the Kronecker product and singular value decomposition (SVD) to reduce the computational cost of the algorithm. The effect of regularization on the interpolation results is analyzed, and an adaptive strategy is proposed for selecting the regularization parameter. Experimental results show that the proposed approach is able to reconstruct high-fidelity HR images, while suppressing artifacts such as edge distortion and blurring, producing interpolation results superior to those of conventional image interpolation techniques.
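The structure of such a regularized closed-form solution can be sketched on a 1-D toy problem. This is a generic Tikhonov-style example, not the paper's algorithm (which additionally uses block processing, the Kronecker product and SVD for efficiency): minimize ||Ax - y||² + λ||Lx||², whose minimizer solves (AᵀA + λLᵀL)x = Aᵀy:

```python
import numpy as np

def regularized_upsample(y, factor: int = 2, lam: float = 0.1) -> np.ndarray:
    """Closed-form Tikhonov solution x = (A^T A + lam L^T L)^{-1} A^T y,
    where A decimates the HR signal and L penalizes second differences."""
    y = np.asarray(y, dtype=float)
    n = len(y) * factor
    # A: decimation operator mapping the HR signal to the LR observations.
    A = np.zeros((len(y), n))
    A[np.arange(len(y)), np.arange(len(y)) * factor] = 1.0
    # L: second-difference operator acting as a smoothness regularizer.
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ y)
```

For a linear ramp such as `[0, 2, 4]`, the penalty on second differences drives the solution to the exact linear interpolant `[0, 1, 2, 3, 4, 5]`, since that signal fits the data perfectly and has zero curvature.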

    Color Filter Array Image Analysis for Joint Denoising and Demosaicking

    Noise is among the worst artifacts that affect the perceptual quality of the output from a digital camera. While cost-effective and popular, single-sensor solutions to camera architectures are not adept at noise suppression. In this scheme, data are typically obtained via a spatial subsampling procedure implemented as a color filter array (CFA), a physical construction whereby each pixel location measures the intensity of the light corresponding to only a single color. Aside from undersampling, observations made under noisy conditions typically deteriorate the estimates of the full-color image in the reconstruction process commonly referred to as demosaicking or CFA interpolation in the literature. A typical CFA scheme involves the canonical color triple (i.e., red, green, blue), and the most prevalent arrangement is called the Bayer pattern. As the general trend of increased image resolution continues due to the prevalence of multimedia, the importance of interpolation is de-emphasized, while the concerns for computational efficiency, noise, and color fidelity play an increasingly prominent role in the decision making of a digital camera architect. For instance, the interpolation artifacts become less noticeable as the size of the pixel shrinks with respect to the image features, while the decreased dimensionality of the pixel sensors on complementary metal oxide semiconductor (CMOS) and charge coupled device (CCD) sensors makes the pixels more susceptible to noise. Photon-limited influences are also evident in low-light photography, ranging from a specialty camera for precision measurement to indoor consumer photography. Sensor data, which can be interpreted as subsampled or incomplete image data, undergo a series of image processing procedures in order to produce a digital photograph. However, these same steps may amplify noise introduced during image acquisition. Specifically, the demosaicking step is a major source of conflict between the image processing pipeline and image sensor noise characterization, because the interpolation methods give high priority to preserving the sharpness of edges and textures. In the presence of noise, noise patterns may form false edge structures; therefore, the distortions at the output are typically correlated with the signal in a complicated manner that makes noise modelling mathematically intractable. Thus, it is natural to conceive of a rigorous tradeoff between demosaicking and image denoising.
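The Bayer sampling described above is easy to simulate: each sensor pixel records only one of the three colour channels. A minimal sketch, assuming an RGGB layout (the pattern's phase varies between sensors):

```python
import numpy as np

def bayer_mosaic(rgb: np.ndarray) -> np.ndarray:
    """Simulate single-sensor capture through an RGGB Bayer CFA: each pixel
    of the (H, W, 3) input keeps exactly one colour sample."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return mosaic
```

Demosaicking is the inverse problem: recovering the two missing samples at every location, which is exactly where edge-preserving interpolation and sensor noise come into conflict.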

    Multiband Atmospheric Correction Algorithm for Ocean Color Retrievals

    National Aeronautics and Space Administration's (NASA's) current atmospheric correction (AC) algorithm for ocean color utilizes two bands and their ratio in the near infrared (NIR) to estimate aerosol reflectance and aerosol type. The algorithm then extrapolates the spectral dependence of aerosol reflectance to the visible wavelengths based on the modeled spectral dependence of the identified aerosol type. Future advanced ocean color sensors, such as the Ocean Color Instrument (OCI) that will be carried on the Plankton, Aerosol, Cloud, and ocean Ecosystem (PACE) satellite, will be capable of measuring the hyperspectral radiance from 340 to 890 nm at 5-nm spectral resolution and at seven discrete short-wave infrared (SWIR) channels: 940, 1,038, 1,250, 1,378, 1,615, 2,130, and 2,260 nm. To optimally employ this unprecedented instrument capability, we propose an improved AC algorithm that utilizes all atmospheric-window channels in the NIR to SWIR spectral range to reduce the uncertainty in the AC process. A theoretical uncertainty analysis of this algorithm, named the multiband AC (MBAC), indicates that it can reduce the uncertainty in remote sensing reflectance (Rrs) retrievals of the ocean caused by sensor random noise. Furthermore, in optically complex waters, where the NIR signal is affected by contributions from highly reflective turbid waters, the MBAC algorithm can be adaptively weighted toward the strongly absorbing SWIR channels to enable improved ocean color retrievals in coastal waters. We provide here a description of the algorithm and demonstrate the improved performance in ocean color retrievals, relative to the current NASA standard AC algorithm, through comparison with field measurements and assessment of propagated uncertainties in applying the MBAC algorithm to MODIS and simulated PACE OCI data.
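The two-band scheme that MBAC generalizes can be sketched in a few lines. This is an illustrative single-scattering approximation in the style of the classic NIR extrapolation, not NASA's operational code; the wavelengths (748 and 869 nm NIR bands, 443 nm visible target) and the exponential spectral form are assumptions for the example:

```python
def aerosol_extrapolation(rho_nir1: float, rho_nir2: float,
                          lam1: float = 748.0, lam2: float = 869.0,
                          lam_vis: float = 443.0) -> float:
    """Two-band aerosol extrapolation sketch: the ratio of the aerosol
    reflectances at two NIR wavelengths fixes an exponential spectral
    slope, which is then extrapolated to a visible wavelength."""
    eps = rho_nir1 / rho_nir2                 # aerosol type indicator
    exponent = (lam2 - lam_vis) / (lam2 - lam1)
    return rho_nir2 * eps ** exponent         # aerosol reflectance at lam_vis
```

The point of MBAC is that estimating the slope from many NIR-SWIR channels, instead of one ratio, averages down the sensor noise in this extrapolation.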

    Statistical Tools for Digital Image Forensics

    A digitally altered image, often leaving no visual clues of having been tampered with, can be indistinguishable from an authentic image. The tampering, however, may disturb some underlying statistical properties of the image. Under this assumption, we propose five techniques that quantify and detect statistical perturbations found in different forms of tampered images: (1) re-sampled images (e.g., scaled or rotated); (2) manipulated color filter array interpolated images; (3) double JPEG compressed images; (4) images with duplicated regions; and (5) images with inconsistent noise patterns. These techniques work in the absence of any embedded watermarks or signatures. For each technique we develop the theoretical foundation, show its effectiveness on credible forgeries, and analyze its sensitivity and robustness to simple counter-attacks.

    Homotopy Based Reconstruction from Acoustic Images


    Algorithms for enhanced artifact reduction and material recognition in computed tomography

    Computed tomography (CT) imaging provides a non-destructive means to examine the interior of an object, which is a valuable tool in medical and security applications. The variety of materials seen in security applications is higher than in medical applications. Factors such as clutter, the presence of dense objects, and closely placed items in a bag or a parcel add to the difficulty of material recognition in security applications. Metal and dense objects create image artifacts which degrade the image quality and deteriorate the recognition accuracy. Conventional CT machines scan the object using single-source or dual-source spectra and reconstruct the effective linear attenuation coefficient of voxels in the image, which may not provide sufficient information to identify the occupying materials. In this dissertation, we provide algorithmic solutions to enhance CT material recognition, with a set of algorithms accommodating different classes of CT machines. First, we provide a metal artifact reduction algorithm for conventional CT machines, which perform measurements using a single X-ray source spectrum. Compared to previous methods, our algorithm is robust to severe metal artifacts and accurately reconstructs the regions that are in proximity to metal. Second, we propose a novel joint segmentation and classification algorithm for dual-energy CT machines which extends prior work to capture spatial correlation in material X-ray attenuation properties. We show that the classification performance of our method surpasses the prior work's results. Third, we propose a new framework for reconstruction and classification using a new class of CT machines known as spectral CT, which has recently been developed. Spectral CT uses multiple energy windows to scan the object, and thus captures data across higher energy dimensions per detector. Our reconstruction algorithm extracts essential features from the measured data by using spectral decomposition. We explore the effect of using different transforms in performing the measurement decomposition, and we develop a new basis transform that encapsulates sufficient information from the data and provides high classification accuracy. Furthermore, we extend our framework to perform the task of explosive detection. We show that our framework achieves high detection accuracy and is robust to noise and variations. Lastly, we propose a combined algorithm for spectral CT, which jointly reconstructs images and labels each region in the image. We offer a tractable optimization method to solve the proposed discrete tomography problem. We show that our method outperforms the prior work in terms of both reconstruction quality and classification accuracy.