
    Maximum Energy Subsampling: A General Scheme For Multi-resolution Image Representation And Analysis

    Image descriptors play an important role in image representation and analysis. Multi-resolution image descriptors can effectively characterize complex images and extract their hidden information. Wavelet descriptors have been widely used in multi-resolution image analysis; however, making the wavelet transform shift- and rotation-invariant introduces redundancy and requires complex matching processes. Other multi-resolution descriptors usually depend on additional theory or information, such as filtering functions or prior domain knowledge, which not only increases computational complexity but also introduces errors. We propose a novel multi-resolution scheme that can transform any kind of image descriptor into a multi-resolution structure with high computational accuracy and efficiency. Our scheme is based on sub-sampling an image into an odd-even image tree; applying image descriptors to the odd-even image tree yields the corresponding multi-resolution image descriptors. Multi-resolution analysis is based on downsampling expansion with maximum energy extraction, followed by upsampling reconstruction. Since the maximum energy is usually retained in the lowest-frequency coefficients, we perform maximum energy extraction by keeping the lowest coefficients at each resolution level. Our multi-resolution scheme can analyze images recursively and effectively without introducing artifacts or changes to the original images, produce multi-resolution representations, obtain higher-resolution images using only information from lower resolutions, compress data, filter noise, extract effective image features, and be implemented in parallel.
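
    As a rough illustration of the core subsampling step described above, the sketch below splits an image into its four odd/even polyphase sub-images and recurses to build the tree. Function names and the recursion depth are ours, not the paper's; this is a minimal sketch of the decomposition only, not of the energy-extraction or reconstruction stages.

```python
import numpy as np

def odd_even_split(img):
    """One level of the odd-even image tree: split into the four
    polyphase sub-images by odd/even row and column indices."""
    return [img[0::2, 0::2],   # even rows, even cols
            img[0::2, 1::2],   # even rows, odd cols
            img[1::2, 0::2],   # odd rows, even cols
            img[1::2, 1::2]]   # odd rows, odd cols

def odd_even_tree(img, levels):
    """Recursively decompose an image down to `levels` levels.
    Every pixel of the original survives in exactly one leaf, so the
    decomposition changes nothing and introduces no artifacts."""
    if levels == 0:
        return img
    return [odd_even_tree(sub, levels - 1) for sub in odd_even_split(img)]

# Example: a 256x256 image decomposed for 2 levels yields 16 leaf sub-images.
tree = odd_even_tree(np.random.rand(256, 256), levels=2)
```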

    Texture-Detail Preservation Measurement in Camera Phones: An Updated Approach

    Recent advances in mobile phone cameras have poised them to take over compact hand-held cameras as the consumer’s preferred camera option. Along with advances in pixel count, motion-blur removal, face tracking, and noise reduction algorithms play significant roles in the internal processing of these devices. An undesired effect of severe noise reduction is the loss of texture (i.e., the low-contrast fine details) of the original scene. Current established methods for resolution measurement fail to accurately portray the texture loss incurred in a camera system, so the development of an accurate objective method to identify the texture preservation or texture reproduction capability of a camera device is important. The ‘Dead Leaves’ target has been used extensively to measure the modulation transfer function (MTF) of cameras that employ highly non-linear noise-reduction methods. This stochastic model consists of a series of overlapping circles with radii r distributed as r⁻³ and uniformly distributed gray levels, which gives an accurate model of occlusion in a natural setting and hence mimics a natural scene. The target can therefore model texture transfer through a camera system when a natural scene is captured. In the first part of our study we identify various factors that affect the MTF measured using the ‘Dead Leaves’ chart, including variations in illumination, distance, exposure time, and ISO sensitivity, among others. We discuss the main differences between this method and existing resolution measurement techniques and identify its advantages. In the second part of this study, we propose an improvement to the current texture MTF measurement algorithm. High-frequency residual noise in the processed image contains the same frequency content as fine texture detail and is sometimes reported as such, leading to inaccurate results. A wavelet-thresholding-based denoising technique is utilized to model the noise present in the final captured image, and this updated noise model is then used to calculate an accurate texture MTF. We present comparative results for both algorithms under various image capture conditions.
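
    The dead-leaves construction described above is simple enough to sketch directly: the generator below draws disk radii from the stated r⁻³ power law by inverse-CDF sampling and paints the disks in falling order so that later disks occlude earlier ones. All parameter values here are illustrative, not taken from the study.

```python
import numpy as np

def dead_leaves(size=256, n_disks=2000, rmin=2.0, rmax=80.0, seed=0):
    """Render a simple dead-leaves chart: overlapping disks with radii
    drawn from a power-law density f(r) ~ r**-3 and uniformly
    distributed gray levels."""
    rng = np.random.default_rng(seed)
    # Inverse-CDF sampling for f(r) ~ r**-3 restricted to [rmin, rmax]
    u = rng.random(n_disks)
    r = 1.0 / np.sqrt(rmin**-2 - u * (rmin**-2 - rmax**-2))
    cx = rng.uniform(0, size, n_disks)
    cy = rng.uniform(0, size, n_disks)
    gray = rng.random(n_disks)           # uniformly distributed gray levels
    yy, xx = np.mgrid[0:size, 0:size]
    img = np.full((size, size), 0.5)
    for i in range(n_disks):             # later disks occlude earlier ones
        mask = (xx - cx[i])**2 + (yy - cy[i])**2 <= r[i]**2
        img[mask] = gray[i]
    return img
```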

    A Detection Method of Ectocervical Cell Nuclei for Pap test Images, Based on Adaptive Thresholds and Local Derivatives

    Cervical cancer is one of the main causes of death by disease worldwide. In Peru, it holds the first place in frequency and accounts for 8% of deaths caused by disease. To detect the disease in its early stages, one of the most widely used screening tests is the cervical Papanicolaou (Pap) test. Currently, digital images are increasingly being used to improve Pap test efficiency. This work develops an algorithm based on adaptive thresholds, to be used in Pap smear assisted quality control software. The first stage of the method is a pre-processing step that removes noise and background. Next, a block is segmented around each point not classified as background, and a local threshold is calculated per block to search for cell nuclei. If a nucleus is detected, an artifact rejection step follows, leaving only cell nuclei and inflammatory cells for the doctors to interpret. The method was validated with a set of 55 images containing 2317 cells. The algorithm successfully recognized 92.3% of the total nuclei in all collected images.
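
    A minimal sketch of the per-block adaptive-thresholding idea is given below. The abstract does not state the exact threshold rule, so the mean-based rule, the block size, and the scaling factor k here are assumptions for illustration only; the artifact-rejection stage is omitted.

```python
import numpy as np

def detect_nuclei_blocks(gray, points, block=64, k=0.85):
    """For each candidate (non-background) point, cut a local block
    around it and compute a block-specific threshold; pixels darker
    than the threshold are flagged as a possible nucleus.
    `gray` is a 2-D float image, `points` a list of (row, col) ints."""
    half = block // 2
    h, w = gray.shape
    masks = []
    for (y, x) in points:
        y0, y1 = max(0, y - half), min(h, y + half)
        x0, x1 = max(0, x - half), min(w, x + half)
        patch = gray[y0:y1, x0:x1]
        t = k * patch.mean()                 # local, block-specific threshold
        masks.append((patch < t, (y0, x0)))  # nuclei stain darker than cytoplasm
    return masks
```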

    Wavelet-Based Multicomponent Denoising Profile for the Classification of Hyperspectral Images

    The high resolution of currently available hyperspectral remote sensing images allows the detailed analysis of even small spatial structures. As a consequence, the study of techniques to efficiently extract spatial information is a very active research area. In this paper, we propose a novel denoising wavelet-based profile for the extraction of spatial information that requires no user-defined parameters. Over each band obtained by a wavelet-based feature extraction technique, a denoising profile (DP) is built through the recursive application of discrete wavelet transforms followed by a thresholding process. Each component of the DP consists of features reconstructed by recursively applying inverse wavelet transforms to the thresholded coefficients. Several thresholding methods are explored. In order to show the effectiveness of the extended DP (EDP), we propose a classification scheme based on the computation of the EDP followed by supervised classification with an extreme learning machine. The obtained results are compared to other state-of-the-art profile-based methods in the literature. An additional study of behavior in the presence of added noise is also performed, showing the high reliability of the proposed EDP. This work was supported in part by the Consellería de Educación, Universidade e Formación Profesional under Grants GRC2014/008 and ED431C 2018/2019 and by the Ministerio de Economía y Empresa, Gobierno de España under Grant TIN2016-76373-P, both co-funded by the European Regional Development Fund.
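
    The per-level construction of a DP can be sketched with PyWavelets as below. The universal (VisuShrink-style) threshold shown is only one of the thresholding options such a profile might use, and reconstructing with a single inverse transform per level is a simplification of the recursive reconstruction the paper describes; names and defaults are ours.

```python
import numpy as np
import pywt

def denoising_profile(band, levels=3, wavelet='haar'):
    """Build a denoising profile for one feature band: apply a 2-D DWT,
    soft-threshold the detail coefficients, reconstruct one profile
    component, then recurse on the approximation sub-band."""
    profile = []
    approx = band
    for _ in range(levels):
        cA, (cH, cV, cD) = pywt.dwt2(approx, wavelet)
        # Noise estimate from the diagonal details (median absolute deviation)
        sigma = np.median(np.abs(cD)) / 0.6745
        t = sigma * np.sqrt(2 * np.log(cD.size))      # universal threshold
        details = tuple(pywt.threshold(c, t, mode='soft') for c in (cH, cV, cD))
        profile.append(pywt.idwt2((cA, details), wavelet))
        approx = cA                                   # recurse on approximation
    return profile
```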

    Research on digital image watermark encryption based on hyperchaos

    Digital watermarking embeds meaningful information, in the form of one or more watermark images, into a host image that serves as a secret carrier. It is difficult for a hacker to extract or remove a hidden watermark from an image, and especially to crack a so-called digital watermark. Combining digital watermarking with traditional image encryption can greatly improve anti-hacking capability, making it a good method for protecting the integrity of the original image. The research work in this thesis includes: (1) a literature review, which concludes that the hyperchaotic watermarking technique is comparatively advantageous and therefore becomes the main subject of this programme; (2) the theoretical foundations of watermarking technologies, including the human visual system (HVS), colour space transforms, the discrete wavelet transform (DWT), the main watermark embedding algorithms, and the mainstream methods for improving watermark robustness and for evaluating watermark embedding performance; (3) a devised hyperchaotic scrambling technique, applied to colour image watermarks, which helps to improve image encryption and anti-cracking capabilities. The experiments in this research demonstrate the robustness and other advantages of the proposed technique. This thesis focuses on combining chaotic scrambling and wavelet watermark embedding to achieve a hyperchaotic digital watermark for encrypting digital products, with the human visual system (HVS) and other factors taken into account. This research is of significant importance and has industrial application value.
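
    As a sketch of the scrambling idea only, the code below permutes watermark pixels with a sequence generated by a logistic map. This is a simplified one-dimensional stand-in for the thesis's hyperchaotic system, used here purely to illustrate the key-driven scramble/unscramble mechanics; the DWT embedding stage is not shown.

```python
import numpy as np

def chaotic_permutation(n, x0=0.3456, mu=3.99, burn=1000):
    """Derive a pixel permutation from a logistic-map orbit.
    (Stand-in for a hyperchaotic system; the scrambling idea is the same.)"""
    x = x0
    for _ in range(burn):            # discard the transient
        x = mu * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = mu * x * (1 - x)
        seq[i] = x
    return np.argsort(seq)           # sort order of the orbit = permutation

def scramble(watermark, key=0.3456):
    flat = watermark.ravel()
    perm = chaotic_permutation(flat.size, x0=key)
    return flat[perm].reshape(watermark.shape), perm

def unscramble(scrambled, perm):
    flat = np.empty_like(scrambled.ravel())
    flat[perm] = scrambled.ravel()   # invert the permutation
    return flat.reshape(scrambled.shape)
```

    Because the permutation is reproduced entirely from the initial condition, the key never needs to be transmitted alongside the scrambled watermark; a receiver with the same key regenerates the same orbit.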

    Smart Nanoscopy: A Review of Computational Approaches to Achieve Super-Resolved Optical Microscopy

    The field of optical nanoscopy, a paradigm referring to recent cutting-edge developments aimed at surpassing the widely acknowledged 200 nm diffraction limit of traditional optical microscopy, has gained prominence and traction in the 21st century. Numerous optical implementations allowing a new frontier in traditional confocal laser scanning fluorescence microscopy to be explored (termed super-resolution fluorescence microscopy) have been realized through the development of techniques such as stimulated emission depletion (STED) microscopy, photoactivated localization microscopy (PALM), and stochastic optical reconstruction microscopy (STORM), among others. Nonetheless, it is apt to mention that optical nanoscopy has been explored since the mid-to-late 20th century through several computational techniques, such as deblurring and deconvolution algorithms. In this review, we take a step back in the field, evaluating the various in silico methods used to achieve optical nanoscopy today, ranging from traditional deconvolution algorithms (such as the nearest-neighbors algorithm) to the latest developments in the field of computational nanoscopy founded on artificial intelligence (AI). An insight is provided into some of the commercial applications of AI-based super-resolution imaging, prior to delving into the potentially promising future implications of computational nanoscopy. This is facilitated by recent advancements in AI, deep learning (DL), and convolutional neural network (CNN) architectures, coupled with the growing size of data sources and rapid improvements in computing hardware, such as multi-core CPUs and GPUs, low-latency RAM, and larger hard-drive capacities.
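
    As a concrete example of the traditional iterative deconvolution algorithms reviewed here, the sketch below implements classic Richardson-Lucy deconvolution in NumPy/SciPy. It is illustrative only, assuming a non-negative float-valued image and a known 2-D point spread function (PSF); it is not a description of any specific package discussed in the review.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Richardson-Lucy deconvolution: each iteration re-blurs the current
    estimate with the PSF, compares it against the observed image, and
    applies a multiplicative correction correlated with the PSF."""
    estimate = np.full(image.shape, image.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]                    # mirrored PSF (correlation step)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / (blurred + 1e-12)         # guard against division by zero
        estimate *= fftconvolve(ratio, psf_flip, mode='same')
    return estimate
```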