2,462 research outputs found

    Detection of dirt impairments from archived film sequences: survey and evaluations

    Film dirt is the most commonly encountered artifact in archive restoration applications. Since dirt usually appears as a temporally impulsive event, motion-compensated interframe processing is widely applied for its detection. However, motion-compensated prediction is computationally complex and can be unreliable when motion estimation fails. Consequently, many techniques using spatial or spatiotemporal filtering without motion compensation have also been proposed as alternatives. A comprehensive survey and evaluation of existing methods is presented, in which both qualitative and quantitative performances are compared in terms of accuracy, robustness, and complexity. After analyzing these algorithms and identifying their limitations, we conclude with guidance on choosing among these algorithms and promising directions for future research.
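    The temporally impulsive behaviour described above can be sketched without motion compensation as a simple three-frame spike rule: a pixel is flagged as dirt when it differs strongly, and in the same direction, from both the previous and the next frame. This is a hedged illustration of the general idea, not any specific detector from the survey; the threshold value is an assumption.

```python
import numpy as np

def detect_dirt(prev_f, cur_f, next_f, threshold=30.0):
    """Flag temporally impulsive pixels: the current frame differs strongly
    from BOTH the previous and the next frame, in the same direction.
    'threshold' is an illustrative tuning parameter, not a surveyed value."""
    d_prev = cur_f.astype(np.float64) - prev_f
    d_next = cur_f.astype(np.float64) - next_f
    same_sign = np.sign(d_prev) == np.sign(d_next)
    strong = (np.abs(d_prev) > threshold) & (np.abs(d_next) > threshold)
    return same_sign & strong

# Synthetic example: a static scene with one dirt blotch in the middle frame.
prev_f = np.full((8, 8), 100.0)
next_f = np.full((8, 8), 100.0)
cur_f = prev_f.copy()
cur_f[3:5, 3:5] = 220.0          # bright blotch present only in this frame
mask = detect_dirt(prev_f, cur_f, next_f)
```

    On this toy sequence the mask is true exactly over the blotch; real sequences need motion compensation precisely because camera or object motion also produces large two-sided frame differences.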

    Integrating IoT and Novel Approaches to Enhance Electromagnetic Image Quality using Modern Anisotropic Diffusion and Speckle Noise Reduction Techniques

    Electromagnetic imaging is becoming more important in many sectors, and reliable analysis requires high-quality images. This study exploits the complementary relationship between IoT and current image processing methods to improve the quality of electromagnetic images. The research presents a new framework for connecting Internet of Things sensors to imaging equipment, allowing instantaneous feedback and adjustment. At the same time, the proposed system applies sophisticated anisotropic diffusion algorithms to bring out key details and suppress noise in electromagnetic images. In addition, a cutting-edge technique for reducing speckle noise is used to combat this persistent issue in electromagnetic imaging. The effectiveness of the proposed system was determined via comparison with standard imaging techniques. The results show a noticeable improvement in visual sharpness, contrast, and overall clarity without any loss of information. Incorporating IoT sensors also enabled faster calibration and real-time adjustment, opening up new possibilities for use in highly variable contexts. In fields where electromagnetic imaging plays a crucial role, such as medicine, remote sensing, and aerospace, the ramifications of this study are far-reaching. Our research demonstrates how the Internet of Things (IoT) and cutting-edge image processing can dramatically improve the functionality and versatility of electromagnetic imaging systems.
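    To fix ideas about the anisotropic diffusion family the abstract invokes, here is a minimal Perona-Malik sketch: diffusion proceeds freely in flat regions, while the conduction term g() shrinks toward zero across strong edges so they are preserved. The parameter values are illustrative assumptions, not the paper's settings, and np.roll wraps at the image borders, which is acceptable for this sketch.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, lam=0.2):
    """Minimal Perona-Malik sketch: smooth flat regions, preserve edges.
    kappa (edge scale) and lam (step size, stable for lam <= 0.25) are
    illustrative settings."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Nearest-neighbour differences (np.roll wraps at the borders).
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Conduction coefficient: near 1 in flat areas, near 0 across edges.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16:] = 100.0            # a single vertical edge
noisy = clean + rng.normal(0, 5, clean.shape)
smoothed = anisotropic_diffusion(noisy)
```

    On the test image, noise in the flat halves is strongly attenuated while the 100-level step across the edge survives, which is the behaviour that makes this family attractive for detail-preserving enhancement.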

    Computational intelligence-based steganalysis comparison for RCM-DWT and PVA-MOD methods

    This research article proposes a data hiding technique that improves the data hiding procedure and secures data transmission using a contrast mapping technique together with an advanced data encryption standard. High data hiding capacity, image quality, and security are the measures of steganography. Of these three measures, the number of bits that can be hidden in a single cover pixel, bits per pixel (bpp), is very important, and many researchers are working to improve it. We propose an improved high-capacity data hiding method that maintains acceptable image quality (more than 30 dB) and achieves an embedding capacity higher than that of methods proposed in recent years. The method proposed in this paper uses a notational system, achieves a high embedding rate of 4 bpp, and maintains good visual quality. To measure the efficiency of the proposed information hiding methodology, a simulation system was developed that includes some of the impairments caused by a communication system. PSNR (Peak Signal-to-Noise Ratio) is used to verify the robustness of the images, and the proposed work is verified under noise analysis. To evaluate performance under attack, RS steganalysis is used.
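    The 30 dB quality bar can be checked with a standard PSNR computation. The sketch below uses a toy embedding distortion purely for illustration; it is not the authors' notational-system embedding.

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB over 8-bit images; values above
    ~30 dB are commonly treated as acceptable stego-image quality."""
    mse = np.mean((original.astype(np.float64) - distorted) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

cover = np.full((64, 64), 128.0)
stego = cover.copy()
stego[::2, ::2] += 4.0           # toy embedding distortion (illustrative only)
quality = psnr(cover, stego)     # ~42 dB, well above the 30 dB bar
```

    Embedding rate is measured independently of PSNR: at 4 bpp, a 64x64 cover carries 64*64*4 = 16384 hidden bits, so capacity and distortion must be traded off together.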

    Multi-view human action recognition using 2D motion templates based on MHIs and their HOG description

    In this study, a new multi-view human action recognition approach is proposed that exploits low-dimensional motion information of actions. Before feature extraction, pre-processing steps are performed to remove noise from silhouettes incurred by imperfect, but realistic, segmentation. Two-dimensional motion templates based on the motion history image (MHI) are computed for each view/action video. Histograms of oriented gradients (HOGs) are used as an efficient description of the MHIs, which are classified using a nearest neighbour (NN) classifier. Compared with existing approaches, the proposed method has three advantages: (i) it does not require a fixed number of cameras during training and testing, so missing camera views can be tolerated; (ii) it has lower memory and bandwidth requirements; and hence (iii) it is computationally efficient, which makes it suitable for real-time action recognition. As far as the authors know, this is the first report of results on the MuHAVi-uncut dataset, which has a large number of action categories and a large set of camera views with noisy silhouettes, and which future workers can use as a baseline to improve on. Multi-view experiments on this dataset give a high accuracy of 95.4% using the leave-one-sequence-out cross-validation technique and compare well with similar state-of-the-art approaches.

    Sergio A Velastin acknowledges the Chilean National Science and Technology Council (CONICYT) for its funding under grant CONICYT-Fondecyt Regular no. 1140209 ("OBSERVE"). He is currently funded by the Universidad Carlos III de Madrid, the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no. 600371, el Ministerio de Economía y Competitividad (COFUND2013-51509) and Banco Santander.
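    The 2D motion-template step can be sketched as the standard MHI update rule: silhouette pixels in the current frame are stamped with the maximum timestamp, and all other pixels decay linearly, so recent motion appears brighter than older motion. The tau and delta values here are illustrative, not taken from the study.

```python
import numpy as np

def update_mhi(mhi, silhouette, tau=255, delta=32):
    """One Motion History Image update: pixels under the current silhouette
    are set to tau; all others decay by delta (clamped at 0). tau/delta
    are illustrative settings."""
    return np.where(silhouette > 0, float(tau), np.maximum(mhi - delta, 0.0))

# Toy sequence: a 1-pixel blob moving right across an 8x8 frame.
mhi = np.zeros((8, 8))
for col in range(4):
    sil = np.zeros((8, 8))
    sil[3, col] = 1
    mhi = update_mhi(mhi, sil)
# The resulting MHI encodes a brightness gradient along the motion path,
# which a HOG descriptor then summarises as oriented-gradient histograms.
```

    The decaying trail (255, 223, 191, 159 along the path in this toy run) is exactly the low-dimensional motion information the approach feeds to HOG + NN classification.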

    Progression approach for image denoising

    Removing noise from an image while retaining its details and features remains a standing challenge for researchers in this field. This study therefore proposes and implements a new denoising technique for removing impulse noise from digital images. The technique narrows the gap between the original and restored images, both visually and quantitatively, by adopting the mathematical concept of the arithmetic progression, which is integrated into image denoising here because of its ability to model the variation of pixel intensity in an image. The principle of the proposed technique relies on precision: it keeps uncorrupted pixels by using effective noise detection, and replaces corrupted pixels with the closest pixels from the original image, at lower cost and with greater simplicity.
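    The detect-then-replace pipeline the abstract describes can be sketched with a generic switching median filter: only pixels flagged as impulse candidates are touched, and all other pixels pass through unchanged. The median replacement here is a stand-in, since the abstract does not detail the arithmetic-progression replacement itself, and the extreme-value detection thresholds are assumptions.

```python
import numpy as np

def switching_median(img, low=5, high=250):
    """Generic detect-then-replace sketch: treat extreme intensities as
    impulse (salt & pepper) candidates, replace ONLY those with the 3x3
    neighbourhood median, and leave uncorrupted pixels untouched. The
    median step is a stand-in for the paper's arithmetic-progression
    replacement, which the abstract does not specify."""
    out = img.astype(np.float64).copy()
    h, w = img.shape
    corrupted = (img <= low) | (img >= high)
    for y, x in zip(*np.nonzero(corrupted)):
        y0, y1 = max(0, y - 1), min(h, y + 2)
        x0, x1 = max(0, x - 1), min(w, x + 2)
        out[y, x] = np.median(img[y0:y1, x0:x1])
    return out

img = np.full((9, 9), 120.0)
img[4, 4] = 255.0                # one salt impulse
restored = switching_median(img)
```

    Because only detected pixels are modified, the restored image differs from the input at exactly one location in this example, which is the "precision" property the abstract emphasises.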

    Mathematical approaches to digital color image denoising

    Many mathematical models have been designed to remove noise from images. Most of them focus on grey-value images with additive artificial noise. Only very few specifically target natural color photos taken by a digital camera with real noise. Noise in natural color photos has special characteristics that are substantially different from noise that has been added artificially. In this thesis previous denoising models are reviewed. We analyze the strengths and weaknesses of existing denoising models by showing where they perform well and where they don't. We put special focus on two models: the steering kernel regression model and the non-local model. For the kernel regression model, an adaptive bilateral filter is introduced as a complement to enhance it. A non-local bilateral filter is also proposed as an application of the idea of the non-local means filter. The idea of cross-channel denoising is then proposed in this thesis. It is effective in denoising monochromatic images by exploiting the characteristics of digital noise in natural color images. A non-traditional color space is also introduced specifically for this purpose. The cross-channel paradigm can be applied to most of the existing models to greatly improve their performance in denoising natural color images.

    Ph.D. Committee Chair: Haomin Zhou; Committee Members: Luca Dieci, Ronghua Pan, Sung Ha Kang, Yang Wan
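    Since the thesis builds its adaptive and non-local variants on bilateral filtering, a plain bilateral filter sketch may help fix ideas: each output pixel is a weighted average whose weights fall off with both spatial distance and intensity difference, so flat regions are smoothed while edges survive. All parameter values here are illustrative assumptions.

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=2.0, sigma_r=20.0):
    """Plain bilateral filter: weights combine a spatial Gaussian with a
    range (intensity-difference) Gaussian, preserving edges. Parameters
    are illustrative."""
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(img, radius, mode="edge")
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng_w = np.exp(-(patch - img[y, x]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * rng_w
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out

rng = np.random.default_rng(1)
step = np.zeros((16, 16))
step[:, 8:] = 100.0              # one vertical edge
noisy = step + rng.normal(0, 4, step.shape)
den = bilateral(noisy)
```

    The non-local means idea the thesis applies replaces the spatial Gaussian with patch-similarity weights gathered from the whole image rather than a local window.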

    Frequency and Spatial Domains Adaptive-based Enhancement Technique for Thermal Infrared Images

    A low-contrast, noisy image limits the amount of information conveyed to the user. With the proliferation of digital imagery and computer interfaces between man and machine, it is now viable to digitally enhance the image before presenting it to the user, thus increasing the information throughput. With better contrast, target detection and discrimination can be improved. The paper presents a sequence of filtering operations in the frequency and spatial domains to improve the quality of thermal infrared (IR) images. Two filters are applied: a homomorphic filter followed by an adaptive Gaussian filter. We have systematically evaluated the algorithm on a variety of images and carefully compared it with techniques presented in the literature. We evaluated three approaches (homomorphic filtering, 5×5 Gaussian filtering, and the proposed method) and found that the proposed method yields the best PSNR for all the thermal images. The results demonstrate that the proposed algorithm is efficient for enhancement of thermal IR images.

    Defence Science Journal, Vol. 64, No. 5, September 2014, pp. 451-457. DOI: http://dx.doi.org/10.14429/dsj.64.687
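    The frequency-domain half of such a pipeline can be sketched as textbook homomorphic filtering: take the log to separate slowly varying illumination from high-frequency reflectance detail, attenuate the former and boost the latter with a Gaussian high-emphasis filter in the Fourier domain, then exponentiate back. The gains and cut-off below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def homomorphic(img, gamma_l=0.5, gamma_h=1.5, d0=10.0):
    """Textbook homomorphic filtering sketch: log -> FFT -> Gaussian
    high-emphasis (gain gamma_l at DC rising to gamma_h at high
    frequencies) -> inverse FFT -> exp. Parameters are illustrative."""
    h, w = img.shape
    log_img = np.log1p(img.astype(np.float64))
    spec = np.fft.fftshift(np.fft.fft2(log_img))
    yy, xx = np.mgrid[:h, :w]
    d2 = (yy - h / 2) ** 2 + (xx - w / 2) ** 2
    filt = gamma_l + (gamma_h - gamma_l) * (1 - np.exp(-d2 / (2 * d0 ** 2)))
    out = np.fft.ifft2(np.fft.ifftshift(spec * filt)).real
    return np.expm1(out)

img = np.tile(np.linspace(20, 200, 32), (32, 1))  # smooth illumination ramp
enhanced = homomorphic(img)
```

    On this ramp, the low-frequency illumination gradient is compressed (gamma_l < 1) while any fine detail would be amplified (gamma_h > 1), which is why the technique suits low-contrast thermal imagery.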

    Analysis and optimisation of a variational model for mixed Gaussian and Salt & Pepper noise removal

    We analyse a variational regularisation problem for mixed noise removal that was recently proposed in [14]. The data discrepancy term of the model combines L1 and L2 terms in an infimal convolution fashion and is appropriate for the joint removal of Gaussian and Salt & Pepper noise. In this work we perform a finer analysis of the model which emphasises the balancing effect of the two parameters appearing in the discrepancy term. Namely, we study the asymptotic behaviour of the model for large and small values of these parameters and compare it to the corresponding variational models with L1 and L2 data fidelity. Furthermore, we compute exact solutions for simple data functions, taking the total variation as regulariser. Using these theoretical results, we then analytically study a bilevel optimisation strategy for automatically selecting the parameters of the model by means of a training set. Finally, we report numerical results on the selection of the optimal noise model via this strategy, which confirm the validity of our analysis and the use of popular data models in the case of "blind" model selection.
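    A sketch of the general shape of such a model (the notation here is assumed, since the abstract does not state the functional): the infimal-convolution data term splits the residual u - f into a sparse component v handled by the L1 norm (Salt & Pepper outliers) and a remainder handled by the squared L2 norm (Gaussian noise), with the two parameters balancing the split:

```latex
\min_{u}\; \mathrm{TV}(u) \;+\; \min_{v}\Big( \lambda_{1}\,\|v\|_{L^{1}} \;+\; \lambda_{2}\,\|u - f - v\|_{L^{2}}^{2} \Big)
```

    Sending one parameter to infinity forces the corresponding component of the residual to vanish, which recovers the pure L2 or pure L1 fidelity model; this is the asymptotic regime the analysis studies.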