18,959 research outputs found

    Single frame super-resolution image system

    Get PDF
    Estimating unknown quantities from known observations can be viewed as a statistical process that requires an additional source of prior information or a prediction strategy; image super-resolution is an important application of this idea. In this thesis, we propose a new image interpolation method for the single-frame super-resolution task, based on the Redundant Discrete Wavelet Transform (RDWT) and a self-adaptive process that takes edge directions into account. Information about sharp variations in both the horizontal and vertical directions, derived from the wavelet transform sub-bands, is exploited, and the aliased parts of the preliminary output are then detected and corrected to improve the visual quality. By exploiting fundamental image properties such as edge direction, different parts of the source image are treated separately so that the vertical and horizontal details can be predicted accurately, completing the framework for reconstructing the high-resolution image. Extensive tests show that the proposed method clearly improves both objective quality (PSNR) and subjective quality compared with several state-of-the-art methods. The work also leaves ample room for further research, both theoretical and practical, and some related applications of the algorithm are briefly introduced.
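
    A minimal sketch of the wavelet-domain interpolation idea is given below, assuming the low-resolution image is treated as the LL sub-band of the unknown high-resolution image and the missing detail sub-bands are zeroed; the edge-adaptive RDWT processing described in the abstract is not reproduced, and the function name and wavelet choice are illustrative.

        import numpy as np
        import pywt  # PyWavelets

        def wavelet_zero_padding_upscale(lr_image, wavelet="haar"):
            """Upscale a grayscale image by 2x via an inverse DWT with zeroed details.

            A simplified stand-in for the RDWT-based, edge-adaptive interpolation
            described above, not the authors' method.
            """
            lr = np.asarray(lr_image, dtype=np.float64)
            zeros = np.zeros_like(lr)
            # Treat the LR image as the LL sub-band; the inverse 2-D DWT then
            # doubles each spatial dimension.
            return pywt.idwt2((lr, (zeros, zeros, zeros)), wavelet)

        # Example: a random 64x64 "image" becomes 128x128 after upscaling.
        hr = wavelet_zero_padding_upscale(np.random.rand(64, 64))
        print(hr.shape)  # (128, 128)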

    An Improved Approach for Contrast Enhancement of Spinal Cord Images based on Multiscale Retinex Algorithm

    Full text link
    This paper presents a new approach for contrast enhancement of spinal cord medical images, based on a multirate scheme incorporated into the multiscale retinex algorithm. The proposed work uses the HSV color space, since HSV separates color details from intensity. Enhancement is achieved by downsampling the original image into five versions, namely tiny, small, medium, fine, and normal scale, because each version of the image, when independently enhanced and reconstructed, yields a substantial improvement in visual quality. Contrast stretching and MultiScale Retinex (MSR) techniques are then applied to enhance each scaled version of the image. Finally, the enhanced scales are combined in an efficient way to obtain the composite enhanced image. The efficiency of the proposed algorithm is validated using a wavelet energy metric in the wavelet domain. The reconstructed image highlights details (edges and tissues), reduces image noise (Gaussian and speckle) and improves the overall contrast. The algorithm also enhances sharp edges of the tissue surrounding the spinal cord, which is useful for diagnosing spinal cord lesions. Extensive experiments conducted on several medical images show that the enhanced images are of good quality and compare favourably with methods from other researchers. Comment: 13 pages, 6 figures, International Journal of Imaging and Robotics. arXiv admin note: text overlap with arXiv:1406.571
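
    As a rough sketch of the enhancement step, the snippet below applies a basic multiscale retinex to the V channel of an HSV image; the five-scale down-sampling pipeline, the contrast stretching and the wavelet-energy validation from the paper are omitted, and the Gaussian scales are illustrative assumptions rather than the authors' settings.

        import cv2
        import numpy as np

        def multiscale_retinex_v(bgr, sigmas=(15, 80, 250)):
            """Apply a basic MSR to the V channel of an 8-bit BGR image."""
            hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
            v = hsv[..., 2] + 1.0                       # avoid log(0)
            msr = np.zeros_like(v)
            for sigma in sigmas:
                blurred = cv2.GaussianBlur(v, (0, 0), sigma)
                msr += np.log(v) - np.log(blurred)      # single-scale retinex term
            msr /= len(sigmas)
            # Stretch the retinex output back to [0, 255] for display.
            hsv[..., 2] = cv2.normalize(msr, None, 0, 255, cv2.NORM_MINMAX)
            return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)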

    Spread spectrum-based video watermarking algorithms for copyright protection

    Get PDF
    Digital technologies have seen an unprecedented expansion in recent years, and consumers now benefit from hardware and software that was considered state-of-the-art only a few years ago. The advantages offered by digital technology are considerable, but the same technology opens the door to unlimited piracy. Copying an analogue VCR tape was certainly possible and relatively easy, in spite of various forms of protection, but the analogue medium meant that each successive copy suffered an inherent loss of quality, which naturally limited multiple copying of video material. With digital technology this barrier disappears: it is possible to make as many copies as desired without any loss of quality. Digital watermarking is one of the best available tools for fighting this threat. The aim of the present work was to develop a digital watermarking system for video broadcast monitoring, compliant with the recommendations drawn up by the EBU. Since the watermark can be inserted in either the spatial domain or a transform domain, this choice was investigated and led to the conclusion that the wavelet transform is one of the best solutions available. Because watermarking is not an easy task, especially with regard to robustness under various attacks, several techniques were employed to increase the capacity and robustness of the system: spread-spectrum and modulation techniques to cast the watermark, powerful error correction to protect the mark, and human visual models to insert a robust yet invisible mark. The combination of these methods led to a major improvement, but the system was still not robust to several important geometrical attacks. To reach this last milestone, the system uses two distinct watermarks: a spatial-domain reference watermark and the main watermark embedded in the wavelet domain. Using the reference watermark and techniques specific to image registration, the system is able to estimate the parameters of the attack and revert it; once the attack is reverted, the main watermark is recovered. The final result is a high-capacity, blind DWT-based video watermarking system that is robust to a wide range of attacks. BBC Research & Development
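
    The core spread-spectrum idea in the wavelet domain can be sketched as follows: a key-seeded pseudo-random +/-1 sequence is added to one detail sub-band of a frame's DWT and later detected by correlation. The dual-watermark registration scheme, error-correction coding and human-visual-model shaping described above are not reproduced; the embedding strength, the choice of sub-band and the function names are illustrative assumptions.

        import numpy as np
        import pywt

        def embed_watermark(frame, key, alpha=2.0, wavelet="haar"):
            """Add a pseudo-random sequence to the level-2 horizontal details."""
            coeffs = pywt.wavedec2(np.asarray(frame, dtype=float), wavelet, level=2)
            cH2, cV2, cD2 = coeffs[1]                       # coarsest detail sub-bands
            w = np.random.default_rng(key).choice([-1.0, 1.0], size=cH2.shape)
            coeffs[1] = (cH2 + alpha * w, cV2, cD2)
            return pywt.waverec2(coeffs, wavelet), w

        def detect_watermark(frame, w, wavelet="haar"):
            """Correlation detector: a clearly positive value suggests the mark is present."""
            coeffs = pywt.wavedec2(np.asarray(frame, dtype=float), wavelet, level=2)
            return float(np.mean(coeffs[1][0] * w))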

    Application of Fractal and Wavelets in Microcalcification Detection

    Get PDF
    Breast cancer has been recognized as one of the most frequent malignant tumors in women, and clustered microcalcifications in mammogram images have been widely recognized as an early sign of breast cancer. This work is devoted to reviewing the application of fractals and wavelets in microcalcification detection.

    A Fusion Framework for Camouflaged Moving Foreground Detection in the Wavelet Domain

    Full text link
    Detecting camouflaged moving foreground objects has been known to be difficult due to the similarity between the foreground objects and the background. Conventional methods cannot distinguish the foreground from the background because of the small differences between them and thus suffer from under-detection of the camouflaged foreground objects. In this paper, we present a fusion framework to address this problem in the wavelet domain. We first show that the small differences in the image domain can be highlighted in certain wavelet bands. Then the likelihood of each wavelet coefficient being foreground is estimated by formulating foreground and background models for each wavelet band. The proposed framework effectively aggregates the likelihoods from different wavelet bands based on the characteristics of the wavelet transform. Experimental results demonstrate that the proposed method significantly outperforms existing methods in detecting camouflaged foreground objects. Specifically, the average F-measure for the proposed algorithm was 0.87, compared with 0.71 to 0.80 for the other state-of-the-art methods. Comment: 13 pages, accepted by IEEE TI
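
    A rough sketch of the band-fusion idea, assuming a single-level DWT and a simple normalized sum in place of the paper's per-band likelihood models and fusion weights (the function name and the explicit background-frame input are illustrative):

        import numpy as np
        import pywt

        def fused_foreground_likelihood(frame, background, wavelet="haar"):
            """Combine per-sub-band frame/background differences into one map."""
            fa, (fh, fv, fd) = pywt.dwt2(np.asarray(frame, dtype=float), wavelet)
            ba, (bh, bv, bd) = pywt.dwt2(np.asarray(background, dtype=float), wavelet)
            likelihood = np.zeros_like(fa)
            for f, b in zip((fa, fh, fv, fd), (ba, bh, bv, bd)):
                diff = np.abs(f - b)
                likelihood += diff / (diff.max() + 1e-9)    # per-band normalization
            return likelihood / 4.0                         # fused map at half resolution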

    Regularity scalable image coding based on wavelet singularity detection

    Get PDF
    In this paper, we propose an adaptive algorithm for scalable wavelet image coding based on regularity, a general feature of images. In pattern recognition and computer vision, the regularity of an image is estimated from the oriented wavelet coefficients and quantified by the Lipschitz exponents. To estimate the Lipschitz exponents, evaluating the interscale evolution of the wavelet transform modulus sum (WTMS) over the directional cone of influence has been shown to be a better approach than tracing the wavelet transform modulus maxima (WTMM), because the irregular sampling nature of the WTMM complicates the reconstruction process. Moreover, examples exist showing that the WTMM representation cannot uniquely characterize a signal, which implies that reconstruction from the WTMM may not be consistently stable; the WTMM approach also requires much more computational effort. We therefore use the WTMS approach to estimate the regularity of images from the separable wavelet transform coefficients. Since we are not concerned with the localization issue, we allow decimation when evaluating the interscale evolution. The estimated regularity is then used in our proposed adaptive regularity-scalable wavelet image coding algorithm. This algorithm can be embedded into any wavelet image coder, so it is compatible with existing scalable coding techniques, such as resolution-scalable and signal-to-noise-ratio (SNR) scalable coding, without changing the bitstream format, while providing more scalability levels with higher peak signal-to-noise ratios (PSNRs) and lower bit rates. Compared with other feature-based wavelet scalable coding algorithms, the proposed algorithm performs better in terms of visual perception, computational complexity and coding efficiency.
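
    Under simplifying assumptions, the WTMS-based regularity estimate can be illustrated in 1-D by summing the wavelet modulus in a window around a point at several dyadic scales and fitting the slope of log2(sum) against log2(scale); the offset relating this slope to the Lipschitz exponent depends on the wavelet normalization and is omitted, and the continuous wavelet, scales and window size below are illustrative choices rather than the paper's directional cone-of-influence construction.

        import numpy as np
        import pywt

        def wtms_slope(signal, position, num_scales=4, window=8):
            """Slope of the interscale decay of the wavelet modulus sum near a point."""
            scales = 2.0 ** np.arange(1, num_scales + 1)        # dyadic scales 2..16
            coefs, _ = pywt.cwt(np.asarray(signal, dtype=float), scales, "mexh")
            lo, hi = max(position - window, 0), position + window
            sums = np.abs(coefs[:, lo:hi]).sum(axis=1) + 1e-12  # modulus sum per scale
            slope, _ = np.polyfit(np.log2(scales), np.log2(sums), 1)
            return slope                                        # larger slope ~ higher estimated regularity

        # Example usage on a signal with a step discontinuity at index 128.
        sig = np.concatenate([np.zeros(128), np.ones(128)])
        print(wtms_slope(sig, 128))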

    Improved Stroke Detection at Early Stages Using Haar Wavelets and Laplacian Pyramid

    Get PDF
    Stroke is the third leading cause of death in the world, yet few methods exist for its early detection, so a method to detect it is needed. This study proposes a combined method for detecting two types of stroke simultaneously: Haar wavelets for detecting hemorrhagic stroke and the Laplacian pyramid for detecting ischemic stroke. The stages of this research consist of preprocessing stages 1 and 2, Haar wavelets, the Laplacian pyramid, and image quality improvement. Preprocessing consists of removing the skull, noise reduction, contrast improvement, and removing everything other than the brain region, after which image enhancement is performed. The Haar wavelet is then used to extract hemorrhagic regions, while the Laplacian pyramid extracts ischemic regions. The final stage is computing Grey Level Co-occurrence Matrix (GLCM) features for the classification process. The visualization results are further processed for feature extraction using GLCM with 12 features and then GLCM with 4 features. SVM and KNN are used for classification, and performance is measured by accuracy. The hemorrhagic and ischemic data comprise 45 images, divided into two parts: 28 images for testing and 17 images for training. The final results show that the highest accuracy achieved with SVM is 82% and with KNN is 88%.
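
    A sketch of the classification stage described above: four GLCM properties (matching the reduced four-feature set mentioned in the abstract) are extracted from each segmented region and fed to SVM and KNN classifiers. The GLCM distances and angles, the SVM kernel, the number of neighbours and the function names are illustrative assumptions, not the study's exact settings.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.svm import SVC
        from sklearn.neighbors import KNeighborsClassifier

        def glcm_features(region_u8):
            """Four GLCM properties of an 8-bit region, averaged over four directions."""
            glcm = graycomatrix(region_u8, distances=[1],
                                angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                                levels=256, symmetric=True, normed=True)
            props = ("contrast", "correlation", "energy", "homogeneity")
            return np.array([graycoprops(glcm, p).mean() for p in props])

        def train_and_score(train_regions, y_train, test_regions, y_test):
            """Train SVM and KNN on GLCM features and report test accuracy."""
            X_train = np.array([glcm_features(r) for r in train_regions])
            X_test = np.array([glcm_features(r) for r in test_regions])
            svm_acc = SVC(kernel="rbf").fit(X_train, y_train).score(X_test, y_test)
            knn_acc = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train).score(X_test, y_test)
            return svm_acc, knn_acc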