377 research outputs found

    Image De-noising using 2-D Circular-Support Wavelet Transform

    Images often suffer from two main corruptions (unwanted modifications), categorized as blur and noise. Noise appears during the image processing phases of acquisition, transmission, and retrieval. The purpose of any de-noising algorithm is to remove such noise while preserving as much image detail as possible. In this paper, a 2-D circular-support wavelet transform (2-D CSWT) is proposed as an image de-noising algorithm. The de-noising algorithm is realized in the form of efficient mask filters. Thresholding-based de-noising can be applied to all 2-D high-pass coefficient channels with different thresholding levels. A noisy Lena image with different levels of noise (salt-and-pepper and Gaussian) has been used to assess the performance of the de-noising scheme. Tests are reported in terms of PSNR and the correlation factor of the reconstructed image. A comparative study between the conventional wavelet transform and the 2-D CSWT is also presented in this paper.
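    As a rough illustration of the thresholding step described above, the following is a minimal sketch that uses a conventional 2-D wavelet transform from PyWavelets in place of the paper's 2-D CSWT; the wavelet name, noise level, threshold value, and synthetic test image are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: soft-threshold all high-pass (detail) channels of a conventional
# 2-D DWT (PyWavelets stands in for the paper's 2-D CSWT) and report PSNR and
# correlation of the reconstruction. Parameters are illustrative assumptions.
import numpy as np
import pywt

def denoise_dwt(noisy, wavelet="db4", level=2, threshold=20.0):
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    cleaned = [coeffs[0]]  # approximation band is left untouched
    for (cH, cV, cD) in coeffs[1:]:
        cleaned.append(tuple(pywt.threshold(c, threshold, mode="soft")
                             for c in (cH, cV, cD)))
    return pywt.waverec2(cleaned, wavelet)

def psnr(reference, test, peak=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy usage with a synthetic ramp image and additive Gaussian noise.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 256), (256, 1))
noisy = clean + rng.normal(0, 25, clean.shape)
restored = denoise_dwt(noisy)[:256, :256]
print(f"noisy PSNR    : {psnr(clean, noisy):.2f} dB")
print(f"restored PSNR : {psnr(clean, restored):.2f} dB")
print(f"correlation   : {np.corrcoef(clean.ravel(), restored.ravel())[0, 1]:.4f}")
```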

    Teleporting digital images

    For the last 25 years the scientific community has lived with one of the most fascinating protocols produced by Quantum Physics: quantum teleportation (QTele), which would have been impossible if quantum entanglement, so questioned by Einstein, did not exist. In this work, a complete architecture for the teleportation of Computational Basis States (CBS) is presented. Such CBS represent each of the 24 classical bits commonly used to encode every pixel of a 3-color-channel image (red-green-blue, or cyan-yellow-magenta). For this purpose, a pair of interfaces, classical-to-quantum (Cl2Qu) and quantum-to-classical (Qu2Cl), is presented, together with two versions of the teleportation protocol: standard and simplified.
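    To make the CBS-teleportation idea concrete, here is a small, self-contained state-vector sketch in plain NumPy: it teleports each bit of a 24-bit pixel as a computational basis state through the standard three-qubit protocol. This is a generic textbook teleportation simulation, not the paper's architecture; the Cl2Qu/Qu2Cl interfaces are reduced to simple bit packing, and the pixel value and qubit ordering are assumptions made for the example.

```python
# Sketch (not the paper's architecture): teleport each bit of a 24-bit pixel as a
# computational basis state |0> or |1> using the standard three-qubit protocol,
# simulated directly on NumPy state vectors. Qubit 0 holds the message, qubits 1
# and 2 form the shared Bell pair; ordering and pixel value are illustrative.
from functools import reduce
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P0 = np.diag([1, 0]).astype(complex)   # |0><0|
P1 = np.diag([0, 1]).astype(complex)   # |1><1|

def on(gates):                          # tensor product over the 3 qubits
    return reduce(np.kron, gates)

def single(g, q):
    return on([g if k == q else I2 for k in range(3)])

def cnot(c, t):
    return (on([P0 if k == c else I2 for k in range(3)]) +
            on([P1 if k == c else (X if k == t else I2) for k in range(3)]))

def teleport_bit(bit, rng):
    state = np.zeros(8, dtype=complex)
    state[bit << 2] = 1.0                       # |bit, 0, 0>
    state = cnot(1, 2) @ single(H, 1) @ state   # Bell pair on qubits 1 and 2
    state = single(H, 0) @ cnot(0, 1) @ state   # Bell-basis rotation on 0 and 1
    outcome = rng.choice(8, p=np.abs(state) ** 2)
    m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1
    keep = np.array([((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1
                     for i in range(8)])
    state = np.where(keep, state, 0)            # collapse on the measured bits
    state /= np.linalg.norm(state)
    if m1:
        state = single(X, 2) @ state            # Pauli corrections on qubit 2
    if m0:
        state = single(Z, 2) @ state
    p_one = np.sum(np.abs(state[1::2]) ** 2)    # probability that qubit 2 reads 1
    return int(round(p_one))

rng = np.random.default_rng(7)
pixel = 0xABCDEF                                # hypothetical 24-bit RGB value
received = sum(teleport_bit((pixel >> k) & 1, rng) << k for k in range(24))
print(f"sent {pixel:06X}, received {received:06X}")
```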

    Discrete Denoising Diffusion Approach to Integer Factorization

    Integer factorization is a famous computational problem that is not known to be solvable in polynomial time. With the rise of deep neural networks, it is natural to ask whether they can facilitate faster factorization. We present an approach to factorization utilizing deep neural networks and discrete denoising diffusion that works by iteratively correcting errors in a partially correct solution. To this end, we develop a new seq2seq neural network architecture, employ a relaxed categorical distribution, and adapt the reverse diffusion process to better cope with inaccuracies in the denoising step. The approach is able to find factors for integers of up to 56 bits. Our analysis indicates that investment in training leads to an exponential decrease in the number of sampling steps required at inference to achieve a given success rate, thus counteracting an exponential run-time increase that depends on the bit length.
    Comment: International Conference on Artificial Neural Networks ICANN 202
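    The toy loop below is only meant to illustrate the iterative "correct a partially wrong solution" idea from the abstract: a candidate factor pair is repeatedly nudged and kept whenever the residual |n - p*q| does not grow. The learned seq2seq denoiser is replaced by a random bit-flip placeholder, and the bit width, step budget, and acceptance rule are assumptions for the example, so the search may fail to converge on harder inputs.

```python
# Toy stand-in for the iterative denoising loop: the trained denoiser is replaced
# by random single-bit corrections that are accepted only if they do not worsen
# the residual |n - p*q|. Purely illustrative; not the paper's model or training.
import random

def factor_by_refinement(n, bits=8, steps=20000, seed=0):
    rng = random.Random(seed)
    p = rng.getrandbits(bits) | 1      # start from a "noisy" odd candidate pair
    q = rng.getrandbits(bits) | 1
    for _ in range(steps):
        if p > 1 and q > 1 and p * q == n:
            return p, q                # residual reached zero: solution denoised
        cand_p = p ^ (1 << rng.randrange(bits))   # propose small corrections
        cand_q = q ^ (1 << rng.randrange(bits))
        if abs(n - cand_p * cand_q) <= abs(n - p * q):
            p, q = cand_p, cand_q      # keep the less-noisy candidate
    return None                        # toy search did not converge

print(factor_by_refinement(143))       # may print (11, 13), (13, 11), or None
```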