341 research outputs found
Enhancing Hyperspectral Image Quality using Nonlinear PCA
In this paper, we propose a new method for reducing noise in hyperspectral images. It is based on nonlinear principal component analysis (NLPCA), a nonlinear generalization of PCA. The NLPCA is performed by an autoassociative neural network (AANN) that takes the hyperspectral image as input and is trained to reconstruct the same image at its output. Thanks to its bottleneck structure, the AANN forces the hyperspectral image to be projected into a lower-dimensional feature space in which noise, as well as both linear and nonlinear correlations between spectral bands, is removed. This process yields improvements in hyperspectral image quality. Experiments are conducted on several real hyperspectral images with different contexts and resolutions. The results are discussed qualitatively and quantitatively and demonstrate the value of the proposed method compared to traditional approaches.
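The bottleneck idea can be illustrated numerically. The following is a minimal sketch, not the authors' implementation: the network sizes, learning rate, and synthetic spectra are all assumptions. A one-hidden-layer autoassociative network is trained to reproduce noisy spectra, and its bottleneck reconstruction ends up closer to the clean signal than the noisy input was.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_bands, n_latent = 500, 30, 3

# Synthetic stand-in for hyperspectral pixels: a few latent "material"
# abundances mixed nonlinearly across bands, plus additive noise.
latent = rng.uniform(size=(n_pixels, n_latent))
mix = rng.normal(size=(n_latent, n_bands))
clean = np.tanh(latent @ mix)
noisy = clean + 0.1 * rng.normal(size=clean.shape)

# Autoassociative network: bands -> tanh bottleneck -> bands, trained to
# reproduce its own input (backpropagation on the mean squared error).
W1 = 0.1 * rng.normal(size=(n_bands, n_latent))
W2 = 0.1 * rng.normal(size=(n_latent, n_bands))
lr = 0.05
for _ in range(3000):
    h = np.tanh(noisy @ W1)        # project into the low-dimensional space
    out = h @ W2                   # linear reconstruction
    err = out - noisy
    gW2 = h.T @ err / n_pixels
    gh = err @ W2.T * (1 - h ** 2)
    W1 -= lr * (noisy.T @ gh / n_pixels)
    W2 -= lr * gW2

# The bottleneck reconstruction discards whatever the low-dimensional
# representation cannot express, which here is mostly the noise.
denoised = np.tanh(noisy @ W1) @ W2
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
```

Because the bottleneck has only as many units as there are latent degrees of freedom, the reconstruction error against the clean signal drops below the raw noise level.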
Generative Autoencoders as Watermark Attackers: Analyses of Vulnerabilities and Threats
Invisible watermarks safeguard images' copyrights by embedding hidden messages detectable by their owners. They also deter people from misusing images, especially those generated by AI models. Malicious adversaries can violate these rights by removing the watermarks. To remove watermarks without damaging visual quality, an adversary must erase them while retaining the essential information in the image. This is analogous to the encoding and decoding process of generative autoencoders, especially variational autoencoders (VAEs) and diffusion models. We propose a framework that uses generative autoencoders to remove invisible watermarks and test it with VAEs and diffusion models. Our results reveal that, even without specific training, off-the-shelf Stable Diffusion effectively removes most watermarks, surpassing all current attackers. This result underscores the vulnerabilities of existing watermarking schemes and calls for more robust methods for copyright protection.
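The encode/decode analogy can be made concrete with a stand-in for a generative autoencoder. Everything below is a toy assumption, not the paper's models: the watermark is a faint additive pattern, the detector is a simple correlation, and the "autoencoder" is a low-rank SVD re-encoding. The mark is clearly detectable before the attack and largely erased after it, while the image itself survives.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "image": a smooth, nearly low-rank pattern.
x = np.linspace(0, 1, 128)
img = np.outer(np.sin(3 * x), np.cos(2 * x))

# Hypothetical invisible watermark: a faint random bipolar pattern.
wm = rng.choice([-1.0, 1.0], size=img.shape)
strength = 0.05
marked = img + strength * wm

def detect(image):
    # Correlation detector: a response near 1 means the mark is present.
    return float(np.mean(image * wm) / strength)

# Stand-in for a generative autoencoder's encode/decode pass: keep only
# the top singular components, i.e. a lossy low-dimensional re-encoding.
U, s, Vt = np.linalg.svd(marked)
k = 2
attacked = (U[:, :k] * s[:k]) @ Vt[:k]

response_marked = detect(marked)      # strong response: mark present
response_attacked = detect(attacked)  # response collapses after re-encoding
quality_loss = float(np.mean((attacked - img) ** 2))
```

The essential content of the image lies in its dominant components, while the watermark is spread thinly across all of them, so a bottleneck re-encoding keeps the former and drops the latter.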
Audio-Video Detection and Fusion of Broadcasting Information
In the last few decades of multimedia information systems, audio-video data has become a growing part of many digital computer applications. Audio-video classification has become a focus in the research on audio-video processing and pattern recognition. Automatic audio-video classification is very useful for audio-video indexing, content-based audio-video retrieval, and online audio-video distribution such as online audio-video shopping, but it is a challenge to extract the most salient themes from huge volumes of audio-video data. In this paper, we propose effective algorithms to automatically segment and classify audio-video clips into one of six classes: advertisement, cartoon, songs, serial, movie, and news. For these categories, a number of acoustic and visual features, including Mel-frequency cepstral coefficients (MFCCs) and color histograms, are extracted to characterize the audio and video data. The autoassociative neural network (AANN) model is used to capture the distribution of the acoustic and visual feature vectors of a class, and the backpropagation learning algorithm is used to adjust the weights of the network to minimize the mean square error for each feature vector. Keywords: audio and video detection, audio and video fusion, Mel-frequency cepstral coefficient, color histogram, autoassociative neural network model (AANN).
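The classification scheme, per-class feature distributions captured by AANNs with assignment by lowest reconstruction error, can be sketched as follows. Everything here is an illustrative assumption (synthetic frames, histogram bins, network sizes), not the paper's data or code; only the color-histogram feature and the AANN principle come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

def color_histogram(frame, bins=8):
    # Per-channel intensity histogram, concatenated and normalised:
    # one of the visual features named in the abstract.
    h = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
         for c in range(3)]
    return np.concatenate(h) / frame[..., 0].size

def train_aann(X, n_hidden=4, steps=3000, lr=0.3):
    # One autoassociative network per class: input -> bottleneck -> input,
    # weights fit by backpropagation on the mean squared error.
    d = X.shape[1]
    W1 = 0.1 * rng.normal(size=(d, n_hidden))
    W2 = 0.1 * rng.normal(size=(n_hidden, d))
    for _ in range(steps):
        h = np.tanh(X @ W1)
        err = h @ W2 - X
        gW2 = h.T @ err / len(X)
        gh = err @ W2.T * (1 - h ** 2)
        W1 -= lr * (X.T @ gh / len(X))
        W2 -= lr * gW2
    return W1, W2

def recon_error(x, model):
    W1, W2 = model
    return float(np.mean((np.tanh(x @ W1) @ W2 - x) ** 2))

# Synthetic stand-ins for two classes: "cartoon" frames are bright and
# saturated, "news" frames are muted, enough to separate the histograms.
def frames(bright, n=40):
    lo, hi = (128, 256) if bright else (0, 128)
    return [rng.integers(lo, hi, size=(32, 32, 3)) for _ in range(n)]

X_cartoon = np.array([color_histogram(f) for f in frames(True)])
X_news = np.array([color_histogram(f) for f in frames(False)])

models = {"cartoon": train_aann(X_cartoon), "news": train_aann(X_news)}

# Classify a held-out frame by lowest reconstruction error.
test_vec = color_histogram(frames(True, n=1)[0])
label = min(models, key=lambda c: recon_error(test_vec, models[c]))
```

Each AANN reconstructs vectors from its own class well and foreign vectors poorly, which is what turns a distribution model into a classifier.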
Radar signal categorization using a neural network
Neural networks were used to analyze a complex simulated radar environment containing noisy radar pulses generated by many different emitters. The network used is an energy-minimizing network (the BSB model), which forms energy minima - attractors in the network's dynamical system - based on learned input data. The system first determines how many emitters are present (the deinterleaving problem): pulses from individual simulated emitters give rise to separate stable attractors in the network. Once individual emitters are characterized, it is possible to make tentative identifications of them based on their observed parameters. As a test of this idea, a neural network was used to form a small database that could potentially make emitter identifications.
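The attractor mechanism behind the BSB (Brain-State-in-a-Box) model can be sketched directly: store bipolar signatures with Hebbian outer-product weights, then iterate the clipped linear dynamics until a noisy measurement settles into the nearest stored pattern. The patterns and parameters below are illustrative assumptions, not the paper's emitter data.

```python
import numpy as np

# Two stored "emitter" signatures as bipolar patterns (hypothetical
# parameter codes, chosen orthogonal for a clean demonstration).
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, -1, -1, 1, 1, -1, -1],
], dtype=float)

# Hebbian outer-product weights make each stored pattern an attractor.
W = patterns.T @ patterns / patterns.shape[1]

def bsb(x, alpha=0.3, steps=50):
    # BSB dynamics: amplify along W, clip to the unit hypercube (the "box").
    for _ in range(steps):
        x = np.clip(x + alpha * (W @ x), -1.0, 1.0)
    return x

# A noisy pulse measurement near the first emitter's signature.
rng = np.random.default_rng(3)
noisy = patterns[0] + 0.4 * rng.normal(size=8)
settled = bsb(noisy)
```

The state grows along the learned directions until the clipping freezes it at a corner of the hypercube, which is exactly the stored signature; distinct emitters occupy distinct corners, which is what makes deinterleaving possible.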
Importance of Watermark Lossless Compression in Digital Medical Image Watermarking
Large data sizes require more storage space, communication time, and communication bandwidth, and degrade host image quality when embedded into it as a watermark. Lossy compression reduces data size more than lossless compression, but with permanent loss of part of the data. Lossless compression, in contrast, reduces data size without any data loss. Medical image data is very sensitive and needs lossless compression; otherwise it will result in erroneous input for the health recovery process. This paper focuses on lossless compression of the ultrasound medical image region of interest (ROI) as a watermark using different techniques: PNG, GIF, JPG, JPEG2000, and Lempel-Ziv-Welch (LZW). The LZW technique was found to be 86% better than the other tabulated techniques. Compression ratio and greater byte reduction were the parameters considered for the selection of the better compression technique. In this work, LZW has been used successfully for lossless watermark compression in teleradiology, ensuring that less payload is encapsulated into the medical images and preserving their perceptual and diagnostic qualities unchanged. On the other side, in teleradiology the extracted, losslessly decompressed watermarks ensure image authentication and lossless recovery in case of any tampering.
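A textbook LZW coder makes the compression-ratio comparison concrete. This from-scratch sketch is not the paper's implementation, and the repetitive byte pattern standing in for an ROI is an assumption; codes are counted at a fixed 12 bits against 8 bits per original byte.

```python
def lzw_compress(data: bytes) -> list[int]:
    # Textbook LZW: grow a dictionary of previously seen byte sequences
    # and emit one code per longest known sequence.
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for b in data:
        wb = w + bytes([b])
        if wb in table:
            w = wb
        else:
            out.append(table[w])
            table[wb] = len(table)
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

# A repetitive byte pattern stands in for a medical-image ROI (assumption:
# real ROIs compress well because neighbouring pixel values repeat).
roi = bytes([10, 20, 10, 20, 30] * 200)
codes = lzw_compress(roi)

# Compression ratio: fixed 12-bit codes versus 8 bits per input byte.
ratio = (len(roi) * 8) / (len(codes) * 12)
```

Because LZW replaces ever-longer repeated sequences with single codes, and it does so without discarding any information, a matching decompressor recovers the ROI exactly, which is the lossless property the paper requires.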
Image Compression Using Cascaded Neural Networks
Images form an increasingly large part of modern communications, bringing the need for efficient and effective compression. Techniques developed for this purpose include transform coding, vector quantization, and neural networks. In this thesis, a new neural network method is used to achieve image compression. This work extends the use of two-layer neural networks to a combination of cascaded networks, each with one node in the hidden layer. A redistribution of the gray levels in the training phase is implemented in a random fashion to make the minimization of the mean square error applicable to a broad range of images. The computational complexity of this approach is analyzed in terms of the overall number of weights and overall convergence. Image quality is measured objectively, using peak signal-to-noise ratio, and subjectively, using perception. The effects of different image contents and compression ratios are assessed. Results show the performance superiority of cascaded neural networks over fixed-architecture training paradigms, especially at high compression ratios. The proposed method is implemented in MATLAB. The results obtained, such as compression ratio and computing time of the compressed images, are presented.
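The cascade of one-hidden-node networks can be sketched as sequential rank-one extraction: each stage is a 64-1-64 linear autoencoder trained on the residual left by the previous stages, so adding stages lowers the reconstruction error. The synthetic blocks and training settings below are assumptions, not the thesis's setup or its MATLAB code.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 8x8 image blocks (flattened to 64): smooth oscillatory
# patches standing in for real training images.
t = np.linspace(0, 1, 64)
blocks = np.array([np.sin(2 * np.pi * f * t + p)
                   for f in (1, 2, 3) for p in np.linspace(0, 3, 30)])

def one_node_stage(X, steps=2000, lr=0.02):
    # A 64-1-64 linear autoencoder: its single hidden node learns one
    # dominant component of X by gradient descent on the squared error.
    d = X.shape[1]
    w_in = 0.1 * rng.normal(size=d)
    w_out = 0.1 * rng.normal(size=d)
    for _ in range(steps):
        h = X @ w_in                        # one scalar code per block
        err = np.outer(h, w_out) - X
        g_out = h @ err / len(X)
        g_in = X.T @ (err @ w_out) / len(X)
        w_in -= lr * g_in
        w_out -= lr * g_out
    return w_in, w_out

# Cascade: each stage encodes the residual left by the previous stages.
residual = blocks.copy()
errors = []
for _ in range(3):
    w_in, w_out = one_node_stage(residual)
    stage = np.outer(residual @ w_in, w_out)
    residual = residual - stage
    errors.append(float(np.mean(residual ** 2)))
```

Each block is compressed to one scalar per stage, so three stages give a 64:3 code; the monotone drop in residual error is the cascade's argument for adding stages only as long as quality demands it.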