
    Information Theoretic Principles of Universal Discrete Denoising

    Today, the internet makes tremendous amounts of data widely available. Often, the same information underlies multiple different available data sets. This lends growing importance to latent variable models that try to learn the hidden information from the available imperfect versions. For example, social media platforms can contain an abundance of pictures of the same person or object, all taken from different perspectives. In a simplified scenario, one may consider pictures taken from the same perspective, which are distorted by noise. This latter application allows for a rigorous mathematical treatment, which is the content of this contribution. We apply a recently developed method of dependent component analysis to image denoising when multiple distorted copies of one and the same image are available, each corrupted by a different and unknown noise process. In a simplified scenario, we assume that each distorted copy is corrupted by noise that acts independently on each pixel. We answer completely the question of how to perform optimal denoising when at least three distorted copies are available: first we define optimality of an algorithm in the presented scenario, and then we describe an asymptotically optimal universal discrete denoising algorithm (UDDA). In the case of binary data and binary symmetric noise, we develop a simplified variant of the algorithm, dubbed BUDDA, which we prove attains universal denoising uniformly. Comment: 10 pages, 6 figures
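
    A minimal sketch of the simplified binary setting described above, not the paper's UDDA or BUDDA: three copies of a binary image pass through independent binary symmetric channels with unknown crossover probabilities, the probabilities are estimated from pairwise disagreement rates, and the copies are combined by a likelihood-weighted majority vote. The function names and the moment estimator are illustrative assumptions, and every crossover probability is assumed to be below 1/2.

        import numpy as np

        def estimate_flip_probs(copies):
            # Moment estimate of per-copy crossover probabilities from three noisy copies
            # of the same binary array: the pairwise disagreement rate satisfies
            # 1 - 2*d_ij = (1 - 2*p_i) * (1 - 2*p_j), so each p_i can be solved for.
            z1, z2, z3 = copies
            c12 = 1 - 2 * np.mean(z1 != z2)
            c13 = 1 - 2 * np.mean(z1 != z3)
            c23 = 1 - 2 * np.mean(z2 != z3)
            q1 = np.sqrt(c12 * c13 / c23)          # q_i = 1 - 2*p_i, assumes every p_i < 1/2
            q2 = np.sqrt(c12 * c23 / c13)
            q3 = np.sqrt(c13 * c23 / c12)
            return [(1 - q) / 2 for q in (q1, q2, q3)]

        def weighted_majority(copies, flip_probs):
            # Pixelwise combination by log-likelihood-ratio weights: positive evidence -> 1.
            weights = [np.log((1 - p) / p) for p in flip_probs]
            evidence = sum(w * (2 * z.astype(float) - 1) for w, z in zip(weights, copies))
            return (evidence > 0).astype(int)

        # Toy usage: one clean binary "image", three copies through unknown BSCs.
        rng = np.random.default_rng(0)
        clean = rng.integers(0, 2, size=10_000)
        copies = [np.where(rng.random(clean.size) < p, 1 - clean, clean) for p in (0.05, 0.15, 0.25)]
        p_hat = estimate_flip_probs(copies)
        denoised = weighted_majority(copies, p_hat)
        print("estimated flip probabilities:", np.round(p_hat, 3))
        print("residual error rate:", np.mean(denoised != clean))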

    A Universal Scheme for Wyner–Ziv Coding of Discrete Sources

    We consider the Wyner–Ziv (WZ) problem of lossy compression where the decompressor observes a noisy version of the source, whose statistics are unknown. A new family of WZ coding algorithms is proposed and their universal optimality is proven. Compression consists of sliding-window processing followed by Lempel–Ziv (LZ) compression, while the decompressor is based on a modification of the discrete universal denoiser (DUDE) algorithm that takes advantage of side information. The new algorithms not only universally attain the fundamental limits, but also suggest a paradigm for practical WZ coding. The effectiveness of our approach is illustrated with experiments on binary images and English text, using a low-complexity algorithm motivated by our class of universally optimal WZ codes.
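
    A toy sketch of the encoder structure named above, sliding-window processing followed by LZ-type compression, with zlib standing in for the LZ stage. The window rule is a hypothetical choice made purely for illustration, and the decoder-side modification of DUDE is not reproduced here.

        import zlib
        import numpy as np

        def sliding_window_map(x, k, rule):
            # Apply a time-invariant sliding-window (sliding-block) map to every
            # length-(2k+1) window; boundary symbols are passed through unchanged.
            y = np.array(x, copy=True)
            for i in range(k, len(x) - k):
                y[i] = rule(x[i - k:i + k + 1])
            return y

        # Hypothetical window rule for illustration: majority vote over the window.
        majority = lambda w: int(np.sum(w) > len(w) // 2)

        rng = np.random.default_rng(1)
        source = rng.integers(0, 2, size=5000)
        mapped = sliding_window_map(source, k=2, rule=majority)
        bitstream = zlib.compress(np.asarray(mapped, dtype=np.uint8).tobytes())
        print(len(bitstream), "compressed bytes for", len(mapped), "source symbols")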

    Discrete Denoising with Shifts

    We introduce S-DUDE, a new algorithm for denoising DMC-corrupted data. The algorithm, which generalizes the recently introduced DUDE (Discrete Universal DEnoiser) of Weissman et al., aims to compete with a genie that has access, in addition to the noisy data, also to the underlying clean data, and can choose to switch, up to m times, between sliding-window denoisers in a way that minimizes the overall loss. When the underlying data form an individual sequence, we show that the S-DUDE performs essentially as well as this genie, provided that m is sub-linear in the size of the data. When the clean data is emitted by a piecewise stationary process, we show that the S-DUDE achieves the optimum distribution-dependent performance, provided that the same sub-linearity condition is imposed on the number of switches. To further substantiate the universal optimality of the S-DUDE, we show that when the number of switches is allowed to grow linearly with the size of the data, any (sequence of) scheme(s) fails to compete in the above senses. Using dynamic programming, we derive an efficient implementation of the S-DUDE, which has complexity (time and memory) growing only linearly with the data size and the number of switches m. Preliminary experimental results are presented, suggesting that S-DUDE has the capacity to significantly improve on the performance attained by the original DUDE in applications where the nature of the data abruptly changes in time (or space), as is often the case in practice. Comment: 30 pages, 3 figures, submitted to IEEE Trans. Inform. Theory
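
    A short sketch of the switching genie that S-DUDE competes with: given the per-position losses of a small set of candidate sliding-window denoisers, which the genie can compute because it sees the clean data, a dynamic program over (position, active denoiser, switches used) finds the minimum loss achievable with at most m switches. This is only the benchmark computation under assumed inputs, not the S-DUDE algorithm itself.

        import numpy as np

        def genie_switching_loss(loss, m):
            # loss[t, j]: loss at position t if sliding-window denoiser j is active there.
            # Dynamic program over (active denoiser, switches used so far); an entry may
            # "switch" to itself, which only wastes a switch and never lowers the cost,
            # so the final minimum equals the best loss achievable with at most m switches.
            n, K = loss.shape
            cost = np.full((K, m + 1), np.inf)
            cost[:, 0] = loss[0]
            for t in range(1, n):
                new = np.full_like(cost, np.inf)
                for s in range(m + 1):
                    stay = cost[:, s]
                    enter = np.full(K, np.min(cost[:, s - 1])) if s > 0 else np.full(K, np.inf)
                    new[:, s] = np.minimum(stay, enter) + loss[t]
                cost = new
            return float(np.min(cost))

        # Toy usage: two candidate denoisers whose relative quality flips mid-sequence,
        # so a single well-placed switch captures most of the available gain.
        rng = np.random.default_rng(0)
        loss = rng.random((1000, 2))
        loss[:500, 0] *= 0.1
        loss[500:, 1] *= 0.1
        print("no switches:", genie_switching_loss(loss, 0))
        print("one switch: ", genie_switching_loss(loss, 1))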

    Scanning and Sequential Decision Making for Multidimensional Data -- Part II: The Noisy Case

    We consider the problem of sequential decision making for random fields corrupted by noise. In this scenario, the decision maker observes a noisy version of the data, yet is judged with respect to the clean data. In particular, we first consider the problem of scanning and sequentially filtering noisy random fields. In this case, the sequential filter is given the freedom to choose the path over which it traverses the random field (e.g., a noisy image or video sequence), so it is natural to ask what the best achievable performance is and how sensitive this performance is to the choice of the scan. We formally define the problem of scanning and filtering, derive a bound on the best achievable performance, and quantify the excess loss occurring when nonoptimal scanners are used, compared to optimal scanning and filtering. We then discuss the problem of scanning and prediction for noisy random fields. This setting is a natural model for applications such as restoration and coding of noisy images. We formally define the problem of scanning and prediction of a noisy multidimensional array and relate the optimal performance to the clean scandictability defined by Merhav and Weissman. Moreover, bounds on the excess loss due to suboptimal scans are derived, and a universal prediction algorithm is suggested. This paper is the second part of a two-part paper. The first part dealt with scanning and sequential decision making on noiseless data arrays.
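
    A toy illustration of why the choice of scan matters, not the paper's bounds or its universal algorithm: a binary field with strong horizontal memory is observed through a binary symmetric channel, and a naive sequential rule is charged Hamming loss against the clean field under two different scan orders. The field model, noise level, and predictor are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        H, W, q, delta = 200, 200, 0.05, 0.1   # field size, horizontal flip prob., channel crossover

        # Clean field: each row is a symmetric binary Markov chain (strong horizontal memory);
        # rows are drawn independently, so a column-wise scan has little structure to exploit.
        clean = np.zeros((H, W), dtype=int)
        clean[:, 0] = rng.integers(0, 2, size=H)
        for j in range(1, W):
            flip = rng.random(H) < q
            clean[:, j] = np.where(flip, 1 - clean[:, j - 1], clean[:, j - 1])

        noisy = np.where(rng.random((H, W)) < delta, 1 - clean, clean)   # BSC(delta) observations

        def scan_loss(order):
            # Hamming loss (against the clean field) of a naive sequential rule that guesses
            # the next clean value to be the noisy value at the previously scanned site.
            prev, errors = None, 0
            for site in order:
                guess = noisy[prev] if prev is not None else 0
                errors += int(guess != clean[site])
                prev = site
            return errors / (H * W)

        raster = [(i, j) for i in range(H) for j in range(W)]       # row by row
        columns = [(i, j) for j in range(W) for i in range(H)]      # column by column
        print("raster scan loss:     ", scan_loss(raster))
        print("column-wise scan loss:", scan_loss(columns))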

    Effective denoising and adaptive equalization of indoor optical wireless channel with artificial light using the discrete wavelet transform and artificial neural network

    The performance of indoor diffuse optical wireless (OW) communication systems is limited by a number of effects: interference from natural and artificial light sources, and multipath-induced intersymbol interference (ISI). Artificial light interference (ALI) is a periodic signal with a spectrum profile extending up to the MHz range. It is the dominant source of performance degradation at low data rates, and it can be removed using a high-pass filter (HPF). On the other hand, ISI is more severe at high data rates, and an equalizing filter is incorporated at the receiver to compensate for it. This paper provides simulation results for a discrete wavelet transform (DWT) and artificial neural network (ANN) based receiver architecture for the on-off keying (OOK) non-return-to-zero (NRZ) scheme over a diffuse indoor OW link in the presence of ALI and ISI. The ANN is adopted for classification, acting as a more efficient equalizer than traditional equalizers. The ALI is effectively reduced by proper selection of the DWT coefficients, resulting in improved receiver performance compared to the digital HPF. The simulated bit error rate (BER) performance of the proposed DWT-ANN receiver structure for a diffuse indoor OW link operating at data rates of 10-200 Mbps is presented and discussed. The results are compared with the performance of a diffuse link with an HPF-equalizer, with ALI with and without filtering, and with a line-of-sight (LOS) link without filtering. We show that the DWT-ANN receiver has a lower power requirement than the receiver with an HPF-equalizer over the full range of delay spreads in the presence of ALI. However, as expected, the power penalty relative to the ideal LOS link is higher, reaching 6 dB at a data rate of 200 Mbps.
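
    A minimal sketch of the receiver idea under assumed signal parameters: a slow sinusoid stands in for the ALI, the coarse DWT approximation coefficients are zeroed to suppress it (using PyWavelets), and a small MLP (scikit-learn) classifies each bit from its block of samples. The pulse shape, wavelet, decomposition level, and network size are assumptions, and ISI is not modelled here.

        import numpy as np
        import pywt                                        # PyWavelets
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        sps, n_bits = 8, 2000                              # samples per bit, number of OOK-NRZ bits
        bits = rng.integers(0, 2, size=n_bits)
        tx = np.repeat(bits, sps).astype(float)            # NRZ pulse train

        t = np.arange(tx.size)
        ali = 0.8 * np.sin(2 * np.pi * t / 400.0)          # slow sinusoid standing in for ALI
        rx = tx + ali + 0.2 * rng.standard_normal(tx.size)

        # DWT stage: the slow interference is concentrated in the coarse approximation
        # coefficients, so zeroing them suppresses the ALI without a separate HPF.
        coeffs = pywt.wavedec(rx, 'db4', level=5)
        coeffs[0] = np.zeros_like(coeffs[0])
        rx_clean = pywt.waverec(coeffs, 'db4')[:tx.size]

        # ANN stage: a small MLP classifies each bit from its block of samples,
        # playing the role of the learned equalizer/detector.
        X = rx_clean.reshape(n_bits, sps)
        split = n_bits // 2
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
        clf.fit(X[:split], bits[:split])
        print("held-out BER:", np.mean(clf.predict(X[split:]) != bits[split:]))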

    Universal Compressed Sensing

    In this paper, the problem of developing universal algorithms for compressed sensing of stochastic processes is studied. First, Rényi's notion of information dimension (ID) is generalized to analog stationary processes. This provides a measure of complexity for such processes and is connected to the number of measurements required for their accurate recovery. Then a minimum entropy pursuit (MEP) optimization approach is proposed, and it is proven that it can reliably recover any stationary process satisfying some mixing constraints from a sufficient number of randomized linear measurements, without any prior information about the distribution of the process. It is proved that a Lagrangian-type approximation of the MEP optimization problem, referred to as the Lagrangian-MEP problem, is identical to a heuristic implementable algorithm proposed by Baron et al. It is shown that, for the right choice of parameters, the Lagrangian-MEP algorithm, in addition to having the same asymptotic performance as MEP optimization, is also robust to measurement noise. For memoryless sources with a discrete-continuous mixture distribution, the fundamental limit on the minimum number of measurements required by a non-universal compressed sensing decoder was characterized by Wu et al. For such sources, it is proved that there is no loss in universal coding, and both MEP and Lagrangian-MEP asymptotically achieve the optimal performance.
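
    A small sketch of the kind of cost a Lagrangian-MEP-style scheme trades off: the complexity of a quantized candidate signal, measured here by its empirical conditional entropy, plus a weighted measurement-fidelity term. Only the cost evaluation is shown; the actual minimization (e.g., the heuristic of Baron et al.) is not reproduced, and the function names, quantization, and toy signal are assumptions.

        import numpy as np
        from collections import Counter

        def empirical_cond_entropy(seq, k):
            # k-th order empirical conditional entropy (bits per symbol) of a finite-alphabet sequence.
            ctx, joint = Counter(), Counter()
            for i in range(k, len(seq)):
                c = tuple(seq[i - k:i])
                ctx[c] += 1
                joint[c + (seq[i],)] += 1
            n = len(seq) - k
            return -sum(cnt / n * np.log2(cnt / ctx[key[:-1]]) for key, cnt in joint.items())

        def lagrangian_mep_cost(x_quantized, A, y, lam, k=2):
            # Complexity of the (quantized) candidate plus a weighted measurement-fidelity term.
            fit = np.sum((y - A @ x_quantized) ** 2) / len(y)
            return empirical_cond_entropy(list(x_quantized), k) + lam * fit

        # Toy check: the true structured signal should score lower than a random candidate.
        rng = np.random.default_rng(0)
        n, m = 400, 120
        x_true = (rng.random(n) < 0.1).astype(float)        # sparse binary signal
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        y = A @ x_true
        print("cost at true signal:     ", lagrangian_mep_cost(x_true, A, y, lam=10.0))
        print("cost at random candidate:", lagrangian_mep_cost(rng.integers(0, 2, n).astype(float), A, y, lam=10.0))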

    Universal Minimax Discrete Denoising under Channel Uncertainty

    The goal of a denoising algorithm is to recover a signal from its noise-corrupted observations. Perfect recovery is seldom possible and performance is measured under a given single-letter fidelity criterion. For discrete signals corrupted by a known discrete memoryless channel, the DUDE was recently shown to perform this task asymptotically optimally, without knowledge of the statistical properties of the source. In the present work we address the scenario where, in addition to the lack of knowledge of the source statistics, there is also uncertainty in the channel characteristics. We propose a family of discrete denoisers and establish their asymptotic optimality under a minimax performance criterion which we argue is appropriate for this setting. As we show elsewhere, the proposed schemes can also be implemented computationally efficiently. Comment: Submitted to IEEE Transactions on Information Theory
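
    For reference, a compact sketch of the baseline known-channel DUDE that this work generalizes to channel uncertainty; it is not the minimax scheme proposed in the abstract. A first pass counts the noisy symbol seen in each two-sided context, and a second pass applies the standard single-letter DUDE rule using the inverse channel matrix and the loss matrix. The toy source and parameter choices in the usage lines are assumptions.

        import numpy as np
        from collections import defaultdict

        def dude(z, Pi, Lam, k):
            # Two-pass DUDE for a finite alphabet {0, ..., M-1}, with known channel matrix Pi
            # and loss matrix Lam.  Pass 1 collects counts of the noisy symbol in each
            # two-sided context of radius k; pass 2 applies the single-letter DUDE rule.
            z = list(z)
            n, M = len(z), Pi.shape[0]
            Pi_inv = np.linalg.inv(Pi)
            counts = defaultdict(lambda: np.zeros(M))
            for i in range(k, n - k):
                counts[(tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))][z[i]] += 1
            x_hat = list(z)
            for i in range(k, n - k):
                m_vec = counts[(tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))]
                pi_z = Pi[:, z[i]]
                scores = [m_vec @ Pi_inv.T @ (Lam[:, xh] * pi_z) for xh in range(M)]
                x_hat[i] = int(np.argmin(scores))
            return np.array(x_hat)

        # Toy usage: a structured binary source through a BSC(0.1), Hamming loss, k = 2.
        rng = np.random.default_rng(0)
        clean = (np.sin(np.arange(20000) / 50.0) > 0).astype(int)
        noisy = np.where(rng.random(clean.size) < 0.1, 1 - clean, clean)
        Pi = np.array([[0.9, 0.1], [0.1, 0.9]])
        Lam = 1.0 - np.eye(2)
        denoised = dude(noisy, Pi, Lam, k=2)
        print("noisy error:", np.mean(noisy != clean), " denoised error:", np.mean(denoised != clean))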

    Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks

    Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples: perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO v3, with single noise patterns able to fool a model on up to 90% of the dataset. Procedural noise allows us to generate a distribution of UAPs with high universal evasion rates using only a few parameters. Additionally, we propose Bayesian optimization to efficiently learn procedural noise parameters to construct inexpensive untargeted black-box attacks. We demonstrate that it can achieve an average of fewer than 10 queries per successful attack, a 100-fold improvement over existing methods. We further motivate the use of input-agnostic defences to increase the stability of models to adversarial perturbations. The universality of our attacks suggests that DCN models may be sensitive to aggregations of low-level class-agnostic features. These findings give insight into the nature of some universal adversarial perturbations and how they could be generated in other applications. Comment: 16 pages, 10 figures. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (CCS '19)
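
    A small sketch of the basic building block: a procedural perturbation controlled by a few parameters, bounded in the L-infinity norm, and applied identically to every image in a batch. It uses smooth value noise with a sine colour map as a stand-in for the Perlin and Gabor noise functions used in the paper; the parameter names, budget, and image sizes are assumptions, and the Bayesian optimization of the parameters is not shown.

        import numpy as np
        from scipy.ndimage import zoom

        def procedural_perturbation(h, w, grid=8, freq=12.0, eps=8/255, seed=0):
            # Smooth value noise (a coarse random lattice upsampled with a cubic spline)
            # passed through a sine colour map, then scaled to an L-infinity budget eps.
            rng = np.random.default_rng(seed)
            lattice = rng.standard_normal((grid, grid))
            field = zoom(lattice, (h / grid, w / grid), order=3)
            pattern = np.sin(freq * field)
            return eps * pattern / np.max(np.abs(pattern))

        # Apply one input-agnostic perturbation to a whole batch of images in [0, 1].
        rng = np.random.default_rng(1)
        images = rng.random((4, 224, 224, 3))
        delta = procedural_perturbation(224, 224)[..., None]    # broadcast over colour channels
        adversarial = np.clip(images + delta, 0.0, 1.0)
        print("max per-pixel change:", np.max(np.abs(adversarial - images)))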