
    Universal communication part II: channels with memory

    Consider communication over a channel whose probabilistic model is completely unknown vector-wise and is not assumed to be stationary. Communication over such channels is challenging because knowing the past does not indicate anything about the future. The existence of reliable feedback and common randomness is assumed. In a previous paper it was shown that the Shannon capacity cannot be attained, in general, if the channel is not known. An alternative notion of "capacity" was defined, as the maximum rate of reliable communication attainable by any block-coding system used over consecutive blocks. This rate was shown to be achievable for the modulo-additive channel with an individual, unknown noise sequence, and not achievable for some channels with memory. In this paper, this "capacity" is shown to be achievable for general channel models possibly including memory, as long as this memory fades with time. In other words, there exists a system with feedback and common randomness that, without knowledge of the channel, asymptotically performs as well as any block code, which may be designed knowing the channel. For channels whose memory does not fade, a weaker type of "capacity" is shown to be achievable.

    Achieving the Empirical Capacity Using Feedback Part I: Memoryless Additive Models

    We address the problem of universal communications over an unknown channel with an instantaneous noiseless feedback, and show how rates corresponding to the empirical behavior of the channel can be attained, although no rate can be guaranteed in advance. First, we consider a discrete modulo-additive channel with alphabet $\mathcal{X}$, where the noise sequence $Z^n$ is arbitrary and unknown and may causally depend on the transmitted and received sequences and on the encoder's message, possibly in an adversarial fashion. Although the classical capacity of this channel is zero, we show that rates approaching the empirical capacity $\log|\mathcal{X}| - H_{emp}(Z^n)$ can be universally attained, where $H_{emp}(Z^n)$ is the empirical entropy of $Z^n$. For the more general setting where the channel can map its input to an output in an arbitrary unknown fashion subject only to causality, we model the empirical channel actions as the modulo-addition of a realized noise sequence, and show that the same result applies if common randomness is available. The results are proved constructively, by providing a simple sequential transmission scheme approaching the empirical capacity. In part II of this work we demonstrate how even higher rates can be attained by using more elaborate models for channel actions, and by utilizing possible empirical dependencies in its behavior. Comment: Submitted to the IEEE Transactions on Information Theory.
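
    As an illustration of the empirical-capacity quantity $\log|\mathcal{X}| - H_{emp}(Z^n)$ defined in the abstract, the following minimal Python sketch (not the paper's transmission scheme; the noise sequence and its statistics are made up for the example) computes the zeroth-order empirical entropy of a realized noise sequence and the corresponding rate:

```python
import numpy as np
from collections import Counter

def empirical_entropy(z):
    """Zeroth-order empirical entropy (in bits per symbol) of the sequence z."""
    n = len(z)
    probs = np.array([c / n for c in Counter(z).values()])
    return float(-np.sum(probs * np.log2(probs)))

def empirical_capacity(z, alphabet_size):
    """log|X| - H_emp(Z^n): the rate attainable in hindsight for this noise realization."""
    return float(np.log2(alphabet_size) - empirical_entropy(z))

# Example: a binary noise sequence that happens to be mostly zeros.
rng = np.random.default_rng(0)
z = rng.choice([0, 1], size=10_000, p=[0.9, 0.1])
print(empirical_capacity(z, alphabet_size=2))  # close to 1 - h(0.1), about 0.53 bits/symbol
```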

    Universal Decoding for Gaussian Intersymbol Interference Channels

    A universal decoding procedure is proposed for intersymbol interference (ISI) Gaussian channels. The universality of the proposed decoder is in the sense of being independent of the various channel parameters while, at the same time, attaining the same random coding error exponent as the optimal maximum-likelihood (ML) decoder, which utilizes full knowledge of these unknown parameters. The proposed decoding rule can be regarded as a frequency-domain version of the universal maximum mutual information (MMI) decoder. In contrast to previously suggested universal decoders for ISI channels, our proposed decoding metric can easily be evaluated. Comment: Submitted to the IEEE Transactions on Information Theory.
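
    The MMI decoding rule referenced above can be sketched in its classical discrete-alphabet form. Note that this is the textbook MMI decoder over a finite alphabet, not the frequency-domain metric developed in the paper, and the toy BSC codebook below is made up:

```python
import numpy as np

def empirical_mutual_information(x, y, ax, ay):
    """Empirical mutual information (bits) between equal-length sequences x and y."""
    n = len(x)
    joint = np.zeros((ax, ay))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1.0 / n
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(joint > 0, joint / (px * py), 1.0)
    return float(np.sum(joint * np.log2(ratio)))

def mmi_decode(codebook, y, ax, ay):
    """Universal decoder: pick the codeword with maximal empirical mutual information."""
    return int(np.argmax([empirical_mutual_information(x, y, ax, ay) for x in codebook]))

# Toy usage: 8 random binary codewords over a BSC(0.1); the crossover is never told to the decoder.
rng = np.random.default_rng(0)
codebook = rng.integers(0, 2, size=(8, 200))
y = codebook[3] ^ (rng.random(200) < 0.1).astype(int)
print(mmi_decode(codebook, y, ax=2, ay=2))  # expected: 3
```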

    Competitive minimax universal decoding for several ensembles of random codes

    Universally achievable error exponents pertaining to certain families of channels (most notably, discrete memoryless channels (DMCs)), and various ensembles of random codes, are studied by combining the competitive minimax approach, proposed by Feder and Merhav, with the Chernoff bound and Gallager's techniques for the analysis of error exponents. In particular, we derive a single-letter expression for the largest universally achievable fraction $\xi$ of the optimum error exponent pertaining to optimum ML decoding. Moreover, a simpler single-letter expression for a lower bound to $\xi$ is presented. To demonstrate the tightness of this lower bound, we use it to show that $\xi = 1$ for the binary symmetric channel (BSC), when the random coding distribution is uniform over: (i) all codes (of a given rate), and (ii) all linear codes, in agreement with well-known results. We also show that $\xi = 1$ for the uniform ensemble of systematic linear codes, and for that of time-varying convolutional codes in the bit-error-rate sense. For the latter case, we also show how the corresponding universal decoder can be efficiently implemented using a slightly modified version of the Viterbi algorithm which employs two trellises. Comment: 41 pages. Submitted to the IEEE Transactions on Information Theory.

    Gaussian Intersymbol Interference Channels With Mismatch

    This paper considers the problem of channel coding over Gaussian intersymbol interference (ISI) channels with a given metric decoding rule. Specifically, it is assumed that the mismatched decoder has an incorrect assumption on the impulse response function. The mismatch capacity is the highest achievable rate for a given decoding rule. Existing lower bounds to the mismatch capacity for channels and decoding metrics with memory (as in our model) are presented only in the form of multi-letter expressions that have not been calculated in practice. Consequently, they provide little insight into the mismatch problem. In this paper, we derive computable single-letter lower bounds to the mismatch capacity, and discuss some implications of our results. Our achievable rates are based on two ensembles: the ensemble of codewords generated by an autoregressive process, and the ensemble of codewords drawn uniformly over a "type class" of real-valued sequences. Computation of our achievable rates demonstrates non-trivial behavior of the achievable rates as a function of the mismatched parameters. As a simple application of our technique, we also derive the random coding exponent associated with a mismatched decoder which assumes that there is no ISI at all. Finally, we compare our results with universal decoders which are designed outside the true class of channels that we consider in this paper.
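
    To make the mismatch concrete, here is a small simulation sketch (a toy illustration, not the ensemble analysis of the paper): a random BPSK codebook is sent over a Gaussian ISI channel, and the Euclidean-metric decoder is run once with the true impulse response and once with the mismatched assumption of no ISI. The impulse response, noise level, and code parameters are all made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n, M = 128, 16                        # block length and number of codewords (made up)
h_true = np.array([1.0, 0.6, 0.3])    # true ISI impulse response (made up)
h_mism = np.array([1.0, 0.0, 0.0])    # decoder's incorrect assumption: no ISI

codebook = rng.choice([-1.0, 1.0], size=(M, n))                          # random BPSK codebook
y = np.convolve(codebook[0], h_true)[:n] + 0.5 * rng.standard_normal(n)  # transmit codeword 0

def metric(cw, received, h):
    """Euclidean decoding metric computed under the impulse response h."""
    return float(np.sum((received - np.convolve(cw, h)[:n]) ** 2))

matched_hat = int(np.argmin([metric(c, y, h_true) for c in codebook]))
mismatched_hat = int(np.argmin([metric(c, y, h_mism) for c in codebook]))
print(matched_hat, mismatched_hat)    # the mismatched rule errs more often as rate or noise grows
```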

    Universal decoding with an erasure option

    Motivated by applications of rateless coding, decision feedback, and ARQ, we study the problem of universal decoding for unknown channels in the presence of an erasure option. Specifically, we harness the competitive minimax methodology developed in earlier studies in order to derive a universal version of Forney's classical erasure/list decoder, which, in the erasure case, optimally trades off between the probability of erasure and the probability of undetected error. The proposed universal erasure decoder guarantees universal achievability of a certain fraction $\xi$ of the optimum error exponents of these probabilities (in a sense to be made precise in the sequel). A single-letter expression for $\xi$, which depends solely on the coding rate and the threshold, is provided. The example of the binary symmetric channel is studied in full detail, and some conclusions are drawn. Comment: 23 pages. Submitted to the IEEE Transactions on Information Theory.
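
    For reference, Forney's classical (non-universal) erasure rule that the paper builds on can be sketched for a BSC with known crossover probability. The codebook, crossover probability, and threshold below are made-up illustration values; the universal competitive-minimax version derived in the paper replaces the true likelihoods.

```python
import numpy as np

def forney_erasure_decode(codebook, y, p, T):
    """Forney's classical erasure rule for a BSC(p): decode the most likely codeword
    only if its likelihood exceeds the sum of all other likelihoods by e^{nT};
    otherwise declare an erasure (return None)."""
    n = len(y)
    dists = np.array([np.count_nonzero(c != y) for c in codebook])   # Hamming distances
    loglik = dists * np.log(p) + (n - dists) * np.log(1.0 - p)       # log P(y | codeword)
    best = int(np.argmax(loglik))
    log_rest = np.logaddexp.reduce(np.delete(loglik, best))          # log of the competing mass
    return best if loglik[best] >= log_rest + n * T else None

# Made-up usage: 16 random codewords of length 100, BSC(0.05), threshold T = 0.02 nats.
rng = np.random.default_rng(2)
codebook = rng.integers(0, 2, size=(16, 100))
y = codebook[5] ^ (rng.random(100) < 0.05).astype(int)
print(forney_erasure_decode(codebook, y, p=0.05, T=0.02))  # 5, or None if the test is inconclusive
```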

    Universal Anomaly Detection: Algorithms and Applications

    Modern computer threats are far more complicated than those seen in the past. They are constantly evolving, altering their appearance, and perpetually changing their disguise. Under such circumstances, detecting known threats, and a fortiori zero-day attacks, requires new tools that are able to capture the essence of their behavior rather than rely on fixed signatures. In this work, we propose novel universal anomaly detection algorithms that are able to learn the normal behavior of systems and alert on abnormalities, without any prior knowledge of the system model or of the characteristics of the attack. The suggested method utilizes the Lempel-Ziv universal compression algorithm in order to optimally assign probabilities to normal behavior (during learning), and then to estimate the likelihood of new data (during operation) and classify it accordingly. The suggested technique is generic and can be applied to different scenarios. Indeed, we apply it to key problems in computer security. The first is detecting Botnet Command and Control (C&C) channels. A Botnet is a logical network of compromised machines which are remotely controlled by an attacker using a C&C infrastructure, in order to perform malicious activities. We derive a detection algorithm based on timing data, which can be collected without deep inspection, from open as well as encrypted flows. We evaluate the algorithm on real-world network traces, showing how a universal, low-complexity C&C identification system can be built, with high detection rates and low false-alarm probabilities. Further applications include malicious tool detection via system call monitoring and data leakage identification.
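
    A minimal sketch of the compression-based idea, using an LZ78-style incremental parsing in Python: the training and test strings and the scoring formula are simplified illustrations, not the probability assignment analyzed in the paper.

```python
import math

def lz78_phrase_count(seq):
    """Number of phrases in the LZ78 incremental parsing of seq."""
    phrases, phrase, count = set(), "", 0
    for symbol in seq:
        phrase += symbol
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

def anomaly_score(train, test):
    """Rough per-symbol increase in LZ78 description length when test follows train.
    Larger scores indicate behavior that the normal (training) data explains poorly."""
    c_all = lz78_phrase_count(train + test)
    c_train = lz78_phrase_count(train)
    extra_bits = c_all * math.log2(c_all + 1) - c_train * math.log2(c_train + 1)
    return extra_bits / max(len(test), 1)

normal = "abab" * 200                          # made-up "normal behavior" stream
print(anomaly_score(normal, "abab" * 20))      # low score: familiar pattern
print(anomaly_score(normal, "zqzxqv" * 15))    # higher score: unfamiliar behavior
```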

    The Porosity of Additive Noise Sequences

    Consider a binary additive noise channel with noiseless feedback. When the noise is a stationary and ergodic process $\mathbf{Z}$, the capacity is $1-\mathbb{H}(\mathbf{Z})$, where $\mathbb{H}(\cdot)$ denotes the entropy rate. It is shown analogously that when the noise is a deterministic sequence $z^\infty$, the capacity under finite-state encoding and decoding is $1-\bar{\rho}(z^\infty)$, where $\bar{\rho}(\cdot)$ is Lempel and Ziv's finite-state compressibility. This quantity is termed the porosity $\underline{\sigma}(\cdot)$ of an individual noise sequence. A sequence of schemes is presented that universally achieves porosity for any noise sequence. These converse and achievability results may be interpreted both as a channel-coding counterpart to Ziv and Lempel's work in universal source coding, and as an extension of the work by Lomnitz and Feder and by Shayevitz and Feder on communication across modulo-additive channels. Additionally, a slightly more practical architecture is suggested that draws a connection with finite-state predictability, as introduced by Feder, Gutman, and Merhav. Comment: 22 pages, 9 figures.
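
    The finite-state compressibility $\bar{\rho}(\cdot)$ that defines porosity can be estimated for a finite noise realization via the LZ78 phrase count. The sketch below uses the usual $(c \log_2 c)/n$ normalization on a made-up, mostly-zero noise sequence, and is only a finite-length approximation of the asymptotic quantity in the paper.

```python
import math
import random

def lz78_phrases(bits):
    """Number of phrases in the LZ78 incremental parsing of a binary string."""
    seen, phrase, c = set(), "", 0
    for b in bits:
        phrase += b
        if phrase not in seen:
            seen.add(phrase)
            c += 1
            phrase = ""
    return c + (1 if phrase else 0)

def lz_compressibility(bits):
    """Normalized LZ78 codelength (c log2 c)/n, a finite-n estimate of rho-bar."""
    n, c = len(bits), lz78_phrases(bits)
    return c * math.log2(c) / n

random.seed(0)
# A made-up individual noise sequence that is highly compressible (about 5% ones).
z = "".join("1" if random.random() < 0.05 else "0" for _ in range(50_000))
rho = lz_compressibility(z)
print(rho, 1 - rho)   # 1 - rho plays the role of the porosity-limited communication rate
```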

    Multi-User MIMO Receivers With Partial State Information

    We consider a multi-user multiple-input multiple-output (MU-MIMO) system that uses orthogonal frequency division multiplexing (OFDM). Several receivers are developed for data detection of MU-MIMO transmissions where two users share the same OFDM time and frequency resources. The receivers have partial state information about the MU-MIMO transmission: each receiver has knowledge of the MU-MIMO channel, but the modulation constellation of the co-scheduled user is unknown. We propose a receiver that performs joint maximum likelihood (ML) modulation classification of the co-scheduled user and data detection using the max-log-MAP approximation. It is shown that the decision metric for the modulation classification is an accumulation, over a set of tones, of Euclidean distance computations that are also used by the max-log-MAP detector for generating bit log-likelihood ratio (LLR) soft decisions. An efficient hardware implementation emerges that exploits this commonality between the classification and detection steps and shares hardware resources between them. Comparisons of the link performance of the proposed receiver with several linear receivers are demonstrated through computer simulations. It is shown that the proposed receiver offers a 1.5 dB improvement in signal-to-noise ratio (SNR) over the nulling projection receiver at 1% block error rate (BLER) for 64-QAM with turbo code rate 1/2 in the case of zero transmit and receive antenna correlation. However, in the case of high antenna correlation, the linear receiver approaches suffer significant loss relative to the optimal receiver.
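
    A rough Python sketch of the classification metric described above: for each hypothesized constellation of the co-scheduled user, the minimum Euclidean distance per tone (the same quantity a max-log-MAP detector computes) is accumulated, and the hypothesis with the smallest total is selected. The channel model, noise level, constellation normalization, and two-user setup below are made-up illustration values, not the paper's simulation setup.

```python
import itertools
import numpy as np

def qam_constellation(m):
    """Unit-average-energy square M-QAM constellation (M a perfect square)."""
    k = int(np.sqrt(m))
    pam = np.arange(-(k - 1), k, 2, dtype=float)
    pts = np.array([complex(i, q) for i in pam for q in pam])
    return pts / np.sqrt(np.mean(np.abs(pts) ** 2))

def classify_coscheduled(Y, H, own_const, candidate_consts):
    """Accumulate, over tones, the minimum Euclidean distance under each hypothesized
    constellation of the co-scheduled user, and return the best hypothesis."""
    scores = {}
    for name, other_const in candidate_consts.items():
        total = 0.0
        for y, h in zip(Y, H):  # one (received vector, channel matrix) pair per tone
            total += min(np.linalg.norm(y - h @ np.array([s1, s2])) ** 2
                         for s1, s2 in itertools.product(own_const, other_const))
        scores[name] = total
    return min(scores, key=scores.get)

# Made-up usage: 2x2 channel per tone, own user QPSK, co-scheduled user actually 16-QAM.
rng = np.random.default_rng(0)
tones = 32
H = rng.standard_normal((tones, 2, 2)) + 1j * rng.standard_normal((tones, 2, 2))
own, other = qam_constellation(4), qam_constellation(16)
X = np.stack([rng.choice(own, tones), rng.choice(other, tones)], axis=1)
noise = 0.05 * (rng.standard_normal((tones, 2)) + 1j * rng.standard_normal((tones, 2)))
Y = np.einsum("tij,tj->ti", H, X) + noise
hypotheses = {"QPSK": qam_constellation(4), "16QAM": qam_constellation(16),
              "64QAM": qam_constellation(64)}
print(classify_coscheduled(Y, H, own, hypotheses))  # typically "16QAM" at this low noise level
```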

    Universal Randomized Guessing with Application to Asynchronous Decentralized Brute-Force Attacks

    Consider the problem of guessing the realization of a random vector $\mathbf{X}$ by repeatedly submitting queries (guesses) of the form "Is $\mathbf{X}$ equal to $\mathbf{x}$?" until an affirmative answer is obtained. In this setup, a key figure of merit is the number of queries required until the right vector is identified, a number that is termed the guesswork. Typically, one wishes to devise a guessing strategy which minimizes a certain guesswork moment. In this work, we study a universal, decentralized scenario where the guesser does not know the distribution of $\mathbf{X}$, and is not allowed to use a strategy which prepares a list of words to be guessed in advance, or even to remember which words were already used. Such a scenario is useful, for example, if bots within a Botnet carry out a brute-force attack in order to guess a password or decrypt a message, yet cannot coordinate the guesses between them or even know how many bots actually participate in the attack. We devise universal decentralized guessing strategies, first for memoryless sources, and then generalize them to finite-state sources. In each case, we derive the guessing exponent and then prove its asymptotic optimality by deriving a compatible converse bound. The strategies are based on randomized guessing using a universal distribution. We also extend the results to guessing with side information. Finally, for all the above scenarios, we design efficient algorithms for sampling from the universal distributions, resulting in strategies which do not depend on the source distribution, are efficient to implement, and can be used asynchronously by multiple agents.
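
    The flavor of decentralized randomized guessing can be sketched for binary memoryless sources: each agent independently draws guesses from a simple universal distribution (here, uniform over types and then uniform within the type class, a standard universal choice used only as an illustration and not necessarily the exact distribution optimized in the paper); the password and budgets below are made up.

```python
import random

def sample_universal_guess(n):
    """Draw one guess from a simple universal distribution over {0,1}^n:
    pick the number of ones uniformly (a type), then a sequence uniformly
    within that type class. No memory or coordination between guesses is needed."""
    k = random.randrange(n + 1)                  # uniform over binary types
    ones = set(random.sample(range(n), k))       # uniform within the type class
    return "".join("1" if i in ones else "0" for i in range(n))

def decentralized_attack(secret, n_guessers, max_queries_each):
    """Every agent guesses i.i.d. from the same universal distribution, asynchronously."""
    total = 0
    for _ in range(n_guessers):
        for _ in range(max_queries_each):
            total += 1
            if sample_universal_guess(len(secret)) == secret:
                return total                     # number of queries until success
    return None                                  # not found within the budget

random.seed(1)
secret = "0001000100000001"                      # a made-up low-entropy 16-bit "password"
print(decentralized_attack(secret, n_guessers=50, max_queries_each=2000))
```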