3 research outputs found

    Improved Sequential MAP estimation of CABAC encoded data with objective adjustment of the complexity/efficiency tradeoff

    This paper presents an efficient MAP estimator for the joint source-channel decoding of data encoded with a context-adaptive binary arithmetic coder (CABAC). The decoding process is compatible with realistic implementations of CABAC in standards such as H.264, i.e., it handles adaptive probabilities, context modeling, and integer arithmetic coding. Soft decoding is obtained using an improved sequential decoding technique, which makes it possible to obtain various tradeoffs between complexity and efficiency. The algorithms are simulated in a context reminiscent of H.264. Error detection is realized by exploiting, on the one hand, the properties of the binarization scheme and, on the other, the redundancy left in the code string. As a result, the CABAC compression efficiency is preserved and no additional redundancy is introduced in the bit stream. Simulation results demonstrate the efficiency of the proposed techniques for encoded data sent over AWGN and UMTS-OFDM channels.
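The sequential search at the heart of such a decoder, and its complexity/efficiency knob, can be sketched generically. The following is a minimal illustration only, not the paper's CABAC decoder: the function `stack_decode`, the bounded extension budget, and the toy BPSK-style branch metric are all assumptions introduced here for exposition.

```python
import heapq

def stack_decode(branch_metric, depth, max_extensions):
    """Best-first (stack-algorithm) search for the maximum-metric
    root-to-leaf path in a binary decision tree.

    `max_extensions` caps the number of node expansions: a small budget
    is cheap but may fail to reach a leaf, a large budget approaches an
    exhaustive search -- the complexity/efficiency tradeoff."""
    # Each entry: (negated path metric, path as a tuple of bits).
    # heapq is a min-heap, so negating the metric pops the best path first.
    stack = [(0.0, ())]
    extensions = 0
    while stack and extensions < max_extensions:
        neg_metric, path = heapq.heappop(stack)
        if len(path) == depth:              # best path reached a leaf: decode it
            return list(path), -neg_metric
        extensions += 1
        for bit in (0, 1):                  # extend the top path by one symbol
            m = branch_metric(path, bit)
            heapq.heappush(stack, (neg_metric - m, path + (bit,)))
    return None, float("-inf")              # search budget exhausted

# Toy soft-input example (hypothetical, BPSK-like): metric rewards
# agreement between the hypothesized bit and the channel observation.
y = [0.9, -1.1, 0.8]                        # soft channel outputs
metric = lambda path, bit: (1 if bit else -1) * y[len(path)]
path, m = stack_decode(metric, depth=3, max_extensions=100)
# path == [1, 0, 1]: hard decisions on the sign of each observation
```

Lowering `max_extensions` directly trades decoding reliability for computation, which is the kind of objectively adjustable tradeoff the title refers to.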

    Sequential Detection of Linear Features in Two-Dimensional Random Fields

    The detection of edges, lines, and other linear features in two-dimensional discrete images is a low-level processing step of fundamental importance in the automatic processing of such data. Many subsequent tasks in computer vision, pattern recognition, and image processing depend on the successful execution of this step. In this thesis, we will address one class of techniques for performing this task: sequential detection. Our aims are fourfold. First, we would like to discuss the use of sequential techniques as an attractive alternative to the somewhat better-known methods of approaching this problem. Although several researchers have obtained significant results with sequential-type algorithms, the inherent benefits of a sequential approach would appear to have gone largely unappreciated. Secondly, the sequential techniques reported to date appear somewhat lacking with respect to a theoretical foundation. Furthermore, the theory that is advanced incorporates rather severe restrictions on the types of images to which it applies, thus imposing a significant limitation on the generality of the method(s). We seek to advance a more general theory with minimal assumptions regarding the input image. A third goal is to utilize this newly developed theory to obtain quantitative assessments of the performance of the method. This important step, which depends on a computational theory, can answer such vital questions as: Are assumptions about the qualitative behavior of the method justified? How does signal-to-noise ratio impact its behavior? How fast is it? How accurate? The state of theoretical development of present techniques does not allow for this type of analysis. Finally, a fourth aim is to extend the earlier results to include correlated image data. Present sequential methods as well as many non-sequential methods assume that the image data is uncorrelated and cannot therefore make use of the mutual information between pixels in real-world images.
We would like to extend the theory to incorporate correlated images and demonstrate the advantages incurred by the use of the existing mutual information. The topics to be discussed are organized in the following manner. We will first provide a rather general discussion of the problem of detecting intensity edges in images. The edge detection problem will serve as the prototypical problem of linear feature extraction for much of this thesis. It will later be shown that the detection of lines, ramp edges, texture edges, etc. can be handled in similar fashion to intensity edges, the only difference being the nature of the preprocessing operator used. The class of sequential techniques will then be introduced, with a view to emphasizing the particular advantages and disadvantages exhibited by the class. This Chapter will conclude with a more detailed treatment of the various sequential algorithms proposed in the literature. Chapter 2 then develops the algorithm proposed by the author, Sequential Edge Linking or SEL. It begins with some definitions, follows with a derivation of the critical path branch metric and some of its properties, and concludes with a discussion of algorithms. The third Chapter is devoted exclusively to an analysis of the dynamical behavior and performance of the method. Chapter 4 then deals with the case of correlated random fields. In that Chapter, a model is proposed for which paths searched by the SEL algorithm are shown to possess a well-known autocorrelation function. This allows the use of a simple linear filter to decorrelate the raw image data. Finally, Chapter 5 presents a number of experimental results and corroboration of the theoretical conclusions of earlier Chapters. Some concluding remarks are also included in Chapter 5.
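The decorrelation step mentioned above can be illustrated under a first-order autoregressive assumption; the specific model and filter here (an AR(1) coefficient `rho` and the helper `decorrelate_ar1`) are hypothetical stand-ins for illustration, not the autocorrelation model derived in the thesis.

```python
def decorrelate_ar1(row, rho):
    """Whiten a 1-D scan line whose correlation follows a first-order
    autoregressive (AR(1)) model with coefficient rho:

        w[n] = x[n] - rho * x[n-1]

    If the AR(1) model holds, the output samples are the (uncorrelated)
    innovations, so a sequential detector can treat them as independent."""
    return [row[0]] + [row[i] - rho * row[i - 1] for i in range(1, len(row))]

# Hypothetical example: synthesize an AR(1) sequence from known
# innovations, then recover those innovations with the filter.
rho = 0.5
innovations = [1.0, 2.0, 3.0]
x = [innovations[0]]
for e in innovations[1:]:
    x.append(rho * x[-1] + e)           # x[n] = rho * x[n-1] + e[n]
whitened = decorrelate_ar1(x, rho)      # recovers [1.0, 2.0, 3.0]
```

The same one-tap linear filter idea extends separably to rows and columns of a 2-D field when the correlation model is separable.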

    Importance Sampling Simulation of the Stack Algorithm with Application to Sequential Decoding

    Importance sampling is a Monte Carlo variance reduction technique which in many applications has resulted in a significant reduction in the computational cost required to obtain accurate Monte Carlo estimates. The basic idea is to generate the random inputs using a biased simulation distribution, that is, one that differs from the true underlying probability model. Simulation data is then weighted by an appropriate likelihood ratio in order to obtain an unbiased estimate of the desired parameter. This thesis presents new importance sampling techniques for the simulation of systems that employ the stack algorithm. The stack algorithm is primarily used in digital communications to decode convolutional codes, but there are also other applications. For example, sequential edge linking is a method of finding edges in images that employs the stack algorithm. In brief, the stack algorithm is an algorithm that attempts to find the maximum-metric path through a large decision tree. There are two quantities that characterize its performance. First, there is the probability of a branching error. The second quantity is the distribution of computation. It turns out that the number of tree nodes examined in order to make a specific branching decision is a random variable. The distribution of computation is the distribution of this random variable. The estimation of the distribution of computation, and of parameters derived from this distribution, is the main goal of this work. We present two new importance sampling schemes (including some variations) for estimating the distribution of computation of the stack algorithm. The first general method is called the reference path method. This method biases noise inputs using the weight distribution of the associated convolutional code. The second method is the partitioning method. This method uses a stationary biasing of noise inputs that alters the drift of the node metric process in an ensemble-average sense.
The biasing is applied only up to a certain point in time: the point where the correct-path node metric minimum occurs. This method is inspired by both information theory and large deviations theory. This thesis also presents another two importance sampling techniques. The first is called the error events simulation method. This scheme will be used to estimate the error probabilities of stack algorithm decoders. The second method that we shall present is a new importance sampling technique for simulating the sequential edge linking algorithm. The main goals of this presentation are to develop the basic theory relevant to this simulation problem and to discuss some of the key issues related to the sequential edge linking simulation.
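The likelihood-ratio weighting described in the abstract can be shown with a textbook example rather than one of the thesis's estimators: estimating the small Gaussian tail probability P(X > t) by drawing samples from a mean-shifted (biased) density and reweighting each one. The function name `is_tail_prob` and the choice of biasing density are assumptions made here for illustration.

```python
import math
import random

def is_tail_prob(threshold, n, seed=0):
    """Importance-sampling estimate of P(X > threshold) for X ~ N(0, 1).

    Samples are drawn from the biased density g = N(threshold, 1), so the
    rare event is no longer rare under the simulation distribution. Each
    indicator is weighted by the likelihood ratio f(x)/g(x), which keeps
    the estimator unbiased for the probability under the true model f."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)       # draw from the biased model g
        if x > threshold:
            # f(x)/g(x) = exp(threshold^2/2 - threshold*x) for these Gaussians
            total += math.exp(threshold**2 / 2.0 - threshold * x)
    return total / n

# For threshold = 4, naive Monte Carlo would need ~10^5 samples per hit;
# the biased simulation hits the event on roughly half of all draws.
estimate = is_tail_prob(4.0, 40000)
```

The mean-shift bias used here plays the same role as the metric-drift-altering bias in the partitioning method: it makes the event of interest typical under the simulation distribution while the likelihood ratio corrects the statistics.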