
    Blind equalization

    An equalizer is an adaptive filter that compensates for the non-ideal characteristics of a communication channel by processing the received signal. The adaptive algorithm searches for the inverse impulse response of the channel and requires knowledge of a training sequence in order to generate the error signal that drives the adaptation. There are practical situations where it would be highly desirable to achieve complete adaptation without the use of a training sequence, hence the term "blind". Examples of such situations are multipoint data networks, high-capacity line-of-sight digital radio, and reflection seismology. A blind adaptive algorithm has been developed, based on simplified equalization criteria: the second- and fourth-order moments of the input and output sequences are equalized. The algorithm is entirely driven by statistics, requiring knowledge only of the variance of the input signal. Because higher-order statistics are insensitive to Gaussian processes, the algorithm performs well when additive white Gaussian noise is present in the channel. Simulations are presented in which the new blind equalizer is compared to other equalization algorithms.
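
    As a rough illustration of this kind of statistics-driven, training-free adaptation, the sketch below implements a constant-modulus-style stochastic-gradient equalizer whose dispersion constant is derived from an assumed input variance. It is a minimal sketch, not the paper's algorithm: the second-/fourth-order moment criteria described above are only approximated by the constant-modulus cost, and the function name, tap count, and step size are hypothetical.

```python
import numpy as np

def blind_equalize(received, num_taps=11, mu=1e-3, input_variance=1.0):
    # Hypothetical sketch: a constant-modulus-style blind equalizer that needs
    # no training sequence, only an assumed input variance. For a unit-energy,
    # constant-modulus source the dispersion constant R2 equals that variance.
    R2 = input_variance
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0              # centre-spike initialization
    buf = np.zeros(num_taps, dtype=complex)
    out = np.empty(len(received), dtype=complex)
    for n, x in enumerate(received):
        buf = np.roll(buf, 1)
        buf[0] = x
        y = np.vdot(w, buf)             # equalizer output y = w^H x
        out[n] = y
        e = y * (np.abs(y) ** 2 - R2)   # error built purely from output statistics
        w -= mu * np.conj(e) * buf      # stochastic-gradient tap update
    return out, w

# Example: equalize a QPSK stream distorted by a simple two-tap channel.
rng = np.random.default_rng(0)
symbols = (rng.choice([1, -1], 4000) + 1j * rng.choice([1, -1], 4000)) / np.sqrt(2)
channel_out = np.convolve(symbols, [1.0, 0.4 + 0.3j], mode="same")
equalized, taps = blind_equalize(channel_out)
```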

    Statistical framework for video decoding complexity modeling and prediction

    Video decoding complexity modeling and prediction is an increasingly important issue for efficient resource utilization in a variety of applications, including task scheduling, receiver-driven complexity shaping, and adaptive dynamic voltage scaling. In this paper we present a novel view of this problem from a statistical-framework perspective. We explore the statistical structure (clustering) of the execution time required by each video decoder module (entropy decoding, motion compensation, etc.) in conjunction with complexity features that are easily extractable at encoding time (representing the properties of each module's input source data). For this purpose, we employ Gaussian mixture models (GMMs) and an expectation-maximization algorithm to estimate the joint execution-time and feature probability density function (PDF). A training set of typical video sequences is used in an offline estimation process. The obtained GMM representation is used in conjunction with the complexity features of new video sequences to predict the execution time required to decode these sequences. Several prediction approaches are discussed and compared. The potential mismatch between the training set and new video content is addressed by adaptive online joint-PDF re-estimation. An experimental comparison is performed to evaluate the different approaches and to compare the proposed prediction scheme with related resource-prediction schemes from the literature. The usefulness of the proposed complexity-prediction approaches is demonstrated in an application of rate-distortion-complexity optimized decoding.
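
    The sketch below illustrates the joint-PDF idea under stated assumptions: a GMM is fitted with EM (here via scikit-learn's GaussianMixture) to joint (feature, execution-time) training samples, and the execution time of a new sequence is predicted as the conditional mean of the time dimension given its features. The function names, feature layout, and component count are hypothetical and only meant to show the mechanics, not the paper's exact prediction variants or online re-estimation.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(features, exec_times, n_components=4):
    # Hypothetical sketch: EM estimation of the joint PDF over encoding-time
    # complexity features and the measured decoding execution times.
    joint = np.column_stack([features, exec_times])
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    return gmm.fit(joint)

def predict_exec_time(gmm, x):
    # Predict E[t | features = x] under the fitted joint GMM (GMM regression).
    x = np.asarray(x, dtype=float)
    d = x.size
    num = den = 0.0
    for pi, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        mu_f, mu_t = mu[:d], mu[d]
        S_ff, S_tf = cov[:d, :d], cov[d, :d]
        r = pi * multivariate_normal.pdf(x, mean=mu_f, cov=S_ff)   # component weight
        cond_mean = mu_t + S_tf @ np.linalg.solve(S_ff, x - mu_f)  # conditional mean
        num += r * cond_mean
        den += r
    return num / den

# Example with synthetic training data: two complexity features per sequence.
rng = np.random.default_rng(0)
feats = rng.uniform(0.0, 1.0, size=(500, 2))
times = 5.0 + 3.0 * feats[:, 0] + 1.5 * feats[:, 1] ** 2 + rng.normal(0, 0.1, 500)
model = fit_joint_gmm(feats, times)
print(predict_exec_time(model, [0.5, 0.5]))
```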