
    Oversampling PCM techniques and optimum noise shapers for quantizing a class of nonbandlimited signals

    We consider the efficient quantization of a class of nonbandlimited signals, namely, the class of discrete-time signals that can be recovered from their decimated version. The signals are modeled as the output of a single FIR interpolation filter (single-band model) or, more generally, as the sum of the outputs of L FIR interpolation filters (multiband model). These nonbandlimited signals are oversampled, and it is therefore reasonable to expect that we can reap the same benefits of well-known efficient A/D techniques that apply only to bandlimited signals. We first show that we can obtain a great reduction in the quantization noise variance due to the oversampled nature of the signals. We can achieve a substantial decrease in bit rate by appropriately decimating the signals and then quantizing them. To further increase the effective quantizer resolution, noise shaping is introduced by optimizing prefilters and postfilters around the quantizer. We start with a scalar time-invariant quantizer and study two important cases of linear time-invariant (LTI) filters, namely, the case where the postfilter is the inverse of the prefilter and the more general case where the postfilter is independent of the prefilter. Closed-form expressions for the optimum filters and average minimum mean-square error are derived in each case for both the single-band and multiband models. The class of noise shaping filters and quantizers is then enlarged to include linear periodically time-varying LPTV(M) filters and periodically time-varying quantizers of period M. We study two special cases in great detail.
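
    To make the single-band model concrete, the sketch below generates a signal as the output of an FIR interpolation filter and compares plain uniform quantization of every high-rate sample with quantizing only the low-rate driving sequence and re-interpolating. The filter, oversampling factor and bit depth are illustrative assumptions; the paper's optimized prefilters, postfilters and noise shapers are not implemented here.

```python
# Minimal sketch of the single-band model (assumed parameters, no noise shaping).
import numpy as np

rng = np.random.default_rng(0)
M = 4                                   # interpolation / decimation factor (assumed)
h = np.hamming(8 * M)
h /= h.sum()                            # an illustrative FIR interpolation filter

def interpolate(c, h, M):
    """Upsample c by M and apply the FIR interpolation filter h."""
    up = np.zeros(len(c) * M)
    up[::M] = c
    return np.convolve(up, h, mode="same")

def quantize(s, bits):
    """Plain uniform scalar quantizer over the signal's range."""
    step = (s.max() - s.min()) / (2 ** bits)
    return np.round(s / step) * step

c = rng.standard_normal(2048)           # low-rate driving sequence
x = interpolate(c, h, M)                # oversampled, nonbandlimited signal

x_high = quantize(x, bits=8)                       # quantize every high-rate sample
x_low = interpolate(quantize(c, bits=8), h, M)     # quantize M times fewer samples

print("MSE, high-rate quantization:", np.mean((x - x_high) ** 2))
print("MSE, low-rate quantization :", np.mean((x - x_low) ** 2))
```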

    Non-atomic Games for Multi-User Systems

    In this contribution, the performance of a multi-user system is analyzed in the context of frequency-selective fading channels. Using game-theoretic tools, a framework is provided for determining the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). We consider the realistic case of frequency-selective channels for uplink CDMA. This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the matched filter, the MMSE filter and the optimum filter. The goal of this paper is to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end, two asymptotic methodologies are combined. The first is asymptotic random matrix theory, which allows us to obtain explicit expressions for the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games, which computes good approximations of the Nash equilibrium as the number of mobiles grows. Comment: 17 pages, 4 figures; submitted to the IEEE JSAC Special Issue on "Game Theory in Communication Systems".
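
    The matched-filter part of this large-system analysis can be illustrated numerically: with random spreading sequences of length N, the interference that all other mobiles inflict on a tagged mobile concentrates around a simple deterministic average, which is the kind of explicit expression asymptotic random matrix theory yields. The powers, fading model and system sizes below are assumptions, and no Nash equilibrium is computed.

```python
# Hedged sketch: exact vs. large-system matched-filter SINR under random spreading.
import numpy as np

rng = np.random.default_rng(1)
N, K, sigma2 = 256, 128, 0.1                 # spreading length, users, noise power
p = rng.uniform(0.5, 2.0, K)                 # transmit powers (assumed)
g = rng.exponential(1.0, K)                  # flat-fading channel gains |h_k|^2 (assumed)

S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)   # random spreading codes

k = 0                                        # tagged mobile
cross = (S[:, k] @ S) ** 2                   # squared code correlations with user k
cross[k] = 0.0
sinr_exact = p[k] * g[k] / (sigma2 + np.sum(p * g * cross))
sinr_asymp = p[k] * g[k] / (sigma2 + np.sum(np.delete(p * g, k)) / N)

print("matched-filter SINR, exact       :", sinr_exact)
print("matched-filter SINR, large-system:", sinr_asymp)
```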

    An affine combination of two LMS adaptive filters - Transient mean-square analysis

    This paper studies the statistical behavior of an affine combination of the outputs of two LMS adaptive filters that simultaneously adapt using the same white Gaussian inputs. The purpose of the combination is to obtain an LMS adaptive filter with fast convergence and small steady-state mean-square deviation (MSD). The linear combination studied is a generalization of the convex combination, in which the combination factor λ(n) is restricted to the interval (0,1). The viewpoint is taken that each of the two filters produces dependent estimates of the unknown channel. Thus, there exists a sequence of optimal affine combining coefficients which minimizes the MSE. First, the optimal unrealizable affine combiner is studied and provides the best possible performance for this class. Then two new schemes are proposed for practical applications. The mean-square performances are analyzed and validated by Monte Carlo simulations. With proper design, the two practical schemes yield an overall MSD that is usually less than the MSD of either filter.
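
    A minimal sketch of the structure under study, assuming a white Gaussian input, a fast and a slow LMS filter, and a simple stochastic-gradient rule for the mixing factor λ(n); the rule for λ(n) is illustrative and not necessarily either of the paper's two practical schemes.

```python
# Two LMS filters identify the same channel; their outputs are blended affinely.
import numpy as np

rng = np.random.default_rng(2)
Nw = 16
w_true = rng.standard_normal(Nw)               # unknown channel (assumed)
mu1, mu2, mu_lam = 0.03, 0.003, 0.1            # fast LMS, slow LMS, combiner steps

w1 = np.zeros(Nw); w2 = np.zeros(Nw); lam = 1.0
x_buf = np.zeros(Nw)
for n in range(20000):
    x_buf = np.r_[rng.standard_normal(), x_buf[:-1]]      # white Gaussian regressor
    d = w_true @ x_buf + 0.01 * rng.standard_normal()     # noisy desired signal
    y1, y2 = w1 @ x_buf, w2 @ x_buf
    w1 += mu1 * (d - y1) * x_buf                           # independent LMS updates
    w2 += mu2 * (d - y2) * x_buf
    y = lam * y1 + (1.0 - lam) * y2                        # affine combination (lam unconstrained)
    lam += mu_lam * (d - y) * (y1 - y2)                    # adapt the mixing factor

w_comb = lam * w1 + (1.0 - lam) * w2
print("MSD fast filter:", np.sum((w_true - w1) ** 2))
print("MSD slow filter:", np.sum((w_true - w2) ** 2))
print("MSD combination:", np.sum((w_true - w_comb) ** 2))
```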

    Echo Cancellation: the generalized likelihood ratio test for double-talk vs. channel change

    Echo cancellers are required in both electrical (impedance mismatch) and acoustic (speaker-microphone coupling) applications. One of the main design problems is the control logic for adaptation. Basically, the algorithm weights should be frozen in the presence of double-talk and adapt quickly in the absence of double-talk. The optimum likelihood ratio test (LRT) for this problem was studied in a recent paper. The LRT requires a priori knowledge of the background noise and double-talk power levels. Instead, this paper derives a generalized log-likelihood ratio test (GLRT) that does not require this knowledge. The probability density function of a sufficient statistic under each hypothesis is obtained, and the performance of the test is evaluated as a function of the system parameters. The receiver operating characteristics (ROCs) indicate that it is difficult to correctly decide between double-talk and a channel change based upon a single look. However, detection based on about 200 successive samples yields a detection probability close to unity (0.99) with a small false alarm probability (0.01) for the theoretical GLRT model. Application of a GLRT-based echo canceller (EC) to real voice data shows performance comparable to that of the LRT-based EC given in a recent paper.
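
    For contrast with the GLRT, the sketch below implements the classical Geigel detector, a simple control-logic baseline that freezes adaptation whenever the near-end level exceeds a fraction of the recent far-end peak. The threshold, window length and toy signals are assumptions, and this is not the paper's test.

```python
# Classical Geigel double-talk detector (baseline, not the paper's GLRT).
import numpy as np

def geigel_double_talk(x, d, threshold=0.5, window=240):
    """Flag samples where adaptation should be frozen (double-talk suspected)."""
    freeze = np.zeros(len(d), dtype=bool)
    for n in range(len(d)):
        far_peak = np.max(np.abs(x[max(0, n - window + 1): n + 1]))
        freeze[n] = np.abs(d[n]) >= threshold * far_peak
    return freeze

rng = np.random.default_rng(3)
x = rng.standard_normal(8000)                              # far-end reference
echo = 0.3 * x                                             # toy echo path (pure gain)
near = np.r_[np.zeros(4000), rng.standard_normal(4000)]    # near-end talker in 2nd half
freeze = geigel_double_talk(x, echo + near)

print("freeze rate, echo only  :", freeze[:4000].mean())
print("freeze rate, double-talk:", freeze[4000:].mean())
```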

    The sensitivity of a very long baseline interferometer

    The theoretical sensitivities of various methods of acquiring and processing interferometer data are compared. It is shown that, for a fixed digital recording capacity, one-bit quantization of single-sideband data filtered with a rectangular bandpass and sampled at the Nyquist rate yields the optimum signal-to-noise ratio. The losses which result from imperfect bandpass, poor image rejection, approximate methods of fringe rotation, fractional bit correction, and loss of quadrature are discussed. Also discussed is the use of the complex delay function as a maximum-likelihood fringe estimator.
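
    The one-bit result rests on the classical Van Vleck relation: clipping two weakly correlated Gaussian streams to ±1 scales the measured correlation by roughly 2/π, the usual one-bit quantization efficiency. The short Monte Carlo below checks this for an assumed correlation level; it is a sanity check, not the paper's full sensitivity comparison.

```python
# One-bit (Van Vleck) correlation check for weakly correlated Gaussian streams.
import numpy as np

rng = np.random.default_rng(4)
rho, N = 0.01, 2_000_000                     # weak "fringe" correlation, sample count (assumed)
common = rng.standard_normal(N)
a = np.sqrt(rho) * common + np.sqrt(1 - rho) * rng.standard_normal(N)
b = np.sqrt(rho) * common + np.sqrt(1 - rho) * rng.standard_normal(N)

r_analog = np.mean(a * b)                    # unquantized correlator
r_onebit = np.mean(np.sign(a) * np.sign(b))  # one-bit correlator

print("analog estimate :", r_analog)
print("one-bit estimate:", r_onebit, "(expected about", 2 / np.pi * rho, ")")
```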

    Hybrid computer Monte-Carlo techniques

    Hybrid analog-digital computer systems for Monte Carlo method application.

    Stochastic Analysis of the LMS Algorithm for System Identification with Subspace Inputs

    This paper studies the behavior of the low-rank LMS adaptive algorithm for the general case in which the input transformation may not capture the exact input subspace. It is shown that the Independence Theory and the independent additive noise model are not applicable to this case. A new theoretical model for the weight mean and fluctuation behaviors is developed which incorporates the correlation between successive data vectors (as opposed to the Independence Theory model). The new theory is applied to a network echo cancellation scheme which uses partial-Haar input vector transformations. Comparison of the new model predictions with Monte Carlo simulations shows good-to-excellent agreement, certainly much better than predicted by the Independence Theory-based model available in the literature.
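
    As a rough illustration of the setup (not the paper's analysis), the sketch below runs LMS on a regressor projected onto a few rows of an orthonormal Haar matrix; successive tapped-delay-line regressors are strongly correlated, and the retained rows need not span the true echo path, which is exactly the regime the new model addresses. The choice of rows, the echo path and the step size are assumptions.

```python
# Reduced-rank (transform-domain) LMS with a partial Haar input transformation.
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar matrix of size n (n a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])
    return np.vstack([top, bot]) / np.sqrt(2.0)

rng = np.random.default_rng(5)
N, r, mu = 64, 8, 0.01                        # regressor length, retained rank, step (assumed)
T = haar_matrix(N)[:r, :]                     # partial Haar transform: keep r rows
w_true = np.zeros(N)
w_true[10:14] = [0.8, -0.5, 0.3, 0.1]         # sparse echo path (assumed)

g = np.zeros(r)
x_buf = np.zeros(N)
err = []
for n in range(20000):
    x_buf = np.r_[rng.standard_normal(), x_buf[:-1]]   # correlated successive regressors
    d = w_true @ x_buf + 0.01 * rng.standard_normal()
    u = T @ x_buf                                      # reduced, transformed regressor
    e = d - g @ u
    g += mu * e * u                                    # LMS update in the low-rank subspace
    err.append(e ** 2)

print("mean-square error, last 1000 samples:", np.mean(err[-1000:]))
```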

    Results on principal component filter banks: colored noise suppression and existence issues

    We have made explicit the precise connection between the optimization of orthonormal filter banks (FBs) and the principal component property: the principal component filter bank (PCFB) is optimal whenever the minimization objective is a concave function of the subband variances of the FB. This explains PCFB optimality for compression, progressive transmission, and various hitherto unnoticed white-noise suppression applications such as subband Wiener filtering. The present work examines the nature of the FB optimization problems for such schemes when PCFBs do not exist. Using the geometry of the optimization search spaces, we explain exactly why these problems are usually analytically intractable. We show the relation between compaction filter design (i.e., variance maximization) and optimum FBs. A sequential maximization of subband variances produces a PCFB if one exists, but is otherwise suboptimal for several concave objectives. We then study PCFB optimality for colored noise suppression. Unlike the case when the noise is white, here the minimization objective is a function of both the signal and the noise subband variances. We show that for the transform coder class, if a common signal and noise PCFB (KLT) exists, it is optimal for a large class of concave objectives. Common PCFBs for general FB classes have a considerably more restricted optimality, as we show using the class of unconstrained orthonormal FBs. For this class, we also show how to find an optimum FB when the signal and noise spectra are both piecewise constant with all discontinuities at rational multiples of π.
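
    For the transform coder class the PCFB is the KLT, and its subband variance vector majorizes that of any other orthonormal transform, so a concave objective evaluated on the KLT variances is never larger. The snippet below checks this for an arbitrary covariance matrix and one assumed concave objective (a sum of square roots).

```python
# Check of the principal-component property for the transform coder class (KLT).
import numpy as np

rng = np.random.default_rng(6)
M = 8
A = rng.standard_normal((M, M))
Rxx = A @ A.T                                      # an arbitrary signal covariance (assumed)

eigvals, _ = np.linalg.eigh(Rxx)                   # KLT = eigenvectors of Rxx
var_klt = eigvals[::-1]                            # KLT subband variances, largest first

Q, _ = np.linalg.qr(rng.standard_normal((M, M)))   # some other orthonormal transform
var_other = np.diag(Q.T @ Rxx @ Q)                 # its subband variances

concave = lambda v: np.sum(np.sqrt(v))             # one concave function of the variances
print("concave objective, KLT  :", concave(var_klt))
print("concave objective, other:", concave(var_other))   # never below the KLT value
```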

    Introduction to the Analysis of Low-Frequency Gravitational Wave Data

    The space-based gravitational wave detector LISA will observe in the low-frequency gravitational-wave band (0.1 mHz up to 1 Hz). LISA will search for a variety of expected signals, and when it detects a signal it will have to determine a number of parameters, such as the location of the source on the sky and the signal's polarisation. This requires pattern-matching, called matched filtering, which uses the best available theoretical predictions about the characteristics of waveforms. All the estimates of the sensitivity of LISA to various sources assume that the data analysis is done in the optimum way. Because these techniques are unfamiliar to many young physicists, I use the first part of this lecture to give a very basic introduction to time-series data analysis, including matched filtering. The second part of the lecture applies these techniques to LISA, showing how estimates of LISA's sensitivity can be made, and briefly commenting on aspects of the signal-analysis problem that are special to LISA. Comment: 20 pages.
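
    A minimal matched-filtering sketch in white noise, of the kind covered in the first part of the lecture: correlate the data against a known unit-norm template and read off the peak signal-to-noise ratio. The template, amplitude and noise level are toy assumptions, not LISA waveforms or the LISA noise curve.

```python
# Matched filtering of a known template buried in white Gaussian noise.
import numpy as np

rng = np.random.default_rng(7)
N, sigma = 4096, 1.0
t = np.arange(N)
template = np.sin(2 * np.pi * 0.01 * t) * np.exp(-((t - 2000.0) / 400.0) ** 2)
template /= np.sqrt(np.sum(template ** 2))                # unit-norm template

data = 8.0 * template + sigma * rng.standard_normal(N)    # weak-ish signal in noise

snr_series = np.correlate(data, template, mode="same") / sigma
print("peak matched-filter SNR:", snr_series.max())       # about 8 expected here
```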

    A Novel Family of Adaptive Filtering Algorithms Based on the Logarithmic Cost

    We introduce a novel family of adaptive filtering algorithms based on a relative logarithmic cost. The new family intrinsically combines the higher- and lower-order measures of the error into a single continuous update based on the error amount. We introduce important members of this family of algorithms such as the least mean logarithmic square (LMLS) and least logarithmic absolute difference (LLAD) algorithms that improve the convergence performance of the conventional algorithms. However, our approach and analysis are generic such that they cover other well-known cost functions as described in the paper. The LMLS algorithm achieves comparable convergence performance with the least mean fourth (LMF) algorithm and extends the stability bound on the step size. The LLAD and least mean square (LMS) algorithms demonstrate similar convergence performance in impulse-free noise environments, while the LLAD algorithm is robust against impulsive interferences and outperforms the sign algorithm (SA). We analyze the transient, steady-state and tracking performance of the introduced algorithms and demonstrate the match of the theoretical analyses and simulation results. We show the extended stability bound of the LMLS algorithm and analyze the robustness of the LLAD algorithm against impulsive interferences. Finally, we demonstrate the performance of our algorithms in different scenarios through numerical examples. Comment: Submitted to IEEE Transactions on Signal Processing.
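
    A hedged sketch of the family's central idea: the update's error nonlinearity blends higher-order (LMF-like) behaviour at small errors with lower-order (LMS- or sign-like) behaviour at large errors, which is what yields robustness to impulsive interference. The specific normalized forms and the impulsive-noise toy model below are assumptions, not formulas taken from the paper.

```python
# Illustrative error nonlinearities blending lower- and higher-order error measures.
import numpy as np

def lmls_like(e):
    return e ** 3 / (1.0 + e ** 2)        # ~e**3 for small e (LMF-like), ~e for large e

def llad_like(e):
    return e / (1.0 + np.abs(e))          # ~e for small e (LMS-like), ~sign(e) for large e

rng = np.random.default_rng(8)
N, mu = 16, 0.01
w_true = rng.standard_normal(N)           # unknown system (assumed)
w = np.zeros(N)
x_buf = np.zeros(N)
for n in range(20000):
    x_buf = np.r_[rng.standard_normal(), x_buf[:-1]]
    noise = rng.standard_normal() * (10.0 if rng.random() < 0.01 else 0.05)  # impulsive
    d = w_true @ x_buf + noise
    e = d - w @ x_buf
    w += mu * llad_like(e) * x_buf        # robust update despite the impulsive samples

print("MSD with LLAD-like update:", np.sum((w_true - w) ** 2))
```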