
    Optimal lattices for sampling

    Get PDF
    The generalization of the sampling theorem to multidimensional signals is considered, with or without bandwidth constraints. The signal is modeled as a stationary random process and sampled on a lattice. Exact expressions for the mean-square error of the best linear interpolator are given in the frequency domain. Moreover, asymptotic expansions are derived for the average mean-square error as the sampling rate tends to zero and to infinity, respectively. This makes it possible to determine the optimal lattices for sampling. In the low-rate sampling case, or equivalently for rough processes, the optimal lattice is the one that solves the packing problem, whereas in the high-rate sampling case, or equivalently for smooth processes, the optimal lattice is the one that solves the dual packing problem. In addition, the best linear interpolation is compared with ideal low-pass filtering (cardinal interpolation).
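    As a small aside, the low-rate/rough-process criterion above is the classical lattice packing density. The sketch below (not from the paper; the basis matrices and brute-force search range are illustrative assumptions) computes that density for the square and hexagonal lattices in 2D, where the hexagonal lattice is the packing-optimal choice.

    import numpy as np
    from itertools import product

    def packing_density(B):
        """Fraction of the plane covered by non-overlapping disks centered
        on the lattice with generator matrix B (rows are basis vectors)."""
        # Shortest nonzero lattice vector, found by brute force over small
        # integer coefficients (sufficient for these well-conditioned bases).
        coeffs = [c for c in product(range(-3, 4), repeat=2) if c != (0, 0)]
        d_min = min(np.linalg.norm(np.array(c) @ B) for c in coeffs)
        r = d_min / 2.0                        # packing radius
        return np.pi * r**2 / abs(np.linalg.det(B))

    square = np.array([[1.0, 0.0], [0.0, 1.0]])
    hexagonal = np.array([[1.0, 0.0], [0.5, np.sqrt(3) / 2]])

    print(f"square    : {packing_density(square):.4f}")     # ~0.7854 = pi/4
    print(f"hexagonal : {packing_density(hexagonal):.4f}")  # ~0.9069 = pi/(2*sqrt(3))

    The printed values, pi/4 ≈ 0.785 and pi/(2*sqrt(3)) ≈ 0.907, are the standard 2D packing densities of the two lattices.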

    Optimal filtering in fractional Fourier domains

    Get PDF
    For time-invariant degradation models and stationary signals and noise, the classical Fourier-domain Wiener filter, which can be implemented in O(N log N) time, gives the minimum mean-square-error estimate of the original undistorted signal. For time-varying degradations and nonstationary processes, however, the optimal linear estimate requires O(N²) time for implementation. We consider filtering in fractional Fourier domains, which enables a significant reduction of the error compared with ordinary Fourier-domain filtering for certain types of degradation and noise (especially of chirped nature), while requiring only O(N log N) implementation time. Thus, improved performance is achieved at no additional cost. Expressions for the optimal filter functions in fractional domains are derived, and several illustrative examples are given in which significant reduction of the error (by a factor of 50) is obtained. © 1997 IEEE
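    For orientation, the sketch below shows the a = 1 special case: an ordinary Fourier-domain Wiener filter for purely additive noise. The test signal, noise level, and the assumption that both power spectra are known are illustrative; the paper's fractional-domain filter would apply the same pointwise gain after a discrete fractional Fourier transform of order a, which is not implemented here.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 1024
    t = np.arange(N)

    # Illustrative signal plus white noise; both spectra are assumed known.
    x = np.cos(2 * np.pi * 0.01 * t) + 0.5 * np.cos(2 * np.pi * 0.03 * t)
    noise = rng.normal(scale=0.8, size=N)
    y = x + noise                                   # observed signal

    S_x = np.abs(np.fft.fft(x))**2 / N              # signal PSD (assumed known)
    S_n = np.full(N, 0.8**2)                        # white-noise PSD

    G = S_x / (S_x + S_n)                           # Wiener gain per frequency bin
    x_hat = np.real(np.fft.ifft(G * np.fft.fft(y)))

    print("MSE before filtering:", np.mean((y - x)**2))
    print("MSE after filtering :", np.mean((x_hat - x)**2))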

    Minimum requirements for feedback enhanced force sensing

    Full text link
    The problem of estimating an unknown force driving a linear oscillator is revisited. When using linear measurement, feedback is often cited as a mechanism to enhance bandwidth or sensitivity. We show that, as long as the oscillator dynamics are known, there exists a real-time estimation strategy that reproduces the same measurement record as any arbitrary feedback protocol. Consequently, some form of nonlinearity is required to gain any advantage beyond estimation alone. This result holds true in both quantum and classical systems, with non-stationary forces and feedback, and in the general case of non-Gaussian and correlated noise. Recently, feedback-enhanced incoherent force sensing has been demonstrated [Nat. Nano. 7, 509 (2012)], with the enhancement attributed to a feedback-induced modification of the mechanical susceptibility. As a proof of principle, we experimentally reproduce this result through straightforward filtering. Comment: 5 pages + 2 pages of Supplementary Information
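    A minimal sketch of the underlying idea, assuming an idealized noise-free record, linear position feedback with a known gain g, and made-up oscillator parameters: because the closed-loop susceptibility is a known, invertible modification of the open-loop one, the open-loop record can be recovered from the closed-loop record by a simple frequency-domain filter, so the feedback itself adds no information.

    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative oscillator parameters (not from the experiment).
    omega0, gamma, m, g = 2 * np.pi * 1.0, 0.1, 1.0, 5.0   # resonance, damping, mass, feedback gain
    N, dt = 4096, 0.01
    omega = 2 * np.pi * np.fft.fftfreq(N, dt)

    chi = 1.0 / (m * (omega0**2 - omega**2 - 1j * gamma * omega))   # open-loop susceptibility
    chi_fb = chi / (1.0 + g * chi)                                  # closed-loop susceptibility

    # Unknown broadband force and the two ideal (noise-free) displacement spectra.
    F = np.fft.fft(rng.normal(size=N))
    x_open = chi * F
    x_fb = chi_fb * F

    # "Straightforward filtering": undo the known feedback modification.
    x_recovered = (1.0 + g * chi) * x_fb

    print("max reconstruction error:", np.max(np.abs(x_recovered - x_open)))  # numerically zero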

    Oversampling PCM techniques and optimum noise shapers for quantizing a class of nonbandlimited signals

    Get PDF
    We consider the efficient quantization of a class of nonbandlimited signals, namely, the class of discrete-time signals that can be recovered from their decimated version. The signals are modeled as the output of a single FIR interpolation filter (single-band model) or, more generally, as the sum of the outputs of L FIR interpolation filters (multiband model). These nonbandlimited signals are oversampled, and it is therefore reasonable to expect that we can reap the same benefits as well-known efficient A/D techniques that apply only to bandlimited signals. We first show that we can obtain a great reduction in the quantization noise variance due to the oversampled nature of the signals. We can achieve a substantial decrease in bit rate by appropriately decimating the signals and then quantizing them. To further increase the effective quantizer resolution, noise shaping is introduced by optimizing prefilters and postfilters around the quantizer. We start with a scalar time-invariant quantizer and study two important cases of linear time-invariant (LTI) filters, namely, the case where the postfilter is the inverse of the prefilter and the more general case where the postfilter is independent of the prefilter. Closed-form expressions for the optimum filters and the average minimum mean-square error are derived in each case for both the single-band and multiband models. The class of noise shaping filters and quantizers is then enlarged to include linear periodically time-varying (LPTV) filters and periodically time-varying quantizers, both of period M. We study two special cases in great detail.
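    A minimal sketch of the single-band observation, assuming a truncated-sinc interpolation filter (a Nyquist-M filter, so the signal equals the interpolation of its own decimated version) and a plain uniform quantizer; it is not the paper's optimized prefilter/postfilter design. Quantizing only the decimated samples uses 1/M as many samples at a comparable reconstruction error.

    import numpy as np

    rng = np.random.default_rng(2)

    M, L = 4, 8                              # oversampling factor, filter half-length in low-rate samples
    n = np.arange(-L * M, L * M + 1)
    h = np.sinc(n / M)                       # truncated-sinc interpolation filter; h[k*M] = delta[k]
    step = 0.05                              # uniform quantizer step

    def interpolate(c):
        """Zero-insert by M and filter with h (single-band model)."""
        up = np.zeros(len(c) * M)
        up[::M] = c
        return np.convolve(up, h, mode="same")

    def quant(v):
        """Uniform scalar quantizer with the given step."""
        return step * np.round(v / step)

    c = rng.normal(size=512)                 # low-rate driving sequence
    x = interpolate(c)                       # oversampled, nonbandlimited signal; x[::M] == c

    # (a) quantize every sample of the oversampled signal
    mse_full = np.mean((quant(x) - x) ** 2)

    # (b) quantize only the decimated version, then re-interpolate
    x_dec = interpolate(quant(x[::M]))
    mse_dec = np.mean((x_dec - x) ** 2)

    print(f"full-rate quantization : {len(x)} samples, MSE = {mse_full:.2e}")
    print(f"decimate then quantize : {len(x) // M} samples, MSE = {mse_dec:.2e}")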

    Mutual Information and Minimum Mean-Square Error in Gaussian Channels

    Full text link
    This paper deals with arbitrarily distributed finite-power input signals observed through an additive Gaussian noise channel. It shows a new formula that connects the input-output mutual information and the minimum mean-square error (MMSE) achievable by optimal estimation of the input given the output. That is, the derivative of the mutual information (in nats) with respect to the signal-to-noise ratio (SNR) is equal to half the MMSE, regardless of the input statistics. This relationship holds for both scalar and vector signals, as well as for discrete-time and continuous-time noncausal MMSE estimation. This fundamental information-theoretic result has an unexpected consequence in continuous-time nonlinear estimation: for any input signal with finite power, the causal filtering MMSE achieved at SNR is equal to the average value of the noncausal smoothing MMSE achieved with a channel whose signal-to-noise ratio is chosen uniformly distributed between 0 and SNR.
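    The derivative relation can be checked in closed form for a standard Gaussian input on the scalar channel, where I(SNR) = ½ log(1 + SNR) nats and MMSE(SNR) = 1/(1 + SNR). The short sketch below is a numerical check, not taken from the paper.

    import numpy as np

    snr = np.linspace(0.0, 10.0, 1001)
    I = 0.5 * np.log1p(snr)                 # mutual information in nats (Gaussian input)
    mmse = 1.0 / (1.0 + snr)                # minimum mean-square error

    dI_dsnr = np.gradient(I, snr)           # numerical derivative of I with respect to SNR
    print(np.max(np.abs(dI_dsnr - mmse / 2)))   # small; limited only by finite differencing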