
    A new kernel-based approach to system identification with quantized output data

    In this paper we introduce a novel method for linear system identification with quantized output data. We model the impulse response as a zero-mean Gaussian process whose covariance (kernel) is given by the recently proposed stable spline kernel, which encodes information on regularity and exponential stability. This serves as a starting point for casting our system identification problem into a Bayesian framework. We employ Markov Chain Monte Carlo methods to provide an estimate of the system. In particular, we design two methods based on the so-called Gibbs sampler that also allow the kernel hyperparameters to be estimated by marginal likelihood maximization via the expectation-maximization method. Numerical simulations show the effectiveness of the proposed scheme compared to state-of-the-art kernel-based methods when these are employed for system identification with quantized data.
    Comment: 10 pages, 4 figures
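
    A minimal sketch, assuming the standard first-order stable spline kernel K(s, t) = lam * alpha**max(s, t) with 0 < alpha < 1 (all names below are illustrative, not the authors' code): the impulse response is drawn from the zero-mean Gaussian process prior described above, whose samples are smooth and decay exponentially.

    import numpy as np

    def stable_spline_kernel(n, alpha=0.85, lam=1.0):
        # First-order stable spline Gram matrix on lags 1..n:
        # K(s, t) = lam * alpha**max(s, t), with 0 < alpha < 1.
        t = np.arange(1, n + 1)
        return lam * alpha ** np.maximum.outer(t, t)

    rng = np.random.default_rng(0)
    K = stable_spline_kernel(50)
    # Impulse responses sampled from the prior decay exponentially in the lag.
    g = rng.multivariate_normal(np.zeros(50), K, size=3)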

    Bayesian kernel-based system identification with quantized output data

    In this paper we introduce a novel method for linear system identification with quantized output data. We model the impulse response as a zero-mean Gaussian process whose covariance (kernel) is given by the recently proposed stable spline kernel, which encodes information on regularity and exponential stability. This serves as a starting point for casting our system identification problem into a Bayesian framework. We employ Markov Chain Monte Carlo (MCMC) methods to provide an estimate of the system. In particular, we show how to design a Gibbs sampler that quickly converges to the target distribution. Numerical simulations show a substantial improvement in estimation accuracy over state-of-the-art kernel-based methods when these are employed to identify systems with quantized data.
    Comment: Submitted to IFAC SysId 201
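
    The data-augmentation step behind such a Gibbs sampler can be sketched as follows; this is a hedged illustration for a binary quantizer y = sign(z) with z = Phi @ g + e, not the paper's exact sampler. Latent unquantized outputs z are drawn from truncated normals consistent with the observed quantized data, and the impulse response g is then drawn from its Gaussian conditional under the stable spline prior K.

    import numpy as np
    from scipy.stats import truncnorm

    def gibbs_quantized(y, Phi, K, s2=0.1, iters=500, seed=0):
        # y in {-1, +1}: quantized outputs; Phi: regression matrix built
        # from the input; K: prior covariance of the impulse response g.
        rng = np.random.default_rng(seed)
        g = np.zeros(K.shape[0])
        Kinv = np.linalg.inv(K)
        sd = np.sqrt(s2)
        samples = []
        for _ in range(iters):
            m = Phi @ g
            # z | g, y: normal truncated to (0, inf) if y = +1, (-inf, 0) if y = -1.
            a = np.where(y > 0, -m / sd, -np.inf)
            b = np.where(y > 0, np.inf, -m / sd)
            z = truncnorm.rvs(a, b, loc=m, scale=sd, random_state=rng)
            # g | z: Gaussian conditional of the linear model under the prior N(0, K).
            P = np.linalg.inv(Phi.T @ Phi / s2 + Kinv)
            g = rng.multivariate_normal(P @ (Phi.T @ z) / s2, P)
            samples.append(g)
        # Posterior-mean estimate from the second half of the chain.
        return np.mean(samples[iters // 2:], axis=0)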

    Identification of Parametric Underspread Linear Systems and Super-Resolution Radar

    Identification of time-varying linear systems, which introduce both time-shifts (delays) and frequency-shifts (Doppler shifts), is a central task in many engineering applications. This paper studies the problem of identifying underspread linear systems (ULSs), whose responses lie within a unit-area region in the delay-Doppler space, by probing them with a known input signal. It is shown that sufficiently underspread parametric linear systems, described by a finite set of delays and Doppler shifts, are identifiable from a single observation as long as the time-bandwidth product of the input signal is proportional to the square of the total number of delay-Doppler pairs in the system. In addition, an algorithm is developed that enables identification of parametric ULSs from an input train of pulses in polynomial time by exploiting recent results on sub-Nyquist sampling for time-delay estimation and classical results on recovery of frequencies from a sum of complex exponentials. Finally, the application of these results to super-resolution target detection using radar is discussed. Specifically, it is shown that the proposed procedure can distinguish between multiple targets in very close proximity in the delay-Doppler space, resulting in a resolution that substantially exceeds that of standard matched-filtering-based techniques without introducing the leakage effects inherent in recently proposed compressed-sensing-based radar methods.
    Comment: Revised version of a journal paper submitted to IEEE Trans. Signal Processing: 30 pages, 17 figures
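
    The classical "recovery of frequencies from a sum of complex exponentials" step can be illustrated with the matrix pencil method; this is a sketch under noiseless assumptions with an illustrative pencil parameter, not the paper's identification algorithm.

    import numpy as np

    def matrix_pencil_freqs(x, K, L=None):
        # Estimate K modes z_k from samples x[n] = sum_k c_k * z_k**n.
        N = len(x)
        if L is None:
            L = N // 2  # pencil parameter
        # Hankel data matrices shifted against each other by one sample.
        Y = np.array([x[i:i + L + 1] for i in range(N - L)])
        Y1, Y2 = Y[:, :-1], Y[:, 1:]
        # The K largest eigenvalues of pinv(Y1) @ Y2 are the modes z_k.
        vals = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
        modes = vals[np.argsort(-np.abs(vals))][:K]
        return np.angle(modes) / (2 * np.pi)  # normalized frequencies

    # Two closely spaced tones are resolved from 64 noiseless samples.
    n = np.arange(64)
    x = np.exp(2j * np.pi * 0.20 * n) + np.exp(2j * np.pi * 0.22 * n)
    print(np.sort(matrix_pencil_freqs(x, K=2)))  # ~[0.20, 0.22]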

    Throughput-Distortion Computation Of Generic Matrix Multiplication: Toward A Computation Channel For Digital Signal Processing Systems

    The generic matrix multiply (GEMM) function is the core element of high-performance linear algebra libraries used in many computationally demanding digital signal processing (DSP) systems. We propose an acceleration technique for GEMM based on dynamically adjusting the imprecision (distortion) of the computation. Our technique applies adaptive scalar companding and rounding to input matrix blocks, followed by two forms of floating-point packing that allow multiple results to be computed concurrently. Since the adaptive companding process controls the increase in concurrency (via packing), the increase in processing throughput (and the corresponding increase in distortion) depends on the input data statistics. To demonstrate this, we derive the optimal throughput-distortion control framework for GEMM for the broad class of zero-mean, independent and identically distributed input sources. Our approach converts matrix multiplication in programmable processors into a computation channel: as processing throughput increases, the output noise (error) increases due to (i) coarser quantization and (ii) computational errors caused by exceeding machine-precision limitations. We show that, under certain distortion in the GEMM computation, the proposed framework can significantly surpass 100% of the peak performance of a given processor. The practical benefits of our proposal are demonstrated in a face recognition system and a multi-layer perceptron trained for metadata learning from a large music feature database.
    Comment: IEEE Transactions on Signal Processing (vol. 60, 2012)
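
    The floating-point packing idea can be illustrated with a toy dot product; this is a sketch under stated assumptions, not the paper's GEMM kernel. Two small-integer operand streams share one double, so a single accumulation pass yields two exact results as long as both partial sums stay within the 53-bit double mantissa; coarser-quantized (larger) operands would push errors into the low lane, which is the throughput-distortion trade-off described above.

    import numpy as np

    SHIFT = 2.0 ** 26  # separates the two packed "lanes" inside one double

    def packed_dot(a1, a2, b):
        # Compute (a1 @ b, a2 @ b) with a single pass over packed doubles.
        # Assumes integer entries small enough that a1 @ b * SHIFT + a2 @ b
        # is exactly representable in the 53-bit double mantissa.
        packed = a1 * SHIFT + a2    # pack two operand streams
        acc = np.dot(packed, b)     # one multiply-accumulate pass
        hi = np.round(acc / SHIFT)  # unpack lane 1: a1 @ b
        lo = acc - hi * SHIFT       # unpack lane 2: a2 @ b
        return hi, lo

    rng = np.random.default_rng(1)
    a1, a2, b = (rng.integers(-15, 16, 64).astype(float) for _ in range(3))
    assert packed_dot(a1, a2, b) == (a1 @ b, a2 @ b)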

    Hierarchically Clustered Adaptive Quantization CMAC and Its Learning Convergence

    No abstract available.