
    Adaptive interpolation of discrete-time signals that can be modeled as autoregressive processes

    This paper presents an adaptive algorithm for the restoration of lost sample values in discrete-time signals that can locally be described by means of autoregressive processes. The only restrictions are that the positions of the unknown samples should be known and that they should be embedded in a sufficiently large neighborhood of known samples. The estimates of the unknown samples are obtained by minimizing the sum of squares of the residual errors that involve estimates of the autoregressive parameters. A statistical analysis shows that, for a burst of lost samples, the expected quadratic interpolation error per sample converges to the signal variance when the burst length tends to infinity. The method is in fact the first step of an iterative algorithm, in which each iteration step uses the current estimates of the missing samples to compute new estimates. Furthermore, the feasibility of implementation in hardware for real-time use is established. The method has been tested on artificially generated autoregressive processes as well as on digitized music and speech signals.
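    The core of the method, filling a gap by least-squares minimization of the autoregressive residual and iterating, can be sketched as below. This is a minimal illustration assuming a Yule-Walker estimate of the AR parameters and missing samples away from the signal borders; the function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_interpolate(x, missing, order=10, n_iter=3):
    """Restore missing samples of a locally autoregressive signal.
    Sketch: estimate AR parameters (Yule-Walker), then solve the normal
    equations for the unknown samples, and iterate a few times."""
    x = np.asarray(x, dtype=float).copy()
    missing = list(missing)
    known = sorted(set(range(len(x))) - set(missing))
    x[missing] = 0.0                                  # initial guess for the gap
    for _ in range(n_iter):
        # AR parameters from the current signal estimate (Yule-Walker).
        r = np.correlate(x, x, mode='full')[len(x) - 1:len(x) + order]
        a = np.concatenate(([1.0], -solve_toeplitz(r[:order], r[1:order + 1])))
        # b[j] = sum_k a[k] a[k+j]: autocorrelation of the prediction-error filter.
        b = np.correlate(a, a, mode='full')[order:]
        bval = lambda lag: b[abs(lag)] if abs(lag) <= order else 0.0
        # Normal equations of the residual-energy minimization, unknowns only.
        A = np.array([[bval(i - j) for j in missing] for i in missing])
        rhs = -np.array([sum(bval(i - n) * x[n] for n in known
                             if abs(i - n) <= order) for i in missing])
        x[missing] = np.linalg.solve(A, rhs)
    return x
```

    For instance, ar_interpolate(signal, range(500, 520)) would restore a 20-sample burst, provided enough known samples surround the gap.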

    Yield-Optimized Superoscillations

    Superoscillating signals are band-limited signals that oscillate in some region faster than their largest Fourier component. While such signals have many scientific and technological applications, their actual use is hampered by the fact that an overwhelming proportion of the energy goes into the part of the signal that is not superoscillating. In the present article we consider the problem of optimizing such signals. The optimization described here is that of the superoscillation yield, the ratio of the energy in the superoscillations to the total energy of the signal, given the range and frequency of the superoscillations. The constrained optimization leads to a generalized eigenvalue problem, which is solved numerically. It is noteworthy that the superoscillation yield can be increased further at the cost of slightly deforming the oscillatory part of the signal, while keeping the average frequency. We show how this can be done gradually, which enables a trade-off between the distortion and the yield. We show how to apply this approach to non-trivial domains, and explain how to generalize it to higher dimensions. Comment: 8 pages, 5 figures
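    As a rough illustration of how a constrained yield optimization becomes a generalized eigenvalue problem, the sketch below forces zero crossings spaced faster than the band limit inside a window and maximizes the ratio of in-window energy to total energy over the remaining degrees of freedom. All numerical choices (band limit, window, crossing spacing) are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.linalg import eigh, null_space

# Band-limited signal f(t) = sum_{n=-N}^{N} c_n exp(i n w0 t), period 2*pi/w0.
# Zero crossings inside [-T, T] are forced faster than the highest harmonic,
# then the yield (energy in [-T, T] / total energy) is maximized over the
# remaining degrees of freedom: a generalized eigenvalue problem.
N, w0 = 8, 1.0                           # band limit: harmonics -N..N of w0
T = 0.5                                  # half-width of the superoscillation window
t_zero = np.linspace(-T, T, 7)[1:-1]     # forced zeros, spacing < pi/(N*w0)
n = np.arange(-N, N + 1)

def window_int(k, T):                    # integral of exp(i k w0 t) over [-T, T]
    return 2 * T * np.sinc(k * w0 * T / np.pi)

A = np.array([[window_int(q - p, T) for q in n] for p in n])   # in-window energy form
B = (2 * np.pi / w0) * np.eye(len(n))                          # total-energy form

M = np.exp(1j * np.outer(t_zero, n) * w0)    # homogeneous constraints f(t_k) = 0
Nsp = null_space(M)                          # admissible coefficient subspace

# Generalized eigenvalue problem in the reduced space; largest eigenvalue = yield.
vals, vecs = eigh(Nsp.conj().T @ A @ Nsp, Nsp.conj().T @ B @ Nsp)
c = Nsp @ vecs[:, -1]                        # Fourier coefficients of the optimal signal
print("optimal yield:", vals[-1])
```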

    A Novel Adaptive Spectrum Noise Cancellation Approach for Enhancing Heartbeat Rate Monitoring in a Wearable Device

    This paper presents a novel approach, Adaptive Spectrum Noise Cancellation (ASNC), for motion artifact removal in photoplethysmography (PPG) signals measured by an optical biosensor, in order to obtain clean PPG waveforms for heartbeat rate calculation. One challenge faced by this optical sensing method is the inevitable noise induced by movement when the user is in motion, especially when the motion frequency is very close to the target heartbeat rate. The proposed ASNC utilizes the onboard accelerometer and gyroscope sensors to detect and remove the artifacts adaptively, thus obtaining accurate heartbeat rate measurements while in motion. The ASNC algorithm makes use of a commonly accepted spectrum analysis approach in medical digital signal processing, the discrete cosine transform, to carry out frequency-domain analysis. Results obtained by the proposed ASNC have been compared to two classic algorithms, adaptive threshold peak detection and adaptive noise cancellation. The mean (standard deviation) absolute error and mean relative error of the heartbeat rate calculated by ASNC are 0.33 (0.57) beats·min⁻¹ and 0.65%; by the adaptive threshold peak detection algorithm, 2.29 (2.21) beats·min⁻¹ and 8.38%; and by the adaptive noise cancellation algorithm, 1.70 (1.50) beats·min⁻¹ and 2.02%. While all algorithms performed well with both simulated PPG data and clean PPG data collected from our Verity device in situations free of motion artifacts, ASNC provided better accuracy when motion artifacts increased, especially when the motion frequency was very close to the heartbeat rate.
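    A much-simplified sketch of the spectrum-domain idea is given below: notch the DCT bins that dominate the accelerometer spectrum out of the PPG spectrum, then read the heart rate from the strongest remaining bin. The bin-selection rule, the notch width, and the function names are illustrative assumptions, not the paper's ASNC procedure.

```python
import numpy as np
from scipy.fft import dct, idct

def motion_notch_clean(ppg, accel, notch_bins=2):
    """Suppress PPG DCT bins dominated by motion, as indicated by the
    accelerometer spectrum. Assumes ppg and accel are the same length and
    sampled at the same rate."""
    P = dct(ppg - np.mean(ppg), norm='ortho')
    A = np.abs(dct(accel - np.mean(accel), norm='ortho'))
    motion = np.argsort(A)[-notch_bins:]         # strongest accelerometer bins
    for k in motion:
        P[max(k - 1, 0):min(k + 2, len(P))] = 0.0  # zero the bin and its neighbours
    return idct(P, norm='ortho')

def heart_rate_bpm(ppg, fs):
    """Heart rate from the dominant DCT bin of the (cleaned) PPG segment."""
    P = np.abs(dct(ppg - np.mean(ppg), norm='ortho'))
    k = np.argmax(P[1:]) + 1                     # skip the DC bin
    return 60.0 * k * fs / (2 * len(ppg))        # DCT-II bin k ~ k*fs/(2N) Hz
```

    For example, heart_rate_bpm(motion_notch_clean(ppg, accel_magnitude), fs) would give a per-window estimate from a windowed PPG segment and the corresponding accelerometer magnitude.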

    Signal Reconstruction via H-infinity Sampled-Data Control Theory: Beyond the Shannon Paradigm

    This paper presents a new method for signal reconstruction by leveraging sampled-data control theory. We formulate the signal reconstruction problem as an analog performance optimization problem using a stable discrete-time filter. The proposed H-infinity performance criterion naturally takes intersample behavior into account, reflecting the energy distribution of the signal. We present methods for computing optimal solutions which are guaranteed to be stable and causal. Detailed comparisons to alternative methods are provided. We discuss some applications in sound and image reconstruction.
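    In rough terms, such a formulation fixes a stable analog model F(s) of the signal generator and an allowable reconstruction delay, and designs the digital filter K(z) so that the worst-case analog (intersample) reconstruction error is minimized. A hedged sketch of the kind of sampled-data error system being minimized, with illustrative notation not taken from the paper, is:

```latex
% S_h = ideal sampler with period h, H_h = hold device, F(s) = analog signal
% model, e^{-Lhs} = allowed reconstruction delay of L steps (all illustrative).
\min_{K(z)\ \text{stable, causal}}\;
\left\| \, e^{-Lhs}\,F(s) \;-\; \mathcal{H}_h\, K(z)\, \mathcal{S}_h\, F(s) \, \right\|_{\infty}
```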

    Local Measurement and Reconstruction for Noisy Graph Signals

    The emerging field of signal processing on graphs plays an increasingly important role in processing signals and information related to networks. Existing works have shown that, under certain conditions, a smooth graph signal can be uniquely reconstructed from its decimation, i.e., data associated with a subset of vertices. However, in some potential applications (e.g., sensor networks with clustering structure), the obtained data may be a combination of signals associated with several vertices, rather than the decimation. In this paper, we propose a new concept of local measurement, which is a generalization of decimation. Using the local measurements, a local-set-based method named iterative local measurement reconstruction (ILMR) is proposed to reconstruct bandlimited graph signals. It is proved that ILMR can reconstruct the original signal perfectly under certain conditions. The performance of ILMR in the presence of noise is analyzed theoretically. The optimal choice of local weights and a greedy algorithm for local set partitioning are given in the sense of minimizing the expected reconstruction error. Compared with decimation, the proposed local measurement sampling and reconstruction scheme is more robust in noisy scenarios. Comment: 24 pages, 6 figures, 2 tables, journal manuscript
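    A toy sketch of such an iterative reconstruction loop (spread each local-measurement residual back over its local set, then re-project onto the bandlimited subspace) is shown below on a ring graph. The graph, the local sets, the uniform weights, and the iteration count are illustrative choices, not the paper's optimal ones.

```python
import numpy as np

n, K = 30, 5
# Ring-graph Laplacian: L = 2I - A for a cycle of n vertices.
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
L = 2 * np.eye(n) - A
_, U = np.linalg.eigh(L)
Uk = U[:, :K]                               # bandlimited subspace: K lowest graph frequencies

rng = np.random.default_rng(0)
x_true = Uk @ rng.standard_normal(K)        # a bandlimited graph signal
local_sets = np.array_split(np.arange(n), 8)             # disjoint local sets of vertices
w = [np.ones(len(S)) / len(S) for S in local_sets]       # uniform local weights
y = np.array([wi @ x_true[S] for S, wi in zip(local_sets, w)])   # local measurements

x = np.zeros(n)
for _ in range(300):
    r = y - np.array([wi @ x[S] for S, wi in zip(local_sets, w)])
    for S, wi, ri in zip(local_sets, w, r):
        x[S] += ri * wi / (wi @ wi)         # spread each residual back over its local set
    x = Uk @ (Uk.T @ x)                     # project back onto the bandlimited subspace
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```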

    Xampling: Signal Acquisition and Processing in Union of Subspaces

    We introduce Xampling, a unified framework for signal acquisition and processing of signals in a union of subspaces. The framework has two main functions: analog compression that narrows down the input bandwidth prior to sampling with commercial devices, and a nonlinear algorithm that detects the input subspace prior to conventional signal processing. A representative union model of spectrally-sparse signals serves as a test case to study these Xampling functions. We adopt three metrics for the choice of analog compression: robustness to model mismatch, required hardware accuracy, and software complexity. We conduct a comprehensive comparison between two sub-Nyquist acquisition strategies for spectrally-sparse signals, the random demodulator and the modulated wideband converter (MWC), in terms of these metrics, and draw operative conclusions regarding the choice of analog compression. We then address low-rate signal processing and develop an algorithm that enables convenient signal processing at sub-Nyquist rates from samples obtained by the MWC. We conclude by showing that a variety of other sampling approaches for different union classes fit nicely into our framework. Comment: 16 pages, 9 figures, submitted to IEEE for possible publication
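    As a rough sketch of the analog-compression front end of an MWC-style acquisition (periodic pseudorandom mixing, low-pass filtering, low-rate sampling), one might simulate it on a Nyquist-rate grid as below. The chip length, filter, signal, and channel count are illustrative, and the subspace-detection/recovery stage is omitted.

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs_nyq = 10_000                 # "Nyquist-rate" simulation grid, in Hz
t = np.arange(0, 0.2, 1 / fs_nyq)
# Sparse multiband input: two narrow bands at "unknown" carrier frequencies.
x = (np.sin(2 * np.pi * 3200 * t) * np.sinc(200 * (t - 0.1))
     + 0.7 * np.sin(2 * np.pi * 1100 * t) * np.sinc(150 * (t - 0.05)))

rng = np.random.default_rng(1)
num_channels, M = 4, 50         # M-chip pseudorandom mixing sequence per period
decim = 20                      # low-rate sampling at fs_nyq / decim
lpf = firwin(129, 1.0 / decim)  # low-pass filter applied before decimation
low_rate = []
for _ in range(num_channels):
    chips = rng.choice([-1.0, 1.0], size=M)
    p = chips[np.arange(len(t)) % M]     # periodic +/-1 mixing waveform
    y = lfilter(lpf, 1.0, x * p)         # mix, then low-pass
    low_rate.append(y[::decim])          # sample at the low rate
low_rate = np.array(low_rate)
print("samples per channel:", low_rate.shape[1], "vs Nyquist-rate samples:", len(t))
```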