Generalizations of the sampling theorem: Seven decades after Nyquist
The sampling theorem is one of the most basic and fascinating topics in the engineering sciences. The best-known form is Shannon's uniform-sampling theorem for bandlimited signals. Extensions of this to bandpass and multiband signals, and to nonuniform sampling, are also well known. The connection between such extensions and the theory of filter banks in DSP has been well established. This paper presents some of the less-known aspects of sampling, with special emphasis on non-bandlimited signals, pointwise stability of reconstruction, and reconstruction from nonuniform samples. Applications in multiresolution computation and in digital spline interpolation are also reviewed.
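As background for the uniform-sampling theorem this abstract starts from, here is a minimal pure-Python sketch of Shannon reconstruction by truncated sinc interpolation. The signal, sample rate, and truncation width are illustrative choices, not taken from the paper.

```python
import math

def sinc(x):
    # Normalized sinc: sin(pi*x) / (pi*x), with sinc(0) = 1.
    if x == 0.0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, T, t, half_width=200):
    # Shannon reconstruction: x(t) = sum_n x[n] * sinc((t - n*T) / T),
    # truncated to samples within +/- half_width of t/T.
    n0 = int(round(t / T))
    lo = max(0, n0 - half_width)
    hi = min(len(samples), n0 + half_width + 1)
    return sum(samples[n] * sinc((t - n * T) / T) for n in range(lo, hi))

# A 7 Hz sine sampled at 100 Hz, i.e. well above the 14 Hz Nyquist rate.
fs, f = 100.0, 7.0
T = 1.0 / fs
x = [math.sin(2 * math.pi * f * n * T) for n in range(1500)]
# Evaluate the signal halfway between two sample instants.
t = 7.505
approx = reconstruct(x, T, t)
exact = math.sin(2 * math.pi * f * t)
# approx agrees with exact up to the sinc-truncation error.
```

Because the sinc kernel decays only like 1/n, the truncated sum converges slowly; practical interpolators replace it with windowed or spline kernels, which connects to the spline-interpolation applications the paper reviews.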
Review of Unbiased FIR Filters, Smoothers, and Predictors for Polynomial Signals
Extracting an estimate of a slowly varying signal corrupted by noise is a common task. Examples can be found in industrial, scientific, and biomedical instrumentation. Depending on the nature of the application, the signal estimate is allowed to be a delayed estimate of the original signal or, at the other extreme, no delay is tolerated. These cases are commonly referred to as filtering, prediction, and smoothing, depending on the amount of advance or lag between the input data set and the output data set. In this review paper we provide a comprehensive set of design and analysis tools for designing unbiased FIR filters, predictors, and smoothers for slowly varying signals, i.e., signals that can be modeled by low-order polynomials. Explicit expressions for the parameters needed in practical implementations are given. Real-life examples are provided, including cases where the method is extended to signals that are piecewise slowly varying. A critical view on recursive implementations of the algorithms is provided.
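To illustrate the idea of an unbiased FIR filter for a polynomial signal model, here is a sketch for the degree-one (ramp) case with zero lag. The window length N = 8 and the closed-form gain expression are illustrative assumptions, not taken from the paper; the gains are constructed to satisfy the two unbiasedness (moment) constraints, which can be checked directly.

```python
def uf_filter_gains(N):
    # Candidate unbiased FIR filter gains for a degree-1 (ramp) polynomial
    # model, estimating the signal at the most recent sample (zero lag):
    #   h[k] = 2*(2N - 1 - 3k) / (N*(N + 1)),  k = 0 .. N-1  (k = 0 is newest).
    # These satisfy the unbiasedness constraints
    #   sum_k h[k] = 1  and  sum_k k*h[k] = 0,
    # so a noiseless ramp passes through the filter without bias or delay.
    return [2.0 * (2 * N - 1 - 3 * k) / (N * (N + 1)) for k in range(N)]

def apply_fir(h, x):
    # y[n] = sum_k h[k] * x[n - k], emitted once the window is full.
    N = len(h)
    return [sum(h[k] * x[n - k] for k in range(N)) for n in range(N - 1, len(x))]

h = uf_filter_gains(8)
ramp = [0.5 + 0.25 * n for n in range(20)]   # slowly varying: degree-1 polynomial
y = apply_fir(h, ramp)
# For a noiseless ramp, y[i] reproduces ramp[i + 7] exactly (no bias, no lag).
```

With noisy input the same gains average out zero-mean noise while leaving the polynomial component untouched; higher polynomial degrees add further moment constraints on the gains.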
Efficient Fast-Convolution-Based Waveform Processing for 5G Physical Layer
This paper investigates the application of fast-convolution (FC) filtering schemes for flexible and effective waveform generation and processing in fifth-generation (5G) systems. FC-based filtering is presented as a generic multimode waveform processing engine while, following the progress of 5G new radio standardization in the Third-Generation Partnership Project, the main focus is on efficient generation and processing of subband-filtered cyclic prefix orthogonal frequency-division multiplexing (CP-OFDM) signals. First, a matrix model for analyzing FC filter processing responses is presented and used for designing optimized multiplexing of filtered groups of CP-OFDM physical resource blocks (PRBs) in a spectrally well-localized manner, i.e., with narrow guardbands. Subband filtering is able to suppress interference leakage between adjacent subbands, thus supporting independent waveform parametrization and different numerologies for different groups of PRBs, as well as asynchronous multiuser operation in the uplink. These are central ingredients in the 5G waveform developments, particularly at sub-6-GHz bands. The FC filter optimization criterion is passband error vector magnitude minimization subject to a given subband band-limitation constraint. Optimized designs with different guardband widths, PRB group sizes, and essential design parameters are compared in terms of interference levels and implementation complexity. Finally, extensive coded 5G radio link simulation results are presented to compare the proposed approach with other subband-filtered CP-OFDM schemes and time-domain windowing methods, considering cases with different numerologies or asynchronous transmissions in adjacent subbands. The feasibility of using independent transmitter and receiver processing for CP-OFDM spectrum control is also demonstrated.
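The paper's FC engine is far more elaborate (overlapping blocks, per-subband frequency-domain filtering of CP-OFDM), but the underlying primitive is block-wise fast convolution. A minimal pure-Python overlap-save sketch, with a naive DFT standing in for the FFT (block size, filter, and test signal are illustrative assumptions):

```python
import cmath, math

def dft(x):
    # Naive O(N^2) discrete Fourier transform (stand-in for an FFT).
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def overlap_save(x, h, B=8):
    # Fast convolution by overlap-save: each length-B batch of output samples
    # comes from filtering a length-(B + M - 1) input segment in the frequency
    # domain and discarding the M - 1 circular-wraparound samples at the front.
    M = len(h)
    L = B + M - 1                        # transform length
    H = dft(list(h) + [0.0] * (L - M))
    padded = [0.0] * (M - 1) + list(x)   # zeros ahead of the first block
    y = []
    for start in range(0, len(x), B):
        seg = padded[start:start + L]
        seg += [0.0] * (L - len(seg))    # zero-pad a short final segment
        Y = [a * b for a, b in zip(dft(seg), H)]
        y.extend(v.real for v in idft(Y)[M - 1:])
    return y[:len(x)]

x = [math.sin(0.1 * n) for n in range(40)]
h = [0.25, 0.25, 0.25, 0.25]             # 4-tap moving average
y_fc = overlap_save(x, h)
# Direct time-domain convolution for comparison:
y_direct = [sum(h[k] * x[n - k] for k in range(len(h)) if n >= k)
            for n in range(len(x))]
```

The block-wise frequency-domain product reproduces direct linear convolution exactly; FC waveform processors exploit the same structure while additionally shaping each subband's response between the forward and inverse transforms.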
Sampling Sparse Signals on the Sphere: Algorithms and Applications
We propose a sampling scheme that can perfectly reconstruct a collection of spikes on the sphere from samples of their lowpass-filtered observations. Central to our algorithm is a generalization of the annihilating filter method, a tool widely used in array signal processing and finite-rate-of-innovation (FRI) sampling. The proposed algorithm can reconstruct spikes from spatial samples. This sampling requirement improves over previously known FRI sampling schemes on the sphere by a factor of four for large . We showcase the versatility of the proposed algorithm by applying it to three different problems: 1) sampling diffusion processes induced by localized sources on the sphere, 2) shot noise removal, and 3) sound source localization (SSL) by a spherical microphone array. In particular, we show how SSL can be reformulated as a spherical sparse sampling problem.
Comment: 14 pages, 8 figures, submitted to IEEE Transactions on Signal Processing
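The spherical machinery is beyond a short sketch, but the annihilating filter at the core of FRI methods is easy to show in 1-D. Here a signal made of K = 2 real exponential modes is annihilated by a filter whose roots reveal the mode locations (the two-mode setup and the specific values are illustrative assumptions, not from the paper):

```python
import math

def annihilate_two(x):
    # Annihilating-filter step for K = 2 real modes: find (a1, a2) such that
    #   x[n] + a1*x[n-1] + a2*x[n-2] = 0   for all n >= 2,
    # from the two equations at n = 2, 3, then factor z^2 + a1*z + a2 to
    # recover the mode locations u1, u2. This is the 1-D analogue of the
    # annihilating-filter method used in FRI sampling.
    # Solve the 2x2 system by Cramer's rule:
    #   [x[1] x[0]] [a1]   [-x[2]]
    #   [x[2] x[1]] [a2] = [-x[3]]
    det = x[1] * x[1] - x[0] * x[2]
    a1 = (-x[2] * x[1] + x[0] * x[3]) / det
    a2 = (-x[3] * x[1] + x[2] * x[2]) / det
    # Roots of z^2 + a1*z + a2 (assumed real and distinct here):
    disc = math.sqrt(a1 * a1 - 4.0 * a2)
    return sorted([(-a1 + disc) / 2.0, (-a1 - disc) / 2.0])

u1, u2, c1, c2 = 0.9, 0.5, 2.0, -1.0
x = [c1 * u1 ** n + c2 * u2 ** n for n in range(4)]
roots = annihilate_two(x)   # recovers [0.5, 0.9] up to rounding
```

Four samples suffice for two modes because each mode contributes two unknowns (location and amplitude); the spherical generalization in the paper plays the same game with spherical-harmonic coefficients.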
Two-dimensional block processors - structures and implementations
A two-dimensional (2-D) block processing technique for linear filtering of digital images is introduced. New 2-D block structures are derived for 2-D recursive digital filters realized by difference equations and state-space formulations. Several special cases are also considered, and the relevant 2-D block structures are given. The computational costs of different implementation techniques employing high-speed convolution algorithms, such as the fast Fourier transform, the number-theoretic transform, and the polynomial transform, are studied. A comparison among the relative efficiencies of these implementation schemes is made, and a suitable method using a short-convolution algorithm is then proposed, which minimizes computation time.
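A minimal illustration of 2-D block processing for the FIR (non-recursive) case: the image is partitioned into blocks, each block is convolved independently, and the overlapping edge responses are added back in (overlap-add). The block size, image, and kernel are illustrative assumptions; the paper's recursive and transform-based structures are more involved.

```python
def conv2d(x, h):
    # Direct 2-D linear convolution (full output).
    P, Q = len(x), len(x[0])
    M, N = len(h), len(h[0])
    y = [[0.0] * (Q + N - 1) for _ in range(P + M - 1)]
    for i in range(P):
        for j in range(Q):
            for m in range(M):
                for n in range(N):
                    y[i + m][j + n] += x[i][j] * h[m][n]
    return y

def conv2d_blocks(x, h, B=4):
    # 2-D block processing by overlap-add: partition the image into B x B
    # blocks, convolve each block with h independently, and accumulate the
    # overlapping edge responses into the full output array.
    P, Q = len(x), len(x[0])
    M, N = len(h), len(h[0])
    y = [[0.0] * (Q + N - 1) for _ in range(P + M - 1)]
    for bi in range(0, P, B):
        for bj in range(0, Q, B):
            block = [row[bj:bj + B] for row in x[bi:bi + B]]
            yb = conv2d(block, h)
            for i, row in enumerate(yb):
                for j, v in enumerate(row):
                    y[bi + i][bj + j] += v
    return y

x = [[(i * 7 + j * 3) % 5 - 2.0 for j in range(6)] for i in range(6)]
h = [[1.0, -1.0, 0.5], [0.25, 0.0, -0.5]]
full = conv2d(x, h)
blocked = conv2d_blocks(x, h, B=4)   # identical to the full convolution
```

In a real implementation each per-block convolution would be done with a fast transform (FFT, number-theoretic, or polynomial transform), which is where the cost comparisons in the paper come from.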
On adaptive filter structure and performance
SIGLE. Available from the British Library Document Supply Centre (BLDSC), DSC:D75686/87, United Kingdom.
A survey on tidal analysis and forecasting methods for Tsunami detection
Accurate analysis and forecasting of tidal level are very important tasks for human activities in oceanic and coastal areas. They can be crucial in catastrophic situations like occurrences of tsunamis, in order to provide rapid alerts to the populations involved and to save lives. Conventional tidal forecasting methods are based on harmonic analysis, using the least squares method to determine harmonic parameters. However, a large number of parameters and long-term measured data are required for precise tidal level predictions with harmonic analysis. Furthermore, traditional harmonic methods rely on models based on the analysis of astronomical components, and they can be inadequate when the contribution of non-astronomical components, such as the weather, is significant. Other alternative approaches have been developed in the literature in order to deal with these situations and provide predictions with the desired accuracy, also with respect to the length of the available tidal record. These methods include standard high-pass or band-pass filtering techniques, although the relatively deterministic character and large amplitude of tidal signals make special techniques, like artificial neural networks and wavelet transform analysis methods, more effective. This paper is intended to provide the communities of both researchers and practitioners with a broadly applicable, up-to-date coverage of tidal analysis and forecasting methodologies that have proven to be successful in a variety of circumstances, and that hold particular promise for success in the future. Classical and novel methods are reviewed in a systematic and consistent way, outlining their main concepts and components, similarities and differences, advantages and disadvantages.
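To make the harmonic-analysis baseline concrete, here is a simplified least-squares sketch. Real tidal analysis fits astronomical constituent frequencies that are not aligned with the sample grid and solves a full least-squares system; this sketch assumes grid-aligned frequencies (2*pi*k/N), for which the cosine/sine basis is orthogonal and the least-squares amplitudes reduce to simple projections. The synthetic two-constituent record is an illustrative assumption.

```python
import math

def fit_harmonics(y, ks):
    # Least-squares harmonic analysis for N evenly spaced tide readings.
    # For frequencies 2*pi*k/N the basis is orthogonal on the sample grid,
    # so the least-squares amplitudes are the projections
    #   A_k = (2/N) * sum_n y[n] * cos(2*pi*k*n/N)
    #   B_k = (2/N) * sum_n y[n] * sin(2*pi*k*n/N)
    N = len(y)
    out = {}
    for k in ks:
        A = 2.0 / N * sum(y[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
        B = 2.0 / N * sum(y[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
        out[k] = (A, B)
    return out

# Synthetic "tide": a mean level plus two harmonic constituents.
N = 240
y = [1.5 + 0.8 * math.cos(2 * math.pi * 3 * n / N)
         + 0.3 * math.sin(2 * math.pi * 7 * n / N) for n in range(N)]
coeffs = fit_harmonics(y, [3, 7])   # recovers (0.8, 0.0) at k=3, (0.0, 0.3) at k=7
```

The limitations the survey discusses show up immediately in this model: each constituent adds two parameters, off-grid astronomical frequencies break the orthogonality shortcut, and weather-driven residuals are not captured by any finite set of harmonics, motivating the neural-network and wavelet alternatives.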