Sampling from a system-theoretic viewpoint: Part I - Concepts and tools
This paper is the first in a series studying a system-theoretic approach to the problem of reconstructing an analog signal from its samples. The idea, borrowed from earlier treatments in the control literature, is to address the problem as a hybrid model-matching problem in which performance is measured by system norms. In this paper we present the paradigm and review the underlying technical tools, such as the lifting technique and some topics from operator theory. This material facilitates a systematic and unified treatment of a wide range of sampling and reconstruction problems, recovering many solutions hitherto considered distinct and leading to new results. Some of these applications are discussed in the second part.
Novel Digital Alias-Free Signal Processing Approaches to FIR Filtering Estimation
This thesis aims at developing a new methodology for filtering continuous-time bandlimited signals and piecewise-continuous signals from their discrete-time samples. Unlike existing state-of-the-art filters, my filters are not adversely affected by aliasing, allowing designers to flexibly select the sampling rate of the processed signal to reach the required filtering accuracy, rather than to meet the strict and often demanding constraints imposed by the classical theory of digital signal processing (DSP). The impact of this thesis is cost reduction of alias-free sampling, filtering and other digital processing blocks, particularly when the processed signals have sparse and unknown spectral support.
Novel approaches are proposed which can mitigate the negative effects of aliasing, thanks to the use of nonuniform random/pseudorandom sampling and processing algorithms. As such, the proposed approaches belong to the family of digital alias-free signal processing (DASP). Namely, three main approaches are considered: total random (ToRa), stratified (StSa) and antithetical stratified (AnSt) random sampling techniques.
First, I introduce a finite impulse response (FIR) filter estimator for each of the three considered techniques. In addition, a generalised estimator that encompasses the three filter estimators is proposed. Then, the statistical properties of all estimators are investigated to assess their quality. Properties such as expected value, bias, variance, convergence rate, and consistency are all inspected. Moreover, a closed-form mathematical expression is devised for the variance of each estimator.
Furthermore, quality assessment of the proposed estimators is examined in two main cases related to the smoothness of the filter convolution's integrand function, g(t,τ) := x(τ)h(t−τ), and its first two derivatives. The first main case covers continuous and differentiable functions g(t,τ), g′(t,τ) and g′′(t,τ). In the second main case, I cover all possible instances where some or all of these functions are piecewise-continuous, with a finite number of bounded discontinuities.
The obtained results prove that all considered filter estimators are unbiased and consistent. Hence, the variances of the estimators converge to zero as the number of sample points grows. However, the convergence rate depends on the selected estimator and on which smoothness case is considered.
In the first case (i.e. continuous g(t,τ) and its derivatives), the ToRa, StSa and AnSt filter estimators converge uniformly at rates of N⁻¹, N⁻³ and N⁻⁵ respectively, where 2N is the total number of sample points. More interestingly, in the second main case the convergence rates of the StSa and AnSt estimators are maintained even if there are discontinuities in the first-order derivative (FOD) of g(t,τ) with respect to τ (for the StSa estimator) or in the second-order derivative (SOD) of g(t,τ) with respect to τ (for AnSt). These rates drop to N⁻² and N⁻⁴ (for StSa and AnSt, respectively) if the zero-order derivative (ZOD) (for StSa) or the FOD (for AnSt) is piecewise-continuous. Finally, if the ZOD of g(t,τ) is piecewise-continuous, then the uniform convergence rate of the AnSt estimator further drops to N⁻².
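The three sampling schemes can be illustrated with a small numerical experiment. The sketch below is my own illustration (estimating a simple integral over [0, 1), not the thesis's filter estimators): it draws 2N points by total random, stratified and antithetical stratified sampling and empirically exhibits the variance ordering that the quoted rates imply.

```python
import numpy as np

def tora(g, N, rng):
    # total random (ToRa): 2N i.i.d. uniform points on [0, 1)
    t = rng.random(2 * N)
    return np.mean(g(t))

def stsa(g, N, rng):
    # stratified (StSa): 2N strata of width 1/(2N), one uniform point each
    k = np.arange(2 * N)
    t = (k + rng.random(2 * N)) / (2 * N)
    return np.mean(g(t))

def anst(g, N, rng):
    # antithetical stratified (AnSt): N strata of width 1/N, each holding a
    # uniform point u and its mirror image 1 - u within the stratum
    k = np.arange(N)
    u = rng.random(N)
    t = np.concatenate([(k + u) / N, (k + 1.0 - u) / N])
    return np.mean(g(t))

rng = np.random.default_rng(0)
g = lambda t: np.sin(2.0 * np.pi * t) + 2.0   # integral over [0, 1) is 2
for est in (tora, stsa, anst):
    vals = [est(g, 16, rng) for _ in range(2000)]
    print(est.__name__, np.mean(vals), np.var(vals))
```

For a smooth integrand the empirical variances drop sharply from ToRa to StSa to AnSt, consistent with the N⁻¹, N⁻³ and N⁻⁵ rates quoted above.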
For practical reasons, I also introduce the use of the three estimators in a special situation where the input signal is pseudorandomly sampled from an otherwise uniform and dense grid. An FIR filter model with an oversampled finite-duration impulse response, time-aligned with the grid, is proposed; it is meant to be stored in a lookup table in the implemented filter's memory to save processing time. A synchronised convolution sum operation is then conducted to estimate the filter output.
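A toy sketch of this lookup-table arrangement follows (my own construction; the grid size, impulse response and input signal are invented for illustration): the impulse response is precomputed on a dense uniform grid, and the output at a grid-aligned instant is estimated by a convolution sum over a pseudorandomly selected subset of grid samples.

```python
import numpy as np

rng = np.random.default_rng(3)
M, T = 4096, 1.0                            # dense uniform grid over [0, T)
tau = np.arange(M) * T / M
h_lut = np.where(tau < 0.25, 4.0, 0.0)      # impulse response stored as a lookup table
x = np.sin(2.0 * np.pi * tau)               # input signal on the grid

n = M // 2                                  # estimate the output at t = 0.5 (grid-aligned)
k = rng.choice(M, size=512, replace=False)  # pseudorandomly selected sample times
idx = n - k                                 # LUT index for h(t - tau_k)
h_vals = np.where((idx >= 0) & (idx < M), h_lut[np.clip(idx, 0, M - 1)], 0.0)
y_est = (T / k.size) * np.sum(x[k] * h_vals)   # synchronised convolution sum

# full-grid reference sum for comparison
idx_all = n - np.arange(M)
h_all = np.where((idx_all >= 0) & (idx_all < M), h_lut[np.clip(idx_all, 0, M - 1)], 0.0)
y_full = (T / M) * np.sum(x * h_all)
print(y_est, y_full)
```

Only a fraction of the grid samples is touched, yet the estimate stays close to the full-grid convolution sum.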
Finally, a new unequally spaced Lagrange interpolation-based rule is proposed. The so-called composite 3-nonuniform-sample (C3NS) rule is employed to estimate the area under the curve (AUC) of an integrand function, rather than the simple rectangular rule. I then compare the convergence rates of estimators based on the two interpolation rules. The proposed C3NS estimator outperforms the rectangular-rule estimators at the expense of higher computational complexity. This extra cost could be justifiable for specific applications where more accurate estimation is required.
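The idea behind such a composite 3-point rule can be sketched generically (quadratic Lagrange interpolation over nonuniform nodes; the actual C3NS weights are derived in the thesis): each panel's three nonuniformly placed samples define a quadratic interpolant, which is integrated exactly, and the per-panel results are summed.

```python
import numpy as np

def panel_quad(x, y, lo, hi):
    # integrate the quadratic Lagrange interpolant through 3 samples over [lo, hi]
    c = np.polyfit(x, y, 2)          # exact fit: 3 points, degree 2
    C = np.polyint(c)                # antiderivative coefficients
    return np.polyval(C, hi) - np.polyval(C, lo)

def composite3(f, a, b, panels, rng):
    # 3 nonuniform (randomly jittered) samples per panel
    edges = np.linspace(a, b, panels + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        x = lo + (hi - lo) * (np.arange(3) + rng.random(3)) / 3.0
        total += panel_quad(x, f(x), lo, hi)
    return total

def rect_rule(f, a, b, n, rng):
    # rectangular rule on one random sample per stratum, for comparison
    edges = np.linspace(a, b, n + 1)
    x = edges[:-1] + np.diff(edges) * rng.random(n)
    return float(np.sum(f(x) * np.diff(edges)))

rng = np.random.default_rng(0)
print(composite3(np.sin, 0.0, np.pi, 20, rng))   # true value is 2
print(rect_rule(np.sin, 0.0, np.pi, 60, rng))
```

With the same total number of samples, the quadratic rule's error is typically orders of magnitude below the rectangular rule's, mirroring the convergence-rate gap discussed above.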
Advancements of Multirate Signal Processing for Wireless Communication Networks: Current State of the Art
With the rapid growth of Internet access and of voice- and data-centric communications, many access technologies have been developed to meet the stringent demands of high-speed data transmission and to bridge the wide bandwidth gap between the ever-increasing high-data-rate core network and bandwidth-hungry end-user equipment. To make efficient use of the limited bandwidth of available access routes and to cope with difficult channel environments, several standards have been proposed for a variety of broadband access schemes over different access media (twisted pairs, coaxial cables, optical fibers, and fixed or mobile wireless access). These access media may introduce different channel impairments and dictate unique sets of signal processing algorithms and techniques to combat specific impairments. In the design and implementation of these systems, many research issues arise. In this paper we present advancements of multirate signal processing methodologies that are motivated by this design trend. The paper covers a contemporary review of the current literature on interference suppression using multirate signal processing in wireless communication networks.
Geometric approach to sampling and communication
Relationships that exist between the classical, Shannon-type, and
geometric-based approaches to sampling are investigated. Some aspects of coding
and communication through a Gaussian channel are considered. In particular, a
constructive method to determine the quantizing dimension in Zador's theorem is
provided. A geometric version of Shannon's Second Theorem is introduced.
Applications to Pulse Code Modulation and Vector Quantization of Images are
addressed.
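As a minimal illustration of vector quantization (a generic Lloyd/k-means sketch of my own, not the paper's geometric construction), a codebook of k centroids is fitted to 2-D points; the quantization distortion falls as the codebook grows, in line with Zador-type rate-distortion behaviour.

```python
import numpy as np

def lloyd(points, k, iters, rng):
    # Lloyd's algorithm: alternate nearest-codeword assignment and centroid update
    centers = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = points[assign == j].mean(axis=0)
    return centers

def distortion(points, centers):
    # mean squared distance to the nearest codeword
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) ** 2))

rng = np.random.default_rng(0)
pts = rng.standard_normal((500, 2))
for k in (2, 4, 8):
    print(k, distortion(pts, lloyd(pts, k, 20, rng)))
```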
Embracing Off-the-Grid Samples
Many empirical studies suggest that samples of continuous-time signals taken at locations randomly deviated from an equispaced grid (i.e., off-the-grid) can benefit signal acquisition, e.g., undersampling and anti-aliasing. However, explicit statements of such advantages and their respective conditions are scarce in the literature. This paper provides some insight on this topic when the sampling positions are known, with grid deviations generated i.i.d. from a variety of distributions. By solving the basis pursuit problem with an interpolation kernel, we demonstrate the capabilities of nonuniform samples for compressive sampling, an effective paradigm for undersampling and anti-aliasing. For functions in the Wiener algebra that admit a discrete s-sparse representation in some transform domain, we show that a number of random off-the-grid samples proportional to the sparsity (up to logarithmic factors) is sufficient to recover an accurate bandlimited approximation of the signal. For sparse signals (i.e., s much smaller than the bandlimit), this sampling complexity is a great reduction in comparison to equispaced sampling, where a number of measurements on the order of the bandlimit is needed for the same quality of reconstruction (Nyquist-Shannon sampling theorem). We further consider noise attenuation via oversampling (relative to a desired bandwidth), a standard technique with limited theoretical understanding when the sampling positions are non-equispaced. By solving a least squares problem, we show that i.i.d. randomly deviated samples provide an accurate bandlimited approximation of the signal, with the noise energy suppressed by a factor that grows with the oversampling ratio.
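The least-squares step can be sketched numerically. Below is an illustration under invented parameters (bandlimit B, sample count N, jitter and noise levels are mine, not the paper's exact setup): a signal with 2B+1 Fourier coefficients is sampled at N jittered locations with additive noise, and the coefficients are recovered by ordinary least squares; oversampling (N much larger than 2B+1) averages the noise down.

```python
import numpy as np

rng = np.random.default_rng(1)
B, N = 5, 200                        # bandlimit B, number of off-grid samples
freqs = np.arange(-B, B + 1)

# jittered (off-the-grid) positions: equispaced grid plus i.i.d. deviations
t = (np.arange(N) + 0.5 * (rng.random(N) - 0.5)) / N

A = np.exp(2j * np.pi * np.outer(t, freqs))      # nonuniform Fourier system
c_true = rng.standard_normal(2 * B + 1) + 1j * rng.standard_normal(2 * B + 1)
y = A @ c_true + 0.1 * rng.standard_normal(N)    # noisy off-grid samples

c_hat, *_ = np.linalg.lstsq(A, y, rcond=None)    # least squares recovery
rel_err = np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true)
print(rel_err)
```

With deviations bounded by a quarter of the grid spacing, the system stays well conditioned and the coefficient error is far below the per-sample noise level.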
An Introduction To Compressive Sampling [A sensing/sampling paradigm that goes against the common knowledge in data acquisition]
This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use. To make this possible, CS relies on two principles: sparsity, which pertains to the signals of interest, and incoherence, which pertains to the sensing modality.
Our intent in this article is to overview the basic CS theory that emerged in the works [1]–[3], present the key mathematical ideas underlying this theory, and survey a couple of important results in the field. Our goal is to explain CS as plainly as possible, and so our article is mainly of a tutorial nature. One of the charms of this theory is that it draws from various subdisciplines within the applied mathematical sciences, most notably probability theory. In this review, we have decided to highlight this aspect and especially the fact that randomness can — perhaps surprisingly — lead to very effective sensing mechanisms. We will also discuss significant implications, explain why CS is a concrete protocol for sensing and compressing data simultaneously (thus the name), and conclude our tour by reviewing important applications
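A compact numerical sketch of the recovery claim follows, using orthogonal matching pursuit as a simple greedy stand-in for the convex recovery programs the article discusses (the sizes and signal model are invented for illustration): an s-sparse vector is recovered from far fewer random Gaussian measurements than its ambient dimension.

```python
import numpy as np

def omp(A, y, s):
    # orthogonal matching pursuit: greedily select s columns of A,
    # re-fitting the coefficients by least squares after each selection
    residual, support = y.copy(), []
    for _ in range(s):
        corr = np.abs(A.T @ residual)
        corr[support] = 0.0                     # do not re-select an atom
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, s = 256, 128, 5                   # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # incoherent random sensing matrix
x_true = np.zeros(n)
idx = rng.choice(n, size=s, replace=False)
x_true[idx] = rng.choice([-1.0, 1.0], size=s) * (2.0 + rng.random(s))
y = A @ x_true                          # m = 128 measurements of a 256-dim signal
x_hat = omp(A, y, s)
print(np.linalg.norm(x_hat - x_true))
```

Sparsity makes the signal identifiable from few measurements, and the randomness of A supplies the incoherence the theory requires.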
Finite representation of finite energy signals
Ankara: The Department of Electrical and Electronics Engineering and the Institute of Engineering and Sciences of Bilkent University, 2011. Thesis (Master's), Bilkent University, 2011. Includes bibliographical references (leaves 81-93).
In this thesis, we study how to encode finite energy signals by finitely many bits.
Since such an encoding is bound to be lossy, there is an inevitable reconstruction
error in the recovery of the original signal. We also analyze this reconstruction
error. In our work, we not only verify the intuition that finiteness of the energy
for a signal implies finite degree of freedom, but also optimize the reconstruction
parameters to get the minimum possible reconstruction error by using a given
number of bits and to achieve a given reconstruction error by using a minimum
number of bits. This optimization leads to a number-of-bits vs. reconstruction-error
curve consisting of the best achievable points, which reminds us of the rate-distortion
curve in information theory. However, the rate-distortion theorem is
not concerned with sampling, whereas we need to take sampling into consideration
in order to reduce the finite energy signal we deal with to finitely many
variables to be quantized. Therefore, we first propose a finite sample representation
scheme and question its optimality. Then, after representing the signal
of interest by a finite number of samples at the expense of a certain error, we discuss
several quantization methods for these finitely many samples and compare
their performances.
Gülcü, Talha Cihad. M.S.
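The sample-then-quantize pipeline described above can be sketched as follows (a generic uniform quantizer on an invented test signal, not the thesis's optimized scheme); sweeping the bit budget traces exactly the kind of bits-versus-error curve the abstract mentions.

```python
import numpy as np

def quantize(x, bits):
    # uniform mid-rise quantizer on [-1, 1) with 2**bits levels
    levels = 2 ** bits
    step = 2.0 / levels
    idx = np.clip(np.floor((x + 1.0) / step), 0, levels - 1)
    return -1.0 + (idx + 0.5) * step

# 256 samples of a finite-energy signal, quantized at increasing bit budgets
t = np.linspace(0.0, 1.0, 256, endpoint=False)
x = 0.8 * np.sin(2.0 * np.pi * 3.0 * t)
errors = [np.sqrt(np.mean((x - quantize(x, b)) ** 2)) for b in range(2, 9)]
print(errors)   # RMS reconstruction error shrinks as the bit budget grows
```

Each extra bit roughly halves the quantization step, so the RMS error falls monotonically along the curve.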