
    On the asymptotic optimality of a low-complexity coding strategy for WSS, MA, and AR vector sources

    In this paper, we study the asymptotic optimality of a low-complexity coding strategy for Gaussian vector sources. Specifically, we study the convergence speed of the rate of such a coding strategy when it is used to encode the most relevant vector sources, namely wide sense stationary (WSS), moving average (MA), and autoregressive (AR) vector sources. We also study how the considered coding strategy performs when it is used to encode perturbed versions of those relevant sources. More precisely, we give a sufficient condition on such perturbations under which the convergence speed of the rate remains unaltered.

    A Low-Complexity and Asymptotically Optimal Coding Strategy for Gaussian Vector Sources

    In this paper, we present a low-complexity coding strategy to encode (compress) finite-length data blocks of Gaussian vector sources. We show that for large enough data blocks of a Gaussian asymptotically wide sense stationary (AWSS) vector source, the rate of the coding strategy tends to the lowest possible rate. Besides being a low-complexity strategy, it does not require knowledge of the correlation matrix of such data blocks. We also show that this coding strategy is appropriate for encoding the most relevant Gaussian vector sources, namely wide sense stationary (WSS), moving average (MA), autoregressive (AR), and ARMA vector sources.
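    The core idea — that the DFT asymptotically diagonalises the Toeplitz covariance of a WSS block, so coefficients can be coded independently without knowing the correlation matrix — can be illustrated numerically. The AR(1) source and all parameters below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_blocks = 256, 1000      # block length and number of blocks (illustrative)
a = 0.8                      # AR(1) coefficient of the assumed test source

# One long AR(1) realisation, cut into consecutive length-N blocks.
x = np.zeros(n_blocks * N)
for t in range(1, x.size):
    x[t] = a * x[t - 1] + rng.standard_normal()
blocks = x.reshape(n_blocks, N)

# Unitary DFT of each block: for large N it approximately diagonalises
# the Toeplitz covariance of a WSS source, so the coefficients can be
# quantised independently without knowing the correlation matrix.
X = np.fft.fft(blocks, axis=1) / np.sqrt(N)
coeff_var = np.mean(np.abs(X) ** 2, axis=0)

# The coefficient variances approach the AR(1) power spectral density.
w = 2 * np.pi * np.arange(N) / N
psd = 1.0 / np.abs(1.0 - a * np.exp(-1j * w)) ** 2
max_rel_err = np.max(np.abs(coeff_var - psd) / psd)
```

    The small residual error is what shrinks with the block length; the paper's results concern how fast the corresponding rate gap closes.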

    Rate-distortion function upper bounds for Gaussian vectors and their applications in coding AR sources

    Keywords: source coding; rate-distortion function (RDF); Gaussian vector; autoregressive (AR) source; discrete Fourier transform (DFT)
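    The benchmark that such upper bounds are measured against is the exact Gaussian-vector rate-distortion function, obtained by reverse water-filling over the covariance eigenvalues. A minimal sketch (the function name and the bisection approach are my own):

```python
import numpy as np

def gaussian_vector_rdf(eigvals, D, iters=100):
    """Rate-distortion function (in nats) of a Gaussian vector with
    covariance eigenvalues `eigvals` under total MSE distortion D.
    Reverse water-filling: each component is distorted by min(theta,
    lambda_i), with the level theta chosen so distortions sum to D."""
    eigvals = np.asarray(eigvals, dtype=float)
    lo, hi = 0.0, eigvals.max()
    for _ in range(iters):                     # bisect on the water level
        theta = 0.5 * (lo + hi)
        if np.minimum(theta, eigvals).sum() > D:
            hi = theta
        else:
            lo = theta
    d = np.minimum(theta, eigvals)
    return 0.5 * np.sum(np.log(eigvals / d))

# White 2-vector (unit eigenvalues), distortion 0.5 per component:
rate = gaussian_vector_rdf([1.0, 1.0], 1.0)    # equals log(2) nats in total
```

    When D exceeds the total signal power, the function correctly returns zero rate.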

    Design of FIR paraunitary filter banks for subband coding using a polynomial eigenvalue decomposition

    The problem of paraunitary filter bank design for subband coding has received considerable attention in recent years, not least because of the energy-preserving property of this class of filter banks. In this paper, we consider the design of signal-adapted, finite impulse response (FIR), paraunitary filter banks using polynomial matrix EVD (PEVD) techniques. Modifications are proposed to an iterative, time-domain PEVD method, known as the sequential best rotation (SBR2) algorithm, which enable its effective application to the problem of FIR orthonormal filter bank design for efficient subband coding. By choosing an optimisation scheme that maximises the coding gain at each stage of the algorithm, it is shown that the resulting filter bank behaves more and more like the infinite-order principal component filter bank (PCFB). The proposed method is compared to state-of-the-art techniques, namely the iterative greedy algorithm (IGA), the approximate EVD (AEVD), standard SBR2, and a fast algorithm for FIR compaction filter design called the window method (WM). We demonstrate that for the calculation of the subband coder, the WM approach offers a low-cost alternative at lower coding gains, while at moderate to high complexity, the proposed approach outperforms the benchmarkers. In terms of run-time complexity, AEVD performs well at low orders, while the proposed algorithm offers a better coding gain than the benchmarkers at moderate to high filter orders for a number of simulation scenarios.
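    The coding gain that such designs maximise is the ratio of the arithmetic to the geometric mean of the subband variances. A minimal two-channel illustration with an orthonormal Haar bank and an assumed AR(1) input (neither taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
a = 0.9                              # AR(1) coefficient (assumed test source)
x = np.zeros(2 ** 16)
for t in range(1, x.size):
    x[t] = a * x[t - 1] + rng.standard_normal()

# Two-channel orthonormal Haar analysis bank: sum/difference of sample
# pairs, i.e. filtering followed by decimation by 2.
even, odd = x[0::2], x[1::2]
low = (even + odd) / np.sqrt(2)
high = (even - odd) / np.sqrt(2)

# Subband coding gain = arithmetic / geometric mean of subband variances;
# an orthonormal bank preserves energy, so the arithmetic mean equals the
# input variance and any variance disparity translates into gain.
variances = np.array([low.var(), high.var()])
gain = variances.mean() / np.sqrt(variances.prod())
gain_db = 10 * np.log10(gain)
```

    Even this fixed two-tap bank yields a few dB of gain on a strongly correlated source; signal-adapted PEVD designs push the gain towards that of the PCFB.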

    Orthonormal and biorthonormal filter banks as convolvers, and convolutional coding gain

    Convolution theorems for filter bank transformers are introduced. Both uniform and nonuniform decimation ratios are considered, and orthonormal as well as biorthonormal cases are addressed. All the theorems are such that the original convolution reduces to a sum of shorter, decoupled convolutions in the subbands. That is, there is no need for cross convolution between subbands. For the orthonormal case, expressions for optimal bit allocation and the optimized coding gain are derived. The contribution to coding gain comes partly from the nonuniformity of the signal spectrum and partly from the nonuniformity of the filter spectrum. With one of the convolved sequences taken to be the unit pulse function, the coding gain expressions reduce to those for traditional subband and transform coding. The filter-bank convolver has about the same computational complexity as a traditional convolver if the analysis bank has small complexity compared to the convolution itself.
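    The optimal bit allocation referred to above has, under the standard high-rate Gaussian approximation, the well-known closed form b_k = B + (1/2) log2(sigma_k^2 / geometric mean of the variances). A sketch under that assumption (the function name is my own):

```python
import numpy as np

def optimal_bit_allocation(variances, avg_bits):
    """High-rate optimal bit allocation over independent subband coders:
    b_k = B + 0.5 * log2(sigma_k^2 / geometric_mean), which equalises
    quantisation distortion across subbands. Negative allocations would
    be clipped to zero in a practical coder."""
    v = np.asarray(variances, dtype=float)
    geo = np.exp(np.mean(np.log(v)))       # geometric mean of variances
    return avg_bits + 0.5 * np.log2(v / geo)

# Stronger subband gets more bits; the average stays at the budget B = 3.
bits = optimal_bit_allocation([4.0, 1.0], 3.0)
```

    For two subbands with a 4:1 variance ratio this shifts half a bit from the weak band to the strong one.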

    Autoregressive process parameters estimation from Compressed Sensing measurements and Bayesian dictionary learning

    The main contribution of this thesis is the introduction of new techniques that allow signal processing operations to be performed on signals represented by means of compressed sensing. Exploiting autoregressive modeling of the original signal, we obtain a compact yet representative description of the signal which can be estimated directly in the compressed domain. This is the key concept on which the applications we introduce rely. In fact, thanks to the proposed framework it is possible to gain information about the original signal given only compressed sensing measurements. This is done by means of autoregressive modeling, which describes a signal through a small number of parameters. We develop a method to estimate these parameters from the compressed measurements by using an ad-hoc sensing matrix design and two coupled estimators that can be used in different scenarios. This enables centralized and distributed estimation of the covariance matrix of a process from compressed sensing measurements in an efficient way and at low communication cost. Next, we use the characterization of the original signal by means of a few autoregressive parameters to improve compressive imaging. In particular, we use these parameters as a proxy to estimate the complexity of a block of a given image. This allows us to introduce a novel compressive imaging system in which the number of allocated measurements is adapted for each block depending on its complexity, i.e., spatial smoothness. The result is that a careful allocation of the measurements improves the recovery process, reaching higher recovery quality at the same compression ratio in comparison to state-of-the-art compressive image recovery techniques. Interestingly, the parameters we are able to estimate directly in the compressed domain not only improve the recovery but can also be used as feature vectors for classification.
    In fact, we also propose to use these parameters as more general feature vectors that enable classification in the compressed domain. Remarkably, this method reaches high classification performance, comparable with that obtained in the original domain, but at a lower cost in terms of dataset storage. In the second part of this work, we focus on sparse representations, since a better sparsifying dictionary can improve the compressed sensing recovery performance. At first, we focus on the original domain, so no dimensionality reduction by means of compressed sensing is considered. In particular, we develop a Bayesian technique which, in a fully automated fashion, performs dictionary learning. In more detail, by using the uncertainties coming from atom selection in the sparse representation step, this technique outperforms state-of-the-art dictionary learning techniques. We also address image denoising and inpainting tasks using the aforementioned technique, with excellent results. Next, we move to the compressed domain, where a better dictionary is expected to provide improved recovery. We show how the Bayesian dictionary learning model can be adapted to the compressive case and which assumptions must be made when considering random projections. Lastly, numerical experiments confirm the superiority of this technique when compared to other compressive dictionary learning techniques.
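    The autoregressive modelling that underpins the thesis can be sketched in the plain (uncompressed) domain via the classical Yule-Walker equations; the thesis's contribution is estimating the same parameters from compressed measurements, which this baseline does not attempt:

```python
import numpy as np

def yule_walker(x, p):
    """Estimate AR(p) coefficients from sample autocorrelations by
    solving the Yule-Walker equations R a = r. This is the standard
    plain-domain baseline, not the compressed-domain estimator."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

# Synthetic AR(1) process with a known coefficient (illustrative values).
rng = np.random.default_rng(2)
true_a = 0.7
x = np.zeros(100_000)
for t in range(1, x.size):
    x[t] = true_a * x[t - 1] + rng.standard_normal()
a_hat = yule_walker(x, 1)
```

    A handful of such coefficients is the compact description that the thesis then uses both for measurement allocation and as classification features.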

    Modelling of mobile fading channels with fading mitigation techniques

    This thesis aims to contribute to the development of wireless communication systems. The work consists of three parts: the first part is a discussion of general digital communication systems, the second part focuses on wireless channel modelling and fading mitigation techniques, and in the third part we discuss the possible application of advanced digital signal processing, especially time-frequency representation and blind source separation, to wireless communication systems. The first part considers general digital communication systems, which will be incorporated in later parts. Today's wireless communication system is a branch of the general digital communication system that employs various techniques of A/D (analog-to-digital) conversion, source coding, error-correction coding, modulation, synchronization, signal detection in noise, channel estimation, and equalization. We study and develop digital communication algorithms to enhance the performance of wireless communication systems. In the second part, we focus on wireless channel modelling and fading mitigation techniques. A modified Jakes' method is developed for Rayleigh fading channels. We investigate the level-crossing rate (LCR), the average duration of fades (ADF), the probability density function (PDF), the cumulative distribution function (CDF) and the autocorrelation functions (ACF) of this model. The simulated results are verified against the analytical Clarke's channel model. We also construct a frequency-selective geometrical-based hyperbolically distributed scatterers (GBHDS) model for a macro-cell mobile environment with the proper statistical characteristics. The modified Clarke's model and the GBHDS model may be readily expanded to a MIMO channel model; we therefore study the MIMO fading channel, specifically modelling the MIMO channel in the angular domain. A detailed analysis of the Gauss-Markov approximation of the fading channel is also given.
    Two fading mitigation techniques are investigated: Orthogonal Frequency Division Multiplexing (OFDM) and spatial diversity. In the third part, we turn to the fields of Time-Frequency Analysis and Blind Source Separation and investigate the application of these powerful Digital Signal Processing (DSP) tools to improve the performance of wireless communication systems.
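    A generic sum-of-sinusoids Rayleigh fading generator in the Clarke/Jakes style — a baseline, not the modified model developed in the thesis — can be sketched as follows (all parameters are illustrative):

```python
import numpy as np

def rayleigh_fading(f_d, fs, n_samples, n_paths=64, seed=0):
    """Sum-of-sinusoids Rayleigh fading generator (Clarke/Jakes-style
    baseline). Each propagation path gets a random angle of arrival,
    hence a Doppler shift f_d * cos(theta), and a random initial phase;
    summing many such paths yields a complex Gaussian (Rayleigh) gain."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_samples) / fs
    theta = rng.uniform(0, 2 * np.pi, n_paths)     # angles of arrival
    phi = rng.uniform(0, 2 * np.pi, n_paths)       # initial path phases
    doppler = 2 * np.pi * f_d * np.cos(theta)
    h = np.exp(1j * (np.outer(t, doppler) + phi)).sum(axis=1)
    return h / np.sqrt(n_paths)                    # unit average power

# 100 Hz maximum Doppler, 2 kHz sampling, 10 s of channel gain.
h = rayleigh_fading(f_d=100.0, fs=2000.0, n_samples=20_000)
```

    Statistics such as the LCR, ADF, and autocorrelation can then be measured on `h` and compared against Clarke's analytical results, which is how the thesis validates its modified model.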

    Frequency Domain Independent Component Analysis Applied To Wireless Communications Over Frequency-selective Channels

    Frequency-selective fading is a major source of impairment in wireless communications. In this research, a novel Frequency-Domain Independent Component Analysis (ICA-F) approach is proposed to blindly separate and deconvolve signals traveling through frequency-selective, slow fading channels. Compared with existing time-domain approaches, the ICA-F is computationally efficient and possesses fast convergence properties. Simulation results confirm the effectiveness of the proposed ICA-F. Orthogonal Frequency Division Multiplexing (OFDM) systems are widely used in today's wireless communications. However, OFDM systems are very sensitive to Carrier Frequency Offset (CFO), so an accurate CFO compensation technique is required in order to achieve acceptable performance. In this dissertation, two novel blind approaches are proposed to estimate and compensate for CFO within the range of half the subcarrier spacing: a Maximum Likelihood CFO Correction approach (ML-CFOC), and a high-performance, low-computation Blind CFO Estimator (BCFOE). The Bit Error Rate (BER) improvement of the ML-CFOC is achieved at the expense of a modest increase in computational requirements, without sacrificing system bandwidth or increasing hardware complexity. The BCFOE outperforms the existing blind CFO estimator [25, 128], referred to as the YG-CFO estimator, in terms of BER and Mean Square Error (MSE), again without increasing the computational complexity, sacrificing system bandwidth, or increasing hardware complexity. While both proposed techniques outperform the YG-CFO estimator, the BCFOE performs better than the ML-CFOC technique. Extensive simulation results illustrate the performance of the ML-CFOC and BCFOE approaches.
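    The impairment both estimators target can be reproduced in a few lines: a CFO of eps subcarrier spacings multiplies the time-domain OFDM samples by a phase ramp, and undoing that ramp before the FFT restores subcarrier orthogonality. The sketch below assumes the estimate of eps is already available; it does not implement the proposed ML-CFOC or BCFOE:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64          # subcarriers (illustrative)
eps = 0.3       # CFO as a fraction of the subcarrier spacing, |eps| < 0.5

# One OFDM symbol: QPSK on every subcarrier, unitary IFFT to time domain.
bits = rng.integers(0, 2, (2, N))
sym = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)
tx = np.fft.ifft(sym) * np.sqrt(N)

# CFO rotates sample n by e^{j 2 pi eps n / N}, destroying subcarrier
# orthogonality and causing inter-carrier interference.
n = np.arange(N)
rx = tx * np.exp(2j * np.pi * eps * n / N)

# With a perfect estimate of eps, the rotation is undone before the FFT;
# supplying that estimate blindly is the job of the proposed estimators.
corrected = rx * np.exp(-2j * np.pi * eps * n / N)
sym_hat = np.fft.fft(corrected) / np.sqrt(N)
err_uncorrected = np.max(np.abs(np.fft.fft(rx) / np.sqrt(N) - sym))
err_corrected = np.max(np.abs(sym_hat - sym))
```

    Without correction, each subcarrier is both attenuated and rotated and leaks into its neighbours, which is why even a fractional CFO dominates the BER.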

    Towards low-cost gigabit wireless systems at 60 GHz

    The world-wide availability of the huge amount of license-free spectral space in the 60 GHz band provides wide room for gigabit-per-second (Gb/s) wireless applications. A commercial (read: low-cost) 60-GHz transceiver will, however, provide limited system performance due to the stringent link budget and the substantial RF imperfections. The work presented in this thesis is intended to support the design of low-cost 60-GHz transceivers for Gb/s transmission over short distances (a few meters). Typical applications are the transfer of high-definition streaming video and high-speed download. The presented work comprises research into the characteristics of typical 60-GHz channels, the evaluation of the transmission quality as well as the development of suitable baseband algorithms. This can be summarized as follows. In the first part, the characteristics of the wave propagation at 60 GHz are charted out by means of channel measurements and ray-tracing simulations for both narrow-beam and omni-directional configurations. Both line-of-sight (LOS) and non-line-of-sight (NLOS) are considered. This study reveals that antennas that produce a narrow beam can be used to boost the received power by tens of dBs when compared with omnidirectional configurations. Meanwhile, the time-domain dispersion of the channel is reduced to the order of nanoseconds, which facilitates Gb/s data transmission over 60-GHz channels considerably. Besides the execution of measurements and simulations, the influence of antenna radiation patterns is analyzed theoretically. It is indicated to what extent the signal-to-noise ratio, Rician-K factor and channel dispersion are improved by application of narrow-beam antennas and to what extent these parameters will be influenced by beam pointing errors. From both experimental and analytical work it can be concluded that the problem of the stringent link-budget can be solved effectively by application of beam-steering techniques. 
    The second part treats wideband transmission methods and relevant baseband algorithms. The considered schemes include orthogonal frequency division multiplexing (OFDM), multi-carrier code division multiple access (MC-CDMA) and single carrier with frequency-domain equalization (SC-FDE), which are promising candidates for Gb/s wireless transmission. In particular, the optimal linear equalization in the frequency domain and associated implementation issues such as synchronization and channel estimation are examined. Bit error rate (BER) expressions are derived to evaluate the transmission performance. Besides the linear equalization techniques, a low-complexity inter-symbol interference cancellation technique is proposed to achieve much better performance for code-spreading systems such as MC-CDMA and SC-FDE. Both theoretical analysis and simulations demonstrate that the proposed scheme offers great advantages as regards both complexity and performance. This makes it particularly suitable for 60-GHz applications in multipath environments. The third part treats the influence of quantization and RF imperfections on the considered transmission methods in the context of 60-GHz radios. First, expressions for the BER are derived and the influence of nonlinear distortions caused by the digital-to-analog converters, analog-to-digital converters and power amplifiers on the BER performance is examined. Next, the BER performance under the influence of phase noise and IQ imbalance is evaluated for the case that digital compensation techniques are applied in the receiver as well as for the case that such techniques are not applied. Finally, a baseline design of a low-cost Gb/s 60-GHz transceiver is presented. It is shown that, by application of beam-steering in combination with SC-FDE without advanced channel coding, a data rate in the order of 2 Gb/s can be achieved over a distance of 10 meters in a typical NLOS indoor scenario.
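    The SC-FDE receiver mentioned above can be sketched with a per-bin MMSE equaliser W_k = H_k* / (|H_k|^2 + 1/SNR); the channel taps, block length and SNR below are illustrative choices, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 256                                    # FDE block length (illustrative)
h = np.array([1.0, 0.5, 0.25])             # toy multipath channel (assumed)
snr = 100.0                                # linear SNR (20 dB)

# Single-carrier block with a cyclic prefix: the channel then acts as a
# circular convolution, which diagonalises in the frequency domain.
s = rng.choice([-1.0, 1.0], N)             # BPSK block
rx = np.fft.ifft(np.fft.fft(s) * np.fft.fft(h, N))
rx = rx + (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2 * snr)

# MMSE frequency-domain equaliser, applied per frequency bin, then back
# to the time domain for symbol-by-symbol detection (SC-FDE).
H = np.fft.fft(h, N)
W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
s_hat = np.fft.ifft(np.fft.fft(rx) * W).real
ber = np.mean(np.sign(s_hat) != s)
```

    The attraction for 60-GHz radios is that the equaliser costs only two FFTs and one multiply per bin per block, regardless of the channel's delay spread.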