
    Theory of optimal orthonormal subband coders

    The theory of the orthogonal transform coder and methods for its optimal design have been known for a long time. We derive a set of necessary and sufficient conditions for the coding-gain optimality of an orthonormal subband coder for given input statistics. We also show how these conditions can be satisfied by constructing a sequence of optimal compaction filters one at a time. Several theoretical properties of optimal compaction filters and optimal subband coders are then derived, especially pertaining to behavior as the number of subbands increases. Significant theoretical differences between optimum subband coders, transform coders, and predictive coders are summarized. Finally, conditions are presented under which optimal orthonormal subband coders yield as much coding gain as biorthogonal ones for a fixed number of subbands.
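    As a rough illustration of the figure of merit involved, the sketch below computes the standard coding gain (arithmetic over geometric mean of the subband variances) for the block-transform special case, where the KLT is the optimal orthonormal coder; the AR(1) source model and the high-rate assumptions behind this formula are mine, not the paper's.

```python
import numpy as np

def coding_gain(subband_variances):
    """Coding gain: arithmetic over geometric mean of subband variances
    (standard high-bit-rate figure of merit for orthonormal coders)."""
    v = np.asarray(subband_variances, dtype=float)
    return v.mean() / np.exp(np.log(v).mean())

# Hypothetical AR(1) source with correlation rho: R[i, j] = rho**|i - j|.
M, rho = 8, 0.95
idx = np.arange(M)
R = rho ** np.abs(np.subtract.outer(idx, idx))

# For a block transform coder, the KLT (eigenvectors of R) is optimal;
# its subband variances are the eigenvalues of R.
print("KLT coding gain:", coding_gain(np.linalg.eigvalsh(R)))
# With no transform, the "subband" variances are just diag(R) = 1,
# giving unit coding gain.
print("No transform:  ", coding_gain(np.diag(R)))
```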

    Orthonormal and biorthonormal filter banks as convolvers, and convolutional coding gain

    Convolution theorems for filter bank transformers are introduced. Both uniform and nonuniform decimation ratios are considered, and orthonormal as well as biorthonormal cases are addressed. All the theorems are such that the original convolution reduces to a sum of shorter, decoupled convolutions in the subbands. That is, there is no need for cross convolution between subbands. For the orthonormal case, expressions for optimal bit allocation and the optimized coding gain are derived. The contribution to coding gain comes partly from the nonuniformity of the signal spectrum and partly from nonuniformity of the filter spectrum. With one of the convolved sequences taken to be the unit pulse function, the coding gain expressions reduce to those for traditional subband and transform coding. The filter-bank convolver has about the same computational complexity as a traditional convolver if the analysis bank has small complexity compared to the convolution itself.
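    The paper's general theorems cover decimated, nonuniform, and biorthonormal banks; as a minimal hedged illustration of the decoupling idea, the sketch below uses the DFT (a block-transform special case), where circular convolution reduces to independent per-bin products with no cross terms between subbands.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)
h = rng.standard_normal(N)

# In the DFT "subbands" (frequency bins), circular convolution decouples
# into independent per-bin products -- no cross terms between bins.
via_bins = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

# Brute-force circular convolution for comparison.
direct = np.array([sum(x[m] * h[(n - m) % N] for m in range(N))
                   for n in range(N)])
print("max |difference|:", np.max(np.abs(via_bins - direct)))
```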

    Patch-Based Holographic Image Sensing

    Holographic representations of data enable distributed storage with progressive refinement when the stored packets of data are made available in any arbitrary order. In this paper, we propose and test patch-based transform-coding holographic sensing of image data. Our proposal is optimized for progressive recovery under a random order of retrieval of the stored data. The coding of the image patches relies on the design of distributed projections ensuring the best image recovery, in terms of the ℓ2 norm, at each retrieval stage. The performance depends only on the number of data packets that have been retrieved thus far. Several possible options to enhance the quality of the recovery while changing the size and number of data packets are discussed and tested. This leads us to examine several interesting bit-allocation and rate-distortion trade-offs, highlighted for a set of natural images with ensemble-estimated statistical properties.
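    A toy sketch of progressive patch recovery under random retrieval order, using plain PCA coefficients as stand-ins for the paper's optimized distributed projections; the 8x8 patch size, the one-packet-per-coefficient mapping, and the Gaussian data are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def pca_basis(patches):
    """Orthonormal basis (principal directions) of a set of patch vectors."""
    _, _, Vt = np.linalg.svd(patches - patches.mean(axis=0),
                             full_matrices=False)
    return Vt  # rows form an orthonormal basis

def recover(patch, basis, received):
    """L2 recovery when only the coefficient packets in `received`
    (arbitrary order) are available; missing coefficients are zeroed."""
    coeffs = basis @ patch
    mask = np.zeros_like(coeffs)
    mask[received] = 1.0
    return basis.T @ (coeffs * mask)

rng = np.random.default_rng(1)
patches = rng.standard_normal((500, 64))   # stand-ins for 8x8 image patches
B = pca_basis(patches)
x = patches[0]
for k in (4, 16, 64):
    received = rng.permutation(64)[:k]     # random retrieval order
    err = np.linalg.norm(x - recover(x, B, received))
    print(f"{k:2d} packets retrieved: residual {err:.3f}")
```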

    Computation Alignment: Capacity Approximation without Noise Accumulation

    Consider several source nodes communicating across a wireless network to a destination node with the help of several layers of relay nodes. Recent work by Avestimehr et al. has approximated the capacity of this network up to an additive gap. The communication scheme achieving this capacity approximation is based on compress-and-forward, resulting in noise accumulation as the messages traverse the network. As a consequence, the approximation gap increases linearly with the network depth. This paper develops a computation alignment strategy that can approach the capacity of a class of layered, time-varying wireless relay networks up to an approximation gap that is independent of the network depth. This strategy is based on the compute-and-forward framework, which enables relays to decode deterministic functions of the transmitted messages. Alone, compute-and-forward is insufficient to approach the capacity, as it incurs a penalty for approximating the wireless channel, with its complex-valued coefficients, by a channel with integer coefficients. Here, this penalty is circumvented by carefully matching channel realizations across time slots to create integer-valued effective channels that are well suited to compute-and-forward. Unlike prior constant-gap results, the approximation gap obtained in this paper also depends closely on the fading statistics, which are assumed to be i.i.d. Rayleigh.
    Comment: 36 pages, to appear in IEEE Transactions on Information Theory.
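    A minimal sketch of the compute-and-forward building block, using the Nazer-Gastpar computation rate for a real-valued channel model; the example channel, power, and brute-force search over small integer coefficient vectors are illustrative assumptions, and the paper's alignment of channel realizations across time slots is not modeled here.

```python
import numpy as np

def computation_rate(h, a, P):
    """Nazer-Gastpar computation rate (real-valued model): the rate at
    which a relay observing channel h can decode the integer linear
    combination a of the transmitted lattice codewords at power P."""
    h, a = np.asarray(h, float), np.asarray(a, float)
    denom = a @ a - P * (h @ a) ** 2 / (1 + P * (h @ h))
    return 0.5 * max(0.0, np.log2(1.0 / denom))

h = np.array([1.2, 0.7])   # illustrative real channel gains at one relay
P = 10.0                   # illustrative transmit power

# The integer-approximation penalty: search small integer vectors for the
# combination the relay can decode at the highest rate.
candidates = [(a1, a2) for a1 in range(-3, 4) for a2 in range(-3, 4)
              if (a1, a2) != (0, 0)]
best = max(candidates, key=lambda a: computation_rate(h, a, P))
print("best integer combination:", best,
      "rate:", round(computation_rate(h, best, P), 3))
```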

    Distributed Detection and Estimation in Wireless Sensor Networks

    In this article we consider the problems of distributed detection and estimation in wireless sensor networks. In the first part, we provide a general framework aimed at showing how an efficient design of a sensor network requires a joint organization of in-network processing and communication. Then, we recall the basic features of the consensus algorithm, a fundamental tool for reaching globally optimal decisions through a distributed approach. The main part of the paper then addresses the distributed estimation problem. We first show an entirely decentralized approach, where observations and estimations are performed without the intervention of a fusion center. Then, we consider the case where the estimation is performed at a fusion center, showing how to allocate quantization bits and transmit powers in the links between the nodes and the fusion center in order to accommodate the requirement on the maximum estimation variance, under a constraint on the global transmit power. We extend the approach to the detection problem. Also in this case, we consider the distributed approach, where every node can achieve a globally optimal decision, and the case where the decision is taken at a central node. In the latter case, we show how to allocate coding bits and transmit power in order to maximize the detection probability, under constraints on the false alarm rate and the global transmit power. We then generalize consensus algorithms, illustrating a distributed procedure that converges to the projection of the observation vector onto a signal subspace. Next, we address the issue of energy consumption in sensor networks, showing how to optimize the network topology in order to minimize the energy necessary to achieve a global consensus. Finally, we address the problem of matching the topology of the network to the graph describing the statistical dependencies among the observed variables.
    Comment: 92 pages, 24 figures. To appear in E-Reference Signal Processing, R. Chellapa and S. Theodoridis, Eds., Elsevier, 201
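    As a minimal sketch of the consensus building block recalled above: with a doubly stochastic weight matrix on a connected graph, repeated neighbor averaging drives every node to the global sample mean without a fusion center. The 4-node ring and the specific weights below are illustrative assumptions.

```python
import numpy as np

# Illustrative 4-node ring with symmetric, doubly stochastic weights.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
x = np.array([3.0, -1.0, 4.0, 2.0])   # local sensor observations

# Each iteration, every node averages with its neighbors; the state
# converges to the global mean (here 2.0) with no fusion center involved.
for _ in range(100):
    x = W @ x
print("consensus state:", np.round(x, 4))
```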

    Subspace methods for portfolio design

    Financial signal processing (FSP) is one of the emerging areas in the field of signal processing, combining mathematical finance and signal processing. Signal processing engineers consider speech, images, video, and the price of a stock as signals of interest for a given application; the information they infer from raw data differs for each. Financial engineers develop new solutions for financial problems using their signal processing knowledge base, with the goal of processing the harvested financial signal to extract meaningful information. Designing investment portfolios has always been at the center of finance. An investment portfolio is comprised of financial instruments such as stocks, bonds, futures, options, and others. It is designed based on the risk limits and return expectations of investors and is managed by portfolio managers. Modern Portfolio Theory (MPT) offers a mathematical method for portfolio optimization: it defines risk as the standard deviation of the portfolio return and provides a closed-form solution to the risk optimization problem from which asset allocations are derived. The risk and the return of an investment are two inseparable performance metrics; therefore, the risk-normalized return, called the Sharpe ratio, is the most widely used performance metric for financial investments.

    Subspace methods have been one of the pillars of functional analysis and signal processing. In finance applications they are used for portfolio design, regression analysis, and noise filtering. Each subspace has unique characteristics that may serve the requirements of a specific application. For still image and video compression applications, the discrete cosine transform (DCT) has been successfully employed in transform coding, where the Karhunen-Loeve transform (KLT) is the optimum block transform.

    In this dissertation, a signal processing framework to design investment portfolios is proposed, in which portfolio theory and subspace methods are investigated and jointly treated. First, the KLT, also known as eigenanalysis or principal component analysis (PCA), of the empirical correlation matrix for a random vector process that statistically represents asset returns in a basket of instruments is investigated. A first-order autoregressive, AR(1), discrete process is employed to approximate such an empirical correlation matrix. The eigenvector and eigenvalue kernels of the AR(1) process are utilized to obtain closed-form expressions for the Sharpe ratios and market exposures of the resulting eigenportfolios, whose performances are evaluated and compared for various statistical scenarios. Then, a novel methodology to design subband/filterbank portfolios for a given empirical correlation matrix by using the theory of optimal filter banks is proposed; it is a natural extension of the celebrated eigenportfolios. Closed-form expressions for the Sharpe ratios and market exposures of subband/filterbank portfolios are derived and compared with those of eigenportfolios. Finally, a simple and powerful new method using rate-distortion theory to sparsify eigen-subspaces, called the sparse KLT (SKLT), is developed. The method utilizes varying-size mid-tread (zero-zone) pdf-optimized (Lloyd-Max) quantizers created for each eigenvector (or for the entire eigenmatrix) of a given eigen-subspace to achieve the desired cardinality reduction. Sparsity performance comparisons demonstrate the superiority of the proposed SKLT method over popular sparse representation algorithms reported in the literature.
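    A hedged sketch of the eigenportfolio construction described above: PCA of an empirical correlation matrix (here synthesized from an AR(1)-style correlation model, echoing the dissertation's approximation) yields portfolio weights whose per-period Sharpe ratios can be compared. The weight normalization, sample size, and return model are illustrative assumptions, not the dissertation's exact setup.

```python
import numpy as np

def eigenportfolios(returns):
    """Eigenportfolios: eigenvectors of the empirical correlation matrix,
    each rescaled so the absolute weights sum to one."""
    C = np.corrcoef(returns.T)
    _, V = np.linalg.eigh(C)          # eigenvalues in ascending order
    V = V[:, ::-1]                    # leading eigenportfolio first
    return V / np.abs(V).sum(axis=0)

def sharpe(weights, returns):
    """Per-period Sharpe ratio (mean over standard deviation) of a
    fixed-weight portfolio; risk-free rate taken as zero."""
    p = returns @ weights
    return p.mean() / p.std()

# Synthetic returns with AR(1)-style cross-correlation R[i, j] = rho**|i-j|.
rng = np.random.default_rng(2)
n_assets, rho = 5, 0.6
idx = np.arange(n_assets)
R = rho ** np.abs(np.subtract.outer(idx, idx))
L = np.linalg.cholesky(R)
returns = 0.001 + 0.01 * rng.standard_normal((1000, n_assets)) @ L.T

W = eigenportfolios(returns)
for k in range(3):
    print(f"eigenportfolio {k + 1}: Sharpe ratio {sharpe(W[:, k], returns):.3f}")
```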

    Discrete multitone modulation with principal component filter banks

    Discrete multitone (DMT) modulation is an attractive method for communication over a nonflat channel with possibly colored noise. The uniform discrete Fourier transform (DFT) filter bank and cosine-modulated filter bank have in the past been used in this system because of their low complexity. We show in this paper that principal component filter banks (PCFBs), which are known to be optimal for data compression and denoising applications, are also optimal for a number of criteria in DMT modulation communication. For example, the PCFB of the effective channel noise power spectrum (noise PSD weighted by the inverse of the channel gain) is optimal for DMT modulation in the sense of maximizing bit rate for fixed power and error probabilities. We also establish an optimality property of the PCFB when scalar prefilters and postfilters are used around the channel. The difference between the PCFB and a traditional filter bank such as the brickwall filter bank or DFT filter bank is significant for effective power spectra which depart considerably from monotonicity. The twisted-pair channel, with its bridged taps, near-end and far-end crosstalk (NEXT and FEXT), and AM interference, therefore appears to be a good candidate for the application of a PCFB. This is demonstrated with the help of numerical results for the case of the ADSL channel.
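    A minimal sketch of the bit-allocation step that such a DMT optimization builds on: water-filling transmit power against the effective noise (noise PSD divided by channel gain) and counting log2(1 + SNR) bits per tone. The toy 8-tone channel and the omission of an SNR gap and explicit error-probability constraints are simplifying assumptions.

```python
import numpy as np

def waterfill(inv_snr, total_power):
    """Water-filling: p_k = max(0, mu - inv_snr_k), with the water level
    mu chosen so the allocated powers sum to total_power."""
    s = np.sort(inv_snr)
    for m in range(len(s), 0, -1):        # try m active tones, most first
        mu = (total_power + s[:m].sum()) / m
        if mu > s[m - 1]:                 # all m candidate tones stay active
            break
    return np.maximum(0.0, mu - inv_snr)

# Illustrative 8-tone channel: effective noise = noise PSD / channel gain.
gain = np.array([1.0, 0.9, 0.7, 0.5, 0.3, 0.2, 0.1, 0.05]) ** 2
noise = np.full(8, 0.01)
inv_snr = noise / gain                    # effective noise per unit power

p = waterfill(inv_snr, total_power=1.0)
bits = np.log2(1.0 + p * gain / noise)    # achievable bits per tone
print("power per tone:", np.round(p, 3))
print("bits per tone :", np.round(bits, 2), "| total:", round(bits.sum(), 2))
```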