Generalized Triangular Decomposition in Transform Coding
A general family of optimal transform coders (TCs) is introduced here based on the generalized triangular decomposition (GTD) developed by Jiang et al. This family includes the Karhunen-Loeve transform (KLT) and the generalized version of the prediction-based lower triangular transform (PLT) introduced by Phoong and Lin as special cases. The coding gain of the entire family, with optimal bit allocation, is equal to that of the KLT and the PLT. Even though the original PLT introduced by Phoong and Lin is not applicable to vectors that are not blocked versions of scalar wide-sense stationary processes, the GTD-based family includes members that are natural extensions of the PLT and therefore also enjoy the so-called MINLAB structure of the PLT, which has the unit noise-gain property. Other special cases of the GTD-TC are the geometric mean decomposition (GMD) and the bidiagonal decomposition (BID) transform coders. The GMD-TC in particular has the property that the optimum bit allocation is uniform; this is because all of its transform-domain coefficients have the same variance, so the dynamic ranges of the coefficients to be quantized are identical.
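The GMD property described above, equal transform-domain variances and hence a uniform optimal bit allocation, can be illustrated numerically. The sketch below is not from the paper: it assumes an AR(1) covariance with rho = 0.9 and an average budget of 3 bits, and applies the standard high-rate bit-allocation formula to the KLT variances and to the equal GMD variances.

```python
import numpy as np

# Hypothetical AR(1) covariance for an 8-point block (rho = 0.9 assumed).
N, rho, b_avg = 8, 0.9, 3.0
idx = np.arange(N)
R = rho ** np.abs(np.subtract.outer(idx, idx))

# KLT: transform-domain variances are the eigenvalues of R (descending).
eigvals = np.linalg.eigvalsh(R)[::-1]

# High-rate optimal allocation: b_k = b_avg + 0.5*log2(var_k / geometric mean).
geo = np.exp(np.mean(np.log(eigvals)))
b_klt = b_avg + 0.5 * np.log2(eigvals / geo)

# GMD: every transform-domain coefficient has variance equal to the geometric
# mean of the eigenvalues, so the same formula gives b_avg bits everywhere.
b_gmd = b_avg + 0.5 * np.log2(np.full(N, geo) / geo)

# Coding gain (identical for KLT and GMD): arithmetic over geometric mean.
gain_db = 10 * np.log10(np.mean(eigvals) / geo)
```

Both allocations spend the same total budget of `N * b_avg` bits; the GMD simply spreads it uniformly because the variances are already equalized.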
Subspace methods for portfolio design
Financial signal processing (FSP) is one of the emerging areas in the field of signal processing. It combines mathematical finance and signal processing. Signal processing engineers treat speech, image, video, and the price of a stock as signals of interest for a given application; the information they infer from the raw data differs for each application. Financial engineers develop new solutions to financial problems using their signal processing knowledge base. Their goal is to process the harvested financial signal to extract information that is meaningful for the application at hand.
Designing investment portfolios has always been at the center of finance. An investment portfolio is composed of financial instruments such as stocks, bonds, futures, options, and others. It is designed based on the risk limits and return expectations of investors and managed by portfolio managers. Modern Portfolio Theory (MPT) offers a mathematical method for portfolio optimization. It defines risk as the standard deviation of the portfolio return and provides a closed-form solution to the risk optimization problem from which asset allocations are derived. The risk and the return of an investment are two inseparable performance metrics. Therefore, the risk-normalized return, called the Sharpe ratio, is the most widely used performance metric for financial investments.
Subspace methods have been one of the pillars of functional analysis and signal processing. They are used for portfolio design, regression analysis and noise filtering in finance applications. Each subspace has its unique characteristics that may serve requirements of a specific application. For still image and video compression applications, Discrete Cosine Transform (DCT) has been successfully employed in transform coding where Karhunen-Loeve Transform (KLT) is the optimum block transform.
In this dissertation, a signal processing framework to design investment portfolios is proposed. Portfolio theory and subspace methods are investigated and jointly treated. First, the KLT, also known as eigenanalysis or principal component analysis (PCA), of the empirical correlation matrix of a random vector process that statistically represents asset returns in a basket of instruments is investigated. A first-order autoregressive, AR(1), discrete process is employed to approximate such an empirical correlation matrix. The eigenvector and eigenvalue kernels of the AR(1) process are utilized to obtain closed-form expressions for the Sharpe ratios and market exposures of the resulting eigenportfolios. Their performances are evaluated and compared for various statistical scenarios. Then, a novel methodology to design subband/filterbank portfolios for a given empirical correlation matrix using the theory of optimal filter banks is proposed. It is a natural extension of the celebrated eigenportfolios. Closed-form expressions for the Sharpe ratios and market exposures of subband/filterbank portfolios are derived and compared with those of eigenportfolios.
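A minimal sketch of the eigenportfolio construction described above, under assumptions not taken from the dissertation: an AR(1) correlation model with rho = 0.7, unit-norm eigenvectors used directly as portfolio weights (normalization conventions vary in the literature), and simulated daily returns with an illustrative drift.

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) correlation model for N assets: R_ij = rho^|i-j| (rho = 0.7 assumed).
N, rho = 16, 0.7
idx = np.arange(N)
R = rho ** np.abs(np.subtract.outer(idx, idx))

# Eigenanalysis (KLT/PCA): reorder so eigenvalues are descending.
lam, V = np.linalg.eigh(R)
lam, V = lam[::-1], V[:, ::-1]

# Simulate daily returns with the model correlation plus a small drift
# (0.05% drift, 1% daily volatility -- illustrative values only).
L = np.linalg.cholesky(R)
returns = 0.0005 + 0.01 * (rng.standard_normal((2500, N)) @ L.T)

# Eigenportfolio k holds assets with the weights of eigenvector k; its
# returns are decorrelated, and portfolio k's variance tracks lam_k.
port = returns @ V
sharpe = np.sqrt(252) * port.mean(axis=0) / port.std(axis=0)  # annualized
```

Because the AR(1) matrix has all-positive entries, the first eigenportfolio has weights of a single sign and behaves like the "market" portfolio, matching the market-exposure interpretation in the text.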
A simple and powerful new method that uses rate-distortion theory to sparsify eigen-subspaces, called the Sparse KLT (SKLT), is developed. The method utilizes varying-size mid-tread (zero-zone) pdf-optimized (Lloyd-Max) quantizers created for each eigenvector (or for the entire eigenmatrix) of a given eigen-subspace to achieve the desired cardinality reduction. Sparsity performance comparisons demonstrate the superiority of the proposed SKLT method over the popular sparse representation algorithms reported in the literature.
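The zero-zone idea can be sketched as follows, with a fixed dead-zone threshold standing in for the per-eigenvector pdf-optimized quantizers the dissertation actually designs (threshold and matrix size are illustrative assumptions):

```python
import numpy as np

def sparsify(V, zero_zone=0.15):
    """Mid-tread (zero-zone) sparsification sketch: eigenvector entries whose
    magnitude falls inside the zero zone are quantized to zero, then each
    column is re-normalized to unit length."""
    Vq = np.where(np.abs(V) < zero_zone, 0.0, V)
    return Vq / np.linalg.norm(Vq, axis=0)

# Eigen-subspace of an AR(1) correlation matrix (rho = 0.8 assumed).
N, rho = 32, 0.8
idx = np.arange(N)
R = rho ** np.abs(np.subtract.outer(idx, idx))
_, V = np.linalg.eigh(R)

Vs = sparsify(V)
card = np.count_nonzero(Vs) / V.size   # fraction of surviving coefficients
```

Widening the zero zone trades reconstruction fidelity for lower cardinality, which is the rate-distortion knob the SKLT tunes per eigenvector.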
A Study of trellis coded quantization for image compression
Trellis coded quantization has recently evolved as a powerful quantization technique in the world of lossy image compression. The aim of this thesis is to investigate the potential of trellis coded quantization in conjunction with two of the most popular image transforms today: the discrete cosine transform and the discrete wavelet transform. Trellis coded quantization is compared with traditional scalar quantization. The 4-state and the 8-state trellis coded quantizers are compared in an attempt to come up with a quantifiable difference in their performances. The use of pdf-optimized quantizers for trellis coded quantization is also studied. Results for the simulations performed on two gray-scale images at an uncoded bit rate of 0.48 bits/pixel are presented by way of reconstructed images and the respective peak signal-to-noise ratios. It is evident from the results obtained that trellis coded quantization outperforms scalar quantization in both the discrete cosine transform and the discrete wavelet transform domains. The reconstructed images suggest that there is no considerable gain in going from a 4-state to an 8-state trellis coded quantizer. Results also suggest that considerable gain can be had by employing pdf-optimized quantizers for trellis coded quantization instead of uniform quantizers.
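The gain from pdf-optimized (Lloyd-Max) quantizers over uniform ones, which the thesis reports inside TCQ, already shows up for plain scalar quantization of a Gaussian source. The sketch below (source, range, and level count are illustrative assumptions, not the thesis setup) designs an 8-level Lloyd-Max codebook and compares its distortion with a uniform codebook:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(50_000)   # unit-variance Gaussian training source

def lloyd_max(x, levels=8, iters=40):
    """Lloyd-Max design: alternate nearest-codeword partition and
    centroid (conditional-mean) codebook update until it settles."""
    c = np.linspace(x.min(), x.max(), levels)   # initial codebook
    for _ in range(iters):
        idx = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
        for k in range(levels):
            if np.any(idx == k):
                c[k] = x[idx == k].mean()
    return np.sort(c)

def mse(x, c):
    """Distortion of quantizing x with codebook c (nearest codeword)."""
    return np.min((x[:, None] - c[None, :]) ** 2, axis=1).mean()

uniform = np.linspace(-3.0, 3.0, 8)   # uniform codebook over an assumed range
optimized = lloyd_max(x)
```

The pdf-optimized codebook places levels densely where the Gaussian density is high, so its MSE comes out clearly below that of the uniform codebook.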
Perceptual models in speech quality assessment and coding
The ever-increasing demand for good communications/toll quality speech has created a renewed interest in the perceptual impact of rate compression. Two general areas are investigated in this work, namely speech quality assessment and speech coding.
In the field of speech quality assessment, a model is developed which simulates the processing stages of the peripheral auditory system. At the output of the model a "running" auditory spectrum is obtained. This represents the auditory (spectral) equivalent of any acoustic sound such as speech. Auditory spectra from coded speech segments serve as inputs to a second model. This model simulates the information centre in the brain which performs the speech quality assessment. [Continues.]
The Telecommunications and Data Acquisition Report
This quarterly publication provides archival reports on developments in programs managed by JPL's Telecommunications and Mission Operations Directorate (TMOD), which now includes the former Telecommunications and Data Acquisition (TDA) Office. In space communications, radio navigation, radio science, and ground-based radio and radar astronomy, it reports on activities of the Deep Space Network (DSN) in planning, supporting research and technology, implementation, and operations. Also included are standards activity at JPL for space data and information systems and reimbursable DSN work performed for other space agencies through NASA. The preceding work is all performed for NASA's Office of Space Communications (OSC). TMOD also performs work funded by other NASA program offices through and with the cooperation of OSC. The first of these is the Orbital Debris Radar Program funded by the Office of Space Systems Development. It exists at Goldstone only and makes use of the planetary radar capability when the antennas are configured as science instruments making direct observations of the planets, their satellites, and asteroids of our solar system. The Office of Space Sciences funds the data reduction and science analyses of data obtained by the Goldstone Solar System Radar. The antennas at all three complexes are also configured for radio astronomy research and, as such, conduct experiments funded by the National Science Foundation in the U.S. and other agencies at the overseas complexes. These experiments are either in microwave spectroscopy or very long baseline interferometry. Finally, tasks funded under the JPL Director's Discretionary Fund and the Caltech President's Fund that involve TMOD are included. This and each succeeding issue of 'The Telecommunications and Data Acquisition Progress Report' will present material in some, but not necessarily all, of the aforementioned programs
Subband Coded Image Transmitting over Noisy Channels Using Multicarrier Modulation
In this paper, we present a new loading algorithm for subband coded image transmission over multicarrier modulation systems. The image subbands are transmitted simultaneously, each occupying a number of subchannels. Different modulation rates and powers are assigned to the subchannels transmitting different subbands. Unlike traditional loading algorithms, which equalize the error performance of all the subchannels, the proposed loading algorithm assigns different error performances to the subchannels in order to provide unequal error protection for the subband data. Numerical examples show that the proposed algorithm yields significant improvement over traditional loading algorithms, especially for spectrally shaped channels.
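A Hughes-Hartogs-style greedy loader gives a feel for how such unequal error protection can be expressed, with per-subchannel SNR gaps standing in for the different error-performance targets. This is a sketch under assumed values, not the paper's algorithm: the gains model a spectrally shaped channel, and the larger gaps mark subchannels whose subbands demand a stricter error target.

```python
import numpy as np

def greedy_load(gains, gaps, total_bits):
    """Greedy bit loading: repeatedly add one bit on the subchannel where the
    incremental power is smallest. Power for b bits on gain g with SNR gap G
    is modeled as G*(2^b - 1)/g, so a larger gap (stricter error target)
    makes bits on that subchannel more expensive."""
    bits = np.zeros(len(gains), dtype=int)
    power = np.zeros(len(gains))
    for _ in range(total_bits):
        inc = gaps * (2.0 ** (bits + 1) - 2.0 ** bits) / gains
        k = int(np.argmin(inc))
        bits[k] += 1
        power[k] += inc[k]
    return bits, power

# Illustrative spectrally shaped channel and two protection classes: the
# first two subchannels (low-frequency subbands) get a 4x stricter gap.
gains = np.array([1.0, 0.8, 0.5, 0.3, 0.2, 0.1])
gaps = np.array([4.0, 4.0, 1.0, 1.0, 1.0, 1.0])
bits, power = greedy_load(gains, gaps, total_bits=18)
```

The incremental-power sums telescope, so the final `power` equals the closed-form `gaps * (2**bits - 1) / gains` per subchannel.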
Performance Analysis and Enhancement of Multiband OFDM for UWB Communications
In this paper, we analyze the frequency-hopping orthogonal frequency-division
multiplexing (OFDM) system known as Multiband OFDM for high-rate wireless
personal area networks (WPANs) based on ultra-wideband (UWB) transmission.
Besides considering the standard, we also propose and study system performance
enhancements through the application of Turbo and Repeat-Accumulate (RA) codes,
as well as OFDM bit-loading. Our methodology consists of (a) a study of the
channel model developed under IEEE 802.15 for UWB from a frequency-domain
perspective suited for OFDM transmission, (b) development and quantification of
appropriate information-theoretic performance measures, (c) comparison of these
measures with simulation results for the Multiband OFDM standard proposal as
well as our proposed extensions, and (d) the consideration of the influence of
practical, imperfect channel estimation on the performance. We find that the
current Multiband OFDM standard sufficiently exploits the frequency selectivity
of the UWB channel, and that the system performs in the vicinity of the channel
cutoff rate. Turbo codes and a reduced-complexity clustered bit-loading
algorithm improve the system power efficiency by over 6 dB at a data rate of
480 Mbps.
Comment: 32 pages, 10 figures, 1 table. Submitted to the IEEE Transactions on Wireless Communications (Sep. 28, 2005). Minor revisions based on reviewers' comments (June 23, 2006).
Research and developments of Dirac video codec
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University.
In digital video compression, apart from storage, successful transmission of the compressed video data over bandwidth-limited, error-prone channels is another important issue. To enable a video codec for broadcasting applications, it is required to implement the corresponding coding tools (e.g. error-resilient coding, rate control, etc.). These are normally non-normative parts of a video codec and hence their specifications are not defined in the standard. In Dirac as well, the original codec is optimized for storage purposes only, so several non-normative encoding tools are still required in order to enable its use in other types of applications.
Under the research title "Research and Developments of the Dirac Video Codec", phase I of the project focused mainly on error-resilient transmission over a noisy channel. The error-resilient coding method used here is a simple, low-complexity coding scheme which provides error-resilient transmission of the compressed video bitstream of the Dirac video encoder over a packet-erasure wired network. The scheme combines source and channel coding: error-resilient source coding is achieved by data partitioning in the wavelet transform domain, and channel coding is achieved through the application of either a Rate-Compatible Punctured Convolutional (RCPC) code or a Turbo Code (TC), using unequal error protection between the header plus motion vectors (MV) and the data. The scheme is designed mainly for the packet-erasure channel, i.e. targeted at Internet broadcasting applications.
For a bandwidth-limited channel, however, it is still necessary to limit the number of bits generated by the encoder according to the available bandwidth, in addition to the error-resilient coding. So, in the second phase of the project, a rate control algorithm is presented. The algorithm is based on a Quality Factor (QF) optimization method in which the QF of the encoded video is adaptively changed in order to achieve an average bitrate that is constant over each Group of Pictures (GOP). A relation between the bitrate R and the QF, called the Rate-QF (R-QF) model, is derived in order to estimate the optimum QF of the current encoding frame for a given target bitrate R.
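The control loop behind such a scheme can be sketched as a simple per-frame QF update. This is not Dirac's actual R-QF model; it only assumes that bitrate grows monotonically with QF and nudges the QF by the log-bitrate error, with an assumed valid QF range of 1 to 10.

```python
import math

def update_qf(qf, produced_bits, target_bits, gain=0.5, lo=1.0, hi=10.0):
    """One step of a sketched QF controller: if the last frame used fewer
    bits than the target, raise QF (spend more bits, better quality);
    if it overshot, lower QF. `gain`, `lo`, `hi` are assumed values."""
    err = math.log2(target_bits / produced_bits)  # > 0: budget left over
    return min(max(qf + gain * err, lo), hi)      # clamp to valid QF range
```

Averaged over a GOP, repeated small corrections of this kind pull the produced bitrate toward the target without large frame-to-frame quality swings.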
In some applications, such as video conferencing, real-time encoding and decoding with minimum delay is crucial, but the ability to encode and decode in real time is largely determined by the complexity of the encoder/decoder. The motion estimation process inside the encoder is the most time-consuming stage, so reducing its complexity brings the codec one step closer to real-time operation. As a partial contribution toward real-time application, in the final phase of the research a fast Motion Estimation (ME) strategy is designed and implemented. It combines a modified adaptive search with a semi-hierarchical approach to motion estimation. The same strategy was implemented in both Dirac and H.264 in order to investigate its performance on different codecs. Together with this fast ME strategy, a method called partial cost function calculation is presented to further reduce the computational load of the cost function calculation. The calculation is based on pre-defined sets of pixel patterns chosen to have maximum coverage over the whole block.
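The partial cost function idea can be sketched as a SAD accumulated over a subsample pattern with early termination. The checkerboard pattern and early-abort rule below are illustrative stand-ins for the thesis's pre-defined patterns, chosen here simply because a checkerboard covers the whole block with half the pixels:

```python
import numpy as np

def partial_sad(block, cand, pattern, best_so_far):
    """Accumulate the SAD over the subsample pattern only, and abort as soon
    as the running cost reaches the best full cost found so far, since this
    candidate can no longer win the motion search."""
    cost = 0
    for (r, c) in pattern:
        cost += abs(int(block[r, c]) - int(cand[r, c]))
        if cost >= best_so_far:
            return best_so_far   # early termination: reject cheaply
    return cost

# Checkerboard pattern over an 8x8 block: half the pixels, full coverage.
pattern = [(r, c) for r in range(8) for c in range(8) if (r + c) % 2 == 0]
```

In a motion search loop, `best_so_far` starts at the cost of the zero-motion candidate, so most displaced candidates are discarded after only a few pattern points.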
In summary, this research work has contributed to the error-resilient transmission of compressed bitstreams of the Dirac video encoder over a bandwidth-limited, error-prone channel. In addition, the final phase of the research has partially contributed toward the real-time application of the Dirac video codec by implementing a fast motion estimation strategy together with the partial cost function calculation idea.
BBC R&D and Brunel University