
    On the rate-distortion performance and computational efficiency of the Karhunen-Loeve transform for lossy data compression

    We examine the rate-distortion performance and computational complexity of linear transforms for lossy data compression. The goal is to better understand the performance/complexity tradeoffs associated with using the Karhunen-Loeve transform (KLT) and its fast approximations. Since the optimal transform for transform coding is unknown in general, we investigate the performance penalties associated with using the KLT by examining cases where the KLT fails, developing a new transform that corrects the KLT's failures in those examples, and then empirically testing the performance difference between this new transform and the KLT. Experiments demonstrate that while the worst KLT can yield transform coding performance at least 3 dB worse than that of alternative block transforms, the performance penalty associated with using the KLT on real data sets seems to be significantly smaller, giving at most 0.5 dB difference in our experiments. The KLT and its fast variations studied here range in complexity requirements from O(n^2) to O(n log n) in coding vectors of dimension n. We empirically investigate the rate-distortion performance tradeoffs associated with traversing this range of options. For example, an algorithm with complexity O(n^3/2) and memory O(n) gives 0.4 dB performance loss relative to the full KLT in our image compression experiment.
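
    As background for the complexity figures quoted above, the following is a minimal sketch, not taken from the paper, of how a KLT basis can be estimated from training vectors and applied as an O(n^2)-per-vector orthogonal block transform; the dimension, the synthetic data, and the function names are illustrative assumptions (Python with numpy).

```python
# Minimal sketch (illustrative, not the paper's code): estimate a KLT basis
# from training vectors and apply it as an orthogonal block transform.
import numpy as np

def estimate_klt(vectors):
    """KLT basis: eigenvectors of the sample covariance, largest variance first."""
    cov = np.cov(vectors, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns ascending eigenvalues
    return eigvecs[:, np.argsort(eigvals)[::-1]]  # reorder columns by decreasing variance

def klt_forward(basis, x):
    return basis.T @ x                            # O(n^2) per vector of dimension n

def klt_inverse(basis, y):
    return basis @ y                              # orthogonal basis: inverse = transpose

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training = rng.standard_normal((1000, 8))     # 1000 illustrative vectors, n = 8
    basis = estimate_klt(training)
    x = training[0]
    assert np.allclose(klt_inverse(basis, klt_forward(basis, x)), x)
```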

    Foreground Subtraction in Intensity Mapping with the SKA

    21cm intensity mapping experiments aim to observe the diffuse neutral hydrogen (HI) distribution on large scales, which traces the cosmic structure. The Square Kilometre Array (SKA) will have the capacity to measure the 21cm signal over a large fraction of the sky. However, the redshifted 21cm signal in the respective frequencies is faint compared to the Galactic foregrounds produced by synchrotron and free-free electron emission. In this article, we review selected foreground subtraction methods suggested to effectively separate the 21cm signal from the foregrounds with intensity mapping simulations or data. We simulate an intensity mapping experiment feasible with SKA phase 1, including extragalactic and Galactic foregrounds. We give an example of the residuals of the foreground subtraction with an independent component analysis and show that the angular power spectrum is recovered within the statistical errors on most scales. Additionally, the scale of the Baryon Acoustic Oscillations is shown to be unaffected by foreground subtraction. Comment: This article is part of the 'SKA Cosmology Chapter', Advancing Astrophysics with the SKA (AASKA14) Conference, Giardini Naxos (Italy), June 9th-13th 2014.
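
    As a rough illustration of foreground removal by component separation (a stand-in, not the simulation pipeline used in the paper), the sketch below runs FastICA on toy multi-frequency maps made of a smooth power-law foreground plus a faint signal and subtracts the reconstructed components; the band, pixel count, number of components, and foreground model are all illustrative assumptions (Python with numpy and scikit-learn).

```python
# Toy illustration (not the paper's pipeline): blind foreground removal on
# simulated multi-frequency maps with FastICA. All numbers are made up.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_freq, n_pix = 32, 5000
freqs = np.linspace(350.0, 450.0, n_freq)              # MHz, illustrative band

# Smooth power-law "foreground" that dominates a faint, channel-uncorrelated "signal".
foreground = 100.0 * np.outer((freqs / 400.0) ** -2.7, rng.standard_normal(n_pix))
signal = 0.1 * rng.standard_normal((n_freq, n_pix))
maps = foreground + signal                             # shape (n_freq, n_pix)

# Fit a few independent components to the pixel spectra; the smooth foreground is
# captured by the dominant components, and subtracting the reconstruction leaves
# an estimate of the faint signal.
ica = FastICA(n_components=4, random_state=0)
sources = ica.fit_transform(maps.T)                    # (n_pix, n_components)
foreground_model = ica.inverse_transform(sources).T    # smooth reconstruction
residual = maps - foreground_model

print("residual rms:", residual.std(), "injected signal rms:", signal.std())
```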

    Suboptimality of the Karhunen-Loève transform for transform coding

    We examine the performance of the Karhunen-Loeve transform (KLT) for transform coding applications. The KLT has long been viewed as the best available block transform for a system that orthogonally transforms a vector source, scalar quantizes the components of the transformed vector using optimal bit allocation, and then inverse transforms the vector. This paper treats fixed-rate and variable-rate transform codes of non-Gaussian sources. The fixed-rate approach uses an optimal fixed-rate scalar quantizer to describe the transform coefficients; the variable-rate approach uses a uniform scalar quantizer followed by an optimal entropy code, and each quantized component is encoded separately. Earlier work shows that for the variable-rate case there exist sources on which the KLT is not unique and the optimal quantization and coding stage matched to a "worst" KLT yields performance as much as 1.5 dB worse than the optimal quantization and coding stage matched to a "best" KLT. In this paper, we strengthen that result to show that in both the fixed-rate and the variable-rate coding frameworks there exist sources for which the performance penalty for using a "worst" KLT can be made arbitrarily large. Further, we demonstrate in both frameworks that there exist sources for which even a best KLT gives suboptimal performance. Finally, we show that even for vector sources where the KLT yields independent coefficients, the KLT can be suboptimal for fixed-rate coding.
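
    As a rough illustration of the variable-rate scheme described above (uniform scalar quantization of transform coefficients followed by entropy coding), the sketch below uses the empirical per-component entropy as a stand-in for an optimal entropy code; the step size, the synthetic source, and the function names are illustrative assumptions rather than the paper's construction (Python with numpy).

```python
# Minimal sketch of a variable-rate transform code: transform, uniform scalar
# quantization of each coefficient, and per-component empirical entropy as a
# proxy for the rate of an optimal entropy code. Numbers are illustrative.
import numpy as np

def empirical_entropy(symbols):
    """Entropy in bits of the empirical distribution of integer symbols."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def variable_rate_code(vectors, basis, step=0.5):
    coeffs = vectors @ basis                          # transform each row
    quantized = np.round(coeffs / step).astype(int)   # uniform scalar quantizer
    rate = sum(empirical_entropy(quantized[:, k]) for k in range(coeffs.shape[1]))
    reconstruction = (quantized * step) @ basis.T     # inverse transform
    distortion = np.mean((vectors - reconstruction) ** 2)
    return rate, distortion                           # bits per vector, MSE

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.standard_normal((2000, 8)) @ np.diag([4, 3, 2, 1, 1, 1, 0.5, 0.25])
    cov = np.cov(data, rowvar=False)
    _, basis = np.linalg.eigh(cov)                    # one valid KLT basis for this source
    print(variable_rate_code(data, basis))
```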

    Optimal detection of burst events in gravitational wave interferometric observatories

    We consider the problem of detecting a burst signal of unknown shape. We introduce a statistic which generalizes the excess power statistic proposed by Flanagan and Hughes and extended by Anderson et al. The statistic we propose is shown to be optimal for an arbitrary noise spectral characteristic, under the two hypotheses that the noise is Gaussian and that the prior for the signal is uniform. The derivation of the statistic is based on the assumption that a signal affects only N samples in the data stream, but that no other information is available a priori, and that the value of the signal at each sample can be arbitrary. We show that the proposed statistic can be implemented by combining standard time-series analysis tools that can be implemented efficiently, and that the resulting computational cost is still compatible with an on-line analysis of interferometric data. We generalize this version of an excess power statistic to the multiple-detector case, also including the effect of correlated noise. We give full details about the implementation of the algorithm, both for the single- and the multiple-detector case, and we discuss exact and approximate forms, depending on the specific characteristics of the noise and on the assumed length of the burst event. As an example, we show what the sensitivity of the network of interferometers to a delta-function burst would be. Comment: 21 pages, 5 figures in 3 groups. Submitted for publication to Phys. Rev. D. A Mathematica notebook is available at http://www.ligo.caltech.edu/~avicere/nda/burst/Burst.nb which allows one to reproduce the numerical results of the paper.
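
    The sketch below is a toy illustration, not the paper's statistic: for already-whitened, unit-variance Gaussian noise, an excess-power-style statistic reduces to the summed squared energy in a sliding window of N samples, which is chi-squared with N degrees of freedom under the noise-only hypothesis. The window length, the injected burst, and the threshold are illustrative assumptions (Python with numpy/scipy).

```python
# Toy illustration (not the paper's algorithm): sliding-window excess power
# for whitened Gaussian noise, thresholded against a chi-squared quantile.
import numpy as np
from scipy.stats import chi2

def excess_power(whitened, N):
    """Sliding-window sum of squares of unit-variance data."""
    return np.convolve(whitened ** 2, np.ones(N), mode="valid")

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    N, n_samples = 64, 100_000
    data = rng.standard_normal(n_samples)                    # whitened detector noise
    data[50_000:50_000 + N] += 2.0 * rng.standard_normal(N)  # strong toy burst of unknown shape
    stat = excess_power(data, N)
    threshold = chi2.ppf(1 - 1e-6, df=N)                     # per-window false-alarm level
    print("max statistic:", stat.max(), "threshold:", threshold,
          "detected:", bool(stat.max() > threshold))
```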

    Feature Based Control of Compact Disc Players


    On the Prediction of Upwelling Events at the Colombian Caribbean Coasts from Modis-SST Imagery

    The upwelling cores on the Caribbean Colombian coasts are mainly located at the Peninsula de la Guajira and Cabo de la Aguja. We used monthly averaged Moderate Resolution Imaging Spectroradiometer (MODIS) sea surface temperature as the only information to build a prediction model for the upwelling events. This comprised two steps: (i) the reduction of the complexity by means of the Karhunen–Loève transform and (ii) a prediction model for the time series. Two prediction models were considered: (a) a parametric autoregressive-moving average (ARMA) time series model from the Box–Jenkins methodology and (b) a harmonic synthesis model. The harmonic synthesis also comprised two steps: a maximum entropy spectral analysis and a least-squares harmonic analysis on the resulting set of frequencies. The parametric ARMA time series model failed at prediction beyond a very narrow range and was quite difficult to apply. The harmonic synthesis allowed prediction with a horizon of six months with a correlation of about 0.80. The results can be summarized using the time series of the weights of the different oscillation modes, their spatial structures with the nodal lines, and a high-confidence model with a prediction horizon of about four months.
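
    A minimal sketch of the harmonic-synthesis step, assuming Python with numpy: a least-squares fit of sines and cosines at a fixed set of frequencies to a monthly series, extrapolated six months ahead. The frequencies and the synthetic series are illustrative; in the paper the frequencies come from the maximum entropy spectral analysis of the KL-mode weights.

```python
# Minimal sketch of harmonic synthesis: least-squares fit of harmonics at fixed
# frequencies, then extrapolation. Frequencies and data are illustrative.
import numpy as np

def harmonic_design(t, freqs):
    """Design matrix [1, cos(2*pi*f*t), sin(2*pi*f*t), ...] for a least-squares fit."""
    cols = [np.ones_like(t)]
    for f in freqs:
        cols.append(np.cos(2 * np.pi * f * t))
        cols.append(np.sin(2 * np.pi * f * t))
    return np.column_stack(cols)

def harmonic_forecast(t, y, freqs, t_future):
    """Fit harmonics at the given frequencies by least squares and extrapolate."""
    coef, *_ = np.linalg.lstsq(harmonic_design(t, freqs), y, rcond=None)
    return harmonic_design(t_future, freqs) @ coef

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    t = np.arange(120.0)                       # ten years of monthly samples
    y = (2.0 * np.sin(2 * np.pi * t / 12)      # annual cycle
         + 0.5 * np.cos(2 * np.pi * t / 6)     # semi-annual cycle
         + 0.2 * rng.standard_normal(t.size))  # noise
    forecast = harmonic_forecast(t, y, freqs=[1 / 12, 1 / 6],
                                 t_future=np.arange(120.0, 126.0))
    print(forecast)                            # six-month-ahead prediction
```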

    Expansion of random boundary excitations for elliptic PDEs

    In this paper we deal with elliptic boundary value problems with random boundary conditions. Solutions to these problems are inhomogeneous random fields which can be represented as series expansions involving a complete set of deterministic functions with corresponding random coefficients. We construct the Karhunen-Loève (K-L) series expansion, which is based on the eigen-decomposition of the covariance operator. It can be applied to simulate both homogeneous and inhomogeneous random fields. We study the correlation structure of solutions to some classical elliptic equations in response to random excitations of functions prescribed on the boundary. We analyze the stochastic solutions for Dirichlet and Neumann boundary conditions to the Laplace equation, the biharmonic equation, and the Lamé system of elasticity equations. Explicit formulae for the correlation tensors of the generalized solutions are obtained when the boundary function is a white noise, or a homogeneous random field on a circle, a sphere, or a half-space. These exact results may serve as an excellent benchmark for developing numerical methods, e.g., Monte Carlo simulations, stochastic volume and boundary element methods.
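
    As a rough illustration of the K-L machinery (not the paper's analytic construction), the sketch below eigendecomposes a discretized covariance matrix for a homogeneous field on a circle and sums the dominant modes with independent Gaussian coefficients; the covariance kernel, grid size, and truncation order are illustrative assumptions (Python with numpy).

```python
# Illustrative sketch: sample a homogeneous random field on a circle from its
# truncated Karhunen-Loeve expansion, using the eigen-decomposition of a
# discretized covariance matrix. Kernel and truncation are made up.
import numpy as np

def kl_sample(cov, n_terms, rng):
    """One realization of the truncated K-L expansion of covariance matrix `cov`."""
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_terms]          # keep the dominant modes
    lam, phi = eigvals[order], eigvecs[:, order]
    xi = rng.standard_normal(n_terms)                    # independent N(0, 1) coefficients
    return phi @ (np.sqrt(np.maximum(lam, 0.0)) * xi)

if __name__ == "__main__":
    n = 256
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    dist = np.abs(theta[:, None] - theta[None, :])       # angular distance on the circle
    dist = np.minimum(dist, 2 * np.pi - dist)
    cov = np.exp(-(dist / 0.5) ** 2)                     # illustrative stationary kernel
    field = kl_sample(cov, n_terms=20, rng=np.random.default_rng(4))
    print(field.shape, field.std())
```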

    Polynomial Chaos Expansion of random coefficients and the solution of stochastic partial differential equations in the Tensor Train format

    We apply the Tensor Train (TT) decomposition to construct the tensor product Polynomial Chaos Expansion (PCE) of a random field, to solve the stochastic elliptic diffusion PDE with the stochastic Galerkin discretization, and to compute some quantities of interest (mean, variance, exceedance probabilities). We assume that the random diffusion coefficient is given as a smooth transformation of a Gaussian random field. In this case, the PCE is delivered by a complicated formula, which lacks an analytic TT representation. To construct its TT approximation numerically, we develop the new block TT cross algorithm, a method that computes the whole TT decomposition from a few evaluations of the PCE formula. The new method is conceptually similar to the adaptive cross approximation in the TT format, but is more efficient when several tensors must be stored in the same TT representation, which is the case for the PCE. Besides, we demonstrate how to assemble the stochastic Galerkin matrix and to compute the solution of the elliptic equation and its post-processing while staying in the TT format. We compare our technique with the traditional sparse polynomial chaos and the Monte Carlo approaches. In the tensor product polynomial chaos, the polynomial degree is bounded for each random variable independently. This provides higher accuracy than the sparse polynomial set or the Monte Carlo method, but the cardinality of the tensor product set grows exponentially with the number of random variables. However, when the PCE coefficients are implicitly approximated in the TT format, computations with the full tensor product polynomial set become possible. In the numerical experiments, we confirm that the new methodology is competitive in a wide range of parameters, especially where high accuracy and high polynomial degrees are required. Comment: This is a major revision of the manuscript arXiv:1406.2816 with significantly extended numerical experiments. Some unused material has been removed.
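
    A minimal sketch, assuming Python with numpy, of what a tensor-product PCE looks like in just two Gaussian variables: coefficients of a lognormal-type coefficient on a tensor product of probabilists' Hermite polynomials, computed by Gauss–Hermite quadrature. The transformation, degrees, and quadrature order are illustrative; the exponentially large coefficient tensor that appears for many variables is exactly what the paper compresses in the TT format, a step omitted here.

```python
# Illustrative sketch (TT compression itself omitted): tensor-product PCE of a
# lognormal-type coefficient in two Gaussian variables, on probabilists'
# Hermite polynomials, with coefficients computed by Gauss-Hermite quadrature.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

def pce_coeffs_2d(f, degree, n_quad=20):
    """Tensor-product PCE coefficients c[i, j] of f(y1, y2) with y ~ N(0, I)."""
    x, w = hermegauss(n_quad)                  # nodes/weights for weight exp(-x^2/2)
    w = w / sqrt(2 * pi)                       # renormalize to the standard Gaussian measure
    Y1, Y2 = np.meshgrid(x, x, indexing="ij")
    W, F = np.outer(w, w), f(Y1, Y2)
    c = np.zeros((degree + 1, degree + 1))
    for i in range(degree + 1):
        He_i = hermeval(x, np.eye(degree + 1)[i])        # He_i at the quadrature nodes
        for j in range(degree + 1):
            He_j = hermeval(x, np.eye(degree + 1)[j])
            norm = factorial(i) * factorial(j)           # E[He_i(Y)^2] = i!
            c[i, j] = np.sum(W * F * np.outer(He_i, He_j)) / norm
    return c

if __name__ == "__main__":
    kappa = lambda y1, y2: np.exp(0.8 * y1 + 0.4 * y2)   # smooth transform of Gaussians
    c = pce_coeffs_2d(kappa, degree=4)
    # Sanity check: c[0, 0] should equal the mean exp((0.8**2 + 0.4**2) / 2).
    print(c[0, 0], np.exp((0.8**2 + 0.4**2) / 2))
```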