3,087 research outputs found

    Measuring the galaxy power spectrum and scale-scale correlations with multiresolution-decomposed covariance -- I. Method

    We present a method of measuring the galaxy power spectrum based on the multiresolution analysis of the discrete wavelet transform (DWT). Since the DWT representation strongly suppresses the off-diagonal components of the covariance for self-similar clustering, the DWT covariance for popular models of the cold dark matter cosmogony is generally diagonal, or j (scale)-diagonal, in the scale range in which the second-order scale-scale correlations are weak. In this range, the DWT covariance gives a lossless estimation of the power spectrum, which is equal to the corresponding Fourier power spectrum banded with logarithmic scaling. In the scale range in which the scale-scale correlation is significant, the accuracy of a power spectrum detection depends on the scale-scale or band-band correlations. That is, a precision measurement of the power spectrum requires a measurement of the scale-scale or band-band correlations. We show that the DWT covariance can be employed to measure both the band-power spectrum and the second-order scale-scale correlation. We also present the DWT algorithm for binning and Poisson sampling of real observational data. We show that the alias effect appearing in usual binning schemes can be exactly eliminated by DWT binning. Since a Poisson process possesses a diagonal covariance in the DWT representation, the effects of Poisson sampling and selection on the detection of the power spectrum and the second-order scale-scale correlation are suppressed to a minimum. Moreover, the effect of the non-Gaussian features of the Poisson sampling can be calculated within this framework. Comment: AAS LaTeX file, 44 pages, accepted for publication in ApJ
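
    As a hedged illustration of the banded power estimate (not the paper's full covariance machinery), the sketch below computes the mean squared DWT detail coefficient per dyadic band for a 1-D density field; the field, the wavelet choice ('db4'), and the grid size are all assumptions.

```python
import numpy as np
import pywt

# A minimal sketch of a DWT band-power estimate for a 1-D density field on a
# regular grid: the mean squared detail coefficient in each dyadic band plays
# the role of the logarithmically banded power spectrum.

rng = np.random.default_rng(0)
field = rng.standard_normal(1024)          # stand-in for a galaxy density contrast

coeffs = pywt.wavedec(field, 'db4', level=6)  # [cA_6, cD_6, ..., cD_1]

# Band power at dyadic scale j: mean squared detail coefficient in that band.
for j, detail in enumerate(coeffs[1:], start=1):
    print(f"band {j} (coarsest=1): P_j = {np.mean(detail**2):.4f}")
```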

    A Multiresolution Census Algorithm for Calculating Vortex Statistics in Turbulent Flows

    The fundamental equations that model turbulent flow do not provide much insight into the size and shape of observed turbulent structures. We investigate the efficient and accurate representation of structures in two-dimensional turbulence by applying statistical models directly to the simulated vorticity field. Rather than extract the coherent portion of the image from the background variation, as in the classical signal-plus-noise model, we present a model for individual vortices using the non-decimated discrete wavelet transform. A template image, supplied by the user, provides the features to be extracted from the vorticity field. By transforming the vortex template into the wavelet domain, specific characteristics present in the template, such as size and symmetry, are broken down into components associated with spatial frequencies. Multivariate multiple linear regression is used to fit the vortex template to the vorticity field in the wavelet domain. Since all levels of the template decomposition may be used to model each level in the field decomposition, the resulting model need not be identical to the template. An application to a vortex census algorithm that records quantities of interest (such as size, peak amplitude, and circulation) as the vorticity field evolves is given. The multiresolution census algorithm extracts coherent structures of all shapes and sizes in simulated vorticity fields and is able to reproduce known physical scaling laws when processing a set of vorticity fields that evolve over time.
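
    A hedged sketch of the core step, assuming PyWavelets and a toy Gaussian "vortex" template: both template and field are decomposed with the non-decimated (stationary) wavelet transform, and a per-level amplitude is fit by least squares. The paper's multivariate regression couples all template levels to each field level; fitting one scalar amplitude per level is a simplification.

```python
import numpy as np
import pywt

# Fit a vortex template to a vorticity field in the non-decimated wavelet
# domain, one least-squares amplitude per decomposition level.

rng = np.random.default_rng(1)
n = 64
y, x = np.mgrid[0:n, 0:n]
template = np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / 50.0)  # Gaussian core
field = 0.8 * template + 0.1 * rng.standard_normal((n, n))        # field = 0.8 * template + noise

level = 3
F = pywt.swt2(field, 'haar', level=level)     # list of (cA, (cH, cV, cD)) per level
T = pywt.swt2(template, 'haar', level=level)

for lev, ((_, fd), (_, td)) in enumerate(zip(F, T), start=1):
    f = np.concatenate([d.ravel() for d in fd])   # field detail coefficients
    t = np.concatenate([d.ravel() for d in td])   # template detail coefficients
    amp = (t @ f) / (t @ t)                       # least-squares amplitude
    print(f"level {lev}: fitted amplitude = {amp:.3f}")   # ~0.8 at each level
```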

    A survey of parallel algorithms for fractal image compression

    This paper presents a short survey of the key research work that has been undertaken in the application of parallel algorithms to fractal image compression. The interest in fractal image compression techniques stems from their ability to achieve high compression ratios whilst maintaining a very high quality in the reconstructed image. The main drawback of this compression method is the very high computational cost associated with the encoding phase. Consequently, there has been significant interest in exploiting parallel computing architectures in order to speed up this phase, whilst still maintaining the advantageous features of the approach. This paper presents a brief introduction to fractal image compression, including the iterated function system theory upon which it is based, and then reviews the different techniques that have been, and can be, applied in order to parallelize the compression algorithm.
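
    The encoding phase whose cost motivates parallelization is, at its core, a brute-force search over contracted domain blocks for each range block. The sketch below is a minimal serial version of that search (fixed block sizes, no isometries, no block classification), which is exactly the loop nest the surveyed parallel algorithms distribute.

```python
import numpy as np

# Minimal fractal (IFS) encoding search: for each range block, find the
# contracted domain block and affine map (s, o) that best reproduce it.

def encode(img, rb=4):
    n = img.shape[0]
    db = 2 * rb
    # Contract every (non-overlapping) domain block by 2x2 averaging.
    domains = []
    for i in range(0, n - db + 1, db):
        for j in range(0, n - db + 1, db):
            d = img[i:i + db, j:j + db]
            domains.append(((i, j), d.reshape(rb, 2, rb, 2).mean(axis=(1, 3))))
    code = []
    for i in range(0, n, rb):
        for j in range(0, n, rb):
            r = img[i:i + rb, j:j + rb].ravel()
            best = None
            for pos, d in domains:
                xcol = d.ravel()
                A = np.column_stack([xcol, np.ones_like(xcol)])
                (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
                err = np.sum((A @ np.array([s, o]) - r) ** 2)
                if best is None or err < best[0]:
                    best = (err, pos, s, o)
            code.append(((i, j), *best[1:]))      # (range pos, domain pos, s, o)
    return code

rng = np.random.default_rng(2)
print(len(encode(rng.random((16, 16)))))           # 16 range blocks encoded
```

    The three nested loops (range blocks x domain blocks x least-squares fit) are embarrassingly parallel over range blocks, which is why the encoding phase maps so naturally onto parallel architectures.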

    Spectral fluctuations of tridiagonal random matrices from the beta-Hermite ensemble

    A time series delta(n), the fluctuation of the nth unfolded eigenvalue, was recently characterized for the classical Gaussian ensembles of NxN random matrices (GOE, GUE, GSE). It is investigated here for the beta-Hermite ensemble as a function of beta (zero or positive) by Monte Carlo simulations. The fluctuation of delta(n) and the autocorrelation function vary logarithmically with n for any beta > 0 (1 << n << N). The simple logarithmic behavior reported for the higher-order moments of delta(n) for the GOE (beta = 1) and the GUE (beta = 2) is valid for any positive beta and is accounted for by Gaussian distributions whose variances depend linearly on ln(n). The 1/f noise previously demonstrated for delta(n) series of the three Gaussian ensembles is characterized by wavelet analysis, both as a function of beta and of N. When beta decreases from 1 to 0, for a given and large enough N, the evolution from a 1/f noise at beta = 1 to a 1/f^2 noise at beta = 0 is heterogeneous, with a ~1/f^2 noise at the finest scales and a ~1/f noise at the coarsest ones. The range of scales in which a ~1/f^2 noise predominates grows progressively as beta decreases. Asymptotically, a 1/f^2 noise is found for beta = 0, while a 1/f noise is the rule for positive beta. Comment: 35 pages, 10 figures, corresponding author: G. Le Caër
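
    The wavelet characterization of 1/f^alpha noise rests on a standard fact: for such a series the detail-coefficient variance grows like 2^(alpha j) with level j, so the slope of log2-variance against level estimates alpha. A minimal sketch, with white noise (alpha ~ 0) and integrated white noise (alpha ~ 2) as test series:

```python
import numpy as np
import pywt

# Estimate the spectral exponent alpha of a 1/f^alpha series from the slope
# of log2(detail variance) versus decomposition level.

rng = np.random.default_rng(3)
white = rng.standard_normal(2**14)
brown = np.cumsum(white)                     # integrated white noise, ~1/f^2

for name, series in [("white (alpha~0)", white), ("brown (alpha~2)", brown)]:
    coeffs = pywt.wavedec(series, 'db3', level=8)
    # coeffs[1:] runs coarsest -> finest; reverse so level 1 is the finest.
    var = [np.var(d) for d in coeffs[:0:-1]]
    levels = np.arange(1, len(var) + 1)
    slope = np.polyfit(levels, np.log2(var), 1)[0]
    print(f"{name}: estimated alpha = {slope:.2f}")
```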

    On the efficient Monte Carlo implementation of path integrals

    We demonstrate that the Lévy-Ciesielski implementation of Lie-Trotter products enjoys several properties that make it extremely suitable for path-integral Monte Carlo simulations: fast computation of paths, fast Monte Carlo sampling, and the ability to use different numbers of time slices for the different degrees of freedom, commensurate with the quantum effects. It is demonstrated that a Monte Carlo simulation in which particles or small groups of variables are updated in a sequential fashion has a statistical efficiency that is always comparable to or better than that of an all-particle or all-variable update sampler. The sequential sampler results in significant computational savings if updating a variable costs only a fraction of the cost of updating all variables simultaneously, or if the variables are independent. In the Lévy-Ciesielski representation, the path variables are grouped in a small number of layers, with the variables from the same layer being statistically independent. The superior performance of the fast sampling algorithm is shown to be a consequence of these observations. Both mathematical arguments and numerical simulations are employed in order to quantify the computational advantages of the sequential sampler, the Lévy-Ciesielski implementation of path integrals, and the fast sampling algorithm. Comment: 14 pages, 3 figures; submitted to Phys. Rev.
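
    A minimal sketch of the Lévy-Ciesielski (layered midpoint) construction for a Brownian bridge; the structural point is that all midpoint variables within one layer are mutually independent, which is what makes layer-by-layer sequential Monte Carlo updates cheap. The level count and the endpoints pinned at zero are illustrative assumptions.

```python
import numpy as np

# Levy-Ciesielski layered midpoint construction of a Brownian bridge on
# [0, 1]: each pass fills one layer of mutually independent midpoints.

def levy_ciesielski_bridge(levels, rng):
    n = 2**levels
    path = np.zeros(n + 1)                   # bridge pinned at both ends
    step = n
    while step > 1:
        half = step // 2
        mids = np.arange(half, n, step)      # one independent layer of midpoints
        sigma = np.sqrt(half / (2.0 * n))    # conditional midpoint std dev
        path[mids] = 0.5 * (path[mids - half] + path[mids + half]) \
                     + sigma * rng.standard_normal(mids.size)
        step = half
    return path

rng = np.random.default_rng(4)
print(levy_ciesielski_bridge(3, rng))        # 9-point bridge sample
```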

    On adaptive wavelet estimation of a class of weighted densities

    We investigate the estimation of a weighted density taking the form $g = w(F)f$, where $f$ denotes an unknown density, $F$ the associated distribution function and $w$ is a known (non-negative) weight. Such a class encompasses many examples, including those arising in order statistics or when $g$ is related to the maximum or the minimum of $N$ (random or fixed) independent and identically distributed (i.i.d.) random variables. We here construct a new adaptive non-parametric estimator for $g$ based on a plug-in approach and the wavelets methodology. For a wide class of models, we prove that it attains fast rates of convergence under the $\mathbb{L}_p$ risk with $p \ge 1$ (not only for $p = 2$, corresponding to the mean integrated squared error) over Besov balls. The theoretical findings are illustrated through several simulations.
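
    A hedged sketch of the plug-in idea, assuming data on [0, 1], a VisuShrink-style hard threshold, and the weight $w(u) = 3u^2$ (the maximum of $N = 3$ i.i.d. draws, one of the examples in the class): the density $f$ is estimated by wavelet smoothing of a fine histogram, $F$ by the empirical CDF, and the two are combined as $\hat{g} = w(F_n)\hat{f}$.

```python
import numpy as np
import pywt

# Plug-in weighted-density estimate: wavelet-smoothed histogram for f,
# empirical CDF for F, combined as g_hat = w(F_n) * f_hat.

rng = np.random.default_rng(5)
sample = rng.beta(2.0, 5.0, size=2000)

nbins = 256
hist, edges = np.histogram(sample, bins=nbins, range=(0.0, 1.0), density=True)
coeffs = pywt.wavedec(hist, 'db2', level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # MAD noise estimate
thr = sigma * np.sqrt(2.0 * np.log(nbins))          # universal-style threshold
coeffs[1:] = [pywt.threshold(c, thr, mode='hard') for c in coeffs[1:]]
f_hat = np.clip(pywt.waverec(coeffs, 'db2')[:nbins], 0.0, None)

x = 0.5 * (edges[:-1] + edges[1:])
F_n = np.searchsorted(np.sort(sample), x) / sample.size   # empirical CDF
w = lambda u: 3.0 * u**2                  # weight for the max of N=3 draws
g_hat = w(F_n) * f_hat
print(g_hat[:5])
```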

    Multiscale 3D Shape Analysis using Spherical Wavelets

    ©2005 Springer. The original publication is available at www.springerlink.com: http://dx.doi.org/10.1007/11566489_57
    Shape priors attempt to represent biological variations within a population. When variations are global, Principal Component Analysis (PCA) can be used to learn major modes of variation, even from a limited training set. However, when significant local variations exist, PCA typically cannot represent such variations from a small training set. To address this issue, we present a novel algorithm that learns shape variations from data at multiple scales and locations using spherical wavelets and spectral graph partitioning. Our results show that when the training set is small, our algorithm significantly improves the approximation of shapes in a testing set over PCA, which tends to oversmooth data.
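
    For contrast with the paper's multiscale approach, the baseline it improves on is the global PCA shape prior sketched below (the landmark matrix, mode count, and data are illustrative assumptions): every retained mode moves the whole shape, which is why small training sets cannot capture localized variation.

```python
import numpy as np

# Global PCA shape prior: learn top modes of variation from training shapes
# (stacked landmark coordinates), then approximate a new shape in that basis.

rng = np.random.default_rng(6)
n_shapes, n_coords = 20, 300                # 20 training shapes, 100 3-D points
shapes = rng.standard_normal((n_shapes, n_coords))

mean = shapes.mean(axis=0)
_, _, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
modes = Vt[:5]                              # keep 5 global modes of variation

# Project a new shape onto the learned subspace (its PCA approximation).
new_shape = rng.standard_normal(n_coords)
approx = mean + ((new_shape - mean) @ modes.T) @ modes
print(np.linalg.norm(new_shape - approx))   # residual the prior cannot express
```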

    Selective Principal Component Extraction and Reconstruction: A Novel Method for Ground Based Exoplanet Spectroscopy

    Context: Infrared spectroscopy of primary and secondary eclipse events probes the composition of exoplanet atmospheres and, using space telescopes, has detected H2O, CH4 and CO2 in three hot Jupiters. However, the available data from space telescopes have limited spectral resolution and do not cover the 2.4 - 5.2 micron spectral region. While large ground based telescopes have the potential to obtain molecular-abundance-grade spectra for many exoplanets, realizing this potential requires retrieving the astrophysical signal in the presence of large Earth-atmospheric and instrument systematic errors. Aims: Here we report a wavelet-assisted, selective principal component extraction method for ground based retrieval of the dayside spectrum of HD 189733b from data containing systematic errors. Methods: The method uses singular value decomposition and extracts those critical points of the Rayleigh quotient which correspond to the planet-induced signal. The method does not require prior knowledge of the planet spectrum or of the physical mechanisms causing systematic errors. Results: The spectrum obtained with our method is in excellent agreement with space based measurements made with HST and Spitzer (Swain et al. 2009b; Charbonneau et al. 2008) and confirms the recent ground based measurements (Swain et al. 2010), including the strong 3.3 micron emission. Comment: 4 pages, 3 figures; accepted for publication by A&A
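
    A hedged sketch of the generic SVD backbone of such methods, not the paper's actual selection rule (critical points of the Rayleigh quotient): decompose the spectro-photometric time series, zero the leading component attributed to systematics, and reconstruct the remainder. The data shapes and the rank-1 cut are assumptions.

```python
import numpy as np

# SVD-based systematics removal: drop the dominant singular component of the
# frames-by-channels time series, keep the residual carrying the faint signal.

rng = np.random.default_rng(7)
n_frames, n_channels = 200, 64
systematics = np.outer(np.sin(np.linspace(0, 6, n_frames)), rng.random(n_channels))
signal = 1e-3 * np.outer(np.ones(n_frames), rng.random(n_channels))
data = systematics + signal + 1e-4 * rng.standard_normal((n_frames, n_channels))

U, S, Vt = np.linalg.svd(data, full_matrices=False)
S_clean = S.copy()
S_clean[0] = 0.0                             # discard the dominant systematic mode
residual = U @ np.diag(S_clean) @ Vt
print(residual.mean(axis=0)[:5])             # what remains of the faint signal
```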

    Deep Regionlets for Object Detection

    In this paper, we propose a novel object detection framework named "Deep Regionlets", which establishes a bridge between deep neural networks and the conventional detection schema for accurate generic object detection. Motivated by the abilities of regionlets to model object deformation and multiple aspect ratios, we incorporate regionlets into an end-to-end trainable deep learning framework. The deep regionlets framework consists of a region selection network and a deep regionlet learning module. Specifically, given a detection bounding box proposal, the region selection network provides guidance on where to select regions from which to learn the features. The regionlet learning module focuses on local feature selection and transformation to alleviate local variations. To this end, we first realize non-rectangular region selection within the detection framework to accommodate variations in object appearance. Moreover, we design a "gating network" within the regionlet learning module to enable soft regionlet selection and pooling. The Deep Regionlets framework is trained end-to-end without additional effort. We perform ablation studies and conduct extensive experiments on the PASCAL VOC and Microsoft COCO datasets. The proposed framework outperforms state-of-the-art algorithms, such as RetinaNet and Mask R-CNN, even without additional segmentation labels. Comment: Accepted to ECCV 2018
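
    The "gating network" amounts to a learned soft weight per regionlet followed by pooling. A hedged numpy sketch of that idea is below; the single linear gate layer, feature sizes, and normalized weighted-sum pooling are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

# Soft regionlet selection and pooling: a sigmoid gate per regionlet feature
# vector, followed by a gate-weighted average over the regionlets.

def gated_regionlet_pool(regionlets, Wg, bg):
    # regionlets: (num_regionlets, feat_dim); Wg: (feat_dim,); bg: scalar
    gates = 1.0 / (1.0 + np.exp(-(regionlets @ Wg + bg)))   # soft gates in (0, 1)
    return (gates[:, None] * regionlets).sum(axis=0) / (gates.sum() + 1e-8)

rng = np.random.default_rng(8)
feats = rng.standard_normal((9, 128))        # e.g. 3x3 regionlets from one proposal
print(gated_regionlet_pool(feats, rng.standard_normal(128), 0.0).shape)  # (128,)
```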

    Measurements design and phenomena discrimination

    The construction of measurements suitable for discriminating signal components produced by phenomena of different types is considered. The required measurements should be capable of cancelling out those signal components which are to be ignored when focusing on a phenomenon of interest. Under the hypothesis that the subspaces hosting the signal components produced by each phenomenon are complementary, their discrimination is accomplished by measurements giving rise to the appropriate oblique projection operator. The subspace onto which the operator should project is selected by nonlinear techniques in line with adaptive pursuit strategies.
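
    A minimal worked example of the projector construction, assuming explicit bases: if A spans the subspace of interest and B spans the components to be cancelled, and the two subspaces are complementary, then building E from the first rows of pinv([A B]) gives the oblique projector onto range(A) along range(B), so E(Aa + Bb) = Aa.

```python
import numpy as np

# Oblique projector onto range(A) along range(B): the nuisance components in
# range(B) are cancelled exactly, not just attenuated.

rng = np.random.default_rng(9)
A = rng.standard_normal((10, 3))             # signal subspace basis
B = rng.standard_normal((10, 4))             # nuisance subspace basis

pinv = np.linalg.pinv(np.hstack([A, B]))     # recovers (a, b) from A a + B b
E = A @ pinv[:A.shape[1]]                    # oblique projector onto span(A)

a, b = rng.standard_normal(3), rng.standard_normal(4)
x = A @ a + B @ b
print(np.allclose(E @ x, A @ a))             # True: nuisance part cancelled
```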