4,922 research outputs found

    Multiscale Probability Transformation of Basic Probability Assignment

    Decision making is still an open issue in the application of Dempster-Shafer evidence theory, and many approaches have been proposed to address it. In the transferable belief model (TBM), pignistic probabilities derived from the basic probability assignments are used for decision making. In this paper, a multiscale probability transformation of the basic probability assignment, based on the belief function and the plausibility function, is proposed; it generalizes the pignistic probability transformation. In the multiscale probability function, a factor q based on the Tsallis entropy is used to diversify the multiscale probabilities. An example is given showing that the multiscale probability transformation is more reasonable for decision making.
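As a point of reference, the standard pignistic transformation that this paper generalizes can be sketched in a few lines. This is a minimal illustrative implementation of the classical TBM transform only (the paper's multiscale/Tsallis-q variant is not reproduced here); the function name and the toy mass assignment are invented for the example:

```python
def pignistic(bpa):
    """Pignistic probability transformation (TBM): the mass of each
    focal set is split equally among its elements.
    `bpa` maps frozenset (focal element) -> mass, with masses summing to 1."""
    betp = {}
    for focal, mass in bpa.items():
        for element in focal:
            betp[element] = betp.get(element, 0.0) + mass / len(focal)
    return betp

# Toy frame {a, b}: mass 0.6 on {a}, mass 0.4 on the compound set {a, b}
m = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}
p = pignistic(m)
# BetP(a) = 0.6 + 0.4/2 = 0.8, BetP(b) = 0.4/2 = 0.2
```

The multiscale transformation replaces the uniform 1/|A| split above with a weighting controlled by the Tsallis-entropy-based factor q, interpolating between belief- and plausibility-like allocations.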

    Laplacian Mixture Modeling for Network Analysis and Unsupervised Learning on Graphs

    Laplacian mixture models identify overlapping regions of influence in unlabeled graph and network data in a scalable and computationally efficient way, yielding useful low-dimensional representations. By combining Laplacian eigenspace and finite mixture modeling methods, they provide probabilistic or fuzzy dimensionality reductions or domain decompositions for a variety of input data types, including mixture distributions, feature vectors, and graphs or networks. Provable optimal recovery using the algorithm is shown analytically for a nontrivial class of cluster graphs. Heuristic approximations for scalable high-performance implementations are described and empirically tested. Connections to PageRank and community detection in network analysis demonstrate the wide applicability of this approach. The origins of fuzzy spectral methods, beginning with generalized heat or diffusion equations in physics, are reviewed and summarized. Comparisons to other dimensionality reduction and clustering methods for challenging unsupervised machine learning problems are also discussed.
    Comment: 13 figures, 35 references
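The core idea (Laplacian eigenspace embedding followed by soft mixture-style memberships) can be sketched on a toy graph. This is an assumption-laden illustration, not the paper's algorithm: the membership rule and all function names are invented, and the centers are simply the per-group means rather than fitted mixture components:

```python
import numpy as np

def laplacian_embedding(A, k):
    """Embed graph nodes with the k smallest nontrivial eigenvectors of the
    symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    dinv = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(A)) - dinv[:, None] * A * dinv[None, :]
    vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    return vecs[:, 1:k + 1]                 # skip the trivial eigenvector

def soft_memberships(X, centers, beta=5.0):
    """Fuzzy (probabilistic) memberships from squared distances to centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    w = np.exp(-beta * d2)
    return w / w.sum(axis=1, keepdims=True)

# Two 3-cliques joined by a single bridge edge (2 -- 3)
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

X = laplacian_embedding(A, 1)               # 1-D Fiedler-vector embedding
centers = np.array([[X[:3].mean()], [X[3:].mean()]])
U = soft_memberships(X, centers)
# Rows of U sum to 1; nodes 0-2 lean toward one component, 3-5 toward the other
```

The bridge nodes (2 and 3) receive the least confident memberships, which is exactly the overlapping-influence behavior the model is designed to capture.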

    Poisson noise reduction with non-local PCA

    Photon-limited imaging arises when the number of photons collected by a sensor array is small relative to the number of detector elements. Photon limitations are an important concern for many applications such as spectral imaging, night vision, nuclear medicine, and astronomy. Typically a Poisson distribution is used to model these observations, and the inherent heteroscedasticity of the data combined with standard noise removal methods yields significant artifacts. This paper introduces a novel denoising algorithm for photon-limited images which combines elements of dictionary learning and sparse patch-based representations of images. The method employs both an adaptation of Principal Component Analysis (PCA) for Poisson noise and recently developed sparsity-regularized convex optimization algorithms for photon-limited images. A comprehensive empirical evaluation of the proposed method helps characterize the performance of this approach relative to other state-of-the-art denoising methods. The results reveal that, despite its conceptual simplicity, Poisson PCA-based denoising appears to be highly competitive in very low light regimes.
    Comment: erratum: the image "man" is wrongly named "pepper" in the journal version
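A common baseline for this setting, which this sketch shows instead of the paper's Poisson-adapted PCA, is to variance-stabilize the Poisson counts with the Anscombe transform and then denoise patch vectors by projection onto their top principal components. All names here are illustrative, and the simple algebraic inverse below is known to be biased at very low counts:

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform: Poisson counts -> approximately
    unit-variance Gaussian data."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse of the Anscombe transform
    (biased in very low-count regimes; exact inverses exist)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def pca_denoise_patches(patches, n_components):
    """Denoise row-vector patches by projecting the centered data onto
    its top principal components (computed via SVD)."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:n_components]               # top principal directions
    return centered @ basis.T @ basis + mean

# Toy usage: 50 random 4x4 patches, flattened to rows, smoothed with 3 PCs
patches = np.random.default_rng(0).random((50, 16))
smoothed = pca_denoise_patches(anscombe(patches * 30.0), n_components=3)
```

Keeping all 16 components reproduces the input exactly; truncating to a few components is what removes (stabilized) noise at the cost of fine detail.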

    Measuring the galaxy power spectrum and scale-scale correlations with multiresolution-decomposed covariance -- I. method

    We present a method of measuring the galaxy power spectrum based on the multiresolution analysis of the discrete wavelet transformation (DWT). Since the DWT representation strongly suppresses the off-diagonal components of the covariance for self-similar clustering, the DWT covariance for popular models of the cold dark matter cosmogony is generally diagonal, or j (scale)-diagonal, in the scale range in which the second-order scale-scale correlations are weak. In this range, the DWT covariance gives a lossless estimation of the power spectrum, which is equal to the corresponding Fourier power spectrum banded with a logarithmic scaling. In the scale range in which the scale-scale correlation is significant, the accuracy of a power spectrum detection depends on the scale-scale or band-band correlations. That is, for a precision measurement of the power spectrum, a measurement of the scale-scale or band-band correlations is needed. We show that the DWT covariance can be employed to measure both the band-power spectrum and the second-order scale-scale correlation. We also present the DWT algorithm for binning and Poisson sampling with real observational data. We show that the aliasing effect that appears in usual binning schemes can be exactly eliminated by DWT binning. Since a Poisson process possesses a diagonal covariance in the DWT representation, the Poisson sampling and selection effects on the detection of the power spectrum and the second-order scale-scale correlation are suppressed to a minimum. Moreover, the effect of the non-Gaussian features of the Poisson sampling can be calculated within this framework.
    Comment: AAS LaTeX file, 44 pages, accepted for publication in Ap
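The banded power spectrum idea can be illustrated with an orthonormal Haar DWT in one dimension: the mean squared wavelet coefficient at each scale j is a logarithmically banded power estimate. This sketch (function name invented) shows only that estimator, not the paper's covariance or scale-scale correlation machinery:

```python
import numpy as np

def haar_band_powers(density, levels):
    """Band-averaged power estimates from an orthonormal Haar DWT.
    Returns the mean squared wavelet coefficient at each scale j
    (coarsest band first) for a 1-D field of length 2^n."""
    s = np.asarray(density, dtype=float)
    powers = []
    for _ in range(levels):
        detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # wavelet coefficients
        s = (s[0::2] + s[1::2]) / np.sqrt(2.0)        # smoothed field
        powers.append(np.mean(detail ** 2))
    return powers[::-1]

# White noise has a flat banded spectrum, up to sampling scatter
rng = np.random.default_rng(1)
p = haar_band_powers(rng.normal(size=4096), 5)
```

Because the Haar transform is orthonormal, total power is conserved across the decomposition (a discrete Parseval identity), which is what makes the band powers a lossless repackaging of the Fourier power spectrum in the diagonal regime described above.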

    Multilevel Artificial Neural Network Training for Spatially Correlated Learning

    Multigrid modeling algorithms are a technique used to accelerate relaxation models running on a hierarchy of similar graph-like structures. We introduce and demonstrate a new method for training neural networks that uses multilevel methods. Using an objective function derived from a graph-distance metric, we perform orthogonally constrained optimization to find optimal prolongation and restriction maps between graphs. We compare and contrast several methods for performing this numerical optimization, and additionally present new theoretical results on upper bounds for this type of objective function. Once calculated, these optimal maps between graphs form the core of Multiscale Artificial Neural Network (MsANN) training, a new procedure we present that simultaneously trains a hierarchy of neural network models of varying spatial resolution. Parameter information is passed between members of this hierarchy according to standard coarsening and refinement schedules from the multiscale modeling literature. In our machine learning experiments, these models learn faster than default training, achieving a comparable level of error with an order of magnitude fewer training examples.
    Comment: Manuscript (24 pages) and Supplementary Material (4 pages). Updated January 2019 to reflect the new formulation of the MsANN structure and new training procedure
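The role of the prolongation and restriction maps can be sketched with a deliberately simple stand-in: a piecewise-constant prolongation matrix P and its pseudoinverse as the restriction R, so that R P = I. The paper optimizes these maps under orthogonality constraints against a graph-distance objective; nothing of that optimization appears here, and the function names are invented:

```python
import numpy as np

def prolongation(n_coarse, factor=2):
    """Piecewise-constant prolongation P: copy each coarse parameter to
    `factor` consecutive fine parameters (a toy stand-in for the
    optimized maps found in the paper)."""
    P = np.zeros((n_coarse * factor, n_coarse))
    for j in range(n_coarse):
        P[j * factor:(j + 1) * factor, j] = 1.0
    return P

def restriction(P):
    """Restriction as the pseudoinverse of P; here it averages each
    group of fine parameters back to one coarse parameter, so R P = I."""
    return np.linalg.pinv(P)

# Transfer a coarse parameter update to the fine model and back
w_coarse = np.array([1.0, -2.0, 3.0])
P = prolongation(3)
w_fine = P @ w_coarse        # refinement: broadcast coarse values
R = restriction(P)
w_back = R @ w_fine          # coarsening: average fine values
```

In MsANN-style training, updates computed cheaply on the coarse model are prolonged to the fine model (and vice versa) on a multigrid-style schedule, which is the mechanism behind the reported sample-efficiency gains.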