    The Beylkin-Cramer Summation Rule and A New Fast Algorithm of Cosmic Statistics for Large Data Sets

    Based on the Beylkin-Cramer summation rule, we introduce a new fast algorithm that enables us to explore high-order statistics efficiently in large data sets. Central to this technique is the decomposition of both fields and operators within the framework of multi-resolution analysis (MRA), realizing their discrete representations. Accordingly, a homogeneous point process can be equivalently described by the operation of a Toeplitz matrix on a vector, which is accomplished using the fast Fourier transform. The algorithm can be applied widely in cosmic statistics to tackle large data sets. In particular, we demonstrate this novel technique using spherical, cubic, and cylindrical counts-in-cells. Numerical tests show that the algorithm produces excellent agreement with the expected results. Moreover, the algorithm naturally introduces a sharp filter, which is capable of suppressing shot noise in weak signals. In its numerical procedures, the algorithm is somewhat similar to particle-mesh (PM) methods in N-body simulations. Since it scales as O(N log N), it is significantly faster than current particle-based methods, and its computational cost does not depend on the shape or size of the sampling cells. In addition, based on this technique, we further propose a simple, fast scheme to compute second-order statistics of cosmic density fields, and we validate it using simulation samples. We hope the technique developed here will allow a comprehensive study of the non-Gaussianity of cosmic fields in high-precision cosmology. A specific implementation of the algorithm is publicly available upon request to the author.

    Comment: 27 pages, 9 figures included. Revised version; changes include (a) a new fast algorithm for second-order statistics, (b) more numerical tests, including counts in asymmetric cells, two-point correlation functions, and second-order variances, and (c) more discussion of the technique.
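    The computational trick the abstract describes, counting objects in cells of arbitrary shape by convolving a gridded density field with a cell window function via FFT at O(N log N) cost, can be sketched compactly. The snippet below is a minimal 1-D illustration of that FFT-convolution idea only, not the authors' MRA-based implementation; all names (counts_in_cells_fft, cell_radius, the grid sizes) are illustrative assumptions.

```python
# Minimal sketch of FFT-based counts-in-cells on a 1-D periodic grid.
# Illustrative only; names and parameters are not from the paper.
import numpy as np

def counts_in_cells_fft(positions, box_size, grid_size, cell_radius):
    """Count particles inside a top-hat cell centred on every grid point.

    The density field is convolved with the cell window in Fourier
    space, so the cost is O(N log N) regardless of the cell shape.
    """
    # Nearest-grid-point assignment of particles to a density field.
    density, _ = np.histogram(positions, bins=grid_size, range=(0.0, box_size))

    # Top-hat window of half-width cell_radius, sampled on the grid.
    dx = box_size / grid_size
    offsets = np.arange(grid_size) * dx
    sep = np.minimum(offsets, box_size - offsets)  # periodic distance from origin
    window = (sep <= cell_radius).astype(float)

    # Circular convolution via FFT: counts at every cell centre at once.
    counts = np.fft.irfft(np.fft.rfft(density) * np.fft.rfft(window), n=grid_size)
    return np.rint(counts)

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 100.0, size=10_000)
counts = counts_in_cells_fft(pos, box_size=100.0, grid_size=1024, cell_radius=5.0)
print(counts.mean())  # ~ N * (2 * cell_radius / box_size) for a homogeneous sample
```

    Because the cell enters only through its sampled window function, changing the cell's shape or size changes a single array rather than the algorithm's cost, which is the property the abstract claims over particle-based counting.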

    Simultaneous Coherent Structure Coloring facilitates interpretable clustering of scientific data by amplifying dissimilarity

    The clustering of data into physically meaningful subsets often requires assumptions regarding the number, size, or shape of the subgroups. Here, we present a new method, simultaneous coherent structure coloring (sCSC), which accomplishes unsupervised clustering without a priori guidance regarding the underlying structure of the data. sCSC performs a sequence of binary splittings on the dataset such that the most dissimilar data points are required to be in separate clusters. To achieve this, we obtain a set of orthogonal coordinates along which dissimilarity in the dataset is maximized by solving a generalized eigenvalue problem based on the pairwise dissimilarity between the data points to be clustered. This sequence of bifurcations produces a binary tree representation of the system, from which the number of clusters in the data and their interrelationships naturally emerge. To illustrate the effectiveness of the method in the absence of a priori assumptions, we apply it to three exemplary problems in fluid dynamics. Then, we illustrate its capacity for interpretability using a high-dimensional protein folding simulation dataset. While we restrict our examples to dynamical physical systems in this work, we anticipate straightforward translation to other fields where existing analysis tools require ad hoc assumptions on the data structure, lack the interpretability of the present method, or where the underlying processes are less accessible, such as genomics and neuroscience.
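    A single splitting step of the kind described above can be sketched under one common coherent-structure-coloring formulation: solve the generalized eigenvalue problem L z = lambda D z, where L is the graph Laplacian of the pairwise dissimilarity matrix and D its degree matrix, and take the eigenvector of largest eigenvalue, which assigns the most dissimilar points the most different "colors". The code below is a minimal sketch under that assumed formulation, not the authors' full sCSC pipeline; names such as csc_split are illustrative. Recursing on the two halves would yield the binary tree the abstract describes.

```python
# Minimal sketch of one CSC-style binary splitting step, assuming the
# formulation L z = lambda * D z on a pairwise dissimilarity matrix.
import numpy as np
from scipy.linalg import eigh

def csc_split(dissimilarity):
    """Split points into two groups so the most dissimilar pairs separate.

    dissimilarity: symmetric (n, n) matrix of pairwise dissimilarities.
    Returns a boolean mask identifying the two groups.
    """
    A = np.asarray(dissimilarity, dtype=float)
    D = np.diag(A.sum(axis=1))   # degree matrix of the dissimilarity graph
    L = D - A                    # graph Laplacian
    # Generalized eigenproblem; the eigenvector with the LARGEST eigenvalue
    # maximizes separation between dissimilar points (the opposite choice
    # from similarity-based spectral clustering).
    vals, vecs = eigh(L, D)
    coloring = vecs[:, -1]
    # Binary split at the median color; recursion builds the binary tree.
    return coloring >= np.median(coloring)

# Toy example: two well-separated 2-D blobs.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(csc_split(dist).astype(int))
```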

    Incremental Learning of Nonparametric Bayesian Mixture Models

    Clustering is a fundamental task in many vision applications. To date, most clustering algorithms work in a batch setting: training examples must be gathered in a large group before learning can begin. Here we explore incremental clustering, in which data can arrive continuously. We present a novel incremental model-based clustering algorithm based on nonparametric Bayesian methods, which we call Memory Bounded Variational Dirichlet Process (MB-VDP). The number of clusters is determined flexibly by the data, and the approach can be used to automatically discover object categories. The computation required to produce model updates is bounded and does not grow with the amount of data processed. The technique is well suited to very large datasets, and we show that our approach outperforms existing online alternatives for learning nonparametric Bayesian mixture models.
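    The abstract does not spell out the variational updates, so the sketch below deliberately substitutes a much simpler technique with the same incremental, nonparametric flavour: a DP-means-style sequential rule in which new clusters open as the data demand and only per-cluster sufficient statistics (means and counts) are retained, so memory stays bounded. This is NOT the authors' MB-VDP algorithm; the class name and threshold parameter are invented for the illustration.

```python
# Simplified DP-means-style incremental clustering sketch (not MB-VDP):
# the cluster count grows with the data, memory holds only summaries.
import numpy as np

class IncrementalClusterer:
    def __init__(self, new_cluster_threshold):
        self.threshold = new_cluster_threshold
        self.centers = []   # running mean per cluster
        self.counts = []    # points absorbed per cluster (sufficient
                            # statistics only; the raw data is discarded)

    def add(self, x):
        x = np.asarray(x, dtype=float)
        if self.centers:
            d = [np.linalg.norm(x - c) for c in self.centers]
            k = int(np.argmin(d))
            if d[k] <= self.threshold:
                # Update the running mean of the closest cluster.
                self.counts[k] += 1
                self.centers[k] += (x - self.centers[k]) / self.counts[k]
                return k
        # Point is far from every existing cluster: open a new one.
        self.centers.append(x.copy())
        self.counts.append(1)
        return len(self.centers) - 1

rng = np.random.default_rng(2)
stream = np.vstack([rng.normal(m, 0.2, (50, 2)) for m in (0.0, 2.0, 5.0)])
clusterer = IncrementalClusterer(new_cluster_threshold=1.0)
labels = [clusterer.add(x) for x in stream]
print(len(clusterer.centers), "clusters discovered")
```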