
    Smallest small-world network

    Efficiency in passage times is an important issue in designing networks, such as transportation or computer networks. Small-world networks have structures that yield high efficiency while keeping the network highly clustered. We show that among all networks with the small-world structure, the most efficient ones have a single ``center'', from which all shortcuts are connected to uniformly distributed nodes over the network. Networks with several centers and a connected subnetwork of shortcuts are shown to be ``almost'' as efficient. Genetic-algorithm simulations further support our results. Comment: 5 pages, 6 figures, REVTeX
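    A minimal sketch of the structure described above, not the authors' code: a ring lattice whose shortcuts all emanate from a single ``center'' node and land on uniformly distributed nodes. The function name, the parameter values, and the networkx-based measurements are illustrative assumptions.

```python
# Sketch: ring lattice plus shortcuts from one "center" node, as in the abstract.
# Path lengths drop sharply while clustering stays high.
import random
import networkx as nx

def centered_small_world(n=200, k=2, n_shortcuts=20, seed=0):
    """Ring lattice with k neighbours per side plus shortcuts from one center (assumed parameters)."""
    rng = random.Random(seed)
    G = nx.Graph()
    for i in range(n):
        for d in range(1, k + 1):
            G.add_edge(i, (i + d) % n)       # local ring-lattice edges
    center = 0
    targets = rng.sample(range(1, n), n_shortcuts)
    for t in targets:                        # shortcuts: center -> uniformly chosen nodes
        G.add_edge(center, t)
    return G

G = centered_small_world()
print("mean shortest path:", nx.average_shortest_path_length(G))
print("mean clustering:", nx.average_clustering(G))
```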

    A Probabilistic Interpretation of Sampling Theory of Graph Signals

    We give a probabilistic interpretation of sampling theory of graph signals. To do this, we first define a generative model for the data using a pairwise Gaussian random field (GRF) which depends on the graph. We show that, under certain conditions, reconstructing a graph signal from a subset of its samples by least squares is equivalent to performing MAP inference on an approximation of this GRF which has a low-rank covariance matrix. We then show that a sampling set of given size with the largest associated cut-off frequency, which is optimal from a sampling-theoretic point of view, minimizes the worst-case predictive covariance of the MAP estimate on the GRF. This interpretation also gives an intuitive explanation for the superior performance of the sampling-theoretic approach to active semi-supervised classification. Comment: 5 pages, 2 figures, to appear in International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 201
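    A hedged numpy sketch of the least-squares reconstruction step the abstract refers to, not the paper's code: a graph signal assumed to be approximately bandlimited is recovered from a sampled subset by least squares in the span of the first K Laplacian eigenvectors. The function name, the toy path graph, and the chosen bandwidth are illustrative assumptions.

```python
# Sketch: bandlimited least-squares reconstruction of a graph signal from samples.
import numpy as np

def least_squares_reconstruction(L, samples, sample_idx, K):
    """L: graph Laplacian; samples: observed values at sample_idx; K: assumed bandwidth."""
    w, U = np.linalg.eigh(L)           # eigenvectors ordered by increasing frequency
    U_K = U[:, :K]                     # low-frequency basis
    A = U_K[sample_idx, :]             # basis rows at the sampled nodes
    coeffs, *_ = np.linalg.lstsq(A, samples, rcond=None)
    return U_K @ coeffs                # reconstructed signal on all nodes

# Toy example: path graph on 6 nodes with a smooth (low-frequency) signal.
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(1)) - A
x = np.linalg.eigh(L)[1][:, 1]         # a low-frequency test signal
idx = np.array([0, 2, 5])              # sampled nodes
print(least_squares_reconstruction(L, x[idx], idx, K=3))
```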

    The Parameter-Less Self-Organizing Map algorithm

    The Parameter-Less Self-Organizing Map (PLSOM) is a new neural network algorithm based on the Self-Organizing Map (SOM). It eliminates the need for a learning rate and for annealing schemes for the learning rate and neighbourhood size. We discuss the relative performance of the PLSOM and the SOM and demonstrate some tasks in which the SOM fails but the PLSOM performs satisfactorily. Finally, we discuss some example applications of the PLSOM and present a proof of ordering under certain limited conditions. Comment: 29 pages, 27 figures. Based on publication in IEEE Trans. on Neural Networks
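    A rough sketch of the idea behind the PLSOM, not the published algorithm: the size of each weight update and the neighbourhood width are driven by the normalised fitting error of the current input rather than by an annealed learning rate. The exact scaling and neighbourhood functions below, and all parameter values, are assumptions for illustration.

```python
# Sketch: error-driven SOM update (PLSOM-style), with assumed scaling functions.
import numpy as np

def plsom_train(data, grid_w=10, grid_h=10, beta=3.0, seed=0):
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    W = rng.random((grid_w, grid_h, dim))                # weight grid
    coords = np.stack(np.meshgrid(np.arange(grid_w),
                                  np.arange(grid_h), indexing="ij"), axis=-1)
    r = 1e-12                                            # running maximum fitting error
    for x in data:
        d2 = ((W - x) ** 2).sum(axis=-1)                 # squared distance to every unit
        bmu = np.unravel_index(np.argmin(d2), d2.shape)  # best-matching unit
        err = d2[bmu]
        r = max(r, err)
        eps = err / r                                    # normalised error in (0, 1]
        grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
        sigma = beta * eps + 1e-6                        # error-scaled neighbourhood width
        h = np.exp(-grid_d2 / sigma ** 2)                # neighbourhood function
        W += eps * h[..., None] * (x - W)                # error-scaled update, no learning rate
    return W

W = plsom_train(np.random.default_rng(1).random((500, 2)))
print(W.shape)
```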

    {\sc CosmoNet}: fast cosmological parameter estimation in non-flat models using neural networks

    We present a further development of a method for accelerating the calculation of CMB power spectra, matter power spectra and likelihood functions for use in cosmological Bayesian inference. The algorithm, called {\sc CosmoNet}, is based on training a multilayer perceptron neural network. We compute CMB power spectra (up to $\ell = 2000$) and matter transfer functions over a hypercube in parameter space encompassing the $4\sigma$ confidence region of a selection of CMB (WMAP + high-resolution experiments) and large-scale structure surveys (2dF and SDSS). We work in the framework of a generic 7-parameter non-flat cosmology. Additionally we use {\sc CosmoNet} to compute the WMAP 3-year, 2dF and SDSS likelihoods over the same region. We find that the average error in the power spectra is typically well below cosmic variance, and that the experimental likelihoods are calculated to within a fraction of a log unit. We demonstrate that marginalised posteriors generated with {\sc CosmoNet} spectra agree to within a few percent with those generated by {\sc CAMB} parallelised over 4 CPUs, but are obtained 2-3 times faster on just a \emph{single} processor. Furthermore, posteriors generated directly via {\sc CosmoNet} likelihoods can be obtained in less than 30 minutes on a single processor, corresponding to a speed-up of a factor of $\sim 32$. We also demonstrate the capabilities of {\sc CosmoNet} by extending the CMB power spectra and matter transfer function training to a more generic 10-parameter cosmological model, including tensor modes, a varying equation of state of dark energy and massive neutrinos. {\sc CosmoNet} and interfaces to both {\sc CosmoMC} and {\sc Bayesys} are publicly available at {\tt www.mrao.cam.ac.uk/software/cosmonet}. Comment: 8 pages, submitted to MNRAS
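    An illustrative sketch of the emulation idea, not the {\sc CosmoNet} code or its training data: a multilayer perceptron is fitted to map points in a parameter hypercube to spectrum-like outputs, with a cheap synthetic function standing in for a Boltzmann-code call. All names, sizes and the choice of a scikit-learn MLP are assumptions.

```python
# Sketch: emulate an expensive spectrum calculation with a multilayer perceptron.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_params, n_bins = 7, 50                     # 7 parameters, 50 spectrum bins (assumed sizes)
a = rng.normal(size=n_params)                # fixed coefficients of the toy "theory"
b = rng.normal(size=n_params)
ell = np.linspace(2, 2000, n_bins)

def toy_spectrum(theta):
    """Cheap synthetic stand-in for a CAMB-like call; smooth in the parameters."""
    return np.outer(theta @ a, np.sin(ell / 300.0)) + np.outer(theta @ b, np.log(ell))

X_train = rng.uniform(-1, 1, size=(2000, n_params))   # samples in a parameter hypercube
Y_train = toy_spectrum(X_train)

emu = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
emu.fit(X_train, Y_train)                             # train the emulator

X_test = rng.uniform(-1, 1, size=(5, n_params))
print("mean abs emulation error:",
      np.abs(emu.predict(X_test) - toy_spectrum(X_test)).mean())
```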