
    Map equation for link community

    Community structure exists in many real-world networks and has been reported to be related to several functional properties of those networks. The conventional approach is to partition nodes into communities, while some recent studies instead partition links in order to find overlapping communities of nodes efficiently. We extend the map equation method, originally developed for node communities, to find link communities in networks. The method is tested on various kinds of networks and compared with their metadata, and the results show that it can identify the overlapping roles of nodes effectively. Its advantage is that the node-community and link-community schemes can be compared quantitatively by measuring the unknown information left in the network beyond the community structure, so it can be used to decide quantitatively whether the link-community scheme should be used instead of the node-community scheme. Furthermore, since the method is based on a random walk, it can easily be extended to directed and weighted networks.
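
    As a point of reference for what the method extends, below is a minimal sketch of the standard two-level map equation for a node partition of an undirected, unweighted network; the function and variable names are illustrative, and this is not the authors' link-community implementation.

    # Sketch: two-level map equation (in bits) for a node partition of an
    # undirected, unweighted network. Illustrative only, not the paper's
    # link-community extension.
    import math
    from collections import defaultdict

    def map_equation(edges, module_of):
        """edges: list of (u, v) pairs; module_of: dict node -> module id."""
        W = float(len(edges))                    # total link weight (unweighted)
        degree = defaultdict(float)
        exit_w = defaultdict(float)              # link weight leaving each module
        for u, v in edges:
            degree[u] += 1.0
            degree[v] += 1.0
            if module_of[u] != module_of[v]:
                exit_w[module_of[u]] += 1.0
                exit_w[module_of[v]] += 1.0

        p = {a: degree[a] / (2.0 * W) for a in degree}   # stationary visit rates
        modules = set(module_of.values())
        q = {i: exit_w[i] / (2.0 * W) for i in modules}  # module exit rates
        q_tot = sum(q.values())

        def plogp(x):
            return x * math.log(x, 2) if x > 0 else 0.0

        # L = q H(Q) + sum_i p_i H(P_i), written in its expanded plogp form
        L = plogp(q_tot)
        L -= 2.0 * sum(plogp(qi) for qi in q.values())
        L -= sum(plogp(pa) for pa in p.values())
        for i in modules:
            p_module = q[i] + sum(p.get(a, 0.0) for a in module_of if module_of[a] == i)
            L += plogp(p_module)
        return L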

    Cross-Entropy Clustering

    We construct a cross-entropy clustering (CEC) theory which finds the optimal number of clusters by automatically removing groups that carry no information. Moreover, our theory gives a simple and efficient criterion for verifying cluster validity. Although CEC can be built on an arbitrary family of densities, in the most important case of Gaussian CEC the division into clusters is affine invariant, the clustering tends to divide the data into ellipsoid-shaped groups, and the approach is computationally efficient since the Hartigan approach can be applied. We also study with particular attention clustering based on spherical Gaussian densities and on Gaussian densities with covariance sI. In the latter case we show that as s converges to zero we obtain classical k-means clustering.
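
    To make the objective concrete, here is a minimal sketch of the CEC energy of a given partition under spherical Gaussian models; names and structure are illustrative rather than the authors' implementation, and it assumes every cluster has nonzero spread.

    # Sketch: CEC energy of a partition under spherical Gaussian models,
    # E = sum_i p_i * ( -ln p_i + cross-entropy of cluster i w.r.t. the
    # best-fitting N(m_i, sigma_i^2 I) ). Illustrative only.
    import numpy as np

    def cec_energy_spherical(X, labels):
        n, d = X.shape
        energy = 0.0
        for c in np.unique(labels):
            pts = X[labels == c]
            p_i = len(pts) / n                       # cluster weight
            m_i = pts.mean(axis=0)
            sigma2 = np.mean(np.sum((pts - m_i) ** 2, axis=1)) / d
            # cross-entropy of the cluster w.r.t. N(m_i, sigma2 * I)
            h_cross = 0.5 * d * np.log(2.0 * np.pi * np.e * sigma2)
            energy += p_i * (-np.log(p_i) + h_cross)
        return energy

    With the covariance instead fixed to sI, the per-cluster term becomes 0.5*d*ln(2*pi*s) plus the mean squared distance to the centre divided by 2s; as s tends to zero the quadratic term dominates, which is the stated k-means limit.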

    A Review of Codebook Models in Patch-Based Visual Object Recognition

    The codebook model-based approach, while ignoring any structural aspect in vision, nonetheless provides state-of-the-art performance on current datasets. The key role of a visual codebook is to map low-level features into a fixed-length vector in histogram space, to which standard classifiers can be applied directly. The discriminative power of such a visual codebook determines the quality of the codebook model, whereas the size of the codebook controls the complexity of the model. The construction of a codebook is therefore an important step, usually done by cluster analysis. However, clustering is a process that retains regions of high density in a distribution, so the resulting codebook need not have discriminant properties; it is also recognised as a computational bottleneck of such systems. In our recent work, we proposed a resource-allocating codebook for constructing a discriminant codebook in a one-pass design procedure, which slightly outperforms more traditional approaches at drastically reduced computing times. In this review we survey several approaches proposed over the last decade, covering their feature detectors, descriptors, codebook construction schemes, choice of classifiers for recognising objects, and the datasets used in evaluating the proposed methods.
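
    As an illustration of the generic pipeline the review covers, the sketch below builds a codebook by k-means clustering of pooled local descriptors and encodes an image as a normalized bag-of-words histogram; the library choice (scikit-learn) and the parameters are illustrative, not those of any particular reviewed method.

    # Sketch: k-means codebook construction and histogram encoding.
    # Illustrative only; choice of library and parameters is arbitrary.
    import numpy as np
    from sklearn.cluster import KMeans

    def build_codebook(all_descriptors, n_words=256, seed=0):
        """Cluster pooled local descriptors (e.g., SIFT) into visual words."""
        return KMeans(n_clusters=n_words, random_state=seed).fit(all_descriptors)

    def encode_image(codebook, descriptors):
        """Map one image's descriptors to a normalized bag-of-words histogram."""
        words = codebook.predict(descriptors)
        hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)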

    Hand gesture recognition based on signals cross-correlation


    Adaptive gravitational softening in GADGET

    Cosmological simulations of structure formation follow the collisionless evolution of dark matter starting from a nearly homogeneous field at early times down to the highly clustered configuration at redshift zero. The density field is sampled by a number of particles vastly smaller than the number of its presumed actual constituents, and this limits the mass and spatial scales over which the results of a simulation can be trusted. Softening of the gravitational force is introduced in collisionless simulations to limit the importance of close encounters between these particles. The softening scale is generally fixed and chosen as a compromise between the need for high spatial resolution and the need to limit particle noise. In cosmological simulations, where the density field evolves to a highly inhomogeneous state, this compromise yields an appropriate choice only for a certain class of objects, the others being subject to either a biased or a noisy dynamical description. We have implemented adaptive gravitational softening lengths in the cosmological simulation code GADGET; the formalism allows the softening scale to vary in space and time according to the density of the environment, at the price of modifying the equation of motion for the particles so as to be consistent with the new dependencies introduced in the system's Lagrangian. We have applied the technique to a number of test cases and to a set of cosmological simulations of structure formation. We conclude that the use of adaptive softening enhances the clustering of particles at small scales, a result visible in the amplitude of the correlation function and in the inner profile of massive objects, thereby anticipating the results expected from much higher-resolution simulations.
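
    The core idea, tying each particle's softening length to the local interparticle spacing, can be sketched as follows. This shows only the density-to-softening mapping and omits the Lagrangian-consistent correction terms to the equation of motion described in the abstract; the neighbour count and names are illustrative.

    # Sketch: density-dependent softening lengths, h_i ~ n_i^(-1/3), with the
    # local number density n_i estimated from the N nearest neighbours.
    # Illustrative only; the correction terms to the equation of motion are
    # not included here.
    import numpy as np
    from scipy.spatial import cKDTree

    def adaptive_softening(positions, n_neighbours=32, eta=1.0):
        tree = cKDTree(positions)
        # distance to the n-th nearest neighbour (k+1 because the query
        # point itself is returned at distance zero)
        dist, _ = tree.query(positions, k=n_neighbours + 1)
        r_n = dist[:, -1]
        # local number density from the neighbour sphere, then h ~ n^(-1/3)
        number_density = n_neighbours / (4.0 / 3.0 * np.pi * r_n ** 3)
        return eta * number_density ** (-1.0 / 3.0)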

    Dynamic Zoom Simulations: a fast, adaptive algorithm for simulating lightcones

    The advent of a new generation of large-scale galaxy surveys is pushing cosmological numerical simulations into uncharted territory. The simultaneous requirements of high resolution and very large volume pose serious technical challenges, due to their computational and data-storage demands. In this paper, we present a novel approach dubbed Dynamic Zoom Simulations -- or DZS -- developed to tackle these issues. Our method is tailored to the production of lightcone outputs from N-body numerical simulations, which allow for more efficient storage and post-processing than standard comoving snapshots and more directly mimic the format of survey data. In DZS, the resolution of the simulation is dynamically decreased outside the lightcone surface, reducing the computational workload while preserving the accuracy inside the lightcone and the large-scale gravitational field. We show that our approach achieves virtually identical results to traditional simulations at half of the computational cost for our largest box, and we forecast this speedup to increase up to a factor of 5 for larger and/or higher-resolution simulations. We assess the accuracy of the numerical integration by comparing pairs of identical simulations run with and without DZS: deviations in the lightcone halo mass function, in the sky-projected lightcone, and in the 3D matter lightcone always remain below 0.1%. In summary, our results indicate that the DZS technique may provide a highly valuable tool to address the technical challenges that will characterise the next generation of large-scale cosmological simulations.
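
    A schematic sketch of the selection step behind such a scheme is given below: particles that the lightcone surface has already swept past no longer contribute to the lightcone output and are candidates for merging into coarser tracers. This is only an illustration of the idea, not the DZS implementation, and all names are hypothetical.

    # Sketch: flag particles outside the lightcone surface as candidates for
    # de-refinement. Illustrative only, not the DZS algorithm.
    import numpy as np

    def outside_lightcone(positions, observer, lightcone_radius, buffer=0.0):
        """positions, observer: comoving coordinates; lightcone_radius:
        comoving distance reached by the backward lightcone at the current
        simulation time."""
        r = np.linalg.norm(positions - observer, axis=1)
        return r > lightcone_radius + buffer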