
    Heterogeneity in structurally arrested hard spheres

    When cooled or compressed sufficiently rapidly, a liquid vitrifies into a glassy amorphous state. Vitrification in a dense liquid is associated with jamming of the particles. For hard spheres, the density and degree of order in the final structure depend on the compression rate: simple intuition suggests, and previous computer simulation demonstrates, that slower compression results in states that are both denser and more ordered. In this work, we use the Lubachevsky-Stillinger algorithm to generate a sequence of structurally arrested hard-sphere states by varying the compression rate. We find that while the degree of order, as measured by both bond-orientational and translational order parameters, increases monotonically with decreasing compression rate, the density of the arrested state first increases, then decreases, then increases again as the compression rate decreases, showing a minimum at an intermediate compression rate. Examination of the distribution of the local order parameters and of the root-mean-square fluctuations of the particle positions, together with direct visual inspection of the arrested structures, reveals that they are structurally heterogeneous, consisting of disordered, amorphous regions and locally ordered crystal-like domains. In particular, the low-density arrested states correspond to many interconnected small crystal clusters that form a polycrystalline network interspersed in an amorphous background, suggesting that jamming by these domains may be an important mechanism for such states.
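    The abstract's event-driven Lubachevsky-Stillinger algorithm grows sphere diameters during a molecular-dynamics run. As a much simpler illustration of rate-dependent compression toward an arrested state, the sketch below uses a toy Monte Carlo stand-in (not the paper's method): hard disks in a periodic box take random moves that reject overlaps, while their diameter grows by a chosen increment each sweep until further growth would create an overlap. All parameter values here are illustrative assumptions.

    ```python
    import numpy as np

    def mc_compress(n=64, growth_per_sweep=1e-3, sweeps=200, seed=0):
        """Toy Monte Carlo compression of hard disks in a periodic unit box.

        NOT the event-driven Lubachevsky-Stillinger algorithm used in the
        paper; just a rough illustration of compression toward jamming:
        disks diffuse via accepted random moves while their common
        diameter grows each sweep, until growth would create an overlap.
        Returns the packing fraction of the arrested configuration.
        """
        rng = np.random.default_rng(seed)
        g = int(round(np.sqrt(n)))
        assert g * g == n, "n must be a perfect square for the grid start"
        xs = (np.arange(g) + 0.5) / g
        pos = np.array([(x, y) for x in xs for y in xs])  # overlap-free start
        d = 0.02                                          # starting diameter

        def overlaps(p, i, diam):
            delta = np.abs(pos - p)
            delta = np.minimum(delta, 1.0 - delta)        # minimum image
            r2 = (delta ** 2).sum(axis=1)
            r2[i] = np.inf                                # ignore self
            return bool(np.any(r2 < diam ** 2))

        for _ in range(sweeps):
            for i in range(n):
                trial = (pos[i] + rng.normal(0.0, 0.02, 2)) % 1.0
                if not overlaps(trial, i, d):             # hard-core rejection
                    pos[i] = trial
            # stop growing when the next increment would cause an overlap
            if any(overlaps(pos[i], i, d + growth_per_sweep) for i in range(n)):
                break
            d += growth_per_sweep

        return n * np.pi * (d / 2.0) ** 2                 # packing fraction
    ```

    Slower growth (smaller `growth_per_sweep`, more sweeps) gives the disks more time to rearrange per unit of diameter growth, which is the qualitative knob the abstract's compression-rate sweep turns.
    
    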

    Scalable Label Distribution Learning for Multi-Label Classification

    Multi-label classification (MLC) refers to the problem of tagging a given instance with a set of relevant labels. Most existing MLC methods assume that the correlation between the two labels in each label pair is symmetric, an assumption violated in many real-world scenarios. Moreover, most existing methods design learning processes whose cost scales with the number of labels, which makes their computational complexity a bottleneck on large-scale output spaces. To tackle these issues, we propose a novel MLC method named Scalable Label Distribution Learning (SLDL), which describes labels as distributions in a latent space where the label correlation is asymmetric and whose dimension is independent of the number of labels. Specifically, SLDL first converts labels into continuous distributions within a low-dimensional latent space and leverages an asymmetric metric to establish the correlation between different labels. Then, it learns the mapping from the feature space to the latent space, so that the computational complexity is no longer tied to the number of labels. Finally, SLDL uses a nearest-neighbor-based strategy to decode the latent representations and obtain the final predictions. Our extensive experiments show that SLDL achieves very competitive classification performance at low computational cost.
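    The abstract describes a three-stage pipeline: embed labels in a low-dimensional latent space, regress features onto that space, and decode predictions by nearest neighbors. The sketch below mirrors only that overall structure; the label embedding (a random projection of co-occurrence rows), the ridge regression, and the symmetric distance used for decoding are all stand-in assumptions, not the paper's asymmetric label distributions.

    ```python
    import numpy as np

    def fit_sldl_sketch(X, Y, latent_dim=8, lam=1e-2, seed=0):
        """Toy sketch of the encode -> regress pipeline in the SLDL abstract.

        Assumptions (not from the paper): each label is embedded by a random
        Gaussian projection of its co-occurrence row, an instance's latent
        target is the mean embedding of its positive labels, and the
        feature-to-latent map is closed-form ridge regression.
        """
        rng = np.random.default_rng(seed)
        n_labels = Y.shape[1]
        cooc = Y.T @ Y                                     # (L, L) co-occurrence
        label_emb = cooc @ rng.normal(size=(n_labels, latent_dim)) / n_labels
        counts = np.maximum(Y.sum(axis=1, keepdims=True), 1)
        Z = (Y @ label_emb) / counts                       # instance latent targets
        # ridge regression: W = (X^T X + lam I)^{-1} X^T Z
        W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Z)
        return W, label_emb

    def predict_sldl_sketch(X, W, label_emb, top_k=2):
        """Nearest-neighbor decoding: pick the top_k labels whose embeddings
        lie closest to each instance's predicted latent point."""
        Z_hat = X @ W                                      # (n, latent_dim)
        d = np.linalg.norm(Z_hat[:, None, :] - label_emb[None, :, :], axis=2)
        ranks = np.argsort(d, axis=1)[:, :top_k]
        Y_hat = np.zeros((X.shape[0], label_emb.shape[0]), dtype=int)
        np.put_along_axis(Y_hat, ranks, 1, axis=1)
        return Y_hat
    ```

    Note how the regression weight matrix `W` has shape `(n_features, latent_dim)`: its size, and hence the training cost, depends on the latent dimension rather than on the number of labels, which is the scalability point the abstract makes.
    
    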