
    Entropic Approach to Multiscale Clustering Analysis

    (iii) not biased against the null hypothesis. Applications to the physics of ultra-high energy cosmic rays, as a cosmological probe, are presented and discussed.

    Statistical mechanics characterization of spatio-compositional inhomogeneity

    On the basis of a model system of pillars built of unit cubes, a two-component entropic measure for the multiscale analysis of spatio-compositional inhomogeneity is proposed. It quantifies the statistical dissimilarity per cell between the actual configurational macrostate and the theoretical reference one that maximizes entropy. Two kinds of disorder compete: i) the spatial one, connected with the possible positions of pillars inside a cell (the first component of the measure); ii) the compositional one, linked to the ways each local sum of integer heights can be composed from the pillars occupying the cell (the second component). As both the number of pillars and the sum of their heights are conserved, an upper limit h_max for a pillar height arises. If a further constraint imposes the more demanding limit h <= h* < h_max, the exact number of restricted compositions can then be obtained only through the generating function. However, at least for systems with exclusively compositional degrees of freedom, we show that neglecting h* does not destroy the close correlation between the h*-constrained entropic measure and its less demanding counterpart, which is much easier to compute. The given examples illustrate the broad applicability of the measure and its ability to quantify some of the subtleties of fractional Brownian motion, the time evolution of a quasipattern [28,29], and the reconstruction of a laser-speckle pattern [2], which are otherwise hard to discern or even missed.
    Comment: 17 pages, 5 figures
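
    The count of restricted compositions mentioned above is the coefficient of x^H in the generating function (x + x^2 + ... + x^{h*})^k, where H is the conserved local sum of heights and k the number of pillars in a cell. Below is a minimal sketch of extracting that coefficient by repeated polynomial convolution, assuming pillar heights are positive integers; the function name and example values are illustrative, not from the paper.

```python
def restricted_compositions(total, parts, h_star):
    """Coefficient of x**total in (x + x**2 + ... + x**h_star)**parts,
    i.e. the number of compositions of `total` into exactly `parts`
    positive integer parts, each bounded above by h_star."""
    poly = [1] + [0] * total          # poly[s]: ways to reach sum s so far
    for _ in range(parts):
        new = [0] * (total + 1)
        for s, ways in enumerate(poly):
            if ways:
                for h in range(1, min(h_star, total - s) + 1):
                    new[s + h] += ways
        poly = new
    return poly[total]

# 3 pillars of total height 6, each at most 3 units tall -> 7 compositions
print(restricted_compositions(6, 3, 3))
```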

    A Smoothed Dual Approach for Variational Wasserstein Problems

    Variational problems that involve Wasserstein distances have recently been proposed to summarize and learn from probability measures. Despite being conceptually simple, such problems are computationally challenging because they involve minimizing over quantities (Wasserstein distances) that are themselves hard to compute. We show that the dual formulation of Wasserstein variational problems introduced recently by Carlier et al. (2014) can be regularized using an entropic smoothing, which leads to smooth, differentiable, convex optimization problems that are simpler to implement and numerically more stable. We illustrate the versatility of this approach by applying it to the computation of Wasserstein barycenters and gradient flows of spatial regularization functionals.
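
    The entropic smoothing at the heart of this line of work is commonly solved with Sinkhorn-style fixed-point iterations. The sketch below computes a single entropy-regularized transport plan between two histograms; it is the standard Sinkhorn scheme rather than the paper's smoothed dual solver, and the value of eps and the toy grid are illustrative.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=500):
    """Entropy-regularized optimal transport between histograms a and b
    with cost matrix C, via the classic Sinkhorn fixed-point iteration.
    Returns the regularized transport plan."""
    K = np.exp(-C / eps)            # Gibbs kernel from the entropic term
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)           # alternate scaling to satisfy the
        u = a / (K @ v)             # row/column marginal constraints
    return u[:, None] * K * v[None, :]

# Toy example: two Gaussian-like histograms on a 1-D grid
x = np.linspace(0, 1, 50)
C = (x[:, None] - x[None, :]) ** 2
a = np.exp(-((x - 0.2) ** 2) / 0.01); a /= a.sum()
b = np.exp(-((x - 0.7) ** 2) / 0.01); b /= b.sum()
P = sinkhorn(a, b, C)
print(np.sum(P * C))                # regularized transport cost
```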

    Variational Methods for Biomolecular Modeling

    Structure, function and dynamics of many biomolecular systems can be characterized by the energetic variational principle and the corresponding systems of partial differential equations (PDEs). This principle allows us to focus on the identification of essential energetic components, the optimal parametrization of energies, and the efficient computational implementation of energy variation or minimization. Given the fact that complex biomolecular systems are structurally non-uniform and their interactions occur through contact interfaces, their free energies are associated with various interfaces as well, such as the solute-solvent interface, molecular binding interface, lipid domain interface, and membrane surfaces. This fact motivates the inclusion of interface geometry, particularly its curvatures, in the parametrization of free energies. Applications of such interface-geometry-based energetic variational principles are illustrated through three concrete topics: the multiscale modeling of biomolecular electrostatics and solvation that includes the curvature energy of the molecular surface, the formation of microdomains on lipid membranes due to the geometric and molecular mechanics at the lipid interface, and the mean curvature driven protein localization on membrane surfaces. By further implicitly representing the interface using a phase field function over the entire domain, one can simulate the dynamics of the interface and the corresponding energy variation by evolving the phase field function, achieving a significant reduction of the number of degrees of freedom and computational complexity. Strategies for improving the efficiency of computational implementations and for extending applications to coarse-graining or multiscale molecular simulations are outlined.
    Comment: 36 pages
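
    To make the phase-field idea concrete, the sketch below evolves a Ginzburg-Landau-type energy by Allen-Cahn gradient flow on a grid, so a circular interface moves by mean curvature while being represented only implicitly through a field. This is a generic phase-field illustration, not the paper's biomolecular model; all parameters are assumptions chosen for stability.

```python
import numpy as np

# Allen-Cahn gradient flow of E[phi] = int (eps/2)|grad phi|^2 + W(phi)/eps,
# with double-well W(phi) = (phi^2 - 1)^2 / 4, so W'(phi) = phi^3 - phi.
n, eps, dt, h = 128, 0.02, 1e-5, 1.0 / 128

x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x)
# Smooth circular interface of radius 0.25, encoded in the field phi
phi = np.tanh((0.25 - np.hypot(X - 0.5, Y - 0.5)) / eps)

def laplacian(f):
    """5-point Laplacian with periodic boundaries."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / h**2

for _ in range(2000):
    # phi_t = eps * Lap(phi) - W'(phi)/eps  (explicit Euler step)
    phi += dt * (eps * laplacian(phi) - (phi**3 - phi) / eps)
```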

    Self-Assembly of Nanocomponents into Composite Structures: Derivation and Simulation of Langevin Equations

    The kinetics of the self-assembly of nanocomponents into a virus, nanocapsule, or other composite structure is analyzed via a multiscale approach. The objective is to achieve predictability and to preserve key atomic-scale features that underlie the formation and stability of the composite structures. We start with an all-atom description, the Liouville equation, and the order parameters characterizing nanoscale features of the system. An equation of Smoluchowski type for the stochastic dynamics of the order parameters is derived from the Liouville equation via a multiscale perturbation technique. The self-assembly of composite structures from nanocomponents with internal atomic structure is analyzed and growth rates are derived. Applications include the assembly of a viral capsid from capsomers, a ribosome from its major subunits, and composite materials from fibers and nanoparticles. Our approach overcomes errors in other coarse-graining methods which neglect the influence of the nanoscale configuration on the atomistic fluctuations. We account for the effect of order parameters on the statistics of the atomistic fluctuations which contribute to the entropic and average forces driving order parameter evolution. This approach enables an efficient algorithm for computer simulation of self-assembly, whereas other methods severely limit the timestep due to the separation of diffusional and complexing characteristic times. Given that our approach does not require recalibration with each new application, it provides a way to estimate assembly rates and thereby facilitate the discovery of self-assembly pathways and kinetic dead-end structures.
    Comment: 34 pages, 11 figures
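
    In the overdamped limit, Smoluchowski-type dynamics of an order parameter reduces to a stochastic differential equation driven by thermal-average forces plus noise. Below is a minimal Euler-Maruyama sketch for a single hypothetical order parameter in a double-well free energy; the potential, the diffusivity D, and the choice of units with k_B T = 1 are all assumptions for illustration, not values from the paper.

```python
import numpy as np

# Overdamped Langevin dynamics of order parameter phi:
#   d(phi) = -D * F'(phi) dt + sqrt(2 D) dW,  with F(phi) = (phi^2 - 1)^2.
# The wells at phi = -1 / +1 stand in for disassembled / assembled states.
rng = np.random.default_rng(0)
D, dt, n_steps = 1.0, 1e-3, 100_000

def dF(phi):
    return 4 * phi * (phi**2 - 1)     # F'(phi)

phi = -1.0                            # start in the "disassembled" well
traj = np.empty(n_steps)
for i in range(n_steps):
    phi += -D * dF(phi) * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
    traj[i] = phi

print("fraction of time in assembled well:", np.mean(traj > 0))
```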

    Generating realistic scaled complex networks

    Research on generative models is a central project in the emerging field of network science; it studies how statistical patterns found in real networks can be generated by formal rules. Output from these generative models is then the basis for designing and evaluating computational methods on networks, and for verification and simulation studies. During the last two decades, a variety of models have been proposed with the ultimate goal of achieving comprehensive realism for the generated networks. In this study, we (a) introduce a new generator, termed ReCoN; (b) explore how ReCoN and some existing models can be fitted to an original network to produce a structurally similar replica; (c) use ReCoN to produce networks much larger than the original exemplar; and finally (d) discuss open problems and promising research directions. In a comparative experimental study, we find that ReCoN is often superior to many other state-of-the-art network generation methods. We argue that ReCoN is a scalable and effective tool for modeling a given network while preserving important properties at both the micro- and macroscopic scales, and for scaling the exemplar data by orders of magnitude in size.
    Comment: 26 pages, 13 figures, extended version; a preliminary version of the paper was presented at the 5th International Workshop on Complex Networks and their Applications
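
    The fit-replicate-scale workflow can be mimicked, far more crudely than ReCoN does it, with a plain configuration model: replicate the original degree sequence several times and rewire. The sketch below uses networkx purely for illustration; unlike ReCoN it preserves the degree distribution but not the community structure.

```python
import networkx as nx

def scaled_configuration_replica(G, scale=4, seed=0):
    """Build a network `scale` times larger than G by duplicating its
    degree sequence and rewiring with a configuration model."""
    degrees = [d for _, d in G.degree()] * scale
    H = nx.configuration_model(degrees, seed=seed)
    H = nx.Graph(H)                              # collapse multi-edges
    H.remove_edges_from(nx.selfloop_edges(H))    # drop self-loops
    return H

G = nx.karate_club_graph()
H = scaled_configuration_replica(G, scale=4)
print(G.number_of_nodes(), "->", H.number_of_nodes())
print("avg clustering:", nx.average_clustering(G), nx.average_clustering(H))
```

    The clustering comparison printed at the end typically shows the configuration-model replica losing most of the original's clustering, which is exactly the kind of mesoscale property ReCoN is designed to preserve.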