
    Active matter beyond mean-field: Ring-kinetic theory for self-propelled particles

    A ring-kinetic theory for Vicsek-style models of self-propelled agents is derived from the exact N-particle evolution equation in phase space. The theory goes beyond mean-field and does not rely on Boltzmann's approximation of molecular chaos. It can handle pre-collisional correlations and cluster formation, both of which seem important for understanding the phase transition to collective motion. We propose a diagrammatic technique to perform a small-density expansion of the collision operator and derive the first two equations of the BBGKY hierarchy. An algorithm is presented that numerically solves the evolution equation for the two-particle correlations on a lattice. Agent-based simulations are performed, and informative quantities such as orientational and density correlation functions are compared with those obtained by ring-kinetic theory. Excellent quantitative agreement between simulations and theory is found when the noise and the mean free path are not too small. This shows that there are parameter ranges in Vicsek-like models where the correlated closure of the BBGKY hierarchy gives correct and nontrivial results. We calculate the dependence of the orientational correlations on distance in the disordered phase and find that it appears consistent with a power law with exponent around -1.8, followed by an exponential decay. General limitations of the kinetic theory and its numerical solution are discussed.
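    As a concrete reference point, here is a minimal sketch of the standard two-dimensional Vicsek update rule that such agent-based simulations iterate; the box size, speed, interaction radius, and noise convention below are illustrative choices, and the ring-kinetic solver itself is not reproduced.

```python
import numpy as np

def vicsek_step(pos, theta, L=32.0, v0=0.5, R=1.0, eta=0.4, rng=None):
    """One update of the standard 2D Vicsek model with periodic boundaries.

    Each agent adopts the mean heading of its neighbours within radius R,
    perturbed by uniform angular noise of strength eta (conventions vary).
    """
    if rng is None:
        rng = np.random.default_rng()
    n = len(pos)
    # Pairwise displacements with the minimum-image convention (periodic box).
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neigh = ((d ** 2).sum(-1) <= R ** 2).astype(float)   # includes self
    # Mean neighbour heading via the summed unit vectors.
    sx, sy = neigh @ np.cos(theta), neigh @ np.sin(theta)
    theta_new = np.arctan2(sy, sx) + eta * rng.uniform(-np.pi, np.pi, n)
    pos_new = (pos + v0 * np.column_stack((np.cos(theta_new),
                                           np.sin(theta_new)))) % L
    return pos_new, theta_new
```

    Orientational correlation functions of the kind compared above can then be estimated by binning cos(theta_i - theta_j) over pair separations across many such steps.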

    Doctor of Philosophy

    X-ray computed tomography (CT) is a widely used medical imaging technique that allows for viewing of in vivo anatomy and physiology. In order to produce high-quality images and provide reliable treatment, CT imaging requires precise knowledge of t…

    Cell shape analysis of random tessellations based on Minkowski tensors

    To which degree are shape indices of individual cells of a tessellation characteristic for the stochastic process that generates them? Within the context of stochastic geometry and the physics of disordered materials, this corresponds to the question of relationships between different stochastic models. In the context of image analysis of synthetic and biological materials, this question is central to the problem of inferring information about formation processes from spatial measurements of the resulting random structures. We address this question by a theory-based simulation study of shape indices derived from Minkowski tensors for a variety of tessellation models. We focus on the relationship between two indices: an isoperimetric ratio of the empirical averages of cell volume and area, and the cell elongation quantified by eigenvalue ratios of interfacial Minkowski tensors. Simulation data for these quantities, as well as for their distributions and for correlations of cell shape and volume, are presented for Voronoi mosaics of the Poisson point process, determinantal and permanental point processes, and Gibbs hard-core and random sequential adsorption processes, as well as for Laguerre tessellations of polydisperse spheres and STIT and Poisson hyperplane tessellations. These data are complemented by mechanically stable crystalline sphere and disordered ellipsoid packings and area-minimising foam models. We find that shape indices of individual cells are not sufficient to unambiguously identify the generating process even amongst this limited set of processes. However, we identify significant differences in the shape indices between many of these tessellation models. Given a realization of a tessellation, these shape indices can narrow the choice of possible generating processes, providing a powerful tool which can be further strengthened by density-resolved volume-shape correlations.
    Comment: Chapter of the forthcoming book "Tensor Valuations and their Applications in Stochastic Geometry and Imaging" in Lecture Notes in Mathematics, edited by Markus Kiderlen and Eva B. Vedel Jensen
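    For orientation, the sketch below computes the simplest such shape index in two dimensions, the isoperimetric ratio q = 4*pi*A/P^2 (q = 1 for a disc, q < 1 otherwise), over the bounded cells of a planar Poisson-Voronoi mosaic; the point density and observation window are arbitrary, and the chapter's tensorial elongation indices are not reproduced.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(400, 2))   # ~ Poisson point process in a box
vor = Voronoi(pts)

ratios = []
for region_idx in vor.point_region:
    region = vor.regions[region_idx]
    if -1 in region or len(region) < 3:       # skip unbounded/degenerate cells
        continue
    cell = ConvexHull(vor.vertices[region])   # Voronoi cells are convex
    A, P = cell.volume, cell.area             # scipy 2D: volume = area, area = perimeter
    ratios.append(4.0 * np.pi * A / P ** 2)

print(f"mean isoperimetric ratio over {len(ratios)} cells: {np.mean(ratios):.3f}")
```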

    Fuzzy Techniques for Decision Making 2018

    Zadeh's fuzzy set theory incorporates the impreciseness of data and evaluations by imputing the degrees to which each object belongs to a set. Its success fostered theories that codify the subjectivity, uncertainty, imprecision, or roughness of evaluations. Their rationale is to produce new flexible methodologies that model a variety of concrete decision problems more realistically. This Special Issue gathers contributions addressing novel tools, techniques, and methodologies for decision making (inclusive of both individual and group, single- or multi-criteria decision making) in the context of these theories. It contains 38 research articles that contribute to a variety of setups combining fuzziness, hesitancy, roughness, covering sets, and linguistic approaches. They range from fundamental or technical to applied approaches.
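    As a minimal illustration of the graded membership underlying all of these theories, the toy function below returns the degree to which a value belongs to a fuzzy set; the triangular shape and its parameters are purely illustrative.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: degree 0 at a, rising to 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Degree to which a height of 182 cm belongs to the fuzzy set "tall".
print(triangular(182, a=170, b=190, c=210))   # 0.6
```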

    Learning Task Specifications from Demonstrations

    Real-world applications often naturally decompose into several sub-tasks. In many settings (e.g., robotics), demonstrations provide a natural way to specify the sub-tasks. However, most methods for learning from demonstrations either do not provide guarantees that the artifacts learned for the sub-tasks can be safely recombined or limit the types of composition available. Motivated by this deficit, we consider the problem of inferring Boolean non-Markovian rewards (also known as logical trace properties or specifications) from demonstrations provided by an agent operating in an uncertain, stochastic environment. Crucially, specifications admit well-defined composition rules that are typically easy to interpret. In this paper, we formulate the specification inference task as a maximum a posteriori (MAP) probability inference problem, apply the principle of maximum entropy to derive an analytic demonstration likelihood model, and give an efficient approach to search for the most likely specification in a large candidate pool of specifications. In our experiments, we demonstrate how learning specifications can help avoid common problems that often arise due to ad-hoc reward composition.
    Comment: NIPS 2018
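    The following is a toy sketch, not the paper's algorithm: it scores a few hypothetical Boolean trace specifications with a maximum-entropy style likelihood in which satisfying the demonstrations counts in a specification's favour while being easy to satisfy by chance (estimated by Monte Carlo) counts against it. The alphabet, specifications, demonstrations, and the inverse temperature beta are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical candidate specifications over traces of the symbols a, b, g, h.
specs = {
    "eventually reach g": lambda tr: "g" in tr,
    "avoid h until g":    lambda tr: "h" not in tr.split("g")[0],
    "always avoid h":     lambda tr: "h" not in tr,
}

demos = ["aagb", "abg", "aaag"]   # invented demonstrations: all reach g, none hit h

def random_trace(length=4):
    """A uniformly random trace, standing in for an unconstrained agent."""
    return "".join(rng.choice(list("abgh"), size=length))

beta, n_mc = 5.0, 4000
for name, phi in specs.items():
    # Chance rate: how often an unconstrained agent satisfies the spec.
    base = np.mean([phi(random_trace()) for _ in range(n_mc)])
    # Max-entropy style per-demo likelihood p(tr|phi) ~ exp(beta * [tr satisfies phi]),
    # normalised over traces: Z = base * e^beta + (1 - base).
    log_z = np.log(base * np.exp(beta) + 1.0 - base)
    log_lik = sum(beta * phi(tr) - log_z for tr in demos)
    print(f"{name:20s} log-likelihood = {log_lik:7.2f}")
```

    With a uniform prior over the candidate pool, the MAP specification is simply the highest-scoring candidate.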

    A Robust Distance Measurement and Dark Energy Constraints from the Spherically-Averaged Correlation Function of Sloan Digital Sky Survey Luminous Red Galaxies

    We measure the effective distance to z=0.35, D_V(0.35), from the overall shape of the spherically-averaged two-point correlation function of the Sloan Digital Sky Survey (SDSS) Data Release 7 (DR7) luminous red galaxy (LRG) sample. We find D_V(0.35)=1428_{-73}^{+74} Mpc without assuming a dark energy model or a flat Universe. We find that the derived measurement of r_s(z_d)/D_V(0.35)=0.1143 \pm 0.0030 (the ratio of the sound horizon at the drag epoch to the effective distance to z=0.35) is more tightly constrained and more robust with respect to possible systematic effects. It is also nearly uncorrelated with \Omega_m h^2. Combining our results with the cosmic microwave background and supernova data, we obtain \Omega_k=-0.0032^{+0.0074}_{-0.0072} and w=-1.010^{+0.046}_{-0.045} (assuming a constant dark energy equation of state). By scaling the spherically-averaged correlation function, we find the Hubble parameter H(0.35)=83^{+13}_{-15} km s^{-1} Mpc^{-1} and the angular diameter distance D_A(0.35)=1089^{+93}_{-87} Mpc. We use LasDamas SDSS mock catalogs to compute the covariance matrix of the correlation function, and investigate the use of lognormal catalogs as an alternative. We find that the input correlation function can be accurately recovered from lognormal catalogs, although they give larger errors on all scales (especially small scales) compared to the mock catalogs derived from cosmological N-body simulations.
    Comment: revised, 12 pages, 12 figures
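    As a quick arithmetic cross-check of the quoted central values (not a re-derivation of the fits), one can insert the measured H(0.35) and D_A(0.35) into the standard definition of the effective distance, D_V(z) = [(1+z)^2 D_A(z)^2 cz/H(z)]^{1/3}:

```python
# Consistency check on central values only; uncertainties are ignored.
c = 299792.458                     # speed of light [km/s]
z, D_A, H = 0.35, 1089.0, 83.0     # redshift, D_A [Mpc], H [km/s/Mpc]

D_V = ((1 + z) ** 2 * D_A ** 2 * c * z / H) ** (1.0 / 3.0)
print(f"D_V(0.35) = {D_V:.0f} Mpc")   # ~1398 Mpc, consistent with 1428 +74/-73 Mpc
```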

    Sparse neural networks with large learning diversity

    Coded recurrent neural networks with three levels of sparsity are introduced. The first level is related to the size of messages, which is much smaller than the number of available neurons. The second is provided by a particular coding rule, acting as a local constraint on the neural activity. The third is a characteristic of the low final connection density of the network after the learning phase. Though the proposed network is very simple, being based on binary neurons and binary connections, it is able to learn a large number of messages and recall them, even in the presence of strong erasures. The performance of the network is assessed both as a classifier and as an associative memory.
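    A minimal sketch in the spirit of such clique-based binary associative memories is given below; the cluster count and size are illustrative, storage writes one clique of binary connections per message, and recall is a single winner-take-all pass per cluster rather than a full iterative decoder.

```python
import numpy as np

C, L = 4, 8        # 4 clusters of 8 binary neurons; a message picks one neuron per cluster
W = np.zeros((C * L, C * L), dtype=bool)   # binary connections

def store(msg):
    """Store a message (one symbol in range(L) per cluster) as a clique."""
    idx = [c * L + s for c, s in enumerate(msg)]
    for i in idx:
        for j in idx:
            if i != j:
                W[i, j] = True

def recall(partial):
    """Recover erased symbols (None) by scoring candidates against the known neurons."""
    known = [c * L + s for c, s in enumerate(partial) if s is not None]
    out = []
    for c, s in enumerate(partial):
        if s is not None:
            out.append(s)
        else:
            scores = [W[c * L + cand, known].sum() for cand in range(L)]
            out.append(int(np.argmax(scores)))   # winner-take-all within the cluster
    return out

store([1, 5, 2, 7])
store([3, 0, 6, 4])
print(recall([1, None, 2, None]))   # -> [1, 5, 2, 7] despite two erasures
```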

    Modelling fraud detection by attack trees and Choquet integral

    Modelling an attack tree is basically a matter of associating a logical "and" and a logical "or", but in most real-world applications related to fraud management the "and/or" logic is not adequate to effectively represent the relationship between a parent node and its children, above all when information about attributes is associated with the nodes and the main problem to solve is how to propagate attribute values up the tree through recursive aggregation operations occurring at the "and/or" nodes. OWA-based aggregations have been introduced to generalize the "and" and "or" operators, starting from the observation that in between the extremes "for all" (and) and "for any" (or), terms (quantifiers) like "several", "most", "few", "some", etc. can be introduced to represent the different weights associated with the nodes in the aggregation. The aggregation process taking place at an OWA node depends on the ordered position of the child nodes, but it does not take care of the possible interactions between the nodes. In this paper, we propose to overcome this drawback by introducing the Choquet integral, whose distinguishing feature is the ability to take into account the interaction between nodes. At first, the attack tree is valuated recursively through a bottom-up algorithm whose complexity is linear in the number of nodes and exponential at every node. Then, the algorithm is extended assuming that the attribute values in the leaves are unimodal LR fuzzy numbers, and the calculation of the Choquet integral is carried out using the alpha-cuts.
    Keywords: fraud detection; attack tree; ordered weighted averaging (OWA) operator; Choquet integral; fuzzy numbers.
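    For reference, the discrete Choquet integral that the paper places at the aggregation nodes can be written in a few lines; the two-child capacity below is a hypothetical example, chosen superadditive to model synergy between children, and the recursive tree valuation and the fuzzy alpha-cut extension are not reproduced.

```python
import numpy as np

def choquet(x, mu):
    """Discrete Choquet integral of scores x w.r.t. a capacity mu.

    mu maps frozensets of child indices to [0, 1]; unlike OWA weights,
    it can encode interaction between specific children.
    """
    total, prev = 0.0, 0.0
    remaining = set(range(len(x)))
    for i in np.argsort(x):                    # ascending by score
        total += (x[i] - prev) * mu[frozenset(remaining)]
        prev = x[i]
        remaining.discard(int(i))
    return total

# Superadditive capacity: the two children together are worth more than
# the sum of their individual weights (synergy).
mu = {frozenset(): 0.0, frozenset({0}): 0.5,
      frozenset({1}): 0.3, frozenset({0, 1}): 1.0}
print(choquet([0.7, 0.4], mu))   # 0.4*1.0 + (0.7-0.4)*0.5 = 0.55
```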

    Probability Transform Based on the Ordered Weighted Averaging and Entropy Difference

    Dempster-Shafer evidence theory can handle imprecise and unknown information, which has attracted wide attention. In most cases, the mass function can be translated into a probability distribution, which is useful for expanding the applications of D-S evidence theory. However, how to reasonably transfer the mass function to the probability distribution is still an open issue. Hence, this paper proposes a new probability transform method based on ordered weighted averaging and entropy difference. The new method calculates weights by ordered weighted averaging and adds the entropy difference as one of the measurement indicators. The transformation with minimum entropy difference is then achieved by adjusting the parameter r of the weight function. Finally, some numerical examples are given to show that the new method is more reasonable and effective.
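    For orientation, the classical pignistic transform below splits each focal element's mass uniformly among its singletons; the paper's method, not reproduced here, replaces that uniform split with OWA-derived weights tuned via the parameter r so that the entropy difference is minimised. The mass function is a made-up example.

```python
import numpy as np

def betp(mass):
    """Pignistic transform: mass maps frozensets of hypotheses to belief mass."""
    p = {}
    for focal, m in mass.items():
        for h in focal:                # split m uniformly over the focal element
            p[h] = p.get(h, 0.0) + m / len(focal)
    return p

m = {frozenset({"a"}): 0.5,
     frozenset({"a", "b"}): 0.3,
     frozenset({"a", "b", "c"}): 0.2}
p = betp(m)
print(p)                               # {'a': 0.717, 'b': 0.217, 'c': 0.067} (approx.)
entropy = -sum(v * np.log2(v) for v in p.values())
print(f"Shannon entropy of the transform: {entropy:.3f} bits")
```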