
    Compressing networks with super nodes

    Community detection is a commonly used technique for identifying groups in a network based on similarities in connectivity patterns. To facilitate community detection in large networks, we recast the network to be partitioned into a smaller network of 'super nodes', each super node comprising one or more nodes of the original network. To define the seeds of our super nodes, we apply the 'CoreHD' ranking from dismantling and decycling. We test our approach with two common methods for community detection: modularity maximization with the Louvain algorithm and maximum likelihood optimization for fitting a stochastic block model. Our results highlight that applying community detection to the compressed network of super nodes is significantly faster while successfully producing partitions that are more aligned with the local network connectivity, more stable across multiple (stochastic) runs within and between community detection algorithms, and overlap well with the results obtained using the full network.
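    The compression step is straightforward to prototype. Below is a minimal Python sketch (assuming networkx; the function names and the simplified CoreHD-style seeding are illustrative, not the authors' exact procedure): seeds are picked by repeatedly taking the highest-degree node of the current 2-core, remaining nodes are attached to the nearest seed by breadth-first search, and each group is contracted into one weighted super node.

```python
import networkx as nx

def corehd_seeds(G, n_seeds):
    """Pick seeds CoreHD-style: repeatedly remove the highest-degree
    node of the current 2-core and record it as a seed."""
    H = nx.k_core(G, 2).copy()
    seeds = []
    while H.number_of_nodes() > 0 and len(seeds) < n_seeds:
        v = max(H.degree, key=lambda nd: nd[1])[0]
        seeds.append(v)
        H.remove_node(v)
        H = nx.k_core(H, 2).copy()  # re-prune back to a 2-core
    return seeds

def compress(G, seeds):
    """Attach every node to the nearest seed (multi-source BFS) and
    contract each group into a single weighted super node."""
    label = {s: s for s in seeds}
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for w in G.neighbors(u):
                if w not in label:
                    label[w] = label[u]
                    nxt.append(w)
        frontier = nxt
    for v in G:                      # unreached nodes stay singletons
        label.setdefault(v, v)
    S = nx.Graph()
    S.add_nodes_from(set(label.values()))
    for u, v in G.edges():
        a, b = label[u], label[v]
        if a != b:
            w = S[a][b]["weight"] + 1 if S.has_edge(a, b) else 1
            S.add_edge(a, b, weight=w)
    return S, label

# Louvain then runs on the much smaller weighted graph, e.g.
# nx.community.louvain_communities(S, weight="weight").
```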

    Stable and actionable explanations of black-box models through factual and counterfactual rules

    Recent years have witnessed the rise of accurate but obscure classification models that hide the logic of their internal decision processes. Explaining the decision taken by a black-box classifier on a specific input instance is therefore of striking interest. We propose a local rule-based model-agnostic explanation method providing stable and actionable explanations. An explanation consists of a factual logic rule, stating the reasons for the black-box decision, and a set of actionable counterfactual logic rules, proactively suggesting the changes in the instance that lead to a different outcome. Explanations are computed from a decision tree that mimics the behavior of the black box locally to the instance to explain. The decision tree is obtained through a bagging-like approach that favors stability and fidelity: first, an ensemble of decision trees is learned from neighborhoods of the instance under investigation; then, the ensemble is merged into a single decision tree. Neighbor instances are synthetically generated through a genetic algorithm whose fitness function is driven by the black-box behavior. Experiments show that the proposed method advances the state of the art toward a comprehensive approach that successfully covers the stability and actionability of factual and counterfactual explanations.
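    A stripped-down illustration of the surrogate idea in Python with scikit-learn (a sketch only: Gaussian perturbation stands in for the paper's genetic neighborhood generator, no tree merging is performed, and `black_box` is any hypothetical predict-style callable):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def local_surrogate_rule(black_box, x, n_samples=1000, scale=0.3, seed=0):
    """Fit a local decision-tree surrogate around instance x and return
    the factual rule: the root-to-leaf path that classifies x."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    y = black_box(Z)                   # query the black box for labels
    tree = DecisionTreeClassifier(max_depth=4).fit(Z, y)

    # Walk x down the tree, collecting the split conditions it satisfies.
    t, node, rule = tree.tree_, 0, []
    while t.children_left[node] != -1:  # -1 marks a leaf in sklearn trees
        f, thr = t.feature[node], t.threshold[node]
        if x[f] <= thr:
            rule.append(f"x[{f}] <= {thr:.3f}")
            node = t.children_left[node]
        else:
            rule.append(f"x[{f}] > {thr:.3f}")
            node = t.children_right[node]
    return rule, tree

# Counterfactual rules would come from nearby leaves with a different
# predicted class, suggesting minimal changes to the split conditions.
```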

    Simulating Hard Rigid Bodies

    Several physical systems in condensed matter have been modeled by approximating their constituent particles as hard objects. The hard-sphere model has indeed been one of the cornerstones of the computational and theoretical description of condensed matter. The next level of description is to treat particles as rigid objects of generic shape, which enriches the possible phenomenology enormously. This kind of modeling is relevant in all situations in which steric effects play a significant role, including biology, soft matter, granular materials, and molecular systems. With a view to developing a general recipe for event-driven molecular dynamics simulations of hard rigid bodies, two algorithms will be described: one for calculating the distance between two convex hard rigid bodies, and one for computing the contact time of two colliding hard rigid bodies by solving a set of non-linear equations. Building on these two methods, an event-driven molecular dynamics algorithm for simulating systems of convex hard rigid bodies will be developed and illustrated in detail. To optimize collision detection between very elongated hard rigid bodies, a novel nearest-neighbor list method based on oriented bounding boxes will be introduced and fully explained. The efficiency and performance of the proposed algorithm will be extensively tested for uniaxial hard ellipsoids and superquadrics. Finally, applications in various scientific fields will be reported and discussed.
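    To make the contact-time idea concrete, here is the classic closed-form special case for hard spheres in Python (standard event-driven MD bookkeeping, not the authors' rigid-body solver, where the analogous contact condition becomes a non-linear system solved numerically):

```python
import numpy as np

def sphere_collision_time(r1, v1, r2, v2, sigma):
    """Earliest time t >= 0 at which two hard spheres with contact
    distance sigma touch: solve |dr + t*dv|^2 = sigma^2 for t.
    Returns None if the spheres never collide."""
    dr, dv = r2 - r1, v2 - v1
    b = np.dot(dr, dv)
    if b >= 0.0:                      # moving apart: no collision
        return None
    a = np.dot(dv, dv)
    c = np.dot(dr, dr) - sigma**2
    disc = b * b - a * c
    if disc < 0.0:                    # closest approach misses sigma
        return None
    return (-b - np.sqrt(disc)) / a   # smaller root = first contact

# An event-driven loop advances all particles ballistically to the
# minimum pending collision time, applies the elastic collision rule,
# and repeats; nearest-neighbor lists keep the number of candidate
# pairs manageable.
```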

    Geospatial Tessellation in the Agent-In-Cell Model: A Framework for Agent-Based Modeling of Pandemic

    Agent-based simulation is a versatile and potent computational modeling technique employed to analyze intricate systems and phenomena spanning diverse fields. However, because of their computational intensity, agent-based models become more resource-demanding when geographic considerations are introduced. This study explores diverse strategies for crafting a series of agent-based models, named "agent-in-the-cell," which emulate a city. These models incorporate geographical attributes of the city and employ real-world mobility data from SafeGraph's publicly available dataset to simulate the dynamics of COVID spread under varying scenarios. The "agent-in-the-cell" concept designates that our representative agents, called meta-agents, are linked to specific home cells in the city's tessellation. We scrutinize tessellations of the mobility map with varying complexities and experiment with the agent density, ranging from matching the actual population to reducing the number of (meta-)agents for computational efficiency. Our findings demonstrate that tessellations constructed according to the Voronoi diagram of specific location types on the street network preserve the dynamics better than both Census Block Group tessellations and Euclidean-based tessellations. Furthermore, the Voronoi diagram tessellation, as well as a hybrid Voronoi diagram / Census Block Group tessellation, requires fewer meta-agents to adequately approximate full-scale dynamics. Our analysis spans a range of city sizes in the United States, encompassing small (Santa Fe, NM), medium (Seattle, WA), and large (Chicago, IL) urban areas. This examination also provides valuable insights into the effects of agent count reduction, varying sensitivity metrics, and the influence of city-specific factors.
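    The geometric core of the cell assignment can be sketched in a few lines of Python with scipy (a simplified planar Euclidean version; the paper builds Voronoi cells from location types on the street network, so the anchors and coordinates here are hypothetical):

```python
import numpy as np
from scipy.spatial import cKDTree

def assign_home_cells(agent_xy, anchor_xy):
    """Map each agent to the Voronoi cell of the nearest anchor point.
    A nearest-neighbor lookup against the anchors is equivalent to a
    point-in-Voronoi-cell query, so no explicit diagram is needed."""
    tree = cKDTree(anchor_xy)
    _, cell = tree.query(agent_xy)    # index of nearest anchor = cell id
    return cell

# Hypothetical usage: 10,000 meta-agents, 50 anchor locations.
rng = np.random.default_rng(0)
agents = rng.uniform(0, 10, size=(10_000, 2))
anchors = rng.uniform(0, 10, size=(50, 2))
cells = assign_home_cells(agents, anchors)
print(np.bincount(cells))             # meta-agents per Voronoi cell
```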

    New off-lattice Pattern Recognition Scheme for off-lattice kinetic Monte Carlo Simulations

    We report the development of a new pattern-recognition scheme for the off-lattice self-learning kinetic Monte Carlo (KMC) method that is simple and flexible enough to be applied to all types of surfaces. In this scheme, to uniquely identify the local environment and the associated processes involving three-dimensional (3D) motion of an atom or atoms, the 3D space around a central atom or leading atom is divided into 3D rectangular boxes. The dimensions and the number of 3D boxes are determined by the type of the lattice and by the accuracy with which a process needs to be identified. As a test of this method, we present the application of off-lattice KMC with the pattern-recognition scheme to 3D Cu island decay on the Cu(100) surface and to 2D diffusion of a Cu monomer and a dimer on the Cu(111) surface. We compare the results and computational efficiency to those available in the literature.
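    A hedged Python sketch of the box-based fingerprinting (the box size and grid dimensions are arbitrary placeholders, not the lattice-specific values the method prescribes): the neighborhood of the central atom is cut into a 3D grid of rectangular boxes, and the occupancy pattern serves as a hashable key into the database of known processes.

```python
import numpy as np

def environment_key(central, neighbors, box=0.5, nbox=6):
    """Fingerprint the local 3D environment of a central atom.
    Space around it is cut into nbox^3 rectangular boxes of side
    `box`; the occupancy pattern is returned as a hashable tuple
    that can index a table of known KMC processes."""
    occ = np.zeros((nbox, nbox, nbox), dtype=np.int8)
    half = nbox * box / 2.0
    for pos in neighbors:
        rel = np.asarray(pos) - np.asarray(central)
        if np.all(np.abs(rel) < half):            # inside the region
            i, j, k = ((rel + half) // box).astype(int)
            occ[i, j, k] = 1
    return tuple(occ.ravel())

# During a self-learning KMC run, an unseen key would trigger a search
# for new processes, which are then cached; a repeated key reuses the
# stored rates instead of recomputing them.
```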

    Report from the MPP Working Group to the NASA Associate Administrator for Space Science and Applications

    NASA's Office of Space Science and Applications (OSSA) gave a select group of scientists the opportunity to test and implement their computational algorithms on the Massively Parallel Processor (MPP) located at Goddard Space Flight Center, beginning in late 1985. One year later, the Working Group presented its report, which addressed the following: algorithms, programming languages, architecture, programming environments, the relation of theory to practice, and measured performance. The findings point to a number of demonstrated computational techniques for which the MPP architecture is ideally suited. For example, besides executing much faster on the MPP than on conventional computers, systolic VLSI simulation (where distances are short), lattice simulation, neural network simulation, and image problems were found to be easier to program on the MPP's architecture than on a CYBER 205 or even a VAX. The report also makes technical recommendations covering all aspects of MPP use, as well as recommendations concerning the future of the MPP and machines based on similar architectures, the expansion of the Working Group, and the study of the role of future parallel processors for space station, EOS, and the Great Observatories era.