2,937 research outputs found

    Lenia and Expanded Universe

    Full text link
    We report experimental extensions of Lenia, a family of continuous cellular automata capable of producing lifelike, self-organizing, autonomous patterns. The rule of Lenia was generalized to higher dimensions, multiple kernels, and multiple channels; the final architecture approaches what can be seen as a recurrent convolutional neural network. Using semi-automatic search, e.g. genetic algorithms, we discovered new phenomena such as polyhedral symmetries, individuality, self-replication, emission, and growth by ingestion, and saw the emergence of "virtual eukaryotes" that possess internal division of labor and type differentiation. We discuss the results in the contexts of biology, artificial life, and artificial intelligence. Comment: 8 pages, 5 figures, 1 table; submitted to the ALIFE 2020 conference.
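
    A minimal sketch of the kind of update rule being generalized, assuming a single channel, a single ring-shaped kernel, and illustrative growth parameters rather than the paper's settings: the state grid is convolved with the kernel and nudged by a growth function at each step.

        import numpy as np
        from scipy.signal import convolve2d

        def ring_kernel(radius=13):
            """Ring-shaped kernel: a Gaussian bump over normalized radial distance."""
            y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            r = np.sqrt(x ** 2 + y ** 2) / radius
            k = np.exp(-((r - 0.5) ** 2) / (2 * 0.15 ** 2)) * (r <= 1)
            return k / k.sum()

        def growth(u, mu=0.15, sigma=0.015):
            """Gaussian growth mapping: positive near mu, negative elsewhere."""
            return 2.0 * np.exp(-((u - mu) ** 2) / (2 * sigma ** 2)) - 1.0

        def step(world, kernel, dt=0.1):
            """One continuous-CA update: convolve, grow, clip back to [0, 1]."""
            u = convolve2d(world, kernel, mode="same", boundary="wrap")
            return np.clip(world + dt * growth(u), 0.0, 1.0)

        world = np.random.rand(128, 128)   # random initial state
        K = ring_kernel()
        for _ in range(100):
            world = step(world, K)

    The multi-kernel, multi-channel generalization described in the abstract would run several such convolutions per step and sum their growth contributions per channel, which is why the final architecture can be seen as a recurrent convolutional network.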

    A Framework for Megascale Agent Based Model Simulations on Graphics Processing Units

    Get PDF
    Agent-based modeling is a technique for modeling dynamic systems from the bottom up. Individual elements of the system are represented computationally as agents, and system-level behaviors emerge from the micro-level interactions of those agents. Contemporary state-of-the-art agent-based modeling toolkits are essentially discrete-event simulators designed to execute serially on the Central Processing Unit (CPU). They simulate Agent-Based Models (ABMs) by executing agent actions one at a time. In addition to imposing an unnatural execution order, these toolkits have limited scalability. In this article, we investigate data-parallel computer architectures such as Graphics Processing Units (GPUs) to simulate large-scale ABMs. We have developed a series of efficient, data-parallel algorithms for handling environment updates, various agent interactions, agent death and replication, and gathering statistics. We present three fundamental innovations that provide unprecedented scalability. The first is a novel stochastic memory allocator which enables parallel agent replication in O(1) average time. The second is a technique for resolving precedence constraints for agent actions in parallel. The third is a method that uses specialized graphics hardware to gather and process statistical measures. These techniques have been implemented on a modern-day GPU, resulting in a substantial performance increase. We believe that our system is the first completely GPU-based agent simulation framework. Although GPUs are the focus of our current implementations, our techniques can easily be adapted to other data-parallel architectures. We have benchmarked our framework against contemporary toolkits using two popular ABMs, namely SugarScape and StupidModel. Keywords: GPGPU, Agent Based Modeling, Data Parallel Algorithms, Stochastic Simulations.
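
    The GPU algorithms themselves are not reproduced here, but the data-parallel style they rely on can be illustrated with a CPU-side NumPy sketch in which agent state lives in flat arrays and death and replication are handled with bulk masked operations instead of per-agent loops; the model, names, and constants below are illustrative assumptions, not the paper's framework.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy SugarScape-like population stored as flat arrays (struct-of-arrays),
        # the layout that maps naturally onto data-parallel hardware.
        n = 10_000
        energy = rng.uniform(1.0, 5.0, n)
        pos = rng.integers(0, 256, size=(n, 2))

        def tick(energy, pos, harvest=0.3, metabolic_cost=0.1, split_at=8.0):
            """One synchronous step: all agents act 'at once' via array operations."""
            energy = energy + harvest - metabolic_cost   # harvest and metabolise in bulk

            # Death: compact the arrays with a boolean mask (stream compaction on a GPU).
            alive = energy > 0.0
            energy, pos = energy[alive], pos[alive]

            # Replication: parents above the threshold split their energy with one
            # offspring each; offspring slots are appended in bulk, not one at a time.
            parents = energy > split_at
            child_energy = energy[parents] / 2.0
            energy[parents] /= 2.0
            energy = np.concatenate([energy, child_energy])
            pos = np.concatenate([pos, pos[parents]])
            return energy, pos

        for _ in range(50):
            energy, pos = tick(energy, pos)
        print(len(energy), "agents after 50 ticks")

    On a GPU, the masked compaction and bulk append above are the kinds of steps the paper's stochastic allocator and precedence-resolution technique are designed to handle efficiently.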

    From Social Simulation to Integrative System Design

    Full text link
    As the recent financial crisis showed, there is today a strong need to gain an "ecological perspective" on all relevant interactions in socio-economic-techno-environmental systems. For this, we suggest setting up a network of Centers for Integrative Systems Design, which would be able to run all potentially relevant scenarios, identify causality chains, explore feedback and cascading effects for a number of model variants, and determine the reliability of their implications (given the validity of the underlying models). They would be able to detect possible negative side effects of policy decisions before they occur. The Centers belonging to this network would each focus on a particular field, but they would be part of an attempt to eventually cover all relevant areas of society and the economy and to integrate them within a "Living Earth Simulator". The results of all research activities of such Centers would be turned into informative input for political Decision Arenas. For example, Crisis Observatories (for financial instabilities, shortages of resources, environmental change, conflict, spreading of diseases, etc.) would be connected with such Decision Arenas for the purpose of visualization, in order to make complex interdependencies understandable to scientists, decision-makers, and the general public. Comment: 34 pages, Visioneer White Paper, see http://www.visioneer.ethz.c

    Advancing performability in playable media: a simulation-based interface as a dynamic score

    Get PDF
    When designing playable media with a non-game orientation, alternative play scenarios to gameplay scenarios must be accompanied by alternative mechanics to game mechanics. The problems of designing playable media with a non-game orientation are stated as the problems of designing a platform for creative exploration and creative expression. For such design problems, two requirements are articulated: 1) play state transitions must be dynamic in non-trivial ways in order to achieve a significant level of engagement, and 2) pathways for players’ experience from exploration to expression must be provided. The transformative pathway from creative exploration to creative expression is analogous to the pathways for game players’ skill acquisition in gameplay. The paper first describes the concept of a simulation-based interface and then binds that concept with the concept of a dynamic score; the former partially accounts for the first requirement, the latter for the second. The paper describes the prototype and realization of the binding of the two concepts. "Score" is here defined as a representation of cue organization through a transmodal abstraction. A simulation-based interface with swarm mechanics is presented, and its function as a dynamic score is demonstrated with an interactive musical composition and performance.
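
    As a toy illustration of binding a swarm simulation to a dynamic score (this is not the prototype described in the paper; the update rule, the cue mapping, and all constants are invented for the example), a minimal flock can be stepped and its aggregate state read off as musical cue parameters:

        import numpy as np

        rng = np.random.default_rng(2)

        # A tiny flock: positions and velocities of 40 agents in the unit square.
        pos = rng.random((40, 2))
        vel = rng.normal(0.0, 0.01, (40, 2))

        def flock_step(pos, vel, cohesion=0.01, damping=0.95):
            """Minimal swarm update: each agent drifts toward the flock centroid."""
            centroid = pos.mean(axis=0)
            vel = damping * vel + cohesion * (centroid - pos)
            return pos + vel, vel

        def score_cues(pos, vel):
            """Transmodal mapping from swarm state to cue parameters."""
            spread = pos.std()                          # dispersion -> dynamics
            speed = np.linalg.norm(vel, axis=1).mean()  # mean speed -> tempo
            return {"velocity": min(127, int(spread * 400)), "bpm": 60 + speed * 4000}

        for _ in range(100):
            pos, vel = flock_step(pos, vel)
        print(score_cues(pos, vel))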

    Deep learning cardiac motion analysis for human survival prediction

    Get PDF
    Motion analysis is used in computer vision to understand the behaviour of moving objects in sequences of images. Optimising the interpretation of dynamic biological systems requires accurate and precise motion tracking as well as efficient representations of high-dimensional motion trajectories so that these can be used for prediction tasks. Here we use image sequences of the heart, acquired using cardiac magnetic resonance imaging, to create time-resolved three-dimensional segmentations using a fully convolutional network trained on anatomical shape priors. This dense motion model formed the input to a supervised denoising autoencoder (4Dsurvival), a hybrid network whose autoencoder learns a task-specific latent code representation trained on observed outcome data, yielding a latent representation optimised for survival prediction. To handle right-censored survival outcomes, our network used a Cox partial likelihood loss function. In a study of 302 patients, the predictive accuracy (quantified by Harrell's C-index) was significantly higher (p < 0.0001) for our model (C = 0.73, 95% CI: 0.68-0.78) than for the human benchmark (C = 0.59, 95% CI: 0.53-0.65). This work demonstrates how a complex computer vision task using high-dimensional medical image data can efficiently predict human survival.
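
    The loss used for right-censored outcomes can be sketched as a negative Cox partial log-likelihood; the NumPy version below is only the loss term with illustrative inputs (it is not the 4Dsurvival network, and it ignores tied event times).

        import numpy as np

        def neg_cox_partial_log_likelihood(risk, time, event):
            """Negative Cox partial log-likelihood for right-censored data.

            risk  : (n,) predicted log-risk scores (e.g. the network's output)
            time  : (n,) follow-up times
            event : (n,) 1 if the event was observed, 0 if censored
            """
            order = np.argsort(-time)                     # sort by descending time
            risk, event = risk[order], event[order]
            log_risk_set = np.logaddexp.accumulate(risk)  # log-sum-exp over each risk set
            return -np.sum((risk - log_risk_set)[event == 1])

        # Toy usage with made-up numbers
        risk = np.array([0.2, -0.5, 1.1, 0.0])
        time = np.array([5.0, 8.0, 3.0, 10.0])
        event = np.array([1, 0, 1, 1])
        print(neg_cox_partial_log_likelihood(risk, time, event))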

    Improving path planning of unmanned aerial vehicles in an immersive environment using meta-paths and terrain information

    Get PDF
    Effective command and control of unmanned aerial vehicles (UAVs) is an issue under investigation as the military pushes toward more automation and the incorporation of technology into its operational strategy. UAVs require the intelligence to maneuver safely along a path to an intended target while avoiding obstacles such as other aircraft or enemy threats. To date, path-planning algorithms (designed to aid the operator in the control of semi-autonomous UAVs) have been limited to providing only a single solution (alternate path) without utilizing input or feedback from the UAV operator. The work presented in this thesis builds on and improves an existing path planner. The original path planner presents a unique platform for decision making in a three-dimensional environment where multiple solution paths are generated using Particle Swarm Optimization (PSO) and returned to the operator for evaluation. The paths are optimized to minimize risk due to enemy threats, to minimize fuel consumption incurred by deviating from the original path, and to maximize reconnaissance over predefined targets. The work presented in this thesis focuses on improving the mathematical models of these objectives. Terrain data is also incorporated into the path planner to ensure that the generated alternate paths are feasible and at a safe height above the ground. An effective interface is needed to evaluate the alternate paths returned by PSO. A meta-path is a new concept presented in this thesis to address this issue. Meta-paths allow an operator to explore paths in an efficient and organized manner by displaying multiple alternate paths as a single path cloud. The interface was augmented with more detailed information on these paths to allow the operator to make a more informed decision. Two other interaction techniques were investigated to allow the operator more interactive control over the results displayed by the path planner. Viewing the paths in an immersive environment enhances the operator's understanding of the situation and the options while facilitating better decision making. The problem formulation and solution implementation are described along with the results from several simulated scenarios. Preliminary assessments using simulated scenarios show the usefulness of these features in improving command and control of UAVs. Finally, a user study was conducted to gauge how different visualization capabilities affect operator performance when using an interactive path planning tool. The study demonstrates that viewing alternate paths in 3D instead of 2D takes more time, because the operator switches between multiple views of the paths, but also suggests that 3D is better for allowing the operator to understand more complex situations.
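
    A stripped-down sketch of the core idea, global-best PSO searching over waypoint offsets to trade off threat exposure against deviation from the original path, is given below; the scenario, weights, and PSO constants are invented for illustration, and the thesis's reconnaissance objective, terrain constraints, and meta-path visualization are omitted.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical scenario: 5 free waypoints between a fixed start and goal,
        # one known threat location. All values are illustrative.
        threat = np.array([50.0, 50.0])
        baseline = np.linspace([0.0, 0.0], [100.0, 100.0], 7)   # original path

        def path_cost(flat_offsets, w_risk=1.0, w_fuel=0.05):
            """Weighted sum of threat exposure and total path length (a fuel proxy)."""
            path = baseline.copy()
            path[1:-1] += flat_offsets.reshape(5, 2)
            risk = np.sum(1.0 / (1.0 + np.linalg.norm(path - threat, axis=1)))
            fuel = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
            return w_risk * risk + w_fuel * fuel

        # Plain global-best PSO over the 10-dimensional offset vector.
        n_particles, dim = 30, 10
        x = rng.uniform(-20.0, 20.0, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_cost = x.copy(), np.array([path_cost(p) for p in x])
        gbest = pbest[pbest_cost.argmin()].copy()

        for _ in range(200):
            r1, r2 = rng.random((2, n_particles, dim))
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = x + v
            cost = np.array([path_cost(p) for p in x])
            improved = cost < pbest_cost
            pbest[improved], pbest_cost[improved] = x[improved], cost[improved]
            gbest = pbest[pbest_cost.argmin()].copy()

        print("best path cost:", path_cost(gbest))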

    Business and Social Behaviour Intelligence Analysis Using PSO

    Get PDF
    The goal of this paper is to apply swarm intelligence to business intelligence decision making and to the improvement of business rules management. The paper introduces a decision-making model based on Artificial Neural Networks (ANNs) and the Particle Swarm Optimization (PSO) algorithm. Essentially, business spatial data illustrate group behaviours, and swarm optimization, which is inspired by the behaviour of creatures acting in groups, exploits this. Spatial data is defined as data represented by 2D or 3D images; to date, SQL Server supports only 2D images. Location is an essential part of organizational and business data: enterprises maintain customer address lists, own property, ship goods from and to warehouses, manage transport flows among their workforce, and perform many other activities. In other words, a great deal of spatial data is used and processed by enterprises, organizations, and other bodies to make their operations more visible and self-descriptive. From the experiments, we found that PSO can facilitate intelligence in social and business behaviour.
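
    One way the coupling of ANNs and PSO can be sketched is to let PSO search the weight vector of a small feedforward network on a synthetic two-feature dataset; the network size, data, and constants below are illustrative assumptions, not the model of the paper.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic two-feature "spatial" data with a binary business outcome.
        X = rng.normal(size=(200, 2))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

        def ann_loss(w):
            """Mean squared error of a 2-4-1 network with weights unpacked from w."""
            W1, b1 = w[:8].reshape(2, 4), w[8:12]
            W2, b2 = w[12:16].reshape(4, 1), w[16]
            h = np.tanh(X @ W1 + b1)
            p = 1.0 / (1.0 + np.exp(-(h @ W2).ravel() - b2))
            return np.mean((p - y) ** 2)

        # Global-best PSO over the 17 network parameters.
        n, dim = 25, 17
        x = rng.normal(size=(n, dim))
        v = np.zeros_like(x)
        pb, pb_cost = x.copy(), np.array([ann_loss(p) for p in x])
        gb = pb[pb_cost.argmin()].copy()
        for _ in range(300):
            r1, r2 = rng.random((2, n, dim))
            v = 0.7 * v + 1.5 * r1 * (pb - x) + 1.5 * r2 * (gb - x)
            x = x + v
            c = np.array([ann_loss(p) for p in x])
            better = c < pb_cost
            pb[better], pb_cost[better] = x[better], c[better]
            gb = pb[pb_cost.argmin()].copy()
        print("final training MSE:", ann_loss(gb))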