
    Cooperative hunting in a discrete predator-prey system

    We propose and investigate a discrete-time predator-prey system with cooperative hunting in the predator population. The model is constructed from the classical Nicholson-Bailey host-parasitoid system with a density-dependent growth rate. A sufficient condition on the model parameters for which both populations can coexist is derived, namely that the predator's maximal reproductive number exceeds one. We study the existence of interior steady states and their stability in certain parameter regimes. It is shown that the system behaves asymptotically similarly to the model with no cooperative hunting if the degree of cooperation is small. Strong cooperative hunting, however, may promote persistence of the predator in regimes where it would otherwise go extinct without cooperation.
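A model of this kind can be sketched in a few lines. The functional form and parameter values below are illustrative assumptions, not the paper's exact equations: prey follow Ricker (density-dependent) growth, predation follows the Nicholson-Bailey escape term, and cooperation is modeled by letting the attack rate increase with predator density.

```python
import math

def step(H, P, r=1.5, K=10.0, a=0.1, c=0.05):
    """One generation of a Nicholson-Bailey-type predator-prey map with
    cooperative hunting: the per-capita attack rate a + c*P grows with
    predator density P (c is the degree of cooperation; c = 0 recovers
    the non-cooperative model). Parameter values are hypothetical."""
    attack = a + c * P                    # cooperation raises the attack rate
    escape = math.exp(-attack * P)        # fraction of prey escaping predation
    H_next = H * math.exp(r * (1 - H / K)) * escape  # Ricker growth * survival
    P_next = H * (1 - escape)             # predators produced from captured prey
    return H_next, P_next

# iterate from an initial state
H, P = 5.0, 1.0
for _ in range(200):
    H, P = step(H, P)
```

Setting `c = 0` and comparing long-run predator densities against `c > 0` is a direct way to probe the persistence effect the abstract describes.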

    Group chasing tactics: how to catch a faster prey

    We propose a bio-inspired, agent-based approach to describe the natural phenomenon of group chasing in both two and three dimensions. Using a set of local interaction rules we created a continuous-space and discrete-time model with time delay, external noise and limited acceleration. We implemented a unique collective chasing strategy, optimized its parameters and studied its properties when chasing a much faster, erratic escaper. We show that collective chasing strategies can significantly enhance the chasers' success rate. Our realistic approach handles group chasing within closed, soft boundaries—in contrast with the periodic ones in most published literature—and reproduces several properties of pursuits observed in nature, such as emergent encircling or the escaper's zigzag motion.
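The local-rule structure of such a model can be sketched as follows. This is a deliberately minimal 2D stand-in for the paper's model: it omits time delay, limited acceleration, and the soft boundaries, and the attraction/repulsion weights are invented for illustration. Each chaser steers toward the prey while being repelled from its nearest teammate (a crude proxy for encircling), and the faster prey flees its nearest pursuer with random heading noise (the zigzag).

```python
import math
import random

def unit(vx, vy):
    """Normalize a 2D vector; the zero vector maps to (0, 0)."""
    n = math.hypot(vx, vy) or 1.0
    return vx / n, vy / n

def chase_step(chasers, prey, v_chaser=1.0, v_prey=1.5, noise=0.1):
    """One discrete time step of a minimal group chase on the open plane.
    All weights and speeds here are hypothetical illustration values."""
    px, py = prey
    new_chasers = []
    for i, (x, y) in enumerate(chasers):
        dx, dy = unit(px - x, py - y)  # attraction toward the prey
        # repulsion from the nearest other chaser spreads the group out
        others = [c for j, c in enumerate(chasers) if j != i]
        if others:
            ox, oy = min(others, key=lambda c: (c[0] - x) ** 2 + (c[1] - y) ** 2)
            rx, ry = unit(x - ox, y - oy)
        else:
            rx, ry = 0.0, 0.0
        hx, hy = unit(dx + 0.5 * rx, dy + 0.5 * ry)
        new_chasers.append((x + v_chaser * hx, y + v_chaser * hy))
    # prey: flee the nearest chaser, with external noise producing zigzags
    nx, ny = min(new_chasers, key=lambda c: (c[0] - px) ** 2 + (c[1] - py) ** 2)
    fx, fy = unit(px - nx, py - ny)
    angle = math.atan2(fy, fx) + random.uniform(-noise, noise)
    prey = (px + v_prey * math.cos(angle), py + v_prey * math.sin(angle))
    return new_chasers, prey
```

Even in this stripped-down form, the interplay of attraction, teammate repulsion, and prey noise illustrates why a group of slower chasers can still corner a faster escaper.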

    Reinforcement Learning Agents acquire Flocking and Symbiotic Behaviour in Simulated Ecosystems

    In nature, group behaviours such as flocking as well as cross-species symbiotic partnerships are observed in vastly different forms and circumstances. We hypothesize that such strategies can arise in response to generic predator-prey pressures in a spatial environment with range-limited sensation and action. We evaluate whether these forms of coordination can emerge by independent multi-agent reinforcement learning in simple multiple-species ecosystems. In contrast to prior work, we avoid hand-crafted shaping rewards, specific actions, or dynamics that would directly encourage coordination across agents. Instead we test whether coordination emerges as a consequence of adaptation without encouraging these specific forms of coordination, which have only indirect benefit. Our simulated ecosystems consist of a generic food chain involving three trophic levels: apex predator, mid-level predator, and prey. We conduct experiments on two different platforms, a 3D physics engine with tens of agents as well as a 2D grid world with up to thousands. The results clearly confirm our hypothesis and show substantial coordination both within and across species. To obtain these results, we leverage and adapt recent advances in deep reinforcement learning within an ecosystem training protocol featuring homogeneous groups of independent agents from different species (sets of policies), acting in many different random combinations in parallel habitats. The policies utilize neural network architectures that are invariant to agent individuality but not type (species) and that generalize across varying numbers of observed other agents. While the emergence of complexity in artificial ecosystems has long been studied in the artificial life community, the focus has been more on individual complexity and genetic algorithms or explicit modelling, and less on the group complexity and reinforcement learning emphasized in this article.
    Unlike what the name and intuition suggest, reinforcement learning here adapts over evolutionary history rather than a lifetime, addressing the sequential optimization of fitness that is usually approached by genetic algorithms in the artificial life community. We utilize a shift from procedures to objectives, allowing us to bring new powerful machinery to bear, and we see emergence of complex behaviour from a sequence of simple optimization problems.
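Two ingredients of the protocol above can be sketched concretely: an observation encoding that is invariant to agent individuality but not species, and the sampling of random species combinations for parallel habitats. Everything below (the species names excepted) is a hypothetical toy, not the paper's actual architecture; per-species mean pooling is one standard way to get permutation invariance over a varying number of observed agents.

```python
import random

# Species of the three-level food chain described in the abstract.
SPECIES = ["apex", "mid", "prey"]

def observe(neighbors):
    """Encode a variable-length list of (species, position) observations.
    Mean-pooling positions per species makes the encoding invariant to
    which individual was seen and to observation order, while keeping
    species identity (type) distinct. Positions here are scalar toys."""
    pooled = {}
    for s in SPECIES:
        pts = [pos for (sp, pos) in neighbors if sp == s]
        pooled[s] = sum(pts) / len(pts) if pts else 0.0
    return pooled

def sample_habitat(pop_range=(2, 5)):
    """Draw one random combination of per-species agent counts, standing
    in for spawning a parallel habitat in the ecosystem training protocol.
    The count range is an arbitrary illustration value."""
    return {s: random.randint(*pop_range) for s in SPECIES}
```

Because `observe` pools per species, the same policy network can be reused unchanged across habitats with two neighbors or two hundred, which is the generalization property the abstract attributes to its architectures.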