
    Omnidirectional Sensory and Motor Volumes in Electric Fish

    Active sensing organisms, such as bats, dolphins, and weakly electric fish, generate a 3-D space for active sensation by emitting self-generated energy into the environment. For a weakly electric fish, we demonstrate that the electrosensory space for prey detection has an unusual, omnidirectional shape. We compare this sensory volume with the animal's motor volume: the volume swept out by the body over selected time intervals and over the time it takes to come to a stop from typical hunting velocities. We find that the motor volume has a similar omnidirectional shape, which can be attributed to the fish's backward-swimming capabilities and body dynamics. We assessed the electrosensory space for prey detection by analyzing simulated changes in spiking activity of primary electrosensory afferents during empirically measured and synthetic prey capture trials. The animal's motor volume was reconstructed from video recordings of body motion during prey capture behavior. Our results suggest that in weakly electric fish, there is a close connection between the shape of the sensory and motor volumes. We consider three general spatial relationships between 3-D sensory and motor volumes in active and passive-sensing animals, and we examine hypotheses about these relationships in the context of the volumes we quantify for weakly electric fish. We propose that the ratio of the sensory volume to the motor volume provides insight into behavioral control strategies across all animals.
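
    The proposed volume ratio lends itself to a simple computational check. Below is a minimal sketch, assuming each volume is available as a 3-D point cloud; the arrays are hypothetical stand-ins, since the paper derives its volumes from afferent-spiking simulations and video-reconstructed body motion, not from convex hulls:

```python
# A minimal sketch, assuming the two volumes are given as 3-D point clouds.
# The arrays below are random stand-ins, not the paper's data.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
sensory_points = rng.normal(size=(500, 3))      # stand-in electrosensory detection volume
motor_points = 0.8 * rng.normal(size=(500, 3))  # stand-in swept motor volume

sensory_volume = ConvexHull(sensory_points).volume
motor_volume = ConvexHull(motor_points).volume

# The proposed index of behavioral control strategy: sensory-to-motor volume ratio.
print(f"sensory/motor volume ratio: {sensory_volume / motor_volume:.2f}")
```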

    A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes

    Animals avoid obstacles and approach goals in novel cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from retina through V1, MT+, and MSTd. The model produces outputs that are qualitatively and quantitatively similar to human heading estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random-dot or photo-realistically rendered scenes and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1 degree per second do not affect model performance, but faster simulated rotation rates degrade performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments.
    National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
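
    For readers unfamiliar with heading recovery from optic flow, the sketch below shows the classical geometric baseline such models build on: under pure translation, flow vectors radiate from the focus of expansion (FoE), which can be recovered by least squares. This is a textbook method applied to synthetic flow, not the paper's neural circuit:

```python
# Classical focus-of-expansion recovery from translational optic flow
# (a textbook baseline, not the paper's neural model). Flow is synthetic.
import numpy as np

rng = np.random.default_rng(1)
foe_true = np.array([12.0, -5.0])               # hypothetical FoE in image coordinates
pts = rng.uniform(-100, 100, size=(200, 2))     # sample image locations
flow = 0.05 * (pts - foe_true)                  # pure translation: flow radiates from the FoE
flow += rng.normal(scale=0.1, size=flow.shape)  # measurement noise

# Radial-flow constraint u*(y - y0) - v*(x - x0) = 0 rearranges to the
# linear system v*x0 - u*y0 = v*x - u*y, solvable by least squares.
u, v = flow[:, 0], flow[:, 1]
A = np.column_stack([v, -u])
b = v * pts[:, 0] - u * pts[:, 1]
foe_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated FoE (heading direction):", foe_est)
```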

    Scalable discovery of hybrid process models in a cloud computing environment

    Process descriptions are used to create products and deliver services. To improve processes and services, the first step is to learn a process model. Process discovery is a technique that automatically extracts process models from event logs. Although various discovery techniques have been proposed, they focus either on constructing formal models, which are very powerful but complex, or on creating informal models, which are intuitive but lack semantics. In this work, we introduce a novel method that returns hybrid process models to bridge this gap. Moreover, to cope with today’s big event logs, we propose an efficient method, called f-HMD, which aims at scalable hybrid model discovery in a cloud computing environment. We present the detailed implementation of our approach over the Spark framework, and our experimental results demonstrate that the proposed method is efficient and scalable.
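
    The abstract does not show f-HMD's algorithm, but a typical first step for any Spark-based discovery method is distributing the extraction of directly-follows relations over the event log. Here is a minimal PySpark sketch of that step; the toy log and column names are assumptions, not the paper's data layout:

```python
# A minimal PySpark sketch of distributed directly-follows counting over an
# event log; the toy log and schema are assumptions, not f-HMD's actual design.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("directly-follows").getOrCreate()

# Hypothetical event log: (case_id, timestamp, activity)
events = spark.createDataFrame(
    [("c1", 1, "register"), ("c1", 2, "check"), ("c1", 3, "pay"),
     ("c2", 1, "register"), ("c2", 2, "pay")],
    ["case_id", "ts", "activity"],
)

def directly_follows(case):
    _, evs = case
    acts = [a for _, a in sorted(evs)]           # order events within a case by timestamp
    return [((x, y), 1) for x, y in zip(acts, acts[1:])]

counts = (events.rdd
          .map(lambda r: (r.case_id, (r.ts, r.activity)))
          .groupByKey()
          .flatMap(directly_follows)
          .reduceByKey(lambda a, b: a + b))      # aggregate pair counts across the cluster
print(counts.collect())
```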

    Optimal local estimates of visual motion in a natural environment

    Many organisms, from flies to humans, use visual signals to estimate their motion through the world. To explore the motion estimation problem, we have constructed a camera/gyroscope system that allows us to sample, at high temporal resolution, the joint distribution of input images and rotational motions during a long walk in the woods. From these data we construct the optimal estimator of velocity based on spatial and temporal derivatives of image intensity in small patches of the visual world. Over the bulk of the naturally occurring dynamic range, the optimal estimator exhibits the same systematic errors seen in neural and behavioral responses, including the confounding of velocity and contrast. These results suggest that apparent errors of sensory processing may reflect an optimal response to the physical signals in the environment.
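
    The derivative-based estimator family the abstract refers to has a classical closed form: for a pattern translating at speed v, I_t ≈ -v·I_x, so the least-squares estimate over a patch is v = -Σ(I_x·I_t)/Σ(I_x²). The sketch below demonstrates this baseline on a synthetic 1-D signal; the paper's optimal estimator is instead fit to natural-scene statistics:

```python
# Classical gradient-based velocity estimate on a synthetic 1-D signal:
# for a pattern translating at speed v, I_t ≈ -v * I_x, giving the
# least-squares patch estimate v = -sum(I_x * I_t) / sum(I_x ** 2).
import numpy as np

x = np.linspace(0, 2 * np.pi, 256)
v_true, dt = 0.3, 0.01
frame0 = np.sin(5 * x)
frame1 = np.sin(5 * (x - v_true * dt))  # same pattern shifted by v*dt

I_x = np.gradient(frame0, x)            # spatial derivative
I_t = (frame1 - frame0) / dt            # temporal derivative

v_hat = -np.sum(I_x * I_t) / np.sum(I_x ** 2)
print(f"true v = {v_true}, estimated v = {v_hat:.3f}")
```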

    From Social Simulation to Integrative System Design

    As the recent financial crisis showed, today there is a strong need to gain an "ecological perspective" on all relevant interactions in socio-economic-techno-environmental systems. For this, we suggest setting up a network of Centers for Integrative Systems Design that would be able to run all potentially relevant scenarios, identify causality chains, explore feedback and cascading effects for a number of model variants, and determine the reliability of their implications (given the validity of the underlying models). They would be able to detect possible negative side effects of policy decisions before they occur. Each Center in this network would focus on a particular field, but all would be part of an attempt to eventually cover all relevant areas of society and the economy and integrate them within a "Living Earth Simulator". The results of all research activities of such Centers would be turned into informative input for political Decision Arenas. For example, Crisis Observatories (for financial instabilities, shortages of resources, environmental change, conflict, the spreading of diseases, etc.) would be connected with such Decision Arenas for the purpose of visualization, in order to make complex interdependencies understandable to scientists, decision-makers, and the general public.
    Comment: 34 pages, Visioneer White Paper, see http://www.visioneer.ethz.c