    COACHES: Cooperative Autonomous Robots in Complex and Human Populated Environments

    Public spaces in large cities are increasingly becoming complex and unwelcoming environments, made hostile and unpleasant to use by overcrowding and by the complex information on signboards. It is in the interest of cities to make their public spaces easier to use, friendlier to visitors and safer for a growing elderly population and for citizens with disabilities. Meanwhile, the last decade has seen tremendous progress in the development of robots that operate in dynamic, complex and uncertain environments. The new challenge for the near future is to deploy a network of robots in public spaces to accomplish services that can help humans. Inspired by these challenges, the COACHES project addresses fundamental issues related to the design of a robust system of self-directed autonomous robots with high-level skills of environment modelling and scene understanding, distributed autonomous decision-making, short-term interaction with humans, and robust and safe navigation in overcrowded spaces. To this end, COACHES will provide an integrated solution to new challenges in: (1) knowledge-based representation of the environment; (2) estimation of human activities and needs using Markov and Bayesian techniques; (3) distributed decision-making under uncertainty to collectively plan assistance, guidance and delivery tasks using Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs), with efficient algorithms to improve their scalability; and (4) multi-modal, short-term human-robot interaction to exchange information and requests. The COACHES project will provide a modular architecture to be integrated into real robots. COACHES will be deployed in the city of Caen, in a mall called “Rive de l’orne”. COACHES is a cooperative system consisting of fixed cameras and mobile robots. The fixed cameras perform object detection, tracking and abnormal-event detection (of objects or behaviour). The robots combine this information with what they perceive through their own sensors to provide information through their multi-modal interfaces, guide people to their destinations, point out tramway stations, transport goods for elderly people, and so on. The COACHES robots will use different modalities (speech and displayed information) to interact with mall visitors, shopkeepers and mall managers. The project has enlisted an important end-user (Caen la mer) providing the scenarios where the COACHES robots and systems will be deployed, and gathers together universities with complementary competences in cognitive systems (SU), robust image/video processing (VUB, UNICAEN), semantic scene analysis and understanding (VUB), collective decision-making using Dec-POMDPs and multi-agent planning (UNICAEN, Sapienza), and multi-modal, short-term human-robot interaction (Sapienza, UNICAEN).
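
    Point (2) above describes estimating human activities and needs with Markov and Bayesian techniques. As a rough illustration of that idea (not the project's actual model), the Python sketch below runs a discrete Bayes filter over hypothetical visitor states; the state names, transition probabilities and observation likelihoods are all invented for the example.

    # A minimal discrete Bayes filter over hypothetical visitor states.
    # All states, cues and probabilities are illustrative assumptions,
    # not values from the COACHES project.

    STATES = ["browsing", "needs_guidance", "needs_delivery"]

    # P(next state | state): visitors mostly keep doing what they are doing.
    TRANSITION = {
        "browsing":       {"browsing": 0.80, "needs_guidance": 0.15, "needs_delivery": 0.05},
        "needs_guidance": {"browsing": 0.20, "needs_guidance": 0.70, "needs_delivery": 0.10},
        "needs_delivery": {"browsing": 0.10, "needs_guidance": 0.10, "needs_delivery": 0.80},
    }

    # P(observed cue | state) for cues a camera/robot pipeline might emit.
    OBSERVATION = {
        "browsing":       {"walking_straight": 0.6, "wandering": 0.3, "carrying_bags": 0.1},
        "needs_guidance": {"walking_straight": 0.1, "wandering": 0.8, "carrying_bags": 0.1},
        "needs_delivery": {"walking_straight": 0.2, "wandering": 0.1, "carrying_bags": 0.7},
    }

    def bayes_step(belief, cue):
        """One predict/update cycle of a discrete Bayes filter."""
        # Predict: push the current belief through the transition model.
        predicted = {s: sum(belief[p] * TRANSITION[p][s] for p in STATES) for s in STATES}
        # Update: weight each state by the likelihood of the cue, then normalise.
        unnorm = {s: predicted[s] * OBSERVATION[s][cue] for s in STATES}
        total = sum(unnorm.values())
        return {s: v / total for s, v in unnorm.items()}

    belief = {s: 1.0 / len(STATES) for s in STATES}  # uniform prior
    for cue in ["wandering", "wandering", "carrying_bags"]:
        belief = bayes_step(belief, cue)
    print(belief)  # mass shifts toward the states that best explain the cues

    A Dec-POMDP planner of the kind the abstract mentions would sit on top of such a belief, choosing robot actions under uncertainty.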

    Through a glass darkly: a case for the study of virtual space

    This paper begins to examine the similarities and differences between virtual space and real space, as taken from an architectural (as opposed to a biological, psychological, geographic, philosophical or information-theoretic) standpoint. It continues by introducing a number of criteria, suggested by the authors as being necessary for virtual space to be used in a manner consistent with our experience of real space. Finally, it concludes by suggesting a pedagogical framework for the benefits and associated learning outcomes of the study and examination of this relationship. This is accompanied by examples of recent student work, which set out to investigate this relationship.

    Challenges for identifying the neural mechanisms that support spatial navigation: the impact of spatial scale.

    Spatial navigation is a fascinating behavior that is essential for our everyday lives. It involves nearly all sensory systems, it requires numerous parallel computations, and it engages multiple memory systems. One of the key problems in this field pertains to the question of reference frames: spatial information such as direction or distance can be coded egocentrically (relative to an observer) or allocentrically (in a reference frame independent of the observer). While many studies have associated striatal and parietal circuits with egocentric coding and entorhinal/hippocampal circuits with allocentric coding, this strict dissociation is not in line with a growing body of experimental data. In this review, we discuss some of the problems that can arise when studying the neural mechanisms that are presumed to support different spatial reference frames. We argue that the scale of space in which a navigation task takes place plays a crucial role in determining the processes that are being recruited. This has important implications, particularly for the inferences that can be made from animal studies in small-scale space about the neural mechanisms supporting human spatial navigation in large (environmental) spaces. Furthermore, we argue that many of the tasks commonly used to study spatial navigation and the underlying neuronal mechanisms involve different types of reference frames, which can complicate the interpretation of neurophysiological data.
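
    The egocentric/allocentric distinction at the heart of this review can be made concrete with a small coordinate transform. The Python sketch below is a toy 2-D example, not taken from the paper; egocentric coordinates are assumed to be (forward, left) offsets relative to the observer's heading.

    import math

    def ego_to_allo(observer_xy, heading, ego_fl):
        """Map an egocentric offset (forward, left) into allocentric (world)
        coordinates, given the observer's world position and heading (radians)."""
        forward, left = ego_fl
        ox, oy = observer_xy
        # Rotate the egocentric vector by the heading, then translate.
        wx = ox + forward * math.cos(heading) - left * math.sin(heading)
        wy = oy + forward * math.sin(heading) + left * math.cos(heading)
        return (wx, wy)

    # An object 2 m ahead and 1 m to the left of an observer standing at
    # (3, 0) and facing +y lies at (2, 2) in world coordinates.
    print(ego_to_allo((3.0, 0.0), math.pi / 2, (2.0, 1.0)))

    The allocentric code is observer-independent: any observer computing it for the same object obtains the same world coordinates, whereas the egocentric (forward, left) pair changes with every step or turn.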

    Applying Bourdieu to socio-technical systems: The importance of affordances for social translucence in building 'capital' and status to eBay's success

    This paper introduces the work of the sociologist Pierre Bourdieu and his concepts of ‘the field’ and ‘capital’ in relation to eBay. This paper considers eBay to be a socio-technical system with its own set of social norms, rules and competition over ‘capital’. eBay is used as a case study of the importance of a Bourdieuean approach to creating successful socio-technical systems. Using a two-year qualitative study of eBay users as empirical illustration, this paper argues that a large part of eBay’s success lies in the social and cultural affordances for social translucence and navigation on eBay’s website, in supporting the Bourdieuean competition over capital and status. This exploration has implications for wider socio-technical systems design, which this paper discusses: in particular, the importance of creating socially translucent and navigable systems, informed by Bourdieu’s theoretical insights, which support competition for ‘capital’ and status.

    A Topological Deep Learning Framework for Neural Spike Decoding

    The brain's spatial orientation system uses different neuron ensembles to aid in environment-based navigation. One of the ways brains encode spatial information is through grid cells, layers of decked neurons that overlay to provide environment-based navigation. These neurons fire in ensembles, with several neurons firing at once to activate a single grid. We want to capture this firing structure and use it to decode grid cell data. Understanding, representing, and decoding these neural structures require models that encompass higher-order connectivity than traditional graph-based models may provide. To that end, in this work we develop a topological deep learning framework for neural spike train decoding. Our framework combines unsupervised simplicial complex discovery with the power of deep learning via a new architecture we develop herein, called a simplicial convolutional recurrent neural network (SCRNN). Simplicial complexes, topological spaces built not only from vertices and edges but also from higher-dimensional objects, naturally generalize graphs and capture more than just pairwise relationships. Additionally, this approach does not require prior knowledge of the neural activity beyond spike counts, which removes the need for similarity measurements. The effectiveness and versatility of the SCRNN are demonstrated on head direction data to test its performance, and the network is then applied to grid cell datasets with the task of automatically predicting trajectories.
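
    The central data structure here, a simplicial complex built from co-firing neurons, is straightforward to sketch. The Python fragment below is an illustrative construction only; the paper's unsupervised discovery step is more involved. Each time bin's set of co-active neurons contributes a simplex, together with all of its faces.

    from itertools import combinations

    def cofiring_complex(bins, max_dim=2):
        """Build a simplicial complex from co-firing events.

        bins: iterable of sets, each holding the neuron ids active in one
        time bin. Every co-active group contributes a simplex, and adding
        all faces keeps the result a valid complex; max_dim caps dimension.
        """
        simplices = set()
        for active in bins:
            active = sorted(active)
            top = min(len(active), max_dim + 1)  # a k-simplex has k+1 vertices
            for size in range(1, top + 1):
                simplices.update(combinations(active, size))
        return simplices

    # Neurons 0, 1 and 2 firing together yield the triangle (0, 1, 2) plus
    # its edges and vertices; a pairwise graph would keep only the edges.
    print(sorted(cofiring_complex([{0, 1, 2}, {1, 2}, {3}]), key=len))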

    Neural codes for one’s own position and direction in a real-world “vista” environment

    Humans, like animals, rely on accurate knowledge of their own position and facing direction to stay oriented in the surrounding space. Although previous neuroimaging studies have demonstrated that scene-selective regions (the parahippocampal place area or PPA, the occipital place area or OPA, and the retrosplenial complex or RSC) and the hippocampus (HC) are implicated in coding position and facing direction within small- (room-sized) and large-scale navigational environments, little is known about how these regions represent these spatial quantities in a large open-field environment. Here, we used functional magnetic resonance imaging (fMRI) in humans to explore the neural codes of this navigationally relevant information while participants viewed images that varied in position and facing direction within a familiar, real-world circular square. We observed neural adaptation for repeated directions in the HC, even though no navigational task was required. Further, we found that the amount of knowledge of the environment interacts with PPA selectivity in encoding positions: individuals who needed more time to memorize positions in the square during a preliminary training task showed less neural attenuation in this scene-selective region. We also observed adaptation effects, reflecting the real distances between consecutive positions, in scene-selective regions but not in the HC. When examining the multi-voxel patterns of activity, we observed that scene-responsive regions and the HC encoded both types of spatial information, and that RSC classification accuracy for positions was higher in individuals scoring higher on a self-report questionnaire of spatial abilities. Our findings provide new insight into how the human brain represents a real, large-scale “vista” space, demonstrating the presence of neural codes for position and direction in both scene-selective and hippocampal regions, and revealing the existence, in the former regions, of a map-like spatial representation reflecting real-world distances between consecutive positions.
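
    The adaptation effect that scales with real-world distance implies a simple distance predictor. As a hedged sketch (the plaza radius and angular positions below are invented, not the study's stimuli), the following Python computes straight-line distances between consecutive viewing positions on a circular square, the kind of regressor such an fMRI adaptation analysis could use.

    import math

    def chord_distance(radius, theta_a, theta_b):
        """Straight-line distance between two points on a circle of the given
        radius, specified by their angular positions in radians."""
        d = abs(theta_a - theta_b) % (2 * math.pi)
        d = min(d, 2 * math.pi - d)          # shortest angular separation
        return 2 * radius * math.sin(d / 2)  # chord length

    radius = 50.0  # metres; an assumed size for illustration
    sequence = [0.0, math.pi / 4, math.pi / 2, math.pi]  # stimulus positions
    regressor = [chord_distance(radius, a, b)
                 for a, b in zip(sequence, sequence[1:])]
    print(regressor)  # distances between consecutive positions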

    The benefits of using a walking interface to navigate virtual environments

    Navigation is the most common interactive task performed in three-dimensional virtual environments (VEs), but it is also a task that users often find difficult. We investigated how body-based information about the translational and rotational components of movement helped participants to perform a navigational search task (finding targets hidden inside boxes in a room-sized space). When participants physically walked around the VE while viewing it on a head-mounted display (HMD), they performed 90% of trials perfectly, comparable to participants who had performed an equivalent task in the real world during a previous study. By contrast, participants performed fewer than 50% of trials perfectly if they used a tethered HMD (moving by physically turning but pressing a button to translate) or a desktop display (no body-based information). This is the most complex navigational task in which a real-world level of performance has been achieved in a VE. Behavioral data indicate that both translational and rotational body-based information are required to accurately update one's position during navigation, and that participants who walked tended to avoid obstacles even though collision detection was not implemented and no feedback was provided. A walking interface would bring immediate benefits to a number of VE applications.
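
    The finding that both translational and rotational body-based information are needed to update one's position is, at bottom, a statement about path integration (dead reckoning). Below is a minimal Python sketch with made-up movements, not data from the study: remove either the turn or the step input and the estimated pose drifts from the true one.

    import math

    def path_integrate(pose, turn, step):
        """Dead-reckoning update: rotate by `turn` (radians), then translate
        `step` units along the new heading. pose = (x, y, heading)."""
        x, y, heading = pose
        heading = (heading + turn) % (2 * math.pi)
        return (x + step * math.cos(heading),
                y + step * math.sin(heading),
                heading)

    pose = (0.0, 0.0, 0.0)
    for turn, step in [(0.0, 1.0), (math.pi / 2, 1.0), (math.pi / 2, 1.0)]:
        pose = path_integrate(pose, turn, step)
    print(pose)  # ~(0.0, 1.0, pi) after three unit steps with two left turns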