    Collision Avoidance With Multiple Walkers: Sequential or Simultaneous Interactions?

    Collision avoidance between multiple walkers, such as pedestrians in a crowd, is based on a reciprocal coupling between the walkers, with a continuous loop between perception and action. Such interpersonal coordination has previously been studied in the case of dyadic locomotor interactions. However, when walking through a crowd of people, collision avoidance is not restricted to dyadic interactions. We examined how dyadic avoidance (1 vs. 1) compared to triadic avoidance (1 vs. 2). Additionally, we examined how the dynamics of a passable gap between two walkers affected locomotor interactions. To this end, we manipulated the starting formation of two walkers that formed a potentially passable gap for the other walker. We analyzed the interactions in terms of the evolution over time of the Minimal Predicted Distance and the Dynamics of the Gap, which both provide information about what action is afforded (i.e., passing in front/behind and the passability of the gap). Results showed that some triadic formations invited sequential interactions, resulting in avoidance strategies comparable with dyadic interactions. However, some formations resulted in simultaneous interactions, where the dynamics of the passability of the gap revealed that the coordination strategy emerged over time through the bidirectional interactions between all walkers. Future work should address which circumstances invite simultaneous and which sequential interactions between multiple walkers. This study contributes toward understanding how collisions between multiple walkers are avoided at the level of the local interactions.
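
    The Minimal Predicted Distance used in this analysis is commonly defined as the smallest future distance between two walkers under constant-velocity extrapolation of their current motion. A minimal sketch of that standard definition follows; the function name and example values are illustrative, not the authors' implementation:

```python
import numpy as np

def minimal_predicted_distance(p1, v1, p2, v2):
    """Smallest future distance between two walkers, assuming each
    keeps its current velocity (constant-velocity extrapolation).

    p1, p2: current 2D positions; v1, v2: current 2D velocities.
    Returns (mpd, t_min): the minimal predicted distance and the
    future time (t_min >= 0) at which it would occur.
    """
    dp = p2 - p1              # relative position
    dv = v2 - v1              # relative velocity
    dv2 = np.dot(dv, dv)
    if dv2 < 1e-9:            # identical velocities: distance is constant
        return np.linalg.norm(dp), 0.0
    # Time minimizing |dp + t * dv|, clamped to the future
    t_min = max(0.0, -np.dot(dp, dv) / dv2)
    return np.linalg.norm(dp + t_min * dv), t_min

# Example: two walkers on orthogonal, crossing paths
p1, v1 = np.array([0.0, 0.0]), np.array([1.3, 0.0])
p2, v2 = np.array([5.0, -5.0]), np.array([0.0, 1.3])
mpd, t = minimal_predicted_distance(p1, v1, p2, v2)
print(f"MPD = {mpd:.2f} m at t = {t:.2f} s")  # 0.00 m: collision course
```

    An MPD that stays near zero as the walkers approach signals that at least one of them must adapt (pass in front or behind); it is the evolution of this quantity over time that the authors analyze.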

    Visual Inputs and Motor Outputs as Individuals Walk Through Dynamically Changing Environments

    Walking around in dynamically changing environments requires the integration of three of our sensory systems: visual, vestibular, and kinesthetic. Vision is the only one of these three modalities that provides information at a distance for proactively controlling locomotion (Gibson, 1958). The visual system provides information about self-motion, about the position of the body and body segments relative to one another and to the environment, and about environmental features at a distance (Patla, 1998). Gibson (1979) developed the idea that everyday behaviour is controlled by perception-action coupling between an action and specific information picked up from the optic flow generated by that action: visual perception guides the action required to navigate safely through an environment, and the action in turn alters perception. The objective of my thesis was to determine how well perception and action are coupled when approaching and walking through moving doors with dynamically changing apertures. My first two studies were grouped together; here I found that as the level of threat increased, the parameters of control changed, not the controlling mechanism. The two dominant action control parameters observed were a change in approach velocity and a change in posture (i.e., shoulder rotation). These findings add to previous work done in this area using a similar set-up in virtual reality, where after much practice participants increased their success rate by decreasing velocity prior to crossing the doors. In my third study I found that visual fixation patterns and action parameters were similar whether the location of the aperture was predictable or not. Previous work from other researchers has shown that vision and a subsequent action are tightly coupled with a latency of about 1 second. I found that vision tightly couples with action only when a specific action is required and the threat of a collision increases. My findings also point in the same direction as previous work showing that individuals look where they are going. My last study was designed to determine whether we go where we are looking. Here I found that action does follow vision but is only loosely correlated with it. The most important and common finding across all the studies is that at 2 seconds prior to crossing the moving doors (for any type of movement) vision seems to have the most profound effect on action: at this time, variability in action is significantly lower than at earlier times. I believe that my findings will help us understand how individuals use vision to modify their actions in order to avoid colliding with other people or other moving objects in the environment. This knowledge may also help elderly individuals cope better with walking in cluttered environments and avoid contacting other objects.
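
    The aperture-crossing behaviour described here (rotating the shoulders when a gap becomes tight) is often modeled as a body-scaled ratio: static-aperture studies report that walkers begin rotating their shoulders when the aperture-to-shoulder-width ratio falls below roughly 1.3 (Warren & Whang, 1987). Below is a minimal sketch of that heuristic, assuming the static critical ratio carries over; the thesis studies moving doors, where this value may differ, and all names here are illustrative:

```python
def crossing_action(aperture_width, shoulder_width, critical_ratio=1.3):
    """Body-scaled passability heuristic for walking through an aperture.

    The 1.3 critical ratio comes from static-aperture studies
    (Warren & Whang, 1987); dynamically changing doors may shift it.
    Widths are in meters.
    """
    ratio = aperture_width / shoulder_width
    if ratio < 1.0:
        return "not passable: slow down, stop, or reroute"
    elif ratio < critical_ratio:
        return "passable with shoulder rotation"
    else:
        return "passable walking frontally"

print(crossing_action(aperture_width=0.55, shoulder_width=0.45))  # rotation
print(crossing_action(aperture_width=0.70, shoulder_width=0.45))  # frontal
```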

    Human-aware space sharing and navigation for an interactive robot

    The methods of robotic motion planning have developed at an accelerated pace in recent years. The emphasis has mainly been on making robots more efficient, safer, and faster to react to unpredictable situations. As a result, we are witnessing more and more service robots introduced into our everyday lives, especially in public places such as museums, shopping malls, and airports. While a mobile service robot moves in a human environment, it is important to take into account the effect of its behavior on the people it crosses or interacts with. We do not see robots as mere machines but as social agents, and we expect them to behave in a human-like way by following societal norms and rules. This has created new challenges and opened new research avenues for designing robot control algorithms that deliver acceptable, legible, and proactive robot behaviors. This thesis proposes an optimization-based cooperative method for trajectory planning and navigation with built-in social constraints that keep robot motions safe, human-aware, and predictable. The robot trajectory is dynamically and continuously adjusted to satisfy these social constraints. To do so, we treat the robot trajectory as an elastic band (a mathematical construct representing the robot path as a series of poses and the time differences between those poses) which can be deformed, both in space and time, by the optimization process to respect the given constraints. Moreover, we also predict plausible human trajectories in the same operating area by treating human paths as elastic bands as well. This scheme allows us to optimize the robot trajectories not only for the current moment but for the entire interaction that unfolds when humans and the robot cross each other's paths. We carried out a set of experiments with canonical human-robot interactive situations that occur in everyday life, such as crossing a hallway, passing through a door, and intersecting paths in wide open spaces. The proposed cooperative planning method compares favorably against other state-of-the-art human-aware navigation planning schemes. We have augmented the robot's navigation behavior with synchronized and responsive movements of its head, making the robot look where it is going and occasionally divert its gaze towards nearby people to acknowledge that it will avoid any possible collision with them, as planned by the planner. At any given moment, the robot weighs multiple criteria according to the social context and decides where it should turn its gaze. Through an online user study we have shown that such a gazing mechanism effectively complements the navigation behavior and improves the legibility of the robot's actions. Finally, we have integrated our navigation scheme with a broader supervision system that can jointly generate normative robot behaviors, such as approaching a person and adapting the robot's speed to the group of people whom the robot guides in airport or museum scenarios.
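
    The elastic-band representation described above can be illustrated with a toy optimizer: a trajectory stored as poses plus time differences, deformed by gradient steps on a cost that trades path smoothness against clearance from a nearby human. This is a deliberately simplified sketch in the spirit of timed-elastic-band planners, not the thesis implementation; the class, cost weights, and single "human clearance" term are assumptions:

```python
import numpy as np

class TimedElasticBand:
    """Toy timed elastic band: a trajectory as (x, y) poses plus a time
    difference between them, deformed by numerical gradient descent on a
    cost balancing smoothness against clearance from a human.
    The time differences are kept fixed here; a full planner would also
    optimize them, deforming the band in time as well as in space."""

    def __init__(self, poses, dt):
        self.poses = np.asarray(poses, dtype=float)  # (N, 2) waypoints
        self.dt = dt                                 # fixed time step

    def cost(self, human_pos, clearance=1.0, w_social=10.0):
        # Smoothness: squared distance between consecutive poses
        smooth = np.sum(np.diff(self.poses, axis=0) ** 2)
        # Social constraint: penalize poses within `clearance` of the human
        d = np.linalg.norm(self.poses - human_pos, axis=1)
        social = np.sum(np.maximum(0.0, clearance - d) ** 2)
        return smooth + w_social * social

    def deform(self, human_pos, steps=200, lr=0.05, eps=1e-4):
        # Numerical gradient descent on interior poses (endpoints fixed)
        for _ in range(steps):
            grad = np.zeros_like(self.poses)
            for i in range(1, len(self.poses) - 1):
                for j in range(2):
                    self.poses[i, j] += eps
                    c_plus = self.cost(human_pos)
                    self.poses[i, j] -= 2 * eps
                    c_minus = self.cost(human_pos)
                    self.poses[i, j] += eps
                    grad[i, j] = (c_plus - c_minus) / (2 * eps)
            self.poses -= lr * grad

# A straight corridor path with a human standing just off its centerline:
band = TimedElasticBand([[x, 0.0] for x in np.linspace(0.0, 5.0, 11)], dt=0.5)
band.deform(human_pos=np.array([2.5, 0.3]))
print(np.round(band.poses, 2))  # interior poses bow away from the human
```

    In the full method, the same elastic-band treatment is applied to predicted human paths, so the optimization accounts for the whole crossing interaction rather than a single snapshot.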