
    Teams organization and performance analysis in autonomous human-robot teams

    This paper proposes a theory of human control of robot teams based on how people coordinate across different task allocations. Our current work focuses on domains, such as foraging, in which robots perform largely independent tasks. The present study addresses the interaction between automation and the organization of human teams in controlling large robot teams performing an Urban Search and Rescue (USAR) task. We identify three subtasks: perceptual search (visual search for victims), assistance (teleoperation to assist a robot), and navigation (path planning and coordination). For the studies reported here, navigation was selected for automation because its weak dependencies among robots make it more complex, and because an earlier experiment showed it to be the most difficult subtask. This paper reports an extended analysis of two conditions from a larger four-condition study. In these two "shared pool" conditions, 24 simulated robots were controlled by teams of two participants. Sixty paid participants (30 teams) were recruited to perform the shared-pool tasks, in which participants shared control of the 24 UGVs and viewed the same screens. Teams in the manual control condition issued waypoints to navigate their robots; in the autonomy condition, robots generated their own waypoints using distributed path planning. We identify three self-organizing team strategies in the shared-pool conditions: joint control, in which operators share full authority over the robots; mixed control, in which one operator takes primary control while the other acts as an assistant; and split control, in which operators divide the robots, each controlling a sub-team. Automating path planning improved system performance, and effects of team organization favored operator teams who shared authority over the pool of robots. © 2010 ACM
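    The three team organizations described in the abstract can be expressed as different robot-to-operator assignments. A minimal sketch (the function name and operator labels are illustrative, not from the paper):

```python
def assign_robots(strategy, n_robots=24, operators=("A", "B")):
    """Return {operator: set of robot ids} for each shared-pool strategy."""
    ids = list(range(n_robots))
    if strategy == "joint":   # both operators share full authority over all robots
        return {op: set(ids) for op in operators}
    if strategy == "mixed":   # one primary controller, the other acts as assistant
        return {operators[0]: set(ids), operators[1]: set()}
    if strategy == "split":   # operators divide the pool into sub-teams
        half = n_robots // 2
        return {operators[0]: set(ids[:half]), operators[1]: set(ids[half:])}
    raise ValueError(f"unknown strategy: {strategy}")
```

    For example, `assign_robots("split")` gives each of the two operators a sub-team of 12 of the 24 simulated robots.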

    Fast and Robust Detection of Fallen People from a Mobile Robot

    This paper deals with the problem of detecting fallen people lying on the floor by means of a mobile robot equipped with a 3D depth sensor. In the proposed algorithm, inspired by semantic segmentation techniques, the 3D scene is over-segmented into small patches. Fallen people are then detected by means of two SVM classifiers: the first one labels each patch, while the second one captures the spatial relations between them. This novel approach proved to be robust and fast. Indeed, thanks to the use of small patches, fallen people in real cluttered scenes, with objects side by side, are correctly detected. Moreover, the algorithm can be executed on a mobile robot fitted with a standard laptop, making it possible to exploit the 2D environmental map built by the robot and the multiple points of view obtained during robot navigation. Additionally, the algorithm is robust to illumination changes, since it relies on depth data rather than RGB data. All the methods have been thoroughly validated on the IASLAB-RGBD Fallen Person Dataset, which is published online as a further contribution. It consists of several static and dynamic sequences with 15 different people and 2 different environments.
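    The two-stage classification scheme can be sketched as a pair of SVMs: one scoring individual patch features, one scoring the spatial relations between neighbouring patches. This is only a toy illustration with synthetic features (the feature sets, labels, and `detect` helper are assumptions, not the paper's actual pipeline):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stage 1: classify each patch from its own geometric features
# (e.g. height above floor, surface normal, extent -- placeholders here).
X_patch = rng.normal(size=(200, 6))
y_patch = (X_patch[:, 0] < 0).astype(int)        # toy label: "low" patches
clf_patch = SVC().fit(X_patch, y_patch)

# Stage 2: classify pairs of neighbouring patches from their spatial
# relation (e.g. distance, relative height -- again placeholders).
X_pair = rng.normal(size=(200, 4))
y_pair = (X_pair[:, 0] ** 2 + X_pair[:, 1] ** 2 < 1).astype(int)
clf_pair = SVC().fit(X_pair, y_pair)

def detect(patch_feats, pair_feats):
    """Flag a candidate group only if both classifiers agree."""
    patch_ok = clf_patch.predict(patch_feats).all()
    pair_ok = clf_pair.predict(pair_feats).all()
    return bool(patch_ok and pair_ok)
```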

    Adaptive Information Visualization for Personalized Access to Educational Digital Libraries

    Personalization is one of the emerging ways to increase the power of modern digital libraries. The Knowledge Sea II system presented in this paper explores social navigation support, an approach for providing personalized guidance within an open corpus of educational resources. Following the concepts of social navigation, we have attempted to organize personalized navigation support based on past learners' interaction with the system. The study indicates that Knowledge Sea II became the students' primary tool for accessing the open-corpus documents used in a programming course. The social navigation support implemented in the system was considered useful by the students participating in the study. At the same time, some user comments indicated the need for more powerful navigation support, such as the ability to rank the usefulness of a page.
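    The core idea of social navigation support, as described here, is to surface the footprints of past learners. A minimal sketch, assuming a simple visit log and a discrete "traffic thermometer" annotation (the thresholds and names are illustrative, not Knowledge Sea II's actual design):

```python
from collections import Counter

# Toy log of pages visited by past learners in the course.
visit_log = ["intro.html", "loops.html", "loops.html", "arrays.html",
             "loops.html", "intro.html"]

traffic = Counter(visit_log)
max_t = max(traffic.values())

def annotation(page):
    """Map a page's relative traffic onto a small discrete scale,
    so learners can see which resources their peers used most."""
    level = traffic.get(page, 0) / max_t
    if level == 0:
        return "unvisited"
    if level < 0.5:
        return "some traffic"
    return "heavy traffic"
```

    Here `loops.html`, the most-visited page, would be annotated "heavy traffic", guiding new learners toward resources their predecessors relied on.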

    Encoding natural movement as an agent-based system: an investigation into human pedestrian behaviour in the built environment

    Gibson's ecological theory of perception has received considerable attention within the psychology literature, as well as in computer vision and robotics. However, few have applied Gibson's approach to agent-based models of human movement, because the ecological theory requires that individuals have a vision-based mental model of the world, and for large numbers of agents this becomes computationally very expensive. Thus, within current pedestrian models, path evaluation is based on calibration from observed data or on sophisticated but deterministic route-choice mechanisms; there is little open-ended behavioural modelling of human-movement patterns. One solution, which allows individuals rapid concurrent access to the visual information within an environment, is an "exosomatic visual architecture", in which the connections between mutually visible locations within a configuration are prestored in a lookup table. Here we demonstrate that, with the aid of an exosomatic visual architecture, it is possible to develop behavioural models in which movement rules originating from Gibson's principle of affordance are utilised. We apply large numbers of agents programmed with these rules to a built-environment example and show that, by varying parameters such as destination selection, field of view, and the number of steps taken between decision points, it is possible to generate aggregate movement levels very similar to those found in an actual building context.
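    The exosomatic visual architecture can be sketched as a precomputed visibility lookup table over discretised locations, with an affordance-style movement rule that picks the next location from those currently visible rather than from a planned route. A minimal toy version (the four-cell graph and function names are assumptions for illustration):

```python
import random

# Precomputed lookup table: which locations are mutually visible.
# In the real architecture this would be derived from the building
# geometry once, offline, so every agent queries it in O(1).
visible_from = {
    0: [1, 2],
    1: [0, 2, 3],
    2: [0, 1, 3],
    3: [1, 2],
}

def step(position, rng):
    """Affordance-style rule: move to a randomly chosen location
    that is visible (i.e. afforded) from the current one."""
    return rng.choice(visible_from[position])

def walk(start, n_steps, seed=0):
    """Simulate one agent; aggregate flows emerge from many such walks."""
    rng = random.Random(seed)
    pos, path = start, [start]
    for _ in range(n_steps):
        pos = step(pos, rng)
        path.append(pos)
    return path
```

    Because the table is shared by all agents, scaling to large crowds avoids per-agent ray-casting, which is what makes the vision-based approach tractable.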