101 research outputs found
Inferring Robot Task Plans from Human Team Meetings: A Generative Modeling Approach with Logic-Based Prior
We aim to reduce the burden of programming and deploying autonomous systems
to work in concert with people in time-critical domains, such as military field
operations and disaster response. Deployment plans for these operations are
frequently negotiated on-the-fly by teams of human planners. A human operator
then translates the agreed upon plan into machine instructions for the robots.
We present an algorithm that reduces this translation burden by inferring the
final plan from a processed form of the human team's planning conversation. Our
approach combines probabilistic generative modeling with logical plan
validation used to compute a highly structured prior over possible plans. This
hybrid approach enables us to overcome the challenge of performing inference
over the large solution space with only a small amount of noisy data from the
team planning session. We validate the algorithm through human subject
experimentation and show we are able to infer a human team's final plan with
83% accuracy on average. We also describe a robot demonstration in which two
people plan and execute a first-response collaborative task with a PR2 robot.
To the best of our knowledge, this is the first work that integrates a logical
planning technique within a generative model to perform plan inference.
Comment: Appears in Proceedings of the Twenty-Seventh AAAI Conference on
Artificial Intelligence (AAAI-13).
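The hybrid inference the abstract describes can be sketched in miniature. Everything below is invented for illustration (the toy domain, the validity rule, the noisy-channel likelihood, and the 0.9 agreement probability are assumptions, not the paper's model): a logical check zeroes the prior on infeasible plans, and the posterior over the remaining plans is scored against noisy mentions from the team's conversation.

```python
from itertools import product

# Toy domain (invented for illustration): assign each of three sites a robot.
ROBOTS = ["r1", "r2"]
SITES = ["A", "B", "C"]

def valid(plan):
    # Stand-in for the logical plan validator: the structured prior puts
    # zero mass on plans that leave a robot idle.
    return set(plan.values()) == set(ROBOTS)

def likelihood(utterances, plan, p_correct=0.9):
    # Noisy-channel model: each (site, robot) mention in the transcript
    # agrees with the final plan with probability p_correct.
    L = 1.0
    for site, robot in utterances:
        L *= p_correct if plan[site] == robot else 1.0 - p_correct
    return L

def infer(utterances):
    # Posterior over all candidate plans: validity prior times likelihood.
    posterior = {}
    for assignment in product(ROBOTS, repeat=len(SITES)):
        plan = dict(zip(SITES, assignment))
        prior = 1.0 if valid(plan) else 0.0
        posterior[assignment] = prior * likelihood(utterances, plan)
    z = sum(posterior.values())
    return {k: v / z for k, v in posterior.items()}

# A noisy transcript: mostly consistent mentions plus one slip of the tongue.
utt = [("A", "r1"), ("B", "r2"), ("C", "r1"), ("C", "r1"), ("C", "r2")]
post = infer(utt)
best = max(post, key=post.get)   # MAP plan: ("r1", "r2", "r1")
```

The point of the combination is visible even at this scale: the logical prior prunes the solution space so the few noisy observations only have to discriminate among feasible plans.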
RHex Slips on Granular Media
RHex is one of very few legged robots used for real-world rough-terrain locomotion applications. From its early days, RHex has been shown to locomote successfully over obstacles higher than its own hip height [1], and more recently, on sand [2] and sand dunes [3], [4] (see Figure 1). The commercial version of RHex made by Boston Dynamics has been demonstrated in a variety of difficult, natural terrains such as branches, culverts, and rocks, and has been shipped to Afghanistan, ostensibly for use in mine clearing in sandy environments [5]. Here, we discuss recent qualitative observations of an updated research version of RHex [6] slipping at the toes on two main types of difficult terrain: sand dunes and rubble piles. No lumped-parameter (finite-dimensional) formal model, nor even a satisfactory computational model, of RHex's locomotion on sand dunes or rubble piles currently exists. We briefly review the extent to which available physical theories describe legged locomotion on flat granular media and possible extensions to locomotion on sand dunes.
Towards an Architecture for Semiautonomous Robot Telecontrol Systems.
The design and development of a computational system to support robot–operator collaboration is a challenging task, not only because of the overall system complexity, but also because of the involvement of different technical and scientific disciplines, namely Software Engineering, Psychology and Artificial Intelligence, among others. In our opinion, the approach generally used to face this type of project is based on system architectures inherited from the development of autonomous robots and therefore fails to incorporate explicitly the role of the operator, i.e. these architectures lack a view that helps the operator see him/herself as an integral part of the system. The goal of this paper is to provide a human-centered paradigm that makes it possible to create this kind of view of the system architecture. This architectural description includes the definition of the role of the operator and the autonomous behaviour of the robot, it identifies the shared knowledge, and it helps the operator to see the robot as an intentional being like himself/herself.
Applications and prototype for systems of systems swarm robotics
In order to develop a robotic system of systems, the robotic platforms must be designed and built. For this to happen, the type of application involved should be clear. Swarm robots need to be self-contained and powered. They must also be self-governing. Here the authors examine various applications and a prototype robot that may be useful in these scenarios.
Collaborating with a Mobile Robot: An Augmented Reality Multimodal Interface
Invited paper. We have created an infrastructure that allows a human to collaborate in a natural manner with a robotic system. In this paper we describe our system and its implementation with a mobile robot. In our prototype the human communicates with the mobile robot using natural speech and gestures, for example, by selecting a point in 3D space and saying “go here” or “go behind that”. The robot responds using speech so the human is able to understand its intentions and beliefs. Augmented Reality (AR) technology is used to facilitate natural use of gestures and provide a common 3D spatial reference for both the robot and the human, thus providing a means for grounding communication and maintaining spatial awareness. This paper first discusses related work, then gives a brief overview of AR and its capabilities. The architectural design we have developed is outlined, and a case study is discussed.
Development of a separable search-and-rescue robot composed of a mobile robot and a snake robot
In this study, we propose a new robot system consisting of a mobile robot and a snake robot. The system works not only as a mobile manipulator but also as a multi-agent system by using the snake robot's ability to separate from the mobile robot. Initially, the snake robot is mounted on the mobile robot in the carrying mode. When an operator uses the snake robot as a manipulator, the robot changes to the manipulator mode. The operator can detach the snake robot from the mobile robot and command the snake robot to conduct lateral rolling motions. In this paper, we present the details of our robot and its performance in the World Robot Summit.
ARTEMIS: AI-driven Robotic Triage Labeling and Emergency Medical Information System
Mass casualty incidents (MCIs) pose a formidable challenge to emergency
medical services by overwhelming available resources and personnel. Effective
victim assessment is paramount to minimizing casualties during such a crisis.
In this paper, we introduce ARTEMIS, an AI-driven Robotic Triage Labeling and
Emergency Medical Information System. This system comprises a deep learning
model for acuity labeling that is integrated with a robot that performs the
preliminary assessment of injury severity in patients and assigns appropriate
triage labels. Additionally, we have developed a frontend (graphical user
interface) that is updated by the robots in real time and is accessible to the
first responders. To validate the reliability of our proposed algorithmic
triage protocol, we employed an off-the-shelf robot kit equipped with sensors
for vital sign acquisition. A controlled laboratory simulation of an MCI was
conducted to assess the system's performance and effectiveness in real-world
scenarios, resulting in a triage-level classification accuracy of 92%. This
noteworthy achievement underscores the model's proficiency in discerning
crucial patterns for accurate triage classification, showcasing its promising
potential in healthcare applications.
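ARTEMIS's acuity labeling uses a deep learning model; for contrast, a rule baseline in the style of the well-known START field-triage protocol over similar vital-sign inputs can be stated in a few lines. The function name, argument set, and exact thresholds below are a textbook-style simplification, not the paper's system:

```python
def start_triage(breathing, resp_rate, radial_pulse, obeys_commands, can_walk):
    """Simplified START-style triage over basic vital signs.

    Illustrative rule baseline only, not ARTEMIS's learned model.
    Returns the conventional color-coded triage label.
    """
    if can_walk:
        return "green"    # minor: ambulatory patients are deferred
    if not breathing:
        return "black"    # expectant (real START first repositions the airway)
    if resp_rate > 30 or not radial_pulse or not obeys_commands:
        return "red"      # immediate: failed breathing/perfusion/mental check
    return "yellow"       # delayed
```

A learned model earns its keep over such rules by exploiting patterns across raw sensor streams, but a baseline like this is a useful sanity check for any triage classifier's outputs.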
Cheating with robots: How at ease do they make us feel?
People are not perfect, and if given the chance, some will be dishonest with no regrets. Some people will cheat just a little to gain some advantage, and others will not do it at all. With the prospect of more human-robot interactions in the future, it will become very important to understand which kinds of roles a robot can have in the regulation of cheating behavior. We investigated whether people will cheat while in the presence of a robot and to what extent this depends on the role the robot plays. We ran a study to test cheating behavior with a die task and allocated people to one of the following conditions: 1) participants were alone in the room while doing the task; 2) participants did the task with a robot in a vigilant role; or 3) participants did the task with a robot that had a supporting role, accompanying them and giving instructions. Our results showed that participants cheated significantly more than chance when they were alone or with the robot giving instructions. In contrast, cheating could not be proven when the robot took a vigilant role. This study has implications for human-robot interaction and for the deployment of autonomous robots in sensitive roles in which people may be prone to dishonest behavior.
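The "significantly more than chance" comparison in die-task paradigms is typically an exact one-sided binomial test against the honest-reporting baseline. The participant counts below are invented for illustration; only the test itself is standard:

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): exact one-sided upper tail."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 40 participants each report one die roll, and 14
# report the single highest-paying face (chance = 1/6, expected ~6.7).
p_value = binom_sf(14, 40, 1 / 6)   # small p => reports exceed chance
```

If the observed count of favorable reports gives a small p-value in a condition, honest reporting is implausible there; a p-value above the significance threshold (as presumably in the vigilant-robot condition) means cheating cannot be proven, not that it is absent.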
Active Sensing for Dynamic, Non-holonomic, Robust Visual Servoing
We consider the problem of visually servoing a legged vehicle with unicycle-like nonholonomic constraints subject to second-order fore-aft dynamics in its horizontal plane. We target applications to rugged environments characterized by complex terrain likely to perturb significantly the robot’s nominal dynamics. At the same time, it is crucial that the camera avoid “obstacle” poses where absolute localization would be compromised by even partial loss of landmark visibility. Hence, we seek a controller whose robustness against disturbances and obstacle avoidance capabilities can be assured by a strict global Lyapunov function. Since the nonholonomic constraints preclude smooth point stabilizability we introduce an extra degree of sensory freedom, affixing the camera to an actuated panning axis mounted on the robot’s back. Smooth stabilizability to the robot-orientation-indifferent goal cycle no longer precluded, we construct a controller and strict global Lyapunov function with the desired properties. We implement several versions of the scheme on a RHex robot maneuvering over slippery ground and document its successful empirical performance.
For more information: Kod*La
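The role of the actuated panning axis can be illustrated with a toy kinematic sketch (invented here, not the paper's dynamic controller or Lyapunov construction): the body is free to yaw as the gait demands, while the pan angle is chosen so the camera's absolute pointing stays on the landmark.

```python
from math import atan2, pi

def pan_to_landmark(x, y, theta, lx, ly):
    """Pan angle keeping a body-mounted camera pointed at landmark (lx, ly)
    regardless of body yaw theta (toy kinematics only)."""
    bearing = atan2(ly - y, lx - x)    # landmark bearing in the world frame
    pan = bearing - theta              # camera pan relative to the body
    return (pan + pi) % (2 * pi) - pi  # wrap to (-pi, pi]

# As the body spins in place, theta + pan stays equal to the landmark
# bearing, so visibility is preserved independent of the body's motion.
for theta in [0.0, 0.5, 1.0, 2.0]:
    pan = pan_to_landmark(0.0, 0.0, theta, 5.0, 5.0)
```

Decoupling the sensor head this way is what restores smooth stabilizability in the paper's setting: the nonholonomic body need not point anywhere in particular for the camera to hold its landmarks.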