50,610 research outputs found

    Learning and Reasoning for Robot Sequential Decision Making under Uncertainty

    Full text link
    Robots frequently face complex tasks that require more than one action, making sequential decision-making (SDM) capabilities necessary. The key contribution of this work is a robot SDM framework, called LCORPP, that simultaneously supports supervised learning for passive state estimation, automated reasoning with declarative human knowledge, and planning under uncertainty toward achieving long-term goals. In particular, we use a hybrid reasoning paradigm to refine the state estimator and to provide informative priors for the probabilistic planner. In experiments, a mobile robot is tasked with estimating human intentions using their motion trajectories, declarative contextual knowledge, and human-robot interaction (dialog-based and motion-based). Results suggest that, in both efficiency and accuracy, our framework outperforms its no-learning and no-reasoning counterparts in an office environment.
    Comment: In proceedings of 34th AAAI conference on Artificial Intelligence, 202
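    The interplay the abstract describes, a learned state estimator refined by declarative knowledge to yield informative priors for a probabilistic planner, can be pictured with a minimal Bayesian sketch. Everything below (function names, intention labels, probabilities) is invented for illustration and is not the LCORPP implementation:

```python
# Hypothetical sketch: fuse a learned likelihood over human intentions with
# a contextual prior derived from declarative knowledge, via Bayes' rule.

def fuse_intention_belief(likelihood, contextual_prior):
    """Posterior ∝ likelihood × prior, normalized over intentions."""
    posterior = {i: likelihood[i] * contextual_prior[i] for i in likelihood}
    z = sum(posterior.values())
    return {i: p / z for i, p in posterior.items()}

# Output of a trajectory-based classifier (learned, passive state estimation)
likelihood = {"interested": 0.6, "not_interested": 0.4}
# Prior from declarative context, e.g. "people rarely stop by after hours"
contextual_prior = {"interested": 0.2, "not_interested": 0.8}

belief = fuse_intention_belief(likelihood, contextual_prior)
```

A probabilistic planner (e.g., a POMDP solver) could then be initialized with the fused belief rather than a uniform one, which is the sense in which the reasoning layer supplies "informative priors."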

    Music-aided affective interaction between human and service robot

    Get PDF
    This study proposes a music-aided framework for affective interaction of service robots with humans. The framework consists of three systems, respectively, for perception, memory, and expression, on the basis of the human brain mechanism. We propose a novel approach to identify human emotions in the perception system. Conventional approaches use speech and facial expressions as representative bimodal indicators for emotion recognition. Our approach, however, uses the mood of music as a supplementary indicator, alongside speech and facial expressions, to determine emotions more accurately. For multimodal emotion recognition, we propose an effective decision criterion using records of bimodal recognition results relevant to the musical mood. The memory and expression systems also utilize musical data to provide natural and affective reactions to human emotions. For evaluation of our approach, we simulated the proposed human-robot interaction with a service robot, iRobiQ. Our perception system exhibited superior performance over the conventional approach, and most human participants noted favorable reactions toward the music-aided affective interaction.
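    The decision criterion can be pictured with a toy sketch: when the two primary modalities disagree, the mood of the music breaks the tie. The labels, affinity table, and logic below are invented for illustration and do not reproduce the study's actual criterion:

```python
# Hypothetical sketch: use the mood of ambient music as a tie-breaker when
# speech- and face-based emotion estimates disagree.

MOOD_AFFINITY = {  # how compatible each emotion is with a musical mood
    "calm":  {"happy": 0.5, "sad": 0.3, "angry": 0.1, "neutral": 0.7},
    "tense": {"happy": 0.2, "sad": 0.4, "angry": 0.8, "neutral": 0.3},
}

def recognize_emotion(speech_emotion, face_emotion, music_mood):
    if speech_emotion == face_emotion:
        return speech_emotion          # bimodal agreement: accept directly
    affinity = MOOD_AFFINITY[music_mood]
    # disagreement: prefer the candidate more compatible with the music mood
    return max((speech_emotion, face_emotion), key=lambda e: affinity[e])

assert recognize_emotion("angry", "angry", "calm") == "angry"
assert recognize_emotion("happy", "angry", "tense") == "angry"
```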

    Adaptive physical human-robot interaction (PHRI) with a robotic nursing assistant.

    Get PDF
    Recently, more and more robots are being investigated for future applications in healthcare. In nursing assistance, for instance, seamless Human-Robot Interaction (HRI) is very important for sharing workspaces and workloads between medical staff, patients, and robots. In this thesis we introduce a novel robot, the Adaptive Robot Nursing Assistant (ARNA), and its underlying components. ARNA has been designed specifically to assist nurses with day-to-day tasks such as walking patients, pick-and-place item retrieval, and routine patient health monitoring. An adaptive HRI in nursing applications creates a positive user experience and increases nurse productivity and task completion rates, as reported by experimentation with human subjects. ARNA has been designed to include interface devices such as tablets, force sensors, pressure-sensitive robot skins, LIDAR, and an RGBD camera. These interfaces are combined with adaptive controllers and estimators within a proposed framework that contains multiple innovations. A research study was conducted on methods of deploying an ideal Human-Machine Interface (HMI), in this case a tablet-based interface. The initial study indicates that a traded-control level of autonomy is ideal for tele-operation of ARNA by a patient. The proposed method of using the HMI devices makes the performance of the robot similar for both skilled and unskilled workers. A neuro-adaptive controller (NAC), which contains several neural networks to estimate and compensate for system non-linearities, was implemented on the ARNA robot. By linearizing the system, a cross-over usability condition is met through which humans find it more intuitive to learn to use the robot in any location of its workspace. A novel Base-Sensor Assisted Physical Interaction (BAPI) controller is introduced in this thesis, which utilizes a force-torque sensor at the base of the ARNA robot manipulator to detect full-body collisions, making interaction safer.
Finally, a human-intent estimator (HIE) is proposed to estimate human intent while the robot and user physically collaborate during certain tasks, such as adaptive walking. A NAC with the HIE module was validated on a PR2 robot through user studies. Its implementation on the ARNA robot platform can be easily accomplished, as the controller is model-free and can learn robot dynamics online. A new framework, Directive Observer and Lead Assistant (DOLA), is proposed for ARNA which enables the user to interact with the robot in two modes: physically, by direct push-guiding, and remotely, through a tablet interface. In both cases, the human is being “observed” by the robot, then guided and/or advised during interaction. If the user has trouble completing the given tasks, the robot adapts its repertoire to lead the user toward completing the goals. The proposed framework incorporates interface devices as well as adaptive control systems in order to facilitate a higher-performance interaction between the user and the robot than was previously possible. The ARNA robot was deployed and tested in a hospital environment at the School of Nursing of the University of Louisville. The user-experience tests were conducted with the help of healthcare professionals, and several metrics, including completion time, completion rate, and level of user satisfaction, were collected to shed light on the performance of the various components of the proposed framework. The results indicate an overall positive response toward the use of such an assistive robot in the healthcare environment. The analysis of the gathered data is included in this document. To summarize, this research study makes the following contributions: conducting user-experience studies with the ARNA robot in patient-sitter and walker scenarios to evaluate both physical and non-physical human-machine interfaces; evaluating and validating the Human Intent Estimator (HIE) and Neuro-Adaptive Controller (NAC); proposing the novel Base-Sensor Assisted Physical Interaction (BAPI) controller; building simulation models for packaged tactile sensors and validating the models with experimental data; and describing the Directive Observer and Lead Assistant (DOLA) framework for ARNA using adaptive interfaces.

    An Optimal Control Framework for Influencing Human Driving Behavior in Mixed-Autonomy Traffic

    Full text link
    As autonomous vehicles (AVs) become increasingly prevalent, their interaction with human drivers presents a critical challenge. Current AVs lack social awareness, causing behavior that is often awkward or unsafe. To combat this, social AVs, which are proactive rather than reactive in their behavior, have been explored in recent years. With knowledge of robot-human interaction dynamics, a social AV can influence a human driver to exhibit desired behaviors by strategically altering its own behaviors. In this paper, we present a novel framework for achieving human influence. The foundation of our framework lies in an innovative use of control barrier functions to formulate the desired objectives of influence as constraints in an optimal control problem. The computed controls gradually push the system state toward satisfaction of the objectives, e.g., slowing the human down to some desired speed. We demonstrate the proposed framework's feasibility in a variety of scenarios related to car-following and lane changes, including multi-robot and multi-human configurations. In two case studies, we validate the framework's effectiveness when applied to the problems of traffic flow optimization and aggressive behavior mitigation. Given these results, the main contribution of our framework is its versatility across a wide spectrum of influence objectives and mixed-autonomy configurations.
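    The core mechanism can be sketched in a few lines: an influence objective is encoded as a function h(x) >= 0 and enforced through a discrete-time control-barrier-function-style constraint h(x_{k+1}) >= (1 - γ) h(x_k), so each control step shrinks any violation. The toy dynamics, numbers, and grid search over controls below are invented stand-ins for the paper's optimal control formulation:

```python
# Hypothetical 1-D example: influence a human driver's speed down to V_DES.
# The objective "human at or below V_DES" is h(v) = V_DES - v >= 0, enforced
# as a discrete-time CBF-style constraint on the AV's control.

V_DES, GAMMA, DT = 20.0, 0.05, 0.1

def h(v_human):
    return V_DES - v_human            # objective satisfied when h >= 0

def human_response(v_human, u_av):
    # toy car-following model: the human adjusts toward the AV's speed change
    return v_human + 0.5 * u_av * DT

def cbf_control(v_human, candidates):
    """Pick the smallest-magnitude control satisfying the CBF constraint."""
    feasible = [u for u in candidates
                if h(human_response(v_human, u)) >= (1 - GAMMA) * h(v_human)]
    return min(feasible, key=abs)

v = 25.0                               # human above desired speed: h(v) < 0
for _ in range(200):
    u = cbf_control(v, [x * 0.5 for x in range(-20, 21)])  # -10..10 m/s^2
    v = human_response(v, u)           # v is driven toward V_DES
```

In the paper's setting the same kind of constraint would enter a full optimal control problem rather than a grid search, but the gradual push toward objective satisfaction is the same idea.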

    A Novel Approach for Performance Assessment of Human-Robotic Interaction

    Get PDF
    Robots have always been touted as powerful tools that could be used effectively in a number of applications, ranging from automation to human-robot interaction. In order for such systems to operate adequately and safely in the real world, they must be able to perceive and must possess reasoning abilities up to a certain level. Toward this end, performance evaluation metrics are used as important measures. This research work is intended as a further step toward identifying common metrics for task-oriented human-robot interaction. We believe that within the context of human-robot interaction systems, both humans' and robots' actions and interactions (jointly and independently) can significantly affect the quality of the accomplished task. As such, our goal becomes that of providing a foundation upon which we can assess how well the human and the robot perform as a team. Thus, we propose a generic performance metric to assess the performance of the human-robot team, where one or more robots are involved. Sequential and parallel robot cooperation schemes with varying levels of task dependency are considered, and the proposed performance metric is augmented and extended to accommodate such scenarios. This is supported by intuitively derived mathematical models and advanced numerical simulations. To efficiently model such a metric, we propose a two-level fuzzy temporal model to evaluate and estimate human trust in automation while the human collaborates and interacts with robots and machines to complete tasks. Trust modelling is critical, as it directly influences how much interaction time should be dedicated, directly and indirectly, to interacting with the robot. Another fuzzy temporal model is also presented to evaluate human reliability during interaction time.
A significant amount of research work stipulates that system failures are attributable almost equally to humans and to machines; assessing this factor in human-robot interaction systems is therefore crucial. The proposed framework is based on the most recent research in the areas of human-machine interaction and performance evaluation metrics. The fuzzy knowledge bases are further updated by implementing an application robotic platform where robots and users interact via semi-natural language to achieve tasks with varying levels of complexity and completion rates. User feedback is recorded and used to tune the knowledge base where needed. This work is intended to serve as a foundation for further quantitative research on evaluating the performance of human-robot teams in the achievement of collective tasks.
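    A minimal flavor of a fuzzy trust estimate, assuming triangular membership functions and two Mamdani-style rules; all functions, rules, and constants below are invented, and the thesis's actual two-level fuzzy temporal model is more elaborate:

```python
# Hypothetical sketch: estimate human trust in the robot from two temporal
# inputs, recent task-success rate and normalized recent interaction time.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def trust_level(success_rate, norm_interaction_time):
    # fuzzify the inputs
    good = tri(success_rate, 0.4, 1.0, 1.6)            # high success -> "good"
    slow = tri(norm_interaction_time, 0.5, 1.5, 2.5)   # long sessions -> "slow"
    # two Mamdani-style rules, defuzzified as a weighted average:
    # R1: IF performance good AND NOT slow THEN trust high (1.0)
    # R2: IF slow THEN trust low (0.2)
    w1 = min(good, 1.0 - slow)
    w2 = slow
    if w1 + w2 == 0:
        return 0.5                                     # no rule fires: neutral
    return (w1 * 1.0 + w2 * 0.2) / (w1 + w2)
```

Such an estimate could then feed the higher-level performance metric, e.g. by weighting how much interaction time the human is expected to dedicate to the robot.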

    A²ML: A general human-inspired motion language for anthropomorphic arms based on movement primitives

    Get PDF
    The recent increasing demand for accomplishing complicated manipulation tasks necessitates the development of effective task-motion planning techniques. When such tasks are performed in the vicinity of humans by robot arms that resemble an anthropomorphic arrangement, a dedicated and unified anthropomorphism-aware task-motion planning framework is needed so that nearby humans can understand the robot's movement intention and safe human–robot interaction is achieved without causing unease or discomfort. A general human-inspired four-level Anthropomorphic Arm Motion Language (A²ML) is therefore proposed for the first time to serve as this framework. First, six hypotheses/rules of human arm motion are extracted from the neurophysiology literature, which form the basis and guidelines for the design of A²ML. Inspired by these rules, a library of movement primitives and a related motion grammar are designed to build the complete motion language. The movement primitives in the library are designed from two different but associated representation spaces of arm configuration: the Cartesian-posture-swivel-angle space and the human arm triangle space. Since these two spaces can always be recognized for all anthropomorphic arms, the designed movement primitives and the consequent motion language possess favorable generality. Decomposition techniques described by the A²ML grammar are proposed to decompose complicated tasks into movement primitives. Furthermore, a quadratic-programming-based method and a sampling-based method serve as powerful interfaces for transforming the decomposed tasks expressed in A²ML into specific joint trajectories of different arms. Finally, the generality and advantages of the proposed motion language are validated by extensive simulations and experiments on two different anthropomorphic arms.
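    The grammar component can be illustrated with a toy sketch: a task is decomposed into a sequence of movement primitives, and the grammar constrains which primitive may legally follow which. The primitive names and transition table below are hypothetical, not the actual A²ML grammar:

```python
# Hypothetical motion grammar as a transition table over invented primitives.

NEXT = {                      # allowed successors for each primitive
    "START":     {"reach"},
    "reach":     {"orient", "grasp"},
    "orient":    {"grasp"},
    "grasp":     {"transport"},
    "transport": {"release"},
    "release":   set(),       # a motion sentence ends here
}

def is_valid_motion_sentence(primitives):
    """Check a primitive sequence against the grammar's transition table."""
    state = "START"
    for p in primitives:
        if p not in NEXT.get(state, set()):
            return False
        state = p
    return state == "release"  # a complete sentence must end with release

assert is_valid_motion_sentence(["reach", "grasp", "transport", "release"])
assert not is_valid_motion_sentence(["grasp", "reach"])
```

In the full framework, each primitive in a validated sentence would then be mapped to joint trajectories for a specific arm, e.g. via the quadratic-programming or sampling-based interface the abstract mentions.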

    An empirical framework for human-robot proxemics

    Get PDF
    The work described in this paper was conducted within the EU Integrated Projects COGNIRON ("The Cognitive Robot Companion") and LIREC (LIving with Robots and intEractive Companions) and was funded by the European Commission under contract numbers FP6-002020 and FP7-215554.
    An empirical framework for Human-Robot (HR) proxemics is proposed which shows how the measurement and control of interpersonal distances between a human and a robot can potentially be used by the robot to interpret, predict and manipulate proxemic behaviour for Human-Robot Interactions (HRIs). The proxemic framework provides for the incorporation of inter-factor effects, and can be extended to incorporate new factors, updated values and results. The framework is critically discussed and future work is proposed.

    Dance Teaching by a Robot: Combining Cognitive and Physical Human-Robot Interaction for Supporting the Skill Learning Process

    Full text link
    This letter presents a physical human-robot interaction scenario in which a robot guides and performs the role of a teacher within a defined dance training framework. Combined cognitive and physical feedback of performance is proposed for assisting the skill learning process. Direct contact cooperation has been designed through an adaptive impedance-based controller that adjusts according to the partner's performance in the task. To measure performance, a scoring system has been designed using the concept of progressive teaching (PT). The system adjusts the difficulty based on the user's number of practices and performance history. In comparative experiments against a baseline constant controller, the proposed method showed that PT yields better performance in the initial stage of skill learning. An analysis of the subjects' perception of comfort, peace of mind, and robot performance has shown a significant difference at the p < .01 level, favoring the PT algorithm.
    Comment: Presented at IEEE International Conference on Robotics and Automation ICRA-201
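    A progressive-teaching difficulty update of this kind might look like the following sketch, where the thresholds, step size, and 0-100 scoring scale are invented for illustration and do not come from the letter:

```python
# Hypothetical PT-style update: raise task difficulty as the user's recent
# scores improve, ease off after repeated poor attempts.

def update_difficulty(difficulty, recent_scores, step=0.1):
    """Return a new difficulty in [0, 1] from the last few scores (0-100)."""
    if not recent_scores:
        return difficulty             # no practice history yet: keep as-is
    avg = sum(recent_scores) / len(recent_scores)
    if avg >= 80:                     # consistently good: make the dance harder
        difficulty += step
    elif avg < 50:                    # struggling: reduce the difficulty
        difficulty -= step
    return min(1.0, max(0.0, difficulty))
```

The resulting difficulty value could, in turn, parameterize the adaptive impedance controller, e.g. by loosening guidance as the user improves.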