
    Teaching humanoid robotics by means of human teleoperation through RGB-D sensors

    This paper presents a graduate course project on humanoid robotics offered by the University of Padova. The goal is to safely lift an object by teleoperating a small humanoid robot. Students have to map human limbs onto robot joints, guarantee the robot's stability during the motion, and teleoperate the robot to perform the correct movement. We introduce the following innovative aspects with respect to classical robotics classes: i) the use of humanoid robots as teaching tools; ii) the simplification of the stable locomotion problem by exploiting the potential of teleoperation; iii) the adoption of a Project-Based Learning constructivist approach as the teaching methodology. The learning objectives of both the course and the project are introduced and compared with the students' background. The design choices and constraints students have to deal with are reported, together with the amount of time they and their instructors dedicated to solving the tasks. A set of evaluation results, including the students' personal feedback, is provided in order to validate the authors' purpose. A discussion of possible future improvements is reported, hoping to encourage the further spread of educational robotics in schools at all levels.

    Human-Robot Collaboration for Kinesthetic Teaching

    Recent industrial interest in producing smaller volumes of products in shorter time frames, in contrast to the mass production of previous decades, has motivated the introduction of human–robot collaboration (HRC) in industrial settings as an attempt to increase flexibility in manufacturing applications by incorporating human intelligence and dexterity into these processes. This thesis presents methods for improving the involvement of human operators in industrial settings where robots are present, with a particular focus on kinesthetic teaching, i.e., manually guiding the robot to define or correct its motion, since it can facilitate non-expert robot programming. Increasing flexibility in the manufacturing industry implies a loss of the fixed structure of the industrial environment, which increases the uncertainties in the shared workspace between humans and robots. Two methods have been proposed in this thesis to mitigate such uncertainty. First, null-space motion was used to increase the accuracy of kinesthetic teaching by reducing the joint static friction, or stiction, without altering the execution of the robotic task. This was possible since robots used in HRC, i.e., collaborative robots, are often designed with additional degrees of freedom (DOFs) for greater dexterity. Second, to perform effective corrections of the robot's motion through kinesthetic teaching in partially unknown industrial environments, fast identification of the source of robot–environment contact is necessary. Fast contact detection and classification methods from the literature were evaluated, extended, and modified for use in kinesthetic teaching applications for an assembly task.
For this, collaborative robots that are made compliant with respect to external forces/torques (as an active safety mechanism) were used, and only the embedded sensors of the robot were considered. Moreover, safety is a major concern when robotic motion occurs in an inherently uncertain scenario, especially if humans are present. Therefore, an online variation of the compliant behavior of the robot during its manual guidance by a human operator was proposed to avoid undesired parts of the robot's workspace. The proposed method used safety control barrier functions (SCBFs) that consider the rigid-body dynamics of the robot, and the method's stability was guaranteed using a passivity-based energy-storage formulation that includes a strict Lyapunov function. All presented methods were tested experimentally on a real collaborative robot.
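The barrier-function idea behind the workspace restriction can be illustrated with a toy safety filter. This is a minimal sketch under strong simplifying assumptions: a 1-D single-integrator "robot" with barrier h(x) = x_max - x, so the CBF quadratic program reduces to a closed-form clamp. The thesis's actual SCBF method accounts for full rigid-body dynamics and passivity, which this toy omits.

```python
def cbf_filter(x, u_des, x_max=1.0, alpha=5.0):
    """Control-barrier-function safety filter for the 1-D single
    integrator x_dot = u with barrier h(x) = x_max - x.
    The CBF condition h_dot >= -alpha * h reduces to
    u <= alpha * (x_max - x), so the QP solution is a simple clamp."""
    u_bound = alpha * (x_max - x)
    return min(u_des, u_bound)

# Push the system toward the boundary: the filter attenuates the
# commanded velocity as x approaches x_max, so x never crosses it.
x, dt = 0.0, 0.01
for _ in range(1000):
    u = cbf_filter(x, u_des=2.0)
    x += dt * u
print(round(x, 3))  # approaches, but stays below, x_max = 1.0
```

The same clamp structure generalizes: with more states and constraints the closed form becomes a small quadratic program solved at each control step.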

    Robot Introspection with Bayesian Nonparametric Vector Autoregressive Hidden Markov Models

    Robot introspection, as opposed to the anomaly detection typical in process monitoring, helps a robot understand what it is doing at all times. A robot should be able to identify its actions not only when failure or novelty occurs, but also as it executes any number of sub-tasks. As robots continue their quest of functioning in unstructured environments, it is imperative that they understand what it is that they are actually doing in order to become more robust. This work investigates the modeling ability of Bayesian nonparametric techniques on Markov switching processes to learn the complex dynamics typical of robot contact tasks. We study whether the Markov switching process, together with Bayesian priors, can outperform the modeling ability of its counterparts: an HMM with and without Bayesian priors. The work was tested in a snap assembly task characterized by high elastic forces. The task consists of an insertion subtask with very complex dynamics. Our approach showed a stronger ability to generalize and was able to better model the subtask with complex dynamics in a computationally efficient way. The modeling technique is also used to learn a growing library of robot skills, one that, when integrated with low-level control, allows for robot online decision making. Comment: final version submitted to Humanoids 201
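The core switching-dynamics idea can be sketched in a few lines. This is a deliberately tiny stand-in: two scalar AR(1) modes with invented coefficients, and a per-step maximum-likelihood mode label that ignores the transition prior. The paper's actual models are Bayesian nonparametric vector autoregressive HMMs, which infer the number of modes and their dynamics from data rather than assuming them.

```python
import numpy as np

rng = np.random.default_rng(0)

A = {0: 0.9, 1: -0.5}  # per-mode AR(1) coefficients (assumed for illustration)
sigma = 0.1            # process-noise standard deviation

# Simulate a trajectory that switches dynamics halfway through.
modes = [0] * 100 + [1] * 100
x = [0.5]
for m in modes:
    x.append(A[m] * x[-1] + sigma * rng.standard_normal())
x = np.array(x)

# Introspection step: score each transition under both mode models and
# label it with the more likely one (transition prior omitted for brevity).
def loglik(m, x_prev, x_next):
    r = x_next - A[m] * x_prev
    return -0.5 * (r / sigma) ** 2

labels = np.array([max((0, 1), key=lambda m: loglik(m, x[t], x[t + 1]))
                   for t in range(len(x) - 1)])
accuracy = np.mean(labels == np.array(modes))
print(f"mode recovery accuracy: {accuracy:.2f}")
```

Even this naive per-step classifier recovers most mode labels; the full model additionally smooths labels through the Markov transition structure.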

    Learning Human-Robot Collaboration Insights through the Integration of Muscle Activity in Interaction Motion Models

    Recent progress in human-robot collaboration makes fast and fluid interactions possible, even when human observations are partial and occluded. Methods like Interaction Probabilistic Movement Primitives (ProMP) model human trajectories through motion capture systems. However, such a representation does not properly model tasks where similar motions handle different objects. Under current approaches, a robot would not adapt its pose and dynamics for proper handling. We integrate the use of electromyography (EMG) into the Interaction ProMP framework and utilize muscular signals to augment the human observation representation. The contribution of our paper is increased task discernment when trajectories are similar but tools are different and require the robot to adjust its pose for proper handling. Interaction ProMPs are used with an augmented vector that integrates muscle activity. Augmented time-normalized trajectories are used in training to learn correlation parameters, and robot motions are predicted by finding the best weight combination and temporal scaling for a task. Collaborative single-task scenarios with similar motions but different objects were used and compared. For one experiment only joint angles were recorded; for the other, EMG signals were additionally integrated. Task recognition was computed for both tasks. Observation state vectors with augmented EMG signals were able to completely identify differences across tasks, while the baseline method failed every time. Integrating EMG signals into collaborative tasks significantly increases the ability of the system to recognize nuances in the tasks that are otherwise imperceptible, by up to 74.6% in our studies. Furthermore, the integration of EMG signals for collaboration also opens the door to a wide class of human-robot physical interactions based on haptic communication that has been largely unexploited in the field. Comment: 7 pages, 2 figures, 2 tables. As submitted to Humanoids 201
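The benefit of augmenting the observation vector can be shown with a toy recognition example. This is a sketch with invented features and distributions, not the paper's method: two tasks share an identical joint-angle signature (by construction) but differ in EMG amplitude, and a nearest-prototype classifier stands in for the ProMP weight-space recognition.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_demo(task, n=50):
    # Joint-angle feature is identical across tasks by construction;
    # EMG amplitude differs because the grasped objects differ.
    joints = rng.normal(0.0, 0.05, n)                  # same motion
    emg = rng.normal(0.2 if task == 0 else 0.8, 0.05, n)
    return joints, emg

def classify(obs, prototypes):
    # Nearest prototype in feature space (a stand-in for selecting the
    # best ProMP weight combination).
    d = [np.linalg.norm(obs - p) for p in prototypes]
    return int(np.argmin(d))

# One training demonstration per task provides the prototypes.
demos = [make_demo(t) for t in (0, 1)]
proto_joint = [np.array([d[0].mean()]) for d in demos]
proto_aug = [np.array([d[0].mean(), d[1].mean()]) for d in demos]

# A new observation from task 1: joint-only features are ambiguous,
# while the EMG-augmented vector separates the tasks cleanly.
j, e = make_demo(1)
pred_joint = classify(np.array([j.mean()]), proto_joint)
pred_aug = classify(np.array([j.mean(), e.mean()]), proto_aug)
print(pred_joint, pred_aug)
```

With joint angles alone the two prototypes are statistically indistinguishable, so the prediction is a coin flip; the augmented vector identifies the task reliably.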

    Flexible Task Execution and Cognitive Control in Human-Robot Interaction

    A robotic system that interacts with humans is expected to flexibly execute structured cooperative tasks while reacting to unexpected events and behaviors. In this thesis, these issues are addressed by presenting a framework that integrates cognitive control, executive attention, structured task execution, and learning. In the proposed approach, the execution of structured tasks is guided by top-down (task-oriented) and bottom-up (stimuli-driven) attentional processes that affect behavior selection and activation while resolving conflicts and decisional impasses. Specifically, attention is deployed here to stimulate the activation of multiple hierarchical behaviors, orienting them toward the execution of finalized and interactive activities. On the other hand, this framework allows a human to indirectly and smoothly influence the robotic task execution by exploiting attention manipulation. We provide an overview of the overall system architecture, discussing the framework at work in different application contexts. In particular, we show that multiple concurrent tasks/plans can be effectively orchestrated and interleaved in a flexible manner; moreover, in a human-robot interaction setting, we test and assess the effectiveness of the attention manipulation and learning processes.

    Flexible human-robot cooperation models for assisted shop-floor tasks

    The Industry 4.0 paradigm emphasizes the crucial benefits that collaborative robots, i.e., robots able to work alongside and together with humans, could bring to the whole production process. In this context, an enabling technology not yet achieved is the design of flexible robots able to deal at all levels with humans' intrinsic variability, which is not only a necessary element for a comfortable working experience for the person but also a precious capability for efficiently dealing with unexpected events. In this paper, a sensing, representation, planning, and control architecture for flexible human-robot cooperation, referred to as FlexHRC, is proposed. FlexHRC relies on wearable sensors for human action recognition, AND/OR graphs for the representation of and reasoning upon cooperation models, and a Task Priority framework to decouple action planning from robot motion planning and control. Comment: Submitted to Mechatronics (Elsevier)
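The AND/OR-graph representation of cooperation models can be sketched as a small recursive feasibility check. This is a minimal illustration with invented node names: an assembly goal reachable either through a human-executed branch or a robot-executed branch, each requiring all of its sub-actions. The actual FlexHRC reasoning layer is considerably richer (costs, online state updates, action suggestions).

```python
# AND/OR graph: each internal node is ("AND" | "OR", [children]);
# names and structure are hypothetical, for illustration only.
graph = {
    "assembled": ("OR", ["human_path", "robot_path"]),
    "human_path": ("AND", ["part_picked_h", "part_placed_h"]),
    "robot_path": ("AND", ["part_picked_r", "part_placed_r"]),
}

def achievable(node, done):
    """A leaf is achievable iff already done; an AND node needs all of
    its children achievable, an OR node at least one."""
    if node not in graph:
        return node in done
    kind, children = graph[node]
    results = [achievable(c, done) for c in children]
    return all(results) if kind == "AND" else any(results)

print(achievable("assembled", {"part_picked_r", "part_placed_r"}))  # True
print(achievable("assembled", {"part_picked_h"}))                   # False
```

The OR nodes are what make the cooperation flexible: if the human deviates from one branch, the planner can re-evaluate the graph and pursue another.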

    Pragmatic Frames for Teaching and Learning in Human-Robot interaction: Review and Challenges

    Vollmer A-L, Wrede B, Rohlfing KJ, Oudeyer P-Y. Pragmatic Frames for Teaching and Learning in Human-Robot Interaction: Review and Challenges. Frontiers in Neurorobotics. 2016;10:10. One of the big challenges in robotics today is to learn from human users who are inexperienced in interacting with robots but are often used to teaching skills flexibly to other humans, and to children in particular. A potential route toward natural and efficient learning and teaching in Human-Robot Interaction (HRI) is to leverage the social competences of humans and the underlying interactional mechanisms. In this perspective, this article discusses the importance of pragmatic frames as flexible interaction protocols that provide important contextual cues, enabling learners to infer new action or language skills and teachers to convey these cues. After defining and discussing the concept of pragmatic frames, grounded in decades of research in developmental psychology, we study a selection of HRI work in the literature that has focused on learning-teaching interaction, and we analyze the interactional and learning mechanisms that were used in the light of pragmatic frames. This allows us to show that many of the works have already used basic elements of the pragmatic-frames machinery in practice, though not always explicitly. However, we also show that pragmatic frames have so far been used in a very restricted way compared to how they are used in human-human interaction, and we argue that this has been an obstacle preventing robust natural multi-task learning and teaching in HRI. In particular, we explain that two central features of human pragmatic frames, mostly absent from existing HRI studies, are that (1) social peers use rich repertoires of frames, potentially combined together, to convey and infer multiple kinds of cues; and (2) new frames can be learnt continually, building on existing ones, guiding the interaction toward higher levels of complexity and expressivity.
To conclude, we give an outlook on future research directions, describing the relevant key challenges that need to be solved to leverage pragmatic frames for robot learning and teaching.