
    Online Identification of Interaction Behaviors from Haptic Data during Collaborative Object Transfer

    Joint object transfer is a complex task that is less structured and less well specified than typical industrial tasks. When two humans are involved in such a task, they cooperate through different modalities to understand the interaction states during operation and mutually adapt to one another’s actions. Mutual adaptation implies that both partners can identify how well they collaborate (i.e., infer the interaction state) and act accordingly. These interaction states can define whether the partners work in harmony, face conflicts, or remain passive during interaction. Understanding how two humans work together during physical interaction is important when exploring how a robotic assistant should operate in similar settings. This study is a first step toward an automatic classification mechanism that identifies the interaction state during ongoing object co-manipulation. The classification is done on a dataset consisting of data from 40 subjects, who are partnered to form 20 dyads. The dyads take part in a physical human-human interaction (pHHI) scenario, moving an object in a haptics-enabled virtual environment to reach predefined goal configurations. We propose a sliding-window approach for feature extraction and demonstrate an online classification methodology to identify interaction patterns. We evaluate our approach using 1) a support vector machine classifier (SVMc) and 2) a Gaussian process classifier (GPc) for multi-class classification, and achieve over 80% accuracy with both classifiers when identifying general interaction types.
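
    As an illustration of the sliding-window pipeline described above, the sketch below extracts simple per-window statistics from a haptic stream and trains a multi-class SVM. The window length, feature set, and synthetic data are assumptions for illustration, not the study's actual configuration.

```python
# Minimal sketch, assuming per-window mean/std/peak features over a 6-axis
# force/torque stream; labels here are random stand-ins, not real data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def sliding_window_features(signal, window=200, step=50):
    """Per-window mean, std, and peak magnitude of a (T, D) haptic stream."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0),
                                     np.abs(w).max(axis=0)]))
    return np.array(feats)

rng = np.random.default_rng(0)
X = sliding_window_features(rng.normal(size=(5000, 6)))
y = rng.integers(0, 3, size=len(X))  # e.g., harmony / conflict / passive

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)  # multi-class handled one-vs-one
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```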

    Resolving conflicts during human-robot co-manipulation

    This work is partially funded by UKRI and CHIST-ERA (HEAP: EP/S033718/2; Horizon: EP/T022493/1; TAS Hub: EP/V00784X). This paper proposes a machine learning (ML) approach to detect and resolve motion conflicts that occur between a human and a proactive robot during the execution of a physically collaborative task. We train a random forest classifier to distinguish between harmonious and conflicting human-robot interaction behaviors during object co-manipulation. Kinesthetic information generated through the teamwork is used to describe the interactive quality of the collaboration. We demonstrate that features derived from haptic (force/torque) data are sufficient to classify whether the human and the robot manipulate the object harmoniously or face a conflict. A conflict resolution strategy gets the robotic partner to contribute proactively to the task via online trajectory planning whenever interactive motion patterns are harmonious, and to follow the human lead when a conflict is detected. An admittance controller regulates the physical interaction between the human and the robot during the task, enabling the robot to follow the human passively when there is a conflict. An artificial potential field is used to control the robot's motion proactively when the partners work in harmony. An experimental study is designed to create scenarios involving harmonious and conflicting interactions during collaborative manipulation of an object, and to build a dataset for training and testing the random forest classifier. The results show that ML can successfully detect conflicts and that the proposed conflict resolution mechanism significantly reduces human force and effort compared with a passive robot that always follows the human partner and a proactive robot that cannot resolve conflicts. © 2023 Copyright is held by the owner/author(s).
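
    A minimal sketch of the decision loop described above: a random forest flags conflict from haptic features, the robot yields via admittance dynamics on conflict, and tracks an attractive potential field in harmony. The feature vector, gains, and training labels are invented placeholders, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

CONFLICT, HARMONY = 0, 1

def features_from(f_ext):
    """Toy haptic feature vector: force magnitude plus raw components."""
    return np.concatenate([[np.linalg.norm(f_ext)], f_ext]).reshape(1, -1)

# Offline training on synthetic stand-in data; the real labels would come
# from the experimental study described above.
rng = np.random.default_rng(1)
F = rng.normal(size=(200, 3))
X = np.vstack([features_from(f) for f in F])
y = (np.linalg.norm(F, axis=1) < 1.5).astype(int)  # placeholder labeling rule
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def robot_velocity(f_ext, x, x_goal, v_prev, dt=0.01, m=5.0, b=20.0, k=0.8):
    """One control step: yield to the human on conflict, lead in harmony."""
    if clf.predict(features_from(f_ext))[0] == CONFLICT:
        # Admittance dynamics m*dv/dt + b*v = f_ext let the robot be led.
        return v_prev + dt * (f_ext - b * v_prev) / m
    # Attractive potential field pulls the robot toward the goal configuration.
    return -k * (x - x_goal)

v = robot_velocity(np.array([2.0, 0.0, 0.5]), np.zeros(3),
                   np.array([1.0, 0.0, 0.0]), v_prev=np.zeros(3))
```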

    Human-Robot Collaborations in Industrial Automation

    Technology is changing the manufacturing world. For example, sensors are being used to track inventories from the manufacturing floor up to a retail shelf or a customer’s door. These types of interconnected systems constitute the fourth industrial revolution, also known as Industry 4.0, and are projected to lower manufacturing costs. As industry moves toward these integrated technologies and lower costs, engineers will need to connect these systems via the Internet of Things (IoT) and to design how the connected systems interact with humans. The focus of this Special Issue is the smart sensors used in these human–robot collaborations.

    Symbol Emergence in Robotics: A Survey

    Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans form a symbol system and obtain semiotic skills through their autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and other systems. Understanding human social interactions and developing a robot that can smoothly communicate with human users over the long term require an understanding of the dynamics of symbol systems and are crucially important. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system, one that is socially self-organized through both semiotic communication and physical interaction among autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe state-of-the-art research topics in SER, e.g., multimodal categorization, word discovery, and double articulation analysis, which enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual, haptic, and auditory information and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions of research in SER. (Comment: submitted to Advanced Robotics)
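
    As a toy illustration of one SER building block, multimodal categorization, the sketch below clusters concatenated visual, haptic, and auditory features with a Gaussian mixture so that each component plays the role of an emergent object category. The feature dimensions, category count, and data are assumptions, not any specific system surveyed in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 300
visual = rng.normal(size=(n, 8))  # e.g., color/shape descriptors
haptic = rng.normal(size=(n, 4))  # e.g., stiffness/texture statistics
audio  = rng.normal(size=(n, 6))  # e.g., spectral summaries of impact sounds

# Concatenate modalities and cluster: each Gaussian component acts as an
# emergent category grounded in multimodal experience, learned unsupervised.
X = StandardScaler().fit_transform(np.hstack([visual, haptic, audio]))
categories = GaussianMixture(n_components=5, random_state=0).fit_predict(X)
print(np.bincount(categories))  # how many observations fell in each category
```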

    Robot learning from demonstration of force-based manipulation tasks

    One of the main challenges in Robotics is to develop robots that can interact with humans in a natural way, sharing the same dynamic and unstructured environments. Such an interaction may be aimed at assisting, helping or collaborating with a human user. To achieve this, the robot must be endowed with a cognitive system that allows it not only to learn new skills from its human partner, but also to refine or improve those already learned. In this context, learning from demonstration appears as a natural and user-friendly way to transfer knowledge from humans to robots. This dissertation addresses this topic and its application to an unexplored field, namely learning force-based manipulation tasks. In this kind of scenario, force signals can convey data about the stiffness of a given object, the inertial components acting on a tool, a desired force profile to be reached, etc. Therefore, if the user wants the robot to learn a manipulation skill successfully, it is essential that its cognitive system is able to deal with force perceptions. The first issue this thesis tackles is extracting the input information that is relevant for learning the task at hand, also known as the "what to imitate?" problem. The proposed solution takes into consideration that the robot actions are a function of sensory signals; in other words, the importance of each perception is assessed through its correlation with the robot's movements. A mutual information analysis is used to select the most relevant inputs according to their influence on the output space. In this way, the robot can gather all the information coming from its sensory system, and the perception selection module proposed here automatically chooses the data the robot needs to learn a given task. Having selected the relevant input information for the task, it is necessary to represent the human demonstrations in a compact way, encoding the relevant characteristics of the data, for instance, sequential information, uncertainty, constraints, etc. This is the next problem addressed in this thesis. A probabilistic learning framework based on hidden Markov models and Gaussian mixture regression is proposed for learning force-based manipulation skills. The outstanding features of this framework are: (i) it is able to deal with the noise and uncertainty of force signals because of its probabilistic formulation, (ii) it exploits the sequential information embedded in the model for managing perceptual aliasing and time discrepancies, and (iii) it takes advantage of task variables to encode those force-based skills where the robot actions are modulated by an external parameter. The resulting learning structure is therefore able to robustly encode and reproduce different manipulation tasks. This thesis then goes a step further by proposing a novel, complete framework for learning impedance-based behaviors from demonstrations. The key aspects here are that this new structure merges vision and force information to encode the data compactly, and it allows the robot to exhibit different behaviors by shaping its compliance level over the course of the task. This is achieved by a parametric probabilistic model whose Gaussian components are the basis of a statistical dynamical system that governs the robot motion. From the force perceptions, the stiffnesses of the springs composing this system are estimated, allowing the robot to shape its compliance.
This approach extends the learning paradigm beyond common trajectory following. The proposed frameworks are tested in three scenarios, namely, (a) the ball-in-box task, (b) drink pouring, and (c) a collaborative assembly, where the experimental results demonstrate the importance of using force perceptions as well as the usefulness and strengths of the proposed methods.
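
    The "what to imitate?" step above can be illustrated with a minimal mutual-information ranking of sensory channels against the robot's motion. The channel names and data below are invented for the sketch and do not reproduce the thesis's setup.

```python
# Minimal sketch: rank candidate input channels by mutual information with
# the output (robot motion); the most informative channels would be kept.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n = 1000
force_z  = rng.normal(size=n)
torque_x = rng.normal(size=n)
noise_ch = rng.normal(size=n)                        # irrelevant channel
motion = 0.8 * force_z + 0.1 * rng.normal(size=n)    # motion driven by force_z

X = np.column_stack([force_z, torque_x, noise_ch])
mi = mutual_info_regression(X, motion, random_state=0)
for name, score in zip(["force_z", "torque_x", "noise_ch"], mi):
    print(f"{name}: {score:.3f}")  # force_z should rank highest
```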

    Analysis domain model for shared virtual environments

    The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions with substantial functional overlap, yet little interoperability between them. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges for the development process, starting with the architectural design of the underlying system. This paper makes two main contributions. The first is a broad domain analysis of shared virtual environments, which enables developers to understand the whole rather than only its parts. The second is a reference domain model for discussing and describing solutions: the Analysis Domain Model.

    Internet of Robotic Things Intelligent Connectivity and Platforms

    The Internet of Things (IoT) and Industrial IoT (IIoT) have developed rapidly in the past few years, as both the Internet and “things” have evolved significantly. “Things” now range from simple Radio Frequency Identification (RFID) devices to smart wireless sensors, intelligent wireless sensors and actuators, robotic things, and autonomous vehicles operating in consumer, business, and industrial environments. The emergence of “intelligent things” (static or mobile) in collaborative autonomous fleets requires new architectures, connectivity paradigms, trustworthiness frameworks, and platforms for the integration of applications across different business and industrial domains. These new applications accelerate the development of autonomous system design paradigms and the proliferation of the Internet of Robotic Things (IoRT). In the IoRT, collaborative robotic things can communicate with other things, learn autonomously, interact safely with the environment, humans, and other things, and gain qualities such as self-maintenance, self-awareness, self-healing, and fail-operational behavior. IoRT applications can make use of the individual, collaborative, and collective intelligence of robotic things, as well as information from the infrastructure and operating context, to plan, implement, and accomplish tasks under different environmental conditions and uncertainties. The continuous, real-time interaction with the environment makes perception, location, communication, cognition, computation, connectivity, propulsion, and the integration of federated IoRT and digital platforms important components of new-generation IoRT applications. This paper reviews the taxonomy of the IoRT, emphasizing IoRT intelligent connectivity, architectures, interoperability, and trustworthiness frameworks, and surveys the technologies that enable IoRT applications across different domains to perform missions more efficiently, productively, and completely. The aim is to provide a novel perspective on the IoRT that involves communication among robotic things and humans, highlighting the convergence of several technologies and the interactions between the different taxonomies used in the literature.

    Intelligent Haptic Perception for Physical Robot Interaction

    PhD in Mechatronics Engineering. Doctoral thesis submitted: 8 January 2020; defended: 30 March 2020. The dream of having robots living among us is coming true thanks to recent advances in Artificial Intelligence (AI). The gap that still exists between that dream and reality will be filled by scientific research, but manifold challenges are yet to be addressed. Handling the complexity and uncertainty of real-world scenarios is still the major challenge in robotics today. In this respect, novel AI methods are giving robots the capability to learn from experience and therefore to cope with real-life situations. Moreover, we live in a physical world in which physical interactions are both vital and natural, so robots developed to live among humans must perform tasks that require physical interaction. Haptic perception, conceived as the ability to feel and process tactile and kinesthetic sensations, is essential for making this physical interaction possible. This research is inspired by the dream of having robots among us and therefore addresses the challenge of developing robots with haptic perception capabilities that can operate in real-world scenarios. This PhD thesis tackles problems related to physical robot interaction by employing machine learning techniques. Three AI solutions are proposed for different physical robot interaction challenges: i) grasping and manipulation of humans’ limbs; ii) tactile object recognition; iii) control of Variable-Stiffness-Link (VSL) manipulators. The ideas behind this research have potential robotic applications such as search and rescue, healthcare, and rehabilitation. This dissertation is a compendium of publications, its main body being a compilation of previously published scientific articles. The baseline of this research comprises five papers published in prestigious peer-reviewed scientific journals and international robotics conferences.
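
    As a hedged illustration of the tactile object recognition challenge (ii), the sketch below classifies objects from flattened tactile-array frames. The sensor resolution, object classes, and data are stand-ins, not the thesis's actual setup.

```python
# Minimal sketch, assuming a 4x4 pressure array and random stand-in data;
# real tactile frames and labels would replace the synthetic arrays below.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_grasps, rows, cols = 240, 4, 4
frames = rng.random((n_grasps, rows, cols))   # one tactile frame per grasp
labels = rng.integers(0, 3, size=n_grasps)    # e.g., sponge / bottle / box

X = frames.reshape(n_grasps, -1)              # flatten each frame to a vector
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())
```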
