
    Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery

    State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in controlling and collaborating on manipulation task behaviors. This remains a significant challenge, however, given that many WIMP-style tools require at least superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has gained momentum in recent years: robots now plan in partially observable environments while maintaining geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents that exploit this knowledge. However, since autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning toward recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques that allow non-technical users to model task behaviors and perceive the world cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities through which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted on 'pick-and-place' tasks in an idealized 'Blocks World' environment using a Kinova JACO six degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries; (2) task specification as an unordered list of goal predicates; and (3) guided task recovery using implied scene geometries and trajectories via symmetry cues and configuration-space abstraction. Empirical results from four user studies show that our interface was strongly preferred over the control condition, demonstrating high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control.
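
    Task specification as an unordered list of goal predicates, feature (2) above, can be illustrated with a minimal sketch. The predicate names and the `goal_satisfied` helper below are hypothetical illustrations, not the thesis's actual interface; the only assumption is that a planner accepts any goal set whose predicates it can evaluate against the perceived scene.

```python
# Minimal sketch: a 'Blocks World' task as an unordered set of goal
# predicates. Predicate names and the scene format are hypothetical.

# Each predicate is (name, *args); order in the set carries no meaning.
goal = {
    ("on", "red_block", "blue_block"),   # stack red on blue
    ("at", "blue_block", "zone_A"),      # blue block inside region A
}

def goal_satisfied(goal: set, scene: set) -> bool:
    """A goal holds when every goal predicate is true in the scene.

    `scene` is assumed to be the set of predicates the robot's
    semantic map currently believes to be true.
    """
    return goal <= scene  # subset test: unordered, duplicates irrelevant

# The planner would keep proposing actions until this test passes.
scene = {("on", "red_block", "blue_block"),
         ("at", "blue_block", "zone_A"),
         ("clear", "red_block")}
assert goal_satisfied(goal, scene)
```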

    An interactive interface for nursing robots.

    Physical Human-Robot Interaction (pHRI) is inevitable for a human user working with assistive robots. There are various aspects to pHRI, such as the choice of interface, the type of control scheme implemented, and the modes of interaction. The research work presented in this thesis concentrates on a health-care assistive robot called the Adaptive Robot Nursing Assistant (ARNA). An assistive robot in a health-care environment has to be able to perform routine tasks while remaining aware of the surrounding environment. Operating the robot through teleoperation alone would be tedious for some patients, as it requires a high level of concentration and can cause cognitive fatigue; it would also impose a learning curve before the user could teleoperate the robot efficiently. The research work involves the development of a Human-Machine Interface (HMI) framework that integrates a decision-making module, an interaction module, and a tablet interface module. The HMI framework implements traded-control interaction, which allows the robot to make decisions on planning and executing a task while the user only has to specify the task through a tablet interface. According to the preliminary experiments conducted as part of this thesis, the traded-control approach allows a novice user to operate the robot with the same efficiency as an expert user. Past research has shown that, during a conversation with a speech interface, a user feels disengaged if the answers received from the interface are not in the context of the conversation. The work in this thesis explores different possibilities for implementing a speech interface able to reply to any conversational query from the user. A speech interface was developed by creating a semantic space out of the Wikipedia database using Latent Semantic Analysis (LSA). This gave the speech interface a wide knowledge base and the ability to maintain a conversation in the context intended by the user. The interface was developed as a web service and deployed on two different robots to demonstrate its portability and ease of implementation on other robots. A tablet application was also developed that integrates the speech interface and an on-screen button interface for executing tasks through the ARNA robot. This tablet application can access video feeds and sensor data from the robots, assist the user with decision making during pick-and-place operations, monitor the user's health over time, and provide conversational dialogue during patient-sitting sessions. In this thesis, we present the software and hardware framework that enables a patient-sitter HMI, together with experimental results from a small number of users demonstrating that the concept is sound and scalable.
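
    A minimal sketch of the LSA pipeline described above, assuming scikit-learn in place of whatever tooling the thesis actually used, with a three-document toy corpus standing in for the Wikipedia database: documents are embedded into a low-rank semantic space via TF-IDF plus truncated SVD, and a query is answered by the nearest document under cosine similarity.

```python
# Minimal LSA sketch (assumed tooling: scikit-learn; the thesis's own
# implementation details are not specified in the abstract).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for the Wikipedia database.
docs = [
    "A nurse monitors patient vital signs during recovery.",
    "Robots can assist with pick and place tasks in hospitals.",
    "Blood pressure and heart rate are common health indicators.",
]

# Build the semantic space: TF-IDF followed by a low-rank SVD projection.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)
svd = TruncatedSVD(n_components=2, random_state=0)  # tiny space for a toy corpus
X_lsa = svd.fit_transform(X)

def answer(query: str) -> str:
    """Return the corpus document closest to the query in LSA space."""
    q = svd.transform(tfidf.transform([query]))
    sims = cosine_similarity(q, X_lsa)[0]
    return docs[sims.argmax()]

print(answer("How is the patient's heart rate?"))  # picks the vitals document
```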

    Learning Algorithm Design for Human-Robot Skill Transfer

    In this research, we develop an intelligent learning scheme for performing human-robot skill transfer. Techniques adopted in the scheme include Dynamic Movement Primitives (DMP) with Dynamic Time Warping (DTW), the Gaussian Mixture Model (GMM) with Gaussian Mixture Regression (GMR), and Radial Basis Function Neural Networks (RBFNNs). A series of experiments is conducted on a Baxter robot, a NAO robot, and a KUKA iiwa robot to verify the effectiveness of the proposed design. During the design of the intelligent learning scheme, an online tracking system is developed to control the arm and head movement of the NAO robot using a Kinect sensor. The NAO robot is a humanoid robot with 5 degrees of freedom (DOF) in each arm. The joint motions of the operator's head and arm are captured by a Kinect V2 sensor, and this information is then transferred into the workspace via forward and inverse kinematics. In addition, to improve tracking performance, a Kalman filter is employed to fuse motion signals from the operator sensed by the Kinect V2 sensor and a pair of MYO armbands, so as to teleoperate the Baxter robot. In this regard, a new strategy is developed using a vector approach to accomplish a specific motion-capture task. For instance, the arm motion of the operator is captured by a Kinect sensor and processed in software. Two MYO armbands with embedded inertial measurement units are worn by the operator to aid the robots in detecting and replicating the operator's arm movements; the armbands help to recognize and calculate the precise velocity of the operator's arm motion. Additionally, a neural network based adaptive controller is designed and implemented on the Baxter robot to validate its teleoperation. Subsequently, an enhanced teaching interface is developed for the robot using DMP and GMR. Motion signals are collected from a human demonstrator via the Kinect V2 sensor, and the data is sent to a remote PC for teleoperating the Baxter robot. At this stage, the DMP is utilized to model and generalize the movements. In order to learn from multiple demonstrations, DTW is used to preprocess the data recorded on the robot platform, and GMM is employed for the evaluation of DMP to generate multiple patterns after the completion of the teaching process. Next, we apply the GMR algorithm to generate a synthesized trajectory that minimizes position errors in three-dimensional (3D) space. This approach has been tested by performing tasks on a KUKA iiwa and a Baxter robot, respectively. Finally, an optimized DMP is added to the teaching interface. A character-recombination technology based on DMP segmentation driven by verbal commands has also been developed and incorporated into the Baxter robot platform. To imitate the recorded motion signals produced by the demonstrator, the operator trains the Baxter robot by physically guiding it to complete the given task. This is repeated five times, and the generated training data set is utilized via the playback system. Subsequently, DTW is employed to preprocess the experimental data. For modelling and overall movement control, DMP is chosen. The GMM is used to generate multiple patterns after the teaching process, and the GMR algorithm then reduces position errors in 3D space once a synthesized trajectory has been generated. The Baxter robot, remotely controlled via the User Datagram Protocol (UDP) from a PC, records and reproduces every trajectory. Additionally, Dragon NaturallySpeaking software is adopted to transcribe the voice data. The proposed approach has been verified by enabling the Baxter robot to perform a writing task in which it draws new characters after having been taught to write only one character.
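
    A minimal one-dimensional discrete DMP sketch, assuming the standard Ijspeert-style formulation (the thesis's exact parameterization is not stated in the abstract): a spring-damper attractor toward the goal plus a phase-gated forcing term learned from a single demonstration. DTW alignment and GMM/GMR averaging over multiple demonstrations, as described above, would sit in front of `fit`.

```python
# Minimal 1-D discrete DMP sketch (standard Ijspeert-style formulation,
# tau = 1; the thesis's exact parameterization is an assumption).
import numpy as np

class DMP1D:
    def __init__(self, n_basis=20, alpha=25.0, alpha_s=4.0):
        self.n, self.alpha, self.beta, self.alpha_s = n_basis, alpha, alpha / 4.0, alpha_s
        self.c = np.exp(-alpha_s * np.linspace(0, 1, n_basis))  # basis centers in phase
        self.h = 1.0 / np.diff(self.c, append=self.c[-1] / 2) ** 2  # basis widths
        self.w = np.zeros(n_basis)

    def _forcing(self, s, g):
        # Phase-gated, goal-scaled weighted sum of Gaussian basis functions.
        psi = np.exp(-self.h * (s - self.c) ** 2)
        return s * (g - self.x0) * (psi @ self.w) / (psi.sum() + 1e-10)

    def fit(self, x, dt):
        """Learn forcing-term weights from one demonstrated trajectory x(t)."""
        self.x0, self.g = x[0], x[-1]
        xd, t = np.gradient(x, dt), np.arange(len(x)) * dt
        xdd = np.gradient(xd, dt)
        s = np.exp(-self.alpha_s * t / t[-1])  # canonical phase, 1 -> ~0
        f_target = xdd - self.alpha * (self.beta * (self.g - x) - xd)
        scale = s * (self.g - self.x0)
        # Locally weighted regression: one weight per basis function.
        for i in range(self.n):
            psi = np.exp(-self.h[i] * (s - self.c[i]) ** 2)
            self.w[i] = (psi * scale * f_target).sum() / ((psi * scale ** 2).sum() + 1e-10)

    def rollout(self, T, dt, g=None):
        """Integrate the DMP, optionally toward a new goal g (generalization)."""
        g = self.g if g is None else g
        x, v, out = self.x0, 0.0, []
        for k in range(int(T / dt)):
            s = np.exp(-self.alpha_s * k * dt / T)
            v += dt * (self.alpha * (self.beta * (g - x) - v) + self._forcing(s, g))
            x += dt * v
            out.append(x)
        return np.array(out)

# Usage: learn from a toy 1-D demonstration, then replay toward a new goal.
demo = np.sin(np.linspace(0, np.pi / 2, 200))
dmp = DMP1D(); dmp.fit(demo, dt=0.005)
traj = dmp.rollout(T=1.0, dt=0.005, g=2.0)  # generalize to goal 2.0
```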

    Towards a framework for socially interactive robots

    In recent decades, research in the field of social robotics has grown considerably. The development of different types of robots and their roles within society are gradually expanding. Robots endowed with social skills are intended for a range of applications: for example, as interactive teachers and educational assistants, to support diabetes management in children, to help older people with special needs, as interactive actors in the theatre, or even as assistants in hotels and shopping centres. The RSAIT research team has been working in several areas of robotics, in particular control architectures, robot exploration and navigation, machine learning, and computer vision. The work presented in this research aims to add a new layer to that previous development: the human-robot interaction layer, which focuses on the social capabilities a robot should exhibit when interacting with people, such as expressing and perceiving emotions, sustaining high-level dialogue, learning models of other agents, establishing and maintaining social relationships, using natural means of communication (gaze, gestures, etc.), displaying distinctive personality and character, and learning social competencies. In this doctoral thesis, we try to contribute our grain of sand to the basic questions that arise when we think about social robots: (1) How do we humans communicate with (or operate) social robots? and (2) How should social robots act with us? Along those lines, the work has been developed in two phases: in the first, we focused on exploring, from a practical point of view, several ways humans use to communicate with robots in a natural manner. In the second, we investigated how social robots should act with the user. Regarding the first phase, we developed three natural user interfaces intended to make interaction with social robots more natural. To test these interfaces, two applications with different uses were developed: guide robots and a humanoid robot control system for entertainment purposes. Working on these applications allowed us to endow our robots with some basic abilities, such as navigation, robot-to-robot communication, and speech recognition and understanding capabilities. In the second phase, we focused on identifying and developing the basic behaviour modules that this kind of robot needs in order to be socially credible and trustworthy while acting as a social agent. A framework for socially interactive robots was developed that allows robots to express different types of emotions and display natural, human-like body language according to the task at hand and the environmental conditions. The different development stages of our social robots were validated through public performances. Exposing our robots to the public in these performances has become an essential tool for qualitatively measuring the social acceptance of the prototypes we are developing. In the same way that robots need a physical body to interact with the environment and become intelligent, social robots need to participate socially in the real tasks for which they have been developed in order to improve their sociability.

    Organizational concepts and interaction between humans and robots in industrial environments

    This paper discusses intuitive interaction with robotic systems and its conceptualisation in connection with known organisational problems. In particular, the focus is on the manufacturing industry with respect to its social dimension. One of the aims is to identify relevant research questions about the possibility of developing safer robot systems for closer, intuitive human-machine interaction at the manufacturing shop-floor level. We try to contribute to minimizing the cognitive and perceptual workload for robot operators in complex working systems. This will be particularly relevant as more robots with different roles, produced by different companies or designers, come to be used in the manufacturing industry to a larger extent. The social sciences approach to such technology assessment is highly relevant to understanding the dimensions of the intuitive-interaction concept.

    TOWARD INTELLIGENT WELDING BY BUILDING ITS DIGITAL TWIN

    To meet increasing requirements for individualization, efficiency, and quality in production, traditional manufacturing processes are evolving toward smart manufacturing with support from information technology advancements including cyber-physical systems (CPS), the Internet of Things (IoT), big industrial data, and artificial intelligence (AI). The prerequisite for integrating these advanced information technologies is to digitalize manufacturing processes so that they can be analyzed, controlled, and interfaced with other digitalized components. The digital twin has been developed as a general framework for doing so by building digital replicas of physical entities. This work takes welding manufacturing as the case study to accelerate its transition to intelligent welding by building its digital twin, and contributes to digital twin research in two aspects: (1) increasing information analysis and reasoning ability by integrating deep learning; (2) enhancing the human user's operative ability over physical welding manufacturing via the digital twin by integrating human-robot interaction (HRI). First, a digital twin of pulsed gas tungsten arc welding (GTAW-P) is developed that integrates deep learning to provide strong feature extraction and analysis ability. In this system, direct information including weld pool images, arc images, welding current, and arc voltage is collected by cameras and arc sensors. The indirect information determining the welding quality, i.e., weld joint top-side bead width (TSBW) and back-side bead width (BSBW), is computed by a traditional image-processing method and a deep convolutional neural network (CNN), respectively. Based on this, the weld joint geometrical size is controlled to meet quality requirements under various welding conditions. In the meantime, the developed digital twin is visualized through a graphical user interface (GUI) that gives human users effective and intuitive perception of the physical welding process. Second, to enhance the human operative ability over the physical welding process via the digital twin, HRI is integrated using virtual reality (VR) as the interface, which transmits information bidirectionally, i.e., transmitting human commands to the welding robots and visualizing the digital twin for the human users. Six welders, skilled and unskilled, tested this system by completing the same welding job, demonstrating different operation patterns and resulting welding qualities. To differentiate their skill levels (skilled or unskilled) from their demonstrated operations, a data-driven approach, FFT-PCA-SVM, combining the fast Fourier transform (FFT), principal component analysis (PCA), and a support vector machine (SVM), is developed and achieves 94.44% classification accuracy. The robot can also act as an assistant, helping human welders complete welding tasks by recognizing and executing their intended welding operations. This is done with a human intention recognition algorithm based on a hidden Markov model (HMM), and the welding experiments show that the developed robot-assisted welding helps improve welding quality. To further exploit the robot's advantages, i.e., movement accuracy and stability, the robot's role is upgraded from assistant to collaborator, completing a subtask independently, i.e., torch weaving and automatic seam tracking in weaving GTAW. The other subtask, i.e., moving the welding torch along the weld seam, is completed by the human user, who can adjust the travel speed to control the heat input and ensure good welding quality. In this way, the advantages of humans (intelligence) and robots (accuracy and stability) are combined under this human-robot collaboration framework. The developed digital twin for welding manufacturing helps promote next-generation intelligent welding and can be applied, after small modifications, to other similar manufacturing processes, including painting, spraying, and additive manufacturing.
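
    A minimal sketch of the FFT-PCA-SVM skill-classification pipeline named above, assuming scikit-learn and synthetic stand-in recordings (the thesis's actual features, window lengths, and hyperparameters are not given in the abstract): each operation recording is reduced to its FFT magnitude spectrum, compressed with PCA, and classified with an SVM.

```python
# Minimal FFT-PCA-SVM sketch (scikit-learn; synthetic stand-in data --
# the thesis's real features and hyperparameters are not given above).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def synthetic_motion(skilled: bool, n=256):
    """Toy 1-D torch-motion trace: skilled welders weave more regularly."""
    t = np.linspace(0, 4 * np.pi, n)
    jitter = 0.1 if skilled else 0.6
    return np.sin(t) + jitter * rng.standard_normal(n)

# Labelled dataset of recordings (1 = skilled, 0 = unskilled).
X_time = np.array([synthetic_motion(s) for s in ([True] * 50 + [False] * 50)])
y = np.array([1] * 50 + [0] * 50)

# FFT step: the magnitude spectrum of each recording is its feature vector.
X_freq = np.abs(np.fft.rfft(X_time, axis=1))

# PCA + SVM steps, chained in the order the pipeline's name suggests.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
X_tr, X_te, y_tr, y_te = train_test_split(X_freq, y, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```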

    Dynamic virtual reality user interface for teleoperation of heterogeneous robot teams

    This research investigates the possibility of improving current teleoperation control for heterogeneous robot teams using modern Human-Computer Interaction (HCI) techniques such as virtual reality. It proposes a dynamic teleoperation Virtual Reality User Interface (VRUI) framework to improve the current approach to teleoperating heterogeneous robot teams.