30 research outputs found

    Monitoring companion for industrial robotic processes

    For system integrators, optimizing complex industrial robotic applications (e.g. robotised welding) is a difficult and time-consuming task. This procedure becomes tedious, and often very hard to complete, when the operator cannot access the robotic system once it is in operation, perhaps because the installation is far away or because of the operational environment. In these circumstances, as an alternative to physically visiting the installation site, the system integrator may rely on additional nearby sensors to remotely acquire the necessary process information. While it is hard to completely replace this trial-and-error approach, it is possible to provide a more effective way to gather process information that can be used across several robotic installations. This thesis investigates the use of a "monitoring robot" in addition to the task robot(s) that belong to the industrial process to be optimized. The monitoring robot can be equipped with several different sensors and can be moved into close proximity of any installed task robot, so that it can collect information from that process during and/or after operation without interfering. The thesis reviews related work in industry and in the field of teleoperation to identify the most important challenges in remote monitoring and teleoperation. From the background investigation it is clear that two very important issues are: i) the nature of the teleoperator's interface; and ii) the efficiency of the shared control between the human operator and the monitoring system. To investigate these two issues efficiently, it was necessary to create experimental scenarios that operate independently of any application scenario, so an abstract problem domain was created. This way the monitoring system's control and interface can be evaluated in a context that presents challenges typical of a remote monitoring task but not specific to any application domain.
Therefore the validity of the proposed approach can be assessed from a generic and, therefore, more powerful and widely applicable perspective. The monitoring framework developed in this thesis is described, covering both the shared-control design choices based on virtual fixtures (VF) and the implementation in a 3D visualization environment. The monitoring system is evaluated in a usability study with user participants. The study assesses the system's performance, along with its acceptance and ease of use, in a static monitoring task, accompanied by user-filled TLX questionnaires. Since future work will apply this system to real robotic welding scenarios, the thesis finally reports some preliminary work in such an application
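    A core ingredient of the shared control described above is the virtual fixture: a software constraint that guides or restricts the operator's commanded motion. As a minimal illustrative sketch (not the thesis's implementation; the function name, the "soft fixture" formulation, and the compliance gain are assumptions), a guidance fixture can attenuate command components that fight the preferred direction:

```python
import numpy as np

def apply_virtual_fixture(v_cmd, fixture_dir, compliance=0.2):
    """Soft guidance virtual fixture: pass the component of the operator's
    velocity command along the preferred direction unchanged, and attenuate
    the orthogonal component by the compliance gain (0 = hard fixture,
    1 = no fixture)."""
    d = np.asarray(fixture_dir, dtype=float)
    d = d / np.linalg.norm(d)              # unit guidance direction
    v = np.asarray(v_cmd, dtype=float)
    v_along = np.dot(v, d) * d             # component along the fixture
    v_ortho = v - v_along                  # component fighting the fixture
    return v_along + compliance * v_ortho

# A hard fixture (compliance = 0) confines motion to the guidance direction:
v_hard = apply_virtual_fixture([1.0, 1.0, 0.0], [1.0, 0.0, 0.0], compliance=0.0)
```

    With compliance = 0 the fixture is hard (motion is confined to the guidance direction); intermediate values let the operator deviate from the guidance at reduced gain.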

    Advanced Knowledge Application in Practice

    The integration and interdependency of the world economy is leading to the creation of a global market that offers more opportunities, but is also more complex and competitive than ever before. Widespread research activity is therefore necessary to remain successful in this market. This book is the result of research and development activities from a number of researchers worldwide, covering concrete fields of research

    Etäläsnäolorobotin suun ja pään liikkeet videoanalyysin perusteella (The mouth and head movements of a telepresence robot based on video analysis)

    The mouth and head movements of a telepresence robot based on video analysis. As the use of telepresence and remote meetings grows, so does the demand for more capable telepresence technologies. One way to make telepresence feel more present is to use a telepresence robot that mirrors the movements of a human located elsewhere. This bachelor's thesis was carried out as part of a larger group studying and developing new features to enhance the telepresence capabilities of a robot head, using the Robot Operating System (ROS) and the InMoov robot platform. The thesis focuses on describing methods for capturing the user's mouth and head movements while speaking and reproducing them on the telepresence robot, creating as natural an interaction as possible between the robot and a human during a remote meeting. The human-likeness achieved by the implemented telepresence solution was assessed with a quantitative study: a questionnaire was used to determine what participants felt while watching the robot compared with a real human. The questionnaire also aimed to find out whether the robot head triggered feelings of discomfort associated with the uncanny valley phenomenon, i.e. unease caused by the robot's appearance and movements. According to the respondents, the head movement somewhat resembled the filmed human's movement, but in the view of most respondents the mouth movement did not match the human's mouth movements particularly well. In addition, most respondents found the robot head disturbing for communication

    Complementary Situational Awareness for an Intelligent Telerobotic Surgical Assistant System

    Robotic surgical systems have contributed greatly to the advancement of Minimally Invasive Surgery (MIS). More specifically, telesurgical robots have provided enhanced dexterity to surgeons performing MIS procedures. However, current teleoperated robotic systems have only limited situational awareness of the patient anatomy and surgical environment that would typically be available to a surgeon in open surgery. Although the endoscopic view enhances visualization of the anatomy, perceptual understanding of the environment and anatomy is still lacking due to the absence of sensory feedback. In this work, these limitations are addressed by developing a computational framework that provides Complementary Situational Awareness (CSA) in a surgical assistant. The framework aims to improve the human-robot relationship by providing elaborate guidance and sensory feedback capabilities for the surgeon in complex MIS procedures. Unlike traditional teleoperation, this framework enables the user to telemanipulate the situational model in a virtual environment and uses that information to command the slave robot with appropriate admittance gains and environmental constraints. Simultaneously, the situational model is updated based on the interaction of the slave robot with the task-space environment. Developing such a system to provide real-time situational awareness, however, requires that many technical challenges be met. To estimate intraoperative organ information, continuous palpation primitives are required. Intraoperative surface information needs to be estimated in real time while the organ is being palpated/scanned. The model of the task environment needs to be updated in near real time using the estimated organ geometry, so that the force feedback applied to the surgeon's hand corresponds to the actual location of the model.
This work presents a real-time framework that meets these requirements and challenges to provide situational awareness of the task-space environment. Visual feedback is also provided, allowing the surgeon/developer to view near-video-frame-rate updates of the task model. All these functions execute in parallel with synchronized data exchange. The system is highly portable and can be incorporated into any existing telerobotic platform with minimal overhead
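    The admittance gains mentioned above follow the standard admittance-control pattern: sensed (or model-rendered) forces are turned into commanded motion through a virtual mass-damper-spring system. A one-dimensional sketch, with illustrative gains and names not taken from the thesis:

```python
def admittance_step(x, v, f_ext, dt, m=1.0, d=5.0, k=0.0):
    """One explicit-Euler step of the admittance model
        m * a + d * v + k * x = f_ext
    mapping an external (or model-rendered) force into commanded motion."""
    a = (f_ext - d * v - k * x) / m   # acceleration from the virtual dynamics
    v = v + a * dt                    # integrate velocity
    x = x + v * dt                    # integrate position
    return x, v

# Apply a constant 10 N force for 20 s at dt = 0.01 s:
x, v = 0.0, 0.0
for _ in range(2000):
    x, v = admittance_step(x, v, f_ext=10.0, dt=0.01)
# velocity settles near f_ext / d = 2.0
```

    The steady-state velocity under a constant force is f_ext / d, which is how the damping gain trades responsiveness against stability.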

    Human-Robot Collaborations in Industrial Automation

    Technology is changing the manufacturing world. For example, sensors are being used to track inventories from the manufacturing floor up to a retail shelf or a customer’s door. These types of interconnected systems have been called the fourth industrial revolution, also known as Industry 4.0, and are projected to lower manufacturing costs. As industry moves toward these integrated technologies and lower costs, engineers will need to connect these systems via the Internet of Things (IoT). These engineers will also need to design how these connected systems interact with humans. The focus of this Special Issue is the smart sensors used in these human–robot collaborations

    Generative Models for Learning Robot Manipulation Skills from Humans

    A long-standing goal in artificial intelligence is to make robots seamlessly interact with humans in performing everyday manipulation skills. Learning from demonstrations, or imitation learning, provides a promising route to bridge this gap. In contrast to direct trajectory learning from demonstrations, many problems arising in interactive robotic applications require a higher contextual-level understanding of the environment. This requires learning invariant mappings in the demonstrations that can generalize across different environmental situations such as the size, position, and orientation of objects, the viewpoint of the observer, etc. In this thesis, we address this challenge by encapsulating invariant patterns in the demonstrations using probabilistic learning models for acquiring dexterous manipulation skills. We learn the joint probability density function of the demonstrations with a hidden semi-Markov model, and smoothly follow the generated sequence of states with a linear quadratic tracking controller. The model exploits the invariant segments (also termed sub-goals, options, or actions) in the demonstrations and adapts the movement to external environmental situations such as the size, position, and orientation of objects in the environment using a task-parameterized formulation. We incorporate high-dimensional sensory data for skill acquisition by parsimoniously representing the demonstrations using statistical subspace clustering methods and exploiting the coordination patterns in the latent space. To adapt the models on the fly and/or teach new manipulation skills online from streaming data, we formulate a non-parametric, scalable online sequence clustering algorithm with Bayesian non-parametric mixture models, avoiding the model selection problem while ensuring tractability under small-variance asymptotics.
We exploit the developed generative models to perform manipulation skills with remotely operated vehicles over satellite communication in the presence of communication delays and limited bandwidth. A set of task-parameterized generative models is learned from the demonstrations of different manipulation skills provided by the teleoperator. The model captures the intention of the teleoperator on the one hand, and on the other provides assistance in performing remote manipulation tasks under varying environmental situations. The assistance is formulated as time-independent shared control, where the model continuously corrects the remote arm movement based on the current state of the teleoperator, and/or time-dependent autonomous control, where the model synthesizes the movement of the remote arm for autonomous skill execution. Using the proposed methodology with the two-armed Baxter robot as a mock-up for semi-autonomous teleoperation, we are able to learn manipulation skills such as opening a valve, pick-and-place with obstacle avoidance, hot-stabbing (a specialized underwater task akin to a peg-in-hole task), screwdriver target snapping, and tracking a carabiner in as few as 4 to 8 demonstrations. Our study shows that the proposed manipulation assistance formulations improve the performance of the teleoperator by reducing task errors and execution time, while accommodating environmental differences in performing remote manipulation tasks with limited bandwidth and communication delays
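    The two model components named above, an explicit-duration (semi-Markov) state sequence and a tracking controller that follows the sub-goal sequence, can be caricatured in a few lines. This is a toy sketch under strong simplifying assumptions (Poisson durations, a scalar proportional tracker standing in for the linear quadratic tracking controller, no observation model), not the thesis's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hsmm_states(trans, dur_mean, n_steps):
    """Sample a per-timestep state sequence from a semi-Markov chain:
    each state persists for an explicitly sampled duration (Poisson here,
    for simplicity) before transitioning."""
    seq, s = [], 0
    while len(seq) < n_steps:
        d = max(1, int(rng.poisson(dur_mean[s])))   # explicit state duration
        seq.extend([s] * d)
        s = int(rng.choice(len(trans), p=trans[s]))
    return np.array(seq[:n_steps])

def track(targets, seq, gain=0.5, x0=0.0):
    """Follow the per-state target positions with a simple proportional
    tracking law (a stand-in for the LQT controller in the text)."""
    xs, x = [], x0
    for s in seq:
        x += gain * (targets[s] - x)   # step toward the active sub-goal
        xs.append(x)
    return np.array(xs)

trans = np.array([[0.0, 1.0],
                  [1.0, 0.0]])          # alternate between two sub-goals
seq = sample_hsmm_states(trans, dur_mean=[8, 8], n_steps=40)
traj = track(np.array([0.0, 1.0]), seq)
```

    A real HSMM adds emission distributions and learned duration models; the sketch only shows how explicit durations and sub-goal tracking fit together.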

    An optimization-based formalism for shared autonomy in dynamic environments

    Teleoperation is an integral component of various industrial processes, for example concrete spraying, assisted welding, plastering, inspection, and maintenance. Often these systems implement direct control that maps interface signals onto robot motions. Successful completion of tasks typically requires high levels of manual dexterity and cognitive load. In addition, the operator is often located near dangerous machinery. Consequently, safety is of critical importance, and training is expensive and prolonged, in some cases taking several months or even years. An autonomous robot replacement would be an ideal solution, since the human could be removed from danger and training costs significantly reduced. However, this is currently not possible due to the complexity and unpredictability of the environments, and the levels of situational and contextual awareness required to successfully complete these tasks. In this thesis, the limitations of direct control are addressed by developing methods for shared autonomy. A shared autonomous approach combines human input with autonomy to generate optimal robot motions. The approach taken in this thesis is to formulate shared autonomy within an optimization framework that finds optimized states and controls by minimizing a cost function, modeling task objectives, given a set of (changing) physical and operational constraints. Online shared autonomy requires the human to be continuously interacting with the system via an interface (akin to direct control). The key challenges addressed in this thesis are: 1) ensuring computational feasibility (such a method should find solutions fast enough to achieve a sampling frequency bounded below by 40 Hz); 2) being reactive to changes in the environment and operator intention; 3) knowing how to appropriately blend operator input and autonomy; and 4) allowing the operator to supply input in an intuitive manner that is conducive to high task performance.
Various operator interfaces are investigated with regard to the control space, called a mode of teleoperation. Extensive evaluations were carried out to determine which modes are most intuitive and lead to the highest performance in target acquisition tasks (e.g. spraying or welding). Our performance metrics quantified task difficulty based on Fitts' law, as well as a measure of how well the constraints affecting task performance were met. The experimental evaluations indicate that higher performance is achieved when humans submit commands in low-dimensional task spaces as opposed to joint-space manipulations. In addition, our multivariate analysis indicated that those with regular exposure to computer games achieved higher performance. Shared autonomy aims to relieve human operators of the burden of precise motor control, tracking, and localization. An optimization-based representation for shared autonomy in dynamic environments was developed. Real-time tractability is ensured by modulating the human input with information about the changing environment within the same task space, instead of adding it to the optimization cost or constraints. The method was illustrated with two real-world applications: grasping objects in cluttered environments, and spraying tasks that require sprayed linings of greater homogeneity. Maintaining motion patterns, referred to as skills, is often an integral part of teleoperation for various industrial processes (e.g. spraying, welding, plastering). We develop a novel model-based shared autonomous framework that incorporates the notion of skill assistance to help operators sustain these motion patterns while adhering to environment constraints. To achieve computational feasibility, we introduce a novel parameterization for state and control that combines skill and underlying trajectory models, leveraging a special type of curve known as the clothoid.
This new parameterization allows for efficient computation of skill-based short-term-horizon plans, enabling the use of a model predictive control loop. Our hardware realization validates the ability of our method to recognize a change of intended skill, and shows improved quality of output motion, even under dynamically changing obstacles. In addition, extensions of the work to supervisory control are described. An exploratory study presents an approach that improves computational feasibility for complex tasks with minimal interactive effort on the part of the human. Adaptations are theorized that might allow such a method to be applicable and beneficial to high-degree-of-freedom systems. Finally, a system developed in our lab is described that implements sliding autonomy and is shown to complete multi-objective tasks in complex environments with minimal interaction from the human
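    The Fitts'-law difficulty metric used in the evaluation above is commonly computed with the Shannon formulation; the thesis's exact formulation is not given here, so the following is the standard textbook version (function names are illustrative):

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits:
        ID = log2(D / W + 1)
    distance: movement amplitude to the target; width: target tolerance."""
    return math.log2(distance / width + 1.0)

def throughput(distance, width, movement_time):
    """Bits of task difficulty delivered per second of movement."""
    return index_of_difficulty(distance, width) / movement_time

# A target 0.30 m away with a 0.02 m tolerance: log2(16) = 4 bits
ID = index_of_difficulty(0.30, 0.02)
```

    Throughput divides the index of difficulty by the observed movement time, giving a bits-per-second measure that is comparable across target geometries.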

    NASA Technology Plan 1998

    The NASA Strategic Plan describes an ambitious, exciting vision for the Agency across all its Strategic Enterprises that addresses a series of fundamental questions of science and research. This vision is so challenging that it literally depends on the success of an aggressive, cutting-edge advanced technology development program. The objective of this plan is to describe the NASA-wide technology program in a manner that provides not only the content of ongoing and planned activities, but also the rationale and justification for these activities in the context of NASA's future needs. The scope of this plan is Agencywide, and it includes technology investments to support all major space and aeronautics program areas, but particular emphasis is placed on longer term strategic technology efforts that will have broad impact across the spectrum of NASA activities and perhaps beyond. Our goal is to broaden the understanding of NASA technology programs and to encourage greater participation from outside the Agency. By relating technology goals to anticipated mission needs, we hope to stimulate additional innovative approaches to technology challenges and promote more cooperative programs with partners outside NASA who share common goals. We also believe that this will increase the transfer of NASA-sponsored technology into nonaerospace applications, resulting in an even greater return on the investment in NASA

    The Sixth Annual Workshop on Space Operations Applications and Research (SOAR 1992)

    This document contains papers presented at the Space Operations, Applications, and Research Symposium (SOAR) hosted by the U.S. Air Force (USAF) on 4-6 Aug. 1992 and held at the JSC Gilruth Recreation Center. The symposium was cosponsored by the Air Force Materiel Command and by NASA/JSC. Key technical areas covered during the symposium were robotics and telepresence, automation and intelligent systems, human factors, life sciences, and space maintenance and servicing. SOAR differed from most other conferences in that it was concerned with Government-sponsored research and development relevant to aerospace operations. The symposium's proceedings include papers covering various disciplines presented by experts from NASA, the USAF, universities, and industry

    A Biosymtic (Biosymbiotic Robotic) Approach to Human Development and Evolution. The Echo of the Universe.

    In the present work we demonstrate that the current Child-Computer Interaction paradigm is not potentiating human development to its fullest – it is associated with several physical and mental health problems and appears not to be maximizing children’s cognitive performance and cognitive development. In order to potentiate children’s physical and mental health (including cognitive performance and cognitive development) we have developed a new approach to human development and evolution. This approach proposes a particular synergy between the developing human body, computing machines and natural environments. It emphasizes that children should be encouraged to interact with challenging physical environments offering multiple possibilities for sensory stimulation and increasing physical and mental stress to the organism. We created and tested a new set of computing devices in order to operationalize our approach – Biosymtic (Biosymbiotic Robotic) devices: “Albert” and “Cratus”. In two initial studies we were able to observe that the main goal of our approach is being achieved. We observed that, interaction with the Biosymtic device “Albert”, in a natural environment, managed to trigger a different neurophysiological response (increases in sustained attention levels) and tended to optimize episodic memory performance in children, compared to interaction with a sedentary screen-based computing device, in an artificially controlled environment (indoors) - thus a promising solution to promote cognitive performance/development; and that interaction with the Biosymtic device “Cratus”, in a natural environment, instilled vigorous physical activity levels in children - thus a promising solution to promote physical and mental health