164,866 research outputs found

    Effective methods for human-robot-environment interaction by means of haptic robotics

    Full text link
    University of Technology, Sydney, Faculty of Engineering and Information Technology. Industrial robots have been widely used to perform well-defined repetitive tasks in carefully constructed, simple environments such as manufacturing factories. The futuristic vision for industrial robots is to operate in complex, unstructured and unknown (or partially known) environments, to assist human workers in undertaking hazardous tasks such as sandblasting in steel bridge maintenance. Autonomous operation of industrial robots in such environments is ideal, but semi-autonomous or manual operation with human interaction is a practical solution because it combines human intelligence and experience with the power and accuracy of an industrial robot. To achieve human-interactive operation, several challenges need to be addressed: environmental awareness, effective robot-environment interaction and human-robot interaction. This thesis aims to develop methodologies that enable natural and efficient Human-Robot-Environment Interaction (HREI) and apply them in a steel bridge maintenance robotic system. Three research issues are addressed: Robot-Environment Interaction (REI), the haptic device-robot interface, and intuitive human-robot interaction. To enable efficient robot-environment interaction, a potential field-based Virtual Force Field (VF2) approach has been investigated. The VF2 approach includes an Attractive Force (AF) method and a force control algorithm for robot motion control, and a 3D Virtual Force Field (3D-VF2) method for real-time collision avoidance. Results obtained from simulation, experiments in a laboratory setup and a field test have verified and validated these methods. A haptic device-robot interface has been developed to provide intuitive human-robot interaction. Haptic devices are normally small compared to industrial robots, so the workspace of a haptic device is much smaller than that of a large industrial manipulator. A novel workspace mapping method, which includes drifting control, scaling control and edge motion control, has been investigated for mapping a small haptic workspace to the large workspace of a manipulator, with the aim of providing natural kinesthetic feedback to an operator and smooth control of robot operation. A haptic force control approach has also been studied for transferring the virtual contact force (between the robot and the environment) and the inertia of the manipulator to the operator's hand through a force feedback function. Human factors have a significant effect on the performance of haptic-based human-robot interaction. An eXtended Hand Movement (XHM) model for eye-guided hand movement has been investigated in this thesis with the aim of providing natural and comfortable interaction between a human operator and a robot and improving operational performance. The model has been studied for increasing the speed of the manipulator while maintaining control accuracy; it has been applied in a robotic system and verified by various experiments. These theoretical methods and algorithms have been successfully implemented in a steel bridge maintenance robotic system and tested both in the laboratory and at a bridge maintenance site in Sydney.
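    The potential-field idea behind the VF2 approach can be illustrated with a short sketch: an attractive force pulls the end-effector toward its goal while repulsive forces from nearby obstacle points push it away, and the resultant virtual force is mapped to a bounded velocity command. This is only a minimal illustration of the general technique, not the thesis's implementation; the gains, the influence radius and the FIRAS-style repulsion term are assumptions.

```python
import numpy as np

def attractive_force(p, goal, k_att=1.0):
    """Linear attractive force pulling the end-effector toward the goal."""
    return k_att * (goal - p)

def repulsive_force(p, obstacles, k_rep=0.5, rho0=0.3):
    """Sum of repulsive forces from obstacle points within influence radius rho0."""
    f = np.zeros(3)
    for obs in obstacles:
        d = p - obs
        rho = np.linalg.norm(d)
        if 1e-6 < rho < rho0:
            # Classic FIRAS-style repulsion: grows as the obstacle gets closer.
            f += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (d / rho)
    return f

def velocity_command(p, goal, obstacles, gain=0.8, v_max=0.2):
    """Map the resultant virtual force to a bounded Cartesian velocity command."""
    f = attractive_force(p, goal) + repulsive_force(p, obstacles)
    v = gain * f
    speed = np.linalg.norm(v)
    return v if speed <= v_max else v * (v_max / speed)

# Example: end-effector at the origin, goal ahead, one obstacle nearby.
p = np.zeros(3)
goal = np.array([0.5, 0.0, 0.2])
obstacles = [np.array([0.25, 0.05, 0.1])]
print(velocity_command(p, goal, obstacles))
```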

    Development of Virtual Laboratory Through Hand Motion Detector in Order to Improve Psychomotor Skills Student of Vocational High School

    Get PDF
    Abstract: Students interact directly with a simulator or remote equipment in a virtual laboratory, and it is desirable that the experience be similar to a real lab. There are many ways by which a student could attain this experience, through real experimental activities or through human-computer interaction. These computer-based multimedia environments are coupled with hardware and offer students a means to explore, experience, express themselves, and train psychomotor skills. In a Digital Electronics virtual environment, students can posit hypotheses about an engineering concept and conduct as many experiments as they want. In this paper the virtual laboratory design is based on Macromedia Flash (software) and a hand-movement detector (hardware). We have implemented a virtual lab for users, especially vocational high school (SMK) students, making practice more interesting and interactive through interaction with the computer using a hand-movement-detection peripheral that provides flexibility in operation. A combination of real and virtual labs, integrated into the course material, can enrich the learning process, increase students' interest and curiosity, and enhance psychomotor abilities through hands-on practice.
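    As a rough illustration of the software/hardware coupling described above, a host program could poll the hand-movement detector and translate recognised gestures into simulator commands. The actual system is built on Macromedia Flash and a dedicated detector; the gesture names, actions and polling loop below are entirely hypothetical.

```python
import random  # stands in for the real hand-movement detector driver

# Hypothetical mapping from detected gestures to virtual-lab actions.
GESTURE_ACTIONS = {
    "swipe_left": "previous_experiment",
    "swipe_right": "next_experiment",
    "push": "toggle_switch",
    "lift": "pick_component",
}

def read_gesture():
    """Placeholder for reading one gesture event from the peripheral."""
    return random.choice(list(GESTURE_ACTIONS))

def dispatch(action):
    """Placeholder for sending a command to the virtual-lab simulator."""
    print(f"simulator <- {action}")

if __name__ == "__main__":
    for _ in range(5):  # poll a few gesture events
        gesture = read_gesture()
        dispatch(GESTURE_ACTIONS[gesture])
```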

    A Novel Application of the 3D VirCA Environment: Modeling a Standard Ethological Test of Dog-Human Interactions

    Get PDF
    The concepts of the ‘Future Internet’, the ‘Internet of Things’ and the ‘3D Internet’ open a novel way of modeling ethological tests by rebuilding models of human-animal interaction in an augmented environment as an interactive mixture of virtual actors and real human observers. On the one hand, these experiments can serve as a proof of concept, a kind of experimental validation of formal ethological models; on the other hand, they can also serve as examples of the ways a human can communicate with things (i.e., with everyday objects) in a virtual environment (e.g. on the Internet). These kinds of experiments can also support research related to Cognitive Infocommunications, the field that investigates how humans can co-evolve with artificially cognitive systems through infocommunication devices. The goal of the paper is to introduce an example of such an ethological test system: a possible way of embedding a prototype ethological model, described as a fuzzy automaton in MATLAB, into the 3D VirCA collaborative augmented reality environment. Some details of the applied ethological experimental paradigm, developed for studying the dog-owner relationship in a standard laboratory procedure and used here as a demonstrative example of ethological model implementation, are also discussed briefly in this paper.
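    The mechanics of a fuzzy automaton of the kind mentioned above can be sketched briefly: a fuzzy state (a membership degree for each behavioural state) is updated by max-min composition with an input-dependent fuzzy transition relation. The paper's model is implemented in MATLAB and embedded in VirCA; the Python sketch below, with its invented state labels and transition degrees, only illustrates the update rule.

```python
import numpy as np

# Behavioural states of the virtual dog (illustrative labels only).
STATES = ["explore", "seek_owner", "greet", "play"]

# One fuzzy transition relation per input symbol; entry [i, j] is the degree
# to which the automaton moves from state i to state j under that input.
TRANSITIONS = {
    "owner_absent": np.array([
        [0.8, 0.6, 0.0, 0.1],
        [0.3, 0.9, 0.0, 0.0],
        [0.2, 0.7, 0.1, 0.0],
        [0.4, 0.5, 0.0, 0.2],
    ]),
    "owner_present": np.array([
        [0.2, 0.1, 0.7, 0.4],
        [0.0, 0.1, 0.9, 0.3],
        [0.1, 0.0, 0.5, 0.8],
        [0.1, 0.0, 0.3, 0.9],
    ]),
}

def step(state, symbol):
    """Max-min composition of the fuzzy state with the transition relation."""
    rel = TRANSITIONS[symbol]
    new_state = np.array([np.max(np.minimum(state, rel[:, j]))
                          for j in range(len(STATES))])
    return new_state / new_state.max()  # keep memberships normalised

state = np.array([1.0, 0.0, 0.0, 0.0])   # starts fully in "explore"
for sym in ["owner_absent", "owner_present", "owner_present"]:
    state = step(state, sym)
    print(sym, dict(zip(STATES, state.round(2))))
```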

    A visualisation and simulation framework for local and remote HRI experimentation

    Get PDF
    In this text, we present work on the design and development of a ROS-based (Robot Operating System) remote 3D visualisation, control and simulation framework. The purpose of this architecture is to extend the usability of a system devised in previous work by this research team during the CASIR (Coordinated Attention for Social Interaction with Robots) project. The proposed solution was implemented using ROS and designed to meet the needs of two user groups: local and remote users and developers. The framework consists of: (1) a fully functional simulator integrated with the ROS environment, including a faithful representation of a robotic platform, a human model with animation capabilities and enough features for enacting human-robot interaction scenarios, and a virtual experimental setup with features similar to the real laboratory workspace; (2) a fully functional and intuitive user interface for monitoring and development; (3) a remote robotic laboratory that connects remote users to the framework via a web browser. The proposed solution was thoroughly and systematically tested under operational conditions to assess its qualities in terms of features, ease of use and performance. Finally, conclusions concerning the success and potential of this research and development effort are drawn, and the foundations for future work are proposed.
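    To give a flavour of how a remote client might hook into such a ROS-based framework, a minimal monitoring node could subscribe to the simulator's joint states and forward them to an external visualisation. This assumes a standard ROS 1 / rospy setup; the node and topic names are placeholders rather than those of the CASIR framework.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import JointState

def on_joint_state(msg):
    # In the real framework this would update a remote 3D view; here we log.
    positions = dict(zip(msg.name, msg.position))
    rospy.loginfo("joint state: %s", positions)

if __name__ == "__main__":
    rospy.init_node("remote_hri_monitor")
    # '/joint_states' is the conventional topic published by robot drivers
    # and simulators; the actual framework may use a different name.
    rospy.Subscriber("/joint_states", JointState, on_joint_state)
    rospy.spin()
```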

    Development of an anthropomorphic mobile manipulator with human, machine and environment interaction

    Get PDF
    An anthropomorphic mobile manipulator robot (CHARMIE) is being developed by the University of Minho's Automation and Robotics Laboratory (LAR). The robot gathers sensory information and processes it using neural networks, actuating in real time. The robot's two arms allow interaction with objects and machines. Its anthropomorphic structure is advantageous since machines are designed and optimized for human interaction. Sound output allows it to relay information to workers and provide feedback. Combining these features with communication with a database or a remote operator establishes a bridge between the physical environment and the virtual domain. The goal is an increase in information flow and accessibility. This paper presents the current state of the project, its intended features and how it can contribute to the development of Industry 4.0. Focus is given to already completed work, detailing the methodology used for two of the robot's subsystems: the locomotion system and the robot's lower limbs. This project has been supported by the ALGORITMI Research Centre of the University of Minho's School of Engineering.

    Cross-benefits between virtual reality and games

    Get PDF
    On the one hand, video games are dedicated to entertainment. In recent years, the emergence of consumer hardware dedicated to games has driven great progress in realism and gameplay. Graphics rendering and physics engines, digital surround sound and new interaction interfaces are examples of areas that have benefited from these improvements and contribute widely to the gaming experience. On the other hand, virtual reality focuses on the user's presence, that is, their indubitable feeling of belonging to the virtual environment. As this goal is very hard to reach, studies have to focus on the human through several research directions such as immersion (3D vision, sound spatialization, haptic devices) and interaction, which has to be as natural and non-intrusive as possible. Recent research on intersensoriality, metaphorical interactions or brain-computer interfaces gives examples of what can be achieved in immersion and interaction. At this point, we can argue that virtual reality can provide new methods and resources for games. Unfortunately, virtual reality rooms are expensive and difficult to deploy, which is probably the main reason why virtual reality remains a laboratory experiment or is confined to industrial simulators. Here is our double contribution: to combine video games and virtual reality through two different virtual reality game solutions, and to design them with consumer-grade components. This paper first presents a survey of both current video game evolutions and virtual reality research. We also give some examples of cross-benefits between video games and virtual reality. To illustrate this last point, we describe two virtual reality applications created by our research team and dedicated to gaming. Finally, as a prospective discussion, we deal with three points: some recent virtual reality systems supposedly applicable to home gaming, some good points from digital games (DG) that VR developers should incorporate in VR systems, and some lines of enquiry so that the union between VR and DG can at last be consummated.

    THE METAVERSE: A VIRTUAL WORLD IN THE PALM OF YOUR HAND

    Get PDF
    This paper explores the actual and future impact of the Metaverse as a virtual space. It focuses on the technical challenges facing this continually emerging technology. Today, the Metaverse presents a digital environment for building collective architecture and historical heritage in a virtual space. In this digital world, the modeling and design methodology is based on individual archetypes that can be assembled, puzzle-like, into new elements. Currently, traditional methods require change and adaptation in both education and the job market, especially due to the integration of remote work in the last few years. For example, many components are required to build a Virtual Reality (VR) laboratory or a VR museum. Virtual environments present us with novel opportunities to bring together the real world with a virtual extension or duplication. This technology will remove physical boundaries and design constraints and consequently open a gate to a metaphysical world. Imagine a world with limitless space where gravity does not exist and water can float upward. There is no limit for art and architecture, but even this magic has limitations related to computer technology. Therefore, this paper surveys the state-of-the-art computational technologies and ecosystems of the Metaverse. The paper covers the fields of Computer Vision, Human-Computer Interaction, Artificial Intelligence, Robotics, Internet of Things (IoT), Cloud Computing, and future mobile networks. In application, the Metaverse will allow users to have a fantastic experience as part of a worldwide entertainment and socio-economic network.

    Human-Robot Collaboration in Automotive Industry

    Get PDF
    Human-robot collaboration is a new trend in the industrial and service fields. Applying human-robot collaboration techniques in the automotive industry has many advantages for productivity, production quality and worker ergonomics; however, worker safety plays the vital role in this collaboration. Previously, a machine was allowed to operate automatically only if operators were outside its workspace, but today collaborative robots provide the opportunity to establish human-robot cooperation. In this thesis, efforts have been made to present innovative solutions for using human-robot collaboration to develop a manufacturing cell. These solutions not only facilitate the operator working with collaborative robots but also consider worker safety and ergonomics. After proposing different solutions for improving the safety of operations during collaboration with industrial robots, the efficiency of the solutions is tested in both laboratory and virtual environments. In this research, the Analytic Hierarchy Process (AHP) is first used as a decision-making tool to demonstrate the efficiency of the human-robot collaboration system over the manual one. In the second step, a detailed task decomposition is carried out using Hierarchical Task Analysis (HTA) to allocate operational tasks to the human and the robot, reducing the chance of duty interference. The International Organization for Standardization's technical specification on collaborative robot safety, ISO/TS 15066, proposes four methodologies to reduce the risk of injury in the work area: safety-rated monitored stop (SMS), hand guiding (HG), speed and separation monitoring (SSM) and power and force limiting (PFL). The SMS method reduces the risk of operator injury by stopping robot motion whenever the operator is in the collaborative workspace. The HG method reduces the chance of operator injury by giving the operator control over the robot motion at all times in the workstation using an emergency stop or enabling device. The SSM method determines the minimum protective distance between a robot and an operator in the collaborative workspace, below which the robot stops any kind of motion, and the PFL method limits the momentum of the robot so that contact between an operator and the robot does not cause injury. After determining the requirements and specifications of the hybrid assembly cell, a few of the above-mentioned methods for evaluating the safety of the human-robot collaboration procedure were tested in the laboratory environment. Due to the lack of safety cameras (sensors) in the laboratory workstation, the ISO methods that need sensors in the workstation, such as SSM, were modeled in a virtual environment to evaluate different human-robot interaction scenarios and the feasibility of the assembly process. Implementing different scenarios of the ISO methods in the hybrid assembly workstation not only improves the safety of the operator interacting with the collaborative robot but also improves worker ergonomics when performing heavy repetitive tasks.
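    For context, the SSM criterion boils down to keeping the measured human-robot separation above a protective distance that accounts for human approach speed, robot speed, reaction and stopping times, and measurement uncertainty. The sketch below follows a simplified form of the ISO/TS 15066 relation; the parameter values are illustrative and not taken from the thesis or the standard's annexes.

```python
def protective_distance(v_h, v_r, t_r, t_s, s_stop, c=0.1, z_d=0.05, z_r=0.02):
    """Simplified ISO/TS 15066 protective separation distance.

    v_h     human approach speed toward the robot [m/s]
    v_r     robot speed toward the human [m/s]
    t_r     system reaction time [s]
    t_s     robot stopping time [s]
    s_stop  robot stopping distance [m]
    c       intrusion distance (body-part reach) [m]
    z_d,z_r human/robot position measurement uncertainties [m]
    """
    s_h = v_h * (t_r + t_s)   # distance the human covers before the robot halts
    s_r = v_r * t_r           # distance the robot covers during the reaction time
    return s_h + s_r + s_stop + c + z_d + z_r

def ssm_check(separation, **kwargs):
    """Return True if the measured separation satisfies the SSM criterion."""
    return separation >= protective_distance(**kwargs)

# Illustrative numbers: walking human (1.6 m/s), slow robot (0.5 m/s).
print(ssm_check(1.2, v_h=1.6, v_r=0.5, t_r=0.1, t_s=0.3, s_stop=0.15))
```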

    RealTHASC—a cyber-physical XR testbed for AI-supported real-time human autonomous systems collaborations

    Get PDF
    Today’s research on human-robot teaming requires the ability to test artificial intelligence (AI) algorithms for perception and decision-making in complex real-world environments. Field experiments, also referred to as experiments “in the wild,” do not provide the level of detailed ground truth necessary for thorough performance comparisons and validation. Experiments on pre-recorded real-world data sets are also significantly limited in their usefulness because they do not allow researchers to test the effectiveness of active robot perception and control or decision strategies in the loop. Additionally, research on large human-robot teams requires tests and experiments that are too costly even for the industry and may result in considerable time losses when experiments go awry. The novel Real-Time Human Autonomous Systems Collaborations (RealTHASC) facility at Cornell University interfaces real and virtual robots and humans with photorealistic simulated environments by implementing new concepts for the seamless integration of wearable sensors, motion capture, physics-based simulations, robot hardware and virtual reality (VR). The result is an extended reality (XR) testbed by which real robots and humans in the laboratory are able to experience virtual worlds, inclusive of virtual agents, through real-time visual feedback and interaction. VR body tracking by DeepMotion is employed in conjunction with the OptiTrack motion capture system to transfer every human subject and robot in the real physical laboratory space into a synthetic virtual environment, thereby constructing corresponding human/robot avatars that not only mimic the behaviors of the real agents but also experience the virtual world through virtual sensors and transmit the sensor data back to the real human/robot agent, all in real time. New cross-domain synthetic environments are created in RealTHASC using Unreal Engineℱ, bridging the simulation-to-reality gap and allowing for the inclusion of underwater/ground/aerial autonomous vehicles, each equipped with a multi-modal sensor suite. The experimental capabilities offered by RealTHASC are demonstrated through three case studies showcasing mixed real/virtual human/robot interactions in diverse domains, leveraging and complementing the benefits of experimentation in simulation and in the real world
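    The core real-to-virtual loop described above, streaming tracked poses of real humans and robots into the synthetic world and returning virtual sensor data to the real agents, can be sketched as follows. The facility itself relies on OptiTrack, DeepMotion and Unreal Engine; every function, class and field in this sketch is a hypothetical placeholder for those interfaces.

```python
import time
from dataclasses import dataclass

@dataclass
class Pose:
    x: float; y: float; z: float                 # position [m]
    qx: float; qy: float; qz: float; qw: float   # orientation quaternion

def read_tracked_pose(subject_id):
    """Placeholder for a motion-capture query (e.g. one rigid body per subject)."""
    return Pose(0.0, 0.0, 1.7, 0.0, 0.0, 0.0, 1.0)

def update_avatar(subject_id, pose):
    """Placeholder for pushing the pose to the avatar in the virtual world."""
    print(f"avatar[{subject_id}] <- {pose}")

def read_virtual_sensors(subject_id):
    """Placeholder for pulling simulated sensor data back to the real agent."""
    return {"virtual_lidar_min_range": 2.4}

if __name__ == "__main__":
    subjects = ["human_01", "ugv_01"]
    for _ in range(3):                 # a few iterations of the real-time loop
        for sid in subjects:
            update_avatar(sid, read_tracked_pose(sid))
            feedback = read_virtual_sensors(sid)
            print(f"real agent {sid} receives {feedback}")
        time.sleep(0.01)               # ~100 Hz loop in this sketch
```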

    Affordances and Safe Design of Assistance Wearable Virtual Environment of Gesture

    Get PDF
    Safety and reliability are the main issues in designing wearable virtual environments for assisting technical gestures in the aerospace or health application domains. This requires integrating, within the same isomorphic engineering framework, human requirements, system requirements and the rationale of their relation to the natural and artifactual environment. To explore coupling integration and the design of the functional organization of systems supporting technical gestures, ecological psychology first provides us with a heuristic concept: the affordance. On the other hand, the mathematical theory of integrative physiology provides us with scientific concepts: the stabilizing auto-association principle and functional interaction. After demonstrating the epistemological consistency of these concepts, we define an isomorphic framework to describe and model human-systems integration dedicated to human-in-the-loop systems engineering. We present an experimental approach to the safe design of a wearable virtual environment for gesture assistance, based on laboratory and parabolic-flight experiments. Based on the results, we discuss the relevance of our conceptual approach and its applications to future wearable gesture-assistance systems engineering.
    • 

    corecore