
    Controlling a remotely located Robot using Hand Gestures in real time: A DSP implementation

    Telepresence is a present-day necessity: we cannot be physically present everywhere, and it can help save human lives in dangerous places. A robot that can be controlled from a distant location, whether via communication links or networking methods, solves these problems. Control should also be smooth and in real time so that the robot can act effectively on every minor signal. This paper discusses a method to control a robot over a network from a distant location. The robot was controlled by hand gestures captured by a live camera. A TMS320DM642EVM DSP board was used to implement image pre-processing and to speed up the whole system. PCA was used for gesture classification, and robot actuation was carried out according to predefined procedures. In the experiment, the classification result was sent over the network. The method is robust and could be used to control any kind of robot at a distance.
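    The classification stage described above (PCA features followed by a matching rule) can be sketched as below. The toy 16-pixel "frames", the class names, and the nearest-class-mean matching rule are illustrative assumptions; the abstract does not detail the paper's actual matching step.

```python
import numpy as np

def fit_pca(X, k):
    """PCA on flattened gesture images X (n_samples x n_pixels)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]          # mean image, top-k principal components

def project(X, mean, comps):
    """Project rows of X into the k-dimensional PCA subspace."""
    return (X - mean) @ comps.T

def classify(x, mean, comps, class_means):
    """Nearest-class-mean matching in PCA space (one plausible rule)."""
    z = project(x[None, :], mean, comps)[0]
    return min(class_means, key=lambda c: np.linalg.norm(z - class_means[c]))

# Toy stand-in for captured gesture frames: two 16-pixel "gesture" classes.
rng = np.random.default_rng(0)
train = {"open": rng.normal(1.0, 0.1, (10, 16)),
         "fist": rng.normal(-1.0, 0.1, (10, 16))}
X = np.vstack(list(train.values()))
mean, comps = fit_pca(X, k=2)
class_means = {c: project(F, mean, comps).mean(axis=0) for c, F in train.items()}

test_frame = rng.normal(1.0, 0.1, 16)   # resembles the "open" class
command = classify(test_frame, mean, comps, class_means)
```

    The classification label (`command`) is what would then be sent over the network to trigger a predefined actuation procedure.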

    Comparison of depth cameras for three-dimensional reconstruction in medicine

    KinectFusion is a typical three-dimensional reconstruction technique which enables the generation of individual three-dimensional human models from consumer depth cameras for understanding body shapes. The aim of this study was to compare three-dimensional reconstruction results obtained using KinectFusion from data collected with two different types of depth camera (time-of-flight and stereoscopic) and to compare these results with those of a commercial three-dimensional scanning system, in order to determine which type of depth camera gives improved reconstruction. Torso mannequins and machined aluminium cylinders were used as the test objects for this study. Two depth cameras, the Microsoft Kinect V2 and the Intel RealSense D435, were selected as representatives of time-of-flight and stereoscopic cameras, respectively, to capture scan data for the reconstruction of three-dimensional point clouds by KinectFusion techniques. The results showed that both cameras, used with the developed rotating camera rig, provided repeatable body scanning data with minimal operator-induced error. However, the time-of-flight camera generated more accurate three-dimensional point clouds than the stereoscopic sensor. This suggests that applications requiring accurate three-dimensional human models generated by KinectFusion techniques should consider using a time-of-flight camera, such as the Microsoft Kinect V2, as the image capturing sensor.
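    The use of machined aluminium cylinders as test objects suggests a simple accuracy metric: the RMS deviation of reconstructed points from the ideal cylinder surface. A minimal sketch follows; the synthetic point cloud, radius, and noise level are invented for illustration and are not the study's data.

```python
import numpy as np

def radial_rms_error(points, axis_point, axis_dir, radius):
    """RMS deviation of a point cloud from an ideal cylinder surface.

    points: (n, 3) reconstructed cloud; axis_point/axis_dir define the
    cylinder axis; radius is the machined reference radius (metres).
    """
    d = axis_dir / np.linalg.norm(axis_dir)
    v = points - axis_point
    # Distance of each point from the cylinder axis.
    radial = np.linalg.norm(v - np.outer(v @ d, d), axis=1)
    return np.sqrt(np.mean((radial - radius) ** 2))

# Synthetic "scan" of a 50 mm radius cylinder with 0.5 mm surface noise.
rng = np.random.default_rng(1)
n = 2000
theta = rng.uniform(0, 2 * np.pi, n)
z = rng.uniform(0, 0.2, n)
r = 0.050 + rng.normal(0, 0.0005, n)
cloud = np.column_stack([r * np.cos(theta), r * np.sin(theta), z])

err = radial_rms_error(cloud, np.zeros(3), np.array([0.0, 0.0, 1.0]), 0.050)
```

    Running the same metric on clouds from each camera type would quantify the accuracy difference reported above.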

    Innovative mechanism to identify robot alignment in an automation system

    Robotic applications are commonly used in industrial automation systems. Such systems often comprise a series of equipment, including robotic arms, conveyors, a workspace, and fixtures. While each piece of equipment may be calibrated with the highest precision, their alignment relative to each other is an important factor in the accuracy of the overall system. Currently, a variety of complex automated and manual methods are used to align a robotic arm to a workspace; these methods either rely on expensive equipment or are slow and skill-dependent. This paper presents a novel low-cost method for aligning an industrial robot to its workcell in 6 degrees of freedom (DoF). The solution is new, simple, and easy to use, and is intended for SMEs dealing with low-volume, high-complexity automated systems. The proposed method uses three dial indicators mounted on a robot end effector and a fixed measurement cube positioned in the workcell. The robot is pre-programmed to follow a procedure around the cube, and the changes in the dial-indicator readings are used to calculate the misalignment between the robot and the workcell. Despite the simplicity of the design, the solution is supported by real-time mathematical calculations and is proven to identify and eliminate misalignments of up to 3 mm and 5 degrees to an accuracy of 0.003 mm and 0.002 degrees: much higher than the precision required for a conventional industrial robot. In this article, the authors describe the proposed solution and validate the computation theoretically as well as through a laboratory test rig and simulation.
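    The core geometric idea, that a change in a dial-indicator reading over a known sweep distance reveals an angular misalignment, reduces to basic trigonometry. The one-axis sketch below is an illustrative simplification of the paper's full 6-DoF procedure; the readings and travel distance are made up.

```python
import math

def tilt_from_dial_sweep(reading_start_mm, reading_end_mm, travel_mm):
    """Angular misalignment of a cube face relative to a robot axis.

    The robot slides a dial indicator a known distance `travel_mm`
    along one face of the measurement cube; the change in the reading
    reveals the tilt of that face (one axis of the 6-DoF problem).
    """
    return math.degrees(math.atan2(reading_end_mm - reading_start_mm,
                                   travel_mm))

# A 0.35 mm reading change over a 100 mm sweep -> about 0.2 degrees of tilt.
angle = tilt_from_dial_sweep(0.00, 0.35, 100.0)
```

    Repeating such sweeps on three faces with three indicators yields the rotational part of the misalignment; the translational part comes from the absolute readings.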

    Teleoperation of industrial robots based on vibrotactile information assistance


    Creating virtual reality user interface using only ROS framework

    Using robots is unavoidable in the modern world, and virtual reality (VR) offers a way to control them more intuitively. The ROS framework is widely used in robotics and provides many packages, programs, and tools that help build VR, but there is currently no ROS package that takes these parts and puts them together into a user interface based purely on ROS. The purpose of this thesis is to research what the ROS framework offers for creating virtual reality, to build a ROS package containing a virtual reality user interface for controlling a robot, and to test it. The user interface is created using visualization markers presented in the ROS visualization program RViz, reusing existing ROS tools as far as possible. Via an RViz plugin, an OSVR headset serves as the head-mounted display (HMD). The user interacts with the robot through a Leap Motion controller (LM controller), which keeps the user's hands free.
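    One building block of such an interface is mapping Leap Motion palm coordinates into ROS conventions and filling an RViz visualization marker. The sketch below stands in a plain dict for the real visualization_msgs/Marker message so it runs without ROS; the axis remapping assumes a desk-mounted controller, and the frame name and marker styling are invented.

```python
def leap_to_ros(palm_mm):
    """Convert a Leap Motion palm position to ROS conventions.

    Leap Motion reports millimetres with y up and z toward the user;
    ROS uses metres with z up and x forward (REP 103). The exact
    mounting transform depends on the rig, so this particular axis
    remap is an assumption for a desk-mounted controller.
    """
    x_mm, y_mm, z_mm = palm_mm
    return (-z_mm / 1000.0, -x_mm / 1000.0, y_mm / 1000.0)

def hand_marker(position, marker_id=0):
    """Fields of a visualization_msgs/Marker as a plain dict
    (stand-in for the real message type when ROS is unavailable)."""
    x, y, z = position
    return {
        "header": {"frame_id": "world"},
        "id": marker_id,
        "type": 2,  # SPHERE in visualization_msgs/Marker
        "pose": {"position": {"x": x, "y": y, "z": z},
                 "orientation": {"x": 0, "y": 0, "z": 0, "w": 1}},
        "scale": {"x": 0.05, "y": 0.05, "z": 0.05},
        "color": {"r": 0.2, "g": 0.8, "b": 0.2, "a": 1.0},
    }

# Palm at (100, 200, -50) mm in Leap coordinates.
marker = hand_marker(leap_to_ros((100.0, 200.0, -50.0)))
```

    In the real package, the dict fields would populate an actual Marker message published on a topic that RViz subscribes to.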

    Analysis of a user interface based on multimodal interaction to control a robotic arm for EOD applications

    A comprehensive human-robot interface that satisfies the needs of Explosive Ordnance Disposal technicians (TEDAX) for manipulating a robotic arm is of utmost importance for making explosive-handling tasks safer and more intuitive, while also providing high usability and efficiency. The objective of this article is to evaluate the performance of a multimodal system for a robotic arm based on a natural user interface (NUI) and a graphical user interface (GUI). These interfaces are compared to determine the best configuration for controlling the robotic arm in explosive ordnance disposal (EOD) applications and to improve the user experience of TEDAX agents. Tests were conducted with the support of police officers from the Explosive Ordnance Disposal Unit of Arequipa (UDEX-AQP), who evaluated the developed interfaces to find the most intuitive system imposing the least stress on the operator; the proposed multimodal interface showed better results than the traditional interfaces. The evaluation of the laboratory sessions was based on measuring the workload and usability of each interface.
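    Usability in such studies is commonly measured with the System Usability Scale (SUS); the abstract does not name the instrument, so treating SUS as the usability measure here is an assumption. The standard SUS scoring rule can be sketched as:

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.

    Per the standard SUS scoring rule, odd-numbered items (indices
    0, 2, ...) are positively worded and score (response - 1), while
    even-numbered items are negatively worded and score (5 - response).
    The 0-40 raw total is scaled to 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS needs exactly 10 responses")
    total = 0
    for i, r in enumerate(responses):
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# Example: a fairly positive questionnaire from one hypothetical operator.
score = sus_score([4, 2, 4, 2, 5, 1, 4, 2, 5, 2])
```

    Comparing mean SUS scores (and a workload instrument such as NASA-TLX) across the NUI, GUI, and multimodal configurations is one standard way to support the conclusion reported above.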

    Review of three-dimensional human-computer interaction with focus on the Leap Motion controller

    Modern hardware and software development has led to an evolution of user interfaces from the command line to natural user interfaces for virtual immersive environments. Gestures imitating real-world interaction tasks increasingly replace classical two-dimensional interfaces based on Windows/Icons/Menus/Pointers (WIMP) or touch metaphors. The purpose of this paper is therefore to survey state-of-the-art Human-Computer Interaction (HCI) techniques with a focus on the special field of three-dimensional interaction. This includes an overview of currently available interaction devices, their areas of application, and the underlying methods for gesture design and recognition. The focus is on interfaces based on the Leap Motion Controller (LMC) and corresponding methods of gesture design and recognition. Further, a review of evaluation methods for the proposed natural user interfaces is given.

    The development of a human-robot interface for industrial collaborative system

    Industrial robots have been identified as one of the most effective solutions for optimising output and quality within many industries. However, a number of manufacturing applications involve complex tasks and inconsistent components, which prohibits fully automated solutions in the foreseeable future. Breakthroughs in robotic technologies and changes in safety legislation have supported the creation of robots that coexist with and assist humans in industrial applications. It has been broadly recognised that human-robot collaborative systems are a realistic solution for advanced production systems with a wide range of applications and high economic impact. This type of system can utilise the best of both worlds: the robot performs simple tasks that require high repeatability, while the human performs tasks that require judgement and the dexterity of the human hands. Robots in such systems operate as "intelligent assistants". In a collaborative working environment, robot and human share the same working area and interact with each other. This level of interaction requires effective ways of communicating and collaborating to avoid unwanted conflicts. This project aims to create a user interface for an industrial collaborative robot system through the integration of current robotic technologies. The robotic system is designed for seamless collaboration with a human in close proximity. The system can communicate with the human via the exchange of gestures, as well as visual signals which operators can observe and comprehend at a glance. The main objective of this PhD is to develop a Human-Robot Interface (HRI) for communication with an industrial collaborative robot during collaboration in proximity. The system is developed in conjunction with a small-scale collaborative robot system integrated from off-the-shelf components.
    The system should be capable of receiving input from the human user via an intuitive method, as well as indicating its status to the user effectively. The HRI was developed using a combination of hardware integration and software development. The software and the control framework were developed in a way that is applicable to other industrial robots in the future. The developed gesture command system is demonstrated on a heavy-duty industrial robot.

    Nonlinear control of a seven-degrees-of-freedom exoskeleton robot to realize active and passive rehabilitation tasks

    This doctoral thesis presents the development of an exoskeleton robot, the ETS-MARSE robot, used to rehabilitate patients with upper-limb impairment. The developments included in this work are the design and validation of an inverse kinematics solution and a nonlinear control strategy for an upper-limb exoskeleton robot. These approaches are applied to passive and active rehabilitation motion in the presence of dynamic and kinematic uncertainties and unexpected disturbances. Considering the growing population of post-stroke victims, there is a need to improve accessibility to physiotherapy by using modern robotic rehabilitation technology. Rehabilitation robotics has recently attracted considerable attention from the scientific community, since it can overcome the limitations of conventional physical therapy. The importance of the rehabilitation robot lies in its ability to provide intensive physiotherapy over a long period of time, and its measured data allow the physiotherapist to accurately evaluate the patient's performance. However, these devices are still part of an emerging area and present many challenges compared to conventional robotic manipulators, such as high nonlinearity, high dimensionality (a large number of DOFs), and unknown dynamics (uncertainties). These limitations arise from their complex mechanical structure designed for human use, the types of assistive motion, and the sensitivity of the interaction with a large diversity of human wearers. As a result, the robot system is vulnerable to dynamic uncertainties and external disturbances such as saturation, friction forces, backlash, and payload. Likewise, the interaction between the human and the exoskeleton subjects the system to external disturbances arising from the different physiological conditions of the subjects, such as the different weight of each subject's upper limb.
    During a rehabilitation movement, the nonlinear uncertain dynamic model and external forces can turn into unknown functions that affect the performance of the exoskeleton robot. The main challenges addressed in this thesis are, firstly, to design a human inverse kinematics solution that performs a smooth movement similar to natural human movement (human-like motion); and secondly, to develop controllers characterized by a high level of robustness and accuracy without sensitivity to uncertain nonlinear dynamics and unexpected disturbances. This gives the control system more flexibility to handle uncertainties and parameter variations in the different modes of rehabilitation motion (passive and active).
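    The kind of robustness the thesis targets can be illustrated with a single-joint sliding-mode controller, a standard nonlinear technique that tolerates model error and bounded disturbances. This one-DOF sketch with invented gains, masses, and disturbance is not the thesis's actual multi-DOF control law.

```python
import numpy as np

def sliding_mode_torque(q, dq, q_des, dq_des, ddq_des,
                        m_hat, g_hat, lam=5.0, K=10.0, phi=0.05):
    """Sliding-mode torque for a single joint with uncertain dynamics.

    Assumed model: m*ddq + g(q) = tau, with only the estimates m_hat
    and g_hat known. s = de + lam*e is the sliding variable; the
    saturated switching term rejects bounded model error (boundary
    layer of width phi avoids chattering).
    """
    e, de = q - q_des, dq - dq_des
    s = de + lam * e
    # Equivalent control from the nominal model + robust switching term.
    return m_hat * (ddq_des - lam * de) + g_hat - K * np.clip(s / phi, -1, 1)

# Simulate tracking a step target with 20% mass error and an
# unmodelled gravity-like disturbance (semi-implicit Euler, 5 s).
m_true, m_hat = 1.2, 1.0
q, dq, dt = 0.0, 0.0, 0.001
for _ in range(5000):
    tau = sliding_mode_torque(q, dq, q_des=1.0, dq_des=0.0, ddq_des=0.0,
                              m_hat=m_hat, g_hat=0.0)
    ddq = (tau + 0.5 * np.sin(q)) / m_true  # 0.5*sin(q): unmodelled term
    dq += ddq * dt
    q += dq * dt
```

    Despite the deliberate mass error and disturbance, the joint settles near the 1.0 rad target, which is the qualitative property (insensitivity to uncertain dynamics) the thesis's controllers aim for.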