8 research outputs found

    Development of Sensory-Motor Fusion-Based Manipulation and Grasping Control for a Robotic Hand-Eye System


    Real-time visual and EMG signals recognition to control dexterous prosthetic hand based on deep learning and machine learning

    The revolution in prosthetic hands has enabled a new generation of prostheses that use artificial intelligence to control a dexterous hand. Producing suitable gripping and grasping actions for objects of different shapes remains a challenging task in prosthetic hand design, and most artificial hands are based on electromyography (EMG) signals alone. This work proposes a novel approach that uses deep learning classification to sort items into seven gripping patterns based on EMG and image recognition. The approach comprises two scenarios. In the first, EMG signals are recorded from five healthy participants performing six basic hand movements (cylindrical, tip, spherical, lateral, palmar, and hook). Features are then extracted from the EMG signals using standard deviation, mean absolute value, and principal component analysis, and an SVM classifies the movements with an accuracy of 89%. In the second scenario, 723 RGB images of 24 items are collected and sorted into seven classes: cylindrical, tip, spherical, lateral, palmar, hook, and full hand. A 144-layer GoogLeNet network, comprising convolutional layers, ReLU activation layers, max-pooling layers, drop-out layers, and a softmax layer, is trained on these images and achieves a training accuracy of 99%. Finally, the system is tested, and the experiments show that the proposed vision-based myoelectric control method (Vision-EMG) achieves a recognition accuracy of 95%.
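    As a rough illustration of the EMG branch described above (time-domain features feeding an SVM), here is a minimal Python sketch; the window shape, channel count, synthetic data, and pipeline parameters are illustrative assumptions, not the paper's setup.

```python
# Sketch of the EMG branch: time-domain features + PCA + SVM.
# `windows` stands in for segmented EMG recordings, shape
# (n_samples, n_channels, n_timesteps); all sizes here are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def emg_features(windows):
    """Per-channel standard deviation and mean absolute value."""
    std = windows.std(axis=2)              # (n_samples, n_channels)
    mav = np.abs(windows).mean(axis=2)     # (n_samples, n_channels)
    return np.hstack([std, mav])

# Synthetic stand-in data: 300 windows, 4 EMG channels, 200 samples each,
# labeled with one of the six basic grasp classes.
rng = np.random.default_rng(0)
windows = rng.normal(size=(300, 4, 200))
labels = rng.integers(0, 6, size=300)

X = emg_features(windows)
clf = make_pipeline(StandardScaler(), PCA(n_components=6), SVC(kernel="rbf"))
clf.fit(X[:240], labels[:240])
print("held-out accuracy:", clf.score(X[240:], labels[240:]))
```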

    Computer Vision-based Robotic Arm for Object Color, Shape, and Size Detection

    Various aspects of the human workplace have been influenced by robotics due to its precision and accessibility. Industrial activities have become increasingly automated, improving efficiency while reducing production time, human labor, and risk. Electronic technology has advanced with time, and the ultimate goal of such advances is to make robotic systems as human-like as possible; as a result, robots can perform jobs far more efficiently than humans in challenging situations. In this paper, an automatic computer vision-based robotic gripper is built that can select and arrange objects to complete various tasks. The study utilizes the image processing of the PixyCMU camera sensor to distinguish objects by their distinct colors (red, yellow, and green). A preprogrammed command then drives the robotic arm, built with an Arduino Mega and four MG996R servo motors, to pick up the item, and the device releases the object at a fixed position behind the arm according to its color. The proposed system can also detect objects' geometric shapes (circle, triangle, square, rectangle, pentagon, and star) and sizes (large, medium, and small) using OpenCV image processing libraries in Python. Empirical results demonstrate that the designed robotic arm detects colored objects with 80% accuracy and performs shape and size recognition in real time with 100% accuracy.
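    The shape-and-size step can be sketched as OpenCV contour analysis: approximate each contour, classify it by vertex count, and bucket it by area. The image path, thresholds, and area cut-offs below are placeholder assumptions rather than the paper's parameters.

```python
# Contour-based shape and size detection with OpenCV (illustrative only).
import cv2

SHAPES = {3: "triangle", 5: "pentagon", 10: "star"}

img = cv2.imread("objects.png")                      # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    area = cv2.contourArea(cnt)
    if area < 100:                                   # skip noise blobs
        continue
    # Polygon approximation: the vertex count identifies the shape.
    approx = cv2.approxPolyDP(cnt, 0.03 * cv2.arcLength(cnt, True), True)
    n = len(approx)
    shape = SHAPES.get(n, "circle" if n >= 8 else "unknown")
    if n == 4:                                       # square vs. rectangle
        x, y, w, h = cv2.boundingRect(approx)
        shape = "square" if 0.95 <= w / h <= 1.05 else "rectangle"
    size = "large" if area > 5000 else "medium" if area > 1500 else "small"
    print(shape, size, area)
```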

    Haptic identification by ELM-controlled uncertain manipulator

    This paper presents an extreme learning machine (ELM) based control scheme for uncertain robot manipulators to perform haptic identification. The ELM is used to compensate for the unknown nonlinearity in the manipulator dynamics. The ELM-enhanced controller ensures that the closed-loop controlled manipulator follows a specified reference model, in which the reference point as well as the feedforward force is adjusted after each trial for haptic identification of the geometry and stiffness of an unknown object. A neural learning law is designed to ensure finite-time convergence of the neural weight learning, such that exact matching with the reference model is achieved after the initial iteration. The usefulness of the proposed method is tested and demonstrated by extensive simulation studies. Index Terms: extreme learning machine; haptic identification; adaptive control; robot manipulator.
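    For readers unfamiliar with ELM, the core idea, a fixed random hidden layer whose output weights are solved in closed form, can be sketched as below; the hidden size and toy data are assumptions, and the paper's adaptive-control law is not reproduced.

```python
# Minimal extreme learning machine (ELM) regressor: the hidden layer is
# random and fixed; only the output weights are fit, by least squares.
import numpy as np

rng = np.random.default_rng(1)

def elm_fit(X, y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))    # fixed random input weights
    b = rng.normal(size=n_hidden)                  # fixed random biases
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy target standing in for the unknown nonlinearity that the ELM
# compensates in the manipulator dynamics.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
W, b, beta = elm_fit(X, y)
print("train MSE:", np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```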

    Analysis of ANN and Fuzzy Logic Dynamic Modelling to Control the Wrist Exoskeleton

    Human intention has long been a primary emphasis in the field of electromyography (EMG) research: given the user's intent, the movement of an exoskeleton hand can be accurately predicted. EMG is a nonlinear signal formed by muscle contractions as the human hand moves, and it easily picks up noise from its surroundings. This study therefore aims to estimate the desired wrist velocity from EMG signals using ANN and fuzzy logic (FL) mapping methods. The mapping output was derived from the EMG signals and wrist position, which were directly proportional to the desired wrist velocity. Ten male subjects, aged 21 to 40, supplied the EMG data set used for estimating the output in single- and double-muscle experiments. To validate the performance, a physical model of an exoskeleton hand was created using the SimMechanics toolbox. The ANN used the Levenberg training method with one hidden layer of 10 neurons, while the FL used triangular membership functions to represent muscle contraction amplitudes at different MVC levels for each wrist position. A PID controller was added to compensate for fluctuations in the mapping outputs, giving a smoother signal and improving the estimation of the desired wrist velocity. In conclusion, the ANN handles complex nonlinear inputs well but works best with large data sets, whereas FL lets designers encode rules based on their knowledge but struggles as the number of inputs grows. On the results achieved, FL showed a more distinct separation of desired wrist velocities than the ANN on the same testing data sets, owing to its rule-based decision making defined by the designer.
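    As a small sketch of the fuzzy-logic side of such a mapping, the snippet below fuzzifies a normalized EMG amplitude (a fraction of MVC) with triangular membership functions and combines a simple rule base into a desired wrist velocity; the breakpoints, rules, and output levels are invented for illustration and are not the paper's tuning.

```python
# Fuzzy mapping from normalized EMG amplitude (0..1, fraction of MVC)
# to a desired wrist velocity. All parameters are illustrative.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def desired_velocity(emg):
    # Fuzzify: low / medium / high muscle contraction.
    w = np.array([tri(emg, -0.01, 0.0, 0.4),
                  tri(emg, 0.1, 0.5, 0.9),
                  tri(emg, 0.6, 1.0, 1.01)])
    levels = np.array([5.0, 30.0, 60.0])     # rule outputs in deg/s
    return float(w @ levels / w.sum())       # weighted-average defuzzification

for e in (0.1, 0.5, 0.9):
    print(f"{e:.1f} MVC -> {desired_velocity(e):.1f} deg/s")
```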

    Multimodal human hand motion sensing and analysis - a review


    A Review on Human-Computer Interaction and Intelligent Robots

    In the field of artificial intelligence, human-computer interaction (HCI) technology and its related intelligent robot technologies are essential and interesting areas of research. From the perspectives of software algorithms and hardware systems, these technologies attempt to build a natural HCI environment. The purpose of this research is to provide an overview of HCI and intelligent robots. It highlights the existing technologies for listening, speaking, reading, writing, and other senses that are widely used in human interaction, and, based on these same technologies, it introduces some intelligent robot systems and platforms. The paper also outlines some vital challenges in HCI and intelligent robot research. The authors hope this work will help researchers in the field acquire the information and technologies needed to conduct more advanced research.

    Stereoscopic computer vision system applied to a pneumatically actuated cylindrical robot

    In industry, robots are an important part of the technological resources available for manipulation tasks in manufacturing, assembly, the handling of dangerous waste, and a variety of other applications. Computer vision systems have entered the market as solutions to problems that other sensor types and methods have been unable to address. This work analyzes a stereoscopic vision system applied to a robot. The arrangement measures the coordinates of an object's center in three dimensions, giving the robot the ability to work in space rather than only in a plane. A stereoscopic system uses two or more cameras aligned along one of their axes, from which the depth of objects can be computed. Here, the position of an object is measured by combining 2D recognition, in which the x and y coordinates of the object's center are calculated using image moments, with the disparity found between the images of the wireless left and right cameras. This turns the system into a 3D viewer of the scene, emulating human eyes, which can distinguish depth with good precision. The proposed computer vision system is integrated into a pneumatic robot with 5 degrees of freedom that can be programmed with the GRAFCET methodology using commercial software. The cameras are mounted on the lateral plane of the robot so that pieces within its working volume can be observed. For the implementation, a recognition and position-measurement algorithm is developed using open-source software in C++, so that the system remains as open as possible when integrated with the robot. The work is validated by taking samples of the objects to be manipulated and generating robot trajectories to check whether each piece can be grasped by the pneumatic gripper. The results show that it is possible to manipulate pieces in a visually cluttered environment with acceptable precision. However, the precision achieved does not allow the system to be used in applications that demand accuracy at the level of small-part assembly or welding processes.
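    The depth-from-disparity computation described above can be sketched with OpenCV's block matcher on a rectified left/right pair; the image paths, focal length, baseline, and the crude object mask used for the moment computation are placeholder assumptions, not the system's calibration.

```python
# Stereo depth sketch: disparity between left and right images gives
# depth via Z = f * B / d. Calibration values below are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder paths
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> px

FOCAL_PX = 700.0    # focal length in pixels (assumed)
BASELINE_M = 0.06   # camera separation in meters (assumed)

valid = disp > 0
depth = np.zeros_like(disp)
depth[valid] = FOCAL_PX * BASELINE_M / disp[valid]

# 2D center of the object from image moments, as in the abstract;
# thresholding the whole image is a crude stand-in for segmentation.
m = cv2.moments((left > 127).astype(np.uint8), binaryImage=True)
cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
print("object center:", (cx, cy), "depth [m]:", depth[cy, cx])
```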