
    An integrated dexterous robotic testbed for space applications

    An integrated dexterous robotic system was developed as a testbed to evaluate various robotics technologies for advanced space applications. The system configuration consisted of a Utah/MIT Dexterous Hand, a PUMA 562 arm, a stereo vision system, and a multiprocessing computer control system. In addition to these major subsystems, a proximity sensing system was integrated with the Utah/MIT Hand to provide capability for non-contact sensing of a nearby object. A high-speed fiber-optic link was used to transmit digitized proximity sensor signals back to the multiprocessing control system. The hardware system was designed to satisfy the requirements for both teleoperated and autonomous operations. The software system was designed to exploit parallel processing capability, pursue functional modularity, incorporate artificial intelligence for robot control, allow high-level symbolic robot commands, maximize reusable code, minimize compilation requirements, and provide an interactive application development and debugging environment for end users. An overview of the system hardware and software configurations is presented, and the implementation of subsystem functions is discussed.
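
    A minimal Python sketch of the pattern the abstract describes, not the original testbed code: high-level symbolic robot commands dispatched to modular subsystem handlers while a background thread polls a proximity-sensor stream. The command set, handler names, and polling rate are assumptions for illustration only.

```python
import queue
import threading
import time

proximity_readings = queue.Queue()

def poll_proximity_sensors(stop_event):
    """Stand-in for reading digitized proximity samples arriving over the sensor link."""
    while not stop_event.is_set():
        proximity_readings.put(0.0)  # placeholder sample value
        time.sleep(0.01)             # ~100 Hz polling, an assumed rate

def grasp(target):
    print(f"closing hand around {target}")

def release(target):
    print(f"opening hand to release {target}")

SYMBOLIC_COMMANDS = {"GRASP": grasp, "RELEASE": release}  # assumed command vocabulary

def execute(command, target):
    """Dispatch a high-level symbolic command to the matching subsystem handler."""
    SYMBOLIC_COMMANDS[command](target)

if __name__ == "__main__":
    stop = threading.Event()
    threading.Thread(target=poll_proximity_sensors, args=(stop,), daemon=True).start()
    execute("GRASP", "sample object")
    execute("RELEASE", "sample object")
    stop.set()
```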

    An original framework for understanding human actions and body language by using deep neural networks

    The evolution of both the Computer Vision (CV) and Artificial Neural Networks (ANNs) fields has allowed the development of efficient automatic systems for the analysis of people's behaviour. By studying hand movements it is possible to recognize gestures, often used by people to communicate information in a non-verbal way. These gestures can also be used to control or interact with devices without physically touching them. In particular, sign language and semaphoric hand gestures are the two foremost areas of interest due to their importance in Human-Human Communication (HHC) and Human-Computer Interaction (HCI), respectively, while the processing of body movements plays a key role in the action recognition and affective computing fields. The former is essential to understand how people act in an environment, while the latter tries to interpret people's emotions based on their poses and movements; both are essential tasks in many computer vision applications, including event recognition and video surveillance. In this Ph.D. thesis, an original framework for understanding actions and body language is presented. The framework is composed of three main modules: in the first, a method based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) for the recognition of sign language and semaphoric hand gestures is proposed; the second module presents a solution based on 2D skeletons and two-branch stacked LSTM-RNNs for action recognition in video sequences; finally, in the last module, a solution for basic non-acted emotion recognition using 3D skeletons and Deep Neural Networks (DNNs) is provided. The performance of LSTM-RNNs is explored in depth, due to their ability to model the long-term contextual information of temporal sequences, which makes them suitable for analysing body movements. All the modules were tested on challenging datasets, well known in the state of the art, showing remarkable results compared to current literature methods.
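
    A minimal PyTorch sketch of a two-branch stacked LSTM classifier over 2D skeleton sequences, in the spirit of the second module described above. The layer sizes, the choice of joint positions and frame-to-frame motion as the two branches, and the class count are assumptions, not the thesis' exact architecture.

```python
import torch
import torch.nn as nn

class TwoBranchLSTM(nn.Module):
    def __init__(self, num_joints=18, hidden=128, num_classes=10):
        super().__init__()
        feat = num_joints * 2  # (x, y) coordinates per joint
        self.pose_branch = nn.LSTM(feat, hidden, num_layers=2, batch_first=True)
        self.motion_branch = nn.LSTM(feat, hidden, num_layers=2, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, skeletons):          # skeletons: (batch, frames, num_joints * 2)
        motion = skeletons[:, 1:] - skeletons[:, :-1]   # per-frame joint displacement
        _, (h_pose, _) = self.pose_branch(skeletons)
        _, (h_motion, _) = self.motion_branch(motion)
        fused = torch.cat([h_pose[-1], h_motion[-1]], dim=1)  # last-layer hidden states
        return self.classifier(fused)

model = TwoBranchLSTM()
dummy = torch.randn(4, 30, 36)             # 4 clips, 30 frames, 18 joints x 2 coords
logits = model(dummy)                       # (4, 10) class scores
```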

    Real-Time Collaborative Robot Control Using Hand Gestures Recognised By Deep Learning

    In an ever-changing and demanding world, new technologies that allow more efficient and easier industrial processes are needed. Until now, traditional vision technologies and algorithms have been used in the industrial area. Even though these techniques achieve good results in simple vision tasks, they are quite limited, since any change in the processed image affects their performance. For example, in code-reading tasks, if the code has a mark or is not completely visible, the piece carrying the code is discarded, which leads to losses for the company. These kinds of problems can be solved by applying machine learning techniques to vision tasks. Such techniques learn from example images and, even though perfect performance is difficult to achieve, they are much more flexible than traditional techniques. Although the term machine learning was coined as early as 1959, these techniques have barely been implemented in the industrial area until now; they have mostly been used for research purposes. Apart from the new vision techniques, new types of robots, such as collaborative or social robots, are being introduced in industrial environments. On the one hand, collaborative robots allow workers to work next to or with the robot without any type of physical barrier between them. On the other hand, social robots allow easier communication between the robot and the user, which can be applied in different parts of industry, such as introducing the company to new visitors. The present project covers the analysis, training and implementation of artificial-neural-network-based vision software, Cognex ViDi. Using this software, three different vision tasks were trained. The most important one is the hand gesture recognition task, since the recognized hand gesture controls the action performed by the YuMi robot, which is programmed in the RAPID language. The development of the different artificial neural networks for industrial purposes is intended to show the applicability of machine learning techniques in an industrial environment. Apart from that, the hand gesture recognition shows an easy way to control the movements of a robot that could be used by a person with no knowledge of robots or programming. Finally, the use of a two-arm collaborative robot shows the potential and versatility of collaborative robots for industrial purposes.
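
    Hedged illustration only: the project itself uses Cognex ViDi for recognition and RAPID on the YuMi controller, which are not reproduced here. This Python sketch just shows the general pattern of mapping a recognised gesture label to a robot command string and sending it to a robot-side listener over TCP; the port, command names, and message protocol are invented for illustration.

```python
import socket

GESTURE_TO_COMMAND = {          # assumed gesture vocabulary
    "open_hand": "STOP",
    "fist": "CLOSE_GRIPPER",
    "point_left": "MOVE_LEFT",
    "point_right": "MOVE_RIGHT",
}

def send_command(gesture, host="192.168.125.1", port=1025):
    """Translate a gesture label into a command and send it to the robot-side listener."""
    command = GESTURE_TO_COMMAND.get(gesture)
    if command is None:
        return  # ignore unrecognised gestures
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall((command + "\n").encode("ascii"))

# send_command("fist")  # would ask the (hypothetical) listener to close the gripper
```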

    Real-time visual and EMG signals recognition to control dexterous prosthetic hand based on deep learning and machine learning

    The revolution in prosthetic hands has allowed the evolution of a new generation of prostheses that incorporate artificial intelligence to control a dexterous hand. Selecting a suitable gripping and grasping action for objects of different shapes is currently a challenging task in prosthetic hand design. Most artificial hands are based on electromyography (EMG) signals. A novel approach is proposed in this work that uses a deep learning classification method to sort items into seven gripping patterns based on EMG and image recognition. The approach comprises two scenarios. In the first scenario, EMG signals are recorded from five healthy participants for the basic hand movements (cylindrical, tip, spherical, lateral, palmar, and hook). Three time-domain methods (standard deviation, mean absolute value, and principal component analysis) are then used to extract EMG signal features, and an SVM is used to assign the proper classes, achieving an accuracy of 89%. In the second scenario, 723 RGB images of 24 items are collected and sorted into seven classes, i.e., cylindrical, tip, spherical, lateral, palmar, hook, and full hand. The GoogLeNet architecture is used for training, with 144 layers that include convolutional layers, ReLU activation layers, max-pooling layers, drop-out layers, and a softmax layer; it achieves a training accuracy of 99%. Finally, the system is tested, and the experiments show that the proposed vision-based myoelectric control method (Vision-EMG) achieves a recognition accuracy of 95%.
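
    A minimal scikit-learn sketch of the first scenario's pipeline (time-domain EMG features followed by an SVM). The window length, channel count, and random data are placeholders; the original work uses EMG recorded from five participants over six movement classes.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def time_domain_features(window):
    """Per-channel mean absolute value and standard deviation for one EMG window."""
    mav = np.mean(np.abs(window), axis=0)
    sd = np.std(window, axis=0)
    return np.concatenate([mav, sd])

rng = np.random.default_rng(0)
windows = rng.standard_normal((300, 200, 4))       # 300 windows, 200 samples, 4 channels
labels = rng.integers(0, 6, size=300)              # 6 basic hand movements

X = np.array([time_domain_features(w) for w in windows])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, labels, cv=5).mean())  # chance-level here: data is random
```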

    Gesture Controlled Collaborative Robot Arm and Lab Kit

    In this paper, a mechatronics system was designed and implemented to cover the subjects of artificial intelligence, control algorithms, robot servo motor control, and human-machine interface (HMI). The goal was to create an inexpensive, multi-functional robotics lab kit to promote students’ interest in STEM fields, including computing and mechatronics. Industrial robotic systems have become vastly popular in manufacturing and other industries, and the demand for individuals with related skills is rapidly increasing. Robots can complete jobs that are dangerous, dull, or dirty for humans to perform. Recently, more and more collaborative robotic systems have been developed and implemented in industry. Collaborative robots utilize artificial intelligence to become aware of and capable of interacting with a human operator in progressively natural ways. The work created a computer vision-based collaborative robotic system that can be controlled via several different methods, including a touch screen HMI, hand gestures, and hard coding via the microcontroller integrated development environment (IDE). The flexibility provided in the framework resulted in an educational lab kit with varying levels of difficulty across several topics, such as C and Python programming, machine learning, HMI design, and robotics. The hardware used in this project includes a Raspberry Pi 4, an Arduino Due, a Braccio Robotics Kit, a Raspberry Pi 4 compatible vision module, and a 5-inch touchscreen display. We anticipate this educational lab kit will improve the effectiveness of student learning in the field of mechatronics.
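
    Illustrative sketch only: one way the Raspberry Pi side could forward a recognised hand gesture to the Arduino Due driving the Braccio arm. The serial port, baud rate, message format, and joint-angle presets are assumptions, not the kit's actual protocol.

```python
import serial  # pyserial

GESTURE_POSES = {                      # hypothetical joint-angle presets (degrees)
    "wave": [90, 45, 180, 180, 90, 10],
    "fist": [90, 90, 90, 90, 90, 73],
}

def send_pose(gesture, port="/dev/ttyACM0", baud=115200):
    """Send a comma-separated list of six joint angles for the requested gesture."""
    angles = GESTURE_POSES.get(gesture)
    if angles is None:
        return
    message = ",".join(str(a) for a in angles) + "\n"
    with serial.Serial(port, baud, timeout=1) as link:
        link.write(message.encode("ascii"))

# send_pose("wave")  # the Arduino sketch would parse the line and move the servos
```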

    Deep learning-based artificial vision for grasp classification in myoelectric hands

    Objective. Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. Approach. We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with regard to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) structure was trained with images of over 500 graspable objects. For each object, 72 images, at 5° intervals, were available. Objects were categorised into four grasp classes, namely: pinch, tripod, palmar wrist neutral and palmar wrist pronated. The CNN setting was first tuned and tested offline and then in real time with objects or object views that were not included in the training set. Main results. The classification accuracy in the offline tests reached 85% for the seen and 75% for the novel objects, reflecting the generalisability of grasp classification. We then implemented the proposed framework in real time on a standard laptop computer and achieved an overall score of 84% in classifying a set of novel as well as seen but randomly-rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb Ultra™ prosthetic hand and a Motion Control™ prosthetic wrist, augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success rate of up to 88%. In addition, we show that with training, subjects' performance improved in terms of the time required to accomplish a block of 24 trials, despite a decreasing level of visual feedback. Significance. The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep learning-based computer vision systems can considerably enhance the grip functionality of myoelectric hands.
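
    A hedged sketch of the core idea: fine-tune a pretrained CNN to map an object image to one of the four grasp classes (pinch, tripod, palmar wrist neutral, palmar wrist pronated). The backbone (ResNet-18) and preprocessing are illustrative choices, not the paper's exact setup, and the head shown here is untrained.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

GRASPS = ["pinch", "tripod", "palmar_neutral", "palmar_pronated"]

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(GRASPS))   # 4-way grasp head
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def predict_grasp(image_path):
    """Return the grasp class suggested for the object in the image."""
    image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(image)
    return GRASPS[int(logits.argmax(dim=1))]
```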

    Solving Multi-agent planning tasks by using automated planning

    This dissertation consists of developing a control system for an autonomous multi-agent system using Automated Planning and Computer Vision to solve warehouse organization tasks. This work presents a heterogeneous multi-agent system where each robot has different capabilities. In order to complete the proposed task, the robots need to collaborate. On the one hand, there are coordinator robots that collect information about the boxes using Computer Vision to determine their destination storage positions. On the other hand, there are cargo robots that push the boxes more easily than the coordinators but have no camera devices to identify the boxes. Thus, both types of robots must collaborate in order to solve the warehouse problem, given the different sensors and actuators they have available. This work has been developed in Java. It uses JNAOqi to communicate with the NAO robots (coordinators) and rosjava to communicate with the P3DX robots (cargo). The control modules are deployed in the PELEA architecture. The empirical evaluation has been conducted in a real environment using two robots: one NAO robot and one P3DX robot.
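
    Purely illustrative Python sketch of the collaboration pattern described above: a coordinator robot (with a camera) identifies each box's storage destination and hands push goals to cargo robots (without cameras). The real system is written in Java, uses JNAOqi and rosjava, and obtains these assignments from an automated planner inside PELEA; the data and assignment rule here are invented.

```python
from dataclasses import dataclass

@dataclass
class PushGoal:
    box_id: str
    destination: str      # storage position chosen by the coordinator's vision module
    cargo_robot: str      # robot assigned to push the box

def plan_warehouse(boxes, destinations, cargo_robots):
    """Round-robin assignment of identified boxes to the available cargo robots."""
    goals = []
    for i, box in enumerate(boxes):
        robot = cargo_robots[i % len(cargo_robots)]
        goals.append(PushGoal(box, destinations[box], robot))
    return goals

boxes = ["box_a", "box_b"]
destinations = {"box_a": "shelf_1", "box_b": "shelf_3"}   # as perceived by the coordinator
print(plan_warehouse(boxes, destinations, ["p3dx_1"]))
```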