387 research outputs found

    Implementation of Adaptive Neural Networks Controller for NXT SCARA Robot System

    Several neural network controllers for robotic manipulators have been developed over the last decades, owing to their ability to learn the dynamic properties of the system and to improve its global stability. In this paper, an adaptive neural controller with self-learning is designed to resolve the problems caused by using a classical controller. The improved unsupervised adaptive neural network controller is compared with a P controller on the NXT SCARA robot system, and the results show that the self-learning controller tracks the prescribed trajectory better in the presence of uncertainties. Implementation and practical results were designed to guarantee online real-time
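    The abstract describes an online self-learning neural controller compared against a P controller. The sketch below is only a minimal illustration of that idea, not the paper's implementation: a single-hidden-layer network whose weights are adapted on-line from the tracking error of a two-joint arm, next to a plain P law for comparison. Network sizes, gains, and the delta-rule-style update are illustrative assumptions.

    # Minimal sketch (not the paper's controller): an online-adapting
    # single-hidden-layer neural controller and a P controller for a
    # two-joint arm. Sizes, gains, and the update rule are illustrative.
    import numpy as np

    class OnlineNeuralController:
        def __init__(self, n_joints=2, n_hidden=16, lr=1e-2):
            rng = np.random.default_rng(0)
            self.W1 = rng.normal(0, 0.1, (n_hidden, 2 * n_joints))  # input: error and error rate
            self.W2 = rng.normal(0, 0.1, (n_joints, n_hidden))
            self.lr = lr

        def control(self, err, derr):
            self.x = np.concatenate([err, derr])
            self.h = np.tanh(self.W1 @ self.x)
            return self.W2 @ self.h                                  # joint commands

        def adapt(self, err):
            # Self-learning step: nudge the output in the direction of the
            # tracking error (an illustrative delta-rule update).
            self.W2 += self.lr * np.outer(err, self.h)
            self.W1 += self.lr * np.outer((self.W2.T @ err) * (1 - self.h ** 2), self.x)

    def p_controller(err, kp=5.0):
        return kp * err                                              # classical P law for comparison

    # Example of one control/adaptation step
    ctrl = OnlineNeuralController()
    err, derr = np.array([0.1, -0.05]), np.array([0.0, 0.0])
    u = ctrl.control(err, derr)
    ctrl.adapt(err)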

    Adaptive motor control and learning in a spiking neural network realised on a mixed-signal neuromorphic processor

    Neuromorphic computing is a new paradigm for the design of both computing hardware and algorithms, inspired by biological neural networks. The event-based nature and the inherent parallelism make neuromorphic computing a promising paradigm for building efficient neural-network-based architectures for the control of fast and agile robots. In this paper, we present a spiking neural network architecture that uses sensory feedback to control the rotational velocity of a robotic vehicle. When the velocity reaches the target value, the mapping from the target velocity of the vehicle to the correct motor command, both represented in the spiking neural network on the neuromorphic device, is autonomously stored on the device using on-chip plastic synaptic weights. We validate the controller using a wheel motor of a miniature mobile vehicle and an inertial measurement unit as the sensory feedback, and demonstrate online learning of a simple 'inverse model' in a two-layer spiking neural network on the neuromorphic chip. The prototype neuromorphic device, which features 256 spiking neurons, allows us to realise a simple proof-of-concept architecture for purely neuromorphic motor control and learning. The architecture can easily be scaled up if a larger neuromorphic device is available. (Comment: 6+1 pages, 4 figures, to appear in a robotics conference.)
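    As a rough, rate-coded approximation of the two-layer inverse model described above (and not the neuromorphic hardware implementation), the sketch below learns a mapping from target velocity to motor command using velocity feedback and a local, error-driven weight update. Population size, encoding, learning rate, and the one-parameter plant are illustrative assumptions.

    # Rate-coded sketch of learning an 'inverse model' from feedback:
    # target velocity -> motor command, with plastic weights driven by
    # the measured velocity. All numbers here are illustrative.
    import numpy as np

    N = 32                                     # neurons encoding the target velocity
    centers = np.linspace(-1.0, 1.0, N)        # preferred velocities of the population

    def encode(v, width=0.15):
        """Gaussian population code for a normalized target velocity."""
        return np.exp(-((v - centers) ** 2) / (2 * width ** 2))

    w = np.zeros(N)                            # plastic synapses to the motor unit
    true_gain = 0.6                            # unknown plant: velocity = gain * command
    lr = 0.05

    for step in range(2000):
        v_target = np.random.uniform(-1.0, 1.0)
        a = encode(v_target)
        command = w @ a                        # motor command produced by the network
        v_measured = true_gain * command       # sensory feedback (stands in for the IMU)
        err = v_target - v_measured
        w += lr * err * a                      # local, feedback-driven plasticity

    print("learned command for v=0.5:", w @ encode(0.5))   # approaches 0.5 / 0.6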

    Robotic Manipulator Control in the Presence of Uncertainty

    This research focuses on the problem of manipulator control in the presence of uncertainty and aims to compare different approaches for handling uncertainty while developing robust and adaptive methods that can control the robot without explicit knowledge of uncertainty bounds. Uncertainty is a pervasive challenge in robotics, arising from various sources such as sensor noise, modeling errors, and external disturbances. Effectively addressing uncertainty is crucial for achieving accurate and reliable manipulator control. The research will explore and compare existing methods for uncertainty handling, such as robust feedback linearization, sliding mode control, and robust adaptive control. These methods provide mechanisms to model and compensate for uncertainty in the control system. Additionally, modified robust and adaptive control methods will be developed that can dynamically adjust control laws based on the observed states, without requiring explicit knowledge of uncertainty bounds. To evaluate the performance of the different approaches, comprehensive experiments will be conducted on a manipulator platform. Various manipulation tasks will be performed under different levels of uncertainty, and the performance of each control approach will be assessed in terms of accuracy, stability, and adaptability. Comparative analysis will be conducted to highlight the strengths and weaknesses of each method and identify the most effective approach for handling uncertainty in manipulator control. The outcomes of this research will contribute to the advancement of manipulator control by providing insights into the effectiveness of different approaches for uncertainty handling. The development of new robust and adaptive control methods will enable manipulators to operate in uncertain environments without requiring explicit knowledge of uncertainty bounds. Ultimately, this research will facilitate the deployment of more reliable and adaptive robotic systems capable of handling uncertainty and improving their performance in various real-world applications
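    To make one of the ideas above concrete, the sketch below shows a sliding-mode controller for a single joint whose switching gain is adapted on-line, so that no explicit bound on the disturbance is required. The double-integrator joint model, reference trajectory, and all gains are assumptions made for illustration; this is not the thesis's controller.

    # Illustrative adaptive sliding-mode controller for one joint: the
    # switching gain k_hat is adapted from |s| instead of assuming a
    # known uncertainty bound. Plant and gains are assumed.
    import numpy as np

    dt, lam, gamma = 1e-3, 10.0, 50.0      # step size, sliding-surface slope, adaptation rate
    q, dq, k_hat = 0.0, 0.0, 0.0           # joint position/velocity, adaptive switching gain

    def reference(t):                       # desired joint trajectory (assumed)
        return np.sin(t), np.cos(t), -np.sin(t)

    for i in range(20000):
        t = i * dt
        qd, dqd, ddqd = reference(t)
        e, de = q - qd, dq - dqd
        s = de + lam * e                    # sliding variable
        k_hat += gamma * abs(s) * dt        # adapt the gain instead of assuming a bound
        u = ddqd - lam * de - k_hat * np.sign(s)
        disturbance = 0.5 * np.sin(5 * t)   # unknown bounded uncertainty
        ddq = u + disturbance               # double-integrator joint model (assumed)
        dq += ddq * dt
        q += dq * dt

    print(f"final tracking error: {q - np.sin(t):.4f}")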

    Applying RBF Neural Nets for Position Control of an Inter/Scara Robot

    This paper describes experimental results from applying artificial neural networks to the position control of a real SCARA manipulator robot. The general control strategy consists of a neural controller that operates in parallel with a conventional controller, following the feedback error learning architecture. The main advantage of this architecture is that it does not require any modification of the pre-existing conventional controller algorithm. MLP and RBF neural networks trained on-line have been used, without requiring any previous knowledge about the system to be controlled. This approach has performed very successfully, with the RBF networks yielding better results than PID and sliding mode positional controllers
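    The sketch below illustrates the feedback error learning idea mentioned above: a neural feedforward term runs in parallel with a conventional PD controller, and the PD output itself serves as the network's training signal. A one-dimensional RBF network, a linear single-joint plant, and all gains are illustrative assumptions rather than the paper's setup.

    # Feedback-error-learning sketch: the RBF feedforward term is trained
    # on-line using the conventional PD controller's output as the error
    # signal. Plant, gains, and network size are assumed for illustration.
    import numpy as np

    centers = np.linspace(-1.5, 1.5, 25)           # RBF centers over the reference range
    sigma, w, lr = 0.15, np.zeros(25), 0.05
    kp, kd, dt = 20.0, 5.0, 1e-3
    q = dq = 0.0

    def rbf(x):
        return np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))

    for i in range(50000):
        t = i * dt
        qd, dqd = np.sin(t), np.cos(t)
        phi = rbf(qd)
        u_ff = w @ phi                             # neural feedforward term
        u_fb = kp * (qd - q) + kd * (dqd - dq)     # conventional PD controller
        u = u_ff + u_fb
        w += lr * u_fb * phi * dt                  # feedback error learning update
        ddq = u - 5.0 * q                          # assumed linear single-joint plant
        dq += ddq * dt
        q += dq * dt

    print(f"residual feedback effort at the end: {u_fb:.4f}")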

    Task level disentanglement learning in robotics using βVAE

    Humans observe and infer things in a disentangled way: instead of remembering everything pixel by pixel, they learn with factors such as shape, scale, and colour. Robot task learning is an open problem in the field of robotics, and task planning in a robot workspace with many constraints makes it even more challenging. In this work, a disentangled representation of robot tasks is learned with a Convolutional Variational Autoencoder, effectively capturing the underlying variations in the data. A robot dataset for disentanglement evaluation is generated with the Selective Compliance Assembly Robot Arm. The disentanglement score of the proposed model is increased to 0.206 with a robot path position accuracy of 0.055, while the state-of-the-art model (VAE) scores 0.015 with a corresponding path position accuracy of 0.053. The proposed algorithm is developed in Python and validated on a simulated robot model in Gazebo interfaced with the Robot Operating System
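    As a reference point for the β-VAE objective used for disentanglement, the sketch below shows the standard loss with a weighting factor beta greater than one on the KL term. The fully connected encoder/decoder, latent size, beta value, and 64x64 input shape are assumptions made for brevity; the paper uses a convolutional architecture.

    # Minimal beta-VAE objective sketch (sizes, beta, and the fully
    # connected layers are assumptions, not the paper's exact model).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BetaVAE(nn.Module):
        def __init__(self, latent_dim=10, beta=4.0):
            super().__init__()
            self.beta = beta
            self.enc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU())
            self.mu = nn.Linear(256, latent_dim)
            self.logvar = nn.Linear(256, latent_dim)
            self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 64 * 64), nn.Sigmoid())

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
            x_hat = self.dec(z).view_as(x)
            recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
            kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
            return recon + self.beta * kld          # beta > 1 encourages disentangled latents

    loss = BetaVAE()(torch.rand(8, 64, 64))         # toy batch of 64x64 "images"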

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in challenging scenarios for traditional cameras, such as low-latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world
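    The data model described above (asynchronous events carrying a timestamp, pixel location, and polarity) can be made concrete with a tiny sketch that accumulates events into a frame for conventional, frame-based processing. The array layout and sensor size are illustrative assumptions.

    # Tiny sketch of the event data model: each event is (t, x, y, polarity),
    # and events can be accumulated into a signed frame. Shapes are assumed.
    import numpy as np

    def events_to_frame(events, height, width):
        """events: array of (t, x, y, polarity) rows, polarity in {-1, +1}."""
        frame = np.zeros((height, width), dtype=np.int32)
        for t, x, y, p in events:
            frame[int(y), int(x)] += int(p)         # signed accumulation of brightness changes
        return frame

    # Three synthetic events on a 4x4 sensor
    events = np.array([[0.001, 1, 2, +1],
                       [0.002, 1, 2, +1],
                       [0.003, 3, 0, -1]])
    print(events_to_frame(events, 4, 4))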

    Improving robotic grasping system using deep learning approach

    Traditional robots can only move along a pre-planned trajectory, which limits the range of applications they can be engaged in. Despite its long history, the use of computer vision technology for grasp prediction and object detection is still an active research area. However, generating a full grasp configuration of a target object is the main challenge in planning a successful physical robotic grasp. Integrating computer vision technology with tactile sensing feedback has given rise to a new capability of robots that can accomplish various robotic tasks. However, recently conducted studies used tactile sensing with grasp detection models to improve prediction accuracy, not physical grasp success. Thus, the problem of detecting the slip of grasped objects that have different weights is addressed in this research. This research aimed to develop a deep learning grasp detection model and a slip detection algorithm and to integrate them into one innovative robotic grasping system. By proposing a four-step data augmentation technique, the achieved grasping accuracy was 98.2%, exceeding the best reported results by almost 0.5%, with 625 new instances generated per original image with different grasp labels. Besides, using the two-stage transfer-learning technique improved the results obtained in the second stage by 0.3% compared to the first-stage results. For the physical robot grasp, the proposed seven-dimensional grasp representation method allows autonomous prediction of the grasp size and depth. The developed model achieved a prediction time of 74.8 milliseconds, which makes it possible to use the model in real-time robotic applications. By observing the real-time feedback of a force-sensing resistor, the proposed slip detection algorithm responded within 86 milliseconds. These results allowed the system to keep holding the target objects by immediately increasing the grasping force. The integration of the deep learning and slip detection models has shown a significant improvement of 18.4% in the results of the experimental grasps conducted on a SCARA robot. Besides, the utilized Zero-cross Canny edge detector improved the robot positioning error by 0.27 mm compared to the related studies. The achieved results introduce an innovative robotic grasping system with a Grasp-NoDrop-Place scheme
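    The slip handling described above can be pictured with a simple, hedged sketch: a sudden drop in the reading of a force-sensing resistor within a short window is treated as a slip event and triggers an immediate grip-force increase. The window length, thresholds, sample values, and actuator interface are illustrative assumptions, not the thesis's algorithm.

    # Hedged slip-monitor sketch: a fast drop in the FSR reading over a
    # short window triggers a grip-force increase. All numbers are assumed.
    from collections import deque

    class SlipDetector:
        def __init__(self, window=10, drop_threshold=0.15):
            self.readings = deque(maxlen=window)    # recent normalized FSR samples
            self.drop_threshold = drop_threshold

        def update(self, fsr_value):
            """Return True if a slip (fast force drop) is detected."""
            self.readings.append(fsr_value)
            if len(self.readings) < self.readings.maxlen:
                return False
            return (max(self.readings) - self.readings[-1]) > self.drop_threshold

    detector = SlipDetector()
    grip_force = 0.4
    for sample in [0.52, 0.51, 0.52, 0.50, 0.51, 0.52, 0.51, 0.50, 0.33, 0.30]:
        if detector.update(sample):
            grip_force = min(1.0, grip_force + 0.1)   # immediate grasping-force increase
    print("grip force after slip:", grip_force)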

    Neural Network Learning Algorithms for High-Precision Position Control and Drift Attenuation in Robotic Manipulators

    In this paper, different learning methods based on Artificial Neural Networks (ANNs) are examined to replace the default speed controller for high-precision position control and drift attenuation in robotic manipulators. ANN learning methods including Levenberg–Marquardt and Bayesian Regression are implemented and compared using a UR5 robot with six degrees of freedom to improve trajectory tracking and minimize position error. Extensive simulation and experimental tests on the identification and control of the robot by means of the neural network controllers yield comparable results with respect to the classical controller, showing the feasibility of the proposed approach
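    To indicate the kind of fit involved (as an assumption, not the paper's code, which targets a UR5), the sketch below trains a small one-hidden-layer network with SciPy's Levenberg-Marquardt solver to map desired joint positions to corrective commands on synthetic data.

    # Levenberg-Marquardt fit of a tiny one-hidden-layer network using
    # scipy.optimize.least_squares. Data and sizes are synthetic assumptions.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(1)
    X = np.linspace(-np.pi, np.pi, 200)                       # desired joint positions (synthetic)
    Y = 0.8 * np.sin(X) + 0.05 * rng.normal(size=X.shape)     # target commands (synthetic)
    H = 8                                                     # hidden units

    def unpack(p):
        return p[:H], p[H:2 * H], p[2 * H:3 * H], p[3 * H]

    def predict(p, x):
        w1, b1, w2, b2 = unpack(p)
        return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

    def residuals(p):
        return predict(p, X) - Y

    p0 = rng.normal(0, 0.5, 3 * H + 1)
    sol = least_squares(residuals, p0, method="lm")           # Levenberg-Marquardt step rule
    print("final RMS error:", np.sqrt(np.mean(sol.fun ** 2)))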