
    Simulating development in a real robot: on the concurrent increase of sensory, motor, and neural complexity

    We present a quantitative investigation of the effects of a discrete developmental progression on the acquisition of a foveation behavior by a robotic hand-arm-eyes system. Development is simulated by (a) increasing the resolution of the visual and tactile systems, (b) freezing and freeing mechanical degrees of freedom, and (c) adding neuronal units to the neural control architecture. Our experimental results show that a system starting with a low-resolution sensory system, a low-precision motor system, and a low-complexity neural structure learns faster than a system which is more complex from the beginning.

    A Novel Uncalibrated Visual Servoing Controller Based on Model-Free Adaptive Control Method with Neural Network

    Nowadays, with the continuous expansion of application scenarios for robotic arms, nonspecialists increasingly come into contact with them. However, in terms of robotic-arm visual servoing, traditional Position-Based Visual Servoing (PBVS) requires extensive calibration work, which is challenging for nonspecialists to cope with. Uncalibrated Image-Based Visual Servoing (UIBVS) frees users from this tedious calibration work. This work applies a model-free adaptive control (MFAC) method, in which the controller parameters are updated in real time, improving robustness to changes in the system and environment. An artificial neural network is applied in the design of the controller and of the estimator for the hand-eye relationship. Within the MFAC method, the neural network is updated with knowledge of the system input and output information. Inspired by the "predictive model" and "receding horizon" of the Model Predictive Control (MPC) method, and introducing similar structures into our algorithm, we realize uncalibrated visual servoing for both stationary targets and moving trajectories. Simulated experiments with a robotic manipulator are carried out to validate the proposed algorithm. Comment: 16 pages, 8 figures
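The core of uncalibrated visual servoing is estimating the hand-eye mapping online instead of calibrating it in advance. A minimal sketch of that idea, using a classic Broyden rank-one Jacobian update in place of the paper's neural-network estimator (all function names, gains, and dimensions here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def broyden_update(J, dq, ds, alpha=0.5):
    """Rank-one Broyden update of the estimated image Jacobian.

    J  : current (features x joints) Jacobian estimate
    dq : joint displacement applied in the last step
    ds : image-feature displacement observed as a result
    """
    denom = dq @ dq
    if denom < 1e-9:            # ignore negligible motions
        return J
    return J + alpha * np.outer(ds - J @ dq, dq) / denom

def servo_step(J, feature_error, gain=0.1):
    """Joint-velocity command from the feature error via the
    pseudo-inverse of the *estimated* Jacobian."""
    return -gain * np.linalg.pinv(J) @ feature_error
```

Starting from a rough initial estimate (e.g. the identity), the estimate is corrected at every iteration from observed input-output pairs, which is the same "learn the mapping from data, in real time" principle the MFAC controller relies on.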

    Robotic Ball Catching with an Eye-in-Hand Single-Camera System

    In this paper, a unified control framework is proposed to realize a robotic ball-catching task with only a moving single-camera (eye-in-hand) system, able to catch flying, rolling, and bouncing balls within the same formalism. The thrown ball is visually tracked through a circle detection algorithm. Once the ball is recognized, the camera is forced to follow a baseline in space so as to acquire an initial dataset of visual measurements. A first estimate of the catching point is initially provided through a linear algorithm. Then, additional visual measurements are acquired to constantly refine the current estimate by exploiting a nonlinear optimization algorithm and a more accurate ballistic model. A classic partitioned visual servoing approach is employed to control the translational and rotational components of the camera differently. Experimental results performed on an industrial robotic system prove the effectiveness of the presented solution. A motion-capture system is employed to validate the proposed estimation process via ground truth.
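The first stage of such an estimation pipeline, a linear estimate of the catching point from early visual measurements, can be sketched as a least-squares fit of a drag-free ballistic model. This is a simplified illustration of the principle, not the paper's algorithm; the catch-plane convention and function names are assumptions:

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def fit_ballistic(t, p):
    """Least-squares fit of initial position/velocity from 3-D ball
    positions p (N x 3) observed at times t (N,), assuming a
    drag-free ballistic model p(t) = p0 + v0*t - 0.5*g*t^2*ez."""
    A = np.column_stack([np.ones_like(t), t])
    b = p.copy()
    b[:, 2] += 0.5 * G * t**2          # remove the known gravity term
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[0], coef[1]            # p0, v0

def catch_point(p0, v0, z_catch=0.0):
    """Predict where and when the ball crosses the plane z = z_catch."""
    a, b, c = -0.5 * G, v0[2], p0[2] - z_catch
    t_hit = (-b - np.sqrt(b**2 - 4 * a * c)) / (2 * a)  # later root
    return p0[:2] + v0[:2] * t_hit, t_hit
```

In the paper this linear estimate is only the starting point; it is then refined online with a nonlinear optimization and a richer ballistic model as more measurements arrive.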

    Uncalibrated eye-to-hand visual servoing using inverse fuzzy models

    (c) 2007 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. A new uncalibrated eye-to-hand visual servoing approach based on inverse fuzzy modeling is proposed in this paper. In classical visual servoing, the Jacobian plays a decisive role in the convergence of the controller, as its analytical model depends on the selected image features. This Jacobian must also be inverted online. Fuzzy modeling is applied to obtain an inverse model of the mapping between image feature variations and joint velocities. This approach is independent of the robot's kinematic model and camera calibration, and also avoids the need to invert the Jacobian online. An inverse model is identified for the robot workspace using measurement data from a robotic manipulator. This inverse model is directly used as a controller. The inverse fuzzy control scheme is applied to a robotic manipulator performing visual servoing for random positioning in the robot workspace. The obtained experimental results show the effectiveness of the proposed control scheme. The fuzzy controller can position the robotic manipulator at any point in the workspace with better accuracy than the classic visual servoing approach.

    Abstract and proportional myoelectric control for multi-fingered hand prostheses

    Powered hand prostheses with many degrees of freedom are moving from research into the prosthetics market. In order to make use of the prostheses' full functionality, it is essential to study efficient ways of high-dimensional myoelectric control. Human subjects can rapidly learn to employ electromyographic (EMG) activity of several hand and arm muscles to control the position of a cursor on a computer screen, even if the muscle-cursor map contradicts the directions in which the muscles would act naturally. But can a similar control scheme be translated into real-time operation of a dexterous robotic hand? We found that, despite different degrees of freedom in the effector output, the learning process for controlling a robotic hand was surprisingly similar to that for a virtual two-dimensional cursor. Control signals were derived from the EMG in two different ways, with a linear and a Bayesian filter, to test how stably user intentions could be conveyed through them. Our analysis indicates that without visual feedback, control accuracy benefits from filters that reject high EMG amplitudes. In summary, we conclude that findings on myoelectric control principles, studied in abstract, virtual tasks, can be transferred to real-life prosthetic applications. Electronic supplementary material: the online version of this article (doi:10.1007/s10439-013-0876-5) contains supplementary material, which is available to authorized users.
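Why rejecting high EMG amplitudes helps can be seen with a deliberately simplified pair of filters: a first-order linear low-pass versus the same filter with large amplitudes discarded. The paper's Bayesian filter is more sophisticated; this sketch (with assumed parameter names and an arbitrary rejection threshold) only illustrates the stabilizing effect:

```python
import numpy as np

def linear_filter(emg, alpha=0.2):
    """First-order low-pass of the rectified EMG amplitude:
    every sample, including artifacts, moves the estimate."""
    out, y = [], 0.0
    for x in emg:
        y = (1 - alpha) * y + alpha * abs(x)
        out.append(y)
    return np.array(out)

def clipped_filter(emg, alpha=0.2, ceiling=1.0):
    """Same low-pass, but samples above `ceiling` are rejected
    (the previous estimate is held) instead of tracked."""
    out, y = [], 0.0
    for x in emg:
        if abs(x) <= ceiling:
            y = (1 - alpha) * y + alpha * abs(x)
        out.append(y)
    return np.array(out)
```

A brief high-amplitude artifact drags the linear estimate far from the user's steady intent, while the rejecting filter holds its estimate through the spike and stays near the true level.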

    Interaction with a hand rehabilitation exoskeleton in EMG-driven bilateral therapy: Influence of visual biofeedback on the users’ performance

    The effectiveness of EMG biofeedback with neurorehabilitation robotic platforms has not been previously addressed. The present work evaluates the influence of an EMG-based visual biofeedback on user performance when performing EMG-driven bilateral exercises with a robotic hand exoskeleton. Eighteen healthy subjects were asked to perform 1-min randomly generated sequences of hand gestures (rest, open, and close) in four different conditions resulting from the combination of using or not using (1) EMG-based visual biofeedback and (2) kinesthetic feedback from the exoskeleton movement. User performance in each test was measured by computing the similarity between the target gestures and the recognized user gestures using the L2 distance. Statistically significant differences in subject performance were found for the type of provided feedback (p-value = 0.0124). Pairwise comparisons showed that the L2 distance was statistically significantly lower when only EMG-based visual feedback was present (2.89 ± 0.71) than with kinesthetic feedback alone (3.43 ± 0.75, p-value = 0.0412) or the combination of both (3.39 ± 0.70, p-value = 0.0497). Hence, EMG-based visual feedback enables subjects to increase their control over the movement of the robotic platform by assessing their muscle activation in real time. This type of feedback could benefit patients in learning more quickly how to activate robot functions, increasing their motivation towards rehabilitation. Funding: Ministerio de Ciencia e Innovación (project RTC2019-007350-1); Consejería de Educación, Fondo Social Europeo, Gobierno Vasco (BERC 2022-2025 and project 3KIA, KK-2020/00049); Ministerio de Ciencia, Innovación y Universidades (BCAM Severo Ochoa: SEV-2017-0718).
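The performance metric, the L2 distance between the target gesture sequence and the recognized user gestures, is simple to state in code. The numeric gesture encoding below (0 = rest, 1 = open, 2 = close) is a hypothetical choice for illustration; the paper does not specify its encoding:

```python
import numpy as np

# hypothetical gesture codes: 0 = rest, 1 = open, 2 = close
def gesture_l2(target, recognized):
    """Performance score for one test: L2 distance between the target
    gesture sequence and the recognized user-gesture sequence,
    sampled at the same instants. Lower means better tracking."""
    target = np.asarray(target, dtype=float)
    recognized = np.asarray(recognized, dtype=float)
    assert target.shape == recognized.shape, "sequences must align"
    return np.linalg.norm(target - recognized)
```

A perfect run scores 0, and every misrecognized or mistimed gesture increases the distance, which is how the reported condition means (e.g. 2.89 vs. 3.43) order the feedback conditions.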

    Robotic Harvesting of Fruiting Vegetables: A Simulation Approach in V-REP, ROS and MATLAB

    In modern agriculture, there is a high demand to move from tedious manual harvesting to a continuously automated operation. This chapter reports on designing a simulation and control platform in V-REP, ROS, and MATLAB for experimenting with sensors and manipulators in robotic harvesting of sweet pepper. The objective was to provide a completely simulated environment for improvement of the visual servoing task through easy testing and debugging of control algorithms, with zero damage risk to the real robot and to the actual equipment. A simulated workspace, including an exact replica of different robot manipulators, sensing mechanisms, and a sweet pepper plant and fruit system, was created in V-REP. Image-moment visual servoing with an eye-in-hand configuration was implemented in MATLAB and tested on four robotic platforms: the Fanuc LR Mate 200iD, NOVABOT, multiple linear actuators, and multiple SCARA arms. Data from simulation experiments were used as inputs to the control algorithm in MATLAB, whose outputs were sent back to the simulated workspace and to the actual robots. ROS was used for exchanging data between the simulated environment and the real workspace via its publish-and-subscribe architecture. The results provided a framework for experimenting with different sensing and acting scenarios and verified the functionality of the simulator.
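Image-moment visual servoing drives the camera from a few scalar features of the segmented fruit blob rather than from individual point correspondences. A minimal sketch of those features, here written in Python rather than the chapter's MATLAB, with assumed function names and a toy error definition (centre the fruit and match a target apparent area):

```python
import numpy as np

def image_moments(binary):
    """Zeroth- and first-order moments of a binary fruit mask:
    m00 gives the blob area, (m10/m00, m01/m00) the centroid."""
    ys, xs = np.nonzero(binary)
    m00 = len(xs)
    if m00 == 0:
        return 0, None          # no fruit detected in this frame
    return m00, (xs.mean(), ys.mean())

def servo_error(centroid, area, target_xy, target_area):
    """Feature error for the controller: centre the fruit in the
    image and approach until its apparent area matches the target."""
    return np.array([centroid[0] - target_xy[0],
                     centroid[1] - target_xy[1],
                     area - target_area])
```

In the described platform, features like these would be computed from the simulated camera image, fed to the MATLAB controller, and the resulting commands published back to V-REP (and to the real robots) over ROS topics.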

    Intrinsic somatosensory feedback supports motor control and learning to operate artificial body parts

    Objective: Considerable resources are being invested to enhance the control and usability of artificial limbs through the delivery of unnatural forms of somatosensory feedback. Here, we investigated whether intrinsic somatosensory information from the body part(s) remotely controlling an artificial limb can be leveraged by the motor system to support control and skill learning. Approach: In a placebo-controlled design, we used local anaesthetic to attenuate somatosensory inputs to the big toes while participants learned to operate, through pressure sensors, a toe-controlled, hand-worn robotic extra finger. Motor learning outcomes were compared against a control group who received a sham anaesthetic and were quantified in three different task scenarios: while operating in isolation from, in synchronous coordination with, and in collaboration with the biological fingers. Main results: Both groups were able to learn to operate the robotic extra finger, presumably due to the abundance of visual feedback and other relevant sensory cues. Importantly, the availability of displaced somatosensory cues from the distal bodily controllers facilitated the acquisition of isolated robotic finger movements, the retention and transfer of synchronous hand-robot coordination skills, and performance under cognitive load. Motor performance was not impaired by toe anaesthesia when tasks involved close collaboration with the biological fingers, indicating that the motor system can close the sensory feedback gap by dynamically integrating task-intrinsic somatosensory signals from multiple, and even distal, body parts. Significance: Together, our findings demonstrate that there are multiple natural avenues to provide intrinsic surrogate somatosensory information to support motor control of an artificial body part, beyond artificial stimulation.

    A decoupled image space approach to visual servo control of a robotic manipulator.

    An image-based visual servo control is presented for a robotic manipulator. The proposed control design addresses visual servoing of 'eye-in-hand' type systems. Using a novel representation of the visual error, based on a spherical representation of target centroid information along with a measure of the rotation between the camera and the target, the control of position and orientation is decoupled. A non-linear gain introduced into the orientation feedback kinematics prevents the target image from leaving the visual field. Semi-global convergence of the closed-loop system is proved.
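The role of the non-linear gain can be illustrated with a barrier-style gain that grows without bound as the target centroid approaches the edge of the visual field, so the orientation feedback always pulls the target back into view. This is a generic sketch of the mechanism, not the paper's actual gain function; the names and the field-of-view parameterization are assumptions:

```python
import numpy as np

def fov_gain(offset, fov_half=0.5, k=1.0):
    """Barrier-style orientation gain: k at the image centre, growing
    without bound as the target centroid offset approaches the
    field-of-view boundary (|offset| -> fov_half)."""
    r = np.linalg.norm(offset)
    assert r < fov_half, "target already outside the field of view"
    return k / (1.0 - (r / fov_half) ** 2)
```

Scaling the orientation feedback by such a gain makes the field-of-view boundary repulsive in closed loop, which is the qualitative effect the abstract attributes to its non-linear gain.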

    Keytar: Melodic control of multisensory feedback from virtual strings

    A multisensory virtual environment has been designed, aiming at recreating a realistic interaction with a set of vibrating strings. Haptic, auditory, and visual cues progressively instantiate the environment: force and tactile feedback are provided by a robotic arm that reports string reaction and string surface properties, and furthermore defines the physical touchpoint in the form of a virtual plectrum embodied by the arm stylus. Auditory feedback is instantaneously synthesized as a result of the contacts of this plectrum against the strings, reproducing guitar sounds. A simple visual scenario contextualizes the plectrum in action along with the vibrating strings. Notes and chords are selected using a keyboard controller, so that one hand is engaged in the creation of a melody while the other hand plucks the virtual strings. These components have been integrated within the Unity3D simulation environment for game development and run altogether on a PC. As also reported by a group of users testing a monophonic Keytar prototype with no keyboard control, the most significant contribution to the realism of the strings is given by the haptic feedback, in particular by the textural nuances that the robotic arm synthesizes while reproducing the physical attributes of a metal surface. Their opinion, hence, argues in favor of the importance of factors other than auditory feedback for the design of new musical interfaces.