    Shared Control of Assistive Robotic Manipulators

    The continuum of control for an assistive robotic manipulator (ARM) ranges from manual control to full autonomy; shared control of an ARM operates in the space between these two extremes. This paper reviews the status quo of shared control of ARMs. Although users and ARMs can divide responsibility for a manipulation task in different ways, most research in this area focuses on maximizing robot autonomy and minimizing user control, while other work splits the responsibilities more evenly between the ARM and the user. User studies in this area are very limited. More research is needed to investigate overall performance, workload, and satisfaction across different levels of autonomy for the shared control of ARMs.
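
    A common way to realize shared control between these two extremes is linear arbitration, in which the commanded end-effector velocity is a weighted blend of the user's input and the autonomous controller's output. The sketch below illustrates this idea only; the blending weight alpha and both command sources are assumptions for illustration, not taken from any paper reviewed here.

```python
import numpy as np

def blend_commands(u_user, u_auto, alpha):
    """Linear arbitration: alpha = 0 is pure manual control, alpha = 1 is
    full autonomy; intermediate values share control between the two.

    u_user, u_auto: 6-D end-effector velocity commands (vx, vy, vz, wx, wy, wz).
    """
    assert 0.0 <= alpha <= 1.0
    return alpha * np.asarray(u_auto) + (1.0 - alpha) * np.asarray(u_user)

# Example: weight the autonomous planner at 70% as the gripper nears the goal.
u_user = np.array([0.10, 0.00, 0.00, 0.0, 0.0, 0.0])   # joystick command
u_auto = np.array([0.06, 0.04, -0.02, 0.0, 0.0, 0.1])  # planner command
u_cmd = blend_commands(u_user, u_auto, alpha=0.7)
```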

    Three Dimensional Computer Vision-Based Alternative Control Method for Assistive Robotic Manipulator

    JACO (Kinova Technology, Montreal, QC, Canada) is an assistive robotic manipulator that is gaining popularity for its ability to assist individuals with physical impairments in activities of daily living. To accommodate a wider range of users, especially those with severe physical limitations, alternative control methods need to be developed. In this paper, we present a vision-based assistive robotic manipulation assistance algorithm (AROMA) for JACO, which uses a low-cost 3D depth-sensing camera and an improved inverse kinematics algorithm to enable semi-autonomous or autonomous operation of the JACO. Benchtop tests on a series of grasping tasks showed that AROMA was able to reliably determine target gripper poses. Success rates for the grasping tasks ranged from 85% to 100% across different objects.
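
    As a rough illustration of how a target gripper pose might be derived from low-cost depth data, the sketch below computes a grasp center and approach axis from a segmented object point cloud via principal component analysis. This is a generic, hypothetical stand-in, not the actual AROMA pipeline, and it assumes the object has already been segmented from the depth image.

```python
import numpy as np

def estimate_grasp_pose(points):
    """Estimate a target gripper pose from a segmented object point cloud.

    points: (N, 3) array of object points in camera coordinates.
    Returns (center, principal_axis): a grasp position and the object's
    longest axis, with which the gripper can be aligned.
    """
    center = points.mean(axis=0)
    # PCA: the eigenvector of the covariance with the largest eigenvalue
    # is the object's long axis.
    cov = np.cov((points - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    principal_axis = eigvecs[:, np.argmax(eigvals)]
    return center, principal_axis
```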

    Servo Control of an Assistive Robotic Arm Using an Artificial Stereo Vision System and an Eye Tracker

    The recent increased interest in the use of serial robots to assist individuals with severe upper limb disability has brought up an important issue: the design of the right human-computer interaction (HCI). So far, assistive robotic arms (ARA) are often controlled with a joystick; for users with a severe upper limb disability, this type of control is not a suitable option. This master's thesis presents a novel solution to overcome this issue. The developed solution is composed of two main components. The first is a stereo vision system used to inform the ARA of the contents of its workspace; the ARA must be aware of what is present in its workspace, since it needs to avoid unwanted objects on its way to grasp the object of interest. The second component is the actual HCI, for which an eye tracker is used. The eye tracker was chosen because the eyes often remain functional even for patients with severe upper limb disability. However, low-cost, commercially available eye trackers are mainly designed for 2D applications with a screen, which is not intuitive for the user, who must constantly watch a 2D reproduction of the scene on a screen instead of the 3D scene itself. In other words, the eye tracker needs to be made viable in a 3D environment without the use of a screen, which is what this thesis achieves. A stereo vision system, an eye tracker, and an ARA are the main components of the developed system, named PoGARA, short for Point of Gaze Assistive Robotic Arm. Using PoGARA, the user was able to reach and grasp an object in 80% of the trials, with an average time of 13.7 seconds without obstacles, 15.3 seconds with one obstacle, and 16.3 seconds with two obstacles.
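
    One simple way to make an eye tracker usable in 3D without a screen is to cast the calibrated gaze ray into the stereo-reconstructed scene and select the candidate object nearest that ray. The sketch below illustrates that idea under idealized, assumed calibration; it is a generic illustration, not the published PoGARA algorithm.

```python
import numpy as np

def select_gazed_object(gaze_origin, gaze_dir, object_centroids):
    """Pick the object centroid closest to the user's 3-D gaze ray.

    gaze_origin: (3,) eye position and gaze_dir: (3,) gaze direction,
    both expressed in the robot frame after calibration.
    object_centroids: (M, 3) object centroids from the stereo vision system.
    """
    d = gaze_dir / np.linalg.norm(gaze_dir)
    offsets = object_centroids - gaze_origin       # (M, 3)
    along = offsets @ d                            # scalar projection onto ray
    closest = gaze_origin + np.outer(along, d)     # nearest points on the ray
    dist = np.linalg.norm(object_centroids - closest, axis=1)
    return int(np.argmin(dist))                    # index of the gazed object
```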

    Design and Development of Assistive Robots for Close Interaction with People with Disabilities

    People with mobility and manipulation impairments wish to live and perform tasks as independently as possible; however, for many tasks, compensatory technology to do so does not exist. Assistive robots have the potential to address this need. This work describes various aspects of the development of three novel assistive robots: the Personal Mobility and Manipulation Appliance (PerMMA), the Robotic Assisted Transfer Device (RATD), and the Mobility Enhancement Robotic Wheelchair (MEBot). PerMMA integrates mobility with advanced bi-manual manipulation to assist people with both upper and lower extremity impairments. The RATD is a wheelchair-mounted robotic arm that can lift higher payloads; its primary aim is to assist caregivers of people who cannot independently transfer from their electric powered wheelchair to other surfaces such as a shower bench or toilet. MEBot is a wheeled robot with highly reconfigurable kinematics that allow it to negotiate challenging terrain, such as steep ramps, gravel, or stairs. A risk analysis comprising a Fault Tree Analysis (FTA) and a Failure Mode and Effects Analysis (FMEA) was performed on all three robots to identify potential risks and inform strategies to mitigate them. Risks identified for PerMMA include dropping sharp or hot objects; critical risks identified for the RATD include tip-over, crush hazards, and getting stranded mid-transfer; and risks for MEBot include getting stranded on obstacles and tip-over. Lastly, several critical factors to guide future assistive robot design, such as the early involvement of people with disabilities, are presented.
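
    In an FMEA, each failure mode is typically scored by a Risk Priority Number, RPN = severity x occurrence x detection, and the highest-scoring modes are mitigated first. The sketch below shows this bookkeeping with made-up scores; the failure modes echo those named above, but the numbers are illustrative, not values from these analyses.

```python
# Failure modes as (name, severity, occurrence, detection), each scored 1-10.
failure_modes = [
    ("tip-over during transfer", 9, 3, 4),
    ("crush hazard at contact points", 9, 2, 5),
    ("stranded mid-transfer", 7, 3, 3),
    ("dropping a sharp or hot object", 6, 4, 4),
]

# Rank by Risk Priority Number: RPN = severity * occurrence * detection.
ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for name, s, o, d in ranked:
    print(f"RPN={s * o * d:3d}  {name}")
```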

    Eye-in-hand stereo visual servoing of an assistive robot arm in unstructured environments

    We document progress in the design and implementation of a motion control strategy that exploits visual feedback from a narrow-baseline stereo head, mounted in the hand of a wheelchair-mounted robot arm (WMRA), to recognize and grasp textured activities-of-daily-living (ADL) objects for which one or more templates exist in a large image database. The problem is made challenging by kinematic uncertainty in the robot, imperfect camera and stereo calibration, and the fact that we work in unstructured environments. The approach relies on separating the overall motion into gross and fine motion components. During the gross motion phase, local structure around a user-selected point of interest (POI) on an object is extracted using sparse stereo information, which is then used to converge on the object and roughly align it with the image plane so that object recognition and fine motion can proceed with a strong likelihood of success. Fine motion is used to grasp the target object by relying on feature correspondences between the live object view and its template image. While features are detected using a robust real-time keypoint tracker, a hybrid visual servoing technique is exploited in which tracked pixel-space features generate translational motion commands, while a Euclidean homography decomposition scheme generates orientation setpoints for the robot gripper. Experimental results are presented to demonstrate the efficacy of the proposed algorithm.
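
    The orientation half of such a hybrid scheme can be sketched with OpenCV: a homography estimated between the live view and the template image is decomposed into candidate rotations, one of which becomes the gripper orientation setpoint. This is a simplified sketch of the general technique, not the authors' implementation; disambiguating among the returned solutions is left out.

```python
import cv2
import numpy as np

def orientation_candidates(pts_template, pts_live, K):
    """Rotation setpoint candidates from template/live feature matches.

    pts_template, pts_live: (N, 2) float32 arrays of matched pixels (N >= 4).
    K: 3x3 camera intrinsic matrix.
    A real system picks one candidate using the plane normal and
    positive-depth constraints.
    """
    H, inliers = cv2.findHomography(pts_template, pts_live, cv2.RANSAC, 3.0)
    n_solutions, rotations, translations, normals = \
        cv2.decomposeHomographyMat(H, K)
    return rotations  # list of candidate 3x3 rotation matrices
```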

    Real-Time Stereo Visual Servoing of a 6-DOF Robot for Tracking and Grasping Moving Objects

    Robotic systems have been increasingly employed in various industrial, urban, military, and exploratory applications during recent decades. To enhance robot control performance, vision data are integrated into robot control systems. Visual feedback has great potential for increasing the flexibility of conventional robotic and mechatronic systems to deal with changing and less-structured environments, and how to use visual information in control systems has always been a major research area in robotics and mechatronics. Visual servoing methods, which use direct feedback from image features for motion control, have been proposed to handle many stability and reliability issues in vision-based control systems. This thesis introduces a stereo Image-Based Visual Servoing (IBVS) scheme (as opposed to Position-Based Visual Servoing (PBVS)) with an eye-in-hand configuration that is able to track and grasp a moving object in real time. The robustness of the control system is increased by means of accurate 3-D information extracted from binocular images. First, an image-based visual servoing approach based on stereo vision is proposed for 6-DOF robots. A classical proportional control strategy is designed, and the stereo image interaction matrix, which relates the image feature velocity to the cameras' velocity screw, is developed for the two cases of parallel and non-parallel cameras installed on the end-effector of the robot. The properties of tracking a moving target, and of the correspondingly varying feature points, in a visual servoing system are then investigated. Second, a method for position prediction and trajectory estimation of the moving target, for use in the proposed image-based stereo visual servoing for a real-time grasping task, is proposed and developed through linear and nonlinear modeling of the system dynamics. Three trajectory estimation algorithms, the Kalman Filter, Recursive Least Squares (RLS), and the Extended Kalman Filter (EKF), are applied to predict the position of the moving object in the image planes. Finally, computer simulations and a real implementation are carried out to verify the effectiveness of the proposed method for the task of tracking and grasping a moving object using a 6-DOF manipulator.
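
    The proportional IBVS law at the core of such a scheme is standard: stack the point-feature interaction matrices into L and command the camera velocity screw v = -lambda * pinv(L) * e, where e is the image feature error. The sketch below shows the monocular point-feature version in normalized image coordinates, with depths Z assumed known (e.g., from the stereo pair); the full stereo interaction matrix of the thesis is not reproduced here.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one point feature (x, y), in normalized image
    coordinates, at depth Z (the classic point-feature form)."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Proportional IBVS: v = -lambda * pinv(L) @ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ error  # 6-D camera velocity screw
```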

    Enhanced Image-Based Visual Servoing Dealing with Uncertainties

    Nowadays, the applications of robots in industrial automation have increased considerably, and there is growing demand for dexterous, intelligent robots that can work in unstructured environments. Visual servoing has been developed to meet this need by integrating vision sensors into robotic systems. Although visual servoing has developed significantly, some challenges remain in making it fully functional in industrial environments. The nonlinear nature of visual servoing and system uncertainties are among the problems affecting its control performance. The projection of the 3D scene onto a 2D image in the camera creates one source of uncertainty; another lies in the camera and robot manipulator parameters. Moreover, the limited field of view (FOV) of the camera is another issue influencing control performance. There are two main types of visual servoing: position-based and image-based. This project aims to develop a series of new image-based visual servoing (IBVS) methods that address the nonlinearity and uncertainty issues and improve the visual servoing performance of industrial robots. The first method is an adaptive switch IBVS controller for industrial robots, in which the adaptive law deals with the uncertainties of the monocular camera in an eye-in-hand configuration. The proposed switch control algorithm decouples the rotational and translational camera motions and decomposes the IBVS control into three separate stages with different gains. This method can increase the system response speed and improve the tracking performance of IBVS while dealing with camera uncertainties. The second method is an image feature reconstruction algorithm based on the Kalman filter, proposed to handle situations where the image features go outside the camera's FOV. The combination of the switch controller and the feature reconstruction algorithm not only improves the system response speed and tracking performance of IBVS but also ensures the success of servoing in the case of feature loss. Next, to deal with external disturbances and uncertainties due to the depth of the features, a third new control method combines proportional-derivative (PD) control with sliding mode control (SMC) on a 6-DOF manipulator; the properly tuned PD controller ensures fast tracking performance, and the SMC deals with the external disturbances and depth uncertainties. In the last stage of the thesis, a fourth new semi-offline trajectory planning method is developed to perform IBVS tasks on a 6-DOF robotic manipulator system. In this method, the camera's velocity screw is parametrized using time-based profiles, and the parameters of the velocity profile are determined such that the profile takes the robot to its desired position; this is done by minimizing the error between the initial and desired features. The algorithm for planning the orientation of the robot is decoupled from the position planning, which yields a convex optimization problem and leads to a faster and more efficient algorithm. The merit of the proposed method is that it respects all of the system constraints, including the limitation caused by the camera's FOV. All the algorithms developed in the thesis are validated through tests on a 6-DOF Denso robot in an eye-in-hand configuration.
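
    The feature-reconstruction idea can be sketched with one constant-velocity Kalman filter per image feature: while the feature is visible its position is filtered, and once it leaves the FOV the filter's prediction substitutes for the lost measurement. This is a generic sketch under assumed noise parameters, not the thesis's exact formulation.

```python
import numpy as np

class FeaturePredictor:
    """Constant-velocity Kalman filter for one image feature (u, v)."""

    def __init__(self, dt=0.033, q=1.0, r=2.0):
        self.F = np.eye(4)                    # state: [u, v, du, dv]
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                 # we measure (u, v) only
        self.Q, self.R = q * np.eye(4), r * np.eye(2)
        self.x, self.P = np.zeros(4), 100.0 * np.eye(4)

    def step(self, z=None):
        """One filter cycle; pass z=None when the feature has left the FOV."""
        self.x = self.F @ self.x              # predict
        self.P = self.F @ self.P @ self.F.T + self.Q
        if z is not None:                     # update only if feature visible
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                     # measured or reconstructed (u, v)
```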

    High-precision grasping and placing for mobile robots

    This work presents a manipulation system for handling multiple labware items in life science laboratories using H20 mobile robots. The H20 robot is equipped with a Kinect V2 sensor to identify and estimate the position of the required labware on the workbench. Local feature recognition based on the SURF algorithm is used, and the recognition process is performed both for the labware to be grasped and for the workbench holder. Different grippers and labware containers are designed to manipulate labware of different weights and to realize safe transportation.
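
    A minimal version of the SURF-based recognition step might look like the sketch below, using OpenCV's contrib module (SURF lives in xfeatures2d and requires an opencv-contrib build). The Hessian threshold and ratio-test value are illustrative assumptions, not the parameters used on the H20 system.

```python
import cv2

def match_labware(template_gray, scene_gray, hessian=400, ratio=0.7):
    """Match SURF keypoints between a labware template and the scene image.

    template_gray, scene_gray: 8-bit grayscale images.
    Returns the ratio-test-filtered matches; many good matches indicate
    that the labware (or workbench holder) is present.
    """
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    kp1, des1 = surf.detectAndCompute(template_gray, None)
    kp2, des2 = surf.detectAndCompute(scene_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Lowe's ratio test keeps only distinctive correspondences.
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    return good, kp1, kp2
```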