
    Alignment control using visual servoing and mobilenet single-shot multi-box detection (SSD): a review

    Alignment control is critical for robotic technologies that rely on visual feedback. Robot systems that depend on pre-programmed trajectories and paths tend to be unresponsive when the environment changes or an expected object is absent. This review paper provides a comprehensive survey of recent applications of visual servoing and deep neural networks (DNNs). Position-based visual servoing (PBVS) and MobileNet-SSD were the algorithms chosen for alignment control of the film-handler mechanism of a portable x-ray system. The paper also discusses the theoretical framework for feature extraction and description, visual servoing, and MobileNet-SSD, and summarizes the latest applications of visual servoing and DNNs, including a comparison of MobileNet-SSD with other, more sophisticated models. The studies reviewed show that visual servoing and MobileNet-SSD provide reliable tools and models for manipulating robotic systems, including scenarios where occlusion is present. Furthermore, effective alignment control depends significantly on the reliability of the visual servoing and the deep neural network, which is shaped by parameters such as the type of visual servoing, the feature extraction and description methods, and the DNNs used to construct a robust state estimator. Visual servoing and MobileNet-SSD are therefore parameterized approaches that require careful optimization for each specific purpose.
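The detection stage reviewed here rests on standard SSD post-processing: raw network outputs are filtered by a confidence threshold, then overlapping boxes are merged by non-maximum suppression. A minimal sketch of that step, assuming detections have already been decoded to `(x1, y1, x2, y2, score)` tuples; the thresholds and box format are illustrative, not values from the paper:

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections, conf_thresh=0.5, iou_thresh=0.45):
    # detections: list of (x1, y1, x2, y2, score).
    # Keep high-confidence boxes, then greedily suppress boxes that
    # overlap an already-kept box (standard SSD post-processing).
    dets = sorted((d for d in detections if d[4] >= conf_thresh),
                  key=lambda d: d[4], reverse=True)
    kept = []
    for d in dets:
        if all(iou(d[:4], k[:4]) < iou_thresh for k in kept):
            kept.append(d)
    return kept
```

In a full MobileNet-SSD pipeline this function would run on the decoded detections of a single frame before the alignment controller consumes the surviving boxes.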

    Design and Development of Robotic Part Assembly System under Vision Guidance

    Robots are widely used for part assembly across manufacturing industries to attain high productivity through automation. The automated mechanical part assembly system contributes a major share of the production process, and an appropriate vision-guided robotic assembly system further minimizes lead time and improves the quality of the end product through suitable object detection methods and robot control strategies. This work develops a robotic part assembly system with the aid of an industrial vision system, in three phases. The first phase focuses on feature extraction and object detection techniques. A hybrid edge detection method is developed by combining fuzzy inference rules with the wavelet transform. The performance of this edge detector is quantitatively analysed and compared with widely used edge detectors such as Canny, Sobel, Prewitt, Robert, Laplacian of Gaussian, and mathematical-morphology-based and wavelet-based detectors. A comparative study is performed to choose a suitable corner detection method; the techniques considered are curvature scale space, Wang-Brady, and the Harris method. The successful implementation of a vision-guided robotic system depends on the system configuration, such as eye-in-hand or eye-to-hand. In these configurations, the captured images of the parts may be corrupted by geometric transformations such as scaling, rotation, and translation, and by blurring due to camera or robot motion. To address this issue, an image reconstruction method is proposed using orthogonal Zernike moment invariants. The method uses a selection process over the moment order to reconstruct the affected image, which makes the object detection method more efficient. In the second phase, the proposed system is developed by integrating the vision system and the robot system; the proposed feature extraction and object detection methods are tested and found efficient for the purpose. In the third phase, robot navigation based on visual feedback is proposed. In the control scheme, general moment invariants, Legendre moments, and Zernike moment invariants are used. The best combination of visual features is selected by measuring the Hamming distance between all possible combinations of visual features, which makes the image-based visual servoing control efficient. An indirect method is employed to determine the Legendre and Zernike moment invariants; these moments are used because they are robust to noise. The control laws, based on these three global features of the image, perform efficiently in navigating the robot in the desired environment.
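The moment-based features above (general, Legendre, and Zernike moments) all build on the same raw and central image moments. A minimal sketch of those base quantities, assuming the grayscale image is given as a 2-D list (that representation is an illustrative assumption); central moments are invariant to translation, which is the property the invariant features rely on:

```python
def raw_moment(img, p, q):
    # Raw image moment m_pq = sum over pixels of x^p * y^q * I(x, y),
    # with img a 2-D list of intensities indexed as img[y][x].
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(img) for x, v in enumerate(row))

def central_moment(img, p, q):
    # Central moment mu_pq: the same sum taken about the intensity
    # centroid, which makes the result invariant to translation.
    m00 = raw_moment(img, 0, 0)
    xc = raw_moment(img, 1, 0) / m00
    yc = raw_moment(img, 0, 1) / m00
    return sum(((x - xc) ** p) * ((y - yc) ** q) * v
               for y, row in enumerate(img) for x, v in enumerate(row))
```

Normalizing and combining such central moments yields the rotation- and scale-invariant descriptors (Hu-style, Legendre, Zernike) used in the control scheme.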

    Enhanced Image-Based Visual Servoing Dealing with Uncertainties

    Nowadays, the applications of robots in industrial automation have increased considerably, and there is growing demand for dexterous, intelligent robots that can work in unstructured environments. Visual servoing has been developed to meet this need by integrating vision sensors into robotic systems. Although visual servoing has advanced significantly, challenges remain in making it fully functional in industrial environments. The nonlinear nature of visual servoing and system uncertainties are among the problems affecting its control performance. The projection of the 3-D scene to a 2-D image, which occurs in the camera, creates one source of uncertainty; another lies in the parameters of the camera and the robot manipulator. Moreover, the limited field of view (FOV) of the camera is another issue influencing control performance. There are two main types of visual servoing: position-based and image-based. This project aims to develop a series of new image-based visual servoing (IBVS) methods that address the nonlinearity and uncertainty issues and improve the visual servoing performance of industrial robots. The first method is an adaptive switch IBVS controller for industrial robots, in which the adaptive law deals with the uncertainties of the monocular camera in an eye-in-hand configuration. The proposed switch control algorithm decouples the rotational and translational camera motions and decomposes the IBVS control into three separate stages with different gains. This method increases the system response speed and improves the tracking performance of IBVS while dealing with camera uncertainties. The second method is an image feature reconstruction algorithm based on the Kalman filter, proposed to handle the situation where the image features leave the camera's FOV. The combination of the switch controller and the feature reconstruction algorithm not only improves the system response speed and tracking performance of IBVS, but also ensures the success of servoing in the case of feature loss. Next, to deal with external disturbances and uncertainties due to the depth of the features, a third control method is designed that combines proportional-derivative (PD) control with sliding mode control (SMC) on a 6-DOF manipulator. The properly tuned PD controller ensures fast tracking performance, and the SMC deals with the external disturbances and depth uncertainties. In the last stage of the thesis, a fourth method, semi-off-line trajectory planning, is developed to perform IBVS tasks on a 6-DOF robotic manipulator system. In this method, the camera's velocity screw is parameterized using time-based profiles, and the profile parameters are determined so that the velocity profile takes the robot to its desired position, by minimizing the error between the initial and desired features. The algorithm for planning the orientation of the robot is decoupled from the position planning, which yields a convex optimization problem and leads to a faster and more efficient algorithm. The merit of the proposed method is that it respects all of the system constraints, including the limitation imposed by the camera's FOV. All the algorithms developed in the thesis are validated via tests on a 6-DOF Denso robot in an eye-in-hand configuration.
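The classical IBVS law underlying these methods drives the feature error e = s - s* to zero with the camera velocity v = -lambda * L^+ * e, where L is the interaction matrix. A minimal sketch for the simplest case, a single point feature and camera translation parallel to the image plane, where L reduces to diag(-1/Z, -1/Z); the gain, depth, and time step are illustrative assumptions, not values from the thesis:

```python
def ibvs_step(s, s_star, Z, lam, dt):
    # One discrete step of image-based visual servoing for one point
    # feature (normalized image coordinates), camera motion restricted
    # to translation parallel to the image plane.
    # Interaction matrix: L = diag(-1/Z, -1/Z)
    # Control law: v = -lam * L^-1 * (s - s*), i.e. v = lam * Z * e.
    ex, ey = s[0] - s_star[0], s[1] - s_star[1]
    vx, vy = lam * Z * ex, lam * Z * ey
    # Feature kinematics: s_dot = L v, integrated with forward Euler.
    return (s[0] + (-1.0 / Z) * vx * dt,
            s[1] + (-1.0 / Z) * vy * dt)
```

With this law each step scales the error by (1 - lam * dt), so the feature converges exponentially to the desired location, which is the behaviour the proportional baseline controller in the thesis builds on.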

    Real-Time Stereo Visual Servoing of a 6-DOF Robot for Tracking and Grasping Moving Objects

    Robotic systems have been increasingly employed in industrial, urban, military and exploratory applications during recent decades. To enhance robot control performance, vision data are integrated into the robot control system. Visual feedback has great potential for increasing the flexibility of conventional robotic and mechatronic systems in dealing with changing and less-structured environments, and how to use visual information in control systems has long been a major research area in robotics and mechatronics. Visual servoing methods, which feed image features directly back to motion control, have been proposed to handle many stability and reliability issues in vision-based control systems. This thesis introduces a stereo Image-based Visual Servoing (IBVS) scheme (as opposed to Position-based Visual Servoing (PBVS)) with an eye-in-hand configuration that is able to track and grasp a moving object in real time. The robustness of the control system is increased by means of accurate 3-D information extracted from binocular images. First, an image-based visual servoing approach based on stereo vision is proposed for 6-DOF robots. A classical proportional control strategy is designed, and the stereo image interaction matrix, which relates the image feature velocity to the cameras' velocity screw, is developed for the two cases of parallel and non-parallel cameras installed on the end-effector of the robot. The properties of tracking a moving target, and of the corresponding time-varying feature points, in a visual servoing system are then investigated. Second, a method for position prediction and trajectory estimation of the moving target, for use in the proposed image-based stereo visual servoing in a real-time grasping task, is proposed and developed through linear and nonlinear modeling of the system dynamics. Three trajectory estimation algorithms, the Kalman Filter, Recursive Least Squares (RLS) and the Extended Kalman Filter (EKF), have been applied to predict the position of the moving object in the image planes. Finally, computer simulations and a real implementation verify the effectiveness of the proposed method for tracking and grasping a moving object using a 6-DOF manipulator.
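A Kalman filter of the kind used here for target position prediction can be sketched compactly for a single image coordinate under a constant-velocity model; the process and measurement noise values below are illustrative assumptions, not the thesis's tuned parameters:

```python
def kf_step(x, P, z, dt, q=1e-3, r=1e-2):
    # One predict + update cycle of a constant-velocity Kalman filter
    # tracking one image coordinate. State x = (position, velocity),
    # covariance P = ((p00, p01), (p10, p11)), z = measured position.
    # Predict: x <- F x, P <- F P F^T + Q, with F = [[1, dt], [0, 1]]
    # and simplified process noise Q = diag(q, q).
    xp = (x[0] + dt * x[1], x[1])
    p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q
    # Update with scalar measurement model H = [1, 0].
    s = p00 + r                   # innovation covariance
    k0, k1 = p00 / s, p10 / s     # Kalman gain
    y = z - xp[0]                 # innovation (measurement residual)
    xn = (xp[0] + k0 * y, xp[1] + k1 * y)
    Pn = ((p00 - k0 * p00, p01 - k0 * p01),
          (p10 - k1 * p00, p11 - k1 * p01))
    return xn, Pn
```

Running the predict step alone between measurements gives the position prediction used to keep the controller ahead of the moving target; the RLS and EKF variants in the thesis differ in the estimator, not in this overall structure.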

    Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain

    Sensors provide robotic systems with the information required to perceive changes in unstructured environments and to modify their actions accordingly. The robotic controllers that process and analyze this sensory information are usually based on three types of sensors (visual, force/torque, and tactile), which correspond to the most widespread robotic control strategies: visual servoing control, force control, and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques, and applications developed by Spanish researchers to implement these mono-sensor controllers, as well as multi-sensor controllers that combine several sensors.

    Development of an Intelligent Robotic Manipulator

    The presence of hazards to human health in chemical process plants and nuclear waste stores leads to the use of robots, and more specifically manipulators, in unmanned spaces. Rapid and accurate robot arm movement and positioning, coupled with a reliable gripping mechanism that handles variable orientations and a range of deformable and/or geometric, coloured products, leads to smarter, more intelligent operation of high-precision equipment. The aim of the research is to design a more effective robot arm manipulator for use in a glovebox environment, utilising control kinematics together with image processing and object recognition algorithms. In particular, the work aims to improve the movement of the robot arm in the case of unresolved kinematics, to increase the speed and performance of object recognition, and to improve the sensitivity of the manipulator gripper mechanism. A virtual robot arm and associated workspace were designed within the LabVIEW 2009 environment, and prototype gripper arms were designed and analysed within the SolidWorks 2009 environment. Visual information was acquired by barrel cameras: the vision system determines the location of identically shaped objects, and the object recognition algorithms establish the differences between them. A touch/feel device installed within the gripper arm housing ensures that the applied force is adequate to grasp the object securely without damage, and adapts to any slippage while the manipulator moves within the robot workspace. The research demonstrates that complex operations can be achieved without the expense of specialised parts and components, and that control algorithms can compensate for ambiguous signals or fault conditions arising during operation of the manipulator. The results show that system performance is determined by the trade-off between speed and accuracy. The designed system can be further utilised for the control of multi-functional robots connected within a production line. The graphical user interface illustrated within the thesis can be customised by the supervisor to suit operational needs.