
    Real-Time Hybrid Visual Servoing of a Redundant Manipulator via Deep Reinforcement Learning

    Fixtureless assembly may be necessary in some manufacturing tasks and environments due to various constraints, but it poses challenges for automation because of non-deterministic characteristics not favoured by traditional approaches to industrial automation. Visual servoing methods of robotic control could be effective for sensitive manipulation tasks where the desired end-effector pose can be ascertained via visual cues. Visual data is complex and computationally expensive to process, but deep reinforcement learning has shown promise for robotic control in vision-based manipulation tasks. However, these methods are rarely used in industry because of the resources and expertise required to develop application-specific systems and the prohibitive training costs. Training reinforcement learning models in simulated environments offers a number of benefits for the development of robust robotic control algorithms: it reduces training time and costs, and it provides repeatable benchmarks against which algorithms can be tested, developed and eventually deployed on real robotic control environments. In this work, we present a new simulated reinforcement learning environment for developing accurate robotic manipulation control systems in fixtureless environments. Our environment incorporates a contemporary collaborative industrial robot, the KUKA LBR iiwa, with the goal of positioning its end effector in a generic fixtureless environment based on a visual cue. Observational inputs comprise the robotic joint positions and velocities, as well as two cameras whose positioning reflects hybrid visual servoing, with one camera attached to the robotic end-effector and another observing the workspace. We propose a state-of-the-art deep reinforcement learning approach to solving the task environment and make preliminary assessments of the efficacy of this approach to hybrid visual servoing for the defined problem environment. We also conduct a series of experiments exploring the hyperparameter space of the proposed reinforcement learning method. Although our initial results could not prove the efficacy of a deep reinforcement learning approach to solving the task environment, we remain confident that such an approach could be feasible for this industrial manufacturing challenge, and that our contributions in this work, in terms of the novel software, provide a good basis for exploring reinforcement learning approaches to hybrid visual servoing in accurate manufacturing contexts.
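    To make the described observation layout concrete, the following is a minimal Gymnasium-style sketch of how a hybrid visual servoing environment of this kind could expose joint states plus eye-in-hand and workspace cameras. It is an illustrative assumption, not the authors' published environment: the class name, image resolution, reward shaping and placeholder dynamics are invented for the example.

```python
# Minimal sketch of a hybrid visual servoing RL environment (assumed interface).
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class HybridVisualServoingEnv(gym.Env):
    """Fixtureless positioning task: joint state plus two camera observations."""

    NUM_JOINTS = 7            # the KUKA LBR iiwa is a 7-DoF redundant manipulator
    IMG_SHAPE = (84, 84, 3)   # assumed camera resolution for the example

    def __init__(self):
        self.observation_space = spaces.Dict({
            "joint_pos": spaces.Box(-np.pi, np.pi, (self.NUM_JOINTS,), np.float32),
            "joint_vel": spaces.Box(-2.0, 2.0, (self.NUM_JOINTS,), np.float32),
            "eye_in_hand": spaces.Box(0, 255, self.IMG_SHAPE, np.uint8),  # end-effector camera
            "workspace": spaces.Box(0, 255, self.IMG_SHAPE, np.uint8),    # fixed scene camera
        })
        # Joint-velocity commands, one per joint.
        self.action_space = spaces.Box(-1.0, 1.0, (self.NUM_JOINTS,), np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        return self._observe(), {}

    def step(self, action):
        # A real implementation would forward `action` to the simulated robot
        # and render both cameras; here we return placeholder observations.
        obs = self._observe()
        distance_to_cue = float(self.np_random.uniform(0.0, 1.0))  # placeholder metric
        reward = -distance_to_cue            # closer to the visual cue -> higher reward
        terminated = distance_to_cue < 0.05
        return obs, reward, terminated, False, {}

    def _observe(self):
        return {
            "joint_pos": self.np_random.uniform(-np.pi, np.pi, self.NUM_JOINTS).astype(np.float32),
            "joint_vel": np.zeros(self.NUM_JOINTS, dtype=np.float32),
            "eye_in_hand": self.np_random.integers(0, 256, self.IMG_SHAPE, dtype=np.uint8),
            "workspace": self.np_random.integers(0, 256, self.IMG_SHAPE, dtype=np.uint8),
        }
```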

    Vision-based Self-Supervised Depth Perception and Motion Control for Mobile Robots

    Advances in robotics have created many opportunities to deploy mobile robots in various settings. However, many current mobile robots are equipped with a sensor suite containing multiple types of sensors. This expensive sensor suite, and the computationally complex software required to fully utilize these sensors, may limit the large-scale deployment of such robots. Recent developments in computer vision have made it possible to complete various robotic tasks with camera systems alone. This thesis focuses on two problems related to vision-based mobile robots: depth perception and motion control. Commercially available stereo cameras relying on traditional stereo matching algorithms are widely used in robotic applications to obtain depth information. Although their raw (predicted) disparity maps may contain incorrect estimates, they can still provide useful prior information towards more accurate predictions. We propose a data-driven pipeline that incorporates the raw disparity to predict high-quality disparity maps. The pipeline first utilizes a confidence generation component to identify inaccuracies in the raw disparity. Then a deep neural network, which consists of a feature extraction module, a confidence-guided raw disparity fusion module, and a hierarchical occlusion-aware disparity refinement module, computes the final disparity estimates and their corresponding occlusion masks. The pipeline can be trained in a self-supervised manner, removing the need for expensive ground-truth training labels. Experimental results on public datasets show that the pipeline achieves competitive accuracy at a real-time processing rate. The pipeline is also tested with images captured by commercial stereo cameras to demonstrate its effectiveness in improving their raw disparity estimates. After the stereo matching pipeline predicts the disparity maps, they are used by a proposed disparity-based direct visual servoing controller to compute the commanded velocity that moves a mobile robot towards its target pose. Many previous visual servoing methods rely on complex and error-prone feature extraction and matching steps. The proposed visual servoing framework follows the direct visual servoing approach, which does not require any extraction or matching process; hence, its performance is not affected by the potential errors introduced by these steps. Furthermore, the predicted occlusion masks are also incorporated in the controller to address the occlusion problem inherent in a stereo camera setup. The performance of the proposed control strategy is verified by extensive simulations and experiments.
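    The following PyTorch sketch illustrates how the three named modules could be composed: a raw disparity map and its confidence are fused with learned image features, and a refinement head emits the final disparity together with an occlusion mask. It is an assumption about the structure, not the thesis implementation; layer sizes are illustrative and the hierarchical multi-scale refinement is collapsed into a single stage for brevity.

```python
# Sketch of a confidence-guided disparity fusion and refinement network (assumed structure).
import torch
import torch.nn as nn


class DisparityRefinementNet(nn.Module):
    def __init__(self, feat_ch=32):
        super().__init__()
        # Feature extraction from the concatenated left/right image pair (6 channels in).
        self.features = nn.Sequential(
            nn.Conv2d(6, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Confidence-guided fusion: raw disparity and its confidence map are
        # concatenated with the image features before further processing.
        self.fusion = nn.Sequential(
            nn.Conv2d(feat_ch + 2, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Occlusion-aware refinement head: one channel for the refined disparity,
        # one channel of logits for the occlusion mask.
        self.refine = nn.Conv2d(feat_ch, 2, 3, padding=1)

    def forward(self, left, right, raw_disp, confidence):
        f = self.features(torch.cat([left, right], dim=1))
        fused = self.fusion(torch.cat([f, raw_disp, confidence], dim=1))
        out = self.refine(fused)
        disparity = torch.relu(out[:, :1])        # disparities are non-negative
        occlusion = torch.sigmoid(out[:, 1:])     # per-pixel occlusion probability
        return disparity, occlusion


# Example forward pass with dummy tensors (one 256x512 stereo pair).
if __name__ == "__main__":
    b, h, w = 1, 256, 512
    net = DisparityRefinementNet()
    disp, occ = net(torch.rand(b, 3, h, w), torch.rand(b, 3, h, w),
                    torch.rand(b, 1, h, w), torch.rand(b, 1, h, w))
    print(disp.shape, occ.shape)   # torch.Size([1, 1, 256, 512]) for both
```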

    Cooperative and Multimodal Capabilities Enhancement in the CERNTAURO Human–Robot Interface for Hazardous and Underwater Scenarios

    The use of remote robotic systems for inspection and maintenance in hazardous environments is a priority for all tasks potentially dangerous for humans. However, currently available robotic systems lack the level of usability that would allow inexperienced operators to accomplish complex tasks. Moreover, the task's complexity increases drastically when a single operator is required to control multiple remote agents (for example, when picking up and transporting large objects). In this paper, a system allowing an operator to prepare and configure cooperative behaviours for multiple remote agents is presented. The system is part of a human–robot interface that was designed at CERN, the European Organization for Nuclear Research, to perform remote interventions in its particle accelerator complex as part of the CERNTAURO project. In this paper, the modalities of interaction with the remote robots are presented in detail. The multimodal user interface enables the user to activate assisted cooperative behaviours according to a mission plan. The multi-robot interface has been validated at CERN in its Large Hadron Collider (LHC) mockup using a team of two mobile robotic platforms, each one equipped with a robotic manipulator. Moreover, great similarities were identified between CERNTAURO and the TWINBOT project, which aims to create usable robotic systems for underwater manipulation. Therefore, the cooperative behaviours were also validated in a multi-robot pipe transport scenario in a simulated underwater environment, experimenting with more advanced vision techniques. The cooperative teleoperation can be coupled with additional assistance tools such as vision-based tracking, grasp determination for metallic objects, and communication protocol design. The results show that the cooperative behaviours enable a single user to carry out a robotic intervention with more than one robot in a safer way.
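    As a simple illustration of the kind of cooperative transport behaviour described above, the sketch below maps a single operator twist command for a jointly carried object (such as a pipe) onto per-robot velocity commands using the planar rigid-body transport constraint. It is an assumption about one possible realisation, not the CERNTAURO implementation; the function name and grasp geometry are invented for the example.

```python
# Sketch: distribute one operator-commanded object twist to two carrying robots.
import numpy as np


def cooperative_transport_commands(v_obj, omega_obj, grasp_offsets):
    """v_obj: (2,) object linear velocity; omega_obj: scalar yaw rate;
    grasp_offsets: list of (2,) vectors from the object frame to each grasp point.
    Returns one (2,) linear velocity per robot so the object moves as commanded."""
    commands = []
    for r in grasp_offsets:
        # Rigid-body relation in the plane: v_grasp = v_obj + omega x r.
        v_grasp = np.asarray(v_obj, dtype=float) + omega_obj * np.array([-r[1], r[0]])
        commands.append(v_grasp)
    return commands


# Operator asks the pipe to move forward at 0.2 m/s while rotating at 0.1 rad/s;
# the two robots grasp it 0.5 m on either side of its centre.
left_cmd, right_cmd = cooperative_transport_commands(
    v_obj=[0.2, 0.0], omega_obj=0.1, grasp_offsets=[[0.0, 0.5], [0.0, -0.5]])
print(left_cmd, right_cmd)   # [0.15 0.] and [0.25 0.]
```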