
    3D Spectral Domain Registration-Based Visual Servoing

    This paper presents a spectral domain registration-based visual servoing scheme that works on 3D point clouds. Specifically, we propose a 3D model/point cloud alignment method, which works by finding a global transformation between reference and target point clouds using spectral analysis. A 3D Fast Fourier Transform (FFT) in R^3 is used for the translation estimation, and the real spherical harmonics in SO(3) are used for the rotation estimation. Such an approach allows us to derive a decoupled 6 degrees of freedom (DoF) controller, where we use gradient-based optimisation to minimise the translation and rotation costs. We then show how this methodology can be used to regulate a robot arm to perform a positioning task. In contrast to existing state-of-the-art depth-based visual servoing methods, which require either dense depth maps or dense point clouds, our method works well with partial point clouds and can effectively handle larger transformations between the reference and target positions. Furthermore, the use of spectral data (instead of spatial data) for transformation estimation makes our method robust to sensor-induced noise and partial occlusions. We validate our approach with experiments using point clouds acquired by a robot-mounted depth camera. The obtained results demonstrate the effectiveness of our visual servoing approach.
    Comment: Accepted to the 2023 IEEE International Conference on Robotics and Automation (ICRA'23).
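
    As a minimal illustration of the translation half of this scheme, the sketch below estimates the 3D offset between two voxelised point-cloud grids by phase correlation: a translation in space becomes a linear phase shift in the spectral domain, so the peak of the inverse-transformed cross-power spectrum gives the offset. This is a generic phase-correlation sketch under assumed inputs (voxel occupancy grids; the function name is invented), not the paper's code, and it omits the spherical-harmonics rotation stage.

```python
import numpy as np

def estimate_translation(ref_grid, tgt_grid):
    """Estimate the integer voxel translation between two 3D occupancy
    grids via phase correlation (3D FFT)."""
    F_ref = np.fft.fftn(ref_grid)
    F_tgt = np.fft.fftn(tgt_grid)
    # Normalised cross-power spectrum: a spatial translation appears
    # here as a pure linear phase ramp.
    cross = F_ref * np.conj(F_tgt)
    cross /= np.abs(cross) + 1e-12
    corr = np.real(np.fft.ifftn(cross))
    # The correlation peak location is the offset; wrap shifts larger
    # than half the grid to negative values.
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return np.array([p if p <= s // 2 else p - s
                     for p, s in zip(peak, corr.shape)])
```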

    Direct Visual Servoing Based on Discrete Orthogonal Moments

    This paper proposes a new approach to direct visual servoing (DVS) based on discrete orthogonal moments (DOM). In DVS, the extraction of geometric primitives and the matching and tracking steps of the conventional feature-based visual servoing pipeline are bypassed. Although DVS enables highly precise positioning, it suffers from a small convergence domain and poor robustness, owing to the high non-linearity of the cost function to be minimised and the presence of redundant data among visual features. To tackle these issues, we propose a generic, augmented framework that takes DOM as visual features. Taking Tchebichef, Krawtchouk and Hahn moments as examples, we present strategies for adaptively adjusting the parameters and orders of the visual features, and we exhibit the analytical formulation of the associated interaction matrix. Simulations demonstrate the robustness and accuracy of our method, as well as its advantages over the state of the art. Real experiments have also been performed to validate the effectiveness of our approach.
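
    To make the idea of moment-based visual features concrete, the sketch below (an illustrative assumption, not the paper's implementation) builds an orthonormal discrete Tchebichef-style basis by orthonormalising monomials over a uniform pixel grid, then projects an image onto the low-order separable 2D basis; the resulting coefficient matrix would play the role of the feature vector s in the servoing control law.

```python
import numpy as np

def tchebichef_basis(n_points, order):
    """Orthonormal discrete polynomial basis up to `order` on the grid
    {0, ..., n_points-1}, via QR of a Vandermonde matrix; this matches
    the discrete Tchebichef polynomials up to sign."""
    x = np.arange(n_points, dtype=float)
    V = np.vander(x, order + 1, increasing=True)
    Q, _ = np.linalg.qr(V)  # columns: orthonormal polynomials
    return Q  # shape (n_points, order + 1)

def dom_features(image, order=8):
    """Project an image onto the low-order 2D moment basis; the
    (order+1) x (order+1) coefficients act as the visual features."""
    Ty = tchebichef_basis(image.shape[0], order)
    Tx = tchebichef_basis(image.shape[1], order)
    return Ty.T @ image @ Tx
```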

    Vision-based Self-Supervised Depth Perception and Motion Control for Mobile Robots

    Advances in robotics have opened up many opportunities to deploy mobile robots in various settings. However, many current mobile robots carry a suite of multiple sensor types; this expensive sensor suite, and the computationally complex programs needed to fully utilise it, may limit large-scale deployment. Recent developments in computer vision have made it possible to complete various robotic tasks with camera systems alone. This thesis focuses on two problems related to vision-based mobile robots: depth perception and motion control.

    Commercially available stereo cameras relying on traditional stereo matching algorithms are widely used in robotic applications to obtain depth information. Although their raw (predicted) disparity maps may contain incorrect estimates, they can still provide useful prior information towards more accurate predictions. We propose a data-driven pipeline that incorporates the raw disparity to predict high-quality disparity maps. The pipeline first uses a confidence generation component to identify inaccuracies in the raw disparity. A deep neural network, consisting of a feature extraction module, a confidence-guided raw disparity fusion module, and a hierarchical occlusion-aware disparity refinement module, then computes the final disparity estimates and their corresponding occlusion masks. The pipeline can be trained in a self-supervised manner, removing the need for expensive ground-truth training labels. Experimental results on public datasets show that the pipeline achieves competitive accuracy at real-time processing rates. The pipeline is also tested on images captured by commercial stereo cameras to demonstrate its effectiveness in improving their raw disparity estimates.

    Once the stereo matching pipeline predicts the disparity maps, they are used by a proposed disparity-based direct visual servoing controller to compute the commanded velocity that moves a mobile robot towards its target pose. Many previous visual servoing methods rely on complex and error-prone feature extraction and matching steps. The proposed framework follows the direct visual servoing approach, which requires no extraction or matching, so its performance is not affected by the errors these steps can introduce. Furthermore, the predicted occlusion masks are incorporated in the controller to address the occlusion problem inherent in a stereo camera setup. The performance of the proposed control strategy is verified by extensive simulations and experiments.
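
    The abstract does not spell out how the confidence generation component works, so the sketch below shows one common confidence cue as a stand-in (an assumption, not the thesis's method): a left-right consistency check, which trusts a left-image disparity only if the right image's disparity at the matched location agrees within a threshold.

```python
import numpy as np

def lr_consistency_confidence(disp_left, disp_right, thresh=1.0):
    """Binary confidence map for a left disparity map via left-right
    consistency: 1.0 where the two views agree, 0.0 where they do not
    (typically occlusions or matching errors)."""
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    # Column in the right image that each left pixel maps to.
    xr = np.clip((xs - disp_left).round().astype(int), 0, w - 1)
    diff = np.abs(disp_left - disp_right[ys, xr])
    return (diff < thresh).astype(float)
```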

    Direct visual servoing in the frequency domain

    In this paper, we propose an original approach that extends direct visual servoing to the frequency domain. Whereas most visual servoing approaches rely on geometric features, recent works have highlighted the importance of taking into account the photometric information of the entire image, leading to direct visual servoing (DVS) approaches. Here we propose to consider not the image itself in the spatial domain, but its transformation into the frequency domain. The idea is to use the Discrete Cosine Transform (DCT), which represents the image in the frequency domain as a sum of cosine functions oscillating at various frequencies. This yields a new set of coordinates in a precomputed orthogonal basis: the coefficients of the DCT. We propose to use these coefficients as the visual features in a visual servoing control law, and we exhibit the analytical formulation of the interaction matrix related to these coefficients. Experimental results validate our approach.
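
    As a rough sketch of this idea (function names and the gain are assumptions; the paper's analytical interaction matrix is not reproduced here), the code below extracts the low-frequency 2D DCT coefficients as the feature vector s and applies the classical velocity law v = -lambda * L^+ (s - s*).

```python
import numpy as np
from scipy.fft import dctn

def dct_features(image, keep=8):
    """Visual feature vector: the keep x keep lowest-frequency
    coefficients of the image's 2D Discrete Cosine Transform."""
    coeffs = dctn(image.astype(float), norm='ortho')
    return coeffs[:keep, :keep].ravel()

def velocity_command(s, s_star, L_pinv, gain=0.5):
    """Classical visual servoing law v = -gain * L^+ (s - s*), where
    L^+ is the pseudo-inverse of the interaction matrix relating the
    DCT coefficients to the camera velocity."""
    return -gain * L_pinv @ (s - s_star)
```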