
    A Robust Model Predictive Control Approach for Autonomous Underwater Vehicles Operating in a Constrained Workspace

    This paper presents a novel Nonlinear Model Predictive Control (NMPC) scheme for underwater robotic vehicles operating in a constrained workspace that includes static obstacles. The purpose of the controller is to guide the vehicle towards specific way points. Various limitations, such as obstacles, the workspace boundary, thruster saturation, and a predefined upper bound on the vehicle velocity, are captured as state and input constraints and are guaranteed throughout the control design. The proposed scheme incorporates the full dynamics of the vehicle, including the effect of ocean currents. Hence, the control inputs calculated by the proposed scheme are formulated so that the vehicle exploits the ocean currents when these are in favor of the way-point tracking mission, which reduces the energy consumed by the thrusters. The performance of the proposed control strategy is experimentally verified using a 4 Degrees of Freedom (DoF) underwater robotic vehicle inside a constrained test tank with obstacles. Comment: IEEE International Conference on Robotics and Automation (ICRA 2018), Accepted
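    The abstract describes the controller only at a high level; as a rough illustration of the receding-horizon structure it implies (a way-point tracking cost, thruster saturation as input bounds, obstacles as inequality constraints, and a known current inside the predicted dynamics), here is a minimal sketch. The toy 2D single-integrator model, the horizon length, and all numeric values are assumptions for illustration, not the paper's vehicle dynamics or solver.

```python
# Minimal receding-horizon (NMPC-style) sketch for waypoint tracking with
# input saturation and one static circular obstacle. This is NOT the
# paper's controller: a toy 2D single-integrator with a constant "current"
# stands in for the full underwater vehicle dynamics.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.5, 10           # time step [s], prediction horizon (steps)
U_MAX = 1.0                     # thruster saturation surrogate [m/s]
CURRENT = np.array([0.2, 0.0])  # assumed known ocean-current drift [m/s]
OBS_C, OBS_R = np.array([2.0, 0.5]), 0.6  # obstacle centre / radius [m]
GOAL = np.array([4.0, 1.0])     # way point [m]

def rollout(x0, u_seq):
    """Integrate x_{k+1} = x_k + (u_k + current) * dt over the horizon."""
    xs, x = [], x0.copy()
    for u in u_seq.reshape(HORIZON, 2):
        x = x + (u + CURRENT) * DT
        xs.append(x.copy())
    return np.array(xs)

def cost(u_seq, x0):
    xs = rollout(x0, u_seq)
    # Way-point tracking plus a small control-effort term: drift from the
    # current is "free", so the optimizer prefers letting it push the vehicle.
    return np.sum(np.linalg.norm(xs - GOAL, axis=1) ** 2) + 0.1 * np.sum(u_seq ** 2)

def obstacle_margin(u_seq, x0):
    xs = rollout(x0, u_seq)
    return np.linalg.norm(xs - OBS_C, axis=1) - OBS_R  # must stay >= 0

x = np.zeros(2)
for _ in range(30):                            # closed-loop simulation
    res = minimize(cost, np.zeros(2 * HORIZON), args=(x,),
                   bounds=[(-U_MAX, U_MAX)] * (2 * HORIZON),
                   constraints={"type": "ineq",
                                "fun": obstacle_margin, "args": (x,)})
    u = res.x[:2]                              # apply only the first input
    x = x + (u + CURRENT) * DT
print("final position:", x)
```

    Because the current appears in the predicted dynamics but not in the effort penalty, the optimizer naturally rides a favorable current, mirroring the energy-saving behavior the abstract highlights.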

    Pop-up SLAM: Semantic Monocular Plane SLAM for Low-texture Environments

    Existing simultaneous localization and mapping (SLAM) algorithms are not robust in challenging low-texture environments, where there are only a few salient features. The resulting sparse or semi-dense map also conveys little information for motion planning. Though some works utilize planes or scene layout for dense map regularization, they require decent state estimation from other sources. In this paper, we propose a real-time monocular plane SLAM system to demonstrate that scene understanding can improve both state estimation and dense mapping, especially in low-texture environments. The plane measurements come from a pop-up 3D plane model applied to each single image. We also combine planes with point-based SLAM to improve robustness. On a public TUM dataset, our algorithm generates a dense semantic 3D model with a pixel depth error of 6.2 cm while existing SLAM algorithms fail. On a 60 m long dataset with loops, our method creates a much better 3D model with a state estimation error of 0.67%. Comment: International Conference on Intelligent Robots and Systems (IROS) 2016
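    The geometric core of a pop-up plane model is that a pixel known to lie on a plane gets its depth from a ray-plane intersection, which is what makes a dense depth map recoverable from a single image once planes are hypothesized. The sketch below shows only that back-projection step; the intrinsics and plane parameters are illustrative assumptions, not values from the paper, and how planes are detected in the first place is not covered here.

```python
# Ray-plane depth: if pixel (u, v) lies on the plane n.X = d in the camera
# frame, its depth follows from intersecting the back-projected ray with
# that plane. Intrinsics/plane values are illustrative, not the paper's.
import numpy as np

K = np.array([[525.0,   0.0, 319.5],   # TUM-style pinhole intrinsics
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])
n, d = np.array([0.0, -0.8, 0.6]), 1.5  # plane n.X = d (camera frame)

def plane_depth(u, v):
    """Depth of pixel (u, v) under the assumption it lies on plane (n, d)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-projected ray
    # X = z * ray must satisfy n.X = d  =>  z = d / (n.ray)
    return d / (n @ ray)

# Dense depth for an image region comes from evaluating every pixel in it:
print(plane_depth(320, 400))  # e.g. a pixel on the assumed "floor" plane
```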

    Adaptive User Perspective Rendering for Handheld Augmented Reality

    Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user's real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user's head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands of user perspective rendering by applying lightweight optical flow tracking and an estimation of the user's motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and we compare it to device perspective rendering, to head-tracked user perspective rendering, as well as to fixed-point-of-view user perspective rendering.
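    To make the "lightweight optical flow instead of per-frame face tracking" idea concrete, here is a minimal sketch using OpenCV's pyramidal Lucas-Kanade tracker: sparse features from the front camera are tracked across frames, and their mean displacement serves as a cheap proxy for relative user/device motion. This is an assumed, simplified stand-in for the paper's pipeline; the camera index, feature counts, and thresholds are illustrative, and a real system would fuse the flow estimate into the perspective-rendering pose rather than print it.

```python
# Lightweight motion proxy: track sparse corners with pyramidal
# Lucas-Kanade optical flow on the front camera and average their shift.
# Assumes a camera is available at index 0; parameters are illustrative.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # front-facing camera (assumed)
ok, frame = cap.read()
if not ok:
    raise SystemExit("no camera available")
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev, maxCorners=50,
                              qualityLevel=0.01, minDistance=10)

while True:
    ok, frame = cap.read()
    if not ok or pts is None:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
    good_new = nxt[status.ravel() == 1]
    good_old = pts[status.ravel() == 1]
    if len(good_new) < 10:                     # re-seed when tracks die out
        pts = cv2.goodFeaturesToTrack(gray, maxCorners=50,
                                      qualityLevel=0.01, minDistance=10)
    else:
        shift = (good_new - good_old).reshape(-1, 2).mean(axis=0)
        # 'shift' approximates apparent head/device motion; a real system
        # would feed it into the user-perspective pose estimate.
        print("mean flow (px):", shift)
        pts = good_new.reshape(-1, 1, 2)
    prev = gray
cap.release()
```

    Averaging many sparse tracks is far cheaper than running a face detector every frame, which is consistent with the paper's motivation of offloading work from already loaded mobile AR applications.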