
    Implementation of an automated eye-in-hand scanning system using Best-Path planning

    In this thesis we implemented an automated scanning system for 3D object reconstruction. The system is composed of a KUKA LWR 4+ arm with Microsoft Kinect cameras mounted at its end, and thus in an eye-in-hand configuration. We implemented the system in ROS using Kinect Fusion software with extra features added by R. Monica's previous work [16] and the MoveIt! ROS libraries [29] to control the robot movement with motion planning. To connect these nodes, we coded a suite using ROS and MATLAB to operate them easily, as well as to include new features, such as an original view planner that outperforms the commonly used Next-Best-View planner. This suite incorporates a Graphical User Interface that allows new users to easily perform the reconstruction tasks. The new view planner developed in this work, called the Best-Path planner, offers a new approach using a modified Dijkstra algorithm. Among its benefits, the Best-Path planner offers an optimized way to scan objects, preventing the camera from re-crossing areas that have already been scanned. Moreover, viewpoint location and orientation have been studied in depth in order to obtain the most natural movements and the best results. For this reason, this new planner makes the scanning procedure more robust, as it ensures trajectories through these optimized viewpoints, so the camera is always looking towards the object while maintaining the optimal sensing distance. As this project is focused on its later utility in the Intelligent Robotics Laboratory, we uploaded all the source code to the Aalto GitLab repositories [37] with installation instructions and user guides that show the different features the suite offers.
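    The "modified Dijkstra" idea behind a planner of this kind can be illustrated with a short sketch: a standard Dijkstra search over a graph of candidate viewpoints, with edges leading into already-scanned viewpoints penalised so the planned path avoids re-crossing covered areas. The graph encoding, penalty factor, and function names below are illustrative assumptions, not the thesis's actual implementation.

```python
import heapq

def best_path(graph, start, goal, scanned=frozenset(), revisit_penalty=10.0):
    """Dijkstra shortest path over a viewpoint graph.

    graph: dict mapping viewpoint -> list of (neighbor, edge_cost) pairs.
    scanned: viewpoints whose areas have already been covered; edges into
    them are penalised (not forbidden) so a path still exists when the
    graph offers no alternative route.
    """
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    done = set()
    while queue:
        d, u = heapq.heappop(queue)
        if u in done:
            continue
        done.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            # Inflate the cost of re-entering an already-scanned viewpoint.
            cost = w * revisit_penalty if v in scanned else w
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(queue, (nd, v))
    # Reconstruct the viewpoint sequence from goal back to start.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path))
```

    With no scanned regions the planner takes the cheapest route; once a viewpoint is marked as scanned, the same query is steered around it.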

    Autonomous 3D mapping and surveillance of mines with MAVs

    A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, for the degree of Master of Science. 12 July 2017. The mapping of mines, both operational and abandoned, is a long, difficult, and occasionally dangerous task, especially in the latter case. Recent developments in active and passive consumer-grade sensors, as well as quadcopter drones, present the opportunity to automate these challenging tasks, providing cost and safety benefits. The goal of this research is to develop an autonomous vision-based mapping system that employs quadrotor drones to explore and map sections of mine tunnels. The system is equipped with inexpensive structured-light depth cameras in place of traditional laser scanners, making the quadrotor setup more viable to produce in bulk. A modified version of Microsoft's Kinect Fusion algorithm is used to construct 3D point clouds in real time as the agents traverse the scene. Finally, the generated and merged point clouds from the system are compared with those produced by current Lidar scanners.
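    At the core of Kinect Fusion is the fusion of per-frame depth measurements into a truncated signed distance function (TSDF) volume via a weighted running average per voxel. The sketch below shows that single-voxel update step under assumed names and constants; it is not the modified algorithm used in the dissertation.

```python
def update_tsdf(tsdf, weight, sdf, max_weight=64.0, trunc=0.03):
    """One KinectFusion-style voxel update: fuse a new signed-distance
    observation into the running truncated average.

    tsdf, weight: current voxel state; sdf: signed distance (metres) from
    the new depth frame along the camera ray; trunc: truncation band.
    The constants are illustrative placeholders.
    """
    if sdf < -trunc:           # voxel lies far behind the surface: no update
        return tsdf, weight
    d = min(1.0, sdf / trunc)  # truncate and normalise to [-1, 1]
    new_weight = min(weight + 1.0, max_weight)
    fused = (tsdf * weight + d) / (weight + 1.0)
    return fused, new_weight
```

    Capping the weight at `max_weight` keeps the volume responsive to recent frames, which matters when merging clouds from multiple traversals.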

    Using ToF and RGBD cameras for 3D robot perception and manipulation in human environments

    Robots, traditionally confined to factories, are nowadays moving into domestic and assistive environments, where they need to deal with complex object shapes, deformable materials, and pose uncertainties at human pace. To attain quick 3D perception, new cameras delivering registered depth and intensity images at a high frame rate hold a lot of promise, and therefore many robotics researchers are now experimenting with structured-light RGBD and Time-of-Flight (ToF) cameras. In this paper both technologies are critically compared to help researchers evaluate their use in real robots. The focus is on 3D perception at close distances for different types of objects that may be handled by a robot in a human environment. We review three robotics applications. The analysis of several performance aspects indicates the complementarity of the two camera types, since the user-friendliness and higher resolution of RGBD cameras are counterbalanced by the capability of ToF cameras to operate outdoors and perceive details. This research is partially funded by the EU GARNICS project FP7-247947, by CSIC project MANIPlus 201350E102, by the Spanish Ministry of Science and Innovation under project PAU+DPI2011-27510, and the Catalan Research Commission under Grant SGR-155. Peer Reviewed

    Optimization of Humanoid's Motions under Multiple Constraints in Vehicle-Handling Task

    In this dissertation, an approach to whole-body motion optimization is presented for a humanoid vehicle-handling task. To achieve this goal, the author built a reinforcement-learning-agent-based trajectory-optimization framework. The framework planned and optimized a guideline input trajectory with respect to various kinematic and dynamic constraints. A path-planner module designed an initial suboptimal motion. Reinforcement learning was then used to optimize the trajectories with respect to time-varying constraints at the body and joint levels. The cost functions at the body level evaluated the robot's static balancing ability, collisions, and the validity of the end-effector movement. Quasi-static balance and collisions were computed from kinematic models of the robot and the vehicle. Various costs, such as joint-angle and velocity limits, were computed at the joint level. Energy-related costs, such as torque-limit obedience, were also checked at the joint level. Such physical limits on each joint ensured both spatial and temporal smoothness of the generated trajectories. While keeping the overall structure of the framework, cost functions and learning algorithms were selected adaptively based on the requirements of the given tasks. After the optimization process, experimental tests of the presented approach were demonstrated through simulations using a virtual robot model. A verification-and-validation process then confirmed the efficacy of the optimized-trajectory approach on the robot's real physical platform. For both the test and verification processes, different types of robots and vehicles were used to demonstrate the potential for extending the trajectory-optimization framework. Ph.D., Mechanical Engineering and Mechanics -- Drexel University, 201
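    The body-level/joint-level cost split described above can be sketched as a single weighted aggregate evaluated over a candidate trajectory. All term definitions, weights, and names below are illustrative placeholders, not the dissertation's actual cost functions.

```python
def trajectory_cost(traj, limits, w_balance=1.0, w_collision=10.0, w_joint=0.1):
    """Aggregate cost of a candidate whole-body trajectory.

    traj: list of (joint_angles, balance_margin, in_collision) tuples, one
    per timestep; balance_margin < 0 means the quasi-static balance
    criterion is violated. limits: (min_angle, max_angle) applied to every
    joint for simplicity.
    """
    lo, hi = limits
    cost = 0.0
    for angles, balance_margin, in_collision in traj:
        # Body level: penalise loss of static balance and any collision.
        cost += w_balance * max(0.0, -balance_margin)
        cost += w_collision * (1.0 if in_collision else 0.0)
        # Joint level: penalise angle-limit violations per joint.
        for q in angles:
            cost += w_joint * (max(0.0, lo - q) + max(0.0, q - hi))
    return cost
```

    A learning agent would then compare candidate trajectories by this scalar, trading off the weighted terms rather than treating each constraint as a hard boundary.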

    Active Vision for Scene Understanding

    Visual perception is one of the most important sources of information for both humans and robots. A particular challenge is the acquisition and interpretation of complex unstructured scenes. This work contributes to active vision for humanoid robots. A semantic model of the scene is created, which is extended by successively changing the robot's view in order to explore interaction possibilities of the scene.

    The attentive robot companion: learning spatial information from observation and verbal interaction

    Get PDF
    Ziegler L. The attentive robot companion: learning spatial information from observation and verbal interaction. Bielefeld: Universität Bielefeld; 2015. This doctoral thesis investigates how a robot companion can gain a degree of situational awareness through observation of and interaction with its surroundings. The focus lies on the representation of spatial knowledge gathered constantly over time in an indoor environment. Against the background of research on an interactive service robot, methods for deployment in inference and verbal-communication tasks are presented. The design and application of the models are guided by the requirements of referential communication. The approach involves analysing the dynamic properties of structures in the robot’s field of view, allowing it to distinguish objects of interest from other agents and background structures. The use of multiple persistent models representing these dynamic properties enables the robot to track changes in multiple scenes over time to establish spatial and temporal references. This work includes building a coherent representation that considers allocentric and egocentric aspects of spatial knowledge for these models. The spatial analysis is extended with a semantic interpretation of objects and regions. This top-down approach to generating additional context information enhances the grounding process in communication. A holistic, boosting-based classification approach using a wide range of 2D and 3D visual features anchored in the spatial representation allows the system to identify room types. The process of grounding referential descriptions from a human interlocutor in the spatial representation is evaluated through referencing furniture. This method uses a probabilistic network for handling ambiguities in the descriptions and employs a strategy for resolving conflicts. To demonstrate the real-world applicability of these approaches, the system was deployed on the mobile robot BIRON in a realistic apartment scenario involving observation of and verbal interaction with an interlocutor.
