
    Virtual 3D Reconstruction of Archaeological Pottery Using Coarse Registration

    The 3D reconstruction of objects has not only improved the visualisation of digitised objects, it has also helped researchers to actively study archaeological pottery. Reconstructing pottery is significant in archaeology but remains a challenging task for practitioners. For one, excavated pottery is rarely complete enough to provide exhaustive and useful information, so archaeologists attempt to reconstruct it with the tools and methods available. It is also difficult to apply existing reconstruction approaches in archaeological documentation, which makes it hard to carry out studies within a reasonable time. Interest has therefore shifted to developing new techniques and algorithms for reconstructing archaeological artefacts. This study focuses on providing interventions that ease the challenges encountered in reconstructing archaeological pottery. It applies a data acquisition approach that uses a 3D laser scanner to acquire point cloud data that clearly show the geometric and radiometric properties of the object’s surface. The acquired data are processed to remove noise and outliers before undergoing a coarse-to-fine registration strategy, which involves detecting and extracting keypoints from the point clouds and estimating descriptors for them. Correspondences are then estimated between point pairs, leading to a pairwise and global registration of the acquired point clouds. The distinguishing feature of the approach is its flexibility, owed to the nature of the data acquired, which improves its efficiency, robustness and accuracy. The findings show that real 3D datasets can yield good results when used with the right tools; high-resolution lenses and accurate calibration help to give accurate results. The registration accuracy attained in the study lies between 0.08 and 0.14 mean squared error for the data used, and further studies will validate this result. The results obtained are nonetheless useful for further studies in 3D pottery reassembly.
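
    The coarse-to-fine pipeline described above can be sketched with an off-the-shelf point cloud library. The snippet below is a minimal illustration, assuming Open3D, FPFH descriptors for the coarse stage and point-to-plane ICP for the refinement; these library and parameter choices, as well as the file names, are not stated in the abstract and follow recent Open3D releases.

    import open3d as o3d

    VOXEL = 0.002  # assumed 2 mm voxel size; tune to the scanner resolution

    def preprocess(path):
        pcd = o3d.io.read_point_cloud(path)
        # Denoise: drop statistical outliers before any registration
        pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
        down = pcd.voxel_down_sample(VOXEL)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=5 * VOXEL, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=10 * VOXEL, max_nn=100))
        return down, fpfh

    src, src_fpfh = preprocess("fragment_a.ply")   # hypothetical scan files
    tgt, tgt_fpfh = preprocess("fragment_b.ply")

    # Coarse stage: RANSAC over descriptor correspondences
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, src_fpfh, tgt_fpfh, True, 3 * VOXEL,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(3 * VOXEL)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # Fine stage: point-to-plane ICP initialised with the coarse estimate
    fine = o3d.pipelines.registration.registration_icp(
        src, tgt, VOXEL, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    print(fine.transformation, fine.inlier_rmse)

    Pairwise estimates obtained this way would then feed the global registration step, for example as constraints in a pose graph over all scans.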

    Upper Body Pose Estimation Using Wearable Inertial Sensors and Multiplicative Kalman Filter

    Estimating limb pose with wearable sensors may benefit multiple areas such as rehabilitation, teleoperation, human-robot interaction, gaming, and many more. Several solutions are commercially available, but they are usually expensive or not wearable/portable. We present a wearable pose estimation system (WePosE), based on inertial measurement units (IMUs), for motion analysis and body tracking. Unlike camera-based approaches, the proposed system does not suffer from occlusion problems or lighting conditions, it is cost-effective, and it can be used in indoor and outdoor environments. Moreover, since only accelerometers and gyroscopes are used to estimate the orientation, the system can also be used in the presence of iron and magnetic disturbances. An experimental validation using a high-precision optical tracker has been performed. The results confirmed the effectiveness of the proposed approach.
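
    As a rough illustration of magnetometer-free orientation tracking from gyroscope and accelerometer data alone, the sketch below implements a simple quaternion complementary filter; it is not the multiplicative Kalman filter used by WePosE, and the gain and per-sample calling convention are hypothetical.

    import numpy as np

    def quat_mul(q, r):
        # Hamilton product of two quaternions stored as (w, x, y, z)
        w1, x1, y1, z1 = q
        w2, x2, y2, z2 = r
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def step(q, gyro, accel, dt, k=0.02):
        # Predict: integrate the body angular rate as a small multiplicative rotation
        q = quat_mul(q, np.concatenate(([1.0], 0.5 * gyro * dt)))
        q /= np.linalg.norm(q)
        # Correct: nudge the estimate so predicted gravity matches the normalised
        # accelerometer direction (valid when acceleration is close to gravity)
        w, x, y, z = q
        g_est = np.array([2*(x*z - w*y), 2*(y*z + w*x), w*w - x*x - y*y + z*z])
        g_meas = accel / np.linalg.norm(accel)
        corr = np.cross(g_meas, g_est)          # small-angle correction, body frame
        q = quat_mul(q, np.concatenate(([1.0], 0.5 * k * corr)))
        return q / np.linalg.norm(q)

    q = np.array([1.0, 0.0, 0.0, 0.0])          # identity orientation
    # q = step(q, gyro_rad_s, accel_m_s2, dt)   # hypothetical per-IMU-sample update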

    Model Based Teleoperation to Eliminate Feedback Delay NSF Grant BCS89-01352 Second Report

    We are conducting research in the area of teleoperation with feedback delay. Delay occurs with earth-based teleoperation in space and with surface-based teleoperation of untethered submersibles when acoustic communication links are involved. The delay in obtaining position and force feedback from remote slave arms makes teleoperation extremely difficult, leading to very low productivity. We have combined computer graphics with manipulator programming to provide a solution to the problem. A teleoperator master arm is interfaced to a graphics-based simulator of the remote environment. The system is then coupled with a robot manipulator at the remote, delayed site. The operator's actions are monitored to provide both kinesthetic and visual feedback and to generate symbolic motion commands to the remote slave. The slave robot then executes these symbolic commands delayed in time. Much of a task proceeds error-free; when an error does occur, the slave system transmits data back to the master environment, which is then reset to the error state from which the operator continues the task.
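
    A minimal sketch of the command flow described above (purely illustrative, not taken from the report): symbolic commands travel through a delayed channel to the slave, and an error report travels back to reset the master's predictive simulator. The tick-based delay model and all names are assumptions.

    import collections

    DELAY_STEPS = 30                      # one-way delay expressed in control ticks

    class DelayedChannel:
        """FIFO that releases items only after DELAY_STEPS ticks have elapsed."""
        def __init__(self):
            self.queue = collections.deque()
            self.tick = 0

        def send(self, item):
            self.queue.append((self.tick + DELAY_STEPS, item))

        def step(self):
            self.tick += 1
            out = []
            while self.queue and self.queue[0][0] <= self.tick:
                out.append(self.queue.popleft()[1])
            return out

    uplink = DelayedChannel()             # master -> slave symbolic commands
    downlink = DelayedChannel()           # slave -> master error reports

    def master_tick(command):
        uplink.send(command)              # e.g. ("move_to", target_pose)

    def slave_tick(execute, current_state):
        for cmd in uplink.step():
            if not execute(cmd):          # execution failed at the remote site
                downlink.send(current_state())

    def master_receive(reset_simulator):
        for error_state in downlink.step():
            reset_simulator(error_state)  # resynchronise the graphics simulator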

    Real-Time Accurate Visual SLAM with Place Recognition

    The Simultaneous Localization and Mapping (SLAM) problem consists of localizing a sensor in a map that is built online. SLAM technology makes it possible to localize a robot in an environment unknown to it by processing the information from its on-board sensors, and therefore without relying on external infrastructure. A map allows the system to localize itself at all times without accumulating drift, unlike odometry, where incremental motions are integrated. This kind of technology is critical for the navigation of service robots and autonomous vehicles, or for localizing the user in augmented and virtual reality applications. The main contribution of this thesis is ORB-SLAM, a feature-based monocular SLAM system that works in real time in small and large environments, indoors and outdoors. The system is robust to dynamic elements in the scene, can close loops and relocalize the camera even if the viewpoint has changed significantly, and includes a fully automatic initialization method. ORB-SLAM is currently the most complete, accurate and reliable monocular SLAM solution using a camera as the only sensor. The system, being based on features and bundle adjustment, has demonstrated unprecedented accuracy and robustness on standard public sequences. Additionally, ORB-SLAM has been extended to reconstruct the environment semi-densely. Our solution decouples the semi-dense reconstruction from the estimation of the camera trajectory, resulting in a system that combines the accuracy and robustness of feature-based SLAM with the more complete reconstructions of direct methods. The monocular solution has also been extended to exploit the information from stereo cameras, RGB-D cameras and inertial sensors, achieving higher accuracy than other state-of-the-art solutions. In order to contribute to the scientific community, we have released the code of an implementation of our SLAM solution for monocular, stereo and RGB-D cameras, the first open-source solution able to work with these three camera types.
    Bibliography:
    R. Mur-Artal and J. D. Tardós. Fast Relocalisation and Loop Closing in Keyframe-Based SLAM. IEEE International Conference on Robotics and Automation (ICRA). Hong Kong, China, June 2014.
    R. Mur-Artal and J. D. Tardós. ORB-SLAM: Tracking and Mapping Recognizable Features. RSS Workshop on Multi VIew Geometry in RObotics (MVIGRO). Berkeley, USA, July 2014.
    R. Mur-Artal and J. D. Tardós. Probabilistic Semi-Dense Mapping from Highly Accurate Feature-Based Monocular SLAM. Robotics: Science and Systems (RSS). Rome, Italy, July 2015.
    R. Mur-Artal, J. M. M. Montiel and J. D. Tardós. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147-1163, October 2015. (2015 IEEE Transactions on Robotics Best Paper Award.)
    R. Mur-Artal and J. D. Tardós. Visual-Inertial Monocular SLAM with Map Reuse. IEEE Robotics and Automation Letters, vol. 2, no. 2, pp. 796-803, April 2017. (To be presented at ICRA 2017.)
    R. Mur-Artal and J. D. Tardós. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras. ArXiv preprint arXiv:1610.06475, 2016. (Under review.)
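
    The feature-based front end at the heart of such a system can be illustrated with a short sketch: ORB keypoints are matched between two frames and the relative camera motion is recovered with essential-matrix RANSAC. This uses OpenCV rather than the ORB-SLAM code base, and the image paths and intrinsic matrix K are hypothetical.

    import cv2
    import numpy as np

    K = np.array([[718.856, 0.0, 607.193],   # hypothetical pinhole intrinsics
                  [0.0, 718.856, 185.216],
                  [0.0, 0.0, 1.0]])

    img1 = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Binary ORB descriptors are compared with the Hamming norm; cross-checking
    # discards asymmetric matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC on the essential matrix rejects outlier correspondences, then the
    # relative rotation R and up-to-scale translation t are recovered.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    print(R, t.ravel())

    A full system like ORB-SLAM builds keyframe selection, local mapping, bundle adjustment, loop closing and relocalization on top of this tracking step.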

    Automatic Reconstruction of Textured 3D Models

    Three-dimensional modeling and visualization of environments is an increasingly important problem. This work addresses the problem of automatic 3D reconstruction, and we present a system for the unsupervised reconstruction of textured 3D models in the context of modeling indoor environments. We present solutions to all aspects of the modeling process and an integrated system for the automatic creation of large-scale 3D models.

    Development and testing of docking functions in industrial settings for an autonomous mobile robot based on ROS2

    This dissertation is the result of a six-month internship at G.D S.p.A. for the preparation of the thesis project. The final goal is to develop algorithms on the ROS2 framework that can be used to control an Autonomous Mobile Robot (AMR) during the detection and high-precision approach of a docking station, needed to recharge the AMR itself or to perform operations on the host machines. The automation of these operations ensures a substantial increase in safety and productivity within a warehouse or along host machine lines, since it allows the AMR to work for longer periods without requiring an operator, or even to replace the operator entirely. The presented method uses both lidars and an onboard camera. The trajectory from the starting position to the approximate area of the docking station is computed using data obtained from the three lidars around the AMR body. The final approach is implemented by detecting an ArUco marker positioned on the dock assembly through the camera. A sequence of intermediate positions is defined according to the pose estimates, and then reached with a mix of standard navigation and proportional position control in the very last part of the trajectory. The docking position turned out to have less than one centimeter of error around the desired target, and the orientation error is a fraction of a degree. The docking times vary based on how far the AMR is from the docking station, but the last phase of the procedure is always completed in around seventeen seconds. The solution is implementable and will be evaluated on the real platform in the coming months.
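
    The final docking phase, ArUco detection plus proportional control, can be sketched as follows. This is an illustration rather than the thesis code: OpenCV's ArUco module (opencv-contrib) stands in for the detection step using the pre-4.7 API (newer releases use cv2.aruco.ArucoDetector), and the marker size, camera intrinsics, gains and the velocity-command interface are assumed; the ROS2 plumbing is omitted.

    import cv2
    import numpy as np

    MARKER_SIZE = 0.10                      # assumed marker side length in metres
    K = np.array([[600.0, 0.0, 320.0],      # hypothetical camera intrinsics
                  [0.0, 600.0, 240.0],
                  [0.0, 0.0, 1.0]])
    DIST = np.zeros(5)                      # assumed: no lens distortion

    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

    def marker_offset(gray):
        """Return (forward, lateral, yaw) of the marker in the camera frame, or None."""
        corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
        if ids is None:
            return None
        # Marker corners in its own frame (TL, TR, BR, BL), pose solved with PnP
        obj = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                       dtype=np.float32) * (MARKER_SIZE / 2.0)
        ok, rvec, tvec = cv2.solvePnP(obj, corners[0].reshape(4, 2), K, DIST)
        if not ok:
            return None
        yaw = rvec.ravel()[1]               # rough yaw about the camera's vertical axis
        return tvec[2, 0], tvec[0, 0], yaw

    def proportional_command(offset, k_lin=0.4, k_ang=1.0, stop_dist=0.01):
        """Map the remaining offset to hypothetical (linear, angular) velocity commands."""
        forward, lateral, yaw = offset
        if forward < stop_dist:
            return 0.0, 0.0                 # within about a centimeter: stop
        return k_lin * forward, -k_ang * (lateral / forward + yaw)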

    NASA space station automation: AI-based technology review

    Research and Development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through reduced need for EVA, increase crew productivity through the reduction of routine operations, increase space station autonomy, and augment space station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures

    High-level environment representations for mobile robots

    In most robotic applications we are faced with the problem of building a digital representation of the environment that allows the robot to autonomously complete its tasks. This internal representation can be used by the robot to plan a motion trajectory for its mobile base and/or end-effector. For most man-made environments we either lack a digital representation or the one we have is inaccurate, so the robot must be able to build it autonomously. This is done by integrating incoming sensor measurements into an internal data structure. For this purpose, a common solution consists of solving the Simultaneous Localization and Mapping (SLAM) problem. The map obtained by solving a SLAM problem is called "metric" and describes the geometric structure of the environment. A metric map is typically made up of low-level primitives (like points or voxels). This means that even though it represents the shape of the objects in the robot's workspace, it lacks the information of which object a surface belongs to. Having an object-level representation of the environment has the advantage of enlarging the set of tasks a robot may accomplish. To this end, in this thesis we focus on two aspects. We propose a formalism to represent in a uniform manner 3D scenes consisting of different geometric primitives, including points, lines and planes. From this representation we derive a local registration and a global optimization algorithm that exploit it for robust estimation. Furthermore, we present a Semantic Mapping system capable of building an object-based map that can be used for complex task planning and execution. Our system exploits effective reconstruction and recognition techniques that require no a-priori information about the environment and can be used under general conditions.
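
    One possible way to make the idea of a uniform representation concrete (hypothetical, not the thesis formalism): store points, lines and planes in a single container and evaluate, for each type, the residual that a local registration or global optimization step would minimize under a candidate rigid transform (R, t).

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Primitive:
        kind: str                 # "point", "line" or "plane"
        p: np.ndarray             # a point on the primitive, shape (3,)
        d: np.ndarray = None      # unit line direction or plane normal, shape (3,)

    def transform(prim, R, t):
        """Apply a rigid transform (R, t) to a primitive."""
        d = None if prim.d is None else R @ prim.d
        return Primitive(prim.kind, R @ prim.p + t, d)

    def residual(src, dst):
        """Distance-style error between a transformed source primitive and its match."""
        diff = src.p - dst.p
        if dst.kind == "point":
            return np.linalg.norm(diff)                   # point-to-point
        if dst.kind == "line":
            return np.linalg.norm(np.cross(dst.d, diff))  # point-to-line
        return abs(dst.d @ diff)                          # point-to-plane

    # Example: error of a source point against a matched plane under (R, t)
    R, t = np.eye(3), np.array([0.0, 0.0, 0.1])
    src = Primitive("point", np.array([1.0, 0.0, 0.0]))
    dst = Primitive("plane", np.zeros(3), np.array([0.0, 0.0, 1.0]))
    print(residual(transform(src, R, t), dst))            # -> 0.1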