
    Automatic Reconstruction of Textured 3D Models

    Three-dimensional modeling and visualization of environments is an increasingly important problem. This work addresses automatic 3D reconstruction: we present a system for unsupervised reconstruction of textured 3D models in the context of modeling indoor environments. We present solutions to all aspects of the modeling process and an integrated system for the automatic creation of large-scale 3D models.

    GRASP News, Volume 6, Number 1

    A report of the General Robotics and Active Sensory Perception (GRASP) Laboratory, edited by Gregory Long and Alok Gupta.

    UVC Dose Mapping by Mobile Robots

    Hospital-acquired infections are a persistent and growing problem, and their prevention involves disinfecting areas and surfaces. The need for effective disinfection methods has increased sharply as a consequence of the Covid-19 pandemic. One effective method is UVC exposure: UVC radiation is absorbed by nucleic acids and is therefore able to inactivate microorganisms. This method also offers many advantages over traditional disinfection methods. UVC disinfection can be performed either by fixed equipment that must be moved from place to place to disinfect an entire area, or by autonomous mobile equipment that requires minimal human intervention to disinfect an environment completely. This dissertation focuses on mobile robots that disinfect an environment using UVC radiation. These robots move autonomously while mapping the surrounding environment and simultaneously disinfecting it. They keep track of the dose applied to each area of the environment in order to build a dose map and distinguish fully disinfected areas from those that are not. This solution has the advantage that the robot performs UVC disinfection without needing to stop in each area or to have prior knowledge of the environment. The solution was validated using rviz, a Robot Operating System (ROS) tool, and the LiDAR Camera L515. The camera was used to capture the information needed to build the map of the environment, and rviz was used to visualize the dose map.
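The dose-map idea described above can be sketched as a grid that accumulates radiant exposure over time. The sketch below is a minimal 2D illustration, assuming a simple inverse-square point-source irradiance model; a real robot would use a measured lamp profile, occlusion checks, and the 3D map built by the LiDAR camera.

```python
import math

def accumulate_dose(dose_map, robot_xy, power_w, dt, cell_size=0.1, max_range=3.0):
    """Add the UVC dose received during one time step to each grid cell.

    Irradiance follows an inverse-square falloff from a point source at the
    robot's position (a hypothetical model chosen for illustration).
    """
    rx, ry = robot_xy
    for i, row in enumerate(dose_map):
        for j, _ in enumerate(row):
            # Cell centre in metres.
            cx, cy = (j + 0.5) * cell_size, (i + 0.5) * cell_size
            d = math.hypot(cx - rx, cy - ry)
            if 0.0 < d <= max_range:
                irradiance = power_w / (4.0 * math.pi * d * d)  # W/m^2
                dose_map[i][j] += irradiance * dt               # J/m^2

def fully_disinfected(dose_map, threshold):
    """Mark cells whose accumulated dose meets the required threshold."""
    return [[cell >= threshold for cell in row] for row in dose_map]
```

Calling `accumulate_dose` once per control cycle with the current pose yields exactly the kind of map the dissertation visualizes in rviz: cells near the robot's trajectory accumulate dose quickly, while distant or shadowed cells remain below threshold.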

    Two different tools for three-dimensional mapping: DE-based scan matching and feature-based loop detection

    An autonomous robot must obtain information about its surroundings to accomplish multiple tasks, many of which are greatly improved when this information is efficiently incorporated into a map. Examples include navigation, manipulation, and localization. This mapping problem has been an important research area in mobile robotics during the last few decades. It does not have a unique solution and can be divided into multiple sub-problems. Two different aspects of the mobile robot mapping problem are addressed in this work. First, we have developed a Differential Evolution-based scan matching algorithm that operates with high accuracy in three-dimensional environments. The map obtained by an autonomous robot must be consistent after registration, so it is essential to detect when the robot is navigating around a previously visited place in order to minimize the accumulated error. This phase, called loop detection, is the second aspect studied here. We have developed an algorithm that extracts the most important features from two different three-dimensional laser scans in order to obtain a loop indicator used to detect when the robot is visiting a known place. This approach allows very different characteristics to be included in the descriptor. First, the surface features capture the geometric forms in the scan (lines, planes, and spheres). Second, the numerical features describe several numerical properties of the measurements: volume, average range, curvature, etc. Both algorithms have been tested with real data to demonstrate that they are efficient tools for mapping tasks.
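Differential Evolution treats scan matching as a black-box optimization over the pose parameters. The sketch below is a minimal 2D version (translation plus rotation, three parameters) using the standard rand/1/bin DE scheme; the thesis works in 3D with six pose parameters, so take this only as an illustration of the approach, with all bounds and constants chosen arbitrarily.

```python
import math
import random

def transform(points, tx, ty, theta):
    """Apply a 2D rigid-body transform to a list of (x, y) points."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def match_error(params, scan, ref):
    """Sum of nearest-neighbour distances after applying the candidate pose."""
    moved = transform(scan, *params)
    return sum(min(math.dist(p, q) for q in ref) for p in moved)

def de_scan_match(scan, ref, pop_size=20, gens=60, F=0.7, CR=0.9, seed=0):
    """Minimal rand/1/bin Differential Evolution over (tx, ty, theta)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-0.5, 0.5)]
           for _ in range(pop_size)]
    cost = [match_error(ind, scan, ref) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            # Mutation: perturb one individual with a scaled difference vector.
            mutant = [pop[a][k] + F * (pop[b][k] - pop[c][k]) for k in range(3)]
            # Binomial crossover, then greedy selection.
            trial = [mutant[k] if rng.random() < CR else pop[i][k] for k in range(3)]
            tc = match_error(trial, scan, ref)
            if tc < cost[i]:
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best]
```

Because DE only evaluates the cost function, the same loop works unchanged for non-smooth registration costs where gradient-based matchers struggle, which is one motivation for using it in scan matching.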

    Detail Enhancing Denoising of Digitized 3D Models from a Mobile Scanning System

    The acquisition process of digitizing a large-scale environment produces an enormous amount of raw geometry data. This data is corrupted by system noise, which leads to 3D surfaces that are not smooth and details that are distorted. Any scanning system has noise associated with the scanning hardware, both digital quantization errors and measurement inaccuracies, but a mobile scanning system has additional system noise introduced by the pose estimation of the hardware during data acquisition. The combined system noise generates data that is not handled well by existing noise reduction and smoothing techniques. This research focuses on enhancing the 3D models acquired by mobile scanning systems used to digitize large-scale environments. These digitization systems combine a variety of sensors, including laser range scanners, video cameras, and pose estimation hardware, on a mobile platform for the quick acquisition of 3D models of real-world environments. The data acquired by such systems are extremely noisy, often with significant details on the same order of magnitude as the system noise. By utilizing a unique 3D signal analysis tool, a denoising algorithm was developed that identifies regions of detail and enhances their geometry while removing the effects of noise on the overall model. The algorithm can be useful for a variety of digitized 3D models, not just those from mobile scanning systems. The challenges faced in this study were the automatic processing needs of the enhancement algorithm and the need to fill a gap in 3D model analysis in order to reduce the effect of system noise on the models. In this context, our main contributions are the automation and integration of a data enhancement method not well known to the computer vision community, and the development of a novel 3D signal decomposition and analysis tool.
    The new techniques featured in this document are intuitive extensions of existing methods to new dimensionality and applications. The research as a whole has been applied to detail-enhancing denoising of scanned data from a mobile range scanning system, and results from both synthetic and real models are presented.
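The core idea of detail-enhancing denoising, separating a signal into a smooth base and a detail layer, suppressing small-amplitude detail as noise, and re-amplifying the rest, can be shown on a 1D signal. This is only a stand-in for the thesis's 3D decomposition; the moving-average base, the `noise_floor`, and the `detail_gain` are all illustrative choices.

```python
def smooth(signal, window=5):
    """Moving-average low-pass component (edge-padded)."""
    half = window // 2
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    return [sum(padded[i:i + window]) / window for i in range(len(signal))]

def detail_enhancing_denoise(signal, window=5, detail_gain=1.5, noise_floor=0.05):
    """Split a signal into base + detail, zero out sub-threshold detail
    (treated as noise), and amplify the detail that remains."""
    base = smooth(signal, window)
    out = []
    for s, b in zip(signal, base):
        d = s - b
        if abs(d) < noise_floor:
            d = 0.0              # small deviation: assume system noise
        else:
            d *= detail_gain     # large deviation: genuine detail, enhance
        out.append(b + d)
    return out
```

In 3D the same decomposition is applied to surface geometry rather than a scalar sequence, but the design choice is identical: noise and detail live in the same high-frequency band, so they are distinguished by amplitude within that band rather than by frequency alone.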

    High-level environment representations for mobile robots

    In most robotic applications we face the problem of building a digital representation of the environment that allows the robot to complete its tasks autonomously. This internal representation can be used by the robot to plan a motion trajectory for its mobile base and/or end-effector. For most man-made environments we either lack a digital representation or have an inaccurate one, so the robot must be able to build it autonomously by integrating incoming sensor measurements into an internal data structure. A common solution is to solve the Simultaneous Localization and Mapping (SLAM) problem. The map obtained by solving a SLAM problem is called "metric" and describes the geometric structure of the environment. A metric map is typically made up of low-level primitives (such as points or voxels); even though it represents the shape of the objects in the robot workspace, it lacks the information of which object a surface belongs to. Having an object-level representation of the environment augments the set of tasks a robot may accomplish. To this end, this thesis focuses on two aspects. We propose a formalism to represent, in a uniform manner, 3D scenes consisting of different geometric primitives, including points, lines, and planes. From this representation we derive a local registration and a global optimization algorithm that exploit it for robust estimation. Furthermore, we present a Semantic Mapping system capable of building an object-based map that can be used for complex task planning and execution. Our system exploits effective reconstruction and recognition techniques that require no a priori information about the environment and can be used under general conditions.
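Treating points, lines, and planes uniformly means each primitive contributes a residual of the same kind to the registration cost. The sketch below shows one common way to do this with point-to-primitive distances; the interface is hypothetical and the thesis's actual formalism differs in detail.

```python
import math

def point_to_point(p, q):
    """Euclidean distance between two 3D points."""
    return math.dist(p, q)

def point_to_line(p, origin, direction):
    """Distance from p to the line through `origin` with unit `direction`."""
    v = [pi - oi for pi, oi in zip(p, origin)]
    t = sum(vi * di for vi, di in zip(v, direction))        # projection length
    proj = [oi + t * di for oi, di in zip(origin, direction)]
    return math.dist(p, proj)

def point_to_plane(p, origin, normal):
    """Signed distance from p to the plane through `origin` with unit `normal`."""
    return sum((pi - oi) * ni for pi, oi, ni in zip(p, origin, normal))

def registration_residuals(measurements):
    """Accumulate squared residuals for a mixed set of primitives.

    Each measurement is (kind, point, *primitive_parameters); a least-squares
    registration would minimize this total over the unknown rigid transform.
    """
    total = 0.0
    for kind, p, *prim in measurements:
        if kind == "point":
            r = point_to_point(p, prim[0])
        elif kind == "line":
            r = point_to_line(p, *prim)
        else:  # "plane"
            r = point_to_plane(p, *prim)
        total += r * r
    return total
```

The payoff of the uniform treatment is that a single optimizer can consume whatever primitives a scene offers, which matters in man-made environments where planes and lines dominate and isolated points may be scarce.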

    Event-based Simultaneous Localization and Mapping: A Comprehensive Survey

    In recent decades, visual simultaneous localization and mapping (vSLAM) has gained significant interest in both academia and industry. It estimates camera motion and reconstructs the environment concurrently using visual sensors on a moving robot. However, conventional cameras suffer from hardware limitations, including motion blur and low dynamic range, which can degrade performance in challenging scenarios like high-speed motion and high dynamic range illumination. Recent studies have demonstrated that event cameras, a new type of bio-inspired visual sensor, offer advantages such as high temporal resolution, high dynamic range, low power consumption, and low latency. This paper presents a timely and comprehensive review of event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks. The review covers the working principle of event cameras and various event representations for preprocessing event data. It also categorizes event-based vSLAM methods into four main categories: feature-based, direct, motion-compensation, and deep learning methods, with detailed discussions and practical guidance for each approach. Furthermore, the paper evaluates the state-of-the-art methods on various benchmarks, highlighting current challenges and future opportunities in this emerging research area. A public repository will be maintained to keep track of the rapid developments in this field at https://github.com/kun150kun/ESLAM-survey.
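One of the event representations such surveys cover is the time surface: a per-pixel, exponentially decayed image of the most recent event timestamps, which turns an asynchronous stream into a frame that conventional vSLAM front-ends can consume. The sketch below is a minimal version, assuming events arrive as (x, y, t, polarity) tuples; real pipelines typically handle polarity channels and hot pixels more carefully.

```python
import math

def time_surface(events, width, height, t_now, tau=0.05):
    """Build an exponentially decayed time surface from an event stream.

    Each pixel holds exp(-(t_now - t_last) / tau), so recently active
    pixels are bright and stale ones fade towards zero.
    """
    last_t = [[None] * width for _ in range(height)]
    for x, y, t, _pol in events:
        last_t[y][x] = t  # events arrive in time order; keep the latest
    surface = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            if last_t[y][x] is not None:
                surface[y][x] = math.exp(-(t_now - last_t[y][x]) / tau)
    return surface
```

The decay constant `tau` trades off motion sensitivity against persistence: small values emphasize only the newest edges, large values retain a longer motion history.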