6 research outputs found

    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Human-machine cooperation, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling and reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. This research focuses on two sub-fields of co-robotics: teleoperation and telepresence. We first explore teleoperation using mixed-reality techniques. I propose a new type of display, the hybrid-reality display (HRD) system, which uses a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach is that users need not wear any device, which minimizes intrusiveness and accommodates the eyes' natural focusing, and the field of view is significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents and incidents and by user research in domains such as military reconnaissance. Teleoperation in these environments is compromised by the keyhole effect, which results from the limited field of view. The technical contribution of the proposed HRD system is the multi-system calibration, which involves the motion sensor, projector, cameras, and robotic arm; given the purpose of the system, the calibration accuracy must be kept within the millimeter level. The follow-up HRD research focuses on high-accuracy 3D reconstruction of the replica with commodity devices for better alignment of the video frames. Conventional 3D scanners either lack depth resolution or are very expensive. We propose a structured-light-scanning 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflection; extensive user studies confirm the performance of the proposed algorithm. To compensate for the desynchronization between the local and remote stations caused by latency in data sensing and communication, a 1-step-ahead predictive control algorithm is presented. The latency between human control and robot movement is formulated as a system of linear equations with a smoothing coefficient ranging from 0 to 1, and the predictive controller is obtained by optimizing a cost function. We then explore telepresence. Many hardware designs have been developed to place a camera optically directly behind the screen, enabling two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits imaging artifacts such as a low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image enhancement framework that uses an auxiliary color+depth camera mounted on the side of the screen. By fusing the information from both cameras, we are able to significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image.
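
    As an aside, the latency-compensation idea above can be pictured with the sketch below, in which the next command sent to the robot is a convex combination of the newest operator input and the previous prediction, weighted by a smoothing coefficient between 0 and 1 that is chosen by minimising a squared-error cost over recorded commands. The scalar command model, the grid search, and the quadratic cost are illustrative assumptions rather than the dissertation's actual formulation.

    import numpy as np

    def predict_next(commands, alpha):
        """One-step-ahead prediction of the operator command stream: each
        prediction blends the newest observed command with the previous
        prediction through a smoothing coefficient alpha in [0, 1]."""
        pred = commands[0]
        preds = [pred]
        for c in commands[1:]:
            pred = alpha * c + (1.0 - alpha) * pred
            preds.append(pred)
        return np.array(preds)

    def fit_alpha(history, grid=np.linspace(0.0, 1.0, 101)):
        """Choose alpha by minimising a simple squared one-step prediction
        error (an illustrative cost function) over a recorded history."""
        best_alpha, best_cost = 0.0, np.inf
        for a in grid:
            preds = predict_next(history, a)
            cost = np.mean((history[1:] - preds[:-1]) ** 2)  # prediction at t vs. command at t+1
            if cost < best_cost:
                best_alpha, best_cost = a, cost
        return best_alpha

    # toy usage: a noisy ramp of joint commands
    history = np.linspace(0.0, 1.0, 50) + 0.02 * np.random.randn(50)
    print("selected smoothing coefficient:", fit_alpha(history))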

    Improving SLI Performance in Optically Challenging Environments

    The construction of 3D models of real-world scenes using non-contact methods is an important problem in computer vision. Some of the more successful methods belong to a class of techniques called structured light illumination (SLI). While SLI methods are generally very successful, there are cases where their performance is poor. Examples include scenes with a high dynamic range in albedo or scenes with strong interreflections. These scenes are referred to as optically challenging environments. The work in this dissertation is aimed at improving SLI performance in optically challenging environments. A new method of high dynamic range imaging (HDRI) based on pixel-by-pixel Kalman filtering is developed. Using objective metrics, it is shown to achieve as much as a 9.4 dB improvement in signal-to-noise ratio and as much as a 29% improvement in radiometric accuracy over a classic method. Quality checks are developed to detect and quantify multipath interference and other quality defects using phase measuring profilometry (PMP). Techniques are established to improve SLI performance in the presence of strong interreflections. Approaches in compressed sensing are applied to SLI, and interreflections in a scene are modeled using SLI. Several different applications of this research are also discussed.
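
    A minimal sketch of the pixel-by-pixel Kalman filtering idea for HDR fusion is given below. It assumes a linear camera response, exposure-scaled measurement noise, and a simple saturation threshold, so it should be read as an illustration of the concept rather than the dissertation's exact filter.

    import numpy as np

    def kalman_hdr(frames, exposures, meas_var=4.0, sat_level=250):
        """Fuse differently exposed frames into a radiance map with an
        independent scalar Kalman filter at every pixel.

        frames    : list of 2-D arrays (same size), linear pixel values
        exposures : list of exposure times, one per frame
        meas_var  : assumed sensor noise variance in pixel units
        sat_level : pixels at or above this value are treated as saturated
        """
        x = np.zeros_like(frames[0], dtype=np.float64)   # radiance estimate
        p = np.full_like(x, 1e6)                         # estimate variance (vague prior)
        for img, t in zip(frames, exposures):
            z = img.astype(np.float64) / t               # radiance measurement
            r = meas_var / (t * t)                       # measurement variance in radiance units
            valid = img < sat_level                      # ignore saturated pixels
            k = p / (p + r)                              # Kalman gain
            x = np.where(valid, x + k * (z - x), x)      # measurement update
            p = np.where(valid, (1.0 - k) * p, p)        # variance update
        return x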

    Clasificación automática de anomalías asociadas con ausencia de información en superficies tridimensionales de objetos de forma libre

    This work proposes a computational method to classify anomalies related to missing information in free-form three-dimensional models. To this end, a descriptive exploration of the global and local geometric properties of the anomalies was carried out, followed by an evaluation of different classification methods widely used in computer vision and 3D reconstruction applications. The proposed method achieves a classification accuracy close to 90% and an execution time of about 100 milliseconds. Constraining the classification according to the specific application is suggested as future work.
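
    The classification pipeline described above might look roughly like the sketch below, which computes a few illustrative global descriptors of an anomaly boundary (perimeter, planarity, elongation) and feeds them to an off-the-shelf classifier. The specific descriptors and the random-forest choice are assumptions made for illustration; the thesis evaluates its own set of geometric properties and several classifiers.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def boundary_descriptors(boundary):
        """Simple global geometric descriptors of an anomaly boundary,
        given as an (N, 3) array of ordered 3-D points."""
        closed = np.vstack([boundary, boundary[:1]])
        perimeter = np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()
        centred = boundary - boundary.mean(axis=0)
        eigvals = np.linalg.eigvalsh(centred.T @ centred / len(boundary))  # ascending
        planarity = eigvals[0] / (eigvals.sum() + 1e-12)   # near 0 for planar boundaries
        elongation = eigvals[1] / (eigvals[2] + 1e-12)     # near 1 for round boundaries
        return np.array([perimeter, planarity, elongation])

    def train_classifier(boundaries, labels):
        """Fit an off-the-shelf classifier on labelled anomaly boundaries."""
        X = np.array([boundary_descriptors(b) for b in boundaries])
        return RandomForestClassifier(n_estimators=100).fit(X, labels)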

    Dense RGB-D SLAM and object localisation for robotics and industrial applications

    Dense reconstruction and object localisation are two critical steps in robotic and industrial applications. The former entails a joint estimation of camera egomotion and the structure of the surrounding environment, also known as Simultaneous Localisation and Mapping (SLAM), and the latter aims to locate the object in the reconstructed scenes. This thesis addresses the challenges of dense SLAM with RGB-D cameras and object localisation towards robotic and industrial applications. Camera drift is an essential issue in camera egomotion estimation. Due to the accumulated error in camera pose estimation, the estimated camera trajectory is inaccurate, and the reconstruction of the environment is inconsistent. This thesis analyses camera drift in SLAM under the probabilistic inference framework and proposes an online map fusion strategy with standard deviation estimation based on frame-to-model camera tracking. The camera pose is estimated by aligning the input image with the global map model, and the global map merges the information in the images by weighted fusion with standard deviation modelling. In addition, a pre-screening step is applied before map fusion to preclude the adverse effect of accumulated errors and noise on camera egomotion estimation. Experimental results indicated that the proposed method mitigates camera drift and improves the global consistency of camera trajectories. Another critical challenge for dense RGB-D SLAM in industrial scenarios is to handle mechanical and plastic components that usually have reflective and shiny surfaces. Photometric alignment in frame-to-model camera tracking tends to fail on such objects due to the inconsistency in intensity patterns of the images and the global map model. This thesis addresses this problem and proposes RSO-SLAM, namely a SLAM approach to reflective and shiny object reconstruction. RSO-SLAM adopts frame-to-model camera tracking and combines local photometric alignment and global geometric registration. This study revealed the effectiveness and excellent performance of the proposed RSO-SLAM on both plastic and metallic objects. In addition, a case study involving the cover of an electric vehicle battery with a metallic surface demonstrated the superior performance of the RSO-SLAM approach in the reconstruction of a common industrial product. With the reconstructed point cloud model of the object, the problem of object localisation is tackled as point cloud registration in this thesis. Iterative Closest Point (ICP) is arguably the best-known method for point cloud registration, but it is susceptible to sub-optimal convergence due to the multimodal solution space. This thesis proposes the Bees Algorithm (BA) enhanced with the Singular Value Decomposition (SVD) procedure for point cloud registration. SVD accelerates the speed of the local search of the BA, helping the algorithm to rapidly identify the local optima. It also enhances the precision of the obtained solutions. At the same time, the global outlook of the BA ensures adequate exploration of the whole solution space. Experimental results demonstrated the remarkable performance of the SVD-enhanced BA in terms of consistency and precision. Additional tests on noisy datasets demonstrated the robustness of the proposed procedure to imprecision in the models.
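
    For the registration part, the SVD procedure that accelerates the Bees Algorithm's local search can be illustrated with the standard closed-form rigid alignment below (the Kabsch solution). How the BA generates and scores candidate correspondences is not reproduced here; the function is a generic sketch rather than the thesis's exact implementation.

    import numpy as np

    def svd_rigid_align(src, dst):
        """Closed-form least-squares rigid transform (R, t) mapping the
        (N, 3) points `src` onto their correspondences `dst`, obtained
        from the SVD of the cross-covariance matrix."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        H = (src - mu_s).T @ (dst - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        return R, t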

    Metodología para la corrección de huecos en imágenes de rango basada en conocimiento del dominio

    Shape reconstruction involves estimating a mathematical representation of the geometry of an object from a set of known measurements acquired from that object. The acquired samples are subject to numerous difficulties related to the acquisition process: the topology of the object, the structure of the sensor, the physical characteristics of the object's material, and the illumination conditions, among others. These are the main sources of anomalies in the sampled data, which affect the estimation of accurate representations of the objects. In this research, such anomalies are classified into three groups: presence of noise, formation of holes, and redundancy of information. To deal with these difficulties, a correction step known as the integration stage is typically applied; its importance lies in reducing the effect of these anomalies on the accuracy of the final representation. Whatever the type of anomaly, it corresponds to a broad area of study with numerous proposed techniques. Despite this, the treatment of these anomalies remains an area of continuous improvement and is still considered an open problem by the scientific community. The difficulty lies mainly in the fact that, in some cases, the exact nature of the source of these anomalies is unknown or complex to model, or simply that any solution to these problems necessarily carries a level of uncertainty. This has generated the need to develop user-assisted correction procedures. Although different geometric and mathematical approaches have been proposed, their main weakness is that their applications are limited in domain, owing to their limited flexibility in adapting to objects with different topologies. This document proposes a methodological approach for the correction of anomalies associated with missing information. Each discontinuity is classified as repairable or not, according to an estimated measure of its irregularity, and repaired by means of a Bayesian correction model.
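
    The repairable/non-repairable decision can be pictured with the toy Bayesian rule below, which turns a scalar irregularity measure into a posterior probability under Gaussian class-conditional densities. The parameter values, the single irregularity feature, and the Gaussian assumption are illustrative; the thesis's Bayesian correction model is more elaborate.

    import numpy as np

    # Illustrative class-conditional parameters (assumed, not taken from the thesis).
    MU    = {'repairable': 0.2, 'unrepairable': 0.7}
    VAR   = {'repairable': 0.02, 'unrepairable': 0.05}
    PRIOR = {'repairable': 0.6, 'unrepairable': 0.4}

    def posterior_repairable(irregularity):
        """Posterior probability that a discontinuity is repairable,
        via Bayes' rule over the two classes."""
        def likelihood(c):
            return np.exp(-(irregularity - MU[c]) ** 2 / (2 * VAR[c])) / np.sqrt(2 * np.pi * VAR[c])
        num = likelihood('repairable') * PRIOR['repairable']
        den = num + likelihood('unrepairable') * PRIOR['unrepairable']
        return num / den

    print(posterior_repairable(0.3))   # e.g. treat as repairable if the posterior exceeds 0.5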

    3D modeling of optically challenging objects

    We present a system for constructing 3D models of real-world objects with optically challenging surfaces. The system utilizes a new range imaging concept called multipeak range imaging, which stores multiple candidates of range measurements for each point on the object surface. The multiple measurements include the erroneous range data caused by various surface properties that are not ideal for structured-light range sensing. False measurements generated by spurious reflections are eliminated by applying a series of constraint tests. The constraint tests based on local surface and local sensor visibility are applied first to individual range images. The constraint tests based on global consistency of coordinates and visibility are then applied to all range images acquired from different viewpoints. We show the effectiveness of our method by constructing 3D models of five different optically challenging objects. To evaluate the performance of the constraint tests and to examine the effects of the parameters used in the constraint tests, we acquired the ground-truth data by painting those objects to suppress the surface-related properties that cause difficulties in range sensing. Experimental results indicate that our method significantly improves upon the traditional methods for constructing reliable 3D models of optically challenging objects.
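
    To make the multipeak idea concrete, the sketch below keeps several depth candidates per pixel and prunes those that no neighbouring pixel can corroborate. This is only a toy stand-in for the paper's constraint tests, which are based on local surface orientation, sensor visibility, and global coordinate consistency rather than a simple neighbourhood tolerance.

    def filter_candidates(candidates, tol=2.0, min_support=3):
        """Toy local-consistency test over multipeak range data.

        candidates[r][c] is the list of candidate depths stored for pixel
        (r, c). A candidate survives if at least `min_support` of the 8
        neighbouring pixels contain some candidate within `tol` of it.
        """
        rows, cols = len(candidates), len(candidates[0])
        kept = [[[] for _ in range(cols)] for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                for d in candidates[r][c]:
                    support = 0
                    for dr in (-1, 0, 1):
                        for dc in (-1, 0, 1):
                            if dr == 0 and dc == 0:
                                continue
                            rr, cc = r + dr, c + dc
                            if 0 <= rr < rows and 0 <= cc < cols and \
                               any(abs(d - dn) < tol for dn in candidates[rr][cc]):
                                support += 1
                    if support >= min_support:
                        kept[r][c].append(d)
        return kept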