
    Symmetry Signatures for Image-Based Applications in Robotics


    Toward an object-based semantic memory for long-term operation of mobile service robots

    Throughout a lifetime of operation, a mobile service robot needs to acquire, store and update its knowledge of a working environment. This includes the ability to identify and track objects in different places, as well as to use this information when interacting with humans. This paper introduces a long-term updating mechanism, inspired by the modal model of human memory, to enable a mobile robot to maintain its knowledge of a changing environment. The memory model is integrated with a hybrid map that represents the global topology and local geometry of the environment, as well as the respective 3D locations of objects. We aim to enable the robot to use this knowledge to help humans by suggesting the most likely locations of specific objects in its map. An experiment using omni-directional vision demonstrates the ability to track the movements of several objects in a dynamic environment over an extended period of time.
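    The abstract above does not include implementation details; the following is a minimal, hypothetical sketch of how an object memory over a topological map could suggest the most likely location of an object by weighting past observations by recency. All class and parameter names, and the exponential-forgetting rule, are illustrative assumptions rather than the authors' method.

```python
import time
from collections import defaultdict

class ObjectMemory:
    """Toy object memory over a topological map: each observation links an
    object label to a map node (and a 3D position) with a timestamp."""

    def __init__(self, half_life_s=3600.0):
        self.half_life_s = half_life_s          # recency decay constant (assumed value)
        self.observations = defaultdict(list)   # label -> [(node, xyz, t), ...]

    def observe(self, label, node, xyz, t=None):
        self.observations[label].append((node, xyz, t if t is not None else time.time()))

    def most_likely_node(self, label, now=None):
        """Score each node by a recency-weighted observation count and return the best."""
        now = now if now is not None else time.time()
        scores = defaultdict(float)
        for node, _xyz, t in self.observations.get(label, []):
            age = max(0.0, now - t)
            scores[node] += 0.5 ** (age / self.half_life_s)   # exponential forgetting
        return max(scores, key=scores.get) if scores else None

# Example: a cup seen twice in the kitchen, then (more recently) in the office.
mem = ObjectMemory()
mem.observe("cup", "kitchen", (1.0, 2.0, 0.8), t=0.0)
mem.observe("cup", "kitchen", (1.1, 2.1, 0.8), t=600.0)
mem.observe("cup", "office",  (4.0, 0.5, 0.7), t=7000.0)
print(mem.most_likely_node("cup", now=7200.0))   # likely "office"
```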

    The Geometry and Usage of the Supplementary Fisheye Lenses in Smartphones

    Nowadays, mobile phones are far more than devices that merely satisfy the need for communication between people. Fisheye lenses designed for mobile phones are lightweight and easy to use, which makes them attractive; this study examines whether the combination of a fisheye lens and a mobile phone can be used photogrammetrically and, if so, with what results. Fisheye lens equipment used with mobile phones was tested: the standard calibration of the 'Olloclip 3 in one' fisheye lens used with an iPhone 4S was compared with that of the 'Nikon FC-E9' fisheye lens used with a Nikon Coolpix 8700, both based on the equidistant projection model. This experimental study shows that the Olloclip 3 in one fisheye lens developed for mobile phones has characteristics at least comparable to classic fisheye lenses. The dimensions of fisheye lenses used with smartphones are getting smaller and their prices are falling; moreover, as verified in this study, the accuracy of fisheye lenses used in smartphones is better than that of conventional fisheye lenses. The use of smartphones with fisheye lenses will open up practical applications to ordinary users in the near future.
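    For context, the equidistant model mentioned above maps the angle θ between an incoming ray and the optical axis to an image radius r = f·θ, whereas an ordinary perspective lens gives r = f·tan θ. A minimal numeric illustration (the focal length is an arbitrary assumption):

```python
import math

def equidistant_radius(theta_rad, f_mm):
    """Equidistant fisheye projection: image radius grows linearly with the ray angle."""
    return f_mm * theta_rad

def perspective_radius(theta_rad, f_mm):
    """Ordinary perspective (pinhole) projection, for comparison."""
    return f_mm * math.tan(theta_rad)

f = 8.0  # assumed focal length in mm, purely illustrative
for deg in (10, 30, 60, 85):
    th = math.radians(deg)
    print(f"{deg:>2} deg  equidistant: {equidistant_radius(th, f):6.2f} mm   "
          f"perspective: {perspective_radius(th, f):7.2f} mm")
# Near 90 degrees the perspective radius diverges, which is why wide fields of
# view are handled with fisheye models such as the equidistant one.
```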

    Targeting a Practical Approach for Robot Vision with Ensembles of Visual Features

    We approach the task of topological localization in mobile robotics without relying on temporal continuity between successive images. The information about the environment is contained in images taken with a perspective colour camera mounted on a robot platform. The main contributions of this work are quantitative evaluations of a wide variety of global and local invariant features and of different distance measures. We focus on finding the optimal set of features, for which an in-depth analysis was carried out. The characteristics of the different features were analysed using widely known dissimilarity measures and graphical views of the overall performances. The quality of the resulting configurations is also tested in the localization stage by means of location recognition in the Robot Vision task, through participation in the ImageCLEF International Evaluation Campaign. The long-term goal of this project is to develop integrated, stand-alone capabilities for real-time topological localization under varying illumination conditions and over longer routes.
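    As a rough illustration of the kind of comparison involved, the sketch below matches a query descriptor against reference descriptors of known locations using one common dissimilarity measure (the chi-square distance between normalized histograms). The descriptor type, the measure, and all names are only examples, not the specific configuration selected in the paper.

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-square dissimilarity between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def recognize_location(query_hist, reference_db):
    """Return the label of the reference histogram closest to the query."""
    return min(reference_db, key=lambda label: chi2_distance(query_hist, reference_db[label]))

# Toy reference database: one global descriptor (e.g., a colour histogram) per location.
rng = np.random.default_rng(0)
db = {name: rng.random(64) for name in ("corridor", "kitchen", "printer_area")}
db = {k: v / v.sum() for k, v in db.items()}          # normalize each histogram

query = db["kitchen"] + 0.01 * rng.random(64)         # noisy re-observation of the kitchen
query = query / query.sum()
print(recognize_location(query, db))                  # expected: "kitchen"
```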

    Computer Vision and Image Understanding xxx

    This paper presents a panoramic virtual stereo vision approach to the problem of detecting and localizing multiple moving objects (e.g., humans) in an indoor scene. Two panoramic cameras, residing on different mobile platforms, compose a virtual stereo sensor with a flexible baseline. A novel "mutual calibration" algorithm is proposed, in which panoramic cameras on two cooperative moving platforms are dynamically calibrated by looking at each other. A detailed numerical analysis of the error characteristics of the panoramic virtual stereo vision (mutual calibration error, stereo matching error, and triangulation error) is given to derive rules for optimal view planning. Experimental results are discussed for detecting and localizing multiple humans in motion using two cooperative robot platforms.
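    The triangulation step of such a virtual stereo sensor can be illustrated in 2D: given the two camera positions (the flexible baseline) and the bearing of the target seen from each camera, the target lies at the intersection of the two bearing rays. The sketch below is a simplified planar version under that assumption, not the authors' algorithm; all names are illustrative.

```python
import numpy as np

def triangulate_2d(cam_a, bearing_a, cam_b, bearing_b):
    """Intersect two bearing rays (angles in radians, measured from the x-axis)
    emanating from cam_a and cam_b; returns the 2D target position."""
    da = np.array([np.cos(bearing_a), np.sin(bearing_a)])
    db = np.array([np.cos(bearing_b), np.sin(bearing_b)])
    # Solve cam_a + s*da = cam_b + t*db for the ray parameters s and t.
    A = np.column_stack((da, -db))
    s, _t = np.linalg.solve(A, np.array(cam_b) - np.array(cam_a))
    return np.array(cam_a) + s * da

# Two robots 2 m apart, both observing a person standing at (3, 4).
p = triangulate_2d((0.0, 0.0), np.arctan2(4, 3), (2.0, 0.0), np.arctan2(4, 1))
print(p)   # approximately [3. 4.]
```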

    Modeling the environment with egocentric vision systems

    More and more autonomous systems, whether robots or assistive systems, are present in our daily lives. These systems interact with their environment, and to do so they need a model of that environment. Depending on the tasks they must perform, the information or level of detail required of the model varies, from detailed 3D models for autonomous navigation systems to semantic models that include information relevant to the user, such as the type of area or which objects are present. These models are created from the readings of the different sensors available on the system. Nowadays, thanks to their small size, low price and the rich information they are able to capture, cameras are sensors included in every autonomous system. The goal of this thesis is to develop and study new methods for building models of the environment at different semantic levels and with different levels of accuracy. Two important points characterize the work developed in this thesis:
    - The use of cameras with an egocentric, or first-person, point of view, whether on a robot or on a system worn by the user (wearable). In this kind of system the cameras move rigidly with the mobile platform on which they are mounted. In recent years many wearable vision systems have appeared, used for a multitude of applications, from leisure to personal assistance.
    - The use of omnidirectional vision systems, distinguished by their large field of view, which include much more information in each image than conventional cameras, but raise new difficulties due to distortion and more complex projection models.
    This thesis studies different types of environment models:
    - Metric models: the objective of these models is to create detailed representations of the environment in which the autonomous system can be precisely localized. This thesis focuses on adapting these models to omnidirectional vision, which captures more information in each image and improves localization results.
    - Topological models: these models structure the environment as nodes connected by edges. This representation is less precise than a metric one, but offers a higher level of abstraction and can model the environment more richly. This thesis focuses on building topological models with additional information about the type of area of each node and connection (corridor, room, doors, stairs, ...).
    - Semantic models: this work also contributes new semantic models, aimed at applications in which the system interacts with or assists a person. Such models represent the environment through concepts close to those used by people. In particular, this thesis develops techniques to obtain and propagate semantic information about the environment in image sequences.
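    As a small illustration of the topological models described above, the sketch below stores nodes labelled with an area type and edges labelled with a connection type; the structure and names are illustrative assumptions, not the thesis's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class TopoNode:
    """Node of a topological model: a place with a semantic area label."""
    name: str
    area_type: str                                    # e.g. "corridor", "room", "stairs"
    neighbours: dict = field(default_factory=dict)    # neighbour name -> connection type

def connect(a, b, connection_type):
    """Add a labelled, undirected connection between two nodes."""
    a.neighbours[b.name] = connection_type
    b.neighbours[a.name] = connection_type

# Tiny labelled topological map: a corridor connected to a room through a door.
corridor = TopoNode("corridor_1", "corridor")
office = TopoNode("office_12", "room")
connect(corridor, office, "door")
print(office.neighbours)   # {'corridor_1': 'door'}
```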

    Map-Based Localization for Unmanned Aerial Vehicle Navigation

    Unmanned Aerial Vehicles (UAVs) require precise pose estimation when navigating in indoor and GNSS-denied / GNSS-degraded outdoor environments. The possibility of crashing in these environments is high, as spaces are confined, with many moving obstacles. There are many solutions for localization in GNSS-denied environments, using many different technologies. Common solutions involve setting up or using existing infrastructure, such as beacons, Wi-Fi, or surveyed targets. These solutions were avoided because the cost should be proportional to the number of users, not the coverage area. Heavy and expensive sensors, for example a high-end IMU, were also avoided. Given these requirements, a camera-based localization solution was selected for sensor pose estimation. Several camera-based localization approaches were investigated. Map-based localization methods were shown to be the most efficient because they close loops using a pre-existing map; thus the amount of data and the time spent collecting it are reduced, as there is no need to re-observe the same areas multiple times. This dissertation proposes a solution to the task of fully localizing a monocular camera onboard a UAV with respect to a known environment (i.e., it is assumed that a 3D model of the environment is available) for the purpose of UAV navigation in structured environments. Incremental map-based localization involves tracking a map through an image sequence. When the map is a 3D model, this task is referred to as model-based tracking. A by-product of the tracker is the relative 3D pose (position and orientation) between the camera and the object being tracked. State-of-the-art solutions advocate that tracking geometry is more robust than tracking image texture because edges are more invariant to changes in object appearance and lighting. However, model-based trackers have been limited to tracking small, simple objects in small environments. An assessment was performed in tracking larger, more complex building models in larger environments. A state-of-the-art model-based tracker called ViSP (Visual Servoing Platform) was applied to tracking outdoor and indoor buildings using a UAV's low-cost camera. The assessment revealed weaknesses at large scales. Specifically, ViSP failed when tracking was lost and needed to be manually re-initialized. Failure occurred when there was a lack of model features in the camera's field of view, and because of rapid camera motion. Experiments revealed that ViSP achieved positional accuracies similar to single point positioning solutions obtained from single-frequency (L1) GPS observations, with standard deviations around 10 metres. These errors were considered large, given that the geometric accuracy of the 3D model used in the experiments was 10 to 40 cm. The first contribution of this dissertation is to increase the performance of the localization system by combining ViSP with map-building incremental localization, also referred to as simultaneous localization and mapping (SLAM). Experimental results in both indoor and outdoor environments show that sub-metre positional accuracies were achieved, while reducing the number of tracking losses throughout the image sequence. It is shown that by integrating model-based tracking with SLAM, not only does SLAM improve model tracking performance, but the model-based tracker alleviates the computational expense of SLAM's loop-closing procedure to improve runtime performance.
    Experiments also revealed that ViSP was unable to handle occlusions when a complete 3D building model was used, resulting in large errors in its pose estimates. The second contribution of this dissertation is a novel map-based incremental localization algorithm that improves tracking performance and increases pose estimation accuracy over ViSP. The novelty of this algorithm is an efficient matching process that identifies corresponding linear features between the UAV's RGB image data and a large, complex, and untextured 3D model. The proposed model-based tracker improved positional accuracies from 10 m (obtained with ViSP) to 46 cm in outdoor environments, and from an unattainable result using ViSP to 2 cm in large indoor environments. The main disadvantage of any incremental algorithm is that it requires the camera pose of the first frame, and initialization is often a manual process. The third contribution of this dissertation is a map-based absolute localization algorithm that automatically estimates the camera pose when no prior pose information is available. The method benefits from vertical line matching to accomplish a registration procedure of the reference model views with a set of initial input images via geometric hashing. Results demonstrate that sub-metre positional accuracies were achieved, and a proposed enhancement of conventional geometric hashing produced more correct matches: 75% of the correct matches were identified, compared to 11%. Further, the number of incorrect matches was reduced by 80%.
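    The abstract does not detail the geometric hashing procedure; as general background, the sketch below is a generic 2D point-based geometric hashing example (hash model points expressed in basis-pair coordinate frames, then vote with scene points expressed in a chosen scene basis). It is not the vertical-line variant or the enhancement proposed in the dissertation, and the quantization step and names are arbitrary assumptions.

```python
import numpy as np
from collections import defaultdict

def basis_coords(points, b0, b1):
    """Express points in the similarity-invariant frame defined by basis pair (b0, b1)."""
    origin = np.asarray(b0, float)
    e1 = np.asarray(b1, float) - origin
    e2 = np.array([-e1[1], e1[0]])                    # e1 rotated by 90 degrees
    M = np.column_stack((e1, e2))
    return (np.asarray(points, float) - origin) @ np.linalg.inv(M).T

def build_table(model, step=0.25):
    """Preprocessing: hash quantized basis coordinates of every model point under every basis pair."""
    table = defaultdict(list)
    n = len(model)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            for coord in basis_coords(model, model[i], model[j]):
                key = tuple(np.round(coord / step).astype(int))
                table[key].append((i, j))
    return table

def vote(table, scene, b0, b1, step=0.25):
    """Recognition: vote for the model basis pairs consistent with the chosen scene basis."""
    votes = defaultdict(int)
    for coord in basis_coords(scene, scene[b0], scene[b1]):
        key = tuple(np.round(coord / step).astype(int))
        for basis in table[key]:
            votes[basis] += 1
    return max(votes.items(), key=lambda kv: kv[1]) if votes else None

# Toy example: the scene is the model rotated, scaled and translated.
model = np.array([[0, 0], [2, 0], [2, 1], [0, 1], [1, 2]], float)
theta, s, t = 0.7, 1.5, np.array([4.0, -3.0])
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
scene = s * model @ R.T + t
table = build_table(model)
print(vote(table, scene, 0, 1))     # best-voted model basis should be (0, 1)
```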

    Applications of Intelligent Vision in Low-Cost Mobile Robots

    With the development of intelligent information technology, we have entered an era of 5G and AI. Mobile robots embody both of these technologies, and as such play an important role in future developments. However, perception vision in consumer-grade, low-cost mobile robots is still in its infancy. As edge computing becomes more widespread, high-performance vision perception algorithms are expected to be deployed on low-power edge computing chips. Within the context of low-cost mobile robotic solutions, a robot intelligent vision system is studied and developed in this thesis. The thesis proposes and designs the overall framework of the higher-level intelligent vision system. The core system comprises automatic robot navigation and obstacle object detection, and the core algorithms are deployed on a low-power embedded platform. The thesis analyzes and investigates deep learning neural network algorithms for obstacle object detection in intelligent vision systems. By comparing a variety of open-source object detection networks on high-performance hardware platforms and taking the constraints of the target hardware into account, a suitable neural network algorithm is selected. The thesis then optimizes the selected network for the characteristics and constraints of the low-power hardware platform. It introduces the minimum mean square error (MMSE) and moving-average min-max algorithms into the quantization process to reduce the accuracy loss of the quantized model. The results show that the optimized neural network achieves a 20-fold improvement in inference performance on the RK3399PRO hardware platform compared to the original network. The thesis concludes with the application of the above modules and systems to a higher-level intelligent vision system for a low-cost disinfection robot, with further optimization for the hardware platform. The test results show that, while providing its basic service functions, the robot can accurately identify obstacles ahead and localize and navigate in real time, which greatly enhances the perception capability of the low-cost mobile robot.
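    As background for the moving-average min-max idea mentioned above, the sketch below shows how a smoothed running estimate of the activation range can be turned into an int8 scale and zero point for asymmetric quantization. It is a generic illustration, not the thesis's deployment code for the RK3399PRO, and the momentum value is an assumption.

```python
import numpy as np

class MovingAvgMinMaxObserver:
    """Track a smoothed activation range for post-training quantization calibration."""

    def __init__(self, momentum=0.9):
        self.momentum = momentum
        self.min_val = None
        self.max_val = None

    def update(self, batch):
        lo, hi = float(np.min(batch)), float(np.max(batch))
        if self.min_val is None:
            self.min_val, self.max_val = lo, hi
        else:
            m = self.momentum
            self.min_val = m * self.min_val + (1 - m) * lo    # exponential moving average
            self.max_val = m * self.max_val + (1 - m) * hi    # dampens outlier batches

    def qparams(self, qmin=-128, qmax=127):
        """Derive an int8 scale and zero point from the smoothed range."""
        scale = (self.max_val - self.min_val) / (qmax - qmin)
        zero_point = int(round(qmin - self.min_val / scale))
        return scale, zero_point

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Asymmetric affine quantization of a float array to int8."""
    return np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)

# Calibrate on a few batches of simulated activations, then quantize a new one.
obs = MovingAvgMinMaxObserver()
rng = np.random.default_rng(0)
for _ in range(10):
    obs.update(rng.normal(0.5, 2.0, size=1024))
scale, zp = obs.qparams()
print(quantize(rng.normal(0.5, 2.0, size=5), scale, zp))
```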