
    Towards Robust Visual-Controlled Flight of Single and Multiple UAVs in GPS-Denied Indoor Environments

    Having had its origins in the minds of science-fiction authors, mobile robot hardware became reality many years ago. However, most envisioned applications have remained fictional - a fact likely caused by the lack of sufficient perception systems. In particular, mobile robots need to be aware of their own location with respect to their environment at all times in order to act in a reasonable manner. Nevertheless, a promising application for mobile robots in the near future is, e.g., search and rescue on disaster sites. Here, small and agile flying robots are an ideal tool to quickly create an overview of the scene, since they are largely unaffected by unstructured environments and blocked passageways. In this respect, this thesis first explores the problem of ego-motion estimation for quadrotor Unmanned Aerial Vehicles (UAVs) based entirely on onboard sensing and processing hardware. To this end, cameras are an ideal choice as the major sensory modality: they are light, cheap, and provide rich information on the environment. While the literature provides camera-based algorithms to estimate and track the pose of UAVs over time, these solutions lack the robustness required for many real-world applications due to their inability to recover quickly from a loss of tracking. Therefore, in the first part of this thesis, a robust algorithm to estimate the velocity of a quadrotor UAV based on optical flow is presented. Additionally, the influence of measurements from an Inertial Measurement Unit (IMU) on the precision of the velocity estimates is discussed and experimentally validated. Finally, we introduce a novel nonlinear observation scheme to recover the metric scale factor of the state estimate through fusion with acceleration measurements. This nonlinear model allows us to predict the convergence behavior of the presented filtering approach.
All findings are experimentally evaluated, including the first reported human-controlled closed-loop flights based entirely on onboard velocity estimation. In the second part of this thesis, we address the problem of collaborative multi-robot operations based on onboard visual perception. When a direct line of sight exists between the robots, we propose a distributed formation control based on ego-motion estimation and visually detected bearing angles between the members of the formation. To overcome the limited field of view of real cameras, we add an artificial yaw rotation to track robots that would otherwise be invisible to static cameras. Afterwards, we present a novel contribution to the mutual localization problem that does not require direct visual detections. In particular, we demonstrate precise global localization of a monocular camera with respect to a dense 3D map. To this end, we propose an iterative algorithm that estimates the camera pose for which the photometric error between a synthesized view of the dense map and the real camera image is minimal.
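The scale-recovery idea can be illustrated with a toy sketch: optical flow yields velocity only up to an unknown metric scale, while the accelerometer measures metric acceleration, so comparing the time derivative of the scaled velocity against the measured acceleration reveals the scale. The batch least-squares estimator below is a simplified illustration of this observability argument, not the thesis's nonlinear observer; all names are hypothetical.

```python
import numpy as np

def estimate_scale(v_scaled, accel, dt):
    """Least-squares estimate of the metric scale factor lam.

    The optical-flow velocity v_scaled = lam * v_true is known only up
    to scale; the accelerometer measures a = dv_true/dt.  The numerical
    derivative of v_scaled is therefore lam * a, and lam follows from a
    1-D least-squares fit.
    """
    dv = np.diff(v_scaled) / dt      # approx lam * dv_true/dt
    a = accel[:-1]                   # align with the forward differences
    # lam minimising ||dv - lam * a||^2
    return float(np.dot(dv, a) / np.dot(a, a))
```

In practice the thesis fuses these signals in a nonlinear filter running online; the closed-form fit above only shows why the scale becomes observable whenever the vehicle accelerates (the estimate is undefined for zero acceleration).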

    Precise and Robust Visual SLAM with Inertial Sensors and Deep Learning.

    Endowing robots with the sense of perception stands out as the most important component in achieving fully autonomous machines. Once machines can perceive the world, they will be able to interact with it. In this respect, Simultaneous Localization and Mapping (SLAM) comprises the techniques that allow robots to estimate their position and reconstruct a map of their environment at the same time, using only their onboard sensors. SLAM is the key element for machine perception and is already present in technologies and applications such as autonomous driving, virtual and augmented reality, and service robots. Increasing the robustness of SLAM would broaden its use and application, making machines safer and requiring less human intervention. In this thesis we combine inertial (IMU) and visual sensors to increase the robustness of SLAM against fast motion, short occlusions, and low-texture environments. We first propose two fast techniques for inertial-sensor initialization with low scale error, which allow the IMU to be used as early as 2 seconds after launching the system. One of these initializations has been integrated into a new visual-inertial SLAM system, named ORB-SLAM3, which represents the main contribution of this thesis. It is the most complete open-source visual-inertial SLAM system to date, working with monocular or stereo cameras, pinhole or fisheye lenses, and with multi-map capabilities. ORB-SLAM3 is based on a Maximum-a-Posteriori formulation, both in the initialization and in the visual-inertial refinement and bundle adjustment, and it exploits short-, mid-, and long-term data association.
All of this makes ORB-SLAM3 the most accurate visual-inertial SLAM system, as demonstrated by our results on public benchmarks. In addition, we explore the application of deep learning techniques to improve the robustness of SLAM. We first propose DynaSLAM II, a stereo SLAM system for dynamic environments. Dynamic objects are segmented with a neural network, and their points and measurements are efficiently included in the bundle-adjustment optimization. This makes it possible to estimate and track moving objects while improving the estimate of the camera trajectory. Second, we develop a direct monocular SLAM system based on depth predictions from neural networks. We jointly optimize the depth-prediction residuals and the multi-view photometric residuals, yielding a monocular system capable of estimating metric scale. It does not suffer from scale drift and is more robust and several times more accurate than classical monocular systems.
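The core of combining network depth predictions with an up-to-scale monocular reconstruction can be sketched in a few lines. This is a deliberately simplified illustration, not the system's joint optimization: a real system minimizes photometric and depth residuals together over poses and depths, whereas here only a single global scale is solved in closed form, and all names are hypothetical.

```python
import numpy as np

def align_scale(d_visual, d_network):
    """Closed-form scale s minimising sum_i (s * d_visual[i] - d_network[i])^2.

    d_visual  : up-to-scale depths from a monocular SLAM front end
    d_network : metric depth predictions from a neural network (noisy)

    Setting the derivative of the quadratic cost to zero gives
    s = <d_visual, d_network> / <d_visual, d_visual>.
    """
    d_visual = np.asarray(d_visual, dtype=float)
    d_network = np.asarray(d_network, dtype=float)
    return float(np.dot(d_visual, d_network) / np.dot(d_visual, d_visual))
```

Because the network supplies metric depths, the recovered scale anchors the whole reconstruction, which is the intuition behind a monocular system that does not drift in scale.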

    RELIABILITY TESTING & BAYESIAN MODELING OF HIGH POWER LEDS FOR USE IN A MEDICAL DIAGNOSTIC APPLICATION

    While the use of LEDs in fiber optics and lighting applications is common, their use in medical diagnostic applications is rare. Since the precise value of light intensity is used to interpret patient results, understanding failure modes is very important. The contribution of this thesis is that it presents the first reliability measurements of AlGaInP LEDs under the medical-environment regime of short pulse bursts, and hence uncovers unique failure mechanisms. Through accelerated life tests (ALT), a reliability degradation model has been developed, and other LED failure modes have been compared through a failure modes and effects criticality analysis (FMECA). Appropriate ALTs and accelerated degradation tests (ADT) were designed and carried out for commercially available AlGaInP LEDs. The bias conditions were current pulse magnitude and duration, current density, and temperature. The data were fitted both to an Inverse Power Law model with current density J as the accelerating agent and to an Arrhenius model with temperature T as the accelerating agent. The optical degradation during ALT/ADT was found to be logarithmic with time at each test temperature. Further, the LED bandgap temporarily shifts towards longer wavelengths at high current and high junction temperature. Empirical coefficients for Varshni's equation were determined, and are now available for future reliability tests of LEDs for medical applications. In order to incorporate prior knowledge, a Bayesian analysis was carried out for the LEDs. This consisted of identifying pertinent prior data and combining the experimental ALT results into a Weibull probability model for time-to-failure determination. The Weibull-based Bayesian likelihood function was derived. For the first Bayesian updating, a uniform distribution was used as the prior for the Weibull α-β parameters, and previously published data were used as evidence to obtain the first posterior joint α-β distribution.
For the second Bayesian updating, the ALT data were used as evidence to obtain the second posterior joint α-β distribution. The predictive posterior failure distribution was estimated by averaging over the range of α-β values. This research provides a unique contribution in reliability degradation model development based on physics of failure by modeling the LED output characterization (logarithmic degradation, TTF β < 1), temperature dependence, and a degree-of-relevance parameter `R' in the Bayesian analysis.
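The updating scheme described here - a uniform prior over the Weibull α-β parameters, with failure-time data folded in through the Weibull likelihood - can be illustrated with a small grid-based posterior. This is a generic sketch, not the thesis's actual analysis: the parameterization (scale α, shape β), the grid ranges, and all names are illustrative assumptions.

```python
import numpy as np

def weibull_loglik(t_fail, alpha, beta):
    """Log-likelihood of failure times under Weibull(scale=alpha, shape=beta)."""
    t = np.asarray(t_fail, dtype=float)
    return np.sum(np.log(beta / alpha)
                  + (beta - 1.0) * np.log(t / alpha)
                  - (t / alpha) ** beta)

def posterior_grid(t_fail, alphas, betas):
    """Joint posterior over an (alpha, beta) grid, starting from a
    uniform prior, as in a grid-based Bayesian updating step."""
    logp = np.array([[weibull_loglik(t_fail, a, b) for b in betas]
                     for a in alphas])
    logp -= logp.max()            # subtract max for numerical stability
    p = np.exp(logp)
    return p / p.sum()            # normalise to a probability mass
```

A second updating with new evidence would simply reuse this posterior (instead of the uniform prior) and multiply in the likelihood of the new data; the predictive failure distribution then averages the Weibull model over the posterior grid.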

    The Pan-STARRS Moving Object Processing System

    We describe the Pan-STARRS Moving Object Processing System (MOPS), a modern software package that produces automatic asteroid discoveries and identifications from catalogs of transient detections from next-generation astronomical survey telescopes. MOPS achieves >99.5% efficiency in producing orbits from a synthetic but realistic population of asteroids whose measurements were simulated for a Pan-STARRS4-class telescope. Additionally, using a non-physical grid population, we demonstrate that MOPS can detect populations of currently unknown objects such as interstellar asteroids. MOPS has been adapted successfully to the prototype Pan-STARRS1 telescope despite differences in expected false detection rates, fill-factor loss, and relatively sparse observing cadence compared to a hypothetical Pan-STARRS4 telescope and survey. MOPS remains >99.5% efficient at detecting objects on a single night but drops to 80% efficiency at producing orbits for objects detected on multiple nights. This loss is primarily due to configurable MOPS processing limits that are not yet tuned for the Pan-STARRS1 mission. The core MOPS software package is the product of more than 15 person-years of software development and incorporates countless additional years of effort in third-party software to perform lower-level functions such as spatial searching or orbit determination. We describe the high-level design of MOPS and its essential subcomponents, the suitability of MOPS for other survey programs, and suggest a road map for future MOPS development.
    Comment: 57 pages, 26 figures, 13 tables
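The kind of lower-level spatial searching a moving-object pipeline performs can be illustrated with a brute-force toy: pair detections from two exposures whose on-sky separation is below a motion limit, forming candidate tracklets. This is not MOPS code; the names and the flat-sky Euclidean distance are illustrative assumptions (a production pipeline uses proper spherical geometry and spatial indexing such as kd-trees to avoid the quadratic cost).

```python
import numpy as np

def link_tracklets(det_t1, det_t2, max_motion_deg):
    """Return index pairs (i, j) of detections from two exposures whose
    separation is below a motion limit -- candidate tracklet links.

    det_t1, det_t2 : (N, 2) and (M, 2) arrays of (RA, Dec) in degrees,
                     treated here as flat coordinates for simplicity.
    """
    # Pairwise distances via broadcasting: shape (N, M)
    d = np.linalg.norm(det_t1[:, None, :] - det_t2[None, :, :], axis=2)
    i, j = np.nonzero(d < max_motion_deg)
    return list(zip(i.tolist(), j.tolist()))
```

Tightening `max_motion_deg` trades completeness for false-link rate, which mirrors the kind of configurable processing limits the abstract mentions.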

    A survey on real-time 3D scene reconstruction with SLAM methods in embedded systems

    3D scene reconstruction with simultaneous localization and mapping (SLAM) is an important topic for transport systems such as drones, service robots, and mobile AR/VR devices. Compared to a point-cloud representation, 3D reconstruction based on meshes and voxels is particularly useful for high-level functions such as obstacle avoidance or interaction with the physical environment. This article reviews the implementation of visual 3D scene reconstruction pipelines on resource-constrained hardware platforms, where real-time performance, memory management, and low power consumption are critical. A conventional SLAM pipeline from sensors to 3D reconstruction is described, including the potential use of deep learning, and the implementation of advanced functions with limited resources is detailed. Recent systems propose embedded implementations of 3D reconstruction methods at different granularities. The trade-off between required accuracy and resource consumption for real-time localization and reconstruction is one of the open research questions identified and discussed in this paper.
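The advantage of a voxel representation over a raw point cloud for functions like obstacle avoidance comes down to constant-time occupancy queries. The sketch below shows the basic discretisation step only; it is a minimal illustration with hypothetical names, not any surveyed system's data structure (embedded systems typically use hashed or hierarchical voxel grids with signed-distance values rather than a plain set).

```python
import numpy as np

def voxelize(points, voxel_size):
    """Map a point cloud to the set of occupied voxel index triples.

    points     : (N, 3) array of 3D points in metres
    voxel_size : edge length of a cubic voxel in metres
    """
    idx = np.floor(np.asarray(points, dtype=float) / voxel_size).astype(int)
    return {tuple(v) for v in idx}

def is_occupied(occupied, point, voxel_size):
    """O(1) occupancy query -- the operation an obstacle-avoidance
    module needs, which a raw point cloud cannot answer directly."""
    return tuple(np.floor(np.asarray(point) / voxel_size).astype(int)) in occupied
```

Memory scales with the number of occupied voxels rather than with the number of points, which is the trade-off between accuracy (voxel size) and resource consumption discussed in the survey.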