
    Estimation, planning, and mapping for autonomous flight using an RGB-D camera in GPS-denied environments

    RGB-D cameras provide both color images and per-pixel depth estimates. The richness of this data and the recent development of low-cost sensors have combined to present an attractive opportunity for mobile robotics research. In this paper, we describe a system for visual odometry and mapping using an RGB-D camera, and its application to autonomous flight. By leveraging results from recent state-of-the-art algorithms and hardware, our system enables 3D flight in cluttered environments using only onboard sensor data. All computation and sensing required for local position control are performed onboard the vehicle, reducing the dependence on an unreliable wireless link to a ground station. However, even with accurate 3D sensing and position estimation, some parts of the environment have more perceptual structure than others, leading to state estimates that vary in accuracy across the environment. If the vehicle plans a path without regard to how well it can localize itself along that path, it runs the risk of becoming lost or worse. We show how the belief roadmap algorithm (Prentice and Roy, 2009), a belief-space extension of the probabilistic roadmap algorithm, can be used to plan vehicle trajectories that incorporate the sensing model of the RGB-D camera. We evaluate the effectiveness of our system for controlling a quadrotor micro air vehicle, demonstrate its use for constructing detailed 3D maps of an indoor environment, and discuss its limitations.

    Funding: United States. Office of Naval Research (Grant MURI N00014-07-1-0749); United States. Office of Naval Research (Science of Autonomy Program N00014-09-1-0641); United States. Army Research Office (MAST CTA); United States. Office of Naval Research. Multidisciplinary University Research Initiative (Grant N00014-09-1-1052); National Science Foundation (U.S.) (Contract IIS-0812671); United States. Army Research Office (Robotics Consortium Agreement W911NF-10-2-0016); National Science Foundation (U.S.). Division of Information, Robotics, and Intelligent Systems (Grant 0546467)
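    The belief roadmap idea can be illustrated with a minimal sketch (not the authors' implementation): candidate paths through a roadmap are scored by propagating the state covariance along each edge with an EKF-style predict/update, where the measurement information available at each node stands in for how much perceptual structure the RGB-D camera would see there; the planner then prefers the path that ends with the least localization uncertainty. The graph, the per-node sensor information values, and the noise magnitudes below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical roadmap: node -> list of neighbor nodes.
GRAPH = {
    "start": ["a", "b"],
    "a": ["goal"],   # short corridor with little visual texture
    "b": ["goal"],   # longer route past feature-rich areas
    "goal": [],
}

# Hypothetical per-node measurement information (higher = more perceptual
# structure visible to the RGB-D camera, hence better localization).
SENSOR_INFO = {"start": 2.0, "a": 0.1, "b": 1.5, "goal": 1.0}

Q = 0.2 * np.eye(2)  # process noise accumulated while traversing an edge (assumed)


def propagate(cov, node):
    """EKF-style belief update along one edge ending at `node`."""
    cov = cov + Q                                        # predict: uncertainty grows
    info = np.linalg.inv(cov) + SENSOR_INFO[node] * np.eye(2)
    return np.linalg.inv(info)                           # update: measurements shrink it


def all_paths(graph, node, goal, path=None):
    path = (path or []) + [node]
    if node == goal:
        yield path
        return
    for nxt in graph[node]:
        if nxt not in path:
            yield from all_paths(graph, nxt, goal, path)


def best_path(graph, start, goal, cov0):
    scored = []
    for path in all_paths(graph, start, goal):
        cov = cov0
        for node in path[1:]:
            cov = propagate(cov, node)
        scored.append((np.trace(cov), path))             # score = final uncertainty
    return min(scored)


if __name__ == "__main__":
    uncertainty, path = best_path(GRAPH, "start", "goal", 0.05 * np.eye(2))
    print(f"preferred path: {path}, final covariance trace: {uncertainty:.3f}")
```

    In this toy example the longer but feature-rich route wins, which is the qualitative behavior the belief roadmap planner is meant to produce.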

    Indoor Mapping and Reconstruction with Mobile Augmented Reality Sensor Systems

    Augmented Reality (AR) makes it possible to display virtual, three-dimensional content directly within the real environment. Instead of showing arbitrary virtual objects at arbitrary locations, AR technology can also be used to display geospatial data in situ, at the very location the data refer to. AR thus opens up the possibility of enriching the real world with virtual, location-related information. In this thesis, this flavor of AR is defined as "Fused Reality" and discussed in depth. The practical value of the Fused Reality concept is well demonstrated by its application to digital building models, where building-specific information, for example the routing of pipes and cables inside the walls, can be displayed in place on the real object. To realize the outlined concept of an indoor Fused Reality application, several basic conditions must be met. A given building can only be augmented with location-related information if a digital model of that building is available. Larger construction projects are nowadays often planned and executed with the help of Building Information Modelling (BIM), so that a digital model is created together with the real building; for older existing buildings, however, digital models are usually not available. Creating a digital model of an existing building manually is possible but requires considerable effort. If a suitable building model exists, an AR device must furthermore be able to determine its own position and orientation in the building relative to this model in order to display augmentations in the correct place. This thesis investigates and discusses various aspects of these problems. First, different ways of capturing indoor building geometry with sensor systems are discussed. Subsequently, a study is presented on the extent to which modern AR devices, which typically also carry a multitude of sensors, are themselves suitable for use as indoor mapping systems. The resulting indoor mapping datasets can then be used to reconstruct building models automatically. For this purpose, an automated, voxel-based indoor reconstruction method is presented and evaluated quantitatively on four datasets captured for this purpose, together with corresponding reference data. Furthermore, different ways of localizing mobile AR devices within a building and the corresponding building model are discussed, including the evaluation of a marker-based indoor localization method. Finally, a new approach for aligning indoor mapping datasets with the axes of the coordinate system is presented.
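    A minimal sketch of the voxel discretization step behind such reconstruction (not the thesis' actual method): an indoor mapping point cloud is binned into a regular voxel grid, and voxels containing enough points are marked as occupied, from which wall and floor surfaces could later be extracted. The grid resolution and occupancy threshold below are assumed values.

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.05, min_points: int = 5):
    """Discretize an Nx3 point cloud into occupied voxel indices.

    A voxel is marked occupied if at least `min_points` points fall inside it,
    a crude way of suppressing isolated sensor noise. Returns the set of
    occupied integer voxel coordinates.
    """
    idx = np.floor(points / voxel_size).astype(np.int64)        # point -> voxel index
    uniq, counts = np.unique(idx, axis=0, return_counts=True)   # points per voxel
    return {tuple(v) for v in uniq[counts >= min_points]}

if __name__ == "__main__":
    # Synthetic example: a noisy vertical "wall" patch plus a few outliers.
    rng = np.random.default_rng(0)
    n = 20000
    wall = np.column_stack([
        rng.uniform(0.0, 2.0, n),                         # x along the wall
        np.full(n, 1.02) + rng.normal(0, 0.005, n),       # y roughly constant (wall plane)
        rng.uniform(0.0, 2.5, n),                         # z (height)
    ])
    outliers = rng.uniform(0.0, 3.0, (20, 3))
    occupied = voxelize(np.vstack([wall, outliers]))
    print(f"{len(occupied)} occupied voxels")             # outliers are filtered out
```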

    Mobile graphics: SIGGRAPH Asia 2017 course

    Peer reviewed. Postprint (published version).

    REBA: A Refinement-Based Architecture for Knowledge Representation and Reasoning in Robotics

    This paper describes an architecture for robots that combines the complementary strengths of probabilistic graphical models and declarative programming to represent and reason with logic-based and probabilistic descriptions of uncertainty and domain knowledge. An action language is extended to support non-boolean fluents and non-deterministic causal laws. This action language is used to describe tightly-coupled transition diagrams at two levels of granularity, with a fine-resolution transition diagram defined as a refinement of a coarse-resolution transition diagram of the domain. The coarse-resolution system description, and a history that includes (prioritized) defaults, are translated into an Answer Set Prolog (ASP) program. For any given goal, inference in the ASP program provides a plan of abstract actions. To implement each such abstract action, the robot automatically zooms to the part of the fine-resolution transition diagram relevant to this action. A probabilistic representation of the uncertainty in sensing and actuation is then included in this zoomed fine-resolution system description, and used to construct a partially observable Markov decision process (POMDP). The policy obtained by solving the POMDP is invoked repeatedly to implement the abstract action as a sequence of concrete actions, with the corresponding observations being recorded in the coarse-resolution history and used for subsequent reasoning. The architecture is evaluated in simulation and on a mobile robot moving objects in an indoor domain, to show that it supports reasoning with violation of defaults, noisy observations and unreliable actions, in complex domains.

    Comment: 72 pages, 14 figures
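    The coarse-to-fine control loop described above can be summarized with a toy sketch of the data flow (not the authors' code): ASP inference yields a plan of abstract actions; each abstract action is implemented by zooming to the relevant part of the fine-resolution description, building and solving a POMDP over it, and executing the resulting policy; the observed outcome is then appended to the coarse-resolution history. All function names and return values below are hypothetical stand-ins for the ASP solver, the zoom operation, and the POMDP solver.

```python
"""Toy control loop mirroring the coarse-to-fine refinement described above.
Every component is a stub; only the flow of information is illustrative."""

def asp_plan(goal, coarse_history):
    # Stand-in for ASP inference over the coarse-resolution description.
    return ["move_to(kitchen)", "pick_up(cup)", "move_to(office)", "put_down(cup)"]

def zoom(abstract_action):
    # Stand-in for restricting the fine-resolution description to this action.
    return {"action": abstract_action, "states": ["near", "far"], "noise": 0.1}

def solve_pomdp(zoomed_description):
    # Stand-in for constructing and solving the POMDP; returns a policy.
    def policy(belief):
        return "concrete_step"
    return policy

def execute(policy, zoomed_description, max_steps=3):
    # Repeatedly invoke the policy on the current belief; here every step
    # trivially succeeds instead of sending commands to a robot.
    belief = {"near": 0.5, "far": 0.5}
    for _ in range(max_steps):
        concrete_action = policy(belief)
        # ...send concrete_action to the robot, receive an observation...
    return True  # observed outcome of the abstract action

def run(goal):
    coarse_history = []
    for abstract_action in asp_plan(goal, coarse_history):
        zoomed = zoom(abstract_action)                      # fine-resolution slice
        outcome = execute(solve_pomdp(zoomed), zoomed)
        coarse_history.append((abstract_action, outcome))   # recorded for later reasoning
    return coarse_history

if __name__ == "__main__":
    for step in run("cup_in_office"):
        print(step)
```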

    3D Neighborhood Convolution: Learning Depth-Aware Features for RGB-D and RGB Semantic Segmentation

    A key challenge for RGB-D segmentation is how to effectively incorporate 3D geometric information from the depth channel into 2D appearance features. We propose to model the effective receptive field of 2D convolution based on the scale and locality of the 3D neighborhood. Standard convolutions are local in the image space (u, v), often with a fixed receptive field of 3×3 pixels. We propose to define convolutions that are local with respect to the corresponding point in 3D real-world space (x, y, z), where the depth channel is used to adapt the receptive field of the convolution, making the resulting filters invariant to scale and focused on a certain range of depth. We introduce 3D Neighborhood Convolution (3DN-Conv), a convolutional operator over 3D neighborhoods. Furthermore, estimated depth allows us to apply our RGB-D based semantic segmentation model to RGB-only input. Experimental results validate that our proposed 3DN-Conv operator improves semantic segmentation, using either ground-truth depth (RGB-D) or estimated depth (RGB).
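    A minimal sketch of the underlying geometric idea, under an assumed pinhole-camera model (not the paper's released code): a fixed metric neighborhood of radius r at depth z projects to roughly f·r/z pixels, so a per-pixel kernel size can be derived from the depth map and used to gather a depth-adaptive neighborhood. The focal length, neighborhood radius, and size limits below are placeholder values.

```python
import numpy as np

def adaptive_kernel_size(depth: np.ndarray, focal_px: float = 525.0,
                         radius_m: float = 0.02, k_min: int = 3, k_max: int = 25):
    """Per-pixel odd kernel size covering a fixed metric radius.

    A neighborhood of metric radius `radius_m` at depth z projects to roughly
    focal_px * radius_m / z pixels, so nearby pixels get large kernels and
    distant pixels get small ones.
    """
    half = focal_px * radius_m / np.clip(depth, 1e-3, None)   # half-size in pixels
    k = 2 * np.round(half).astype(int) + 1                    # force an odd size
    return np.clip(k, k_min, k_max)

if __name__ == "__main__":
    # Synthetic depth map: a ramp from 1 m (near) to 5 m (far).
    depth = np.linspace(1.0, 5.0, 8 * 8).reshape(8, 8)
    k = adaptive_kernel_size(depth)
    print(k[0, 0], k[-1, -1])   # large kernel when near, small when far
```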

    Exergy-based Planning and Thermography-based Monitoring for energy efficient buildings - Progress Report (KIT Scientific Reports ; 7632)

    Designing for and monitoring the energy efficiency of buildings is vital, since buildings account for up to 40% of end-use energy. In this study, exergy analysis is investigated as a life-cycle design tool to strike a balance between the thermodynamic efficiency of energy conversion and the economic and environmental costs of construction. Quantitative geo-referenced thermography is proposed for monitoring and quantitative assessment via continued simulation and parameter estimation during the operating phase.
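    As a minimal illustration of what exergy analysis quantifies (not taken from the report itself): the same amount of heat carries far less exergy, i.e. work potential, when it is delivered close to the ambient temperature, which is why low-temperature heating systems score well in an exergy-based design comparison. The temperatures below are assumed example values; the formula Ex = Q·(1 − T0/T) is the standard Carnot factor for heat exergy.

```python
def heat_exergy(q_kwh: float, t_supply_k: float, t_ambient_k: float) -> float:
    """Exergy of heat Q delivered at temperature T with ambient T0: Ex = Q * (1 - T0/T)."""
    return q_kwh * (1.0 - t_ambient_k / t_supply_k)

if __name__ == "__main__":
    q = 100.0           # kWh of heating demand (assumed)
    t0 = 273.15         # ambient reference temperature, 0 degC (assumed)
    for label, t in [("radiator at 70 degC", 343.15), ("floor heating at 30 degC", 303.15)]:
        print(f"{label}: {heat_exergy(q, t, t0):.1f} kWh exergy for {q:.0f} kWh heat")
```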

    Accurate dense depth from light field technology for object segmentation and 3D computer vision


    Prioritizing Content of Interest in Multimedia Data Compression

    Image and video compression techniques make data transmission and storage in digital multimedia systems more efficient and feasible given a system's limited storage and bandwidth. Many generic image and video compression techniques, such as JPEG and H.264/AVC, have been standardized and are now widely adopted. Despite their great success, we observe that these standard compression techniques are not the best solution for data compression in special types of multimedia systems such as microscopy videos and low-power wireless broadcast systems. In these application-specific systems, where the content of interest in the multimedia data is known and well defined, we should rethink the design of the data compression pipeline. We hypothesize that by identifying and prioritizing the multimedia data's content of interest, new compression methods can be invented that are far more effective than standard techniques. In this dissertation, a set of new data compression methods based on the idea of prioritizing the content of interest is proposed for three different kinds of multimedia systems. I show that the key to designing efficient compression techniques in these three cases is to prioritize the content of interest in the data, where the definition of the content of interest depends on the application. First, I show that for microscopy videos, the content of interest is defined as the spatial regions of the video frame whose pixels contain more than just noise; keeping the data in those regions at high quality and discarding the rest yields a novel microscopy video compression technique. Second, I show that for a Bluetooth Low Energy beacon based system, practical multimedia data storage and transmission becomes possible by prioritizing content of interest: I design custom image compression techniques that preserve the edges in a binary image, or the foreground regions of a color image of indoor or outdoor objects. Last, I present a new indoor Bluetooth Low Energy beacon based augmented reality system that integrates a 3D moving-object compression method prioritizing the content of interest.

    Doctor of Philosophy
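    A minimal sketch of the region-of-interest idea for the microscopy case (not the dissertation's actual codec): frame blocks whose intensity variance stays below a noise-level threshold are treated as background and coded coarsely (here simply replaced by their mean), while the remaining blocks are kept at full quality. The block size and noise threshold below are assumed values.

```python
import numpy as np

def compress_noise_blocks(frame: np.ndarray, block: int = 16, noise_var: float = 4.0):
    """Keep high-variance blocks untouched; flatten low-variance (noise-only) blocks.

    Returns the reconstructed frame and the fraction of blocks kept at full quality.
    """
    out = frame.astype(np.float32).copy()
    kept, total = 0, 0
    h, w = frame.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = out[y:y + block, x:x + block]
            total += 1
            if tile.var() <= noise_var:     # looks like pure sensor noise
                tile[...] = tile.mean()     # cheap to code: a single value per block
            else:
                kept += 1                   # content of interest: keep at full quality
    return out, kept / max(total, 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.normal(100, 1.5, (128, 128))   # noisy background
    frame[40:60, 40:90] += 40                  # a bright structure of interest
    _, kept_fraction = compress_noise_blocks(frame)
    print(f"blocks kept at full quality: {kept_fraction:.0%}")
```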