
    Data-Efficient Decentralized Visual SLAM

    Decentralized visual simultaneous localization and mapping (SLAM) is a powerful tool for multi-robot applications in environments where absolute positioning systems are not available. Being visual, it relies on cameras, which are cheap, lightweight and versatile sensors; being decentralized, it does not rely on communication with a central ground station. In this work, we integrate state-of-the-art decentralized SLAM components into a new, complete decentralized visual SLAM system. To allow for data association and co-optimization, existing decentralized visual SLAM systems regularly exchange the full map data between all robots, incurring large data transfers at a complexity that scales quadratically with the robot count. In contrast, our method performs efficient data association in two stages: in the first stage, a compact full-image descriptor is deterministically sent to only one robot; in the second stage, which is executed only if the first stage succeeded, the data required for relative pose estimation is sent, again to only one robot. Thus, data association scales linearly with the robot count and uses highly compact place representations. For optimization, a state-of-the-art decentralized pose-graph optimization method is used; it exchanges a minimal amount of data that is linear in the trajectory overlap. We characterize the resulting system and identify bottlenecks in its components. The system is evaluated on publicly available data, and we provide open access to the code. Comment: 8 pages, submitted to ICRA 201
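
    A minimal sketch of the two-stage data association described above, written in Python. The descriptor-to-robot assignment rule, the threshold, and the helper names (descriptor_owner, stage_one_query, query) are illustrative assumptions rather than the paper's actual interfaces; the point is that each query touches exactly one peer, so communication grows linearly with the robot count.

    import numpy as np

    NUM_ROBOTS = 10          # hypothetical team size
    MATCH_THRESHOLD = 0.15   # hypothetical descriptor-distance threshold

    def descriptor_owner(descriptor, num_robots):
        # Deterministically map a compact full-image descriptor to exactly
        # one robot; any fixed, data-independent rule gives linear scaling.
        key = int(np.abs(descriptor).sum() * 1e6)
        return key % num_robots

    def stage_one_query(descriptor, owner_db):
        # Stage 1: the owner robot matches the received descriptor against
        # the descriptors it stores and returns a candidate frame id or None.
        best_id, best_dist = None, float("inf")
        for frame_id, stored in owner_db.items():
            dist = float(np.linalg.norm(descriptor - stored))
            if dist < best_dist:
                best_id, best_dist = frame_id, dist
        return best_id if best_dist < MATCH_THRESHOLD else None

    def query(descriptor, databases):
        # Send the descriptor to a single robot; only on success would the
        # heavier relative-pose data (stage 2) be sent, again to that robot.
        owner = descriptor_owner(descriptor, NUM_ROBOTS)
        candidate = stage_one_query(descriptor, databases[owner])
        return None if candidate is None else (owner, candidate)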

    Affective Computing and Augmented Reality for Car Driving Simulators

    Car simulators are essential for training and for analyzing the behavior, responses and performance of the driver. Augmented Reality (AR) is the technology that enables virtual images to be overlaid on views of the real world. Affective Computing (AC) is the technology that enables computer systems to read emotions by analyzing body gestures, facial expressions, speech and physiological signals. The key aspect of the research lies in investigating novel interfaces that help build situational awareness and emotional awareness, to enable affect-driven remote collaboration in AR for car driving simulators. The problem addressed is how to build situational awareness (using AR technology) and emotional awareness (using AC technology), and how to integrate these two distinct technologies [4] into a unified affective framework for training in a car driving simulator.

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?
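
    For reference, the de-facto standard formulation referred to above is commonly written as maximum a posteriori (MAP) estimation over a factor graph, which under Gaussian noise assumptions reduces to nonlinear least squares. A sketch in standard notation, with \mathcal{X} the robot and landmark variables, z_k the measurements, h_k the measurement models, and \Omega_k the corresponding information matrices:

    \mathcal{X}^{\star}
        = \operatorname*{arg\,max}_{\mathcal{X}} \; p(\mathcal{X} \mid \mathcal{Z})
        = \operatorname*{arg\,min}_{\mathcal{X}} \sum_{k} \lVert h_k(\mathcal{X}_k) - z_k \rVert^{2}_{\Omega_k}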

    SpiderFab™: Process for On-Orbit Construction of Kilometer-Scale Apertures

    The SpiderFab effort investigated the value proposition and technical feasibility of radically changing the way we build and deploy spacecraft by enabling space systems to fabricate and integrate key components on-orbit. We developed an architecture for a SpiderFab system, identified its key capabilities, and detailed two concept implementations of this architecture: one specialized for fabricating support trusses for large solar arrays, and the second a robotic system capable of fabricating spacecraft components such as antenna reflectors. We then performed analyses to evaluate the value proposition for on-orbit fabrication, and in each case found that the dramatic improvements in structural performance and packing efficiency enabled by on-orbit fabrication can provide order-of-magnitude improvements in key system metrics. For phased-array radars, SpiderFab enables order-of-magnitude increases in gain-per-stowed-volume. For the New Worlds Observer mission, SpiderFab construction of a starshade can provide a ten-fold increase in the number of Earth-like planets discovered per dollar. For communications systems, SpiderFab can change the cost equation for large antenna reflectors, enabling affordable deployment of much larger apertures than is feasible with current deployable technologies. To establish the technical feasibility, we identified methods for combining several additive manufacturing technologies with robotic assembly technologies, metrology sensors, and thermal control techniques to provide the capabilities required to implement a SpiderFab system. We performed proof-of-concept level testing of these approaches, in each case demonstrating that the proposed solutions are feasible, and establishing the SpiderFab architecture at TRL-3. Further maturation of SpiderFab to mission-readiness is well-suited to an incremental development program. Affordable smallsat demonstrations will prepare the technology for a full-scale demonstration that will unlock the full potential of the SpiderFab architecture by flight qualifying and validating an on-orbit fabrication and integration process that can be re-used to reduce the life-cycle cost and increase power, bandwidth, resolution, and sensitivity for a wide range of NASA Science and Exploration missions.

    Five challenges in cloud-enabled intelligence and control

    The proliferation of connected embedded devices, or the Internet of Things (IoT), together with recent advances in machine intelligence, will change the profile of future cloud services and introduce a variety of new research problems centered on empowering resource-limited edge devices to exhibit intelligent behavior, both in sensing and control. Cloud services will enable learning from data, performing inference, and executing control, all with assurances on outcomes. The paper discusses such emerging services and outlines five resulting new research directions toward enabling and optimizing intelligent, cloud-assisted sensing and control in the age of the Internet of Things.

    A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems

    In this paper we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim to establish this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware-experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: the integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; and an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations, and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter are proven with a variety of experimental results.
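
    As a rough illustration of the simulator-independent model description this workflow builds on, a minimal PyNN sketch is shown below. It targets the software backend pyNN.nest; in the described workflow, the neuromorphic hardware would be addressed by importing a different backend module instead. The network and its parameters are illustrative and not part of the benchmark library.

    import pyNN.nest as sim   # swap this import for a hardware backend module

    sim.setup(timestep=0.1)   # ms

    # Two populations of conductance-based integrate-and-fire neurons.
    exc = sim.Population(80, sim.IF_cond_exp())
    inh = sim.Population(20, sim.IF_cond_exp())

    # Sparse random connectivity with static synapses.
    conn = sim.FixedProbabilityConnector(p_connect=0.1)
    sim.Projection(exc, inh, conn,
                   synapse_type=sim.StaticSynapse(weight=0.01, delay=1.0),
                   receptor_type='excitatory')
    sim.Projection(inh, exc, conn,
                   synapse_type=sim.StaticSynapse(weight=0.05, delay=1.0),
                   receptor_type='inhibitory')

    # Poisson background drive.
    noise = sim.Population(80, sim.SpikeSourcePoisson(rate=10.0))
    sim.Projection(noise, exc, sim.OneToOneConnector(),
                   synapse_type=sim.StaticSynapse(weight=0.02, delay=1.0))

    exc.record('spikes')
    sim.run(1000.0)           # ms
    spiketrains = exc.get_data().segments[0].spiketrains
    sim.end()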

    Development, Evaluation and Validation of a Stereo Camera Underwater SLAM Algorithm

    In this work, the development of an algorithm for visual underwater localization is described. It spans the complete process from the initial idea, through the development of a suitable underwater vehicle for testing, to the algorithm's experimental validation in real underwater environments. Besides the development and validation of the visual SLAM algorithm, the methodology for its evaluation is a key aspect of this work. The resulting SURE-SLAM algorithm uses a stereo camera system and basic vehicle sensors (AHRS, DPS) to compute a complete, error-bounded localization solution for underwater vehicles in real time, with quality similar to that of state-of-the-art acoustically stabilized dead-reckoning approaches. The robustness of the algorithm, as well as its limitations and failure cases, is established by extensive field testing with the AUV Dagon, which was developed during this thesis as a test and evaluation vehicle.
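
    A deliberately simplified sketch (not the actual SURE-SLAM pipeline) of why the AHRS and the depth/pressure sensor (DPS) help bound the error of stereo visual odometry: they provide absolute orientation and absolute depth, which can replace the corresponding drifting components of the visually estimated pose, leaving only the horizontal position to be constrained by visual loop closures.

    import numpy as np

    def fuse_pose(T_world_vo, R_world_ahrs, depth_dps):
        # T_world_vo: 4x4 vehicle pose from stereo visual odometry (drifts in all DoF).
        # R_world_ahrs: 3x3 absolute orientation from the AHRS.
        # depth_dps: absolute depth in metres from the pressure sensor.
        # Assumed convention: world z axis points up, depth is positive downwards.
        T = T_world_vo.copy()
        T[:3, :3] = R_world_ahrs   # replace the drifting orientation estimate
        T[2, 3] = -depth_dps       # replace the drifting vertical position
        return T                   # x/y remain vision-based, bounded by loop closures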