
    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?
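
    For readers unfamiliar with what the abstract calls the de-facto standard formulation, the sketch below gives the usual maximum-a-posteriori (MAP) factor-graph formulation of SLAM. The notation (X for poses and landmarks, z_k for measurements, h_k for sensor models, Omega_k for information matrices) is assumed here for illustration and is not quoted from the paper:

    ```latex
    % MAP estimation over a factor graph: find the states X (robot poses and
    % landmarks) that best explain the measurements Z = {z_k}.
    % Assuming Gaussian measurement noise with information matrices \Omega_k,
    % the MAP problem reduces to a nonlinear least-squares problem.
    \begin{align}
      X^\star &= \operatorname*{arg\,max}_{X} \; p(X \mid Z)
               = \operatorname*{arg\,max}_{X} \; p(X) \prod_{k} p(z_k \mid X_k) \\
              &= \operatorname*{arg\,min}_{X} \; \sum_{k} \lVert h_k(X_k) - z_k \rVert^{2}_{\Omega_k}
    \end{align}
    ```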

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
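
    To make the event output format concrete, here is a minimal Python sketch of the event tuple and of the simplest way to aggregate events into a frame; the field names and the accumulation scheme are illustrative assumptions, not code from the survey:

    ```python
    from dataclasses import dataclass

    import numpy as np

    @dataclass
    class Event:
        """One asynchronous event: pixel location, timestamp, and polarity."""
        x: int         # pixel column
        y: int         # pixel row
        t: float       # timestamp in seconds (microsecond resolution in practice)
        polarity: int  # +1 for a brightness increase, -1 for a decrease

    def accumulate(events, height, width, t0, t1):
        """Sum event polarities per pixel over [t0, t1) into a 2D frame.

        This is the simplest event representation; learning-based methods
        typically use richer ones (time surfaces, voxel grids, etc.).
        """
        frame = np.zeros((height, width), dtype=np.int32)
        for e in events:
            if t0 <= e.t < t1:
                frame[e.y, e.x] += e.polarity
        return frame

    # Example: two events at the same pixel with opposite polarity cancel out.
    evs = [Event(3, 2, 0.001, +1), Event(3, 2, 0.002, -1), Event(0, 0, 0.003, +1)]
    print(accumulate(evs, height=4, width=5, t0=0.0, t1=0.01))
    ```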

    A Framework and Classification for Fault Detection Approaches in Wireless Sensor Networks with an Energy Efficiency Perspective

    Wireless Sensor Networks (WSNs) are increasingly considered a key enabling technology for realising the Internet of Things (IoT) vision. With the long-term goal of designing fault-tolerant IoT systems, this paper proposes a fault detection framework for WSNs from an energy-efficiency perspective, to facilitate the design of fault detection methods and the evaluation of their energy efficiency. Following the same design principle as the fault detection framework, the paper proposes a classification of fault detection approaches. The classification is applied to a number of fault detection approaches to compare several characteristics, namely energy efficiency, correlation model, evaluation method, and detection accuracy. The design guidelines given in this paper aim to provide insight into the better design of energy-efficient detection approaches in resource-constrained WSNs.
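
    As a toy illustration of one class of approach such taxonomies cover (neighbor-coordination detection based on spatial correlation), the Python sketch below flags a node whose reading disagrees with a majority of its one-hop neighbors. It is a hypothetical example, not the paper's method, and the threshold and quorum parameters are invented for illustration:

    ```python
    def is_faulty(own_reading, neighbor_readings, threshold=2.0, quorum=0.5):
        """Flag a sensor as faulty if it disagrees with most of its neighbors.

        A minimal instance of neighbor-coordination fault detection that
        exploits spatial correlation of nearby readings. Exchanging only
        scalar readings with one-hop neighbors keeps radio usage (typically
        the dominant energy consumer in a WSN node) low.
        """
        if not neighbor_readings:
            return False  # no evidence either way
        disagreements = sum(
            1 for r in neighbor_readings if abs(own_reading - r) > threshold
        )
        return disagreements / len(neighbor_readings) > quorum

    # Example: a node reading 40.0 among neighbors near 21.0 is flagged.
    print(is_faulty(40.0, [20.5, 21.0, 21.3, 20.9]))  # True
    print(is_faulty(21.1, [20.5, 21.0, 21.3, 20.9]))  # False
    ```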

    Machine Learning Operations (MLOps) Architecture Considerations for Deep Learning with a Passive Acoustic Vector Sensor

    As machine-learning-augmented decision-making becomes more prevalent, defense applications of these techniques are needed to avoid being outpaced by peer adversaries. One area with significant potential is deep learning applied to classifying passive sonar acoustic signatures, which would accelerate tactical, operational, and strategic decision-making in one of the most contested and difficult warfare domains. Convolutional Neural Networks have achieved some of the greatest success in this task; however, the absence of a full production pipeline to continually train, deploy, and evaluate acoustic deep learning models throughout their lifecycle in a realistic architecture is a barrier to further and more rapid progress in this field of research. The two main contributions of this thesis are a proposed production architecture for model lifecycle management using Machine Learning Operations (MLOps) and an evaluation of the same on a live passive sonar stream. Using the proposed production architecture, this work evaluates differences in model performance in a production setting and explores methods to improve model performance in production. By documenting considerations for creating a platform and architecture to continuously train, deploy, and evaluate various deep learning acoustic classification models, this study aims to create a framework and recommendations that accelerate progress in acoustic deep learning classification research.
    Los Alamos National Lab
    Lieutenant, United States Navy
    Approved for public release. Distribution is unlimited.
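
    To indicate the shape of the continual train/deploy/evaluate loop such an architecture manages, here is a schematic Python sketch of a champion/challenger promotion step, a common MLOps pattern for continuous evaluation in production. Every name and the margin parameter are hypothetical; this is not the thesis's architecture:

    ```python
    import random  # stand-in for real inference and metrics

    def evaluate(model, stream_batch):
        """Score a classifier on a batch of live sonar features.
        Placeholder: returns a random accuracy instead of real inference."""
        return random.uniform(0.6, 0.99)

    def lifecycle_step(champion, challenger, stream_batch, margin=0.02):
        """One iteration of a champion/challenger deployment loop.

        The retrained challenger replaces the serving champion only if it
        wins by a margin, which damps noise from any single evaluation
        batch drawn from the live stream.
        """
        champ_acc = evaluate(champion, stream_batch)
        chall_acc = evaluate(challenger, stream_batch)
        if chall_acc > champ_acc + margin:
            return challenger, chall_acc  # promote the retrained model
        return champion, champ_acc        # keep serving the incumbent
    ```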