
    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras, starting from their working principle, the actual sensors that are available, and the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
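    As a rough illustration of the event format described above (a sketch, not code from the survey), each event can be modeled as a tuple (t, x, y, polarity), and a common first step is to accumulate the polarities of a time window of events into a frame-like image; the 240x180 resolution below is an assumed example:

        import numpy as np
        from collections import namedtuple

        # One event: timestamp (s), pixel coordinates, polarity (+1 or -1).
        Event = namedtuple("Event", ["t", "x", "y", "polarity"])

        def accumulate(events, width=240, height=180):
            """Sum event polarities per pixel over a time window,
            producing a frame-like image that conventional vision
            algorithms can consume (the resolution is an assumption)."""
            img = np.zeros((height, width), dtype=np.int32)
            for e in events:
                img[e.y, e.x] += e.polarity
            return img

        # Two opposite-sign events at the same pixel cancel out.
        evs = [Event(1e-6, 10, 5, +1), Event(2e-6, 10, 5, -1)]
        assert accumulate(evs)[5, 10] == 0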

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
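    For context, the de-facto standard formulation referenced above is maximum-a-posteriori estimation over a factor graph, which under the usual Gaussian noise assumption reduces to nonlinear least squares. A hedged sketch in generic notation (the symbols are illustrative, not the paper's):

        \mathcal{X}^{\star}
          = \arg\max_{\mathcal{X}} \, p(\mathcal{X} \mid \mathcal{Z})
          = \arg\min_{\mathcal{X}} \sum_{k} \lVert h_k(\mathcal{X}_k) - z_k \rVert^{2}_{\Omega_k}

    Here \mathcal{X} stacks the robot poses and landmarks, each measurement z_k involves a subset \mathcal{X}_k of the variables through a measurement model h_k, and \Omega_k is the corresponding information matrix defining the Mahalanobis norm.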

    Monocular Vision as a Range Sensor

    One of the most important abilities for a mobile robot is detecting obstacles in order to avoid collisions. Building a map of these obstacles is the next logical step. Most robots to date have used sensors such as passive or active infrared, sonar, or laser range finders to locate obstacles in their path. In contrast, this work uses a single colour camera as the only sensor, and consequently the robot must obtain range information from the camera images. We propose simple methods for determining the range to the nearest obstacle in every direction in the robot’s field of view; the resulting profile of ranges is referred to as the Radial Obstacle Profile (ROP). The ROP can then be used to determine the amount of rotation between two successive images, which is important for constructing a 360° view of the surrounding environment as part of map construction.
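    The rotation estimate described above can be obtained by circularly shifting one profile over the other and keeping the best match; a minimal sketch, assuming profiles sampled at one-degree increments (the function name is illustrative, not from the paper):

        import numpy as np

        def estimate_rotation(rop_prev, rop_curr):
            """Estimate the rotation (in degrees) between two Radial
            Obstacle Profiles sampled at 1-degree steps, as the circular
            shift that best aligns the new profile with the old one."""
            best_shift, best_err = 0, np.inf
            for shift in range(len(rop_prev)):
                err = np.sum((np.roll(rop_curr, shift) - rop_prev) ** 2)
                if err < best_err:
                    best_shift, best_err = shift, err
            return best_shift  # sign convention depends on profile ordering

        # Sanity check: a profile shifted by -25 samples is recovered
        # as a 25-degree rotation.
        rop = np.random.rand(360)
        assert estimate_rotation(rop, np.roll(rop, -25)) == 25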

    Non-overlapping dual camera fall detection using the NAO humanoid robot

    With an aging population and a greater desire for independence, the dangers of falling incidents in the elderly have become particularly pronounced. In light of this, several technologies have been developed with the aim of preventing or monitoring falls. Failing to strike a balance between several factors, including reliability, complexity, and invasion of privacy, has proven prohibitive to the uptake of these systems. Some systems rely on cameras being mounted in all rooms of a user's home, while others must be worn 24 hours a day. This paper explores a system that uses a humanoid NAO robot with dual vertically mounted cameras to perform the task of fall detection.

    CoBe -- Coded Beacons for Localization, Object Tracking, and SLAM Augmentation

    This paper presents a novel beacon light coding protocol, which enables fast and accurate identification of the beacons in an image. The protocol is provably robust to a predefined set of detection and decoding errors, and does not require any synchronization between the beacons themselves and the optical sensor. A detailed guide is then given for developing an optical tracking and localization system based on the suggested protocol and readily available hardware. Such a system operates either as a standalone system for recovering the six degrees of freedom of fast-moving objects, or integrated with existing SLAM pipelines, providing them with error-free and easily identifiable landmarks. Based on this guide, we implemented a low-cost positional tracking system which can run in real-time on an IoT board. We evaluate our system's accuracy and compare it to other popular methods that utilize the same optical hardware, in experiments where the ground truth is known. A companion video containing multiple real-world experiments demonstrates the accuracy, speed, and applicability of the proposed system in a wide range of environments and real-world tasks. Open source code is provided to encourage further development of low-cost localization systems integrating the suggested technology at their navigation core.
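    The actual CoBe protocol is specified in the paper; as a loose illustration only, the general idea of temporally coding a beacon ID so it can be recovered from an unsynchronized on/off sample stream can be sketched as follows (the preamble, frame length, and parity scheme here are invented for the toy example):

        # Toy blink code (NOT the CoBe protocol): a fixed preamble for
        # frame alignment, a 4-bit beacon ID, and one parity bit for
        # simple error detection.
        PREAMBLE = [1, 1, 1, 0]

        def encode(beacon_id):
            """Return one frame of on/off samples for a 4-bit ID."""
            bits = [(beacon_id >> i) & 1 for i in range(3, -1, -1)]
            return PREAMBLE + bits + [sum(bits) % 2]

        def decode(stream):
            """Slide over an unsynchronized sample stream; return
            (beacon_id, offset) for the first window whose preamble and
            parity check out, else None. A real protocol must also rule
            out false locks on preamble-like payload patterns."""
            n = len(PREAMBLE) + 5
            for off in range(len(stream) - n + 1):
                w = stream[off:off + n]
                if w[:4] == PREAMBLE and sum(w[4:8]) % 2 == w[8]:
                    return sum(b << (3 - i) for i, b in enumerate(w[4:8])), off
            return None

        # A frame embedded at an arbitrary offset is still decoded.
        assert decode([0, 0] + encode(11))[0] == 11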