
    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those requiring low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
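
    The event data model described in this abstract lends itself to a simple illustration. The sketch below shows one common way to hold an event stream and accumulate it into a per-pixel brightness-change image; the field names and array layout are illustrative assumptions, not a standard API.

```python
# A minimal sketch of the event data model: each event encodes
# (timestamp, x, y, polarity). Summing signed polarities into a 2D
# image is one simple way to visualize a stream of events.
import numpy as np

def accumulate_events(events, height, width):
    """Sum signed event polarities into a per-pixel change image.

    events: structured array with fields 't' (microseconds),
            'x', 'y' (pixel location), and 'p' (polarity, +1 or -1).
    """
    img = np.zeros((height, width), dtype=np.int32)
    np.add.at(img, (events['y'], events['x']), events['p'])
    return img

# Example: three synthetic events on a 4x4 sensor.
events = np.array(
    [(10, 1, 2, 1), (15, 1, 2, 1), (20, 3, 0, -1)],
    dtype=[('t', 'i8'), ('x', 'i4'), ('y', 'i4'), ('p', 'i4')])
print(accumulate_events(events, 4, 4))
```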

    Supervised Autonomous Locomotion and Manipulation for Disaster Response with a Centaur-like Robot

    Mobile manipulation tasks are one of the key challenges in the field of search and rescue (SAR) robotics, requiring robots with flexible locomotion and manipulation abilities. Since the tasks are mostly unknown in advance, the robot has to adapt to a wide variety of terrains and workspaces during a mission. The centaur-like robot Centauro has a hybrid legged-wheeled base and an anthropomorphic upper body to carry out complex tasks in environments too dangerous for humans. Due to its high number of degrees of freedom, controlling the robot with direct teleoperation approaches is challenging and exhausting. Supervised autonomy approaches are promising to increase the quality and speed of control while keeping the flexibility to solve unknown tasks. We developed a set of operator assistance functionalities with different levels of autonomy to control the robot for challenging locomotion and manipulation tasks. The integrated system was evaluated in disaster response scenarios and showed promising performance.
    Comment: In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 2018.

    A Redundant Monitoring System for Human Welder Operation Using IMU and Vision Sensors

    In manual control, the welding gun’s moving speed can significantly influence the welding results, and critical welding operations usually require welders to concentrate consistently in order to react rapidly and accurately. However, human welders have habitual actions that can subtly influence the welding process, and it takes countless hours to train an experienced human welder. Using vision and IMU sensors, a system can be set up that gives a worker visual feedback as accurate as that of an experienced welder. The problem is that monitoring and measuring the control process is not always easy in a complex working environment like welding. In this thesis, a new method is developed that uses two complementary sensing approaches to compensate for each other and obtain accurate monitoring results: a vision sensor and an IMU sensor are both developed to acquire accurate data from the control process in real time without interfering with each other. Although the vision and IMU sensors each have their own limits, they also have their own advantages, which together contribute to the measuring system.
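
    The idea of two sensors compensating for each other can be illustrated with a simple complementary filter that blends a high-rate but drifting IMU speed estimate with a slower but drift-free vision-based estimate. This is a minimal sketch under assumed gains and units, not the method actually developed in the thesis.

```python
# Hypothetical complementary fusion of gun-speed estimates: the IMU
# integrates acceleration at high rate (smooth but drifting), while
# the vision sensor measures speed less often but without drift.
def complementary_fuse(imu_speed, vision_speed, alpha=0.98):
    """Blend high-rate IMU speed with low-rate vision speed.

    alpha close to 1 trusts the smooth IMU signal between the less
    frequent (but unbiased) vision corrections. The value 0.98 is an
    illustrative assumption.
    """
    return alpha * imu_speed + (1.0 - alpha) * vision_speed

# Example: the IMU estimate has drifted to 12.4 mm/s while the vision
# sensor measures 10.0 mm/s; each fusion step pulls the drift back.
fused = complementary_fuse(12.4, 10.0)
print(f"fused gun speed: {fused:.2f} mm/s")
```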

    Event-based Simultaneous Localization and Mapping: A Comprehensive Survey

    In recent decades, visual simultaneous localization and mapping (vSLAM) has gained significant interest in both academia and industry. It estimates camera motion and reconstructs the environment concurrently using visual sensors on a moving robot. However, conventional cameras are limited by hardware, including motion blur and low dynamic range, which can negatively impact performance in challenging scenarios like high-speed motion and high dynamic range illumination. Recent studies have demonstrated that event cameras, a new type of bio-inspired visual sensor, offer advantages such as high temporal resolution, high dynamic range, low power consumption, and low latency. This paper presents a timely and comprehensive review of event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks. The review covers the working principle of event cameras and various event representations for preprocessing event data. It also categorizes event-based vSLAM methods into four main categories: feature-based, direct, motion-compensation, and deep learning methods, with detailed discussions and practical guidance for each approach. Furthermore, the paper evaluates state-of-the-art methods on various benchmarks, highlighting current challenges and future opportunities in this emerging research area. A public repository will be maintained to keep track of the rapid developments in this field at https://github.com/kun150kun/ESLAM-survey.
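
    As a concrete example of the event representations such surveys cover, the sketch below builds a time surface: an image that stores, per pixel, an exponentially decayed version of the most recent event timestamp. The decay constant, field ordering, and data layout here are illustrative assumptions.

```python
# A minimal sketch of a "time surface" event representation: each
# pixel holds the timestamp of its most recent event, exponentially
# decayed toward zero so that fresh activity stands out.
import numpy as np

def time_surface(events, height, width, t_ref, tau=50e-3):
    """events: iterable of (t, x, y, polarity), sorted by time t."""
    ts = np.full((height, width), -np.inf)
    for t, x, y, p in events:
        ts[y, x] = t                      # keep the latest timestamp
    surface = np.exp((ts - t_ref) / tau)  # decay older events toward 0
    surface[np.isinf(ts)] = 0.0           # pixels that never fired
    return surface

# Example: two synthetic events on a 4x4 sensor, queried at t_ref=50 ms.
events = [(0.010, 1, 2, 1), (0.040, 3, 0, -1)]
print(time_surface(events, 4, 4, t_ref=0.050))
```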

    Miniaturized GPS/MEMS IMU integrated board

    This invention disclosure documents research and development efforts on a miniaturized GPS/MEMS IMU integrated navigation system. A miniaturized GPS/MEMS IMU integrated navigation system is presented, and a Laser Dynamic Range Imager (LDRI) based alignment algorithm for space applications is discussed. Two navigation cameras are also included to measure range and range rate, which can be integrated into the GPS/MEMS IMU system to enhance the navigation solution.
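
    To make the integration concrete, the sketch below shows a loosely coupled GPS/IMU scheme in one dimension: IMU accelerations are dead-reckoned at high rate, and each lower-rate GPS fix pulls the accumulated drift back toward the measurement. The rates, gain, bias, and simulated trajectory are illustrative assumptions, not parameters of the actual board.

```python
# Hypothetical 1D loosely coupled GPS/IMU fusion: high-rate IMU
# dead reckoning corrected by low-rate GPS position fixes.
def imu_predict(pos, vel, accel, dt):
    vel += accel * dt          # integrate acceleration to velocity
    pos += vel * dt            # integrate velocity to position
    return pos, vel

def gps_correct(pos, gps_pos, gain=0.3):
    # Blend the predicted position toward the GPS measurement.
    return pos + gain * (gps_pos - pos)

pos, vel = 0.0, 0.0
IMU_BIAS = 0.05                            # simulated accelerometer bias (m/s^2)
for step in range(100):                    # 100 Hz IMU samples over 1 s
    pos, vel = imu_predict(pos, vel, accel=0.1 + IMU_BIAS, dt=0.01)
    if step % 10 == 9:                     # 10 Hz GPS fixes
        t = (step + 1) / 100.0
        pos = gps_correct(pos, gps_pos=0.05 * t**2)  # "true" position
print(f"fused position after 1 s: {pos:.4f} m (truth 0.0500 m)")
```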

    Sea Ice Field Analysis Using Machine Vision

    Sea ice field analysis is motivated by various areas, such as environmental monitoring, logistics, and ship maintenance. Among other methods, local ice field analysis from ship-based visual observations is currently done by human volunteers and is therefore liable to human errors and subjective interpretations. The goal of the thesis is to develop and implement a complete process for obtaining the dimensions, distribution, and concentration of sea-ice floes, which aims at assisting and improving part of the aforementioned visual observations. This process involves numerous organized steps that take advantage of techniques from image processing (lens calibration, vignetting removal, and orthorectification), robotics (transformation frames), and machine vision (thresholding, texture analysis methods, and morphological operations). An experimental system setup for collecting the required information is provided as well, which includes a machine vision camera for image acquisition, an IMU device for determining the dynamic attitude of the camera with respect to the world, two GPS sensors providing redundant positioning and clock data, and a desktop computer used as the main logging platform for all the collected data. Through a number of experiments, the proposed system setup and image analysis methods have proved to provide promising results in pack ice and brash ice conditions, thus encouraging further research on the topic. Further improvements should target the accuracy of ice-floe detection and the over- and under-segmentation of the detected sea-ice floes.
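
    The thresholding and morphological-operations stage of such a pipeline can be sketched with standard OpenCV calls, as below. Otsu thresholding, the 5x5 kernel, and connected components as the floe counter are illustrative choices, not the exact method used in the thesis.

```python
# A minimal sketch of ice-floe segmentation: binarize bright ice
# against dark water, clean the mask with morphology, then count
# connected components as candidate floes.
import cv2
import numpy as np

def segment_ice_floes(gray):
    """gray: single-channel uint8 image of the ice field."""
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove specks
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    n, labels = cv2.connectedComponents(mask)
    return n - 1, labels    # floe count (minus background) and label map

# Example on a synthetic image: two bright "floes" on dark water.
img = np.zeros((64, 64), np.uint8)
cv2.circle(img, (20, 20), 8, 255, -1)
cv2.circle(img, (45, 45), 10, 255, -1)
count, _ = segment_ice_floes(img)
print(f"detected floes: {count}")
```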