
    An Underwater SLAM System using Sonar, Visual, Inertial, and Depth Sensor

    This paper presents a novel tightly-coupled keyframe-based Simultaneous Localization and Mapping (SLAM) system with loop-closing and relocalization capabilities targeted at the underwater domain. Our previous work, SVIn, augmented the state-of-the-art visual-inertial state estimation package OKVIS to accommodate acoustic data from sonar in a non-linear optimization-based framework. This paper addresses drift and loss of localization -- one of the main problems affecting other packages in the underwater domain -- through the following main contributions: a robust initialization method that refines scale using depth measurements, a fast preprocessing step to enhance image quality, and a real-time loop-closing and relocalization method based on bag of words (BoW). An additional contribution is the inclusion of depth measurements from a pressure sensor in the tightly-coupled optimization formulation. Experimental results on datasets collected with a custom-made underwater sensor suite and an autonomous underwater vehicle in challenging underwater environments with poor visibility demonstrate accuracy and robustness not achieved before.
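    The depth-sensor contribution above amounts to adding one more residual type to the joint optimization. The following minimal sketch (not the authors' SVIn2 code; the landmarks, range readings, and weight are invented placeholders) shows how a pressure-derived depth reading can constrain the vertical state alongside range-like measurements in a nonlinear least-squares solve:

        # Minimal sketch: fusing a pressure-depth reading into a least-squares
        # position estimate. All numbers are illustrative assumptions.
        import numpy as np
        from scipy.optimize import least_squares

        landmarks = np.array([[2.0, 1.0, -3.0], [4.0, -1.0, -2.5]])  # known 3D points
        ranges = np.array([3.9, 4.6])   # e.g. sonar range measurements to them
        depth_meas = -3.1               # depth from the pressure sensor (z axis)
        w_depth = 10.0                  # weight: pressure depth is low-noise

        def residuals(p):
            # p = vehicle position [x, y, z]
            r_range = np.linalg.norm(landmarks - p, axis=1) - ranges
            r_depth = w_depth * (p[2] - depth_meas)  # extra depth residual
            return np.concatenate([r_range, [r_depth]])

        sol = least_squares(residuals, x0=np.zeros(3))
        print("estimated position:", sol.x)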

    An Autonomous Surface Vehicle for Long Term Operations

    Environmental monitoring of marine environments presents several challenges: the harshness of the environment, the often remote locations, and, most importantly, the vast areas to be covered. Manual operations are time-consuming, often dangerous, and labor-intensive. Operations from oceanographic vessels are costly and limited to open seas and generally deeper bodies of water. In addition, with lake, river, and ocean shoreline being a finite resource, waterfront property is an increasingly valued commodity, requiring exploration and continued monitoring of remote waterways. To efficiently explore and monitor known marine environments, as well as reach and explore remote areas of interest, we present the design of an autonomous surface vehicle (ASV) with the power to cover large areas, the payload capacity to carry sufficient power and sensor equipment, and enough fuel to remain on task for extended periods. An analysis of the design and a discussion of lessons learned during deployments are presented in this paper.
    Comment: In proceedings of MTS/IEEE OCEANS 2018, Charleston
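    To make "extended periods" and "large areas" concrete, a back-of-envelope endurance and coverage estimate is sketched below; every number is an assumed placeholder, not a figure from the paper:

        # Rough ASV endurance/coverage budget (all values are assumptions).
        fuel_l = 40.0          # fuel capacity (litres)
        burn_l_per_h = 1.2     # consumption at survey speed (litres/hour)
        speed_kn = 4.0         # survey speed (knots)
        swath_m = 50.0         # effective sensor swath width (metres)

        endurance_h = fuel_l / burn_l_per_h
        track_km = endurance_h * speed_kn * 1.852          # knots -> km/h
        coverage_km2 = track_km * (swath_m / 1000.0)
        print(f"{endurance_h:.1f} h on task, {track_km:.0f} km of track, "
              f"~{coverage_km2:.0f} km^2 covered")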

    Structured Light-Based 3D Reconstruction System for Plants

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size, and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection, and less than a 13-mm error for plant size, leaf size, and internode distance.
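    Once correspondences between views are fixed, the point cloud registration step at the heart of such a pipeline reduces to a rigid best-fit problem. A minimal sketch of that core alignment follows (the standard Kabsch/Procrustes solution on synthetic points, not the paper's plant data or full algorithm):

        # Rigid alignment of corresponding point sets via SVD (Kabsch).
        import numpy as np

        def rigid_align(src, dst):
            """Rotation R and translation t such that dst ~= src @ R.T + t."""
            mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
            H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                 # guard against reflection
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, mu_d - R @ mu_s

        src = np.random.rand(100, 3)                 # synthetic "scan"
        a = np.radians(30)
        R_true = np.array([[np.cos(a), -np.sin(a), 0],
                           [np.sin(a),  np.cos(a), 0],
                           [0,          0,         1]])
        dst = src @ R_true.T + np.array([0.1, 0.0, 0.3])
        R, t = rigid_align(src, dst)
        print("max rotation error:", np.abs(R - R_true).max())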

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, through the actual sensors that are available, to the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
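    The event stream described above is naturally modelled as a list of (t, x, y, polarity) tuples, and the simplest processing step is to accumulate a time window of events into a 2D frame. A minimal sketch with synthetic events (not any particular camera's driver API):

        # Accumulating synthetic events into a per-pixel polarity frame.
        import numpy as np

        H, W, N = 4, 6, 200
        rng = np.random.default_rng(0)
        events = np.stack([
            np.sort(rng.uniform(0, 1e-3, N)),   # timestamps (s), microsecond scale
            rng.integers(0, W, N),              # x pixel coordinate
            rng.integers(0, H, N),              # y pixel coordinate
            rng.choice([-1, 1], N),             # polarity (sign of change)
        ], axis=1)

        def accumulate(events, t0, t1, shape):
            """Sum event polarities per pixel over [t0, t1)."""
            frame = np.zeros(shape)
            for t, x, y, p in events:
                if t0 <= t < t1:
                    frame[int(y), int(x)] += p
            return frame

        print(accumulate(events, 0.0, 5e-4, (H, W)))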

    Developing Predictive Models of Driver Behaviour for the Design of Advanced Driving Assistance Systems

    Worldwide, injuries in vehicle accidents have been on the rise in recent years, mainly due to driver error. The main objective of this research is to develop a predictive system for driving maneuvers by analyzing the cognitive (cephalo-ocular) behavior and the driving behavior of the driver (how the vehicle is being driven). Advanced Driving Assistance Systems (ADAS) include different driving functions, such as vehicle parking, lane departure warning, blind spot detection, and so on. While much research has been performed on developing automated co-driver systems, little attention has been paid to the fact that the driver plays an important role in driving events. Therefore, it is crucial to monitor events and factors that directly concern the driver. To this end, we perform a quantitative and qualitative analysis of driver behavior to find its relationship with driver intentionality and driving-related actions. We have designed and developed an instrumented vehicle (RoadLAB) that is able to record several synchronized streams of data, including the surrounding environment of the driver, vehicle functions, and driver cephalo-ocular behavior, such as gaze/head information. We subsequently analyze and study the behavior of several drivers to determine whether there is a meaningful relation between driver behavior and the next driving maneuver.
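    Framed as a learning problem, the maneuver-prediction task takes windows of synchronized cephalo-ocular and vehicle features and outputs the next maneuver class. The sketch below uses invented feature names and synthetic labels, not the RoadLAB data or the authors' model:

        # Toy maneuver classifier on synthetic gaze/vehicle features.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(1)
        # columns: [gaze_yaw, head_yaw, steering_angle, speed] (assumed features)
        X = rng.normal(size=(500, 4))
        # synthetic rule: sustained gaze yaw precedes a turn in that direction
        y = np.where(X[:, 0] > 0.5, "turn_left",
                     np.where(X[:, 0] < -0.5, "turn_right", "keep_lane"))

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[:400], y[:400])
        print("held-out accuracy:", clf.score(X[400:], y[400:]))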