17 research outputs found

    Night vision obstacle detection and avoidance based on Bio-Inspired Vision Sensors

    Moving towards autonomy, unmanned vehicles rely heavily on state-of-the-art collision avoidance systems (CAS). However, the detection of obstacles, especially at night-time, is still a challenging task, since the lighting conditions are not sufficient for traditional cameras to function properly. Therefore, we exploit the powerful attributes of event-based cameras to perform obstacle detection in low lighting conditions. Event cameras trigger events asynchronously at a high temporal output rate, with a high dynamic range of up to 120 dB. The proposed algorithm filters background activity noise and extracts objects using a robust Hough transform technique. The depth of each detected object is computed by triangulating 2D features extracted using LC-Harris. Finally, an asynchronous adaptive collision avoidance (AACA) algorithm is applied for effective avoidance. A qualitative evaluation compares the event camera against a traditional camera. Comment: Accepted to IEEE SENSORS 202
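    As a rough illustration of the depth computation described above, the sketch below triangulates a matched feature's depth from its stereo disparity. It is a minimal sketch, not the paper's implementation; the function name and the calibration values (focal length, baseline) are illustrative assumptions.

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Classic stereo triangulation: Z = f * B / d.

    x_left / x_right: horizontal pixel coordinates of the same feature
    (e.g., an LC-Harris detection) seen by the two cameras.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        return float("inf")  # mismatched feature or at (near) infinity
    return focal_px * baseline_m / disparity

# A feature at u=152 px (left) and u=148 px (right), with a 320 px focal
# length and a 10 cm baseline, lies at 320 * 0.10 / 4 = 8 m.
print(depth_from_disparity(152.0, 148.0, focal_px=320.0, baseline_m=0.10))
```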

    A Survey on Odometry for Autonomous Navigation Systems

    The development of a navigation system is one of the major challenges in building a fully autonomous platform. Full autonomy requires a dependable navigation capability not only in a perfect situation with clear GPS signals but also in situations where the GPS is unreliable. Therefore, self-contained odometry systems have attracted much attention recently. This paper provides a general and comprehensive overview of the state of the art in the field of self-contained, i.e., GPS-denied, odometry systems and identifies the open challenges that demand further research in the future. Self-contained odometry methods are categorized into five main types, i.e., wheel, inertial, laser, radar, and visual, where the categorization is based on the type of sensor data used for the odometry. Most of the research in the field focuses on analyzing the sensor data, exhaustively or partially, to extract the vehicle pose. Different combinations and fusions of sensor data, in a tightly or loosely coupled manner and with filtering-based or optimization-based fusion methods, have been investigated. We analyze the advantages and weaknesses of each approach in terms of different evaluation metrics, such as performance, response time, energy efficiency, and accuracy, which can serve as a useful guideline for researchers and engineers in the field. In the end, some future research challenges in the field are discussed.
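    As a concrete instance of the wheel-odometry category surveyed above, the sketch below integrates one dead-reckoning step for a differential-drive robot. It is a generic textbook formulation, not taken from the survey; the function name and the track width are assumptions.

```python
import math

def integrate_wheel_odometry(x, y, theta, d_left, d_right, track_width):
    """One dead-reckoning step for a differential-drive robot.

    d_left / d_right: distance travelled by each wheel since the last
    update (from encoder ticks); track_width: wheel separation [m].
    """
    d_center = (d_left + d_right) / 2.0          # forward motion
    d_theta = (d_right - d_left) / track_width   # heading change
    x += d_center * math.cos(theta + d_theta / 2.0)  # midpoint integration
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta

pose = (0.0, 0.0, 0.0)
for d_l, d_r in [(0.10, 0.10), (0.10, 0.12), (0.11, 0.11)]:
    pose = integrate_wheel_odometry(*pose, d_l, d_r, track_width=0.30)
print(pose)  # accumulated (x, y, heading) estimate
```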

    Energy-Efficient Mobile Robot Control via Run-time Monitoring of Environmental Complexity and Computing Workload

    We propose an energy-efficient controller that minimizes the energy consumption of a mobile robot by dynamically manipulating the robot's mechanical and computational actuators. The mobile robot performs real-time vision-based applications based on an event-based camera. The actuators of the controller are the CPU voltage/frequency for the computational part and the motor voltage for the mechanical part. We show that independently considering speed control of the robot and voltage/frequency control of the CPU does not necessarily result in an energy-efficient solution. In fact, to obtain the highest efficiency, the computational and mechanical parts should be controlled together in synergy. We propose a fast hill-climbing optimization algorithm that allows the controller to find the best CPU/motor configuration at run-time, whenever the mobile robot faces a new environment during its travel. Experimental results on a robot with brushless DC motors, a Jetson TX2 board as the computing unit, and a DAVIS-346 event-based camera show that the proposed control algorithm can save battery energy by an average of 50.5%, 41%, and 30% in low-complexity, medium-complexity, and high-complexity environments, respectively, compared with the baselines.
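    The sketch below shows the general shape of such a run-time search: greedy hill climbing over a discrete CPU-frequency x motor-voltage grid. It is a minimal sketch of the idea, not the paper's algorithm; the level values are made up, and the closed-form energy function stands in for an on-line battery-power measurement.

```python
def hill_climb(energy, start, cpu_levels, motor_levels):
    """Move to the lowest-energy neighbouring configuration until no
    neighbour improves; returns the chosen (CPU, motor) levels."""
    cfg = start
    while True:
        i, j = cfg
        neighbours = [(i + di, j + dj)
                      for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= i + di < len(cpu_levels)
                      and 0 <= j + dj < len(motor_levels)]
        best = min(neighbours + [cfg], key=energy)
        if best == cfg:
            return cpu_levels[i], motor_levels[j]
        cfg = best

cpu = [0.8, 1.0, 1.2, 1.4, 2.0]  # GHz (illustrative DVFS levels)
mot = [6.0, 7.4, 9.0, 11.1]      # V  (illustrative motor voltages)
# Toy convex energy landscape with its minimum at grid cell (2, 1);
# in the real controller this would be a measured power draw.
energy = lambda c: (c[0] - 2) ** 2 + (c[1] - 1) ** 2
print(hill_climb(energy, (0, 0), cpu, mot))  # -> (1.2, 7.4)
```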

    Estimation of network capacity and critical density by using re-sampled NFDs. Case study: a part of the Mashhad city road network

    The Network Fundamental Diagram (NFD) provides an aggregated, simple view of urban traffic networks, making it a robust tool for measuring traffic flow at the network scale. NFDs are characterized by different parameters, amongst which the critical density and capacity are vital for the implementation of network-wide traffic control strategies such as pricing and perimeter control. Since the heterogeneity effect exists in reality and casts uncertainty on the estimation of the NFD parameters, measuring and reducing this effect would improve the efficiency of network-wide traffic control strategies. In this paper, the NFD of Mashhad is first estimated by fusing data collected from inductive loop detectors (ILDs), an automatic vehicle location (AVL) system, and an automatic fare collection (AFC) system. As NFD estimation needs data from which average flows and densities/speeds can be derived directly or indirectly, the ILDs were exploited to estimate average flows and the AVL data were used to extract average speeds. Within this procedure, the AFC data were used to measure dwell times and cruising bus speeds. Finally, a random re-sampling method was applied to reduce the heterogeneity effect and reveal the congestion branch in the re-sampled NFDs. The outcomes highlight the applicability of this method for estimating the critical density and capacity under heterogeneous conditions with limited data. They also show that the method is simple to apply and requires few inputs. For validation, the relative errors of the estimated values were calculated, taking critical densities from daily observations as the actual values and those from full observations as the expected values.
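    As a rough sketch of reading the critical density and capacity off a re-sampled NFD, the code below bins (density, flow) samples, takes the bin with the highest mean flow, and bootstraps the estimate over random re-samples. This is a generic illustration, not the paper's procedure; all names and parameters are assumptions, and the usage example exercises it on a synthetic Greenshields-style NFD.

```python
import numpy as np

def nfd_capacity(density, flow, n_bins=30, n_boot=200, rng=None):
    """Bootstrap estimate of (critical density, capacity) from NFD samples."""
    rng = rng or np.random.default_rng(0)
    density, flow = np.asarray(density), np.asarray(flow)
    edges = np.linspace(density.min(), density.max(), n_bins + 1)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(density), len(density))  # re-sample with replacement
        d, q = density[idx], flow[idx]
        bins = np.clip(np.digitize(d, edges) - 1, 0, n_bins - 1)
        mean_q = np.array([q[bins == b].mean() if (bins == b).any() else -np.inf
                           for b in range(n_bins)])
        b_star = int(mean_q.argmax())  # bin with the highest mean flow
        estimates.append((0.5 * (edges[b_star] + edges[b_star + 1]), mean_q[b_star]))
    return tuple(np.mean(estimates, axis=0))  # in the units of the inputs

# Synthetic Greenshields NFD (free-flow speed 60, jam density 120) plus noise:
rng = np.random.default_rng(1)
k = rng.uniform(0, 120, 2000)
q = 60 * k * (1 - k / 120) + rng.normal(0, 80, 2000)
print(nfd_capacity(k, q))  # roughly (60, 1800)
```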

    Dynamic resource-aware corner detection for bio-inspired vision sensors

    Event-based cameras are vision devices that transmit only brightness changes, with low latency and ultra-low power consumption. Such characteristics make event-based cameras attractive for localization and object tracking in resource-constrained systems. Since the number of events generated by such cameras is huge, selecting and filtering the incoming events is beneficial both for increasing the accuracy of the features and for reducing the computational load. In this paper, we present an algorithm to detect asynchronous corners from a stream of events in real time on embedded systems. The algorithm is called the Three-Layer Filtering-Harris, or TLF-Harris, algorithm. It is based on an event-filtering strategy whose purpose is 1) to increase the accuracy by deliberately eliminating some incoming events, i.e., noise, and 2) to improve the real-time performance of the system, i.e., preserving a constant throughput in terms of input events per second, by discarding unnecessary events with a limited accuracy loss. An approximation of the Harris algorithm, in turn, is used to exploit its high-quality detection capability with a low-complexity implementation, enabling seamless real-time performance on embedded computing platforms. The proposed algorithm is capable of selecting the best corner candidate among its neighbors and achieves an average execution-time saving of 59% compared with the conventional Harris score. Moreover, our approach outperforms competing methods such as eFAST, eHarris, and FA-Harris in terms of real-time performance, and surpasses Arc* in terms of accuracy.
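    To make the Harris part concrete, the sketch below scores a small binary patch of the surface of active events and keeps an event only if its response beats a threshold and every neighbouring candidate. This is a generic Harris formulation, not the TLF-Harris approximation itself; the patch size, threshold, and function names are assumptions.

```python
import numpy as np

def harris_score(patch, k=0.04):
    """Harris corner response on a small binary patch (1 = recent event)."""
    gy, gx = np.gradient(patch.astype(float))     # image gradients
    ixx, iyy, ixy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    det = ixx * iyy - ixy * ixy                   # structure-tensor determinant
    trace = ixx + iyy
    return det - k * trace * trace

def is_corner(patch, neighbour_patches, threshold=0.1):
    """Best-candidate-among-neighbours selection."""
    s = harris_score(patch)
    return s > threshold and all(s >= harris_score(p) for p in neighbour_patches)

patch = np.zeros((7, 7))
patch[3:, 3:] = 1.0                 # an L-shaped corner of recent events
print(harris_score(patch) > 0.1)    # True: corner-like response
```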

    Twelfth International Conference on Machine Vision (ICMV 2019)

    Visual odometry (VO) is one of the most challenging techniques in computer vision for autonomous vehicles/vessels. In VO, the camera pose, which also represents the robot's pose during ego-motion, is estimated by analyzing the features and pixels extracted from the camera images. Different VO techniques mainly provide different trade-offs among the resources considered for odometry, such as camera resolution, computation/communication capacity, power/energy consumption, and accuracy. In this paper, a hybrid technique is proposed for camera pose estimation, combining triangulation-based odometry over long-term periods of direct-based odometry with short-term periods of inverse depth mapping. Experimental results based on the EuRoC data set show that the proposed technique significantly outperforms the traditional direct-based pose estimation method for Micro Aerial Vehicles (MAVs), while keeping its potential negative effect on performance negligible.
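    As a rough illustration of the triangulation building block mentioned above, the sketch below recovers one 3D point from two calibrated views with the standard linear (DLT) method. It is a textbook formulation, not the paper's hybrid scheme; the camera setup in the sanity check is invented.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: matched (u, v) pixels.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]  # null-space direction of A
    return X[:3] / X[3]          # de-homogenise

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Noise-free sanity check: two axis-aligned cameras 1 m apart.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])
X_true = np.array([0.5, 0.2, 4.0])
print(triangulate_point(P1, P2, project(P1, X_true), project(P2, X_true)))
```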