LiDAR and Camera Detection Fusion in a Real Time Industrial Multi-Sensor Collision Avoidance System
Collision avoidance is a critical task in many applications, such as ADAS
(advanced driver-assistance systems), industrial automation and robotics. In an
industrial automation setting, certain areas should be off limits to an
automated vehicle for protection of people and high-valued assets. These areas
can be quarantined by mapping (e.g., GPS) or via beacons that delineate a
no-entry area. We propose a delineation method where the industrial vehicle
utilizes a LiDAR (Light Detection and Ranging) and a single color camera to
detect passive beacons and model-predictive control to stop the vehicle from
entering a restricted space. The beacons are standard orange traffic cones with
a highly reflective vertical pole attached. The LiDAR can readily detect these
beacons, but suffers from false positives due to other reflective surfaces such
as worker safety vests. Herein, we put forth a method for reducing false
positive detection from the LiDAR by projecting the beacons in the camera
imagery via a deep learning method and validating the detection using a neural
network-learned projection from the camera to the LiDAR space. Experimental
data collected at Mississippi State University's Center for Advanced Vehicular
Systems (CAVS) shows the effectiveness of the proposed system in keeping the
true detections while mitigating false positives.
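A minimal sketch of the cross-validation idea described above: map camera beacon detections into the LiDAR frame and keep only the LiDAR candidates that agree with a camera detection. The linear mapping `W`, `b` is a stand-in for the paper's neural-network-learned camera-to-LiDAR projection, and the gating distance is an assumed parameter.

```python
import numpy as np

def validate_lidar_detections(lidar_xy, camera_uv, W, b, gate=0.5):
    """Keep LiDAR detections that lie within `gate` metres of a camera
    detection mapped into the LiDAR ground plane.

    lidar_xy  : (N, 2) candidate beacon positions from the LiDAR (m)
    camera_uv : (M, 2) beacon pixel centroids from the camera detector
    W, b      : learned 2x2 matrix / 2-vector projection (stand-in for
                the learned camera->LiDAR mapping)
    """
    if len(camera_uv) == 0:
        # no camera confirmation available: reject everything
        return np.zeros(len(lidar_xy), dtype=bool)
    mapped = camera_uv @ W.T + b          # camera detections in LiDAR frame
    # distance from every LiDAR candidate to every mapped camera detection
    d = np.linalg.norm(lidar_xy[:, None, :] - mapped[None, :, :], axis=2)
    return d.min(axis=1) < gate
```

A reflective safety vest seen only by the LiDAR produces a candidate with no nearby camera-confirmed beacon and is rejected by the gate.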
Applying Evolutionary Optimisation to Robot Obstacle Avoidance
This paper presents an artificial evolution-based method for stereo image
analysis and its application to real-time obstacle detection and avoidance for
a mobile robot. It uses the Parisian approach, which here consists of splitting
the representation of the robot's environment into a large number of simple
primitives, the "flies", which are evolved following a biologically inspired
scheme and give a fast, low-cost solution to the obstacle detection problem in
mobile robotics.
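The Parisian idea can be illustrated by the fitness of a single "fly": project the hypothesized 3D point into both rectified stereo images and score the photometric agreement. The patch size and the `1/(1+SAD)` score below are illustrative choices, not the exact fitness function from the paper.

```python
import numpy as np

def fly_fitness(left, right, u, v, disparity, half=2):
    """Fitness of one 'fly' (a 3D point hypothesis) in rectified stereo:
    compare the pixel neighbourhood at (u, v) in the left image with the
    disparity-shifted neighbourhood in the right image. High similarity
    (low sum of absolute differences) suggests the fly sits on a real
    surface."""
    ur = u - disparity                     # corresponding column on the right
    h, w = left.shape
    if not (half <= v < h - half and half <= u < w - half
            and half <= ur < w - half):
        return 0.0                         # patch would fall off the image
    pl = left[v-half:v+half+1, u-half:u+half+1].astype(float)
    pr = right[v-half:v+half+1, ur-half:ur+half+1].astype(float)
    sad = np.abs(pl - pr).sum()
    return 1.0 / (1.0 + sad)
```

Flies whose projections match well survive and reproduce; the surviving population then clusters on obstacle surfaces.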
Structured Light-Based 3D Reconstruction System for Plants.
Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection, and less than a 13-mm error for plant size, leaf size and internode distance.
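The multi-view point cloud registration step can be sketched as generic point-to-point ICP; the actual system likely uses a more elaborate, plant-specific variant, so this numpy/scipy version is only a stand-in.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=20):
    """Rigidly align `source` onto `target` with point-to-point ICP.
    Returns the transformed source cloud."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        # 1. correspondences: nearest target point for each source point
        _, idx = tree.query(src)
        tgt = target[idx]
        # 2. best rigid transform via Kabsch / SVD
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        H = (src - mu_s).T @ (tgt - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t               # 3. apply and iterate
    return src
```

Stereo point clouds from successive viewing angles would be chained through pairwise alignments like this to form the whole-plant model.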
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available
Overview of Environment Perception for Intelligent Vehicles
This paper presents a comprehensive literature review on environment perception for intelligent vehicles. The
state-of-the-art algorithms and modeling methods for intelligent
vehicles are given, with a summary of their pros and cons.
Special attention is paid to methods for lane and road detection,
traffic sign recognition, vehicle tracking, behavior analysis, and
scene understanding. In addition, we provide information about
datasets, common performance analysis, and perspectives on
future research directions in this area.
An improved "flies" method for stereo vision: application to pedestrian detection
In the vast research field of intelligent transportation systems, the problem of detecting (and recognizing) objects in the environment, for example pedestrians and vehicles, is indispensable but challenging. The research work presented in this paper is devoted to a stereo-vision based method with pedestrian detection as its application (a sub-part of the French national project “LOVe”: Logiciels d'Observation des Vulnérables). To benefit from an innovative method, the genetic evolutionary “flies” method proposed by earlier researchers, with its continuous data updating and asynchronous data reading, we have carried the “flies” method through the task of pedestrian detection within the “LOVe” project. Compared with previous work on the “flies” method, two main contributions have been incorporated into its architecture: first, an improved fitness function has been proposed to replace the original one; second, a technique coined “concentrating” has been integrated into the evolution procedure. The improved “flies” method is used to provide range information for possible objects in the detection field. The integrated pedestrian detection scheme is presented as well. Experimental results are given to validate the performance improvements brought by the improved “flies” method and the pedestrian detection method built on it.
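The “concentrating” technique is only named in the abstract; a plausible sketch is to replace the weakest flies with mutated copies of the strongest, focusing the population on high-fitness regions of 3D space. The fraction and mutation scale below are assumed parameters, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def concentrate(flies, fitness, frac=0.5, sigma=0.1):
    """Illustrative 'concentrating' step for a fly population.

    flies   : (N, 3) array of 3D point hypotheses
    fitness : (N,) fitness values (higher is better)
    """
    n = len(flies)
    k = int(n * frac)
    order = np.argsort(fitness)           # ascending: weakest first
    weak, strong = order[:k], order[-k:]
    # respawn each weak fly near a randomly chosen strong parent
    parents = flies[rng.choice(strong, size=k)]
    flies = flies.copy()
    flies[weak] = parents + rng.normal(0.0, sigma, (k, 3))
    return flies
```

Interleaving such a step with the usual evolutionary operators would concentrate the population on detected objects, here on likely pedestrians.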
Software Porting of a 3D Reconstruction Algorithm to Razorcam Embedded System on Chip
A method is presented to calculate depth information for a UAV navigation system from keypoints in two consecutive image frames, using a monocular camera sensor as input and the OpenCV library. This method was first implemented in software and run on a general-purpose Intel CPU, then ported to the RazorCam Embedded Smart-Camera System and run on an ARM CPU onboard the Xilinx Zynq-7000. The results of performance and accuracy testing of the software implementation are then shown and analyzed, demonstrating a successful port of the software to the RazorCam embedded system on chip that could potentially be used onboard a UAV with tight constraints on size, weight, and power. The potential impact will be seen through the continuation of this research in the Smart ES lab at the University of Arkansas.
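The core depth computation from matched keypoints reduces to two-view triangulation. Below is a numpy stand-in for `cv2.triangulatePoints` (which an OpenCV-based pipeline like the one described would typically call), using the standard linear DLT method; the projection matrices and match coordinates are assumed inputs.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched keypoint pair.

    P1, P2 : 3x4 camera projection matrices of the two frames
    x1, x2 : (u, v) pixel coordinates of the match in each frame
    Returns the 3D point in the reference frame of P1.
    """
    # each view contributes two linear constraints on the homogeneous point
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A, up to scale
    return X[:3] / X[3]        # dehomogenize
```

Per-keypoint depths obtained this way, scaled by the UAV's known motion between frames, yield the depth map used for navigation.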
A Comprehensive Review on Autonomous Navigation
The field of autonomous mobile robots has undergone dramatic advancements
over the past decades. Despite achieving important milestones, several
challenges are yet to be addressed. Aggregating the achievements of the
robotics community in survey papers is vital to keep track of the current
state-of-the-art and the challenges that must be tackled in the future. This
paper tries to provide a comprehensive review of autonomous mobile robots
covering topics such as sensor types, mobile robot platforms, simulation tools,
path planning and following, sensor fusion methods, obstacle avoidance, and
SLAM. The motivation for presenting a survey paper is twofold. First, the
field of autonomous navigation evolves fast, so writing survey papers
regularly is crucial to keep the research community aware of its current
status. Second, deep learning methods have revolutionized many fields,
including autonomous navigation. This paper therefore also gives an
appropriate treatment of the role of deep learning in autonomous navigation.
Future work and research gaps are discussed as well.