Detecting obstacles from camera image at open sea
While self-driving cars are a hot topic these days, fewer people know that the same level of automation is being developed in the maritime industry. To enhance safety on board and to ensure the optimal utilization of crew members, automated assistance solutions are being implemented on cargo ships and passenger vessels.
This thesis deals with a monocular camera-based system that is capable of detecting obstacles in open-sea scenarios and of estimating the distance and bearing of surrounding vessels. After a thorough review of existing methods and literature, an algorithm was developed consisting of three main parts. First, the real-world measurement data and camera images are preprocessed. Second, object detection is performed with the YOLO deep learning method, which runs at a high frame rate and can be used in real-time applications. Lastly, the distance and bearing of detected obstacles are estimated from geometric calculations that are validated against ground-truth measurement data.
Multiple weeks of recorded measurement data from a RoPax vessel operating out of Helsinki allowed testing and validation already during the development phase. Results show that the system's detection capability is strongly affected by image resolution, and that distance estimation is reliable up to 2-3 kilometers, while estimation errors grow at farther distances due to the physical limitations of cameras. In addition, as an interesting evaluation method, a survey was conducted with industry professionals to compare human distance estimation capability with the developed system. In conclusion, there is a significant need for, and great potential in, automated safety solutions in the maritime industry.
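The abstract does not give the estimation formulas, but monocular range and bearing over open water are commonly derived from flat-sea pinhole geometry: an obstacle sitting on the waterline appears some number of pixels below the horizon, and that offset, together with the camera's focal length and mounting height, fixes its range. A minimal sketch of that idea (all parameter names here are hypothetical, not taken from the thesis):

```python
import math

def estimate_distance_bearing(bbox_bottom_px, bbox_center_x_px,
                              horizon_y_px, image_center_x_px,
                              focal_px, camera_height_m):
    """Range and bearing to an object whose bounding box touches the
    waterline, assuming a pinhole camera over a flat sea.

    *_px arguments are pixel coordinates, focal_px is the focal length
    in pixels, camera_height_m is the mounting height above the water.
    """
    dy = bbox_bottom_px - horizon_y_px        # pixels below the horizon
    if dy <= 0:
        return None, None                     # at/above the horizon: range unresolvable
    distance_m = camera_height_m * focal_px / dy
    bearing_deg = math.degrees(
        math.atan2(bbox_center_x_px - image_center_x_px, focal_px))
    return distance_m, bearing_deg

# Example: camera 20 m above the water, focal length 1000 px, object
# 100 px below the horizon, dead ahead -> 200 m at 0 degrees bearing.
d, b = estimate_distance_bearing(600, 960, 500, 960, 1000, 20)
```

The hyperbolic dependence of range on the pixel offset `dy` also explains the reported error growth beyond 2-3 km: near the horizon, one pixel of localization error corresponds to hundreds of meters of range.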
An evaluation framework for stereo-based driver assistance
This is the post-print version of the article. Copyright © 2012 Springer Verlag.
The accuracy of stereo algorithms or optical flow methods is commonly assessed by comparing the results against the Middlebury database. However, equivalent data for automotive or robotics applications rarely exist, as they are difficult to obtain. As our main contribution, we introduce an evaluation framework tailored for stereo-based driver assistance that delivers meaningful performance measures while circumventing manual labeling effort. Within this framework one can combine several ways of ground truthing, different comparison metrics, and large image databases. Using our framework, we show examples of several ground-truthing techniques: implicit ground truthing (e.g. sequences recorded without a crash occurring), robotic vehicles with high-precision sensors, and, to a small extent, manual labeling. To show the effectiveness of our evaluation framework, we compare three different stereo algorithms on the pixel and object level. In more detail, we evaluate an intermediate representation called the Stixel World. Besides evaluating the accuracy of the Stixels, we investigate the completeness (equivalent to the detection rate) of the Stixel World versus the number of phantom Stixels. Among many findings, using this framework enables us to reduce the number of phantom Stixels by a factor of three compared to the base parametrization, which had itself already been optimized through test drives exceeding 10,000 km.
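The abstract's two object-level metrics, completeness (detection rate) and phantom count, can be sketched for a simplified Stixel representation in which each image column carries at most one Stixel described by its top row. This is an illustrative assumption; the paper's actual matching criterion and data layout are not specified in the abstract:

```python
def stixel_scores(detected, ground_truth, tol_px=3):
    """Completeness and phantom count for a one-Stixel-per-column model.

    `detected` and `ground_truth` map image column -> top row of the
    Stixel / obstacle. A detection matches if its top row lies within
    tol_px of the ground truth in the same column; detections in
    columns with no ground-truth obstacle are counted as phantoms.
    """
    matched = sum(
        1 for col, top in ground_truth.items()
        if col in detected and abs(detected[col] - top) <= tol_px
    )
    completeness = matched / len(ground_truth) if ground_truth else 1.0
    phantoms = sum(1 for col in detected if col not in ground_truth)
    return completeness, phantoms

# Two true obstacles both recovered, one spurious Stixel in column 5:
score = stixel_scores({0: 10, 1: 12, 5: 40}, {0: 11, 1: 12})
```

Framed this way, the paper's parameter tuning is a trade-off search: raising a confidence threshold lowers the phantom count at the risk of lowering completeness.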
Multi-Lane Perception Using Feature Fusion Based on GraphSLAM
An extensive, precise and robust recognition and modeling of the environment is a key factor for next generations of Advanced Driver Assistance Systems and the development of autonomous vehicles. In this paper, a real-time approach for the perception of multiple lanes on highways is proposed. Lane markings detected by camera systems and observations of other traffic participants provide the input data for the algorithm. The information is accumulated and fused using GraphSLAM, and the result constitutes the basis for a multilane clothoid model. To allow incorporation of additional information sources, input data is processed in a generic format. Evaluation of the method is performed by comparing real data, collected with an experimental vehicle on highways, to a ground truth map. The results show that ego and adjacent lanes are robustly detected with high quality up to a distance of 120 m. In comparison to serial lane detection, an increase in the detection range of the ego lane and a continuous perception of neighboring lanes is achieved. The method can potentially be utilized for the longitudinal and lateral control of self-driving vehicles.
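The clothoid lane model mentioned above is conventionally approximated in vehicle coordinates by a third-order polynomial in the longitudinal distance, parameterized by lateral offset, heading angle, curvature, and curvature rate. A sketch of that standard approximation (the paper's exact parameterization is not given in the abstract):

```python
def clothoid_lateral_offset(x, y0, heading, c0, c1):
    """Lateral offset y(x) of a lane boundary at longitudinal distance
    x, using the common third-order clothoid approximation:

        y(x) = y0 + heading * x + c0 * x**2 / 2 + c1 * x**3 / 6

    y0      -- lateral offset at x = 0 (m)
    heading -- heading angle relative to the lane (rad, small-angle)
    c0      -- curvature at x = 0 (1/m)
    c1      -- curvature rate (1/m^2)
    """
    return y0 + heading * x + 0.5 * c0 * x**2 + (c1 / 6.0) * x**3

# A lane boundary 1.5 m to the side, straight ahead, curving gently:
y_at_100m = clothoid_lateral_offset(100.0, 1.5, 0.0, 1e-4, 0.0)
```

Because each lane boundary is just four numbers under this model, fusing many noisy marking observations into one consistent estimate, as GraphSLAM does here, amounts to jointly fitting these parameters across accumulated measurements.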