Sea-Surface Object Detection Based on Electro-Optical Sensors: A Review
Sea-surface object detection is critical to the navigation safety of autonomous ships. Electro-optical (EO) sensors, such as video cameras, complement onboard radar in detecting small
sea-surface obstacles. Traditionally, researchers have used horizon detection, background subtraction, and
foreground segmentation techniques to detect sea-surface objects. Recently, deep learning-based object
detection technologies have been gradually applied to sea-surface object detection. This article presents a comprehensive overview of sea-surface object-detection approaches, comparing the advantages
and drawbacks of each technique are compared, covering four essential aspects: EO sensors and image
types, traditional object-detection methods, deep learning methods, and maritime dataset collection. In
particular, sea-surface object detection based on deep learning methods is thoroughly analyzed and
compared, with highly influential public datasets introduced as benchmarks to verify the effectiveness of
these approaches.
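The traditional pipeline named above (background subtraction followed by foreground segmentation) can be sketched with a simple running-average background model. This is a minimal illustration of the technique, not a method from the reviewed literature; the function names, blend factor, and threshold are illustrative choices:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model: blend each new frame in slowly."""
    return (1.0 - alpha) * background + alpha * frame

def detect_foreground(background, frame, threshold=25.0):
    """Pixels that differ strongly from the background model are foreground."""
    diff = np.abs(frame.astype(np.float64) - background)
    return diff > threshold

# Synthetic example: a calm "sea" with one bright object entering the scene.
sea = np.full((48, 64), 100.0)   # uniform background intensity
background = sea.copy()
frame = sea.copy()
frame[20:24, 30:34] = 200.0      # small bright obstacle (4x4 pixels)

mask = detect_foreground(background, frame)
print(int(mask.sum()))           # 16 foreground pixels detected
background = update_background(background, frame)
```

In practice, maritime scenes add wave motion, glint, and horizon drift, which is why the reviewed work layers horizon detection and learned models on top of this basic idea.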
Radars for Autonomous Driving: A Review of Deep Learning Methods and Challenges
Radar is a key component of the suite of perception sensors used for safe and
reliable navigation of autonomous vehicles. Its unique capabilities include
high-resolution velocity imaging, detection of agents in occlusion and over
long ranges, and robust performance in adverse weather conditions. However, the
usage of radar data presents some challenges: it is characterized by low
resolution, sparsity, clutter, high uncertainty, and lack of good datasets.
These challenges have limited radar deep learning research. As a result,
current radar models are often influenced by lidar and vision models, which are
focused on optical features that are relatively weak in radar data, thus
resulting in under-utilization of radar's capabilities and diminishing its
contribution to autonomous perception. This review seeks to encourage further
deep learning research on autonomous radar data by 1) identifying key research
themes, and 2) offering a comprehensive overview of current opportunities and
challenges in the field. Topics covered include early and late fusion,
occupancy flow estimation, uncertainty modeling, and multipath detection. The
paper also discusses radar fundamentals and data representation, presents a
curated list of recent radar datasets, and reviews state-of-the-art lidar and
vision models relevant for radar research. For a summary of the paper and more
results, visit the website: autonomous-radars.github.io
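The early/late fusion distinction the review covers can be illustrated with a toy sketch: early fusion merges sensor data or features before any decision is made, while late fusion combines per-sensor outputs afterward. The tensor shapes and fusion weights below are illustrative assumptions, not taken from any particular model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-sensor feature maps over the same spatial grid (H x W x C).
radar_feat = rng.standard_normal((8, 8, 4))
camera_feat = rng.standard_normal((8, 8, 16))

# Early fusion: combine features before any decision is made,
# e.g. by channel-wise concatenation into one joint tensor.
early = np.concatenate([radar_feat, camera_feat], axis=-1)
print(early.shape)  # (8, 8, 20)

# Late fusion: each sensor produces its own detection scores first,
# then the scores are merged (here, a simple weighted average).
radar_scores = rng.random((8, 8))
camera_scores = rng.random((8, 8))
late = 0.6 * camera_scores + 0.4 * radar_scores
print(late.shape)   # (8, 8)
```

Early fusion lets a network exploit cross-sensor correlations at the cost of tighter calibration requirements; late fusion degrades more gracefully when one sensor fails, which matters for the adverse-weather robustness the abstract highlights.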
Non-implementation of property rating practice, any impact on community healthcare in Bauchi Metropolis Nigeria?
The practice of rating real estate is essentially an internal revenue source, synonymous to tenement tax levied on the owner/occupier. Property rating in Nigeria is bedevilled by many factors that impeded its smooth implementation and operation, thus, this form of taxation yields zero revenue in Bauchi, due to failure of implementation. This study is aimed at measuring the impact of non-implementation of property rating on community healthcare in Bauchi metropolis of Nigeria. Two hundred and fifty (250) closed-ended questionnaires composed in five-level Likert scale were distributed to professionals in the field of real estate and facilities management, in the academia and estate firms, and two hundred and twenty one questionnaires (221) were mailed back for analysis. The Structural Equation Modelling (SEM) in IBM version of SPSS with AMOS was used to establish relationship between the variables. Findings from this study reveals that PRP does not command direct impact on community healthcare services, however, the services financed by property rating in the area of sanitation and sewage cleaning has the tendencies to curb the occurrence of diseases like cholera and malaria. Thus, it can be understood that a fully institutionalized practice of property rating could avert the outbreak of diseases
Satellite Navigation for the Age of Autonomy
Global Navigation Satellite Systems (GNSS) brought navigation to the masses.
Coupled with smartphones, the blue dot in the palm of our hands has forever
changed the way we interact with the world. Looking forward, cyber-physical
systems such as self-driving cars and aerial mobility are pushing the limits of
what localization technologies including GNSS can provide. This autonomous
revolution requires a solution that supports safety-critical operation,
centimeter positioning, and cyber-security for millions of users. To meet these
demands, we propose a navigation service from Low Earth Orbiting (LEO)
satellites which delivers precision in part through faster satellite motion,
higher-power signals for added robustness to interference, constellation
autonomous integrity monitoring, and encryption/authentication for
resistance to spoofing attacks. This paradigm is enabled by the 'New Space'
movement, where highly capable satellites and components are now built on
assembly lines and launch costs have fallen more than tenfold. Such a
ubiquitous positioning service enables a consistent and secure standard where
trustworthy information can be validated and shared, extending the electronic
horizon from sensor line of sight to an entire city. This enables the
situational awareness needed for true safe operation to support autonomy at
scale.
Comment: 11 pages, 8 figures, 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS)
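The "faster motion" claim can be made concrete with the circular-orbit speed formula v = sqrt(mu/r): a LEO satellite moves roughly twice as fast as a GPS satellite, and being far closer to the user, its sky geometry changes much more quickly. The 550 km LEO altitude below is a representative assumption, not a figure from the paper:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m

def circular_orbit_speed(altitude_m):
    """Speed of a circular orbit at the given altitude: v = sqrt(mu / r)."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

v_leo = circular_orbit_speed(550_000.0)     # a typical LEO altitude
v_meo = circular_orbit_speed(20_200_000.0)  # GPS MEO altitude
print(round(v_leo), round(v_meo))           # roughly 7589 and 3873 m/s
```

The rapidly changing geometry is what helps deliver precision quickly, since carrier-phase techniques benefit from fast-varying satellite-to-user line-of-sight directions.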
Map-Guided Curriculum Domain Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation
We address the problem of semantic nighttime image segmentation and improve
the state-of-the-art, by adapting daytime models to nighttime without using
nighttime annotations. Moreover, we design a new evaluation framework to
address the substantial uncertainty of semantics in nighttime images. Our
central contributions are: 1) a curriculum framework to gradually adapt
semantic segmentation models from day to night through progressively darker
times of day, exploiting cross-time-of-day correspondences between daytime
images from a reference map and dark images to guide the label inference in the
dark domains; 2) a novel uncertainty-aware annotation and evaluation framework
and metric for semantic segmentation, including image regions beyond human
recognition capability in the evaluation in a principled fashion; 3) the Dark
Zurich dataset, comprising 2416 unlabeled nighttime and 2920 unlabeled twilight
images with correspondences to their daytime counterparts plus a set of 201
nighttime images with fine pixel-level annotations created with our protocol,
which serves as a first benchmark for our novel evaluation. Experiments show
that our map-guided curriculum adaptation significantly outperforms
state-of-the-art methods on nighttime sets both for standard metrics and our
uncertainty-aware metric. Furthermore, our uncertainty-aware evaluation reveals
that selective invalidation of predictions can improve results on data with
ambiguous content such as our benchmark and benefit safety-oriented applications
involving invalid inputs.
Comment: IEEE T-PAMI 202
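The curriculum idea of contribution 1) can be illustrated with a deliberately simplified toy: a 1-D classifier adapted from "day" through "twilight" to "night" by self-training on its own pseudo-labels at each progressively darker stage. This is only a sketch of the gradual-adaptation principle; the paper's actual method uses map-guided cross-time-of-day correspondences and CNN segmentation models, none of which appear here:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_domain(shift, n=500):
    """Toy 1-D 'pixels': two classes around -3 and +3, darkened by `shift`."""
    x = np.concatenate([rng.normal(-3, 0.5, n), rng.normal(3, 0.5, n)]) - shift
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

def fit_threshold(x, y):
    """Fit the decision threshold as the midpoint of the two class means."""
    return (x[y == 0].mean() + x[y == 1].mean()) / 2

day_x, day_y = make_domain(0.0)      # labeled daytime data
twilight_x, _ = make_domain(2.0)     # unlabeled, moderately dark
night_x, night_y = make_domain(4.0)  # labels used only for evaluation

threshold = fit_threshold(day_x, day_y)    # ~0.0 on daytime data
for x_unlabeled in (twilight_x, night_x):  # progressively darker curriculum
    pseudo = (x_unlabeled > threshold).astype(float)  # self-training labels
    threshold = fit_threshold(x_unlabeled, pseudo)    # adapt to darker domain

acc_curriculum = ((night_x > threshold) == night_y).mean()
acc_direct = ((night_x > fit_threshold(day_x, day_y)) == night_y).mean()
print(acc_curriculum, acc_direct)  # curriculum near 1.0; direct day model ~0.5
```

Because each intermediate domain is only slightly shifted from the last, the pseudo-labels stay reliable at every step, whereas jumping straight from day to night makes the day model's predictions nearly useless on one class.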
Exploring the challenges and opportunities of image processing and sensor fusion in autonomous vehicles: A comprehensive review
Autonomous vehicles are at the forefront of future transportation solutions, but their success hinges on reliable perception. This review paper surveys image processing and sensor fusion techniques vital for ensuring vehicle safety and efficiency. The paper focuses on object detection, recognition, tracking, and scene comprehension via computer vision and machine learning methodologies. In addition, the paper explores challenges within the field, such as robustness in adverse weather conditions, the demand for real-time processing, and the integration of complex sensor data. Furthermore, we examine localization techniques specific to autonomous vehicles. The results show that while substantial progress has been made in each subfield, there are persistent limitations. These include a shortage of comprehensive large-scale testing, the absence of diverse and robust datasets, and occasional inaccuracies in certain studies. These issues impede the seamless deployment of this technology in real-world scenarios. This comprehensive literature review contributes to a deeper understanding of the current state and future directions of image processing and sensor fusion in autonomous vehicles, aiding researchers and practitioners in advancing the development of reliable autonomous driving systems.
A Self-Guided Docking Architecture for Autonomous Surface Vehicles
Autonomous Surface Vehicles (ASVs) provide an ideal platform to further explore the many opportunities in the cargo shipping industry by making it more profitable and safer. Information retrieved from a 3D LiDAR, an IMU, a GPS receiver, and a camera is combined to extract the geometric features of the floating platform and to estimate the position and orientation of the mooring facility relative to the ASV. A trajectory is then planned to a specific target position, guaranteeing that the ASV will not collide with the mooring facility. To ensure that the sensors are within their range of operation, a module has been developed to generate a trajectory that delivers the ASV to a catch zone where they can function properly. A high-level controller is also implemented, using a heuristic to evaluate whether the ASV is within this operating range and to assess its current orientation relative to the docking platform.
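The catch-zone check described above can be sketched as a relative-pose computation followed by a threshold heuristic. The thresholds, function names, and the example poses are illustrative assumptions, not values from the paper (angles are assumed already wrapped to [-pi, pi]):

```python
import math

def relative_pose(asv_xy, asv_heading, dock_xy, dock_heading):
    """Range, bearing, and relative orientation of the dock in the ASV frame."""
    dx, dy = dock_xy[0] - asv_xy[0], dock_xy[1] - asv_xy[1]
    rng = math.hypot(dx, dy)                    # distance to the dock
    bearing = math.atan2(dy, dx) - asv_heading  # where the dock appears
    rel_heading = dock_heading - asv_heading    # alignment mismatch
    return rng, bearing, rel_heading

def in_catch_zone(rng, bearing, rel_heading,
                  max_range=30.0, max_bearing=math.radians(45),
                  max_misalignment=math.radians(20)):
    """Heuristic check: sensors in range, dock in view, roughly aligned."""
    return (rng <= max_range
            and abs(bearing) <= max_bearing
            and abs(rel_heading) <= max_misalignment)

# Example: dock 20 m ahead and 5 m to port, tilted 10 degrees.
r, b, h = relative_pose((0.0, 0.0), 0.0, (20.0, 5.0), math.radians(10))
print(in_catch_zone(r, b, h))  # True: within range, bearing, and alignment
```

A guidance module like the one described would steer the ASV until such a check passes, after which the finer sensor-fusion docking pipeline can take over.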