Rain Removal in Traffic Surveillance: Does it Matter?
Varying weather conditions, including rainfall and snowfall, are generally
regarded as a challenge for computer vision algorithms. One proposed solution
to the challenges induced by rain and snowfall is to artificially remove the
rain from images or video using rain removal algorithms. These algorithms
promise that the rain-removed image frames will improve the performance of
subsequent segmentation and tracking algorithms. However, rain
removal algorithms are typically evaluated on their ability to remove synthetic
rain on a small subset of images. Currently, their behavior is unknown on
real-world videos when integrated with a typical computer vision pipeline. In
this paper, we review the existing rain removal algorithms and propose a new
dataset that consists of 22 traffic surveillance sequences under a broad
variety of weather conditions that all include either rain or snowfall. We
propose a new evaluation protocol that evaluates the rain removal algorithms on
their ability to improve the performance of subsequent segmentation, instance
segmentation, and feature tracking algorithms under rain and snow. If
successful, the de-rained frames of a rain removal algorithm should improve
segmentation performance and increase the number of accurately tracked
features. The results show that a recent single-frame-based rain removal
algorithm increases segmentation performance by 19.7% on our proposed dataset,
but it decreases feature tracking performance and shows mixed results with
recent instance segmentation methods. However, the best video-based rain
removal algorithm improves feature tracking accuracy by 7.72%.
Comment: Published in IEEE Transactions on Intelligent Transportation Systems
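The proposed evaluation protocol compares a downstream metric on rainy versus de-rained frames rather than measuring pixel fidelity against synthetic rain. A minimal sketch of that idea, assuming a binary segmentation task scored by intersection-over-union (the masks and the `iou` helper here are toy illustrations, not the paper's actual pipeline):

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-union of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

# Toy ground-truth mask and two hypothetical segmentations:
# one computed on the rainy frame, one on the de-rained frame.
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True
pred_rain = np.zeros_like(gt)
pred_rain[3:6, 3:6] = True        # rain streaks cause a partial miss
pred_derained = np.zeros_like(gt)
pred_derained[2:6, 2:7] = True    # closer to the ground truth

# Relative improvement attributable to the rain removal step.
gain = (iou(pred_derained, gt) - iou(pred_rain, gt)) / iou(pred_rain, gt) * 100
print(f"relative segmentation improvement: {gain:.1f}%")
```

The same pattern extends to the tracking criterion: count accurately tracked features with and without de-raining and report the relative change.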
Benchmarking the Robustness of LiDAR Semantic Segmentation Models
When using LiDAR semantic segmentation models for safety-critical
applications such as autonomous driving, it is essential to understand and
improve their robustness with respect to a large range of LiDAR corruptions. In
this paper, we aim to comprehensively analyze the robustness of LiDAR semantic
segmentation models under various corruptions. To rigorously evaluate the
robustness and generalizability of current approaches, we propose a new
benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR
corruptions in three groups, namely adverse weather, measurement noise and
cross-device discrepancy. Then, we systematically investigate 11 LiDAR semantic
segmentation models spanning different input representations (e.g., point
clouds, voxels, and projected images), network architectures, and training
schemes. Through this study, we obtain two insights: 1) We find that the input
representation plays a crucial role in robustness; specifically, different
representations behave differently under specific corruptions. 2)
Although state-of-the-art methods on LiDAR semantic segmentation achieve
promising results on clean data, they are less robust when dealing with noisy
data. Finally, based on the above observations, we design a robust LiDAR
segmentation model (RLSeg) which greatly boosts the robustness with simple but
effective modifications. We hope that our benchmark, comprehensive analysis,
and observations will foster future research in robust LiDAR semantic
segmentation for safety-critical applications.
Comment: IJCV 2024. The benchmark will be made available at
https://yanx27.github.io/RobustLidarSeg
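Robustness benchmarking of this kind applies a corruption to the input and summarizes the resulting performance drop. A hedged sketch under simplified assumptions, using Gaussian coordinate jitter as one stand-in corruption (the benchmark's 16 corruptions are defined by the paper, and the mIoU values below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def jitter(points, sigma=0.02):
    """Measurement-noise corruption: add Gaussian jitter to xyz coordinates.
    One illustrative corruption, not the benchmark's full corruption suite."""
    return points + rng.normal(0.0, sigma, size=points.shape)

def relative_drop(clean_miou, corrupted_miou):
    """Relative performance drop, a common robustness summary statistic."""
    return (clean_miou - corrupted_miou) / clean_miou

pts = rng.uniform(-10.0, 10.0, size=(1000, 3))  # toy LiDAR point cloud (xyz)
noisy = jitter(pts)

print("mean point displacement:", np.linalg.norm(noisy - pts, axis=1).mean())
# Hypothetical scores for a model on clean vs. corrupted scans.
print("relative mIoU drop:", relative_drop(0.65, 0.48))
```

Averaging such drops over all corruptions and severities yields a single robustness score per model, which is how representation- and architecture-level comparisons like those above are typically made.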
Continual Domain Adaptation on Aerial Images under Gradually Degrading Weather
Domain adaptation (DA) strives to mitigate the domain gap between the source
domain where a model is trained, and the target domain where the model is
deployed. When a deep learning model is deployed on an aerial platform, it may
face gradually degrading weather conditions during operation, leading to
widening domain gaps between the training data and the encountered evaluation
data. We synthesize two such gradually worsening weather conditions on real
images from two existing aerial imagery datasets, generating a total of four
benchmark datasets. Under the continual, or test-time, adaptation setting, we
evaluate three DA models on our datasets: a baseline standard DA model and two
continual DA models. In this setting, the models can access only a small
portion, or one batch, of the target data at a time, and adaptation takes place
continually, over only one epoch of the data. The combination of the
constraints of continual adaptation and gradually deteriorating weather
conditions provides a practical DA scenario for aerial deployment. Among the
evaluated models, we consider both convolutional and transformer architectures
for comparison. We discover stability issues during adaptation for existing
buffer-fed continual DA methods, and offer gradient normalization as a simple
solution to curb training instability.
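The gradient normalization mentioned above bounds the size of each adaptation step so a single noisy target batch cannot destabilize the model. A minimal numpy sketch of the idea, assuming a flat parameter vector (the actual method operates on deep-network gradients; the values here are illustrative):

```python
import numpy as np

def normalize_gradient(grad, eps=1e-8):
    """Rescale a gradient to unit L2 norm, capping the update magnitude
    regardless of how noisy the current batch's gradient is."""
    return grad / (np.linalg.norm(grad) + eps)

# Toy adaptation step: one small target batch produces an unusually
# large gradient, as can happen under heavy synthetic weather.
theta = np.array([1.0, -2.0, 0.5])
raw_grad = np.array([50.0, -30.0, 80.0])
lr = 0.1

step_unnormalized = lr * raw_grad                      # huge, destabilizing jump
step_normalized = lr * normalize_gradient(raw_grad)    # bounded step of size ~lr

print("unnormalized step size:", np.linalg.norm(step_unnormalized))
print("normalized step size:", np.linalg.norm(step_normalized))
```

Because the normalized step size is fixed at roughly the learning rate, batch-to-batch gradient variance no longer translates into parameter oscillation during continual adaptation.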
Energy-based Detection of Adverse Weather Effects in LiDAR Data
Autonomous vehicles rely on LiDAR sensors to perceive the environment.
Adverse weather conditions like rain, snow, and fog negatively affect these
sensors, reducing their reliability by introducing unwanted noise in the
measurements. In this work, we tackle this problem by proposing a novel
approach for detecting adverse weather effects in LiDAR data. We reformulate
this problem as an outlier detection task and use an energy-based framework to
detect outliers in point clouds. More specifically, our method learns to
associate low energy scores with inlier points and high energy scores with
outliers, allowing for robust detection of adverse weather effects. In extensive
experiments, we show that our method performs better in adverse weather
detection and has higher robustness to unseen weather effects than previous
state-of-the-art methods. Furthermore, we show how our method can be used to
perform simultaneous outlier detection and semantic segmentation. Finally, to
help expand the research field of LiDAR perception in adverse weather, we
release the SemanticSpray dataset, which contains labeled vehicle spray data in
highway-like scenarios. The dataset is available at
http://dx.doi.org/10.18725/OPARU-48815.
Comment: Accepted for publication in IEEE Robotics and Automation Letters
(RA-L).
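At inference time, an energy-based outlier detector of this kind reduces to thresholding per-point energy scores: points scored low are kept as inliers, points scored high are flagged as weather-induced noise. A hedged sketch with invented energy values (the real scores come from the learned network, and the threshold would be tuned on validation data):

```python
import numpy as np

def classify_by_energy(energies, threshold=0.0):
    """Flag points whose energy exceeds the threshold as outliers
    (adverse-weather noise); the rest are treated as inliers."""
    return energies > threshold

# Toy per-point energies: inliers (solid surfaces) were trained toward
# low energy, rain/spray returns toward high energy.
energies = np.array([-4.2, -3.8, -5.0, 2.1, 3.4, -4.5, 2.8])
is_outlier = classify_by_energy(energies, threshold=0.0)

print("outlier mask:", is_outlier)
print("num outliers:", int(is_outlier.sum()))
```

Combining this binary decision with per-point class logits is what enables the simultaneous outlier detection and semantic segmentation described above.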