Performance of object recognition in wearable videos
Wearable technologies are enabling plenty of new applications of computer vision, from life logging to health assistance. Many of them need to recognize the elements of interest in the scene captured by the camera. This work studies the problem of object detection and localization in videos captured by this type of camera. Wearable videos are a much more challenging scenario for object detection than standard images or even other types of video, due to lower image quality (e.g., poor focus) and the high clutter and occlusion common in wearable recordings. Existing work typically focuses on detecting the objects in the user's focus or those being manipulated by the user wearing the camera. We perform a more general evaluation of the task of object detection in this type of video, because numerous applications, such as marketing studies, also need to detect objects that are not in the user's focus. This work presents a thorough study of the well-known YOLO architecture, which offers an excellent trade-off between accuracy and speed, for the particular case of object detection in wearable video. We focus our study on the public ADL Dataset, but we also use additional public data for complementary evaluations. We run an exhaustive set of experiments with different variations of the original architecture and its training strategy. Our experiments lead to several conclusions about the most promising directions for our goal and point us to further research steps to improve detection in wearable videos.
Comment: Emerging Technologies and Factory Automation, ETFA, 201
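As an illustration of the kind of pipeline evaluated here, below is a minimal frame-by-frame detection sketch in Python. It uses the modern ultralytics YOLO package and OpenCV as convenient stand-ins (the paper studies the original YOLO architecture), and the checkpoint and video path are placeholders, not the paper's actual setup.

# Minimal sketch: frame-by-frame object detection on a wearable video.
# `ultralytics` YOLO is a stand-in for the Darknet-based YOLO evaluated
# in the paper; "adl_clip.mp4" is a hypothetical ADL video clip.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")              # any pretrained YOLO checkpoint
cap = cv2.VideoCapture("adl_clip.mp4")  # placeholder path

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Wearable footage is often blurry and cluttered, so a lower
    # confidence threshold may help recall at the cost of precision.
    results = model(frame, conf=0.25, verbose=False)[0]
    for box in results.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        label = model.names[int(box.cls[0])]
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, label, (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()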
Event Transformer+. A multi-purpose solution for efficient event data processing
Event cameras record sparse illumination changes with high temporal resolution and high dynamic range. Thanks to their sparse recording and low power consumption, they are increasingly used in applications such as AR/VR and autonomous driving. Current top-performing methods often ignore specific event-data properties, leading to the development of generic but computationally expensive algorithms, while event-aware methods do not perform as well. We propose Event Transformer+, which improves our seminal work EvT with a refined patch-based event representation and a more robust backbone to achieve more accurate results, while still benefiting from event-data sparsity to increase its efficiency. Additionally, we show how our system can work with different data modalities and propose specific output heads for event-stream predictions (i.e., action recognition) and per-pixel predictions (dense depth estimation). Evaluation results show better performance than the state of the art while requiring minimal computation resources, both on GPU and CPU.
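To make the idea of a sparse, patch-based event representation concrete, here is a minimal NumPy sketch: events accumulated over one time window are binned into per-patch polarity histograms, and only non-empty patches are kept as tokens. The sensor size, patch size, and histogram encoding are illustrative assumptions, not the exact design of Event Transformer+.

# Minimal sketch of a patch-based event representation, assuming events
# arrive as (x, y, polarity) arrays for one time window.
import numpy as np

def active_patches(x, y, p, sensor_hw=(180, 240), patch=12):
    """Accumulate one window of events into per-patch polarity histograms
    and return only the non-empty (active) patches plus their indices."""
    H, W = sensor_hw
    gh, gw = H // patch, W // patch
    # Two channels: positive and negative polarity counts per pixel.
    frame = np.zeros((2, H, W), dtype=np.float32)
    np.add.at(frame, (p.astype(int), y, x), 1.0)
    # Split the frame into a (gh*gw, 2*patch*patch) grid of patch tokens.
    tokens = (frame.reshape(2, gh, patch, gw, patch)
                   .transpose(1, 3, 0, 2, 4)
                   .reshape(gh * gw, -1))
    mask = tokens.sum(axis=1) > 0          # sparsity: drop empty patches
    return tokens[mask], np.flatnonzero(mask)

# Toy usage with random events on a 180x240 sensor.
rng = np.random.default_rng(0)
n = 5000
x = rng.integers(0, 240, n)
y = rng.integers(0, 180, n)
p = rng.integers(0, 2, n)
tok, idx = active_patches(x, y, p)
print(tok.shape, idx.shape)  # only active patches are kept as tokens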
Domain Adaptation in LiDAR Semantic Segmentation by Aligning Class Distributions
LiDAR semantic segmentation provides 3D semantic information about the
environment, an essential cue for intelligent systems during their decision-making processes. Deep neural networks are achieving state-of-the-art results
on large public benchmarks on this task. Unfortunately, finding models that
generalize well or adapt to additional domains, where data distribution is
different, remains a major challenge. This work addresses the problem of
unsupervised domain adaptation for LiDAR semantic segmentation models. Our
approach combines novel ideas on top of the current state-of-the-art approaches
and yields new state-of-the-art results. We propose simple but effective
strategies to reduce the domain shift by aligning the data distribution on the
input space. Besides, we propose a learning-based approach that aligns the
distribution of the semantic classes of the target domain to the source domain.
The presented ablation study shows how each part contributes to the final
performance. Our strategy is shown to outperform previous approaches for domain adaptation, with comparisons run on three different domains.
Comment: 7 pages, 3 figures
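As a concrete illustration of aligning class distributions, here is a minimal PyTorch sketch of one such idea: penalizing the divergence between the average class distribution predicted on unlabeled target scans and the class prior measured on the labeled source domain. The function names, KL formulation, and numbers are illustrative assumptions, not the paper's exact method.

# Minimal sketch: class-distribution alignment loss for unsupervised
# domain adaptation in semantic segmentation (illustrative, hypothetical).
import torch
import torch.nn.functional as F

def class_prior(labels, num_classes):
    """Empirical class distribution of the labeled (source) points."""
    counts = torch.bincount(labels.flatten(), minlength=num_classes).float()
    return counts / counts.sum()

def alignment_loss(target_logits, source_prior, eps=1e-8):
    """KL(source prior || mean predicted target distribution)."""
    probs = F.softmax(target_logits, dim=1)      # (N_points, C)
    mean_pred = probs.mean(dim=0).clamp_min(eps)
    return torch.sum(source_prior * (source_prior.clamp_min(eps).log()
                                     - mean_pred.log()))

# Toy usage: 10 classes, a batch of 4096 unlabeled target points.
C = 10
prior = class_prior(torch.randint(0, C, (100_000,)), C)
logits = torch.randn(4096, C, requires_grad=True)
loss = alignment_loss(logits, prior)
loss.backward()  # in training this would be added to the segmentation loss
print(float(loss))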
Analyzing and Decoding Natural Reach-and-Grasp Actions Using Gel, Water and Dry EEG Systems
Reaching and grasping is an essential part of everybody's life: it allows meaningful interaction with the environment and is key to an independent lifestyle. Recent electroencephalogram (EEG)-based studies have already shown that neural correlates of natural reach-and-grasp actions can be identified in the EEG. However, it remains an open question whether these results, obtained in a laboratory environment, can transfer to mobile EEG systems suitable for home use. In the current study, we investigated whether EEG-based correlates of natural reach-and-grasp actions can be successfully identified and decoded using mobile EEG systems, namely the water-based EEG-Versatile™ system and the dry-electrode EEG-Hero™ headset. In addition, we analyzed gel-based recordings obtained in a laboratory environment (g.USBamp/g.Ladybird, gold standard), which followed the same experimental parameters. For each recording system, 15 study participants performed 80 self-initiated reach-and-grasp actions toward a glass (palmar grasp) and a spoon (lateral grasp). Our results confirmed that EEG-based correlates of reach-and-grasp actions can be successfully identified using these mobile systems. In a single-trial multiclass-based decoding approach, which incorporated both movement conditions and rest, we could show that the low-frequency time-domain (LFTD) correlates were also decodable. Grand-average peak accuracy calculated on unseen test data was 62.3% (9.2% STD) for the water-based electrode system, 56.4% (8% STD) for the dry-electrode headset, and 61.3% (8.6% STD) for the gel-based electrode system. To foster and promote further investigations in the field of EEG-based movement decoding, as well as to allow the interested community to draw their own conclusions, we have made all datasets publicly available in the BNCI Horizon 2020 database (http://bnci-horizon-2020.eu/database/data-sets).
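To illustrate what a low-frequency time-domain (LFTD) decoding pipeline can look like, here is a minimal Python sketch: epochs are band-pass filtered to the low-frequency range, downsampled, flattened, and classified with shrinkage LDA. The filter band, decimation factor, and classifier follow common practice in this literature and are assumptions, not necessarily the study's exact setup.

# Minimal LFTD decoding sketch, assuming epoched EEG of shape
# (trials, channels, samples) at 256 Hz with integer class labels.
import numpy as np
from scipy.signal import butter, filtfilt, decimate
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def lftd_features(epochs, fs=256, band=(0.3, 3.0), q=8):
    """Band-pass to the low-frequency range, downsample, flatten."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)
    reduced = decimate(filtered, q, axis=-1, zero_phase=True)
    return reduced.reshape(len(epochs), -1)

# Toy usage on random data standing in for real recordings.
rng = np.random.default_rng(0)
X = rng.standard_normal((240, 32, 512))  # 240 trials, 32 ch, 2 s @ 256 Hz
y = rng.integers(0, 3, 240)              # rest / palmar / lateral
feats = lftd_features(X)
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print(cross_val_score(clf, feats, y, cv=5).mean())  # chance level ~0.33 here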