63 research outputs found
DeepSignals: Predicting Intent of Drivers Through Visual Signals
Detecting the intention of drivers is an essential task in self-driving,
necessary to anticipate sudden events like lane changes and stops. Turn signals
and emergency flashers communicate such intentions, providing seconds of
potentially critical reaction time. In this paper, we propose to detect these
signals in video sequences by using a deep neural network that reasons about
both spatial and temporal information. Our experiments on more than a million
frames show high per-frame accuracy in very challenging scenarios.
Comment: To be presented at the IEEE International Conference on Robotics and Automation (ICRA), 2019.
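The abstract does not spell out the architecture, but the core idea of combining per-frame spatial features with temporal reasoning can be sketched as follows; the CNN backbone, LSTM sizes, and the four-way class set are illustrative assumptions, not the paper's specification.

```python
# A minimal sketch of per-frame signal-state classification with spatial +
# temporal reasoning; NOT the authors' architecture. Layer sizes and the
# class set (off / left / right / hazard) are assumptions for illustration.
import torch
import torch.nn as nn

class SignalClassifier(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.backbone = nn.Sequential(          # spatial reasoning per frame
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(64, 128, batch_first=True)  # temporal reasoning
        self.head = nn.Linear(128, num_classes)

    def forward(self, clip):                    # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1)).flatten(1)  # (B*T, 64)
        seq, _ = self.lstm(feats.view(b, t, -1))              # (B, T, 128)
        return self.head(seq)                   # per-frame logits: (B, T, C)
```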
LSTA: Long Short-Term Attention for Egocentric Action Recognition
Egocentric activity recognition is one of the most challenging tasks in video
analysis. It requires a fine-grained discrimination of small objects and their
manipulation. While some methods rely on strong supervision and attention
mechanisms, they are either annotation-intensive or do not take spatio-temporal
patterns into account. In this paper we propose LSTA as a mechanism to focus on
features from spatially relevant parts while attention is tracked smoothly
across the video sequence. We demonstrate the effectiveness of LSTA on
egocentric activity recognition with an end-to-end trainable two-stream
architecture, achieving state-of-the-art performance on four standard benchmarks.
Comment: Accepted to CVPR 2019.
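As a rough illustration of attention that is "tracked smoothly across the video sequence", the sketch below carries a spatial attention map across frames with a momentum term. It is a simplified stand-in, not the actual LSTA cell, and the momentum parameter is an assumption.

```python
# A minimal sketch of recurrently tracked spatial attention (NOT the LSTA
# cell): each frame's attention map is smoothed toward the previous one
# before pooling features. The momentum value is an assumed hyperparameter.
import torch
import torch.nn as nn

class TrackedSpatialAttention(nn.Module):
    def __init__(self, channels, momentum=0.5):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, 1)  # per-location relevance score
        self.momentum = momentum                # temporal smoothing (assumed)

    def forward(self, frames):                  # frames: (B, T, C, H, W)
        pooled, prev = [], None
        for x in frames.unbind(1):              # iterate over time steps
            s = self.score(x)                                   # (B, 1, H, W)
            attn = torch.softmax(s.flatten(2), -1).view_as(s)   # spatial softmax
            if prev is not None:                # track attention smoothly
                attn = self.momentum * prev + (1 - self.momentum) * attn
            prev = attn
            pooled.append((x * attn).sum(dim=(2, 3)))  # attended descriptor (B, C)
        return torch.stack(pooled, dim=1)       # (B, T, C)
```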
Deep Learning Techniques for Video Instance Segmentation: A Survey
Video instance segmentation, also known as multi-object tracking and
segmentation, is an emerging computer vision research area introduced in 2019,
aiming at detecting, segmenting, and tracking instances in videos
simultaneously. By tackling the video instance segmentation tasks through
effective analysis and utilization of visual information in videos, a range of
computer vision-enabled applications (e.g., human action recognition, medical
image processing, autonomous vehicle navigation, surveillance, etc.) can be
implemented. As deep-learning techniques take a dominant role in various
computer vision areas, a plethora of deep-learning-based video instance
segmentation schemes have been proposed. This survey offers a multifaceted view
of deep-learning schemes for video instance segmentation, covering various
architectural paradigms, along with comparisons of functional performance,
model complexity, and computational overheads. In addition to the common
architectural designs, auxiliary techniques for improving the performance of
deep-learning models for video instance segmentation are compiled and
discussed. Finally, we discuss a range of major challenges and directions for
further investigations to help advance this promising research field.
A Deep Learning Approach to Object Affordance Segmentation
Learning to understand and infer object functionalities is an important step
towards robust visual intelligence. Significant research efforts have recently
focused on segmenting the object parts that enable specific types of
human-object interaction, the so-called "object affordances". However, most
works treat it as a static semantic segmentation problem, focusing solely on
object appearance and relying on strong supervision and object detection. In
this paper, we propose a novel approach that exploits the spatio-temporal
nature of human-object interaction for affordance segmentation. In particular,
we design an autoencoder that is trained using ground-truth labels of only the
last frame of the sequence, and is able to infer pixel-wise affordance labels
in both videos and static images. Our model obviates the need for object
labels and bounding boxes by using a soft-attention mechanism that enables
implicit localization of the interaction hotspot. For evaluation purposes, we
introduce the SOR3D-AFF corpus, which consists of human-object interaction
sequences and provides pixel-wise annotations for 9 types of affordances,
covering typical manipulations of tool-like objects. We show that
our model achieves competitive results compared to strongly supervised methods
on SOR3D-AFF, while being able to predict affordances for similar unseen
objects on two image-only affordance datasets.
Comment: 5 pages, 4 figures, ICASSP 2020.
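To make the supervision scheme concrete, here is a minimal sketch of training a sequence-to-mask model from ground-truth labels of only the last frame; the model interface, names, and shapes are assumed for illustration and do not reproduce the paper's architecture.

```python
# A minimal sketch of last-frame-only supervision: the model consumes the
# whole interaction clip but is penalized only against the final frame's
# pixel-wise affordance labels. Interface and shapes are assumptions.
import torch
import torch.nn.functional as F

def last_frame_loss(model, clip, last_frame_labels):
    """clip: (B, T, 3, H, W); last_frame_labels: (B, H, W) affordance ids."""
    logits = model(clip)   # assumed to return per-pixel logits (B, A, H, W)
    return F.cross_entropy(logits, last_frame_labels)
```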
Pedestrian Attribute Recognition: A Survey
Recognizing pedestrian attributes is an important task in the computer vision
community because it plays a key role in video surveillance. Many
algorithms have been proposed to handle this task. The goal of this paper is to
review existing works, whether based on traditional methods or on deep learning
networks. Firstly, we introduce the background of pedestrian attribute
recognition (PAR, for short), including the fundamental concepts of pedestrian
attributes and the corresponding challenges. Secondly, we introduce existing
benchmarks, including popular datasets and evaluation criteria. Thirdly, we
analyse the concepts of multi-task learning and multi-label learning, and
explain the relations between these two learning paradigms and pedestrian
attribute recognition. We also review some popular network architectures which
have been widely applied in the deep learning community. Fourthly, we analyse
popular solutions for this task, such as attribute grouping, part-based models,
etc. Fifthly, we show some applications that take pedestrian
attributes into consideration and achieve better performance. Finally, we
summarize the paper and give several possible research directions for
pedestrian attribute recognition. The project page of this paper can be found
at https://sites.google.com/view/ahu-pedestrianattributes/.
Comment: Check our project page for a high-resolution version of this survey:
https://sites.google.com/view/ahu-pedestrianattributes
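Since the survey relates PAR to multi-label learning, a minimal sketch may help: each attribute is an independent binary decision over a shared feature, trained with per-attribute binary cross-entropy. The feature and attribute dimensions below are illustrative, not tied to any specific benchmark.

```python
# PAR cast as multi-label classification: one binary logit per attribute on
# top of a shared feature. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class AttributeHead(nn.Module):
    def __init__(self, feat_dim=2048, num_attributes=26):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_attributes)

    def forward(self, feats):        # feats: (B, feat_dim) from any backbone
        return self.fc(feats)        # one logit per attribute

criterion = nn.BCEWithLogitsLoss()   # independent sigmoid per attribute
```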
Utilizing Synthetic Data for 3D Hand Pose Estimation
Thesis (Ph.D.) -- Seoul National University, Graduate School of Convergence Science and Technology, Department of Convergence Science (Intelligent Convergence Systems major), August 2021.
3D hand pose estimation (HPE) based on RGB images has been studied for a long time. Relevant methods have focused mainly on optimizing neural frameworks for the graphically connected finger joints. RGB-based HPE models have not been easy to train because of the scarcity of RGB hand pose datasets: unlike the joints in human body pose datasets, the finger joints that span hand postures are structured delicately and intricately. Such structure makes it difficult to accurately annotate each joint with unique 3D world coordinates, which is why many conventional methods rely on synthetic data samples to cover large variations of hand postures.
Synthetic datasets offer very precise ground-truth annotations and allow control over the variety of data samples, so a learning model can be trained over a large pose space. Most studies, however, have performed frame-by-frame estimation based on independent static images. Synthetic visual data can provide practically infinite diversity and rich labels, while avoiding ethical issues with privacy and bias. However, for many tasks, current models trained on synthetic data generalize poorly to real data. The task of 3D human hand pose estimation is a particularly interesting example of this synthetic-to-real problem, because learning-based approaches perform reasonably well given real training data, yet labeled 3D poses are extremely difficult to obtain in the wild, limiting scalability.
In this dissertation, we attempt not only to consider the appearance of a hand but also to incorporate the temporal movement information of a hand in motion into the learning framework for better 3D hand pose estimation performance, which creates the need for a large-scale dataset of sequential RGB hand images.
We propose a novel method that generates a synthetic dataset mimicking natural human hand movements by re-engineering the annotations of an existing static hand pose dataset into pose-flows. With the generated dataset, we train a newly proposed recurrent framework that exploits visuo-temporal features from sequential images of synthetic hands in motion and emphasizes temporal smoothness of estimations through a temporal consistency constraint. Our novel training strategy of detaching the recurrent layer of the framework during domain finetuning from synthetic to real data preserves the visuo-temporal features learned from sequential synthetic hand images. The sequentially estimated hand poses consequently form natural and smooth hand movements, which leads to more robust estimations. We show that utilizing temporal information for 3D hand pose estimation significantly enhances general pose estimation, outperforming state-of-the-art methods in experiments on hand pose estimation benchmarks.
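One plausible form of the temporal consistency constraint mentioned above is a penalty on frame-to-frame jumps in the predicted joints; the sketch below is an assumption about the loss shape, not the dissertation's exact formulation.

```python
# A minimal sketch of a temporal consistency term: penalize frame-to-frame
# jumps in predicted 3D joint positions. One plausible form, assumed here;
# not the dissertation's exact loss.
import torch

def temporal_consistency(joints):    # joints: (B, T, J, 3) predicted 3D joints
    return (joints[:, 1:] - joints[:, :-1]).pow(2).mean()
```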
Since a fixed dataset provides only a finite distribution of data samples, the generalization of a learned pose estimation network is limited in terms of the pose, RGB, and viewpoint spaces. We further propose to augment the data automatically, such that augmented pose sampling favors the generalization performance of the trained pose estimators. This auto-augmentation of poses is performed within a learned feature space in order to avoid the computational burden of generating a synthetic sample at every update iteration; the proposed effort can be considered as generating and utilizing synthetic samples for network training directly in the feature space (see the sketch following this entry). This improves training efficiency by requiring fewer real data samples, strengthens generalization across multiple dataset domains, and enhances estimation performance through efficient augmentation.
Abstract (Korean, translated): Research on recognizing the shape and pose of a human hand from 2D images aims to detect the 3D locations of the finger joints. A hand pose consists of the finger joints, the anatomical elements that make up the human hand from the wrist joint through the MCP, PIP, and DIP joints. Hand pose information can be exploited in many fields, and in gesture recognition research in particular it serves as a highly effective input feature.
To apply hand pose estimation research to real systems, a model needs high accuracy, real-time operation, and a footprint light enough to run on diverse devices, and training a neural network model that meets these requirements demands large amounts of data. However, the devices that measure hand poses are fairly unstable, and images of hands wearing such devices differ greatly from bare, skin-colored hands, making them unsuitable for training. For this reason, this dissertation re-engineers and augments synthetically generated data for training, aiming thereby at better learning outcomes.
Synthetically generated hand images may approximate real skin color, but their fine textures differ considerably, so a model trained on synthetic data performs markedly worse on real hand data. To reduce the gap between the two domains, we first re-engineer hand shapes so that the network learns the movement structure of the human hand, and then finetune on real hand images only the parts that do not carry the learned visuo-temporal information, which proved highly effective; this constitutes a methodology for mimicking real human hand movements.
Second, we align the data of the two different domains in the network's feature space. Moreover, rather than augmenting synthetic poses from specific data, we formulate a probabilistic model that produces poses the network has rarely seen, and propose a structure that samples from it.
In summary, this dissertation proposes methods that use synthetic data more effectively, generating synthetic samples without the labor of collecting additional hard-to-annotate real data, and improving pose estimation performance by exploiting stable spatial and temporal features. We also propose an automatic data augmentation method that lets the network find and learn the data it needs on its own. Combining the proposed methods yields further improvements in hand pose estimation performance.
1. Introduction
2. Related Works
3. Preliminaries: 3D Hand Mesh Model
4. SeqHAND: RGB-sequence-based 3D Hand Pose and Shape Estimation
5. Hand Pose Auto-Augment
6. Conclusion
Abstract (Korean)
Acknowledgements
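As referenced in the augmentation paragraph above, generating synthetic samples in feature space can be sketched minimally as additive noise with a learned spread over pose features; this is only an illustrative stand-in for the dissertation's probabilistic sampling model.

```python
# A minimal sketch of augmentation in feature space: perturb pose features
# with a learnable noise scale so "new" samples cost no extra rendering.
# Purely illustrative; the dissertation's actual sampling model is not
# specified here.
import torch
import torch.nn as nn

class FeatureSpaceAugment(nn.Module):
    def __init__(self, feat_dim=512):           # dimension is an assumption
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(feat_dim))  # learned spread

    def forward(self, feats):                    # feats: (B, feat_dim)
        noise = torch.randn_like(feats) * self.log_scale.exp()
        return feats + noise                     # synthetic feature-space sample
```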
STMT: A Spatial-Temporal Mesh Transformer for MoCap-Based Action Recognition
We study the problem of human action recognition using motion capture (MoCap)
sequences. Unlike existing techniques that take multiple manual steps to derive
standardized skeleton representations as model input, we propose a novel
Spatial-Temporal Mesh Transformer (STMT) to directly model the mesh sequences.
The model uses a hierarchical transformer with intra-frame offset attention
and inter-frame self-attention. The attention mechanism allows the model to
freely attend between any two vertex patches to learn non-local relationships
in the spatial-temporal domain. Masked vertex modeling and future frame
prediction are used as two self-supervised tasks to fully activate the
bi-directional and auto-regressive attention in our hierarchical transformer.
The proposed method achieves state-of-the-art performance compared to
skeleton-based and point-cloud-based models on common MoCap benchmarks. Code is
available at https://github.com/zgzxy001/STMT.
Comment: CVPR 2023.
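For readers unfamiliar with offset attention, the sketch below shows the form commonly used for point-cloud transformers (a residual update computed from the difference between the input and its self-attended version), applied here to vertex-patch tokens; it is a generic rendition, not the authors' released code.

```python
# A minimal sketch of offset attention: the output refines the offset
# between the input tokens and their self-attended version. Generic form,
# NOT the STMT release; head count and projection are assumptions.
import torch
import torch.nn as nn

class OffsetAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

    def forward(self, x):                   # x: (B, N, dim) vertex-patch tokens
        attended, _ = self.attn(x, x, x)    # standard self-attention
        return x + self.proj(x - attended)  # residual on the attention offset
```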
Deep Learning for Video Object Segmentation: A Review
As one of the fundamental problems in the field of video understanding, video object segmentation aims at segmenting objects of interest throughout the given video sequence. Recently, with the advancement of deep learning techniques, deep neural networks have shown outstanding performance improvements in many computer vision applications, with video object segmentation being one of the most advocated and intensively investigated. In this paper, we present a systematic review of the deep learning-based video segmentation literature, highlighting the pros and cons of each category of approaches. Concretely, we start by introducing the definition, background concepts and basic ideas of algorithms in this field. Subsequently, we summarise the datasets for training and testing a video object segmentation algorithm, as well as common challenges and evaluation metrics. Next, previous works are grouped and reviewed based on how they extract and use spatial and temporal features, and their architectures, contributions and mutual differences are elaborated. Lastly, the quantitative and qualitative results of several representative methods on a dataset with many remaining challenges are provided and analysed, followed by further discussion of future research directions. This article is expected to serve as a tutorial and source of reference for learners who intend to quickly grasp the current progress in this research area, and for practitioners interested in applying video object segmentation methods to their problems. A public website is built to collect and track the related works in this field: https://github.com/gaomingqi/VOS-Review
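As a small companion to the evaluation-metrics discussion, region similarity (the Jaccard index J) is one of the standard VOS metrics; a direct implementation:

```python
# Region similarity (Jaccard index J), a standard VOS evaluation metric:
# intersection-over-union between a predicted mask and the ground truth.
import numpy as np

def region_similarity(pred, gt):    # boolean masks of equal shape
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0  # empty-vs-empty counts as perfect
```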
- …