Deep Learning Techniques for Video Instance Segmentation: A Survey
Video instance segmentation, also known as multi-object tracking and
segmentation, is an emerging computer vision research area introduced in 2019,
aiming at detecting, segmenting, and tracking instances in videos
simultaneously. By tackling the video instance segmentation tasks through
effective analysis and utilization of visual information in videos, a range of
computer vision-enabled applications (e.g., human action recognition, medical
image processing, autonomous vehicle navigation, and surveillance) can be
implemented. As deep-learning techniques take a dominant role in various
computer vision areas, a plethora of deep-learning-based video instance
segmentation schemes have been proposed. This survey offers a multifaceted view
of deep-learning schemes for video instance segmentation, covering various
architectural paradigms, along with comparisons of functional performance,
model complexity, and computational overheads. In addition to the common
architectural designs, auxiliary techniques for improving the performance of
deep-learning models for video instance segmentation are compiled and
discussed. Finally, we discuss a range of major challenges and directions for
further investigations to help advance this promising research field.
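To make the task definition above concrete, the following minimal sketch (plain Python with NumPy; all names and thresholds are illustrative rather than taken from any surveyed method) shows the track-by-segment pattern that many video instance segmentation pipelines share: segment instances in each frame, then associate them across frames by mask overlap.

    import numpy as np

    def mask_iou(a, b):
        """Intersection-over-union of two boolean masks of equal shape."""
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union > 0 else 0.0

    def associate(tracks, masks, iou_thr=0.5):
        """Greedily match current-frame masks to existing tracks by mask IoU;
        unmatched masks start new instance identities."""
        assignments = {}
        used = set()
        for mask_id, mask in enumerate(masks):
            best_tid, best_iou = None, iou_thr
            for tid, last_mask in tracks.items():
                if tid in used:
                    continue
                iou = mask_iou(last_mask, mask)
                if iou > best_iou:
                    best_tid, best_iou = tid, iou
            if best_tid is None:
                best_tid = max(tracks, default=-1) + 1  # new instance identity
            assignments[mask_id] = best_tid
            used.add(best_tid)
            tracks[best_tid] = mask
        return assignments

    tracks = {}                          # track id -> last seen mask
    frame = [np.zeros((4, 4), bool)]
    frame[0][:2, :2] = True
    print(associate(tracks, frame))      # {0: 0} -- first mask opens track 0

A learned model would supply the per-frame masks, and stronger systems associate instances via appearance embeddings rather than raw mask IoU, but the detect-segment-associate structure is the same.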
Advances in Object and Activity Detection in Remote Sensing Imagery
The recent revolution in deep learning has enabled considerable development in the fields of object and activity detection. Visual object detection tries to find objects of target classes with precise localisation in an image and assign each object instance a corresponding class label. At the same time, activity recognition aims to determine the actions or activities of an agent or group of agents based on sensor or video observation data. Detecting, identifying, tracking, and understanding the behaviour of objects through images and videos taken by various cameras is an important and challenging problem. Together, the recognition of objects and their activities in imaging data captured by remote sensing platforms is a highly dynamic and challenging research topic. During the last decade, there has been significant growth in the number of publications in the field of object and activity recognition. In particular, many researchers have proposed application domains for identifying objects and their specific behaviours from airborne and spaceborne imagery. This Special Issue includes papers that explore novel and challenging topics for object and activity detection in remote sensing images and videos acquired by diverse platforms.
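As a toy illustration of the activity half of this problem, the sketch below (Python; the thresholds, frame rate, and ground sample distance are hypothetical assumptions, not taken from any paper in the Special Issue) assigns a coarse activity label to a tracked agent from its trajectory alone, by converting pixel motion into ground speed.

    import numpy as np

    def classify_activity(track, fps=25.0, gsd=0.5):
        """Toy activity labeller for a remote-sensing track.

        track : sequence of (x, y) pixel centroids, one per frame
        fps   : video frame rate (assumed)
        gsd   : ground sample distance in metres per pixel (assumed)
        """
        steps = np.diff(np.asarray(track, dtype=float), axis=0)
        speed = np.linalg.norm(steps, axis=1).mean() * fps * gsd  # m/s
        if speed < 0.2:
            return "stationary"
        if speed < 3.0:
            return "walking"
        return "vehicle"

    walk = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0), (0.3, 0.0)]
    print(classify_activity(walk))  # walking (~1.25 m/s at these settings)

Real systems replace such hand-set speed thresholds with learned temporal models, but the pipeline shape (detect, track, then classify behaviour from the track) is the common one.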
Object Tracking
Object tracking consists of estimating the trajectories of moving objects in a sequence of images. Automating object tracking by computer is a difficult task: the dynamics of the many changing parameters that represent the features and motion of the objects, as well as temporary partial or full occlusion of the tracked objects, have to be considered. This monograph presents the development of object tracking algorithms, methods, and systems. Both the state of the art of object tracking methods and new trends in research are described in this book. Fourteen chapters are split into two sections: Section 1 presents new theoretical ideas, whereas Section 2 presents real-life applications. Despite the variety of topics it covers, this monograph constitutes a consistent body of knowledge in the field of computer object tracking. The editor's intention was to follow the very rapid progress in the development of methods as well as the extension of their applications.
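A minimal sketch of the occlusion handling mentioned above, assuming a constant-velocity motion model (Python; the class and its names are illustrative, not taken from the monograph): when the detector loses the object, the track coasts on its last estimated velocity until a measurement arrives again.

    import numpy as np

    class ConstantVelocityTrack:
        """Tracks one object; predicts through short occlusions by
        extrapolating the last observed velocity."""

        def __init__(self, position):
            self.position = np.asarray(position, dtype=float)
            self.velocity = np.zeros_like(self.position)

        def predict(self):
            # Full occlusion: advance the state by the motion model alone.
            self.position = self.position + self.velocity
            return self.position

        def update(self, measurement):
            measurement = np.asarray(measurement, dtype=float)
            self.velocity = measurement - self.position
            self.position = measurement

    track = ConstantVelocityTrack([10.0, 5.0])
    track.update([12.0, 6.0])  # observed step: velocity becomes (2, 1)
    print(track.predict())     # occluded frame: predicted [14. 7.]

A Kalman filter generalises this idea by also tracking the uncertainty of the state, which is what many practical trackers build on.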
Spatiotemporal Event Graphs for Dynamic Scene Understanding
Dynamic scene understanding is the ability of a computer system to interpret
and make sense of the visual information present in a video of a real-world
scene. In this thesis, we present a series of frameworks for dynamic scene
understanding starting from road event detection from an autonomous driving
perspective to complex video activity detection, followed by continual learning
approaches for the life-long learning of the models. Firstly, we introduce the
ROad event Awareness Dataset (ROAD) for Autonomous Driving, to our knowledge
the first of its kind. Due to the lack of datasets equipped with formally
specified logical requirements, we also introduce the ROad event Awareness
Dataset with logical Requirements (ROAD-R), the first publicly available
dataset for autonomous driving with requirements expressed as logical
constraints, as a tool for driving neurosymbolic research in the area. Next, we
extend event detection to holistic scene understanding by proposing two complex
activity detection methods. In the first method, we present a deformable,
spatiotemporal scene graph approach, consisting of three main building blocks:
action tube detection, a 3D deformable RoI pooling layer designed for learning
the flexible, deformable geometry of the constituent action tubes, and a scene
graph constructed by considering all parts as nodes and connecting them based
on different semantics. In a second approach evolving from the first, we
propose a hybrid graph neural network that combines attention applied to a
graph encoding of the local (short-term) dynamic scene with a temporal graph
modelling the overall long-duration activity. Finally, the last part of the
thesis presents a new continual semi-supervised learning (CSSL) paradigm.
Comment: PhD thesis, Oxford Brookes University. Examiners: Prof. Dima Damen and Dr. Matthias Rolf. 183 pages.
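The following heavily simplified sketch illustrates the scene-graph idea described above: detected action tubes become nodes, and edges are added according to a chosen semantics, here plain temporal co-occurrence (Python; the data structures and threshold are illustrative, whereas the thesis also exploits spatial geometry learned by the deformable RoI pooling layer).

    from itertools import combinations

    def temporal_overlap(a, b):
        """Number of frames two action tubes share; tubes carry (start, end)."""
        return max(0, min(a["end"], b["end"]) - max(a["start"], b["start"]))

    def build_scene_graph(tubes, min_overlap=1):
        """Every tube is a node; edges connect co-occurring tubes."""
        nodes = list(range(len(tubes)))
        edges = []
        for i, j in combinations(nodes, 2):
            ov = temporal_overlap(tubes[i], tubes[j])
            if ov >= min_overlap:
                edges.append((i, j, {"overlap": ov}))
        return nodes, edges

    tubes = [
        {"label": "pedestrian-crossing", "start": 0, "end": 40},
        {"label": "car-stopping", "start": 25, "end": 60},
        {"label": "cyclist-passing", "start": 70, "end": 90},
    ]
    nodes, edges = build_scene_graph(tubes)
    print(edges)  # [(0, 1, {'overlap': 15})] -- only the co-occurring events

A graph neural network, such as the hybrid attention/temporal-graph model described above, would then consume these nodes and edges to classify the overall activity.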
SODFormer: Streaming Object Detection with Transformer Using Events and Frames
The DAVIS camera, which streams the two complementary sensing modalities of
asynchronous events and frames, has gradually been used to address major
object detection challenges (e.g., fast motion blur and low light). However,
how to effectively
leverage rich temporal cues and fuse two heterogeneous visual streams remains a
challenging endeavor. To address this challenge, we propose a novel streaming
object detector with Transformer, namely SODFormer, the first to integrate
events and frames to continuously detect objects in an asynchronous manner.
Technically, we first build a large-scale multimodal neuromorphic object
detection dataset (i.e., PKU-DAVIS-SOD) with over 1080.1k manual labels. Then, we
design a spatiotemporal Transformer architecture to detect objects via an
end-to-end sequence prediction problem, where the novel temporal Transformer
module leverages rich temporal cues from two visual streams to improve the
detection performance. Finally, an asynchronous attention-based fusion module
is proposed to integrate two heterogeneous sensing modalities and take
complementary advantages from each end, which can be queried at any time to
locate objects and break through the limited output frequency from synchronized
frame-based fusion strategies. The results show that the proposed SODFormer
outperforms four state-of-the-art methods and our eight baselines by a
significant margin. We also show that our unifying framework works well even in
cases where the conventional frame-based camera fails, e.g., high-speed motion
and low-light conditions. Our dataset and code are available at
https://github.com/dianzl/SODFormer.
Comment: 18 pages, 15 figures, in IEEE Transactions on Pattern Analysis and Machine Intelligence.
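As a rough sketch of what attention-based asynchronous fusion can look like (PyTorch; the module, token shapes, and query design are illustrative assumptions, not the authors' implementation), a set of learned object queries attends over tokens from whichever streams currently have data, so detections can be produced between synchronized frames from events alone:

    import torch
    import torch.nn as nn

    class CrossModalFusion(nn.Module):
        """Object queries attend over tokens from both sensing streams, so
        either stream can be queried at any time, even when the other lags."""

        def __init__(self, dim=256, heads=8, num_queries=100):
            super().__init__()
            self.queries = nn.Parameter(torch.randn(num_queries, dim))
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, event_tokens, frame_tokens=None):
            # event_tokens: (B, Ne, dim); frame_tokens: (B, Nf, dim), or None
            # when the synchronous frame stream has no new data yet.
            tokens = event_tokens if frame_tokens is None else \
                torch.cat([event_tokens, frame_tokens], dim=1)
            q = self.queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
            fused, _ = self.attn(q, tokens, tokens)
            return fused  # (B, num_queries, dim), fed to detection heads

    fusion = CrossModalFusion()
    events = torch.randn(2, 64, 256)
    frames = torch.randn(2, 49, 256)
    both = fusion(events, frames)   # both modalities present
    async_out = fusion(events)      # event-only query between frames

Decoupling the query times from the frame clock is what lets such a detector exceed the output frequency of synchronized frame-based fusion.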
Embodied Visual Perception Models For Human Behavior Understanding
Many modern applications require extracting the core attributes of human behavior, such as a person's attention, intent, or skill level, from visual data. There are two main challenges related to this problem. First, we need models that can represent visual data in terms of object-level cues. Second, we need models that can infer the core behavioral attributes from the visual data. We refer to these two challenges as "learning to see" and "seeing to learn", respectively. In this PhD thesis, we have made progress towards addressing both challenges.
We tackle the problem of "learning to see" by developing methods that extract object-level information directly from raw visual data. These include two top-down contour detectors, DeepEdge and HfL, which can be used to aid high-level vision tasks such as object detection. Furthermore, we also present two semantic object segmentation methods, Boundary Neural Fields (BNFs) and Convolutional Random Walk Networks (RWNs), which integrate low-level affinity cues into the object segmentation process. We then shift our focus to video-level understanding and present a Spatiotemporal Sampling Network (STSN), which can be used for video object detection and discriminative motion feature learning.
Afterwards, we transition to the second subproblem of "seeing to learn", for which we leverage first-person GoPro cameras that record what people see during a particular activity. We aim to infer core behavioral attributes such as a person's attention, intention, and skill level from such first-person data. To do so, we first propose the concept of action-objects: the objects that capture a person's conscious visual (watching a TV) or tactile (taking a cup) interactions. We then introduce two models, EgoNet and Visual-Spatial Network (VSN), which detect action-objects in supervised and unsupervised settings, respectively. Afterwards, we focus on a behavior understanding task in a complex basketball activity. We present a method for evaluating players' skill level from their first-person basketball videos, and also a model that predicts a player's future motion trajectory from a single first-person image.
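As a simplified illustration of temporal feature aggregation for video object detection in the spirit of STSN (PyTorch; this sketch substitutes similarity-weighted averaging for the deformable spatiotemporal sampling the thesis actually proposes, and all names are illustrative), features from neighbouring support frames are weighted by their per-location similarity to the reference frame and then averaged:

    import torch
    import torch.nn.functional as F

    def aggregate_temporal_features(ref_feat, support_feats):
        """Weight each support frame's features by per-pixel cosine
        similarity to the reference frame, then average.

        ref_feat      : (C, H, W) features of the frame being detected
        support_feats : (T, C, H, W) features of neighbouring frames
        """
        ref = F.normalize(ref_feat, dim=0)
        sup = F.normalize(support_feats, dim=1)
        sim = (ref.unsqueeze(0) * sup).sum(dim=1)         # (T, H, W)
        weights = torch.softmax(sim, dim=0).unsqueeze(1)  # (T, 1, H, W)
        return (weights * support_feats).sum(dim=0)       # (C, H, W)

    ref = torch.randn(256, 32, 32)
    supports = torch.randn(5, 256, 32, 32)
    print(aggregate_temporal_features(ref, supports).shape)  # (256, 32, 32)

The aggregated features stand in for the reference frame's own features in the detection head, which is why such aggregation helps when the reference frame is blurred or occluded.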
Automatic Behavior Recognition in Laboratory Animals Using Kinect
Integrated Master's thesis. Bioengineering. Faculdade de Engenharia, Universidade do Porto. 201