Fair Latency-Aware Metric for real-time video segmentation networks
As supervised semantic segmentation reaches satisfying results, many recent
papers have focused on making segmentation network architectures faster,
smaller, and more efficient. In particular, studies often aim to reach the
point at which they can claim to be "real-time". Achieving this goal is
especially relevant for real-time video operations in autonomous vehicles
and robots, or in medical imaging during surgery.
The metric commonly used to assess these methods is so far the same as the
one used for image segmentation without time constraints: mean Intersection
over Union (mIoU). In this paper, we argue that this metric is not relevant
enough for real-time video, as it does not take into account the processing
time (latency) of the network. We propose a similar but more relevant metric
for video-segmentation networks, called FLAME, which compares the output
segmentation of the network with the ground-truth segmentation of the video
frame that is current at the time the network finishes processing.
We perform experiments to compare a few networks using this metric and
propose a simple addition to network training to enhance results according to
that metric.
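The core idea of a latency-aware metric can be sketched as follows: instead of scoring a prediction against the ground truth of the frame it was computed from, score it against the ground truth of the frame that is current when inference finishes. The function below is an illustrative sketch of that idea, not the paper's exact FLAME definition; the function name and signature are assumptions.

```python
import numpy as np

def flame_miou(pred_masks, gt_masks, latencies, fps, n_classes):
    """Latency-aware mIoU sketch: the prediction computed from frame t is
    scored against the ground truth of the frame that is current when the
    network finishes processing, i.e. t plus the latency in frames."""
    inter = np.zeros(n_classes)
    union = np.zeros(n_classes)
    for t, (pred, lat) in enumerate(zip(pred_masks, latencies)):
        # index of the frame displayed when processing of frame t finishes
        t_done = min(t + int(round(lat * fps)), len(gt_masks) - 1)
        gt = gt_masks[t_done]
        for c in range(n_classes):
            p, g = pred == c, gt == c
            inter[c] += np.logical_and(p, g).sum()
            union[c] += np.logical_or(p, g).sum()
    valid = union > 0
    return float(np.mean(inter[valid] / union[valid]))
```

With zero latency this reduces to standard mIoU; as latency grows, a slow network is penalised because the scene has moved on by the time its output is available.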
Dynamic Face Video Segmentation via Reinforcement Learning
For real-time semantic video segmentation, most recent works utilised a
dynamic framework with a key scheduler to make online key/non-key decisions.
Some works used a fixed key scheduling policy, while others proposed adaptive
key scheduling methods based on heuristic strategies, both of which may lead to
suboptimal global performance. To overcome this limitation, we model the online
key decision process in dynamic video segmentation as a deep reinforcement
learning problem and learn an efficient and effective scheduling policy from
expert information about decision history and from the process of maximising
global return. Moreover, we study the application of dynamic video segmentation
on face videos, a field that has not been investigated before. By evaluating on
the 300VW dataset, we show that our reinforcement key scheduler outperforms
various baselines in terms of both effective key selection and running
speed. Further results on the Cityscapes dataset
demonstrate that our proposed method can also generalise to other scenarios. To
the best of our knowledge, this is the first work to use reinforcement learning
for online key-frame decision in dynamic video segmentation, and also the first
work on its application on face videos.
Comment: CVPR 2020. 300VW with segmentation labels is available at:
https://github.com/mapleandfire/300VW-Mas
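The online key/non-key decision described above can be illustrated with a toy reinforcement-learning sketch. The state, action set, and reward model below (quality decaying with the gap since the last key frame, minus a compute cost for key frames) are illustrative assumptions, not the paper's formulation, and tabular Q-learning stands in for its deep RL agent.

```python
import random

def train_key_scheduler(episodes=2000, horizon=50, key_cost=0.3,
                        decay=0.1, max_gap=10, seed=0):
    """Toy tabular Q-learning sketch of an online key/non-key scheduler.
    State: number of frames since the last key frame (capped at max_gap).
    Actions: 0 = reuse propagated features, 1 = run the full key-frame model.
    Reward: segmentation quality that decays with the gap, minus a fixed
    compute cost whenever a key frame is chosen."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(max_gap + 1)]
    alpha, gamma, eps = 0.1, 0.95, 0.2
    for _ in range(episodes):
        gap = 0
        for _ in range(horizon):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = int(q[gap][1] > q[gap][0])
            if a == 1:
                reward, nxt = 1.0 - key_cost, 0
            else:
                nxt = min(gap + 1, max_gap)
                reward = 1.0 - decay * nxt
            q[gap][a] += alpha * (reward + gamma * max(q[nxt]) - q[gap][a])
            gap = nxt
    return q
```

After training, the learned Q-values favour reusing features when the gap is small and scheduling a key frame once the accumulated drift outweighs the compute cost, which is the trade-off the scheduler is meant to learn.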
Fast Graph-Based Object Segmentation for RGB-D Images
Object segmentation is an important capability for robotic systems, in
particular for grasping. We present a graph-based approach for the
segmentation of simple objects from RGB-D images. We are interested in
segmenting objects with large variety in appearance, from lack of texture to
strong textures, for the task of robotic grasping. The algorithm does not rely
on image features or machine learning. We propose a modified Canny edge
detector for extracting robust edges by using depth information and two simple
cost functions for combining color and depth cues. The cost functions are used
to build an undirected graph, which is partitioned using the concept of
internal and external differences between graph regions. The partitioning is
fast with O(N log N) complexity. We also discuss ways to deal with missing depth
information. We test the approach on different publicly available RGB-D object
datasets, such as the Rutgers APC RGB-D dataset and the RGB-D Object Dataset,
and compare the results with other existing methods.
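The partitioning by internal and external differences follows the Felzenszwalb-Huttenlocher scheme: edges are sorted by weight and two regions are merged when the connecting edge weight does not exceed the minimum internal difference of either region plus a size-dependent threshold. The sketch below assumes precomputed scalar edge weights; in the paper these would come from the color/depth cost functions.

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.internal = [0.0] * n  # max edge weight inside each region

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

def segment_graph(n_nodes, edges, k=1.0):
    """Felzenszwalb-Huttenlocher-style partitioning sketch.
    edges: iterable of (weight, node_a, node_b); k controls the
    size-dependent merge threshold k/|C|."""
    uf = UnionFind(n_nodes)
    for w, a, b in sorted(edges):  # O(E log E) dominates the runtime
        ra, rb = uf.find(a), uf.find(b)
        if ra == rb:
            continue
        # merge if the external difference w is small relative to both
        # regions' internal difference plus threshold
        if w <= min(uf.internal[ra] + k / uf.size[ra],
                    uf.internal[rb] + k / uf.size[rb]):
            uf.parent[rb] = ra
            uf.size[ra] += uf.size[rb]
            uf.internal[ra] = w  # valid since edges arrive in increasing order
    return [uf.find(i) for i in range(n_nodes)]
```

The sort dominates the cost, giving the O(N log N) behaviour cited in the abstract.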
Temporal Segmentation of Surgical Sub-tasks through Deep Learning with Multiple Data Sources
Many tasks in robot-assisted surgeries (RAS) can be represented by finite-state machines (FSMs), where each state represents either an action (such as picking up a needle) or an observation (such as bleeding). A crucial step towards the automation of such surgical tasks is the temporal perception of the current surgical scene, which requires a real-time estimation of the states in the FSMs. The objective of this work is to estimate the current state of the surgical task based on the actions performed or the events that occur as the task progresses. We propose Fusion-KVE, a unified surgical state estimation model that incorporates multiple data sources, including Kinematics, Vision, and system Events. Additionally, we examine the strengths and weaknesses of different state estimation models in segmenting states with different representative features or levels of granularity. We evaluate our model on the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), as well as a more complex dataset involving robotic intra-operative ultrasound (RIOUS) imaging, created using the da Vinci® Xi surgical system. Our model achieves a superior frame-wise state estimation accuracy of up to 89.4%, improving on state-of-the-art surgical state estimation models on both the JIGSAWS suturing dataset and our RIOUS dataset.
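The FSM framing above can be made concrete with a minimal sketch. The states and event names below are illustrative placeholders, not labels from JIGSAWS or RIOUS; in the paper's setting, the events driving the transitions would be produced by the state estimator from fused kinematics, vision, and system-event streams.

```python
class SurgicalTaskFSM:
    """Minimal FSM sketch for a surgical sub-task: each state is an action
    or observation, and detected events drive the transitions."""
    TRANSITIONS = {
        ("idle", "grasp_needle"): "needle_held",
        ("needle_held", "insert_needle"): "suturing",
        ("suturing", "pull_thread"): "tightening",
        ("tightening", "release"): "idle",
    }

    def __init__(self):
        self.state = "idle"

    def step(self, event):
        # unknown (state, event) pairs leave the state unchanged
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

Frame-wise state estimation then amounts to emitting the current FSM state for every video frame as events are detected.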
Linear-time Online Action Detection From 3D Skeletal Data Using Bags of Gesturelets
Sliding window is one direct way to extend a successful recognition system to
handle the more challenging detection problem. While action recognition decides
only whether or not an action is present in a pre-segmented video sequence,
action detection identifies the time interval where the action occurred in an
unsegmented video stream. Sliding window approaches for action detection can
however be slow as they maximize a classifier score over all possible
sub-intervals. Even though new schemes utilize dynamic programming to speed up
the search for the optimal sub-interval, they require offline processing on the
whole video sequence. In this paper, we propose a novel approach for online
action detection based on 3D skeleton sequences extracted from depth data. It
identifies the sub-interval with the maximum classifier score in linear time.
Furthermore, it is invariant to temporal scale variations and is suitable for
real-time applications with low latency.
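Finding the sub-interval with the maximum classifier score in linear time is the classic maximum-sum subarray problem, solvable with a single Kadane-style scan. The sketch below assumes per-frame scores that are positive for action-like frames and negative otherwise, so the best interval is well defined; the function name is illustrative.

```python
def best_interval(scores):
    """Kadane-style linear scan: return the contiguous frame interval
    (start, end, inclusive) with the maximum summed per-frame classifier
    score, together with that score. Runs in O(n) time, O(1) space."""
    best_sum, best = float("-inf"), (0, 0)
    cur_sum, cur_start = 0.0, 0
    for i, s in enumerate(scores):
        if cur_sum <= 0:
            # a non-positive running sum can never help; restart here
            cur_sum, cur_start = s, i
        else:
            cur_sum += s
        if cur_sum > best_sum:
            best_sum, best = cur_sum, (cur_start, i)
    return best, best_sum
```

Because each frame is examined once and no sub-interval enumeration is needed, this avoids the quadratic cost of exhaustive sliding-window search and can run online as scores arrive.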