Parsing Objects at a Finer Granularity: A Survey
Fine-grained visual parsing, including fine-grained part segmentation and
fine-grained object recognition, has attracted considerable critical attention
due to its importance in many real-world applications, e.g., agriculture,
remote sensing, and space technologies. Predominant research efforts tackle
these fine-grained sub-tasks following different paradigms, while the inherent
relations between these tasks are neglected. Moreover, given that most of this
research remains fragmented, we conduct an in-depth study of the advanced work
from the new perspective of learning part relationships. From this perspective,
we first consolidate recent research and benchmark syntheses with new
taxonomies. Based on this consolidation, we revisit the universal challenges in
fine-grained part segmentation and recognition tasks and propose new solutions
by part relationship learning for these important challenges. Furthermore, we
outline several promising directions for future research in fine-grained visual parsing.
Comment: Survey for fine-grained part segmentation and object recognition; accepted by Machine Intelligence Research (MIR).
Part-aware Panoptic Segmentation
In this work, we introduce the new scene understanding task of Part-aware
Panoptic Segmentation (PPS), which aims to understand a scene at multiple
levels of abstraction, and unifies the tasks of scene parsing and part parsing.
For this novel task, we provide consistent annotations on two commonly used
datasets: Cityscapes and Pascal VOC. Moreover, we present a single metric to
evaluate PPS, called Part-aware Panoptic Quality (PartPQ). For this new task,
using the metric and annotations, we set multiple baselines by merging results
of existing state-of-the-art methods for panoptic segmentation and part
segmentation. Finally, we conduct several experiments that evaluate the
importance of the different levels of abstraction in this single task.
Comment: CVPR 2021. Code and data: https://github.com/tue-mps/panoptic_part
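The abstract names the PartPQ metric but does not define it. As background, the standard Panoptic Quality (PQ) on which PartPQ builds can be sketched as follows; per the paper, PartPQ replaces the segment IoU with a part-level IoU for classes that have parts. The function name and inputs here are illustrative, not the authors' implementation:

```python
def panoptic_quality(matched_ious, num_pred, num_gt):
    """PQ-style score: prediction/ground-truth segment pairs with IoU > 0.5
    are matched; each match contributes its IoU to the numerator, while
    unmatched predictions count as false positives and unmatched ground
    truths as false negatives.

    matched_ious: IoU values of matched segment pairs (each > 0.5).
    num_pred, num_gt: total predicted / ground-truth segments of this class.
    """
    tp = len(matched_ious)
    fp = num_pred - tp  # predictions with no matching ground truth
    fn = num_gt - tp    # ground-truth segments left unmatched
    if tp + fp + fn == 0:
        return 1.0  # convention for a class absent from both sets
    return sum(matched_ious) / (tp + 0.5 * fp + 0.5 * fn)
```

In this sketch, two matches with IoUs 0.8 and 0.6 out of three predictions and three ground truths yield (0.8 + 0.6) / (2 + 0.5 + 0.5) ≈ 0.467; PartPQ averages such per-class scores, swapping in part-aware IoU where parts are annotated.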
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
Going Deeper into Action Recognition: A Survey
Understanding human actions in visual data is tied to advances in
complementary research areas including object recognition, human dynamics,
domain adaptation and semantic segmentation. Over the last decade, human action
analysis evolved from earlier schemes that are often limited to controlled
environments to nowadays advanced solutions that can learn from millions of
videos and apply to almost all daily activities. Given the broad range of
applications from video surveillance to human-computer interaction, scientific
milestones in action recognition are achieved ever more rapidly, quickly
rendering yesterday's best methods obsolete. This motivated us to
provide a comprehensive review of the notable steps taken towards recognizing
human actions. To this end, we start our discussion with the pioneering methods
that use handcrafted representations, and then, navigate into the realm of deep
learning based approaches. We aim to remain objective throughout this survey,
touching upon encouraging improvements as well as inevitable setbacks, in the
hope of raising fresh questions and motivating new research directions for the
reader.
Towards holistic scene understanding: Semantic segmentation and beyond
This dissertation addresses visual scene understanding and enhances
segmentation performance and generalization, training efficiency of networks,
and holistic understanding. First, we investigate semantic segmentation in the
context of street scenes and train semantic segmentation networks on
combinations of various datasets. In Chapter 2 we design a framework of
hierarchical classifiers over a single convolutional backbone, and train it
end-to-end on a combination of pixel-labeled datasets, improving
generalizability and the number of recognizable semantic concepts. Chapter 3
focuses on enriching semantic segmentation with weak supervision and proposes a
weakly-supervised algorithm for training with bounding box-level and
image-level supervision instead of only with per-pixel supervision. The memory
and computational load challenges that arise from simultaneous training on
multiple datasets are addressed in Chapter 4. We propose two methodologies for
selecting informative and diverse samples from datasets with weak supervision
to reduce our networks' ecological footprint without sacrificing performance.
Motivated by memory and computation efficiency requirements, in Chapter 5, we
rethink simultaneous training on heterogeneous datasets and propose a universal
semantic segmentation framework. This framework achieves consistent increases
in performance metrics and semantic knowledgeability by exploiting various
scene understanding datasets. Chapter 6 introduces the novel task of part-aware
panoptic segmentation, which extends our reasoning towards holistic scene
understanding. This task combines scene and parts-level semantics with
instance-level object detection. In conclusion, our contributions span
convolutional network architectures, weakly-supervised learning, and part and
panoptic segmentation, paving the way towards holistic, rich, and sustainable
visual scene understanding.
Comment: PhD Thesis, Eindhoven University of Technology, October 202