Multiscale combinatorial grouping for image segmentation and object proposal generation
We propose a unified approach for bottom-up hierarchical image segmentation and object proposal generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly accurate object proposals by efficiently exploring their combinatorial space. We also present Single-scale Combinatorial Grouping (SCG), a faster version of MCG that produces competitive proposals in under five seconds per image. We conduct an extensive and comprehensive empirical validation on the BSDS500, SegVOC12, SBD, and COCO datasets, showing that MCG produces state-of-the-art contours, hierarchical regions, and object proposals.
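The normalized-cuts step behind MCG can be illustrated with a minimal spectral bipartition on a toy image. This is a plain-NumPy sketch, not the authors' fast implementation: the 4-connected grid graph, the Gaussian intensity affinity, and the value of sigma are all assumptions for illustration.

```python
import numpy as np

def ncut_bipartition(img):
    """Spectral bipartition in the spirit of normalized cuts: threshold the
    Fiedler vector of the normalized graph Laplacian of a pixel grid graph."""
    h, w = img.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    W = np.zeros((n, n))
    sigma = 0.5  # assumed affinity bandwidth
    # 4-connected grid graph with intensity-difference affinities
    for (a, b) in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        for i, j in zip(a.ravel(), b.ravel()):
            wij = np.exp(-(img.flat[i] - img.flat[j]) ** 2 / sigma ** 2)
            W[i, j] = W[j, i] = wij
    d = W.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(n) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
    vals, vecs = np.linalg.eigh(L_sym)
    fiedler = D_inv_sqrt @ vecs[:, 1]                 # second-smallest eigenvector
    return (fiedler > np.median(fiedler)).reshape(h, w)

# Two flat regions separated by a step edge: the cut follows the edge.
img = np.zeros((4, 8))
img[:, 4:] = 1.0
labels = ncut_bipartition(img)
```

The dense eigendecomposition here is O(n^3); the point of the paper's "fast normalized cuts" is precisely to avoid this cost at scale.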
DCTM: Discrete-Continuous Transformation Matching for Semantic Flow
Techniques for dense semantic correspondence have provided limited ability to
deal with the geometric variations that commonly exist between semantically
similar images. While variations due to scale and rotation have been examined,
there lack practical solutions for more complex deformations such as affine
transformations because of the tremendous size of the associated solution
space. To address this problem, we present a discrete-continuous transformation
matching (DCTM) framework where dense affine transformation fields are inferred
through a discrete label optimization in which the labels are iteratively
updated via continuous regularization. In this way, our approach draws
solutions from the continuous space of affine transformations in a manner that
can be computed efficiently through constant-time edge-aware filtering and a
proposed affine-varying CNN-based descriptor. Experimental results show that
this model outperforms the state-of-the-art methods for dense semantic
correspondence on various benchmarks.
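The alternation between a discrete label step and a continuous regularization step can be sketched on a 1-D toy matching problem. This is an illustrative analogue, not the DCTM algorithm itself: the signals, the box-filter smoothing (standing in for constant-time edge-aware filtering), and the candidate set are assumptions.

```python
import numpy as np

def discrete_continuous_match(src, tgt, iters=5, radius=2):
    """Toy discrete-continuous matching on 1-D signals: each position carries
    a continuous shift; per iteration we test discrete perturbations of that
    shift (data cost, winner-take-all) and then smooth the resulting field
    (continuous regularization)."""
    n = len(src)
    xs = np.arange(n)
    shift = np.zeros(n)
    for _ in range(iters):
        # discrete step: winner-take-all over perturbed shift candidates
        cands = [shift + d for d in range(-radius, radius + 1)]
        costs = [np.abs(src - np.interp(xs + c, xs, tgt)) for c in cands]
        best = np.argmin(np.stack(costs), axis=0)
        shift = np.stack(cands)[best, xs]
        # continuous step: box-filter smoothing of the shift field
        shift = np.convolve(shift, np.ones(3) / 3, mode="same")
    return shift

tgt = np.sin(np.linspace(0, 3, 40))
src = np.interp(np.arange(40) + 2.0, np.arange(40), tgt)  # tgt shifted by 2
est = discrete_continuous_match(src, tgt)
```

The key property mirrored here is that the candidates are discrete offsets around a continuously-valued current estimate, so the solution is drawn from a continuous space without enumerating it.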
PointAtrousGraph: Deep Hierarchical Encoder-Decoder with Point Atrous Convolution for Unorganized 3D Points
Motivated by the success of encoding multi-scale contextual information for
image analysis, we propose our PointAtrousGraph (PAG) - a deep
permutation-invariant hierarchical encoder-decoder for efficiently exploiting
multi-scale edge features in point clouds. Our PAG is constructed by several
novel modules, such as Point Atrous Convolution (PAC), Edge-preserved Pooling
(EP) and Edge-preserved Unpooling (EU). Similar to atrous convolution, our
PAC can effectively enlarge receptive fields of filters and thus densely learn
multi-scale point features. Following the idea of non-overlapping max-pooling
operations, we propose our EP to preserve critical edge features during
subsampling. Correspondingly, our EU modules gradually recover spatial
information for edge features. In addition, we introduce chained skip
subsampling/upsampling modules that directly propagate edge features to the
final stage. Particularly, our proposed auxiliary loss functions can further
improve our performance. Experimental results show that our PAG outperforms
previous state-of-the-art methods on various 3D semantic perception
applications.
Comment: 11 pages, 10 figures
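The core idea of Point Atrous Convolution, sampling every d-th nearest neighbor to enlarge the receptive field without taking more neighbors, can be sketched in NumPy. The random linear map standing in for the learned edge-feature MLP, and the (x_i, x_j - x_i) edge-feature layout, are assumptions for illustration.

```python
import numpy as np

def point_atrous_conv(points, feats, k=4, dilation=2):
    """Sketch of a Point Atrous Convolution step: for each point, keep every
    `dilation`-th nearest neighbor (enlarging the receptive field), then
    max-aggregate edge features through a shared linear map + ReLU."""
    rng = np.random.default_rng(0)
    n, c = feats.shape
    W = rng.standard_normal((2 * c, c))            # stand-in for the learned MLP
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    order = np.argsort(d2, axis=1)                 # column 0 is the point itself
    nbrs = order[:, 1::dilation][:, :k]            # dilated k-nearest neighbors
    edge = np.concatenate(
        [np.repeat(feats[:, None, :], k, axis=1),  # center feature x_i
         feats[nbrs] - feats[:, None, :]],         # edge feature x_j - x_i
        axis=-1)
    return np.maximum(edge @ W, 0).max(axis=1)     # shared linear + ReLU + max-pool

pts = np.random.default_rng(1).standard_normal((32, 3))
out = point_atrous_conv(pts, pts.copy())           # use coordinates as features
```

With dilation=1 this reduces to an ordinary edge-convolution over the k nearest neighbors; increasing the dilation widens the neighborhood at constant cost, which is the multi-scale effect the abstract describes.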
DarSIA: An Open-Source Python Toolbox for Two-Scale Image Processing of Dynamics in Porous Media
Understanding porous media flow is inherently a multi-scale challenge, where at the core lies the aggregation of pore-level processes to a continuum, or Darcy-scale, description. This challenge is directly mirrored in image processing, where pore-scale grains and interfaces may be clearly visible in the image, yet it may be continuous Darcy-scale parameters that one wishes to quantify. Classical image processing is poorly adapted to this setting, as most techniques do not exploit the fact that the image depicts physical processes. Here, we extend classical image processing concepts to what we define as “physical images” of porous materials and processes within them. This is realized through the development of a new open-source image analysis toolbox specifically adapted to time-series of images of porous materials.
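The pore-to-Darcy upscaling idea can be illustrated in a few lines of plain NumPy. This sketch deliberately does not use the DarSIA API: it only shows the two-scale concept of turning a binary pore-scale image into a coarse porosity field by block averaging.

```python
import numpy as np

def darcy_porosity(pore_image, block=4):
    """Upscale a binary pore-scale image (1 = pore, 0 = grain) to a coarse
    Darcy-scale porosity field by block averaging: each coarse cell gets the
    pore fraction of the fine pixels it covers."""
    h, w = pore_image.shape
    assert h % block == 0 and w % block == 0
    coarse = pore_image.reshape(h // block, block, w // block, block)
    return coarse.mean(axis=(1, 3))

img = np.zeros((8, 8))
img[:, ::2] = 1                     # alternating pore/grain columns
phi = darcy_porosity(img, block=4)  # 2x2 field of porosity 0.5
```

The same averaging principle, applied to time-series of segmented images, yields Darcy-scale fields such as saturation evolving in time.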
A Minimalist Approach to Type-Agnostic Detection of Quadrics in Point Clouds
This paper proposes a segmentation-free, automatic and efficient procedure to
detect general geometric quadric forms in point clouds, where clutter and
occlusions are inevitable. Our everyday world is dominated by man-made objects
which are designed using 3D primitives (such as planes, cones, spheres,
cylinders, etc.). These objects are also omnipresent in industrial
environments. This gives rise to the possibility of abstracting 3D scenes
through primitives, thereby positioning these geometric forms as an integral
part of perception and high-level 3D scene understanding.
As opposed to the state of the art, where a tailored algorithm treats each
primitive type separately, we propose to encapsulate all types in a single
robust detection procedure. At the center of our approach lies a closed form 3D
quadric fit, operating in both primal and dual spaces and requiring as few as
4 oriented points. Around this fit, we design a novel, local null-space voting
strategy to reduce the 4-point case to 3. Voting is coupled with the famous
RANSAC and makes our algorithm orders of magnitude faster than its conventional
counterparts. This is the first method capable of performing generic,
cross-type, multi-object primitive detection in difficult scenes. Results on
synthetic and real datasets support the validity of our method.
Comment: Accepted for publication at CVPR 201
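A closed-form algebraic quadric fit can be sketched as a null-space problem. Note the paper's 4-point fit additionally exploits oriented points and dual space; the sketch below is the plain point-only least-squares version, shown only to make the "closed form" idea concrete.

```python
import numpy as np

def quadric_design_matrix(pts):
    """One row per point, holding the quadric monomials
    [x^2, y^2, z^2, xy, xz, yz, x, y, z, 1]."""
    x, y, z = pts.T
    return np.stack([x*x, y*y, z*z, x*y, x*z, y*z,
                     x, y, z, np.ones_like(x)], axis=1)

def fit_quadric(pts):
    """Closed-form algebraic fit: minimize ||A q|| subject to ||q|| = 1,
    solved by the right singular vector of the smallest singular value."""
    _, _, Vt = np.linalg.svd(quadric_design_matrix(pts))
    return Vt[-1]

def quadric_residual(q, pts):
    return np.abs(quadric_design_matrix(pts) @ q)

# Points sampled exactly on the unit sphere x^2 + y^2 + z^2 - 1 = 0
rng = np.random.default_rng(0)
p = rng.standard_normal((50, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
q = fit_quadric(p)
```

Because the recovered coefficient vector spans planes, spheres, cylinders, cones, and all other quadrics uniformly, a single such fit supports the type-agnostic detection the abstract describes.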
Structure from Articulated Motion: Accurate and Stable Monocular 3D Reconstruction without Training Data
Recovery of articulated 3D structure from 2D observations is a challenging
computer vision problem with many applications. Current learning-based
approaches achieve state-of-the-art accuracy on public benchmarks but are
restricted to specific types of objects and motions covered by the training
datasets. Model-based approaches do not rely on training data but show lower
accuracy on these datasets. In this paper, we introduce a model-based method
called Structure from Articulated Motion (SfAM), which can recover multiple
object and motion types without training on extensive data collections. At the
same time, it performs on par with learning-based state-of-the-art approaches
on public benchmarks and outperforms previous non-rigid structure from motion
(NRSfM) methods. SfAM is built upon a general-purpose NRSfM technique while
integrating a soft spatio-temporal constraint on the bone lengths. We use an
alternating optimization strategy to recover optimal geometry (i.e., bone
proportions) together with 3D joint positions by enforcing bone-length
consistency over a series of frames. SfAM is highly robust to noisy 2D
annotations, generalizes to arbitrary objects and does not rely on training
data, which is shown in extensive experiments on public benchmarks and real
video sequences. We believe that it brings a new perspective on the domain of
monocular 3D recovery of articulated structures, including human motion
capture.
Comment: 21 pages, 8 figures, 2 tables
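The bone-length consistency idea can be sketched as a penalty term: for a rigid skeleton, each bone's length should be constant across frames, so the variance of per-bone lengths over time is a natural soft constraint. The variance form and the (parent, child) bone encoding below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def bone_length_consistency(joints, bones):
    """Soft spatio-temporal bone-length term in the spirit of SfAM: penalize
    the variance of each bone's length across frames. `joints` has shape
    (frames, num_joints, 3); `bones` is a list of (parent, child) pairs."""
    lengths = np.stack(
        [np.linalg.norm(joints[:, c] - joints[:, p], axis=-1) for p, c in bones],
        axis=1)                           # (frames, num_bones)
    return lengths.var(axis=0).sum()      # zero iff every bone length is constant

# A rigid 2-bone chain translated over 10 frames keeps its bone lengths.
base = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0]], float)
rigid = np.stack([base + t for t in np.linspace(0, 5, 10)])
```

In an alternating scheme, this term is minimized jointly with the reprojection error: one step updates 3D joint positions, the other the bone proportions.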
Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution
Self-driving cars need to understand 3D scenes efficiently and accurately in
order to drive safely. Given the limited hardware resources, existing 3D
perception models are not able to recognize small instances (e.g., pedestrians,
cyclists) very well due to the low-resolution voxelization and aggressive
downsampling. To this end, we propose Sparse Point-Voxel Convolution (SPVConv),
a lightweight 3D module that equips the vanilla Sparse Convolution with the
high-resolution point-based branch. With negligible overhead, this point-based
branch is able to preserve the fine details even from large outdoor scenes. To
explore the spectrum of efficient 3D models, we first define a flexible
architecture design space based on SPVConv, and we then present 3D Neural
Architecture Search (3D-NAS) to search the optimal network architecture over
this diverse design space efficiently and effectively. Experimental results
validate that the resulting SPVNAS model is fast and accurate: it outperforms
the state-of-the-art MinkowskiNet by 3.3%, ranking 1st on the competitive
SemanticKITTI leaderboard. It also achieves 8x computation reduction and 3x
measured speedup over MinkowskiNet with higher accuracy. Finally, we transfer
our method to 3D object detection, and it achieves consistent improvements over
the one-stage detection baseline on KITTI.
Comment: ECCV 2020. The first two authors contributed equally to this work.
Project page: http://spvnas.mit.edu
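The voxel side of a point-voxel design starts from sparse voxelization: only occupied voxels are ever materialized. The hash-by-coordinates sketch below illustrates that step in plain NumPy; it is not the SPVConv implementation, and the mean-pooling of point features per voxel is an assumed aggregation choice.

```python
import numpy as np

def sparse_voxelize(points, voxel_size):
    """Sketch of the voxelization behind sparse convolution: bucket points by
    integer voxel coordinate and average the points in each occupied voxel.
    Empty voxels are never stored (the 'sparse' part)."""
    coords = np.floor(points / voxel_size).astype(np.int64)
    uniq, inv = np.unique(coords, axis=0, return_inverse=True)
    inv = inv.reshape(-1)                       # flatten for version-robust indexing
    feats = np.zeros((len(uniq), points.shape[1]))
    np.add.at(feats, inv, points)               # scatter-sum points into voxels
    counts = np.bincount(inv).astype(float)
    return uniq, feats / counts[:, None]        # mean point per occupied voxel

pts = np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2], [1.5, 1.5, 1.5]])
coords, feats = sparse_voxelize(pts, voxel_size=1.0)
```

SPVConv's point-based branch exists precisely because this bucketing discards sub-voxel detail: small instances such as pedestrians can vanish into a single coarse voxel, and the high-resolution point branch preserves them.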