SeGAN: Segmenting and Generating the Invisible
Objects often occlude each other in scenes; inferring their appearance beyond
their visible parts plays an important role in scene understanding, depth
estimation, object interaction and manipulation. In this paper, we study the
challenging problem of completing the appearance of occluded objects. Doing so
requires knowing which pixels to paint (segmenting the invisible parts of
objects) and what color to paint them (generating the invisible parts). Our
proposed novel solution, SeGAN, jointly optimizes for both segmentation and
generation of the invisible parts of objects. Our experimental results show
that: (a) SeGAN can learn to generate the appearance of the occluded parts of
objects; (b) SeGAN outperforms state-of-the-art segmentation baselines for the
invisible parts of objects; (c) trained on synthetic photo-realistic images,
SeGAN can reliably segment natural images; (d) by reasoning about
occluder-occludee relations, our method can infer depth layering.
Comment: Accepted to CVPR18 as a spotlight
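The two coupled sub-problems the abstract names, deciding which pixels to paint and what color to paint them, can be illustrated with a toy sketch. The mask logic and the mean-color "generator" below are hypothetical stand-ins for SeGAN's learned network, shown only to make the decomposition concrete:

```python
# Toy illustration of SeGAN's two coupled sub-problems: WHICH pixels are the
# occluded part of an object (segmentation) and WHAT color to paint them
# (generation). Both functions are hypothetical stand-ins for the learned model.

def invisible_mask(full_mask, visible_mask):
    """Occluded pixels: inside the object's full extent but not visible."""
    return [[f and not v for f, v in zip(fr, vr)]
            for fr, vr in zip(full_mask, visible_mask)]

def fill_with_mean_color(image, visible_mask, target_mask):
    """Naive 'generator': paint target pixels with the mean visible color."""
    vis = [image[i][j] for i in range(len(image)) for j in range(len(image[0]))
           if visible_mask[i][j]]
    mean = tuple(sum(c[k] for c in vis) // len(vis) for k in range(3))
    out = [row[:] for row in image]
    for i in range(len(image)):
        for j in range(len(image[0])):
            if target_mask[i][j]:
                out[i][j] = mean
    return out

# 2x2 example: the object spans all four pixels, only the left column is visible.
full = [[1, 1], [1, 1]]
vis = [[1, 0], [1, 0]]
inv = invisible_mask(full, vis)   # right column is the invisible part
img = [[(200, 0, 0), (0, 0, 0)], [(200, 0, 0), (0, 0, 0)]]
completed = fill_with_mean_color(img, vis, inv)
print(inv)                # [[False, True], [False, True]]
print(completed[0][1])    # (200, 0, 0)
```

SeGAN replaces both hand-written steps with a jointly optimized segmentation branch and an adversarially trained generator.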
Consistent Density Scanning and Information Extraction From Point Clouds of Building Interiors
Over the last decade, 3D range scanning systems have improved considerably, enabling designers to capture large and complex domains such as building interiors. The captured point cloud is processed to extract specific Building Information Models, where the main research challenge is to simultaneously handle huge, cohesive point clouds representing multiple objects, occluded features and vast geometric diversity. These domain characteristics increase data complexity and thus make it difficult to extract accurate information models from the captured point clouds.
The research work presented in this thesis improves the information extraction pipeline with the development of novel algorithms for consistent density scanning and information extraction automation for building interiors. A restricted density-based, scan planning methodology computes the number of scans to cover large linear domains while ensuring desired data density and reducing rigorous post-processing of data sets.
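The thesis does not spell out its scan-planning formula here, but the idea of computing a scan count for a linear domain under a density constraint can be sketched with a hypothetical back-of-envelope estimate (the function name, effective-range parameter, and overlap fraction are all assumptions, not the thesis's method):

```python
import math

def scans_for_corridor(length_m, effective_range_m, overlap_frac=0.2):
    """Hypothetical scan-count estimate for a linear (corridor-like) domain:
    each scan position covers a segment of roughly 2 * effective_range_m, and
    consecutive footprints overlap by overlap_frac so point density does not
    drop below the target between stations."""
    step = 2 * effective_range_m * (1 - overlap_frac)  # advance between scans
    return max(1, math.ceil(length_m / step))

print(scans_for_corridor(50, 5, 0.2))  # 7 stations for a 50 m corridor
```

The actual restricted density-based methodology would derive the effective range from the desired point density rather than take it as a free parameter.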
The research work further develops effective algorithms to transform the captured data into information models in terms of domain features (layouts), meaningful data clusters (segmented data) and specific shape attributes (occluded boundaries) having better practical utility. Initially, a direct point-based simplification and layout extraction algorithm is presented that can handle the cohesive point clouds by adaptive simplification and an accurate layout extraction approach without generating an intermediate model.
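The adaptive simplification itself is not detailed in the abstract; as a point of reference, a minimal uniform (non-adaptive) point-based simplification can be written as voxel-grid averaging, keeping one representative point per occupied cell. Everything below is an illustrative assumption, not the thesis's algorithm:

```python
from collections import defaultdict

def voxel_simplify(points, cell=0.05):
    """Uniform voxel-grid simplification: average all points that fall into
    the same cubic cell. A direct point-based baseline; the thesis's method
    adapts the simplification to local geometry instead of using one cell size."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell), int(z // cell))].append((x, y, z))
    return [tuple(sum(c) / len(group) for c in zip(*group))
            for group in cells.values()]

pts = [(0.01, 0.01, 0.0), (0.02, 0.03, 0.0), (0.30, 0.0, 0.0)]
print(len(voxel_simplify(pts, cell=0.05)))  # 2 -- the first two points merge
```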
Further, three information extraction algorithms are presented that transform point clouds into meaningful clusters. The novelty of these algorithms lies in the fact that they work directly on point clouds by exploiting their inherent characteristics. First, a rapid data clustering algorithm is presented to quickly identify objects in the scanned scene using a robust hue, saturation and value (HSV) color model for better scene understanding.
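The appeal of the HSV model is that hue separates color identity from brightness, so points of one surface cluster together even under uneven lighting. A crude sketch of hue-based grouping, using only the standard library (the binning scheme is an assumption; the thesis's algorithm also exploits saturation and value for robustness):

```python
import colorsys
from collections import defaultdict

def cluster_by_hue(points_rgb, n_bins=6):
    """Group colored points by quantized hue. points_rgb is a list of
    (x, y, z, r, g, b) tuples with r, g, b in [0, 255]. Illustrative only:
    hue alone is unstable for gray/dark points, which is why a full HSV
    model also considers saturation and value."""
    clusters = defaultdict(list)
    for x, y, z, r, g, b in points_rgb:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        clusters[int(h * n_bins) % n_bins].append((x, y, z))
    return dict(clusters)

pts = [(0, 0, 0, 255, 10, 10),   # red-ish wall point
       (1, 0, 0, 250, 20, 15),   # red-ish, different brightness
       (2, 0, 0, 10, 10, 255)]   # blue object
print(len(cluster_by_hue(pts)))  # 2 clusters: red-ish and blue
```

Note that the two red points land in the same cluster despite differing brightness, which an RGB-distance threshold could easily split.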
Second, a hierarchical clustering algorithm is developed to handle the vast geometric diversity, ranging from planar walls to complex freeform objects. The shape-adaptive parameters help to segment planar as well as complex interiors, while combining color- and geometry-based segmentation criteria improves clustering reliability and identifies unique clusters in geometrically similar regions. Finally, a progressive scan-line-based, side-ratio-constraint algorithm is presented to identify occluded boundary data points by investigating their spatial discontinuity.
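The side-ratio constraint is not defined in the abstract; one plausible reading of "investigating spatial discontinuity" is to compare the gaps to a point's neighbors on either side along an ordered scan line and flag points where one gap dwarfs the other. This sketch is a hypothetical interpretation, not the thesis's algorithm:

```python
def occluded_boundary_points(scan_line, ratio=3.0):
    """Flag a point as an occluded-boundary candidate when the distance to the
    neighbor on one side is much larger than on the other, i.e. the surface
    samples stop abruptly where an occluder hid the scanner's line of sight.
    scan_line: 1D positions of consecutive points along one scan line."""
    flagged = []
    for i in range(1, len(scan_line) - 1):
        left = abs(scan_line[i] - scan_line[i - 1])
        right = abs(scan_line[i + 1] - scan_line[i])
        if left > 0 and right > 0 and max(left, right) / min(left, right) > ratio:
            flagged.append(i)
    return flagged

# Evenly spaced samples with one large jump where an occluder hid the surface.
line = [0.0, 0.1, 0.2, 0.3, 1.3, 1.4, 1.5]
print(occluded_boundary_points(line))  # [3, 4] -- both sides of the gap
```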
Holistic, Instance-Level Human Parsing
Object parsing -- the task of decomposing an object into its semantic parts
-- has traditionally been formulated as a category-level segmentation problem.
Consequently, when there are multiple objects in an image, current methods
cannot count the number of objects in the scene, nor can they determine which
part belongs to which object. We address this problem by segmenting the parts
of objects at an instance-level, such that each pixel in the image is assigned
a part label, as well as the identity of the object it belongs to. Moreover, we
show how this approach benefits us in obtaining segmentations at coarser
granularities as well. Our proposed network is trained end-to-end given
detections, and begins with a category-level segmentation module. Thereafter, a
differentiable Conditional Random Field, defined over a variable number of
instances for every input image, reasons about the identity of each part by
associating it with a human detection. In contrast to other approaches, our
method can handle the varying number of people in each image and our holistic
network produces state-of-the-art results in instance-level part and human
segmentation, together with competitive results in category-level part
segmentation, all achieved by a single forward pass through our neural network.
Comment: Poster at BMVC 2017
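The core step the abstract describes, associating each part pixel with a human detection, can be illustrated with a greatly simplified, non-differentiable stand-in for the paper's CRF: assign each part pixel to the highest-scoring detection box that contains it. The function and data layout below are assumptions for illustration only:

```python
def assign_parts_to_instances(part_pixels, detections):
    """Toy instance association: each part pixel gets the index (1-based) of
    the highest-scoring human detection whose box contains it; 0 means no
    detection claims the pixel. detections: list of (x0, y0, x1, y1, score).
    The paper instead uses a differentiable CRF over a variable number of
    instances, trained end-to-end."""
    labels = {}
    for (px, py) in part_pixels:
        best, best_score = 0, 0.0
        for idx, (x0, y0, x1, y1, score) in enumerate(detections, start=1):
            if x0 <= px <= x1 and y0 <= py <= y1 and score > best_score:
                best, best_score = idx, score
        labels[(px, py)] = best
    return labels

dets = [(0, 0, 4, 4, 0.9), (3, 0, 8, 4, 0.8)]   # two overlapping people
pix = [(1, 1), (3, 2), (7, 1), (9, 9)]
print(assign_parts_to_instances(pix, dets))
# pixel (3, 2) lies in both boxes and goes to the stronger detection 1
```

Unlike this greedy rule, the CRF reasons jointly over all pixels and instances, which is what lets the network handle a varying number of people per image.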
Microsoft COCO: Common Objects in Context
We present a new dataset with the goal of advancing the state of the art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4-year-old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
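The per-instance labeling the abstract describes is distributed as JSON in the well-known COCO annotation format: every annotation ties a segmentation polygon to one image and one category. A minimal fragment (the file names, ids, and polygons are invented for illustration; real annotation files are hundreds of megabytes):

```python
import json
from collections import Counter

# Minimal fragment in the COCO annotation format: "images", "categories" and
# per-instance "annotations" with polygon segmentations (invented example data).
coco = json.loads("""{
  "images": [{"id": 1, "file_name": "kitchen.jpg", "width": 640, "height": 480}],
  "categories": [{"id": 18, "name": "dog"}, {"id": 44, "name": "bottle"}],
  "annotations": [
    {"id": 10, "image_id": 1, "category_id": 18,
     "segmentation": [[10, 10, 60, 10, 60, 40, 10, 40]], "iscrowd": 0},
    {"id": 11, "image_id": 1, "category_id": 44,
     "segmentation": [[100, 20, 120, 20, 120, 80, 100, 80]], "iscrowd": 0}
  ]
}""")

# Count labeled instances per category name, as one would when analyzing
# the dataset's statistics.
names = {c["id"]: c["name"] for c in coco["categories"]}
counts = Counter(names[a["category_id"]] for a in coco["annotations"])
print(dict(counts))  # {'dog': 1, 'bottle': 1}
```

In practice the official pycocotools API is used to load these files and convert polygons to masks; the fragment above only shows the underlying structure.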