Saliency-guided integration of multiple scans
We present a novel method...
Semantically Informed Multiview Surface Refinement
We present a method to jointly refine the geometry and semantic segmentation
of 3D surface meshes. Our method alternates between updating the shape and the
semantic labels. In the geometry refinement step, the mesh is deformed with
variational energy minimization, such that it simultaneously maximizes
photo-consistency and the compatibility of the semantic segmentations across a
set of calibrated images. Label-specific shape priors account for interactions
between the geometry and the semantic labels in 3D. In the semantic
segmentation step, the labels on the mesh are updated with MRF inference, such
that they are compatible with the semantic segmentations in the input images.
Also, this step includes prior assumptions about the surface shape of different
semantic classes. The priors induce a tight coupling, where semantic
information influences the shape update and vice versa. Specifically, we
introduce priors that favor (i) adaptive smoothing, depending on the class
label; (ii) straightness of class boundaries; and (iii) semantic labels that
are consistent with the surface orientation. The novel mesh-based
reconstruction is evaluated in a series of experiments with real and synthetic
data. We compare against both state-of-the-art voxel-based semantic 3D
reconstruction and purely geometric mesh refinement, and demonstrate that the
proposed scheme yields improved 3D geometry as well as an improved semantic
segmentation.
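The alternation described in this abstract can be sketched in miniature: a label-dependent smoothing of a 1D "surface" interleaved with an ICM-style relabeling that balances the observed image labels against boundary straightness. All names, weights, and the 1D setting are illustrative assumptions, not the paper's actual energies.

```python
# Minimal sketch of alternating geometry/label refinement (assumed toy setup).

def smooth_step(heights, labels, weights):
    """One smoothing pass; each vertex is pulled toward its neighbours'
    mean with a strength that depends on its semantic label."""
    out = list(heights)
    for i in range(1, len(heights) - 1):
        w = weights[labels[i]]            # class-dependent smoothing prior
        nb_mean = 0.5 * (heights[i - 1] + heights[i + 1])
        out[i] = (1 - w) * heights[i] + w * nb_mean
    return out

def relabel_step(labels, obs_labels, pairwise=0.5):
    """ICM-style label update: stay close to the observed (image) labels
    while encouraging neighbour agreement (straight class boundaries)."""
    classes = sorted(set(obs_labels))
    out = list(labels)
    for i in range(len(labels)):
        def cost(c):
            unary = 0.0 if c == obs_labels[i] else 1.0
            pw = sum(pairwise for j in (i - 1, i + 1)
                     if 0 <= j < len(labels) and out[j] != c)
            return unary + pw
        out[i] = min(classes, key=cost)
    return out

heights = [0.0, 0.9, 0.1, 1.0, 0.0, 0.05]
obs = [0, 0, 0, 1, 1, 1]                 # per-vertex labels from the images
labels = list(obs)
weights = {0: 0.8, 1: 0.1}               # class 0 smooths strongly, class 1 barely
for _ in range(5):
    heights = smooth_step(heights, labels, weights)
    labels = relabel_step(labels, obs)
print(labels)
```

The paper's actual scheme uses variational energy minimization and MRF inference on a mesh; the loop above only mirrors the coupling idea that the labels steer the smoothing and vice versa.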
Scan Integration as a Labeling Problem
Integration is a crucial step in the reconstruction of a complete 3D surface model from multiple scans.
Ever-present registration errors and scanning noise make integration a nontrivial problem. In this paper,
we propose a novel method for multi-view scan integration that casts it as a labelling problem.
Unlike previous methods, which have been based on various merging schemes, our labelling-based
method is essentially a selection strategy. The overall surface model is composed of surface patches from
selected input scans. We formulate the labelling via a higher-order Markov Random Field (MRF) which
assigns a label, representing the index of one input scan, to every point on a
base surface. Using a higher-order MRF allows us to more effectively capture
spatial relations between 3D points. We employ belief
propagation to infer this labelling and experimentally demonstrate, via both
qualitative and quantitative comparisons, that this approach yields
significantly improved integration.
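The selection view of integration can be illustrated with a tiny chain-structured MRF: each point on a 1D "base surface" picks the index of one input scan, and a pairwise cost discourages neighbouring points from switching scans. On a chain, min-sum belief propagation is exact (the Viterbi recursion below). The costs and the 1D chain are illustrative assumptions; the paper uses a higher-order MRF with loopy belief propagation.

```python
# Minimal sketch: scan selection as labelling on a chain MRF (assumed toy costs).

def chain_bp(unary, switch_cost):
    """unary[i][k]: cost of assigning scan k at point i.
    Returns the minimum-cost label sequence via forward min-sum
    messages plus backtracking (exact on a chain)."""
    n, K = len(unary), len(unary[0])
    msg = [[0.0] * K for _ in range(n)]   # msg[i][k]: best cost ending at (i, k)
    back = [[0] * K for _ in range(n)]
    msg[0] = list(unary[0])
    for i in range(1, n):
        for k in range(K):
            best_j = min(range(K),
                         key=lambda j: msg[i - 1][j] + (0 if j == k else switch_cost))
            back[i][k] = best_j
            msg[i][k] = (unary[i][k] + msg[i - 1][best_j]
                         + (0 if best_j == k else switch_cost))
    labels = [0] * n
    labels[-1] = min(range(K), key=lambda k: msg[-1][k])
    for i in range(n - 1, 0, -1):
        labels[i - 1] = back[i][labels[i]]
    return labels

# Two scans: scan 0 is reliable on the left, scan 1 on the right, one noisy point.
unary = [[0.1, 0.9], [0.2, 0.8], [0.7, 0.3], [0.9, 0.1], [0.4, 0.6]]
print(chain_bp(unary, switch_cost=0.5))   # → [0, 0, 1, 1, 1]
```

Note how the pairwise switch cost keeps the selection in contiguous patches, which is exactly the "surface patches from selected input scans" behaviour the abstract describes.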
Clothing Co-Parsing by Joint Image Segmentation and Labeling
This paper aims at developing an integrated system of clothing co-parsing, in
order to jointly parse a set of clothing images (unsegmented but annotated with
tags) into semantic configurations. We propose a data-driven framework
consisting of two phases of inference. The first phase, referred as "image
co-segmentation", iterates to extract consistent regions on images and jointly
refines the regions over all images by employing the exemplar-SVM (E-SVM)
technique [23]. In the second phase (i.e. "region co-labeling"), we construct a
multi-image graphical model by taking the segmented regions as vertices, and
incorporate several contexts of clothing configuration (e.g., item location and
mutual interactions). The joint label assignment can be solved using the
efficient Graph Cuts algorithm. In addition to evaluating our framework on the
Fashionista dataset [30], we construct a dataset called CCP consisting of 2098
high-resolution street fashion photos to demonstrate the performance of our
system. We achieve 90.29% / 88.23% segmentation accuracy and 65.52% / 63.89%
recognition rate on the Fashionista and the CCP datasets, respectively, which
are superior compared with state-of-the-art methods.Comment: 8 pages, 5 figures, CVPR 201
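The region co-labeling phase can be sketched as an energy over a small region graph: each region (vertex) gets a clothing label, with a unary matching cost plus a pairwise compatibility term on adjacent regions. The tiny energy below is minimized exhaustively for clarity; at realistic sizes the paper's graph-cuts step does this efficiently. All labels, scores, and the adjacency are illustrative assumptions.

```python
# Minimal sketch of region co-labeling as MRF energy minimization
# (assumed toy labels/costs; exhaustive search stands in for graph cuts).
from itertools import product

LABELS = ["hat", "shirt", "skirt"]

unary = [   # unary[r][l]: cost of giving region r label l
    {"hat": 0.1, "shirt": 0.8, "skirt": 0.9},   # top region
    {"hat": 0.9, "shirt": 0.2, "skirt": 0.7},   # middle region
    {"hat": 0.9, "shirt": 0.6, "skirt": 0.3},   # bottom region
]
edges = [(0, 1), (1, 2)]    # adjacent regions in the image

def pairwise(a, b):
    """Penalize adjacent regions sharing the same label (Potts-style term)."""
    return 0.5 if a == b else 0.0

def best_labeling():
    def energy(assign):
        e = sum(unary[r][assign[r]] for r in range(len(unary)))
        e += sum(pairwise(assign[i], assign[j]) for i, j in edges)
        return e
    return min(product(LABELS, repeat=len(unary)), key=energy)

print(best_labeling())
```

The paper's contexts (item location, mutual interactions) would enter through richer unary and pairwise terms than the ones assumed here.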
Hierarchical Object Parsing from Structured Noisy Point Clouds
Object parsing and segmentation from point clouds are challenging tasks
because the relevant data is available only as thin structures along object
boundaries or other features, and is corrupted by large amounts of noise. To
handle this kind of data, flexible shape models are desired that can accurately
follow the object boundaries. Popular models such as Active Shape and Active
Appearance models lack the necessary flexibility for this task, while recent
approaches such as the Recursive Compositional Models make model
simplifications in order to obtain computational guarantees. This paper
investigates a hierarchical Bayesian model of shape and appearance in a
generative setting. The input data is explained by an object parsing layer,
which is a deformation of a hidden PCA shape model with Gaussian prior. The
paper also introduces a novel efficient inference algorithm that uses informed
data-driven proposals to initialize local searches for the hidden variables.
Applied to the problem of object parsing from structured point clouds such as
edge detection images, the proposed approach obtains state-of-the-art parsing
errors on two standard datasets without using any intensity information.
Comment: 13 pages, 16 figures
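The hidden PCA shape model at the core of the generative setting can be sketched directly: a shape is the mean contour plus a weighted sum of learned deformation modes, and inference searches the low-dimensional coefficients. Below, the shapes, the single mode, and the closed-form fit are illustrative assumptions (the paper's inference uses data-driven proposals and local search over many hidden variables).

```python
# Minimal sketch of a PCA shape model and coefficient recovery (assumed toy data).

def pca_shape(mean, modes, coeffs):
    """Reconstruct shape points: mean + sum_k c_k * mode_k."""
    return [m + sum(c * md[i] for c, md in zip(coeffs, modes))
            for i, m in enumerate(mean)]

def fit_coeff(mean, mode, observed):
    """Closed-form least-squares projection of (observed - mean) onto one mode."""
    resid = [o - m for o, m in zip(observed, mean)]
    return sum(r * v for r, v in zip(resid, mode)) / sum(v * v for v in mode)

mean  = [0.0, 1.0, 2.0, 3.0]      # 1D point heights of the mean contour
mode  = [1.0, -1.0, 1.0, -1.0]    # one learned deformation mode
truth = pca_shape(mean, [mode], [0.3])
c = fit_coeff(mean, mode, truth)
print(round(c, 6))                # recovers the hidden coefficient
```

With noisy, partially observed boundary points the projection above would become the inner step of a search over correspondences, which is where the paper's data-driven proposals come in.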
Multi-Atlas Segmentation using Partially Annotated Data: Methods and Annotation Strategies
Multi-atlas segmentation is a widely used tool in medical image analysis,
providing robust and accurate results by learning from annotated atlas
datasets. However, the availability of fully annotated atlas images for
training is limited due to the time required for the labelling task.
Segmentation methods requiring only a proportion of each atlas image to be
labelled could therefore reduce the workload on expert raters tasked with
annotating atlas images. To address this issue, we first re-examine the
labelling problem common in many existing approaches and formulate its solution
in terms of a Markov Random Field energy minimisation problem on a graph
connecting atlases and the target image. This provides a unifying framework for
multi-atlas segmentation. We then show how modifications in the graph
configuration of the proposed framework enable the use of partially annotated
atlas images and investigate different partial annotation strategies. The
proposed method was evaluated on two Magnetic Resonance Imaging (MRI) datasets
for hippocampal and cardiac segmentation. Experiments were performed that aimed
at (1) recreating existing segmentation techniques with the proposed framework
and (2) demonstrating the potential of employing sparsely annotated atlas data
for multi-atlas segmentation.
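The partial-annotation idea can be sketched with a simple weighted label fusion: each atlas proposes a label per voxel (or `None` where it was not annotated), votes are weighted by an atlas-to-target similarity, and unannotated voxels simply contribute no vote. The weights and labels are illustrative; the paper formulates fusion as an MRF energy minimisation on a graph connecting atlases and the target, of which this voting is a degenerate (pairwise-free) case.

```python
# Minimal sketch of label fusion with partially annotated atlases (assumed toy data).

def fuse(atlas_labels, weights):
    """atlas_labels[a][v]: label from atlas a at voxel v, or None if unannotated."""
    n_vox = len(atlas_labels[0])
    fused = []
    for v in range(n_vox):
        votes = {}
        for a, lab in enumerate(atlas_labels):
            if lab[v] is not None:           # partial annotation: skip the gaps
                votes[lab[v]] = votes.get(lab[v], 0.0) + weights[a]
        fused.append(max(votes, key=votes.get) if votes else None)
    return fused

atlases = [
    [1, 1, 0, None],     # atlas 0: last voxel unannotated
    [1, 0, 0, 0],
    [None, 1, 1, 0],     # atlas 2: first voxel unannotated
]
print(fuse(atlases, weights=[1.0, 0.8, 0.6]))   # → [1, 1, 0, 0]
```

In the full framework, changing which atlas voxels are annotated amounts to changing the graph configuration, which is what enables the different annotation strategies the abstract investigates.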
Multilayer Markov Random Field Models for Change Detection in Optical Remote Sensing Images
In this paper, we give a comparative study of three Multilayer Markov Random Field (MRF) based solutions proposed for change detection in optical remote sensing images, called Multicue MRF, Conditional Mixed Markov model, and Fusion MRF. Our purposes are twofold. On the one hand, we highlight the significance of this model family and set it against various state-of-the-art approaches through a thematic analysis and quantitative tests. We discuss the advantages and drawbacks of class-comparison vs. direct approaches, the usage of training data, various targeted application fields, and different ways of ground-truth generation, while informing the reader of the roles in which Multilayer MRFs can be efficiently applied. On the other hand, we emphasize the differences between the three focused models at various levels, considering the model structures, feature extraction, layer interpretation, change-concept definition, parameter tuning, and performance. We provide qualitative and quantitative comparison results, principally using a publicly available change detection database which contains aerial image pairs and ground-truth change masks. We conclude that the discussed models are competitive against alternative state-of-the-art solutions when used as pre-processing filters in multitemporal optical image analysis. In addition, together they cover a large range of applications, considering the different usage options of the three approaches.
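The common core shared by MRF-based change detectors can be sketched in a few lines: a per-pixel change likelihood from the image difference, regularized by a Potts smoothness prior, minimized here with a few ICM sweeps on a tiny grid. The threshold, weights, and ICM (in place of the surveyed models' multilayer inference) are all illustrative assumptions.

```python
# Minimal sketch of MRF-regularized change detection (assumed toy difference map).

def icm_change_mask(diff, thresh=0.5, beta=0.6, sweeps=3):
    """diff[y][x] in [0,1]: per-pixel change evidence. Returns a 0/1 change mask."""
    h, w = len(diff), len(diff[0])
    mask = [[1 if diff[y][x] > thresh else 0 for x in range(w)] for y in range(h)]
    for _ in range(sweeps):
        for y in range(h):
            for x in range(w):
                def cost(c):
                    unary = abs(diff[y][x] - c)               # data term
                    pw = sum(beta                              # Potts smoothness
                             for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                             if 0 <= y + dy < h and 0 <= x + dx < w
                             and mask[y + dy][x + dx] != c)
                    return unary + pw
                mask[y][x] = min((0, 1), key=cost)
    return mask

diff = [[0.9, 0.8, 0.1, 0.1],
        [0.7, 0.6, 0.1, 0.1],
        [0.1, 0.1, 0.1, 0.9]]    # lone spurious "change" in the bottom-right
print(icm_change_mask(diff))
```

The smoothness prior keeps the coherent change region top-left and suppresses the isolated spurious pixel, which is the behaviour the surveyed multilayer models refine with richer cues and layer interactions.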
Statistical interaction modeling of bovine herd behaviors
While there has been interest in modeling the group behavior of herds or flocks, much of this work has focused on simulating their collective spatial motion patterns; it has not accounted for individuality in the herd and instead assumes a homogenized role for all members or sub-groups of the herd. Animal behavior experts have noted that domestic animals exhibit behaviors indicative of social hierarchy: leader/follower behaviors are present, as well as dominance and subordination, aggression and rank order, and specific social affiliations may also exist. Both wild and domestic cattle are social species, and group behaviors are likely to be influenced by the expression of specific social interactions. In this paper, Global Positioning System (GPS) coordinate fixes gathered from a herd of beef cows, tracked in open fields over several days at a time, are utilized to learn a model that focuses on the interactions within the herd as well as its overall movement. Using the data in this way tests the validity of existing group behavior models against actual herding behaviors. Domain knowledge, location geography, and human observations are utilized to explain the causes of deviations from this idealized behavior.
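One simple interaction statistic that could be extracted from such GPS fixes is a lagged-distance "follow" score: cow j follows cow i if j's position at time t+lag tends to be close to i's position at time t. The tracks, the lag, and the statistic itself are illustrative assumptions, not the paper's learned model.

```python
# Minimal sketch of a leader/follower statistic from GPS tracks (assumed toy data).

def follow_score(track_i, track_j, lag=1):
    """Mean distance between i at t and j at t+lag (lower = j trails i)."""
    pairs = zip(track_i[:-lag], track_j[lag:])
    ds = [((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5
          for (xi, yi), (xj, yj) in pairs]
    return sum(ds) / len(ds)

leader  = [(t, 0.0) for t in range(6)]       # moves east one unit per fix
trailer = [(t - 1, 0.0) for t in range(6)]   # same path, one step behind
loner   = [(0.0, t) for t in range(6)]       # heads off north alone

print(follow_score(leader, trailer))   # trailer at t+1 sits exactly on leader at t
print(follow_score(leader, loner))     # large: no following relationship
```

Aggregating such pairwise scores over days of fixes would give the kind of interaction structure (leaders, followers, affiliations) that the abstract argues homogenized flocking models miss.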