Conformal Wasserstein distances: comparing surfaces in polynomial time
We present a constructive approach to surface comparison realizable by a
polynomial-time algorithm. We determine the "similarity" of two given surfaces
by solving a mass-transportation problem between their conformal densities.
This mass transportation problem differs from the standard case in that we
require the solution to be invariant under global Möbius transformations.
We present in detail the case where the surfaces to compare are disk-like; we
also sketch how the approach can be generalized to other types of surfaces.
Comment: 23 pages, 3 figures
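The mass-transportation subproblem underlying such surface comparisons can be sketched as a discrete linear program. This is a minimal illustration assuming histogram densities and a precomputed ground-cost matrix, not the paper's Möbius-invariant formulation; the function name `transport_cost` is illustrative.

```python
# Minimal sketch: discrete optimal transport between two densities,
# solved as a linear program (not the paper's Mobius-invariant variant).
import numpy as np
from scipy.optimize import linprog

def transport_cost(mu, nu, cost):
    """Minimal total cost of moving density mu onto nu (both sum to 1)."""
    m, n = cost.shape
    # Equality constraints: row sums of the plan equal mu, column sums equal nu.
    A_eq = []
    for i in range(m):
        row = np.zeros((m, n)); row[i, :] = 1
        A_eq.append(row.ravel())
    for j in range(n):
        col = np.zeros((m, n)); col[:, j] = 1
        A_eq.append(col.ravel())
    b_eq = np.concatenate([mu, nu])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.fun

mu = np.array([0.5, 0.5])
nu = np.array([0.5, 0.5])
cost = np.array([[0.0, 1.0], [1.0, 0.0]])
print(transport_cost(mu, nu, cost))  # identical distributions: cost 0.0
```

For larger problems, specialized solvers are far more efficient than a dense LP, but the LP makes the constraint structure explicit.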
Semi-supervised Learning based on Distributionally Robust Optimization
We propose a novel method for semi-supervised learning (SSL) based on
data-driven distributionally robust optimization (DRO) using optimal transport
metrics. Our proposed method improves the generalization error by using the
unlabeled data to restrict the support of the worst case distribution in our
DRO formulation. We make our DRO formulation practical by proposing a
stochastic gradient descent algorithm that renders the training procedure
straightforward to implement. We demonstrate that our semi-supervised DRO
method is able to improve the generalization error over natural supervised
procedures and state-of-the-art SSL estimators. Finally, we include a
discussion on the large sample behavior of the optimal uncertainty region in
the DRO formulation. Our discussion exposes important aspects such as the role
of dimension reduction in SSL.
On the Complexity of t-Closeness Anonymization and Related Problems
An important issue in releasing individual data is to protect the sensitive
information from being leaked and maliciously utilized. Famous privacy
preserving principles that aim to ensure both data privacy and data integrity,
such as k-anonymity and l-diversity, have been extensively studied both
theoretically and empirically. Nonetheless, these widely-adopted principles are
still insufficient to prevent attribute disclosure if the attacker has partial
knowledge about the overall sensitive data distribution. The t-closeness
principle has been proposed to fix this, which also has the benefit of
supporting numerical sensitive attributes. However, in contrast to
k-anonymity and l-diversity, the theoretical aspects of t-closeness have
not been well investigated.
We initiate the first systematic theoretical study of the t-closeness
principle under the commonly-used attribute suppression model. We prove that
for every constant t such that 0 <= t < 1, it is NP-hard to find an optimal
t-closeness generalization of a given table. The proof consists of several
reductions, each of which works for different values of t, which together
cover the full range. To complement this negative result, we also provide
exact and fixed-parameter algorithms. Finally, we answer some open questions
regarding the complexity of k-anonymity and l-diversity left in the
literature.
Comment: An extended abstract to appear in DASFAA 201
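The t-closeness check at the heart of this principle can be sketched concretely: an equivalence class satisfies t-closeness if the distance between its sensitive-attribute distribution and the table-wide one is at most t. Below is a hedged sketch using the ordered-distance Earth Mover's Distance for a numerical attribute; function names are illustrative, not from the paper.

```python
# Sketch of the t-closeness condition for a numerical sensitive attribute,
# using the ordered-distance Earth Mover's Distance between an equivalence
# class's distribution and the whole table's distribution.
import numpy as np

def ordered_emd(p, q):
    """EMD between two distributions over m equally spaced ordered values."""
    m = len(p)
    cum = np.cumsum(np.asarray(p, float) - np.asarray(q, float))
    return np.abs(cum[:-1]).sum() / (m - 1)

def satisfies_t_closeness(class_dist, table_dist, t):
    """True if the class's distribution is within EMD t of the table's."""
    return ordered_emd(class_dist, table_dist) <= t

# All mass at one extreme vs. the other: maximal distance 1.0.
print(ordered_emd([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]))
```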
Performance evaluation of image segmentation
In spite of significant advances in image segmentation techniques, evaluation of these methods thus far has been largely subjective. Typically, the effectiveness of a new algorithm is demonstrated only by the presentation of a few segmented images, which are either evaluated by some method or left to the subjective judgment of the reader. We propose a new approach for the evaluation of segmentation that takes into account not only the accuracy of the boundary localization of the created segments but also the under-segmentation and over-segmentation effects, regardless of the number of regions in each partition. In addition, it takes into account the way humans perceive visual information. This new metric can be applied both to automatically rank different segmentation algorithms and to find an optimal set of input parameters for a given algorithm.
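One ingredient of such an evaluation can be illustrated directly: measuring how regions of a reference partition are split or merged in a candidate partition, independent of the region count. This toy measure is not the paper's metric; names and label maps are illustrative.

```python
# Toy illustration of over-/under-segmentation measurement: for each region
# of one partition, what fraction of its pixels falls in its dominant
# matching region of the other partition?
import numpy as np

def split_merge_scores(reference, candidate):
    """Return (under-split score, under-merge score), each in (0, 1];
    a low first value indicates over-segmentation of reference regions."""
    def dominant_fraction(a, b):
        fracs = []
        for lab in np.unique(a):
            mask = a == lab
            counts = np.bincount(b[mask])
            fracs.append(counts.max() / mask.sum())
        return float(np.mean(fracs))
    return dominant_fraction(reference, candidate), dominant_fraction(candidate, reference)

ref = np.array([[0, 0, 1, 1]])        # two reference regions
cand = np.array([[0, 1, 2, 2]])       # region 0 is split in the candidate
print(split_merge_scores(ref, cand))  # splitting lowers the first score
```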
Learning Free-Form Deformations for 3D Object Reconstruction
Representing 3D shape in deep learning frameworks in an accurate, efficient
and compact manner still remains an open challenge. Most existing work
addresses this issue by employing voxel-based representations. While these
approaches benefit greatly from advances in computer vision by generalizing 2D
convolutions to the 3D setting, they also have several considerable drawbacks.
The computational complexity of voxel-encodings grows cubically with the
resolution thus limiting such representations to low-resolution 3D
reconstruction. In an attempt to solve this problem, point cloud
representations have been proposed. Although point clouds are more efficient
than voxel representations as they only cover surfaces rather than volumes,
they do not encode detailed geometric information about relationships between
points. In this paper we propose a method to learn free-form deformations (FFD)
for the task of 3D reconstruction from a single image. By learning to deform
points sampled from a high-quality mesh, our trained model can be used to
produce arbitrarily dense point clouds or meshes with fine-grained geometry. We
evaluate our proposed framework on both synthetic and real-world data and
achieve state-of-the-art results on point-cloud and volumetric metrics.
Additionally, we qualitatively demonstrate its applicability to label
transfer for 3D semantic segmentation.
Comment: 16 pages, 7 figures, 3 tables
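The free-form deformation (FFD) machinery itself can be sketched briefly: points inside a unit cube are deformed by displacing control points of a trivariate Bernstein lattice. The paper's pipeline predicts these control-point offsets from an image; in this hedged sketch the lattice is moved by hand, and all names are illustrative.

```python
# Minimal free-form deformation (FFD) sketch: trivariate Bernstein lattice
# deforming points in the unit cube. An undisplaced lattice is the identity.
import numpy as np
from math import comb

def ffd(points, control):
    """Deform Nx3 points in [0,1]^3 with an (l+1)x(m+1)x(n+1)x3 control lattice."""
    l, m, n = (s - 1 for s in control.shape[:3])
    def bern(deg, t):
        # Bernstein basis values B_{i,deg}(t) for one parametric coordinate.
        return np.array([comb(deg, i) * t**i * (1 - t)**(deg - i)
                         for i in range(deg + 1)])
    out = np.empty_like(points)
    for k, (u, v, w) in enumerate(points):
        bu, bv, bw = bern(l, u), bern(m, v), bern(n, w)
        out[k] = np.einsum("i,j,k,ijkc->c", bu, bv, bw, control)
    return out

# Control points at their rest lattice positions reproduce the input exactly.
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 3)] * 3, indexing="ij"), axis=-1)
pts = np.array([[0.25, 0.5, 0.75]])
print(ffd(pts, grid))  # identity lattice: points unchanged
```

Because the Bernstein basis sums to one and reproduces linear functions, shifting every control point by a fixed offset translates all deformed points by that offset.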
Exploiting Sparse Representations for Robust Analysis of Noisy Complex Video Scenes
Recent works have shown that, even with simple low-level visual cues, complex behaviors can be extracted automatically from crowded scenes, e.g. those depicting public spaces recorded by video-surveillance cameras. However, low-level features such as optical flow or foreground pixels are inherently noisy. In this paper we propose a novel unsupervised learning approach for the analysis of complex scenes which is specifically tailored to cope directly with the features' noise and uncertainty. We formalize the task of extracting activity patterns as a matrix factorization problem, taking as reconstruction function the robust Earth Mover's Distance. A sparsity constraint on the computed basis matrix is imposed, filtering out noise and leading to the identification of the most relevant elementary activities in a typical high-level behavior. We further derive an alternating optimization approach that solves the proposed problem efficiently, and we show that it reduces to a sequence of linear programs. Finally, we propose to use short trajectory snippets to account for object motion information, as an alternative to the noisy optical-flow vectors used in previous works. Experimental results demonstrate that our method yields performance similar or superior to state-of-the-art approaches.
Tracking System with Re-identification Using a RGB String Kernel
Person re-identification consists in identifying a person who returns to a scene in which they have previously been detected. This key problem in visual-surveillance applications concerns both single- and multi-camera systems. The features encoding each person should be rich enough to allow efficient re-identification while remaining robust to the various phenomena that may alter a person's appearance in a video. In this paper we propose a method that encodes a person's appearance as a string of salient points. The similarity between two such strings is measured by a kernel. This kernel is combined with a tracking algorithm in order to associate a set of strings with each person and to measure similarities between persons entering the scene and persons who have left it.
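String-kernel similarity of the kind used above can be sketched with the classic spectrum kernel, which counts shared length-k substrings. The paper's kernel over strings of salient points is richer; here plain characters stand in for quantized RGB descriptors, and all names are illustrative.

```python
# Hedged sketch of a string kernel: the spectrum kernel counts k-mers shared
# by two strings, then cosine-normalizes the result to [0, 1].
from collections import Counter

def spectrum_kernel(s, t, k=2):
    """Inner product of the k-mer count vectors of strings s and t."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[g] * ct[g] for g in cs)

def normalized_similarity(s, t, k=2):
    """Cosine-normalized kernel value; 1.0 for identical strings."""
    return spectrum_kernel(s, t, k) / (
        spectrum_kernel(s, s, k) * spectrum_kernel(t, t, k)) ** 0.5

print(normalized_similarity("abcabc", "abcabc"))  # identical strings: 1.0
```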
Data-driven image color theme enhancement
Proceedings of the 3rd ACM SIGGRAPH Asia 2010, Seoul, South Korea, 15-18 December 2010.
It is often important for designers and photographers to convey or enhance desired color themes in their work. A color theme is typically defined as a template of colors and an associated verbal description. This paper presents a data-driven method for enhancing a desired color theme in an image. We formulate our goal as a unified optimization that simultaneously considers a desired color theme, texture-color relationships, and automatic or user-specified color constraints. Quantifying the difference between an image and a color theme is made possible by color mood spaces and a generalization of an additivity relationship for two-color combinations. We incorporate prior knowledge, such as texture-color relationships extracted from a database of photographs, to maintain a natural look in the edited images. Experiments and a user study have confirmed the effectiveness of our method. © 2010 ACM.
Sublinear time algorithms for earth mover's distance
We study the problem of estimating the Earth Mover's Distance (EMD) between
probability distributions when given access only to samples of the
distributions. We give closeness testers and additive-error estimators over
domains in [0,1]^d, with sample complexities independent of domain size,
permitting the testability even of continuous distributions over infinite
domains. Instead, our algorithms depend on other parameters, such as the
diameter of the domain space, which may be significantly smaller. We also
prove lower bounds showing the dependencies on these parameters to be
essentially optimal. Additionally, we consider whether natural classes of
distributions exist for which there are algorithms with better dependence on
the dimension, and show that for highly clusterable data this is indeed the
case. Lastly, we consider a variant of the EMD, defined over tree metrics
instead of the usual l_1 metric, and give tight upper and lower bounds.
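The sample-access model studied above can be illustrated in one dimension: estimate the EMD between two distributions from i.i.d. samples alone, with cost governed by sample size rather than domain size. This is a hedged sketch using SciPy's empirical 1-D Wasserstein distance; the distributions and sample counts are illustrative.

```python
# Sketch of sample-based EMD estimation: draw samples from two distributions
# and compute the empirical 1-D Wasserstein (earth mover's) distance.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
p_samples = rng.uniform(0.0, 1.0, size=2000)  # samples from distribution p
q_samples = rng.uniform(0.2, 1.2, size=2000)  # q is p shifted by 0.2

estimate = wasserstein_distance(p_samples, q_samples)
print(round(estimate, 2))  # close to the true shift of 0.2
```

The estimate never touches the (continuous, infinite) domain itself, only the samples, which is the regime the paper's testers and estimators formalize.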