Hierarchical image simplification and segmentation based on Mumford-Shah-salient level line selection
Hierarchies, such as the tree of shapes, are popular representations for
image simplification and segmentation thanks to their multiscale structures.
Selecting meaningful level lines (boundaries of shapes) simplifies the
image while keeping salient structures intact. Many image simplification and
segmentation methods are driven by the optimization of an energy functional,
for instance the celebrated Mumford-Shah functional. In this paper, we propose
an efficient approach to hierarchical image simplification and segmentation
based on the minimization of the piecewise-constant Mumford-Shah functional.
This method follows the current trend of producing
hierarchical results rather than a single partition. Contrary to classical
approaches which compute optimal hierarchical segmentations from an input
hierarchy of segmentations, we rely on the tree of shapes, a unique and
well-defined representation equivalent to the image. Simply put, we compute for
each level line of the image an attribute function that characterizes its
persistence under the energy minimization. Then we stack the level lines from
meaningless ones to salient ones through a saliency map based on extinction
values defined on the tree-based shape space. Qualitative illustrations and
quantitative evaluation on the Weizmann segmentation evaluation database
demonstrate the state-of-the-art performance of our method.
Comment: Pattern Recognition Letters, Elsevier, 201
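The energy driving the selection is the piecewise-constant Mumford-Shah functional, which trades data fidelity against total boundary length. As a rough illustration of that trade-off, here is a minimal sketch of evaluating the energy on a flat label map (`mumford_shah_energy` is a hypothetical helper for illustration only; the paper operates on level lines of the tree of shapes, not on a flat partition):

```python
import numpy as np

def mumford_shah_energy(image, labels, lam=1.0):
    """Piecewise-constant Mumford-Shah energy of a labeled partition:
    sum of squared deviations from each region's mean, plus lam times
    the boundary length (counted as label changes between 4-adjacent
    pixels)."""
    fidelity = 0.0
    for r in np.unique(labels):
        region = image[labels == r]
        fidelity += float(np.sum((region - region.mean()) ** 2))
    # boundary length: 4-adjacent pixel pairs whose labels differ
    boundary = int(np.sum(labels[:, 1:] != labels[:, :-1])
                   + np.sum(labels[1:, :] != labels[:-1, :]))
    return fidelity + lam * boundary
```

Raising `lam` penalizes boundaries more heavily, so fewer level lines survive and the simplification becomes coarser, which is exactly the knob a hierarchy of increasingly simplified images exposes.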
Unsupervised Controllable Text Formalization
We propose a novel framework for controllable natural language
transformation. Realizing that the requirement of a parallel corpus is
practically unattainable for controllable generation tasks, an unsupervised
training scheme is introduced. The crux of the framework is a deep neural
encoder-decoder that is reinforced with text-transformation knowledge through
auxiliary modules (called scorers). The scorers, based on off-the-shelf
language processing tools, decide the learning scheme of the encoder-decoder
based on its actions. We apply this framework for the text-transformation task
of formalizing an input text by improving its readability grade; the degree of
required formalization can be controlled by the user at run-time. Experiments
on public datasets demonstrate the efficacy of our model towards: (a)
transforming a given text to a more formal style, and (b) introducing an
appropriate degree of formality in the output text according to the input
control. Our code and datasets are released for academic use.
Comment: AAA
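To make the scorer idea concrete, here is a minimal sketch of a readability-grade scorer and the reward it could supply to the encoder-decoder. The Flesch-Kincaid-style heuristic and both function names are assumptions for illustration; the paper's actual scorers are built from off-the-shelf language processing tools:

```python
import re

def count_syllables(word):
    # crude heuristic: one syllable per group of consecutive vowels
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_grade(text):
    """Approximate Flesch-Kincaid grade level, one plausible
    off-the-shelf readability score a scorer module could use."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

def formality_reward(candidate, source):
    """Reward the generator when its candidate raises the readability
    grade relative to the source text (a stand-in for the paper's
    scorer-based learning signal)."""
    return readability_grade(candidate) - readability_grade(source)
```

A reinforcement-style training scheme can then push the encoder-decoder toward outputs with positive reward, with the user's run-time control setting how large a grade shift is requested.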
Style Separation and Synthesis via Generative Adversarial Networks
Style synthesis has attracted great interest recently, but few works focus on
its dual problem, "style separation". In this paper, we propose the Style
Separation and Synthesis Generative Adversarial Network (S3-GAN) to
simultaneously implement style separation and style synthesis on object
photographs of specific categories. Based on the assumption that the object
photographs lie on a manifold, and the contents and styles are independent, we
employ S3-GAN to build mappings between the manifold and a latent vector space
for separating and synthesizing the contents and styles. The S3-GAN consists of
an encoder network, a generator network, and an adversarial network. The
encoder network performs style separation by mapping an object photograph to a
latent vector. Two halves of the latent vector represent the content and style,
respectively. The generator network performs style synthesis by taking a
concatenated vector as input. The concatenated vector contains the style half
vector of the style target image and the content half vector of the content
target image. Once images are obtained from the generator network, an
adversarial network is imposed to make them more photo-realistic.
Experiments on CelebA and UT Zappos 50K datasets demonstrate that the S3-GAN
can perform style separation and style synthesis simultaneously, and can
capture various styles in a single model.
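The latent-vector manipulation described above — content in one half, style in the other, swapped between two images before generation — can be sketched in a few lines. These helpers are hypothetical; in S3-GAN the latent codes come from the encoder network and are consumed by the generator network:

```python
import numpy as np

def split_latent(z):
    """Split a latent vector into its content half and style half,
    mirroring how the encoder's output is interpreted."""
    half = len(z) // 2
    return z[:half], z[half:]

def style_transfer_latent(z_content_img, z_style_img):
    """Build the generator input for style synthesis: content half
    taken from one image's latent code, style half from another's."""
    content, _ = split_latent(z_content_img)
    _, style = split_latent(z_style_img)
    return np.concatenate([content, style])
```

Feeding the concatenated vector to the generator yields an image with the first photograph's content rendered in the second photograph's style.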
Privacy Preserving Multi-Server k-means Computation over Horizontally Partitioned Data
The k-means clustering is one of the most popular clustering algorithms in
data mining. Recently, much research has concentrated on the setting where
the dataset is divided among multiple parties or is too
large to be handled by the data owner. In the latter case, usually some servers
are hired to perform the task of clustering. The dataset is divided by the data
owner among the servers who together perform the k-means and return the cluster
labels to the owner. The major challenge in this method is to prevent the
servers from gaining substantial information about the actual data of the
owner. Several algorithms have been designed in the past that provide
cryptographic solutions to perform privacy preserving k-means. We provide a new
method to perform k-means over a large set using multiple servers. Our
technique avoids heavy cryptographic computations and instead we use a simple
randomization technique to preserve the privacy of the data. The k-means
computed has exactly the same efficiency and accuracy as the k-means computed
over the original dataset without any randomization. We argue that our
algorithm is secure against an honest-but-curious, passive adversary.
Comment: 19 pages, 4 tables. International Conference on Information Systems
Security. Springer, Cham, 201
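For the randomization to leave the k-means result exactly unchanged, it must preserve pairwise Euclidean distances. One transformation with that property is a random orthogonal rotation plus a shared random translation; the sketch below assumes the paper's randomization has this distance-preserving character (its exact technique may differ):

```python
import numpy as np

def randomize(data, rng):
    """Mask raw records with a random rotation plus a shared random
    translation. Both operations preserve pairwise Euclidean
    distances, so k-means on the masked data yields exactly the same
    cluster labels as on the original, while the servers never see
    the owner's actual values."""
    dim = data.shape[1]
    # random orthogonal matrix via QR decomposition of a Gaussian matrix
    q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
    shift = rng.normal(size=dim)
    return data @ q + shift
```

The data owner keeps `q` and `shift` secret; the cluster labels returned by the servers apply unchanged to the original records, so no inverse transformation of the centers is even required for labeling.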
Privacy-Preserving and Outsourced Multi-User k-Means Clustering
Many techniques for privacy-preserving data mining (PPDM) have been
investigated over the past decade. Often, the entities involved in the data
mining process are end-users or organizations with limited computing and
storage resources. As a result, such entities may want to refrain from
participating in the PPDM process. To overcome this issue, and to reap the many
other benefits of cloud computing, outsourcing PPDM tasks to the cloud
environment has recently gained special attention. We consider the scenario
where n entities outsource their databases (in encrypted format) to the cloud
and ask the cloud to perform the clustering task on their combined data in a
privacy-preserving manner. We term such a process privacy-preserving and
outsourced distributed clustering (PPODC). In this paper, we propose a novel
and efficient solution to the PPODC problem based on the k-means clustering
algorithm. The main novelty of our solution lies in avoiding the secure
division operations required in computing cluster centers altogether through an
efficient transformation technique. Our solution builds the clusters securely
in an iterative fashion and returns the final cluster centers to all entities
when a pre-determined termination condition holds. The proposed solution
protects data confidentiality of all the participating entities under the
standard semi-honest model. To the best of our knowledge, ours is the first
work to discuss and propose a comprehensive solution to the PPODC problem that
incurs negligible cost on the participating entities. We theoretically estimate
both the computation and communication costs of the proposed protocol and also
demonstrate its practical value through experiments on a real dataset.
Comment: 16 pages, 2 figures, 5 table
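Division can be eliminated from the nearest-center test because each center mu_j = s_j / c_j can be kept as a (sum, count) pair: cross-multiplying both squared distances by c_j^2 * c_k^2 cancels the common ||x||^2 term and leaves a division-free comparison. The plaintext sketch below shows that rearrangement under the assumption that this is the kind of transformation the paper applies inside its secure protocol (the helpers are hypothetical names):

```python
import numpy as np

def closer_to_first(x, s_j, c_j, s_k, c_k):
    """True if x is closer to center s_j/c_j than to s_k/c_k.
    Multiplying both squared distances by c_j^2 * c_k^2 cancels the
    shared c_j^2 * c_k^2 * ||x||^2 term, leaving an integer-friendly
    comparison with no division anywhere."""
    lhs = c_k * c_k * (s_j @ s_j - 2 * c_j * (x @ s_j))
    rhs = c_j * c_j * (s_k @ s_k - 2 * c_k * (x @ s_k))
    return lhs < rhs

def nearest_center(x, sums, counts):
    """Cluster assignment using only the (sum, count) representation
    of each center -- the style of rearrangement that lets a secure
    protocol avoid costly secure-division sub-protocols."""
    best = 0
    for j in range(1, len(sums)):
        if closer_to_first(x, sums[j], counts[j], sums[best], counts[best]):
            best = j
    return best
```

Because only additions and multiplications remain, the comparison maps naturally onto cryptographic primitives that support those operations, which is where the efficiency gain over secure division comes from.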