Line Simplification
As an important practice of map generalization, the aim of line
simplification is to reduce the number of points without destroying the
essential shape or the salient character of a cartographic curve. This subject
has been well-studied in the literature. This entry attempts to introduce how
line simplification can be guided by fractal geometry, or the recurring scaling
pattern of far more small things than large ones. The line simplification
process involves nothing more than removing small things while retaining large
ones based on head/tail breaks. Comment: 6 pages, 3 figures
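The removal step described above can be sketched as follows. The per-point size metric used here (perpendicular deviation from the line through a point's neighbors) is an illustrative assumption, not necessarily the metric the entry has in mind:

```python
# Sketch: head/tail breaks (recursively split at the mean, keep the
# minority "head") applied to point sizes on a polyline. The size
# metric below is an assumption for illustration.

def head_tail_breaks(values, head_ratio=0.4):
    """Return break values: repeatedly split at the mean while the
    head (values above the mean) remains a minority."""
    breaks = []
    current = list(values)
    while len(current) > 1:
        m = sum(current) / len(current)
        head = [v for v in current if v > m]
        if not head or len(head) / len(current) > head_ratio:
            break
        breaks.append(m)
        current = head
    return breaks

def point_size(prev, pt, nxt):
    """Perpendicular distance from pt to the line through prev and nxt."""
    (x1, y1), (x2, y2), (x0, y0) = prev, nxt, pt
    dx, dy = x2 - x1, y2 - y1
    denom = (dx * dx + dy * dy) ** 0.5 or 1.0
    return abs(dy * x0 - dx * y0 + x2 * y1 - y2 * x1) / denom

def simplify(points):
    """Drop interior points whose size falls below the first
    head/tail break, i.e. remove the far-more-numerous small things."""
    if len(points) < 3:
        return list(points)
    sizes = [point_size(points[i - 1], points[i], points[i + 1])
             for i in range(1, len(points) - 1)]
    breaks = head_tail_breaks(sizes)
    cut = breaks[0] if breaks else min(sizes)
    kept = [points[0]]
    kept += [points[i] for i in range(1, len(points) - 1)
             if sizes[i - 1] >= cut]
    kept.append(points[-1])
    return kept
```

On a curve with one large bend and several tiny wiggles, the wiggles fall in the tail and are removed while the bend survives.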
Integrating Transformer and Paraphrase Rules for Sentence Simplification
Sentence simplification aims to reduce the complexity of a sentence while
retaining its original meaning. Current models for sentence simplification
adopted ideas from machine translation studies and implicitly learned
simplification mapping rules from normal-simple sentence pairs. In this paper,
we explore a novel model based on a multi-layer and multi-head attention
architecture, and we propose two innovative approaches to integrate the Simple
PPDB (A Paraphrase Database for Simplification), an external paraphrase
knowledge base for simplification that covers a wide range of real-world
simplification rules. The experiments show that the integration provides two
major benefits: (1) the integrated model outperforms multiple state-of-the-art
baseline models for sentence simplification in the literature, and (2) through
analysis of the rule utilization, the model seeks to select more accurate
simplification rules. The code and models used in the paper are available at
https://github.com/Sanqiang/text_simplification.
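A minimal sketch of consulting an external paraphrase-rule table during simplification; the rules, confidence scores, and greedy longest-match strategy below are toy assumptions, not the paper's actual Simple PPDB integration:

```python
# Toy rule table in the spirit of Simple PPDB: each entry maps a
# complex phrase to a simpler phrase with a confidence score.
# All rules and scores here are made up for illustration.

RULES = {
    "utilize": ("use", 0.95),
    "in order to": ("to", 0.90),
    "commence": ("start", 0.85),
}

def apply_rules(sentence, min_score=0.8):
    """Greedily rewrite phrases that match a rule whose confidence
    clears the threshold; longer phrases are tried first."""
    out = sentence
    for src, (tgt, score) in sorted(RULES.items(),
                                    key=lambda kv: -len(kv[0])):
        if score >= min_score and src in out:
            out = out.replace(src, tgt)
    return out
```

The paper integrates such rules into the decoder itself rather than as a post-process, but the lookup-and-score pattern is the same.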
Static Contract Simplification
Contracts and contract monitoring are a powerful mechanism for specifying
properties and guaranteeing them at run time. However, run time monitoring of
contracts imposes a significant overhead. The execution time is impacted by the
insertion of contract checks as well as by the introduction of proxy objects
that perform delayed contract checks on demand.
Static contract simplification attacks this issue using program
transformation. It applies compile-time transformations to programs with
contracts to reduce the overall run time while preserving the original
behavior. Our key technique is to statically propagate contracts through the
program and to evaluate and merge contracts where possible. The goal is to
obtain residual contracts that are collectively cheaper to check at run time.
We distinguish different levels of preservation of behavior, which impose
different limitations on the admissible transformations: Strong blame
preservation, where the transformation is a behavioral equivalence, and weak
blame preservation, where the transformed program is equivalent up to the
particular violation reported. Our transformations never increase the overall
number of contract checks. Comment: Technical Report
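A toy sketch of the merge step, assuming contracts are simple interval predicates (an illustrative simplification; real contract systems track blame labels and arbitrary predicates): two checks on the same value collapse into one residual check.

```python
# Sketch: merging two interval contracts at "compile time" into a
# single residual contract. The Range representation is an
# assumption for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class Range:
    """Contract asserting lo <= x <= hi."""
    lo: float
    hi: float

    def merge(self, other):
        """Two checks on the same value collapse into one residual
        check over the intersection of their ranges."""
        return Range(max(self.lo, other.lo), min(self.hi, other.hi))

    def check(self, x):
        return self.lo <= x <= self.hi
```

Merging `Range(0, 100)` with `Range(5, 50)` yields the single residual contract `Range(5, 50)`: one run-time check instead of two, with the same values accepted.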
Global Curve Simplification
Due to its many applications, \emph{curve simplification} is a long-studied
problem in computational geometry and adjacent disciplines, such as graphics,
geographical information science, etc. Given a polygonal curve P with n
vertices, the goal is to find another polygonal curve P' with a smaller
number of vertices such that P' is sufficiently similar to P. Quality
guarantees of a simplification are usually given in a \emph{local} sense,
bounding the distance between a shortcut and its corresponding section of the
curve. In this work, we aim to provide a systematic overview of curve
simplification problems under \emph{global} distance measures that bound the
distance between P and P'. We consider six different curve distance
measures: three variants of the \emph{Hausdorff} distance and three variants of
the \emph{Fréchet} distance. We also study different restrictions on the
choice of vertices for P'. We provide polynomial-time algorithms for some
variants of the global curve simplification problem and show NP-hardness for
other variants. Through this systematic study we observe, for the first time,
some surprising patterns, and suggest directions for future research in this
important area. Comment: 33 pages, 16 figures
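For concreteness, one of the global measures above, the Hausdorff distance, can be approximated on the curves' vertex sets. This is a discrete, vertex-sampled sketch, not the continuous measure the survey analyzes:

```python
# Sketch: discrete (vertex-sampled) Hausdorff distance between two
# polygonal curves given as point sequences. Only an approximation
# of the continuous Hausdorff distance.

import math

def _directed(a, b):
    """Largest distance from a point of a to its nearest point of b."""
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sequences a and b."""
    return max(_directed(a, b), _directed(b, a))
```

Bounding this quantity between the input P and the simplification P' is what distinguishes the global problem from the usual local, per-shortcut guarantees.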
A Constrained Sequence-to-Sequence Neural Model for Sentence Simplification
Sentence simplification reduces semantic complexity to benefit people with
language impairments. Previous simplification studies on the sentence level and
word level have achieved promising results but also meet great challenges. For
sentence-level studies, sentences after simplification are fluent but sometimes
are not really simplified. For word-level studies, words are simplified but
also have potential grammar errors due to different usages of words before and
after simplification. In this paper, we propose a two-step simplification
framework by combining both the word-level and the sentence-level
simplifications, making use of their corresponding advantages. Based on the
two-step framework, we implement a novel constrained neural generation model to
simplify sentences given simplified words. The final results on Wikipedia and
Simple Wikipedia aligned datasets indicate that our method yields better
performance than various baselines.
Multiresolution topological simplification
Persistent homology has been devised as a promising tool for the topological
simplification of complex data. However, it is computationally intractable for
large data sets. In this work, we introduce multiresolution persistent homology
for tackling large data sets. Our basic idea is to match the resolution with
the scale of interest so as to create a topological microscopy for the
underlying data. We utilize the flexibility-rigidity index (FRI) to assess the
topological connectivity of the data set and define a rigidity density for the
filtration analysis. By appropriately tuning the resolution, we are able to
focus the topological lens on a desirable scale. The proposed multiresolution
topological analysis is validated by a hexagonal fractal image which has three
distinct scales. We further demonstrate the proposed method for extracting
topological fingerprints from DNA and RNA molecules. In particular, the
topological persistence of a virus capsid with 240 protein monomers is
successfully analyzed, which would otherwise be inaccessible to the standard
point-cloud method and unreliable under coarse-grained multiscale persistent
homology. The proposed method has also been successfully applied to protein
domain classification, which is, to our knowledge, the first time persistent
homology has been used for practical protein domain analysis. The proposed
multiresolution topological method has potential applications to arbitrary data
sets, such as social networks, biological networks, and graphs. Comment: 22 pages and 14 figures
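A schematic of an FRI-style rigidity density with a Gaussian kernel, where the resolution parameter eta plays the role of the tunable scale; the toy 2-D setting is an assumption for illustration, not the paper's molecular data:

```python
# Sketch: flexibility-rigidity-index (FRI) style rigidity density.
# Each atom contributes a Gaussian correlation; a larger eta means
# a broader kernel, i.e. a coarser resolution.

import math

def rigidity_density(point, atoms, eta):
    """Sum of Gaussian correlations between `point` and all atoms."""
    return sum(math.exp(-(math.dist(point, a) / eta) ** 2)
               for a in atoms)
```

Thresholding this density at increasing levels gives the filtration used for the persistent-homology computation; tuning eta moves the "topological lens" between fine and coarse scales.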
Efficient LTL Decentralized Monitoring Framework Using Formula Simplification Table
This paper presents a new technique for optimizing formal analysis of
propositional logic formulas and Linear Temporal Logic (LTL) formulas, namely
the formula simplification table. A formula simplification table is a
mathematical table that shows all possible simplifications of the formula under
different truth assignments of its variables. The advantages of constructing a
simplification table of a formula are two-fold. First, it can be used to
compute the logical influence weight of each variable in the formula, which is
a metric that shows the importance of the variable in affecting the outcome of
the formula. Second, it can be used to identify variables that have the highest
logical influences on the outcome of the formula. We demonstrate the
effectiveness of the formula simplification table in the context of software
verification by developing an efficient framework for the well-known
decentralized LTL monitoring problem.
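A sketch of the "logical influence weight" read as Boolean influence: the fraction of assignments to the remaining variables under which flipping one variable flips the formula's outcome. Encoding formulas as Python predicates over a dict is an assumption of this sketch, not the paper's construction:

```python
# Sketch: Boolean influence of a variable v in a propositional
# formula, computed by enumerating all truth assignments of the
# other variables and checking whether flipping v flips the result.

from itertools import product

def influence(formula, variables, v):
    """Fraction of assignments to the other variables under which
    flipping v changes the formula's outcome."""
    others = [u for u in variables if u != v]
    flips = 0
    for bits in product([False, True], repeat=len(others)):
        env = dict(zip(others, bits))
        lo = formula({**env, v: False})
        hi = formula({**env, v: True})
        flips += lo != hi
    return flips / 2 ** len(others)
```

For `(a and b) or c`, the variable `c` decides the outcome whenever `a and b` is false (3 of 4 cases), so its influence is higher than that of `a` or `b`; ranking monitors by such weights is the kind of optimization the simplification table enables.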
Simple and Effective Text Simplification Using Semantic and Neural Methods
Sentence splitting is a major simplification operator. Here we present a
simple and efficient splitting algorithm based on an automatic semantic parser.
After splitting, the text is amenable for further fine-tuned simplification
operations. In particular, we show that neural Machine Translation can be
effectively used in this situation. Previous applications of Machine
Translation to simplification suffer from a considerable drawback: they are
over-conservative, often failing to modify the source in any way. Splitting
based on semantic parsing, as proposed here, alleviates this issue. Extensive
automatic and human evaluation shows that the proposed method compares
favorably to the state-of-the-art in combined lexical and structural
simplification.
Semantic Structural Evaluation for Text Simplification
Current measures for evaluating text simplification systems focus on
evaluating lexical text aspects, neglecting its structural aspects. In this
paper we propose the first measure to address structural aspects of text
simplification, called SAMSA. It leverages recent advances in semantic parsing
to assess simplification quality by decomposing the input based on its semantic
structure and comparing it to the output. SAMSA provides a reference-less
automatic evaluation procedure, avoiding the problems that reference-based
methods face due to the vast space of valid simplifications for a given
sentence. Our human evaluation experiments show both SAMSA's substantial
correlation with human judgments, as well as the deficiency of existing
reference-based measures in evaluating structural simplification.
Mastering Sketching: Adversarial Augmentation for Structured Prediction
We present an integral framework for training sketch simplification networks
that convert challenging rough sketches into clean line drawings. Our approach
augments a simplification network with a discriminator network, training both
networks jointly so that the discriminator network discerns whether a line
drawing is real training data or the output of the simplification network,
which in turn tries to fool it. This approach has two major advantages. First,
because the discriminator network learns the structure in line drawings, it
encourages the output sketches of the simplification network to be more similar
in appearance to the training sketches. Second, we can also train the
simplification network with additional unsupervised data, using the
discriminator network as a substitute teacher. Thus, by adding only rough
sketches without simplified line drawings, or only line drawings without the
original rough sketches, we can improve the quality of the sketch
simplification. We show how our framework can be used to train models that
significantly outperform the state of the art in the sketch simplification
task, despite using the same architecture for inference. We additionally
present an approach to optimize for a single image, which improves accuracy at
the cost of additional computation time. Finally, we show that, using the same
framework, it is possible to train the network to perform the inverse problem,
i.e., convert simple line sketches into pencil drawings, which is not possible
using the standard mean squared error loss. We validate our framework with two
user tests, where our approach is preferred to the state of the art in sketch
simplification 92.3% of the time and obtains 1.2 more points on a scale of 1 to
5. Comment: 12 pages, 14 figures
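The joint objective can be schematized with scalar stand-ins for the networks; the loss shapes below (log-loss discriminator, MSE plus a weighted adversarial term for the generator) follow the standard GAN recipe and are an illustration, not the paper's exact formulation:

```python
# Sketch of the adversarial training objective. d_real and d_fake
# stand in for the discriminator's scores in (0, 1); mse stands in
# for the supervised reconstruction loss of the simplification
# network G. All values here are scalar placeholders.

import math

def discriminator_loss(d_real, d_fake):
    """D should score real line drawings high and G's outputs low."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(mse, d_fake, alpha=0.1):
    """G minimizes reconstruction error while fooling D: a high
    d_fake (D fooled) makes the adversarial term small."""
    return mse + alpha * -math.log(d_fake)
```

The unsupervised augmentation in the paper corresponds to training G on the adversarial term alone when no paired target drawing is available, with D acting as the substitute teacher.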