235,831 research outputs found
Inferring Types to Eliminate Ownership Checks in an Intentional JavaScript Compiler
Concurrent programs are notoriously difficult to develop due to the non-deterministic nature of thread scheduling. It is desirable to have a programming language that makes such development easier. Tscript is such a system: an extension of JavaScript that provides multithreading support along with intent specification. These intents allow a programmer to specify how parts of the program interact in a multithreaded context. However, enforcing intents requires run-time memory checks, which can be inefficient. This thesis implements an optimization in the Tscript compiler that seeks to reduce this inefficiency through static analysis. Our approach uses both type inference and dataflow analysis to eliminate unnecessary run-time checks.
Using cross-lingual information to cope with underspecification in formal ontologies
Description logics and other formal devices are frequently used as means for preventing or detecting mistakes in ontologies. Some of these devices are also capable of inferring the existence of inter-concept relationships that have not been explicitly entered into an ontology. A prerequisite, however, is that this information can be derived from those formal definitions of concepts and relationships which are included within the ontology. In this paper, we present a novel algorithm that is able to suggest relationships among existing concepts in a formal ontology that are not derivable from such formal definitions. The algorithm exploits cross-lingual information that is implicitly present in the collection of terms used in various languages to denote the concepts and relationships at issue. By using a specific experimental design, we are able to quantify the impact of cross-lingual information in coping with underspecification in formal ontologies.
D-Bees: A Novel Method Inspired by Bee Colony Optimization for Solving Word Sense Disambiguation
Word sense disambiguation (WSD) is a problem in the field of computational linguistics: finding the intended sense of a word (or a set of words) when it is activated within a certain context. WSD was recently addressed as a combinatorial optimization problem in which the goal is to find a sequence of senses that maximizes the semantic relatedness among the target words. In this article, a novel algorithm for solving the WSD problem, called D-Bees, is proposed, inspired by bee colony optimization (BCO), in which artificial bee agents collaborate to solve the problem. The D-Bees algorithm is evaluated on a standard dataset (the SemEval 2007 coarse-grained English all-words task corpus) and is compared to simulated annealing, genetic algorithms, and two ant colony optimization (ACO) techniques. The BCO and ACO approaches are found to be on par.
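The combinatorial formulation above can be made concrete with a small sketch. This is not the D-Bees algorithm itself, only the underlying objective it optimizes: pick one sense per word so that total pairwise relatedness is maximized. The sense labels and relatedness scores below are hypothetical toy data, and the search is exhaustive, which is tractable only for tiny inputs.

```python
from itertools import product

def best_sense_assignment(candidate_senses, relatedness):
    """Exhaustively choose one sense per word, maximizing the sum of
    pairwise relatedness scores over the chosen senses."""
    best, best_score = None, float("-inf")
    for assignment in product(*candidate_senses):
        score = sum(relatedness(a, b)
                    for i, a in enumerate(assignment)
                    for b in assignment[i + 1:])
        if score > best_score:
            best, best_score = assignment, score
    return best, best_score

# Toy example with two ambiguous words and made-up sense labels.
senses = [["bank#finance", "bank#river"],
          ["deposit#money", "deposit#geology"]]
scores = {("bank#finance", "deposit#money"): 0.9,
          ("bank#river", "deposit#geology"): 0.6}

def rel(a, b):
    return scores.get((a, b), scores.get((b, a), 0.0))

print(best_sense_assignment(senses, rel))
```

Metaheuristics such as BCO, ACO, or simulated annealing explore this same search space without enumerating it, which is what makes them attractive when the number of sense combinations explodes.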
A Supervised Learning Approach to Acronym Identification
This paper addresses the task of finding acronym-definition pairs in text. Most of the previous work on the topic involves systems built from manually generated rules or regular expressions. In this paper, we present a supervised learning approach to the acronym identification task. Our approach reduces the search space of the supervised learning system by putting some weak constraints on the kinds of acronym-definition pairs that can be identified. We obtain results comparable to hand-crafted systems that use stronger constraints. We describe our method for reducing the search space, the features used by our supervised learning system, and our experiments with various learning schemes.
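To illustrate what a weak constraint on candidate pairs might look like (this is a generic sketch, not the paper's actual method or feature set), the snippet below extracts "Long Form (LF)" candidates, requiring only that each acronym letter begin one of the immediately preceding words, in order:

```python
import re

def acronym_candidates(text):
    """Find (definition, acronym) candidates of the form 'Long Form (LF)'.
    Weak constraint: the acronym has n letters, and each letter must
    begin one of the n words immediately before the parenthesis."""
    pairs = []
    for m in re.finditer(r"\(([A-Z]{2,})\)", text):
        acronym = m.group(1)
        words = text[:m.start()].split()
        window = words[-len(acronym):]  # one word per acronym letter
        if len(window) == len(acronym) and all(
                w[0].upper() == c for w, c in zip(window, acronym)):
            pairs.append((" ".join(window), acronym))
    return pairs

print(acronym_candidates(
    "We study word sense disambiguation (WSD) and the "
    "World Health Organization (WHO)."))
```

A supervised system would then score each surviving candidate with learned features rather than accept it outright, which is how weak candidate generation and learning divide the work.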
Mistakes in medical ontologies: Where do they come from and how can they be detected?
We present the details of a methodology for quality assurance in large medical terminologies and describe three algorithms that can help terminology developers and users to identify potential mistakes. The methodology is based in part on linguistic criteria and in part on logical and ontological principles governing sound classifications. We conclude by outlining the results of applying the methodology in the form of a taxonomy of the different types of errors and potential errors detected in SNOMED-CT.
Parameterized Construction of Program Representations for Sparse Dataflow Analyses
Data-flow analyses usually associate information with control-flow regions. Informally, if these regions are small, like a point between two consecutive statements, we call the analysis dense. On the other hand, if these regions include many such points, then we call it sparse. This paper presents a systematic method to build program representations that support sparse analyses. To pave the way to this framework, we clarify the literature on well-known intermediate program representations. We show that our approach, up to parameter choice, subsumes many of these representations, such as the SSA, SSI, and e-SSA forms. In particular, our algorithms are faster, simpler, and more frugal than the previous techniques used to construct SSI (Static Single Information) form programs. We produce intermediate representations isomorphic to Choi et al.'s Sparse Evaluation Graphs (SEGs) for the family of data-flow problems that can be partitioned per variable. However, contrary to SEGs, we can handle, sparsely, problems that are not in this family.
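The dense/sparse distinction can be illustrated with a small sketch of constant propagation over an SSA-like program (an illustrative example, not the paper's construction): facts are attached to SSA names and pushed only along def-use edges, rather than through every program point.

```python
def sparse_const_prop(defs):
    """Sparse constant propagation: `defs` maps each SSA name to
    ('const', c) or ('add', x, y). Known values are propagated only
    to the users of a name, not through every program point."""
    value = {}
    uses = {}
    for v, d in defs.items():
        if d[0] != 'const':
            for arg in d[1:]:
                uses.setdefault(arg, []).append(v)

    def evaluate(v):
        d = defs[v]
        if d[0] == 'const':
            return d[1]
        x, y = value.get(d[1]), value.get(d[2])
        return None if x is None or y is None else x + y

    worklist = list(defs)
    while worklist:
        v = worklist.pop()
        new = evaluate(v)
        if new is not None and value.get(v) != new:
            value[v] = new
            worklist.extend(uses.get(v, []))  # revisit users of v only
    return value

prog = {'a': ('const', 2), 'b': ('const', 3),
        'c': ('add', 'a', 'b'), 'd': ('add', 'c', 'a')}
print(sparse_const_prop(prog))
```

A dense formulation would instead carry the whole environment of variable values across every statement boundary; sparsity pays off exactly when most program points are irrelevant to most facts.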
Separating Regular Languages with First-Order Logic
Given two languages, a separator is a third language that contains the first one and is disjoint from the second one. We investigate the following decision problem: given two regular input languages of finite words, decide whether there exists a first-order definable separator. We prove that, in order to answer this question, sufficient information can be extracted from semigroups recognizing the input languages, using a fixpoint computation. This yields an EXPTIME algorithm for checking first-order separability. Moreover, the correctness proof of this algorithm yields a stronger result, namely a description of a possible separator. Finally, we generalize this technique to answer the same question for regular languages of infinite words.
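The definition of a separator can be checked empirically on bounded samples. The sketch below is only an illustration of the definition (it tests all strings up to a length bound, and is unrelated to the semigroup-based decision procedure): with L1 = a* and L2 = words containing the factor bb, the language "no two consecutive b's" (which is first-order definable) separates them.

```python
import re
from itertools import product

def strings_up_to(alphabet, n):
    for k in range(n + 1):
        for t in product(alphabet, repeat=k):
            yield "".join(t)

def separates(sep, l1, l2, alphabet="ab", n=6):
    """Check, on all strings up to length n, that `sep` contains l1
    and is disjoint from l2 -- the definition of a separator."""
    for w in strings_up_to(alphabet, n):
        if re.fullmatch(l1, w) and not re.fullmatch(sep, w):
            return False  # a word of l1 falls outside the separator
        if re.fullmatch(l2, w) and re.fullmatch(sep, w):
            return False  # the separator overlaps l2
    return True

# "(b?a)*b?" matches exactly the words with no 'bb' factor.
print(separates(r"(b?a)*b?", r"a*", r"[ab]*bb[ab]*"))
```

Of course, agreement on bounded samples is only evidence, not proof; the point of the paper's fixpoint computation is to decide separability exactly, for all word lengths at once.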
Many Roads to Synchrony: Natural Time Scales and Their Algorithms
We consider two important time scales, the Markov and cryptic orders, that monitor how an observer synchronizes to a finitary stochastic process. We show how to compute these orders exactly and that they are most efficiently calculated from the epsilon-machine, a process's minimal unifilar model. Surprisingly, though the Markov order is a basic concept from stochastic process theory, it is not a probabilistic property of a process. Rather, it is a topological property and, moreover, it is not computable from any finite-state model other than the epsilon-machine. Via an exhaustive survey, we close by demonstrating that infinite Markov and infinite cryptic orders are a dominant feature in the space of finite-memory processes. We draw out the roles played in statistical mechanical spin systems by these two complementary length scales.

Comment: 17 pages, 16 figures: http://cse.ucdavis.edu/~cmg/compmech/pubs/kro.htm. Santa Fe Institute Working Paper 10-11-02
- …