A comparison of Kernel methods for instantiating case based reasoning systems
Instance-based reasoning systems, and case-based reasoning systems in general, are normally used in problems for which it is difficult to define rules. Instance-based reasoning is the term that tends to be applied to systems where there is a great amount of data (often of a numerical nature). The volume of data in such systems leads to difficulties with respect to case retrieval and matching. This paper presents a comparative study of a group of kernel-based methods, which attempt to identify only the most significant cases with which to instantiate a case base. Kernels were originally derived in the context of Support Vector Machines, which identify the smallest number of data points necessary to solve a particular problem (e.g. regression or classification). We use unsupervised kernel methods to identify the optimal cases with which to instantiate a case base. The efficiencies of the kernel models, measured as Mean Absolute Percentage Error, are compared on an oceanographic problem.
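The abstract does not name the specific kernel models compared. As one hedged illustration of the general idea, the sketch below uses a one-class SVM (an unsupervised kernel method available in scikit-learn) and keeps only its support vectors as the instantiated case base; the model choice, parameters, and synthetic data are assumptions made for the example, not the paper's method.

```python
# Illustrative only: select "significant" cases with an unsupervised
# kernel method by keeping the support vectors of a one-class SVM.
# The RBF kernel, nu, and the synthetic data are assumptions for this
# sketch, not the models actually compared in the paper.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
cases = rng.normal(size=(500, 4))        # stand-in for numeric case data

ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale")
ocsvm.fit(cases)

# The support vectors are the small subset of cases the kernel model
# deems necessary to describe the data: a candidate case base.
case_base = cases[ocsvm.support_]
print(f"kept {len(case_base)} of {len(cases)} cases")
```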
Representing Isabelle in LF
LF has been designed and successfully used as a meta-logical framework to
represent and reason about object logics. Here we design a representation of
the Isabelle logical framework in LF using the recently introduced module
system for LF. The major novelty of our approach is that we can naturally
represent the advanced Isabelle features of type classes and locales.
Our representation of type classes relies on a feature so far lacking in the
LF module system: morphism variables and abstraction over them. While
conservative over the present system in terms of expressivity, this feature is
needed for a representation of type classes that preserves the modular
structure. Therefore, we also design the necessary extension of the LF module
system.
Comment: In Proceedings LFMTP 2010, arXiv:1009.218
A Bi-Directional Refinement Algorithm for the Calculus of (Co)Inductive Constructions
The paper describes the refinement algorithm for the Calculus of
(Co)Inductive Constructions (CIC) implemented in the interactive theorem prover
Matita. The refinement algorithm is in charge of giving a meaning to the terms,
types and proof terms directly written by the user or generated by using
tactics, decision procedures, or general automation. The terms are written in an "external syntax" meant to be user-friendly, which allows omission of information, untyped binders, and a certain liberal use of user-defined subtyping. The refiner modifies the terms to obtain related well-typed terms
in the internal syntax understood by the kernel of the ITP. In particular, it
acts as a type inference algorithm when all the binders are untyped. The
proposed algorithm is bi-directional: given a term in external syntax and a
type expected for the term, it propagates as much typing information as
possible towards the leaves of the term. Traditional mono-directional
algorithms, instead, proceed in a bottom-up way by inferring the type of a
sub-term and comparing (unifying) it with the type expected by its context only
at the end. We propose some novel bi-directional rules for CIC that are
particularly effective. Among the benefits of bi-directionality are better error message reporting and better inference of dependent types. Moreover,
thanks to bi-directionality, the coercion system for sub-typing is more
effective and type inference generates simpler unification problems that are
more likely to be solved by the inherently incomplete higher-order unification algorithms implemented. Finally, we introduce in the external syntax the notion of a vector of placeholders, which makes it possible to omit an arbitrary number of arguments at once. Vectors of placeholders allow a trivial implementation of implicit arguments and greatly simplify the implementation of primitive and simple tactics.
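The refiner itself operates on full CIC inside Matita; as a minimal hedged sketch of the bi-directional idea only, the following toy checker for a simply-typed lambda calculus shows how an expected type is propagated towards the leaves so that untyped binders can be typed from their context rather than inferred bottom-up (all names here are invented for the example).

```python
# A toy bi-directional type checker for a simply-typed lambda calculus.
# This is NOT the Matita refiner for CIC; it only illustrates the core
# bi-directional idea: the expected type is pushed towards the leaves,
# so untyped binders (as in the paper's external syntax) get their types
# from the surrounding context.

from dataclasses import dataclass

@dataclass(frozen=True)
class Base:            # a base type such as nat
    name: str

@dataclass(frozen=True)
class Arrow:           # a function type dom -> cod
    dom: object
    cod: object

@dataclass(frozen=True)
class Var:             # a variable occurrence
    name: str

@dataclass(frozen=True)
class Lam:             # a lambda with an UNTYPED binder
    var: str
    body: object

@dataclass(frozen=True)
class App:             # an application
    fun: object
    arg: object

def infer(ctx, t):
    """Bottom-up mode: synthesise a type for t."""
    if isinstance(t, Var):
        return ctx[t.name]
    if isinstance(t, App):
        ft = infer(ctx, t.fun)
        if not isinstance(ft, Arrow):
            raise TypeError(f"applying a non-function of type {ft}")
        check(ctx, t.arg, ft.dom)   # switch to checking mode on the argument
        return ft.cod
    raise TypeError("cannot infer a type for an unannotated lambda")

def check(ctx, t, expected):
    """Top-down mode: push the expected type towards the leaves."""
    if isinstance(t, Lam):
        if not isinstance(expected, Arrow):
            raise TypeError(f"lambda checked against non-arrow type {expected}")
        # The untyped binder receives its type from the expected arrow type.
        return check({**ctx, t.var: expected.dom}, t.body, expected.cod)
    # Fallback: infer and compare (a real refiner would unify here).
    got = infer(ctx, t)
    if got != expected:
        raise TypeError(f"expected {expected}, got {got}")

# (\x. x) against nat -> nat: the binder x gets its type from the context.
check({}, Lam("x", Var("x")), Arrow(Base("nat"), Base("nat")))
print("checked")
```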
A Comparison of Big Data Frameworks on a Layered Dataflow Model
In the world of Big Data analytics, there is a series of tools that aim to simplify the programming of applications to be executed on clusters. Although each tool claims to provide better programming, data, and execution models, for which only informal (and often confusing) semantics are generally provided, all share a common underlying model, namely the Dataflow model. The Dataflow model we
propose shows how various tools share the same expressiveness at different
levels of abstraction. The contribution of this work is twofold: first, we show
that the proposed model is (at least) as general as existing batch and
streaming frameworks (e.g., Spark, Flink, Storm), thus making it easier to
understand high-level data-processing applications written in such frameworks.
Second, we provide a layered model that can represent tools and applications
following the Dataflow paradigm and we show how the analyzed tools fit in each
level.
Comment: 19 pages, 6 figures, 2 tables. In Proc. of the 9th Intl Symposium on High-Level Parallel Programming and Applications (HLPP), July 4-5 2016, Muenster, Germany
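As a hedged illustration of the underlying Dataflow model (not the paper's formal layered semantics), the toy pipeline below represents a program as a directed graph of operators through which data flows, mimicking the map/filter style of Spark or Flink at a very small scale; all class and function names are made up for the sketch.

```python
# Toy sketch of the Dataflow idea: a program is a graph of operators,
# and running it makes data tokens flow through the graph. Illustrative
# only; Node, source, map, filter, collect are names invented here.

class Node:
    def __init__(self, fn, upstream=None):
        self.fn = fn              # transformation applied to the tokens
        self.upstream = upstream  # producing node, None for a source

    def map(self, f):
        return Node(lambda xs: [f(x) for x in xs], upstream=self)

    def filter(self, p):
        return Node(lambda xs: [x for x in xs if p(x)], upstream=self)

    def collect(self):
        # Evaluate the graph by pulling tokens from the source downwards.
        data = self.upstream.collect() if self.upstream else []
        return self.fn(data)

def source(items):
    return Node(lambda _: list(items))

# A small map/filter pipeline of the kind Spark or Flink would express.
pipeline = source(range(10)).map(lambda x: x * x).filter(lambda x: x % 2 == 0)
print(pipeline.collect())   # [0, 4, 16, 36, 64]
```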
Methods for Solving Necessary Equivalences
Nonmonotonic Logics such as Autoepistemic Logic, Reflective Logic, and Default Logic are usually defined in terms of set-theoretic fixed-point equations over deductively closed sets of sentences of First
Order Logic. Such systems may also be represented as necessary equivalences in a Modal Logic stronger than
S5, with the added advantage that such representations may be generalized to allow quantified variables crossing modal scopes, resulting in a Quantified Autoepistemic Logic, a Quantified Autoepistemic Kernel, a Quantified
Reflective Logic, and a Quantified Default Logic. Quantifiers in all these generalizations obey all the normal laws
of logic including both the Barcan formula and its converse. Herein, we address the problem of solving some
necessary equivalences containing universal quantifiers over modal scopes. Solutions obtained by these methods are then compared to related results obtained in the literature by Circumscription in Second Order Logic, since the disjunction of all the solutions of a necessary equivalence containing just normal defaults in these Quantified Logics is equivalent to that system.
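For concreteness, the kind of set-theoretic fixed-point equation referred to above can be illustrated by the standard stable-expansion condition of Autoepistemic Logic (textbook background rather than a formula taken from this paper): a theory T is a stable expansion of a premise set A exactly when

```latex
% Moore's stable-expansion condition for Autoepistemic Logic, shown only
% as a concrete instance of the set-theoretic fixed-point equations
% mentioned above; L is the modal belief operator.
\[
  T \;=\; \mathrm{Th}\bigl(\, A
      \,\cup\, \{\, L\varphi \;:\; \varphi \in T \,\}
      \,\cup\, \{\, \lnot L\varphi \;:\; \varphi \notin T \,\} \,\bigr)
\]
```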
Graphical Reasoning in Compact Closed Categories for Quantum Computation
Compact closed categories provide a foundational formalism for a variety of
important domains, including quantum computation. These categories have a
natural visualisation as a form of graphs. We present a formalism for
equational reasoning about such graphs and develop this into a generic proof
system with a fixed logical kernel for equational reasoning about compact
closed categories. Automating this reasoning process is motivated by the slow and error-prone nature of manual graph manipulation. A salient feature of our
system is that it provides a formal and declarative account of derived results
that can include 'ellipses'-style notation. We illustrate the framework by
instantiating it for a graphical language of quantum computation and show how
this can be used to perform symbolic computation.
Comment: 21 pages, 9 figures. This is the journal version of the paper published at AIS
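As a hedged, much-simplified illustration of equational graph rewriting of the kind being automated (not the paper's proof system or its 'ellipses' machinery), the sketch below applies a single fusion rule that merges adjacent nodes of the same kind, loosely in the style of spider fusion in graphical quantum languages; the graph encoding, the rule, and the phase arithmetic are assumptions made for the example.

```python
# A much-simplified sketch of equational graph rewriting. It is NOT the
# paper's proof system: the encoding and the single rule are assumptions
# made for this illustration only.

def fuse_adjacent(nodes, edges):
    """nodes: {id: (kind, phase)}; edges: set of frozenset({a, b}).
    Apply one fusion step if possible; report whether a rule fired."""
    for edge in edges:
        a, b = tuple(edge)
        (kind_a, ph_a), (kind_b, ph_b) = nodes[a], nodes[b]
        if kind_a == kind_b:                    # left-hand side matches
            nodes[a] = (kind_a, ph_a + ph_b)    # fuse: phases add
            del nodes[b]
            new_edges = set()
            for e in edges - {edge}:            # redirect b's edges to a
                e = frozenset(a if v == b else v for v in e)
                if len(e) == 2:                 # drop self-loops
                    new_edges.add(e)
            return nodes, new_edges, True
    return nodes, edges, False

# Two adjacent 'green' nodes fuse into one whose phase is 0.25 + 0.5;
# the 'red' node is untouched (surviving node ids depend on set order).
nodes = {1: ("green", 0.25), 2: ("green", 0.5), 3: ("red", 0.0)}
edges = {frozenset({1, 2}), frozenset({2, 3})}
changed = True
while changed:                                  # rewrite to normal form
    nodes, edges, changed = fuse_adjacent(nodes, edges)
print(nodes, edges)
```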
A Verified Information-Flow Architecture
SAFE is a clean-slate design for a highly secure computer system, with
pervasive mechanisms for tracking and limiting information flows. At the lowest
level, the SAFE hardware supports fine-grained programmable tags, with
efficient and flexible propagation and combination of tags as instructions are
executed. The operating system virtualizes these generic facilities to present
an information-flow abstract machine that allows user programs to label
sensitive data with rich confidentiality policies. We present a formal,
machine-checked model of the key hardware and software mechanisms used to
dynamically control information flow in SAFE and an end-to-end proof of
noninterference for this model.
We use a refinement proof methodology to propagate the noninterference property of the abstract machine down to the concrete machine level. We introduce an intermediate layer in the refinement chain that factors out the details of the
information-flow control policy and devise a code generator for compiling such
information-flow policies into low-level monitor code. Finally, we verify the
correctness of this generator using a dedicated Hoare logic that abstracts from
low-level machine instructions into a reusable set of verified structured code
generators.
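As a hedged toy model of the tag-propagation idea (emphatically not the verified SAFE machine, its tag rules, or its monitor code), the sketch below runs a tiny stack machine in which every value carries a label from a two-point lattice, arithmetic joins the labels of its operands, and an output rule blocks HIGH data on a LOW channel; the instruction names and the lattice are assumptions made for the example.

```python
# Toy model of dynamic tag propagation in the spirit of the SAFE design.
# NOT the verified SAFE machine or its rule set: the two-point lattice
# LOW < HIGH and the instruction names are assumptions for this sketch.

LOW, HIGH = 0, 1
join = max            # least upper bound in the two-point lattice

def run(program, stack=None):
    """Execute a list of instructions over a stack of (value, tag) pairs."""
    stack = stack or []
    for op, *args in program:
        if op == "push":                   # push a tagged literal
            value, tag = args
            stack.append((value, tag))
        elif op == "add":                  # result tag = join of operand tags
            (v1, t1), (v2, t2) = stack.pop(), stack.pop()
            stack.append((v1 + v2, join(t1, t2)))
        elif op == "output":               # monitor: no HIGH data on LOW channel
            value, tag = stack.pop()
            channel, = args
            if tag == HIGH and channel == LOW:
                raise RuntimeError("IFC violation: HIGH value on LOW channel")
            print(f"out[{channel}] = {value}")
    return stack

# A HIGH-tagged secret taints the sum, so outputting it on LOW is stopped.
prog = [("push", 1, LOW), ("push", 41, HIGH), ("add",), ("output", LOW)]
run(prog)   # raises RuntimeError: IFC violation
```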