Type-driven Synthesis of Evolving Data Models
Modern commercial software is often framed under the umbrella of data-centric applications: applications that treat data as their main and permanent asset. These applications use a single data model, built before the applications themselves, for application functionality, data management, and analytical activities.
Moreover, since applications are temporary while data is not, the data schema must be continuously evolved and changed to accommodate new functionality. In this sense, the continuously evolving (rich) feature set expected of state-of-the-art applications is intrinsically bound not only by the amount of available data but also by its structure, its internal dependencies, and by the ability to transparently and uniformly grow and evolve data representations and their properties on the fly.
The GOLEM project aims to produce new methods of program automation integrated into the development of data-centric applications in low-code frameworks. In this context,
one of the key targets for automation is the data layer itself, encompassing the data layout
and its integrity constraints, as well as validation and access control rules.
The aim of this dissertation, which is integrated in GOLEM, is to develop a synthesis framework that, based on high-level specifications, correctly defines and evolves a rich data layer component by means of high-level operations. The framework was constructed by defining a specification language to express richly-typed specifications, a target language that is the goal of synthesis, and a type-directed synthesis procedure based on proof-search concepts.
The range of real database operations the framework is able to synthesize is demonstrated
through a case study. In a component-based synthesis style, with an extensible
library of base operations on database tables (specified using the target language) in context,
the case study shows that the synthesis framework is capable of expressing and
solving a wide variety of data schema creation and evolution problems.
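The component-based, type-directed search described above can be sketched in miniature. The schema representation, operation names, and breadth-first strategy below are illustrative assumptions for exposition, not the dissertation's actual specification or target languages:

```python
from collections import deque

# Hypothetical component library: each base operation maps a schema
# (a frozenset of (table, column) pairs) to a new schema. The names and
# the schema encoding are illustrative, not the thesis's target language.
def add_column(table, column):
    def op(schema):
        return schema | {(table, column)}
    op.name = f"add_column({table}, {column})"
    return op

def drop_column(table, column):
    def op(schema):
        return schema - {(table, column)}
    op.name = f"drop_column({table}, {column})"
    return op

def synthesize(initial, goal, library, max_depth=4):
    """Breadth-first search over compositions of library operations,
    a crude stand-in for proof search in a type-directed setting."""
    queue = deque([(initial, [])])
    seen = {initial}
    while queue:
        schema, plan = queue.popleft()
        if schema == goal:
            return [op.name for op in plan]
        if len(plan) == max_depth:
            continue
        for op in library:
            nxt = op(schema)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, plan + [op]))
    return None  # no composition of components reaches the goal

initial = frozenset({("users", "id")})
goal = frozenset({("users", "id"), ("users", "email")})
library = [add_column("users", "email"), drop_column("users", "id")]
print(synthesize(initial, goal, library))  # ['add_column(users, email)']
```

In the dissertation's setting the components carry rich types rather than concrete schema transformers, and the search is guided by those types instead of exhaustive enumeration; the sketch only shows the compositional shape of the problem.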
Modular Inference of Linear Types for Multiplicity-Annotated Arrows
Bernardy et al. [2018] proposed a linear type system, λq→, as a core type system of Linear Haskell. In the system, linearity is represented by annotated arrow types A →m B, where m denotes the multiplicity of the argument. Thanks to this representation, existing non-linear code typechecks as is, and newly written linear code can be used with existing non-linear code in many cases. However, little is known about type inference for λq→. Although the Linear Haskell implementation is equipped with type inference, its algorithm has not been formalized, and the implementation often fails to infer principal types, especially for higher-order functions. In this paper, based on OutsideIn(X) [Vytiniotis et al., 2011], we propose an inference system for a rank 1 qualified-typed variant of λq→, which infers principal types. A technical challenge in this new setting is to deal with ambiguous types inferred by naive qualified typing. We address this ambiguity issue through quantifier elimination and demonstrate the effectiveness of the approach with examples.
Comment: The full version of our paper to appear in ESOP 2020.
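As a rough illustration of the kind of constraint solving such inference involves, the toy solver below computes least solutions of ordering constraints over the two multiplicities 1 (linear) and ω (unrestricted). The encoding and the fixpoint reduction are assumptions for exposition, not the paper's algorithm:

```python
# Multiplicities: 1 (linear) and "w" (unrestricted, i.e. omega), ordered 1 <= w.
# Inference emits constraints "lhs <= rhs"; we compute the least assignment to
# multiplicity variables, a crude stand-in for resolving ambiguous qualified
# types. All names here are illustrative.
ONE, MANY = 1, "w"

def solve(constraints, variables):
    """Least solution of {lhs <= rhs}; lhs/rhs are constants or variable names."""
    assign = {v: ONE for v in variables}  # start everything linear
    val = lambda x: assign.get(x, x)      # variables resolve via assign
    changed = True
    while changed:
        changed = False
        for lhs, rhs in constraints:
            # "lhs <= rhs" is violated only when lhs is MANY and rhs is ONE;
            # if rhs is a variable, repair by raising it to MANY.
            if val(lhs) == MANY and val(rhs) == ONE and rhs in assign:
                assign[rhs] = MANY
                changed = True
    return assign

# A function that duplicates its argument forces multiplicity "w", which
# then propagates through the second constraint:
constraints = [(MANY, "p"), ("p", "q")]
print(solve(constraints, ["p", "q"]))  # {'p': 'w', 'q': 'w'}
```

Choosing the least solution here plays the role the paper assigns to quantifier elimination: among the assignments satisfying the constraints, it picks a canonical one instead of leaving the type ambiguous.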
Specifying Theorem Provers in a Higher-Order Logic Programming Language
Since logic programming systems directly implement search and unification, and since these operations are essential for the implementation of most theorem provers, logic programming languages should make ideal implementation languages for theorem provers. We shall argue that this is indeed the case if the logic programming language is extended in several ways. We present an extended logic programming language where first-order terms are replaced with simply-typed λ-terms, higher-order unification replaces first-order unification, and implication and universal quantification are allowed in queries and the bodies of clauses. This language naturally specifies inference rules for various proof systems. The primitive search operations required to search for proofs generally have very simple implementations using the logical connectives of this extended logic programming language. Higher-order unification, which provides sophisticated pattern matching on formulas and proofs, can be used to determine when and at what instance an inference rule can be employed in the search for a proof. Tactics and tacticals, which provide a framework for high-level control over search, can also be directly implemented in this extended language. The theorem provers presented in this paper have been implemented in the higher-order logic programming language λProlog.
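The tactics-and-tacticals control structure mentioned above can be sketched outside logic programming as well. In the following Python toy (the goal encoding, tactic names, and combinators are all illustrative), a tactic maps a goal to a list of remaining subgoals or fails, and tacticals such as `then` and `orelse` compose tactics:

```python
# A tactic takes a goal and returns a list of subgoals (empty list = proved),
# or None on failure. Goals are tuples like ("and", a, b) or ("atom", name).
# This sketches only the control structure, not lambda Prolog itself.
def split_and(goal):
    """Reduce a conjunction to its two conjuncts."""
    if goal[0] == "and":
        return [goal[1], goal[2]]
    return None

def assumption(known):
    """Close a goal that is already among the known facts."""
    def tac(goal):
        return [] if goal in known else None
    return tac

def then(t1, t2):
    """Apply t1, then apply t2 to every resulting subgoal."""
    def tac(goal):
        subs = t1(goal)
        if subs is None:
            return None
        out = []
        for g in subs:
            rest = t2(g)
            if rest is None:
                return None
            out += rest
        return out
    return tac

def orelse(t1, t2):
    """Try t1; fall back to t2 on failure."""
    def tac(goal):
        r = t1(goal)
        return r if r is not None else t2(goal)
    return tac

known = {("atom", "p"), ("atom", "q")}
prove = then(orelse(split_and, assumption(known)), assumption(known))
print(prove(("and", ("atom", "p"), ("atom", "q"))))  # [] -- no subgoals left
```

In the paper's setting these combinators fall out almost for free: the logical connectives of the extended language (conjunction, implication, universal quantification) directly provide the sequencing and backtracking that `then` and `orelse` hand-code here.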
Constraint Handling Rules with Binders, Patterns and Generic Quantification
Constraint Handling Rules provide descriptions for constraint solvers. However, they fall short when those constraints specify some binding structure, like higher-rank types in a constraint-based type inference algorithm. In this paper, the term syntax of constraints is replaced by λ-tree syntax, in which binding is explicit, and a new generic quantifier is introduced, which is used to create new fresh constants.
Comment: Paper presented at the 33rd International Conference on Logic Programming (ICLP 2017), Melbourne, Australia, August 28 to September 1, 2017. 16 pages, LaTeX, no PDF figures.
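For readers unfamiliar with CHR, the classic less-than-or-equal solver gives the flavor of the rule style being extended. The Python fixed-point loop below is a hand-rolled approximation of those rules (the encoding and names are illustrative, not a CHR implementation):

```python
# Classic CHR "leq" solver re-expressed as a fixed-point loop.
# Constraints are ("leq", x, y) tuples; subst records variables identified
# by the antisymmetry rule.
def chr_leq(constraints):
    store = set(constraints)
    subst = {}

    def find(x):
        while x in subst:
            x = subst[x]
        return x

    changed = True
    while changed:
        changed = False
        # Normalize through the substitution, then apply
        # reflexivity: leq(X, X) <=> true.
        store = {("leq", find(a), find(b)) for (_, a, b) in store}
        store = {c for c in store if c[1] != c[2]}
        for (_, a, b) in store:
            if ("leq", b, a) in store:   # antisymmetry: leq(X,Y), leq(Y,X) <=> X = Y
                lo, hi = sorted((a, b))
                subst[hi] = lo
                changed = True
                break
        if changed:
            continue
        for (_, a, b) in list(store):    # transitivity: leq(X,Y), leq(Y,Z) ==> leq(X,Z)
            for (_, c, d) in list(store):
                if b == c and a != d and ("leq", a, d) not in store:
                    store.add(("leq", a, d))
                    changed = True
    return store, subst

store, subst = chr_leq([("leq", "x", "y"), ("leq", "y", "z"), ("leq", "z", "x")])
print(store, subst)  # the cycle collapses: empty store, y and z identified with x
```

The paper's point is that rules like these work over first-order terms only; once the constrained objects themselves contain binders (as types with higher-rank quantification do), the term syntax must become λ-tree syntax and fresh constants must be introduced by a generic quantifier rather than ad hoc.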
Nominal Abstraction
Recursive relational specifications are commonly used to describe the
computational structure of formal systems. Recent research in proof theory has
identified two features that facilitate direct, logic-based reasoning about
such descriptions: the interpretation of atomic judgments through recursive
definitions and an encoding of binding constructs via generic judgments.
However, logics encompassing these two features do not currently allow for the
definition of relations that embody dynamic aspects related to binding, a
capability needed in many reasoning tasks. We propose a new relation between
terms called nominal abstraction as a means for overcoming this deficiency. We
incorporate nominal abstraction into a rich logic also including definitions,
generic quantification, induction, and co-induction that we then prove to be
consistent. We present examples to show that this logic can provide elegant
treatments of binding contexts that appear in many proofs, such as those
establishing properties of typing calculi and of arbitrarily cascading
substitutions that play a role in reducibility arguments.
Comment: To appear in the journal Information and Computation.
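A minimal example of a recursive relational specification is the typing judgment of the simply-typed λ-calculus. The Python encoding below handles binding with named variables and an explicit context, which is exactly the kind of ad hoc treatment the paper's generic judgments are meant to replace; it is offered only as a point of contrast:

```python
# The typing judgment of the simply-typed lambda calculus as a recursive
# definition over term syntax. Terms: ("var", x), ("lam", x, T, body),
# ("app", e1, e2). Types: atoms or ("arrow", T, U). Encoding is illustrative.
def type_of(ctx, term):
    tag = term[0]
    if tag == "var":                     # ctx |- x : T   if  x:T in ctx
        return ctx.get(term[1])
    if tag == "lam":                     # ctx |- \x:T. e : T -> U
        _, x, t, body = term
        u = type_of({**ctx, x: t}, body)
        return ("arrow", t, u) if u else None
    if tag == "app":                     # ctx |- e1 e2 : U  if  e1 : T -> U, e2 : T
        f = type_of(ctx, term[1])
        a = type_of(ctx, term[2])
        if f and f[0] == "arrow" and f[1] == a:
            return f[2]
        return None

ident = ("lam", "x", "base", ("var", "x"))
print(type_of({}, ident))  # ('arrow', 'base', 'base')
```

In the logic the paper develops, the `lam` case would instead extend the judgment with a generically quantified name, and nominal abstraction is what lets relations inspect and manipulate such names inside recursive definitions.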