286 research outputs found
Graph Structures for Knowledge Representation and Reasoning
This open access book constitutes the thoroughly refereed post-conference proceedings of the 6th International Workshop on Graph Structures for Knowledge Representation and Reasoning, GKR 2020, held virtually in September 2020 in association with ECAI 2020, the 24th European Conference on Artificial Intelligence. The 7 revised full papers presented together with 2 invited contributions were reviewed and selected from 9 submissions. The contributions address various issues in knowledge representation and reasoning against a common graph-theoretic background, which makes it possible to bridge the gap between the different communities.
Knowledge Components and Methods for Policy Propagation in Data Flows
Data-oriented systems and applications are at the centre of current developments of the World Wide Web (WWW). On the Web of Data (WoD), information sources can be accessed and processed for many purposes. Users need to be aware of any licences or terms of use associated with the data sources they want to use. Conversely, publishers need support in assigning the appropriate policies alongside the data they distribute.
In this work, we tackle the problem of policy propagation in data flows - an expression that refers to the way data is consumed, manipulated and produced within processes. We pose the question of what kind of components are required, and how they can be acquired, managed, and deployed, to support users in deciding which of the policies associated with the inputs of a data-intensive system propagate to its output. We observe three scenarios: applications of the Semantic Web, workflow reuse in Open Science, and the exploitation of urban data in City Data Hubs. Starting from the analysis of Semantic Web applications, we propose a data-centric approach to semantically describe processes as data flows: the Datanode ontology, which comprises a hierarchy of the possible relations between data objects. By means of Policy Propagation Rules, it is possible to link data flow steps and policies derivable from semantic descriptions of data licences. We show how these components can be designed, how they can be effectively managed, and how to reason efficiently with them. In a second phase, the developed components are verified using a Smart City Data Hub as a case study, where we developed an end-to-end solution for policy propagation. Finally, we evaluate our approach and report on a user study aimed at assessing both the quality and the value of the proposed solution.
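As a rough illustration of the idea only (the relation names, policies, and rule format below are invented stand-ins, not the actual Datanode relations or licence vocabulary), a policy propagation step can be sketched as a lookup of (relation, policy) pairs along a data flow:

```python
# Illustrative sketch: a data flow is a sequence of steps, each relating a
# source data object to an output via a named relation; propagation rules
# state which policies survive which relation. All names here are hypothetical.

FLOW = [
    ("dataset", "cleaned", "refactoredInto"),
    ("cleaned", "report", "summarizedInto"),
]

# Policies attached to the original input data.
INPUT_POLICIES = {"dataset": {"attribution-required", "non-commercial"}}

# (relation, policy) pairs that propagate to the output of a step.
PROPAGATION_RULES = {
    ("refactoredInto", "attribution-required"),
    ("refactoredInto", "non-commercial"),
    ("summarizedInto", "attribution-required"),
    # "non-commercial" deliberately does not survive summarization
    # in this toy rule set.
}

def propagate(flow, input_policies, rules):
    """Forward-propagate policies along the data flow, step by step."""
    policies = {k: set(v) for k, v in input_policies.items()}
    for src, dst, relation in flow:
        carried = {p for p in policies.get(src, set())
                   if (relation, p) in rules}
        policies.setdefault(dst, set()).update(carried)
    return policies

result = propagate(FLOW, INPUT_POLICIES, PROPAGATION_RULES)
```

Here only "attribution-required" reaches the final output, because it is the only policy licensed to propagate through both relations in the toy rule set.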
Improving automation in model-driven engineering using examples
This thesis aims to improve automation in Model Driven Engineering (MDE). MDE is a paradigm that promises to reduce software complexity through the intensive use of models and automatic model transformations (MT). Roughly speaking, in the MDE vision, stakeholders use several models to represent the software, and produce source code by automatically transforming these models. Consequently, automation is a key factor and founding principle of MDE. In addition to MT, other MDE activities require automation, e.g. modeling language definition and software migration.
In this context, the main contribution of this thesis is to propose a general approach for improving automation in MDE. Our approach is based on meta-heuristic search guided by examples. We apply it to two important MDE problems: (1) model transformation and (2) the precise definition of modeling languages. For transformations, we distinguish between transformations in the context of migration and general model transformations.
In the case of migration, we propose a software clustering method based on a search algorithm guided by cluster examples. Similarly, for general transformations, we learn model transformations by a genetic programming algorithm taking inspiration from examples of past transformations.
For the problem of precise metamodeling, we propose a meta-heuristic search method to derive well-formedness rules for metamodels with the objective of discriminating examples of valid and invalid models.
Our empirical evaluation shows that the proposed approaches achieve good results, both quantitatively and qualitatively. This allows us to conclude that improving automation in MDE using meta-heuristic search and examples can contribute to a wider adoption of MDE in industry in the coming years.
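The example-guided search idea can be illustrated with a deliberately tiny sketch. The model encoding, fitness function, and hill-climbing loop below are simplifications invented for illustration, not the thesis's actual algorithms: a "model" is a flat list of element types, a candidate transformation is a type-to-type mapping, and fitness is measured against input/output example pairs.

```python
import random

random.seed(0)  # deterministic run for this sketch

# Example pairs: each source model (list of element types) and the target
# model an expert produced for it. The search must recover the mapping.
EXAMPLES = [
    (["Class", "Attr", "Attr"], ["Table", "Column", "Column"]),
    (["Class", "Ref"], ["Table", "FKey"]),
]
SOURCE_TYPES = ["Class", "Attr", "Ref"]
TARGET_TYPES = ["Table", "Column", "FKey"]

def transform(mapping, model):
    return [mapping[t] for t in model]

def fitness(mapping):
    """Fraction of example elements the candidate transforms correctly."""
    hits = total = 0
    for src, expected in EXAMPLES:
        out = transform(mapping, src)
        hits += sum(a == b for a, b in zip(out, expected))
        total += len(expected)
    return hits / total

def search(iterations=500):
    """Hill climbing over candidate mappings, guided by the examples:
    mutate one type assignment, keep the neighbour if it scores no worse."""
    current = {t: random.choice(TARGET_TYPES) for t in SOURCE_TYPES}
    best = fitness(current)
    for _ in range(iterations):
        neighbour = dict(current)
        neighbour[random.choice(SOURCE_TYPES)] = random.choice(TARGET_TYPES)
        score = fitness(neighbour)
        if score >= best:
            current, best = neighbour, score
    return current, best

mapping, score = search()
```

Because fitness decomposes per element type, a correct assignment can never be lost (the mutation would lower the score and be rejected), so the search converges to the mapping implicit in the examples. Genetic programming, as used in the thesis for general transformations, replaces this single-candidate loop with a population and crossover, but the example-driven fitness principle is the same.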
Simplifying the Analysis of C++ Programs
Based on our experience of working with different C++ front ends, this thesis identifies numerous problems that complicate the analysis of C++ programs along the entire spectrum of analysis applications. We utilize library, language, and tool extensions to address these problems and offer solutions to many of them. In particular, we present efficient, expressive and non-intrusive means of dealing with abstract syntax trees of a program, which together render the visitor design pattern obsolete. We further extend C++ with open multi-methods to deal with the broader expression problem. Finally, we offer two techniques, one based on refining the type system of a language and the other on abstract interpretation, both of which allow developers to statically ensure or verify various run-time properties of their programs without having to deal with the full language semantics or even the abstract syntax tree of a program. Together, the solutions presented in this thesis make it feasible for average language users to ensure properties of interest about their C++ programs.
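The thesis extends C++ itself with open multi-methods; the following Python toy dispatcher (invented for illustration, not the thesis's mechanism) only sketches the underlying idea: an operation dispatched on the runtime types of several arguments, with new cases registerable after the fact, which is the essence of keeping both operations and types open in the expression problem.

```python
# A minimal open multi-method dispatcher: implementations are looked up
# by the tuple of runtime argument types, and new cases can be registered
# at any time without touching existing classes.
_REGISTRY = {}

def multimethod(name):
    def register(*types):
        def wrap(fn):
            _REGISTRY[(name,) + types] = fn
            return dispatch
        return wrap
    def dispatch(*args):
        fn = _REGISTRY.get((name,) + tuple(type(a) for a in args))
        if fn is None:
            raise TypeError(f"no {name} implementation for {args!r}")
        return fn(*args)
    dispatch.register = register
    return dispatch

class Shape: pass
class Circle(Shape): pass
class Square(Shape): pass

overlap = multimethod("overlap")  # hypothetical operation name

@overlap.register(Circle, Circle)
def _(a, b):
    return "circle-circle"

@overlap.register(Circle, Square)  # added "after the fact": no class edited
def _(a, b):
    return "circle-square"
```

A single-dispatch visitor would force this pairwise case analysis into nested accept/visit calls on each class; the registry keeps both dimensions open. (This sketch dispatches on exact types only; real open multi-methods also resolve over inheritance.)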
Fundamental Approaches to Software Engineering
computer software maintenance; computer software selection and evaluation; formal logic; formal methods; formal specification; programming languages; semantics; software engineering; specifications; verification
Deductive Systems in Traditional and Modern Logic
The book provides a contemporary view of different aspects of deductive systems in various types of logic, including term logics, propositional logics, logics of refutation, non-Fregean logics, higher-order logics and arithmetic.
Workshop Notes of the Sixth International Workshop "What can FCA do for Artificial Intelligence?"
International audience
Formal concept matching and reinforcement learning in adaptive information retrieval
The superiority of the human brain in information retrieval (IR) tasks seems to come firstly from its ability to read and understand the concepts, ideas or meanings central to documents, in order to reason out the usefulness of documents to information needs, and secondly from its ability to learn from experience and be adaptive to the environment. In this work we attempt to incorporate these properties into the development of an IR model to improve document retrieval. We investigate the applicability of concept lattices, which are based on the theory of Formal Concept Analysis (FCA), to the representation of documents. This allows the use of more elegant representation units, as opposed to keywords, in order to better capture concepts/ideas expressed in natural language text. We also investigate the use of a reinforcement learning strategy to learn and improve document representations, based on the information present in query statements and user relevance feedback. Features or concepts of each document/query, formulated using FCA, are weighted separately with respect to the documents they are in, and organised into separate concept lattices according to a subsumption relation. Furthermore, each concept lattice is encoded in a two-layer neural network structure known as a Bidirectional Associative Memory (BAM), for efficient manipulation of the concepts in the lattice representation. This avoids implementation drawbacks faced by other FCA-based approaches. Retrieval of a document for an information need is based on concept matching between concept lattice representations of a document and a query. The learning strategy works by making the similarity of relevant documents stronger and that of non-relevant documents weaker for each query, depending on the relevance judgements of the users on retrieved documents. Our approach is radically different to existing FCA-based approaches in the following respects: concept formulation; weight assignment to object-attribute pairs; the representation of each document in a separate concept lattice; and encoding concept lattices in BAM structures. Furthermore, in contrast to the traditional relevance feedback mechanism, our learning strategy makes use of relevance feedback information to enhance document representations, thus making the document representations dynamic and adaptive to the user interactions. The results obtained on the CISI, CACM and ASLIB Cranfield collections are presented and compared with published results. In particular, the performance of the system is shown to improve significantly as the system learns from experience.
The School of Computing, University of Plymouth, UK
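The FCA machinery this abstract builds on can be sketched concretely. For a toy document-term context (invented for illustration; the thesis's actual representation units are richer than bare terms), every formal concept is a pair (extent, intent) closed under the two derivation operators:

```python
from itertools import combinations

# Toy formal context: objects are documents, attributes are index terms.
CONTEXT = {
    "doc1": {"retrieval", "learning"},
    "doc2": {"retrieval", "lattice"},
    "doc3": {"lattice", "learning"},
}

def extent(attrs):
    """Objects possessing all the given attributes (derivation operator ')."""
    return {o for o, a in CONTEXT.items() if attrs <= a}

def intent(objs):
    """Attributes shared by all the given objects (the other ' operator)."""
    attrs = set.union(*CONTEXT.values())
    for o in objs:
        attrs &= CONTEXT[o]
    return attrs

def formal_concepts():
    """Enumerate all formal concepts by closing every attribute subset:
    (extent(B), intent(extent(B))) is a concept for each B."""
    all_attrs = sorted(set.union(*CONTEXT.values()))
    concepts = set()
    for r in range(len(all_attrs) + 1):
        for combo in combinations(all_attrs, r):
            e = extent(set(combo))
            concepts.add((frozenset(e), frozenset(intent(e))))
    return concepts

concepts = formal_concepts()
```

Ordering these concepts by extent inclusion yields the concept lattice; concept matching between a document lattice and a query lattice, as described above, then reduces to comparing such (extent, intent) pairs rather than raw keyword sets. (Brute-force subset enumeration is exponential; practical FCA systems use algorithms such as NextClosure instead.)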
On the Formal Flexibility of Syntactic Categories
This dissertation explores the formal flexibility of syntactic categories. The main proposal is that Universal Grammar (UG) only provides templatic guidance for syntactic category formation and organization but leaves many other issues open, including issues internal to a single category and issues at the intercategorial, system level: these points that UG "does not care about" turn out to enrich the categorial ontology of human language in important ways.
The dissertation consists of seven chapters. After a general introduction in Chapter 1, I lay out some foundational issues regarding features and categories in Chapter 2 and delineate a featural metalanguage comprising four components: specification, valuation, typing, and granularity. Based on that I put forward a templatic definition for syntactic categories, which unifies the combinatorial and taxonomic perspectives under the notion mergeme. Then, a detailed overview of the "categorial universe" I work with is presented, which shows that the syntactic category system (SCS) is an intricate web structured by five layers of abstraction divided into three broad levels of concern: the individual level (layers 1–2), the global level (layers 3–4), and the supraglobal level (layer 5). In the subsequent chapters I explore the template-flexibility pairs at each abstraction layer, with Chapters 3–4 focusing on the first layer, Chapter 5 on the second layer, and Chapter 6 on the third and fourth layers; the fifth layer is not in the scope of this dissertation.
Chapter 3 examines a special type of category defined by an underspecified mergeme, the defective category, which behaves like a "chameleon" in that it gets assimilated into whatever nondefective category it merges with. This characteristic makes it potentially useful in analyzing certain adjunction structures, and I explore this potential by two case studies, one focusing on modifier-head compounds and the other on sentence-final particles. Chapter 4 examines another special type of category defined by the absence of a mergeme, the Root category. Deductive reasoning leads me to propose a generalized root syntax, according to which roots are not confined to lexical categorial environments but may legally merge with and hence "support" any non-Root category. I demonstrate the empirical consequences of this theory by a comprehensive study of the half-lexical–half-functional vocabulary items in Chinese.
Chapter 5 ascends to the second abstraction layer and raises the question of whether the categorial sequences (or projection hierarchies) in human language are necessarily totally ordered, as certain analytical devices (e.g., "flavored" categories) can only be theoretically maintained if we also allow categorial sequences to be partially ordered. After a diachronic study of the flavored verbalizer (stative) in Chinese resultative compounds, I conclude that while "flavoring" is indeed a possible type of flexibility in the SCS, it is the deviation rather than the norm due to non-UG or "third" factors and hence should be cautiously used in syntactic analyses.
Chapter 6 ascends even higher on the ladder of abstraction and examines the global interconnection in the SCS ontology with the aid of mathematical Category theory. I formalize the functional parallelism across major parts of speech and the inheritance-based relations across granularity levels as Category-theoretic structures, which reveal further and more abstract templates and flexibility types in the SCS. A crucial mathematical concept in the formalization is epi-Adjunction. Finally, in Chapter 7 I summarize the main results of this dissertation and briefly discuss some potential directions of future research.
My PhD is funded by Cambridge Trust and China Scholarship Council. I have also received travel grants and financial aid from Gonville and Caius College and the Faculty of Modern and Medieval Languages.