
    Intensional Updates

    Computer vision algorithms on reconfigurable logic arrays

    ACCESS TO SPECIFIC DECLARATIVE KNOWLEDGE BY EXPERT SYSTEMS: THE IMPACT OF LOGIC PROGRAMMING

    As part of the operation of an Expert System, a deductive component accesses a database of facts to help simulate the behavior of a human expert in a particular problem domain. The nature of this access is examined, and four access strategies are identified. Features of each of these strategies are addressed within the framework of a logic-based deductive component and the relational model of data. (Information Systems Working Papers Series)
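
    The access pattern the paper studies - a deductive component deriving new facts from stored relations - can be illustrated with a small sketch. The relation and rule below are invented for illustration, and naive bottom-up evaluation is only one of several possible access strategies:

        # A minimal, assumed example: relations as in-memory sets of tuples,
        # and the recursive rule  ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z)
        # evaluated bottom-up until a fixpoint is reached.
        parent = {("ann", "bob"), ("bob", "carl"), ("carl", "dana")}

        def ancestors(parent_facts):
            anc = set(parent_facts)   # base case: parent(X,Y) => ancestor(X,Y)
            while True:
                # join parent with the ancestor pairs derived so far
                new = {(x, z) for (x, y) in parent_facts
                              for (y2, z) in anc if y == y2}
                if new <= anc:        # fixpoint: nothing new was derived
                    return anc
                anc |= new

        print(sorted(ancestors(parent)))   # includes the derived ("ann", "dana")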

    Programming Languages and Systems

    This open access book constitutes the proceedings of the 30th European Symposium on Programming, ESOP 2021, which was held from March 27 to April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg and changed to an online format due to the COVID-19 pandemic. The 24 papers included in this volume were carefully reviewed and selected from 79 submissions. They deal with fundamental issues in the specification, design, analysis, and implementation of programming languages and systems.

    Compilation Techniques for Incremental Collection Processing

    Many map-reduce frameworks as well as NoSQL systems rely on collection programming as their interface of choice due to its rich semantics along with an easily parallelizable set of primitives. Unfortunately, the potential of collection programming is not entirely fulfilled by current systems, as they lack efficient incremental view maintenance (IVM) techniques for queries producing large nested results. This is a consequence of the fact that the nesting of collections does not enjoy the same algebraic properties that underpin the optimization potential of typical collection processing constructs. We propose the first solution for the efficient incrementalization of collection programming in terms of its core constructs, as captured by the positive nested relational calculus (NRC+) on bags (with integer multiplicities). We take an approach based on delta query derivation, whose goal is to generate delta queries which, given a small change in the input, can update the materialized view more efficiently than via recomputation. More precisely, we model the cost of NRC+ operators and classify queries as efficiently incrementalizable if their delta has a strictly lower cost than full re-evaluation. Then, we identify IncNRC+, a large fragment of NRC+ that is efficiently incrementalizable, and we provide a semantics-preserving translation that takes any NRC+ query to a collection of IncNRC+ queries. Furthermore, we prove that incremental maintenance for NRC+ is within the complexity class NC^0, and we showcase how recursive IVM, a technique that has provided significant speedups over traditional IVM in the case of flat queries, can also be applied to IncNRC+. Existing systems are also limited with respect to the size of inner collections that they can effectively handle before running into severe performance bottlenecks. In particular, in the face of nested collections with skewed cardinalities, developers typically have to undergo a painful process of manual query rewrites in order to ensure that the largest inner collections in their workloads are not impacted by these limitations. To address these issues we developed SLeNDer, a compilation framework that, given a nested query, generates a set of semantically equivalent (partially) shredded queries that can be efficiently evaluated and incrementalized using state-of-the-art techniques for handling skew and applying delta changes, respectively. The derived queries expose nested collections to the same opportunities for distributing their processing and incrementally updating their contents as those enjoyed by top-level collections, leading on our benchmark to up to 16.8x and 21.9x speedups in terms of offline and online processing, respectively. In order to enable efficient IVM for the increasingly common case of collection programming with functional values, as in Links, we also discuss the efficient incrementalization of simply-typed lambda calculi, under the constraint that their primitives are themselves efficiently incrementalizable.
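
    As a rough illustration of the delta-query idea (a sketch under assumptions: the view, data and names are invented, not the paper's IncNRC+ machinery), the following maintains a group-by-count view over a bag with integer multiplicities by applying a delta instead of recomputing:

        from collections import Counter

        def eval_view(bag):
            # View: multiplicity of each key, a group-by count over the bag.
            view = Counter()
            for (key, _payload), mult in bag.items():
                view[key] += mult
            return view

        def delta_view(delta_bag):
            # This view is linear in its input, so its delta is the view
            # itself evaluated on the change (negative counts = deletions).
            return eval_view(delta_bag)

        base = Counter({("a", 1): 2, ("b", 2): 1})
        view = eval_view(base)

        change = Counter({("a", 3): 1, ("b", 2): -1})
        view.update(delta_view(change))            # incremental maintenance

        # '+' on Counters (binary and unary) drops non-positive counts.
        assert +view == eval_view(base + change)   # matches recomputation

    For non-linear queries the delta is only worthwhile under the cost conditions the abstract describes, i.e. when it is strictly cheaper than full re-evaluation.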

    Design and implementation of a graph-grammar-based language for functional-structural modelling of plants

    Increasing biological knowledge requires more and more elaborate methods to translate the knowledge into executable model descriptions, and increasing computational power makes it possible to actually execute these descriptions. Such a simulation helps to validate, extend and question the knowledge. For plant modelling, the well-established formal description language of Lindenmayer systems reaches its limits as a method to concisely represent current knowledge and to conveniently assist in current research. On the one hand, it is well-suited to represent structural and geometric aspects of plant models - of which units a plant is composed, how these are connected, what their location in 3D space is - but on the other hand, its usage to describe functional aspects - what internal processes take place in the plant structure, how these interact with the structure - is not as convenient as desirable. This can be traced back to the underlying representation of structure as a linear chain of units, while the intrinsic nature of the structure is a tree or even a graph. Therefore, we propose to use graphs and graph grammars as a basis for plant modelling which combines structural and functional aspects. In the first part of this thesis, we develop the necessary theoretical framework. Starting with a presentation of the state of the art concerning Lindenmayer systems and graph grammars, we develop the formalism of relational growth grammars as a variant of graph grammars. We show that this formalism has a natural embedding of Lindenmayer systems which keeps all relevant properties, but represents branched structures directly as axial trees and not as linear chains with indirect encoding of branches. In the second part, we develop the main practical result, the XL programming language, an extension of the Java programming language by very general rule-based features. Short examples illustrate the application of the new language features. We describe the built-in pattern matching algorithm of the implemented run-time system for the XL programming language, and we sketch a possible implementation of an XL compiler. The third part is an application of relational growth grammars and the XL programming language. We show how the general XL interfaces can be customized for relational growth grammars. On top of this customization, several examples from a variety of disciplines demonstrate the usefulness of the developed formalism and language to describe plant growth, especially functional-structural plant models, but also artificial life, architecture or interactive games. Some examples operate on custom graphs like XML DOM trees or scene graphs of commercial 3D modellers, while the majority uses the 3D modelling platform GroIMP, a software developed in conjunction with this thesis. The appendix gives an overview of the GroIMP software; the practical usage of its plug-in for relational growth grammars is also illustrated.
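
    XL itself is not shown in the abstract; as a loose sketch of the rule-based parallel rewriting it describes (node types, rule and graph encoding are invented here, and far simpler than relational growth grammars), one derivation step could look like:

        import itertools

        # Tree-shaped plant graph: node id -> type, node id -> successor ids.
        new_id = itertools.count(1)
        nodes = {0: "Apex"}
        children = {0: []}

        def grow():
            # Rule in the spirit of  Apex ==> Internode [Apex] [Apex],
            # applied to all matches simultaneously, as in an L-system step.
            for apex in [n for n, t in nodes.items() if t == "Apex"]:
                nodes[apex] = "Internode"
                for _ in range(2):
                    n = next(new_id)
                    nodes[n] = "Apex"
                    children[n] = []
                    children[apex].append(n)

        grow(); grow()
        print(sum(1 for t in nodes.values() if t == "Apex"))  # 4 apices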

    Database repairs with answer set programming

    Dissertation submitted for the degree of Master in Computer Engineering. Integrity constraints play an important part in database design. They are what allow databases to store accurate information, since they impose properties that must always hold. However, none of the existing Database Management Systems allows the specification of new integrity constraints if the information stored already violates these new integrity constraints. In this dissertation, we developed DRSys, an application that allows users to specify integrity constraints that they wish to enforce in the database. If the database becomes inconsistent with respect to such integrity constraints, DRSys returns to the user possible ways to restore consistency, by inserting or deleting tuples into/from the original database, creating a new consistent database: a database repair. Also, since we are dealing with databases, we want to change as little information as possible, so DRSys offers the user two distinct minimality criteria when repairing the database: minimality under set inclusion or minimality under cardinality of operations. We approached the database repairing problem by using the problem-solving capacity offered by Answer Set Programming (ASP), which benefits from the simple specification of problems and from the existence of solvers that handle those problems efficiently. DRSys is a database repair application that was built on top of the database management system PostgreSQL. Furthermore, we developed a graphical user interface to aid the user in the whole process of defining new integrity constraints and in the process of database repairing. We evaluate the performance and scalability of DRSys by presenting several tests in different situations, exploring particular features of the system as well.
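
    DRSys delegates the search for repairs to an ASP solver; as a toy sketch of the underlying notion only (the relation and the denial constraint are invented, and brute force stands in for ASP), cardinality-minimal repairs by deletion can be found as follows:

        from itertools import combinations

        # Denial constraint: an employee has at most one manager.
        manages = {("ann", "bob"), ("carl", "bob"), ("ann", "dana")}

        def consistent(db):
            employees = [e for (_m, e) in db]
            return all(employees.count(e) == 1 for e in set(employees))

        def minimal_repairs(db):
            # Try ever-larger deletion sets; the first size that restores
            # consistency yields exactly the cardinality-minimal repairs.
            for k in range(len(db) + 1):
                repairs = [db - set(c) for c in combinations(db, k)
                           if consistent(db - set(c))]
                if repairs:
                    return repairs

        for repair in minimal_repairs(manages):
            print(sorted(repair))   # two repairs, each deleting one manager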

    Information driven exploration in robotics

    Imagine an intelligent robot entering an unknown room. It starts interacting with its new surroundings to understand what properties the new objects have and how they interact with each other. Finally, it has gathered enough information to skillfully perform various tasks in the new environment. This is the vision behind our research towards intelligent robots. An important part of the described behavior is the ability to choose actions in order to learn new things. This ability we call exploration. It enables the robot to quickly learn about the properties of the objects. Surprisingly, autonomous exploration has mostly been neglected by robotics research so far, because many fundamental problems like motor control and perception were not yet satisfactorily solved. The developments of recent years have, however, overcome this hurdle. State-of-the-art methods now enable us to conduct research on exploration in robotics. On the other hand, the machine learning and statistics community has developed methods and the theoretical background to lead learning algorithms to the most promising data. Under the terms active learning and experimental design, many methods have been developed to improve the learning rate with less training data. In this thesis we combine results from both fields to develop a framework for exploration in robotics. We base our framework on the notions of information and information gain, developed in the field of information theory. Although we show that optimal exploration is a computationally hard problem, we develop efficient exploration strategies using information gain as the measure and Bayesian experimental design as the foundation. To test the explorative behavior generated by our strategies we introduce the Physical Exploration Challenge. It formalizes the desired behavior as exploration of external degrees of freedom. External degrees of freedom are those the robot cannot articulate directly, but only by interacting with the environment. We show how to model different exploration tasks over external degrees of freedom: exploring the meaning of geometric symbols by moving objects, exploring the existence of joints and their properties, and exploring how different joints in the environment are interdependent. Different robots demonstrate these exploration tasks in both simulated and real-world experiments.
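
    As a toy sketch of greedy, information-gain-driven action selection (the scenario and numbers are invented; the thesis develops far more general strategies based on Bayesian experimental design), suppose one probe fully reveals whether a joint is movable, so the expected information gain of probing equals the current Bernoulli entropy:

        import math

        def entropy(p):
            # Bernoulli entropy H(p) in bits.
            if p in (0.0, 1.0):
                return 0.0
            return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

        # Current belief that each joint is movable.
        beliefs = {"drawer": 0.5, "door": 0.9, "panel": 0.2}

        def next_probe(beliefs):
            # Greedy strategy: probe where the expected information gain,
            # i.e. the current uncertainty, is largest.
            return max(beliefs, key=lambda joint: entropy(beliefs[joint]))

        print(next_probe(beliefs))  # 'drawer': p = 0.5 is maximally uncertain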

    An illumination of the template enigma: software code generation with templates

    Creating software is a process of refining a concept to an implementation. This process consists of several stages represented by documents, models and plans at several levels of abstraction. Mostly, the refinement process requires creativity of the programmers, but sometimes the task is boring and repetitive. This repetitive work is an indication that the program is not written at the most suitable level of abstraction. The level of abstraction offered by the programming language in use might be too low to remove the recurring code. Code generators can be used to raise the level of abstraction of program specifications and to automate the repetitive work. This thesis focuses on code generators based on templates. Templates are one of the techniques to implement a code generator. Templates allow extension of the syntax of a programming language, enabling generative programming without modifying the underlying compiler. Four artifacts are involved in a template based generator: templates, input data, a template evaluator and output code. The templates we consider are a concrete (incomplete) representation of the output document, i.e. the object code, that contains holes, i.e. the meta code. These holes are filled by the template evaluator using information from the input data to obtain the output code. Templates are widely used to generate HTML code in web applications. They can be used for generating all kinds of text, like e-mails or (source) code. In this thesis we limit the scope to the generation of source code. The central research question is how the quality of template based code generators can be improved. Quality, in general, is a broad notion and our scope is limited to the technical quality of templates and generated code. We focused on improving the maintainability of template based code generators and the correctness of the generated code. This is facilitated by the three main contributions of this thesis. First, the maintainability of template based code generators is increased by specifying the following requirement for our metalanguage: it should not be rich enough to allow general programming in templates, while not being so restrictive that some code generators can no longer be expressed. We used the theory of formal languages to specify our metalanguage. Second, we ensure correctness of the templates and generated code. Third, the presented theory and techniques are validated by case studies. These case studies show application of templates in real world applications, increased maintainability and syntactical correctness of generated code. As we only consider generating programming languages, it is sufficient to support the generation of languages defined by context-free grammars. This assumption is used to derive a metalanguage that is rich enough to specify code generators able to instantiate all possible sentences of a context-free language. A specific case of a code generator, the unparser, is a program that can instantiate all sentences of a context-free language. We proved that an unparser can be implemented using a linear deterministic top-down tree-to-string transducer; we call this property unparser-completeness. Our metalanguage is based on a linear deterministic top-down tree-to-string transducer. Recall that the goal of specifying the requirements of the metalanguage is to increase the maintainability of template based code generators, without being too restrictive. To validate that our metalanguage is not too restrictive and leads to better maintainable templates, we compared it with four off-the-shelf text template systems by implementing an unparser in each. We observed that the industrial template evaluators provide a Turing complete metalanguage, but they do not contain a block scoping mechanism for the meta-variables. This results in undesired additional boilerplate meta code in their templates. The second contribution is guaranteeing the correctness of the generated code. Correctness of the generated code can be divided into two concerns: syntactical correctness and semantical correctness. We start with syntactical correctness of the generated code. The use of text templates implies that syntactical correctness of the generated code can only be detected at compilation time. This means that errors detected during the compilation are reported on the level of the generated code, and the developer is required to manually trace the errors back to their origin in the template or input data. We believe that programs manipulating source code should not treat the object code as text, so that errors can be detected as early as possible. We present an approach where the grammars of the object language and metalanguage can be combined in a modular way. Combining both grammars allows parsing both languages simultaneously, so syntax errors in both languages of the template are found while parsing it. However, parsing a template alone is not sufficient to ensure that the generated code will be free of syntax errors: the template evaluator must be equipped with a mechanism to guarantee that its output will be syntactically correct. In short, a parse tree is constructed during the parsing of the template. This tree contains subtrees for the object code and subtrees for the meta code. While evaluating the template, subtrees of the meta code are substituted by object code subtrees. The template evaluator checks whether the root nonterminal of the object code subtree is equal to the root nonterminal of the meta code subtree. When both are equal, it is allowed to substitute the meta code; when the root nonterminals are distinct, an accurate error message is generated. The template evaluator terminates when all meta code subtrees are substituted. The result is a parse tree of the object language and thus syntactically correct. We call this process syntax safe code generation. In order to validate that the presented techniques increase maintainability and ensure syntactical correctness, we implemented our ideas in a syntax safe template evaluator called Repleo. Repleo has been applied in four case studies. The first case is a real world situation, where it is required to generate a three tier web application from a data model. This case showed that multiple layers of an application defined in different programming languages can be generated from a single model. The second and third cases are used to show that our metalanguage results in a better maintainable code generator: our metalanguage forces the use of a two-layer code generator with separation of concerns between the two layers, where the original implementations are less modular. The last case study shows that ensuring syntactical correctness results in the prevention of cross-site scripting attacks in the dynamic generation of web pages. Recall that one of our goals was ensuring the correctness of the generated code. We also showed that it is possible to check static semantic properties of templates. Static semantic checks are defined for the metalanguage, for the object language, and for the situations where the object language depends on the metalanguage. We implemented a prototype of a static semantic checker for PicoJava templates using attribute grammars; the use of attribute grammars leads to re-use of the original PicoJava checker. Summarizing, in this thesis we have formulated the requirements for a metalanguage and discussed how to implement a syntax safe template evaluator. This results in better maintainable template based code generators and more reliable generated code.
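
    Repleo's internals are not reproduced in the abstract; a minimal sketch of the substitution check it describes (names and tree encoding invented here) is:

        from dataclasses import dataclass, field

        @dataclass
        class Node:
            nonterminal: str
            children: list = field(default_factory=list)  # Nodes or tokens

        @dataclass
        class Hole:
            nonterminal: str   # the syntactic category this hole expects
            name: str

        def fill(tree, bindings):
            if isinstance(tree, Hole):
                subtree = bindings[tree.name]
                # Syntax safety: substitute only when root nonterminals match.
                if subtree.nonterminal != tree.nonterminal:
                    raise TypeError(f"hole '{tree.name}' expects "
                                    f"{tree.nonterminal}, got {subtree.nonterminal}")
                return subtree
            if isinstance(tree, Node):
                return Node(tree.nonterminal,
                            [fill(c, bindings) for c in tree.children])
            return tree   # plain token

        template = Node("Stmt", ["return ", Hole("Expr", "value"), ";"])
        print(fill(template, {"value": Node("Expr", ["42"])}))

    By construction the result is a parse tree of the object language, which is the essence of the syntax safe code generation described above.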

    An Interoperable Clinical Cardiology Electronic Health Record System - a standards based approach for Clinical Practice and Research with Data Reuse

    Currently in hospitals, several information systems manage, very often autonomously, the patient's personal, clinical and diagnostic data. This results in a clinical information management system consisting of a myriad of independent subsystems which, although efficient for their specific purposes, make integration of the whole system very difficult and limit the use of clinical data, especially regarding the reuse of these data for research purposes. Mainly for these reasons, the management of the Genoese ASL3 decided to commission the University of Genoa to set up a medical record system that could be easily integrated with the rest of the information system already present, that offered solid interoperability features, and that could support the research activities of hospital health workers. My PhD work aimed to develop an electronic health record system for a cardiology ward, obtaining a prototype which is functional and usable in a hospital ward. The choice of cardiology was due to the wide availability of the cardiology department staff to support me in the development and test phases. The resulting medical record system has been designed "ab initio" to be fully integrated into the hospital information system and to exchange data with the regional health information infrastructure. In order to achieve interoperability, the system is based on the Health Level Seven (HL7) standards for exchanging information between medical information systems. These standards are widely deployed and allow for the exchange of information in several functional domains. Specific decision support sections for particular aspects of clinical practice were also included. The data collected by this system were the basis for examples of secondary use in the development of two models based on machine learning algorithms. The first model predicts mortality within 6 months of admission in patients with heart failure, and the second focuses on discriminating heart failure from chronic ischemic heart disease in the elderly population, which is the largest population segment served by the cardiology ward.
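
    The thesis models themselves are not reproduced here; as a schematic sketch of the secondary-use idea only (synthetic data and hypothetical features, not the actual cohort or model), a 6-month mortality classifier over records exported from the EHR could be prototyped as:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        # Hypothetical features, e.g. age, ejection fraction, creatinine.
        X = rng.normal(size=(200, 3))
        # Synthetic outcome: death within 6 months of admission (invented rule).
        y = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=200) > 0).astype(int)

        model = LogisticRegression().fit(X, y)
        print(model.predict_proba(X[:1]))  # [P(survival), P(death within 6 months)]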