19 research outputs found

    A Functional Implementation of a Multiway Dataflow Constraint System Library

    Master's thesis in Software Development, in collaboration with HVL (PROG399, MAMN-PRO)

    Scene modeling based on constraint system decomposition techniques


    Reusable User Interface Behaviors

    User interfaces often account for a majority of application code and defects. High-quality user interfaces come with equally high development costs – lower than the cost of multitudes of users coping with low-quality user interfaces, but higher than the mild frustration experienced by any individual user. Thus, the economics of software lead to a situation where barely passable user interfaces abound. That most user interface code comes from bespoke attempts to implement vague human interface guidelines is a leading cause of high cost and low quality. This thesis introduces a novel formalism for user interfaces based on ordered constraint systems. Using explicit models for the values and relationships in a user interface, several reusable algorithms are defined for rich user interface behaviors, including value propagation, dataflow visualization, pinning, scripting, command activation, widget enablement, and context-sensitive help. Developers can leverage provably correct implementations of such desirable features for free, raising the quality of user interfaces while lowering their production cost. Some of these behaviors have been implemented in a JavaScript framework, HotDrink, for web user interfaces, and a C++ framework, Adam, for desktop user interfaces. Experiments have demonstrated higher developer productivity, fewer lines of code, fewer defects, and fewer components when compared to conventional user interface frameworks.
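
    As a rough illustration of the ordered-constraint-system idea described above, here is a minimal TypeScript sketch (the class and method names are invented for this example; this is not HotDrink's or Adam's API): a single multiway constraint between Celsius and Fahrenheit with two methods, where the system selects the method whose outputs were edited least recently, so the user's most recent input is never overwritten.

    // Minimal sketch (invented API, not HotDrink's): one multiway constraint
    // with two methods; the most recently edited variable keeps its value and
    // the other is recomputed (value propagation).
    type Method = { outputs: string[]; fn: (v: Record<string, number>) => Record<string, number> };

    class TinyConstraintSystem {
      private values: Record<string, number> = {};
      private editOrder: string[] = [];            // most recently edited last

      constructor(private methods: Method[]) {}

      set(name: string, value: number): void {
        this.values[name] = value;
        this.editOrder = this.editOrder.filter(n => n !== name).concat(name);
        this.propagate();
      }

      get(name: string): number { return this.values[name]; }

      // Pick the method whose outputs were edited least recently (or never),
      // so fresh user input is never overwritten -- the "ordered" part.
      private propagate(): void {
        const staleness = (m: Method) =>
          Math.min(...m.outputs.map(o => this.editOrder.indexOf(o)));
        const method = [...this.methods].sort((a, b) => staleness(a) - staleness(b))[0];
        Object.assign(this.values, method.fn(this.values));
      }
    }

    const temps = new TinyConstraintSystem([
      { outputs: ["f"], fn: v => ({ f: v.c * 9 / 5 + 32 }) },
      { outputs: ["c"], fn: v => ({ c: (v.f - 32) * 5 / 9 }) },
    ]);
    temps.set("c", 100);          // user edits Celsius -> Fahrenheit becomes 212
    console.log(temps.get("f"));  // 212
    temps.set("f", 32);           // user edits Fahrenheit -> Celsius becomes 0
    console.log(temps.get("c"));  // 0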

    Towards Low-cost Feature-rich Web User Interfaces

    Web-based user interfaces are used widely. They are replacing conventional desktop-based user interfaces in many domains and are emerging as front-ends for online businesses. The technologies for web user interfaces have advanced considerably to support high-quality user interfaces. However, the usability of web interfaces continues to be an issue. We still encounter web forms where basic interactive features are missing or work unexpectedly. User interfaces are a costly and error-prone area of software construction. This is particularly true for web user interfaces: they are typically implemented with fewer reusable components in programmers' toolboxes than conventional user interfaces built using user interface frameworks such as Windows Forms, Cocoa, and Qt. Consequently, web interface programmers tend to struggle with low productivity, or low quality and high defect rates. This thesis focuses on property models, a declarative approach to programming user interfaces. In this approach, common user interface behaviors are automatically derived from the specifications of the data manipulated by user interfaces. The approach aims to reuse user interface algorithms that are common across interfaces and to allow programmers to focus on application-specific concerns. This thesis work is part of the "hotdrink" project, a JavaScript implementation of the property model system, whose goal is to provide the benefits of property models for web interfaces. This thesis builds on previous work on property models and adds to it three reusable help and convenience features, which can be especially useful for web forms. In particular, this thesis describes the generic mechanisms of the following user interface features: (1) validating data coming from a user and presenting useful messages that help the user to fix errors, (2) controlling the flow of data through "pinning," and (3) canceling the user's previous actions through undoing. The main contributions of the thesis are the mechanisms and the software architecture that enable implementing these behaviors in a reusable manner. This thesis also presents several examples to illustrate the benefits of the proposed mechanisms.
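
    The sketch below illustrates, in simplified form, two of the behaviors listed above: pinning a value so propagation never overwrites it, and undo implemented as a stack of model snapshots. All names (FormState, edit, pin, undo) are hypothetical and are not part of hotdrink's actual API.

    // Hypothetical sketch: a tiny form model with the dependency total = price * quantity.
    type Model = { price: number; quantity: number; total: number };

    class FormState {
      private history: Model[] = [];                  // snapshots for undo
      private pinned = new Set<keyof Model>();

      constructor(private model: Model) {}

      pin(field: keyof Model): void { this.pinned.add(field); }

      edit(field: keyof Model, value: number): void {
        this.history.push({ ...this.model });         // remember state before the edit
        this.model[field] = value;
        // enforce total = price * quantity, but never overwrite a pinned field
        if (field !== "total" && !this.pinned.has("total")) {
          this.model.total = this.model.price * this.model.quantity;
        } else if (field !== "quantity" && !this.pinned.has("quantity")) {
          this.model.quantity = this.model.total / this.model.price;
        }
      }

      undo(): void {
        const prev = this.history.pop();
        if (prev) this.model = prev;
      }

      view(): Model { return { ...this.model }; }
    }

    const form = new FormState({ price: 2, quantity: 5, total: 10 });
    form.edit("price", 4);
    console.log(form.view());   // { price: 4, quantity: 5, total: 20 }
    form.pin("total");
    form.edit("price", 5);
    console.log(form.view());   // total stays 20; quantity is recomputed to 4
    form.undo();
    console.log(form.view());   // back to { price: 4, quantity: 5, total: 20 }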

    A domain-specific language for structure manipulation in constraint system-based GUIs

    A common frustration with programming Graphical User Interfaces (GUIs) is that features for manipulating structures, such as lists and trees, are limited, inconsistent, buggy, or even missing. Implementing complete and convenient sets of operations for inserting, removing, and reordering elements in such structures can be tedious and difficult: a structure that appears as one collection to the user can be implemented as several different data structures and a web of dependencies between them. Structural modifications require changes both to the GUI's model and view, and possibly extraneous bookkeeping operations, such as adding and removing event handlers. This paper introduces a DSL that helps programmers to implement a complete set of operations on structures displayed in GUIs. The programmer specifies structures and relations between elements in the structure. Concretely, the latter are definitions of methods for establishing and unestablishing relations. Operations that manipulate structures are specified as rules that control which relations should hold before and after a rule is applied. From these specifications, our tools generate an easy-to-use API for structure manipulation. We target constraint system-based Web GUIs: the DSL generates JavaScript and relies on dataflow constraint systems for expressing dependencies between elements in GUI structures. Our DSL gives tangible representations with well-defined operations for ad-hoc and incidental GUI structures.
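
    To make the rule-based idea more concrete, the following hypothetical TypeScript sketch (not the paper's DSL, nor its generated API) shows a "follows" relation with explicit establish/unestablish methods, and an insert operation expressed in terms of which relations hold before and after it runs.

    // Hypothetical sketch: a singly linked list whose "follows" relation is
    // established and unestablished explicitly by an operation.
    interface Item { id: string; next: Item | null }

    const follows = {
      establish: (a: Item, b: Item | null) => { a.next = b; },
      unestablish: (a: Item) => { a.next = null; },
    };

    // rule "insert x after a": before, follows(a, b) holds;
    // after, follows(a, x) and follows(x, b) hold.
    function insertAfter(a: Item, x: Item): void {
      const b = a.next;
      follows.unestablish(a);       // drop follows(a, b)
      follows.establish(a, x);      // establish follows(a, x)
      follows.establish(x, b);      // establish follows(x, b)
    }

    const head: Item = { id: "A", next: null };
    const tail: Item = { id: "C", next: null };
    follows.establish(head, tail);
    insertAfter(head, { id: "B", next: null });

    let cur: Item | null = head;
    const order: string[] = [];
    while (cur) { order.push(cur.id); cur = cur.next; }
    console.log(order.join(" -> "));   // A -> B -> C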

    Guaranteeing Responsiveness and Consistency in Dynamic, Asynchronous Graphical User Interfaces

    This dissertation proposes a programming model for Graphical User Interfaces (GUIs) that relieves the programmer of a difficult and error-prone task: orchestrating concurrent responses to events to ensure data dependencies are always enforced correctly. In this programming model, rather than defining program responses to events, the programmer defines the data dependencies that exist in the GUI and the methods by which those dependencies may be enforced – a run-time system uses this specification to generate responses to events. The approach gives the following guarantee: the same sequence of events produces the same results, regardless of the timing of those events. The dissertation demonstrates the benefits of the proposed programming model with implementations of several example user interfaces. At the core of this programming model is a data structure known as a property model. A property model composes responses to individual events into a single reactive program that runs asynchronously. The program's results are used to update the GUI. The program is constructed in a manner that respects all data dependencies, thereby guaranteeing that results are consistent regardless of the length of time taken by individual responses. The core reactive program may be extended with features that support additional functionality, such as access to prior variable values, optional data dependencies, and identifying unused variables. The dissertation formally defines the semantics of the construction and execution of this reactive program. The dissertation shows how property models may be defined as a composition of reusable components. This is essential for modeling GUIs whose structures change in response to user events by the addition or removal of components. Components can contain data and dependencies as well as templates that describe how dependencies arise from composition with other components. Furthermore, templates can be written for arrays of components to define dependencies that arise among them. One key task of the property model is planning: deciding by which methods dependencies will be enforced. The dissertation describes how a specialized planner can be constructed that is able to create a plan for a specific property model. This specialized planner is essentially a Deterministic Finite-state Automaton (DFA), and can be orders of magnitude faster than a general-purpose planner.
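
    A drastically simplified sketch of the timing guarantee described above (invented names, not the dissertation's actual system): each variable holds a promise of its value, and derived updates are chained onto the promises they depend on, so the value observed after a sequence of edits does not depend on how long each individual computation takes.

    // Hypothetical sketch: asynchronous dependency enforcement where the
    // observed result is determined by event order, not by computation time.
    class AsyncVar<T> {
      constructor(public value: Promise<T>) {}
      set(v: Promise<T>): void { this.value = v; }
    }

    const delay = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms));

    // two variables with a dependency: fahrenheit is derived from celsius
    const celsius = new AsyncVar<number>(Promise.resolve(0));
    const fahrenheit = new AsyncVar<number>(Promise.resolve(32));

    function editCelsius(c: number, computeMs: number): void {
      celsius.set(Promise.resolve(c));
      // the derived update is chained on the current celsius promise; a later
      // edit replaces the promise, so readers always observe the latest edit's
      // eventual result, no matter which computation finishes first
      fahrenheit.set(celsius.value.then(async v => {
        await delay(computeMs);              // simulate a slow asynchronous method
        return v * 9 / 5 + 32;
      }));
    }

    async function main() {
      editCelsius(100, 50);                  // slow response to an early event
      editCelsius(0, 5);                     // fast response to a later event
      console.log(await fahrenheit.value);   // 32 -- the later edit wins, regardless of timing
    }
    main();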

    Multi-Dimensional Joins

    We present three novel algorithms for performing multi-dimensional joins and an in-depth survey and analysis of a low-dimensional spatial join. The first algorithm, the Iterative Spatial Join, performs a spatial join on low-dimensional data and is based on a plane-sweep technique. As we show analytically and experimentally, the Iterative Spatial Join performs well when internal memory is limited, compared to competing methods. This suggests that the Iterative Spatial Join would be useful for very large data sets or in situations where internal memory is a shared and therefore limited resource, such as in today's database engines, which share internal memory amongst several queries. Furthermore, the performance of the Iterative Spatial Join is predictable, and it has no parameters that need to be tuned, unlike other algorithms. The second algorithm, the Quickjoin algorithm, performs a higher-dimensional similarity join in which pairs of objects that lie within a certain distance epsilon of each other are reported. The Quickjoin algorithm overcomes drawbacks of competing methods, such as requiring embedding methods on the data first or using multi-dimensional indices, which limit the ability to discriminate between objects in each dimension, thereby degrading performance. A formal analysis is provided of the Quickjoin method, and experiments show that the Quickjoin method significantly outperforms competing methods. The third algorithm adapts incremental join techniques to improve the speed of calculating the Hausdorff distance, which is used in applications such as image matching, image analysis, and surface approximations. The nearest neighbor incremental join technique for indices based on hierarchical containment uses a priority queue of index node pairs and bounds on the distance values between pairs, both of which need to be modified in order to calculate the Hausdorff distance. Results of experiments are described that confirm the performance improvement. Finally, a survey is provided which, instead of just summarizing the literature and presenting each technique in its entirety, describes distinct components of the different techniques and decomposes each technique into an overall framework for performing a spatial join.
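
    For intuition, the following sketch shows the plane-sweep idea underlying the Iterative Spatial Join in a deliberately reduced setting (one dimension, everything in memory, invented names; not the algorithm as published): both inputs are sorted by lower bound and swept left to right while an active list of the other set's intervals is maintained.

    // Simplified sketch of a plane-sweep overlap join on 1-D intervals.
    type Interval = { id: string; lo: number; hi: number };

    function sweepJoin(r: Interval[], s: Interval[]): [string, string][] {
      const out: [string, string][] = [];
      const R = [...r].sort((a, b) => a.lo - b.lo);   // sort both inputs by lower bound
      const S = [...s].sort((a, b) => a.lo - b.lo);
      let i = 0, j = 0;
      const activeR: Interval[] = [];                 // intervals that may still overlap
      const activeS: Interval[] = [];
      while (i < R.length || j < S.length) {
        // advance whichever side has the next smallest lower bound
        const takeR = j >= S.length || (i < R.length && R[i].lo <= S[j].lo);
        const x = takeR ? R[i++] : S[j++];
        const active = takeR ? activeS : activeR;
        for (let k = active.length - 1; k >= 0; k--) {
          if (active[k].hi < x.lo) {
            active.splice(k, 1);                      // expired: cannot overlap x or anything later
          } else {
            const pair: [string, string] = takeR ? [x.id, active[k].id] : [active[k].id, x.id];
            out.push(pair);                           // overlapping pair found
          }
        }
        (takeR ? activeR : activeS).push(x);
      }
      return out;
    }

    const roads  = [{ id: "r1", lo: 0, hi: 5 }, { id: "r2", lo: 8, hi: 12 }];
    const rivers = [{ id: "s1", lo: 3, hi: 9 }];
    console.log(sweepJoin(roads, rivers));            // [["r1", "s1"], ["r2", "s1"]]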

    Graph Processing in Main-Memory Column Stores

    Increasingly, novel and traditional business applications leverage the advantages of a graph data model, such as its schema flexibility and explicit representation of relationships between entities. As a consequence, companies are confronted with the challenge of storing, manipulating, and querying terabytes of graph data for enterprise-critical applications. Although these business applications operate on graph-structured data, they still require direct access to the relational data and typically rely on an RDBMS to keep a single source of truth and access. Existing solutions performing graph operations on business-critical data either use a combination of SQL and application logic or employ a graph data management system. For the first approach, relying solely on SQL results in poor execution performance caused by the functional mismatch between typical graph operations and the relational algebra. To make matters worse, graph algorithms expose a tremendous variety in structure and functionality caused by their often domain-specific implementations, and therefore can hardly be integrated into a database management system other than with custom coding. Since the majority of these enterprise-critical applications run exclusively on relational DBMSs, employing a specialized system for storing and processing graph data is typically not sensible. Besides the maintenance overhead of keeping the systems in sync, combining graph and relational operations is hard to realize, as it requires data transfer across system boundaries. Traversal operations are a basic ingredient of graph queries and algorithms and a fundamental component of any database management system that aims at storing, manipulating, and querying graph data. Well-established graph traversal algorithms are standalone implementations relying on optimized data structures. Integrating graph traversals as an operator into a database management system requires a tight integration into the existing database environment and the development of new components, such as a graph topology-aware optimizer and accompanying graph statistics, graph-specific secondary index structures to speed up traversals, and an accompanying graph query language. In this thesis, we introduce and describe GRAPHITE, a hybrid graph-relational data management system. GRAPHITE is a performance-oriented graph data management system built as part of an RDBMS, allowing processing of graph data to be combined seamlessly with relational data in the same system. We propose a columnar storage representation for graph data to leverage the already existing and mature data management and query processing infrastructure of relational database management systems. At the core of GRAPHITE we propose an execution engine based solely on set operations and graph traversals. Our design is driven by the observation that different graph topologies pose different algorithmic requirements on the design of a graph traversal operator. We derive two graph traversal implementations targeting the most common graph topologies and demonstrate how graph-specific statistics can be leveraged to select the optimal physical traversal operator. To accelerate graph traversals, we devise a set of graph-specific, updateable secondary index structures to improve the performance of vertex neighborhood expansion. Finally, we introduce a domain-specific language with an intuitive programming model to extend graph traversals with custom application logic at runtime. We use the LLVM compiler framework to generate efficient code that tightly integrates the user-specified application logic with our highly optimized built-in graph traversal operators. Our experimental evaluation shows that GRAPHITE can outperform native graph management systems by several orders of magnitude while providing all the features of an RDBMS, such as transaction support, backup and recovery, and security and user management, effectively providing a promising alternative to specialized graph management systems that lack many of these features and require expensive data replication and maintenance processes.
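
    As a rough illustration of the columnar representation and traversal-by-set-operations idea (a hypothetical sketch, not GRAPHITE's actual implementation): edges are stored as two parallel columns, and a bounded breadth-first traversal is expressed as repeated scans of the source column that collect the matching entries of the target column.

    // Hypothetical sketch: columnar edge table, BFS-style neighborhood expansion.
    // Edge k goes from src[k] to dst[k].
    const src = [0, 0, 1, 2, 3];
    const dst = [1, 2, 3, 3, 4];

    function traverse(start: number, hops: number): Set<number> {
      let frontier = new Set<number>([start]);
      const visited = new Set<number>([start]);
      for (let h = 0; h < hops && frontier.size > 0; h++) {
        const next = new Set<number>();
        // one neighborhood expansion: a full scan of the source column
        for (let k = 0; k < src.length; k++) {
          if (frontier.has(src[k]) && !visited.has(dst[k])) {
            next.add(dst[k]);
            visited.add(dst[k]);
          }
        }
        frontier = next;
      }
      return visited;
    }

    console.log([...traverse(0, 2)]);  // vertices reachable from 0 in <= 2 hops: [0, 1, 2, 3]

    A graph-specific secondary index, such as a precomputed per-vertex adjacency list, would replace the full column scan inside the loop; that is the kind of neighborhood-expansion speedup the abstract refers to.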

    Metadata-driven data integration

    Joint doctorate (cotutela) between Universitat Politècnica de Catalunya and Université Libre de Bruxelles, IT4BI-DC programme for the joint Ph.D. degree in computer science. Data has an undeniable impact on society. Storing and processing large amounts of available data is currently one of the key success factors for an organization. Nonetheless, we are recently witnessing a change represented by huge and heterogeneous amounts of data. Indeed, 90% of the data in the world has been generated in the last two years. Thus, in order to carry out these data exploitation tasks, organizations must first perform data integration, combining data from multiple sources to yield a unified view over them. Yet, the integration of massive and heterogeneous amounts of data requires revisiting the traditional integration assumptions to cope with the new requirements posed by such data-intensive settings. This PhD thesis aims to provide a novel framework for data integration in the context of data-intensive ecosystems, which entails dealing with vast amounts of heterogeneous data, from multiple sources and in their original format. To this end, we advocate for an integration process consisting of sequential activities governed by a semantic layer, implemented via a shared repository of metadata. From a stewardship perspective, these activities are the deployment of a data integration architecture, followed by the population of the shared metadata. From a data consumption perspective, the activities are virtual and materialized data integration, the former an exploratory task and the latter a consolidation one. Following the proposed framework, we focus on providing contributions to each of the four activities. We begin by proposing a software reference architecture for semantic-aware data-intensive systems. Such an architecture serves as a blueprint to deploy a stack of systems, its core being the metadata repository. Next, we propose a graph-based metadata model as a formalism for metadata management. We focus on supporting schema and data source evolution, a predominant factor in the heterogeneous sources at hand. For virtual integration, we propose query rewriting algorithms that rely on the previously proposed metadata model. We additionally consider semantic heterogeneities in the data sources, which the proposed algorithms are capable of automatically resolving. Finally, the thesis focuses on the materialized integration activity and, to this end, proposes a method to select intermediate results to materialize in data-intensive flows. Overall, the results of this thesis serve as a contribution to the field of data integration in contemporary data-intensive ecosystems.
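
    To illustrate the virtual-integration step in miniature (a hypothetical sketch; the thesis' metadata model and rewriting algorithms are far richer, and all names here are invented): a shared metadata layer maps global attributes to source-specific ones, and a query over the unified view is rewritten into one query per source.

    // Hypothetical sketch: attribute-level metadata drives query rewriting.
    type SourceMapping = { source: string; attrs: Record<string, string> };

    const metadata: SourceMapping[] = [
      { source: "crm",  attrs: { customer: "cust_name", city: "cust_city" } },
      { source: "shop", attrs: { customer: "buyer",     city: "ship_city" } },
    ];

    // rewrite "SELECT <globalAttrs> FROM unified_view" for every source that
    // can answer it, resolving attribute names through the shared metadata
    function rewrite(globalAttrs: string[]): string[] {
      return metadata
        .filter(m => globalAttrs.every(a => a in m.attrs))
        .map(m => `SELECT ${globalAttrs.map(a => m.attrs[a]).join(", ")} FROM ${m.source}`);
    }

    console.log(rewrite(["customer", "city"]));
    // [ "SELECT cust_name, cust_city FROM crm",
    //   "SELECT buyer, ship_city FROM shop" ]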