99 research outputs found

    Some remarks on functional dependencies in relational datamodels

    The concept of a minimal family is introduced. We prove that this family and the family of functional dependencies (FDs) determine each other uniquely, and we present a characterization of this family. We show that there is no polynomial-time algorithm that finds a minimal family from a given relation scheme, and we prove that the time complexity of finding a minimal family from a given relation is exponential in the number of attributes.
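    For readers unfamiliar with FDs, the basic notion the paper builds on can be sketched with the standard attribute-closure computation (this is not the minimal-family construction studied in the paper; names and the example FDs are illustrative):

```python
def closure(attrs, fds):
    """Closure of an attribute set under FDs; each FD is a
    (lhs, rhs) pair of frozensets of attribute names."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return frozenset(result)

# FDs over attributes {A, B, C}: A -> B and B -> C.
fds = [(frozenset("A"), frozenset("B")),
       (frozenset("B"), frozenset("C"))]
assert closure({"A"}, fds) == frozenset("ABC")  # A determines all attributes
assert closure({"B"}, fds) == frozenset("BC")
```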

    Acta Cybernetica : Volume 11. Number 4.

    Acta Cybernetica : Volume 16. Number 3.

    On the Data Complexity of Consistent Query Answering over Graph Databases

    Areas in which graph databases are applied, such as the semantic web, social networks, and scientific databases, are prone to inconsistency, mainly due to interoperability issues. This raises the need to understand query answering over inconsistent graph databases in a framework that is simple yet general enough to accommodate many of its applications. We follow the well-known approach of consistent query answering (CQA) and study the data complexity of CQA over graph databases for regular path queries (RPQs) and regular path constraints (RPCs), which are frequently used. We concentrate on subset, superset, and symmetric-difference repairs. Without further restrictions, CQA is undecidable for the semantics based on superset and symmetric-difference repairs, and Pi_2^P-complete for subset repairs. However, we provide several tractable restrictions on both RPCs and the structure of graph databases that lead to decidability, and even tractability, of CQA. We also compare our results with those obtained for CQA in the context of relational databases.
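    The subset-repair semantics in the abstract can be illustrated by brute force on a toy instance: subset repairs are the maximal consistent subsets of the database, and a Boolean query is certainly true only if it holds in every repair. The sketch below uses an invented toy constraint standing in for a regular path constraint, and its exponential enumeration is illustrative only:

```python
from itertools import combinations

# Toy graph database: edges are (source, label, target) triples.
db = {(1, "a", 2), (1, "b", 3)}

def consistent(edges):
    # Stand-in integrity constraint (an assumption, playing the role
    # of an RPC): every node has at most one outgoing edge.
    out = {}
    for (u, _lbl, _v) in edges:
        out[u] = out.get(u, 0) + 1
    return all(c <= 1 for c in out.values())

def subset_repairs(edges):
    # Subset repairs: maximal consistent subsets of the edge set.
    repairs = []
    for size in range(len(edges), -1, -1):
        for cand in combinations(edges, size):
            s = set(cand)
            if consistent(s) and not any(s < r for r in repairs):
                repairs.append(s)
    return repairs

def query(edges):
    # Toy Boolean query: is there an a-labelled edge leaving node 1?
    return any(u == 1 and lbl == "a" for (u, lbl, _v) in edges)

repairs = subset_repairs(db)               # two repairs: keep either edge
certain = all(query(r) for r in repairs)   # certain answer: false here
```

Here the database violates the toy constraint, each repair keeps exactly one of the two edges, and the query is not certainly true because one repair lacks the a-labelled edge.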

    Acta Cybernetica : Volume 20. Number 2.

    Proceedings of the first international VLDB workshop on Management of Uncertain Data

    Finding blind spots: Investigating identity data matching in transnational commercialized security infrastructures and beyond

    This dissertation analyzes the interconnections between data matching technologies, identification practices, and transnational commercialized security infrastructures, particularly in relation to migration management and border control. The research was motivated by curiosity about the intersection between identity data matching and the challenges authorities encounter when identifying individuals, especially the “blind spots” caused by incomplete data, aliases, and uncertainties. The dissertation addresses the following main research question: “How are practices and technologies for matching identity data in migration management and border control shaping and shaped by transnational commercialized security infrastructures?”

    The dissertation begins by presenting an overview of the literature on the connections between data matching technology, which is used across various sectors, and its interrelationships with the internationalization, commercialization, securitization, and infrastructuring of identification infrastructure. This overview highlights a noticeable gap in the understanding of how data matching influences the meaning of the interconnected data and shapes relationships between the organizations that use it. To address this gap, Chapter 3 proposes a methodological framework for using data matching as both a research topic and a resource for answering sub-questions related to specific aspects of data matching.

    Chapter 4 emphasizes the significance of data models in information systems for categorizing individuals and establishing connections between different data models for accurate matching. The analysis of this aspect of data matching is made possible by the “Ontology Explorer”, a novel method for examining the knowledge and assumptions embedded within data models. Applying this method to national and transnational data infrastructures for population management reveals authorities’ imaginaries of people-on-the-move. In this way, the method demonstrates the importance of data categories in data models: they are crucial for data matching while also offering valuable insights into how authorities enact people in different ways.

    The dissertation then investigates how identity data matching is employed to re-identify applicants within a government migration and asylum agency in the Netherlands. Chapter 5 introduces the concept of re-identification: the ongoing use and integration of data from various sources to establish whether multiple sets of identity data pertain to a single individual. Drawing on interviews with the agency’s personnel, the chapter investigates the integration of data matching tools for re-identification and shows that striving to minimize data friction in re-identification through data matching can have unintended consequences and impose additional burdens on the agency’s personnel.

    Lastly, the dissertation examines the evolution of a commercial data matching system employed for identification and security, adopting a sociotechnical approach. Chapter 6 introduces heuristics that are used to identify moments that highlight the design contingencies of the data matching system. Through an examination of fieldwork data collected from the company that created the system, the chapter traces the reciprocal influences between the system’s design and the actors and entities involved: under these influences, the system changed in adaptive and contingent ways from a generic data matching system into a specialized tool for identification and security. In a broader sense, the chapter draws attention to the interrelationships among software suppliers, integrators, and customers, and to the circulation and use of knowledge and technology for matching identity data across organizations.

    Acta Cybernetica : Volume 11. Number 1-2.

    Acta Cybernetica : Volume 17. Number 1.

    Integrated data model and DSL modifications

    Companies are increasingly dependent on distributed web-based software systems to support their businesses, which increases the need to maintain and extend software systems with up-to-date new features. The development process to introduce new features therefore needs to be swift and agile, and the supporting software-evolution process needs to be safe, fast, and efficient. However, this is usually a difficult and challenging task for a developer, due to the lack of support offered by programming environments, frameworks, and database management systems. Changes needed at the level of the code, the database model, and the actual data contained in the database must be planned and developed together and executed in a synchronized way. Even under a careful development discipline, the impact of changing an application data model is hard to predict. The lifetime of an application comprises changes and updates designed and tested using data that is usually far from the real production data. Coding DDL and DML SQL scripts to update the database schema and data is thus the usual (and hard) approach taken by developers. Such a manual approach is error-prone and disconnected from the real data in production, because developers may not know the exact impact of their changes. This work aims to improve the maintenance process in the context of the Agile Platform by OutSystems. Our goal is to design and implement new data-model evolution features that ensure safe support for change and a sound migration process. Our solution includes impact-analysis mechanisms targeting the data model and the data itself, providing developers with a safe, simple, and guided evolution process.
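    The kind of impact analysis described here can be sketched in miniature: compare an old and a new data model, emit a migration plan, and warn where existing data would be affected. Everything below (the function name, the dict-based schema encoding, the sample schemas) is a hypothetical illustration, not the OutSystems implementation:

```python
def plan_migration(old_schema, new_schema, rows):
    """Compare two data models (column name -> SQL type) against sample
    rows; return DDL-like migration steps plus data-impact warnings."""
    plan, warnings = [], []
    for col in old_schema:
        if col not in new_schema:
            plan.append(f"DROP COLUMN {col}")
            if any(r.get(col) is not None for r in rows):
                warnings.append(f"{col}: non-null data would be lost")
    for col, typ in new_schema.items():
        if col not in old_schema:
            plan.append(f"ADD COLUMN {col} {typ}")
        elif typ != old_schema[col]:
            plan.append(f"ALTER COLUMN {col} TYPE {typ}")
            warnings.append(
                f"{col}: type change {old_schema[col]} -> {typ} "
                "may truncate values")
    return plan, warnings

old = {"name": "varchar(100)", "age": "int", "notes": "text"}
new = {"name": "varchar(50)", "age": "int"}
rows = [{"name": "Ada", "age": 36, "notes": "pioneer"}]
plan, warnings = plan_migration(old, new, rows)
```

On this sample the plan drops `notes` and narrows `name`, and both steps are flagged because the sample row shows they would affect real data, which is the disconnect between schema scripts and production data that the abstract describes.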