
    Mitigating Insider Threat in Relational Database Systems

    The dissertation addresses the factors and capabilities that enable insiders to violate system security. It focuses on modeling the accumulative knowledge that insiders gain through legal accesses, and on analyzing the dependencies and constraints among data items, which it represents using graph-based methods. The dissertation proposes new types of Knowledge Graphs (KGs) to represent insiders' knowledgebases. Furthermore, it introduces the Neural Dependency and Inference Graph (NDIG) and the Constraints and Dependencies Graph (CDG) to capture the dependencies and constraints among data items. The dissertation discusses in detail how insiders use knowledgebases, dependencies, and constraints to obtain unauthorized knowledge. It suggests new approaches to predict and prevent this threat. The proposed models use KGs, NDIG, and CDG to analyze the threat status, and leverage the effect of updates on the lifetimes of data items in insiders' knowledgebases to prevent the threat without affecting the availability of data items. Furthermore, the dissertation uses this idea to order the operations of concurrent tasks so that write operations that update risky data items in knowledgebases execute before those items can be used in unauthorized inferences. In addition to unauthorized knowledge, the dissertation discusses how insiders can make unauthorized modifications to sensitive data items. It introduces new approaches to building Modification Graphs that capture the authorized and unauthorized data items that insiders are able to update. To prevent this threat, the dissertation provides two methods: hiding sensitive dependencies and denying risky write requests. In addition to traditional RDBMS, the dissertation investigates insider threat in cloud relational database systems (cloud RDBMS). It discusses the vulnerabilities in the cloud computing structure that may enable insiders to launch attacks. To prevent such threats, the dissertation suggests three models and addresses the advantages and limitations of each one. To prove the correctness and effectiveness of the proposed approaches, the dissertation uses well-stated algorithms, theorems, proofs, and simulations. The simulations were executed under various parameters that represent the different conditions and environments in which tasks execute.
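
    A minimal sketch of the core idea, not the dissertation's actual model: dependencies among data items are represented as a directed graph of inference rules, and an insider's accumulated knowledgebase is checked against the closure of those rules. The item names and the dependency rule below are hypothetical.

```python
# Sketch (assumptions, not the dissertation's algorithms): dependencies among
# data items as inference rules, plus a check of whether an insider's
# accumulated knowledge lets them infer a sensitive item.

from collections import defaultdict

class DependencyGraph:
    def __init__(self):
        # rule: frozenset of source items -> set of items inferable from them
        self.rules = defaultdict(set)

    def add_dependency(self, sources, target):
        """Knowing all items in `sources` allows inferring `target`."""
        self.rules[frozenset(sources)].add(target)

    def inference_closure(self, knowledgebase):
        """All items derivable from the insider's accumulated knowledge."""
        known = set(knowledgebase)
        changed = True
        while changed:
            changed = False
            for sources, targets in self.rules.items():
                if sources <= known and not targets <= known:
                    known |= targets
                    changed = True
        return known

    def is_risky(self, knowledgebase, sensitive_items):
        """True if the knowledgebase enables inferring any sensitive item."""
        return bool(self.inference_closure(knowledgebase) & set(sensitive_items))


if __name__ == "__main__":
    g = DependencyGraph()
    # Hypothetical constraint: salary can be derived from budget and other costs.
    g.add_dependency({"dept_budget", "other_costs"}, "employee_salary")

    insider_kb = {"dept_budget", "other_costs"}   # gathered through legal accesses
    print(g.is_risky(insider_kb, {"employee_salary"}))  # True -> deny or update first
```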

    Access Control for Data Integration in Presence of Data Dependencies

    Defining access control policies in a data integration scenario is a challenging task. In such a scenario, each source typically specifies its local access control policy and cannot anticipate the data inferences that can arise when data is integrated at the mediator level. Inferences, e.g., using functional dependencies, can allow malicious users to obtain prohibited information at the mediator level by linking multiple queries, thus violating the local policies. In this paper, we propose a framework, i.e., a methodology and a set of algorithms, to prevent such violations. First, we use a graph-based approach to identify sets of queries, called violating transactions, and then we propose an approach to forbid the execution of those transactions by identifying additional access control rules that should be added to the mediator. We also state the complexity of the algorithms and discuss a set of experiments conducted on both real and synthetic datasets. The tests also confirm the worst-case complexity and upper bounds of the proposed algorithms.
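
    A simplified sketch of the idea, not the paper's algorithms: a set of queries is treated as violating if the attributes it exposes, closed under the sources' functional dependencies, cover an association that the local policies prohibit. The attribute names, functional dependencies, and policy below are hypothetical.

```python
# Sketch (hypothetical attributes and FDs): detect sets of mediator queries
# whose combined answers, closed under functional dependencies, reveal a
# prohibited association.

from itertools import combinations

def fd_closure(attrs, fds):
    """Closure of an attribute set under functional dependencies.

    fds: list of (lhs_set, rhs_set) pairs, e.g. ({"ssn"}, {"ward"}).
    """
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= closure and not rhs <= closure:
                closure |= rhs
                changed = True
    return closure

def violating_transactions(queries, fds, prohibited):
    """Return index tuples of query sets that together expose the prohibited association."""
    violations = []
    for r in range(1, len(queries) + 1):
        for combo in combinations(range(len(queries)), r):
            exposed = set().union(*(queries[i] for i in combo))
            if prohibited <= fd_closure(exposed, fds):
                violations.append(combo)
    return violations


if __name__ == "__main__":
    # Each query is shown as the set of attributes it returns at the mediator.
    queries = [{"name", "ssn"}, {"ssn", "ward"}, {"ward", "disease"}]
    fds = [({"ssn"}, {"ward"}), ({"ward"}, {"disease"})]
    prohibited = {"name", "disease"}          # local policy: never link these
    print(violating_transactions(queries, fds, prohibited))
```

    In the paper's framework, the output of such an analysis would drive the extra access control rules added at the mediator; the enumeration above is exponential and only illustrates the definition, not an efficient algorithm.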

    An Introduction to Database Systems

    This textbook introduces the basic concepts of database systems. These concepts are presented through numerous examples in modeling and design. The material in this book is geared to an introductory course in database systems offered at the junior or senior level of Computer Science. It could also be used in a first-year graduate course in database systems, focusing on a selection of the advanced topics in the later chapters.

    Enhanced Encryption and Fine-Grained Authorization for Database Systems

    The aim of this research is to enhance fine-grained authorization and encryption so that database systems are equipped with the controls necessary to help enterprises adhere to zero-trust security more effectively. For fine-grained authorization, this thesis has extended database systems with three new concepts: row permissions, column masks, and trusted contexts. Row permissions and column masks provide data-centric security, so the security policy cannot be bypassed as it can with database views, for example. They also coexist in harmony with the rest of the database core tenets, so enterprises are not forced to compromise either security or database functionality. Trusted contexts provide applications in multitiered environments with a secure and controlled manner of propagating user identities to the database, and therefore enable such applications to delegate the security policy to the database system, where it is enforced more effectively. Trusted contexts also protect against application bypass, so the application credentials cannot be abused to make database changes outside the scope of the application's business logic. For encryption, this thesis has introduced a holistic database encryption solution to address the limitations of traditional database encryption methods. It too coexists in harmony with the rest of the database core tenets, so that enterprises are not forced to choose between security and performance as with column encryption, for example. Lastly, row permissions, column masks, trusted contexts, and holistic database encryption have all been implemented in IBM DB2, where they are relied upon by thousands of organizations around the world to protect critical data and adhere to zero-trust security more effectively.
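
    A conceptual sketch of data-centric row permissions and column masks. In DB2 these are SQL-level constructs enforced by the engine; the Python below only illustrates the idea that the policy is attached to the data and applied on every access rather than to a bypassable view. Table, column, and role names are hypothetical.

```python
# Sketch (hypothetical roles/columns, not DB2's implementation): row
# permissions filter which rows a role may see; column masks transform
# sensitive values for roles not entitled to the clear value.

ROW_PERMISSIONS = {
    # role -> predicate deciding which rows the role may see
    "teller":  lambda row, user: row["branch"] == user["branch"],
    "auditor": lambda row, user: True,
}

COLUMN_MASKS = {
    # column -> (roles that see clear values, masking function)
    "ssn": ({"auditor"}, lambda v: "XXX-XX-" + v[-4:]),
}

def secured_select(table, user):
    """Every read path goes through this, so the policy cannot be bypassed."""
    allowed = ROW_PERMISSIONS.get(user["role"], lambda r, u: False)
    out = []
    for row in table:
        if not allowed(row, user):
            continue
        masked = dict(row)
        for col, (clear_roles, mask) in COLUMN_MASKS.items():
            if col in masked and user["role"] not in clear_roles:
                masked[col] = mask(masked[col])
        out.append(masked)
    return out


if __name__ == "__main__":
    accounts = [
        {"id": 1, "branch": "east", "ssn": "123-45-6789"},
        {"id": 2, "branch": "west", "ssn": "987-65-4321"},
    ]
    teller = {"role": "teller", "branch": "east"}
    print(secured_select(accounts, teller))
    # [{'id': 1, 'branch': 'east', 'ssn': 'XXX-XX-6789'}]
```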

    Colloquium capita datastructuren


    Evaluation of Functional Data Models for Database Design and Use

    The problems of design, operation, and maintenance of databases using the three most popular database management systems (Hierarchical, CODASYL/DBTG, and Relational) are well known. Users wishing to use these systems have to make conscious and often complex mappings between real-world structures and the data structuring options (data models) provided by these systems. In addition, much of the semantics associated with the data either does not get expressed at all or gets embedded procedurally in application programs in an ad-hoc way. In recent years, a large number of data models (called semantic data models) have been proposed with the aim of simplifying database design and use. However, the lack of usable implementations of these proposals has so far inhibited the widespread use of these concepts. The present work reports on an effort to evaluate and extend one such semantic model by means of an implementation. It is based on the functional data model proposed earlier by Shipman [SHIP81]. We call it the 'Extended Functional Data Model' (EFDM). EFDM, like Shipman's proposals, is a marriage of three of the advanced modelling concepts found in both database and artificial intelligence research: the concept of an entity to represent an object in the real world, the concept of a type hierarchy among entity types, and the concept of derived data for modelling procedural knowledge. The functional notation of the model lends itself to high-level data manipulation languages. Data selection in these languages is expressed simply as function application. Further, the functional approach makes it possible to incorporate general-purpose computation facilities in the data languages without having to embed them in procedural languages. In addition to providing the usual database facilities, the implementation also provides a mechanism to specify multiple user views of the database.
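
    A small sketch of the functional-model flavour described above: entities, a type hierarchy, stored functions, and a derived function, with data selection expressed as function application. The names are illustrative and are not EFDM's actual syntax.

```python
# Sketch (illustrative names, not EFDM syntax): entities as objects, a type
# hierarchy, stored functions over entities, and a derived function built
# by composing them. Selection is just function application.

class Entity:
    """An object in the real world, identified by the system rather than by its values."""
    def __init__(self, **props):
        self._props = props

class Person(Entity):
    pass

class Employee(Person):          # type hierarchy: Employee is-a Person
    pass

# Stored (base) functions map entities to values or to other entities.
def name(p: Person) -> str:
    return p._props["name"]

def salary(e: Employee) -> float:
    return e._props["salary"]

def manager(e: Employee) -> "Employee":
    return e._props["manager"]

# Derived function: procedural knowledge expressed over the base functions.
def manager_name(e: Employee) -> str:
    return name(manager(e))


if __name__ == "__main__":
    boss = Employee(name="Ada", salary=90000, manager=None)
    emp = Employee(name="Brian", salary=60000, manager=boss)
    # "Data selection" is function application and composition:
    print(manager_name(emp))                                     # Ada
    print([name(x) for x in (boss, emp) if salary(x) > 70000])   # ['Ada']
```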

    Attribute-Level Versioning: A Relational Mechanism for Version Storage and Retrieval

    Data analysts today have at their disposal a seemingly endless supply of data repositories and, hence, datasets from which to draw. New datasets become available daily, making the choice of which dataset to use difficult. Furthermore, traditional data analysis has been conducted using structured data repositories such as relational database management systems (RDBMS). These systems, by their nature and design, prohibit duplication for indexed collections, forcing analysts to choose one value for each of the available attributes for an item in the collection. Often analysts discover two or more datasets with information about the same entity. When combining this data and transforming it into a form that is usable in an RDBMS, analysts are forced to deconflict the collisions and choose a single value for each duplicated attribute containing differing values. This deconfliction is the source of a considerable amount of guesswork and speculation on the part of the analyst in the absence of professional intuition. One must consider what is lost by discarding those alternative values. Are there relationships between the conflicting datasets that have meaning? Is each dataset presenting a different and valid view of the entity, or are the alternate values erroneous? If so, which values are erroneous? Is there historical significance to the variances? The analysis of modern datasets requires specialized algorithms and storage and retrieval mechanisms to identify, deconflict, and assimilate variances of attributes for each entity encountered. These variances, or versions of attribute values, contribute meaning to the evolution and analysis of the entity and its relationship to other entities. A new, distinct storage and retrieval mechanism will enable analysts to efficiently store, analyze, and retrieve attribute versions without unnecessary complexity or additional alterations of the original or derived dataset schemas. This paper presents technologies and innovations that assist data analysts in discovering meaning within their data and preserving all of the original data for every entity in the RDBMS.
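
    A minimal sketch of attribute-level versioning as described above, not the paper's mechanism: every conflicting value is kept as a version tagged with its source, and retrieval can return either the full history or one resolved value. Field names and the resolution policy are hypothetical.

```python
# Sketch (hypothetical fields/policy): keep every source's value for an
# attribute as a version instead of deconflicting at load time.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class AttributeVersion:
    value: object
    source: str          # which dataset supplied the value
    observed: str        # e.g. an ISO date; any ordering key works

class VersionedEntityStore:
    def __init__(self):
        # entity_id -> attribute -> list of versions (no deconfliction on load)
        self._data = defaultdict(lambda: defaultdict(list))

    def put(self, entity_id, attribute, value, source, observed):
        self._data[entity_id][attribute].append(
            AttributeVersion(value, source, observed))

    def versions(self, entity_id, attribute):
        """All recorded values, preserving every source's view of the entity."""
        return list(self._data[entity_id][attribute])

    def current(self, entity_id, attribute):
        """One resolution policy among many: the most recently observed value."""
        vs = self._data[entity_id][attribute]
        return max(vs, key=lambda v: v.observed).value if vs else None


if __name__ == "__main__":
    store = VersionedEntityStore()
    store.put("person:42", "address", "12 Oak St", "dataset_a", "2010-03-01")
    store.put("person:42", "address", "9 Elm Ave", "dataset_b", "2014-07-15")
    print(store.versions("person:42", "address"))   # both values preserved
    print(store.current("person:42", "address"))    # '9 Elm Ave'
```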