128 research outputs found

    Ensuring compliance with data privacy and usage policies in online services

    Online services collect and process a variety of sensitive personal data that is subject to complex privacy and usage policies. Complying with these policies is critical, and often legally binding, for service providers, but it is challenging because applications are prone to many disclosure threats. We present two compliance systems, Qapla and Pacer, that ensure efficient policy compliance in the face of direct and side-channel disclosures, respectively. Qapla prevents direct disclosures in database-backed applications (e.g., personnel management systems), which are subject to complex access control, data linking, and aggregation policies. Conventional methods inline policy checks with application code. Qapla instead specifies policies directly on the database and enforces them in a database adapter, thus separating compliance from the application code. Pacer prevents network side-channel leaks in cloud applications. A tenant's secrets may leak via the shape of its network traffic, which can be observed at shared network links (e.g., network cards, switches). Pacer implements a cloaked tunnel abstraction, which hides secret-dependent variation in a tenant's traffic shape but allows variation based on non-secret information, enabling secure and efficient use of network resources in the cloud. Both systems require modest development effort and incur moderate performance overheads, demonstrating their usability.
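    The query-rewriting idea behind Qapla can be illustrated with a small sketch (the table names and policy syntax below are hypothetical; Qapla's real policy language is richer): policies live with the database, and an adapter appends each table's policy predicate to queries before they run, so the application code carries no compliance logic.

```python
# Illustrative sketch, not Qapla's actual code: policies are attached to
# tables as SQL predicates and enforced by rewriting queries in an adapter.

POLICIES = {
    # hypothetical table -> predicate, parameterized by the authenticated user
    "employees": "dept = (SELECT dept FROM employees WHERE id = {user_id})",
    "salaries":  "emp_id = {user_id}",
}

def rewrite_query(table, base_query, user_id):
    """Append the table's policy predicate to a SELECT before execution."""
    predicate = POLICIES.get(table)
    if predicate is None:
        # no policy means no access: deny by default
        raise PermissionError(f"no policy defined for table {table!r}")
    clause = predicate.format(user_id=user_id)
    joiner = " AND " if " where " in base_query.lower() else " WHERE "
    return base_query + joiner + clause

print(rewrite_query("salaries", "SELECT amount FROM salaries", 7))
```

    Because the predicate is attached in the adapter, a buggy or compromised application cannot simply forget the check, which is the separation-of-concerns point the abstract makes.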

    A Framework for an Adaptive Early Warning and Response System for Insider Privacy Breaches

    Organisations such as governments and healthcare bodies are increasingly responsible for managing large amounts of personal information, and the increasing complexity of modern information systems is causing growing concern about the protection of these assets from insider threats. Insider threats are very difficult to handle, because insiders have direct access to information and are trusted by their organisations. The nature of insider privacy breaches varies with the organisation's acceptable usage policy and the attributes of an insider. Moreover, the level of risk that insiders pose depends on breach scenarios, including their access patterns and contextual information such as the timing of access. Protection from insider threats is a newly emerging research area, and thus only a few approaches are available that systematise the continuous monitoring of dynamic insider usage characteristics and adapt depending on the level of risk. The aim of this research is to develop a formal framework for an adaptive early warning and response system for insider privacy breaches within dynamic software systems. This framework allows the specification of multiple policies at different risk levels, depending on event patterns and timing constraints, and the enforcement of adaptive response actions to interrupt insider activity. Our framework is based on Usage Control (UCON), a comprehensive model that controls prior, ongoing, and subsequent resource usage. We extend UCON to include interrupt policy decisions, in which multiple policy decisions can be expressed at different risk levels. In particular, interrupt policy decisions can be dynamically adapted upon the occurrence of an event or over time. We propose a computational model that represents the concurrent behaviour of an adaptive early warning and response system in the form of a statechart.
    In addition, we propose a Privacy Breach Specification Language (PBSL) based on this computational model, in which event patterns, timing constraints, and the triggered early warning level are expressed in the form of policy rules. The main features of PBSL are its expressiveness, simplicity, practicality, and formal semantics. The formal semantics of PBSL, together with a model of the mechanisms enforcing the policies, is given in an operational style. Enforcement mechanisms, which are defined by the outcomes of the policy rules, influence the system state through mutual interaction between the policy rules and the system behaviour. We demonstrate the use of PBSL with a case study from the e-government domain that includes some real-world insider breach scenarios. The formal framework is supported by a tool that animates the enforcement and policy models. This tool also supports the model checking used to formally verify the safety and progress properties of the system over the policy and enforcement specifications.
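    As a rough illustration of the kind of rule such a language expresses (the event names, rule shape, and levels below are invented, not the thesis's concrete PBSL syntax), a monitor can match event patterns under a timing constraint and report the highest triggered warning level:

```python
# Hypothetical sketch of PBSL-style rules: each rule pairs a two-event
# pattern with a timing constraint and the warning level it triggers.

RULES = [
    # (first event, second event, max seconds between them, warning level)
    ("login", "bulk_export", 60, "high"),
    ("login", "read_record", 600, "low"),
]

def warning_level(events):
    """events: time-ordered list of (name, timestamp) pairs.
    Returns the highest warning level triggered by any rule."""
    rank = {"none": 0, "low": 1, "high": 2}
    level = "none"
    for first, second, within, lvl in RULES:
        for i, (n1, t1) in enumerate(events):
            if n1 != first:
                continue
            for n2, t2 in events[i + 1:]:
                # pattern matched within its timing window
                if n2 == second and t2 - t1 <= within and rank[lvl] > rank[level]:
                    level = lvl
    return level
```

    In the framework described above, the triggered level would then select an adaptive response action, such as interrupting the insider's session.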

    Access control model to support the orchestration of CRUD expressions

    Master's in Computer and Telematics Engineering. Access control is a sensitive and crucial aspect of securing the data present in databases. In an application driven by Create, Read, Update and Delete (CRUD) expressions, users can execute a single CRUD expression or a sequence of CRUD expressions to achieve the desired results. In such applications, access control is not just limited to authorizing the subject to access the object; it also aims to authorize and validate the operations that a subject can perform on the data after authorization. Current access control models are generally concerned with restricting access to resources. However, once the subject is authorized, there are no restrictions on the actions a subject can perform on those resources. In this work, an access control model is presented that extends the features of current access control models to provide an environment where a set of predefined policies is implemented as graphs of CRUD expressions. The design of the access control policies is based on the CRUD expressions that a user needs to execute to complete a task. These graphs of CRUD expressions are then used for controlling and validating the actions that can be performed on authorized information. In order to reuse policies, the presented model allows the inter-execution of policies based on predefined rules. The aim of this thesis is to provide a structure that allows application users to execute only authorized sequences of CRUD expressions in a predefined order, and allows security experts to design policies in a flexible way through the graph data structure.
    As a proof of concept, the Role-Based Access Control (RBAC) model was taken as the reference access control model, and the base chosen for this work is Secured, Distributed and Dynamic RBAC (S-DRACA), which allows sequences of CRUD expressions to be executed in a single direction.
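    The graph-of-CRUD-expressions idea can be sketched in a few lines (the identifiers below are hypothetical): each policy is a graph whose edges define the only permitted orderings, and a session may execute a CRUD expression only if it is a successor of the previously executed one.

```python
# Illustrative sketch: a policy as a graph of CRUD expressions. A session
# may only execute an expression reachable by an edge from its last one.

POLICY_GRAPH = {
    "START":        {"SELECT_order"},
    "SELECT_order": {"UPDATE_order", "DELETE_order"},
    "UPDATE_order": {"SELECT_order"},
    "DELETE_order": set(),  # terminal: nothing may follow a delete
}

class Session:
    def __init__(self):
        self.state = "START"

    def execute(self, crud_id):
        """Allow crud_id only if the graph permits it after the last step."""
        if crud_id not in POLICY_GRAPH.get(self.state, set()):
            raise PermissionError(f"{crud_id} not allowed after {self.state}")
        self.state = crud_id  # advance along the graph
```

    A sequence such as select, update, select, delete walks a path through the graph, while executing an update as the first action is rejected, which is exactly the "authorized sequences in a predefined order" behaviour described above.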

    Enhanced Encryption and Fine-Grained Authorization for Database Systems

    The aim of this research is to enhance fine-grained authorization and encryption so that database systems are equipped with the controls necessary to help enterprises adhere to zero-trust security more effectively. For fine-grained authorization, this thesis has extended database systems with three new concepts: row permissions, column masks and trusted contexts. Row permissions and column masks provide data-centric security so the security policy cannot be bypassed as with database views, for example. They also coexist in harmony with the rest of the database core tenets, so that enterprises are not forced to compromise either security or database functionality. Trusted contexts provide applications in multitiered environments with a secure and controlled manner to propagate user identities to the database, and therefore enable such applications to delegate the security policy to the database system, where it is enforced more effectively. Trusted contexts also protect against application bypass, so the application credentials cannot be abused to make database changes outside the scope of the application's business logic. For encryption, this thesis has introduced a holistic database encryption solution to address the limitations of traditional database encryption methods. It too coexists in harmony with the rest of the database core tenets, so that enterprises are not forced to choose between security and performance as with column encryption, for example. Lastly, row permissions, column masks, trusted contexts and holistic database encryption have all been implemented in IBM DB2, where they are relied upon by thousands of organizations around the world to protect critical data and adhere to zero-trust security more effectively.
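    The effect of a column mask can be sketched as follows (plain Python for illustration, not DB2's SQL syntax; the role and column names are made up): the mask rewrites a column's value at query time based on the requester's role, so the policy applies no matter which query path reaches the data.

```python
# Sketch of the column-mask idea: the masking rule travels with the
# column, so it cannot be bypassed the way a filtering view can.

def ssn_mask(value, role):
    """Payroll sees the full SSN; everyone else sees only the last four."""
    if role == "PAYROLL":
        return value
    return "XXX-XX-" + value[-4:]

def select(rows, role):
    """Apply the mask to every returned row, whatever the query was."""
    return [{**row, "ssn": ssn_mask(row["ssn"], role)} for row in rows]
```

    Row permissions work analogously, but filter out whole rows rather than rewriting column values.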

    An Approach for Testing the Extract-Transform-Load Process in Data Warehouse Systems

    Spring 2018. Includes bibliographical references. Enterprises use data warehouses to accumulate data from multiple sources for data analysis and research. Since organizational decisions are often made based on the data stored in a data warehouse, all its components must be rigorously tested. In this thesis, we first present a comprehensive survey of data warehouse testing approaches, and then develop and evaluate an automated testing approach for validating the Extract-Transform-Load (ETL) process, which is a common activity in data warehousing. In the survey we present a classification framework that categorizes the testing and evaluation activities applied to the different components of data warehouses. These approaches include both dynamic analysis as well as static evaluation and manual inspections. The classification framework uses information related to what is tested in terms of the data warehouse component that is validated, and how it is tested in terms of various types of testing and evaluation approaches. We discuss the specific challenges and open problems for each component and propose research directions. The ETL process involves extracting data from source databases, transforming it into a form suitable for research and analysis, and loading it into a data warehouse. ETL processes can use complex one-to-one, many-to-one, and many-to-many transformations involving sources and targets that use different schemas, databases, and technologies. Since faulty implementations in any of the ETL steps can result in incorrect information in the target data warehouse, ETL processes must be thoroughly validated. In this thesis, we propose automated balancing tests that check for discrepancies between the data in the source databases and that in the target warehouse. Balancing tests ensure that the data obtained from the source databases is not lost or incorrectly modified by the ETL process.
    First, we categorize and define a set of properties to be checked in balancing tests. We identify various types of discrepancies that may exist between the source and the target data, and formalize three categories of properties, namely, completeness, consistency, and syntactic validity, that must be checked during testing. Next, we automatically identify source-to-target mappings from ETL transformation rules provided in the specifications. We identify one-to-one, many-to-one, and many-to-many mappings for tables, records, and attributes involved in the ETL transformations. We automatically generate test assertions to verify the properties for balancing tests. We use the source-to-target mappings to automatically generate assertions corresponding to each property. The assertions compare the data in the target data warehouse with the corresponding data in the sources to verify the properties. We evaluate our approach on a health data warehouse that uses data sources with different data models running on different platforms. We demonstrate that our approach can find previously undetected real faults in the ETL implementation. We also provide an automatic mutation testing approach to evaluate the fault-finding ability of our balancing tests. Using mutation analysis, we demonstrate that our auto-generated assertions can detect faults in the data inside the target data warehouse when faulty ETL scripts execute on mock source data.
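    A minimal balancing test might look like the following sketch (the function and field names are ours, not the thesis's implementation): after an ETL run, assertions compare the source against the target to check completeness (no rows lost) and consistency (values preserved per key).

```python
# Illustrative balancing-test sketch: compare source rows with the rows
# the ETL process loaded into the target warehouse.

def balancing_assertions(source_rows, target_rows, key, amount):
    """Return a list of property violations (empty means balanced)."""
    failures = []
    # completeness: record counts must match
    if len(source_rows) != len(target_rows):
        failures.append("completeness: row counts differ")
    # consistency: the value for each key must be preserved
    src = {r[key]: r[amount] for r in source_rows}
    tgt = {r[key]: r[amount] for r in target_rows}
    for k, v in src.items():
        if tgt.get(k) != v:
            failures.append(f"consistency: mismatch for key {k}")
    return failures
```

    In the approach described above, assertions like these are generated automatically from the source-to-target mappings rather than written by hand.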

    Balancing Privacy, Precision and Performance in Distributed Systems

    Privacy, Precision, and Performance (3Ps) are three fundamental design objectives in distributed systems. However, these properties tend to compete with one another and are not considered absolute properties or functions. They must be defined and justified in terms of a system, its resources, stakeholder concerns, and the security threat model. To date, distributed systems research has only considered the trade-offs of balancing privacy, precision, and performance in a pairwise fashion. However, this dissertation formally explores the space of trade-offs among all 3Ps by examining three representative classes of distributed systems, namely Wireless Sensor Networks (WSNs), cloud systems, and Data Stream Management Systems (DSMSs). These representative systems support a large part of modern, mission-critical distributed systems. WSNs are real-time systems characterized by unreliable network interconnections and highly constrained computational and power resources. The dissertation proposes a privacy-preserving in-network aggregation protocol for WSNs, demonstrating that the 3Ps can be navigated by adopting appropriate algorithms and cryptographic techniques that are not prohibitively expensive. Next, the dissertation highlights the privacy and precision issues that arise in cloud databases due to the eventual consistency models of the cloud. To address these issues, consistency enforcement techniques across cloud servers are proposed, and the trade-offs between the 3Ps are discussed to help guide cloud database users on how to balance these properties. Lastly, the 3P properties are examined in DSMSs, which are characterized by high volumes of unbounded input data streams and strict real-time processing constraints. Within this system, the 3Ps are balanced through a proposed simple and efficient technique that applies access control policies over shared operator networks to achieve privacy and precision without sacrificing the system's performance.
    Although this dissertation shows that, with the right set of protocols and algorithms, the desirable 3P properties can coexist in a balanced way in well-established distributed systems, it also promotes a new 3Ps-by-design concept. This concept is meant to encourage distributed systems designers to proactively consider the interplay among the 3Ps from the initial stages of the system design lifecycle, rather than treating them as add-on properties.

    Access control systems for geo-spatial data and applications

    Data security is today an important requirement in various applications because of the stringent need to ensure confidentiality, integrity, and availability of information. Comprehensive solutions to data security are quite complicated and require the integration of different tools and techniques as well as specific organizational processes. In such a context, a fundamental role is played by the access control system (ACS) that establishes which subjects are authorized to perform which operations on which objects. Subjects are individuals, programs, or other entities requiring access to the protected resources. When dealing with protection of information, the resources of interest are typically objects that record information, such as files in an operating system, tuples in a relational database, or a complex object in an object database. Because of its relevance in the context of solutions for information security, access control has been extensively investigated for database management systems (DBMSs) [6], digital libraries [3, 14], and multimedia applications [24]. Yet, the importance of the spatial dimension in access control has been highlighted only recently. We say that access control has a spatial dimension when the authorization to access a resource depends on position information. We broadly categorize spatially aware access control as object-driven, subject-driven, and hybrid based on whether the position information concerns objects, subjects, or both, respectively. In the first case, the spatial dimension is introduced because of the spatial nature of resources. For example, if the resources are georeferenced Earth images, then we can envisage an individual being allowed to display only images covering a certain region. The spatial dimension may also be required because of the spatial nature of subjects. This is the case of mobile individuals allowed to access a resource when located in a given area.
    For example, an individual may be authorized to view secret information only within a military base. Finally, position information may concern both objects and subjects, as in the case of an individual authorized to display images of a region only within a military office. There is a wide range of applications which motivate spatially aware access control. The two challenging and contrasting applications we propose as examples are spatial data infrastructures (SDI) and location-based services (LBS). An SDI consists of the technological and organizational infrastructure which enables the sharing and coordinated maintenance of spatial data among multiple heterogeneous organizations, primarily public administrations and government agencies. On the other hand, LBS enable mobile users equipped with location-aware terminals to access information based on the position of the terminals. These applications have different requirements on access control. In an SDI, typically, there is the need to account for various complex structured spatial data that may have multiple representations across different organizations. In an SDI, access control is thus object-driven. Conversely, in LBS, there is the need to account for a dynamic and mobile user population which may request diversified services based on position. Access control is thus subject-driven or hybrid. However, despite the variety of requirements and the importance of spatial data protection in these and other applications, very few efforts have been devoted to the investigation of spatially aware access control models and systems. In this chapter, we pursue two main goals: the first is to present an overview of this emerging research area, and in particular of requirements and research directions; the second is to analyze in more detail some research issues, focusing in particular on access control in LBS.
    We can expect LBS to be widely deployed in the near future, when advanced wireless networks, such as mobile geosensor networks, and new positioning technologies, such as the Galileo satellite system, come into operation. In this perspective, access control will become increasingly important, especially for enabling selective access to services such as Enterprise LBS, which provide information services to mobile organizations, such as health care and fleet management enterprises. An access control model targeting mobile organizations is GEO-RBAC [4]. Such a model is based on the RBAC (role-based access control) standard and is compliant with Open Geospatial Consortium (OGC) standards with respect to the representation of the spatial dimension of the model. The main contributions of the chapter can be summarized as follows: ‱ We provide an overview of the ongoing research in the field of spatially aware access control. ‱ We show how the spatial dimension is interconnected with the security aspects in a specific access control model, that is, GEO-RBAC. ‱ We outline relevant architectural issues related to the implementation of an ACS based on the GEO-RBAC model. In particular, we present possible strategies for security enforcement and the architecture of a decentralized ACS for large-scale LBS applications. The chapter is organized as follows. The next section provides some background knowledge on data security, and in particular access control models. The subsequent section presents requirements for geospatial data security and then the state of the art. Afterward, the GEO-RBAC model is introduced; in particular, we present the main concepts defined in the basic layer of the model, Core GEO-RBAC. Next, architectural approaches supporting GEO-RBAC are presented. Open issues are finally reported in the concluding section, along with directions for future work.
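    A toy version of a spatially aware, GEO-RBAC-style check can be sketched as follows (simplified to axis-aligned boxes instead of OGC geometries; the role names and coordinates are invented): a role is enabled only while the subject's position lies inside the role's spatial extent, so permissions follow the mobile user.

```python
# Illustrative sketch of subject-driven spatial access control: a role's
# activation depends on the requesting subject's current position.

ROLE_EXTENTS = {
    # hypothetical role -> (xmin, ymin, xmax, ymax) bounding box
    "base_officer": (0.0, 0.0, 10.0, 10.0),
}

def role_enabled(role, x, y):
    """A role is enabled only inside its spatial extent."""
    xmin, ymin, xmax, ymax = ROLE_EXTENTS[role]
    return xmin <= x <= xmax and ymin <= y <= ymax

def can_view_secret(role, x, y):
    """Permission check: the role must both hold the permission and be
    enabled at the subject's current position."""
    return role == "base_officer" and role_enabled(role, x, y)
```

    This corresponds to the abstract's military-base example: the same individual holding the same role loses the permission the moment they leave the base's extent.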

    A role-based access control schema for materialized views

    This thesis presents a framework that enhances security at the level of materialized views. Materialized views can be used for performance reasons in very large systems, such as data warehouses or distributed systems, or for providing a filtered selection of data from a more general database. Existing techniques provide rule-based access control for materialized views; however, the administration of such systems is time-consuming and cumbersome in a large environment. This thesis presents a role-based access control schema for materialized views in which data authorization rules are associated with roles and defined in Datalog syntax in plain text files, a column-level restriction is imposed on a materialized view based on a user's assigned role, and a role conflict strategy is defined in which a priority is given to each conflicting role in order to resolve conflicts when a user gains authorization for permissions associated with conflicting roles at the same time. Keywords: Materialized Views, Authorization Views, Session Roles, Role Conflict.
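    The role-conflict strategy can be sketched as follows (the role names and priorities are illustrative, not the thesis's rules): when a session activates two roles declared to conflict, the role with the lower priority is dropped, so the user never holds both sets of permissions at once.

```python
# Illustrative sketch of priority-based role-conflict resolution.

CONFLICTS = {("auditor", "accountant")}   # pairs that may not coexist
PRIORITY = {"auditor": 2, "accountant": 1}  # higher number wins

def effective_roles(session_roles):
    """Drop the lower-priority role from each activated conflicting pair."""
    roles = set(session_roles)
    for a, b in CONFLICTS:
        if a in roles and b in roles:
            roles.discard(a if PRIORITY[a] < PRIORITY[b] else b)
    return roles
```

    The surviving roles would then determine which columns of a materialized view the session may read.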

    Hierarchical Group and Attribute-Based Access Control: Incorporating Hierarchical Groups and Delegation into Attribute-Based Access Control

    Attribute-Based Access Control (ABAC) is a promising alternative to traditional models of access control (i.e. Discretionary Access Control (DAC), Mandatory Access Control (MAC) and Role-Based Access Control (RBAC)) that has drawn attention in both recent academic literature and industry application. However, formalization of a foundational model of ABAC and large-scale adoption is still in its infancy. The relatively recent popularity of ABAC still leaves a number of problems unexplored. Issues like delegation, administration, auditability, scalability, hierarchical representations, etc. have been largely ignored or left to future work. This thesis seeks to aid in the adoption of ABAC by filling in several of these gaps. The core contribution of this work is the Hierarchical Group and Attribute-Based Access Control (HGABAC) model, a novel formal model of ABAC which introduces the concept of hierarchical user and object attribute groups to ABAC. It is shown that HGABAC is capable of representing the traditional models of access control (MAC, DAC and RBAC) using this group hierarchy and that in many cases its use simplifies both attribute and policy administration. HGABAC serves as the basis upon which extensions are built to incorporate delegation into ABAC. Several potential strategies for introducing delegation into ABAC are proposed and categorized into families, and the trade-offs of each are examined. One such strategy is formalized into a new User-to-User Attribute Delegation model, built as an extension to the HGABAC model. Attribute Delegation enables users to delegate a subset of their attributes to other users in an off-line manner (not requiring connection to a third party). Finally, a supporting architecture for HGABAC is detailed, including descriptions of services, high-level communication protocols and a new low-level attribute certificate format for exchanging user and connection attributes between independent services.
Particular emphasis is placed on ensuring support for federated and distributed systems. Critical components of the architecture are implemented and evaluated with promising preliminary results. It is hoped that the contributions in this research will further the acceptance of ABAC in both academia and industry by solving the problem of delegation as well as simplifying administration and policy authoring through the introduction of hierarchical user groups
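    The hierarchical attribute-group idea in HGABAC can be sketched as follows (the group names, attributes, and policy below are invented for illustration): a user's effective attribute set is their own attributes plus those inherited along each group's chain of parents, and policies are evaluated against that combined set.

```python
# Illustrative sketch of hierarchical attribute groups: membership in a
# group grants its attributes plus those of every ancestor group.

GROUP_PARENT = {"radiology": "staff", "staff": None}
GROUP_ATTRS = {
    "staff": {("employee", True)},
    "radiology": {("department", "radiology")},
}

def attributes(user_attrs, groups):
    """Combine a user's own attributes with inherited group attributes."""
    attrs = set(user_attrs)
    for g in groups:
        while g is not None:          # walk up the group hierarchy
            attrs |= GROUP_ATTRS.get(g, set())
            g = GROUP_PARENT.get(g)
    return attrs

def policy_view_scan(attrs):
    """A toy ABAC policy evaluated over the combined attribute set."""
    a = dict(attrs)
    return a.get("employee") is True and a.get("department") == "radiology"
```

    Assigning an attribute to the "staff" group here plays the same administrative role that assigning a permission to a role plays in RBAC, which is why the thesis can represent RBAC within the group hierarchy.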
    • 

    corecore