
    Extensions and Experimental Evaluation of SAT-based solvers for the UAQ problem

    Nowadays, most health organizations use Health Information Systems (HIS) to support staff in providing patients with proper care. In this context, security and privacy are key to establishing trust between the actors involved in the healthcare process, including the patient. However, patients' privacy cannot jeopardize their safety: as a consequence, a compromise between the two must eventually be found. Privilege management and access control are necessary elements to provide security and privacy. In this thesis, we first present the main features that make Role-Based Access Control (RBAC) suitable for permission management and access control in HIS. We then address the User Authorization Query (UAQ) problem for RBAC, namely the problem of determining the optimum set of roles to activate to provide the user with the requested permissions (if the user is authorized) while satisfying a set of Dynamic Mutually Exclusive Roles (DMER) constraints and achieving some optimization objective (least privilege versus availability). As a first contribution, we show how DMER constraints can be used to support the enforcement of Separation of Duty (SoD). The UAQ problem is known to be NP-hard. Most of the techniques proposed in the literature to solve it have been experimentally evaluated by running them against different benchmark problems, but the adequacy of those benchmarks is seldom discussed. In this thesis, we propose a methodology for evaluating existing benchmarks or designing new ones: the methodology leverages the asymptotic complexity analysis of the solving procedures provided in other works to foresee the benchmarks' complexity given the values of the most significant RBAC dimensions. First, we use our methodology to demonstrate that the state-of-the-art benchmarks are unsatisfactory. We then introduce UAQ-Solve, a tool that works both as a generator of benchmarks and as a UAQ solver leveraging existing PMAXSAT complete solvers. By using UAQ-Solve, we apply our methodology to generate a novel suite of parametric benchmarks that allows for the systematic assessment of UAQ solvers over a number of relevant dimensions. These include problems for which no polynomial-time algorithm is known as well as problems for which polynomial-time algorithms do exist. We then execute UAQ-Solve over our benchmarks to compare the performance of different complete and incomplete PMAXSAT solvers.
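
    As a rough illustration of the PMAXSAT view of UAQ described above (and not the actual UAQ-Solve encoding), the sketch below phrases a toy instance with three roles, two requested permissions, and one DMER pair as a Partial MaxSAT problem, assuming the PySAT library (python-sat) and its RC2 solver; all role and permission names are made up.

```python
# A toy UAQ instance as Partial MaxSAT using PySAT (pip install python-sat).
# Hard clauses encode permission coverage and DMER constraints; unit soft
# clauses penalize each activated role (least-privilege objective).
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

role_perms = {1: {"p1", "p2"}, 2: {"p2", "p3"}, 3: {"p3"}}  # role id -> permissions
requested = {"p1", "p3"}                                    # permissions the user asks for
dmer = [(1, 2)]                                             # roles 1 and 2 must not both be active

wcnf = WCNF()
for p in requested:                          # hard: every requested permission is covered
    wcnf.append([r for r, perms in role_perms.items() if p in perms])
for r1, r2 in dmer:                          # hard: DMER pairs are mutually exclusive
    wcnf.append([-r1, -r2])
for r in role_perms:                         # soft: prefer not to activate a role
    wcnf.append([-r], weight=1)

with RC2(wcnf) as solver:
    model = solver.compute()                 # None would mean the request cannot be granted
    print("roles to activate:", [r for r in role_perms if r in model])   # -> [1, 3]
```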

    Systematizing Genome Privacy Research: A Privacy-Enhancing Technologies Perspective

    Rapid advances in human genomics are enabling researchers to gain a better understanding of the role of the genome in our health and well-being, stimulating hope for more effective and cost-efficient healthcare. However, this also prompts a number of security and privacy concerns stemming from the distinctive characteristics of genomic data. To address them, a new research community has emerged and produced a large number of publications and initiatives. In this paper, we rely on a structured methodology to contextualize and provide a critical analysis of the current knowledge on privacy-enhancing technologies used for testing, storing, and sharing genomic data, using a representative sample of the work published in the past decade. We identify and discuss limitations, technical challenges, and issues faced by the community, focusing in particular on those that are inherently tied to the nature of the problem and are harder for the community alone to address. Finally, we report on the importance and difficulty of the identified challenges based on an online survey of genome data privacy experts.
    Comment: To appear in the Proceedings on Privacy Enhancing Technologies (PoPETs), Vol. 2019, Issue

    Tunable Security for Deployable Data Outsourcing

    Security mechanisms like encryption negatively affect other software quality characteristics such as efficiency. To cope with such trade-offs, it is preferable to build approaches that allow the trade-offs to be tuned after the design and implementation phases. This book introduces a methodology that can be used to build such tunable approaches. The book shows how the proposed methodology can be applied in the domains of database outsourcing, identity management, and credential management.

    Computational Complexity And Algorithms For Dirty Data Evaluation And Repairing

    In this dissertation, we study the dirty data evaluation and repairing problem in relational databases. Dirty data is usually inconsistent, inaccurate, incomplete, and stale. Existing methods and theories describe consistency using integrity constraints, such as data dependencies. However, integrity constraints are good at detecting inconsistency but not at evaluating its degree, and they cannot guide data repairing. This dissertation first studies the computational complexity of, and algorithms for, database inconsistency evaluation. We define and use the minimum tuple deletion to evaluate database inconsistency. For this minimum tuple deletion problem, we study the relationship between the size of the rule set and the computational complexity. We show that the minimum tuple deletion problem is NP-hard to approximate within 17/16, even with only three functional dependencies and four involved attributes. A near-optimal approximation algorithm for computing the minimum tuple deletion is proposed with a ratio of 2 − 1/2r, where r is the number of given functional dependencies. To guide data repairing, this dissertation also investigates a repairing method based on query feedback and formally studies two decision problems, the functional-dependency-restricted deletion and insertion propagation problems, corresponding to deletion and insertion feedback. A comprehensive analysis of both combined and data complexity is provided by considering different relational operators and feedback types. We identify the intractable and tractable cases to picture the complexity hierarchy of these problems and provide efficient algorithms for the tractable cases. Two improvements are proposed: one computes a minimum vertex cover of the conflict graph to improve the upper bound for the tuple deletion problem, and the other gives a better dichotomy for the deletion and insertion propagation problems in the absence of functional dependencies, considering data, combined, and parameterized complexity respectively.
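
    To make the vertex-cover connection mentioned above concrete, here is a small sketch (with hypothetical data, not the dissertation's algorithm): tuples that jointly violate a functional dependency form the edges of a conflict graph, deleting a vertex cover of that graph restores consistency, and the classic matching-based heuristic yields a 2-approximation of the minimum tuple deletion.

```python
# Conflict graph for the functional dependency A -> B over hypothetical rows:
# two tuples conflict if they agree on A but differ on B. Taking both endpoints
# of each uncovered conflict builds a maximal matching, whose vertex set is a
# vertex cover at most twice the optimum, i.e. a 2-approximation of the
# minimum tuple deletion.
from itertools import combinations

rows = [("k1", "x"), ("k1", "y"),    # conflict: same A, different B
        ("k2", "z"), ("k2", "z"),    # no conflict
        ("k3", "u"), ("k3", "v")]    # conflict

edges = [(i, j) for i, j in combinations(range(len(rows)), 2)
         if rows[i][0] == rows[j][0] and rows[i][1] != rows[j][1]]

cover = set()
for i, j in edges:                   # greedy maximal matching
    if i not in cover and j not in cover:
        cover.update((i, j))

print("tuples to delete (2-approximation):", sorted(cover))   # -> [0, 1, 4, 5]
```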

    Chameleon: A Secure Cloud-Enabled and Queryable System with Elastic Properties

    There are two dominant themes that have become increasingly important in our technological society. The first is the recurrent use of cloud-based solutions, which provide infrastructures, computation platforms, and storage as services. The second is the use of large application logs for analytics and operational monitoring in critical systems. Moreover, auditing activities, debugging of applications, and inspection of events generated by errors or potentially unexpected operations - including those generated as alerts by intrusion detection systems - are common situations where extensive logs must be analyzed and easy access is required. More often than not, part of the generated logs can be deemed sensitive, requiring a privacy-enhancing and queryable solution. In this dissertation, our main goal is to propose a novel approach for storing encrypted critical data in an elastic and scalable cloud-based storage, focusing on handling JSON-based ciphered documents. To this end, we make use of Searchable and Homomorphic Encryption methods to allow operations on the ciphered documents. Additionally, our solution allows the user to remain nearly oblivious to the system's internals, providing transparency while in use. The end goal is a unified middleware system capable of providing improved usability, privacy, and rich querying over the data, while maintaining server-side auditable logs that can be searched by the log owner or authorized users, with integrity and authenticity proofs. Our proposed solution, named Chameleon, provides rich querying facilities on ciphered data - including conjunctive keyword, ordering correlation, and boolean queries - while supporting field searching and nested aggregations. These operations allow our solution to provide data analytics over ciphered JSON documents, using Elasticsearch as our storage and search engine.
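
    As a simplified illustration of the kind of keyword-searchable encrypted index such a system relies on (this is not Chameleon's actual construction), the sketch below derives deterministic HMAC search tokens for keywords and stores Fernet-encrypted JSON documents under them; an in-memory dictionary stands in for the Elasticsearch index, and all keys and field names are illustrative.

```python
# Deterministic HMAC tokens allow equality search over encrypted JSON documents;
# a plain dict stands in for the Elasticsearch index. Requires 'cryptography'.
import hashlib, hmac, json
from cryptography.fernet import Fernet

enc_key = Fernet.generate_key()
mac_key = b"illustrative-token-key"          # would be a separately derived secret
fernet = Fernet(enc_key)
index = {}                                   # search token -> encrypted documents

def token(keyword: str) -> str:
    return hmac.new(mac_key, keyword.encode(), hashlib.sha256).hexdigest()

def store(doc: dict, keywords: list) -> None:
    blob = fernet.encrypt(json.dumps(doc).encode())
    for kw in keywords:
        index.setdefault(token(kw), []).append(blob)

def search(keyword: str) -> list:
    return [json.loads(fernet.decrypt(b)) for b in index.get(token(keyword), [])]

store({"event": "login_failed", "host": "srv1"}, ["login_failed", "srv1"])
store({"event": "disk_full", "host": "srv2"}, ["disk_full", "srv2"])
print(search("srv1"))    # the index never sees plaintext keywords or documents
```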

    Army Information Technology Procurement: a Business Process Analysis

    This thesis presents a business process analysis of the Army's ICT procurement system. The research identified several inefficiencies and proposes several potential solutions. The contributions of this research include a unified taxonomy, a method to prioritize requests, and system architecture products for the development of an automated and sustainable collaboration interface for the CIO/G6 to streamline their IT acquisition process. Development of a centralized system would reduce waste in the request process from submission to formal accounting, hasten the movement of requests between stakeholders, maintain a digital signature authorization for each approval authority, provide a reporting database to recognize reprogramming thresholds, and deliver relevant metrics and analysis to help inform the Army's IT resourcing decisions.

    Cloud transactions and caching for improved performance in clouds and DTNs

    In distributed transactional systems deployed over massively decentralized cloud servers, access policies are typically replicated. Interdependencies and inconsistencies among policies need to be addressed, as they can affect performance, throughput, and accuracy. Several stringent levels of policy consistency constraints and enforcement approaches to guarantee the trustworthiness of transactions on cloud servers are proposed. We define a look-up table to store policy versions and a Tree-Based Consistency approach to maintain a tree structure of the servers. By integrating the look-up table and the tree-based consistency approach, we propose an enhanced version of the Two-Phase Validation Commit (2PVC) protocol integrated with the Paxos commit protocol, with reduced or nearly the same performance overhead and no loss of accuracy or precision. A new caching scheme is also proposed that targets military and defense applications of Delay-Tolerant Networks (DTNs), where the data to be cached follow entirely different priority levels. In these applications, data popularity can be defined not only by request frequency but also by importance, such as who created and ranked the points of interest in the data and when and where they were created; higher-rank data belonging to a specific location may be more important even though they are requested less often than more popular, lower-priority data. Our caching scheme is therefore designed by taking these different requirements into consideration for DTNs in defense applications. The performance evaluation shows that our caching scheme reduces the overall access latency, cache misses, and cache memory usage compared to existing caching schemes.
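
    A minimal sketch of a priority-aware eviction policy in the spirit described above (the scoring rule and weights are illustrative assumptions, not the thesis's scheme) is shown below: each cached item is scored by a combination of its request frequency and an application-assigned rank, so a rarely requested but high-rank item can outlive a popular low-priority one.

```python
# Eviction scores an item by hits + rank_weight * rank, so high-rank items can
# survive even with few requests. Weights and keys below are illustrative.
class PriorityCache:
    def __init__(self, capacity, rank_weight=2.0):
        self.capacity = capacity
        self.rank_weight = rank_weight
        self.items = {}                      # key -> (value, rank, hit count)

    def _score(self, key):
        _, rank, hits = self.items[key]
        return hits + self.rank_weight * rank

    def put(self, key, value, rank):
        if key not in self.items and len(self.items) >= self.capacity:
            del self.items[min(self.items, key=self._score)]   # evict lowest score
        hits = self.items[key][2] if key in self.items else 0
        self.items[key] = (value, rank, hits)

    def get(self, key):
        if key not in self.items:
            return None
        value, rank, hits = self.items[key]
        self.items[key] = (value, rank, hits + 1)
        return value

cache = PriorityCache(capacity=2)
cache.put("poi:hq", b"high-rank point of interest", rank=5)
cache.put("tile:a", b"popular low-priority tile", rank=1)
cache.get("tile:a"); cache.get("tile:a")
cache.put("tile:b", b"new tile", rank=1)     # evicts tile:a, not the high-rank item
print(sorted(cache.items))                    # -> ['poi:hq', 'tile:b']
```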

    On designing large, secure and resilient networked systems

    Defending large networked systems against rapidly evolving cyber attacks is challenging for several reasons. First, cyber defenders are always fighting an asymmetric war: while the attacker needs to find just a single unprotected security vulnerability to launch an attack, the defender needs to identify and protect against all possible avenues of attack on the system. Various cost factors, such as, but not limited to, the costs of identifying and installing defenses, security management, manpower training and development, and system availability, make this asymmetric war even more challenging. Second, new cyber threats are always emerging - the so-called zero-day attacks - and it is not possible for a cyber defender to defend against an attack for which defenses are yet unknown. In this work, we investigate the problem of designing large and complex networks that are secure and resilient, looking into two specific aspects of the problem. The first is detecting anomalous activities in the network. While this problem has been variously investigated, we address it differently: we posit that anomalous activities are the result of mal-actors interacting with non-mal-actors, and that such anomalous activities are reflected in changes to the topological structure (in a mathematical sense) of the network. We formulate this problem as Sybil detection in networks and, for our experimentation and hypothesis testing, instantiate it as Sybil detection in on-line social networks (OSNs). Sybil attacks involve one or more attackers creating and introducing several mal-actors (fake identities in on-line social networks), called Sybils, into a complex network. Depending on the nature of the network system, the goal of the mal-actors can be to unlawfully access data, to forge another user's identity and activity, or to influence and disrupt the normal behavior of the system. The second aspect is building resiliency in a large network consisting of several machines that collectively provide a single service to the outside world. Such networks are particularly vulnerable to Sybil attacks. While our Sybil detection algorithms achieve very high levels of accuracy, they cannot guarantee that all Sybils will be detected. Thus, to protect against such "residual" Sybils (that is, those that remain potentially undetected and continue to attack the network services), we propose a novel Moving Target Defense (MTD) paradigm to build resilient networks. The core idea is that for large enterprise-level networks, the survivability of the network's mission is more important than the security of any one server. We develop protocols to relocate services from server to server in a random way such that, before an attacker has an opportunity to target a specific server and disrupt its services, the services will have migrated to another non-malicious server. The continuity of the large network's service is thus sustained. We evaluate the effectiveness of our proposed protocols using theoretical analysis, simulations, and experimentation. For the Sybil detection problem we use both synthetic and real-world data sets and evaluate the algorithms for accuracy of Sybil detection. For the moving target defense protocols we implement a proof-of-concept in the context of access control as a service and run several large-scale simulations. The proof-of-concept demonstrates the effectiveness of the MTD paradigm. We evaluate the computation and communication complexity of the protocols as we scale up to larger and larger networks.
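
    The following is a minimal simulation of the moving-target idea sketched above, not the authors' protocol: the service hops to a randomly chosen server at fixed intervals, so an attacker who observes the service's location and then needs several epochs to compromise that server usually finds the service gone; all parameters are illustrative.

```python
# The service hops to a random server every 'migrate_every' epochs; the attacker
# always aims at the server where it last saw the service and needs 'attack_time'
# epochs to compromise it. All parameters are illustrative assumptions.
import random

def simulate(servers=20, epochs=10000, attack_time=5, migrate_every=3, seed=1):
    rng = random.Random(seed)
    host = rng.randrange(servers)            # server currently running the service
    target, work = host, 0                   # attacker's current target and progress
    disruptions = 0
    for t in range(1, epochs + 1):
        if migrate_every and t % migrate_every == 0:
            host = rng.randrange(servers)    # random relocation: the MTD step
        work += 1
        if work >= attack_time:              # the target is finally compromised
            if target == host:
                disruptions += 1             # the service had not moved away in time
            target, work = host, 0           # attacker re-aims at the current host
    return disruptions

print("disruptions with relocation:   ", simulate())
print("disruptions without relocation:", simulate(migrate_every=0))
```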

    Department of Computer Science Activity 1998-2004

    This report summarizes much of the research and teaching activity of the Department of Computer Science at Dartmouth College between late 1998 and late 2004. The material for this report was collected as part of the final report for NSF Institutional Infrastructure award EIA-9802068, which funded equipment and technical staff during that six-year period. This equipment and staff supported essentially all of the department's research activity during that period.