10 research outputs found

    Dynamic deployment of context-aware access control policies for constrained security devices

    Securing the access to a server, guaranteeing a certain level of protection over an encrypted communication channel, and executing particular countermeasures when attacks are detected are examples of security requirements. Such requirements are identified based on organizational purposes and expectations in terms of resource access and availability, and also on system vulnerabilities and threats. All these requirements belong to the so-called security policy. Deploying the policy means enforcing, i.e., configuring, those security components and mechanisms so that the system behavior is finally the one specified by the policy. The deployment issue becomes more difficult as growing organizational requirements and expectations generally outpace the integration of new security functionalities in the information system: the information system will not always embed the necessary security functionalities for the proper deployment of contextual security requirements. To overcome this issue, our solution is based on a central-entity approach which takes charge of unmanaged contextual requirements and dynamically redeploys the policy when context changes are detected by this central entity. We also present an improvement over the OrBAC (Organization-Based Access Control) model. Up to now, a controller based on a contextual OrBAC policy has been passive, in the sense that it assumes policy evaluation is triggered by access requests. Therefore, it does not allow reasoning about policy state evolution when actions occur. The modifications introduced by our work overcome this limitation and provide a proactive version of the model by integrating concepts from action specification languages.
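    The core idea of context-aware access control described above can be illustrated with a minimal sketch (hypothetical names and rule shape, not the paper's OrBAC implementation): a rule only applies while its context predicate holds, so a detected context change can flip decisions without altering the rule set.

```python
# Minimal sketch of context-aware access control evaluation.
# The Rule shape and predicate style are illustrative assumptions,
# not the OrBAC model's actual formalism.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    role: str
    action: str
    resource: str
    context: Callable[[Dict], bool]  # e.g. "working hours", "under attack"

def is_permitted(rules: List[Rule], role: str, action: str,
                 resource: str, env: Dict) -> bool:
    """Grant access only if some rule matches AND its context holds in env."""
    return any(
        r.role == role and r.action == action and r.resource == resource
        and r.context(env)
        for r in rules
    )

rules = [Rule("nurse", "read", "record",
              lambda env: env.get("working_hours", False))]

# Same request, different context, different decision:
print(is_permitted(rules, "nurse", "read", "record", {"working_hours": True}))
print(is_permitted(rules, "nurse", "read", "record", {}))
```

    A proactive controller, as the paper advocates, would re-evaluate such rules when the environment changes rather than only on access requests.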

    Roles in information security – A survey and classification of the research area

    Motivation: The growing diffusion of information technologies within all areas of human society has increased their importance as a critical success factor in the modern world. However, information processing systems are vulnerable to many different kinds of threats that can lead to various types of damage resulting in significant economic losses. Consequently, the importance of Information Security has grown and evolved in a similar manner. In its most basic definition, Information Security means protecting information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction. The aim of Information Security is to minimize risks related to the three main security goals (confidentiality, integrity, and availability), usually referred to as "CIA". Published in Computers & Security 30 (2011), 748–769.

    PRESERVING PRIVACY IN DATA RELEASE

    Data sharing and dissemination play a key role in our information society. Not only do they prove to be advantageous to the involved parties, but they can also be fruitful to the society at large (e.g., new treatments for rare diseases can be discovered based on real clinical trials shared by hospitals and pharmaceutical companies). The advancements in Information and Communication Technology (ICT) make the process of releasing a data collection simpler than ever. The availability of novel computing paradigms, such as data outsourcing and cloud computing, makes scalable, reliable and fast infrastructures a dream come true at reasonable costs. As a natural consequence of this scenario, data owners often rely on external storage servers for releasing their data collections, thus delegating the burden of data storage and management to the service provider. Unfortunately, the price to be paid when releasing a collection of data is in terms of unprecedented privacy risks. Data collections often include sensitive information, not intended for disclosure, that should be properly protected. The problem of protecting privacy in data release has been under the attention of the research and development communities for a long time. However, the richness of released data, the large number of available sources, and the emerging outsourcing/cloud scenarios raise novel problems, not addressed by traditional approaches, which need enhanced solutions. In this thesis, we define a comprehensive approach for protecting sensitive information when large collections of data are publicly or selectively released by their owners. In a nutshell, this requires protecting data explicitly included in the release, as well as protecting information not explicitly released but that could be exposed by the release, and ensuring that access to released data be allowed only to authorized parties according to the data owners' policies.
    More specifically, these three aspects translate to three requirements, addressed by this thesis, which can be summarized as follows. The first requirement is the protection of data explicitly included in a release. While intuitive, this requirement is complicated by the fact that privacy-enhancing techniques should not prevent recipients from performing legitimate analysis on the released data but, on the contrary, should ensure sufficient visibility over non sensitive information. We therefore propose a solution, based on a novel formulation of the fragmentation approach, that vertically fragments a data collection so as to satisfy requirements for both information protection and visibility, and we complement it with an effective means for enriching the utility of the released data. The second requirement is the protection of data not explicitly included in a release. As a matter of fact, even a collection of non sensitive data might enable recipients to infer (possibly sensitive) information not explicitly disclosed but that somehow depends on the released information (e.g., the release of the treatment with which a patient is being cared for can leak information about her disease). To address this requirement, starting from a real case study, we propose a solution for counteracting the inference of sensitive information that can be drawn by observing peculiar value distributions in the released data collection. The third requirement is access control enforcement. Available solutions fall short for a variety of reasons. Traditional access control mechanisms are based on a reference monitor and do not fit outsourcing/cloud scenarios, since neither the data owner is willing, nor the cloud storage server is trusted, to enforce the access control policy. Recent solutions for access control enforcement in outsourcing scenarios assume outsourced data to be read-only and cannot easily manage (dynamic) write authorizations.
    We therefore propose an approach for efficiently supporting grant and revoke of write authorizations, building upon the selective encryption approach, and we also define a subscription-based authorization policy, to fit real-world scenarios where users pay for a service and access the resources made available during their subscriptions. The main contributions of this thesis can therefore be summarized as follows. With respect to the protection of data explicitly included in a release, our original results are: i) a novel modeling of the fragmentation problem; ii) an efficient technique for computing a fragmentation, based on reduced Ordered Binary Decision Diagrams (OBDDs) to formulate the conditions that a fragmentation must satisfy; iii) the computation of a minimal fragmentation that does not fragment data more than necessary, with the definition of both an exact and a heuristic algorithm, the latter providing faster computation while closely approximating the exact solutions; and iv) the definition of loose associations, a sanitized form of the sensitive associations broken by fragmentation that can be safely released, specifically extended to operate on arbitrary fragmentations. With respect to the protection of data not explicitly included in a release, our original results are: i) the definition of a novel and unresolved inference scenario, arising from a real case study where data items are incrementally released upon request; ii) the definition of several metrics to assess the inference exposure due to a data release, based upon the concepts of mutual information, Kullback-Leibler distance between distributions, Pearson's cumulative statistic, and Dixon's coefficient; and iii) the identification of a safe release with respect to the considered inference channel and the definition of the controls to be enforced to guarantee that no sensitive information be leaked when releasing non sensitive data items.
    With respect to access control enforcement, our original results are: i) the management of dynamic write authorizations, by defining a solution based on selective encryption for efficiently and effectively supporting grant and revoke of write authorizations; ii) the definition of an effective technique to guarantee data integrity, so as to allow the data owner and the users to verify that modifications to a resource have been produced only by authorized users; and iii) the modeling and enforcement of a subscription-based authorization policy, to support scenarios where both the set of users and the set of resources change frequently over time, and users' authorizations are based on their subscriptions.
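    The vertical fragmentation idea that runs through this abstract can be sketched as follows. This is a deliberately simple greedy placement, shown only to illustrate what a confidentiality constraint means; it is not the thesis's OBDD-based minimal-fragmentation algorithm, and the attribute names are invented.

```python
# Toy sketch of vertical fragmentation: place attributes into fragments
# so that no fragment ever contains ALL attributes of any confidentiality
# constraint (a sensitive association). Greedy first-fit, for illustration.
def fragment(attributes, constraints):
    """Assign each attribute to the first fragment where adding it would
    not complete any constraint; open a new fragment otherwise."""
    fragments = []
    for attr in attributes:
        for frag in fragments:
            candidate = frag | {attr}
            if not any(c <= candidate for c in constraints):
                frag.add(attr)
                break
        else:
            fragments.append({attr})
    return fragments

# Hypothetical constraints: name must never appear with disease or salary.
constraints = [{"name", "disease"}, {"name", "salary"}]
frags = fragment(["name", "disease", "salary", "zip"], constraints)
print(frags)  # "name" is never co-located with "disease" or "salary"
```

    A minimal fragmentation, as pursued in the thesis, would additionally avoid splitting data more than necessary; loose associations then allow releasing a sanitized form of the broken associations.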

    Access Control Administration with Adjustable Decentralization

    Access control is a key function of enterprises that preserve and propagate massive amounts of data. Access control enforcement and administration are two major components of the system. On one hand, enterprises are responsible for data security; thus, consistent and reliable access control enforcement is necessary although the data may be distributed. On the other hand, data often belongs to several organizational units with various access control policies and many users; therefore, decentralized administration is needed to accommodate diverse access control needs and to avoid the central bottleneck. Yet, the required degree of decentralization varies across organizations: some organizations may require a powerful administrator in the system; whereas, some others may prefer a self-governing setting in which no central administrator exists, but users fully manage their own data. Hence, a single system with adjustable decentralization will be useful for supporting various (de)centralized models within the spectrum of access control administration. Giving individual users the ability to delegate or grant privileges is a means of decentralizing access control administration. Revocation of arbitrary privileges is a means of retaining control over data. To provide flexible administration, the ability to delegate a specific privilege and the ability to revoke it should be held independently of each other and independently of the privilege itself. Moreover, supporting arbitrary user and data hierarchies, fine-grained access control, and protection of both data (end objects) and metadata (access control data) with a single uniform model will provide the most widely deployable access control system. Conflict resolution is a major aspect of access control administration in systems.
    Resolving access conflicts when deriving effective privileges from explicit ones is a challenging problem in the presence of both positive and negative privileges, sophisticated data hierarchies, and diversity of conflict resolution strategies. This thesis presents a uniform access control administration model with adjustable decentralization, to protect both data and metadata. There are several contributions in this work. First, we present a novel mechanism to constrain access control administration for each object type at object creation time, as a means of adjusting the degree of decentralization for the object when the system is configured. Second, by controlling the access control metadata with the same mechanism that controls the users' data, privileges can be granted and revoked to the extent that these actions conform to the corporation's access control policy. Thus, this model supports a whole spectrum of access control administration, in which each model is characterized as a network of access control states, similar to a finite state automaton. The model depends on a hierarchy of access banks of authorizations which is supported by a formal semantics. Within this framework, we also introduce the self-governance property in the context of access control, and show how the model facilitates it. In particular, using this model, we introduce a conflict-free and decentralized access control administration model in which all users are able to retain complete control over their own data while they are also able to delegate any subset of their privileges to other users or user groups. We also introduce two measures to compare any two access control models in terms of the degrees of decentralization and interpretation. Finally, as the conflict resolution component of access control models, we incorporate a unified algorithm to resolve access conflicts by simultaneously supporting several combined strategies.
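    The conflict between positive and negative privileges mentioned above can be made concrete with a small sketch. The two strategies shown are classic textbook ones (denials-take-precedence and permissions-take-precedence); the thesis's unified algorithm combines several strategies simultaneously, which this illustration does not attempt.

```python
# Sketch of resolving conflicting explicit authorizations for one request.
# The strategy names follow common access-control literature; this is not
# the thesis's unified algorithm.
def resolve(explicit, strategy="denials-take-precedence"):
    """explicit: list of 'permit'/'deny' decisions that apply to a request."""
    if not explicit:
        return "deny"  # closed policy: no applicable authorization => deny
    if strategy == "denials-take-precedence":
        return "deny" if "deny" in explicit else "permit"
    if strategy == "permissions-take-precedence":
        return "permit" if "permit" in explicit else "deny"
    raise ValueError(f"unknown strategy: {strategy}")

print(resolve(["permit", "deny"]))                                 # deny
print(resolve(["permit", "deny"], "permissions-take-precedence"))  # permit
print(resolve([]))                                                 # deny
```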

    Twenty years of rewriting logic

    Rewriting logic is a simple computational logic that can naturally express both concurrent computation and logical deduction with great generality. This paper provides a gentle, intuitive introduction to its main ideas, as well as a survey of the work that many researchers have carried out over the last twenty years in advancing: (i) its foundations; (ii) its semantic framework and logical framework uses; (iii) its language implementations and its formal tools; and (iv) its many applications to automated deduction, software and hardware specification and verification, security, real-time and cyber-physical systems, probabilistic systems, bioinformatics and chemical systems.
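    The notion of computation by rewriting can be illustrated with a tiny string-rewriting sketch (strings standing in for terms; this is not Maude syntax or the rewriting-logic formalism itself): rules are applied repeatedly until no rule matches, and the normal form is the result of the computation.

```python
# Tiny illustration of computation as rewriting: apply the first matching
# rule anywhere in the term, repeat until a fixpoint (normal form).
def rewrite(term, rules):
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in term:
                term = term.replace(lhs, rhs, 1)
                changed = True
                break
    return term

# Sorting as a rewrite system: swap any adjacent out-of-order letters.
rules = [("ba", "ab"), ("ca", "ac"), ("cb", "bc")]
print(rewrite("cba", rules))  # "abc"
```

    Rewriting logic generalizes this idea far beyond strings, to concurrent rewriting of algebraic terms modulo equations.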

    A Uniform Formal Approach to Business and Access Control Models, Policies and their Combinations

    Access control represents an important part of security in software systems, since access control policies determine which users of a software system have access to what objects and operations and under what constraints. One can view access control models as providing the basis for access control rules. Further, an access control policy can be seen as a combination of one or more rules, and one or more policies can be combined into a set of access control policies that control access to an entire system. The rules and resulting policies can be combined in many different ways, and the combination of rules and policies is expressed in policy languages. Approaches to access control (AC) policy languages, such as XACML, do not provide a formal representation for specifying rule- and policy-combining algorithms or for classifying and verifying properties of AC policies. In addition, there is no connection between the rules that form a policy and the general access control and business models on which those rules are based. Some authors propose formal representations for rule- and policy-combining algorithms. However, the proposed models are not expressive enough to formally represent classes of algorithms related to the history of policy outcomes, including ordered-permit-overrides, ordered-deny-overrides, and only-one-applicable. In fact, they are not able to formally express any algorithm that involves history, including the consensus-related class such as weak-consensus, weak-majority, strong-consensus, strong-majority, and super-majority-permit. In addition, some other authors propose a formal representation but do not present an approach and automated support for the formal verification of any classes of combining algorithms. The work presented in this thesis provides a uniform formal approach to business and access control models, policies and their combinations.
    The research involves a new formal representation for access control rules, policies, and their combination, and supports formal verification. In addition, the approach explicitly connects the rules to the underlying access control model. Specifically, the approach:
    • provides a common representation for systematically describing and integrating business processes, access control models, their rules and policies,
    • expresses access control rules using an underlying access control model based on an existing augmented business modeling notation,
    • can express and verify formally all known policy- and rule-combining algorithms, a result not seen in the literature,
    • supports a classification of relevant access control properties that can be verified against policies and their combinations, and
    • supports automated formal verification of single policies and combined policy sets based on model checking.
    Finally, the approach is applied to an augmented version of the conference management system, a well-known example from the literature. Several properties whose verification was not possible by prior approaches, such as ones involving history of policy outcomes, are verified in this thesis.

    Enhancing Data Security in Data Warehousing

    Doctoral thesis of the Doctoral Programme in Information Sciences and Technologies, presented to the Faculty of Sciences and Technology of the University of Coimbra. Data Warehouses (DWs) store sensitive data that encloses many business secrets. They have become the most common data source used by analytical tools for producing business intelligence and supporting decision making in most enterprises. This makes them an extremely appealing target for both inside and outside attackers. Given these facts, securing them against data damage and information leakage is critical. This thesis proposes a security framework for integrating data confidentiality solutions and intrusion detection in DWs. Deployed as a middle tier between end-user interfaces and the database server, the framework describes how the different solutions should interact with the remaining tiers. To the best of our knowledge, this framework is the first to integrate confidentiality solutions such as data masking and encryption together with intrusion detection in a unique blueprint, providing a broad-scope data security architecture. Packaged database encryption solutions have been well accepted as the best way to protect data confidentiality while keeping database performance high. However, this thesis demonstrates that they heavily increase storage space and introduce extremely large response time overhead, among other drawbacks. Although their usefulness for security purposes is indisputable, the thesis discusses the issues concerning their feasibility and efficiency in data warehousing environments. Thus, solutions specifically tailored for DWs (i.e., that account for the particular characteristics of the data and workloads) are capable of delivering better tradeoffs between security and performance than those proposed by standard algorithms and previous research.
    This thesis proposes a reversible data masking function and a novel encryption algorithm that provide diverse levels of significant security strength while adding small response time and storage space overhead. Both techniques take numerical input and produce numerical output, using data type preservation to minimize storage space overhead, and rely only on arithmetical operators mixed with exclusive-OR (XOR) and modulus operators in their data transformations. The operations used in these data transformations are native to standard SQL, which enables both solutions to mask or encrypt data through transparent SQL rewriting. Transparently rewriting SQL avoids data round trips between the database and the encryption/decryption mechanisms, thus avoiding I/O and network bandwidth bottlenecks. Using operations and operators native to standard SQL also makes both solutions fully portable to any type of DataBase Management System (DBMS) and/or DW. Experimental evaluation demonstrates that the proposed techniques outperform standard and state-of-the-art research algorithms while providing substantial security strength. From an intrusion detection view, most Database Intrusion Detection Systems (DIDS) rely on command-syntax analysis to compute data access patterns and dependencies for building user profiles that represent what they consider typical user activity. However, the considerably ad hoc nature of DW user workloads makes it extremely difficult to distinguish between normal and abnormal user behavior, generating huge numbers of alerts that mostly turn out to be false alarms. Most DIDS also fail to assess the damage intrusions might cause, while many allow various intrusions to pass undetected or only inspect user actions after their execution, which jeopardizes intrusion damage containment.
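    The style of numeric masking described above (arithmetic mixed with XOR and modulus, numeric in and numeric out) can be sketched as a toy reversible function. The constants and structure here are invented for illustration; the thesis's actual masking function, its encryption algorithm, and their security analysis are not reproduced.

```python
# Toy reversible numeric masking: add a secret constant modulo 2**32,
# then XOR with a second secret constant. All three operations (+, %, XOR)
# are expressible in standard SQL, which is the property the thesis exploits
# for transparent SQL rewriting. Constants are hypothetical, not secure keys.
M = 2**32
KEY_ADD, KEY_XOR = 0x9E3779B9, 0x5BF03635  # illustrative "secret" constants

def mask(x: int) -> int:
    return ((x + KEY_ADD) % M) ^ KEY_XOR

def unmask(y: int) -> int:
    return ((y ^ KEY_XOR) - KEY_ADD) % M

value = 123456
print(mask(value) != value)        # masked value differs from the original
print(unmask(mask(value)))         # round trip recovers 123456
```

    Because XOR keeps the value within 32 bits and the modular addition is exactly inverted by modular subtraction, `unmask` recovers any input in [0, 2**32).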
    This thesis proposes a DIDS specifically tailored for DWs, integrating a real-time intrusion detector and response manager at the SQL command level that acts transparently as an extension of the database server. User profiles and intrusion detection processes rely on analyzing several distinct aspects of typical DW workloads: the user command, the data it processes, and the results it returns. An SQL-like rule set extends data access control, statistical models are built for each feature to obtain individual user profiles, and statistical tests are used for intrusion detection. A self-calibration formula computes the contribution of each feature to the overall intrusion detection process. A risk-exposure method is used for alert management, which proves more efficient for damage containment than using alert correlation techniques to deal with the generation of high volumes of alerts. Experiments demonstrate the overall efficiency of the proposed DIDS.
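    The per-feature statistical profiling described above can be sketched in miniature. The feature (rows returned per query), the z-score test, and the threshold are all illustrative assumptions; the thesis builds separate statistical models per feature and self-calibrates their contributions, which this sketch does not attempt.

```python
# Illustrative single-feature statistical test for database intrusion
# detection: a user profile stores the mean and standard deviation of a
# workload feature, and an observation far outside the profile is flagged.
import statistics

class FeatureProfile:
    def __init__(self, history):
        self.mean = statistics.mean(history)
        self.std = statistics.pstdev(history) or 1.0  # avoid zero-width profiles

    def is_anomalous(self, value, threshold=3.0):
        """Flag observations more than `threshold` std deviations from the mean."""
        return abs(value - self.mean) / self.std > threshold

# Hypothetical history of rows returned by one user's typical queries.
profile = FeatureProfile([100, 110, 95, 105, 90])
print(profile.is_anomalous(104))     # within the profile
print(profile.is_anomalous(10_000))  # far outside: e.g. bulk exfiltration
```

    A real-time detector in the spirit of the thesis would run such tests before or during command execution, so that anomalous actions can be stopped rather than merely logged after the fact.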