7 research outputs found

    Improving the Policy Specification for Practical Access Control Systems

    Access control systems play a crucial role in protecting the security of information systems by ensuring that only authorized users are granted access to sensitive resources, and the protection is only as good as the access control policies. To enable a security administrator to express her desired policy conveniently, it is paramount that the policy specification be expressive, comprehensible, and free of inconsistencies. In this dissertation, we study the policy specifications of three practical access control systems (obligation systems, firewalls, and Security-Enhanced Linux in Android) and improve their expressiveness, comprehensibility, and consistency. First, we improve the expressiveness of obligation policies for handling different types of obligations. We propose a language for specifying obligations, as well as an architecture for handling access control policies with these obligations, by extending XACML (the de facto standard for specifying access control policies). We also implement our design in a prototype system named ExtXACML to handle various obligations. Second, we improve the comprehensibility of firewall policies, enabling administrators to better understand and manage them. We introduce a tri-modularized design of firewall policies that elevates them from monolithic to modular. To support legacy firewall policies, we also define a five-step process and present algorithms for converting them into their modularized form. Finally, we improve the consistency of Security-Enhanced Linux in Android (SEAndroid) policies to reduce the attack surface in Android systems. We propose a systematic approach, as well as a semiautomatic tool, for uncovering three classes of policy misconfigurations. We also analyze SEAndroid policies from four Android versions and seven Android phone vendors, and in all of them we observe examples of potential policy misconfigurations.
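
    As a hedged illustration of the first contribution, the minimal Python sketch below shows the general idea of access decisions that carry obligations, which the enforcement point must discharge together with the decision. The class and function names (Obligation, Rule, evaluate) are hypothetical and do not reflect ExtXACML's actual language or architecture.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Obligation:
        name: str                          # e.g., "log_access"
        action: Callable[[dict], None]     # side effect to run when the rule fires

    @dataclass
    class Rule:
        condition: Callable[[dict], bool]  # predicate over the access request
        effect: str                        # "Permit" or "Deny"
        obligations: List[Obligation] = field(default_factory=list)

    def evaluate(rules: List[Rule], request: dict) -> str:
        # First-applicable rule wins; its obligations are discharged with the decision.
        for rule in rules:
            if rule.condition(request):
                for ob in rule.obligations:
                    ob.action(request)
                return rule.effect
        return "Deny"                      # default-deny when no rule applies

    # Example: permit doctors to read records, but oblige an audit-log entry.
    rules = [Rule(condition=lambda r: r["role"] == "doctor" and r["action"] == "read",
                  effect="Permit",
                  obligations=[Obligation("log_access", lambda r: print("AUDIT:", r))])]
    print(evaluate(rules, {"role": "doctor", "action": "read", "resource": "record42"}))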

    Modelling and Analysis of Network Security Policies

    Nowadays, computers and network communications have a pervasive presence in all our daily activities. Their correct configuration in terms of security is becoming more and more complex due to the growing number and variety of services present in a network. Generally, the security configuration of a computer network is dictated by specifying the policies of the security controls (e.g., firewall, VPN gateway) in the network. This implies that the specification of the network security policies is a crucial step to avoid errors in network configuration (e.g., blocking legitimate traffic, permitting unwanted traffic, or sending insecure data). In the literature, an anomaly is an incorrect policy specification that an administrator may introduce in the network. In this thesis, we indicate as policy anomaly any conflict (e.g., two triggered policy rules enforcing contradictory actions), error (e.g., a policy that cannot be enforced because it requires a cryptographic algorithm not supported by the security controls) or sub-optimization (e.g., redundant policies) that may arise in the policy specification phase. Security administrators thus have to face the hard job of correctly specifying the policies, which requires a high level of competence. Several studies have confirmed, in fact, that many security breaches and breakdowns are attributable to administrators' responsibilities. Several approaches have been proposed to analyze the presence of anomalies among policy rules, in order to enforce a correct security configuration. However, we have identified two limitations of such approaches. On one hand, the current literature identifies only the anomalies among policies of a single security technology (e.g., IPsec, TLS), while a network is generally configured with many technologies. On the other hand, existing approaches work on a single policy type, also named domain (e.g., filtering, communication protection). Unfortunately, the complexity of real systems is not self-contained, and each network security control may affect the behavior of other controls in the same network. The objective of this PhD work was to investigate novel approaches for modelling security policies and their anomalies, and formal techniques of anomaly analysis. We present in this dissertation our contributions to the current policy analysis state of the art and the achieved results. A first contribution was the definition of a new class of policy anomalies, the inter-technology anomalies, which arise in a set of policies of multiple security technologies. We also provided a formal model able to detect these new types of anomalies. One result of applying the inter-technology analysis to communication protection policies was the categorization of twelve new types of anomalies. A second result of this activity was derived from an empirical assessment that proved the practical significance of detecting such new anomalies. The second contribution of this thesis was the definition of a new type of policy analysis, named inter-domain analysis, which identifies any anomaly that may arise among different policy domains. We improved the state of the art by proposing a possible model to detect the inter-domain anomalies, which is a generalization of the aforementioned inter-technology model.
    In particular, we defined the Unified Model for Policy Analysis (UMPA) to perform the inter-domain analysis by extending the analysis model applied to a single policy domain to a comprehensive analysis of anomalies among many policy domains. The result of this last part of our dissertation was to improve the effectiveness of the analysis process. Thanks to the inter-domain analysis, administrators can detect, in a simple and customizable way, a greater set of anomalies than they could detect by running any other model individually.
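
    To make the notion of an inter-technology anomaly concrete, the following minimal Python sketch flags one simple case: a firewall filtering rule that drops exactly the traffic a channel-protection policy (e.g., IPsec) expects to be delivered. The rule fields, the exact-match comparison, and the function names are simplifying assumptions for illustration; they are not the UMPA model itself.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class FilterRule:
        src: str
        dst: str
        port: int
        action: str          # "ALLOW" or "DENY"

    @dataclass
    class ProtectionRule:    # traffic a channel-protection policy expects to deliver
        src: str
        dst: str
        port: int

    def inter_technology_anomalies(filters: List[FilterRule],
                                   protections: List[ProtectionRule]) -> List[Tuple[ProtectionRule, FilterRule]]:
        # Flag protection rules whose traffic is dropped by a filtering rule.
        anomalies = []
        for p in protections:
            for f in filters:
                if (f.src, f.dst, f.port) == (p.src, p.dst, p.port) and f.action == "DENY":
                    anomalies.append((p, f))
        return anomalies

    filters = [FilterRule("10.0.0.0/24", "192.168.1.10", 443, "DENY")]
    protections = [ProtectionRule("10.0.0.0/24", "192.168.1.10", 443)]
    for p, f in inter_technology_anomalies(filters, protections):
        print("Anomaly: protected flow", p, "is blocked by filter", f)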

    Distributed Security Policy Analysis

    Computer networks have become an important part of modern society, and computer network security is crucial for their correct and continuous operation. The security aspects of computer networks are defined by network security policies. The term policy, in general, is defined as "a definite goal, course or method of action to guide and determine present and future decisions". In the context of computer networks, a policy is "a set of rules to administer, manage, and control access to network resources". Network security policies are enforced by special network appliances, so-called security controls. Different types of security policies are enforced by different types of security controls. Network security policies are hard to manage, and errors are quite common. The problem exists because network administrators do not have a good overview of the network, the defined policies, and the interaction between them. Researchers have proposed different techniques for network security policy analysis, which aim to identify errors within policies so that administrators can correct them. There are three different solution approaches: anomaly analysis, reachability analysis, and policy comparison. Anomaly analysis searches for potential semantic errors within policy rules, and can also be used to identify possible policy optimizations. Reachability analysis evaluates allowed communication within a computer network and can determine whether a certain host can reach a service or a set of services. Policy comparison compares two or more network security policies and represents the differences between them in an intuitive way. Although research in this field has been carried out for over a decade, there is still no clear answer on how to reduce policy errors. The different analysis techniques have their pros and cons, but none of them is a sufficient solution on its own. More precisely, they are mainly complements to each other, as one analysis technique finds policy errors which remain unknown to another. Therefore, to obtain a complete analysis of the computer network, multiple models must be instantiated. An analysis model that can perform all types of analysis techniques is desirable and has four main advantages. Firstly, the model can cover the greatest number of possible policy errors. Secondly, the computational overhead of instantiating the model is incurred only once. Thirdly, research effort is reduced because improvements and extensions to the model apply to all three analysis types at the same time. Fourthly, new algorithms can be evaluated by comparing their performance directly to each other. This work proposes a new analysis model which is capable of performing all three analysis techniques. Security policies and the network topology are represented by the so-called Geometric-Model. The Geometric-Model is a formal model based on set theory and a geometric interpretation of policy rules. Policy rules are defined according to the condition-action format: if the condition holds, then the action is applied. A security policy is expressed as a set of rules, a resolution strategy which selects the action when more than one rule applies, external data used by the resolution strategy, and a default action in case no rule applies. This work also introduces the concept of the Equivalent-Policy, which is calculated on the network topology and the policies involved. All analysis techniques are performed on it, with much higher performance. A precomputation phase is required for two reasons.
    Firstly, security policies which modify the traffic must be transformed to obtain linear behaviour. Secondly, far fewer rules are required to represent the global behaviour of a set of policies than the sum of the rules in the involved policies. The analysis model can handle the most common security policies and is designed to be extensible to future security policy types. As already mentioned, the Geometric-Model can represent all types of security policies, but the calculation of the Equivalent-Policy has some small dependencies on the details of different policy types. Therefore, the computation of the Equivalent-Policy must be tweaked to support new types. Since the model and the computation of the Equivalent-Policy were designed to be extensible, the effort required to introduce a new security policy type is minimal. The anomaly analysis can be performed on computer networks containing different security policies. The policy comparison can perform an Implementation-Verification between high-level security requirements and an entire computer network containing different security policies, as well as a Change-Impact-Analysis of such a network. The proposed model is implemented in a working prototype, and a performance evaluation has been performed. The performance of the implementation is more than sufficient for real scenarios. Although the calculation of the Equivalent-Policy requires a significant amount of time, it is still manageable and is required only once. The execution of the different analysis techniques is fast, and the results are generally calculated in real time. The implementation also exposes an API for future integration in different frameworks or software packages. Based on the API, a complete tool was implemented, with a graphical user interface and additional features.
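
    The set-theoretic, geometric view of condition-action rules can be illustrated with a minimal Python sketch: each rule condition is treated as a box over two header fields, and one classic anomaly (a later rule completely shadowed by an earlier one with a different action) reduces to a containment check. This is only an illustration of the idea under simplifying assumptions, not the Geometric-Model or the Equivalent-Policy computation described above.

    from dataclasses import dataclass
    from typing import List, Tuple

    Interval = Tuple[int, int]          # inclusive [low, high] range of a header field

    @dataclass
    class Rule:
        src_ports: Interval
        dst_ports: Interval
        action: str                     # "ALLOW" or "DENY"

    def contains(outer: Interval, inner: Interval) -> bool:
        return outer[0] <= inner[0] and inner[1] <= outer[1]

    def shadowed(rules: List[Rule]) -> List[Rule]:
        # A later rule is shadowed when an earlier rule covers its whole box
        # with a different action, so the later rule can never take effect.
        result = []
        for i, later in enumerate(rules):
            for earlier in rules[:i]:
                if (contains(earlier.src_ports, later.src_ports)
                        and contains(earlier.dst_ports, later.dst_ports)
                        and earlier.action != later.action):
                    result.append(later)
                    break
        return result

    policy = [Rule((0, 65535), (80, 80), "DENY"),
              Rule((1024, 2048), (80, 80), "ALLOW")]   # geometrically inside the first rule
    print(shadowed(policy))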

    Cybersecurity issues in software architectures for innovative services

    The recent advances in data center development have been at the basis of the widespread success of the cloud computing paradigm, which underpins the "Everything as a Service" (XaaS) model for software-based applications and services. According to the XaaS model, services of any kind are deployed on demand as cloud-based applications, with a great degree of flexibility and a limited need for investments in dedicated hardware and/or software components. This approach opens up many opportunities, for instance providing access to complex and widely distributed applications, whose cost and complexity previously represented a significant entry barrier, even for small or emerging businesses. Unfortunately, networking is now embedded in every service and application, raising several cybersecurity issues related to corruption and leakage of data, unauthorized access, and so on. In this context, new service-oriented architectures are emerging, the so-called service enabler architectures. The aim of these architectures is not only to expose resources to these types of services, but also to validate them. The validation covers numerous aspects, from legal to infrastructural ones, but above all the cybersecurity threats. A solid threat analysis of the aforementioned architecture is therefore necessary, and this is the main goal of this thesis. This work investigates the security threats of the emerging service enabler architectures, providing proofs of concept for these issues and their solutions, based on several use cases implemented in real-world scenarios.

    A Survey on Security and Privacy of 5G Technologies: Potential Solutions, Recent Advancements, and Future Directions

    Security has become the primary concern in many telecommunications industries today, as risks can have high consequences. Especially as core and enabling technologies are integrated into 5G networks, confidential information will move across all layers of future wireless systems. Several incidents have revealed that the hazard posed by a compromised wireless network not only affects security and privacy, but also disrupts the complex dynamics of the communications ecosystem. Consequently, the complexity and strength of security attacks have increased in the recent past, making the detection or prevention of sabotage a global challenge. From the security and privacy perspectives, this paper presents comprehensive detail on the core and enabling technologies used to build the 5G security model: network softwarization security, PHY (physical) layer security, and 5G privacy concerns, among others. Additionally, the paper includes a discussion of security monitoring and management of 5G networks. The paper also evaluates the related security measures and standards of core 5G technologies by resorting to different standardization bodies, and provides a brief overview of 5G standardization security forces. Furthermore, key projects of international significance, in line with the security concerns of 5G and beyond, are also presented. Finally, a future directions and open challenges section has been included to encourage future research.

    Efficient sharing mechanisms for virtualized multi-tenant heterogeneous networks

    The explosion in data traffic, the physical resource constraints, and the insufficient financial incentives for deploying 5G networks stress the need for a paradigm shift in network upgrades. Typically, operators are also the service providers, charging end users low, flat tariffs independently of the service enjoyed. Fine-scale management of the network resources is needed, both for optimizing costs and resource utilization and for enabling new synergies among network owners and third parties. In particular, operators could open their networks to third parties by means of fine-scale sharing agreements over customized networks for enhanced service provision, in exchange for an adequate return on investment for upgrading their infrastructures. The main objective of this thesis is to study the potential of fine-scale resource management and sharing mechanisms for enhancing service provision and for contributing to a sustainable road to 5G. More precisely, the state-of-the-art architectures and technologies for network programmability and scalability are studied, together with a novel paradigm for supporting service diversity and fine-scale sharing. We review the limits of conventional networks, extend existing standardization efforts, and define an enhanced architecture for enabling 5G network features (e.g., network-wide centralization and programmability). The potential of the proposed architecture is assessed in terms of flexible sharing and enhanced service provision, while the advantages of alternative business models are studied in terms of additional profits to the operators. We first study the data rate improvement achievable by means of spectrum and infrastructure sharing among operators and evaluate the profit increase justified by a better service provided. We present a scheme based on coalitional game theory for assessing the capability of accommodating more service requests when a cooperative approach is adopted, and for studying the conditions for beneficial sharing among coalitions of operators. Results show that: i) collaboration can be beneficial even in the case of unbalanced cost redistribution within coalitions; ii) coalitions of equal-sized operators provide better profit opportunities and require lower tariffs. The second kind of sharing interaction that we consider is the one between operators and third-party service providers, in the form of fine-scale provision of customized portions of the network resources. We define a policy-based admission control mechanism, whose performance is compared with reference strategies. The proposed mechanism is based on auction theory and computes the optimal admission policy at a reduced complexity for different traffic loads and allocation frequencies. Because next-generation services include delay-critical services, we compare the admission control performance of conventional approaches with that of the proposed one, which proves to offer near real-time service provision and reduced complexity. Besides, it guarantees high revenues and low expenditures in exchange for negligible losses in terms of fairness towards service providers. To conclude, we study the case where adaptable timescales are adopted for the policy-based admission control, in order to promptly guarantee service requirements over traffic fluctuations.
    In order to reduce complexity, we consider the offline precomputation of admission strategies with respect to reference network conditions, and then study the extension to unexplored conditions by means of computationally efficient methodologies. Performance is compared for different admission strategies by means of a proof of concept on real network traces. Results show that the proposed strategy provides a tradeoff between complexity and performance with respect to reference strategies, while reducing resource utilization and requirements on network awareness.
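
    As a rough illustration of policy-based admission control with an auction flavour, the Python sketch below greedily admits the service requests with the highest bid per unit of requested resource until capacity runs out. It is only a toy stand-in under assumed data structures (Request, admit); the thesis's mechanism computes optimal admission policies and is not reproduced here.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Request:
        tenant: str
        demand: float   # requested resource units
        bid: float      # offered payment

    def admit(requests: List[Request], capacity: float) -> List[Request]:
        # Rank requests by bid density (payment per resource unit) and admit greedily.
        admitted, used = [], 0.0
        for req in sorted(requests, key=lambda r: r.bid / r.demand, reverse=True):
            if used + req.demand <= capacity:
                admitted.append(req)
                used += req.demand
        return admitted

    requests = [Request("video-SP", 40, 120), Request("IoT-SP", 10, 50), Request("AR-SP", 70, 100)]
    print([r.tenant for r in admit(requests, capacity=100)])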