132 research outputs found

    Improvement on PDP Evaluation Performance Based on Neural Networks and SGDK-means Algorithm

    To improve PDP (policy decision point) evaluation performance, a novel and efficient evaluation engine, XDNNEngine, based on neural networks and an SGDK-means (stochastic gradient descent K-means) algorithm is proposed. We divide a policy set into clusters, distinguish rules by their features, and label them for neural-network training using the K-means algorithm and an asynchronous SGDK-means algorithm. Neural networks are then used to search for the applicable rule, and a quantitative neural network is introduced to reduce the server's computational cost. By simulating the arrival of requests, XDNNEngine is compared with the Sun PDP, XEngine and SBA-XACML. Experimental results show that 1) when the number of requests reaches 10,000, the evaluation time of XDNNEngine on a large-scale policy set with 10,000 rules is approximately 2.5 ms, and 2) under the same conditions, the evaluation time of XDNNEngine is 98.27%, 90.36% and 84.69% lower than that of the Sun PDP, XEngine and SBA-XACML, respectively.
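
    The following is a minimal sketch, not the paper's implementation, of the kind of online (SGD-style) K-means step the abstract describes: rule feature vectors (hypothetical here) are clustered, and the resulting labels would then be used to train the neural network that routes requests to the cluster holding their applicable rules.

```python
# Minimal sketch of online / SGD-style K-means over hypothetical policy-rule
# feature vectors; the paper's actual SGDK-means algorithm may differ in detail.
import numpy as np

def sgd_kmeans(X, k, epochs=5, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    counts = np.zeros(k)                      # per-centroid update counts
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:  # stochastic pass over the rules
            j = np.argmin(((centroids - x) ** 2).sum(axis=1))
            counts[j] += 1
            lr = 1.0 / counts[j]              # decaying per-cluster learning rate
            centroids[j] += lr * (x - centroids[j])
    labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(axis=2), axis=1)
    return centroids, labels

# Hypothetical example: 10,000 rules encoded as 16-dimensional feature vectors.
rules = np.random.default_rng(1).random((10_000, 16))
centroids, labels = sgd_kmeans(rules, k=8)
# `labels` would serve as training targets for the rule-routing neural network.
```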

    Clustering and recommendation techniques for access control policy management

    Managing access control policies can be a daunting process, given the frequent policy decisions that need to be made and the potentially large number of policy rules involved. Policy management includes, but is not limited to, policy optimization, configuration, and analysis. Such tasks require a deep understanding of the policy and its building components, especially in scenarios where it frequently changes and needs to adapt to different environments. Assisting both administrators and users in performing these tasks is important in avoiding policy misconfigurations and ill-informed policy decisions. We investigate a number of clustering and recommendation techniques, and implement a set of tools that assist administrators and users in managing their policies. First, we propose and implement an optimization technique, based on policy clustering and adaptable rule ranking, to achieve optimal request evaluation performance. Second, we implement a policy analysis framework that simplifies and visualizes analysis results, based on a hierarchical clustering algorithm. The framework utilizes a similarity-based model that provides a basis for risk analysis of newly introduced policy rules. In addition to administrators, we focus on regular individuals who nowadays manage their own access control policies on a regular basis. Users make frequent policy decisions, especially with the increasing popularity of social network sites such as Facebook and Twitter. For example, users are required to allow/deny access to their private data on social sites each time they install a 3rd party application. To make matters worse, 3rd party access requests are mostly not customizable by the user. We propose a framework that allows users to customize their policy decisions on social sites and provides a set of recommendations that assist users in making well-informed decisions. Finally, as the browser has become the main medium for the user's online presence, we investigate access control models for 3rd party browser extensions. Even though extensions enrich the browsing experience of users, they could potentially represent a threat to their privacy. We propose and implement a framework that 1) monitors 3rd party extension accesses, 2) provides fine-grained permission controls, and 3) provides detailed permission information to users in an effort to increase their privacy awareness. To evaluate the framework we conducted a within-subjects user study and found the framework to effectively increase user awareness of requested permissions.
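
    To illustrate the hierarchical, similarity-based clustering of rules that the analysis framework builds on, here is a small sketch under assumed encodings (the rule attributes, the Jaccard similarity choice, and the cut threshold are illustrative, not taken from the thesis).

```python
# Sketch: hierarchically cluster access-control rules by attribute similarity.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical rules as sets of attribute/value conditions.
rules = [
    {"role=admin", "action=read",  "resource=/hr"},
    {"role=admin", "action=write", "resource=/hr"},
    {"role=user",  "action=read",  "resource=/wiki"},
    {"role=user",  "action=read",  "resource=/blog"},
]
vocab = sorted(set().union(*rules))
X = np.array([[attr in r for attr in vocab] for r in rules], dtype=bool)

dist = pdist(X, metric="jaccard")          # pairwise rule dissimilarity
Z = linkage(dist, method="average")        # agglomerative clustering
clusters = fcluster(Z, t=0.6, criterion="distance")
print(dict(enumerate(clusters)))           # rule index -> cluster id
```

    A newly introduced rule can then be compared against existing clusters: landing far from every cluster is the kind of signal a similarity-based risk analysis could flag for review.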

    An experimental testbed to predict the performance of XACML Policy Decision Points

    The performance and scalability of access control systems is a growing concern as organisations deploy ever more complex communications and content management systems. This paper describes how an (offline) experimental testbed may be used to address performance concerns. To begin, timing measurements are collected from a server component incorporating the Policy Decision Point (PDP) under test, using representative policies and corresponding requests. Our experiments with two XACML PDP implementations show that measured request service times are typically clustered by request type; thus an algorithm for request cluster identification is presented. Cluster characterisations are used as inputs to a PDP performance model for a given policy/request mix, and an analytic (queueing) model is used to estimate the equilibrium server load for different mixes of request clusters. The analytic performance prediction model is validated and extended by discrete-event simulation of a PDP subject to additional load. These predictive models enable network administrators to explore the capacity of the PDP for different overall loadings (requests per unit time) and profiles (relative frequencies) of requests.
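
    As a back-of-the-envelope illustration of how per-cluster service times and a request mix feed an analytic load estimate, here is a simple M/M/1-style approximation; the paper's actual queueing model and the cluster names and numbers below are assumptions, not taken from it.

```python
# Rough M/M/1-style estimate of PDP load from per-cluster service times and a
# request mix (illustrative only; the paper's analytic model may differ).
def pdp_load(arrival_rate, cluster_service_ms, cluster_mix):
    """arrival_rate in requests/s; service times in ms; mix fractions sum to 1."""
    mean_service_s = sum(cluster_mix[c] * cluster_service_ms[c]
                         for c in cluster_mix) / 1000.0
    rho = arrival_rate * mean_service_s              # equilibrium utilisation
    if rho >= 1.0:
        return rho, float("inf")                     # unstable: queue grows without bound
    mean_response_s = mean_service_s / (1.0 - rho)   # M/M/1 mean response time
    return rho, mean_response_s

# Hypothetical clusters identified from timing measurements.
service_ms = {"permit-simple": 1.2, "deny-simple": 1.0, "multi-attr": 4.5}
mix = {"permit-simple": 0.6, "deny-simple": 0.3, "multi-attr": 0.1}
rho, resp = pdp_load(arrival_rate=300, cluster_service_ms=service_ms, cluster_mix=mix)
print(f"utilisation={rho:.2f}, mean response={resp * 1000:.1f} ms")
```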

    On designing large, secure and resilient networked systems

    Defending large networked systems against rapidly evolving cyber attacks is challenging, for several reasons. First, cyber defenders are always fighting an asymmetric warfare: while the attacker needs to find just a single unprotected security vulnerability to launch an attack, the defender needs to identify and protect against all possible avenues of attack on the system. Various cost factors, such as the costs of identifying and installing defenses, security management, manpower training and development, and system availability, make this asymmetric warfare even more challenging. Second, new cyber threats are constantly emerging: the so-called zero-day attacks. It is not possible for a cyber defender to defend against an attack for which defenses are yet unknown. In this work, we investigate the problem of designing large and complex networks that are secure and resilient. There are two specific aspects of the problem that we look into. First is the problem of detecting anomalous activities in the network. While this problem has been variously investigated, we address it differently. We posit that anomalous activities are the result of mal-actors interacting with non mal-actors, and that such anomalous activities are reflected in changes to the topological structure (in a mathematical sense) of the network. We formulate this problem as that of Sybil detection in networks. For our experimentation and hypothesis testing we instantiate the problem as that of Sybil detection in on-line social networks (OSNs). Sybil attacks involve one or more attackers creating and introducing several mal-actors (fake identities in on-line social networks), called Sybils, into a complex network. Depending on the nature of the network system, the goal of the mal-actors can be to unlawfully access data, to forge another user's identity and activity, or to influence and disrupt the normal behavior of the system. The second aspect that we look into is that of building resiliency in a large network that consists of several machines that collectively provide a single service to the outside world. Such networks are particularly vulnerable to Sybil attacks. While our Sybil detection algorithms achieve very high levels of accuracy, they cannot guarantee that all Sybils will be detected. Thus, to protect against such "residual" Sybils (that is, those that remain potentially undetected and continue to attack the network services), we propose a novel Moving Target Defense (MTD) paradigm to build resilient networks. The core idea is that for large enterprise-level networks, the survivability of the network's mission is more important than the security of one or more of the servers. We develop protocols to re-locate services from server to server in a random way such that, before an attacker has an opportunity to target a specific server and disrupt its services, the services migrate to another non-malicious server. The continuity of the service of the large network is thus sustained. We evaluate the effectiveness of our proposed protocols using theoretical analysis, simulations, and experimentation. For the Sybil detection problem we use both synthetic and real-world data sets and evaluate the algorithms for accuracy of Sybil detection. For the moving target defense protocols we implement a proof-of-concept in the context of access control as a service and run several large-scale simulations. The proof-of-concept demonstrates the effectiveness of the MTD paradigm. We evaluate the computation and communication complexity of the protocols as we scale up to larger and larger networks.
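
    The toy simulation below conveys the intuition behind random service relocation; it is not the dissertation's protocol, and the parameters (number of servers, hop interval, attacker reconnaissance time) are made up for illustration.

```python
# Toy MTD simulation: a service hops uniformly at random among servers every
# `hop_interval` steps; an attacker needs `recon_time` consecutive steps on the
# same server to compromise the service. Illustrative only.
import random

def simulate_mtd(servers=20, hop_interval=5, recon_time=8, steps=10_000, seed=0):
    rng = random.Random(seed)
    location = rng.randrange(servers)
    target, dwell, compromises = rng.randrange(servers), 0, 0
    for t in range(steps):
        if t % hop_interval == 0:               # service migrates at random
            location = rng.randrange(servers)
        dwell = dwell + 1 if location == target else 0
        if dwell == 0:
            target = location                   # attacker re-targets the current host
        if dwell >= recon_time:                 # attack completed before the next hop
            compromises += 1
            dwell = 0
    return compromises

# Hopping well within the attacker's reconnaissance time sharply reduces compromises.
print(simulate_mtd(hop_interval=50), simulate_mtd(hop_interval=2))
```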

    Ensuring compliance with data privacy and usage policies in online services

    Online services collect and process a variety of sensitive personal data that is subject to complex privacy and usage policies. Complying with the policies is critical, and often legally binding for service providers, but it is challenging as applications are prone to many disclosure threats. We present two compliance systems, Qapla and Pacer, that ensure efficient policy compliance in the face of direct and side-channel disclosures, respectively. Qapla prevents direct disclosures in database-backed applications (e.g., personnel management systems), which are subject to complex access control, data linking, and aggregation policies. Conventional methods inline policy checks with application code. Qapla instead specifies policies directly on the database and enforces them in a database adapter, thus separating compliance from the application code. Pacer prevents network side-channel leaks in cloud applications. A tenant's secrets may leak via its network traffic shape, which can be observed at shared network links (e.g., network cards, switches). Pacer implements a cloaked tunnel abstraction, which hides secret-dependent variation in the tenant's traffic shape but allows variations based on non-secret information, enabling secure and efficient use of network resources in the cloud. Both systems require modest development effort and incur moderate performance overheads, thus demonstrating their usability.
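
    The sketch below illustrates the general idea of adapter-level enforcement (policies attached to tables and conjoined into every query). The policy table, schema, and predicate here are hypothetical; Qapla's real policy language and query rewriting are substantially richer.

```python
# Sketch of adapter-level policy enforcement: application code never issues raw
# SQL against protected tables; the adapter conjoins the table's policy predicate.
import sqlite3

POLICIES = {  # table -> row-level predicate parameterised by the requesting user
    "employees": "(dept = (SELECT dept FROM employees WHERE name = :user) OR :user = 'hr')",
}

def policy_select(conn, user, table, columns, where="1=1", params=None):
    predicate = POLICIES.get(table, "1=0")       # default deny for unknown tables
    sql = f"SELECT {', '.join(columns)} FROM {table} WHERE ({where}) AND {predicate}"
    args = dict(params or {}, user=user)
    return conn.execute(sql, args).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INT)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [("alice", "eng", 100), ("bob", "eng", 90), ("carol", "sales", 95)])

print(policy_select(conn, user="alice", table="employees", columns=["name", "salary"]))
# alice (eng) sees only eng rows; an 'hr' user would see every row.
```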

    Efficient Attribute Based Access Control for RESTful Services

    The popularity of REST continues to grow, and so does the need for fine-grained access control for RESTful services. Attribute Based Access Control (ABAC) is a very generic concept that covers multiple different access control mechanisms. XACML is an XML-based implementation of ABAC and is established as a standard mechanism. Its flexibility opens the opportunity to specify detailed security policies, but it has drawbacks regarding maintenance and performance as the complexity of security policies grows; in environments that require fine-grained access control, the consequence is long processing times for authorization requests. We describe how to design a security policy in a resource-oriented environment so that these drawbacks are minimized. The results are faster processing times for access requests and an easy-to-manage concept for security policies for RESTful services.
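
    As a rough illustration of resource-oriented policy structuring (not the paper's specific design), the sketch below indexes rules by REST resource prefix, so a request only touches the small rule subset for its resource tree instead of scanning one monolithic policy; all rules and names are hypothetical.

```python
# Sketch: organise access rules by REST resource prefix to limit per-request work.
from collections import defaultdict

RULES = [  # (resource prefix, HTTP method, role, effect) -- hypothetical policy
    ("/api/orders",  "GET",    "customer", "Permit"),
    ("/api/orders",  "POST",   "customer", "Permit"),
    ("/api/orders",  "DELETE", "admin",    "Permit"),
    ("/api/reports", "GET",    "analyst",  "Permit"),
]

index = defaultdict(list)
for prefix, method, role, effect in RULES:
    index[prefix].append((method, role, effect))

def decide(path, method, role):
    # Longest matching prefix selects the candidate rule set; default deny.
    for prefix in sorted(index, key=len, reverse=True):
        if path.startswith(prefix):
            for m, r, effect in index[prefix]:
                if m == method and r == role:
                    return effect
            break
    return "Deny"

print(decide("/api/orders/42", "DELETE", "customer"))  # Deny
print(decide("/api/orders/42", "DELETE", "admin"))     # Permit
```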

    A Fault-Based Model of Fault Localization Techniques

    Every day, ordinary people depend on software working properly. We take it for granted; from banking software, to railroad switching software, to flight control software, to software that controls medical devices such as pacemakers or even gas pumps, our lives are touched by software that we expect to work. It is well known that the main technique used to ensure software quality is testing. Often it is the only quality assurance activity undertaken, making it that much more important. In a typical experiment studying these techniques, a researcher will intentionally seed a fault (deliberately breaking the functionality of some source code) in the hope that the automated techniques under study will be able to identify the fault's location in the source code. These faults are picked arbitrarily; there is potential for bias in their selection. Previous researchers have established an ontology for understanding and expressing this bias, called fault size. This research captures the fault size ontology in the form of a probabilistic model. The results of applying this model to measure fault size suggest that many faults generated through program mutation (the systematic replacement of source code operators to create faults) are very large and easily found. Secondary measures generated in the assessment of the model suggest a new static analysis method, called testability, for predicting the likelihood that code will contain a fault in the future. While software testing researchers are not statisticians, they nonetheless make extensive use of statistics in their experiments to assess fault localization techniques. Researchers often select their statistical techniques without justification. This is a worrisome situation because it can lead to incorrect conclusions about the significance of research. This research introduces an algorithm, MeansTest, which helps automate some aspects of the selection of appropriate statistical techniques. The results of an evaluation of MeansTest suggest that it performs well relative to its peers. This research then surveys recent work in software testing, using MeansTest to evaluate the significance of researchers' work. The results of the survey indicate that software testing researchers are underreporting the significance of their work.
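
    The sketch below shows the kind of test-selection step such an algorithm automates: checking distributional assumptions before choosing how to compare two techniques' scores. It does not reproduce MeansTest's actual decision procedure, and the scores and threshold are made-up examples.

```python
# Illustrative test selection: check normality, then pick a parametric or
# non-parametric comparison. Not MeansTest itself; a generic stand-in.
from scipy import stats

def compare_techniques(scores_a, scores_b, alpha=0.05):
    normal = (stats.shapiro(scores_a)[1] > alpha and
              stats.shapiro(scores_b)[1] > alpha)
    if normal:
        name, res = "Welch t-test", stats.ttest_ind(scores_a, scores_b, equal_var=False)
    else:
        name, res = "Mann-Whitney U", stats.mannwhitneyu(scores_a, scores_b)
    return name, res.pvalue

# Hypothetical fault-localization effectiveness scores for two techniques.
a = [0.62, 0.71, 0.58, 0.66, 0.73, 0.69, 0.64, 0.70]
b = [0.55, 0.59, 0.61, 0.52, 0.57, 0.60, 0.54, 0.58]
print(compare_techniques(a, b))
```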

    Privacy-enhanced network monitoring

    This PhD dissertation investigates two means that are necessary for building privacy-enhanced network monitoring systems: a policy-based privacy or confidentiality enforcement technology, and metrics that measure leakage of private or confidential information in order to verify and improve these policies. The privacy enforcement mechanism is based on fine-grained access control and reversible anonymisation of XML data to limit or control access to sensitive information from the monitoring systems. The metrics can be used to support a continuous improvement process by quantifying leakages of private or confidential information, locating where they occur, and proposing how they can be mitigated. The planned actions can be enforced by applying a reversible anonymisation policy or by removing the source of the information leakages. The metrics can subsequently verify that the planned privacy enforcement scheme works as intended; any significant deviation from the expected information leakage can be used to trigger further improvement actions. The most significant results from the dissertation are: (1) a privacy leakage metric based on the entropy standard deviation of given data (for example, IDS alarms), which measures how much sensitive information is leaking and where these leakages occur; (2) a proxy offering policy-based reversible anonymisation of information in XML-based web services, supporting multi-level security so that only authorised stakeholders can access sensitive information; and (3) a methodology that combines the privacy metrics with the reversible anonymisation scheme to support a continuous improvement process with reduced leakage of private or confidential information over time. This can be used to improve management of private or confidential information where security monitoring has been outsourced to semi-trusted parties, for example managed security services monitoring health institutions or critical infrastructures. The solution is based on relevant standards to ensure backwards compatibility with existing intrusion detection systems and alarm databases.
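
    A simplified sketch of an entropy-based leakage indicator follows; the dissertation's exact metric definition may differ, and the alarm fields and values below are hypothetical.

```python
# Simplified entropy-based leakage indicator: compute the Shannon entropy of each
# potentially sensitive field across a batch of IDS alarms, then summarise the
# spread of those entropies with their standard deviation.
import math
from collections import Counter
from statistics import pstdev

def shannon_entropy(values):
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical anonymised alarm records.
alarms = [
    {"src_ip": "10.0.0.1", "username": "anon-17", "signature": "ssh-brute-force"},
    {"src_ip": "10.0.0.2", "username": "anon-17", "signature": "ssh-brute-force"},
    {"src_ip": "10.0.0.3", "username": "anon-42", "signature": "port-scan"},
]

entropies = {field: shannon_entropy([a[field] for a in alarms]) for field in alarms[0]}
print(entropies, "spread:", pstdev(entropies.values()))
# High-entropy fields leak more identifying information and are candidates for
# (reversible) anonymisation in the monitoring pipeline.
```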

    Service Outsourcing Character Oriented Privacy Conflict Detection Method in Cloud Computing

    Cloud computing provides services to users as a software paradigm. However, its openness, virtualization, and service outsourcing features make it difficult to ensure the security of private information, so protecting user privacy information has become a research focus. In this paper, we first model the service privacy policy and the user privacy preference with description logic. Second, we use the Pellet reasoner to verify consistency and satisfiability, so as to detect privacy conflicts between services and the user. Third, we present an algorithm for detecting privacy conflicts in the process of cloud service composition and demonstrate the correctness and feasibility of the method by case study and experimental analysis. Our method can reduce the risk of sensitive user privacy information being illegally used and propagated by outsourcing services. At the same time, it avoids exceptions caused by privacy conflicts during service composition and improves the trust degree of cloud service providers.
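
    The paper models policies in description logic and checks them with the Pellet reasoner; the sketch below only illustrates the same kind of conflict with plain set and threshold comparisons over hypothetical data items, as a highly simplified stand-in.

```python
# Simplified privacy-conflict check: a conflict arises when a service requires
# data the user does not allow, or retains it longer than the user permits.
def detect_conflicts(service_policy, user_preference):
    conflicts = []
    for item, usage in service_policy.items():
        allowed = user_preference.get(item)
        if allowed is None:
            conflicts.append((item, "collection not permitted by user"))
        elif usage["retention_days"] > allowed["retention_days"]:
            conflicts.append((item, "retention exceeds user preference"))
    return conflicts

# Hypothetical service privacy policy and user privacy preference.
service = {"email":    {"retention_days": 365},
           "location": {"retention_days": 30}}
user    = {"email":    {"retention_days": 90}}

print(detect_conflicts(service, user))
# [('email', 'retention exceeds user preference'),
#  ('location', 'collection not permitted by user')]
```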

    Optimization of Access Control Policies

    Organizations undertake complex and costly projects to model high-quality Access Control Policies (ACPs). Once built, these policies must be maintained and managed in an ongoing process to keep their quality high. Insufficient maintenance leads to inaccurate authorization decisions and increases the policies' administrative effort and susceptibility to errors. While the initial modeling of ACPs has received significant research interest, their optimization has not yet been covered as broadly. This work provides a theoretical foundation for ACP quality and its optimization. Furthermore, it analyzes how existing research addresses the optimization of ACPs with regard to six crucial optimization dimensions. It presents a structured literature survey tracing these optimization dimensions, the contributed research artifacts, and their data requirements. Building on this literature catalogue, the work elaborates on inaccuracies in user permission assignments, data availability, minimal perturbation, and recommendation-based optimization.