
    Efficient data structures for local inconsistency detection in firewall ACL updates

    Filtering is a very important issue in next generation networks. These networks consist of a relatively high number of resource-constrained devices and have special features, such as management of frequent topology changes. At each topology change, the access control policy of all nodes of the network must be automatically modified. To manage these access control requirements, firewalls have been proposed by several researchers. However, many of the problems of traditional firewalls are aggravated by the particularities of these networks, as is the case of ACL consistency. A firewall ACL with inconsistencies generally implies design errors, and indicates that the firewall is accepting traffic that should be denied or vice versa. This can result in severe problems such as unwanted access to services, denial of service, overflows, etc. Detecting inconsistencies is of extreme importance in the context of highly sensitive applications (e.g. health care). We propose a local inconsistency detection algorithm and data structures to prevent automatic rule updates that can cause inconsistencies. The proposal has very low computational complexity, as both theoretical and experimental results show, and thus can be used in real-time environments. Ministerio de Educación y Ciencia DPI2006-15476-C02-0
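    As an illustration of the kind of check a local detection algorithm performs on each rule update, the sketch below flags a candidate rule that overlaps an existing rule with the opposite action. The rule fields, the range representation and the pairwise test are assumptions for illustration, not the data structures proposed in the paper.

    ```python
    from dataclasses import dataclass
    from typing import Tuple

    Range = Tuple[int, int]  # inclusive [low, high], e.g. addresses or ports as integers

    @dataclass
    class Rule:
        src: Range    # source address range
        dst: Range    # destination address range
        dport: Range  # destination port range
        action: str   # "allow" or "deny"

    def overlap(a: Range, b: Range) -> bool:
        return a[0] <= b[1] and b[0] <= a[1]

    def inconsistent(new: Rule, old: Rule) -> bool:
        """A newly inserted rule is an inconsistency candidate against an existing
        rule when every selector overlaps but the actions differ."""
        return (new.action != old.action
                and overlap(new.src, old.src)
                and overlap(new.dst, old.dst)
                and overlap(new.dport, old.dport))

    acl = [Rule((0, 2**32 - 1), (10, 20), (80, 80), "deny")]
    candidate = Rule((5, 15), (12, 18), (80, 443), "allow")
    print(any(inconsistent(candidate, r) for r in acl))  # True: overlapping selectors, opposite action
    ```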

    Efficient algorithms and abstract data types for local inconsistency isolation in firewall ACLs

    Writing and managing firewall ACLs are hard, tedious, time-consuming and error-prone tasks for a wide range of reasons. During these tasks, inconsistent rules can be introduced. An inconsistent firewall ACL generally implies a design fault, and indicates that the firewall is accepting traffic that should be denied or vice versa. This can result in severe problems such as unwanted access to services, denial of service, overflows, etc. However, it is the administrator who ultimately decides whether an inconsistent rule is a fault or not. Although many algorithms to detect and manage inconsistencies in firewall ACLs have been proposed, they have drawbacks regarding different aspects of the consistency diagnosis problem, which can prevent their use in a wide range of real-life situations. In this paper, we review these algorithms along with their drawbacks, and propose a new divide-and-conquer algorithm that uses specialized abstract data types. The proposed algorithm returns consistency results over the original ACL. Its computational complexity is better than that of the current best algorithm for inconsistency isolation, as experimental results also show. Ministerio de Educación y Ciencia DIP2006-15476-C02-0
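    To picture the divide-and-conquer step, the toy sketch below groups rules whose selectors overlap into independent clusters, so that each cluster can be diagnosed for inconsistencies in isolation. The single-selector rule model and the union-find grouping are illustrative assumptions, not the abstract data types used in the paper.

    ```python
    # Hypothetical illustration: cluster rules that overlap (here on a single
    # port-range selector) so each cluster can be diagnosed independently.
    def find(parent, i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def clusters(rules):
        parent = list(range(len(rules)))
        for i, (lo_i, hi_i, _) in enumerate(rules):
            for j in range(i + 1, len(rules)):
                lo_j, hi_j, _ = rules[j]
                if lo_i <= hi_j and lo_j <= hi_i:  # ranges overlap
                    parent[find(parent, i)] = find(parent, j)
        groups = {}
        for i in range(len(rules)):
            groups.setdefault(find(parent, i), []).append(rules[i])
        return list(groups.values())

    acl = [(80, 80, "deny"), (70, 90, "allow"), (443, 443, "allow")]
    print(clusters(acl))  # [[(80, 80, 'deny'), (70, 90, 'allow')], [(443, 443, 'allow')]]
    ```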

    A FIREWALL MODEL OF FILE SYSTEM SECURITY

    File system security is fundamental to the security of UNIX and Linux systems, since in these systems almost everything is in the form of a file. To protect system files and other sensitive user files from unauthorized accesses, certain security schemes are chosen and used by different organizations in their computer systems. A file system security model provides a formal description of a protection system. Each security model is associated with specified security policies which focus on one or more of the security principles: confidentiality, integrity and availability. The security policy is not only about “who” can access an object, but also about “how” a subject can access an object. To enforce the security policies, each access request is checked against the specified policies to decide whether it is allowed or rejected. The current protection schemes in UNIX/Linux systems focus on access control. Besides the basic access control scheme of the system itself, which includes permission bits, the setuid and seteuid mechanism and the root account, there are other protection models, such as Capabilities, Domain Type Enforcement (DTE) and Role-Based Access Control (RBAC), supported and used in certain organizations. These models protect the confidentiality of the data directly; the integrity of the data is protected indirectly by only allowing trusted users to operate on the objects. The access control decisions of these models depend on either the identity of the user or the attributes of the process the user can execute, and on the attributes of the objects. Adoption of these sophisticated models has been slow; this is likely due to the enormous complexity of specifying controls over a large file system and the need for system administrators to learn a new paradigm for file protection. We propose a new security model: the file system firewall. It adapts the familiar network firewall protection model, used to control the data that flows between networked computers, to file system protection. This model can support access control decisions based on any system-generated attributes about the access requests, e.g., time of day. The access control decisions are not based on a single entity, such as the account in traditional discretionary access control or the domain name in DTE; in the file system firewall, access decisions are made upon situations involving multiple entities. A situation is programmable with predicates on the attributes of the subject, the object and the system. The file system firewall specifies the appropriate actions for these situations. We implemented a prototype of the file system firewall on SUSE Linux. Preliminary results of performance tests on the prototype indicate that the runtime overhead is acceptable. We compared the file system firewall with TE in SELinux to show that the firewall model can accommodate many other access control models. Finally, we show the ease of use of the firewall model. When the firewall is restricted to a specified part of the system, all other resources are unaffected, which enables a relatively smooth adoption. This, and the fact that it is a familiar model to system administrators, will facilitate adoption and correct use. The user study we conducted on traditional UNIX access control, SELinux and the file system firewall confirmed this: beginner users found the firewall model easier to use and faster to learn than the traditional UNIX access control scheme and SELinux.
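    A minimal sketch of the "situation" idea follows: each rule pairs a predicate over subject, object and system attributes with an action. The rule format, attribute names and the time-of-day condition are invented for illustration and are not the prototype's actual interface.

    ```python
    # Each rule pairs a programmable "situation" (a predicate over subject,
    # object and system attributes) with the action to take when it holds.
    rules = [
        # Deny writes under /etc outside working hours, regardless of who asks.
        (lambda s, o, sys: o["path"].startswith("/etc")
                           and s["op"] == "write"
                           and not (9 <= sys["hour"] < 17), "deny"),
        # Otherwise fall through to the ordinary permission-bit check.
        (lambda s, o, sys: True, "allow_if_mode_permits"),
    ]

    def decide(subject, obj, system):
        for situation, action in rules:
            if situation(subject, obj, system):
                return action
        return "deny"  # default-deny if no situation matches

    request = {"op": "write", "uid": 1000}
    target = {"path": "/etc/passwd"}
    system_state = {"hour": 22}  # e.g. a request arriving at 22:00
    print(decide(request, target, system_state))  # deny
    ```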

    A comparative analysis of cyber-threat intelligence sources, formats and languages

    The sharing of cyber-threat intelligence is an essential part of the multi-layered tools used to protect systems and organisations from various threats. Structured standards, such as STIX, TAXII and CybOX, were introduced to provide a common means of sharing cyber-threat intelligence and have subsequently been much heralded as the de facto industry standards. In this paper, we investigate the landscape of the available formats and languages, along with the publicly available sources of threat feeds, how these are implemented and their suitability for providing rich cyber-threat intelligence. We also analyse a sample of cyber-threat intelligence feeds, the type of data they provide and the issues found in aggregating and sharing the data. Moreover, the type of data supported by the various formats and languages is correlated with the data needs of several use cases related to typical security operations. The main conclusions drawn from our analysis suggest that many of the standards have a poor level of adoption and implementation, with providers opting for custom or traditional simple formats.
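    For a concrete sense of what structured sharing looks like, the snippet below prints roughly what a STIX 2.x-style indicator object contains when exchanged as JSON; the identifiers, timestamps and pattern value are made up, and the object is trimmed to commonly populated fields.

    ```python
    import json

    # Roughly what a STIX 2.x indicator looks like on the wire; all values
    # below are invented for illustration.
    indicator = {
        "type": "indicator",
        "spec_version": "2.1",
        "id": "indicator--d81f86b9-975b-4c0b-875e-810c5ad45a4f",
        "created": "2023-01-01T00:00:00.000Z",
        "modified": "2023-01-01T00:00:00.000Z",
        "name": "Known C2 address",
        "pattern": "[ipv4-addr:value = '198.51.100.7']",
        "pattern_type": "stix",
        "valid_from": "2023-01-01T00:00:00Z",
    }
    print(json.dumps(indicator, indent=2))
    ```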

    Rethinking Software Network Data Planes in the Era of Microservices

    The abstract is in the attachment.

    Design and Evaluation of Packet Classification Systems, Doctoral Dissertation, December 2006

    Although many algorithms and architectures have been proposed, the design of efficient packet classification systems remains a challenging problem. The diversity of filter specifications, the scale of filter sets, and the throughput requirements of high-speed networks all contribute to the difficulty. We need to review the algorithms from a high-level point of view in order to advance the study; this level of understanding can lead to significant performance improvements. In this dissertation, we evaluate several existing algorithms and present several new algorithms as well. The previous evaluation results for existing algorithms are not convincing because the evaluations have not been done in a consistent way. To resolve this issue, an objective evaluation platform needs to be developed. We implement and evaluate several representative algorithms with uniform criteria. The source code and the evaluation results are both published on a website to provide the research community with a benchmark for impartial and thorough algorithm evaluations. We propose several new algorithms to deal with different variations of the packet classification problem: (1) the Shape Shifting Trie algorithm for longest prefix matching, used in IP lookups or as a building block for general packet classification algorithms; (2) the Fast Hash Table lookup algorithm, used for exact flow matching; (3) the longest prefix matching algorithm using hash tables and tries, used in IP lookups or packet classification algorithms; (4) the 2D coarse-grained tuple-space search algorithm with controlled filter expansion, used for two-dimensional packet classification or as a building block for general packet classification algorithms; (5) the Adaptive Binary Cutting algorithm, used for general multi-dimensional packet classification. In addition to the algorithmic solutions, we also consider the TCAM hardware solution. In particular, we address the TCAM filter update problem for general packet classification and provide an efficient algorithm. Building upon the previous work, these algorithms significantly improve the performance of packet classification systems and set a solid foundation for further study.
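    As a reference point for the trie-based schemes mentioned above, a minimal, unoptimised binary trie for longest prefix matching might look like the sketch below; real structures such as the Shape Shifting Trie compress this layout far more aggressively, so treat it only as a baseline illustration.

    ```python
    class TrieNode:
        __slots__ = ("children", "next_hop")
        def __init__(self):
            self.children = [None, None]  # one child per bit value
            self.next_hop = None          # set if a prefix ends at this node

    def insert(root, prefix_bits, next_hop):
        """Insert a prefix given as a string of '0'/'1' bits."""
        node = root
        for b in prefix_bits:
            i = int(b)
            if node.children[i] is None:
                node.children[i] = TrieNode()
            node = node.children[i]
        node.next_hop = next_hop

    def lookup(root, addr_bits):
        """Return the next hop of the longest matching prefix, or None."""
        node, best = root, root.next_hop
        for b in addr_bits:
            node = node.children[int(b)]
            if node is None:
                break
            if node.next_hop is not None:
                best = node.next_hop
        return best

    root = TrieNode()
    insert(root, "10", "A")          # short prefix -> next hop A
    insert(root, "1011", "B")        # more specific prefix -> next hop B
    print(lookup(root, "10110001"))  # "B": the longest match wins
    ```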

    IMPROVING NETWORK POLICY ENFORCEMENT USING NATURAL LANGUAGE PROCESSING AND PROGRAMMABLE NETWORKS

    Computer networks are becoming more complex and challenging to operate, manage, and protect. As a result, network policies that define how network operators should manage the network are becoming more complex and nuanced. Unfortunately, network policies are often an undervalued part of network design, leaving network operators to guess at the intent of policies that are written and to fill in the gaps where policies don't exist. Organizations typically designate Policy Committees to write down the network policies in policy documents using high-level natural language. The policy documents describe both the acceptable and unacceptable uses of the network. Network operators then take responsibility for enforcing the policies and verifying whether the enforcement achieves the expected requirements. Network operators often encounter gaps and ambiguous statements when translating network policies into specific network configurations. An ill-structured network policy document may prevent network operators from implementing the true intent of the policies, and thus lead to incorrect enforcement. It is thus important to know the quality of the written network policies and to remove any ambiguity that may confuse the people who are responsible for reading and implementing them. Moreover, there is a need not only to prevent policy violations from occurring but also to check for any policy violations that may have occurred (i.e., the prevention mechanisms failed in some way and unwanted packets or network traffic were somehow allowed to enter the network). In addition, the emergence of programmable networks provides flexible network control, and enforcing network routing policies in an environment that contains both traditional and programmable networks becomes a challenge. This dissertation presents a set of methods designed to improve network policy enforcement. We begin by describing the design and implementation of a new Network Policy Analyzer (NPA), which analyzes the written quality of network policies and outputs a quality report that can be given to Policy Committees to improve their policies; suggestions on how to write good network policies are also provided. We also present the Network Policy Conversation Engine (NPCE), a chatbot that lets network operators ask questions in natural language to check whether there is any policy violation in the network. NPCE takes advantage of recent advances in Natural Language Processing (NLP) and modern database solutions to convert natural language questions into the corresponding database queries. Next, we discuss our work towards understanding how Internet ASes connect with each other at third-party locations such as IXPs, and their business relationships; such a graph is needed to write routing policies and to calculate available routes in the future. Lastly, we present how we successfully manage network policies in a hybrid network composed of both SDN and legacy devices, making network services available over the entire network.
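    The NPCE idea of turning an operator's question into a database query can be pictured with a deliberately tiny, pattern-based sketch; the question template, table and column names below are invented, and the real system relies on NLP models rather than a regular expression.

    ```python
    import re

    # Toy translation of one question template into a flow-record query.
    # Real NL-to-query systems use learned models; this regex stands in for them.
    QUESTION = re.compile(
        r"did any host talk to (?P<ip>\d+\.\d+\.\d+\.\d+) on port (?P<port>\d+)",
        re.IGNORECASE,
    )

    def to_sql(question: str) -> str:
        m = QUESTION.search(question)
        if not m:
            raise ValueError("unsupported question")
        return (
            "SELECT src_ip, COUNT(*) AS flows FROM flow_records "
            f"WHERE dst_ip = '{m['ip']}' AND dst_port = {m['port']} "
            "GROUP BY src_ip;"
        )

    print(to_sql("Did any host talk to 203.0.113.9 on port 23?"))
    ```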

    Formal assurance of security policies in automated network orchestration (SDN/NFV)

    The abstract is in the attachment. Author: Yusupov, Jalolliddi