
    Distributed Security Policy Analysis

    Computer networks have become an important part of modern society, and computer network security is crucial for their correct and continuous operation. The security aspects of computer networks are defined by network security policies. The term policy, in general, is defined as "a definite goal, course or method of action to guide and determine present and future decisions". In the context of computer networks, a policy is "a set of rules to administer, manage, and control access to network resources". Network security policies are enforced by special network appliances, so-called security controls; different types of security policies are enforced by different types of security controls. Network security policies are hard to manage, and errors are quite common, because network administrators do not have a good overview of the network, the defined policies, and the interactions between them.

    Researchers have proposed different techniques for network security policy analysis, which aim to identify errors within policies so that administrators can correct them. There are three solution approaches: anomaly analysis, reachability analysis, and policy comparison. Anomaly analysis searches for potential semantic errors within policy rules and can also be used to identify possible policy optimizations. Reachability analysis evaluates the allowed communication within a computer network and can determine whether a certain host can reach a service or a set of services. Policy comparison compares two or more network security policies and represents the differences between them in an intuitive way.

    Although research in this field has been carried out for over a decade, there is still no clear answer on how to reduce policy errors. The different analysis techniques have their pros and cons, but none of them is a sufficient solution on its own; they are mainly complements to each other, as one analysis technique finds policy errors which remain unknown to another. Therefore, a complete analysis of a computer network requires instantiating multiple models. An analysis model that can perform all three analysis techniques is desirable and has four main advantages. Firstly, the model can cover the greatest number of possible policy errors. Secondly, the computational overhead of instantiating the model is incurred only once. Thirdly, research effort is reduced, because improvements and extensions to the model apply to all three analysis types at the same time. Fourthly, new algorithms can be evaluated by comparing their performance directly to each other.

    This work proposes a new analysis model which is capable of performing all three analysis techniques. Security policies and the network topology are represented by the so-called Geometric-Model, a formal model based on set theory and a geometric interpretation of policy rules. Policy rules are defined according to the condition-action format: if the condition holds, then the action is applied. A security policy is expressed as a set of rules, a resolution strategy which selects the action when more than one rule applies, external data used by the resolution strategy, and a default action in case no rule applies. This work also introduces the concept of the Equivalent-Policy, which is calculated from the network topology and the policies involved; all analysis techniques are then performed on the Equivalent-Policy, with much higher performance.
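    As a concrete illustration of this rule format, the following minimal Python sketch models a policy whose rule conditions are hyperrectangles (one interval per packet field) and whose resolution strategy is first-match; the field names, interval encoding, and first-match strategy are illustrative assumptions, not the thesis's exact definitions.

```python
# Minimal sketch of condition-action rules with a first-match resolution
# strategy; field names and the interval encoding are assumptions.
from dataclasses import dataclass

@dataclass
class Rule:
    condition: dict   # field -> (low, high) closed interval; a hyperrectangle
    action: str       # e.g. "ALLOW" or "DENY"

    def matches(self, packet: dict) -> bool:
        # The rule applies iff the packet lies inside the hyperrectangle.
        return all(lo <= packet[f] <= hi for f, (lo, hi) in self.condition.items())

@dataclass
class Policy:
    rules: list            # ordered list of Rule objects
    default_action: str    # applied when no rule matches

    def evaluate(self, packet: dict) -> str:
        # First-match resolution strategy: the first applicable rule wins.
        for rule in self.rules:
            if rule.matches(packet):
                return rule.action
        return self.default_action

policy = Policy(
    rules=[
        Rule({"src": (10, 20), "dport": (80, 80)}, "ALLOW"),
        Rule({"src": (0, 255), "dport": (0, 1023)}, "DENY"),
    ],
    default_action="DENY",
)
print(policy.evaluate({"src": 15, "dport": 80}))    # -> ALLOW (first rule)
print(policy.evaluate({"src": 15, "dport": 8080}))  # -> DENY (default action)
```

    Interval conditions of this kind are what make a geometric (hyperrectangle) interpretation and set-theoretic manipulation of rules possible.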
    The Equivalent-Policy requires a precomputation phase for two reasons. Firstly, security policies which modify the traffic must be transformed to gain linear behaviour. Secondly, far fewer rules are required to represent the global behaviour of a set of policies than the sum of the rules in the involved policies.

    The analysis model can handle the most common security policies and is designed to be extensible to future security policy types. As already mentioned, the Geometric-Model can represent all types of security policies, but the calculation of the Equivalent-Policy has some small dependencies on the details of the different policy types. Therefore, the computation of the Equivalent-Policy must be tweaked to support new types. Since both the model and the computation of the Equivalent-Policy were designed to be extensible, the effort required to introduce a new security policy type is minimal. The anomaly analysis can be performed on computer networks containing different security policies. The policy comparison can perform an Implementation-Verification between high-level security requirements and an entire computer network containing different security policies, as well as a Change-Impact-Analysis of such a network.

    The proposed model is implemented in a working prototype, and a performance evaluation has been carried out. The performance of the implementation is more than sufficient for real scenarios. Although the calculation of the Equivalent-Policy requires a significant amount of time, it is still manageable and is required only once. The execution of the different analysis techniques is fast, and the results are generally calculated in real time. The implementation also exposes an API for future integration into other frameworks or software packages. Based on this API, a complete tool was implemented, with a graphical user interface and additional features.

    Security Policy Management for a Cooperative Firewall

    The increasing popularity of Internet services and the growing number of connected devices, along with the introduction of IoT, are making society ever more dependent on the availability of Internet services. We therefore need to ensure a minimum level of security and reliability of these services. Ultra-Reliable Communication (URC) refers to the availability of life- and business-critical services nearly 100 percent of the time. These requirements are an integral part of the upcoming 5th generation (5G) mobile networks; 5G is the future mobile network and, at the same time, part of the future Internet. As an extension of the conventional communication architecture, 5G needs to provide ultra-high reliability of services: it needs to perform better than currently available solutions in terms of security, confidentiality, integrity, and reliability, and it should mitigate the risks of Internet attacks and malicious activities.

    To achieve these requirements, the Customer Edge Switching (CES) architecture is presented. It proposes that the Internet user's agent in the provider network should have prior information about the user's expected traffic, so that it can mitigate most attacks and allow only expected communication between hosts. Acting as the user's agent, CES executes the communication security policies of each user or device. A policy describes with fine granularity what traffic the device expects. Policies are sourced as automatically as possible but can also be modified by the user. Stored policies follow the mobile user and are executed at the network edge node performing Customer Edge Switch functions, stopping all unexpected traffic from entering the mobile network.

    State-of-the-art mobile network architectures utilize only the Quality of Service (QoS) policies of users. This thesis motivates extending the current architecture to accommodate the security and communication policies of end users. The thesis presents an experimental implementation of a policy management system, termed Security Policy Management (SPM), to handle the above-mentioned policies. We describe the architecture and implementation of SPM and its integration with Customer Edge Switching. Additionally, SPM has been evaluated in terms of the performance, scalability, reliability, and security offered via 5G customer edge nodes. Finally, the system has been analyzed for feasibility in the 5G architecture.
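    As a rough illustration of such a fine-grained, default-deny policy, the sketch below shows how a per-device list of expected flows might be checked at a CES node; the policy fields and values are hypothetical, not SPM's actual schema.

```python
# Hypothetical per-device communication policy of the kind SPM might store
# and a CES node might enforce; all field names and values are assumptions.

EXPECTED = [  # traffic the subscriber's policy declares as expected
    {"proto": "tcp", "remote": "cdn.example.com", "port": 443},
    {"proto": "udp", "remote": "dns.example.com", "port": 53},
]

def admit(flow: dict) -> bool:
    """Default-deny: a flow enters the mobile network only if some
    policy entry explicitly expects it."""
    return any(all(flow.get(k) == v for k, v in entry.items())
               for entry in EXPECTED)

print(admit({"proto": "tcp", "remote": "cdn.example.com", "port": 443}))  # True
print(admit({"proto": "tcp", "remote": "evil.example.net", "port": 25}))  # False
```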

    Web-based monitoring tools for Resistive Plate Chambers in the CMS experiment at CERN

    The Resistive Plate Chambers (RPC) are used in the CMS experiment at the trigger level and also in the standard offline muon reconstruction. In order to guarantee the quality of the collected data and to monitor the detector performance online, a set of tools has been developed in CMS that is heavily used in the RPC system. The Web-Based Monitoring (WBM) is a set of Java servlets that allows users to check the performance of the hardware during data taking, providing distributions and history plots of all the parameters. The functionalities of the RPC WBM monitoring tools are presented, along with studies of the detector performance as a function of growing luminosity and of environmental conditions tracked over time.
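    The WBM tools themselves are Java servlets backed by the CMS online databases; purely as an illustration of the kind of history plot they provide, here is a standalone Python sketch with invented sample data.

```python
# Standalone illustration of a history plot of a monitored detector
# parameter; the data and units are invented, not real RPC measurements.
import matplotlib.pyplot as plt

timestamps = range(10)                        # stand-in for data-taking time
chamber_current = [2.1, 2.2, 2.1, 2.4, 2.6,   # stand-in for a monitored
                   2.5, 2.8, 2.9, 3.0, 3.1]   # RPC parameter

plt.plot(timestamps, chamber_current, marker="o")
plt.xlabel("time (arbitrary units)")
plt.ylabel("chamber current (arbitrary units)")
plt.title("History plot of a monitored RPC parameter")
plt.show()
```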

    Factors Affecting Perceptions of Cybersecurity Readiness Among Workgroup IT Managers

    The last decade has seen a dramatic increase in the number, frequency, and scope of cyberattacks, both in the United States and abroad. This upward trend necessitates that a significant aspect of any organization's information systems strategy involves having a strong cybersecurity profile. Inherent in such a posture is the need for IT managers who are experts in their field, who are willing and able to employ best practices, and who educate their users. Furthermore, IT managers need to be aware of the technology landscape in and around their organizations. After many years of cybersecurity research, large corporations have come to understand these factors implicitly and have invested heavily in both technology and specialized personnel with the express aim of increasing their cybersecurity capabilities. However, large institutions are composed of smaller organizational units, which are not always adequately considered when examining the cybersecurity profile of the organization. This oversight is particularly common at colleges and universities, where IT managers who are not affiliated with the institution's central IT department employ their own information security strategies. Such strategies may or may not represent a threat to the institution's overall level of cybersecurity readiness.

    This research therefore examines the responses of workgroup IT managers employed at the school or department level at institutions of higher learning within the United States to determine their perceptions of their cybersecurity readiness. The conceptual model developed in this study is referred to as the Practice and Awareness Cybersecurity Readiness Model (PACRM). It examines the relationships between an IT manager's perceived readiness to detect, prevent, and recover from a cyberattack and four base factors: the manager's previous level of experience in cybersecurity, the extent of the manager's use of best practices, the manager's awareness of the network infrastructure in and around the organizational unit, and the degree to which the manager's supported user community is educated on topics related to information security. First, a survey instrument is proposed and validated. Then, a Confirmatory Factor Analysis (CFA) is conducted to examine the relationships between the observed variables and the underlying theoretical constructs. Finally, the model is tested using path analysis.

    The validated instrument will have obvious implications for both cybersecurity researchers and managers. Not only will it be available to other researchers, but it will also provide a metric by which practitioners can gauge their perceptions of their cybersecurity readiness. In addition, if the underlying model is found to have been correctly specified, it will provide a theoretical foundation on which to base future research that depends not on threats and deterrents but on raising the self-efficacy of the human resource.
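    To make the methodology concrete, the following hedged sketch specifies a PACRM-like measurement and structural model with the semopy SEM library for Python; the construct and indicator names are placeholders inferred from the abstract, not the study's actual survey items, and the data file is hypothetical.

```python
# Placeholder CFA + path-analysis specification in semopy's lavaan-like
# syntax; names and the data file are assumptions, not the study's own.
import pandas as pd
import semopy

spec = """
Readiness =~ detect + prevent + recover
Readiness ~ experience + best_practices + infra_awareness + user_education
"""

df = pd.read_csv("survey_responses.csv")   # hypothetical survey data file
model = semopy.Model(spec)
model.fit(df)
print(model.inspect())                     # factor loadings and path coefficients
```

    The first line of the specification is the measurement model (readiness to detect, prevent, and recover as indicators of one latent construct); the second is the structural model linking the four base factors to that construct.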

    Detecting Abnormal Social Robot Behavior through Emotion Recognition

    Sharing characteristics with both the Internet of Things and the Cyber-Physical Systems categories, a new type of device has arrived to claim a third category and raise its very own privacy concerns. Social robots are on the market, asking consumers to make them part of their daily routines and interactions. While they range in their level and method of communication with users, all social robots are able to collect, share, and analyze a great variety and a large volume of personal data.

    In this thesis, we focus the community's attention on this emerging area of interest for privacy and security research. We discuss the likely privacy issues, comment on current defense mechanisms that are applicable to this new category of devices, outline new forms of attack that are made possible through social robots, highlight paths that research on consumer perceptions could follow, and propose a system for detecting abnormal social robot behavior based on emotion detection.
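    As a toy illustration of the underlying idea (not the thesis's actual system), the sketch below flags a robot's behavior as abnormal when the emotion it expresses deviates too far from what is expected in the current interaction context; the contexts, emotion scores, and threshold are all invented.

```python
# Toy abnormality check: compare observed emotion scores against the
# expected profile for the interaction context; all values are invented.

EXPECTED = {                      # expected emotion profile per context
    "greeting": {"joy": 0.8, "anger": 0.0, "fear": 0.0},
    "reminder": {"joy": 0.3, "anger": 0.0, "fear": 0.0},
}

def is_abnormal(context: str, observed: dict, threshold: float = 0.5) -> bool:
    expected = EXPECTED[context]
    # L1 distance between observed and expected emotion scores
    distance = sum(abs(observed.get(e, 0.0) - v) for e, v in expected.items())
    return distance > threshold

print(is_abnormal("greeting", {"joy": 0.7, "anger": 0.1}))  # False: near expected
print(is_abnormal("greeting", {"joy": 0.1, "anger": 0.9}))  # True: far from expected
```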

    Why (and How) Networks Should Run Themselves

    The proliferation of networked devices, systems, and applications that we depend on every day makes managing networks more important than ever. The increasing security, availability, and performance demands of these applications suggest that these increasingly difficult network management problems be solved in real time, across a complex web of interacting protocols and systems. Alas, just as the importance of network management has increased, the network has grown so complex that it is seemingly unmanageable. In this new era, network management requires a fundamentally new approach. Instead of optimizations based on closed-form analysis of individual protocols, network operators need data-driven, machine-learning-based models of end-to-end and application performance based on high-level policy goals and a holistic view of the underlying components. Instead of anomaly detection algorithms that operate on offline analysis of network traces, operators need classification and detection algorithms that can make real-time, closed-loop decisions. Networks should learn to drive themselves. This paper explores this concept, discussing how we might attain this ambitious goal by more closely coupling measurement with real-time control and by relying on learning for inference and prediction about a networked application or system, rather than on closed-form analysis of individual protocols.
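    A minimal sketch of the measure-infer-act loop the paper advocates, under placeholder assumptions: a learned model (here an ordinary linear regression on toy data) predicts application quality from live measurements, and the controller reconfigures when the prediction falls below a target. None of this reflects the paper's own design.

```python
# Hypothetical closed-loop control driven by a learned performance model.
from sklearn.linear_model import LinearRegression

# Offline step: learn a model mapping network measurements to application
# quality (toy training data: [throughput_mbps, rtt_ms] -> quality score).
X = [[50, 20], [30, 60], [10, 120], [40, 40]]
y = [4.5, 3.5, 1.5, 4.0]
model = LinearRegression().fit(X, y)

TARGET_QUALITY = 3.0

def control_step(measurement):
    """One closed-loop iteration: predict from live measurements, then act."""
    predicted = model.predict([measurement])[0]
    if predicted < TARGET_QUALITY:
        return "reconfigure"   # e.g. reroute traffic or adjust bitrate
    return "no-op"

print(control_step([45, 25]))   # healthy network -> likely "no-op"
print(control_step([12, 110]))  # degraded network -> likely "reconfigure"
```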