A model for the analysis of security policies in service function chains
Two emerging architectural paradigms, i.e., Software Defined Networking (SDN)
and Network Function Virtualization (NFV), enable the deployment and management
of Service Function Chains (SFCs). An SFC is an ordered sequence of abstract
Service Functions (SFs), e.g., firewalls, VPN gateways, traffic monitors, that
packets have to traverse in the route from source to destination. While this
appealing solution offers significant advantages in terms of flexibility, it
also introduces new challenges such as the correct configuration and ordering
of SFs in the chain to satisfy overall security requirements. This paper
presents a formal model conceived to enable the verification of correct policy
enforcements in SFCs. Software tools based on the model can then be designed to
cope with unwanted network behaviors (e.g., security flaws) deriving from
incorrect interactions of SFs in the same SFC.
Increasing resilience of ATM networks using traffic monitoring and automated anomaly analysis
Systematic network monitoring can be the cornerstone for
the dependable operation of safety-critical distributed
systems. In this paper, we present our vision for informed
anomaly detection through network monitoring and
resilience measurements to increase the operators'
visibility of ATM communication networks. We raise the
question of how to determine the optimal level of
automation in this safety-critical context, and we present a
novel passive network monitoring system that can reveal
network utilisation trends and traffic patterns in diverse
timescales. Using network measurements, we derive
resilience metrics and visualisations to enhance the
operators' knowledge of the network and traffic behaviour,
and allow for network planning and provisioning based on
informed what-if analysis.
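One simple form such automated anomaly analysis could take (a sketch under our own assumptions, not the paper's actual system) is flagging measurement intervals whose utilisation deviates sharply from a trailing baseline:

```python
from statistics import mean, stdev

def flag_anomalies(samples, window=5, k=3.0):
    """Return indices of samples deviating > k sigma from the trailing window."""
    flagged = []
    for i in range(window, len(samples)):
        base = samples[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and abs(samples[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

# Steady link utilisation (in %) with one spike at index 8.
util = [10, 11, 10, 12, 11, 10, 11, 10, 95, 11]
print(flag_anomalies(util))  # -> [8]
```

Running the same detector at several window sizes corresponds to the diverse timescales mentioned above.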
Modelling and Analysis of Network Security Policies
Nowadays, computers and network communications have a pervasive presence in all our daily activities. Their correct configuration in terms of security is becoming more and more complex due to the growing number and variety of services present in a network.
Generally, the security configuration of a computer network is dictated by specifying the policies of the security controls (e.g. firewall, VPN gateway)
in the network. This implies that the specification of the network security policies is a crucial step to avoid errors in network configuration (e.g., blocking
legitimate traffic, permitting unwanted traffic or sending insecure data).
In the literature, an anomaly is an incorrect policy specification that an administrator may introduce in the network. In this thesis, we indicate as policy anomaly any conflict (e.g. two triggered policy rules enforcing contradictory actions), error (e.g. a policy cannot be enforced because it requires a cryptographic algorithm not supported by the security controls) or sub-optimization (e.g. redundant policies) that may arise in the policy specification phase.
Security administrators, thus, have to face the hard job of correctly specifying the policies, which requires a high level of competence. Several studies have
confirmed, in fact, that many security breaches and breakdowns are attributable to administrators' errors.
Several approaches have been proposed to analyze the presence of anomalies among policy rules, in order to enforce a correct security configuration. However, we have identified two limitations of such approaches. On the one hand, current literature identifies only the anomalies among policies of a single security technology (e.g., IPsec, TLS), while a network is generally configured with many technologies. On the other hand, existing approaches work on a single policy type, also named domain (e.g., filtering, communication protection). Unfortunately, the complexity of real systems is not self-contained, and each
network security control may affect the behavior of other controls in the same network.
The objective of this PhD work was to investigate novel approaches for modelling security policies and their anomalies, and formal techniques of anomaly analysis. We present in this dissertation our contributions to the current policy analysis state of the art and the achieved results.
A first contribution was the definition of a new class of policy anomalies, i.e. the inter-technology anomalies, which arise in a set of policies of multiple
security technologies. We also provided a formal model able to detect these new types of anomalies. One of the results achieved by applying the inter-technology analysis to communication protection policies was the categorization of twelve new types of anomalies. The second result of this activity was derived from an empirical assessment that proved the practical significance of detecting such new anomalies.
The second contribution of this thesis was the definition of a new type of policy analysis, named inter-domain analysis, which identifies any
anomaly that may arise among different policy domains. We improved the state of the art by proposing a model to detect the inter-domain
anomalies, which is a generalization of the aforementioned inter-technology model. In particular, we defined the Unified Model for Policy Analysis (UMPA)
to perform the inter-domain analysis, extending the analysis model applied to a single policy domain to a comprehensive analysis of anomalies among many
policy domains. The result of this last part of our dissertation was to improve the effectiveness of the analysis process. Thanks to the inter-domain analysis,
indeed, administrators can detect, in a simple and customizable way, a larger set of anomalies than they could detect by running any single
model individually.
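The flavour of inter-domain analysis can be conveyed with an illustrative sketch (this is our own simplification, not the UMPA formalism): rules from different domains are reduced to (match-set, action) pairs and compared pairwise, reporting overlapping matches with contradictory actions.

```python
from itertools import combinations

# Each rule matches a set of destination ports (a crude stand-in for a
# multi-dimensional matching condition) and enforces an action in some domain.
RULES = [
    {"id": "fw1",  "domain": "filtering",  "ports": set(range(1, 1024)), "action": "deny"},
    {"id": "fw2",  "domain": "filtering",  "ports": {443},               "action": "allow"},
    {"id": "vpn1", "domain": "protection", "ports": {443, 500},          "action": "encrypt"},
]

# Action pairs that cannot both be honoured for the same traffic.
CONTRADICTORY = {("deny", "allow"), ("allow", "deny"),
                 ("deny", "encrypt"), ("encrypt", "deny")}

def inter_domain_anomalies(rules):
    """Report rule pairs whose match sets overlap but whose actions conflict."""
    out = []
    for a, b in combinations(rules, 2):
        if a["ports"] & b["ports"] and (a["action"], b["action"]) in CONTRADICTORY:
            out.append((a["id"], b["id"]))
    return out

print(inter_domain_anomalies(RULES))
# -> [('fw1', 'fw2'), ('fw1', 'vpn1')]: fw1 denies traffic that fw2 allows,
#    and also traffic that vpn1 (a different domain) is supposed to protect
```

Note that the second conflict crosses the filtering/protection boundary, which is exactly the kind of anomaly a single-domain analysis would miss.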
Multimedia delivery in the future internet
The term "Networked Media" implies that all kinds of media, including text, images, 3D graphics, audio
and video, are produced, distributed, shared, managed and consumed on-line through various networks,
such as the Internet, fibre, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted
with a bewildering range of media, services and applications and of technological innovations concerning
media formats, wireless networks, terminal types and capabilities. And there is little evidence that the pace
of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more
than 100 million users have downloaded at least one (multi)media file, and over 47 million of them do so
regularly, searching across more than 160 exabytes of content. In the near future these numbers are expected
to rise exponentially: Internet content is expected to grow by at least a factor of six, to more
than 990 exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged
that in a near- to mid-term future, the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalized
way, improving citizens' quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer in-network
adaptation, machine-to-machine communication (including RFIDs), rich 3D content as well as
community networks and the use of peer-to-peer (P2P) overlays are expected to generate new models of
interaction and cooperation, and be able to support enhanced perceived quality-of-experience (PQoE) and
innovative applications "on the move", like virtual collaboration environments, personalised
services/media, virtual sport groups, on-line gaming, and edutainment. In this context, the interaction with content
combined with interactive/multimedia search capabilities across distributed repositories, opportunistic P2P
networks and the dynamic adaptation to the characteristics of diverse mobile terminals are expected to
contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects, in Framework Program 6 (FP6)
and Framework Program 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed in this white paper aiming to describe the status, the state-of-the art, the challenges and the way
ahead in the area of Content Aware media delivery platforms