A new approach to deploy a self-adaptive distributed firewall
Distributed firewall systems emerged to protect individual hosts against attacks originating from inside the network. In these systems, firewall rules are centrally created, then distributed to and enforced on all servers that compose the firewall, restricting which services will be available. However, this approach offers no protection against software vulnerabilities that leave network services open to attack, since firewalls usually do not inspect application protocols. Between the discovery of a vulnerability and the publication and application of patches there is an exposure window that should be kept as short as possible. In this context, this article presents the Self-Adaptive Distributed Firewall (SADF). Our approach monitors hosts and uses a vulnerability assessment system to detect vulnerable services, integrated with components capable of deciding on and applying firewall rules on the affected hosts. In this way, SADF can respond to vulnerabilities discovered on these hosts, helping to mitigate the risk of their exploitation. The system was evaluated in a simulated network environment, where the results demonstrate its viability.
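The reaction loop the abstract describes (scan, decide, enforce) can be pictured with a small sketch. The scan-report format, severity threshold, and iptables-style rule template below are illustrative assumptions, not SADF's actual interfaces.

```python
# Toy sketch of a monitor-decide-enforce step: map vulnerable
# services reported by a scanner to block rules for the host.
# Report fields and the 7.0 threshold are assumptions.

def rules_for_report(report):
    """Return iptables-style DROP rules for high-severity findings."""
    rules = []
    for finding in report:
        if finding["severity"] >= 7.0:  # assumed CVSS-like cutoff
            rules.append(f"-A INPUT -p tcp --dport {finding['port']} -j DROP")
    return rules

report = [
    {"service": "smtpd", "port": 25, "severity": 9.8},
    {"service": "sshd", "port": 22, "severity": 2.1},
]
print(rules_for_report(report))  # only the high-severity service is blocked
```

In a real deployment the decision component would also lift the rule once a patch is applied, closing the exposure window from both ends.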
Tortoise: Interactive System Configuration Repair
System configuration languages provide powerful abstractions that simplify
managing large-scale, networked systems. Thousands of organizations now use
configuration languages, such as Puppet. However, specifications written in
configuration languages can have bugs and the shell remains the simplest way to
debug a misconfigured system. Unfortunately, it is unsafe to use the shell to
fix problems when a system configuration language is in use: a fix applied from
the shell may cause the system to drift from the state specified by the
configuration language. Thus, despite their advantages, configuration languages
force system administrators to give up the simplicity and familiarity of the
shell.
This paper presents a synthesis-based technique that allows administrators to
use configuration languages and the shell in harmony. Administrators can fix
errors using the shell and the technique automatically repairs the higher-level
specification written in the configuration language. The approach (1) produces
repairs that are consistent with the fix made using the shell; (2) produces
repairs that are maintainable by minimizing edits made to the original
specification; (3) ranks and presents multiple repairs when relevant; and (4)
supports all shells the administrator may wish to use. We implement our
technique for Puppet, a widely used system configuration language, and evaluate
it on a suite of benchmarks under 42 repair scenarios. The top-ranked repair is
selected by humans 76% of the time and the human-equivalent repair is ranked
1.31 on average.

Comment: Published version in the proceedings of the IEEE/ACM International
Conference on Automated Software Engineering (ASE) 201
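The repair goal described above, adopting the administrator's shell fix while editing the specification minimally, can be illustrated with a toy sketch. A dict stands in for a Puppet manifest; the encoding and names are invented for illustration, not Tortoise's representation.

```python
# Toy analogue of synthesis-based repair: reconcile a high-level
# spec (dict standing in for a manifest) with a state change made
# "from the shell", counting how many keys had to be edited.

def repair(spec, shell_state):
    """Return (repaired spec, number of edits), keeping edits minimal."""
    repaired = dict(spec)
    edits = 0
    for key, value in shell_state.items():
        if repaired.get(key) != value:
            repaired[key] = value  # adopt the administrator's fix
            edits += 1
    return repaired, edits

spec = {"ensure": "running", "port": 80}
shell_state = {"port": 8080}  # admin changed the port via the shell
print(repair(spec, shell_state))  # one edit; the rest is preserved
```

The real system must additionally infer *which* specification fragment caused the observed state, and rank candidate repairs when several are consistent with the fix.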
Dynamic service chain composition in virtualised environment
Network Function Virtualisation (NFV) has contributed to improving the flexibility of network service provisioning and reducing the time to market of new services. NFV leverages virtualisation technology to decouple the software implementation of network appliances from the physical devices on which they run. However, with the emergence of this paradigm, providing data centre applications with adequate network performance becomes challenging. For instance, virtualised environments cause network congestion, decrease throughput and hurt the end-user experience. Moreover, applications usually communicate through multiple sequences of virtual network functions (VNFs), a.k.a. service chains, for policy enforcement and for performance and security enhancement, which increases the management complexity at the network level.
To address this problem, existing studies have proposed high-level approaches to VNF chaining and placement that improve service chain performance. They treat VNFs as homogeneous entities regardless of their specific characteristics, overlooking their distinct behaviour under traffic load and how their underlying implementation shapes resource usage. Our research aims to fill this gap by identifying particular patterns in production-grade and widely used VNFs, and by proposing a categorisation that helps reduce network latency along the chains.
Based on experimental evaluation, we have classified firewalls, NATs, IDS/IPSs and flow monitors into I/O-bound and CPU-bound functions. The former category is mainly sensitive to the throughput in packets per second, while the performance of the latter is primarily affected by the network bandwidth in bits per second. We then correlate the VNF category with the characteristics of the traversing traffic, which dictates how the service chains should be composed.
We propose a heuristic called Natif, for a VNF-Aware VNF insTantIation and traFfic distribution scheme, to reconcile the discrepancy in VNF requirements based on the category they belong to, and ultimately to reduce network latency. We have deployed Natif in an OpenStack-based environment and compared it to a network-aware VNF composition approach. Our results show a decrease in latency of around 188% on average without sacrificing throughput.
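The category-aware idea can be sketched as a scaling rule: I/O-bound VNFs are dimensioned on packet rate, CPU-bound ones on bit rate. The classification table and the per-instance capacities below are invented for illustration; they are not Natif's actual parameters.

```python
# Sketch: decide instance counts per VNF from the traffic profile,
# using the I/O-bound vs CPU-bound distinction from the abstract.
# Capacities (500 kpps, 1 Gb/s) are assumed, not measured values.

IO_BOUND = {"firewall", "nat"}       # sensitive to packets per second
CPU_BOUND = {"ids", "flow_monitor"}  # sensitive to bits per second

def instances_needed(vnf, pps, bps):
    """Return how many instances of a VNF this traffic profile needs."""
    if vnf in IO_BOUND:
        return max(1, pps // 500_000)        # assumed per-instance pps cap
    if vnf in CPU_BOUND:
        return max(1, bps // 1_000_000_000)  # assumed per-instance 1 Gb/s cap
    return 1

chain = ["firewall", "ids"]
profile = {"pps": 1_500_000, "bps": 2_000_000_000}
plan = {v: instances_needed(v, profile["pps"], profile["bps"]) for v in chain}
print(plan)
```

The same profile thus scales the two chain stages differently, which is the discrepancy the traffic-distribution scheme reconciles.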
Analysis and Coordination of Mixed-Criticality Cyber-Physical Systems
A Cyber-physical System (CPS) can be described as a network of interlinked, concurrent computational components that interact with the physical world. Such a system is usually reactive in nature and must satisfy strict timing requirements to guarantee correct behaviour. The components can be of mixed criticality, which implies different progress and communication models depending on whether the focus of a component lies on predictability or on resource efficiency.
In this dissertation I present a novel approach that bridges the gap between stream processing models and Labelled Transition Systems (LTSs). The former offer powerful tools to describe concurrent systems of usually simple components, while the latter can describe complex, reactive components and their mutual interaction. To bridge the two domains I introduce a novel LTS, the Synchronous Interface Automaton (SIA), which models the interaction protocol of a process via its interface and supports incrementally composing simple processes into more complex ones while preserving the system properties. Exploiting these properties, I introduce an analysis that identifies permanent blocking situations in a network of composed processes. SIAs are wrapped by the novel component-based coordination model Process Network with Synchronous Communication (PNSC), which describes a network of concurrent processes where multiple communication models and the co-existence and interaction of heterogeneous processes are supported through well-defined interfaces.
The work presented in this dissertation follows a holistic approach that spans from the theory of the underlying model to an instantiation of the model as a novel coordination language called Streamix. The language uses network operators to compose networks of concurrent processes in a structured and hierarchical way. The work is validated by a prototype implementation of a compiler and a Run-time System (RTS) that compiles a Streamix program and executes it on a platform with support for ISO C, POSIX threads, and a Linux operating system.
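The blocking analysis mentioned above can be pictured in miniature: compose two interface automata by synchronising on shared actions and flag reachable state pairs where no common action is enabled. The dict-of-dicts encoding of automata is an assumption for illustration only, not the SIA formalism.

```python
# Minimal product-automaton reachability: each automaton maps
# state -> {action: next_state}; a reachable pair with no shared
# enabled action is permanently blocked.

def compose_and_find_blocks(a, b, start):
    """DFS over the synchronised product; return blocked state pairs."""
    seen, frontier, blocked = set(), [start], []
    while frontier:
        s = frontier.pop()
        if s in seen:
            continue
        seen.add(s)
        moves = [
            (a[s[0]][act], b[s[1]][act])
            for act in a[s[0]].keys() & b[s[1]].keys()
        ]
        if not moves:
            blocked.append(s)  # no common action: permanent block
        frontier.extend(moves)
    return blocked

# Process A sends then expects an ack; process B only receives:
# after one synchronisation the pair (1, 1) cannot move.
A = {0: {"send": 1}, 1: {"ack": 0}}
B = {0: {"send": 1}, 1: {}}
print(compose_and_find_blocks(A, B, (0, 0)))
```

The actual SIA composition additionally distinguishes input, output and internal actions, which this sketch collapses into plain synchronisation.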
Teaching Specifications Using An Interactive Reasoning Assistant
The importance of verifiably correct software has grown enormously in recent years as software has become integral to the design of critical systems, including airplanes, automobiles, and medical equipment. Hence, the importance of solid analytical reasoning skills to complement basic programming skills has also increased. If developers cannot reason about the software they design, they cannot ensure the correctness of the resulting systems. And if these systems fail, the economic and human costs can be substantial. In addition to learning analytical reasoning principles as part of the standard Computer Science curriculum, students must be excited about learning these skills and engaged in their practice. Our approach to achieving these goals at the introductory level is based on the Test Case Reasoning Assistant (TCRA), interactive courseware that allows students to provide test cases that demonstrate their understanding of instructor-supplied interface specifications while receiving immediate feedback as they work. The constituent tools also enable instructors to rapidly generate graphs of student performance data to understand the progress of their classes. We evaluate the courseware using two case studies. The evaluation centers on understanding the impact of the tool on students' ability to read and interpret specifications.
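The core interaction can be sketched simply: a student expresses their reading of a specification as input/expected-output pairs, and the tool checks each pair against the spec and reports immediately. The spec function and feedback strings below are illustrative assumptions, not TCRA's interface.

```python
# Toy version of spec-based test-case checking with immediate
# feedback: each student case is (input, expected_output).

def check_cases(spec_fn, cases):
    """Return per-case feedback for student-supplied test cases."""
    return [
        "pass" if spec_fn(inp) == expected
        else f"fail: spec gives {spec_fn(inp)!r} for {inp!r}"
        for inp, expected in cases
    ]

# Instructor spec: absolute value. The third case misreads it,
# and the feedback pinpoints the misunderstanding.
student_cases = [(-3, 3), (0, 0), (-1, -1)]
print(check_cases(abs, student_cases))
```

The pedagogical point is that a failing case reveals a gap in the student's reading of the specification, not a bug in the implementation, which is supplied by the instructor.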
Automating the Generation of Cyber Range Virtual Scenarios with VSDL
A cyber range is an environment used for training security experts and
testing attack and defence tools and procedures. Usually, a cyber range
simulates one or more critical infrastructures that attacking (red) and
defending (blue) teams must compromise and protect, respectively. The
infrastructure can be physically assembled, but it is much more convenient to
rely on the Infrastructure as a Service (IaaS) paradigm. Although some modern
technologies support IaaS, the design and deployment of scenarios of interest
remain largely manual operations. As a consequence, it is common practice for a
cyber range to host only a few (sometimes a single) consolidated
scenarios. However, reusing the same scenario may significantly reduce the
effectiveness of the training and testing sessions. In this paper, we propose a
framework for automating the definition and deployment of arbitrarily complex
cyber range scenarios. The framework relies on the virtual scenario description
language (VSDL), i.e., a domain-specific language for defining high-level
features of the desired infrastructure while hiding low-level details. The
semantics of VSDL is given in terms of constraints that must be satisfied by
the virtual infrastructure. These constraints are then submitted to an SMT
solver for checking the satisfiability of the specification. If satisfiable,
the specification gives rise to a model that is automatically converted to a
set of deployment scripts to be submitted to the IaaS provider.
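The satisfiability step can be pictured with a brute-force stand-in for the SMT solver (a real pipeline would submit the constraints to a solver such as Z3). The variable domains and constraint set below are an invented toy scenario, not VSDL syntax.

```python
# Stand-in for the solver call: enumerate assignments over finite
# domains and return the first model satisfying every constraint,
# mirroring "satisfiable spec -> model -> deployment scripts".
from itertools import product

def solve(domains, constraints):
    """Return one satisfying assignment, or None if unsatisfiable."""
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        model = dict(zip(names, values))
        if all(c(model) for c in constraints):
            return model
    return None

# Toy scenario: choose VM counts so the red team has at least two
# attackable hosts while the total fits the IaaS quota.
domains = {"web_vms": range(1, 5), "db_vms": range(1, 5)}
constraints = [
    lambda m: m["web_vms"] >= 2,                 # attackable subnet
    lambda m: m["web_vms"] + m["db_vms"] <= 4,   # provider quota
]
print(solve(domains, constraints))
```

If `solve` returns a model, the model's values would parameterise the generated deployment scripts; if it returns `None`, the scenario specification is rejected as unsatisfiable.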
Controlled Data Sharing for Collaborative Predictive Blacklisting
Although sharing data across organizations is often advocated as a promising
way to enhance cybersecurity, collaborative initiatives are rarely put into
practice owing to confidentiality, trust, and liability challenges. In this
paper, we investigate whether collaborative threat mitigation can be realized
via a controlled data sharing approach, whereby organizations make informed
decisions as to whether or not, and how much, to share. Using appropriate
cryptographic tools, entities can estimate the benefits of collaboration and
agree on what to share in a privacy-preserving way, without having to disclose
their datasets. We focus on collaborative predictive blacklisting, i.e.,
forecasting attack sources based on one's logs and those contributed by other
organizations. We study the impact of different sharing strategies by
experimenting on a real-world dataset of two billion suspicious IP addresses
collected from DShield over two months. We find that controlled data sharing
yields up to 105% accuracy improvement on average, while also reducing the
false positive rate.

Comment: A preliminary version of this paper appears in DIMVA 2015. This is
the full version. arXiv admin note: substantial text overlap with
arXiv:1403.212
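The "estimate the benefit before sharing" idea reduces to measuring how much two organisations' attack logs overlap. The sketch below compares salted hashes of IP addresses; note this is NOT privacy-preserving (a small IP space invites dictionary attacks) and only illustrates the quantity the cryptographic protocols compute, e.g. via private set intersection cardinality. The addresses and salt are invented.

```python
# Illustrative overlap estimate between two organisations' logs of
# suspicious IPs. Real deployments use cryptographic PSI; hashing
# with a shared salt, as here, does not actually hide the inputs.
import hashlib

def digests(ips, salt):
    """Salted SHA-256 digests of a set of IP strings."""
    return {hashlib.sha256((salt + ip).encode()).hexdigest() for ip in ips}

org_a = {"198.51.100.7", "203.0.113.9", "192.0.2.1"}
org_b = {"203.0.113.9", "192.0.2.1", "198.51.100.99"}

salt = "shared-session-salt"  # assumed agreed out of band
overlap = len(digests(org_a, salt) & digests(org_b, salt))
print(overlap)  # attack sources both organisations already observe
```

A high overlap suggests the partner's logs are relevant to one's own attack surface, which is exactly the signal the paper uses to decide whether, and how much, to share.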