Modeling Complex Packet Filters with Finite State Automata
Designing an efficient and scalable packet filter for modern computer networks becomes more challenging every day: faster link speeds, the steady increase in the number of encapsulation rules (e.g., tunneling), and the need to precisely isolate a given subset of traffic make filtering expressions more complex than in the past. Most current packet filtering mechanisms cannot meet these requirements because their optimization algorithms either do not scale with the increased size of the filtering code, or exploit simple domain-specific optimizations that are not guaranteed to operate properly on complex filters. This paper presents pFSA, a new model that transforms packet filters into Finite State Automata and guarantees the optimal number of checks on the packet, even when multiple filters are composed, hence enabling efficiency and scalability without sacrificing filtering computation time.
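The core idea can be sketched in a few lines of Python (a toy illustration with invented names such as `run_dfa`, not the pFSA implementation): each filter is a small automaton whose transitions are labelled with the outcome of a single check on the packet, so a packet is classified by walking the automaton and evaluating each check at most once; composing filters then amounts to the standard product construction on automata.

```python
def run_dfa(dfa, packet):
    """Walk the automaton, evaluating each state's check exactly once."""
    state = dfa["start"]
    while state not in dfa["accept"] and state not in dfa["reject"]:
        check, on_true, on_false = dfa["states"][state]
        state = on_true if check(packet) else on_false
    return state in dfa["accept"]

# Toy filter "IPv4 packets": a single state that checks the EtherType field.
ipv4 = {
    "start": "s0",
    "states": {"s0": (lambda p: p["ethertype"] == 0x0800, "acc", "rej")},
    "accept": {"acc"},
    "reject": {"rej"},
}

assert run_dfa(ipv4, {"ethertype": 0x0800})   # IPv4: accepted
assert not run_dfa(ipv4, {"ethertype": 0x86DD})  # IPv6: rejected
```

In this sketch a real filter would have many states (one per field check), and the guarantee the abstract refers to is that the composed automaton still visits each needed check only once per packet.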
Enabling precise traffic filtering based on protocol encapsulation rules
Current packet filters have limited support for expressions based on protocol encapsulation relationships, and some constraints are not supported at all, such as the value of the IP source address in the inner header of an IP-in-IP packet. This limitation can be critical for a wide range of packet filtering applications, as the number of possible encapsulations is steadily increasing and network operators cannot define exactly which packets they are interested in. This paper proposes a new formalism, called eXtended Finite State Automata with Predicates (xpFSA), that provides an efficient implementation of filtering expressions, supporting both constraints on protocol encapsulations and the composition of multiple filtering expressions. Furthermore, it defines a novel algorithm that can automatically detect tunneled packets. Our algorithms are validated through a large set of tests assessing both the performance of the filter-generation process and the efficiency of the resulting packet filtering code when dealing with real network packets.
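The kind of constraint the abstract mentions, the IP source address in the inner header of an IP-in-IP packet, can be illustrated with a toy predicate over a parsed encapsulation chain (illustrative code only, not the xpFSA formalism; header layout and names are assumptions):

```python
def match_inner_ip_src(headers, wanted_src):
    """headers: the parsed encapsulation chain, outermost header first."""
    ip_headers = [h for h in headers if h["proto"] == "ip"]
    # An IP-in-IP packet carries at least two IP headers; the constraint
    # applies to the second (inner) one, which plain filters cannot reach.
    return len(ip_headers) >= 2 and ip_headers[1]["src"] == wanted_src

pkt = [
    {"proto": "eth"},
    {"proto": "ip", "src": "10.0.0.1"},    # outer header (tunnel endpoint)
    {"proto": "ip", "src": "192.168.1.7"}, # inner header (original sender)
]
assert match_inner_ip_src(pkt, "192.168.1.7")
assert not match_inner_ip_src(pkt, "10.0.0.1")
```

An automaton with predicates on its transitions generalizes this: each transition consumes one header and is guarded by a predicate on that header's fields, which is what lets the formalism follow arbitrary encapsulation chains.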
Optimizing Computation of Recovery Plans for BPEL Applications
Web service applications are distributed processes that are composed of
dynamically bound services. In our previous work [15], we have described a
framework for performing runtime monitoring of web services against behavioural
correctness properties (described using property patterns and converted into
finite state automata). These specify forbidden behavior (safety properties)
and desired behavior (bounded liveness properties). Finite execution traces of
web services described in BPEL are checked for conformance at runtime. When
violations are discovered, our framework automatically proposes and ranks
recovery plans which users can then select for execution. Such plans for safety
violations essentially involve "going back" - compensating the executed actions
until an alternative behaviour of the application is possible. For bounded
liveness violations, recovery plans include both "going back" and "re-planning"
- guiding the application towards a desired behaviour. Our experience, reported
in [16], identified a drawback in this approach: we compute too many plans due
to (a) overapproximating the number of program points where an alternative
behaviour is possible and (b) generating recovery plans for bounded liveness
properties which can potentially violate safety properties. In this paper, we
describe improvements to our framework that remedy these problems and describe
their effectiveness on a case study. Comment: In Proceedings TAV-WEB 2010, arXiv:1009.330
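The "going back" recovery described above can be sketched as a simple backward walk over the executed trace (a toy illustration under assumed names such as `recovery_plan`; the paper's framework additionally ranks plans and handles "re-planning" for liveness violations):

```python
def recovery_plan(trace, has_alternative):
    """trace: executed actions, oldest first.
    has_alternative(i): does the state reached after action i offer an
    alternative branch of the application?"""
    plan = []
    for i in range(len(trace) - 1, -1, -1):
        if has_alternative(i):
            return plan          # stop: an alternative behaviour exists here
        plan.append(("compensate", trace[i]))
    return plan                  # no alternative found: undo everything

trace = ["reserve_flight", "reserve_hotel", "charge_card"]
# Suppose an alternative branch exists after "reserve_hotel" (index 1):
plan = recovery_plan(trace, lambda i: i == 1)
assert plan == [("compensate", "charge_card")]
```

The drawback the paper addresses is visible even in this sketch: if `has_alternative` over-approximates where alternatives exist, many spurious (and possibly unsafe) plans are generated.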
Applying Formal Methods to Networking: Theory, Techniques and Applications
Despite its great importance, modern network infrastructure is remarkable for
the lack of rigor in its engineering. The Internet which began as a research
experiment was never designed to handle the users and applications it hosts
today. The lack of formalization of the Internet architecture meant limited
abstractions and modularity, especially for the control and management planes,
thus requiring for every new need a new protocol built from scratch. This led
to an unwieldy ossified Internet architecture resistant to any attempts at
formal verification, and an Internet culture where expediency and pragmatism
are favored over formal correctness. Fortunately, recent work in the space of
clean slate Internet design---especially, the software defined networking (SDN)
paradigm---offers the Internet community another chance to develop the right
kind of architecture and abstractions. This has also led to a great resurgence
in interest of applying formal methods to specification, verification, and
synthesis of networking protocols and applications. In this paper, we present a
self-contained tutorial of the formidable amount of work that has been done in
formal methods, and present a survey of their applications to networking. Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorial
A Survey on IT-Techniques for a Dynamic Emergency Management in Large Infrastructures
This deliverable is a survey of the IT techniques that are relevant to the three use cases of the project EMILI. It describes the state of the art in four complementary IT areas: data cleansing, supervisory control and data acquisition, wireless sensor networks, and complex event processing. Even though the deliverable's authors have tried to avoid overly technical language and to explain every concept referred to, the deliverable may still seem rather technical to readers not yet familiar with the techniques it describes.
Approximated Symbolic Computations over Hybrid Automata
Hybrid automata are a natural framework for modeling and analyzing systems
which exhibit a mixed discrete continuous behaviour. However, the standard
operational semantics defined over such models implicitly assume perfect
knowledge of the real systems and infinite precision measurements. Such
assumptions are not only unrealistic, but often lead to the construction of
misleading models. For these reasons we believe that it is necessary to
introduce more flexible semantics able to cope with noise, partial
information, and finite-precision instruments. In particular, in this paper we
integrate different over- and under-approximation techniques for hybrid
automata in a single framework based on approximated semantics. Our framework
allows us to compare, mix, and generalize such techniques, obtaining different
approximated reachability algorithms. Comment: In Proceedings HAS 2013, arXiv:1308.490
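One concrete flavour of over-approximated reachability can be sketched with interval arithmetic (an illustrative toy under assumed names, far simpler than the paper's framework): the continuous state is kept as an interval, each discretized Euler step of a flow x' = f(x) is applied to the interval endpoints, and the result is widened by an error bound eps so that, for flows whose Euler map x + dt*f(x) is monotone, the computed box is guaranteed to contain every exact discretized trajectory.

```python
def step_over_approx(lo, hi, f, dt, eps):
    # One Euler step applied to both interval endpoints, widened by eps.
    return lo + dt * f(lo) - eps, hi + dt * f(hi) + eps

def reach_over_approx(lo, hi, f, dt, eps, steps):
    for _ in range(steps):
        lo, hi = step_over_approx(lo, hi, f, dt, eps)
    return lo, hi

# Flow x' = -x, initial set x in [1.0, 2.0].
lo, hi = reach_over_approx(1.0, 2.0, lambda x: -x, dt=0.1, eps=0.01, steps=10)
# The exact Euler trajectory from x0 = 1.0 is 0.9**10; the box contains it.
assert lo < 0.9**10 < hi
```

Under-approximations shrink the box instead of widening it; the paper's point is that both kinds of approximation can live in, and be combined within, one semantic framework.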
Programming Using Automata and Transducers
Automata, the simplest model of computation, have proven to be an effective tool in reasoning about programs that operate over strings. Transducers augment automata to produce outputs and have been used to model string and tree transformations such as natural language translations. The success of these models is primarily due to their closure properties and decidable procedures, but good properties come at the price of limited expressiveness. Concretely, most models only support finite alphabets and can only represent small classes of languages and transformations. We focus on addressing these limitations and bridge the gap between the theory of automata and transducers and complex real-world applications: Can we extend automata and transducer models to operate over structured and infinite alphabets? Can we design languages that hide the complexity of these formalisms? Can we define executable models that can process the input efficiently?

First, we introduce succinct models of transducers that can operate over large alphabets and design BEX, a language for analysing string coders. We use BEX to prove the correctness of UTF and BASE64 encoders and decoders.

Next, we develop a theory of tree transducers over infinite alphabets and design FAST, a language for analysing tree-manipulating programs. We use FAST to detect vulnerabilities in HTML sanitizers, check whether augmented-reality taggers conflict, and optimize and analyse functional programs that operate over lists and trees.

Finally, we focus on laying the foundations of stream processing of hierarchical data such as XML files and program traces. We introduce two new efficient and executable models that can process the input in a left-to-right linear pass: symbolic visibly pushdown automata and streaming tree transducers. Symbolic visibly pushdown automata are closed under Boolean operations and can specify and efficiently monitor complex properties for hierarchical structures over infinite alphabets. Streaming tree transducers can express and efficiently process complex XML transformations while enjoying decidable procedures.
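The "symbolic" ingredient shared by these models can be sketched in a few lines (an illustrative toy, not the thesis's implementation): transitions carry predicates over a possibly infinite alphabet instead of concrete letters, so a single edge can stand for, e.g., "any digit".

```python
def run_symbolic(transitions, start, accepting, word):
    """transitions: dict mapping a state to a list of (predicate, next_state)."""
    state = start
    for ch in word:
        for pred, nxt in transitions.get(state, []):
            if pred(ch):          # take the first edge whose predicate holds
                state = nxt
                break
        else:
            return False          # no applicable edge: reject
    return state in accepting

# Accepts any non-empty string of digits, over the full Unicode alphabet,
# using just two edges rather than one edge per digit character.
digits = {
    "q0": [(str.isdigit, "q1")],
    "q1": [(str.isdigit, "q1")],
}
assert run_symbolic(digits, "q0", {"q1"}, "42")
assert not run_symbolic(digits, "q0", {"q1"}, "4a")
assert not run_symbolic(digits, "q0", {"q1"}, "")
```

The visibly pushdown variant additionally pairs such predicate-labelled edges with push/pop stack operations keyed to call and return symbols, which is what makes properties of hierarchical data (nested tags, matched calls and returns) both expressible and checkable in a single linear pass.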