
    Study and analysis of innovative network protocols and architectures

    In recent years, new paradigms have been emerging in the networking area as inspiring models for the definition of future communication networks. A key example is the Content Centric Networking (CCN) protocol suite, a novel network architecture that aims to supersede the current TCP/IP stack in favor of name-based routing, also introducing in-network caching capabilities. On the other hand, much interest has been placed on Software Defined Networking (SDN), namely the set of protocols and architectures designed to make network devices more dynamic and programmable. Given this complex arena, the thesis focuses on the analysis of these innovative network protocols, with the aim of exploring possible design flaws and hence guaranteeing their proper operation when actually deployed in the network. Particular emphasis is given to the security of these protocols, given its essential role in any wide-scale deployment. Some work has been done in this direction, but the existing solutions are far from being fully investigated. In the CCN case, a closer investigation of problems related to possible DDoS attacks arising from the stateful nature of the protocol is presented, along with a full-fledged proposal to support scalable PUSH applications on top of CCN. Concerning SDN, instead, we present a tool for the verification of network policies in complex graphs containing dynamic network functions. In order to obtain significant results, we leverage different tools and methodologies: on the one hand, we use simulation software, a very useful tool for representing the most common use cases of the various technologies; on the other hand, we exploit more sophisticated formal methods to ensure a higher level of confidence in the obtained results.
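
    The stateful forwarding that makes CCN attractive is also what exposes it to the DDoS attacks mentioned above: every unsatisfied Interest leaves an entry in the router's Pending Interest Table (PIT) until matching Data arrives or a timeout fires. The following minimal sketch (hypothetical class and method names, not code from the thesis) illustrates how a flood of Interests for content that will never be answered can exhaust that per-packet state.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Illustrative CCN-style Pending Interest Table: Interests create state,
    // Data consumes it. A capacity-bounded table makes the effect of an
    // Interest-flooding attack visible as dropped legitimate Interests.
    public class PendingInterestTable {
        // content name -> faces (interfaces) waiting for that content
        private final Map<String, Set<Integer>> pending = new HashMap<>();
        private final int capacity;

        public PendingInterestTable(int capacity) {
            this.capacity = capacity;
        }

        // Returns false when the table is full and the Interest must be dropped.
        public boolean onInterest(String name, int inFace) {
            if (!pending.containsKey(name) && pending.size() >= capacity) {
                return false; // state exhaustion caused by an Interest flood
            }
            pending.computeIfAbsent(name, n -> new HashSet<>()).add(inFace);
            return true;
        }

        // Data consumes the pending entry and is forwarded to the waiting faces.
        public Set<Integer> onData(String name) {
            Set<Integer> faces = pending.remove(name);
            return faces == null ? Set.of() : faces;
        }
    }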

    IVY 2 - A model-based analysis tool

    The IVY workbench is a model-based tool that supports the formal verification of interactive computing systems. It adopts a plugin-based architecture to support a flexible development model. Over the years, the chosen architectural solution revealed a number of limitations, resulting both from the technological deprecation of some of the adopted solutions and from a better understanding of the verification process to be supported. This paper presents the redesign and implementation of the original plugin infrastructure, giving rise to a new version of the tool: IVY 2. It describes the limitations of the original solution and the new architecture, which resorts to the Java module system in order to solve them. This work is financed by National Funds through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology), within project UID/EEA/50014/2019.
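
    As a rough illustration of the kind of structure the Java module system enables (the module and type names below are hypothetical, not the actual IVY 2 code base), a plugin can be packaged as its own module that declares which service interface of the core it implements; the host application can then discover all such implementations at run time through java.util.ServiceLoader.

    // module-info.java of a hypothetical IVY 2 analysis plugin
    module ivy.plugin.example {
        requires ivy.core;                          // core module exporting the plugin API
        provides ivy.core.spi.AnalysisPlugin        // service interface assumed to live in ivy.core
            with ivy.plugin.example.ExamplePlugin;  // this module's implementation of it
    }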

    Putting Teeth into Open Architectures: Infrastructure for Reducing the Need for Retesting

    Proceedings paper (Acquisition Research Program). The Navy is currently implementing the open-architecture framework for developing joint interoperable systems that adapt and exploit open-system design principles and architectures. This raises concerns about how to practically achieve dependability in software-intensive systems with many possible configurations when: 1) the actual configuration of the system is subject to frequent and possibly rapid change, and 2) the environment of typical reusable subsystems is variable and unpredictable. Our preliminary investigations indicate that current methods for achieving dependability in open architectures are insufficient. Conventional methods for testing are suited to stovepipe systems and depend strongly on the assumption that the environment of a typical system is fixed and known in detail to the quality-assurance team at test and evaluation time. This paper outlines new approaches to quality assurance and testing that are better suited to providing affordable reliability in open architectures, and explains some of the additional technical features that an Open Architecture must have in order to become a Dependable Open Architecture. Naval Postgraduate School Acquisition Research Program. Approved for public release; distribution is unlimited.

    DBKnot: A Transparent and Seamless, Pluggable Tamper Evident Database

    Database integrity is crucial to organizations that rely on databases of important data, yet such databases are vulnerable to internal fraud. Tampering by malicious insiders with high technical authorization over the infrastructure, or by accounts compromised by external attackers, is an important attack vector. This thesis addresses this challenge for a class of problems where data is append-only and immutable. Examples of operations where data does not change are: a) financial institutions (banks, accounting systems, stock markets, etc.), b) registries and notary systems, where important data is kept but never subject to change, and c) system logs that must be kept intact for performance and forensic inspection if needed. The target of the approach is implementation seamlessness, with little or no changes required in existing systems. Transaction tracking for tamper detection is done by utilizing a common hashtable that serially and cumulatively hashes transactions together, while using an external time-stamper and signer to sign these linkages. This allows transactions to be tracked without any of the organization's data leaving its premises for a third party, which also reduces the performance impact of tracking. This is achieved by adding a tracking layer embedded inside the data workflow while keeping it as non-invasive as possible. DBKnot implements these features a) natively inside databases, or b) embedded inside Object Relational Mapping (ORM) frameworks, and finally c) outlines a direction for implementing it as a stand-alone microservice reverse proxy. A prototype ORM and database layer has been developed and tested for seamlessness of integration and ease of use. Additionally, different optimizations that introduce pipelining parallelism into the hashing/signing process have been tested in order to check their impact on performance. Stock-market information was used for experimentation with DBKnot; the initial results showed slightly less than a 100% increase in transaction time when using the most basic, sequential, and synchronous version of DBKnot. The signing and hashing overhead per record does not grow significantly with the amount of data. A number of alternative optimizations to the design were tested and resulted in significant performance improvements.
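
    A minimal sketch of the serial, cumulative hashing idea described above (illustrative code, not DBKnot's implementation): each appended record is hashed together with the running digest of all previous records, so modifying any stored record afterwards invalidates every digest that follows it. Periodically handing the running digest to the external time-stamping and signing service mentioned in the abstract (omitted here) anchors the chain outside the database.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.HexFormat;

    // Cumulative hash over an append-only stream of transaction records.
    public class CumulativeTransactionHash {
        private byte[] runningDigest = new byte[0];

        // Appends a record and returns the new cumulative digest as hex.
        public String append(String record) throws NoSuchAlgorithmException {
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            sha256.update(runningDigest);                            // chain in all prior state
            sha256.update(record.getBytes(StandardCharsets.UTF_8));  // then the new record
            runningDigest = sha256.digest();
            return HexFormat.of().formatHex(runningDigest);
        }

        public static void main(String[] args) throws NoSuchAlgorithmException {
            CumulativeTransactionHash chain = new CumulativeTransactionHash();
            System.out.println(chain.append("trade#1: BUY 100 ACME @ 17.30"));
            System.out.println(chain.append("trade#2: SELL 40 ACME @ 17.42"));
        }
    }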