
    Efficient Synthesis of Network Updates

    Software-defined networking (SDN) is revolutionizing the networking industry, but current SDN programming platforms do not provide automated mechanisms for updating global configurations on the fly. Implementing updates by hand is challenging for SDN programmers because networks are distributed systems with hundreds or thousands of interacting nodes. Even if the initial and final configurations are correct, naively updating individual nodes can lead to incorrect transient behaviors, including loops, black holes, and access control violations. This paper presents an approach for automatically synthesizing updates that are guaranteed to preserve specified properties. We formalize network updates as a distributed programming problem and develop a synthesis algorithm based on counterexample-guided search and incremental model checking. We describe a prototype implementation and present results from experiments on real-world topologies and properties, demonstrating that our tool scales to updates involving over one thousand nodes.
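    The core difficulty described above, finding a per-node update order whose every intermediate configuration is safe, can be illustrated with a small sketch. All names and the toy topology below are invented, and a simple greedy check stands in for the paper's counterexample-guided search:

```python
def is_loop_free(fwd, nodes):
    """Check that following next-hops from every node terminates."""
    for start in nodes:
        seen, cur = set(), start
        while cur is not None:
            if cur in seen:
                return False  # revisited a node: forwarding loop
            seen.add(cur)
            cur = fwd.get(cur)
    return True

def safe_update_order(initial, final):
    """Greedily order single-node updates so every intermediate
    configuration stays loop-free; return None if stuck."""
    fwd = dict(initial)
    pending = [n for n in initial if initial[n] != final.get(n)]
    order = []
    while pending:
        for n in pending:
            trial = dict(fwd)
            trial[n] = final.get(n)
            if is_loop_free(trial, trial.keys()):
                fwd = trial
                order.append(n)
                pending.remove(n)
                break
        else:
            return None  # no single safe step exists from here
    return order

# u forwards via v to d; the final config sends u straight to d and
# v via u. Updating v first would create the transient loop u<->v,
# so a safe order must update u first.
initial = {"u": "v", "v": "d", "d": None}
final   = {"u": "d", "v": "u", "d": None}
print(safe_update_order(initial, final))  # → ['u', 'v']
```

Real synthesizers check richer invariants (black holes, access control) and prune the search with counterexamples, but the per-step safety check is the same idea.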

    Management of high availability services using virtualization

    This thesis examines the use of virtualization for managing high-availability services with open source tools. The services are hosted in virtual machines, which can be seamlessly migrated between the physical nodes of the cluster automatically by high-availability software. Currently there are no complete open source solutions that provide migration of virtual machines as a repair method. The work is based on the high-availability software Heartbeat. An add-on to Heartbeat is developed that allows it to seamlessly migrate virtual machines between physical nodes when they are shut down gracefully. This add-on is tested in a proof-of-concept cluster in which Heartbeat runs Xen virtual machines with high availability. The impact of migration is measured for both TCP and UDP services, both numerically and heuristically. The outages caused by graceful failures (e.g., rebooting) are measured to be around a quarter of a second. Practical tests are also performed; the impression is that the outages are not noticed by users of latency-critical services such as game servers or streaming audio servers.
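    An outage figure like the quarter-second above can be derived from probe traffic: send timestamped probes at a fixed rate and take the longest silence between consecutive received replies. A minimal sketch (the reply timestamps below are made up):

```python
def longest_gap(recv_times):
    """Longest silence between consecutive received probe replies (seconds)."""
    recv_times = sorted(recv_times)
    gaps = [b - a for a, b in zip(recv_times, recv_times[1:])]
    return max(gaps, default=0.0)

# Replies arrive every 10 ms, pause during the migration, then resume.
times = [0.00, 0.01, 0.02, 0.03, 0.28, 0.29, 0.30]
print(round(longest_gap(times), 2))  # → 0.25
```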

    A Case Study on Cloud Migration and Improvement of Twelve-Factor App

    The Twelve-Factor App methodology was introduced in 2011 with the intent of raising awareness, providing a shared vocabulary, and offering broad conceptual solutions. In this thesis, a case study was done on two software implementations of the same business idea. The implementations were introduced and then analyzed with Twelve-Factor. Hevner's Information Systems Research Framework was used to assess the implementations, and Twelve-Factor's theoretical methodology was combined with it to form the results. The implementations were found to fulfill most of the twelve factors, although in different ways. The use of containers in the new implementation explained most of the differences. Some factors were also revealed to be standard practice today, which showed the need to abstract factors like Dependencies, Processes, Port binding, Concurrency, and Disposability. In addition, the methodology itself was analyzed, and additions to it were introduced to conform to the modern needs of applications, which most often run containerized on cloud platforms. The new additions are API First, Telemetry, Security, and Automation. API First instructs developers to prioritize building APIs at the start of the development cycle, while Telemetry points out that as much information as possible should be collected from the app to improve performance and help solve bugs. Security introduces two practical solutions and a guideline of following the principle of least privilege, and lastly, Automation is emphasized to free up developer time.
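    Several of the factors named above reduce to concrete coding habits. Factor III (Config), for instance, says deploy-specific settings belong in environment variables rather than in code. A minimal sketch, with illustrative variable names:

```python
import os

# Twelve-Factor Config: read deploy-specific settings from the
# environment, with safe development defaults, instead of hard-coding.
def load_config():
    return {
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///dev.db"),
        "port": int(os.environ.get("PORT", "8080")),
        "debug": os.environ.get("DEBUG", "false").lower() == "true",
    }

# In production a platform injects these variables; here we simulate it.
os.environ["PORT"] = "9000"
print(load_config()["port"])  # → 9000
```

The same code then runs unchanged in development, staging, and production, which is exactly the property the factor is after.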

    Survey of Consistent Network Updates

    Computer networks have become a critical infrastructure. Designing dependable computer networks, however, is challenging: such networks should not only meet strict requirements in terms of correctness, availability, and performance, but they should also be flexible enough to support fast updates, e.g., due to a change in the security policy, an increasing traffic demand, or a failure. The advent of Software-Defined Networks (SDNs) promises to provide such flexibility, allowing networks to be updated in a fine-grained manner and enabling more online traffic engineering. In this paper, we present a structured survey of mechanisms and protocols for updating computer networks in a fast and consistent manner. In particular, we identify and discuss the different desirable consistency properties an update should provide, the algorithmic techniques needed to meet these properties, and their implications for the speed and cost at which updates can be performed. We also discuss the relationship of consistent network update problems to classic algorithmic optimization problems. While our survey is mainly motivated by the advent of SDNs, the fundamental underlying problems are not new, and we also provide a historical perspective on the subject.
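    One of the strongest consistency properties surveyed, per-packet consistency, is typically achieved with a two-phase update: new rules are pre-installed under a fresh version tag, and only then is the ingress flipped to stamp packets with the new version. A toy simulation (switch names invented) shows that every packet follows one configuration end to end:

```python
class Network:
    def __init__(self):
        self.rules = {}           # (switch, version) -> next hop
        self.ingress_version = 1  # version stamped onto arriving packets

    def install(self, switch, version, nxt):
        self.rules[(switch, version)] = nxt

    def forward(self, entry):
        """Stamp at ingress, then forward using one version end to end."""
        v, path, cur = self.ingress_version, [entry], entry
        while (cur, v) in self.rules:
            cur = self.rules[(cur, v)]
            path.append(cur)
        return path

net = Network()
net.install("s1", 1, "s2")
net.install("s2", 1, "dst")
# Phase 1: pre-install version-2 rules on internal switches.
net.install("s1", 2, "s3")
net.install("s3", 2, "dst")
# Mid-update, packets still follow the old path consistently.
print(net.forward("s1"))  # → ['s1', 's2', 'dst']
# Phase 2: flip the version stamp at the ingress.
net.ingress_version = 2
print(net.forward("s1"))  # → ['s1', 's3', 'dst']
```

The cost noted in the survey is visible even here: both rule sets must coexist until all in-flight version-1 packets have drained.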

    Toward Synthesis of Network Updates

    Updates to network configurations are notoriously difficult to implement correctly. Even if the old and new configurations are correct, the update process can introduce transient errors such as forwarding loops, dropped packets, and access control violations. The key factor that makes updates difficult to implement is that networks are distributed systems with hundreds or even thousands of nodes, yet updates must be rolled out one node at a time. In networks today, the task of determining a correct sequence of updates is usually done manually, a tedious and error-prone process for network operators. This paper presents a new tool for synthesizing network updates automatically. The tool generates efficient updates that are guaranteed to respect invariants specified by the operator. It works by navigating through the (restricted) space of possible solutions, learning from counterexamples to improve scalability and optimize performance. We have implemented our tool in OCaml and conducted experiments showing that it scales to networks with a thousand switches and tens of switches updating. Comment: In Proceedings SYNT 2013, arXiv:1403.726

    Resilience of Virtualized Embedded IoT Networks

    Embedded IoT networks are the backbone of safety-critical systems like smart factories, autonomous vehicles, and airplanes; their resilience against failures and attacks should therefore be a primary concern. The design of more capable IoT devices enables the flexible deployment of network services through virtualization, but it also increases the complexity of the systems and makes them more error-prone. In this paper, we discuss the issues and challenges in ensuring resilience in virtualized embedded IoT networks, presenting both proactive and reactive measures.

    Autonomic Role and Mission Allocation Framework for Wireless Sensor Networks.

    Pervasive applications incorporate physical components that are exposed to everyday use and to a large number of conditions and external factors that can lead to faults and failures. It is also possible that application requirements change during deployment and the network needs to adapt to a new context. Consequently, pervasive systems must be capable of autonomically adapting to changing conditions without involving users, becoming a transparent asset in the environment. In this paper, we present an autonomic mechanism for initial task assignment in sensor networks, an NP-hard problem. We also study on-line adaptation of the original deployment, which considers real-time metrics for maximizing the utility and lifetime of applications and smooth service degradation in the face of component failures. © 2011 IEEE
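    Since exact task assignment is NP-hard, practical deployments fall back on heuristics. A hypothetical greedy sketch (the utilities, costs, and node names are invented, and this is not the paper's actual algorithm) assigns each task to the feasible node of highest utility while respecting per-node energy budgets:

```python
def greedy_assign(tasks, energy, utility, cost):
    """Greedy heuristic: each task goes to the feasible node with the
    highest utility; remaining node energy shrinks as tasks are placed."""
    energy = dict(energy)  # don't mutate the caller's budgets
    assignment = {}
    for t in tasks:
        feasible = [n for n in energy if energy[n] >= cost[t]]
        if not feasible:
            return None  # no node has enough energy left for this task
        best = max(feasible, key=lambda n: utility[(t, n)])
        assignment[t] = best
        energy[best] -= cost[t]
    return assignment

tasks = ["sense", "aggregate"]
energy = {"n1": 5, "n2": 3}
utility = {("sense", "n1"): 2, ("sense", "n2"): 5,
           ("aggregate", "n1"): 4, ("aggregate", "n2"): 1}
cost = {"sense": 3, "aggregate": 4}
print(greedy_assign(tasks, energy, utility, cost))
# → {'sense': 'n2', 'aggregate': 'n1'}
```

On-line adaptation would re-run such a heuristic when a node fails or utilities change, degrading service smoothly rather than abruptly.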