    Granularity in Large-Scale Parallel Functional Programming

    This thesis demonstrates how to reduce the runtime of large non-strict functional programs using parallel evaluation. The parallelisation of several programs shows the importance of granularity, i.e., the computation costs of program expressions. The aspect of granularity is studied both on a practical level, by presenting and measuring runtime granularity-improvement mechanisms, and at a more formal level, by devising a static granularity analysis. By parallelising several large functional programs this thesis demonstrates for the first time the advantages of combining lazy and parallel evaluation on a large scale: laziness aids modularity, while parallelism reduces runtime. One of the parallel programs is the Lolita system which, with more than 47,000 lines of code, is the largest existing parallel non-strict functional program.

    Evaluation strategies, a new mechanism for parallel programming to which this thesis contributes, are shown to be useful in this parallelisation. Evaluation strategies simplify parallel programming by separating algorithmic code from code specifying dynamic behaviour. For large programs the abstraction provided by functions is maintained by using a data-oriented style of parallelism, which defines parallelism over intermediate data structures rather than inside the functions.

    A highly parameterised simulator, GRANSIM, has been constructed collaboratively and is discussed in detail in this thesis. GRANSIM is a tool for architecture-independent parallelisation and a testbed for implementing runtime-system features of the parallel graph reduction model. By providing an idealised as well as an accurate model of the underlying parallel machine, GRANSIM has proven to be an essential part of an integrated parallel software engineering environment. Several parallel runtime-system features, such as granularity-improvement mechanisms, have been tested via GRANSIM, which is publicly available and in active use at several universities worldwide.

    To provide granularity information, this thesis presents an inference-based static granularity analysis. This analysis combines two existing analyses, one for cost and one for size information. It determines an upper bound on the computation costs of evaluating an expression in a simple strict higher-order language. By exposing recurrences during cost reconstruction and using a library of recurrences and their closed forms, it is possible to infer the costs of some recursive functions. The possible performance improvements are assessed by measuring the parallel performance of a hand-analysed and annotated program.
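
    The evaluation-strategies idea survives in the Control.Parallel.Strategies module of GHC's parallel package, a descendant of the strategies work this thesis contributes to. The minimal sketch below shows the separation the abstract describes: the algorithmic code is an ordinary map over an intermediate list, and the dynamic behaviour (parallel evaluation of that list) is attached afterwards with "using"; the function expensive is a hypothetical stand-in for real work.

        import Control.Parallel.Strategies (using, parList, rdeepseq)

        -- Algorithmic code: an ordinary pipeline over an intermediate list.
        expensive :: Int -> Int
        expensive n = sum [1 .. n * 10000]   -- hypothetical stand-in for costly work

        -- Dynamic behaviour, stated separately: evaluate the list elements
        -- in parallel, each to normal form, without touching the algorithm.
        results :: [Int]
        results = map expensive [1 .. 64] `using` parList rdeepseq

        main :: IO ()
        main = print (sum results)   -- build with ghc -threaded; run with +RTS -N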

    Development of a testing platform for Customer Edge Switching (Asiakasreunakytkennän testausalustan kehitys)

    Customer Edge Switching (CES) and Realm Gateway (RGW) are technologies designed to solve core challenges of the modern Internet, including the ever-increasing number of devices connected to the Internet and the risks created by malicious parties. CES and RGW leverage existing technologies such as the Domain Name System (DNS). Software testing is critical for ensuring the correctness of software: it aims to ensure that products and protocols operate correctly, and to find any critical vulnerabilities in them. Fuzz testing is a branch of software testing that automatically iterates over unexpected inputs.

    In this thesis work we evaluate two CES versions for performance, for susceptibility to Denial of Service (DoS), and for weaknesses related to their use of DNS. Performance is an important metric for switches, DoS is a very common attack vector, and using DNS in new ways requires critical evaluation. The performance of the old version was sufficient, but some clear issues were found: the version was vulnerable to DoS, and there were oversights in its DNS operation. The new version shows improvement over the old one. We also evaluated the suitability of extending Robot Framework for fuzz testing the Customer Edge Traversal Protocol (CETP), and conclude that this was not the best approach. Finally, we developed a new testing framework for the new version of CES using Robot Framework.
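
    To make the fuzzing idea concrete, the sketch below drives a hypothetical input validator (standing in for something like a CES/RGW front end checking incoming DNS names) with random byte strings and reports any input that makes it crash. This is only an illustration of automatic iteration over unexpected inputs, not the Robot Framework setup the thesis evaluates; parseName and its limits are invented for the example.

        import Control.Exception (SomeException, evaluate, try)
        import Control.Monad (forM_, replicateM)
        import System.Random (randomRIO)

        -- Hypothetical system under test: a validator for incoming DNS names.
        parseName :: String -> Bool
        parseName s = length s <= 253 && all (/= '\0') s

        -- One fuzz input: up to 300 arbitrary 8-bit characters.
        randomInput :: IO String
        randomInput = do
          n <- randomRIO (0, 300)
          replicateM n (toEnum <$> randomRIO (0, 255 :: Int))

        main :: IO ()
        main = forM_ [1 .. 1000 :: Int] $ \i -> do
          input <- randomInput
          -- Force evaluation and catch any exception the input provokes.
          result <- try (evaluate (parseName input)) :: IO (Either SomeException Bool)
          case result of
            Left err -> putStrLn ("input " ++ show i ++ " crashed: " ++ show err)
            Right _  -> pure ()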

    The 2nd Conference of PhD Students in Computer Science


    Auto-configuration of Critical Network Infrastructure

    Until the turn of the millennium, many electricity, water and gas supply plant operators used analogue serial cabling to communicate between highly customised systems to control and manage their plants. Since then, cost reductions and increased flexibility have become possible through the use of COTS (Commodity-Off-The-Shelf) equipment. These have radically changed communication between critical infrastructure devices, but have also introduced risks to the domain; one example being the major incident at a German steel mill in 2014 [14]. Donna F. Dodson, Chief of Cybersecurity at NIST, has stated that “There’s an increase in free tools available focusing on industrial control systems. And greater hacker interest.” A common strategy to mitigate these risks is the extensive use of firewalls. Firewalls are not as simple as they appear: efficient and reliable firewall security requires expertise in arcane, vendor-dependent configuration languages [15] and, even then, configuration errors are common [97, 128, 129]. It is easy to complain about short-term thinking in firewall designers, but at a deeper level the problem is that current approaches conflate multiple concerns: they incorporate network-, protocol- and hardware-dependent details into security policy in an unsystematised manner.

    In this thesis we tackle this problem. We begin by building a mathematically rigorous foundation for the design of security policies that separates divergent concerns. The formal foundations allow security administrators to reason about their network security, for instance to (i) show that certain types of traffic flow are impossible; and (ii) check that their security complies with industry best practices. In particular, we design our policy framework with Supervisory Control And Data Acquisition (SCADA) networks in mind; these networks control the distributed assets of many critical infrastructure plants. In doing so, we consider requirements of a security policy specification that are general in nature as well as those specific to a SCADA network context. An example requirement is verifiability: a property that increases transparency in the framework and provides security administrators with assurance of the expected security outcome. The lack of verifiability in current firewall configuration platforms contributes to the broken-by-design networks found in practice. Moreover, we apply design principles derived from real SCADA case studies [97] and industry best practices [21, 117] to develop simple policy specification features that are easy to administer correctly.

    We demonstrate the use of these specification features through a prototype implementation that creates secure-by-design networks. In enabling security by design, we (i) prevent policy emergence, i.e. the implicit definition of policy as a result of many small decisions with complex interactions; (ii) support rigorous verification: from policy consistency and best-practice compliance checks to pre-deployment verification, we only allow deploying policies that deliver the expected security outcome; and (iii) protect proactively: security cannot be purely reactive, and placing pre-verified security controls in position before a cyber attack can prevent significant, expensive damage to system infrastructure.

    Thesis (Ph.D.) -- University of Adelaide, School of Mathematical Sciences, 201
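
    A toy version of the kind of reasoning such formal foundations enable might represent a policy as a set of permitted zone-to-zone flows and check reachability over them. The zone names and edges below are hypothetical, and the thesis's actual framework is far richer; the sketch merely shows how a transitive path from the corporate network to field devices can exist even though no direct flow is permitted, the sort of emergent behaviour a verifiable framework is designed to expose.

        import qualified Data.Set as Set
        import Data.Set (Set)

        -- Hypothetical network zones for a SCADA-style plant.
        data Zone = Corporate | DMZ | ControlLAN | FieldDevices
          deriving (Eq, Ord, Show)

        type Policy = Set (Zone, Zone)   -- directed "may send to" edges

        policy :: Policy
        policy = Set.fromList
          [ (Corporate, DMZ), (DMZ, ControlLAN), (ControlLAN, FieldDevices) ]

        -- Can traffic from a reach b, possibly via intermediate zones?
        reachable :: Policy -> Zone -> Zone -> Bool
        reachable p a b = go (Set.singleton a) [a]
          where
            go _    []     = False
            go seen (z:zs)
              | z == b    = True
              | otherwise =
                  let next = [ y | (x, y) <- Set.toList p
                                 , x == z, y `Set.notMember` seen ]
                  in go (foldr Set.insert seen next) (zs ++ next)

        main :: IO ()
        main = print (reachable policy Corporate FieldDevices)
          -- True: an indirect path exists despite no direct permitted flow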

    Dynamics and pragmatics for high performance concurrency

    This thesis is concerned with support at all levels for building highly concurrent and dynamic parallel processing systems. The CSP model of concurrency, as (largely) embodied in the occam programming language, is used due to its simplicity, expressiveness, architecture-independent nature, and potential for high performance. Additionally, occam provides guarantees regarding freedom from aliasing and race-hazard errors. This thesis addresses one of the grand challenges of present-day computer science: providing a software technology that offers the dynamic flexibility and performance of mainstream object-oriented environments with the level of safety, formal analysis, modularity and lightweight concurrency offered by CSP/occam. Two approaches to this challenge are possible: do something to make the mainstream languages (e.g. Java, C++) safe, or make occam dynamic -- without compromising its existing good properties. This thesis follows the latter route.

    The first part of this thesis concentrates on enhancing the occam language and run-time system on a commodity platform (IBM PC) running the freely available Linux operating system. After a brief introduction to the various components of the KRoC occam system, additions and extensions to the occam programming language and supporting run-time system are examined. These provide a greater degree of programming flexibility in occam (for example, by adding support for dynamic allocation, mobile semantics and dynamic network construction), without compromising the safety of programs which use them. Benchmarks are reported that demonstrate significant improvements in performance (for example, channel communication in tens of nanoseconds). The second part concentrates on improving the level of interaction between occam programs and the OS environment, for example by providing easy access to sockets and networking.

    This thesis concludes with a discussion of the work presented herein, with consideration given to parallels with object-oriented languages, and with details of ongoing and potential future research. The modified language grammar, details of new compiler-generated code, and miscellany are provided in the appendices.
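
    Haskell's Control.Concurrent.Chan can give a flavour of the occam-style process networks discussed here, with forked threads as processes and channels carrying values between them; a minimal sketch follows. One caveat: Haskell's Chan is buffered, whereas occam channels are synchronous and unbuffered, so this illustrates the network shape rather than occam's exact semantics.

        import Control.Concurrent (forkIO)
        import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
        import Control.Monad (forM_, replicateM_)

        -- Three processes in the spirit of occam's PAR:
        -- producer -> doubler -> consumer, connected by channels.
        producer :: Chan Int -> IO ()
        producer out = forM_ [1 .. 10] (writeChan out)

        doubler :: Chan Int -> Chan Int -> IO ()
        doubler inp out = replicateM_ 10 (readChan inp >>= writeChan out . (* 2))

        consumer :: Chan Int -> IO ()
        consumer inp = replicateM_ 10 (readChan inp >>= print)

        main :: IO ()
        main = do
          c1 <- newChan   -- channel: producer -> doubler
          c2 <- newChan   -- channel: doubler -> consumer
          _ <- forkIO (producer c1)
          _ <- forkIO (doubler c1 c2)
          consumer c2     -- run the final process on the main thread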

    Curracurrong: a stream processing system for distributed environments

    Advances in technology have given rise to applications that are deployed on wireless sensor networks (WSNs), the cloud, and the Internet of Things. Many emerging applications, including sensor-based monitoring, web traffic processing, and network monitoring, collect large amounts of data as an unbounded sequence of events and process them to generate new sequences of events. Such applications need an adequate programming model that can process large amounts of data with minimal latency; for this purpose, stream programming, among other paradigms, is ideal. However, stream programming needs to be adapted to meet the challenges inherent in running it in distributed environments. These challenges include the need for a modern domain-specific language (DSL), the placement of computations in the network to minimise energy costs, and timeliness in real-time applications. To overcome these challenges we developed a stream programming model that offers an easy-to-use programming interface, energy-efficient actor placement, and timeliness.

    This thesis presents Curracurrong, a stream data processing system for distributed environments. In Curracurrong, a query is represented as a stream graph of stream operators and communication channels. Curracurrong provides an extensible stream operator library and adapts to a wide range of applications. It uses an energy-efficient placement algorithm that optimises communication and computation. We extend the placement problem to support dynamically changing networks, and develop a dynamic program with polynomially bounded runtime to solve it. In many stream-based applications, real-time data processing is essential; we propose an approach that measures time delays in stream query processing, capturing the total computational time from the input to the output of a query, i.e., the end-to-end delay.
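
    To make the placement problem concrete, the sketch below solves a toy version: assign a pipeline of operators to a chain of network nodes so that total computation-plus-communication cost is minimised. The cost model and the non-decreasing-node restriction are invented simplifications rather than Curracurrong's actual algorithm; the recurrence is the point, since memoising it on (operator, node) pairs is what yields a polynomially bounded dynamic program of the kind the thesis develops.

        -- Toy operator-placement problem; all names and costs are hypothetical.
        type Op   = Int   -- operator index in the pipeline
        type Node = Int   -- network node index

        nodes :: [Node]
        nodes = [0 .. 4]

        compCost :: Op -> Node -> Double
        compCost o n = fromIntegral (o + n + 1)          -- stand-in computation cost

        linkCost :: Node -> Node -> Double
        linkCost a b = 0.5 * fromIntegral (abs (a - b))  -- stand-in communication cost

        -- Cheapest placement of the remaining operators, given the node the
        -- previous operator sits on. Successive operators may only move
        -- "downstream" (non-decreasing node index), mirroring placement along
        -- a routing path; memoising on (operator, node) makes this polynomial.
        best :: [Op] -> Node -> Double
        best []     _    = 0
        best (o:os) prev =
          minimum [ linkCost prev n + compCost o n + best os n
                  | n <- nodes, n >= prev ]

        main :: IO ()
        main = print (best [0 .. 3] 0)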
