
    A distributed object-oriented graphical programming system

    Technical report: This report presents the design of a distributed parallel object system (DPOS) and its implementation using a graphical editing interface. DPOS brings together concepts of object-oriented programming and graphical programming with aspects of modern functional languages. Programs are defined as networks of active processes, called "Process Objects", and interconnecting communication lines. These active objects are independent single-threaded programs that employ much of the modularity, encapsulation of function, and encapsulation of data found in sequential object-oriented programming. The system defines a clear and simple approach to generating and managing parallelism and interprocess communication in a distributed parallel environment. DPOS contributes several new solutions to the problems of distributed parallel programming that improve on existing systems. The key improvements of this system include: a more complete and versatile means of dynamic process creation; the specification of complex network topologies in an intuitively clear and understandable way; separation of the management of parallelism from the definition of computation; automatic resolution of low-level critical-section issues; the ability to design and develop separate processes as traditional single-threaded programs; the encapsulation and incremental development of program subnetworks; and the application of graphical programming concepts to high-level programming.
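    To make the "network of process objects" idea concrete, here is a minimal, hypothetical Python sketch in which independent single-threaded workers communicate only over explicit lines (queues). The function names and wiring are illustrative assumptions, not the DPOS implementation or its graphical interface.

```python
# Hypothetical sketch: "process objects" as independent single-threaded
# workers connected by explicit communication lines (queues).
from multiprocessing import Process, Queue

def squarer(inbox: Queue, outbox: Queue) -> None:
    """A single-threaded 'process object': reads messages from its incoming
    line, encapsulates its own computation, and writes to its outgoing line."""
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: shut this process object down
            outbox.put(None)
            break
        outbox.put(msg * msg)

def printer(inbox: Queue) -> None:
    """A sink process object that consumes results."""
    while True:
        msg = inbox.get()
        if msg is None:
            break
        print("result:", msg)

if __name__ == "__main__":
    # The "network topology" is just the wiring of queues between objects.
    line_a, line_b = Queue(), Queue()
    objs = [Process(target=squarer, args=(line_a, line_b)),
            Process(target=printer, args=(line_b,))]
    for p in objs:
        p.start()
    for n in range(5):
        line_a.put(n)
    line_a.put(None)             # propagate shutdown along the network
    for p in objs:
        p.join()
```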

    Towards effective live cloud migration on public cloud IaaS.

    Cloud computing allows users to access shared, online computing resources. However, providers often offer their own proprietary applications, APIs and infrastructures, resulting in a heterogeneous cloud environment. This environment makes it difficult for users to change cloud service providers and to explore capabilities that support automated migration from one provider to another. Many standards bodies (IEEE, NIST, DMTF and SNIA), industry (middleware) and academia have been pursuing standards and approaches to reduce the impact of vendor lock-in. Cloud providers offer their Infrastructure as a Service (IaaS) based on virtualization to enable multi-tenant and isolated environments for users. Because each provider has its own proprietary virtual machine (VM) manager, called the hypervisor, VMs are usually tightly coupled to the underlying hardware, thus hindering live migration of VMs to different providers. A number of user-centric approaches have been proposed by both academia and industry to solve this coupling issue. However, these approaches suffer limitations in terms of flexibility (decoupling VMs from the underlying hardware), performance (migration downtime) and security (secure live migration). These limitations are identified using our live cloud migration criteria, namely flexibility, performance and security. These criteria are used not only to point out the gap in previous approaches, but also to design our live cloud migration approach, LivCloud. This approach aims to live-migrate VMs across various cloud IaaS with minimal migration downtime, with no extra cost, and without user intervention or awareness. This aim has been achieved by addressing the gaps identified in the three criteria: the flexibility gap is addressed by considering a better virtualization platform that supports a wider hardware range, supporting various operating systems, and taking into account the migrated VMs' hardware specifications and layout; the performance gap is addressed by improving network connectivity, providing the extra resources required by the migrated VMs during the migration, and predicting any potential failure so that the system can be rolled back to its initial state if required; finally, the security gap is tackled by protecting the migration channel using encryption and authentication. This thesis presents: (i) a clear identification of the key challenges and factors needed to successfully perform live migration of VMs across different cloud IaaS, resulting in a rigorous comparative analysis of the literature on live migration of VMs at the cloud IaaS level based on our live cloud migration criteria; (ii) a rigorous analysis that distils the limitations of existing live cloud migration approaches and shows how to design efficient live cloud migration using up-to-date technologies, which has led to the design of a novel live cloud migration approach, called LivCloud, that overcomes key limitations of currently available approaches and is designed in two stages: the basic design stage and the enhancement of the basic design; (iii) a systematic approach to assessing LivCloud on different public cloud IaaS, achieved by using a combination of up-to-date technologies to build LivCloud with the interoperability challenge in mind, implementing and discussing the results of the basic design stage on Amazon IaaS, and implementing both stages of the approach on the Packet bare-metal cloud.
To sum up, the thesis introduces a live cloud migration approach that is systematically designed and evaluated on uncontrolled environments, Amazon and Packet bare metal. In contrast to other approaches, it clearly shows how to perform and secure the migration between our local network and these environments.
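    The three criteria can be read as steps in a migration driver: check compatibility (flexibility), open a protected channel (security), and transfer with rollback on predicted failure (performance). The Python skeleton below is a rough sketch of that flow under those assumptions; the function names, checks and the SSH tunnel stand-in are illustrative, not LivCloud's actual design.

```python
# Hypothetical migration-driver skeleton loosely following the three
# criteria (flexibility, performance, security); not LivCloud itself.
import subprocess

def check_flexibility(vm: dict, target: dict) -> bool:
    """Rough compatibility check: architecture and enough capacity at the
    target for the VM's hardware layout."""
    return (vm["arch"] == target["arch"]
            and vm["vcpus"] <= target["free_vcpus"]
            and vm["ram_mb"] <= target["free_ram_mb"])

def open_secure_channel(target_host: str, local_port: int, remote_port: int):
    """Protect the migration channel: here an SSH tunnel stands in for the
    encrypted, authenticated channel described in the thesis."""
    return subprocess.Popen(
        ["ssh", "-N", "-L", f"{local_port}:127.0.0.1:{remote_port}", target_host])

def migrate(vm: dict, target: dict) -> bool:
    if not check_flexibility(vm, target):
        return False                      # nothing started, nothing to roll back
    tunnel = open_secure_channel(target["host"], 49152, 49152)
    try:
        # Placeholder for the actual memory/disk transfer; a real driver
        # would track downtime and abort (roll back) on predicted failure.
        return True
    finally:
        tunnel.terminate()                # always tear the channel down
```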

    State-of-the-art Assessment For Simulated Forces

    A summary of the review of the state of the art in simulated forces, conducted to support the research objectives of Research and Development for Intelligent Simulated Forces.

    Doctor of Philosophy

    Dissertation: In the static analysis of functional programs, control-flow analysis (k-CFA) is a classic method of approximating program behavior as a finite-state automaton. CFA2 and abstract garbage collection are two recent, yet orthogonal, improvements on k-CFA. CFA2 approximates program behavior as a pushdown system, using summarization for the stack. CFA2 can accurately approximate arbitrarily deep recursive function calls, whereas k-CFA cannot. Abstract garbage collection removes unreachable values from the store/heap. If unreachable values are not removed from a static analysis, they can become reachable again, which pollutes the final analysis and makes it less precise. Unfortunately, as these two techniques were originally formulated, they are incompatible: CFA2's summarization technique for managing the stack obscures the stack, so abstract garbage collection cannot examine it for reachable values. This dissertation presents introspective pushdown control-flow analysis, which manages the stack explicitly through stack changes (pushes and pops). Because this analysis can examine the stack through how it has changed, abstract garbage collection is able to examine the stack for reachable values. Thus, introspective pushdown control-flow analysis successfully merges the benefits of CFA2 and abstract garbage collection to create a more precise static analysis. Additionally, the high-performance computing community has viewed functional programming techniques and tools as lacking the efficiency necessary for their applications. Nebo is a declarative domain-specific language embedded in C++ for discretizing partial differential equations for transport phenomena. For efficient execution, Nebo exploits a version of expression templates, based on the C++ template system, which is a type-less, completely pure, Turing-complete functional language with burdensome syntax. Nebo's declarative syntax supports functional tools, such as point-wise lifting of complex expressions and functional composition of stencil operators. Nebo's primary abstraction is mathematical assignment, which separates what a calculation does from how that calculation is executed. Currently Nebo supports single-core execution, multicore (thread-based) parallel execution, and GPU execution. With single-core execution, Nebo performs on par with the loops and code that it replaces in Wasatch, a pre-existing high-performance simulation project. With multicore (thread-based) execution, Nebo scales linearly (with roughly 90% efficiency) up to 6 processors, relative to its single-core execution. Moreover, Nebo's GPU execution can be up to 37x faster than its single-core execution. Finally, Wasatch (the pre-existing high-performance simulation project that uses Nebo) can scale up to 262K cores.
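    The key reachability point, that an explicit stack lets garbage collection use stack frames as roots while a summarized (opaque) stack hides them, can be illustrated with a toy concrete collector. The sketch below is only an illustration of that argument, not the dissertation's abstract pushdown analysis; the heap encoding and names are assumptions.

```python
# Toy sketch: with an explicit stack, (abstract) GC can include stack
# frames among its roots; a summarized stack would hide those roots.
def reachable(roots, heap):
    """heap maps an address to the set of addresses it references."""
    seen, work = set(), list(roots)
    while work:
        addr = work.pop()
        if addr in seen:
            continue
        seen.add(addr)
        work.extend(heap.get(addr, ()))
    return seen

def collect(stack_frames, globals_, heap):
    """Keep only heap entries reachable from the stack and globals."""
    roots = set(globals_)
    for frame in stack_frames:       # explicit pushes/pops keep frames visible
        roots.update(frame.values())
    live = reachable(roots, heap)
    return {a: refs for a, refs in heap.items() if a in live}

# Example: address 'b' is referenced only from the current stack frame,
# so it survives only because the stack is visible to the collector.
heap = {"a": {"c"}, "b": set(), "c": set()}
stack = [{"x": "b"}]                 # one frame, local x -> address b
print(sorted(collect(stack, {"a"}, heap)))   # ['a', 'b', 'c']
```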

    Analyzing communication flow and process placement in Linda programs on transputers

    With the evolution of parallel and distributed systems, users from diverse disciplines have looked to these systems as a solution to their ever-increasing need for computer processing resources. Because parallel processing systems currently require a high level of expertise to program, many researchers are investing effort into developing programming approaches that hide some of the difficulties of parallel programming from users. Linda is one such parallel paradigm; it is intuitive to use and provides a high level of decoupling between the distributable components of parallel programs. In Linda, efficiency becomes a concern of the implementation rather than of the programmer. There is a substantial overhead in implementing Linda, an inherently shared-memory model, on a distributed system. This thesis describes a compile-time analysis of tuple space interactions which reduces run-time matching costs and permits the distribution of the tuple space data. A language-independent module which partitions the tuple space data and suggests appropriate storage schemes for the partitions, so as to optimise Linda operations, is presented. The thesis also discusses hiding the network topology from the user by automatically allocating Linda processes and tuple space partitions to nodes in the network of transputers. This is done using a fast placement algorithm developed for Linda.
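    For readers unfamiliar with Linda, the sketch below shows a minimal in-memory tuple space with the classic out/rd/in operations and template matching, the operations whose matching cost the thesis aims to reduce. It is a single-process illustration under simple assumptions; the compile-time analysis, partitioning and transputer placement are not modelled.

```python
# Minimal in-memory tuple space (out/rd/in with template matching).
ANY = object()   # wildcard standing in for a formal (unbound) field

class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, tup):
        """Deposit a tuple into the space."""
        self.tuples.append(tup)

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            f is ANY or f == a for f, a in zip(template, tup))

    def rd(self, template):
        """Read (without removing) the first matching tuple, else None."""
        return next((t for t in self.tuples if self._match(template, t)), None)

    def in_(self, template):
        """Withdraw the first matching tuple, else None."""
        t = self.rd(template)
        if t is not None:
            self.tuples.remove(t)
        return t

ts = TupleSpace()
ts.out(("point", 3, 4))
print(ts.rd(("point", ANY, ANY)))   # ('point', 3, 4) -- still in the space
print(ts.in_(("point", 3, ANY)))    # removes and returns ('point', 3, 4)
```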

    openstack

    The aim of this thesis is the presentation of OpenStack, an open-source software platform for managing telecommunications resources in a cloud environment. The thesis first describes the architecture of the cloud environment and the service models used. It then presents the Network Function Virtualization (NFV) architecture applied to telecommunications in accordance with the standards set by the European Telecommunications Standards Institute. The main topic of the thesis is the presentation of the OpenStack software used by the NFV architecture; these chapters attempt to describe, as thoroughly and completely as possible, the functions of OpenStack and the components of which it consists. Finally, a techno-economic analysis compares the cost of implementing the NFV architecture with that of the architecture currently deployed in telecommunication networks. The results of this work show the many possibilities for implementing and applying the new architecture, as well as its very low operating cost compared with the existing technology.
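    As a small illustration of the kind of resource management OpenStack exposes to an NFV orchestrator, the sketch below boots a server through the openstacksdk Python client. The cloud name "mycloud" and the image, flavor and network names are placeholder assumptions, not values from the thesis.

```python
# Hypothetical example of driving OpenStack compute and network services
# via openstacksdk; "mycloud" and the resource names are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")        # credentials from clouds.yaml

image = conn.compute.find_image("cirros")        # assumed image name
flavor = conn.compute.find_flavor("m1.small")    # assumed flavor name
network = conn.network.find_network("private")   # assumed network name

server = conn.compute.create_server(
    name="nfv-demo",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)    # block until ACTIVE
print(server.name, server.status)
```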

    Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer

    SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the non-faulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor-to-processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization, are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.
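    A minimal sketch of the majority-voting idea, assuming the redundant results simply arrive as a list indexed by processor: a strict majority masks a single corrupted result and identifies the dissenting processor as a candidate for removal. This is an illustration of the voting principle only, not SIFT's executive software.

```python
# Toy illustration of masking an error by majority vote over redundant
# results from identical computations.
from collections import Counter

def majority_vote(results):
    """Return the value reported by a strict majority of processors,
    plus the indices of processors that disagreed (removal candidates)."""
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: too many faulty results")
    suspects = [i for i, r in enumerate(results) if r != value]
    return value, suspects

# Three redundant computations; processor 2 returns a corrupted result.
value, suspects = majority_vote([42, 42, 41])
print(value, suspects)   # 42 [2]
```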