
    Generic Distribution Support for Programming Systems

    This dissertation provides constructive proof, through the implementation of a middleware, that distribution transparency is practical, generic, and extensible. Fault-tolerant distributed services can be developed using the failure-detection abilities of the middleware. By generic we mean that the middleware can be used for many different programming languages and paradigms. Distribution of each kind of language entity is handled by a consistency protocol, which guarantees that the semantics of the entity are preserved in a distributed setting, and the middleware allows new consistency protocols to be added easily. The efficiency of the middleware and the ease of integration are shown by coupling it to a programming system that encompasses the object-oriented, functional, and concurrent-declarative programming paradigms. Our measurements show that the distribution middleware is competitive with the most popular distributed programming systems (Java RMI, .NET, IBM CORBA).
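
    The pluggable consistency-protocol design described above can be pictured as a small plug-in API. The sketch below is a hypothetical Java rendering (the middleware itself is not written in Java, and all names here are illustrative): each kind of language entity maps to a protocol that mediates its operations and is notified by the failure detector.

        import java.io.Serializable;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        /** A language entity (object, cell, port, ...) made distributable. */
        interface Entity extends Serializable {}

        /** Preserves an entity's language semantics across sites. */
        interface ConsistencyProtocol {
            /** Perform an operation so the entity behaves as if it were local. */
            Object perform(Entity entity, String operation, Object... args) throws Exception;

            /** Called by the middleware's failure detector when a site dies. */
            void onSiteFailure(String siteId);
        }

        /** Registry through which new protocols are plugged into the middleware. */
        final class ProtocolRegistry {
            private final Map<Class<?>, ConsistencyProtocol> byKind = new ConcurrentHashMap<>();

            void register(Class<?> entityKind, ConsistencyProtocol protocol) {
                byKind.put(entityKind, protocol);
            }

            ConsistencyProtocol protocolFor(Entity e) {
                return byKind.get(e.getClass());
            }
        }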

    Distributed Dependency Injection

    Applications today are built from objects that collaborate to provide the application's functionality; these objects are interconnected by default and are by no means limited to a single application domain, process, or computer. This thesis extends dependency injection, a concept that lets an object explicitly declare the dependencies it requires to be provided, across domain boundaries. In support of distributed dependency injection we provide an external tool (a container) for assembling objects and resolving their dependencies (collaborators) from across domains. We provide a model in which a group of distributed dependency-injection containers connect on behalf of their applications, together with a middleware solution for seamless and fault-tolerant sharing of objects/dependencies between interconnected domains. A collection of support services (i.e., the distributed object-replication middleware) transparently manages replication of the objects created by dependency injection across multiple computers. Fast failover is ensured by keeping replicas consistent on every invocation; this is temporarily relaxed during degraded situations (e.g., network failures) in order to preserve availability within isolated groups. Recovery from failures is ensured by logging and checkpointing the state of the system on a regular basis; conflicting modifications are resolved. Our proof-of-concept implementation is an add-on to the .NET Remoting middleware and an extension to the Unity Container.
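
    The thesis's implementation extends .NET's Unity Container, but the cross-domain resolution model it describes can be sketched in a few lines of Java. Everything below is illustrative (hypothetical names, no real remoting or replication): a container resolves locally first and then asks connected peer domains.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        /** A container that resolves locally first, then asks connected domains. */
        final class DistributedContainer {
            private final Map<Class<?>, Object> local = new ConcurrentHashMap<>();
            private final List<DistributedContainer> peers = new ArrayList<>();

            <T> void register(Class<T> type, T instance) { local.put(type, instance); }

            void connect(DistributedContainer peer) { peers.add(peer); }

            /** Resolve a declared dependency, falling back to peer domains. */
            <T> T resolve(Class<T> type) {
                Object found = local.get(type);
                if (found == null) {
                    for (DistributedContainer peer : peers) {
                        found = peer.local.get(type);   // the real system would hand back
                        if (found != null) break;       // a replicated, fault-tolerant proxy
                    }
                }
                if (found == null) throw new IllegalStateException("Unresolved: " + type);
                return type.cast(found);
            }
        }

    Once two containers are connected, a type registered in one domain resolves in the other; in the actual system the returned object would be a replicated proxy managed by the object-replication middleware rather than a direct reference.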

    Real-time data acquisition, transmission and archival framework

    Most human actions are a direct response to stimuli from the five senses. In the past few decades there has been growing interest in capturing and storing the information obtained from these senses using analog and digital sensors, since stored data can be further analyzed to better understand human perception. While many devices have been created for capturing and storing data, existing software and hardware architectures target specialized devices and require expensive high-performance systems. This thesis aims to create a framework that supports capture and monitoring from a variety of sensors and can scale from low- to high-performance systems such as netbooks, laptops, and desktops. The proposed architecture was tested using aural and visual sensors because of their availability and their higher bandwidth requirements compared to other sensors. Four portable computing devices with varied hardware capabilities were used for testing, and on each system the same suite of tests was run to benchmark and analyze CPU, memory, network, and storage usage. The results showed that on all of these platforms, data from multiple video, audio, and other sensor sources could be captured in real time. Performance scaled with several factors, the most important being CPU architecture, network topology, and the data interfaces used.
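
    The framework's API is not given in the abstract, so the following is only a hypothetical Java sketch of the capture-and-archival pipeline it describes: one producer thread per sensor pushes timestamped samples into a bounded queue, and an archiver thread drains the queue to a storage or network sink, which is what lets capture scale across multiple video, audio, and other sensor sources.

        import java.io.DataOutputStream;
        import java.io.OutputStream;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.Callable;

        record Sample(String sensorId, long timestampNanos, byte[] payload) {}

        /** One producer thread per sensor; one consumer archives samples. */
        final class CapturePipeline {
            private final BlockingQueue<Sample> queue = new ArrayBlockingQueue<>(1024);

            /** Start capturing frames from one sensor into the shared queue. */
            void startSensor(String id, Callable<byte[]> readFrame) {
                Thread producer = new Thread(() -> {
                    try {
                        while (true) {
                            queue.put(new Sample(id, System.nanoTime(), readFrame.call()));
                        }
                    } catch (Exception e) {
                        Thread.currentThread().interrupt(); // stop this sensor on error
                    }
                }, "capture-" + id);
                producer.setDaemon(true);
                producer.start();
            }

            /** Drain the queue to an archival sink (file, socket, ...). */
            void startArchiver(OutputStream sink) {
                Thread consumer = new Thread(() -> {
                    try (DataOutputStream out = new DataOutputStream(sink)) {
                        while (true) {
                            Sample s = queue.take();
                            out.writeUTF(s.sensorId());
                            out.writeLong(s.timestampNanos());
                            out.writeInt(s.payload().length);
                            out.write(s.payload());
                        }
                    } catch (Exception e) {
                        Thread.currentThread().interrupt();
                    }
                }, "archiver");
                consumer.setDaemon(true);
                consumer.start();
            }
        }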

    Virtualization services: scalable methods for virtualizing multicore systems

    Multi-core technology is bringing parallel processing capabilities from servers to laptops and even handheld devices. At the same time, platform support for system virtualization is making it easier to consolidate server and client resources, when and as needed by applications. This consolidation is achieved by dynamically mapping the virtual machines on which applications run to underlying physical machines and their processing cores. Low-cost processor and I/O virtualization methods that scale efficiently to different numbers of processing cores and I/O devices are key enablers of such consolidation. This dissertation develops and evaluates new methods for scaling virtualization functionality to multi-core and future many-core systems. Specifically, it re-architects virtualization functionality to improve scalability and better exploit multi-core system resources. Results from this work include a self-virtualized I/O abstraction, which virtualizes I/O so as to flexibly use different platforms' processing and I/O resources. This flexibility affords improved performance and resource usage and, most importantly, better scalability than current I/O virtualization solutions offer. Further, by describing system virtualization as a service provided to virtual machines and to the underlying computing platform, this service can be enhanced with new and innovative functionality. For example, a virtual device may provide obfuscated data to guest operating systems to maintain data privacy; it could mask differences in device APIs or properties to deal with heterogeneous underlying resources; or it could control access to data based on the "trust" properties of the guest VM. This thesis demonstrates that extended virtualization services are superior to existing operating-system or user-level implementations of such functionality, for multiple reasons. First, this technique makes more efficient use of the key performance-limiting resources in multi-core systems: memory and I/O bandwidth. Second, it better exploits the parallelism inherent in multi-core architectures and exhibits good scalability, in part because at the hypervisor level there is greater control over precisely which resources are used, and how, to realize extended virtualization services. Improved control over resource usage makes it possible to provide value-added functionality for both guest VMs and the platform. Specific instances of virtualization services described in this thesis are a network virtualization service that exploits heterogeneous processing cores, a storage virtualization service that provides location-transparent access to block devices by extending the network virtualization service, a multimedia virtualization service that allows efficient media-device sharing based on semantic information, and an object-based storage service with enhanced access control.
    Ph.D. Committee Chair: Schwan, Karsten; Committee Members: Ahamad, Mustaq; Fujimoto, Richard; Gavrilovska, Ada; Owen, Henry; Xenidis, Jim
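
    Hypervisor-level services of this kind are normally written in C, but the "extended virtualization service" idea in the abstract above, a virtual device that filters what a guest sees based on its trust level, can be illustrated language-neutrally. The Java sketch below is purely conceptual (hypothetical names, toy obfuscation), not the dissertation's implementation.

        /** Backing block device as seen at the hypervisor level. */
        interface BlockDevice {
            byte[] read(long blockNo);
        }

        /** Trust level assigned to a guest VM by the platform. */
        enum Trust { TRUSTED, UNTRUSTED }

        /** An extended virtualization service: reads are filtered per guest. */
        final class GuardedBlockDevice implements BlockDevice {
            private final BlockDevice backing;
            private final Trust guestTrust;

            GuardedBlockDevice(BlockDevice backing, Trust guestTrust) {
                this.backing = backing;
                this.guestTrust = guestTrust;
            }

            @Override
            public byte[] read(long blockNo) {
                byte[] data = backing.read(blockNo);
                if (guestTrust == Trust.UNTRUSTED) {
                    // Hand an untrusted guest obfuscated rather than raw data.
                    for (int i = 0; i < data.length; i++) {
                        data[i] ^= 0x5A; // placeholder for a real privacy transform
                    }
                }
                return data;
            }
        }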

    Using Shared Memory as a Means to Provide Data Concurrency Across VMs in a Cloud Architecture

    As the world progresses towards full adoption of the World Wide Web, communication of data becomes more vital, especially when it must be achieved through RESTful web services, which ease the transfer of data between virtual machines on the cloud or virtual machines hosted on-premises. The main idea of the project is to show that data replication is possible between a primary host and secondary hosts, with the primary host responsible for sharing its data with the other virtual machines by means of a REST API; any change made to the data residing on the primary host must be reflected on the secondary hosts. Concepts of virtualization, cloud computing, and REST APIs are used in this paper to achieve the project's goal.
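
    The paper does not give its API, so here is a minimal sketch of the primary-host side under stated assumptions: the JDK's built-in com.sun.net.httpserver, a single /data resource (an illustrative endpoint name), PUT to update the primary's copy, and GET for secondary hosts to poll and replicate.

        import com.sun.net.httpserver.HttpServer;
        import java.io.OutputStream;
        import java.net.InetSocketAddress;
        import java.nio.charset.StandardCharsets;
        import java.util.concurrent.atomic.AtomicReference;

        public class PrimaryHost {
            // The shared data; secondaries replicate whatever is stored here.
            private static final AtomicReference<String> data = new AtomicReference<>("{}");

            public static void main(String[] args) throws Exception {
                HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
                server.createContext("/data", exchange -> {
                    if ("PUT".equals(exchange.getRequestMethod())) {
                        // Update on the primary; secondaries see it on their next poll.
                        data.set(new String(exchange.getRequestBody().readAllBytes(),
                                            StandardCharsets.UTF_8));
                        exchange.sendResponseHeaders(204, -1);
                    } else {
                        // GET: serve the current copy to a polling secondary.
                        byte[] body = data.get().getBytes(StandardCharsets.UTF_8);
                        exchange.sendResponseHeaders(200, body.length);
                        try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
                    }
                    exchange.close();
                });
                server.start();
            }
        }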

    Distributed Control Architecture

    This document describes the development and testing of a novel Distributed Control Architecture (DCA). The DCA developed during the study is an attempt to turn the components used to construct unmanned vehicles into a network of intelligent devices connected using standard networking protocols. The architecture exists at both a hardware and a software level and provides a communication channel between control modules, actuators, and sensors. A single unified mechanism for connecting sensors and actuators to the control software reduces the technical knowledge required of platform integrators and allows control systems to be constructed rapidly in a plug-and-play manner. DCA uses standard networking hardware to connect components, removing the need for custom communication channels between individual sensors and actuators. The use of a common architecture for communication between components should make it easier for software to dynamically determine a vehicle's current capabilities and increase the range of processing platforms that can be utilised. Implementations of the architecture currently exist for Microsoft Windows, Windows Mobile 5, Linux, and Microchip dsPIC30 microcontrollers. Conceptually, DCA exposes the functionality of each networked device as objects with interfaces and associated methods. Allowing each object to expose multiple interfaces allows for future upgrades without breaking existing code. In addition, the use of common interfaces should help facilitate component reuse and unit testing, and make it easier to write generic reusable software.
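
    The object-with-multiple-interfaces idea in the last paragraph maps directly onto interface versioning. A minimal Java sketch (hypothetical names; the real implementations target Windows, Linux, and dsPIC microcontrollers) shows how exposing both an old and a new interface lets existing control code keep working after a device upgrade.

        /** Original interface known to existing control software. */
        interface RangeSensorV1 { int distanceCm(); }

        /** Upgraded interface; extends V1 so old clients are unaffected. */
        interface RangeSensorV2 extends RangeSensorV1 { double distanceMeters(); }

        final class UltrasonicSensor implements RangeSensorV2 {
            @Override public int distanceCm() {
                return (int) Math.round(distanceMeters() * 100);
            }
            @Override public double distanceMeters() {
                return readHardware();
            }
            private double readHardware() { return 1.23; } // stub for the real driver
        }

        /** Generic controller written against V1 still works with the new device. */
        class Controller {
            boolean obstacleAhead(RangeSensorV1 sensor) {
                return sensor.distanceCm() < 50;
            }
        }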