    Distributed shared memory for virtual environments

    Get PDF
    This work investigated making virtual environments easier to program by designing a suitable distributed shared memory system. To be usable, the system must keep latency to a minimum, as virtual environments are very sensitive to it; the resulting design is therefore push-based and non-consistent. Another requirement is that the system should be scalable, both over large distances and over large numbers of participants. The latter is hard to achieve with current network protocols, and a proposal was made for a multicast addressing scheme more scalable than the one used in the Internet protocol. Two sample virtual environments were developed to test the ease of use of the system. This showed that the basic concept is sound, but that more support is needed. The next step should be to extend the language and add compiler support, which will enhance ease of use and allow numerous optimisations. This can be improved further by providing system-supported containers.
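
    To make the push-based, non-consistent design concrete, here is a minimal Python sketch assuming only a send(peer, message) network primitive; the class and the message format are illustrative, not the thesis' actual interface.

        class PushCell:
            """One shared variable, replicated at every participant.

            Writes are applied locally and pushed to peers at once; no
            acknowledgements or ordering are enforced, so readers may
            briefly see stale values: the non-consistent choice that
            keeps latency to a minimum.
            """

            def __init__(self, name, peers, send):
                self.name = name      # identifier of the shared variable
                self.value = None     # local replica
                self.peers = peers    # the other participants
                self.send = send      # network primitive: send(peer, message)

            def write(self, value):
                self.value = value    # apply locally first: no waiting
                for p in self.peers:  # then push the update to everyone
                    self.send(p, ("update", self.name, value))

            def read(self):
                return self.value     # always local, never blocks

            def on_update(self, name, value):
                if name == self.name: # apply a peer's pushed write
                    self.value = value

    A read never crosses the network, which is what suits latency-sensitive virtual environments; the price is that replicas may momentarily disagree.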

    Software architecture for modeling and distributing virtual environments

    Get PDF

    PRMP: a scaleable polling-based reliable multicast protocol

    Get PDF
    PhD thesis. Traditional reliable unicast protocols (e.g., TCP), known as sender-initiated schemes, do not scale well for one-to-many reliable multicast, due mainly to implosion losses caused by an excessive rate of feedback packets arriving from receivers. Recent multicast protocols have therefore been devised following the receiver-initiated approach: scalability (in terms of control traffic, protocol state and end-system processing requirements) is achieved by making the sender independent of the receivers; the sender does not know the membership of the destination group. However, this comes at a cost: the lack of knowledge about and control of receivers at the sender has negative implications for throughput, network cost (bandwidth required), and the degree of reliability offered to applications. This thesis follows an alternative approach: instead of adopting the receiver-initiated scheme, it greatly enhances the scalability of the sender-initiated scheme by means of polling-based feedback and hierarchy. The resulting protocol is named PRMP: Polling-based Reliable Multicast Protocol. Its unique implosion-avoidance mechanism polls receivers at carefully planned timing instants, achieving a low and uniformly distributed rate of feedback packets. The sender retains control of the receivers: the main PRMP mechanisms are based on a one-to-many sliding window, which efficiently and elegantly extends the abstraction from reliable unicasting to reliable multicasting. The error control mechanism of PRMP incorporates NACKs and selective, cumulative acknowledgment of packets; additionally, it can wait and judiciously decide between multicast and selective unicast retransmissions. The flow control mechanism prevents unnecessary losses caused by overrunning receivers, despite variations in round-trip times and application speeds. The scalability provided by the polling mechanism is further extended by a hierarchic organization that exploits distributed processing and local recovery: receivers are organized in a tree structure. However, unlike other tree-based protocols, PRMP is "fully hierarchic": each parent node forwards data via multicast to its children, and retains/exploits the control of and knowledge about its children while autonomously applying error, flow, congestion and session controls in the communication with them. Two congestion control mechanisms, one window-based and one rate-based, have been incorporated into PRMP. As shown through simulation experiments, the resulting protocol achieves high throughput with cost-effective reliable multicasting; the experiments also show the scalability and effectiveness of the PRMP mechanisms. PRMP can achieve reliable multicast with the same kind of reliability guarantees provided by TCP, but without incurring the prohibitive network cost or recovery latency found in other protocols. Funded by the Brazilian research agency CAPES.
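
    The one-to-many sliding window at the heart of PRMP can be pictured with a short Python sketch, assuming each polled receiver reports which packets it holds; the names and the set-based bookkeeping are invented for illustration rather than taken from the protocol.

        class MulticastWindow:
            """A sender-side window shared by all receivers: it slides
            only when the slowest receiver has acknowledged."""

            def __init__(self, receivers, window_size):
                self.window_size = window_size
                self.base = 0                  # oldest unacknowledged packet
                self.next_seq = 0              # next sequence number to send
                self.acked = {r: set() for r in receivers}

            def can_send(self):
                return self.next_seq < self.base + self.window_size

            def on_feedback(self, receiver, seqs):
                # Selective acknowledgment reported by one polled receiver.
                self.acked[receiver].update(seqs)
                while all(self.base in s for s in self.acked.values()):
                    for s in self.acked.values():
                        s.discard(self.base)
                    self.base += 1             # slide: buffer can be freed

            def gaps(self):
                # Per-receiver missing packets; a real sender would choose
                # between multicast and selective unicast retransmission
                # depending on how many receivers share each gap.
                return {r: [q for q in range(self.base, self.next_seq)
                            if q not in s]
                        for r, s in self.acked.items()}

        w = MulticastWindow(["r1", "r2"], window_size=4)
        w.next_seq = 3                         # packets 0..2 have been sent
        w.on_feedback("r1", {0, 1, 2})
        w.on_feedback("r2", {0, 2})
        print(w.base, w.gaps())                # 1 {'r1': [], 'r2': [1]}

    Because the window advances on the minimum across receivers, it doubles as a flow-control bound on the slowest participant, which is how the unicast sliding-window abstraction extends to multicast.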

    Adaptive delay-constrained internet media transport

    Get PDF
    Reliable transport-layer Internet protocols do not satisfy the requirements of packetized, real-time multimedia streams. This thesis motivates and defines predictable reliability as a novel, capacity-approaching transport paradigm that supports an application-specific level of reliability under a strict delay constraint. The paradigm is implemented in a new protocol design: the Predictably Reliable Real-time Transport protocol (PRRT). In order to predictably achieve the desired level of reliability, proactive and reactive error control must be optimized under the application's delay constraint. Hence, predictably reliable error control relies on stochastic modeling of the protocol response to the modeled packet loss behavior of the network path. The result of the joint modeling is periodically evaluated by a reliability control policy that validates the protocol configuration under the application constraints and under consideration of the available network bandwidth. The adaptation of the protocol parameters is formulated as a combinatorial optimization problem that is solved by a fast search algorithm incorporating explicit knowledge about the search space. Experimental evaluation of PRRT in real Internet scenarios demonstrates that predictably reliable transport meets the strict QoS constraints of high-quality, audio-visual streaming applications.
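
    The shape of the reliability-control search can be shown with a toy model in Python, assuming independent packet loss with probability p and a block code of k data plus n - k repair packets; PRRT's actual model also covers burst losses and retransmission rounds, so this sketch only illustrates the structure of the optimization, with invented parameter names.

        from math import comb

        def residual_loss(n, k, p):
            # Probability that a block is unrecoverable: more than
            # n - k of its n packets are lost.
            return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                       for i in range(n - k + 1, n + 1))

        def choose_config(p, k, delay_budget, per_packet_delay, target):
            # Enumerate redundancy levels whose transmission delay fits
            # the application's delay constraint and keep the cheapest
            # one that meets the target residual loss rate.
            for n in range(k, 4 * k):
                if n * per_packet_delay > delay_budget:
                    return None             # constraint cannot be met
                if residual_loss(n, k, p) <= target:
                    return (n, k)           # smallest n: least bandwidth
            return None

        print(choose_config(p=0.02, k=10, delay_budget=0.1,
                            per_packet_delay=0.002, target=1e-4))  # (13, 10)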

    Scaleable audio for collaborative environments

    Get PDF
    This thesis is concerned with supporting natural audio communication in collaborative environments across the Internet. Recent experience with Collaborative Virtual Environments, for example to support large on-line communities and highly interactive social events, suggests that in the future there will be applications in which many users speak at the same time. Such applications will generate large and dynamically changing volumes of audio traffic that can cause congestion, and hence packet loss, in the network and so seriously impair audio quality. This thesis reveals that no current approach to audio distribution can combine support for large numbers of simultaneous speakers with TCP-fair responsiveness to congestion. A model for audio distribution called Distributed Partial Mixing (DPM) is proposed that dynamically adapts both to varying numbers of active audio streams in collaborative environments and to congestion in the network. Each DPM component adaptively mixes subsets of its input audio streams into one or more mixed streams, which it then forwards to the other components along with any unmixed streams. DPM minimises the amount of mixing performed, so that end users receive as many separate audio streams as possible within prevailing network resource constraints. This is important in order to allow maximum flexibility of audio presentation (especially spatialisation) at the end user. A distributed partial mixing prototype is realised as part of the audio service in MASSIVE-3. A series of experiments over a single network link demonstrates that DPM gracefully manages the trade-off between preserving stable audio quality, responding to congestion, and achieving fairness towards competing TCP traffic. The problem of large-scale deployment of DPM over heterogeneous networks is also addressed. The thesis proposes that a shared tree of DPM servers and clients, where the nodes of the tree can perform distributed partial mixing, is an effective basis for wide-area deployment. Two models for realising this in contrasting situations are then explored in more detail: a static, centralised, subscription-based DPM service suitable for fully managed networks, and a fully distributed, self-organising DPM service suitable for unmanaged networks (such as the current Internet).
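
    A minimal sketch of the partial-mixing decision itself, in Python, assuming every stream costs the same bandwidth and that mixing is a sample-wise sum; the budget and the stream contents are illustrative.

        def partial_mix(streams, budget):
            """Forward as many streams unmixed as the budget allows and
            fold the remainder into a single summed stream."""
            if len(streams) <= budget:
                return streams, None          # enough bandwidth: no mixing
            keep = streams[:budget - 1]       # reserve one slot for the mix
            to_mix = streams[budget - 1:]
            mixed = [sum(s) for s in zip(*to_mix)]   # sample-wise sum
            return keep, mixed

        # Five active speakers but bandwidth for three streams: two are
        # forwarded separately (and remain spatialisable), three are mixed.
        streams = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]
        print(partial_mix(streams, budget=3))
        # ([[1, 2], [3, 4]], [21, 24])

    Keeping budget - 1 streams separate and folding the rest into one mix is the degenerate single-node case; the thesis distributes this decision across a tree of DPM servers and clients.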

    Improving the Scalability of High Performance Computer Systems

    Full text link
    Improving the performance of future computing systems will depend on the ability to increase the scalability of current technology. New paths need to be explored, as operating principles that have applied until now are becoming irrelevant for upcoming computer architectures. Scaling the number of cores, processors and nodes within a system appears to be the only feasible way to achieve Exascale performance. To accomplish this goal, we propose three novel techniques addressing different layers of computer systems.
    The Tightly Coupled Cluster technique significantly improves inter-node communication within compute clusters. By improving latency by an order of magnitude over existing solutions, the cost of communication is considerably reduced. This makes it possible to exploit fine-grained parallelism within applications, thereby extending scalability considerably. The mechanism virtually moves the network interconnect into the processor, bypassing the latency of the I/O interface and rendering protocol conversions unnecessary. The technique is implemented entirely in firmware and kernel-layer software on off-the-shelf AMD processors. We present a proof-of-concept implementation and real-world benchmarks to demonstrate the superior performance of our technique. In particular, our approach achieves a software-to-software communication latency of 240 ns between two remote compute nodes.
    The second part of the dissertation introduces a new framework for scalable Networks-on-Chip. A novel rapid-prototyping methodology is proposed that substantially accelerates design and implementation. Owing to its flexibility and modularity, a large application space is covered, ranging from Systems-on-Chip to high-performance many-core processors. The Network-on-Chip compiler generates complex networks in the form of synthesizable register-transfer-level code from an abstract design description. Our engine supports different target technologies, including Field Programmable Gate Arrays and Application-Specific Integrated Circuits. The framework makes it possible to build large designs while minimizing development and verification effort. Many topologies and routing algorithms are supported by partitioning the tasks into several layers and by introducing a protocol-agnostic architecture. We provide a thorough evaluation of the design that shows excellent results regarding performance and scalability.
    The third part of the dissertation addresses the processor-memory interface within computer architectures. The increasing compute power of many-core processors leads to an equally growing demand for memory bandwidth and capacity, yet current processor designs exhibit physical limitations that restrict the scalability of main memory. To address this issue, we propose a memory extension technique that attaches large amounts of DRAM to the processor via a low-pin-count interface using high-speed serial transceivers. Our technique transparently integrates the extension memory into the system architecture by providing full cache coherency, so applications can use the extension memory with regular shared-memory programming techniques. By supporting daisy-chained memory extension devices and by introducing an asymmetric probing approach, the proposed mechanism ensures high scalability. We furthermore propose a DMA offloading technique to improve the performance of the processor-memory interface. The design has been implemented in a Field Programmable Gate Array based prototype, with driver software and firmware modifications developed to bring up the prototype in a Linux-based system. Microbenchmarks prove the feasibility of our design.
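
    As an illustration of the front end of such a Network-on-Chip compiler, the Python sketch below expands an abstract description (here just the dimensions of a mesh) into router and link lists, the kind of intermediate structure from which synthesizable register-transfer-level code would then be generated; the description format is hypothetical.

        def generate_mesh(width, height):
            # Routers are named by grid coordinates; links connect
            # horizontal and vertical neighbours.
            nodes = [(x, y) for y in range(height) for x in range(width)]
            links = []
            for x, y in nodes:
                if x + 1 < width:
                    links.append(((x, y), (x + 1, y)))
                if y + 1 < height:
                    links.append(((x, y), (x, y + 1)))
            return nodes, links

        nodes, links = generate_mesh(4, 4)    # a 4x4 mesh
        print(len(nodes), "routers,", len(links), "links")  # 16 routers, 24 links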

    An End-to-End Reliable Multicast Protocol Using Polling for Scaleability

    No full text
    Reliable sender-based one-to-many protocols do not scale well, due mainly to implosion caused by an excessive rate of feedback packets arriving from receivers. We show that this problem can be circumvented by making the sender poll the receivers at carefully planned timing instants, so that the arrival rate of feedback packets is not large enough to cause implosion. We describe a generic end-to-end protocol which incorporates this polling scheme together with error and flow control mechanisms. We analyse the behaviour of our protocol using simulations, which indicate that our scheme can be effective in minimising losses due to implosion, achieving high throughput with low network cost.
    Keywords: feedback implosion, reliable multicast, transport protocols.
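
    A sketch of the polling idea in Python, assuming known one-way delays and a feedback-rate cap chosen by the sender; the function and its parameters are illustrative, not the protocol's actual planning rule.

        def plan_polls(one_way_delay, max_feedback_rate, start=0.0):
            """Return (receiver, poll_time) pairs so that replies reach
            the sender spaced at least 1/max_feedback_rate apart."""
            spacing = 1.0 / max_feedback_rate
            schedule, next_arrival = [], start
            for r, d in sorted(one_way_delay.items(), key=lambda kv: kv[1]):
                arrival = max(next_arrival, start + 2 * d)  # earliest reply
                schedule.append((r, arrival - 2 * d))       # when to poll
                next_arrival = arrival + spacing
            return schedule

        delays = {"r1": 0.010, "r2": 0.040, "r3": 0.025}    # seconds
        for r, t in plan_polls(delays, max_feedback_rate=20):
            print(r, round(t, 3))   # r1 0.0, r3 0.02, r2 0.04

    Replies then arrive at 0.02, 0.07 and 0.12 seconds, at most 20 per second, so the feedback load at the sender stays bounded no matter how many receivers are polled.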

    Adaptive object management for distributed systems

    Get PDF
    This thesis describes an architecture supporting the management of pluggable software components and evaluates it against the requirements for an enterprise integration platform for the manufacturing and petrochemical industries. In a distributed environment, we need mechanisms to manage objects and their interactions. At the least, we must be able to create objects in different processes on different nodes; we must be able to link them together so that they can pass messages to each other across the network; and we must deliver their messages in a timely and reliable manner. Object-based environments which support these services already exist, for example ANSAware (ANSA, 1989), DEC's ObjectBroker (ACA, 1992) and Iona's Orbix (Orbix, 1994). Yet such environments provide limited support for composing applications from pluggable components. Pluggability is the ability to install and configure a component into an environment dynamically when the component is used, without specifying static dependencies between components when they are produced. Pluggability is supported to a degree by dynamic binding: components may be programmed to import references to other components and to explore their interfaces at runtime, without using static type dependencies. Yet this overloads the component with the responsibility to explore bindings. What is still generally missing is an efficient general-purpose binding model for managing bindings between independently produced components. In addition, existing environments provide no clear strategy for dealing with fine-grained objects: the overhead of runtime binding and remote messaging severely reduces performance where there are many objects with complex patterns of interaction. We need an adaptive approach to managing configurations of pluggable components according to the needs and constraints of the environment. Management is made difficult by embedding bindings in component implementations and by relying on strong typing as the only means of verifying and validating bindings. To solve these problems we have built a set of configuration tools on top of an existing distributed support environment. Specification tools facilitate the construction of independent pluggable components; visual composition tools facilitate the configuration of components into applications and the verification of composite behaviours. A configuration model is constructed which maintains the environmental state, and adaptive management is made possible by changing the management policy according to this state. Such policy changes affect the location of objects, their bindings, and the choice of messaging system.
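
    The late-binding idea behind the configuration tools can be reduced to a few lines of Python, assuming a registry that resolves an interface name to an implementation only when the component is used; all names here are invented for illustration.

        class Registry:
            def __init__(self):
                self._impls = {}

            def install(self, interface, factory):
                # Plug a component in at runtime; clients carry no
                # static dependency on this implementation.
                self._impls[interface] = factory

            def bind(self, interface):
                # Resolve the binding only when the component is used.
                try:
                    return self._impls[interface]()
                except KeyError:
                    raise LookupError(f"no component provides {interface!r}")

        registry = Registry()
        registry.install("logger", lambda: print)  # swap in any implementation
        log = registry.bind("logger")
        log("bound at runtime, not at build time")

    Moving the binding decision out of component implementations and into such a model is what lets a management policy relocate objects or change the messaging system without touching the components themselves.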