
    Supporting a Closely Coupled Task between a Distributed Team: Using Immersive Virtual Reality Technology

    Collaboration and teamwork are important in many areas of our lives. People come together to share and discuss ideas, split and distribute work, or help and support each other. The sharing of information and artefacts is a central part of collaboration. This often involves the manipulation of shared objects, both sequentially and concurrently. Coordinating efficient collaboration requires communication between the team members. This can happen verbally, in the form of speech or text, and non-verbally, through gesturing, pointing, gaze or facial expressions, and the referencing and manipulation of shared objects. Collaborative Virtual Environments (CVEs) allow remote users to come together and interact with each other and with virtual objects within a computer-simulated environment. Immersive display interfaces, such as a walk-in display (e.g. CAVE), that place a human physically into the synthetic environment lend themselves well to supporting natural manipulation of objects as well as a range of natural non-verbal human communication, as they can both capture and display human movement. Communication of tracking data, however, can saturate the network and result in delay or loss of messages vital to the manipulation of shared objects. This paper investigates the reality of shared object manipulation between remote users collaborating through linked walk-in displays and extends our research in [27]. Various forms of shared interaction are examined through a set of structured sub-tasks within a representative construction task. We report on extensive user trials between three walk-in displays in the UK and Austria linked over the Internet using a CVE, and demonstrate the effects of these network issues on a naive implementation of a benchmark application, the Gazebo building task. We then present and evaluate application-level workarounds and conclude by suggesting solutions that may be implemented within next-generation CVE infrastructures.
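
    The abstract does not detail the application-level workarounds, but one common way to keep tracking data from saturating the link is to filter updates at the sender, forwarding a new pose only when it has moved enough or when a maximum quiet interval has elapsed. The sketch below is a minimal illustration of that idea, not the paper's implementation; the thresholds and the `send` callback are assumptions.

```python
import math
import time

class TrackingUpdateFilter:
    """Forward a tracked pose only when it has moved enough or a deadline passed.

    Illustrative sketch of an application-level workaround for tracking-data
    saturation; the thresholds and the send() callback are assumptions, not
    the paper's actual mechanism.
    """

    def __init__(self, send, min_move=0.02, max_silence=0.2):
        self.send = send                # callback that transmits a pose update
        self.min_move = min_move        # metres of movement worth reporting
        self.max_silence = max_silence  # seconds before a keep-alive update
        self.last_pos = None
        self.last_time = 0.0

    def update(self, position):
        """Called every tracker frame with an (x, y, z) position."""
        now = time.monotonic()
        if self.last_pos is not None:
            moved = math.dist(position, self.last_pos)
            if moved < self.min_move and now - self.last_time < self.max_silence:
                return  # suppress an insignificant update
        self.send(position)
        self.last_pos = position
        self.last_time = now
```

    The trade-off is a small, bounded positional error in exchange for a much lower message rate on the shared-object channel.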

    Lessons learned from the design of a mobile multimedia system in the Moby Dick project

    Recent advances in wireless networking technology and the exponential development of semiconductor technology have engendered a new paradigm of computing, called personal mobile computing or ubiquitous computing. This offers a vision of the future with a much richer and more exciting set of architecture research challenges than extrapolations of the current desktop architectures. In particular, these devices will have limited battery resources, will handle diverse data types, and will operate in environments that are insecure, dynamic and which vary significantly in time and location. The research performed in the MOBY DICK project is about designing such a mobile multimedia system. This paper discusses the approach taken in the MOBY DICK project to solve some of these problems, discusses its contributions, and assesses what was learned from the project.

    DIVE on the internet

    This dissertation reports research and development of a platform for Collaborative Virtual Environments (CVEs). It has particularly focused on two major challenges: supporting the rapid development of scalable applications and easing their deployment on the Internet. This work employs a research method based on prototyping and refinement and promotes the use of this method for application development. A number of the solutions herein are in line with other CVE systems. One of the strengths of this work lies in its global approach to the issues raised by CVEs and the recognition that such complex problems are best tackled using a multi-disciplinary approach that understands both user and system requirements. CVE application deployment is aided by an overlay network that is able to complement any IP multicast infrastructure in place. Apart from complementing a weakly deployed worldwide multicast, this infrastructure provides for a certain degree of introspection, remote control and visualisation. As such, it forms an important aid in assessing the scalability of running applications. This scalability is further facilitated by specialised object distribution algorithms and an open framework for the implementation of novel partitioning techniques. CVE application development is eased by a scripting language, which enables rapid development and favours experimentation. This scripting language interfaces many aspects of the system and enables the prototyping of distribution-related components as well as user interfaces. It is the key construct of a distributed environment to which components, written in different languages, connect and on which they operate in a network-abstracted manner. The solutions proposed are exemplified and strengthened by three collaborative applications. The Dive room system is a virtual environment modelled after the room metaphor and supporting asynchronous and synchronous cooperative work. WebPath is a companion application to a Web browser that seeks to make the current history of page visits more visible and usable. Finally, the London travel demonstrator supports travellers by providing an environment where they can explore the city, utilise group collaboration facilities, rehearse particular journeys and access tourist information data.

    An Information-Theoretic Framework for Consistency Maintenance in Distributed Interactive Applications

    Distributed Interactive Applications (DIAs) enable geographically dispersed users to interact with each other in a virtual environment. A key factor in the success of a DIA is the maintenance of a consistent view of the shared virtual world for all the participants. However, maintaining consistent states in DIAs is difficult over real networks: state changes communicated by messages suffer latency, leading to inconsistency across the application. Predictive Contract Mechanisms (PCMs) combat this problem by reducing the number of messages transmitted in return for perceptually tolerable inconsistency. This thesis examines the operation of PCMs using concepts and methods derived from information theory. This information theory perspective results in a novel information model of PCMs that quantifies and analyzes the efficiency of such methods in communicating the reduced state information, and a new adaptive multiple-model-based framework for improving consistency in DIAs. The first part of this thesis introduces information measurements of user behavior in DIAs and formalizes the information model for PCM operation. In presenting the information model, the statistical dependence in the entity state, which makes it possible to use extrapolation models to predict future user behavior, is evaluated. The efficiency of a PCM in exploiting such predictability to reduce the amount of network resources required to maintain consistency is also investigated. It is demonstrated that, from the information theory perspective, PCMs can be interpreted as a form of information reduction and compression. The second part of this thesis proposes an Information-Based Dynamic Extrapolation Model for dynamically selecting between extrapolation algorithms based on information evaluation and inferred network conditions. This model adapts PCM configurations to both user behavior and network conditions, and makes the most information-efficient use of the available network resources. In doing so, it improves PCM performance and consistency in DIAs.
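
    As a concrete illustration of a Predictive Contract Mechanism, the sketch below shows threshold-based dead reckoning: sender and receiver extrapolate an entity's position from the last transmitted state, and the sender transmits a correction only when the true state drifts beyond an error bound. The state layout, the fixed threshold, and the class names are illustrative assumptions; the choice between first- and second-order extrapolation only hints at the kind of decision the thesis's information-based dynamic model makes.

```python
import math

def extrapolate(state, dt, order=1):
    """Predict a position from the last transmitted state.

    order=1 uses velocity only; order=2 also uses acceleration. Which order
    to trust is the kind of choice an information-based dynamic extrapolation
    model would make adaptively.
    """
    x, v, a = state["pos"], state["vel"], state["acc"]
    pred = [xi + vi * dt for xi, vi in zip(x, v)]
    if order == 2:
        pred = [pi + 0.5 * ai * dt * dt for pi, ai in zip(pred, a)]
    return pred

class DeadReckoningSender:
    """Send a state update only when the shared prediction drifts too far."""

    def __init__(self, send, threshold=0.1, order=1):
        self.send = send            # callback transmitting a state dict
        self.threshold = threshold  # tolerated positional error (illustrative)
        self.order = order
        self.last_sent = None
        self.elapsed = 0.0

    def tick(self, true_state, dt):
        """Called every simulation step with the entity's true state."""
        self.elapsed += dt
        if self.last_sent is not None:
            predicted = extrapolate(self.last_sent, self.elapsed, self.order)
            error = math.dist(predicted, true_state["pos"])
            if error < self.threshold:
                return  # the receiver's extrapolation is still close enough
        self.send(true_state)
        self.last_sent = {k: list(v) for k, v in true_state.items()}
        self.elapsed = 0.0
```

    Fewer messages are exchanged at the cost of a bounded, perceptually tolerable error, which is exactly the trade-off the information model quantifies.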

    Accelerating orchestration with in-network offloading

    The demand for low-latency Internet applications has pushed functionality that was originally placed in commodity hardware into the network. Whether in the form of binaries for the programmable data plane or of virtualised network functions, services are implemented within the network fabric with the aim of improving their performance and placing them close to the end user. Training of machine learning algorithms, aggregation of network traffic, and virtualised radio access components are just some of the functions that have been deployed within the network. Therefore, as the network fabric becomes the accelerator for various applications, it is imperative that the orchestration of their components is also adapted to the constraints and capabilities of the deployment environment. This work identifies performance limitations of in-network compute use cases for both cloud and edge environments and makes suitable adaptations. Within cloud infrastructure, this thesis proposes a platform that relies on programmable switches to accelerate the performance of data replication. It then proceeds to discuss design adaptations of an orchestrator that will allow in-network data offloading and enable accelerated service deployment. At the edge, the topic of inefficient orchestration of virtualised network functions is explored, mainly with respect to energy usage and resource contention. An orchestrator is adapted to schedule requests by taking into account edge constraints in order to minimise resource contention and accelerate service processing times. With data transfers consuming valuable resources at the edge, an efficient data representation mechanism is implemented to provide statistical insight into the provenance of data at the edge and enable smart query allocation to nodes with relevant data. Taking into account the previous state of the art, the proposed data plane replication method appears to be the most computationally efficient and scalable in-network data replication platform available, with significant improvements in throughput and up to an order of magnitude decrease in latency. The orchestrator of virtual network functions at the edge was shown to reduce event rejections, total processing time, and energy consumption imbalances compared with the default orchestrator, demonstrating more efficient use of the infrastructure. Lastly, computational cost at the edge was further reduced with the use of the proposed query allocation mechanism, which minimised redundant engagement of nodes.
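
    The abstract does not spell out the edge scheduling policy, so the sketch below is only a hedged illustration of constraint-aware placement: a request is assigned to the node that can satisfy its CPU demand while keeping load and energy use balanced, and is rejected only when no node qualifies. The node attributes, the scoring formula, and the energy cost model are all assumptions, not the thesis's orchestrator.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EdgeNode:
    name: str
    cpu_free: float      # available CPU share, 0.0 - 1.0
    energy_used: float   # normalised energy consumption so far

@dataclass
class Request:
    cpu_demand: float

def place(request: Request, nodes: list[EdgeNode]) -> Optional[EdgeNode]:
    """Pick the feasible node with the lowest contention/energy score.

    Illustrative only: the real orchestrator's constraints and weighting
    are not described in the abstract.
    """
    feasible = [n for n in nodes if n.cpu_free >= request.cpu_demand]
    if not feasible:
        return None  # reject: no node can host the request
    # Prefer nodes with spare CPU and low energy use to even out imbalances.
    best = min(feasible, key=lambda n: (1.0 - n.cpu_free) + n.energy_used)
    best.cpu_free -= request.cpu_demand
    best.energy_used += 0.1 * request.cpu_demand  # assumed energy cost model
    return best
```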

    Continuum: an architecture for user evolvable collaborative virtual environments

    Continuum is a software platform for collaborative virtual environments. Continuum's architecture supplies a world model and defines how to combine object state, behavior code, and resource data into this single shared structure. The system frees distributed users from the constraints of monolithic centralized virtual world architectures and instead allows individual users to extend and evolve the virtual world by creating and controlling their own individual pieces of the larger world model. The architecture provides support for data distribution, code management, resource management, and rapid deployment through standardized viewers. This work not only provides this architecture, but also includes a proven implementation and the associated development tools to allow for the creation of these worlds.
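
    To make the shared structure more concrete, the sketch below models a world entry that bundles object state, a reference to behavior code, and resource data, with an owner field so individual users can extend and control their own pieces of the world. All names are hypothetical; Continuum's actual object model is not given in the abstract.

```python
from dataclasses import dataclass, field

@dataclass
class WorldObject:
    """One user-owned piece of the shared world model (illustrative only)."""
    object_id: str
    owner: str                                      # user who may evolve this object
    state: dict = field(default_factory=dict)       # replicated object state
    behavior_ref: str = ""                          # reference to downloadable behavior code
    resources: dict = field(default_factory=dict)   # e.g. mesh and texture identifiers

class WorldModel:
    """Single shared structure combining state, behavior code and resources."""

    def __init__(self):
        self.objects: dict[str, WorldObject] = {}

    def add(self, obj: WorldObject):
        self.objects[obj.object_id] = obj

    def update_state(self, object_id: str, user: str, changes: dict):
        obj = self.objects[object_id]
        if user != obj.owner:
            raise PermissionError("only the owning user may evolve this object")
        obj.state.update(changes)
```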

    Improving the performance of software-defined networks using dynamic flow installation and management techniques

    As computer networks evolve, they become more complex, introducing several challenges in the areas of performance and management. Such problems can lead to stagnation in network innovation. The Software Defined Networks (SDN) framework could be one of the best candidates for improving and revolutionising networking by giving full control to network administrators to implement new management and performance optimisation techniques. This thesis examines performance issues faced in SDN due to the introduction of the SDN controller. These issues include the extra delay due to the round-trip time between the switch and the controller, as well as the fact that some packets arrive at the destination out of order. We propose a novel dynamic flow installation and management algorithm (OFPE) using the SDN protocol OpenFlow, which keeps the controller in a non-overloaded CPU state and allows it to dynamically add and adjust flow table rules to reduce packet delay and out-of-order packets. In addition, we propose OFPEX, an extension of the OFPE algorithm that includes techniques for managing multi-switch environments as well as methods that use packet inter-arrival times to categorise and serve packet flows. Such techniques allow topology awareness, helping the controller to install flow table rules in such a way as to form optimal routes for high-priority flows, thus increasing network performance. For the performance evaluation of the proposed algorithms, both hardware testbed and emulation experiments have been conducted. The performance results indicate that the OFPE algorithm achieves a significant enhancement in performance in the form of reduced delay by up to 92.56% (depending on the scenario), reduced packet loss by up to 55.32%, and reduced out-of-order packets by up to 69.44%. Furthermore, we propose a novel placement algorithm for distributed Mininet implementations which uses weights in order to distribute the experiment components appropriately across the distributed machines. The proposed algorithm uses static code analysis to examine the experimental code and measures the capabilities of the physical components in order to create a weights table, which is then used to distribute the experiment components properly. The evaluation of the proposed algorithm indicated reductions in delay and packet loss of up to 65.51% and 86.35% respectively, as well as a decrease in the standard deviation of CPU usage by up to 88.63%. These results indicate that the proposed algorithm distributes the experiment components evenly across the available resources. Finally, we propose a series of benchmarking tests that can be used to rate the available SDN experimental platforms. These tests allow the selection of the appropriate experimental platform according to the scenario's needs and indicate the resources needed by each platform.
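
    As a rough illustration of the weights idea behind the distributed Mininet placement, the sketch below assigns experiment components to physical machines greedily, always choosing the machine with the most remaining capacity relative to its measured capability. The component weights would come from static code analysis and the machine scores from capability measurements; the function name and the formulas here are placeholders, not the thesis's algorithm.

```python
def place_components(component_weights: dict[str, float],
                     machine_capacities: dict[str, float]) -> dict[str, str]:
    """Greedy weight-based placement (illustrative placeholder).

    component_weights: per-component cost, e.g. estimated by static code analysis.
    machine_capacities: per-machine capability score from benchmarking.
    Returns a mapping of component name -> machine name.
    """
    remaining = dict(machine_capacities)
    placement: dict[str, str] = {}
    # Place the heaviest components first so large items get the roomiest hosts.
    for comp, weight in sorted(component_weights.items(),
                               key=lambda kv: kv[1], reverse=True):
        # Choose the machine with the largest remaining normalised capacity.
        target = max(remaining, key=lambda m: remaining[m] / machine_capacities[m])
        placement[comp] = target
        remaining[target] -= weight
    return placement

# Example usage with made-up weights:
# place_components({"switch1": 3.0, "host1": 1.0, "controller": 5.0},
#                  {"pcA": 10.0, "pcB": 6.0})
```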

    A Roadmap for HEP Software and Computing R&D for the 2020s

    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.

    Scalable Storage for Digital Libraries

    I propose a storage system optimised for digital libraries. Its key features are its heterogeneous scalability; its integration and exploitation of rich semantic metadata associated with digital objects; its use of a name space; and its aggressive performance optimisation in the digital library domain.