
    End-to-end latency and temporal consistency analysis in networked real-time systems

    Critical embedded systems are often designed as a set of real-time tasks, running on shared computing modules and communicating through networks. Because of their critical nature, such systems must meet strict timing properties. To help designers prove the correctness of their systems, the real-time systems community has developed numerous approaches for analysing worst-case scenarios, either on the processors (e.g., the worst-case response time of a task) or on the networks (e.g., the worst-case traversal time of a message). These approaches provide results only for local component behaviours. However, there is a growing need for a global view of the system in order to determine end-to-end properties. Such properties apply to functional chains, which describe the behaviour of sequences of tasks. We propose an approach to analyse worst-case behaviour along functional chains in critical embedded systems. It is based on mixed integer linear programming (MILP) and is general in the sense that it can be applied to a variety of end-to-end properties. This paper focuses on two essential properties: end-to-end latency and temporal consistency. This work was supported by the French National Research Agency within the SATRIMMAP project.
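
    The abstract does not reproduce the MILP formulation itself. For orientation, a classical coarse bound on the end-to-end latency of a register-based chain of n periodic tasks simply sums, per stage, a worst-case sampling delay and a worst-case response time; the symbols below are mine, and an MILP analysis of the kind described aims to tighten exactly this sort of bound:

        \[
            L_{\text{chain}} \;\le\; \sum_{i=1}^{n} \left( P_i + R_i \right)
        \]

    where P_i is the period of task i (the worst-case wait before freshly written data is sampled at its input) and R_i is its worst-case response time (or, for a network stage, the worst-case traversal time of the message).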

    Bounding inconsistency using a novel threshold metric for dead reckoning update packet generation

    Human-to-human interaction across distributed applications requires that sufficient consistency be maintained among participants in the face of network characteristics such as latency and limited bandwidth. The level of inconsistency arising from the network is proportional to the network delay, and thus a function of bandwidth consumption. Distributed simulation has often used a bandwidth reduction technique known as dead reckoning, which combines approximation and estimation in the communication of entity movement to reduce network traffic and thus improve consistency. However, unless carefully tuned to application and network characteristics, such an approach can introduce more inconsistency than it avoids. The key tuning metric is the distance threshold. This paper questions the suitability of the standard distance threshold as a metric for use in the dead reckoning scheme. Using a model relating entity path curvature and inconsistency, a major performance-related limitation of the distance threshold technique is highlighted. We then propose an alternative time-space threshold criterion. The time-space threshold is demonstrated, through simulation, to perform better for low curvature movement. However, it too has a limitation. Based on this, we further propose a novel hybrid scheme. Through simulation and live trials, this scheme is shown to perform well across a range of curvature values, and to place bounds on both the spatial and absolute inconsistency arising from dead reckoning.
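
    The abstract names the threshold criteria but not their exact form. The following is a minimal sender-side sketch, assuming a first-order (position plus velocity) dead reckoning model; the function names, the hybrid formulation, and the default values are hypothetical illustrations, not the paper's definitions:

        import math

        def dr_extrapolate(last_pos, last_vel, dt):
            # First-order dead reckoning: the position remote peers currently
            # render, extrapolated from the last transmitted state.
            return tuple(p + v * dt for p, v in zip(last_pos, last_vel))

        def should_send_update(true_pos, last_pos, last_vel, dt,
                               dist_threshold=1.0, max_silence=0.5):
            # Hybrid criterion (hypothetical formulation): transmit when the
            # spatial error of the dead-reckoned estimate exceeds the distance
            # threshold, or when dt seconds of silence would let absolute
            # (time-space) inconsistency grow even though spatial error is small.
            predicted = dr_extrapolate(last_pos, last_vel, dt)
            spatial_error = math.dist(true_pos, predicted)
            return spatial_error > dist_threshold or dt > max_silence

    A pure distance threshold corresponds to dropping the max_silence term, which is what leaves absolute inconsistency unbounded for slow, low-curvature movement.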

    Exploring the use of local consistency measures as thresholds for dead reckoning update packet generation

    Human-to-human interaction across distributed applications requires that sufficient consistency be maintained among participants in the face of network characteristics such as latency and limited bandwidth. Techniques and approaches for reducing bandwidth usage can minimise network delays by reducing network traffic and therefore better exploiting the available bandwidth. However, these approaches induce inconsistencies that fall within the level of human perception. Dead reckoning is a well-known technique for reducing the number of update packets transmitted between participating nodes. It employs a distance threshold for deciding when to generate update packets. This paper questions the use of such a distance threshold in the context of absolute consistency and highlights a major drawback of the technique. An alternative threshold criterion based on time and distance is examined and compared to the distance-only threshold. A drawback of this proposed technique is also identified, and a hybrid threshold criterion is then proposed. However, the trade-off between spatial and temporal inconsistency remains.

    Analysis domain model for shared virtual environments

    The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions with great functional overlap. However, there is little interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges for the development process, starting with the architectural design of the underlying system. This paper makes two main contributions. The first is a broad domain analysis of shared virtual environments, which enables developers to better understand the whole rather than just the part(s). The second is a reference domain model for discussing and describing solutions: the Analysis Domain Model.

    On Consistency and Network Latency in Distributed Interactive Applications: A Survey—Part I

    This paper is the first part of a two-part paper that documents a detailed survey of the research carried out on consistency and latency in distributed interactive applications (DIAs) in recent decades. Part I reviews the terminology associated with DIAs and offers definitions for consistency and latency. Related issues such as jitter and fidelity are also discussed. Furthermore, the various consistency maintenance mechanisms that researchers have used to improve consistency and reduce latency effects are considered. These mechanisms are grouped into one of three categories, namely time management, information management, and system architectural management. This paper presents the techniques associated with the time management category. Examples of such mechanisms include time warp, lock-step synchronisation, and predictive time management. The remaining two categories are presented in Part II of the survey.
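
    To make one of the named time management mechanisms concrete, here is an illustrative single-process model of lock-step synchronisation; the latency values and jitter model are hypothetical, not drawn from the survey:

        import random

        def lockstep_frame_times(num_frames=100, peer_latencies=(0.02, 0.05, 0.12)):
            # Lock-step synchronisation: a node may only simulate frame k once
            # the frame-k inputs from every peer have arrived, so each frame
            # costs at least the slowest peer's delivery delay. Consistency is
            # guaranteed, but the whole session is paced by the worst link.
            clock, completion_times = 0.0, []
            for _ in range(num_frames):
                slowest_arrival = max(lat + random.uniform(0.0, 0.01)
                                      for lat in peer_latencies)
                clock += slowest_arrival
                completion_times.append(clock)
            return completion_times

    Mechanisms such as time warp and predictive time management exist precisely to escape this pacing: they simulate optimistically or predictively and repair or mask the resulting inconsistency.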

    An Information-Theoretic Framework for Consistency Maintenance in Distributed Interactive Applications

    Distributed Interactive Applications (DIAs) enable geographically dispersed users to interact with each other in a virtual environment. A key factor in the success of a DIA is the maintenance of a consistent view of the shared virtual world for all participants. However, maintaining consistent states in DIAs is difficult under real network conditions. State changes communicated by messages over such networks suffer latency, leading to inconsistency across the application. Predictive Contract Mechanisms (PCMs) combat this problem by reducing the number of messages transmitted in return for perceptually tolerable inconsistency. This thesis examines the operation of PCMs using concepts and methods derived from information theory. This information theory perspective results in a novel information model of PCMs that quantifies and analyzes the efficiency of such methods in communicating the reduced state information, and a new adaptive multiple-model-based framework for improving consistency in DIAs. The first part of this thesis introduces information measurements of user behavior in DIAs and formalizes the information model for PCM operation. In presenting the information model, the statistical dependence in the entity state, which makes it possible to use extrapolation models to predict future user behavior, is evaluated. The efficiency of a PCM in exploiting such predictability to reduce the amount of network resources required to maintain consistency is also investigated. It is demonstrated that, from the information theory perspective, PCMs can be interpreted as a form of information reduction and compression. The second part of this thesis proposes an Information-Based Dynamic Extrapolation Model for dynamically selecting between extrapolation algorithms based on information evaluation and inferred network conditions. This model adapts PCM configurations to both user behavior and network conditions, and makes the most information-efficient use of the available network resources. In doing so, it improves PCM performance and consistency in DIAs.
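
    The thesis's exact information measurements are not given in the abstract; as a hypothetical illustration of the viewpoint it describes, one can quantise the residual error of an extrapolation model and measure its entropy, so that a predictable entity leaves little information still to be transmitted (the function names and the quantisation scheme below are my own):

        import math
        from collections import Counter

        def entropy_bits(symbols):
            # Shannon entropy of an observed symbol stream, in bits per symbol.
            counts, n = Counter(symbols), len(symbols)
            return -sum((c / n) * math.log2(c / n) for c in counts.values())

        def residual_entropy(positions, predictor, quantum=0.1):
            # Quantise the prediction error of an extrapolation model and
            # measure its entropy. A PCM that exploits the predictability of
            # user motion leaves a low-entropy residual, i.e. little
            # information that still has to cross the network.
            residuals = [round((p - predictor(i)) / quantum)
                         for i, p in enumerate(positions)]
            return entropy_bits(residuals)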

    Dynamic data placement and discovery in wide-area networks

    The workloads of online services and applications such as social networks, sensor data platforms and web search engines have become increasingly global and dynamic, setting new challenges to providing users with low latency access to data. To achieve this, these services typically leverage a multi-site wide-area networked infrastructure. Data access latency in such an infrastructure depends on the network paths between users and data, which are determined by the data placement and discovery strategies. Current strategies are static; they offer low latencies upon deployment but perform worse under a dynamic workload. We propose dynamic data placement and discovery strategies for wide-area networked infrastructures, which adapt to the data access workload. We achieve this with data activity correlation (DAC), an application-agnostic approach for determining the correlations between data items based on access pattern similarities. By dynamically clustering data according to DAC, network traffic in clusters is kept local. We utilise DAC as a key component in reducing access latencies for two application scenarios, emphasising different aspects of the problem. The first scenario assumes the fixed placement of data at sites, and thus focusses on data discovery. This is the case for a global sensor discovery platform, which aims to provide low latency discovery of sensor metadata. We present a self-organising hierarchical infrastructure consisting of multiple DAC clusters, maintained with an online and distributed split-and-merge algorithm. This reduces the number of sites visited, and thus latency, during discovery for a variety of workloads. The second scenario focusses on data placement. This is the case for global online services that leverage a multi-data centre deployment to provide users with low latency access to data. We present a geo-dynamic partitioning middleware, which maintains DAC clusters with an online elastic partition algorithm. It supports the geo-aware placement of partitions across data centres according to the workload. This provides globally distributed users with low latency access to data for static and dynamic workloads.
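
    The abstract does not define the DAC correlation measure itself; a minimal sketch of the general idea, using per-site access counts and cosine similarity (both the representation and the function names are my own, not the thesis's):

        import math

        def access_vector(access_log, item, sites):
            # Per-site access counts for one data item, from a log of
            # (site, item) pairs.
            return [sum(1 for s, i in access_log if s == site and i == item)
                    for site in sites]

        def dac_similarity(vec_a, vec_b):
            # Cosine similarity of two items' access patterns: items accessed
            # from the same places at similar rates score near 1, and a
            # DAC-style clustering would co-locate them so that their traffic
            # stays local to one cluster or data centre.
            dot = sum(a * b for a, b in zip(vec_a, vec_b))
            norm_a = math.sqrt(sum(a * a for a in vec_a))
            norm_b = math.sqrt(sum(b * b for b in vec_b))
            return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0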

    A Stochastic Model of Plausibility in Live-Virtual-Constructive Environments

    Distributed live-virtual-constructive simulation promises a number of benefits for the test and evaluation community, including reduced costs, access to simulations of limited availability assets, the ability to conduct large-scale multi-service test events, and recapitalization of existing simulation investments. However, geographically distributed systems are subject to fundamental state consistency limitations that make assessing the data quality of live-virtual-constructive experiments difficult. This research presents a data quality model based on the notion of plausible interaction outcomes. This model explicitly accounts for the lack of absolute state consistency in distributed real-time systems and offers system designers a means of estimating data quality and fitness for purpose. Experiments with World of Warcraft player trace data validate the plausibility model and exceedance probability estimates. Additional experiments with synthetic data illustrate the model's use in ensuring fitness for purpose of live-virtual-constructive simulations and estimating the quality of data obtained from live-virtual-constructive experiments.
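
    The exceedance probabilities mentioned can be estimated empirically from trace data; the following is a standard empirical estimator, not necessarily the dissertation's exact formulation:

        def exceedance_probability(state_errors, threshold):
            # Fraction of observed state errors -- e.g. distances between a
            # live entity's true state and its remotely rendered image --
            # that exceed a given plausibility threshold. Small values mean
            # most interactions were resolved on plausible state.
            errors = list(state_errors)
            return sum(1 for e in errors if e > threshold) / len(errors)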

    A Presence- and Performance-Driven Framework to Investigate Interactive Networked Music Learning Scenarios

    Cooperative music making in networked environments has been the subject of extensive research, both scientific and artistic. Networked music performance (NMP) is attracting renewed interest thanks to the growing availability of effective technology and tools for computer-based communication, especially in the area of distance and blended learning applications. We propose a conceptual framework for NMP research and design in the context of classical chamber music practice and learning: presence-related constructs and objective quality metrics are used to problematize and systematize the many factors affecting the experience of studying and practicing music in a networked environment. To this end, a preliminary NMP experiment on the effect of latency on the experience and performance quality of chamber music duos is introduced. The degree of involvement, perceived coherence, and immersion of the NMP environment are combined with measures of the networked performance, including tempo trends and misalignments from the shared score. Early results on the impact of temporal factors on NMP musical interaction are outlined, and their methodological implications for the design of pedagogical applications are discussed.
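
    The study's tempo metrics are not specified in the abstract; one simple way (my own, not necessarily the authors') to expose latency-induced tempo trends is to derive an instantaneous tempo curve from successive note onsets:

        def tempo_curve(onset_times_s, beats_per_onset=1.0):
            # Instantaneous tempo (beats per minute) from successive note-onset
            # times in seconds; a drifting curve over a performance makes the
            # tempo deceleration that network latency tends to induce in a
            # networked duo directly visible.
            return [60.0 * beats_per_onset / (later - earlier)
                    for earlier, later in zip(onset_times_s, onset_times_s[1:])]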