
    A Reusable Component for Communication and Data Synchronization in Mobile Distributed Interactive Applications

    Full text link
    In Distributed Interactive Applications (DIAs) such as multiplayer games, where many participants take part in the same game session and communicate through a network, the participants may have an inconsistent view of the virtual world because of communication delays across the network. This issue becomes even more challenging when communicating through a cellular network while executing the DIA client on a mobile terminal. Consistency maintenance algorithms may be used to obtain a uniform view of the virtual world, but these algorithms are complex and hard to program, which makes the implementation and future evolution of the application logic code difficult. To solve this problem, we propose an approach in which the consistency concerns are handled separately by a distributed component called a Synchronization Medium, which is responsible for communication management as well as consistency maintenance. We present the detailed architecture of the Synchronization Medium and the generic interfaces it offers to DIAs. We evaluate our approach both qualitatively and quantitatively. We first demonstrate that the Synchronization Medium is a reusable component through the development of two game applications, a car racing game and a space war game. A performance evaluation then shows that the overhead introduced by the Synchronization Medium remains acceptable. Comment: In Proceedings WCSI 2010, arXiv:1010.233
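
    As a minimal illustration of the separation of concerns described above, the sketch below shows what a generic synchronization interface offered to a DIA client might look like. The names and method signatures are assumptions made for illustration; they are not the paper's actual API.

```python
# Hypothetical sketch of the interface a Synchronization Medium could expose
# to a DIA client; names and signatures are illustrative, not the actual API.
from abc import ABC, abstractmethod
from typing import Callable


class SynchronizationMedium(ABC):
    """Encapsulates communication management and consistency maintenance."""

    @abstractmethod
    def join_session(self, session_id: str, player_id: str) -> None:
        """Register a participant in a game session."""

    @abstractmethod
    def publish_update(self, entity_id: str, state: dict) -> None:
        """Send a local state change; the medium handles delivery and ordering."""

    @abstractmethod
    def subscribe(self, entity_id: str, on_update: Callable[[dict], None]) -> None:
        """Receive remote updates after the consistency algorithm has applied them."""


class CarRacingClient:
    """Game logic stays free of networking and consistency concerns."""

    def __init__(self, medium: SynchronizationMedium) -> None:
        self.medium = medium
        self.medium.join_session("race-1", "player-42")
        self.medium.subscribe("car-7", self.on_remote_car_update)

    def on_remote_car_update(self, state: dict) -> None:
        print("remote car position:", state.get("position"))
```

    With an interface of this kind, the consistency maintenance algorithm inside the medium could be changed without touching the game code.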

    TOPAZ: a tool kit for the assembly of transaction managers for non-standard applications

    Full text link
    'Advanced database applications', such as CAD/CAM, CASE, large AI applications or image and voice processing, place demands on transaction management which differ substantially from those in traditional database applications. In particular, there is a need to support 'enriched' data models (which include, for example, complex objects or version and configuration management), 'synergistic' cooperative work, and application- or user-supported consistency. Unfortunately, the demands are not only sophisticated but also diversified, which means that different application areas might even place contradictory demands on transaction management. This paper deals with these problems and offers a solution by introducing a flexible and adaptable tool kit approach for transaction management.

    Communication Centric Design in Complex Automotive Embedded Systems

    Get PDF
    Automotive embedded applications like the engine management system are composed of multiple functional components that are tightly coupled via numerous communication dependencies and intensive data sharing, while also having real-time requirements. In order to cope with complexity, especially in multi-core settings, various communication mechanisms are used to ensure data consistency and temporal determinism along functional cause-effect chains. However, existing timing analysis methods generally only support very basic communication models and need to be extended to handle the analysis of industry-grade problems, which involve more complex communication semantics. In this work, we give an overview of the communication semantics used in the automotive industry and the different constraints to be considered in the design process. We also propose a model transformation method to increase the expressiveness of current timing analysis methods, enabling them to work with more complex communication semantics. We demonstrate this transformation approach for concrete implementations of two communication semantics, namely implicit and LET communication. We discuss the impact on end-to-end latencies and communication overheads based on a full-blown engine management system.
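
    To make the two semantics concrete, the following sketch contrasts implicit communication (task-local copies made at task start and written back at task end) with LET communication (inputs read at the logical start of the period, outputs published only at its logical end). It is a simplified, assumed model written in Python for readability; real implementations would be generated C code on an automotive runtime.

```python
# Illustrative sketch (not the paper's implementation) of implicit and LET
# communication semantics for a single shared label accessed by a task.

shared_label = {"rpm": 0}          # label written and read by several tasks


def run_task_implicit(compute):
    """Implicit communication: copy-in at task start, copy-out at task end,
    so the job sees one consistent value for its whole execution."""
    local_copy = dict(shared_label)            # copy-in on activation
    result = compute(local_copy)               # job body works on the copy
    shared_label.update(result)                # copy-out on termination
    return result


class LetTask:
    """Logical Execution Time: inputs are read at the logical start of the
    period and outputs become visible only at the logical end, regardless of
    when the job actually finishes executing within the period."""

    def __init__(self, compute, period_ms):
        self.compute = compute
        self.period_ms = period_ms
        self._inputs = None
        self._pending_output = None

    def logical_start(self):
        self._inputs = dict(shared_label)      # deterministic read point

    def execute(self):
        self._pending_output = self.compute(self._inputs)

    def logical_end(self):
        shared_label.update(self._pending_output)  # deterministic write point
```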

    Space station advanced automation

    Get PDF
    In the development of a safe, productive, and maintainable space station, Automation and Robotics (A and R) has been identified as an enabling technology which will allow efficient operation at a reasonable cost. The Space Station Freedom (SSF) systems are very complex and interdependent. The usage of Advanced Automation (AA) will help restructure and integrate system status so that station and ground personnel can operate more efficiently. Using AA technology to augment system management functions requires a development model consisting of well-defined phases: evaluation, development, integration, and maintenance. The evaluation phase will consider system management functions against traditional solutions, implementation techniques, and requirements; the end result of this phase should be a well-developed concept along with a feasibility analysis. In the development phase the AA system will be developed in accordance with a traditional Life Cycle Model (LCM) modified for Knowledge Based System (KBS) applications. A way by which both knowledge bases and reasoning techniques can be reused to control costs is explained. During the integration phase the KBS software must be integrated with conventional software, then verified and validated. The Verification and Validation (V and V) techniques applicable to these KBSs are based on the ideas of consistency, minimal competency, and graph theory. The maintenance phase will be aided by having well-designed and documented KBS software.

    Extensible Signaling Framework for Decentralized Network Management Applications

    Get PDF
    The management of network infrastructures has become increasingly complex over time, which is mainly attributed to the introduction of new functionality to support emerging services and applications. To address this important issue, research efforts in the last few years have focused on developing Software-Defined Networking solutions. While initial work proposed centralized architectures, their scalability limitations have led researchers to investigate a distributed control plane, with controller placement algorithms and mechanisms for building a logically centralized network view being examples of the challenges addressed. A critical issue that has not been adequately addressed concerns the communication between distributed decision-making entities to ensure configuration consistency. To this end, this paper proposes a signaling framework that allows the exchange of information in distributed management and control scenarios. The benefits of the proposed framework are illustrated through a realistic network resource management use case. Based on simulation, we demonstrate the flexibility and extensibility of our solution in meeting the requirements of distributed decision-making processes.
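
    The sketch below shows the kind of exchange such a signaling framework might support between distributed decision-making entities: one controller announces a configuration change and a peer applies it only if it is newer than its local view. The message fields, names, and versioning rule are assumptions made for illustration, not the framework's actual protocol.

```python
# Hypothetical sketch of a signaling exchange between two controllers;
# message shape and ordering rule are illustrative only.
import json


def make_signal(sender: str, resource: str, action: str, version: int) -> bytes:
    """Serialize a configuration-change announcement for peer controllers."""
    return json.dumps({
        "sender": sender,
        "resource": resource,       # e.g. a link or queue being reconfigured
        "action": action,           # e.g. "reserve-bandwidth"
        "version": version,         # monotonically increasing for ordering
    }).encode()


class ControllerPeer:
    """Applies remote signals only if they are newer than the local view,
    keeping the distributed configuration consistent."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.versions: dict[str, int] = {}

    def on_signal(self, payload: bytes) -> bool:
        msg = json.loads(payload)
        if msg["version"] <= self.versions.get(msg["resource"], -1):
            return False                       # stale update, ignore it
        self.versions[msg["resource"]] = msg["version"]
        return True                            # accept and apply the change


peer = ControllerPeer("controller-B")
peer.on_signal(make_signal("controller-A", "link-3", "reserve-bandwidth", 1))
```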

    Semantics-based locking: from isolation to cooperation

    Full text link
    'Advanced database applications', such as CAD/CAM, CASE, large AI applications or image and voice processing, place demands on transaction management which differ substantially from those of traditional database applications. In particular, there is a need to support 'enriched' data models (which include, for example, complex objects or version and configuration management), 'synergistic' cooperative work, and application- or user-supported consistency. This paper deals with a subset of these problems. It develops a methodology for implementing semantics-based concurrency control on the basis of ordinary locking. More specifically, it is shown how conventional locking can be improved and refined step by step to finally reach our initial goal, namely a comprehensive support of synergistic cooperative work by the exploitation of application-specific semantics. In addition to the 'conventional' binding of locks to transactions, we consider the binding of locks to objects (object-related locks) and subjects (subject-related locks). Object-related locks can define persistent and adaptable access restrictions on objects. This permits, among other things, the modeling of different types of version models (time versions, version graphs) as well as library (standard) objects. Subject-related locks are bound to subjects (user, application, etc.) and can be used, among other things, to supervise or direct the transfer of objects between transactions.
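
    A small sketch of the three lock bindings described above follows: conventional transaction-bound locks, object-related locks that persist on an object (for example to freeze a version), and subject-related locks that hand an object over to a particular user or application. The lock modes and the compatibility rule are simplified assumptions, not the paper's actual protocol.

```python
# Toy sketch of transaction-, object-, and subject-related lock bindings;
# the real lock modes and rules are richer than this.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Lock:
    mode: str                             # e.g. "read" or "write"
    obj: Optional[str] = None             # object the lock protects
    transaction: Optional[str] = None     # conventional: released at commit
    subject: Optional[str] = None         # subject-related: follows a user/app


class LockTable:
    def __init__(self) -> None:
        self.locks: list[Lock] = []

    def lock_for_transaction(self, obj_id: str, transaction: str) -> None:
        """Conventional lock: held by one transaction until it ends."""
        self.locks.append(Lock(mode="write", obj=obj_id, transaction=transaction))

    def freeze_version(self, obj_id: str) -> None:
        """Object-related lock: keeps a version immutable even after the
        creating transaction commits (e.g. a node in a version graph)."""
        self.locks.append(Lock(mode="write", obj=obj_id))

    def hand_over(self, obj_id: str, subject: str) -> None:
        """Subject-related lock: directs the transfer of an object to a
        specific user or application in a cooperative workflow."""
        self.locks.append(Lock(mode="write", obj=obj_id, subject=subject))

    def can_write(self, obj_id: str, transaction: str, subject: str) -> bool:
        for lock in self.locks:
            if lock.obj != obj_id or lock.mode != "write":
                continue
            if lock.transaction is not None and lock.transaction != transaction:
                return False              # held by another transaction
            if lock.subject is not None and lock.subject != subject:
                return False              # handed over to another subject
            if lock.transaction is None and lock.subject is None:
                return False              # frozen version: nobody may write
        return True
```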

    Topologically Consistent Models for Efficient Big Geo-Spatio-Temporal Data Distribution

    Get PDF
    Geo-spatio-temporal topology models are likely to become a key concept for checking the consistency of 3D (spatial) and 4D (spatial + temporal) models for emerging GIS applications such as subsurface reservoir modelling or the simulation of energy and water supply of mega or smart cities. Furthermore, the data management for complex models consisting of big geo-spatial data is a challenge for GIS and geo-database research. General challenges, concepts, and techniques of big geo-spatial data management are presented. In this paper we introduce a sound mathematical approach for a topologically consistent geo-spatio-temporal model based on the concept of the incidence graph. We redesign DB4GeO, our service-based geo-spatio-temporal database architecture, on the way to the parallel management of massive geo-spatial data. Approaches for a new geo-spatio-temporal and object model of DB4GeO meeting the requirements of big geo-spatial data are discussed in detail. Finally, we give a conclusion and an outlook on our future research towards supporting the processing of geo-analytics and simulations in a parallel and distributed system environment.
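
    As an illustration of the incidence graph idea, the sketch below stores the cells of a model together with their dimensions and boundary relations, and checks one simple topological consistency rule: every incidence must connect a d-cell to a (d-1)-cell. The class and the check are assumptions made for illustration and are far simpler than DB4GeO's actual model.

```python
# Minimal sketch of an incidence graph over cells of a 3D model with a
# single consistency check; illustrative only, not DB4GeO's implementation.
from collections import defaultdict


class IncidenceGraph:
    """Nodes are cells (vertices, edges, faces, solids); an arc records that
    a d-dimensional cell is bounded by a (d-1)-dimensional cell."""

    def __init__(self) -> None:
        self.dimension: dict[str, int] = {}
        self.boundary: dict[str, set[str]] = defaultdict(set)

    def add_cell(self, cell_id: str, dim: int) -> None:
        self.dimension[cell_id] = dim

    def add_incidence(self, higher: str, lower: str) -> None:
        self.boundary[higher].add(lower)

    def is_consistent(self) -> bool:
        """Every incidence must connect a d-cell to a (d-1)-cell."""
        return all(
            self.dimension[lo] == self.dimension[hi] - 1
            for hi, lows in self.boundary.items()
            for lo in lows
        )


g = IncidenceGraph()
for cid, d in [("v1", 0), ("v2", 0), ("e1", 1), ("f1", 2)]:
    g.add_cell(cid, d)
g.add_incidence("e1", "v1")
g.add_incidence("e1", "v2")
g.add_incidence("f1", "e1")
print(g.is_consistent())   # True for this small example
```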

    Review of the main developments in the analytic hierarchy process

    Get PDF

    Dynamic Parameter Allocation in Parameter Servers

    Full text link
    To keep up with increasing dataset sizes and model complexity, distributed training has become a necessity for large machine learning tasks. Parameter servers ease the implementation of distributed parameter management, a key concern in distributed training, but can induce severe communication overhead. To reduce this overhead, distributed machine learning algorithms use techniques that increase parameter access locality (PAL), achieving up to linear speed-ups. We found, however, that existing parameter servers provide only limited support for PAL techniques and therefore prevent efficient training. In this paper, we explore whether and to what extent PAL techniques can be supported, and whether such support is beneficial. We propose to integrate dynamic parameter allocation into parameter servers, describe an efficient implementation of such a parameter server called Lapse, and experimentally compare its performance to existing parameter servers across a number of machine learning tasks. We found that Lapse provides near-linear scaling and can be orders of magnitude faster than existing parameter servers.
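
    The core idea of dynamic parameter allocation can be illustrated with the toy sketch below: each parameter is owned by exactly one node, and ownership can be relocated ahead of upcoming accesses so that those accesses become local. Method names and behavior are assumptions made for illustration and do not reflect Lapse's actual interface.

```python
# Toy sketch of dynamic parameter allocation: each parameter lives on exactly
# one node and can be relocated so that subsequent accesses avoid the network.
# Illustrative only, not Lapse's actual API.


class ParameterServer:
    def __init__(self) -> None:
        self.owner: dict[str, int] = {}     # parameter key -> owning node
        self.values: dict[str, float] = {}  # parameter key -> current value

    def allocate(self, key: str, node: int, value: float = 0.0) -> None:
        self.owner[key] = node
        self.values[key] = value

    def relocate(self, key: str, node: int) -> None:
        """PAL technique: move the parameter to the node about to use it."""
        self.owner[key] = node

    def pull(self, key: str, node: int) -> tuple[float, bool]:
        """Return the value and whether the access was local (no network hop)."""
        return self.values[key], self.owner[key] == node

    def push(self, key: str, node: int, delta: float) -> None:
        """Apply an update; local if the caller currently owns the parameter."""
        self.values[key] += delta


ps = ParameterServer()
ps.allocate("w_17", node=0)
ps.relocate("w_17", node=2)                 # move ahead of node 2's accesses
value, local = ps.pull("w_17", node=2)
print(local)                                # True: access served locally
```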