51 research outputs found

    Research and Design on Navigation Electronic Map System

    This paper puts forward a new definition on the basis of the original concept of the navigation electronic map, designs the structure of a navigation electronic map system that contains three parts: hardware equipment, a data system and a software system, and analyzes each part in detail; finally, it discusses the functional framework of the navigation electronic map.
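    As a purely illustrative picture of this three-part structure, the sketch below groups the subsystems in a small Python data class; the example components inside each part are assumptions, not details taken from the paper.

```python
from dataclasses import dataclass, field


@dataclass
class NavigationMapSystem:
    """Three-part structure of a navigation electronic map system (illustrative only)."""
    # Hardware equipment -- example devices; hypothetical, not taken from the paper
    hardware: list = field(default_factory=lambda: ["GPS receiver", "display unit", "storage"])
    # Data system -- the map data the software operates on (hypothetical examples)
    data_system: list = field(default_factory=lambda: ["road network", "points of interest"])
    # Software system -- functions built on top of the data (hypothetical examples)
    software: list = field(default_factory=lambda: ["map rendering", "route planning"])


print(NavigationMapSystem())
```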

    QoS prediction for dynamic reconfiguration of component based software systems

    No full text
    It is difficult to choose an appropriate reconfiguration approach to satisfy the Quality of Service (QoS) requirements of a software system if the properties of that approach are not known. This problem significantly restricts the application of dynamic reconfiguration approaches to mission-critical or non-stop systems, where QoS is a major performance indicator. This paper proposes a model to predict how the QoS of a running software system will be affected by dynamic reconfiguration and shows how it outperforms the existing methods in this area in three aspects. First, unlike existing simulation-based models, this prediction model is based on easily implemented mathematical functions. Second, compared with the time-consuming simulation approaches, QoS prediction using this model is achieved in a shorter timeframe. Third, unlike the existing approaches that are built on different platforms for individual scenarios, this model generalizes QoS prediction onto a single virtual platform modelled by abstract hardware and software conditions. The proposed model has been verified by reconfiguration simulation to a reasonable level of accuracy, and thus the viability and safety of the model have been confirmed.
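    To give a rough sense of what a function-based (rather than simulation-based) prediction can look like, the sketch below estimates the mean response time of a system over a window that contains one reconfiguration. The formula, parameter names and numbers are assumptions made for this example, not the model proposed in the paper.

```python
def predict_response_time(base_ms: float,
                          blocked_fraction: float,
                          reconfig_overhead_ms: float,
                          reconfig_duration_s: float,
                          window_s: float) -> float:
    """Estimate mean response time (ms) over an observation window that
    contains a reconfiguration of the given duration.

    base_ms              -- response time when no reconfiguration is running
    blocked_fraction     -- share of requests that hit components being replaced
    reconfig_overhead_ms -- extra latency added to those blocked requests
    reconfig_duration_s / window_s -- how much of the window is affected
    """
    affected_share = min(reconfig_duration_s / window_s, 1.0)
    degraded_ms = base_ms + blocked_fraction * reconfig_overhead_ms
    # Weighted average of the degraded and normal periods within the window.
    return affected_share * degraded_ms + (1.0 - affected_share) * base_ms


# Example: 20 ms baseline, 30% of requests blocked for an extra 50 ms,
# during a 5 s reconfiguration inside a 60 s observation window.
print(predict_response_time(20.0, 0.3, 50.0, 5.0, 60.0))
```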

    The optimization potential of volunteer computing for compute or data intensive applications

    No full text
    The poor scalability of Volunteer Computing (VC) hinders its application because a tremendous number of volunteers are needed to achieve the same performance as that of a traditional HPC cluster. This paper explores the optimization potential for improving the scalability of VC from three points of view. First, the heterogeneity of volunteers' compute capacity has been chosen from the whole spectrum of impact factors to study the optimization potential. Second, a DHT (Distributed Hash Table) based supporting platform and MapReduce are fused together as the discussion context. Third, transformed versions of work stealing have been proposed to optimize VC for both compute- and data-intensive applications. On this basis, the proposed optimization strategies are evaluated in three steps. First, a proof-of-concept prototype is implemented to support the representation and testing of the proposed optimization strategies. Second, virtual tasks are composed to apply a certain compute- or data-intensity on the running MapReduce. Third, the competence of VC, running the original equity strategy and the optimization strategies, is tested against the virtual tasks. The evaluation results have confirmed that the impaired performance is improved by about 24.5% for compute-intensive applications and by about 19.5% for data-intensive applications.
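    The work-stealing idea can be illustrated with a minimal sketch in which an idle volunteer takes a task from the tail of the most loaded peer's queue. This is a generic work-stealing outline under assumed data structures, not the transformed versions proposed in the paper.

```python
from collections import deque


class Volunteer:
    def __init__(self, name: str, speed: float):
        self.name = name
        self.speed = speed       # relative compute capacity
        self.queue = deque()     # local task queue; the owner pops from the left

    def steal_from(self, victim: "Volunteer"):
        """Take one task from the tail of a busier volunteer's queue."""
        return victim.queue.pop() if victim.queue else None


def balance(volunteers):
    """One balancing round: every idle volunteer steals from the most loaded peer."""
    for v in volunteers:
        if not v.queue:
            victim = max(volunteers, key=lambda w: len(w.queue))
            if victim is not v:
                task = v.steal_from(victim)
                if task is not None:
                    v.queue.append(task)


# Example: a fast, idle volunteer steals work from a slow, overloaded one.
slow = Volunteer("slow", speed=1.0)
fast = Volunteer("fast", speed=6.0)
slow.queue.extend(range(10))
balance([slow, fast])
print(len(slow.queue), len(fast.queue))   # -> 9 1
```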

    The competence of volunteer computing for MapReduce big data applications

    No full text
    Little off-the-shelf research can be found in the current literature about how competently Volunteer Computing (VC) performs big data applications. This paper explores whether VC scales to a large number of volunteers when they commit churn, and how large VC needs to scale in order to achieve the same performance as that of High Performance Computing (HPC) or a computing grid for a given big data problem. To achieve this goal, the paper proposes a unification model to support the construction of virtual big data problems, virtual HPC clusters, computing grids and VC overlays on the same platform. The model is able to compare the competence of those computing facilities in terms of speedup versus the number of computing nodes or volunteers for solving a big data problem. The evaluation results have demonstrated that all the computing facilities scale for the big data problem, with a computing grid or a VC overlay needing more, or many more, computing nodes or volunteers to achieve the same speedup as that of an HPC cluster. This paper has confirmed that VC is competent for big data problems as long as a large number of volunteers is available from the Internet. © Springer Nature Singapore Pte Ltd. 2018
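    A back-of-the-envelope calculation illustrates the kind of speedup-versus-nodes comparison the unification model supports. The efficiency figures below are assumptions for the example only: volunteers are taken to deliver only a fraction of a dedicated node's useful work because of heterogeneity and churn.

```python
import math


def speedup(nodes: int, efficiency: float) -> float:
    """Idealised speedup of an embarrassingly parallel job: each node
    contributes `efficiency` of one dedicated node's useful work."""
    return nodes * efficiency


def volunteers_needed(target_speedup: float, volunteer_efficiency: float) -> int:
    """Smallest number of volunteers whose combined speedup matches the target."""
    return math.ceil(target_speedup / volunteer_efficiency)


# Example: a 100-node HPC cluster at ~95% efficiency vs volunteers delivering
# ~30% effective efficiency (hypothetical figures covering churn and slow hosts).
hpc_speedup = speedup(100, 0.95)                          # 95.0
print(hpc_speedup, volunteers_needed(hpc_speedup, 0.30))  # 95.0 317
```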

    Achieving dynamic workload balancing for P2P Volunteer Computing

    No full text
    This paper argues that the decentralization feature of a Peer-to-Peer (P2P) overlay is more suitable for Volunteer Computing (VC) than the centralized master/worker structure in terms of performance bottleneck and single point of failure. Based on the P2P overlay Chord, this paper focuses on the design of a workload balancing protocol to coordinate VC. The goal of the protocol is to maximize the overall speed-up against the heterogeneity and churn of volunteers. The roles of a facilitator and volunteers (peers) are defined, and the key components are designed, including job, result and container. Distributed workload balancing algorithms are proposed to direct the workflow of the key roles for joining and leaving, job search and distribution, and result collection. Criteria and metrics are proposed to evaluate the algorithms with regard to their effectiveness against churn and the overall speed-up against the number of volunteers. Simulations were devised and completed on the N-Queens Problem to measure these qualities. The conclusions confirmed that the results were on the right track.
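    The N-Queens Problem used in the simulations is embarrassingly parallel, which is what makes it a convenient workload for measuring speed-up. A common decomposition, sketched below, fixes the column of the queen in the first row and counts the completions of each partial board as an independent job; whether the paper distributes the work exactly this way is not stated, so treat it as an assumed example.

```python
def count_completions(n: int, cols: tuple) -> int:
    """Count full N-Queens solutions extending the partial placement `cols`,
    where cols[i] is the column of the queen already placed in row i."""
    row = len(cols)
    if row == n:
        return 1
    total = 0
    for c in range(n):
        # Keep the new queen off every placed queen's column and diagonals.
        if all(c != pc and abs(c - pc) != row - pr for pr, pc in enumerate(cols)):
            total += count_completions(n, cols + (c,))
    return total


# One independent job per column choice in the first row; each job could be
# handed to a different volunteer and the partial counts simply summed.
n = 8
jobs = [(c,) for c in range(n)]
print(sum(count_completions(n, job) for job in jobs))   # -> 92 solutions for 8 queens
```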

    The scalability of volunteer computing for MapReduce big data applications

    No full text
    Volunteer Computing (VC) has been successfully applied to many compute-intensive scientific projects to solve embarrassingly parallel computing problems. There exist some efforts in the current literature to apply VC to data-intensive (i.e. big data) applications, but none of them has confirmed the scalability of VC for such applications in opportunistic volunteer environments. This paper chooses MapReduce as a typical computing paradigm for coping with big data processing in distributed environments and models it on a DHT (Distributed Hash Table) P2P overlay to bring this computing paradigm into VC environments. The modelling results in a distributed prototype implementation and a simulator. The experimental evaluation of this paper has confirmed the scalability of VC for MapReduce big data (up to 10 TB) applications in cases where the number of volunteers is fairly large (up to 10K), they commit high churn rates (up to 90%), and they have heterogeneous compute capacities (the fastest is 6 times the slowest) and bandwidths (the fastest is up to 75 times the slowest).
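    To make the MapReduce-on-DHT idea concrete, the sketch below runs a tiny word-count job and assigns each intermediate key to a "node" by hashing it, much as a DHT overlay keys data to peers. The hash-based partitioning and the word-count job are generic illustrations, not details of the paper's prototype or simulator.

```python
from collections import defaultdict
import hashlib

NUM_NODES = 4   # pretend-DHT: each intermediate key hashes to one of these "peers"


def node_for(key: str) -> int:
    """Consistent key-to-node mapping, standing in for a DHT lookup."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % NUM_NODES


def map_phase(chunks):
    """Map step of a word-count job: emit (word, 1) for every word in every chunk."""
    return [(word, 1) for chunk in chunks for word in chunk.split()]


def shuffle_phase(pairs):
    """Group values by key and route each key to the node that owns it."""
    per_node = defaultdict(lambda: defaultdict(list))
    for key, value in pairs:
        per_node[node_for(key)][key].append(value)
    return per_node


def reduce_phase(per_node):
    """Reduce step: each owning node sums the values for its keys."""
    return {key: sum(values)
            for groups in per_node.values()
            for key, values in groups.items()}


chunks = ["volunteer computing scales", "volunteer computing for big data"]
print(reduce_phase(shuffle_phase(map_phase(chunks))))
```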
    • …