36 research outputs found

    DynaQoS©-RDF : a best effort for QoS-assurance of dynamic reconfiguration of dataflow systems

    No full text
    The significance of QoS-assurance is being increasingly recognized by both the research and wider communities. In the latter case, this recognition is driven by the increasing adoption by business of 24/7 software systems and the QoS decline that end-users experience when these systems undergo dynamic reconfiguration. At the beginning of 2006, the author set up a project named DynaQoS©-RDF (QoS-assurance of Dynamic Reconfiguration on Reconfigurable Dataflow Model), sponsored by CQ University Australia. Over the last two years, the author has investigated QoS-assurance for dataflow systems, which are characterized by the pipe-and-filter architecture. The research has addressed issues such as the global consistency of protocol transactions, the necessary and sufficient conditions for QoS-assurance, execution overhead control for reconfiguration, state transfer for stateful components, and the design of a QoS benchmark. This paper discusses these research issues. It also proposes various QoS strategies and presents a benchmark for evaluating QoS-assurance strategies for the dynamic reconfiguration of dataflow systems. The benchmark is implemented on the DynaQoS©-RDF v1.0 software platform. Various strategies, including those from the research literature, are benchmarked, and the best efforts for QoS-assurance are identified.
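
    The pipe-and-filter setting above is easy to picture with a toy sketch. The following minimal illustration swaps one filter of a running pipeline while a lock serializes the swap against in-flight items, which is the kind of consistency a reconfiguration protocol must guarantee; the names (Pipeline, replace_filter) are assumptions for illustration, not the DynaQoS©-RDF platform itself.

        import threading

        class Pipeline:
            def __init__(self, filters):
                self._filters = list(filters)   # ordered filter functions
                self._lock = threading.Lock()   # guards reconfiguration

            def process(self, item):
                with self._lock:                # no item sees a half-made swap
                    for f in self._filters:
                        item = f(item)
                return item

            def replace_filter(self, index, new_filter):
                with self._lock:                # atomic dynamic reconfiguration
                    self._filters[index] = new_filter

        pipe = Pipeline([lambda x: x + 1, lambda x: x * 2])
        print(pipe.process(3))                    # (3 + 1) * 2 = 8
        pipe.replace_filter(1, lambda x: x * 10)  # swap while "running"
        print(pipe.process(3))                    # (3 + 1) * 10 = 40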

    Evaluating the impacts of dynamic reconfiguration on the QoS of running systems

    No full text
    A major challenge in dynamic reconfiguration of a running system is to understand in advance the impact on the system's Quality of Service (QoS). For some systems, any unexpected change to QoS is unacceptable. In others, the possibility of dissatisfaction increases due to the impaired performance of the running system or unpredictable errors in the resulting system. In general, it is difficult to choose a reasonable reconfiguration approach to satisfy a particular domain application. Our investigation of this issue for dynamic approaches is four-fold. First, we define a set of QoS characteristics to identify the evaluation criteria. Second, we design a set of abstract reconfiguration strategies, bringing existing and new approaches into a unified evaluation context. Third, we design a reconfiguration benchmark to expose a rich set of QoS problems. Finally, we test the reconfiguration strategies against the benchmark and evaluate the test results. The analysis of the acquired results helps to understand dynamic reconfiguration approaches in terms of their impact on the QoS of running systems and to identify possible enhancements toward better QoS capability.
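
    To make the first step (defining QoS characteristics as evaluation criteria) concrete, here is a hypothetical sketch of the kind of metrics such a benchmark might compute from request logs gathered while a reconfiguration runs. The metric names (blackout, lost requests, added delay) are illustrative, not the paper's exact definitions.

        from dataclasses import dataclass

        @dataclass
        class QoSReport:
            blackout_ms: float      # longest gap with no completed requests
            lost_requests: int      # requests that never completed
            added_delay_ms: float   # mean latency increase over the baseline

        def evaluate(completion_times, latencies, issued, baseline_latency):
            gaps = [b - a for a, b in zip(completion_times, completion_times[1:])]
            return QoSReport(
                blackout_ms=max(gaps, default=0.0),
                lost_requests=issued - len(completion_times),
                added_delay_ms=sum(latencies) / len(latencies) - baseline_latency,
            )

        print(evaluate(
            completion_times=[10.0, 12.0, 250.0, 252.0],  # ms since test start
            latencies=[9.0, 9.5, 180.0, 11.0],            # per-request latency
            issued=5,                                     # one request was lost
            baseline_latency=9.0,
        ))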

    Meta task graph for volunteer computing

    No full text
    This paper presents the Meta Task Graph (MTG), an intermediate model for effectively expressing parallelism between various problems and the efficient algorithms that solve them, for volunteer computing. One goal of this work is to develop effective techniques for dealing with the heterogeneity of potential applications and system architectures. Another goal of MTG is to provide a tuple-matching mechanism that supports fast synchronization of data dependences and crash tolerance. Ongoing state collection of the underlying system is also a goal: for scheduling to be optimal, various pieces of information are needed, and this feature provides full optimisation potential for the design of task assignment and scheduling for load balancing in the dynamic environment of volunteer computing. To demonstrate MTG's applicability and flexibility, this paper presents a real-world problem and its MTG implementation on our simulation platform GNet. Future research on MTG is outlined at the end of this paper.
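
    Tuple matching of the kind mentioned above is in the Linda tradition: a task blocks until a tuple matching its data dependence appears, and an unconsumed task tuple can simply be re-posted if a volunteer crashes. The toy tuple space below sketches that idea; the TupleSpace API is an assumption for illustration, not MTG's actual interface.

        import threading

        class TupleSpace:
            def __init__(self):
                self._tuples = []
                self._cv = threading.Condition()

            def put(self, tup):
                with self._cv:
                    self._tuples.append(tup)
                    self._cv.notify_all()   # wake tasks waiting on a match

            def take(self, pattern):
                """pattern: a tuple in which None matches any field."""
                def matches(t):
                    return len(t) == len(pattern) and all(
                        p is None or p == v for p, v in zip(pattern, t))
                with self._cv:
                    while True:
                        for t in self._tuples:
                            if matches(t):
                                self._tuples.remove(t)
                                return t
                        self._cv.wait()     # block until the dependence is met

        ts = TupleSpace()
        ts.put(("result", "task-42", 3.14))
        print(ts.take(("result", "task-42", None)))  # ('result', 'task-42', 3.14)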

    GNet : a CPN-based simulation platform of volunteer computing

    No full text
    In the absence of simulation tools, developing and evaluating a volunteer computing system in the real web-based environment, which is highly dynamic and uncertain, carries risks in terms of costs and development cycles. Based on the expressiveness of concurrent events in Coloured Petri Nets (CPNs) and the synchronous channels of Renew, a reference-net formalism of CPNs from the University of Hamburg, we have designed and implemented GNet, a general simulation and evaluation platform for volunteer computing. The current version, GNet 1.0, makes two main contributions: fast prototyping and evaluation of resource-management strategies, and easy migration of the evaluated work into the development of real systems. These features derive from the three fully implemented design goals of GNet 1.0: scalability, applicability, and adaptive parallelism with fault tolerance. This paper presents the design methodology of GNet and directions for future work on the GNet 1.x and GNet 2.x versions.
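
    For readers unfamiliar with CPNs, the toy step below hints at how a CPN-based simulator can model concurrent events: a transition fires only when every input place holds a token whose colour satisfies the guard, consuming the input tokens and producing output tokens. This is a generic CPN illustration, not Renew's or GNet's actual API.

        def enabled(places, in_arcs, guard):
            """in_arcs: list of (place, var); returns a binding or None."""
            binding = {}
            for place, var in in_arcs:
                if not places[place]:
                    return None                   # an input place is empty
                binding[var] = places[place][0]   # naive: bind the first token
            return binding if guard(binding) else None

        def fire(places, in_arcs, out_arcs, guard):
            b = enabled(places, in_arcs, guard)
            if b is None:
                return False
            for place, var in in_arcs:            # consume input tokens
                places[place].remove(b[var])
            for place, expr in out_arcs:          # produce output tokens
                places[place].append(expr(b))
            return True

        places = {"idle_workers": ["w1"], "tasks": ["t7"], "running": []}
        fire(places,
             in_arcs=[("idle_workers", "w"), ("tasks", "t")],
             out_arcs=[("running", lambda b: (b["w"], b["t"]))],
             guard=lambda b: True)
        print(places)  # {'idle_workers': [], 'tasks': [], 'running': [('w1', 't7')]}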

    QoS assurance for dynamic reconfiguration of component-based software systems

    No full text
    A major challenge of dynamic reconfiguration is Quality of Service (QoS) assurance, which aims to reduce application disruption to a minimum during the system's transformation. However, this problem has not been well studied. This paper investigates the problem for component-based software systems from three points of view. First, the whole spectrum of QoS characteristics is defined. Second, the logical and physical requirements for QoS characteristics are analyzed and solutions to achieve them are proposed. Third, prior work is classified by QoS characteristics and then realized by abstract reconfiguration strategies. On this basis, a quantitative evaluation of the QoS assurance abilities of existing work and our own approach is conducted in three steps. First, a proof-of-concept prototype called the reconfigurable component model is implemented to support the representation and testing of the reconfiguration strategies. Second, a reconfiguration benchmark is proposed to expose the whole spectrum of QoS problems. Third, each reconfiguration strategy is tested against the benchmark and the testing results are evaluated. The most important conclusion from our investigation is that the classified QoS characteristics can be fully achieved under some acceptable constraints.
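
    One classic scheme for replacing a stateful component while preserving its behaviour is quiescence with state transfer: block new calls, wait until no call is in flight, copy the old component's state into the new one, then resume. The sketch below is a hedged illustration of that scheme with hypothetical names (Container, replace), not the paper's reconfigurable component model.

        import threading

        class Counter:
            def __init__(self, state=0):
                self.state = state
            def handle(self, n):
                self.state += n
                return self.state

        class Container:
            def __init__(self, component):
                self._component = component
                self._lock = threading.Lock()

            def call(self, n):
                with self._lock:
                    return self._component.handle(n)

            def replace(self, new_cls):
                with self._lock:   # quiescence: no call is in flight here
                    self._component = new_cls(state=self._component.state)

        c = Container(Counter())
        c.call(5)
        c.replace(Counter)         # swap implementations, transferring state
        print(c.call(1))           # 6: the state survived the swap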

    Applying an evolutionary algorithm to web search: The methodology of Evagent

    No full text
    An evolutionary algorithm is introduced to find authoritative resources on the Web. The problem of Web search is considered as an optimisation problem within hyperlinked space. We aim to find information that is both relevant and recent, so as to cope with the dynamic nature of the Web. Theoretical studies have been made of the problem-specific search space, fitness functions and genetic operators. The search space is constructed in the direction of the optimum, driven by the reproduction operator with good hubs as a clue. Fitness functions combine text-based and link-based analysis. The (μ+λ) evolution strategy implements the elitist selection scheme. The mutation operator helps to prevent the search from being trapped in local optima by introducing multiple domains. Experiments have been performed to study the algorithm's performance. The algorithm has been implemented as a kernel component of an intelligent Web agent, Evagent.
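
    A generic (μ+λ) evolution strategy with elitist selection looks roughly as follows; the fitness function here is a numeric stand-in, not Evagent's combined text- and link-based analysis.

        import random

        def mu_plus_lambda(fitness, init, mutate, mu=5, lam=7, generations=50):
            pop = [init() for _ in range(mu)]
            for _ in range(generations):
                offspring = [mutate(random.choice(pop)) for _ in range(lam)]
                # (mu+lambda) elitism: parents compete with offspring,
                # and only the best mu individuals survive
                pop = sorted(pop + offspring, key=fitness, reverse=True)[:mu]
            return pop[0]

        best = mu_plus_lambda(
            fitness=lambda x: -abs(x - 3.0),      # toy: maximise closeness to 3
            init=lambda: random.uniform(-10, 10),
            mutate=lambda x: x + random.gauss(0, 0.5),
        )
        print(round(best, 2))                     # converges near 3.0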

    The scalability of volunteer computing for MapReduce big data applications

    No full text
    Volunteer Computing (VC) has been successfully applied to many compute-intensive scientific projects to solve embarrassingly parallel computing problems. There exist some efforts in the current literature to apply VC to data-intensive (i.e. big data) applications, but none of them has confirmed the scalability of VC for such applications in opportunistic volunteer environments. This paper chooses MapReduce as a typical computing paradigm for coping with big data processing in distributed environments and models it on a DHT (Distributed Hash Table) P2P overlay to bring this computing paradigm into VC environments. The modelling results in a distributed prototype implementation and a simulator. The experimental evaluation of this paper confirms the scalability of VC for MapReduce big data applications (up to 10 TB) in cases where the number of volunteers is fairly large (up to 10K), the volunteers commit high churn rates (up to 90%), and they have heterogeneous compute capacities (the fastest is 6 times the slowest) and bandwidths (the fastest is up to 75 times the slowest).
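
    One plausible way to place MapReduce keys on a DHT overlay is consistent hashing, which keeps each key's owner stable as volunteers join and leave (churn). The sketch below illustrates the general idea only; the node names and ring layout are assumptions, not the paper's prototype.

        import bisect
        import hashlib

        def hid(s):
            """Hash a string onto the DHT identifier ring."""
            return int(hashlib.sha1(s.encode()).hexdigest(), 16)

        class Ring:
            def __init__(self, nodes):
                self._ring = sorted((hid(n), n) for n in nodes)

            def owner(self, key):
                # first node clockwise of the key's position on the ring
                i = bisect.bisect(self._ring, (hid(key),)) % len(self._ring)
                return self._ring[i][1]

        ring = Ring(["vol-a", "vol-b", "vol-c"])
        for word in ["map", "reduce", "shuffle"]:
            print(word, "->", ring.owner(word))  # reducer chosen by key hash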

    The optimization potential of volunteer computing for compute or data intensive applications

    No full text
    The poor scalability of Volunteer Computing (VC) hinders its application, because a tremendous number of volunteers are needed to achieve the same performance as a traditional HPC system. This paper explores the optimization potential to improve the scalability of VC from three points of view. First, the heterogeneity of volunteers' compute capacity is chosen from the whole spectrum of impact factors as the subject of the optimization study. Second, a DHT (Distributed Hash Table) based supporting platform and MapReduce are fused together as the discussion context. Third, transformed versions of work stealing are proposed to optimize VC for both compute- and data-intensive applications. On this basis, the proposed optimization strategies are evaluated in three steps. First, a proof-of-concept prototype is implemented to support the representation and testing of the proposed optimization strategies. Second, virtual tasks are composed to apply a certain compute or data intensity to the running MapReduce. Third, the competence of VC, running the original equity strategy and the optimization strategies, is tested against the virtual tasks. The evaluation results confirm that the impaired performance is improved by about 24.5% for compute-intensive applications and by about 19.5% for data-intensive applications.
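
    Work stealing, before any of the paper's transformations, can be sketched in a few lines: a volunteer that drains its own queue takes work from the opposite end of a loaded peer's deque, so fast volunteers are never idle while slow ones hold a backlog. The "steal from the most loaded peer" rule below is one plausible reading, not the paper's exact strategy.

        from collections import deque

        def next_task(queues, me):
            if queues[me]:
                return queues[me].popleft()       # own work first, oldest end
            victims = [q for w, q in queues.items() if w != me and q]
            if not victims:
                return None                       # nothing left anywhere
            victim = max(victims, key=len)        # steal from most loaded peer
            return victim.pop()                   # take from the opposite end

        queues = {"fast": deque(), "slow": deque(["t1", "t2", "t3", "t4"])}
        print(next_task(queues, "fast"))          # 't4' stolen from "slow"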

    Recognizing the capacities of dynamic reconfiguration for the QoS assurance of running systems in concurrent and parallel environments

    No full text
    Recognizing the impact of reconfiguration on the QoS of running systems is especially necessary for choosing an appropriate approach to the dynamic evolution of mission-critical or non-stop business systems. The rationale is that the impaired QoS caused by inappropriate use of dynamic approaches is unacceptable for such running systems. Predicting this impact in advance poses a two-fold challenge. First, a unified benchmark is necessary to expose the QoS problems of existing dynamic approaches. Second, an abstract representation is necessary to provide a basis for modeling and comparing the QoS of existing and new dynamic reconfiguration approaches. Our previous work [8] successfully evaluated the QoS assurance capabilities of existing dynamic approaches and provided guidance on the appropriate use of particular approaches. This paper reinvestigates our evaluations, extending them into concurrent and parallel environments by abstracting hardware and software conditions to design an evaluation context. We report the new evaluation results and conclude with updated impact analysis and guidance.

    Dynamic reconfiguration of distributed data flow systems

    No full text
    Although many approaches to dynamic reconfiguration have been proposed, how the impact of reconfiguration on system QoS can be controlled has not been addressed adequately so far. In this paper, we propose an approach to the dynamic reconfiguration of distributed data flow systems. It extends our previous work on dynamic reconfiguration with QoS management into distributed environments. Our approach has three features. First, it uses version control, flow trace, and reconfiguration scheduling to avoid logical and physical impact on system QoS. Second, it plans a reconfiguration fully automatically. Third, it executes a reconfiguration through a decentralized protocol to reduce reconfiguration time in distributed environments. We apply our approach to the reconfiguration of a real-world application, the Data Encryption and Digital Signature System. Experimental results show that our approach has significant advantages in impact control compared with other existing approaches.
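
    Version control combined with flow tracing can be pictured as version-tagged routing: each message carries the configuration version it entered under, so in-flight flows complete on the old filters while new flows use the new ones. The sketch below is purely illustrative of this idea, not the paper's decentralized protocol.

        class VersionedStage:
            def __init__(self):
                self.impls = {1: lambda x: x + 1}    # version -> filter function
                self.current = 1

            def upgrade(self, new_impl):
                self.current += 1
                self.impls[self.current] = new_impl  # old version kept for drains

            def process(self, msg):
                version, payload = msg               # messages are version-tagged
                return version, self.impls[version](payload)

        stage = VersionedStage()
        in_flight = (1, 10)                          # entered before the upgrade
        stage.upgrade(lambda x: x * 100)
        new_msg = (stage.current, 10)
        print(stage.process(in_flight))              # (1, 11): old semantics
        print(stage.process(new_msg))                # (2, 1000): new semantics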