
    Task allocation in a distributed computing system

    A conceptual framework for task allocation in distributed systems is examined. Application and computing-system parameters critical to task allocation decisions are discussed. The task allocation techniques addressed focus on balancing the load across the system's processors, with the goal of equalizing the computing load among the processing elements. Examples of system performance are presented for specific applications. Both static and dynamic task allocation are considered, and system performance is evaluated under the different allocation methodologies.
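
    As an illustration of the static case described above (a generic heuristic, not the paper's own method), the sketch below assigns tasks to processors with a longest-processing-time-first greedy rule so that the computing load is roughly equalized. The task costs and processor count are hypothetical.

```python
import heapq

def static_allocate(task_costs, num_processors):
    """Greedy LPT heuristic: give each task to the currently least-loaded processor.

    task_costs: list of estimated execution costs (hypothetical units).
    Returns a mapping processor index -> list of task indices.
    """
    # Min-heap of (current_load, processor_index)
    loads = [(0.0, p) for p in range(num_processors)]
    heapq.heapify(loads)
    assignment = {p: [] for p in range(num_processors)}

    # Placing the largest tasks first tends to balance the final loads.
    for task, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
        load, proc = heapq.heappop(loads)
        assignment[proc].append(task)
        heapq.heappush(loads, (load + cost, proc))
    return assignment

if __name__ == "__main__":
    print(static_allocate([5.0, 3.0, 8.0, 2.0, 7.0, 4.0], num_processors=3))
```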

    Using Dedicated and Opportunistic Networks in Synergy for a Cost-effective Distributed Stream Processing Platform

    This paper presents a case for exploiting the synergy of dedicated and opportunistic network resources in a distributed hosting platform for data stream processing applications. Our previous studies have demonstrated the benefits of combining dedicated reliable resources with opportunistic resources for high-throughput computing applications, where timely allocation of the processing units is the primary concern. Since distributed stream processing applications demand a large volume of data transmission between the processing sites at a consistent rate, adequate control over the network resources is important to assure a steady flow of processing. In this paper, we propose a system model for the hybrid hosting platform in which stream processing servers installed at distributed sites are interconnected by a combination of dedicated links and the public Internet. Decentralized algorithms have been developed for allocating the two classes of network resources among the competing tasks, with the objectives of higher task throughput and better utilization of expensive dedicated resources. Results from an extensive simulation study show that, with proper management, systems exploiting the synergy of dedicated and opportunistic resources yield considerably higher task throughput, and thus a higher return on investment, than systems using only expensive dedicated resources.
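
    A minimal sketch of the general idea of preferring expensive dedicated capacity and spilling over to opportunistic (public Internet) links once it is exhausted. The capacities, bandwidth demands, and the greedy admission rule are illustrative assumptions, not the paper's decentralized algorithms.

```python
def place_streams(demands, dedicated_capacity, opportunistic_capacity):
    """Greedily admit stream-processing tasks onto two classes of links.

    demands: dict task_id -> required bandwidth (hypothetical units).
    Returns dict task_id -> "dedicated", "opportunistic", or "rejected".
    """
    placement = {}
    ded_left, opp_left = dedicated_capacity, opportunistic_capacity

    # Admit the largest consumers onto dedicated links first so the
    # expensive resource is kept busy; smaller flows spill over.
    for task, bw in sorted(demands.items(), key=lambda kv: -kv[1]):
        if bw <= ded_left:
            placement[task] = "dedicated"
            ded_left -= bw
        elif bw <= opp_left:
            placement[task] = "opportunistic"
            opp_left -= bw
        else:
            placement[task] = "rejected"
    return placement

if __name__ == "__main__":
    print(place_streams({"s1": 40, "s2": 25, "s3": 10, "s4": 50},
                        dedicated_capacity=60, opportunistic_capacity=45))
```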

    Distributed and Centralized Task Allocation: When and Where to Use Them

    Self-organisation is frequently advocated as the solution for managing large, dynamic systems. Distributed algorithms are implicitly designed for infinitely large problems, while small systems are regarded as being controllable using traditional, centralised approaches. Many real-world systems, however, do not fit conveniently into these "small" or "large" categories, resulting in a range of cases where the optimal solution is ambiguous. This difficulty is exacerbated by enthusiasts of either approach constructing problems that suit their preferred control architecture. We address this ambiguity by building an abstract model of task allocation in a community of specialised agents. We are inspired by the problem of work distribution in distributed satellite systems, but the model is also relevant to resource allocation problems in distributed robotics, autonomic computing and wireless sensor networks. We compare the behaviour of a self-organising, market-based task allocation strategy to a classical approach that uses a central controller with global knowledge. The objective is not to prove one mechanism inherently superior to the other; instead we are interested in the regions of problem space where each of them dominates. Simulation is used to explore the trade-off between energy consumption and robustness in a system of intermediate size, with fixed communication costs and varying rates of component failure. We identify boundaries between regions in the parameter space where one or the other architecture will be favoured. This allows us to derive guidelines for system designers, thus contributing to the development of a disciplined approach to controlling distributed systems using self-organising mechanisms.
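
    To make the contrast concrete, here is a toy version of the two strategies compared above: a central allocator with global knowledge versus a market-style round in which each agent bids its own locally estimated cost. The agent costs, the noise term, and the single-round auction are illustrative assumptions, not the model from the paper.

```python
import random

def centralised_allocate(tasks, agent_costs):
    """Central controller with global knowledge: pick the cheapest agent per task."""
    return {t: min(agent_costs, key=lambda a: agent_costs[a][t]) for t in tasks}

def market_allocate(tasks, agent_costs, noise=0.1):
    """Market-based round: each agent bids its locally estimated (noisy) cost
    and every task goes to the lowest bidder; no global state is consulted."""
    allocation = {}
    for t in tasks:
        bids = {a: costs[t] * (1 + random.uniform(-noise, noise))
                for a, costs in agent_costs.items()}
        allocation[t] = min(bids, key=bids.get)
    return allocation

if __name__ == "__main__":
    costs = {"agent1": {"t1": 3.0, "t2": 5.0},
             "agent2": {"t1": 4.0, "t2": 2.0}}
    print(centralised_allocate(["t1", "t2"], costs))
    print(market_allocate(["t1", "t2"], costs))
```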

    Simulation model of load balancing in distributed computing systems

    The availability of high-performance computing, high-speed data transfer over networks, and the widespread use of design and pre-production software in mechanical engineering mean that both large industrial enterprises and small engineering companies now deploy complex computer systems to solve production and management tasks efficiently. Such systems are generally built on top of distributed heterogeneous computer systems. The analytical problems solved by these systems are the central research models, but the system-wide problems of efficiently distributing (balancing) the computational load and of placing input, intermediate and output databases are no less important. The main tasks of such a balancing system are monitoring the load and condition of compute nodes and selecting the node to which a user's request is routed according to a predetermined algorithm. Load balancing is one of the most widely used methods for increasing the productivity of distributed computing systems through optimal allocation of tasks among the system's nodes. The development of methods and algorithms for computing an optimal schedule in a distributed system whose infrastructure changes dynamically is therefore an important task.
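
    The node-selection step mentioned above could look like the following sketch: nodes report CPU load and queue length, and a dispatcher routes an incoming request to the node with the lowest weighted score. The metrics, weights, and overload threshold are illustrative assumptions rather than the paper's simulation model.

```python
from dataclasses import dataclass

@dataclass
class NodeStatus:
    name: str
    cpu_load: float      # fraction in [0, 1], taken from monitoring
    queue_length: int    # pending requests

def select_node(nodes, cpu_weight=0.7, queue_weight=0.3, overload=0.95):
    """Pick the node with the lowest combined load score.

    Nodes whose CPU load exceeds the overload threshold are skipped;
    returns None if every node is overloaded.
    """
    candidates = [n for n in nodes if n.cpu_load < overload]
    if not candidates:
        return None
    return min(candidates,
               key=lambda n: cpu_weight * n.cpu_load + queue_weight * n.queue_length)

if __name__ == "__main__":
    cluster = [NodeStatus("n1", 0.80, 4), NodeStatus("n2", 0.35, 7), NodeStatus("n3", 0.50, 1)]
    chosen = select_node(cluster)
    print(chosen.name if chosen else "all nodes overloaded")
```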

    Distributed simultaneous task allocation and motion coordination of autonomous vehicles using a parallel computing cluster

    Task allocation and motion coordination are the main factors that should be considered in the coordination of multiple autonomous vehicles in material handling systems. Presently, these factors are handled in separate stages, which reduces the optimality and efficiency of the overall coordination; solving them simultaneously can yield near-optimal results. However, the simultaneous approach introduces additional algorithmic complexity that increases computation time in the simulation environment. This work aims to reduce the computation time by adopting a parallel and distributed computation strategy for Simultaneous Task Allocation and Motion Coordination (STAMC). In the simulation experiments, each cluster node executes the motion coordination algorithm for an individual autonomous vehicle. This arrangement enables parallel computation of the expensive STAMC algorithm. Parallel and distributed computation is performed directly within the interpretive MATLAB environment. Results show the parallel and distributed approach provides sub-linear speedup compared to a single centralised computing node.
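
    The parallelisation pattern described above, one worker per vehicle evaluating its motion-coordination cost, can be sketched in Python with a process pool standing in for the MATLAB cluster used in the paper. The cost function and vehicle data are placeholders, not the STAMC algorithm itself.

```python
from multiprocessing import Pool

def motion_cost(args):
    """Placeholder for the per-vehicle motion-coordination computation.

    In the real system this would plan a collision-free path and return
    its cost; here we just combine distance and a congestion penalty."""
    vehicle_id, distance, congestion = args
    return vehicle_id, distance * (1.0 + congestion)

def parallel_stamc(vehicles, workers=4):
    """Evaluate all vehicles in parallel and assign the task to the cheapest one."""
    with Pool(processes=workers) as pool:
        costs = dict(pool.map(motion_cost, vehicles))
    return min(costs, key=costs.get)

if __name__ == "__main__":
    fleet = [("agv1", 12.0, 0.2), ("agv2", 8.0, 0.6), ("agv3", 15.0, 0.1)]
    print(parallel_stamc(fleet))
```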

    Performance Models of Data Parallel DAG Workflows for Large Scale Data Analytics

    Directed Acyclic Graph (DAG) workflows are widely used for large-scale data analytics in cluster-based distributed computing systems. Building an accurate performance model for a DAG on data-parallel frameworks (e.g., MapReduce) is critical for implementing autonomic, self-managing big data systems. Accurate performance modelling is challenging because the allocation of preemptable system resources among parallel jobs may vary dynamically during execution, which makes execution time difficult to estimate. In this paper, we tackle this challenge by proposing a new cost model, called Bottleneck Oriented Estimation (BOE), which estimates the allocation of preemptable resources by identifying the bottleneck in order to predict task execution time accurately. For a DAG workflow, we propose a state-based approach that iteratively uses the resource allocation properties of the stages to estimate the overall execution plan. Extensive experiments were performed to validate these cost models with HiBench and TPC-H workloads. The BOE model outperforms the state-of-the-art models by a factor of five for task execution time estimation.
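
    As a rough illustration of bottleneck-oriented reasoning (not the BOE model itself), the sketch below estimates a stage's execution time from whichever resource is the bottleneck: either the CPU slots, which limit how many task "waves" run, or the aggregate I/O bandwidth. All parameters and the wave formula are assumptions for illustration.

```python
import math

def estimate_stage_time(num_tasks, task_time, cpu_slots, io_bandwidth, task_io):
    """Estimate a stage's runtime from its bottleneck resource.

    num_tasks:    tasks in the stage
    task_time:    CPU time per task (s)
    cpu_slots:    concurrently available execution slots
    io_bandwidth: aggregate I/O bandwidth (MB/s)
    task_io:      data read plus written per task (MB)
    """
    # Tasks run in "waves" limited by the number of available slots.
    waves = math.ceil(num_tasks / cpu_slots)
    cpu_bound_time = waves * task_time
    # Alternatively the stage may be limited by its total I/O volume.
    io_bound_time = num_tasks * task_io / io_bandwidth
    # The slower (bottleneck) resource determines the estimate.
    return max(cpu_bound_time, io_bound_time)

if __name__ == "__main__":
    print(estimate_stage_time(num_tasks=200, task_time=12.0,
                              cpu_slots=40, io_bandwidth=500.0, task_io=64.0))
```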

    Multi-Agent Systems Meet GPU: Deploying Agent-Based Architectures on Graphics Processors

    Even given today’s rich hardware platforms, computation-intensive algorithms and applications, such as large-scale simulations, are still challenging to run with acceptable response times. One way to increase the performance of these algorithms and applications is to use the computing power of Graphics Processing Units (GPU). However, effectively mapping distributed software models to the GPU is a non-trivial endeavor. In this paper, we investigate ways of improving the execution performance of multi-agent system (MAS) models by means of task allocation mechanisms suitable for GPU execution. Several task allocation architecture variants for MAS using the GPU are identified and their properties analyzed. In particular, we study three cases: agents and their runtime environment can be (i) completely on the host (CPU); (ii) partly on the host and partly on the device (GPU); or (iii) completely on the device. For each of these architecture variants, we propose task allocation models that take GPU restrictions into account.
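
    The three architecture variants listed above can be captured by a small configuration sketch that decides, per agent population, whether its behaviour stays on the host or is offloaded to the device. The selection rule (offload only large, data-parallel populations) is an illustrative assumption, not a result from the paper.

```python
from enum import Enum

class Placement(Enum):
    HOST_ONLY = "agents and environment on the CPU"
    HYBRID = "environment on the CPU, agent behaviour kernels on the GPU"
    DEVICE_ONLY = "agents and environment entirely on the GPU"

def choose_placement(num_agents, behaviour_is_data_parallel, env_fits_in_device_memory):
    """Pick one of the three task-allocation variants for a MAS population."""
    if not behaviour_is_data_parallel or num_agents < 1_000:
        # Kernel-launch overhead would dominate for small or branchy populations.
        return Placement.HOST_ONLY
    if env_fits_in_device_memory:
        return Placement.DEVICE_ONLY
    return Placement.HYBRID

if __name__ == "__main__":
    print(choose_placement(50_000, behaviour_is_data_parallel=True,
                           env_fits_in_device_memory=False))
```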