
    On File and Task Placements and Dynamic Load Balancing in Distributed Systems

    Two distributed-system problems are investigated in this paper: the file and task placement problem and the dynamic load balancing problem. To find a placement of files and tasks at sites with minimal total communication overhead, we propose a Simulated Annealing approach with multiple objective functions. Experimental results show that our approach achieves superior performance at much lower complexity than the previously introduced Genetic Algorithm approach. Dynamic load balancing is employed to equalize processor loads in a distributed system: it allows excess tasks at a heavily loaded processor to be migrated to a lightly loaded processor during execution. To raise the acceptance rates for such task migration requests, we propose an efficient new scheme that yields much improved acceptance rates, and consequently fewer unnecessary request messages and less communication overhead, compared with the standard sender-initiated scheme and the considerably more complicated GA-based approach.
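
    A minimal sketch of the general simulated annealing idea the abstract builds on (not the authors' implementation; the cost function, neighbour move, and cooling schedule below are illustrative assumptions):

        import math
        import random

        def anneal(num_objects, num_sites, cost, T0=10.0, cooling=0.95, steps=2000):
            # placement[i] = site hosting object i; `cost` scores the total
            # communication overhead of a placement (lower is better).
            placement = [random.randrange(num_sites) for _ in range(num_objects)]
            current = cost(placement)
            best, best_cost = list(placement), current
            T = T0
            for _ in range(steps):
                # Neighbour move: reassign one randomly chosen object.
                i = random.randrange(num_objects)
                old_site = placement[i]
                placement[i] = random.randrange(num_sites)
                new = cost(placement)
                # Always accept improvements; accept uphill moves with
                # probability exp(-delta/T), which shrinks as T cools.
                if new <= current or random.random() < math.exp(-(new - current) / T):
                    current = new
                    if new < best_cost:
                        best, best_cost = list(placement), new
                else:
                    placement[i] = old_site  # revert the rejected move
                T *= cooling  # geometric cooling schedule
            return best, best_cost

        # Toy cost: traffic[i][j] is message volume between objects i and j;
        # overhead is the traffic that crosses site boundaries.
        traffic = [[0, 5, 1], [5, 0, 2], [1, 2, 0]]
        overhead = lambda p: sum(traffic[i][j]
                                 for i in range(3) for j in range(3) if p[i] != p[j])
        print(anneal(3, 2, overhead))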

    A Framework for Developing Real-Time OLAP algorithm using Multi-core processing and GPU: Heterogeneous Computing

    The overwhelming growth in stored data has spurred researchers to seek methods that exploit it optimally, most of which face a response-time problem caused by the enormous size of the data. Most solutions suggest materialization as the favoured remedy; however, materialization alone cannot deliver real-time answers. In this paper we propose a framework illustrating the barriers to, and suggested solutions for, achieving the real-time OLAP answers that decision support systems and data warehouses rely on.
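
    For readers unfamiliar with the term, a toy sketch of what "materialization" means in the OLAP setting the abstract refers to (illustrative only; the paper's framework targets multi-core and GPU execution, which this plain-Python example does not attempt):

        from collections import defaultdict

        # Fact table: (region, product, amount) rows of a sales data warehouse.
        rows = [("EU", "A", 10), ("EU", "B", 7), ("US", "A", 3), ("US", "A", 5)]

        # Materialize the (region, product) aggregate once, ahead of query time.
        cube = defaultdict(int)
        for region, product, amount in rows:
            cube[(region, product)] += amount

        # Later OLAP queries become O(1) lookups instead of full scans, but the
        # cube is stale the moment new rows arrive -- the real-time limitation
        # the abstract points out.
        print(cube[("US", "A")])  # -> 8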

    Multi-objective engineering shape optimization using differential evolution interfaced to the Nimrod/O tool

    This paper presents an enhancement of the Nimrod/O optimization tool by interfacing DEMO, an external multiobjective optimization algorithm. DEMO is a variant of differential evolution, an algorithm that has attained much popularity in the research community, and this work represents the first time that true multiobjective optimizations have been performed with Nimrod/O. A modification to the DEMO code enables multiple objectives to be evaluated concurrently. Combined with Nimrod/O's support for parallelism, this can significantly reduce the wall-clock time of compute-intensive objective function evaluations. We describe the usage and implementation of the interface and present two optimizations. The first is a two-objective mathematical function whose Pareto front is successfully found after only 30 generations. The second test case is the three-objective shape optimization of a rib-reinforced wall bracket using the Finite Element software Code_Aster. Interfacing the already successful Nimrod/O and DEMO packages yields a solution that we believe can benefit a wide community, both industrial and academic.
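
    For orientation, a sketch of the classic DE/rand/1/bin scheme that differential evolution (and hence DEMO) builds on; this is a single-objective toy, not the DEMO or Nimrod/O code, and the parameter values are illustrative:

        import random

        def de_rand_1_bin(pop, fitness, F=0.8, CR=0.9, generations=100):
            # Classic DE/rand/1/bin; DEMO swaps the greedy selection below
            # for Pareto-dominance-based selection over multiple objectives.
            dim = len(pop[0])
            for _ in range(generations):
                for i, target in enumerate(pop):
                    a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
                    # Mutation: perturb a with the scaled difference of b and c.
                    mutant = [a[d] + F * (b[d] - c[d]) for d in range(dim)]
                    # Binomial crossover: mix mutant and target gene by gene,
                    # guaranteeing at least one mutant gene via jrand.
                    jrand = random.randrange(dim)
                    trial = [mutant[d] if (random.random() < CR or d == jrand)
                             else target[d] for d in range(dim)]
                    # Greedy one-to-one selection (single-objective stand-in).
                    if fitness(trial) <= fitness(target):
                        pop[i] = trial
            return min(pop, key=fitness)

        # Minimize the 2-D sphere function with a population of 20 vectors.
        pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
        print(de_rand_1_bin(pop, fitness=lambda x: sum(v * v for v in x)))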

    Parallel detrended fluctuation analysis for fast event detection on massive PMU data

    ("(c) 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/ republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.")Phasor measurement units (PMUs) are being rapidly deployed in power grids due to their high sampling rates and synchronized measurements. The devices high data reporting rates present major computational challenges in the requirement to process potentially massive volumes of data, in addition to new issues surrounding data storage. Fast algorithms capable of processing massive volumes of data are now required in the field of power systems. This paper presents a novel parallel detrended fluctuation analysis (PDFA) approach for fast event detection on massive volumes of PMU data, taking advantage of a cluster computing platform. The PDFA algorithm is evaluated using data from installed PMUs on the transmission system of Great Britain from the aspects of speedup, scalability, and accuracy. The speedup of the PDFA in computation is initially analyzed through Amdahl's Law. A revision to the law is then proposed, suggesting enhancements to its capability to analyze the performance gain in computation when parallelizing data intensive applications in a cluster computing environment

    Data Placement And Task Mapping Optimization For Big Data Workflows In The Cloud

    Data-centric workflows naturally process and analyze huge volumes of data. In this new era of Big Data there is a growing need to enable data-centric workflows to perform computations at a scale far exceeding a single workstation's capabilities, so these applications can benefit from distributed high performance computing (HPC) infrastructures such as cluster, grid, or cloud computing. Although data-centric workflows have been applied extensively to structure complex scientific data analysis processes, they fail to address the big data challenges and to leverage the dynamic resource provisioning capability of the Cloud. The concept of “big data workflows” is proposed by our research group as the next generation of data-centric workflow technologies, addressing the limitations of existing workflow technologies with respect to big data challenges. Executing big data workflows in the Cloud is a challenging problem, as workflow tasks and data must be partitioned, distributed, and assigned to the cloud execution sites (multiple virtual machines). When such big data workflows run in a cloud distributed across several physical locations, the workflow execution time and the cloud resource utilization efficiency depend heavily on the initial placement and distribution of the workflow tasks and datasets across the multiple virtual machines. Several workflow management systems have been developed to facilitate the use of workflows; however, the data and workflow task placement issue has not yet been sufficiently addressed. In this dissertation, I propose BDAP (Big Data Placement strategy) for data placement and TPS (Task Placement Strategy) for task placement, which improve workflow performance by minimizing data movement across the multiple virtual machines during workflow execution. In addition, I propose CATS (Cultural Algorithm Task Scheduling) for workflow scheduling, which improves workflow performance by minimizing workflow execution cost. In this dissertation, I 1) formalize the data and task placement problems in workflows, 2) propose a data placement algorithm that considers both the initial input datasets and the intermediate datasets produced during the workflow run, 3) propose a task placement algorithm that places workflow tasks before the workflow runs, 4) propose a workflow scheduling strategy that minimizes workflow execution cost given a user-provided deadline, and 5) perform extensive experiments in a distributed environment to validate that the proposed strategies provide an effective data and task placement solution, distributing big datasets and tasks onto appropriate virtual machines in the Cloud within reasonable time.
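
    A naive greedy sketch of the placement objective (minimizing cross-VM data movement); this is a hypothetical illustration, not the BDAP, TPS, or CATS algorithms, and all names and numbers below are made up:

        # Greedy heuristic: place each dataset on the VM whose already-mapped
        # tasks consume it most, to cut cross-VM transfers.
        task_vm = {"t1": "vm1", "t2": "vm1", "t3": "vm2"}           # fixed task mapping
        consumes = {"d1": ["t1", "t2"], "d2": ["t2", "t3"], "d3": ["t3"]}
        size = {"d1": 10, "d2": 40, "d3": 5}                        # dataset sizes (GB)

        placement = {}
        for d, tasks in consumes.items():
            # Tally demand per VM and place the dataset where demand is highest.
            demand = {}
            for t in tasks:
                demand[task_vm[t]] = demand.get(task_vm[t], 0) + size[d]
            placement[d] = max(demand, key=demand.get)

        # Cross-VM movement: a dataset is transferred once for each consumer
        # task that runs on a different VM than the one holding it.
        moved = sum(size[d] for d, ts in consumes.items()
                    for t in ts if task_vm[t] != placement[d])
        print(placement, moved)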