
    MPI-based Evaluation of Coordinator Election Algorithms

    In this paper, we detail how two types of distributed coordinator election algorithms can be compared in terms of performance, based on an evaluation on a High Performance Computing (HPC) infrastructure. An experimental approach based on an MPI (Message Passing Interface) implementation is presented, with the goal of characterizing the relevant evaluation metrics through statistical processing of the results. The presented approach can be used to teach master's students in a course on distributed software the basics of coordinator election algorithms and how to conduct an experimental performance evaluation study. Finally, use cases where distributed coordinator election algorithms are useful are presented.
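
    A minimal sketch of one classic coordinator election scheme, a Chang-Roberts-style ring election implemented with mpi4py, is shown below. The abstract does not name the two algorithms that were compared, so the algorithm choice, message format, and timing code here are illustrative assumptions, not the paper's implementation.

```python
# Illustrative Chang-Roberts-style ring election with mpi4py (assumed
# setup; the paper does not name its two algorithms). Each rank sends
# its id around a logical ring; the highest id wins and announces
# itself. Relies on MPI's eager buffering of small messages.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
succ = (rank + 1) % size                # ring successor
pred = (rank - 1) % size                # ring predecessor

t0 = MPI.Wtime()
comm.send(("ELECT", rank), dest=succ)   # every rank starts an election
leader = None
while leader is None:
    kind, cid = comm.recv(source=pred)
    if kind == "ELECT":
        if cid > rank:
            comm.send(("ELECT", cid), dest=succ)    # forward larger ids
        elif cid == rank:
            comm.send(("LEADER", rank), dest=succ)  # own id returned: announce
        # ids smaller than ours are swallowed
    else:                                           # "LEADER" announcement
        leader = cid
        if cid != rank:
            comm.send(("LEADER", cid), dest=succ)   # pass announcement along
elapsed = MPI.Wtime() - t0
print(f"rank {rank}: coordinator = {leader}, elapsed = {elapsed:.6f} s")
```

    Run with, e.g., "mpiexec -n 8 python ring_election.py"; timing each rank with MPI.Wtime(), as above, is one way to collect the per-run samples needed for statistical processing of the evaluation metrics.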

    Elementary Concepts of Big Data and Hadoop

    This paper presents the basic concepts of Big Data and its importance to an organization from a performance point of view. The term Big Data refers to data sets whose volume, complexity, and rate of growth make them difficult to capture, manage, process, and analyze. For such data-intensive applications, the Apache Hadoop framework has recently attracted a lot of attention. Hadoop is the core platform for structuring Big Data and solves the problem of making it useful for analytics. Hadoop is an open-source software project that enables the distributed processing of enormous data sets across clusters of commodity hardware, providing a framework for the analysis and transformation of very large data sets using the MapReduce paradigm. This paper deals with the architecture of Hadoop and its various components.
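
    Since the abstract centres on the MapReduce paradigm, a minimal word-count sketch for Hadoop Streaming is given below. It is a generic illustration of MapReduce rather than code from the paper; the file name and HDFS paths are placeholders.

```python
#!/usr/bin/env python3
# Minimal Hadoop Streaming word-count sketch: the same file serves as
# mapper ("map") or reducer ("reduce") depending on sys.argv[1].
# Usage with Hadoop Streaming (paths are placeholders):
#   hadoop jar hadoop-streaming.jar -input /in -output /out \
#     -mapper "wordcount.py map" -reducer "wordcount.py reduce"
import sys

def mapper():
    # Emit (word, 1) pairs, one per line, tab-separated.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Input arrives sorted by key; sum the counts per word.
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

    The same script can be tested locally without a cluster: "cat input.txt | python3 wordcount.py map | sort | python3 wordcount.py reduce".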

    Comparison of intelligent charging algorithms for electric vehicles to reduce peak load and demand variability in a distribution grid

    A potential breakthrough of the electrification of the vehicle fleet will incur a steep rise in the load on the electrical power grid. To avoid huge grid investments, coordinated charging of those vehicles is a must. In this paper, we assess algorithms to schedule the charging of plug-in (hybrid) electric vehicles so as to minimize the additional peak load they might cause. We first introduce two approaches: one based on a classical optimization approach using quadratic programming, and a second, market-based coordination, a multi-agent system that uses bidding on a virtual market to reach an equilibrium price that matches demand and supply. We benchmark these two methods against each other, as well as against a baseline scenario of uncontrolled charging. Our simulation results, covering a residential area with 63 households, show that controlled charging reduces peak load, load variability, and deviations from the nominal grid voltage.
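
    A minimal sketch of the quadratic-programming flavour of coordinated charging is given below, using cvxpy to flatten the aggregate feeder load by minimizing its sum of squares. The base-load profile, fleet size, charger limit, and energy needs are invented for illustration and do not come from the paper.

```python
# QP sketch of coordinated EV charging: minimize the sum of squared
# total load over the day, which flattens peaks. All parameters are
# made-up illustrations, not the paper's data.
import cvxpy as cp
import numpy as np

T, N = 96, 5                        # 15-minute slots, number of EVs
rng = np.random.default_rng(0)
base = 30 + 10 * np.sin(np.linspace(0, 2 * np.pi, T))  # kW feeder base load
need = rng.uniform(5, 15, N)        # kWh each EV must receive overnight
p_max = 3.7                         # kW charger limit per vehicle
dt = 0.25                           # hours per slot

x = cp.Variable((N, T), nonneg=True)            # charging power per EV/slot
total = base + cp.sum(x, axis=0)                # aggregate feeder load
constraints = [x <= p_max,
               cp.sum(x, axis=1) * dt == need]  # meet each EV's energy need
cp.Problem(cp.Minimize(cp.sum_squares(total)), constraints).solve()

# Scheduling fills the overnight valley, so the EVs add (almost)
# nothing on top of the existing base-load peak.
print("peak without EVs:          ", base.max())
print("peak with scheduled charge:", total.value.max())
```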

    Assessment and mitigation of voltage violations by solar panels in a residential distribution grid

    Distributed renewable electricity generators, such as solar cells and wind turbines, introduce bidirectional energy flows in the low-voltage power grid, possibly causing voltage violations and grid instabilities. The current solution to this problem comprises automatically switching off some of the local generators, resulting in a loss of green energy. In this paper we study the impact of different solar panel penetration levels in a residential area and the corresponding effects on the distribution feeder line. To mitigate these problems, we assess how effective it is to locally store excess energy in batteries. A case study on a residential feeder serving 63 houses shows that if 80% of them have photovoltaic (PV) panels, 45% of those panels would be switched off, resulting in 482 kWh of PV-generated energy being lost. We show that providing a 9 kWh battery at each house can mitigate some voltage violations, thereby allowing more renewable energy to be used.
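
    The battery-buffering idea can be illustrated with a toy time-step simulation: excess PV power above an export limit is stored in the 9 kWh battery instead of being curtailed. The load and PV profiles and the export limit are illustrative assumptions; only the 9 kWh battery size is taken from the abstract.

```python
# Toy per-house simulation: buffer excess PV output in a battery
# instead of curtailing it. Profiles and the export limit are assumed.
import numpy as np

dt = 0.25                                    # hours per step
t = np.arange(0, 24, dt)
load = 0.5 + 0.4 * (np.abs(t - 19) < 2)      # kW: small evening bump
pv = np.clip(3.0 * np.sin((t - 6) / 12 * np.pi), 0, None)  # kW midday peak
export_limit = 1.5                           # kW before voltage problems (assumed)

soc, cap, lost, grid = 0.0, 9.0, 0.0, []     # battery state of charge (kWh)
for l, p in zip(load, pv):
    surplus = p - l                          # kW surplus (+) or deficit (-)
    if surplus > export_limit:
        excess = surplus - export_limit
        charge = min(excess, (cap - soc) / dt)   # kW the battery can absorb
        soc += charge * dt
        lost += (excess - charge) * dt           # curtailed despite battery
        surplus = export_limit                   # export capped at the limit
    elif surplus < 0 and soc > 0:
        discharge = min(-surplus, soc / dt)      # serve the house from battery
        soc -= discharge * dt
        surplus += discharge
    grid.append(surplus)                         # net export to the feeder
print(f"energy curtailed despite battery: {lost:.1f} kWh")
print(f"max feeder export: {max(grid):.2f} kW")
```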

    Exploiting V2G to optimize residential energy consumption with electrical vehicle (dis)charging

    The potential breakthrough of pluggable (hybrid) electric vehicles (PHEVs) will impose various challenges on the power grid, and in particular implies a significant increase of its load. Adequately dealing with such PHEVs is one of the challenges and opportunities for smart grids. In particular, intelligent control strategies for the charging process can significantly alleviate the peak load increases that are to be expected from, e.g., residential vehicle charging at home. In addition, car batteries connected to the grid can also be exploited to deliver grid services, in particular giving stored energy back to the grid to help cope with peak demands stemming from, e.g., household appliances. In this paper, we address such vehicle-to-grid (V2G) scenarios while considering the optimization of PHEV charging in a residential scenario. In particular, we assess the optimal car battery (dis)charging schedule to achieve peak shaving and a reduction of the variability (over time) of the load of households connected to a local distribution grid. We compare (i) a business-as-usual (BAU) scenario without any intelligent charging, (ii) intelligent local charging optimization without V2G, and (iii) charging optimization with V2G. To evaluate these scenarios, we use our simulation tool, based on OMNeT++, which combines ICT and power network models and incorporates a Matlab model that allows, e.g., assessing voltage violations. In a case study on a three-feeder distribution network spanning 63 households, we observe that non-V2G optimized charging can reduce the peak demand compared to BAU by 64%. If we apply V2G to the intelligent charging, we can cut the non-V2G peak demand by a further 17% (i.e., achieve a peak load which is only 30% of BAU).
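
    A sketch of how the charging optimization extends to V2G is shown below: the battery power variable may go negative (discharging to the grid), subject to state-of-charge bounds and a full-by-morning constraint. The model and all parameters are illustrative assumptions, not the paper's OMNeT++/Matlab setup.

```python
# V2G scheduling sketch: a single household battery whose power may be
# negative (discharge to the grid). Peak shaving via least-squares load
# flattening; every number here is an illustrative assumption.
import cvxpy as cp
import numpy as np

T, dt = 96, 0.25                           # 15-minute slots over one day
tt = np.arange(T) * dt
base = 3.0 + 2.0 * ((tt > 17) & (tt < 21))  # kW household load, evening peak

p = cp.Variable(T)                          # battery power: + charge, - discharge
soc = cp.cumsum(p) * dt + 5.0               # kWh state of charge, start at 5
constraints = [p >= -3.7, p <= 3.7,         # charger power limits
               soc >= 0, soc <= 20,         # assumed 20 kWh battery
               soc[T - 1] >= 18]            # nearly full by morning
cp.Problem(cp.Minimize(cp.sum_squares(base + p)), constraints).solve()

print("peak (BAU):     ", base.max())
print("peak (with V2G):", (base + p.value).max())
```

    The only structural change versus charging-only optimization is the sign of p: allowing discharge lets the optimizer shave the evening peak with stored energy and refill the battery overnight.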

    High Performance Fault-Tolerant Hadoop Distributed File System

    The Hadoop Distributed File System (HDFS) is designed to store very large data sets reliably, and to stream those data sets at high bandwidth to user applications. Huge amounts of data are generated from many sources daily, and maintaining such data is a challenging task. One proposed solution is to use Hadoop: building on solutions published by Google, Doug Cutting and his team developed an open-source project called Hadoop. Hadoop is a framework written in Java for running applications on large clusters of commodity hardware. HDFS is designed to be a scalable, fault-tolerant, distributed storage system and, like Hadoop in general, to be deployed on low-cost hardware. HDFS stores file system metadata and application data separately: metadata on a dedicated server called the NameNode, and application data on separate servers called DataNodes. The file system is accessed via HDFS clients, which first contact the NameNode for data locations and then transfer data to (write) or from (read) the specified DataNodes. A file download request selects only one of the replica servers; the other replicas are not used, so the download time grows as the file size increases. In this paper we study three policies for the selection of blocks: first, random, and load-based. Our results show that downloads under the "first" policy are slower than under "random", and "random" is in turn slower than "load-based".
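
    The three block-selection policies named above can be sketched as follows; the data structures (a replica list per block and a per-DataNode count of active transfers) are assumed for illustration, not taken from the paper's code.

```python
# Sketch of the three replica-selection policies compared in the paper:
# "first" (always the first listed replica), "random", and "loadbased"
# (least-loaded DataNode). Data structures are illustrative assumptions.
import random

def pick_replica(replicas, policy, load=None):
    """replicas: list of DataNode ids holding the block.
    load: dict mapping DataNode id -> number of active transfers."""
    if policy == "first":
        return replicas[0]
    if policy == "random":
        return random.choice(replicas)
    if policy == "loadbased":
        return min(replicas, key=lambda node: load[node])
    raise ValueError(f"unknown policy: {policy}")

# Example: three replicas of one block, with differing DataNode load.
replicas = ["dn1", "dn2", "dn3"]
load = {"dn1": 7, "dn2": 2, "dn3": 5}
for policy in ("first", "random", "loadbased"):
    print(policy, "->", pick_replica(replicas, policy, load))
```

    The ordering reported in the abstract is intuitive from this sketch: "first" always hits the same server regardless of its load, "random" spreads requests blindly, and "loadbased" actively steers each download to the least busy replica.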

    Min-Cost with Delay Scheduling for Large Scale Cloud-Based Workflow Applications Platform

    Cloud computing is a promising solution to provide resource scalability dynamically. In order to support large-scale workflow applications, we present Nuts-LSWAP, an implementation of a Cloud workflow platform. We then present a novel min-cost with delay scheduling algorithm. We also focus on global scheduling, using a genetic evolution method and other scheduling methods (sequential and greedy) to evaluate and decrease the execution cost. Finally, three primary experiments, divided into two parts, are presented: the first part demonstrates the effectiveness of the global mapping algorithm, and the second part compares the execution of large-scale workflow instances with and without delay scheduling. These results provide initial evidence that Nuts-LSWAP is an efficient platform for building a Cloud workflow environment.
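
    A rough sketch of the delay-scheduling idea, as we read the abstract, follows: a task may wait a bounded number of scheduling rounds for its cheapest resource before settling for a costlier free one. The cost model, resource tuples, and delay bound are illustrative guesses, not Nuts-LSWAP internals.

```python
# Delay-scheduling sketch (illustrative guess at the idea): prefer the
# cheapest resource, but only wait a bounded number of rounds for it.
def schedule(task, resources, waited, max_delay=3):
    """resources: list of (name, cost_per_hour, busy) tuples.
    waited: rounds this task has already been deferred."""
    ranked = sorted(resources, key=lambda r: r[1])   # cheapest first
    name, cost, busy = ranked[0]
    if not busy:                                     # cheapest is free: take it
        return name
    if waited < max_delay:                           # keep waiting for it
        return None
    for name, cost, busy in ranked[1:]:              # give up: next free option
        if not busy:
            return name
    return None                                      # nothing free at all

resources = [("spot-small", 0.02, True), ("ondemand-small", 0.05, False)]
for rounds in range(5):
    choice = schedule("task-1", resources, rounds)
    print(f"round {rounds}: {'wait' if choice is None else choice}")
```

    Bounding the wait trades a little completion delay for a lower execution cost, which matches the min-cost objective described above.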

    Extending the Globus Information Service with the Common Information Model

    This is a post-peer-review, pre-copyedit version; the final authenticated version is available online at http://dx.doi.org/10.1109/ISPA.2011.62
    [Abstract] The need for task-adapted and complete information for the management of resources is a well-known issue in Grid computing. Globus Toolkit 4 (GT4) includes the Monitoring and Discovery System component (MDS4) to carry out resource management. The Common Information Model (CIM) provides a standard conceptual view of the managed environment. This work improves the MDS4 functionality through the use of CIM, with the aim of providing a unified, standard representation of Grid resources. Since a practical CIM model may contain a large volume of information, a new Index Service that represents the CIM information through Java instances is presented. In addition, a solution that keeps data in persistent storage has also been implemented. The evaluation of the proposed solutions achieves encouraging results, with an important reduction in memory consumption, good scalability when the number of instances increases, and a reasonable response time.
    Funding: Ministerio de Ciencia e Innovación, TIN2007-67537-C03; Ministerio de Ciencia e Innovación, TIN2010-1637
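
    The paper's Index Service represents CIM information through Java instances with a persistent-storage option; the Python sketch below illustrates the same general idea, an index of CIM-style instances backed by persistent storage, with a schema and API invented purely for illustration.

```python
# Minimal sketch of an index of CIM-style instances with persistent
# backing. Illustrates the concept only; the paper's Index Service uses
# Java instances, and this schema/API is our own simplification.
import sqlite3

class CimIndex:
    def __init__(self, path="cim_index.db"):
        self.db = sqlite3.connect(path)
        self.db.execute("""CREATE TABLE IF NOT EXISTS instances
                           (class TEXT, key TEXT, prop TEXT, value TEXT,
                            PRIMARY KEY (class, key, prop))""")

    def register(self, cim_class, key, props):
        # Upsert each property of one managed-resource instance.
        for prop, value in props.items():
            self.db.execute("INSERT OR REPLACE INTO instances VALUES (?,?,?,?)",
                            (cim_class, key, prop, str(value)))
        self.db.commit()

    def query(self, cim_class):
        # Rebuild all instances of a CIM class from persistent storage.
        rows = self.db.execute(
            "SELECT key, prop, value FROM instances WHERE class=?", (cim_class,))
        out = {}
        for key, prop, value in rows:
            out.setdefault(key, {})[prop] = value
        return out

idx = CimIndex()
idx.register("CIM_ComputerSystem", "node01",
             {"NumberOfProcessors": 16, "TotalMemory": "64GB"})
print(idx.query("CIM_ComputerSystem"))
```

    Keeping instances in a store like this, rather than as live objects, is one way to obtain the memory reduction and persistence properties the abstract reports, at the cost of some query latency.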