1,811 research outputs found

    Actors: The Ideal Abstraction for Programming Kernel-Based Concurrency

    GPU and multicore hardware architectures are commonly used in many different application areas to accelerate problem solutions relative to single-CPU architectures. The typical approach to accessing these hardware architectures requires embedding logic into the programming language used to construct the application; the two primary forms of embedding are calls to API routines that access the concurrent functionality, or pragmas that provide concurrency hints to a language compiler so that particular blocks of code are targeted at the concurrent functionality. The former approach is verbose and semantically bankrupt, while the success of the latter approach is restricted to simple, static uses of the functionality. Actor-based applications are constructed from independent, encapsulated actors that interact through strongly-typed channels. This paper presents a first attempt at using actors to program kernels targeted at such concurrent hardware. Besides the glove-like fit of a kernel to the actor abstraction, quantitative code analysis shows that actor-based kernels are always significantly simpler than API-based coding, and generally simpler than pragma-based coding. Additionally, performance measurements show that the overheads of actor-based kernels are commensurate with those of API-based kernels, and range from equivalent to vastly improved relative to pragma-based coding, both for sample and real-world applications.
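
    The paper's actor framework and its strongly-typed channels are not reproduced here. As a rough, hypothetical Python sketch of the programming-model idea, the following wraps a toy SAXPY "kernel" in an actor that receives work and emits results over queue-based channels instead of explicit API calls; the class name, the message format, and the use of plain queues in place of typed channels are assumptions for illustration only.

    # Minimal actor-style kernel sketch (hypothetical; not the paper's framework,
    # and no real GPU offload is performed).
    import queue
    import threading

    class SaxpyActor:
        """Encapsulates a 'kernel' (y = a*x + y) behind inbox/outbox channels."""

        def __init__(self):
            self.inbox = queue.Queue()    # receives (a, x, y) work messages
            self.outbox = queue.Queue()   # emits result vectors
            threading.Thread(target=self._run, daemon=True).start()

        def _run(self):
            while True:
                msg = self.inbox.get()
                if msg is None:           # a None message shuts the actor down
                    break
                a, x, y = msg
                self.outbox.put([a * xi + yi for xi, yi in zip(x, y)])

    if __name__ == "__main__":
        actor = SaxpyActor()
        actor.inbox.put((2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))
        print(actor.outbox.get())         # [12.0, 24.0, 36.0]
        actor.inbox.put(None)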

    A hyper-heuristic ensemble method for static job-shop scheduling.

    We describe a new hyper-heuristic method, NELLI-GP, for solving job-shop scheduling problems (JSSP) that evolves an ensemble of heuristics. The ensemble adopts a divide-and-conquer approach in which each heuristic solves a unique subset of the instance set considered. NELLI-GP extends an existing ensemble method called NELLI by introducing a novel heuristic generator that evolves heuristics composed of linear sequences of dispatching rules: each rule is represented using a tree structure and is itself evolved. Following a training period, the ensemble is shown to outperform both existing dispatching rules and a standard genetic programming algorithm on a large set of new test instances. In addition, it obtains superior results on a set of 210 benchmark problems from the literature when compared to two state-of-the-art hyper-heuristic approaches. Further analysis of the relationship between the heuristics in the evolved ensemble and the instances each solves provides new insights into features that might characterize similar instances.
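
    NELLI-GP's evolved rule trees and ensemble bookkeeping are not shown in the abstract; as a minimal Python sketch of what a single dispatching rule does on a job-shop instance, the following schedules a tiny two-job instance with a shortest-processing-time rule. The instance format, the spt rule, and the scheduling loop are illustrative assumptions, not NELLI-GP itself.

    # Apply one dispatching rule to a toy JSSP instance; NELLI-GP instead evolves
    # sequences of such rules, each rule represented as an evolved tree.
    def spt(job, op):
        """Priority rule: shortest processing time (job index unused here)."""
        return op[1]

    def schedule(jobs, rule):
        """jobs: list of operation lists, each operation = (machine, duration)."""
        next_op = [0] * len(jobs)          # index of the next unscheduled op per job
        job_ready = [0] * len(jobs)        # earliest start time per job
        mach_ready = {}                    # earliest start time per machine
        makespan = 0
        while any(next_op[j] < len(jobs[j]) for j in range(len(jobs))):
            ready = [j for j in range(len(jobs)) if next_op[j] < len(jobs[j])]
            j = min(ready, key=lambda j: rule(j, jobs[j][next_op[j]]))
            machine, dur = jobs[j][next_op[j]]
            start = max(job_ready[j], mach_ready.get(machine, 0))
            job_ready[j] = mach_ready[machine] = start + dur
            makespan = max(makespan, start + dur)
            next_op[j] += 1
        return makespan

    if __name__ == "__main__":
        jobs = [[(0, 3), (1, 2)], [(1, 4), (0, 1)]]   # two jobs on two machines
        print(schedule(jobs, spt))                     # makespan of the SPT schedule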

    A machine learning enhanced multi-start heuristic to efficiently solve a serial-batch scheduling problem

    Serial-batch scheduling problems are widespread in several industries (e.g., the metal processing industry or industrial 3D printing) and consist of two subproblems that must be solved simultaneously: the grouping of jobs into batches and the sequencing of the created batches. The problem's NP-hard nature prevents large-scale instances from being solved optimally; therefore, heuristic solution methods are a common choice to tackle the problem effectively. One of the best-performing heuristics in the literature is the ATCS–BATCS(β) heuristic, which has three control parameters. To achieve good solution quality, the most appropriate parameters must be determined a priori or within a multi-start approach. As multi-start approaches performing (full) grid searches on the parameters lack efficiency, we propose a machine-learning-enhanced grid search. To that end, Artificial Neural Networks are used to predict the performance of the heuristic given a specific problem instance and specific heuristic parameters. Based on these predictions, we perform a grid search on a smaller set of the most promising heuristic parameters. The comparison to the ATCS–BATCS(β) heuristic shows that our approach reaches a very competitive mean solution quality that is only 2.5% lower, and that it is computationally much more efficient: computation times can be reduced by 89.2% on average.
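
    As a hedged Python sketch of this ML-enhanced grid search idea (the feature layout, the parameter grid, and the synthetic training data are placeholders, the real ATCS–BATCS(β) heuristic is not included, and this is not the paper's implementation), a neural network trained offline on observed heuristic performance can be used to shortlist the most promising parameter settings before the expensive heuristic is actually run:

    # Offline: learn (instance features, parameters) -> heuristic performance.
    # Online: score the full grid cheaply and keep only the top candidates.
    import itertools
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    grid = list(itertools.product([0.1, 0.5, 1.0], [0.1, 0.5, 1.0], [1, 2, 4]))

    # Dummy training set: 3 instance features + 3 heuristic parameters per row.
    X_train = rng.random((500, 6))
    y_train = X_train.sum(axis=1) + rng.normal(0, 0.1, 500)   # stand-in objective
    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(X_train, y_train)

    # For a new instance, predict the objective for every grid point and run the
    # real heuristic only on the k settings with the best predicted value.
    instance_features = rng.random(3)
    scores = model.predict([np.concatenate([instance_features, p]) for p in grid])
    shortlist = [grid[i] for i in np.argsort(scores)[:5]]     # lower = better here
    print("parameter settings shortlisted for the heuristic:", shortlist)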

    Advances in Computational Intelligence Applications in the Mining Industry

    This book captures advancements in the applications of computational intelligence (artificial intelligence, machine learning, etc.) to problems in the mineral and mining industries. The papers present the state of the art in four broad categories: mine operations, mine planning, mine safety, and advances in the sciences, primarily in image processing applications. Authors in the book include both researchers and industry practitioners.

    Energy Consumption Forecasts by Gradient Boosting Regression Trees

    Recent years have seen an increasing interest in developing robust, accurate and possibly fast forecasting methods for both energy production and consumption. Traditional approaches based on linear architectures are not able to fully model the relationships between variables, particularly when dealing with many features. We propose a Gradient-Boosting-Machine-based framework to forecast the demand of the mixed customers of an energy dispatching company, aggregated according to their location within the seven Italian electricity market zones. The main challenge is to provide precise one-day-ahead predictions even though the most recent available data are two months old. This requires exogenous regressors, e.g., historical features of a subset of the customers and air temperature, to be incorporated into the scheme and tailored to the specific case. Numerical simulations are conducted, resulting in a MAPE of 5–15% depending on the market zone. The Gradient Boosting model performs significantly better than classical statistical models for time series, such as ARMA, which are unable to capture holiday effects.
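
    A minimal Python sketch of this setup (with a synthetic daily series, illustrative calendar and temperature features, and a 60-day lag standing in for the "two months old" data constraint; none of this is the company's actual pipeline) could look as follows:

    # Gradient boosting on calendar, temperature, and lagged-demand features,
    # where the freshest usable demand observation is two months old.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_percentage_error

    rng = np.random.default_rng(0)
    days = pd.date_range("2018-01-01", periods=900, freq="D")
    doy, dow = days.dayofyear.to_numpy(), days.dayofweek.to_numpy()
    demand = 100 + 10 * np.sin(2 * np.pi * doy / 365) + 5 * (dow >= 5) + rng.normal(0, 2, 900)
    temp = 15 + 10 * np.sin(2 * np.pi * (doy - 30) / 365)

    df = pd.DataFrame({"demand": demand, "temp": temp, "dow": dow, "doy": doy}, index=days)
    df["lag60"] = df["demand"].shift(60)     # newest usable demand value (two months old)
    df["lag365"] = df["demand"].shift(365)   # same calendar day one year earlier
    df = df.dropna()

    X, y = df[["temp", "dow", "doy", "lag60", "lag365"]], df["demand"]
    model = GradientBoostingRegressor(random_state=0).fit(X[:-90], y[:-90])   # hold out last 90 days
    print("MAPE:", mean_absolute_percentage_error(y[-90:], model.predict(X[-90:])))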

    Machine Learning Tool for Transmission Capacity Forecasting of Overhead Lines based on Distributed Weather Data

    Increasing the share of intermittent renewable energy sources in the electrical energy system is a challenge for grid operators. One example is the growth of north-south transmission of wind energy in Germany, which increases congestion on overhead lines and is directly reflected in the electricity costs of end consumers. Besides building new overhead lines, weather-dependent operation of existing lines is one solution to improve the current utilization of the system. An analysis of a test line in Germany showed that an increase of about 28% in current-carrying capacity can translate into a reduction of about 55% in the cost of congestion management measures. This benefit can only be realized by the grid operator if an ampacity forecast is available for the generation scheduling of conventional power plants. The system presented in this dissertation forecasts the ampacity of overhead lines over 48 hours, improving forecast accuracy by 6.13% on average compared to the state of the art. The approach adapts the meteorological forecasts to the local weather situation along the line. These adjustments are necessary because of changes in topography along the line route and wind shading by surrounding trees, which the meteorological models cannot describe. In addition, the model developed in this dissertation is able to compensate for the day-night differences in weather-forecast accuracy, which benefits the ampacity forecast. The reliability, and therefore also the efficiency, of the generation schedule for the next 48 hours was increased by 10% compared to the state of the art. Furthermore, a method for positioning weather stations was developed within this work to cover the most critical locations along the line while minimizing the number of stations. If a distributed sensor network were deployed throughout Germany, the savings in redispatching costs would yield a return on investment within roughly three years. The developed system also supports transient analysis, so that congestion events lasting a few minutes can be resolved without reaching the maximum conductor temperature. This document seeks to highlight the benefits of overhead line monitoring systems and presents a solution to support the flexible electrical grid required for a successful energy transition.
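
    The dissertation's machine-learning correction of the weather forecasts is not reproduced here. As background for why local weather drives line capacity, the following Python sketch computes a crude steady-state ampacity from wind speed and ambient temperature; the heat-transfer coefficient, conductor data, and solar term are rough assumptions for illustration, not IEEE 738 values and not the thesis's model.

    # Crude steady-state heat balance: Joule heating vs. convective and radiative
    # cooling minus solar gain, per metre of conductor.
    import math

    def ampacity(wind_speed, t_ambient, t_conductor_max=80.0, diameter=0.03,
                 r_ac=7e-5, emissivity=0.8, absorptivity=0.8, solar=900.0):
        """Return a rough current limit in amperes (all coefficients assumed)."""
        dT = t_conductor_max - t_ambient
        h = 10.0 + 20.0 * math.sqrt(max(wind_speed, 0.1))    # crude film coefficient
        p_conv = h * math.pi * diameter * dT                 # convective cooling [W/m]
        sigma = 5.67e-8
        p_rad = emissivity * sigma * math.pi * diameter * (
            (t_conductor_max + 273.15) ** 4 - (t_ambient + 273.15) ** 4)
        p_sun = absorptivity * solar * diameter              # solar heating [W/m]
        return math.sqrt(max(p_conv + p_rad - p_sun, 0.0) / r_ac)

    if __name__ == "__main__":
        # More wind and a lower ambient temperature allow a higher current.
        print(round(ampacity(wind_speed=0.5, t_ambient=35.0)))
        print(round(ampacity(wind_speed=6.0, t_ambient=10.0)))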

    The Alloy Theoretic Automated Toolkit: A User Guide

    Although the formalism that allows the calculation of alloy thermodynamic properties from first principles has been known for decades, its practical implementation has so far remained a tedious process. The Alloy Theoretic Automated Toolkit (ATAT) drastically simplifies this procedure by implementing decision rules based on formal statistical analysis that free researchers from constant monitoring of the calculation process, and by automatically "gluing" together the input and the output of various codes in order to provide a high-level interface to the calculation of alloy thermodynamic properties from first principles. ATAT implements the Structure Inversion Method (SIM), also known as the Connolly-Williams method, in combination with semi-grand-canonical Monte Carlo simulations. In order to make this powerful toolkit available to the wide community of researchers who could benefit from it, this article presents a concise user guide outlining the steps required to obtain thermodynamic information from ab initio calculations.
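
    As a toy Python illustration of the structure-inversion step (the correlation matrix and energies below are made up; ATAT builds them from real structures and adds cross-validation-based cluster selection on top), effective cluster interactions can be fitted by least squares and then reused to predict the energy of an unseen structure:

    # Fit effective cluster interactions (ECIs) from known structure energies and
    # their cluster correlation functions, then predict a new structure's energy.
    import numpy as np

    # Rows: known structures; columns: clusters (empty, point, pair, triplet).
    correlations = np.array([
        [1.0,  1.0,  1.0,  1.0],
        [1.0, -1.0,  1.0, -1.0],
        [1.0,  0.0, -1.0,  0.0],
        [1.0,  0.5, -0.5, -0.25],
        [1.0, -0.5, -0.5,  0.25],
    ])
    energies = np.array([-3.10, -3.05, -3.40, -3.32, -3.28])  # dummy eV/atom values

    ecis, *_ = np.linalg.lstsq(correlations, energies, rcond=None)
    print("fitted ECIs:", np.round(ecis, 4))

    new_corr = np.array([1.0, 0.25, -0.75, -0.10])
    print("predicted energy:", float(new_corr @ ecis))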

    Power-Aware Job Dispatching in High Performance Computing Systems

    This work deals with the power-aware job dispatching problem in supercomputers; broadly speaking, dispatching consists of assigning finite-capacity resources to a set of activities, here with a special concern for power- and energy-efficient solutions. We introduce novel optimization approaches to address its multiple aspects. The proposed techniques have a broad application range but are aimed at applications in the field of High Performance Computing (HPC) systems. Devising a power-aware HPC job dispatcher is a complex task in which contrasting goals must be satisfied. Furthermore, the online nature of the problem requires that solutions be computed in real time while respecting stringent time limits. This aspect has historically discouraged the use of exact methods and favoured instead the adoption of heuristic techniques. The application of optimization approaches to the dispatching task is still a largely unexplored area of research and can drastically improve the performance of HPC systems. In this work we tackle the job dispatching problem on a real HPC machine, the Eurora supercomputer hosted at the Cineca research center in Bologna. We propose a Constraint Programming (CP) model that outperforms the dispatching software currently in use. An essential element for taking power-aware decisions during the job dispatching phase is the ability to estimate job power consumption before execution. To this end, we applied Machine Learning techniques to create a prediction model that was trained and tested on the Eurora supercomputer, showing high prediction accuracy. Finally, we develop a power-aware solution for the same target machine and devise different approaches to solve the dispatching problem while keeping the power consumption of the whole system under a given threshold. We propose a heuristic technique and a CP/heuristic hybrid method, both able to solve practical-size instances and to outperform the current state-of-the-art techniques.
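
    The thesis's CP model, the Eurora-specific constraints, and the learned power predictor are not reproduced here. A hedged Python sketch of the power-capping idea in a heuristic dispatcher (the job data, the priority ordering, and the power figures are invented for illustration) might look like this:

    # Greedily start queued jobs only while free nodes remain and the predicted
    # system power stays under the cap.
    def dispatch(queue, free_nodes, current_power, power_cap):
        """queue: list of dicts with 'id', 'nodes', 'predicted_power' (watts)."""
        started = []
        # Cheapest jobs (power * nodes) first -- an assumed priority, not the thesis's.
        for job in sorted(queue, key=lambda j: j["predicted_power"] * j["nodes"]):
            fits_nodes = job["nodes"] <= free_nodes
            fits_power = current_power + job["predicted_power"] <= power_cap
            if fits_nodes and fits_power:
                free_nodes -= job["nodes"]
                current_power += job["predicted_power"]
                started.append(job["id"])
        return started, free_nodes, current_power

    if __name__ == "__main__":
        queue = [{"id": "a", "nodes": 4, "predicted_power": 1200.0},
                 {"id": "b", "nodes": 2, "predicted_power": 300.0},
                 {"id": "c", "nodes": 8, "predicted_power": 2500.0}]
        print(dispatch(queue, free_nodes=10, current_power=1500.0, power_cap=4000.0))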