
    Locating a bioenergy facility using a hybrid optimization method

    In this paper, the optimum location of a bioenergy generation facility for district energy applications is sought. A bioenergy facility usually belongs to a wider system; therefore, a holistic approach is adopted to define the location that optimizes the system-wide operational and investment costs. A hybrid optimization method is employed to overcome the limitations posed by the complexity of the optimization problem. The efficiency of the hybrid method is compared to a stochastic method (genetic algorithms) and an exact optimization method (sequential quadratic programming). The results confirm that the proposed hybrid optimization method is the most efficient for the specific problem. (C) 2009 Elsevier B.V. All rights reserved.
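
    As a rough illustration of such a two-stage hybrid, the sketch below (all names and the cost function are hypothetical, not taken from the paper) uses a tiny genetic algorithm for the stochastic global stage and seeds SciPy's SLSQP solver (a sequential quadratic programming method) with its best candidate for exact local refinement:

```python
# A minimal sketch of a GA + SQP hybrid, assuming a smooth but multimodal
# system-wide cost over candidate facility coordinates. `system_cost` is a
# hypothetical stand-in for the operational-plus-investment cost.
import numpy as np
from scipy.optimize import minimize

def system_cost(x):
    # Hypothetical multimodal cost over facility coordinates (x, y).
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2 + 2.0 * np.sin(4.0 * x[0])

def tiny_ga(cost, bounds, pop=40, gens=60, seed=0):
    # Mutation-only GA with truncation selection; enough for a seed point.
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        f = np.apply_along_axis(cost, 1, P)
        parents = P[np.argsort(f)[: pop // 2]]           # keep best half
        children = parents + rng.normal(0.0, 0.1, parents.shape)
        P = np.clip(np.vstack([parents, children]), lo, hi)
    f = np.apply_along_axis(cost, 1, P)
    return P[np.argmin(f)]

bounds = np.array([[-5.0, 5.0], [-5.0, 5.0]])
x0 = tiny_ga(system_cost, bounds)                 # stochastic global stage
res = minimize(system_cost, x0, method="SLSQP",   # exact local refinement
               bounds=bounds)
print("facility location:", res.x, "cost:", res.fun)
```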

    An optimization model for multi-biomass tri-generation energy supply

    In this paper, a decision support system (DSS) for multi-biomass energy conversion applications is presented. The system in question aims at supporting an investor by thoroughly assessing an investment in locally available multi-biomass exploitation for tri-generation applications (electricity, heating and cooling) in a given area. The approach followed combines holistic modelling of the system, including the multi-biomass supply chain, the energy conversion facility and the district heating and cooling network, with optimization of the major investment-related variables to maximize the financial yield of the investment. The consideration of a multi-biomass supply chain presents significant potential for cost reduction, by allowing capital costs to be spread and warehousing requirements to be reduced, especially when seasonal biomass types are concerned. The investment variables concern the location of the bioenergy exploitation facility and its sizing, as well as the types of biomass to be procured, the respective quantities and the maximum collection distance for each type. A hybrid optimization method is employed to overcome the inherent limitations of any single method. The system is demand-driven, meaning that its primary aim is to fully satisfy the energy demand of the customers. The model is therefore a practical tool in the hands of an investor for assessing and optimizing, in financial terms, an investment aiming at covering real energy demand. Optimization is performed taking into account various technical, regulatory, social and logical constraints. The model characteristics and advantages are highlighted through a case study applied to a municipality of Thessaly, Greece. (C) 2008 Elsevier Ltd. All rights reserved.
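
    A minimal sketch of the demand-driven idea follows: for one fixed candidate facility, choose quantities of several biomass types so that the converted energy fully covers demand at minimum procurement-plus-transport cost. All prices, yields and availabilities below are illustrative assumptions, not values from the paper, and the real model also optimizes location, sizing and collection distances:

```python
# Hypothetical demand-driven fuel-mix subproblem as a small linear program.
import numpy as np
from scipy.optimize import linprog

energy_yield = np.array([4.2, 3.6, 2.9])         # MWh per tonne, per type
price = np.array([38.0, 25.0, 18.0])             # EUR per tonne at the field
haul_cost = np.array([6.0, 7.5, 9.0])            # EUR per tonne to facility
available = np.array([5000.0, 8000.0, 12000.0])  # tonnes available locally
demand_mwh = 30000.0                             # annual energy demand

c = price + haul_cost                            # cost per tonne procured
# Demand constraint sum(yield_i * q_i) >= demand, written as A_ub q <= b_ub.
res = linprog(c, A_ub=[-energy_yield], b_ub=[-demand_mwh],
              bounds=list(zip(np.zeros(3), available)))
print("tonnes per type:", res.x, "annual fuel cost:", res.fun)
```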

    A Novel Convex Relaxation for Non-Binary Discrete Tomography

    We present a novel convex relaxation and a corresponding inference algorithm for the non-binary discrete tomography problem, that is, reconstructing discrete-valued images from few linear measurements. In contrast to state-of-the-art approaches that split the problem into a continuous reconstruction problem for the linear measurement constraints and a discrete labeling problem to enforce discrete-valued reconstructions, we propose a joint formulation that addresses both problems simultaneously, resulting in a tighter convex relaxation. For this purpose a constrained graphical model is set up and evaluated using a novel relaxation optimized by dual decomposition. We show that our approach yields superior solutions both mathematically (a tighter relaxation) and experimentally, in comparison to previously proposed relaxations.
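
    The toy below illustrates only the dual-decomposition mechanism on a tiny instance, not the paper's constrained graphical model or its tightened relaxation: a continuous least-squares subproblem for the measurements and a pointwise discrete labeling subproblem are coupled through a consensus constraint handled by subgradient ascent on the multipliers:

```python
# Didactic dual decomposition: recover v in {0,1,2}^n from A v ~= b by
# splitting into u (continuous, fits measurements) and v (discrete labels),
# with multipliers lam enforcing the consensus constraint u = v.
import numpy as np

rng = np.random.default_rng(1)
labels = np.array([0.0, 1.0, 2.0])
n, m = 12, 8
A = rng.normal(size=(m, n))
truth = rng.choice(labels, size=n)
b = A @ truth

lam = np.zeros(n)
H = 2.0 * A.T @ A + 1e-6 * np.eye(n)        # regularized normal matrix
for t in range(300):
    u = np.linalg.solve(H, 2.0 * A.T @ b - lam)   # continuous subproblem
    # Discrete subproblem: minimize -lam_i * v_i pointwise over the labels.
    v = labels[np.argmax(np.outer(lam, labels), axis=1)]
    lam += (1.0 / (1.0 + t)) * (u - v)            # subgradient ascent step

# Simple decoding: round the consensus iterate to the nearest label.
# Exact recovery is not guaranteed for this toy scheme.
v_hat = labels[np.argmin(np.abs(u[:, None] - labels[None, :]), axis=1)]
print("decoded:", v_hat)
print("truth:  ", truth)
```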

    Optimizing production scheduling of steel plate hot rolling for economic load dispatch under time-of-use electricity pricing

    Time-of-Use (TOU) electricity pricing provides an opportunity for industrial users to cut electricity costs. Although many methods for Economic Load Dispatch (ELD) under TOU pricing in continuous industrial processing have been proposed, difficulties remain in batch-type processing, since power load units are not directly adjustable and depend nonlinearly on production planning and scheduling. In this paper, for hot rolling, a typical batch-type and energy-intensive process in the steel industry, a production scheduling optimization model for ELD is proposed under TOU pricing, in which the objective is to minimize electricity costs while accounting for penalties caused by jumps between adjacent slabs. An NSGA-II-based multi-objective production scheduling algorithm is developed to obtain Pareto-optimal solutions, and TOPSIS-based multi-criteria decision-making is then performed to recommend an optimal solution that facilitates field operation. Experimental results and analyses show that the proposed method cuts electricity costs in production, especially when a penalty-score increase within a certain range is allowed. Further analyses show that the proposed method contributes to peak-load regulation of the power grid.
    Comment: 13 pages, 6 figures, 4 tables
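
    The decision-making stage can be sketched compactly. Given a Pareto front from the multi-objective search (the rows and weights below are hypothetical, not from the paper), TOPSIS ranks the schedules by relative closeness to the ideal point:

```python
# Minimal TOPSIS over a hypothetical Pareto front of
# [electricity cost, slab-transition penalty], both to be minimized.
import numpy as np

pareto = np.array([[120.0, 9.0],
                   [135.0, 4.0],
                   [150.0, 2.0]])
w = np.array([0.6, 0.4])                        # assumed criterion weights

Z = pareto / np.linalg.norm(pareto, axis=0)     # vector normalization
V = Z * w
ideal, nadir = V.min(axis=0), V.max(axis=0)     # both criteria are costs
d_pos = np.linalg.norm(V - ideal, axis=1)       # distance to ideal
d_neg = np.linalg.norm(V - nadir, axis=1)       # distance to worst point
closeness = d_neg / (d_pos + d_neg)
print("recommended schedule index:", int(np.argmax(closeness)))
```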

    Distributed-Memory Breadth-First Search on Massive Graphs

    This chapter studies the problem of traversing large graphs using the breadth-first search order on distributed-memory supercomputers. We consider both the traditional level-synchronous top-down algorithm and the recently discovered direction-optimizing algorithm. We analyze the performance and scalability trade-offs of using different local data structures such as CSR and DCSC, of enabling in-node multithreading, and of graph decompositions such as 1D and 2D decomposition.
    Comment: arXiv admin note: text overlap with arXiv:1104.451
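
    A compact single-node sketch of the direction-optimizing idea on a CSR graph is given below (the switching threshold `alpha` is an illustrative assumption): run top-down steps while the frontier is small, and switch to bottom-up steps, where each unvisited vertex scans its neighbors for a frontier parent, once the frontier grows. The distributed version in the chapter additionally partitions the graph (1D/2D) across nodes:

```python
# Direction-optimizing BFS on an undirected graph stored in CSR form.
import numpy as np

def bfs_direction_optimizing(indptr, indices, src, alpha=4):
    n = len(indptr) - 1
    parent = np.full(n, -1)
    parent[src] = src
    frontier = np.array([src])
    while frontier.size:
        if frontier.size * alpha < n:                  # top-down step
            nxt = []
            for u in frontier:
                for v in indices[indptr[u]:indptr[u + 1]]:
                    if parent[v] == -1:
                        parent[v] = u
                        nxt.append(v)
        else:                                          # bottom-up step
            nxt = []
            in_frontier = np.zeros(n, dtype=bool)
            in_frontier[frontier] = True
            for v in range(n):
                if parent[v] == -1:
                    for u in indices[indptr[v]:indptr[v + 1]]:
                        if in_frontier[u]:
                            parent[v] = u
                            nxt.append(v)
                            break
        frontier = np.array(nxt, dtype=int)
    return parent

# Tiny undirected example: edges 0-1, 0-2, 1-3, 2-3 in CSR form.
indptr = np.array([0, 2, 4, 6, 8])
indices = np.array([1, 2, 0, 3, 0, 3, 1, 2])
print(bfs_direction_optimizing(indptr, indices, src=0))
```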

    An Approach to Static Performance Guarantees for Programs with Run-time Checks

    Instrumenting programs for performing run-time checking of properties, such as regular shapes, is a common and useful technique that helps programmers detect incorrect program behaviors. This is especially true in dynamic languages such as Prolog. However, such run-time checks inevitably introduce run-time overhead (in execution time, memory, energy, etc.). Several approaches have been proposed for reducing this overhead, such as eliminating the checks that can statically be proved to always succeed, and/or optimizing the way in which the (remaining) checks are performed. However, there are cases in which it is not possible to remove all checks statically (e.g., open libraries which must check their interfaces, complex properties, unknown code, etc.) and in which, even after optimization, the remaining checks may still introduce an unacceptable level of overhead. It is thus important for programmers to be able to determine the additional cost due to the run-time checks and compare it to some notion of admissible cost. The common practice for estimating run-time checking overhead is profiling, which by nature is not exhaustive. Instead, we propose a method that uses static analysis to estimate this overhead, with the advantage that the estimations are functions parameterized by input data sizes. Unlike profiling, this approach can provide guarantees for all possible execution traces and allows assessing how the overhead grows as the size of the input grows. Our method also extends an existing assertion verification framework to express "admissible" overheads, and statically and automatically checks whether the instrumented program conforms with such specifications. Finally, we present an experimental evaluation of our approach that suggests that our method is feasible and promising.
    Comment: 15 pages, 3 tables; submitted to ICLP'18, accepted as technical communication
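
    The core idea of comparing size-parameterized overhead against an admissible bound can be sketched as follows. The cost functions here are hypothetical stand-ins for what the static analysis would infer, and a real system compares symbolic functions rather than sampling sizes:

```python
# Hypothetical inferred costs, as functions of the input size n.
def cost_without_checks(n):
    return 5 * n * n            # e.g., 5*n^2 resolution steps

def cost_with_checks(n):
    return 5 * n * n + 40 * n   # checks add an assumed linear term

def admissible(n):
    # Example specification: at most 25% extra cost over the bare program.
    return 0.25 * cost_without_checks(n)

def conforms(sizes):
    """Check overhead(n) <= admissible(n) over a range of input sizes."""
    return all(cost_with_checks(n) - cost_without_checks(n) <= admissible(n)
               for n in sizes)

print(conforms(range(32, 1025)))   # True here: 40n <= 1.25n^2 for n >= 32
```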