18,780 research outputs found

    Cost-based heuristic search is sensitive to the ratio of operator costs

    In many domains, different actions have different costs. In this paper, we show that various kinds of best-first search algorithms are sensitive to the ratio between the lowest and highest operator costs. First, we take common benchmark domains and show that when we increase the ratio of operator costs, the number of node expansions required to find a solution increases. Second, we provide a theoretical analysis showing one reason this phenomenon occurs. We also discuss additional domain features that can cause this increased difficulty. Third, we show that searching using distance-to-go estimates can significantly ameliorate this problem. Our analysis takes an important step toward understanding algorithm performance in the presence of differing costs. This research direction will likely only grow in importance as heuristic search is deployed to solve real-world problems.
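
    The difference between cost-based and distance-to-go guidance can be made concrete with a small sketch. The Python snippet below is illustrative only (the function and heuristic names are assumptions, not the authors' code): it shows a generic best-first search whose behaviour is controlled entirely by the evaluation function plugged in, where g + h ranks nodes by estimated solution cost while d ranks them by the estimated number of remaining actions and is therefore insensitive to the operator-cost ratio.

```python
import heapq
from itertools import count

def best_first_search(start, is_goal, successors, evaluate):
    """Generic best-first search; `evaluate` orders the open list.

    successors(state) -> iterable of (action, next_state, action_cost)
    evaluate(g, state) -> priority (lower values are expanded first)
    """
    tie = count()  # unique tie-breaker so heapq never compares states
    open_list = [(evaluate(0.0, start), next(tie), 0.0, start, [])]
    closed = set()
    while open_list:
        _, _, g, state, path = heapq.heappop(open_list)
        if state in closed:
            continue
        closed.add(state)
        if is_goal(state):
            return path, g
        for action, nxt, cost in successors(state):
            if nxt not in closed:
                heapq.heappush(
                    open_list,
                    (evaluate(g + cost, nxt), next(tie), g + cost, nxt, path + [action]),
                )
    return None, float("inf")

# Cost-based guidance: f(n) = g(n) + h(n), where h estimates the remaining cost.
def cost_based(h):
    return lambda g, state: g + h(state)

# Distance-to-go guidance: rank nodes by d(n), an estimate of the number of
# remaining actions, which does not change when operator costs are rescaled.
def distance_to_go(d):
    return lambda g, state: d(state)
```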

    Real-time Planning as Decision-making Under Uncertainty

    In real-time planning, an agent must select the next action to take within a fixed time bound. Many popular real-time heuristic search methods approach this by expanding nodes using time-limited A* and selecting the action leading toward the frontier node with the lowest f value. In this thesis, we reconsider real-time planning as a problem of decision-making under uncertainty. We treat heuristic values as uncertain evidence and explore several backup methods for aggregating this evidence. We then propose a novel lookahead strategy that expands nodes so as to minimize risk, the expected regret in case a non-optimal action is chosen. We evaluate these methods on a simple synthetic benchmark and on the sliding-tile puzzle and find that they outperform previous methods. This work illustrates how uncertainty can arise even when solving deterministic planning problems, owing to the inherent ignorance of time-limited search algorithms about the portions of the state space they have not yet examined, and how an agent can benefit from explicitly meta-reasoning about this uncertainty.
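
    The baseline the abstract refers to, expanding nodes with time-limited A* and committing to the action that leads toward the lowest-f frontier node, can be sketched as follows. This is a minimal sketch under assumed names (choose_action, successors, h, expansion_budget); the thesis's risk-minimizing lookahead and alternative backups are not reproduced here.

```python
import heapq
from itertools import count

def choose_action(root, successors, h, expansion_budget):
    """Baseline real-time lookahead: expand up to `expansion_budget` nodes
    A*-style, then commit to the first root action on the path to the
    frontier node with the lowest f value."""
    tie = count()
    # Open-list entries: (f, tie, g, state, first_action_taken_at_the_root)
    open_list = [(h(root), next(tie), 0.0, root, None)]
    seen = {root}
    for _ in range(expansion_budget):
        if not open_list:
            break
        _, _, g, state, first = heapq.heappop(open_list)
        for action, nxt, cost in successors(state):
            if nxt in seen:
                continue
            seen.add(nxt)
            g2 = g + cost
            heapq.heappush(open_list, (g2 + h(nxt), next(tie), g2, nxt,
                                       action if first is None else first))
    if not open_list:
        return None  # the lookahead exhausted the reachable space
    # The lowest-f frontier entry determines which root action to commit to.
    return min(open_list)[4]
```
    Treating these frontier f values as exact is precisely what the thesis revisits: they are uncertain evidence, and the proposed methods back them up and select actions with that uncertainty in mind.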

    Integrating labor awareness to energy-efficient production scheduling under real-time electricity pricing : an empirical study

    With the penetration of the smart grid into factories, energy-efficient production scheduling has emerged as a promising method for industrial demand response. It shifts flexible production loads to lower-priced periods to reduce the energy cost of the same production task. However, existing methods focus only on integrating energy awareness into conventional production scheduling models. They ignore labor cost, which is shift-based and follows a trend opposite to that of energy cost. For instance, energy cost is lower during the night while labor cost is higher. Therefore, this paper proposes a method for energy-efficient and labor-aware production scheduling at the unit process level. This integrated scheduling model is mathematically formulated. Besides the state-based energy model and genetic algorithm-based optimization, a continuous-time shift accumulation heuristic is proposed to synchronize power states and labor shifts. In a case study of a Belgian plastic bottle manufacturer, a set of empirical sensitivity analyses was performed to investigate the impact of energy and labor awareness, as well as the production-related factors that influence the economic performance of a schedule. Furthermore, the method was demonstrated on 9 large-scale test instances covering cases where energy cost is a minor, moderate, or major share of the joint energy and labor cost. The results show that ignoring labor, as existing energy-efficient production scheduling studies do, increases the joint energy and labor cost even when the energy cost itself is minimized. To achieve effective production cost reduction, energy and labor awareness should therefore be considered jointly in production scheduling.
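
    The trade-off the abstract describes can be made concrete with a small sketch of the joint objective: the same schedule is priced both by time-varying electricity tariffs and by shift-based labor rates. The function and variable names, slot granularity, and shift structure below are illustrative assumptions, not the paper's model.

```python
HOURS_PER_SLOT = 1.0  # assumed slot length

def energy_cost(load_kw, prices_eur_per_kwh):
    """Energy cost of a schedule given per-slot power draw and real-time prices."""
    return sum(p_kw * HOURS_PER_SLOT * price
               for p_kw, price in zip(load_kw, prices_eur_per_kwh))

def labor_cost(load_kw, shift_rate_eur_per_slot):
    """Shift-based labor cost: any slot with nonzero production must be staffed,
    at that slot's shift rate (assumed higher for night and weekend shifts)."""
    return sum(rate for p_kw, rate in zip(load_kw, shift_rate_eur_per_slot) if p_kw > 0)

def joint_cost(load_kw, prices, shift_rates):
    """The joint objective: energy cost plus labor cost for the same schedule."""
    return energy_cost(load_kw, prices) + labor_cost(load_kw, shift_rates)
```
    A schedule optimized only with energy_cost can shift work into cheap night slots yet score worse on joint_cost once the higher night-shift labor rate is counted, which is the effect the empirical study quantifies.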

    Genetic and memetic algorithms for scheduling railway maintenance activities

    Nowadays, railway companies are confronted with high infrastructure maintenance costs, so good strategies are needed to carry out these maintenance activities in the most cost-effective way. In this paper we solve the preventive maintenance scheduling problem (PMSP) using genetic algorithms, memetic algorithms, and a two-phase heuristic based on opportunities. The aim of the PMSP is to schedule the (short) routine activities and (long) unique projects for one link in the rail network over a certain planning period such that the overall cost is minimized. To reduce costs and inconvenience for travellers and operators, these maintenance works are clustered as much as possible in the same time period. The performance of the algorithms presented in this paper is compared with that of the methods from an earlier work, Budai et al. (2006), using randomly generated instances.
    Keywords: genetic algorithm; heuristics; opportunities; maintenance optimization; memetic algorithm
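
    For orientation, the sketch below shows a minimal genetic-algorithm skeleton for a scheduling problem of this kind: a chromosome assigns each maintenance activity a start period, and the cost function (supplied by the caller) would penalize schedules that fail to cluster works in the same period. The encoding, operators, and parameter values are illustrative assumptions, not the paper's formulation.

```python
import random

def genetic_schedule(num_activities, horizon, cost, pop_size=50, generations=200,
                     crossover_rate=0.9, mutation_rate=0.05, seed=0):
    """Minimal GA skeleton: evolve start-period assignments to minimize `cost`."""
    rng = random.Random(seed)
    pop = [[rng.randrange(horizon) for _ in range(num_activities)]
           for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if cost(a) < cost(b) else b

    for _ in range(generations):
        new_pop = [min(pop, key=cost)]          # elitism: keep the best schedule
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < crossover_rate:   # one-point crossover
                cut = rng.randrange(1, num_activities)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            for i in range(num_activities):     # mutation: move one activity
                if rng.random() < mutation_rate:
                    child[i] = rng.randrange(horizon)
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=cost)
```
    A memetic variant would add a local-search improvement step to each child before it enters the population, which is the main way memetic algorithms differ from plain genetic algorithms.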

    Surrogate Search As a Way to Combat Harmful Effects of Ill-behaved Evaluation Functions

    Recently, several researchers have found that cost-based satisficing search with A* often runs into problems. Although some workarounds have been proposed to ameliorate the problem, there has been little concerted effort to pinpoint its origin. In this paper, we argue that the origins of this problem can be traced back to the fact that most planners that try to optimize cost also use cost-based evaluation functions (i.e., f(n) is a cost estimate). We show that cost-based evaluation functions become ill-behaved whenever there is a wide variance in action costs, something that is all too common in planning domains. The general solution to this malady is what we call surrogate search, where a surrogate evaluation function that does not directly track the cost objective, and is resistant to cost variance, is used instead. We discuss some compelling choices for surrogate evaluation functions that are based on size rather than cost. Of particular practical interest is a cost-sensitive version of the size-based evaluation function, where the heuristic estimates the size of cheap paths, as it provides attractive quality vs. speed tradeoffs.
    Comment: arXiv admin note: substantial text overlap with arXiv:1103.368
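
    The contrast the abstract draws can be written down directly as alternative evaluation functions for a best-first search. The snippet below is a sketch under assumed names; the estimators h_cost, d_size, and d_cheapest are presumed to be supplied by the planning domain rather than defined here.

```python
# Assumed domain-supplied estimators:
#   h_cost(n):     estimated remaining *cost* to a goal
#   d_size(n):     estimated number of remaining *actions* to a goal
#   d_cheapest(n): estimated number of actions along the *cheapest* remaining path

def cost_based_f(g, n, h_cost):
    # Standard cost-based evaluation: ill-behaved under wide action-cost variance.
    return g + h_cost(n)

def size_based_f(depth, n, d_size):
    # Pure size-based surrogate: tracks search effort, ignores the cost objective.
    return depth + d_size(n)

def cost_sensitive_size_f(depth, n, d_cheapest):
    # Cost-sensitive surrogate: still size-shaped, but the size estimate follows
    # cheap paths, trading some solution quality for much faster search.
    return depth + d_cheapest(n)
```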

    Improved decision support for engine-in-the-loop experimental design optimization

    Experimental optimization with hardware in the loop is a common procedure in engineering and has been the subject of intense development, particularly when it is applied to relatively complex combinatorial systems that are not completely understood, or where accurate modelling is not possible owing to the dimensions of the search space. A common source of difficulty is the level of noise associated with experimental measurements, a combination of limited instrument precision and extraneous factors. When a series of experiments is conducted to search for a combination of input parameters that results in a minimum or maximum response, under the imposition of noise the underlying shape of the function being optimized can become very difficult to discern, or even be lost. A common methodology to support experimental search for optimal or suboptimal values is to use one of the many gradient descent methods. However, even sophisticated and proven methodologies, such as simulated annealing, can be significantly challenged in the presence of noise, since approximating the gradient at any point becomes highly unreliable. Often, experiments that should be rejected are accepted as a result of random noise, and vice versa. This is also true for other sampling techniques, including tabu search and evolutionary algorithms. After the general introduction, this paper is divided into two main sections (sections 2 and 3), which are followed by the conclusion. Section 2 introduces a decision support methodology based upon response surfaces, which supplements experimental management based on a variable neighbourhood search and is shown to be highly effective in directing experiments in the presence of a significant signal-to-noise ratio and complex combinatorial functions. The methodology is developed on a three-dimensional surface with multiple local minima, a large basin of attraction, and a high signal-to-noise ratio. In section 3, the methodology is applied to an automotive combinatorial search in the laboratory, on a real-time engine-in-the-loop application, where the goal is to find the maximum power output of an experimental single-cylinder spark ignition engine operating under a quasi-constant-volume operating regime. Under this regime, the piston is slowed at top dead centre to achieve combustion in close to constant-volume conditions. As part of the further development of the engine to incorporate a linear generator and investigate free-piston operation, it is necessary to perform a series of experiments with combinatorial parameters. The objective is to identify the maximum power point in the smallest number of experiments in order to minimize costs. This test programme provides peak power data for optimal electrical machine design. The decision support methodology is combined with standard optimization and search methods, namely gradient descent and simulated annealing, in order to study the reductions possible in experimental iterations. It is shown that the decision support methodology significantly reduces the number of experiments necessary to find the maximum power solution and thus offers a potentially significant cost saving to hardware-in-the-loop experimentation.
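
    One way to picture response-surface decision support for noisy experiments is the sketch below: fit a simple quadratic model to the noisy measurements gathered so far and score candidate settings on the smoothed surface rather than on individual readings. This is a minimal illustration under assumed names and a two-parameter input space; the paper's methodology, which couples response surfaces with a variable neighbourhood search and hardware in the loop, is not reproduced here.

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Fit y ~ c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2 + c5*x1*x2 by least squares.
    X is an (n, 2) array of tried input settings, y the noisy measured responses."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def propose_next_experiment(X, y, candidates):
    """Score candidate settings on the smoothed surface instead of on single
    noisy measurements, and propose the candidate with the largest prediction
    (maximizing, e.g., peak power)."""
    c = fit_quadratic_surface(X, y)
    x1, x2 = candidates[:, 0], candidates[:, 1]
    pred = c[0] + c[1]*x1 + c[2]*x2 + c[3]*x1**2 + c[4]*x2**2 + c[5]*x1*x2
    return candidates[int(np.argmax(pred))]
```
    Because the fitted surface averages over many noisy readings, a single spurious measurement is much less likely to redirect the search than it would be under raw gradient descent or simulated annealing.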