
    Solving the Project Activity Scheduling Problem with Resource Constraints Using the Cross Entropy Method

    Project scheduling must be planned before project activities begin. Standard project scheduling methods are based on precedence constraints between activities and the assumption that resources have unlimited capacity; in practice, however, project activities face resource limits. The main objective is to minimize the total duration of the project subject to precedence constraints and resource constraints on all project activities. Many optimization methods have been used to improve the quality of scheduling and speed up calculation time. This paper proposes the Cross Entropy (CE) method for solving the resource-constrained project scheduling problem, and compares the advantages and disadvantages of CE against the Differential Evolution (DE) method. The CE method consists of four critical steps: generating a sample of random solutions, calculating the performance of each according to a specified fitness function, selecting an elite sample, and updating the previous parameters to obtain a better sample in the next iteration. To speed up computation time, this study decreases the number of samples per iteration of the Cross Entropy algorithm. Numerical experiments with several data sets from the Project Scheduling Problem Library (PSPLIB) showed that CE finds the same optimal total project duration as DE with a faster calculation time.
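    A minimal sketch of the four CE steps named in the abstract, applied to a toy single-resource scheduling problem. The priority-vector encoding, the serial schedule generator, and all numbers (durations, demands, capacity, sample sizes) are illustrative assumptions, not values from the paper or PSPLIB:

```python
# Cross Entropy (CE) loop for a toy resource-constrained scheduling problem.
import numpy as np

rng = np.random.default_rng(0)

durations = np.array([3, 2, 4, 2, 3])        # toy activity durations
demand    = np.array([2, 1, 3, 2, 1])        # per-activity resource demand
capacity  = 4                                # single renewable resource

def makespan(priorities):
    """Serial schedule generation: start activities in priority order,
    delaying each start until the resource profile permits it."""
    order = np.argsort(-priorities)          # high priority first
    usage = np.zeros(int(durations.sum()) + 1)
    finish = 0
    for a in order:
        t = 0
        while np.any(usage[t:t + durations[a]] + demand[a] > capacity):
            t += 1
        usage[t:t + durations[a]] += demand[a]
        finish = max(finish, t + durations[a])
    return finish

n, n_samples, n_elite = len(durations), 50, 10
mu, sigma = np.zeros(n), np.ones(n)          # CE sampling parameters

for it in range(30):
    # 1) generate a sample of random solutions (priority vectors)
    samples = rng.normal(mu, sigma, size=(n_samples, n))
    # 2) calculate performance with the fitness function (makespan)
    scores = np.array([makespan(s) for s in samples])
    # 3) select the elite sample (lowest makespans)
    elite = samples[np.argsort(scores)[:n_elite]]
    # 4) update the parameters from the elite for the next iteration
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6

print("best makespan found:", makespan(mu))
```

    Shrinking n_samples is the speed-up lever the abstract mentions: each iteration costs one fitness evaluation per sample, so halving the sample size roughly halves the run time, at the risk of noisier parameter updates.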

    Probabilistic modelling of oil rig drilling operations for business decision support: a real world application of Bayesian networks and computational intelligence.

    This work investigates Bayesian network learning algorithms evolved using computational-intelligence meta-heuristics. These algorithms are applied to a new domain provided by exclusive data, available to this project through an industry partnership with ODS-Petrodata, a business intelligence company in Aberdeen, Scotland. This research proposes statistical models that serve as a foundation for a novel operational tool for forecasting the performance of rig drilling operations. A prototype tool able to forecast the future performance of a drilling operation is created using the obtained data, the statistical model and the experts' domain knowledge. This work makes the following contributions: applying K2GA and Bayesian networks to a real-world industry problem; developing a well-performing and adaptive solution to forecast oil drilling rig performance; using the knowledge of industry experts to guide the creation of competitive models; creating models able to forecast oil drilling rig performance consistently, with nearly 80% forecast accuracy, using either logistic regression or Bayesian network learning with genetic algorithms; introducing the node juxtaposition analysis graph, which visualises the frequency of node links appearing in a set of orderings and thereby provides new insights when analysing node-ordering landscapes; exploring the correlation between model score and model predictive accuracy, and showing that the model score does not correlate with the predictive accuracy of the model; exploring a method for feature selection using multiple algorithms, drastically reducing the modelling time; and proposing new fixed-structure Bayesian network learning algorithms for node-ordering search-space exploration. Finally, this work proposes real-world applications for the models based on current industry needs, such as recommender systems, an oil drilling rig selection tool, user-ready rig performance forecasting software and rig scheduling tools.
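    The node juxtaposition analysis can be sketched as a frequency count over adjacent node pairs across a set of orderings (e.g. those visited across K2GA generations). The orderings and variable names below are hypothetical stand-ins for the kind of attributes a rig-performance model might use, not data from the thesis:

```python
# Count how often each ordered pair of nodes appears adjacently in a set
# of node orderings; the counts become edge weights of the juxtaposition graph.
from collections import Counter

orderings = [
    ["depth", "mud_weight", "rig_type", "duration"],
    ["mud_weight", "depth", "rig_type", "duration"],
    ["depth", "mud_weight", "duration", "rig_type"],
]

juxtaposition = Counter()
for ordering in orderings:
    for left, right in zip(ordering, ordering[1:]):  # adjacent pairs only
        juxtaposition[(left, right)] += 1

# Edge weights for the juxtaposition graph: frequency of each adjacency.
for (left, right), count in juxtaposition.most_common():
    print(f"{left} -> {right}: {count}/{len(orderings)}")
```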

    State-of-the-art in aerodynamic shape optimisation methods

    Aerodynamic optimisation has become an indispensable component of any aerodynamic design over the past 60 years, with applications to aircraft, cars, trains, bridges, wind turbines, internal pipe flows, and cavities, among others, and is thus relevant to many facets of technology. With advancements in computational power, automated design optimisation procedures have become more capable; however, there is ambiguity and bias throughout the literature with regard to the relative performance of optimisation architectures and the algorithms they employ. This paper provides a balanced critical review of the dominant optimisation approaches that have been integrated with aerodynamic theory for the purpose of shape optimisation. A total of 229 papers, published in more than 120 journals and conference proceedings, have been classified into six optimisation approaches. The material cited includes some of the most well-established authors and publications in the field of aerodynamic optimisation. This paper aims to eliminate bias toward particular algorithms by analysing the limitations, drawbacks, and benefits of the most widely used optimisation approaches. The review provides comprehensive yet straightforward insight for non-specialists, and a reference detailing the current state of the field for specialist practitioners.

    Markov decision processes and discrete-time mean-field games constrained with costly observations: Stochastic optimal control constrained with costly observations

    In this thesis, we consider Markov decision processes with actively controlled observations. Optimal strategies involve optimising the observation times as well as the subsequent action values. We first consider an observation-cost model, where the underlying state is observed only at chosen observation times, at a cost. By including the time elapsed since the last observation as part of the augmented Markov system, the value function satisfies a system of quasi-variational inequalities (QVIs). This class of QVIs can be seen as an extension of the interconnected obstacle problem. We prove a comparison principle for this class of QVIs, which implies uniqueness of solutions to our proposed problem. Penalty methods are then utilised to obtain arbitrarily accurate solutions. Finally, we perform numerical experiments on three applications which illustrate this model. We then consider a model where agents can exercise control actions that affect their speed of access to information: agents can dynamically decide to receive observations with less delay by paying higher observation costs, and seek to exploit their active information gathering by making further decisions that influence their state dynamics to maximise rewards. We also extend this notion to a corresponding mean-field game (MFG). In the mean-field equilibrium, each generic agent individually solves a partially observed Markov decision problem in which the way partial observations are obtained is itself subject to dynamic control actions by the agent. Based on a finite characterisation of the agents' belief states, we show how the mean-field game with controlled costly information access can be formulated as an equivalent standard mean-field game on a suitably augmented but finite state space. We prove that, with sufficient entropy regularisation, a fixed-point iteration converges to the unique MFG equilibrium and yields an approximate ε-Nash equilibrium for a large but finite population size. We illustrate our MFG with an example from epidemiology, where agents can choose medical testing at different speeds and costs.
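    The entropy-regularised fixed-point iteration can be sketched on a toy finite mean-field game: alternate between a soft (softmax) best response to the current population flow and the flow that response induces. The two-state, two-action dynamics, the congestion reward, the temperature, and the damping factor below are illustrative assumptions, not the thesis's model:

```python
# Damped fixed-point iteration for a toy entropy-regularised mean-field game.
import numpy as np

S, A, T, temp = 2, 2, 10, 0.5                # states, actions, horizon, entropy weight
P = np.array([[[0.9, 0.1], [0.2, 0.8]],      # P[s, a, s']: transition kernel
              [[0.3, 0.7], [0.8, 0.2]]])

def reward(s, a, mean_field):
    # Congestion-style reward: being in a crowded state is penalised.
    return -mean_field[s] - 0.1 * a

def best_response(flow):
    """Soft (entropy-regularised) backward induction against a mean-field flow."""
    V = np.zeros(S)
    policy = np.zeros((T, S, A))
    for t in reversed(range(T)):
        Q = np.array([[reward(s, a, flow[t]) + P[s, a] @ V for a in range(A)]
                      for s in range(S)])
        policy[t] = np.exp(Q / temp) / np.exp(Q / temp).sum(axis=1, keepdims=True)
        V = temp * np.log(np.exp(Q / temp).sum(axis=1))   # soft value
    return policy

def induced_flow(policy, mu0):
    """Forward propagation of the population distribution under the policy."""
    flow = np.zeros((T, S)); flow[0] = mu0
    for t in range(T - 1):
        flow[t + 1] = np.einsum("s,sa,sau->u", flow[t], policy[t], P)
    return flow

mu0 = np.array([0.5, 0.5])                   # initial population distribution
flow = np.full((T, S), 1.0 / S)              # initial guess for the mean field
for _ in range(50):                          # damped fixed-point iteration
    flow = 0.5 * flow + 0.5 * induced_flow(best_response(flow), mu0)
print("mean-field flow at the final time step:", flow[-1])
```

    The softmax temperature plays the role of the entropy regularisation whose sufficiency the thesis requires for the iteration to contract; the damping is a common practical safeguard, not something the abstract prescribes.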

    Advanced modelling and simulation of water distribution systems with discontinuous control elements

    Water distribution systems are large and complex structures; hence, their construction, management and improvement are time-consuming and expensive. Nearly all optimisation methods, whether aimed at design or operation, depend on simulation models to evaluate the performance of candidate solutions. These simulation models, however, are increasing in size and complexity; especially for operational control, where the control strategy must be updated regularly to account for fluctuations in demand, the combination of a hydraulic simulation model and optimisation is likely to be computationally excessive for all but the simplest networks. The work presented in this thesis has been motivated by the need for reduced, yet appropriately accurate, models to replicate the complex and nonlinear nature of water distribution systems in order to optimise their operation. This thesis attempts to establish the ground rules underpinning the formulation and subsequent evaluation of such models. Part I introduces some of the modelling, simulation and optimisation problems currently faced by the water industry. A case study is given to emphasise one particular subject, namely the reduction of water distribution system models. Systematic research resulted in the development of a new methodology which encapsulates not only the system mass balance but also the system energy distribution within the model reduction process. The methodology incorporates energy audit concepts into the model reduction algorithm, preserving the original model's energy distribution by imposing new pressure constraints in the reduced model. The appropriateness of the new methodology is illustrated on theoretical and industrial case studies, whose outcomes demonstrate that this extension to the model reduction technique can simplify the inherent complexity of water networks while preserving the completeness of the original information. An underlying premise linking Parts I and II is the recognition of the need for a more efficient paradigm for modelling and simulating water networks, one that effectively accounts for the discontinuous behaviour exhibited by water network components. Motivated largely by this potential, the further major research area that forms the basis of Part II studies the discrete event system specification formalism and quantised state systems to formulate a framework within which water distribution systems can be modelled and simulated. In contrast to classic time-slicing simulators, which depend on numerical integration algorithms, quantising the system states allows the discontinuities exhibited by control elements to be handled more efficiently, thereby offering a significant increase in the speed of simulating water network models. The proposed approach is evaluated on a number of case studies and compared with results obtained from the Epanet2 simulator and OpenModelica. Although the current state of the art of simulation tools utilising quantised state systems does not allow their potential to be exploited fully, the comparison results demonstrate that, when second- or third-order quantisation-based integration is used, the quantised state systems approach can outperform conventional water network simulation methods in terms of simulation accuracy and run-time.
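    A minimal sketch of first-order quantised state system (QSS1) integration, the event-driven alternative to time-slicing on which Part II builds: the state advances event by event, whenever it drifts one quantum away from its last quantised value, rather than on a fixed time grid. The single-tank draining ODE is an illustrative stand-in for a water network component, not one of the thesis case studies:

```python
# QSS1: event-driven integration driven by state quantisation.
def qss1(f, x0, t_end, quantum):
    t, x = 0.0, x0
    q = x0                                      # quantised state seen by f
    trajectory = [(t, x)]
    while t < t_end:
        dx = f(q)                               # derivative held constant until next event
        if dx == 0.0:                           # steady state: no further events
            break
        dt = min(quantum / abs(dx), t_end - t)  # time until |x - q| reaches quantum
        t, x = t + dt, x + dx * dt
        q = x                                   # re-quantise at the event
        trajectory.append((t, x))
    return trajectory

# Tank with outflow proportional to level: dx/dt = -k * x (exact: x0 * exp(-k*t)).
k = 0.5
traj = qss1(lambda q: -k * q, x0=2.0, t_end=10.0, quantum=0.05)
print(f"{len(traj) - 1} integration events; final level = {traj[-1][1]:.4f}")
```

    The appeal for networks with discontinuous control elements is visible even here: events cluster where the state changes quickly and thin out where it is nearly constant, instead of paying a fixed cost per time step.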

    An adaptive ant colony optimization algorithm for rule-based classification

    Classification is an important data mining task with applications in many fields. Various classification algorithms have been developed to produce classification models with high accuracy. Unlike more complex and opaque classification models, rule-based classification algorithms produce models that are understandable to users. Ant-Miner, a variant of ant colony optimisation, is a prominent intelligent algorithm widely used in rule-based classification. However, Ant-Miner suffers from overfitting and easily falls into local optima, resulting in low classification accuracy and complex classification rules. In this study, a new Ant-Miner classifier is developed, named Adaptive Genetic Iterated-AntMiner (AGI-AntMiner), which aims to avoid the local optima and overfitting problems. The components of AGI-AntMiner include: i) Adaptive AntMiner, a pre-pruning technique that dynamically selects an appropriate threshold based on the quality of the rules; ii) Genetic AntMiner, which improves post-pruning by adding and removing terms in a dual manner; and iii) Iterated Local Search-AntMiner, which improves exploitation based on a multiple-neighbourhood structure. The proposed AGI-AntMiner algorithm is evaluated on 16 benchmark datasets from medical, financial, gaming and social domains obtained from the University of California Irvine repository. Its performance was compared with other variants of Ant-Miner and state-of-the-art rule-based classification algorithms in terms of classification accuracy and model complexity. Experimental results show that the proposed AGI-AntMiner algorithm is superior in two aspects. Hybridisation of local search in AGI-AntMiner improves the exploitation mechanism, leading to the discovery of more accurate classification rules. The new pre-pruning and post-pruning techniques improve the pruning ability, producing shorter classification rules that are easier for users to interpret. Thus, the proposed AGI-AntMiner algorithm is capable of conducting an efficient search for the best classification rules, balancing classification accuracy and model complexity to overcome the overfitting and local optima problems.
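    The quality-driven pruning idea behind these components can be sketched with the classic Ant-Miner rule-quality measure (sensitivity × specificity) and a greedy term-removal loop. This is a simplified stand-in, not the AGI-AntMiner algorithm itself, and the toy dataset and rule are assumptions:

```python
# Greedy post-pruning of one rule using the Ant-Miner quality measure.
def quality(rule, label, data):
    """Ant-Miner rule quality: TP/(TP+FN) * TN/(TN+FP)."""
    tp = fp = tn = fn = 0
    for row, y in data:
        covered = all(row.get(attr) == val for attr, val in rule)
        if covered and y == label:   tp += 1
        elif covered:                fp += 1
        elif y == label:             fn += 1
        else:                        tn += 1
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens * spec

def prune(rule, label, data):
    """Drop any term whose removal does not reduce rule quality."""
    best, best_q = rule, quality(rule, label, data)
    improved = True
    while improved and len(best) > 1:
        improved = False
        for i in range(len(best)):
            candidate = best[:i] + best[i + 1:]
            q = quality(candidate, label, data)
            if q >= best_q:
                best, best_q, improved = candidate, q, True
                break
    return best, best_q

data = [({"age": "young", "income": "low"},  "no"),
        ({"age": "young", "income": "high"}, "yes"),
        ({"age": "old",   "income": "high"}, "yes"),
        ({"age": "old",   "income": "low"},  "no")]
rule = [("income", "high"), ("age", "old")]      # IF income=high AND age=old THEN yes
print(prune(rule, "yes", data))                  # income=high alone covers both "yes" rows
```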

    Demand Side Management in the Smart Grid


    Design and analysis of scalable rule induction systems

    Machine learning has been studied intensively during the past two decades. One motivation has been the desire to automate the process of knowledge acquisition during the construction of expert systems. The recent emergence of data mining as a major application for machine learning algorithms has led to the need for algorithms that can handle very large data sets. In real data mining applications, data sets with millions of training examples, thousands of attributes and hundreds of classes are common. Designing learning algorithms appropriate for such applications has thus become an important research problem. A great deal of research in machine learning has focused on classification learning. Among the various machine learning approaches developed for classification, rule induction is of particular interest for data mining because it generates models in the form of IF-THEN rules, which are expressive and easy for humans to comprehend. One weakness of rule induction algorithms is that they often scale relatively poorly to large data sets, especially noisy ones. The work reported in this thesis aims to design and develop scalable rule induction algorithms that can process large data sets efficiently while building the best possible models from them. There are two main approaches to rule induction, represented respectively by CN2 and the AQ family of algorithms. These approaches differ in the search strategy employed for examining the space of possible rules, and each has its own advantages and disadvantages. The first part of this thesis introduces a new rule induction algorithm for learning classification rules, which broadly follows the approach of algorithms represented by CN2. The algorithm employs a new search method with several novel search-space pruning rules and rule-evaluation techniques, resulting in a highly efficient algorithm with improved induction performance. Real-world data contain not only nominal attributes but also continuous ones, so the ability to handle continuously valued data is crucial to the success of any general-purpose learning algorithm. Most current discretisation approaches are developed as pre-processes for learning algorithms. The second part of this thesis proposes a new approach which discretises continuous-valued attributes during the learning process. Incorporating discretisation into the learning process has the advantage of taking into account the bias inherent in the learning system as well as the interactions between the different attributes, which in turn leads to improved performance. Overfitting the training data is a major problem in machine learning, particularly when noise is present. Overfitting increases learning time and reduces both the accuracy and the comprehensibility of the generated rules, making learning from large data sets more difficult. Pruning is a technique widely used to address such problems and consequently forms an essential component of practical learning algorithms. The third part of this thesis presents three new pruning techniques for rule induction based on the Minimum Description Length (MDL) principle. The result is an effective learning algorithm that not only produces an accurate and compact rule set but also significantly accelerates the learning process. RULES-3 Plus is a simple rule induction algorithm, developed at the author's laboratory, which follows an approach similar to that of the AQ family of algorithms. Despite having been successfully applied to many learning problems, it has some drawbacks which adversely affect its performance. The fourth part of this thesis reports an attempt to overcome these drawbacks by utilising the ideas presented in the first three parts, producing a new version of RULES-3 Plus that is a general and efficient algorithm with a wide range of potential applications.
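    The MDL-based pruning of the third part can be sketched with a deliberately simplified cost model: a rule set's description length is the bits needed to encode its terms plus the bits needed to encode its misclassified exceptions, and whole rules are dropped while that total shrinks. The per-term cost, the exception encoding, and the toy data below are assumptions, not the thesis's actual encoding:

```python
# MDL-style pruning: keep a rule only if it pays for its own encoding cost.
import math

BITS_PER_TERM = 4.0          # assumed cost of encoding one rule condition

def errors(rules, default, data):
    """Count misclassifications of an ordered rule list with a default class."""
    n = 0
    for row, y in data:
        pred = default
        for conds, label in rules:
            if all(row.get(a) == v for a, v in conds):
                pred = label
                break
        n += pred != y
    return n

def description_length(rules, default, data):
    theory = sum(len(conds) for conds, _ in rules) * BITS_PER_TERM
    exceptions = errors(rules, default, data) * math.log2(len(data) + 1)
    return theory + exceptions

def mdl_prune(rules, default, data):
    """Greedily drop whole rules while the total description length shrinks."""
    best, best_dl = rules, description_length(rules, default, data)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            cand = best[:i] + best[i + 1:]
            dl = description_length(cand, default, data)
            if dl <= best_dl:
                best, best_dl, improved = cand, dl, True
                break
    return best

data = [({"f": "a", "g": "c"}, "x"), ({"f": "a", "g": "d"}, "x"),
        ({"f": "b", "g": "c"}, "y"), ({"f": "b", "g": "d"}, "y"),
        ({"f": "b", "g": "c"}, "y"), ({"f": "b", "g": "d"}, "y")]
rules = [([("f", "a")], "x"),        # needed: separates class "x" from the default
         ([("g", "c")], "y")]        # redundant: duplicates the default class
print(mdl_prune(rules, default="y", data=data))   # the redundant g=c rule is dropped
```

    The same trade-off explains why MDL pruning also speeds up learning: every rule removed is one fewer rule to evaluate against the (possibly very large) training set on subsequent passes.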