Applications of simulation and optimization techniques in optimizing room and pillar mining systems
The goal of this research was to apply simulation and optimization techniques to solve mine design and production sequencing problems in room and pillar (R&P) mines. The specific objectives were to: (1) apply Discrete Event Simulation (DES) to determine the optimal width of coal R&P panels under specific mining conditions; (2) investigate whether the shuttle car fleet size used to mine a particular panel width is optimal in different segments of the panel; (3) test the hypothesis that binary integer linear programming (BILP) can be used to account for mining risk in R&P long-range mine production sequencing; and (4) test the hypothesis that heuristic pre-processing can be used to increase the computational efficiency of branch and cut solutions to the BILP problem of R&P mine sequencing.
A DES model of an existing R&P mine was built that is capable of evaluating the effect of variable panel width on the unit cost and productivity of the mining system. For the system and operating conditions evaluated, the results showed that a 17-entry panel is optimal. The results also showed that, for the 17-entry panel studied, four shuttle cars per continuous miner are optimal for 80% of the defined mining segments, with three shuttle cars optimal for the other 20%. The research successfully incorporated risk management into the R&P production sequencing problem, modeling it as a BILP with block aggregation to minimize computational complexity. Three pre-processing algorithms based on generating problem-specific cutting planes were developed and used to investigate whether heuristic pre-processing can increase computational efficiency. Although the implemented pre-processing algorithms improved computational efficiency in some instances, the overall computation times were higher due to the high cost of generating the cutting planes --Abstract, page iii
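The panel-width and fleet-size questions above rest on a discrete event simulation of the haulage cycle. As a hedged illustration of the general idea only (not the dissertation's model), the sketch below simulates shuttle cars cycling between a continuous miner and a feeder; all cycle times and payloads are invented for illustration:

```python
import heapq
import random

# Toy discrete event simulation (hypothetical parameters): a continuous
# miner loads shuttle cars one at a time; each car hauls to the feeder,
# dumps, and returns. Tons moved per shift lets us compare fleet sizes.
random.seed(42)

LOAD_TIME = 1.5       # minutes to load one car (assumed)
HAUL_RETURN = 6.0     # mean minutes for haul + dump + return (assumed)
SHIFT = 480.0         # shift length in minutes
TONS_PER_CAR = 10.0   # payload per trip in tons (assumed)

def simulate(n_cars):
    """Return tons hauled in one shift with n_cars shuttle cars."""
    events = []  # (time, event) min-heap of car arrivals at the miner
    for _ in range(n_cars):
        heapq.heappush(events, (0.0, "car_ready"))
    miner_free_at = 0.0
    tons = 0.0
    while events:
        t, _ = heapq.heappop(events)
        if t >= SHIFT:
            break
        # a car waits if the miner is still loading another car
        start = max(t, miner_free_at)
        done_loading = start + LOAD_TIME
        miner_free_at = done_loading
        tons += TONS_PER_CAR
        # stochastic haul-and-return trip, then the car queues again
        trip = random.expovariate(1.0 / HAUL_RETURN)
        heapq.heappush(events, (done_loading + trip, "car_ready"))
    return tons

for n in (2, 3, 4, 5):
    print(n, simulate(n))
```

With too few cars the miner sits idle between loads; beyond some fleet size the miner becomes the bottleneck and extra cars only queue, which is the trade-off the dissertation's DES quantifies per panel segment.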
Stochastic make-to-stock inventory deployment problem: an endosymbiotic psychoclonal algorithm based approach
Integrated steel manufacturers (ISMs) have no single fixed product; they produce finished products from ore. This increases the uncertainty an ISM faces regarding the nature of the finished product and the demand from customers. At present, low-cost mini-mills are offering stiff competition to ISMs in terms of cost, which has compelled the ISM industry to target customers who want exotic products and fast, reliable deliveries. To meet this objective, ISMs are exploring the option of satisfying part of their demand by converting strategically placed semi-finished products; this increases the variety of products an ISM can produce within a short lead time. In this paper the authors propose a new hybrid evolutionary algorithm, named endosymbiotic-psychoclonal (ESPC), to decide what and how much to stock as semi-finished product inventory. In the proposed approach, the ability of the previously proposed psychoclonal algorithm to exploit the search space is increased by making antibodies and antigens more cooperative interacting species. The efficacy of the proposed algorithm is tested on randomly generated datasets and the results are compared with other evolutionary algorithms, such as genetic algorithms (GA) and simulated annealing (SA). The comparison of ESPC with GA and SA shows the superiority of the proposed algorithm in terms of both the quality of the solution obtained and the convergence time required to reach the optimal/near-optimal solution
Optimization-Based Architecture for Managing Complex Integrated Product Development Projects
By the mid-1990s, the importance of early introduction of new products to both market share and profitability became fully understood. Thus, reducing product time-to-market became an essential requirement for staying competitive. Integrated Product Development (IPD) is a holistic approach that helps to overcome problems that arise in a complex product development project. The emphasis of IPD is to provide a framework for effectively planning and managing engineering projects. Coupled with the fact that about 70% of the life cycle cost of a product is committed at the early design phases, the motivation for developing and implementing more effective methodologies for managing the design process of IPD projects became very strong.
The main objective of this dissertation is to develop an optimization-based architecture that helps guide the project manager's efforts in managing the design process of complex integrated product development projects. The proposed architecture consists of three major phases: system decomposition, process re-engineering, and project scheduling with time-cost trade-off analysis. The presented research contributes to five areas: (1) improving system performance through efficient re-engineering of its structure. The Dependency Structure Matrix (DSM) provides an effective tool for understanding system structure, and an optimization algorithm called Simulated Annealing (SA) was implemented to find an optimal activity sequence for the DSM representing a design project. (2) A simulation-based optimization framework that integrates simulated annealing with a commercial risk analysis software package called Crystal Ball was developed to optimally re-sequence the DSM activities given stochastic activity data. (3) Since SA was originally developed to handle deterministic objective functions, a modified SA algorithm able to handle stochastic objective functions was presented. (4) A methodology was proposed for converting the optimally sequenced DSM into an equivalent DSM and then into a project schedule. (5) Finally, a new hybrid time-cost trade-off model based on the trade-off of resources for project networks was presented.
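The first contribution above, re-sequencing DSM activities with simulated annealing, can be sketched in miniature. The 5-activity DSM below is invented for illustration, and the objective (minimizing "feedback" marks above the diagonal, i.e., dependencies on activities scheduled later) is the standard DSM sequencing criterion, not necessarily the dissertation's exact cost function:

```python
import math
import random

# dsm[i][j] == 1 means activity i needs the output of activity j.
# This small matrix is invented; it contains one dependency cycle,
# so at least one feedback mark is unavoidable.
random.seed(0)

dsm = [
    [0, 1, 0, 0, 0],
    [0, 0, 0, 1, 0],
    [1, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 1, 0, 0, 0],
]

def feedback(order):
    """Count dependencies that point at activities scheduled later."""
    pos = {a: k for k, a in enumerate(order)}
    return sum(
        1
        for i in range(len(dsm))
        for j in range(len(dsm))
        if dsm[i][j] and pos[j] > pos[i]
    )

def anneal(order, t0=5.0, cooling=0.995, steps=5000):
    """Simulated annealing over activity permutations (swap moves)."""
    best, best_cost = order[:], feedback(order)
    cur, cur_cost, t = order[:], best_cost, t0
    for _ in range(steps):
        a, b = random.sample(range(len(cur)), 2)
        cand = cur[:]
        cand[a], cand[b] = cand[b], cand[a]
        cost = feedback(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if cost < cur_cost or random.random() < math.exp((cur_cost - cost) / t):
            cur, cur_cost = cand, cost
        if cur_cost < best_cost:
            best, best_cost = cur[:], cur_cost
        t *= cooling
    return best, best_cost

order, cost = anneal(list(range(5)))
print(order, cost)
```

Because the toy matrix contains one dependency cycle, the best achievable feedback count is 1; the annealer's acceptance of occasional worse swaps is what lets it escape orderings that are locally good but globally poor.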
These areas of research were further implemented through an Excel add-in called “optDSM”, developed by the author using the Visual Basic for Applications (VBA) programming language
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory. It sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Interest in it continues to grow because a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time (“efficient”) algorithms, while most of them are NP-hard, i.e. no polynomial-time algorithm for them is known. In practice, this means it is not possible to guarantee that an exact solution to the problem can be found quickly, and one has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find, "quickly" (in reasonable run-times) and with "high" probability, provably "good" solutions (with low error from the true optimal solution). In the last 20 years, a new class of algorithms, commonly called metaheuristics, has emerged, which basically try to combine heuristics in high-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two very significant forces of intensification and diversification, which mainly determine the behavior of a metaheuristic, will be pointed out. The report concludes by exploring the importance of hybridization and integration methods
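The intensification/diversification balance mentioned above can be made concrete with a deliberately tiny skeleton. Everything here is illustrative (the objective, the restart schedule, the move operator), not taken from the report: intensification is a greedy local move, diversification is a periodic random restart.

```python
import random

# Toy metaheuristic skeleton: minimize f over integers in [-100, 100].
# All parameters are invented for illustration.
random.seed(1)

def f(x):
    return (x - 17) ** 2  # toy objective with its optimum at x = 17

def metaheuristic(iters=2000, restart_every=100):
    best = cur = random.randint(-100, 100)
    for i in range(iters):
        if i % restart_every == 0:
            cur = random.randint(-100, 100)   # diversification: jump elsewhere
        # intensification: greedy step toward lower f
        cand = cur + 1 if f(cur + 1) < f(cur) else cur - 1
        if f(cand) < f(cur):
            cur = cand
        if f(cur) < f(best):
            best = cur
    return best

print(metaheuristic())
```

Pure intensification would get stuck wherever the first greedy walk ran out of steps; pure diversification would sample randomly and rarely land on the optimum. Real metaheuristics (tabu search, SA, ant colony, GA) differ mainly in how they interleave these two forces.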
Optimal distributed generation planning based on NSGA-II and MATPOWER
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
The UK and the world are moving away from central energy resources toward distributed generation (DG) in order to lower carbon emissions. Renewable energy resources comprise a large share of DGs, and their optimal integration into the grid is the main aim of planning/development projects within the electricity network.
Feasibility and thorough conceptual design studies are required in the planning/development process, as most electricity networks were designed decades ago without considering the challenges imposed by DGs. As an example, the issue of voltage rise during steady-state conditions becomes problematic when a large amount of dispersed generation is connected to a distribution network. Efficient transfer of power into or out of the network is also hindered by the phase-angle differences between network sections supplied by DGs. Optimisation algorithms have therefore been developed over the last decade to carry out the planning optimally and alleviate the unwanted effects of DGs. The robustness of the algorithms proposed in the literature has been only partially addressed, owing to challenges of power system problems such as their multi-objective nature. In this work, the contribution is a novel platform for the optimal integration of distributed generation into the power grid in terms of site and size. The work provides a modified non-dominated sorting genetic algorithm (NSGA-II) built on MATPOWER (for power flow calculation) in order to find a fast and reliable solution to the optimal planning problem. The proposed multi-objective planning tool presents a fast-converging method for the case studies, incorporating the economic and technical aspects of DG planning from the planner's perspective. The proposed method is novel in its handling of power flow constraints and can be applied to other energy planning problems
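The core of NSGA-II is sorting candidate solutions into non-dominated fronts. As a hedged illustration only, the sketch below applies that sorting step to a handful of invented DG plans, each scored by two objectives to be minimized (say, investment cost and network power loss); the plan names and values are made up:

```python
# Invented candidate DG plans: name -> (investment cost, power loss),
# both to be minimized. Values are illustrative only.
plans = {
    "A": (10, 5),
    "B": (8, 7),
    "C": (12, 4),
    "D": (9, 9),
    "E": (8, 6),
}

def dominates(p, q):
    """p dominates q if p is no worse in every objective and better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_fronts(pop):
    """Peel off successive non-dominated fronts (front 1 = Pareto set)."""
    fronts = []
    remaining = dict(pop)
    while remaining:
        front = [
            k for k, v in remaining.items()
            if not any(dominates(w, v) for j, w in remaining.items() if j != k)
        ]
        fronts.append(sorted(front))
        for k in front:
            del remaining[k]
    return fronts

print(non_dominated_fronts(plans))
```

In a full NSGA-II loop, front rank (plus a crowding-distance tie-breaker) drives selection, so the population is pushed toward the whole cost/loss trade-off curve rather than a single compromise point; in the thesis, evaluating each plan's objectives is where the MATPOWER power flow would be called.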
FOPID Controller Design for Robust Performance Using Particle Swarm Optimization
Mathematics Subject Classification: 26A33; 93C15; 93C55; 93B36; 93B35; 93B51; 03B42; 70Q05; 49N05
This paper proposes a novel method for designing an H∞-optimal fractional-order PID (FOPID) controller with the ability to control the transient response, steady-state response and stability margin characteristics. The method uses a particle swarm optimization algorithm and operates by minimizing a general cost function. Minimization of the cost function is carried out subject to the H∞-norm; this norm is also included in the cost function to achieve a lower value. The method is applied to a phase-locked-loop motor speed system and an electromagnetic suspension system as two examples to illustrate the design procedure and verify the performance of the proposed controller. The results show that the proposed method is capable of improving system responses compared to the conventional H∞-optimal controller while still maintaining the H∞-optimality of the solutions
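The particle swarm loop at the heart of the method above is standard. The sketch below is a minimal, hedged stand-in: the paper's cost involves closed-loop response measures and the H∞-norm, whereas here a simple 2-D quadratic plays the role of the cost so the swarm mechanics are visible; all parameters are illustrative:

```python
import random

# Minimal PSO over a placeholder cost function (optimum at (1, -2)).
# In the paper's setting, cost(x) would evaluate the FOPID controller
# whose parameters are x, including the H-infinity norm term.
random.seed(3)

def cost(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def pso(n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    gbest = min(pbest, key=cost)[:]             # swarm's best position
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

print(pso())
```

The inertia weight w below 1 damps the velocities so the swarm contracts onto the best region found, which is why PSO suits cost functions, like the controller cost here, that are expensive and not differentiable in closed form.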
Operation and planning of distribution networks with integration of renewable distributed generators considering uncertainties: a review
Distributed generators (DGs) are a reliable solution for supplying economic and reliable electricity to customers. Distribution is the last stage in the delivery of electric power, and a DG can be defined as an electric power source connected directly to the distribution network or on the customer site. It is necessary to allocate DGs optimally (size, placement and type) to obtain the commercial, technical, environmental and regulatory advantages of power systems. In this context, this paper presents a comprehensive literature review of the uncertainty modeling methods used for uncertain parameters related to renewable DGs, as well as the methodologies used for the planning and operation of DG integration into the distribution network. This work was supported in part by the SITARA project funded by the British Council and the Department for Business, Innovation and Skills, UK, and in part by the University of Bradford, UK, under CCIP grant 66052/000000