2,575 research outputs found

    Evolving Crushers

    This paper describes the use of an evolutionary algorithm to solve an engineering design problem. The problem involves determining the geometry and operating settings for a crusher in a comminution circuit for ore processing. The intention is to provide a tool for consulting engineers that can be used to explore candidate designs for various scenarios. The algorithm has proved capable of deriving designs that are clearly superior to existing designs, promising significant financial benefit
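
    As a rough illustration of the kind of search such a tool performs, the sketch below runs a minimal (mu + lambda) evolutionary loop over a handful of hypothetical crusher design variables; the variable names, bounds and the surrogate performance model are assumptions for illustration, not details taken from the paper.

        # Illustrative only: minimal (mu + lambda) evolutionary search over
        # hypothetical crusher design/operating variables. Bounds, names and
        # the performance model are assumptions, not the paper's formulation.
        import random

        BOUNDS = {
            "closed_side_setting_mm": (8.0, 40.0),
            "eccentric_speed_rpm":    (200.0, 450.0),
            "chamber_angle_deg":      (18.0, 28.0),
        }

        def evaluate_design(d):
            # Placeholder objective: trade a throughput proxy against a
            # product-size penalty; a real study would call a comminution model.
            throughput = d["eccentric_speed_rpm"] / d["closed_side_setting_mm"]
            size_penalty = abs(d["chamber_angle_deg"] - 23.0)
            return throughput - 0.5 * size_penalty   # higher is better

        def random_design():
            return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

        def mutate(d, sigma=0.05):
            child = dict(d)
            for k, (lo, hi) in BOUNDS.items():
                child[k] = min(hi, max(lo, child[k] + random.gauss(0, sigma * (hi - lo))))
            return child

        def evolve(mu=10, lam=40, generations=50):
            population = [random_design() for _ in range(mu)]
            for _ in range(generations):
                offspring = [mutate(random.choice(population)) for _ in range(lam)]
                population = sorted(population + offspring,
                                    key=evaluate_design, reverse=True)[:mu]
            return population[0]

        if __name__ == "__main__":
            best = evolve()
            print(best, evaluate_design(best))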

    PARAMETRIC ANALYSIS OF A GRINDING PROCESS USING THE ROUGH SETS THEORY

    With the continuous automation of the manufacturing industries and the development of advanced data acquisition systems, a huge volume of manufacturing-related data is now available which can be effectively mined to extract valuable knowledge and uncover hidden patterns. In this paper, a data mining tool, in the form of rough sets theory, is applied to a grinding process to investigate the effects of its various input parameters on the responses. Rotational speed of the grinding wheel, depth of cut and type of cutting fluid are the grinding parameters, while average surface roughness, amplitude of vibration and grinding ratio are the responses. The best settings of the grinding parameters are also derived to control the quality characteristics of the ground components. The developed decision rules are easy to understand and can accurately predict the response values at varying combinations of the considered grinding parameters
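
    To make the rough-sets idea concrete, the sketch below computes lower and upper approximations of a "good surface finish" decision class from a toy grinding decision table; the attribute names and discretised values are invented for the example and are not the paper's data.

        # Illustrative only: lower/upper approximations in rough sets theory
        # for a toy grinding decision table (all values are assumptions).
        from collections import defaultdict

        # condition attributes -> decision (surface finish class)
        table = [
            ({"speed": "high", "doc": "low",  "fluid": "oil"},   "good"),
            ({"speed": "high", "doc": "low",  "fluid": "oil"},   "good"),
            ({"speed": "high", "doc": "high", "fluid": "water"}, "poor"),
            ({"speed": "low",  "doc": "high", "fluid": "water"}, "poor"),
            ({"speed": "low",  "doc": "low",  "fluid": "oil"},   "good"),
            ({"speed": "low",  "doc": "low",  "fluid": "oil"},   "poor"),  # conflicting case
        ]

        def approximations(table, target):
            # Group objects that are indiscernible on the condition attributes.
            classes = defaultdict(list)
            for cond, dec in table:
                classes[tuple(sorted(cond.items()))].append(dec)
            lower, upper = [], []
            for key, decisions in classes.items():
                if all(d == target for d in decisions):
                    lower.append(dict(key))   # certainly 'target' (certain rule)
                if any(d == target for d in decisions):
                    upper.append(dict(key))   # possibly 'target' (possible rule)
            return lower, upper

        lower, upper = approximations(table, "good")
        print("Certain 'good' conditions:", lower)
        print("Possible 'good' conditions:", upper)

    Each entry of the lower approximation reads directly as a certain IF-THEN decision rule, which is the form of knowledge the abstract describes extracting.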

    Aggregate process planning and manufacturing assessment for concurrent engineering

    The introduction of concurrent engineering has led to a need to perform product development tasks with reduced information detail. Decisions taken during the early design stages have the greatest influence on the cost of manufacture. The manufacturing requirements for alternative design options should therefore be considered at this time. Existing tools for product manufacturing assessment are either too detailed, requiring detailed design information, or too abstract, unable to consider small changes in design configuration. There is a need for an intermediate level of assessment which makes use of additional design detail where available, whilst allowing assessment of early designs. This thesis develops the concept of aggregate process planning as a methodology for supporting concurrent engineering. A methodology for performing aggregate process planning of early product designs is presented. Process and resource alternatives are identified for each feature of the component, and production plans are generated from these options. Alternative production plans are assessed in terms of cost, quality and production time. A computer-based system (CESS, Concurrent Engineering Support System) has been developed to implement the proposed methodology. The system employs object-oriented modelling techniques to represent designs, manufacturing resources and process planning knowledge. A product model suitable for representing component designs at varying levels of detail is presented. An aggregate process planning functionality has been developed to allow the generation of sets of alternative plans for a component in a given factory. Manufacturing cost is calculated from the cost of processing, set-ups, transport, material and quality. Processing times are calculated using process-specific methods based on standard cutting data. Process quality cost is estimated from a statistical analysis of historical SPC data stored for similar operations performed in the factory, where available. The aggregate process planning functionality has been tested with example component designs drawn from industry
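
    A minimal sketch of the cost roll-up described above might look as follows, summing the processing, set-up, transport, material and quality elements for alternative plans and ranking them; the plan names and figures are hypothetical, not drawn from the thesis.

        # Illustrative only: ranking alternative aggregate process plans by the
        # cost elements named in the abstract. All numbers are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class PlanCost:
            name: str
            processing: float
            setups: float
            transport: float
            material: float
            quality: float   # e.g. expected scrap/rework cost from SPC history

            def total(self) -> float:
                return (self.processing + self.setups + self.transport
                        + self.material + self.quality)

        plans = [
            PlanCost("mill-then-grind", 14.2, 3.0, 0.8, 6.5, 1.1),
            PlanCost("turn-then-mill",  11.7, 4.5, 1.2, 6.5, 2.3),
        ]

        for p in sorted(plans, key=PlanCost.total):
            print(f"{p.name}: {p.total():.2f}")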

    Concurrent optimization of process parameters and product design variables for near net shape manufacturing processes

    This paper presents a new systematic approach to the optimization of both design and manufacturing variables across a multi-step production process. The approach assumes a generic manufacturing process in which an initial Near Net Shape (NNS) process is followed by a limited number of finishing operations. In this context the optimisation problem becomes a multi-variable problem in which the aim is to minimize cost (or time) while improving technological performance (e.g. turning force). To enable such computation, a methodology named Conditional Design Optimization (CoDeO) is proposed, which allows the modelling and simultaneous optimization of process parameters and product design (geometric) variables, using single- or multi-criteria optimization strategies. After investigation of CoDeO’s requirements, evolutionary algorithms, in particular Genetic Algorithms, are identified as the most suitable for overall NNS manufacturing chain optimization. The CoDeO methodology is tested using an industrial case study covering a process chain composed of casting and machining processes. For the specific case study presented, the optimized process resulted in cost savings of 22% (with corresponding machining time savings) and a 10% component weight reduction
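
    A minimal sketch of the kind of search CoDeO delegates to a genetic algorithm is shown below: design variables and process parameters share one chromosome and are scored with a weighted objective. The variable names, bounds, surrogate model and weights are assumptions for illustration only.

        # Illustrative only: one chromosome mixing product design variables and
        # NNS/finishing process parameters, scored with an assumed weighted
        # cost/weight objective. Not the paper's actual model.
        import random

        GENES = {                                   # (lower, upper) bounds
            "wall_thickness_mm":      (3.0, 12.0),  # design variable
            "machining_allowance_mm": (0.5, 4.0),   # NNS process parameter
            "cutting_speed_m_min":    (80.0, 300.0) # finishing process parameter
        }

        def fitness(x):
            # Assumed surrogate: thicker walls add weight; larger allowances and
            # lower cutting speeds add machining time/cost.
            weight = 2.0 * x["wall_thickness_mm"]
            machining = 5.0 * x["machining_allowance_mm"] + 400.0 / x["cutting_speed_m_min"]
            return 0.6 * machining + 0.4 * weight   # weighted single objective (minimise)

        def search(generations=100, pop=30):
            population = [{k: random.uniform(*b) for k, b in GENES.items()}
                          for _ in range(pop)]
            for _ in range(generations):
                population.sort(key=fitness)
                survivors = population[:pop // 2]
                # Recombine genes from surviving parents and perturb them.
                population = survivors + [
                    {k: min(hi, max(lo, random.choice(survivors)[k]
                                    + random.gauss(0, 0.05 * (hi - lo))))
                     for k, (lo, hi) in GENES.items()}
                    for _ in range(pop - len(survivors))]
            return min(population, key=fitness)

        print(search())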

    Dynamic simulation of industrial grinding circuits: mineral liberation, advanced process control, and real-time optimisation

    As minerals frequently appear in complex associations in nature, mineral liberation is one of the most relevant aspects of ore processing and is achieved through comminution. This operation is one of the most important, but also one of the most expensive, in industry. The overall efficiency of a plant often depends on the performance of the grinding circuit, since there is a compromise between reaching the particle size that liberates the targeted minerals, so as to obtain high-purity concentrates, and keeping operating costs low, these costs being largely driven by energy consumption. In recent years, companies have faced more demanding performance targets, stronger competition, and more stringent environmental and safety regulations. Additional challenges are inherent to grinding circuits themselves, e.g. nonlinear responses, a high degree of intercorrelation between variables, and material recirculation. These issues highlight the relevance of adequate process control and optimisation, and practitioners increasingly rely on model-based approaches to address them systematically. Modelling and simulation are powerful tools with significant advantages such as low cost, relatively short experimentation times, and the possibility of testing extreme operating conditions as well as different circuit configurations without disrupting production. Evidently, the quality of the results is only as good as the model's capacity to represent reality, which emphasises the importance of accurate models and proper calibration procedures, the latter being a topic frequently omitted in the literature. Another crucial aspect that has not yet been reported is the effective integration of mineral liberation into control and optimisation schemes. Although it is a key piece of information directly related to the performance of the concentration stage, most strategies focus exclusively on product particle size. This is understandable given that it is currently impossible to measure the liberation distribution online.

    Based on an existing mineral processing plant simulation library, this research addresses these problems by (1) developing a mineral liberation model aimed at linking the grinding and concentration stages; (2) programming a phenomenological autogenous/semi-autogenous (AG/SAG) mill model, required to complement the simulation toolbox, and validating it through calibration; (3) coupling a grinding circuit simulator to a concentration process by means of the liberation model; and (4) developing a plantwide control and optimisation scheme that explicitly considers mineral liberation data, in order to evaluate the economic benefits. The main results confirm that the liberation model is capable of accurately reproducing mineral liberation distributions observed in industry, although if the calibration data correspond to a single operating point its validity may be limited to the near neighbourhood of that point. Characterising the evolution of mineral liberation across different operating conditions and transient regimes remains to be addressed. The liberation model also proved useful for coupling grinding circuits with concentration processes, specifically a flotation unit. As for the AG/SAG mill model, it can capture the steady-state and dynamic behaviour of an actual device and, together with the other pieces of equipment in the simulation toolbox, of industrial grinding circuits. This was confirmed through calibration against plant operating data and laboratory testwork, following a systematic procedure and thereby contributing to the effort of establishing standard calibration methodologies. Lastly, the results of the designed control and optimisation scheme suggest that using liberation data for control and real-time optimisation can improve the overall performance of grinding-separation circuits by reacting to variations in the liberation characteristics, which in turn influence separation efficiency. The case study reveals that doing so can lead to increases in concentrate mass flow rate and grade, metal recovery, and global profits on the order of +0.5%, +1%, +1%, and +5%, respectively, compared with the case in which this information is omitted
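
    As a hedged illustration of how liberation-class data could enter an economic objective for real-time optimisation, the sketch below scores an operating point from assumed per-class mass fractions, mineral contents and recoveries; none of the numbers, names or relationships come from the thesis.

        # Illustrative only: toy economic objective using liberation-class data
        # (all values are assumptions) to score a grinding-separation operating point.
        def score(classes, feed_tph, price_per_t_metal, cost_per_t_ground):
            conc_mass = sum(f * r for f, g, r in classes) * feed_tph
            metal_in_conc = sum(f * g * r for f, g, r in classes) * feed_tph
            metal_in_feed = sum(f * g for f, g, r in classes) * feed_tph
            grade = metal_in_conc / conc_mass if conc_mass else 0.0
            recovery = metal_in_conc / metal_in_feed if metal_in_feed else 0.0
            profit = metal_in_conc * price_per_t_metal - feed_tph * cost_per_t_ground
            return conc_mass, grade, recovery, profit

        # (mass fraction of feed, mineral content, recovery to concentrate)
        classes = [(0.05, 0.95, 0.92),   # well-liberated valuable particles
                   (0.10, 0.50, 0.60),   # middlings
                   (0.85, 0.01, 0.05)]   # largely barren gangue
        print(score(classes, feed_tph=300.0, price_per_t_metal=2500.0, cost_per_t_ground=8.0))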

    Optimum Allocation of Inspection Stations in Multistage Manufacturing Processes by Using Max-Min Ant System

    In multistage manufacturing processes it is common to locate inspection stations after some or all of the processing workstations. The purpose of inspection is to reduce the total manufacturing cost resulting from unidentified defective items being processed unnecessarily through subsequent manufacturing operations. This total cost is the sum of the costs of production, inspection and failures (during production and after shipment). Introducing inspection stations into a serial multistage manufacturing process, although constituting an additional cost, is expected to be a profitable course of action: at some positions the associated inspection costs will be recovered from the benefits realised through the detection of defective items, before additional cost is wasted by continuing to process them. In this research, a novel general cost model for allocating a limited number of inspection stations in serial multistage manufacturing processes is formulated. In the allocation of inspection stations (AOIS) problem, as the number of workstations increases, the number of possible inspection station allocations grows exponentially. To identify an appropriate approach to the AOIS problem, different optimisation methods are investigated. The MAX-MIN Ant System (MMAS) algorithm is proposed as a novel approach to explore AOIS in serial multistage manufacturing processes. MMAS is an ant colony optimisation algorithm originally designed to begin with an explorative search phase and, subsequently, to make a slow transition to intensive exploitation of the best solutions found during the search, by allowing only one ant to update the pheromone trails. Two novel forms of heuristic information for the MMAS algorithm are created and exploited to guide the ants towards reasonably good solutions from the very beginning of the search. To improve the performance of the MMAS algorithm, six well-known local search methods suitable for the AOIS problem are used. Selecting appropriate parameter values for the MMAS algorithm can have a great impact on its performance, so a method for tuning the most influential parameter values is developed. The contribution of this research is that, for the first time, a methodology using MMAS to solve the AOIS problem in serial multistage manufacturing processes has been developed. The methodology takes into account the constraints on inspection resources, in terms of a limited number of inspection stations. As a result, the total manufacturing cost of a product can be reduced while maintaining product quality. Four numerical experiments are conducted to assess the MMAS algorithm for the AOIS problem. The performance of the MMAS algorithm is compared with a number of other methods, including the complete enumeration method (CEM), a rule of thumb, a pure random search algorithm, particle swarm optimisation, simulated annealing and a genetic algorithm. The experimental results show that the effectiveness of the MMAS algorithm lies in its considerably shorter execution time and its robustness, and that under certain conditions the results obtained by the MMAS algorithm are identical to those of the CEM. In addition, the results show that applying local search significantly improves the performance of the algorithm, and that it is essential to use heuristic information with the MMAS algorithm for the AOIS problem in order to obtain high-quality solutions. It was found that the main MMAS parameters, namely the pheromone trail intensity, the heuristic information weight and the pheromone evaporation rate, become less sensitive within the specified ranges as the number of workstations is significantly increased
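
    The sketch below shows, in schematic form, the MAX-MIN Ant System mechanics the abstract relies on (bounded pheromone trails, iteration-best-only deposits, a heuristic bias towards defect-prone stations) applied to a toy inspection-allocation instance; the cost model, defect rates and all parameter values are assumptions rather than the thesis's formulation.

        # Illustrative only: compact MAX-MIN Ant System for placing at most K
        # inspection stations on a serial line of N workstations. All numbers
        # and the cost model are assumptions for illustration.
        import random

        N, K = 10, 3
        DEFECT = [0.02 * (i + 1) for i in range(N)]   # assumed per-station defect rates
        PROC_COST, INSP_COST, SHIP_FAIL = 5.0, 2.0, 200.0

        def total_cost(stations):
            cost, p_def = 0.0, 0.0
            for i in range(N):
                cost += PROC_COST
                p_def = 1 - (1 - p_def) * (1 - DEFECT[i])   # defects accumulate
                if i in stations:
                    cost += INSP_COST + 20.0 * p_def        # inspect, scrap defectives
                    p_def = 0.0
            return cost + SHIP_FAIL * p_def                  # escaped defects after shipment

        def mmas(iters=200, ants=20, rho=0.1, tau_min=0.1, tau_max=5.0):
            tau = [tau_max] * N                              # trails start at the upper bound
            heur = [d + 1e-6 for d in DEFECT]                # bias towards defect-prone spots
            best, best_cost = None, float("inf")
            for _ in range(iters):
                it_best, it_cost = None, float("inf")
                for _ in range(ants):
                    weights = [tau[i] * heur[i] for i in range(N)]
                    stations = set(random.choices(range(N), weights=weights, k=K))
                    c = total_cost(stations)
                    if c < it_cost:
                        it_best, it_cost = stations, c
                if it_cost < best_cost:
                    best, best_cost = it_best, it_cost
                # Only the iteration-best ant deposits; trails stay in [tau_min, tau_max].
                tau = [min(tau_max, max(tau_min,
                          (1 - rho) * tau[i] + (1.0 / it_cost if i in it_best else 0.0)))
                       for i in range(N)]
            return best, best_cost

        print(mmas())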

    Forecast based traffic signal coordination using congestion modelling and real-time data

    This dissertation focusses on the implementation of a Real-Time Simulation-Based Signal Coordination module for arterial traffic, as a proof of concept for the potential of integrating a new generation of advanced heuristic optimisation tools into Real-Time Traffic Management Systems. The endeavour represents an attempt to address a number of shortcomings observed in most currently marketed on-line signal setting solutions and to provide better adaptive signal timings. It is unprecedented in its use of a Genetic Algorithm coupled with Continuous Dynamic Traffic Assignment as the solution evaluation method, made possible only by recently presented parallelisation strategies for the underlying algorithms. Within a fully functional traffic modelling and management framework, the optimiser is developed independently, leaving ample space for future adaptations and extensions, while relying on the best available technology to provide fast and realistic solution evaluation based on reliable real-time supply and demand data. The optimiser can in fact operate on high-quality network models that are well calibrated and always up to date with real-world road conditions; rely on robust, multi-source, network-wide traffic data, rather than being attached to single detectors; manage area coordination using an external simulation engine, rather than a naïve flow propagation model that overlooks crucial traffic dynamics; and even incorporate real-time traffic forecasts to account for transient phenomena in the near future, acting as a feedback controller. Results clearly confirm the efficacy of the proposed method, which obtains relevant and consistent corridor performance improvements with respect to widely known arterial bandwidth maximisation techniques under a range of different traffic conditions. The computational effort involved is already manageable for realistic real-world applications, and future extensions of the presented approach to more complex problems seem within reach thanks to the load distribution strategies already envisioned and prepared for in the context of this work
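
    A minimal sketch of the simulation-in-the-loop idea is given below: a genetic algorithm searches signal offsets along an arterial while a stand-in delay function plays the role that the Continuous Dynamic Traffic Assignment simulator plays in the dissertation; the cycle length, travel times and delay surrogate are assumptions.

        # Illustrative only: GA over signal offsets with a surrogate delay
        # function standing in for the traffic simulator. All values assumed.
        import random

        CYCLE = 90                      # s, assumed common cycle for the corridor
        TRAVEL = [0, 35, 70, 110]       # s, assumed travel times from the first stop line

        def simulated_delay(offsets):
            # Surrogate: penalise platoon arrivals far from each downstream green start.
            delay = 0.0
            for travel, off in zip(TRAVEL, offsets):
                arrival = travel % CYCLE
                delay += min((arrival - off) % CYCLE, (off - arrival) % CYCLE)
            return delay

        def ga(pop_size=30, generations=100, mut_rate=0.2):
            pop = [[random.randrange(CYCLE) for _ in TRAVEL] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=simulated_delay)
                parents = pop[:pop_size // 2]
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, len(TRAVEL))      # one-point crossover
                    child = a[:cut] + b[cut:]
                    if random.random() < mut_rate:
                        child[random.randrange(len(TRAVEL))] = random.randrange(CYCLE)
                    children.append(child)
                pop = parents + children
            return min(pop, key=simulated_delay)

        best = ga()
        print(best, simulated_delay(best))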