69 research outputs found

    Heuristics and metaheuristics for heavily constrained hybrid flowshop problems

    Full text link
    Due to current business trends, such as the need for a large product catalogue, orders that increase in frequency but not in size, globalisation, and an increasingly competitive market, the production sector faces an ever harder economic environment. All this raises the need for production scheduling with maximum efficiency and effectiveness. The first scientific publications on production scheduling appeared more than half a century ago. However, many authors have recognised a gap between the literature and industrial problems. Most research concentrates on optimisation problems that are in fact heavily simplified versions of reality. This allows for the use of sophisticated approaches and in many cases guarantees that optimal solutions are obtained. Yet the exclusion of real-world restrictions harms the applicability of those methods. What industry needs are systems for optimised production scheduling that adjust exactly to the conditions in the production plant and that generate good solutions in very little time. This is exactly the objective of this thesis: to treat more realistic scheduling problems and to help close the gap between the literature and practice. The scheduling problem considered is the hybrid flowshop problem, in which a set of jobs flows through a number of production stages, visiting one of the machines belonging to each stage. A series of restrictions is considered, including the possibility to skip stages, non-eligible machines, precedence constraints, positive and negative time lags, and sequence-dependent setup times. Such a large number of restrictions has not previously been considered simultaneously in the literature. In brief, this thesis studies a very realistic production scheduling problem and presents various optimisation methods for it. 
A mixed integer programming model is proposed in order to obtain…
Urlings, T. (2010). Heuristics and metaheuristics for heavily constrained hybrid flowshop problems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8439
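The constraint set described above (stage skipping, machine eligibility, sequence-dependent setups) can be illustrated with a small schedule evaluator. This is a minimal sketch with hypothetical names and data structures, not code from the thesis; precedence constraints and time lags are omitted for brevity.

```python
def evaluate_schedule(jobs, proc, setup, assignment):
    """jobs: job ids; proc[j][s]: processing time of job j at stage s (None
    if the job skips the stage); setup[s]: dict mapping (prev_job, job) to
    the sequence-dependent setup time on stage s; assignment[s]: list of
    (machine, job) pairs in processing order. Returns the makespan."""
    job_ready = {j: 0 for j in jobs}            # completion at previous stage
    for s, seq in enumerate(assignment):
        machine_ready = {}                      # machine availability times
        last_job = {}                           # previous job on each machine
        for m, j in seq:
            if proc[j][s] is None:              # stage skipping
                continue
            st = setup[s].get((last_job.get(m), j), 0)
            # anticipatory setup: it may run before the job arrives
            start = max(job_ready[j], machine_ready.get(m, 0) + st)
            job_ready[j] = machine_ready[m] = start + proc[j][s]
            last_job[m] = j
    return max(job_ready.values())
```

Machine eligibility is implicit here: an assignment simply never places a job on a non-eligible machine.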

    Native metaheuristics for non-permutation flowshop scheduling

    Get PDF
    The most general flowshop scheduling problem is also addressed in the literature as the non-permutation flowshop (NPFS). Current processors are able to cope with the combinatorial complexity, (n!)^m, of NPFS scheduling by metaheuristics. After briefly discussing the requirements for a manufacturing layout to be designed and modeled as a non-permutation flowshop, a disjunctive graph (digraph) approach is used to build native solutions. The implementation of an Ant Colony Optimization (ACO) algorithm is described in detail; it is shown how the biologically inspired mechanisms produce eligible schedules, as opposed to most metaheuristic approaches, which improve permutation solutions. ACO algorithms are an example of native non-permutation (NNP) solutions of the flowshop scheduling problem, opening a new perspective on building purely native approaches. The proposed NNP-ACO has been assessed against existing native approaches, improving most makespan upper bounds of the benchmark problems from Demirkol et al. (1998).
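The distinction between permutation and non-permutation schedules can be made concrete by evaluating a schedule in which each machine has its own job order. The sketch below is illustrative (a simple list-based evaluation of the disjunctive-graph idea), not the paper's ACO implementation:

```python
def npfs_makespan(proc, orders):
    """proc[j][m]: processing time of job j on machine m; orders[m]: the job
    sequence on machine m. Returns the makespan, or None if the per-machine
    orders create a cycle in the disjunctive graph (infeasible schedule)."""
    n_machines = len(orders)
    done = {}                                   # (job, machine) -> finish time
    pending = [(j, m) for m in range(n_machines) for j in orders[m]]
    while pending:
        remaining = []
        for j, m in pending:
            pos = orders[m].index(j)
            prev = orders[m][pos - 1] if pos > 0 else None
            machine_ok = prev is None or (prev, m) in done
            job_ok = m == 0 or (j, m - 1) in done
            if machine_ok and job_ok:           # both predecessors finished
                start = max(done.get((prev, m), 0), done.get((j, m - 1), 0))
                done[(j, m)] = start + proc[j][m]
            else:
                remaining.append((j, m))
        if len(remaining) == len(pending):      # no progress: cyclic orders
            return None
        pending = remaining
    return max(done.values())
```

With two jobs and two machines, the permutation schedule and the non-permutation schedule of the same instance generally give different makespans, which is exactly the extra freedom NPFS exploits.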

    A survey of scheduling problems with setup times or costs

    Get PDF
    Author names used in this publication: C. T. Ng; T. C. E. Cheng. 2007-2008 > Academic research: refereed > Publication in refereed journal. Accepted Manuscript. Published.

    Performance Analysis and Capacity Planning of Multi-stage Stochastic Order Fulfilment Systems with Levelled Order Release and Order Deadlines

    Get PDF
    Customer-oriented order fulfilment processes in logistics and production systems are nowadays confronted with a continuously growing volume of increasingly small orders, high customer requirements regarding short, customer-specific delivery deadlines, and strongly stochastically fluctuating demand. To guarantee efficient order processing and adherence to customer-specific delivery deadlines despite volatile demand, the workload of such processes must be smoothed in a suitable way. To compensate for fluctuations in production systems, Hopp and Spearman (2004) distinguish the dimensions of inventory, time, and capacity. These also provide a good starting point for developing smoothing concepts for stochastic, customer-oriented fulfilment processes. In this work, the potential of the dimensions time and capacity is combined in the strategy of levelled order release in order to smooth, at the tactical level, the workload of multi-stage stochastic order fulfilment processes with customer-specific due dates. The goals of this work are (1) the development of a smoothing concept, the so-called strategy of levelled order release, (2) the development of a discrete-time analytical model for performance analysis, and (3) the development of a capacity planning algorithm that guarantees given performance requirements for multi-stage stochastic order fulfilment processes with levelled order release and customer-specific due dates. The strategy of levelled order release is characterised by providing constant capacity over time for order processing and by processing orders in ascending order of their due dates. 
In this way, the time slack of each order between its arrival and its due date is used systematically to compensate for stochastic demand fluctuations. The remaining variability is compensated, depending on the customers' performance requirements, by the amount of capacity provided. The analytical model for the performance analysis of multi-stage stochastic order fulfilment processes with levelled order release and customer-specific due dates represents order processing as a discrete-time Markov chain and computes various stochastic and deterministic performance measures from its asymptotic state distribution. These measures, such as throughput, service level, utilisation, number of lost sales, and the time buffer and backlog duration of an order, enable a comprehensive and exact performance analysis of such processes. The relationship between the capacity provided and the achievable performance cannot be described explicitly by a mathematical equation; it is given only implicitly by the analytical model. The decision problem of capacity planning under given performance requirements is therefore a black-box optimisation problem. Problem-specific configurations of the black-box optimisation algorithms Mesh Adaptive Direct Search and Surrogate Optimisation Integer enable a targeted determination of the minimal process-specific capacity that must be provided to guarantee the customers' performance requirements, which are specified in terms of one or more performance measures of the order fulfilment process. 
Numerical studies assessing the performance of the levelled order release strategy show that, in systems with a utilisation above 0.6, a considerably higher α- and β-service level can be achieved with levelled order release than with first come, first served. Moreover, the capacity required to guarantee a given α-service level under levelled order release is at most as high as under first come, first served.
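The modelling idea, order processing as a discrete-time Markov chain whose stationary distribution yields the performance measures, can be sketched on a toy backlog chain with a constant released capacity per period. All names and numbers below are illustrative assumptions, not the thesis model:

```python
def stationary(P, iters=10_000, tol=1e-12):
    """Stationary distribution of a row-stochastic matrix by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(pi, new)) < tol:
            return new
        pi = new
    return pi

def backlog_chain(demand_pmf, capacity, max_backlog):
    """Transition matrix for backlog' = min(max(b + demand - capacity, 0), B)."""
    n = max_backlog + 1
    P = [[0.0] * n for _ in range(n)]
    for b in range(n):
        for d, p in demand_pmf.items():
            nb = min(max(b + d - capacity, 0), max_backlog)
            P[b][nb] += p
    return P

# demand is 0 or 2 orders per period, each with probability 0.5;
# one order of capacity is released per period (levelled release)
P = backlog_chain({0: 0.5, 2: 0.5}, capacity=1, max_backlog=3)
pi = stationary(P)
service_level = pi[0]        # share of periods with no backlog
```

Measures such as utilisation or lost sales would be read off the same stationary vector; capacity planning then searches over `capacity` until the required service level is met.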

    Enhancing Operational Flood Detection Solutions through an Integrated Use of Satellite Earth Observations and Numerical Models

    Get PDF
    Among natural disasters, floods are the most common and widespread hazards worldwide (CRED and UNISDR, 2018). Thus, making communities more resilient to floods is a priority, particularly in large flood-prone areas located in emerging countries, because the effects of extreme events severely set back the development process (Wright, 2013). In this context, operational flood preparedness requires novel modeling approaches for a fast delineation of flooding in riverine environments. Starting from a review of advances in the flood modeling domain and a selection of the most suitable open toolsets available in the literature, a new method for the Rapid Estimation of FLood EXtent (REFLEX) at multiple scales (Arcorace et al., 2019) is proposed. The simplified hydraulic modeling adopted in this method consists of a hydro-geomorphological approach based on the Height Above the Nearest Drainage (HAND) model (Nobre et al., 2015). The hydraulic component of this method employs a simplified version of the fluid mechanics equations for natural river channels. The input runoff volume is distributed from channel to hillslope cells of the DEM by using an iterative flood volume optimization based on Manning's equation. The model also includes a GIS-based method to expand HAND contours across neighboring watersheds in flat areas, which is particularly useful when extending flood modeling over coastal zones. REFLEX's flood modeling has been applied in multiple case studies in both surveyed and ungauged river basins. The development and implementation of the whole modeling chain have enabled a rapid estimation of flood extent over multiple basins at different scales. Where possible, flood modeling results are compared with reference flood hazard maps or with detailed flood simulations. Despite the limitations imposed by the simplified hydraulic modeling approach, the results obtained are promising in terms of flood extent and water depth. 
Given the geomorphological nature of the method, it does not require initial and boundary conditions as traditional 1D/2D hydraulic modeling does. Its usage therefore fits better in data-poor environments or large-scale flood modeling. This lightweight method has been employed extensively by CIMA Research Foundation researchers for flood hazard mapping over multiple African countries. As collateral research, multiple types of Earth observation (EO) data have been employed in the REFLEX modeling chain. Remotely sensed satellite data are, in fact, not only a source of input digital terrain models but also a means to map flooded areas. Thus, in this work, different EO data exploitation methods are used for estimating water extent and surface height. Preliminary results using Copernicus Sentinel-1 SAR and Sentinel-3 radar altimetry data highlighted their potential, mainly for model calibration and validation. In conclusion, REFLEX combines the advantages of geomorphological models with those of traditional hydraulic modeling to ensure a simplified steady-flow computation of flooding in open channels. This work highlights the pros and cons of the method and indicates the way forward for future research in the hydro-geomorphological domain.
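The hydraulic core mentioned above, a simplified steady-flow computation based on Manning's equation, can be sketched as a depth-for-discharge inversion. The rectangular channel geometry and the bisection routine below are illustrative assumptions, not REFLEX code:

```python
def manning_discharge(depth, width, slope, n_manning):
    """Q = (1/n) * A * R^(2/3) * sqrt(S) for a rectangular channel,
    with hydraulic radius R = A / wetted perimeter."""
    if depth <= 0:
        return 0.0
    area = width * depth
    radius = area / (width + 2 * depth)
    return (1.0 / n_manning) * area * radius ** (2.0 / 3.0) * slope ** 0.5

def depth_for_discharge(q_target, width, slope, n_manning, lo=0.0, hi=50.0):
    """Invert Manning's equation by bisection (Q is monotone in depth)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if manning_discharge(mid, width, slope, n_manning) < q_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In a HAND-based chain, a water level found this way would be compared against each cell's height above the nearest drainage to flag it as flooded or dry.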

    Aplicación de la meta-heurística colonia de hormigas para la resolución de problemas multi-objetivo de programación de la producción en Flowshops híbridos (flexibles)

    Get PDF
    In order to improve their competitiveness, manufacturing and service companies are obliged to constantly implement formal procedures that allow them to optimise their processes. With respect to manufacturing operations, production logistics, and more specifically operations scheduling, plays an important role in the efficient use of resources. Scheduling is a branch of combinatorial optimisation that consists of assigning resources to a set of activities so as to optimise one or several objectives. Owing to the intrinsic complexity of most production scheduling problems, which are NP-hard (that is, the time required to solve a particular instance grows, in the worst case, exponentially with the problem size), conventional exact methods such as linear, integer, and mixed programming are not efficient in terms of the computation time needed to reach the optimal solution. It is therefore necessary to use alternative approaches that solve this type of problem in a time that is reasonably short for the decision maker, especially for decisions taken daily. Among these approaches are metaheuristics: formal procedures developed to overcome this difficulty of the traditional methods. The most common metaheuristic procedures for solving combinatorial problems are genetic algorithms, tabu search, ant colony optimisation, and simulated annealing, among others
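One of the metaheuristics named above, simulated annealing, can be sketched for the permutation flowshop; the cooling schedule, swap neighbourhood, and parameter values are illustrative choices, not taken from the thesis:

```python
import math
import random

def makespan(perm, proc):
    """proc[j][m]: processing time of job j on machine m."""
    m = len(proc[0])
    finish = [0.0] * m                            # running completion times
    for j in perm:
        for k in range(m):
            finish[k] = max(finish[k], finish[k - 1] if k else 0) + proc[j][k]
    return finish[-1]

def anneal(proc, temp=10.0, cooling=0.995, steps=5000, seed=1):
    rng = random.Random(seed)
    perm = list(range(len(proc)))
    best = cur = makespan(perm, proc)
    best_perm = perm[:]
    for _ in range(steps):
        i, j = rng.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]       # swap move
        cand = makespan(perm, proc)
        # accept improvements always, worsenings with Boltzmann probability
        if cand <= cur or rng.random() < math.exp((cur - cand) / temp):
            cur = cand
            if cur < best:
                best, best_perm = cur, perm[:]
        else:
            perm[i], perm[j] = perm[j], perm[i]   # undo rejected move
        temp *= cooling
    return best, best_perm
```

The same evaluation function would serve the other listed metaheuristics (genetic algorithms, tabu search, ACO); only the search mechanism changes.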

    Hybrid Image Classification Technique for Land-Cover Mapping in the Arctic Tundra, North Slope, Alaska

    Get PDF
    Remotely sensed image classification techniques are very useful for understanding vegetation patterns and species combinations in the vast and mostly inaccessible arctic region. Previous research on mapping land cover and vegetation in the remote areas of northern Alaska achieved considerably lower accuracies than in other biomes. The unique arctic tundra environment, with its short growing season, cloud cover, low sun angles, and snow and ice cover, hinders the effectiveness of remote sensing studies. The majority of image classification research in this area, as reported in the literature, used the traditional unsupervised clustering technique with Landsat MSS data. Previous researchers also emphasized that SPOT/HRV-XS data lacked the spectral resolution to identify the small arctic tundra vegetation parcels. Thus, there is a motivation and research need to apply a new classification technique to develop an updated, detailed, and accurate vegetation map at a higher spatial resolution, i.e., from SPOT-5 data. Traditional classification techniques in remotely sensed image interpretation are based on spectral reflectance values, with the assumption that the training data are normally distributed; this makes it difficult to add ancillary data to the classification procedure to improve accuracy. The purpose of this dissertation was to develop a hybrid image classification approach that effectively integrates ancillary information into the classification process and combines ISODATA clustering, a rule-based classifier, and the Multilayer Perceptron (MLP) classifier based on an artificial neural network (ANN). The main goal was to find the best possible combination or sequence of classifiers for classifying tundra-type vegetation that yields higher accuracy than the existing classified vegetation map from SPOT data. 
Unsupervised ISODATA clustering and rule-based classification were combined to produce an intermediate classified map, which was used as input to the MLP classifier. The result from the MLP classifier was compared to the previous classified map, and for pixels where the class allocations disagreed, the class with the higher kappa value was assigned to the pixel in the final classified map. The results were compared to standard classification techniques: a simple unsupervised clustering technique and supervised classification with Feature Analyst. They indicated higher classification accuracy for the proposed hybrid method (75.6%, with a kappa value of 0.6840) than for the standard techniques: unsupervised clustering (68.3%, with a kappa value of 0.5904) and supervised classification with Feature Analyst (62.44%, with a kappa value of 0.5418). The results were statistically significant at the 95% confidence level.
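The accuracy figures above (overall accuracy plus Cohen's kappa) come from a confusion-matrix assessment, which can be computed as follows; this helper is illustrative, not the dissertation's code:

```python
def accuracy_and_kappa(confusion):
    """confusion[i][j]: pixels of reference class i labelled as class j.
    Returns (overall accuracy, Cohen's kappa)."""
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / total
    # chance agreement from the product of row and column marginals
    expected = sum(
        (sum(confusion[i]) / total) *
        (sum(row[i] for row in confusion) / total)
        for i in range(len(confusion))
    )
    return observed, (observed - expected) / (1 - expected)
```

Kappa discounts the agreement expected by chance, which is why a 75.6% accuracy can correspond to a kappa well below 0.756.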

    Variant-oriented Planning Models for Parts/Products Grouping, Sequencing and Operations

    Get PDF
    This research aims at developing novel methods for utilizing the commonality between part/product variants to make modern manufacturing systems more flexible, adaptable, and agile in dealing with lower volume per variant, while minimizing total setup changes between variants. Four models are developed for four important domains of manufacturing systems: production sequencing, product family formation, production flow, and product operations sequence retrieval. In all these domains, capitalizing on commonality between part/product variants plays a pivotal role. For production sequencing, a new policy based on setup similarity between product variants is proposed and its results are compared with those of a developed mathematical model in a permutation flow shop. The results show the proposed algorithm is capable of finding solutions in less than 0.02 seconds with an average error of 1.2%. For product family formation, a novel operation-flow-based similarity coefficient is developed for variants having networked structures and integrated with two other similarity coefficients, operation and volume similarity, to provide a more comprehensive similarity coefficient. Grouping variants based on the proposed integrated similarity coefficient improves changeover time and utilization of the system. A sequencing method is also developed as a secondary application of this approach. For production flow, a new mixed integer programming (MIP) model is developed to assign the operations of a family of product variants to candidate machines and to select the best place for each machine among the candidate locations. The final sequence of operations for each variant with a networked structure is also determined. The objective is to minimize the total backtracking distance, leading to an improvement in the total throughput of the system (7.79% in the case study of three engine blocks). 
For operations sequence retrieval, two mathematical models and an algorithm are developed to construct a master operation sequence from the information of the existing variants belonging to a family of parts/products. This master operation sequence is used to develop the operation sequences for new variants that are sufficiently similar to existing variants. Using the proposed algorithm reduces the time needed to develop the operation sequences of new variants to seconds
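The setup-similarity sequencing policy described for the first model can be sketched as a greedy nearest-neighbour chain over a similarity coefficient; the Jaccard coefficient and the data below are hypothetical illustrations, not the research's exact definitions:

```python
def setup_similarity(a, b):
    """Jaccard similarity of the setup-element sets of two variants."""
    return len(a & b) / len(a | b) if a | b else 1.0

def sequence_by_similarity(variants):
    """variants: dict mapping variant name -> set of required setup elements.
    Greedily chains the most similar next variant to minimise setup changes."""
    names = list(variants)
    order = [names.pop(0)]                      # start from the first variant
    while names:
        last = variants[order[-1]]
        nxt = max(names, key=lambda n: setup_similarity(last, variants[n]))
        names.remove(nxt)
        order.append(nxt)
    return order
```

A greedy chain like this runs in well under the reported 0.02 seconds for realistic instance sizes, at the cost of no optimality guarantee.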