
    A Multiple Objective Formulation and Algorithm for the Layout Design of Food Processing Facilities.

    A multiple objective formulation, which incorporates robustness and constraint enforcement as design criteria, is used to model the layout of food processing facilities. These facilities must comply with guidelines dictated by public health agencies and are subject to changes in product mix and to seasonal variation in production levels, which render existing layout design algorithms unsuitable for their design. The robust multiple objective formulation is solved with a construction heuristic algorithm, MORCH, and an improvement heuristic, MOLAD. The MORCH/MOLAD hybrid algorithm performs comparably to well-known heuristic algorithms when materials handling cost is used as the only design criterion, and its solutions are more robust than those of robust heuristic algorithms. Moreover, through the use of a qualitative constraint matrix, the hybrid algorithm generates layouts that conform to guidelines imposed by U.S. regulatory agencies without significantly penalizing materials handling cost. Because the model combines a qualitative constraint matrix with materials handling cost, a multicriteria decision making aid that handles both qualitative and quantitative factors, the Analytic Hierarchy Process, is used to select the most suitable layout and to guide the hybrid algorithm's generation of and search for good alternative layout solutions.
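    As a rough illustration of the AHP step mentioned above, the sketch below derives priority weights for candidate layouts from a pairwise comparison matrix via the principal eigenvector, which is the standard AHP procedure; the comparison values and the three-layout example are hypothetical, and this is not the MORCH/MOLAD implementation.

```python
import numpy as np

def ahp_priorities(pairwise):
    """Derive AHP priority weights from a pairwise comparison matrix
    using the principal eigenvector and normalizing it to sum to 1.
    A full AHP application would also check Saaty's consistency ratio."""
    pairwise = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    return principal / principal.sum()

# Hypothetical example: three candidate layouts compared on a single
# qualitative criterion (e.g. ease of regulatory compliance).
comparisons = [[1,   3,   5],
               [1/3, 1,   2],
               [1/5, 1/2, 1]]
print(ahp_priorities(comparisons))  # priority weight per candidate layout
```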

    Parallelized modelling and solution scheme for hierarchically scaled simulations

    This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV) or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to the large reductions in memory, communications and computational effort obtained in a parallel computing environment, substantial reductions are also obtained in sequential application, and these improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques for expanding the problem size that the current generation of computers can solve are presented and discussed, among them several interpolative reduction methods. It was found that, when several of these techniques are combined, even a relatively small interpolative reduction yields substantial performance gains. Several other unique features and benefits are discussed in this paper. Part 1 develops the theory; Part 2 presents a numerical approach to the HPT along with four prototype CFD applications that demonstrate the potential of the HPT strategy.
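    The following is a minimal, generic sketch of the idea of multilevel substructural decomposition: an index set of unknowns is recursively split into a tree of substructures whose leaves can be condensed or solved independently and in parallel. It illustrates only the general tree construction, not the authors' HPT formulation; the branching factor and leaf size are hypothetical parameters.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Substructure:
    unknowns: List[int]                                  # indices owned by this substructure
    children: List["Substructure"] = field(default_factory=list)

def build_tree(unknowns, branching=4, leaf_size=64):
    """Recursively split an index set into a tree of substructures.
    Leaves can be processed independently (and in parallel); each interior
    node couples its children only through shared interface unknowns,
    which keeps memory and communication local to subtrees."""
    node = Substructure(unknowns)
    if len(unknowns) > leaf_size:
        step = (len(unknowns) + branching - 1) // branching
        for i in range(0, len(unknowns), step):
            node.children.append(build_tree(unknowns[i:i + step], branching, leaf_size))
    return node

tree = build_tree(list(range(10_000)))   # toy example with 10,000 unknowns
```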

    Dynamic Facility Layout for Cellular and Reconfigurable Manufacturing using Dynamic Programming and Multi-Objective Metaheuristics

    The facility layout problem is one of the most classical yet influential problems in the planning of production systems. A well-designed layout minimizes material handling costs (MHC), personnel flow distances and work in process, and improves the performance of these systems in terms of operating costs and time. Because of this importance, facility layout has a rich literature in industrial engineering and operations research. Facility layout problems (FLPs) are generally concerned with positioning a set of facilities to satisfy some criteria or objectives under certain constraints. Traditional FLPs try to place facilities with high material flow between them as close together as possible to minimize the MHC. In the static facility layout problem (SFLP), product demands and mixes are treated as deterministic parameters with constant values, so the material flow between facilities is fixed over the planning horizon. However, in today's market, manufacturing systems constantly face changes in product demands and mixes. These changes make it necessary to modify the layout from one period to the next. Consequently, dynamic approaches to the FLP are needed that generate layouts which adapt well to changes in product demand and mix. This thesis studies layout problems with an emphasis on the changing environment of manufacturing systems. Although designing layouts for a dynamic environment is more realistic, the SFLP remains worth analyzing, and a math-heuristic approach is developed to solve it. First, the facilities are grouped into many possible vertical clusters; second, the best combination of the generated clusters for the final layout is selected by solving a linear programming model; and finally, the selected clusters are sequenced within the shop floor. Although the presented math-heuristic approach is effective for the SFLP, approaches that cope with the changing manufacturing environment are also required. One of the most well-known such approaches is the dynamic facility layout problem (DFLP). The DFLP suits reconfigurable manufacturing systems, since their machinery and material handling devices can be reconfigured to meet the new requirements arising from variations in product mix and demand. In the DFLP, the planning horizon is divided into several periods, and the goal is to find a layout for each period that minimizes the total MHC over all periods plus the total rearrangement costs between periods. Dynamic programming (DP) is known to be one of the effective methods for optimizing the DFLP. In the DP method, all possible layouts for every single period are generated and given to DP as its state space. However, as the number of facilities increases, it becomes impossible to give all possible layouts to DP, and only a restricted number of layouts can be fed to it. This means some layouts are ignored and optimality may be lost. To deal with this difficulty, an improved DP approach is proposed. It uses a hybrid metaheuristic algorithm to select the initial layouts for DP that lead to the best DP solution for the DFLP. The proposed approach includes two phases. In the first phase, a large set of layouts is generated through a heuristic method. In the second phase, a genetic algorithm (GA) searches for the best subset of layouts to be given to DP.
DP, improved by starting with the most promising initial layouts, is applied to find the multi-period layout. Finally, a tabu search algorithm is used to further improve the solution obtained by the improved DP. Computational experiments show that the improved DP provides more efficient solutions than DP approaches in the literature. The improved DP can efficiently solve the DFLP and find the best layout for each period considering both material handling and layout rearrangement costs. However, rearrangement costs may include unpredictable costs due to interruption of production or movement of facilities, so in some cases managerial decisions tend to avoid any rearrangement. To this end, a semi-robust approach is developed to optimize an FLP in a cellular manufacturing system (CMS). In this approach, the pick-up/drop-off (P/D) points of the cells are changed to adapt the layout to changes in product demand and mix; this approach is better suited to a cellular flexible manufacturing system or a conventional system. A multi-objective nonlinear mixed-integer programming model is proposed to simultaneously search for the optimum number of cells, the optimum allocation of facilities to cells, the optimum intra- and inter-cellular layout design, and the optimum locations of the P/D points of the cells in each period. A modified non-dominated sorting genetic algorithm (MNSGA-II), enhanced by an improved non-dominated sorting strategy and a modified dynamic crowding distance procedure, is used to find Pareto-optimal solutions. Computational experiments are carried out to show the effectiveness of the proposed MNSGA-II against other popular metaheuristic algorithms.
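    A minimal sketch of the dynamic programming recursion typically used for multi-period layout planning is given below, assuming that candidate layouts for each period and cost callbacks are supplied; the heuristic layout generation, the GA-based restriction of the DP state space, and the tabu search refinement described above are not shown.

```python
def dynamic_layout_plan(candidates, mhc, rearrange_cost):
    """Multi-period layout selection by dynamic programming.

    candidates[t]        -- list of candidate layouts for period t
    mhc(t, layout)       -- material handling cost of `layout` in period t
    rearrange_cost(a, b) -- cost of rearranging layout a into layout b

    The DP state is the layout used in the current period; each stage adds
    the cheapest way of reaching that layout from the previous period.
    Returns (total_cost, chosen layout per period).
    """
    T = len(candidates)
    # best[t][k] = (cheapest cost of periods 0..t ending with candidates[t][k], predecessor index)
    best = [[(mhc(0, lay), None) for lay in candidates[0]]]
    for t in range(1, T):
        row = []
        for lay in candidates[t]:
            row.append(min((best[t - 1][j][0] + rearrange_cost(prev, lay) + mhc(t, lay), j)
                           for j, prev in enumerate(candidates[t - 1])))
        best.append(row)
    # Backtrack from the cheapest final state.
    k = min(range(len(best[-1])), key=lambda j: best[-1][j][0])
    total, plan = best[-1][k][0], []
    for t in range(T - 1, -1, -1):
        plan.append(candidates[t][k])
        k = best[t][k][1]
    return total, list(reversed(plan))
```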

    Algorithm engineering in geometric network planning and data mining

    The geometric nature of computational problems provides a rich source of solution strategies as well as complicating obstacles. This thesis considers three problems in the context of geometric network planning, data mining and spherical geometry.

    Geometric Network Planning: In the d-dimensional Generalized Minimum Manhattan Network problem (d-GMMN) one is interested in finding a minimum cost rectilinear network N connecting a given set of n pairs of points in ℝ^d such that each pair is connected in N via a shortest Manhattan path. The decision version of this optimization problem is known to be NP-hard. The best known upper bound is an O(log^{d+1} n) approximation for d>2 and an O(log n) approximation for 2-GMMN. In this work we provide more insight into whether the problem admits constant factor approximations in polynomial time. We develop two new algorithms. The first is a `scale-diversity aware' algorithm with an O(D) approximation guarantee for 2-GMMN, where D is a measure for the different `scales' that appear in the input; D ∈ O(log n) but is potentially much smaller, depending on the problem instance. The second algorithm is based on a primal-dual scheme solving a more general, combinatorial problem which we call Path Cover. On 2-GMMN it performs well in practice, with good a posteriori, instance-based approximation guarantees, and it can be extended to deal with obstacle avoiding requirements. We show that the Path Cover problem is at least as hard to approximate as the Hitting Set problem. Moreover, we show that solutions of the primal-dual algorithm are 4ω^2 approximations, where ω ≤ n denotes the maximum overlap of a problem instance. This implies that a potential proof of O(1)-inapproximability for 2-GMMN requires gadgets of many different scales and non-constant overlap in the construction.

    Geometric Map Matching for Heterogeneous Data: For a given sequence of location measurements, the goal of geometric map matching is to compute a sequence of movements along edges of a spatially embedded graph which provides a `good explanation' for the measurements. The problem becomes challenging because real world data, like traces or graphs from the OpenStreetMap project, does not exhibit homogeneous data quality: graph details and errors vary between areas, and each trace has changing noise and precision. Hence, formalizing what a `good explanation' is becomes quite difficult. We propose a novel map matching approach which locally adapts to the data quality by constructing what we call dominance decompositions. While our approach is computationally more expensive than previous approaches, our experiments show that it allows for high quality map matching, even in the presence of highly variable data quality, without parameter tuning.

    Rational Points on the Unit Spheres: Each non-zero point in ℝ^d identifies a closest point x on the unit sphere S^{d-1}. We are interested in computing an ε-approximation y ∈ ℚ^d for x that is exactly on S^{d-1} and has low bit-size. We revise lower bounds on rational approximations and provide explicit spherical instances. We prove that floating-point numbers can only provide trivial solutions to the sphere equation in ℝ^2 and ℝ^3. However, we show how to construct a rational point with denominators of at most 10(d-1)/ε^2 for any given ε ∈ (0, 1/8], improving on a previous result. The method further benefits from algorithms for simultaneous Diophantine approximation. Our open-source implementation and experiments demonstrate the practicality of our approach in the context of massive data sets, geo-referenced by latitude and longitude values.
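    For intuition, the sketch below constructs a point with rational coordinates lying exactly on the unit circle, close to a given direction, using the classical stereographic parametrization of S^1. It is a generic illustration only, not the thesis's algorithm for S^{d-1} with controlled denominator size; max_den is a hypothetical parameter.

```python
from fractions import Fraction
import math

def rational_point_on_circle(x, y, max_den=10_000):
    """Return a rational point lying exactly on the unit circle near the
    direction of (x, y). Stereographic projection maps any rational t to
    ((1 - t^2)/(1 + t^2), 2t/(1 + t^2)), which satisfies px^2 + py^2 = 1
    exactly. Degenerate near the direction (-1, 0), where t blows up."""
    theta = math.atan2(y, x)
    t = Fraction(math.tan(theta / 2)).limit_denominator(max_den)
    den = 1 + t * t
    return (1 - t * t) / den, (2 * t) / den

px, py = rational_point_on_circle(0.3, 0.7)
assert px * px + py * py == 1   # exact equality, by construction
```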

    New solution approaches for the quadratic assignment problem

    MSc., Faculty of Science, University of the Witwatersrand, 2011.
    A vast array of important practical problems, in many different fields, can be modelled and solved as quadratic assignment problems (QAP). This includes problems such as university campus layout, forest management, assignment of runners in a relay team, parallel and distributed computing, etc. The QAP is a difficult combinatorial optimization problem, and solving QAP instances of size greater than 22 within a reasonable amount of time is still challenging. In this dissertation, we propose two new solution approaches to the QAP, namely, a Branch-and-Bound method and a discrete dynamic convexized method. These two methods use the standard quadratic integer programming formulation of the QAP. We also present a lower bounding technique for the QAP based on an equivalent separable convex quadratic formulation of the QAP. We finally develop two different new techniques for finding initial strictly feasible points for the interior point method used in the Branch-and-Bound method. Numerical results are presented showing the robustness of both methods.
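    To make the problem statement concrete, the following sketch evaluates the standard Koopmans-Beckmann QAP objective for a given assignment and solves a hypothetical toy instance by enumeration; the flow and distance matrices are made up, and this is unrelated to the Branch-and-Bound and convexized methods proposed in the dissertation.

```python
from itertools import permutations

def qap_cost(perm, flow, dist):
    """Koopmans-Beckmann QAP objective: facility i is placed at location
    perm[i]; cost = sum over all pairs of flow[i][j] * dist[perm[i]][perm[j]]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def brute_force_qap(flow, dist):
    """Exact solution by enumeration -- viable only for very small n, which is
    why branch-and-bound and heuristics are needed for realistic instances."""
    n = len(flow)
    return min(permutations(range(n)), key=lambda p: qap_cost(p, flow, dist))

# Hypothetical toy instance with 4 facilities / 4 locations.
flow = [[0, 3, 0, 2], [3, 0, 0, 1], [0, 0, 0, 4], [2, 1, 4, 0]]
dist = [[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]]
best = brute_force_qap(flow, dist)
print(best, qap_cost(best, flow, dist))
```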

    Similarity measures and algorithms for cartographic schematization


    Design of plant layout having passages and inner structural wall using particle swarm optimization

    The FLP has applications in both manufacturing and the service industry. The FLP is a common industrial problem of allocating facilities to either maximize an adjacency requirement or minimize the cost of transporting materials between them. The "maximizing adjacency" objective uses a relationship chart that qualitatively specifies a closeness rating for each facility pair; this is then used to determine an overall adjacency measure for a given layout. The "minimizing transportation cost" objective uses a value calculated by multiplying together the flow, the distance, and the unit transportation cost per distance for each facility pair, and summing the resulting values over all facility pairs. Most of the published research work on facility layout design deals with equal-area facilities. By disregarding the actual shapes and sizes of the facilities, the problem is generally formulated as a quadratic assignment problem (QAP) of assigning equal-area facilities to discrete locations on a grid with the objective of minimizing a given cost function. Heuristic techniques such as simulated annealing, simulated evolution, and various genetic algorithms developed for this purpose have also been applied to layout optimization of unequal-area facilities by first subdividing the area of each facility into a number of "unit cells". The particle swarm optimization (PSO) technique was developed by Eberhart and Kennedy in 1995. It is a simple evolutionary algorithm that differs from other evolutionary computation techniques in that it is motivated by the simulation of social behavior, and it exhibits good performance in finding solutions to static optimization problems. Particle swarm optimization is a swarm intelligence method that roughly models the social behavior of swarms. PSO is characterized by its simplicity and straightforward applicability, and it has proved to be efficient on a plethora of problems in science and engineering. Several studies have recently applied PSO to multi-objective optimization problems, and new variants of the method, better suited to such problems, have been developed. PSO has been recognized as an evolutionary computation technique and has features of both genetic algorithms (GA) and evolution strategies (ES). It is similar to a GA in that the system is initialized with a population of random solutions. However, unlike a GA, each population individual is also assigned a randomized velocity, in effect flying it through the solution hyperspace; this makes it possible to search for an optimum solution in multiple dimensions simultaneously. In this project we have utilized the advantages of the PSO algorithm, and the results are compared with an existing GA. Need statement of the thesis: to find the best facility layout, i.e. to determine the best sequence and areas of the facilities to be allocated and the locations of passages, for minimum material handling cost, using particle swarm optimization on a case study. The criteria for the optimization are minimum material handling cost and adjacency ratios. The problem is formulated as:
    Minimize F = ∑_{i=1}^{M} ∑_{j=1}^{M} f_ij · d_ij,  (1)
    subject to
    g1 = α_i^min − α_i ≤ 0,  (2)
    g2 = α_i − α_i^max ≤ 0,  (3)
    g3 = a_i^min − a_i ≤ 0,  (4)
    g4 = ∑_{i=1}^{M} a_i − A_available ≤ 0,  (5)
    g5 = α_i^min − α_i ≤ 0,  (6)
    g6 = α_i^min − α_i ≤ 0,  (7)
    g7 = (x_i^r − x_i^{i.s.w})(x_i^{i.s.w} − x_i^l) ≤ 0,  (8)
    where i, j = 1, 2, 3, …, M and s = 1, 2, 3, …, P;
    f_ij: material flow between facilities i and j;
    d_ij: distance between the centroids of facilities i and j;
    M: number of facilities;
    α_i: aspect ratio of facility i;
    α_i^min and α_i^max: lower and upper bounds of the aspect ratio α_i;
    a_i: assigned area of facility i;
    a_i^min and a_i^max: lower and upper bounds of the assigned area a_i;
    A_available: available area;
    P: number of inner structural walls.
    Since a large number of different combinations is possible, we cannot evaluate each one to find the best; for this we have used particle swarm optimization techniques, applied in a different way. The most interesting feature of the C program we have written is its generalized form: the optimum layout configuration can be found by varying the overall area of the layout, the total number of facilities to be allocated, the number of rows, the number of facilities in each row, the area of each facility, and the dimensions of each passage. We have compared the results with other heuristic methods such as the genetic algorithm and simulated annealing, and have tried to include a maximum adjacency criterion, using a case study.
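    As a hedged illustration of the PSO mechanics described above, the sketch below applies the standard velocity and position update to a simplified continuous placement problem that minimizes the flow-weighted rectilinear distance between facility centroids. The floor dimensions and swarm parameters are made-up values, and the omission of passages, facility areas, aspect-ratio constraints and the inner structural wall is a deliberate simplification; this is not the C program developed in the thesis.

```python
import random

def pso_layout(flow, width, height, swarm=30, iters=500, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO for a simplified layout problem: each particle encodes the
    (x, y) centroids of M facilities on a width x height floor, and the
    objective is the flow-weighted sum of rectilinear centroid distances.
    Boundary handling and all layout constraints are intentionally omitted."""
    M = len(flow)
    dim = 2 * M

    def cost(p):
        return sum(flow[i][j] * (abs(p[2*i] - p[2*j]) + abs(p[2*i+1] - p[2*j+1]))
                   for i in range(M) for j in range(M))

    def rand_pos():  # even indices are x coordinates, odd indices are y
        return [random.uniform(0, width if k % 2 == 0 else height) for k in range(dim)]

    xs = [rand_pos() for _ in range(swarm)]
    vs = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in xs]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for s in range(swarm):
            for k in range(dim):
                r1, r2 = random.random(), random.random()
                vs[s][k] = (w * vs[s][k]
                            + c1 * r1 * (pbest[s][k] - xs[s][k])
                            + c2 * r2 * (gbest[k] - xs[s][k]))
                xs[s][k] += vs[s][k]
            if cost(xs[s]) < cost(pbest[s]):
                pbest[s] = xs[s][:]
                if cost(pbest[s]) < cost(gbest):
                    gbest = pbest[s][:]
    return gbest, cost(gbest)

# Toy usage: three facilities on a 30 x 20 floor.
best, value = pso_layout([[0, 5, 2], [5, 0, 3], [2, 3, 0]], width=30, height=20)
print(best, value)
```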