
    Solution Approaches to the Three-index Assignment Problem

    This thesis explores the axial Three-Index Assignment Problem (3IAP), also called the Multidimensional Assignment Problem. The problem consists in allocating n jobs to n machines in n factories, such that exactly one job is executed by one machine in one factory at a minimum total cost. The 3IAP is an extension of the classical two-dimensional assignment problem. This combinatorial optimisation problem has been the subject of numerous research efforts and is proven NP-hard. The study adopts an algorithmic approach to develop fast and effective methods for solving the problem, focusing on balancing computational efficiency and solution accuracy. The Greedy-Style Procedure (GSP) is a novel heuristic algorithm for solving the 3IAP that guarantees feasible solutions in polynomial time. Specific arrangements of the cost matrices can lead to higher-quality feasible solutions. Analysing tie cases and matrix orderings led to new variants of the procedure. Further exploration of cost matrix characteristics allowed two new heuristic classes to be devised for solving the 3IAP. The approach selects the best solution within each class, yielding an optimal or a high-quality approximate solution. Numerical experiments confirm the efficiency of these heuristics, which consistently deliver quality feasible solutions in competitive computational times. Moreover, by employing diverse optimisation solvers, we propose and implement two effective methods to obtain optimal solutions for the 3IAP in good CPU times. The study also introduces two local search methods based on evolutionary algorithms, which explore the solution space through random permutations and the Hungarian method. Building on these, a hybrid genetic algorithm integrating both local search strategies is proposed for solving the 3IAP. The Hybrid Genetic Algorithm (HGA) produces high-quality solutions with reduced computational time, surpassing traditional deterministic approaches; its efficiency is demonstrated through experimental results and comparative analyses. On medium to large 3IAP instances, our method delivers comparable or better solutions within a competitive computational time frame. Two potential future developments and expected applications are proposed at the end of this project. The first extension will examine the correlation between the cost matrices and the optimal total cost of the assignment, and will investigate the dependence structure of the matrices and its influence on optimal solutions; copula theory and Sklar's theorem can support this analysis, with a focus on the stochastic dependence of the cost matrices and their multivariate properties. The second extension involves integrating variable costs defined by specific probability distributions, often modelled by economic sector, enhancing the comprehensive analysis of economic scenarios and their impact on the assignment problem. The study considers various well-defined probability distributions and highlights more practical applications of the assignment problem in real-world economics. The project's original contribution lies in its algorithmic approach to investigating the 3IAP, which has led to the development of new, fast, and efficient heuristic methods that strategically balance computational speed and the accuracy of the solutions achieved.
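The abstract does not specify the GSP itself, but the general shape of a greedy constructive heuristic for the axial 3IAP can be sketched as follows. This is a minimal illustration, not the thesis's actual procedure: it scans cost triples in nondecreasing order and keeps any triple whose job, machine, and factory are all still free.

```python
def greedy_3iap(cost):
    """Greedy feasible solution for the axial three-index assignment problem.

    cost[i][j][k] is the cost of running job i on machine j in factory k.
    Repeatedly pick the cheapest remaining triple whose job, machine and
    factory are all unassigned, until every job is placed.  The result is
    always feasible, but not necessarily optimal.
    """
    n = len(cost)
    triples = sorted(
        (cost[i][j][k], i, j, k)
        for i in range(n) for j in range(n) for k in range(n))
    used_i, used_j, used_k = set(), set(), set()
    assignment, total = [], 0
    for c, i, j, k in triples:
        if i not in used_i and j not in used_j and k not in used_k:
            assignment.append((i, j, k))
            total += c
            used_i.add(i); used_j.add(j); used_k.add(k)
            if len(assignment) == n:
                break
    return total, sorted(assignment)

# Toy 2x2x2 instance (illustrative values only).
cost = [[[4, 7], [2, 9]],
        [[5, 1], [8, 3]]]
print(greedy_3iap(cost))  # -> (3, [(0, 1, 0), (1, 0, 1)])
```

On this tiny instance the greedy choice happens to be optimal; in general a sorted-scan heuristic like this only guarantees feasibility in polynomial time.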

    Agent Based Individual Traffic Guidance


    "Rotterdam econometrics": publications of the econometric institute 1956-2005

    This paper contains a list of all publications over the period 1956-2005, as reported in the Rotterdam Econometric Institute Reprint series during 1957-2005.

    Probabilistic Analysis of the Median Rule: Asymptotics and Applications

    The solution of integer optimization problems by relaxation methods consists of three parts. First, the discrete problem is converted into a continuous optimization problem, which is generally more tractable. Second, the relaxed problem is solved efficiently, yielding an optimal solution in the continuous space. Finally, an assignment procedure is used to map this solution to a suitable discrete solution. One heuristic, which we call the relaxation heuristic, often guides the choice and design of assignment algorithms: given a continuous optimal solution, the corresponding integer optimal solution is likely to be nearby (with respect to some well-defined metric). Intuitively, this heuristic is reasonable for objective functions that are, say, Lipschitz. For such functions, an assignment algorithm might map the continuous optimal solution to the nearest feasible solution in the discrete space, in the hope that the discrete solution will be optimal as well. In this paper, we consider properties of a particular assignment algorithm known as the median rule. Define a binary vector to be balanced when the numbers of its 1's and 0's differ by at most one. The median rule, used to assign n-dimensional real vectors to n-dimensional balanced binary vectors, may be loosely described as follows: map the ith component of a real vector to a 0 or a 1, depending on whether that component is smaller or greater than the median value of the vector's components. We address two aspects of the median rule. The first result is that, given a real vector, the median rule produces the closest balanced binary vector with respect to any Schur-convex distance criterion; this includes several Minkowski norms, entropy measures, gauge functions, etc. In this sense, the median rule optimally implements the relaxation heuristic. The second result addresses the issue of relaxation error. Though the median rule produces the nearest balanced integer solution to a given real vector, this solution may be sub-optimal, with the actual optimal solution located elsewhere. The difference between the actual optimal cost and the cost of the solution obtained by the median rule is called the relaxation error. We consider the optimization of real-valued, parametrized, multivariable Lipschitz functions whose domains are the set of balanced binary vectors. Varying the parameters over the range of their values, we obtain an ensemble of such problems. Each problem instance in the ensemble has an optimal real cost, an integer cost, and an associated relaxation error. We establish upper bounds on the probability that the relaxation error is greater than a given threshold t. In general, these bounds depend on the random model being considered. These results have an immediate bearing on the important graph bisection width problem, which involves the minimization of a certain semidefinite quadratic cost function over balanced binary domains. This problem arises in a variety of areas, including load balancing [11,16], storage management [22], distributed directories [15], and VLSI design [10]. The results obtained indicate that the median rule, in a certain precise sense, is an optimal assignment procedure for this problem. The rest of the paper is organized as follows: in Section 3, we prove the shortest-distance properties of the median rule; in Section 4, we introduce the concept of relaxation error and the Lipschitz bisection problem; upper bounds on the relaxation error are obtained in Section 5; a discussion of these results is given in Section 6.
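The rule as described above can be sketched directly. This is a minimal illustration under the abstract's definitions: components in the upper half of the sorted order map to 1 (equivalent to thresholding at the median), and ties are broken by rank so the output is always balanced.

```python
def median_rule(x):
    """Round a real vector to the nearest balanced binary vector.

    A component is mapped to 1 when it ranks in the upper half of the
    sorted order and to 0 otherwise, which is equivalent to thresholding
    at the median; ties are broken by sorted rank, so the counts of 1's
    and 0's always differ by at most one (the vector is balanced).
    """
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    b = [0] * n
    for i in order[n // 2:]:   # indices of the ceil(n/2) largest components
        b[i] = 1
    return b

print(median_rule([0.9, 0.1, 0.5, 0.3]))  # -> [1, 0, 1, 0]
```

For odd n the 1's outnumber the 0's by exactly one, which still satisfies the balance condition.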

    Advanced timeline systems

    The Mission Planning Division of the Mission Operations Laboratory at NASA's Marshall Space Flight Center is responsible for scheduling experiment activities for space missions controlled at MSFC. In order to draw statistically relevant conclusions, all experiments must be scheduled at least once and may have repeated performances during the mission. An experiment consists of a series of steps which, when performed, provide results pertinent to the experiment's functional objective. Since these experiments require a set of resources such as crew and power, the task of creating a timeline of experiment activities for the mission is one of resource-constrained scheduling. For each experiment, a computer model is created with detailed information on the steps involved in running the experiment, including crew requirements, processing times, and resource requirements. These models are then loaded into the Experiment Scheduling Program (ESP), which attempts to create a schedule that satisfies all resource constraints. ESP uses a depth-first search technique to place each experiment into a time interval, and a scoring function to evaluate the schedule. The mission planners generate several schedules and choose one with a high value of the scoring function to send through the approval process. The process of approving a mission timeline can take several months. Each timeline must meet the requirements of the scientists, the crew, and various engineering departments, as well as enforce all resource restrictions. No single objective is considered in creating a timeline. The experiment scheduling problem is: given a set of experiments, place each experiment along the mission timeline so that all resource requirements and temporal constraints are met and the timeline is acceptable to all who must approve it. Much work has been done on multicriteria decision making (MCDM). When there are two criteria, schedules which perform well with respect to one criterion will often perform poorly with respect to the other. One schedule dominates another if it performs strictly better on one criterion and no worse on the other. Clearly, dominated schedules are undesirable. A nondominated schedule can be generated by some form of optimization problem. Generally, there are two approaches: the first is a hierarchical approach, while the second requires optimizing a weighting or scoring function.
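The dominance relation described above is easy to make concrete. The sketch below is a generic illustration, not anything from the paper: it filters a set of two-criteria schedule scores (both maximised) down to the nondominated ones.

```python
def nondominated(schedules):
    """Return the schedules not dominated on two maximised criteria.

    A score pair p dominates q when p is at least as good on both
    criteria and strictly better on at least one (i.e. p != q
    componentwise with p >= q).
    """
    def dominates(p, q):
        return p[0] >= q[0] and p[1] >= q[1] and p != q

    return [s for s in schedules
            if not any(dominates(t, s) for t in schedules)]

# Hypothetical (criterion_1, criterion_2) scores for five schedules.
scores = [(3, 5), (4, 4), (2, 6), (3, 3), (1, 2)]
print(nondominated(scores))  # -> [(3, 5), (4, 4), (2, 6)]
```

Here (3, 3) is dominated by (3, 5) and (1, 2) by every other schedule, so only the three incomparable schedules survive; a planner would then pick among them, e.g. via a weighting function.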

    Urban Public Transportation Planning with Endogenous Passenger Demand

    An effective and efficient public transportation system is crucial to people's mobility, economic production, and social activities. The Operations Research community has been studying transit system optimization for the past decades. With disruptions from the private sector, especially parking operators, ride-sharing platforms, and micro-mobility services, new challenges and opportunities have emerged. This thesis investigates the interaction of public transportation systems with significant private-sector players, considering endogenous passenger choice. More specifically, it aims to optimize public transportation systems considering the interaction with parking operators and the competition and collaboration from ride-sharing and micro-mobility platforms. Optimization models, algorithms, and heuristic solution approaches are developed to design the transportation systems. The parking operator plays an important role in determining passenger travel mode. The capacity and pricing decisions of parking and transit operators are investigated under a game-theoretic framework. A mixed-integer non-linear programming (MINLP) model is formulated to simulate each player's strategy to maximize profit considering endogenous passenger mode choice. A three-step solution heuristic is developed to solve the large-scale MINLP problem. With emerging transportation modes like ride-sharing services and micro-mobility platforms, this thesis aims to co-optimize the integrated transportation system. To improve mobility for residents in transit desert regions, we co-optimize public transit and ride-sharing services to provide a more environmentally friendly and equitable system. Similarly, we design an integrated system of public transit and micro-mobility services to provide a more sustainable transportation system in the post-pandemic world.
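The abstract does not say how passenger mode choice is represented; one standard way to make mode choice endogenous in such models is a multinomial logit, sketched below. The mode names and utility values here are hypothetical, purely for illustration.

```python
import math

def mode_shares(utilities):
    """Multinomial-logit mode shares from deterministic utilities.

    utilities: dict mapping mode name -> utility (in practice a linear
    function of price, travel time, etc.).  Returns the probability of
    each mode being chosen; probabilities sum to one.
    """
    expu = {m: math.exp(u) for m, u in utilities.items()}
    z = sum(expu.values())
    return {m: e / z for m, e in expu.items()}

# Hypothetical utilities: e.g. a higher parking price lowers car utility,
# shifting share toward transit.
shares = mode_shares({"car": -1.2, "transit": -0.9, "ride_share": -1.5})
print({m: round(p, 3) for m, p in shares.items()})
```

In a game-theoretic setting like the one described, each operator's pricing decision changes the utilities, and the resulting shares feed back into its profit, which is what makes demand endogenous.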

    Compiling Programs for Nonshared Memory Machines

    Nonshared-memory parallel computers promise scalable performance for scientific computing needs. Unfortunately, these machines are now difficult to program because the message-passing languages available for them do not reflect the computational models used in designing algorithms. This introduces a semantic gap in the programming process which is difficult for the programmer to fill. The purpose of this research is to show how nonshared-memory machines can be programmed at a higher level than is currently possible. We do this by developing techniques for compiling shared-memory programs for execution on those architectures. The heart of the compilation process is translating references to shared memory into explicit messages between processors. To do this, we first define a formal model for distributing data structures across processor memories. Several abstract results describing the messages needed to execute a program are immediately derived from this formalism. We then develop two distinct forms of analysis to translate these formulas into actual programs. Compile-time analysis is used when enough information is available to the compiler to completely characterize the data sent in the messages; this allows excellent code to be generated for a program. Run-time analysis produces code to examine data references while the program is running, allowing dynamic generation of messages and a correct implementation of the program. While the overhead of the run-time approach is higher than that of the compile-time approach, run-time analysis is applicable to any program. Performance data from an initial implementation show that both approaches are practical and produce code with acceptable efficiency.
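The message-translation step can be illustrated with a toy version of the owner-computes rule under a block distribution. This is a generic sketch, not the paper's formal model: for a statement like a[i] = b[i - shift], the owner of b[i - shift] must send that element to the owner of a[i] whenever the two processors differ.

```python
def block_owner(i, n, p):
    """Owner of global index i when n elements are block-distributed
    over p processors (each block holds ceil(n/p) elements; the last
    block may be smaller)."""
    size = -(-n // p)          # ceil(n / p)
    return i // size

def messages_for_shift(n, p, shift):
    """Messages implied by the statement a[i] = b[i - shift] under the
    owner-computes rule: for each i, if the owner of b[i - shift]
    differs from the owner of a[i], a (sender, receiver, index) message
    is required."""
    msgs = []
    for i in range(shift, n):
        src = block_owner(i - shift, n, p)
        dst = block_owner(i, n, p)
        if src != dst:
            msgs.append((src, dst, i - shift))
    return msgs

# 8 elements over 2 processors, shift of 1: only the block-boundary
# element b[3] must cross from processor 0 to processor 1.
print(messages_for_shift(8, 2, 1))  # -> [(0, 1, 3)]
```

A compile-time analysis can emit exactly these transfers when n, p, and the shift are known statically; when they are not, a run-time version of the same computation generates the messages as the references occur.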