
    INCORPORATING TRAVEL TIME RELIABILITY INTO TRANSPORTATION NETWORK MODELING

    Travel time reliability is deemed one of the most important factors affecting travelers’ route choice decisions. However, existing practice mostly considers average travel time only. This dissertation establishes a methodological framework to overcome this limitation. Semi-standard deviation is first proposed as the measure of reliability to quantify risk under uncertain network conditions. This measure accounts only for travel times that exceed a pre-specified benchmark, which gives it a better behavioral interpretation and theoretical foundation than commonly used measures such as standard deviation and the probability of on-time arrival. Two path finding models are then developed by integrating average travel time and semi-standard deviation. The single-objective model minimizes the weighted sum of average travel time and semi-standard deviation, while the multi-objective model treats them as separate objectives and seeks to minimize them simultaneously. The multi-objective formulation is preferred because it eliminates the need for prior knowledge of reliability ratios, and it offers the additional benefit of providing multiple attractive paths for travelers’ further decision making. A sampling-based approach using archived travel time data is applied to derive the path semi-standard deviation. This approach works around the fact that the measure cannot be derived analytically: the correlation structure is accounted for implicitly, while the complicated link travel time distribution fitting and convolution process is avoided. Furthermore, a metaheuristic algorithm and a stochastic dominance based approach are adapted to solve the proposed models. Both approaches address the fact that classical shortest path algorithms are not applicable because semi-standard deviation is non-additive. 
However, the stochastic dominance based approach is preferred because it is more computationally efficient and always finds the true optimal paths. In addition to semi-standard deviation, on-time arrival probability and scheduling delay measures are also investigated. Although these three measures share similar mathematical structures, they respond differently to large deviations from the pre-specified travel time benchmark. Theoretical connections between these measures and the first three stochastic dominance rules are also established, which allows on-time arrival probability and scheduling delay to be incorporated into the methodology framework as well.
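The reliability measure and the single-objective path cost described above can be sketched in a few lines. This is a minimal illustration, not the dissertation's implementation: the sample data, benchmark, and reliability ratio below are hypothetical, and the semi-standard deviation is taken as the root-mean-square of exceedances over the benchmark.

```python
import numpy as np

def semi_std(samples, benchmark):
    """Semi-standard deviation: root-mean-square of the amounts by
    which sampled travel times exceed a pre-specified benchmark."""
    exceed = np.maximum(np.asarray(samples, dtype=float) - benchmark, 0.0)
    return float(np.sqrt(np.mean(exceed ** 2)))

def weighted_path_cost(samples, benchmark, reliability_ratio):
    """Single-objective path cost: mean travel time plus the
    semi-standard deviation weighted by a reliability ratio."""
    return float(np.mean(samples)) + reliability_ratio * semi_std(samples, benchmark)

# Two hypothetical paths with archived travel-time samples (minutes).
path_a = [20, 21, 22, 35]   # fast on average, occasionally very slow
path_b = [24, 25, 25, 26]   # slower on average, but reliable
print(weighted_path_cost(path_a, benchmark=25, reliability_ratio=1.0))  # 29.5
print(weighted_path_cost(path_b, benchmark=25, reliability_ratio=1.0))  # 25.5
```

Note how the weighted cost ranks the reliable path ahead of the nominally faster one, which is the behavioral point of the measure: only late arrivals, not early ones, are penalized.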

    Minimum costs paths in intermodal transportation networks with stochastic travel times and overbookings

    In intermodal transportation, it is essential to balance the trade-off between the cost and duration of a route. The duration of a path is inherently stochastic because of delays and the possibility of overbooking. We study a problem faced by a company that supports shippers with route selection advice. The challenge is to find Pareto-optimal solutions with respect to a route's cost and its probability of arriving before a specific deadline. We show how this probability can be calculated in a network with scheduled departure times and the possibility of overbookings. We give an optimal algorithm for this problem, but since its running time becomes too long for larger networks, we also develop a heuristic. The idea of the heuristic is to replace the stochastic variables by deterministic risk measures and solve the resulting deterministic optimization problem. The heuristic produces, in a fraction of the optimal algorithm's running time, solutions whose costs are only a few percent higher than the optimal costs.
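The Pareto-optimality criterion used above can be made concrete with a small filter over candidate routes. This is an illustrative sketch only: the route names, costs, and on-time probabilities are invented, and the dominance test is the standard bi-objective one (cheaper and at least as likely to be on time, strictly better in at least one objective).

```python
def pareto_routes(routes):
    """Keep routes that are not dominated in (cost, on-time probability)."""
    frontier = []
    for r in routes:
        dominated = any(
            o["cost"] <= r["cost"] and o["p_ontime"] >= r["p_ontime"]
            and (o["cost"] < r["cost"] or o["p_ontime"] > r["p_ontime"])
            for o in routes
        )
        if not dominated:
            frontier.append(r)
    return frontier

# Hypothetical intermodal route candidates.
routes = [
    {"name": "truck",      "cost": 900, "p_ontime": 0.99},
    {"name": "rail+truck", "cost": 600, "p_ontime": 0.90},
    {"name": "barge+rail", "cost": 650, "p_ontime": 0.85},  # dominated by rail+truck
]
print([r["name"] for r in pareto_routes(routes)])  # ['truck', 'rail+truck']
```

The barge route is dropped because another route is both cheaper and more reliable; the remaining two represent the genuine cost-versus-reliability trade-off offered to the shipper.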

    Change-Averse Nash Equilibria in Congestion Games

    We introduce a new model in Congestion Games, where players choose their strategy according to the new cost they would incur, as well as the difference between their current state and the new state they are considering. The latter part of the decision-making process is based on the assumption that players considering a significant change are less prone to make it than players facing a smaller one. This model has analogies with ε-approximate equilibria, and it is easy to see that it admits a richer set of equilibria than approximate equilibria do. Christodoulou et al. prove that for Linear Congestion Games there are good bounds on the Price of Anarchy. We prove that similar results hold in our setting. We also prove that players actually converge to such an equilibrium, and do so relatively quickly.
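The convergence claim can be illustrated with best-response dynamics on a toy linear congestion game. This is a simplified sketch, not the paper's exact model: the change-aversion term is reduced to a fixed per-switch penalty `alpha`, and the game is four players on two parallel links with cost equal to load.

```python
def cost(load):
    """Linear congestion cost of a link carrying `load` players."""
    return load

def change_averse_dynamics(strategies, n_links, alpha=0.5, max_rounds=100):
    """Best-response dynamics in which a player switches links only
    when the new cost beats the current cost by more than alpha,
    modeling reluctance to change (change-averse deviations)."""
    for _ in range(max_rounds):
        loads = [strategies.count(l) for l in range(n_links)]
        moved = False
        for p, cur in enumerate(strategies):
            for j in range(n_links):
                if j != cur and cost(loads[j] + 1) + alpha < cost(loads[cur]):
                    loads[cur] -= 1          # change-averse improving move
                    loads[j] += 1
                    strategies[p] = j
                    moved = True
                    break
        if not moved:
            return strategies  # change-averse equilibrium reached
    return strategies

# All four players start on link 0; dynamics rebalance to 2-and-2.
print(change_averse_dynamics([0, 0, 0, 0], n_links=2))  # e.g. [1, 1, 0, 0]
```

With `alpha = 0`, this is ordinary best response; a positive `alpha` enlarges the set of stable profiles, mirroring the observation that change-averse equilibria form a richer set than exact ones.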

    Multi-stage stochastic optimization and reinforcement learning for forestry epidemic and covid-19 control planning

    This dissertation focuses on developing new modeling and solution approaches based on multi-stage stochastic programming and reinforcement learning for tackling biological invasions in forests and human populations. The Emerald Ash Borer (EAB) is the nemesis of ash trees. This research introduces a multi-stage stochastic mixed-integer programming model to assist forest agencies in managing emerald ash borer insects throughout the U.S. and maximize the public benefits of preserving healthy ash trees. This work is then extended to present the first risk-averse multi-stage stochastic mixed-integer program in the invasive species management literature to account for extreme events. Significant computational gains are obtained using a scenario dominance decomposition and cutting plane algorithm. The results of this work provide crucial insights and decision strategies for optimal resource allocation among surveillance, treatment, and removal of ash trees, leading to a better and healthier environment for future generations. This dissertation also addresses the computational difficulty of solving one of the most difficult classes of combinatorial optimization problems, the Multi-Dimensional Knapsack Problem (MKP). A novel 2-Dimensional (2D) deep reinforcement learning (DRL) framework is developed to represent and solve combinatorial optimization problems, focusing on the MKP. The DRL framework trains different agents to make sequential decisions and find the optimal solution while still satisfying the resource constraints of the problem. To our knowledge, this is the first DRL model of its kind in which a 2D environment is formulated and an element of the DRL solution matrix represents an item of the MKP. Our DRL framework solves medium-sized and large-sized instances at least 45 and 10 times faster in CPU solution time, respectively, with a maximum solution gap of 0.28% compared to the solution performance of CPLEX. 
Applying this methodology, yet another recent epidemic problem is tackled: COVID-19. This research investigates a reinforcement learning approach combined with an agent-based simulation model to simulate disease growth and optimize decision-making during an epidemic. The framework is validated using COVID-19 data from the Centers for Disease Control and Prevention (CDC). The results provide important insights into the government response to COVID-19 and into vaccination strategies.
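To make the MKP concrete: each 0/1 decision picks an item, and every resource dimension imposes its own capacity constraint. The exhaustive solver below is only a tiny reference illustration of the problem the DRL framework tackles (the item values, weights, and capacities are invented), not the framework itself, which scales far beyond what enumeration can handle.

```python
from itertools import product

def solve_mkp(values, weights, capacities):
    """Brute-force solver for a tiny Multi-Dimensional Knapsack
    instance: choose a 0/1 vector over items maximizing total value
    while every resource dimension stays within its capacity."""
    n, best_val, best_x = len(values), 0, [0] * len(values)
    for x in product((0, 1), repeat=n):
        usage = [sum(w[d] * xi for w, xi in zip(weights, x))
                 for d in range(len(capacities))]
        if all(u <= c for u, c in zip(usage, capacities)):
            val = sum(v * xi for v, xi in zip(values, x))
            if val > best_val:
                best_val, best_x = val, list(x)
    return best_val, best_x

# 4 items, 2 resource dimensions (hypothetical data): weights[i][d].
values = [10, 7, 6, 3]
weights = [[4, 2], [3, 3], [2, 4], [1, 1]]
capacities = [7, 6]
print(solve_mkp(values, weights, capacities))  # (17, [1, 1, 0, 0])
```

Enumeration costs 2^n candidate vectors, which is exactly why exact solvers such as CPLEX, and learned sequential policies like the 2D DRL framework, are needed for realistic instance sizes.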

    Large-scale optimization under uncertainty: applications to logistics and healthcare

    Many decision-making problems in real life are affected by uncertainty. The area of optimization under uncertainty has been studied widely and deeply for over sixty years, and it continues to be an active area of research. The overall aim of this thesis is to contribute to the literature by developing (i) theoretical models that reflect problem settings closer to real life than previously considered in the literature, as well as (ii) solution techniques that are scalable. To this end, the thesis focuses on two particular applications: the vehicle routing problem and the problem of patient scheduling in a healthcare system. The first part of this thesis studies the vehicle routing problem, which asks for a cost-optimal delivery of goods to geographically dispersed customers. The probability distribution governing the customer demands is assumed to be unknown throughout this study. This assumption positions the study in the domain of distributionally robust optimization, which has a well-developed literature but had so far not been extensively studied in the context of the capacitated vehicle routing problem. The study develops theoretical frameworks that allow for a tractable solution of such problems in the context of risk-averse optimization. The overall aim is to create a model that practitioners can use to solve problems specific to their requirements with minimal adaptations. The second part of this thesis focuses on the problem of scheduling elective patients within the available resources of a healthcare system so as to minimize overall years of life lost. This problem has been well studied for a long time. The large scale of a healthcare system, coupled with the inherent uncertainty affecting the evolution of a patient, makes this a particularly difficult problem. The aim of this study is to develop a scalable optimization model that allows for an efficient solution while at the same time enabling a flexible modelling of each patient in the system. 
This is achieved through a fluid approximation of the weakly-coupled counting dynamic program that arises from modeling each patient in the healthcare system as a dynamic program with states, actions, transition probabilities and rewards reflecting the condition, treatment options and evolution of that patient. A case study for the National Health Service in England highlights the usefulness of the prioritization scheme obtained by applying the methodology developed in this study.
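The per-patient dynamic program described above can be sketched as a small Markov decision process solved by value iteration. This is a hedged illustration only: the two health states, the "wait"/"treat" actions, and all transition probabilities and rewards below are invented, and the shared capacity constraint that weakly couples the patients' DPs is omitted.

```python
import numpy as np

# Hypothetical single-patient MDP: state 0 = stable, state 1 = deteriorated.
# P[action][s, s'] are transition probabilities; rewards proxy life-years.
P = {
    "wait":  np.array([[0.7, 0.3], [0.0, 1.0]]),   # untreated patients may worsen
    "treat": np.array([[0.95, 0.05], [0.6, 0.4]]), # treatment may restore health
}
R = {"wait": np.array([1.0, 0.2]), "treat": np.array([1.0, 0.5])}

def value_iteration(gamma=0.95, tol=1e-8):
    """Standard value iteration for the per-patient dynamic program."""
    V = np.zeros(2)
    while True:
        Q = {a: R[a] + gamma * P[a] @ V for a in P}   # action values
        V_new = np.maximum(Q["wait"], Q["treat"])
        if np.max(np.abs(V_new - V)) < tol:
            policy = ["treat" if Q["treat"][s] >= Q["wait"][s] else "wait"
                      for s in range(2)]
            return V_new, policy
        V = V_new

V, policy = value_iteration()
print(policy)
```

In the full weakly-coupled formulation, one such DP is solved per patient and the fluid approximation relaxes the joint capacity constraint on "treat" actions, yielding the scalable prioritization scheme the abstract refers to.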