
    AO* and penalty based algorithms for the Canadian traveler problem

    A printed copy of this thesis is held at the İstanbul Şehir University Library. The Canadian Traveler Problem (CTP) is a challenging path planning problem on stochastic graphs where some edges are blocked with certain probabilities and the status of an edge can be disambiguated only upon reaching one of its end vertices. The goal is to devise a traversal policy that results in the shortest expected traversal length between a given starting vertex and a termination vertex. The organization of this thesis is as follows. In the first chapter we define CTP and its variant, the Stochastic Obstacle Scene Problem (SOSP), and present an extensive literature review related to these problems. In the second chapter, we introduce an optimal algorithm for the problem based on an MDP formulation; it is a new improvement on AO* search that takes advantage of the special problem structure in CTP. The new algorithm is called CAO*, which stands for AO* with Caching. CAO* uses a caching mechanism and makes use of admissible upper bounds for dynamic state-space pruning. CAO* is not polynomial-time, but it can dramatically shorten the execution time needed to find an exact solution for moderately sized instances. We present computational experiments on a realistic variant of the problem involving an actual maritime minefield data set. In the third chapter, we introduce a simple, yet fast and effective penalty-based heuristic for CTP that can be used in an online fashion. We present computational experiments involving real-world and synthetic data that suggest our algorithm finds near-optimal policies in very short execution times. Rollout-based algorithms, another efficient method for sub-optimally solving CTP, have also been shown to provide high-quality policies. In the final chapter, we compare the two algorithmic frameworks via computational experiments involving Delaunay and grid graphs, using one specific penalty-based algorithm and four rollout-based algorithms. Our results indicate that the penalty-based algorithm executes several orders of magnitude faster than the rollout-based ones while also providing better policies, suggesting that penalty-based algorithms stand as a prominent candidate for fast and efficient sub-optimal solution of CTP.
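    Since the abstract only names the penalty-based heuristic, a hedged sketch may help fix ideas. The Python below implements the generic penalty idea for CTP, not the thesis's DT algorithm: ambiguous edges remain in the planning graph with their length surcharged in proportion to their blocking probability, and the agent replans after each disambiguation. The `penalty` constant, the per-edge independence assumption, and the simulation harness are illustrative assumptions.

```python
import heapq
import random

def dijkstra(adj, src, dst):
    """Return (cost, path) for the shortest src->dst path on a weighted adjacency dict."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist[u]:
            continue
        for v, w in adj.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return float("inf"), []
    path, u = [dst], dst
    while u != src:
        u = prev[u]
        path.append(u)
    return dist[dst], path[::-1]

def penalty_walk(edges, block_prob, src, dst, penalty=2.0, seed=0):
    """Simulate one penalty-based online traversal of a CTP instance.

    edges: {(u, v): length}; block_prob: {(u, v): blocking probability}.
    Ambiguous edges stay in the planning graph but are surcharged by
    penalty * block_prob; the agent replans after every disambiguation.
    """
    rng = random.Random(seed)
    status = {}                    # edge -> True (traversable) / False (blocked)
    here, walked = src, 0.0
    while here != dst:
        for e in edges:            # reaching a vertex reveals its incident edges
            if here in e and e not in status:
                status[e] = rng.random() >= block_prob[e]
        adj = {}
        for (u, v), w in edges.items():
            st = status.get((u, v))
            if st is False:
                continue           # known blocked: unusable
            eff = w if st else w + penalty * block_prob[(u, v)]
            adj.setdefault(u, {})[v] = eff
            adj.setdefault(v, {})[u] = eff
        _, path = dijkstra(adj, here, dst)
        if not path:
            return float("inf")    # no traversable route remains
        nxt = path[1]
        walked += edges[(here, nxt)] if (here, nxt) in edges else edges[(nxt, here)]
        here = nxt
    return walked
```

    Tuning the surcharge trades optimism (a small penalty risks walking into dead ends) against conservatism (a large penalty avoids promising shortcuts).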

    Risk-Aware Planning for Sensor Data Collection

    With the emergence of low-cost unmanned air vehicles, civilian and military organizations are quickly identifying new applications for affordable, large-scale collectives to support and augment human efforts via sensor data collection. In order to be viable, these collectives must be resilient to the risk and uncertainty of operating in real-world environments. Previous work in multi-agent planning has avoided planning for the loss of agents in environments with risk. In contrast, this dissertation presents a problem formulation that includes the risk of losing agents and the effect of those losses on the mission being executed, and provides anticipatory planning algorithms that consider risk. We conduct a thorough analysis of the effects of risk on path-based planning, motivating new solution methods. We then use hierarchical clustering to generate risk-aware plans for a variable number of agents, outperforming traditional planning methods. Next, we provide a mechanism for distributed negotiation of stable plans, utilizing coalitional game theory to provide cost allocation methods that we prove to be fair and stable. Centralized planning with redundancy is then explored, planning for parallel task completion to mitigate risk and provide further increased expected value. Finally, we explore the role of cost uncertainty as an additional source of risk, using bi-objective optimization to generate sets of alternative plans. We demonstrate the capability of our algorithms on randomly generated problem instances, showing an improvement over traditional multi-agent planning methods as high as 500% on very large problem instances.
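    As a hedged illustration of the effect of agent loss on path-based planning, the sketch below scores a candidate path by discounting each task's reward by the probability that the agent survives long enough to collect it. The survival model, names, and numbers are illustrative assumptions, not the dissertation's formulation.

```python
from typing import List, Tuple

def expected_path_value(path: List[Tuple[float, float]]) -> float:
    """Expected value of one agent's path under risk of loss.

    path: sequence of (edge_survival_prob, task_reward) pairs: the agent
    must survive the edge (independently, by assumption) to collect the
    reward at its endpoint. A lost agent collects nothing further, so
    later rewards are discounted by the chance of surviving every
    earlier edge as well.
    """
    alive, value = 1.0, 0.0
    for survive_p, reward in path:
        alive *= survive_p        # probability the agent is still operating here
        value += alive * reward
    return value

# Two two-task plans with different risk profiles.
risky = [(0.70, 10.0), (0.70, 10.0)]   # 7.0 + 4.9  = 11.9
safe  = [(0.95, 6.0), (0.95, 6.0)]     # 5.7 + 5.415 ≈ 11.1
print(expected_path_value(risky), expected_path_value(safe))
```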

    Combined optimization algorithms applied to pattern classification

    Accurate classification by minimizing the error on test samples is the main goal in pattern classification. Combinatorial optimization is a well-known method for solving minimization problems; however, only a few examples of classifiers are described in the literature where combinatorial optimization is used in pattern classification. Recently, there has been a growing interest in combining classifiers and improving the consensus of results for a greater accuracy. In the light of the "No Free Lunch Theorems", we analyse the combination of simulated annealing, a powerful combinatorial optimization method that produces high quality results, with the classical perceptron algorithm. This combination is called the LSA machine. Our analysis aims at finding paradigms for problem-dependent parameter settings that ensure high classification results. Our computational experiments on a large number of benchmark problems lead to results that either outperform or are at least competitive to results published in the literature. Apart from parameter settings, our analysis focuses on a difficult problem in computation theory, namely the network complexity problem. The depth vs. size problem of neural networks is one of the hardest problems in theoretical computing, with very little progress over the past decades. In order to investigate this problem, we introduce a new recursive learning method for training hidden layers in constant depth circuits. Our findings make contributions to a) the field of Machine Learning, as the proposed method is applicable in training feedforward neural networks, and to b) the field of circuit complexity, by proposing an upper bound for the number of hidden units sufficient to achieve a high classification rate. One of the major findings of our research is that the size of the network can be bounded by the input size of the problem, with an approximate upper bound of 8 + √(2^n/n) threshold gates being sufficient for a small error rate, where n := log |S_L| and S_L is the training set.
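    A hedged sketch of the combination under study, simulated annealing searching perceptron weight space to minimize training error, is given below. The move distribution, cooling schedule, and constants are illustrative assumptions rather than the LSA machine's published settings.

```python
import math

import numpy as np

def train_error(w, X, y):
    """Fraction of samples misclassified by the perceptron sign(X @ w)."""
    return float(np.mean(np.sign(X @ w) != y))

def sa_perceptron(X, y, steps=20000, t0=1.0, alpha=0.9995, seed=0):
    """Simulated annealing over perceptron weights (minimal sketch).

    X: (m, d) samples with a bias column appended; y: labels in {-1, +1}.
    Accepts worse weight vectors with probability exp(-delta / T), cooling
    geometrically; all schedule constants are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    best_w, best_e = w.copy(), train_error(w, X, y)
    err, t = best_e, t0
    for _ in range(steps):
        cand = w + rng.normal(scale=0.1, size=w.shape)   # local move in weight space
        cand_e = train_error(cand, X, y)
        delta = cand_e - err
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            w, err = cand, cand_e
            if err < best_e:
                best_w, best_e = w.copy(), err
        t *= alpha                                        # geometric cooling
    return best_w, best_e

# Toy usage: a linearly separable problem the annealer should solve exactly.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
Xb = np.hstack([X, np.ones((200, 1))])                    # bias column
w, e = sa_perceptron(Xb, y)
print("training error:", e)
```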

    Pride and Prejudice, Practices and Perceptions: A Comparative Case Study in North Atlantic Environmental History

    Due to escalating carbon-based emissions, anthropogenic climate change is wreaking havoc on the natural and built environment as higher near-surface temperatures cause arctic ice-melt, rising sea levels and unpredictable turbulent weather patterns. The effects are especially devastating to inhabitants living in the water-worlds of developing countries where environmental pressure only exacerbates their vulnerability to oppressive economic policies. As climatic and economic pressures escalate, threats to local resources, living space, safety and security are all reaching a tipping point. Climate refugees may survive, but they will fall victim to displacement, economic insecurity, and socio-cultural destruction. With the current economic system in peril, it is now a matter of urgency that the global community determine ways to modify their behaviour in order to minimize the impact of climate change. This interdisciplinary comparative analysis contributes to the dialogue by turning to environmental history for similar scenarios with contrasting outcomes. It isolates two North Atlantic water-worlds and their inhabitants at an historical juncture when the combination of climatic and economic pressures threatened their survival. During the sixteenth and seventeenth centuries, the Hebrideans in the Scottish Insular Gàidhealtachd and the Wabanaki in Ketakamigwa were both responding to the harsh conditions of the ‘Little Ice Age.’ While modifying their resource management, settlement patterns, and subsistence behaviours to accommodate climate change, they were simultaneously targeted by foreign opportunists whose practices and perceptions inevitably induced oppressive economic pressure. This critical period in their history serves as the centre of a pendulum that swings back to deglaciation and then forward again to the eighteenth century to examine the relationship between climate change and human behaviour in the North Atlantic. It will be demonstrated that both favourable and deteriorating climate conditions determine resource availability, but how humans manage those resources during feast or famine can determine their collective vulnerability to predators when the climate changes. It is argued that, historically, climate has determined levels of human development and survival on either side of the North Atlantic, regardless of sustainable practices. However, when cultural groups were under extreme environmental and economic pressure, there were additional factors that determined their fate. First, the condition of their native environment and the prospect for continuing to inhabit it was partially determined by the level of sustainable practices. And, secondly, the way in which they perceived and treated one another partially determined their endurance. If they avoided internal stratification and self-protectionism by prioritising the needs of the group over those of the individual, they minimised fragmentation, avoided displacement, and maintained their social and cultural cohesion.

    Ice Blink

    Northern Canada's distinctive landscapes, its complex social relations and the contested place of the North in contemporary political, military, scientific and economic affairs have fueled recent scholarly discussion. At the same time, both the media and the wider public have shown increasing interest in the region. This timely volume extends our understanding of the environmental history of northern Canada - clarifying both its practice and promise, and providing critical perspectives on current public debates. Ice Blink provides opportunities to consider critical issues in other disciplines and geographic contexts. Contributors also examine whether distinctive approaches to environmental history are required when studying the Canadian North, and consider a range of broader questions. What, if anything, sets the study of environmental history in particular regions apart from its study elsewhere? Do environmental historians require regionally-specific research practices? How can the study of environmental history take into consideration the relations between Indigenous peoples, the environment, and the state? How can the history of regions be placed most effectively within transnational and circumpolar contexts? How relevant are historical approaches to contemporary environmental issues? Scholars from universities in Canada, the United States and Britain contribute to this examination of the relevance of historical study for contemporary arctic and sub-arctic issues, especially environmental challenges, security and sovereignty, indigenous politics and the place of science in northern affairs. By asking such questions, the volume offers lessons about the general practice of environmental history and engages an international body of scholarship that addresses the value of regional and interdisciplinary approaches. Crucially, however, it makes a distinctive contribution to the field of Canadian environmental history by identifying new areas of research and exploring how international scholarly developments might play out in the Canadian context.

    A Bayesian approach to Hybrid Choice models

    Honour roll (Tableau d’honneur) of the Faculté des études supérieures et postdoctorales, 2010-2011. Microeconometric discrete choice models aim to explain the process of individual choice by consumers among a mutually exclusive, exhaustive and finite group of alternatives. Hybrid choice models are a generalization of standard discrete choice models where independent expanded models are considered simultaneously. In my dissertation I analyze, implement, and apply simultaneous estimation techniques for a hybrid choice model that, in the form of a complex generalized structural equation model, simultaneously integrates discrete choice and latent explanatory variables, such as attitudes and qualitative attributes. The motivation behind hybrid choice models is that the key to understanding choice comes through incorporating attitudinal and perceptual data into conventional economic models of decision making, taking elements from cognitive science and social psychology.
The Bayesian Gibbs sampler I derive for simultaneous estimation of hybrid choice models offers a consistent and efficient estimator that outperforms frequentist full-information simulated maximum likelihood. Whereas the frequentist estimator becomes fairly complex in situations with a large choice set of interdependent alternatives and a large number of latent variables, the inclusion of latent variables in the Bayesian approach translates into adding independent ordinary regressions. I also find that when using the Bayesian estimates it is easier to consider behavioral uncertainty; in fact, I show that forecasting and deriving confidence intervals for willingness-to-pay measures is straightforward. Finally, I confirm the capacity of hybrid choice modeling to adapt to practical situations. In particular, I analyze consumer response to innovation. For instance, I incorporate pro-environmental preferences toward low-emission vehicles into an economic model of purchase behavior where environmentally conscious consumers are willing to pay more for sustainable solutions despite potential drawbacks. In addition, using a probit kernel and dichotomous effect indicators, I show that prior knowledge as well as a positive attitude toward the adoption of new technologies favors the adoption of IP telephony.
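    The point that Bayesian estimation reduces latent variables to ordinary regressions can be illustrated in the simplest case the abstract mentions, a probit kernel. Below is a hedged Albert-Chib-style Gibbs sampler for a binary probit model, a minimal sketch rather than the dissertation's full hybrid choice sampler; the prior variance and the toy data are assumptions.

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_probit(X, y, iters=2000, tau2=100.0, seed=0):
    """Albert-Chib Gibbs sampler for binary probit (minimal sketch).

    Model: y_i = 1{z_i > 0}, z_i ~ N(x_i'beta, 1), beta ~ N(0, tau2 * I).
    Data augmentation makes each step an ordinary regression: draw the
    latent utilities z given beta, then draw beta by conjugate Bayesian
    linear regression of z on X.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    V = np.linalg.inv(X.T @ X + np.eye(d) / tau2)   # posterior covariance (unit error variance)
    L = np.linalg.cholesky(V)
    beta = np.zeros(d)
    draws = np.empty((iters, d))
    for it in range(iters):
        mu = X @ beta
        # z_i truncated to (0, inf) if y_i = 1, else (-inf, 0); bounds are standardized
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, random_state=rng)
        beta = V @ (X.T @ z) + L @ rng.normal(size=d)
        draws[it] = beta
    return draws

# Toy usage: recover beta = (1.0, -2.0) from simulated choices.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = (X @ np.array([1.0, -2.0]) + rng.normal(size=500) > 0).astype(int)
post = gibbs_probit(X, y)
print(post[500:].mean(axis=0))   # posterior mean after burn-in
```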

    Performance modelling of urban metro rail systems: an application of frontiers, regression, and causal inference techniques

    Metro rail plays a vital role in facilitating the travel needs of major urban economies and has contributed substantially to transporting the population within cities. However, implementing a safe service that meets the statutory requirements of operation is fraught with difficulties. Due to high capital expenditures and the need for public money, metros are politically sensitive and subject to scrutiny. Consequently, understanding variation in metro performance continues to be a major research objective. This has proven to be far from straightforward due to the complex nature of the industry and the fact that metro operators are generally monopolistic in nature, with no source of performance comparisons in the same region. This emphasises the need for an international comparison. This thesis focuses on technical efficiency, which concerns the use of input factors (such as capital and labour) to produce metro services. The study is bolstered by a high-quality panel dataset consisting of 27 metro systems for the period 2004 to 2012, providing additional insight into the variation of metro performance and addressing shortcomings in the literature such as the lack of appropriate data and the insufficient application of statistical techniques. Three empirical contributions are provided. Firstly, by assessing the relative performance of a group of metro systems through technical efficiency scores calculated using Stochastic Frontier Analysis, the study reveals a number of drivers of performance that affect output efficiency. Secondly, the study identifies reliability as a key influence, which is subsequently investigated further: count data regression models are estimated to reveal determinants of incidents which cause a delay to service and to provide a means for forecasting future incident rates. Finally, given the growing capacity restrictions experienced by metros, the study investigates the causal impact of introducing a technological treatment (in this case, moving block signalling) on technical efficiency using a Propensity Score Matching approach.
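    As a hedged sketch of the final step, the Python below implements nearest-neighbour propensity score matching on synthetic toy data; the covariates, outcome scale, and effect size are illustrative assumptions, not the thesis's specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def att_psm(X, treated, outcome):
    """Nearest-neighbour propensity score matching (minimal sketch).

    X: (n, d) covariates; treated: boolean array; outcome: float array
    (e.g., a technical efficiency score). Fits a logistic propensity
    model, matches each treated unit to the control with the closest
    score (with replacement), and returns the average treatment effect
    on the treated (ATT).
    """
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.flatnonzero(treated)
    c_idx = np.flatnonzero(~treated)
    # for each treated unit, index of the nearest control by propensity score
    nearest = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]
    return float(np.mean(outcome[t_idx] - outcome[nearest]))

# Toy usage with a known effect of +0.05 on efficiency for treated systems.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
p = 1 / (1 + np.exp(-X[:, 0]))            # treatment more likely for high X[:, 0]
treated = rng.random(300) < p
outcome = 0.6 + 0.1 * X[:, 0] + 0.05 * treated + 0.02 * rng.normal(size=300)
print("estimated ATT:", att_psm(X, treated, outcome))
```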

    Exponential time algorithms via separators and random subsets

    Exponential time algorithms for NP-hard problems form a rich and diverse research area. This thesis aims to improve known algorithms and give careful analyses of their running times by extending and applying known techniques such as graph separators and random subset selection. We first present a polynomial-space algorithm that computes the number of independent sets of any input graph in time O(1.1389^n) for graphs with maximum degree 3 and in time O(1.2356^n) for general graphs, where n is the number of vertices. Together with the inclusion-exclusion approach of [Björklund, Husfeldt, and Koivisto 2009], this leads to a faster polynomial-space algorithm for the graph coloring problem with running time O(2.2356^n). As a byproduct, we also obtain an exponential-space O(1.2330^n) time algorithm for counting independent sets. We also consider the family of Φ-Subset problems, where the input consists of an instance I of size N over a universe U_I of size n, and the task is to check whether the universe contains a subset with property Φ (e.g., Φ could be the property of being a feedback vertex set of size at most k for the input graph). Our main tool is a simple randomized algorithm which solves Φ-Subset in time (1 + b − 1/c)^n N^{O(1)}, provided that there is an algorithm for the Φ-Extension problem with running time b^{n−|X|} c^k N^{O(1)}. Here, the input for Φ-Extension is an instance I of size N over a universe U_I of size n, a subset X ⊆ U_I, and an integer k, and the task is to check whether there is a set Y with X ⊆ Y ⊆ U_I and |Y \ X| ≤ k with property Φ. We also derandomize this algorithm at the cost of increasing the running time by a subexponential factor in n, and we adapt it to the enumeration setting, where we need to enumerate all subsets of the universe with property Φ. Lastly, we consider the application of random subset selection to approximation algorithms and improve on known approximation algorithms [Escoffier, Paschos and Tourniaire 2016] through a careful application of subset selection and new analytic methods based on Monotone Local Search. This involves using fixed-parameter tractable algorithms as subroutines of an exponential-time approximation algorithm, and we also investigate using the parameterized approximation algorithms of [Kulik and Shachnai 2020] as subroutines.
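    The reduction behind the main tool can be sketched generically. Assuming an oracle `extend(X, budget)` for Φ-Extension, the code below is a minimal, hedged rendering of the randomized subset-sampling step; the sampling probability `q` and the retry bound are illustrative, whereas the thesis optimizes this trade-off against the oracle's b^{n−|X|} c^k cost.

```python
import math
import random

def phi_subset(universe, k, extend, q=0.5, seed=0):
    """Randomized reduction from Φ-Subset to Φ-Extension (minimal sketch).

    Decides whether `universe` contains a set Y of size at most k with
    property Φ, given an oracle extend(X, budget) that decides whether
    some Y with X ⊆ Y and |Y \\ X| <= budget has Φ. One round keeps each
    element in X independently with probability q; if a witness Y exists,
    a round succeeds whenever X ⊆ Y, which happens with probability at
    least (1 - q)^(n - k). Repeating ~3/p rounds drives the failure
    probability below e^-3; any "yes" answer is sound because the
    returned Y has size |X| + budget <= k.
    """
    assert 0 < q < 1
    rng = random.Random(seed)
    n = len(universe)
    p = (1.0 - q) ** max(n - k, 0)     # lower bound on per-round success probability
    tries = math.ceil(3.0 / p)         # exponentially many rounds, as expected
    for _ in range(tries):
        X = [u for u in universe if rng.random() < q]
        if len(X) <= k and extend(X, k - len(X)):
            return True
    return False
```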

    Optimising outcomes for potentially resectable pancreatic cancer through personalised predictive medicine : the application of complexity theory to probabilistic statistical modeling

    Survival outcomes for pancreatic cancer remain poor. Surgical resection with adjuvant therapy is the only potentially curative treatment, but for many people surgery is of limited benefit. Neoadjuvant therapy has emerged as an alternative treatment pathway; however, the evidence base surrounding the treatment of potentially resectable pancreatic cancer is highly heterogeneous and fraught with uncertainty and controversy. This research seeks to engage with conjunctive theorising by avoiding simplification and abstraction, drawing on different kinds of data from multiple sources to move research towards a theory that can build a rich picture of pancreatic cancer management pathways as a complex system. The overall aim is to move research towards personalised realistic medicine by using personalised predictive modeling to facilitate better decision making and achieve the optimisation of outcomes. This research is theory driven and empirically focused from a complexity perspective. Combining operational and healthcare research methodology, drawing on influences from the complementary paradigms of critical realism and systems theory, and enhancing their impact by using Cilliers’ complexity theory ‘lean ontology’, an open-world ontology is held and both epistemic reality and judgmental relativity are accepted. The use of imperfect data within statistical simulation models is explored to expand our capabilities for handling emergence and uncertainty, and to find other ways of relating to complexity within the field of pancreatic cancer research. Markov and discrete-event simulation modelling uncovered new insights and added a further dimension to the current debate by demonstrating that superior treatment pathway selection depended on individual patient and tumour factors. A Bayesian Belief Network was developed that modelled the dynamic nature of this complex system to make personalised prognostic predictions across competing treatment pathways throughout the patient journey, facilitating better shared clinical decision making with an accuracy exceeding that of existing predictive models.
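    As a hedged illustration of the kind of Markov simulation the abstract refers to, the sketch below compares expected survival under two hypothetical treatment pathways with a three-state cohort model; the states and transition probabilities are invented for illustration and carry no clinical meaning.

```python
import numpy as np

# Hypothetical three-state Markov cohort model comparing two treatment
# pathways; states and monthly transition probabilities are illustrative
# assumptions, not the thesis's calibrated values.
STATES = ["disease-free", "progression", "death"]

def expected_survival_months(P, horizon=120):
    """Expected months alive (any non-death state) for a cohort starting disease-free."""
    dist = np.array([1.0, 0.0, 0.0])
    months = 0.0
    for _ in range(horizon):
        months += dist[:2].sum()      # probability mass still alive this month
        dist = dist @ P
    return months

# Rows/columns ordered as STATES; each row sums to 1.
surgery_first = np.array([
    [0.96, 0.03, 0.01],
    [0.00, 0.90, 0.10],
    [0.00, 0.00, 1.00],
])
neoadjuvant = np.array([
    [0.97, 0.02, 0.01],
    [0.00, 0.88, 0.12],
    [0.00, 0.00, 1.00],
])
print("surgery-first :", expected_survival_months(surgery_first))
print("neoadjuvant   :", expected_survival_months(neoadjuvant))
```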