102 research outputs found

    Impact of alife simulation of Darwinian and Lamarckian evolutionary theories

    Dissertation presented as partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management. To this day, the scientific community has firmly rejected the Theory of Inheritance of Acquired Characteristics, a theory mostly associated with the name of Jean-Baptiste Lamarck (1744-1829). Though largely dismissed when applied to biological organisms, this theory has found a place in the young discipline of Artificial Life. Based on two abstract models of Darwinian and Lamarckian evolutionary theory built using neural networks and genetic algorithms, this research aims to assess the potential impact of implementing Lamarckian knowledge inheritance across disciplines. To obtain our results, we conducted a focus group discussion among experts in biology, computer science and philosophy, and used their opinions as qualitative data. From this procedure we identified implications of such an implementation in each of these disciplines. In synthetic biology, it means that we could engineer organisms precisely to our specific needs; at the moment, we can think of better drugs, greener fuels and dramatic changes in the chemical industry. In computer science, Lamarckian evolutionary algorithms have been used successfully for many years, but their application in strong ALife can only be approximated based on the existing roadmaps of futurists. In philosophy, creating artificial life seems consistent with nature and even God, if there is one; at the same time, the implementation may contradict the concept of free will, defined as the capacity of an agent to make choices whose outcome has not been determined by past events. This study has certain limitations: a larger focus group and better-prepared participants would provide more precise results.
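    To make the distinction concrete, the sketch below contrasts Darwinian and Lamarckian inheritance in a toy genetic algorithm over neural-network-style weight vectors. It is not the model from the dissertation: the fitness function, the hill-climbing stand-in for lifetime learning, and all parameters are illustrative assumptions.

```python
# Minimal sketch (not the thesis code): Darwinian vs. Lamarckian inheritance
# for a genetic algorithm that evolves neural-network-style weight vectors.
import numpy as np

rng = np.random.default_rng(0)

def fitness(weights):
    # Placeholder task: maximize negative squared distance to a target vector.
    target = np.ones_like(weights)
    return -np.sum((weights - target) ** 2)

def lifetime_learning(weights, steps=10):
    # Gradient-free "learning" during the lifetime (simple hill climbing).
    best = weights.copy()
    for _ in range(steps):
        candidate = best + rng.normal(0, 0.05, size=best.shape)
        if fitness(candidate) > fitness(best):
            best = candidate
    return best

def evolve(pop_size=20, genome_len=8, generations=50, lamarckian=False):
    population = rng.normal(0, 1, size=(pop_size, genome_len))
    for _ in range(generations):
        learned = np.array([lifetime_learning(ind) for ind in population])
        # Darwinian: selection uses the learned fitness, but offspring inherit
        # the original genomes.  Lamarckian: the acquired (learned) weights are
        # written back into the genome before reproduction.
        scores = np.array([fitness(ind) for ind in learned])
        parents = learned if lamarckian else population
        order = np.argsort(scores)[::-1]
        elite = parents[order[: pop_size // 2]]
        offspring = elite + rng.normal(0, 0.1, size=elite.shape)
        population = np.vstack([elite, offspring])
    return max(fitness(ind) for ind in population)

print("Darwinian :", evolve(lamarckian=False))
print("Lamarckian:", evolve(lamarckian=True))
```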

    Autonomous Driving at Intersections: A Critical-Turning-Point Approach for Planning and Decision Making

    Left turns at unsignalized intersections are among the most challenging tasks for urban automated driving, due to the varied geometry of intersections and the rapidly changing nature of the driving scenarios. Many algorithms, including rule-based, graph-based and optimization-based approaches, have been developed to address these problems. However, most implementations find it difficult to guarantee safety at intersections in real time because of the large uncertainty involved. Other algorithms, which aim to always keep a safe distance in all cases, often become overly conservative, which can also be dangerous and inefficient. This thesis addresses this challenge by proposing a generalized critical turning point (CTP) based hierarchical decision-making and planning method, which enables safe and efficient planning and decision making for autonomous vehicles. The high-level candidate-path planner takes road map information and generates CTPs using a parameterized CTP extraction model that is proposed and verified with naturalistic driving data. The CTP is a novel concept, and the corresponding CTP model is used to generate behavior-oriented paths that adapt to various intersections. These modifications help to ensure high search efficiency in the planning process while enabling human-like driving behavior of the autonomous vehicle. The low-level planner formulates the decision-making task as a partially observable Markov decision process (POMDP) that considers the uncertainty of the agents at the intersection. The POMDP is then solved with a Monte Carlo tree search (MCTS) based framework to select a suitable candidate path and decide the actions along it. The proposed framework using CTPs is tested in several critical scenarios and outperforms methods that do not use CTPs. The framework adapts to intersections of various shapes, with different numbers of road lanes and different median strip widths, and completes left turns while keeping proper safety distances. Because the CTP concept is derived from human left-turning behavior, the framework performs human-like maneuvers that are easier for other agents at the intersection to anticipate, which further improves the safety of the ego vehicle. The framework also allows personalized tuning of the desired real-time performance and the corresponding stability. The POMDP model, which considers the unknown intentions of the surrounding vehicles, also enables the framework to provide commute-efficient two-dimensional planning and decision making. In all, the proposed framework enables the ego vehicle to perform less conservative, human-like actions while considering the potential for crashes in real time, which not only improves commute efficiency but also enables urban autonomous vehicles to integrate naturally into scenarios with human-driven vehicles in a friendly manner.
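    The following sketch illustrates, under heavy simplification, the kind of decision the low-level layer makes: sampling the unknown intention of an oncoming vehicle and scoring candidate turning paths and accelerations. It is not the thesis implementation; the candidate paths, the intention model, and the flat Monte Carlo evaluation used here in place of the full MCTS-based POMDP solver are all assumptions for illustration.

```python
# Minimal sketch (not the thesis code): choosing a candidate turning path and a
# longitudinal action at an unsignalized intersection by sampling the unknown
# intention of an oncoming vehicle.  A flat Monte Carlo evaluation stands in
# for the MCTS-based POMDP solver described in the abstract.
import random

CANDIDATE_PATHS = ["early_turn", "nominal_turn", "late_turn"]   # hypothetical CTP-based paths
ACTIONS = [-2.0, 0.0, 1.5]                                      # accelerations in m/s^2
INTENTIONS = ["yield", "go"]                                    # unknown intent of oncoming car

def simulate(path, accel, intention, horizon=20, dt=0.2):
    """Very coarse forward simulation; returns a scalar reward."""
    ego_s, ego_v = 0.0, 5.0            # ego progress along the turning path [m], speed [m/s]
    other_s = 30.0                     # oncoming vehicle's distance to the conflict point [m]
    other_v = 2.0 if intention == "yield" else 8.0
    conflict = {"early_turn": 8.0, "nominal_turn": 10.0, "late_turn": 12.0}[path]
    reward = 0.0
    for _ in range(horizon):
        ego_v = max(0.0, ego_v + accel * dt)
        ego_s += ego_v * dt
        other_s -= other_v * dt
        if abs(ego_s - conflict) < 2.0 and other_s < 4.0:
            return -100.0              # both vehicles near the conflict point: collision risk
        reward += ego_v * dt * 0.1     # reward progress through the turn
    return reward

def choose(n_samples=200):
    best, best_value = None, float("-inf")
    for path in CANDIDATE_PATHS:
        for accel in ACTIONS:
            value = sum(simulate(path, accel, random.choice(INTENTIONS))
                        for _ in range(n_samples)) / n_samples
            if value > best_value:
                best, best_value = (path, accel), value
    return best, best_value

print(choose())
```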

    Towards Learning Feasible Hierarchical Decision-Making Policies in Urban Autonomous Driving

    Modern learning-based algorithms, powered by advanced deep structured neural nets, have facilitated automated driving platforms in many ways, spanning from scene characterization and perception to low-level control and state estimation. Nonetheless, urban autonomous driving is regarded as a challenging application for machine learning (ML) and artificial intelligence (AI), since the learnt driving policies must handle complex multi-agent driving scenarios with uncertain intentions of road participants. In the case of unsignalized intersections, automating the decision-making process in these safety-critical environments entails comprehending numerous layers of abstraction associated with learning robust driving behaviors that allow the vehicle to drive safely and efficiently. Based on our in-depth investigation, we find that an efficient, yet safe, decision-making scheme for navigating real-world unsignalized intersections does not yet exist. State-of-the-art schemes lack the practicality to handle complex real-life scenarios because they rely on low-fidelity vehicle dynamics models, which makes them incapable of simulating real dynamic motion in real-life driving applications. In addition, the conservative behavior of autonomous vehicles, which often overreact to low-likelihood threats, degrades overall driving quality and jeopardizes safety. Hence, enhancing driving behavior is essential to attain agile, yet safe, traversing maneuvers in such multi-agent environments. The main goal of this PhD research is therefore to develop high-fidelity learning-based frameworks to enhance the autonomous decision-making process in these safety-critical environments. We focus this PhD dissertation on three correlated and complementary research challenges. In the first research challenge, we conduct an in-depth and comprehensive survey of state-of-the-art learning-based decision-making schemes with the objective of identifying the main shortcomings and potential research avenues. Based on the research directions concluded from this survey, we propose, in Problem II and Problem III, novel learning-based frameworks with the objective of enhancing safety and efficiency at different decision-making levels. In Problem II, we develop a novel sensor-independent state estimation scheme for a safety-critical system in urban driving using deep learning techniques. A neural inference model is developed and trained via deep-learning techniques to obtain accurate state estimates from indirect measurements of vehicle dynamic states and powertrain states. In Problem III, we propose a novel hierarchical reinforcement learning-based decision-making architecture for learning left-turn policies at four-way unsignalized intersections with feasibility guarantees. The proposed technique integrates two main decision-making layers: a high-level learning-based behavioral planning layer, which adopts soft actor-critic principles to learn non-conservative yet safe driving behaviors, and a low-level motion planning layer that uses Model Predictive Control (MPC) to ensure feasibility of the two-dimensional left-turn maneuver. The high-level layer generates reference velocity and yaw-angle signals for the ego vehicle, taking into account safety and collision avoidance with the vehicles at the intersection, whereas the low-level planning layer solves an optimization problem to track these reference commands while considering several vehicle dynamic constraints and ride comfort.
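    A rough sketch of the two-layer interface described above follows. The trained soft actor-critic policy is replaced by a hand-written stub that emits a reference speed and yaw angle, and the MPC layer is reduced to a short-horizon tracking optimization with simple bounds; the dynamics, limits, and observation fields are illustrative assumptions rather than the dissertation's models.

```python
# Minimal sketch (not the dissertation code): interface between a high-level
# learned behavioral layer and a low-level tracking layer.  The SAC actor is
# replaced by a stub, and the MPC is reduced to a short-horizon optimization
# over acceleration and yaw rate with simple comfort/feasibility bounds.
import numpy as np
from scipy.optimize import minimize

def high_level_policy(obs):
    """Stand-in for the trained SAC actor: returns reference speed and yaw angle."""
    gap = obs["gap_to_oncoming"]                       # e.g. slow down when the gap is small
    v_ref = 3.0 if gap < 15.0 else 6.0
    yaw_ref = np.pi / 2 * min(1.0, obs["progress"])    # sweep from 0 to 90 degrees
    return v_ref, yaw_ref

def low_level_mpc(state, v_ref, yaw_ref, horizon=5, dt=0.2):
    """Tracks the references with a kinematic model; returns first (accel, yaw_rate)."""
    def cost(u):
        u = u.reshape(horizon, 2)                      # columns: acceleration, yaw rate
        v, yaw, c = state["v"], state["yaw"], 0.0
        for a, r in u:
            v = np.clip(v + a * dt, 0.0, 15.0)
            yaw = yaw + r * dt
            c += (v - v_ref) ** 2 + 5.0 * (yaw - yaw_ref) ** 2 + 0.1 * (a ** 2 + r ** 2)
        return c
    bounds = [(-3.0, 2.0), (-0.6, 0.6)] * horizon      # accel and yaw-rate limits
    sol = minimize(cost, np.zeros(horizon * 2), bounds=bounds, method="L-BFGS-B")
    return sol.x[0], sol.x[1]                          # first control of the horizon

obs = {"gap_to_oncoming": 12.0, "progress": 0.3}
state = {"v": 5.0, "yaw": 0.1}
v_ref, yaw_ref = high_level_policy(obs)
print(low_level_mpc(state, v_ref, yaw_ref))
```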

    Motion Planning for Autonomous Vehicles in Partially Observable Environments

    Uncertainties arising from sensor noise or from the unobservable maneuver intentions of other road users accumulate in the data-processing chain of an autonomous vehicle and lead to an incomplete or misinterpreted representation of the environment. As a result, motion planners often exhibit conservative behavior. This dissertation develops two motion planners that compensate for the deficits of the upstream processing modules by exploiting the vehicle's ability to react. The work first presents an extensive analysis of the causes and classification of the uncertainties and identifies the properties of an ideal motion planner. It then addresses the mathematical modeling of the driving objectives as well as the constraints that guarantee safety. The resulting planning problem is solved in real time with two different methods: first with nonlinear optimization, and then by formulating it as a partially observable Markov decision process (POMDP) and approximating the solution by sampling. The planner based on nonlinear optimization considers several maneuver options with individual occurrence probabilities and computes a motion profile from them. It guarantees safety by ensuring the feasibility of a chance-constrained fallback option. The contribution to the POMDP framework focuses on improving sample efficiency in Monte Carlo planning. First, information rewards are defined that guide the samples toward actions yielding a higher reward; sample selection for the reward-shaped problem is further improved by using a general heuristic. Second, continuity in the reward structure is exploited for action selection, which yields significant performance improvements. Evaluations show that these planners achieve great success in driving tests and in simulation studies with complex interaction models.
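    As a rough illustration of the chance-constrained fallback idea in the optimization-based planner, the sketch below accepts a planned acceleration only if an emergency-braking fallback keeps a safe gap with high probability under a sampled intention model for the lead vehicle. The intention probabilities, dynamics, and thresholds are invented for illustration and do not come from the dissertation.

```python
# Minimal sketch (not the dissertation code): a chance-constrained fallback check.
# The ego vehicle commits to a planned acceleration only if an emergency-braking
# fallback keeps a safe gap to the lead vehicle with probability >= 1 - epsilon,
# where the lead vehicle's behavior is sampled from a simple intention model.
import random

def fallback_feasible(ego_v, gap, planned_accel, dt=0.2, horizon=25,
                      brake=-6.0, safe_gap=2.0, epsilon=0.05, n_samples=500):
    violations = 0
    for _ in range(n_samples):
        # Sample the other vehicle's (unobservable) intention: keep speed or brake hard.
        lead_accel = 0.0 if random.random() < 0.8 else -4.0
        v, g, lead_v = ego_v, gap, 8.0
        for step in range(horizon):
            # One step with the intended acceleration, then fallback braking afterwards.
            a = planned_accel if step == 0 else brake
            v = max(0.0, v + a * dt)
            lead_v = max(0.0, lead_v + lead_accel * dt)
            g += (lead_v - v) * dt
            if g < safe_gap:
                violations += 1
                break
    return violations / n_samples <= epsilon

print(fallback_feasible(ego_v=10.0, gap=20.0, planned_accel=1.0))
```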

    The Data Science Design Manual


    Formal concept matching and reinforcement learning in adaptive information retrieval

    The superiority of the human brain in information retrieval (IR) tasks seems to come, firstly, from its ability to read and understand the concepts, ideas or meanings central to documents in order to reason about the usefulness of documents to information needs, and secondly from its ability to learn from experience and adapt to the environment. In this work we attempt to incorporate these properties into the development of an IR model to improve document retrieval. We investigate the applicability of concept lattices, which are based on the theory of Formal Concept Analysis (FCA), to the representation of documents. This allows the use of more elegant representation units, as opposed to keywords, in order to better capture the concepts and ideas expressed in natural language text. We also investigate the use of a reinforcement learning strategy to learn and improve document representations, based on the information present in query statements and user relevance feedback. Features or concepts of each document and query, formulated using FCA, are weighted separately with respect to the documents they are in, and organised into separate concept lattices according to a subsumption relation. Furthermore, each concept lattice is encoded in a two-layer neural network structure known as a Bidirectional Associative Memory (BAM), for efficient manipulation of the concepts in the lattice representation. This avoids implementation drawbacks faced by other FCA-based approaches. Retrieval of a document for an information need is based on concept matching between the concept lattice representations of a document and a query. The learning strategy works by making the similarity of relevant documents stronger and that of non-relevant documents weaker for each query, depending on the users' relevance judgements on retrieved documents. Our approach is radically different from existing FCA-based approaches in the following respects: concept formulation; weight assignment to object-attribute pairs; the representation of each document in a separate concept lattice; and the encoding of concept lattices in BAM structures. Furthermore, in contrast to the traditional relevance feedback mechanism, our learning strategy makes use of relevance feedback information to enhance document representations, thus making them dynamic and adaptive to user interactions. The results obtained on the CISI, CACM and ASLIB Cranfield collections are presented and compared with published results. In particular, the performance of the system is shown to improve significantly as the system learns from experience. The School of Computing, University of Plymouth, UK.
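    The sketch below illustrates only the relevance-feedback loop in miniature: documents are reduced to weighted term sets and matched to a query by overlap, and judged documents have their weights strengthened or weakened. The concept-lattice construction and BAM encoding of the thesis are deliberately omitted; the data and the update rule are illustrative assumptions.

```python
# Minimal sketch (not the thesis system): weighted matching with a
# relevance-feedback update.  Documents and queries are reduced to weighted
# term sets; the concept-lattice and BAM machinery of the thesis is replaced
# by a simple overlap score to illustrate the feedback loop only.
docs = {
    "d1": {"neural": 1.0, "network": 1.0, "memory": 1.0},
    "d2": {"concept": 1.0, "lattice": 1.0, "retrieval": 1.0},
    "d3": {"retrieval": 1.0, "feedback": 1.0, "learning": 1.0},
}

def score(doc_weights, query_terms):
    # Sum the document's weights for the query terms it contains.
    return sum(doc_weights.get(t, 0.0) for t in query_terms)

def relevance_feedback(doc_weights, query_terms, relevant, lr=0.2):
    """Strengthen (or weaken) the weights of query terms in a judged document."""
    sign = 1.0 if relevant else -1.0
    for t in query_terms:
        if t in doc_weights:
            doc_weights[t] = max(0.0, doc_weights[t] + sign * lr)

query = {"retrieval", "learning"}
ranking = sorted(docs, key=lambda d: score(docs[d], query), reverse=True)
print("before feedback:", ranking)

relevance_feedback(docs["d3"], query, relevant=True)    # user marks d3 relevant
relevance_feedback(docs["d2"], query, relevant=False)   # user marks d2 non-relevant
ranking = sorted(docs, key=lambda d: score(docs[d], query), reverse=True)
print("after feedback :", ranking)
```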

    Enhanced Pump Schedule Optimization For Large Water Distribution Networks To Maximize Environmental And Economic Benefits

    For more than four decades, researchers have tried to develop optimization methods and tools to reduce the electricity consumption of pump stations in water distribution systems. Following this research trend, commercial pump operation optimization software was introduced to the market about a decade ago. Using metaheuristic and evolutionary techniques (e.g., genetic algorithms), some commercial and research tools can optimize the electricity cost of small water distribution systems (WDSs). Still, reducing the environmental footprint of these systems and dealing with large, complicated networks remains a challenge. In this study, we aimed to develop a multiobjective optimization tool (PEPSO) for reducing the electricity cost and the pollution emissions (associated with energy consumption) of pump stations in WDSs. PEPSO is designed with a user-friendly graphical interface and state-of-the-art internal functions and procedures that let users define and run customized optimization scenarios even for medium and large WDSs. A customized version of the non-dominated sorting genetic algorithm II (NSGA-II) is used as the core optimizer. The EPANET toolkit is used as PEPSO's hydraulic solver. In addition, a module is developed for training and using an artificial neural network in place of the high-fidelity hydraulic model to speed up the optimization process. A measure called "undesirability" is also introduced to help PEPSO find promising search directions and to ensure that the final results are desirable and practical. PEPSO is tested on the detailed hydraulic model of the WDS of Monroe, MI, USA, and the skeletonized hydraulic model of the WDS of Richmond, UK. The various features of PEPSO are tested under 8 different scenarios, and its results are compared with those of Darwin Scheduler, a well-known commercial package in this field. The test results show that, in a reasonable amount of time, PEPSO is able to optimize and provide sensible results for a medium-size WDS model with 13 pumps and thousands of system components under different scenarios. It is also concluded that, in many respects, this tool can provide better results than the well-known commercial optimization tool on the market.
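    To show the shape of the objective evaluation that an NSGA-II style optimizer would call, the sketch below scores a 24-hour pump schedule for electricity cost, emissions, and a crude penalty standing in for the "undesirability" measure. It is not PEPSO: the EPANET hydraulic solver is replaced by a toy tank mass balance, and the tariff, emission factors, demands, and pump data are made-up numbers.

```python
# Minimal sketch (not PEPSO itself): evaluating the objectives of a pump
# schedule so it could be fed to an NSGA-II style optimizer.  The hydraulic
# solver (EPANET in the thesis) is replaced by a crude tank mass balance, and
# the tariff, emission factors, and demands are hypothetical numbers.
import random

HOURS = 24
TARIFF = [0.08 if h < 7 else 0.15 for h in range(HOURS)]      # $/kWh, hypothetical
EMISSION = [0.4 if h < 7 else 0.7 for h in range(HOURS)]      # kg CO2/kWh, hypothetical
DEMAND = [80 + 40 * (7 <= h <= 21) for h in range(HOURS)]     # m^3/h, hypothetical
PUMP_FLOW, PUMP_KW = 150.0, 45.0                               # per pump-hour

def evaluate(schedule):
    """schedule: 24 ints (pumps running each hour).  Returns
    (electricity cost, emissions, undesirability-style penalty)."""
    cost = emissions = penalty = 0.0
    tank = 500.0                                   # initial storage, m^3
    for h, n_pumps in enumerate(schedule):
        energy = n_pumps * PUMP_KW
        cost += energy * TARIFF[h]
        emissions += energy * EMISSION[h]
        tank += n_pumps * PUMP_FLOW - DEMAND[h]
        if tank < 100.0:                           # low storage ~ low pressure
            penalty += 100.0 - tank
        tank = min(tank, 1000.0)                   # overflow is simply spilled
    if tank < 500.0:                               # end-of-day storage recovery
        penalty += 500.0 - tank
    return cost, emissions, penalty

# Example: compare a random schedule against an off-peak-heavy schedule.
random_schedule = [random.randint(0, 2) for _ in range(HOURS)]
off_peak_schedule = [2 if TARIFF[h] < 0.1 else 1 for h in range(HOURS)]
print("random  :", evaluate(random_schedule))
print("off-peak:", evaluate(off_peak_schedule))
```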