
    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    AI: Limits and Prospects of Artificial Intelligence

    The emergence of artificial intelligence has triggered enthusiasm and the promise of boundless opportunities as much as uncertainty about its limits. The contributions to this volume explore the limits of AI, describe the necessary conditions for its functionality, reveal its attendant technical and social problems, and present some existing and potential solutions. At the same time, the contributors highlight the societal and attendant economic hopes and fears, utopias and dystopias, that are associated with the current and future development of artificial intelligence.

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions from researchers working in different fields of application and in mathematics, and is available in open access. The contributions collected in this volume were either published or presented after the fourth volume was disseminated in 2015, in international conferences, seminars, workshops and journals, or are new. The contributions in each part of this volume are ordered chronologically. The first part of this book presents theoretical advances in DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of the (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
Because more applications of DSmT have emerged since the fourth DSmT book appeared in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of a belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
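    To make the PCR idea concrete, here is a minimal sketch of the classical two-source PCR5 combination rule mentioned in the abstract: the conjunctive consensus is computed first, and each partial conflict m1(X)·m2(Y) with X ∩ Y = ∅ is redistributed back to X and Y in proportion to the masses that generated it. The tiny frame {A, B} and the example mass assignments are illustrative, not taken from the book.

```python
def pcr5_combine(m1, m2):
    """Fuse two basic belief assignments (dict: frozenset -> mass) with PCR5."""
    fused = {}
    for X, mx in m1.items():
        for Y, my in m2.items():
            inter = X & Y
            if inter:
                # consensus part: the product mass goes to the intersection
                fused[inter] = fused.get(inter, 0.0) + mx * my
            elif mx + my > 0:
                # conflicting part: proportional redistribution back to X and Y
                fused[X] = fused.get(X, 0.0) + (mx ** 2) * my / (mx + my)
                fused[Y] = fused.get(Y, 0.0) + (my ** 2) * mx / (my + mx)
    return fused

A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.6, B: 0.4}
m2 = {A: 0.3, B: 0.7}
fused = pcr5_combine(m1, m2)
print(fused)  # masses remain normalised: they still sum to 1
```

    Unlike Dempster's rule, no mass is discarded or transferred to a global conflict term, which is what makes PCR5 robust when sources are highly conflicting.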

    Summer/Fall 2023


    Machine learning in portfolio management

    Financial markets are difficult learning environments. The data generation process is time-varying, returns exhibit heavy tails, and the signal-to-noise ratio tends to be low. These factors contribute to the challenge of applying sophisticated, high-capacity learning models in financial markets. Driven by recent advances of deep learning in other fields, we focus on applying deep learning in a portfolio management context. This thesis contains three distinct but related contributions to the literature. First, we consider the problem of neural network training in a time-varying context. This results in a neural network that can adapt to a data generation process that changes over time. Second, we consider the problem of learning in noisy environments. We propose to regularise the neural network using a supervised autoencoder and show that this improves the generalisation performance of the neural network. Third, we consider the problem of quantifying forecast uncertainty in time series with volatility clustering. We propose a unified framework for the quantification of forecast uncertainty that results in uncertainty estimates that closely match actual realised forecast errors in cryptocurrencies and U.S. stocks.
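    The supervised-autoencoder regularisation described in the second contribution can be sketched as a single shared bottleneck serving two losses: the network must reconstruct its noisy inputs and predict the target from the same representation, so the reconstruction term regularises the supervised task. The layer sizes, synthetic data, and trade-off weight `lam` below are illustrative assumptions, not the thesis's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 64, 10, 4                        # samples, features, bottleneck width

X = rng.standard_normal((n, d))            # noisy features (e.g. asset returns)
y = X @ rng.standard_normal(d) * 0.1       # low signal-to-noise target

W_enc = rng.standard_normal((d, k)) * 0.1  # encoder weights
W_dec = rng.standard_normal((k, d)) * 0.1  # decoder weights
w_sup = rng.standard_normal(k) * 0.1       # supervised prediction head

Z = np.tanh(X @ W_enc)                     # shared bottleneck representation
recon_loss = np.mean((X - Z @ W_dec) ** 2) # autoencoder term
sup_loss = np.mean((y - Z @ w_sup) ** 2)   # supervised term

lam = 0.5                                  # trade-off between the two losses
total_loss = recon_loss + lam * sup_loss
print(recon_loss, sup_loss, total_loss)
```

    In training, gradients from both terms flow into the shared encoder, which pushes the bottleneck toward features that explain the inputs as well as the target and tends to improve generalisation in noisy regimes.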

    LIPIcs, Volume 261, ICALP 2023, Complete Volume


    Deep Reinforcement Learning and Game Theoretic Monte Carlo Decision Process for Safe and Efficient Lane Change Maneuver and Speed Management

    Predicting the states of the surrounding traffic is one of the major problems in automated driving. Maneuvers such as lane changes, merges, and exit management pose challenges in the absence of intervehicular communication and can benefit from driver behavior prediction. Predicting the motion of surrounding vehicles and planning trajectories must be computationally efficient for real-time implementation. This dissertation presents a decision process model for real-time automated lane change and speed management in highway and urban traffic. In lane change and merge maneuvers, it is important to know how neighboring vehicles will act in the imminent future. Human driver models, probabilistic approaches, rule-based techniques, and machine learning approaches have addressed this problem only partially, as they do not focus on the behavioral features of the vehicles. The main goal of this research is to develop a fast algorithm that predicts the future states of the neighboring vehicles, runs a fast decision process, and learns the regret and reward associated with the executed decisions. The presented algorithm is based on level-K game theory to model and predict the interaction between the vehicles. Using deep reinforcement learning, the algorithm encodes and memorizes past experiences, which are recurrently reused to reduce computation and speed up motion planning. We also use Monte Carlo Tree Search (MCTS), an effective tool for fast planning in complex and dynamic game environments. This development leverages computational power efficiently and shows promising outcomes for maneuver planning and for predicting the environment’s dynamics. In the absence of traffic connectivity, whether due to a passenger’s choice of privacy or a vehicle’s lack of the technology, this development can be extended and employed in automated vehicles for real-world, practical applications.
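    The level-K interaction model mentioned above can be sketched with a toy two-player matrix game: level 0 is non-strategic (uniform random), and each level k best-responds to an opponent modelled as level k-1. The 2×2 "merge vs. yield" payoffs, action names, and depths below are illustrative toys, not the dissertation's actual model.

```python
import numpy as np

def level_k_policy(payoff, opp_payoff, k):
    """Mixed strategy of a level-k player in a two-player matrix game.

    Level 0 plays uniformly at random; level k puts all its mass on the
    best response to an opponent modelled as level k-1 (roles swapped).
    """
    n = payoff.shape[0]
    if k == 0:
        return np.full(n, 1.0 / n)            # level 0: uniform over actions
    opp_dist = level_k_policy(opp_payoff, payoff, k - 1)
    best = int(np.argmax(payoff @ opp_dist))  # best response to level k-1
    dist = np.zeros(n)
    dist[best] = 1.0
    return dist

# Actions: 0 = merge, 1 = yield. Both merging causes a collision (-10),
# merging against a yielding driver succeeds (+5), yielding costs delay (-1).
P = np.array([[-10.0, 5.0],
              [-1.0, -1.0]])

for k in (1, 2, 3):
    action = int(np.argmax(level_k_policy(P, P, k)))
    print(f"level-{k} driver chooses:", ["merge", "yield"][action])
```

    Note how the predicted behavior alternates with depth (a cautious level-1 driver yields, a level-2 driver exploits that and merges), which is exactly the kind of interaction-aware prediction the dissertation couples with learning and MCTS.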

    The universe without us: a history of the science and ethics of human extinction

    This dissertation consists of two parts. Part I is an intellectual history of thinking about human extinction (mostly) within the Western tradition. When did our forebears first imagine humanity ceasing to exist? Have people always believed that human extinction is a real possibility, or were some convinced that this could never happen? How has our thinking about extinction evolved over time? Why do so many notable figures today believe that the probability of extinction this century is higher than ever before in our 300,000-year history on Earth? Exploring these questions takes readers from the ancient Greeks, Persians, and Egyptians, through the 18th-century Enlightenment, past scientific breakthroughs of the 19th century like thermodynamics and evolutionary theory, up to the Atomic Age, the rise of modern environmentalism in the 1970s, and contemporary fears about climate change, global pandemics, and artificial general intelligence (AGI). Part II is a history of Western thinking about the ethical and evaluative implications of human extinction. Would causing or allowing our extinction be morally right or wrong? Would our extinction be good or bad, better or worse compared to continuing to exist? For what reasons? Under which conditions? Do we have a moral obligation to create future people? Would past “progress” be rendered meaningless if humanity were to die out? Does the fact that we might be unique in the universe—the only “rational” and “moral” creatures—give us extra reason to ensure our survival? 
I place these questions under the umbrella of Existential Ethics, tracing the development of this field from the early 1700s through Mary Shelley’s 1826 novel The Last Man, the gloomy German pessimists of the latter 19th century, and post-World War II reflections on nuclear “omnicide,” up to current-day thinkers associated with “longtermism” and “antinatalism.” In the dissertation, I call the first history “History #1” and the second “History #2.” A main thesis of Part I is that Western thinking about human extinction can be segmented into five distinct periods, each of which corresponds to a unique “existential mood.” An existential mood arises from a particular set of answers to fundamental questions about the possibility, probability, etiology, and so on, of human extinction. I claim that the idea of human extinction first appeared among the ancient Greeks, but was eclipsed for roughly 1,500 years with the rise of Christianity. A central contention of Part II is that philosophers have thus far conflated six distinct types of “human extinction,” each of which has its own unique ethical and evaluative implications. I further contend that it is crucial to distinguish between the process or event of Going Extinct and the state or condition of Being Extinct, which one should see as orthogonal to the six types of extinction that I delineate. My aim with the second part of the book is not only to trace the history of Western thinking about the ethics of annihilation, but also to lay the theoretical groundwork for future research on the topic. I then outline my own views within Existential Ethics, which combine ideas and positions to yield a novel account of the conditions under which our extinction would be bad, and why there is a sense in which Being Extinct might be better than Being Extant, or continuing to exist.