
    Distributed Fleet Management in Noisy Environments via Model-Predictive Control

    This object is the reproducibility package for the paper Distributed Fleet Management in Noisy Environments via Model-Predictive Control, accepted for publication at ICAPS '22. The package contains the software for executing the experiments, the data presented in the paper, examples of Uppaal models, and scripts for reproducing the experiments presented in the paper.

    Efficient Analysis and Synthesis of Complex Quantitative Systems


    Abstract Dependency Graphs for Model Verification


    Real-Time Optimisation of Signal Control: Intelligent Control of Signalised Intersections Using Machine Learning and Object Detection

    The article presents new principles for the control of signalised intersections. Using machine learning and object detection as a replacement for point detection and coordination, a controller for signalised intersections has been developed that, in microsimulation in VISSIM, shows a 30% to 50% reduction in mean delays, queue lengths, and number of stops across 4 coordinated intersections on Hobrovej in Aalborg. In the simulation study, fuel consumption and total travel time on the coordinated stretch are reduced by around 20%.

    Approximating Euclidean by Imprecise Markov Decision Processes

    Euclidean Markov decision processes are a powerful tool for modeling control problems under uncertainty over continuous domains. Finite-state imprecise Markov decision processes can be used to approximate the behavior of these infinite models. In this paper we address two questions. First, we investigate what kind of approximation guarantees are obtained when the Euclidean process is approximated by finite-state approximations induced by increasingly fine partitions of the continuous state space; we show that, for cost functions over finite time horizons, the approximations become arbitrarily precise. Second, we use imprecise Markov decision process approximations as a tool to analyse and validate cost functions and strategies obtained by reinforcement learning. We find that, on the one hand, our new theoretical results validate basic design choices of a previously proposed reinforcement learning approach; on the other hand, the imprecise Markov decision process approximations reveal some inaccuracies in the learned cost functions.
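    The partition-based abstraction described in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's implementation: the dynamics function `step`, the interval `[0, 1]`, and the grid size are all illustrative assumptions. It shows the core idea of covering a continuous state space with finitely many cells and recording, for each cell, the set of cells its states can reach — an over-approximation that tightens as the partition is refined.

    ```python
    # Hypothetical sketch: abstracting a 1-D continuous-state system over [0, 1]
    # by an equal-width grid partition. All names and dynamics are illustrative,
    # not taken from the paper.

    def make_partition(n_cells):
        """Split [0, 1] into n_cells equal cells; cell i is [i/n, (i+1)/n)."""
        return [(i / n_cells, (i + 1) / n_cells) for i in range(n_cells)]

    def step(x):
        """Toy deterministic dynamics on [0, 1] (an illustrative assumption)."""
        return min(1.0, 0.5 * x + 0.3)

    def successor_cells(cell, n_cells, samples=50):
        """Over-approximate the set of cells reachable from states in `cell`
        by sampling points inside it and abstracting where they land."""
        lo, hi = cell
        reached = set()
        for k in range(samples):
            x = lo + (hi - lo) * k / (samples - 1)
            y = step(x)
            reached.add(min(int(y * n_cells), n_cells - 1))
        return reached

    n = 10
    cells = make_partition(n)
    succ = {i: successor_cells(cells[i], n) for i in range(n)}
    # Refining the partition (larger n) shrinks each cell's successor set,
    # which is the intuition behind the finite-horizon precision guarantee.
    ```

    In the imprecise-MDP setting the successor sets would carry lower and upper probability bounds rather than mere reachability, but the refinement intuition is the same.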