2,199 research outputs found

    Stability and Complexity of Minimising Probabilistic Automata

    We consider the state-minimisation problem for weighted and probabilistic automata. We provide a numerically stable polynomial-time minimisation algorithm for weighted automata, with guaranteed bounds on the numerical error when run with floating-point arithmetic. Our algorithm can also be used for "lossy" minimisation with bounded error. We show an application in image compression. In the second part of the paper we study the complexity of the minimisation problem for probabilistic automata. We prove that the problem is NP-hard and in PSPACE, improving a recent EXPTIME result. (Comment: this is the full version of an ICALP'14 paper.)
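
    To make the weighted-automaton setting concrete, the sketch below represents an automaton as an initial vector, one transition matrix per letter and a final vector, and shrinks it by projecting onto an orthonormal basis of its forward (reachable) space. This is only an illustrative reduction in Python with an invented toy automaton and tolerance; it is not the numerically stable minimisation algorithm described in the abstract, which also covers the error analysis and the backward direction.

```python
import numpy as np

def forward_reduce(alpha, transitions, eta, tol=1e-9):
    """Project a weighted automaton onto its forward (reachable) space.

    alpha: (n,) initial weights, transitions: dict letter -> (n, n) matrix,
    eta: (n,) final weights.  Returns a smaller automaton assigning the same
    weight to every word (full minimisation would also reduce backwards).
    """
    basis = []                          # orthonormal rows spanning span{alpha @ M_w}
    queue = [alpha.astype(float)]
    while queue:
        v = queue.pop()
        for b in basis:                 # Gram-Schmidt against the current basis
            v = v - (v @ b) * b
        norm = np.linalg.norm(v)
        if norm > tol:                  # a genuinely new direction: keep and expand it
            v = v / norm
            basis.append(v)
            queue.extend(v @ M for M in transitions.values())
    B = np.array(basis)                 # (m, n) with m <= n
    return (alpha @ B.T,
            {a: B @ M @ B.T for a, M in transitions.items()},
            B @ eta)

# Toy automaton: states 2 and 3 have identical rows, so the forward space
# is 2-dimensional and the reduced automaton has 2 states.
alpha = np.array([1.0, 0.0, 0.0])
Ms = {"a": np.array([[0.0, 0.5, 0.5],
                     [0.3, 0.35, 0.35],
                     [0.3, 0.35, 0.35]])}
eta = np.array([0.0, 1.0, 1.0])
alpha2, Ms2, eta2 = forward_reduce(alpha, Ms, eta)
```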

    Quantitative multi-objective verification for probabilistic systems

    We present a verification framework for analysing multiple quantitative objectives of systems that exhibit both nondeterministic and stochastic behaviour. These systems are modelled as probabilistic automata, enriched with cost or reward structures that capture, for example, energy usage or performance metrics. Quantitative properties of these models are expressed in a specification language that incorporates probabilistic safety and liveness properties, expected total cost or reward, and supports multiple objectives of these types. We propose and implement an efficient verification framework for such properties and then present two distinct applications of it: firstly, controller synthesis subject to multiple quantitative objectives; and, secondly, quantitative compositional verification. The practical applicability of both approaches is illustrated with experimental results from several large case studies.
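
    As background for the kind of quantitative objective mentioned above, the following sketch runs plain value iteration for the maximum probability of reaching a target set in a small Markov decision process. The model, state names and convergence threshold are invented for illustration; this is not the multi-objective framework or the implementation described in the abstract.

```python
def max_reach_prob(states, actions, P, target, iters=10000, eps=1e-10):
    """Value iteration for maximum reachability probability in a small MDP.

    P[(s, a)] is a list of (probability, successor) pairs; actions(s) lists
    the actions enabled in state s.
    """
    x = {s: 1.0 if s in target else 0.0 for s in states}
    for _ in range(iters):
        y = {}
        for s in states:
            if s in target:
                y[s] = 1.0
            else:
                # Bellman backup: best action over probability-weighted successors.
                y[s] = max(sum(p * x[t] for p, t in P[(s, a)]) for a in actions(s))
        if max(abs(y[s] - x[s]) for s in states) < eps:
            return y
        x = y
    return x

# Toy model: from s0, action 'a' reaches the goal with probability 0.5 outright,
# while action 'b' reaches it with probability 0.9 and otherwise retries.
P = {("s0", "a"): [(0.5, "goal"), (0.5, "fail")],
     ("s0", "b"): [(0.9, "goal"), (0.1, "s0")],
     ("goal", "a"): [(1.0, "goal")], ("goal", "b"): [(1.0, "goal")],
     ("fail", "a"): [(1.0, "fail")], ("fail", "b"): [(1.0, "fail")]}
print(max_reach_prob(["s0", "goal", "fail"], lambda s: ["a", "b"], P, {"goal"}))
```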

    Generalised additive multiscale wavelet models constructed using particle swarm optimisation and mutual information for spatio-temporal evolutionary system representation

    A new class of generalised additive multiscale wavelet models (GAMWMs) is introduced for high-dimensional spatio-temporal evolutionary (STE) system identification. A novel two-stage hybrid learning scheme is developed for constructing such an additive wavelet model. In the first stage, a new orthogonal projection pursuit (OPP) method, implemented using a particle swarm optimisation (PSO) algorithm, is proposed for successively augmenting an initial coarse wavelet model, where the relevant parameters of the associated wavelets are optimised by the particle swarm optimiser. The resultant network model obtained in the first stage may, however, be redundant. In the second stage, a forward orthogonal regression (FOR) algorithm, implemented using a mutual information method, is then applied to refine and improve the initially constructed wavelet model. The proposed two-stage hybrid method generally produces a parsimonious wavelet model, in which the wavelet functions are ranked according to the capability of each to represent the total variance in the desired system output signal. The proposed modelling framework is applied to real observed images of a chemical reaction exhibiting spatio-temporal evolutionary behaviour, and the associated identification results show that the new framework is applicable and effective for handling high-dimensional identification problems of spatio-temporal evolution systems.
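
    As a rough illustration of the first-stage optimiser mentioned above, the following is a generic particle swarm optimisation loop applied to a stand-in least-squares problem (fitting the centre and scale of a single Gaussian-like bump). The constants, objective and function names are assumptions made for this example; the paper's wavelet construction, OPP procedure and mutual-information-based FOR stage are not reproduced.

```python
import numpy as np

def pso_minimise(objective, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """A standard global-best PSO loop minimising `objective` over `dim` variables."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()                                  # personal best positions
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Stand-in objective: fit the centre and scale of a 1-D bump to a known target.
t = np.linspace(-3, 3, 200)
target = np.exp(-((t - 0.8) ** 2) / (2 * 0.5 ** 2))
def err(p):
    centre, scale = p
    model = np.exp(-((t - centre) ** 2) / (2 * max(abs(scale), 1e-3) ** 2))
    return float(np.sum((model - target) ** 2))
best, best_err = pso_minimise(err, dim=2)
```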

    Distributed Verification of Rare Properties using Importance Splitting Observers

    Rare properties remain a challenge for statistical model checking (SMC) due to the quadratic scaling of variance with rarity. We address this with a variance reduction framework based on lightweight importance splitting observers. These expose the model-property automaton, allowing the construction of score functions for high-performance algorithms. The confidence intervals defined for importance splitting make it appealing for SMC, but optimising its performance in the standard way makes distribution inefficient. We show how it is possible to achieve equivalently good results in less time by distributing simpler algorithms. We first explore the challenges posed by importance splitting and present an algorithm optimised for distribution. We then define a specific bounded-time logic that is compiled into memory-efficient observers to monitor executions. Finally, we demonstrate our framework on a number of challenging case studies.
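
    To indicate how importance splitting estimates a rare probability, here is a minimal fixed-level splitting sketch for a toy reachability property of a biased random walk: trajectories are advanced level by level, the fraction passing each level is multiplied together, and the survivors are resampled. The model, score function, levels and sample sizes are invented; the paper's observers, logic and distributed algorithms are not shown.

```python
import random

def run_until(state, level, horizon):
    """Advance one trajectory until its score reaches `level` or time runs out.

    A trajectory is (position, best-score-so-far, steps-used); each step the
    walk moves up with probability 0.3, down otherwise (floored at 0).
    """
    x, score, t = state
    while score < level and t < horizon:
        x = x + 1 if random.random() < 0.3 else max(x - 1, 0)
        score = max(score, x)
        t += 1
    return (x, score, t), score >= level

def splitting_estimate(levels, n=1000, horizon=200, seed=1):
    random.seed(seed)
    prob = 1.0
    survivors = [(0, 0, 0)] * n           # all trajectories start at the origin
    for level in levels:
        results = [run_until(s, level, horizon) for s in survivors]
        passed = [s for s, ok in results if ok]
        if not passed:                    # no trajectory reached this level
            return 0.0
        prob *= len(passed) / len(results)
        # Resample the successful states back up to n trajectories.
        survivors = [random.choice(passed) for _ in range(n)]
    return prob

# Estimate P(the walk reaches level 15 within 200 steps) via intermediate levels.
print(splitting_estimate(levels=[3, 6, 9, 12, 15]))
```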

    Simulating city growth by using the cellular automata algorithm

    The objective of this thesis is to develop and implement a cellular automata (CA) algorithm to simulate the urban growth process. It attempts to satisfy the need to predict the future shape of a city, the way land uses sprawl in its surroundings, and its population. Salonica city in Greece is selected as a case study for simulating its urban growth. CA-based models are increasingly used to investigate cities and urban systems. Sprawling cities may be considered complex adaptive systems, and this warrants the use of a methodology that can accommodate the space-time dynamics of many interacting entities. Automata tools are well suited to representing such systems. To illustrate this point, the thesis discusses the development of a model for simulating the sprawl of land uses, such as commercial and residential, and for calculating the population who will reside in the city.
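
    For a flavour of the kind of transition rule such CA models use, the sketch below performs one synchronous growth step on a boolean urban/non-urban grid: a cell may become urban once enough of its Moore neighbours already are. The rule, thresholds and grid are invented for illustration and are not the calibrated rules or land-use classes developed in the thesis.

```python
import numpy as np

def grow(urban, p_seed=0.02, k=3, rng=None):
    """One synchronous CA step on a boolean urban/non-urban grid.

    A non-urban cell becomes urban with probability p_seed scaled by how many
    of its 8 neighbours (Moore neighbourhood) are already urban, provided at
    least k neighbours are urban.
    """
    rng = rng or np.random.default_rng()
    padded = np.pad(urban.astype(int), 1)
    # Count urban neighbours of every cell by summing the 8 shifted copies.
    neigh = sum(padded[1 + di:1 + di + urban.shape[0],
                       1 + dj:1 + dj + urban.shape[1]]
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0))
    candidate = (~urban) & (neigh >= k)
    become = candidate & (rng.random(urban.shape) < p_seed * neigh / 8)
    return urban | become

# Toy run: a 50x50 grid with a small urban core in the centre, grown 20 steps.
grid = np.zeros((50, 50), dtype=bool)
grid[24:27, 24:27] = True
for _ in range(20):
    grid = grow(grid)
```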

    Decoherence in Discrete Quantum Walks

    We present an introduction to coined quantum walks on regular graphs, which have been developed in the past few years as an alternative to quantum Fourier transforms for underpinning algorithms for quantum computation. We then describe our results on the effects of decoherence on these quantum walks on a line, cycle and hypercube. We find high sensitivity to decoherence, increasing with the number of steps in the walk, as the particle becomes more delocalised with each step. However, the effect of a small amount of decoherence can be to enhance the properties of the quantum walk that are desirable for the development of quantum algorithms, such as fast mixing times to uniform distributions. (Comment: 15 pages, Springer LNP LaTeX style, submitted to Proceedings of DICE 200)
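
    As a small companion to the abstract, the following Monte Carlo sketch simulates a Hadamard-coined walk on a line in which, with some probability per step, the coin is projectively measured, one simple way to model decoherence. The parameters, initial state and averaging scheme are illustrative assumptions, not the analysis (or the cycle and hypercube cases) reported in the paper.

```python
import numpy as np

def coined_walk(steps=50, p=0.05, runs=200, seed=0):
    """Average position distribution of a coined walk with per-step coin measurement."""
    rng = np.random.default_rng(seed)
    size = 2 * steps + 1                       # positions -steps .. steps
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    avg = np.zeros(size)
    for _ in range(runs):
        psi = np.zeros((size, 2), dtype=complex)
        psi[steps] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # symmetric initial coin
        for _ in range(steps):
            psi = psi @ H.T                    # apply the coin at every position
            shifted = np.zeros_like(psi)
            shifted[1:, 0] = psi[:-1, 0]       # coin 0 moves right
            shifted[:-1, 1] = psi[1:, 1]       # coin 1 moves left
            psi = shifted
            if rng.random() < p:               # decoherence: measure the coin
                probs = np.sum(np.abs(psi) ** 2, axis=0)
                c = rng.choice(2, p=probs / probs.sum())
                psi[:, 1 - c] = 0
                psi /= np.linalg.norm(psi)
        avg += np.sum(np.abs(psi) ** 2, axis=1)
    return avg / runs                          # averaged position distribution

dist = coined_walk()
```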

    Calibration and Validation of A Shared space Model: A Case Study

    Shared space is an innovative streetscape design that seeks minimum separation between vehicle traffic and pedestrians. Urban design is moving toward space sharing as a means of increasing the community texture of street surroundings. Its unique features aim to balance priorities and allow cars and pedestrians to coexist harmoniously without the need to dictate behavior. There is, however, a need for a simulation tool to model future shared space schemes and to help judge whether they might represent suitable alternatives to traditional street layouts. This paper builds on the authors’ previously published work, in which a shared space microscopic mixed traffic model based on the social force model (SFM) was presented, calibrated, and evaluated with data from the shared space link typology of New Road in Brighton, United Kingdom. Here, the goal is to explore the transferability of the authors’ model to a similar shared space typology and to investigate the effect of flow and of the ratio of traffic modes. Data recorded from the shared space scheme of Exhibition Road, London, were collected and analyzed. The flow and speed of cars and the segregation between pedestrians and cars are greater on Exhibition Road than on New Road. The rule-based SFM for shared space modeling is calibrated and validated with the real data. On the basis of the results, it can be concluded that shared space schemes are context dependent and that factors such as the infrastructural design of the environment and the flow and speed of pedestrians and vehicles affect the willingness to share space.
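
    To indicate the kind of model being calibrated, here is a bare-bones social force update for pedestrians only: each pedestrian relaxes towards a desired velocity and is repelled from others by an exponentially decaying force. Vehicle interactions, the rule-based extensions and all calibrated constants from the paper are omitted; the parameter values below are generic textbook-style choices.

```python
import numpy as np

def sfm_step(pos, vel, goals, dt=0.05, v0=1.3, tau=0.5, A=2.0, B=0.3, radius=0.3):
    """One explicit Euler step of a minimal social force model.

    pos, vel, goals: (n, 2) arrays of positions, velocities and goal points.
    """
    n = len(pos)
    to_goal = goals - pos
    dist_goal = np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9
    desired = v0 * to_goal / dist_goal                 # desired velocity vectors
    force = (desired - vel) / tau                      # driving (relaxation) term
    for i in range(n):                                 # pairwise repulsion
        d = pos[i] - pos                               # vectors from others to i
        dist = np.linalg.norm(d, axis=1) + 1e-9
        mask = np.arange(n) != i
        push = A * np.exp((2 * radius - dist) / B)     # decays with separation
        force[i] += np.sum((push[mask] / dist[mask])[:, None] * d[mask], axis=0)
    vel = vel + dt * force
    pos = pos + dt * vel
    return pos, vel

# Two pedestrians walking towards each other along a corridor.
pos = np.array([[0.0, 0.0], [6.0, 0.1]])
vel = np.zeros((2, 2))
goals = np.array([[6.0, 0.0], [0.0, 0.0]])
for _ in range(200):
    pos, vel = sfm_step(pos, vel, goals)
```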