    Spatial stochasticity and non-continuum effects in gas flows

    We investigate the relationship between spatial stochasticity and non-continuum effects in gas flows. A kinetic model for a dilute gas is developed using strictly stochastic molecular reasoning, without relying primarily on either the Liouville or the Boltzmann equations for dilute gases. The resulting kinetic equation, a stochastic counterpart of the well-known deterministic Boltzmann equation for a dilute gas, is then associated with a set of macroscopic equations for the case of a monatomic gas. Tests based on a heat conduction configuration and on sound wave dispersion show that spatial stochasticity can explain some non-continuum effects seen in gases.
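
    The abstract does not reproduce the equation itself, but the deterministic point of departure it refers to is the classical Boltzmann equation for the one-particle distribution f(x, v, t) of a dilute gas, which in its force-free form reads:

        \frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{x}} f = Q(f,f)

    where Q(f, f) denotes the binary collision operator; external body forces are omitted for brevity. The kinetic equation developed in the paper is described as a stochastic counterpart of this deterministic baseline.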

    Do I need to fix a failed component now, or can I wait until tomorrow?

    We investigate how predictive event-based modelling can inform operational decision making in complex systems with component failures. By relating the status of components to service availability, and using stochastic temporal logic reasoning, we quantify the risk of service failure now and in the future, after a given elapsed time. Decisions can then be taken according to those risks. We demonstrate the approach through application to an industrial case study system in which component failures are sensed and monitored; the system has been deployed for some time. A novel aspect is that we calibrate the model(s) according to inferences over historical field data, so the results of our reasoning can inform decision making in the actual deployed system.
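
    As a purely illustrative sketch of the repair-deferral question (not the event-based models or stochastic temporal logic used in the paper), the following Python fragment, with a hypothetical redundant-pair service structure and a hypothetical failure rate, compares the risk of service failure for different deferral windows once one component has already failed:

        import math

        # Hypothetical setting: the service stays up as long as at least one of two
        # redundant components works.  Component B has already failed; component A
        # is still up and fails with constant rate lam (exponential lifetime).
        # Deferring the repair of B by `delay` hours leaves the service with a
        # single point of failure for that window.

        def risk_of_service_failure(lam_per_hour, delay_hours):
            # P(component A also fails within the deferral window) = 1 - exp(-lam * t)
            return 1.0 - math.exp(-lam_per_hour * delay_hours)

        lam = 1.0 / 1000.0  # assumed mean time to failure of 1000 hours
        for delay in (1.0, 8.0, 24.0):
            risk = risk_of_service_failure(lam, delay)
            print("defer %5.1f h -> service failure risk %.4f" % (delay, risk))

    Comparing these risks against the cost of an immediate call-out is the kind of now-versus-tomorrow trade-off the title alludes to.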

    Equilibria-based Probabilistic Model Checking for Concurrent Stochastic Games

    Probabilistic model checking for stochastic games enables formal verification of systems that comprise competing or collaborating entities operating in a stochastic environment. Despite good progress in the area, existing approaches focus on zero-sum goals and cannot reason about scenarios where entities are endowed with different objectives. In this paper, we propose probabilistic model checking techniques for concurrent stochastic games based on Nash equilibria. We extend the temporal logic rPATL (probabilistic alternating-time temporal logic with rewards) to allow reasoning about players with distinct quantitative goals, which capture either the probability of an event occurring or a reward measure. We present algorithms to synthesise strategies that are subgame-perfect social welfare optimal Nash equilibria, i.e., where no player has an incentive to unilaterally change its strategy in any state of the game, whilst the combined probabilities or rewards are maximised. We implement our techniques in the PRISM-games tool and apply them to several case studies, including network protocols and robot navigation, showing the benefits compared to existing approaches.
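
    To illustrate the solution concept only (not the paper's rPATL model checking algorithms), the following Python sketch enumerates the pure-strategy Nash equilibria of a one-shot two-player game with hypothetical payoff matrices and selects a social-welfare-optimal one, i.e. an equilibrium maximising the sum of the players' payoffs:

        import itertools

        # Hypothetical 2x2 coordination game: payoffX[a1][a2] is player X's payoff
        # when player 1 plays action a1 and player 2 plays action a2.
        payoff1 = [[3.0, 0.0],
                   [0.0, 1.0]]
        payoff2 = [[3.0, 0.0],
                   [0.0, 1.0]]

        def is_nash(a1, a2):
            # No unilateral deviation improves the deviating player's payoff.
            ok1 = all(payoff1[b1][a2] <= payoff1[a1][a2] for b1 in range(2))
            ok2 = all(payoff2[a1][b2] <= payoff2[a1][a2] for b2 in range(2))
            return ok1 and ok2

        equilibria = [prof for prof in itertools.product(range(2), repeat=2)
                      if is_nash(*prof)]
        # Social welfare optimal equilibrium: maximise the sum of both payoffs.
        best = max(equilibria, key=lambda p: payoff1[p[0]][p[1]] + payoff2[p[0]][p[1]])
        print("pure Nash equilibria:", equilibria)
        print("social-welfare-optimal equilibrium:", best)

    The game above has two equilibria, (0, 0) and (1, 1); the social-welfare criterion selects the former, mirroring the selection problem the synthesis algorithms address in the much richer setting of multi-state stochastic games.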

    A Subsampling Line-Search Method with Second-Order Results

    In many contemporary optimization problems, such as those arising in machine learning, it can be computationally challenging or even infeasible to evaluate an entire function or its derivatives. This motivates the use of stochastic algorithms that sample problem data, which can jeopardize the guarantees obtained through classical globalization techniques in optimization, such as a trust region or a line search. Using subsampled function values is particularly challenging for the latter strategy, which relies upon multiple evaluations. In addition, there has been increasing interest in nonconvex formulations of data-related problems, such as training deep learning models. For such instances, one aims at developing methods that converge to second-order stationary points quickly, i.e., that escape saddle points efficiently. This is particularly delicate to ensure when one only accesses subsampled approximations of the objective and its derivatives. In this paper, we describe a stochastic algorithm based on negative curvature and Newton-type directions that are computed for a subsampled model of the objective. A line-search technique is used to enforce suitable decrease for this model, and for a sufficiently large sample, a similar amount of reduction holds for the true objective. Using probabilistic reasoning, we then obtain worst-case complexity guarantees for our framework, leading us to discuss appropriate notions of stationarity in a subsampling context. Our analysis encompasses the deterministic regime and allows us to identify sampling requirements for second-order line-search paradigms. As we illustrate through experiments on real data, these worst-case estimates need not be satisfied for our method to be competitive with first-order strategies in practice.
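
    The following Python sketch illustrates the general idea on a hypothetical least-squares problem (it is not the paper's exact algorithm or analysis): a subsample of the data defines the model, a Newton or negative-curvature direction is chosen from the sampled Hessian, and a backtracking line search enforces sufficient decrease on the subsampled objective:

        import numpy as np

        # Hypothetical data and quadratic loss; one iteration of a subsampled
        # line-search step in the spirit described by the abstract.
        rng = np.random.default_rng(0)
        A = rng.normal(size=(1000, 5))
        b = rng.normal(size=1000)

        def f(x, idx):      # subsampled least-squares loss
            r = A[idx] @ x - b[idx]
            return 0.5 * np.mean(r ** 2)

        def grad(x, idx):
            return A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)

        def hess(idx):
            return A[idx].T @ A[idx] / len(idx)

        x = np.zeros(5)
        idx = rng.choice(len(b), size=200, replace=False)   # draw the subsample
        g, H = grad(x, idx), hess(idx)
        eigval, eigvec = np.linalg.eigh(H)
        if eigval[0] > 1e-8:                 # sampled Hessian positive definite: Newton step
            d = -np.linalg.solve(H, g)
        else:                                # otherwise follow a negative-curvature direction
            v = eigvec[:, 0]
            d = v if v @ g < 0 else -v
        alpha, c = 1.0, 1e-4
        while f(x + alpha * d, idx) > f(x, idx) + c * alpha * (g @ d):   # backtracking
            alpha *= 0.5
        x = x + alpha * d
        print("step size", alpha, "subsampled loss", f(x, idx))

    In the paper the decrease achieved on the subsampled model is related, with high probability, to decrease on the true objective when the sample is large enough; the toy loss above is convex, so the negative-curvature branch is shown only for completeness.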

    Modeling of investment strategies in stocks markets: an approach from multi agent based simulation and fuzzy logic

    This paper presents a simulation model of a complex system, in this case a financial market, using a Multi-Agent Based Simulation approach. The model takes into account micro-level aspects such as the Continuous Double Auction mechanism, which is widely used within stock markets, as well as the reasoning of the investor agents who participate in the market looking for profits. To model this reasoning, several variables were considered, including general stock information such as profitability and volatility, as well as agent-specific aspects such as risk tendency. All these variables are incorporated through a fuzzy logic approach that aims to faithfully represent the kind of reasoning followed by non-expert investors, with a stochastic component included to model human factors.
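
    A toy Python sketch of this kind of fuzzy investor reasoning is given below; the membership functions, rules and thresholds are hypothetical and are not taken from the paper:

        import random

        def tri(x, a, b, c):
            # Triangular membership function peaking at b on the support [a, c].
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def buy_score(profitability, volatility, risk_tendency):
            high_profit = tri(profitability, 0.02, 0.10, 0.20)
            low_vol     = tri(volatility,   -0.05, 0.00, 0.15)
            risk_lover  = tri(risk_tendency, 0.40, 1.00, 1.60)
            # Rule 1: high profitability and low volatility -> buy.
            # Rule 2: high profitability and a risk-loving agent -> buy anyway.
            score = max(min(high_profit, low_vol), min(high_profit, risk_lover))
            return score + random.gauss(0.0, 0.05)   # stochastic "human factor" term

        random.seed(1)
        decision = buy_score(profitability=0.08, volatility=0.05, risk_tendency=0.7)
        print("buy" if decision > 0.5 else "hold", round(decision, 3))

    In the simulated market, orders produced by such agents would then be matched through the continuous double auction mechanism.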

    Automated verification of concurrent stochastic games

    We present automatic verification techniques for concurrent stochastic multi-player games (CSGs) with rewards. To express properties of such models, we adapt the temporal logic rPATL (probabilistic alternating-time temporal logic with rewards), originally introduced for the simpler model of turn-based games, which enables quantitative reasoning about the ability of coalitions of players to achieve goals related to the probability of an event or to reward measures. We propose and implement a modelling approach and model checking algorithms for property verification and strategy synthesis of CSGs, as an extension of PRISM-games. We evaluate the performance, scalability and applicability of our techniques on case studies from domains such as security, networks and finance, showing that we can analyse systems with probabilistic, cooperative and competitive behaviour between concurrent components, including many scenarios that cannot be analysed with turn-based models.
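
    As a simplified illustration of the kind of quantitative query rPATL expresses (here, the maximal probability with which one player can reach a goal state against an adversary), the following Python sketch runs value iteration on a small turn-based stochastic game with hypothetical states and transition probabilities; the concurrent-game algorithms implemented in PRISM-games are considerably more involved:

        # Hypothetical turn-based stochastic game: player 1 maximises, player 2
        # minimises, the probability of eventually reaching the "goal" state.
        states  = ["s0", "s1", "s2", "goal", "fail"]
        owner   = {"s0": 1, "s1": 2, "s2": 1}             # which player moves
        # For each state, a list of actions; an action is a list of (prob, successor).
        choices = {
            "s0": [[(0.5, "s1"), (0.5, "s2")], [(1.0, "s2")]],
            "s1": [[(1.0, "fail")], [(0.6, "goal"), (0.4, "s0")]],
            "s2": [[(0.9, "goal"), (0.1, "fail")]],
        }

        value = {s: (1.0 if s == "goal" else 0.0) for s in states}
        for _ in range(100):                              # fixed-point iteration
            for s, acts in choices.items():
                outcomes = [sum(p * value[t] for p, t in act) for act in acts]
                value[s] = max(outcomes) if owner[s] == 1 else min(outcomes)
        print({s: round(v, 3) for s, v in value.items()})

    In the concurrent setting, each iteration instead requires solving a matrix game per state, which is one of the main algorithmic challenges the paper addresses.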

    Environmental data stream mining through a case-based stochastic learning approach

    Environmental data stream mining is an open challenge for Data Science. Common methods are static: they analyse a static set of data and provide static data-driven models. Environmental systems, however, are dynamic and generate a continuous data stream, so Data Science must provide dynamic methods that cope with the temporal nature of the data. Our proposal is to model each environmental information unit, generated over time, as a new case/experience in a Case-Based Reasoning (CBR) system. This contribution aims to incrementally build and manage a Dynamic Adaptive Case Library (DACL). In this paper, a stochastic method for learning new cases and managing prototypes to create and maintain the DACL incrementally is introduced; the stochastic method works in two main phases. An evaluation of the method has been carried out using an air quality data stream from the city of Obregón, Sonora, México, with good results. In addition, other datasets have been mined to ensure the generality of the approach.
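
    A toy Python sketch of incremental, stochastic case learning over a stream is shown below; the distance threshold, prototype-merging rule and acceptance probability are hypothetical and are not the paper's DACL method:

        import math, random

        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

        library = []    # list of (prototype, count) pairs

        def learn(case, threshold=1.0, accept_new=0.8):
            # Absorb the case into its nearest prototype if it is close enough;
            # otherwise, stochastically admit it as a new prototype.
            if library:
                i, (proto, n) = min(enumerate(library),
                                    key=lambda e: dist(case, e[1][0]))
                if dist(case, proto) < threshold:
                    merged = tuple((p * n + c) / (n + 1) for p, c in zip(proto, case))
                    library[i] = (merged, n + 1)
                    return
            if random.random() < accept_new:
                library.append((tuple(case), 1))

        random.seed(0)
        stream = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
        for obs in stream:
            learn(obs)
        print("prototypes kept:", len(library))

    The point of the incremental design is that the library evolves with the stream instead of being rebuilt from a static dataset.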

    Exploiting Modal Logic to Express Performance Measures

    Stochastic process algebras such as PEPA provide ample support for the component-based construction of models. Tools compute the numerical solution of these models; however, the stochastic process algebra methodology has lacked support for the specification and calculation of complex performance measures. In this paper we present a stochastic modal logic which can aid the construction of a reward structure over the model. We discuss its relationship to the underlying theory of PEPA. We also present a performance specification language which supports high level reasoning about PEPA models, and allows queries about their equilibrium behaviour. The meaning of the specification language has its foundations in the stochastic modal logic. We describe the implementation of the logic within the PEPA Workbench, and a case study is presented to illustrate the approach.
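
    As a minimal sketch of the kind of equilibrium reward query described (not PEPA Workbench code; the chain, rates and rewards below are hypothetical), the following Python fragment attaches a reward structure to a small continuous-time Markov chain, solves for its equilibrium distribution and reports the expected steady-state reward:

        import numpy as np

        # Infinitesimal generator of a small CTMC (each row sums to zero) and a
        # reward assigned to each state.
        Q = np.array([[-2.0,  2.0,  0.0],
                      [ 1.0, -3.0,  2.0],
                      [ 0.0,  4.0, -4.0]])
        reward = np.array([0.0, 1.0, 5.0])

        n = Q.shape[0]
        A = np.vstack([Q.T, np.ones(n)])              # pi Q = 0 plus normalisation
        b = np.concatenate([np.zeros(n), [1.0]])
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)    # equilibrium distribution
        print("equilibrium distribution:", np.round(pi, 4))
        print("expected steady-state reward:", float(pi @ reward))

    In the PEPA setting, the modal logic is what determines which states receive which rewards, so that such equilibrium queries can be stated at the level of the process algebra model rather than of the underlying Markov chain.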