
    Active network management for electrical distribution systems: problem formulation, benchmark, and approximate solution

    With the increasing share of renewable and distributed generation in electrical distribution systems, Active Network Management (ANM) becomes a valuable option for a distribution system operator to operate the system in a secure and cost-effective way without relying solely on network reinforcement. ANM strategies are short-term policies that control the power injected by generators and/or taken off by loads in order to avoid congestion or voltage issues. Advanced ANM strategies imply that the system operator has to solve large-scale optimal sequential decision-making problems under uncertainty: decisions taken at a given moment constrain the future decisions that can be taken, and uncertainty must be explicitly accounted for because neither demand nor generation can be forecasted accurately. We first formulate the ANM problem, which, in addition to being sequential and uncertain, has a nonlinear nature stemming from the power flow equations and a discrete nature arising from the activation of power modulation signals. This ANM problem is then cast as a stochastic mixed-integer nonlinear program, as well as second-order cone and linear counterparts, for which we provide quantitative results using state-of-the-art solvers and perform a sensitivity analysis over the size of the system, the amount of available flexibility, and the number of scenarios considered in the deterministic equivalent of the stochastic program. To foster further research on this problem, we make available at http://www.montefiore.ulg.ac.be/~anm/ three test beds based on distribution networks of 5, 33, and 77 buses. These test beds contain a simulator of the distribution system, with stochastic models for the generation and consumption devices, and callbacks to implement and test various ANM strategies.
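
    To make the scenario-based structure concrete, here is a toy sketch (not the authors' stochastic MINLP formulation): a handful of binary curtailment decisions are enumerated by brute force and evaluated against a set of generation scenarios, trading curtailment cost against a congestion penalty. All names and numbers (feeder_limit, curtail_cost, penalty, the scenario set) are invented for illustration.

```python
# Toy deterministic-equivalent illustration: pick which generators to curtail
# (discrete decisions) so that expected curtailment cost plus congestion
# penalty over the scenarios is minimised.  Purely illustrative numbers.
from itertools import product

import numpy as np

rng = np.random.default_rng(0)

n_gen = 3                                   # controllable generators
feeder_limit = 5.0                          # MW export limit on the feeder
curtail_cost = np.array([1.0, 1.5, 2.0])    # cost of curtailing each generator
penalty = 50.0                              # cost per MW of congestion

# Scenarios for uncertain generation: one row per scenario, one column per generator.
scenarios = rng.uniform(1.0, 4.0, size=(20, n_gen))
probs = np.full(len(scenarios), 1.0 / len(scenarios))

best_cost, best_decision = np.inf, None
for decision in product([0, 1], repeat=n_gen):      # 1 = curtail that generator
    d = np.array(decision)
    injected = scenarios * (1 - d)                  # curtailed units inject nothing
    overflow = np.maximum(injected.sum(axis=1) - feeder_limit, 0.0)
    expected = curtail_cost @ d + probs @ (penalty * overflow)
    if expected < best_cost:
        best_cost, best_decision = expected, d

print("curtailment decision:", best_decision, "expected cost: %.2f" % best_cost)
```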

    Process algebra for performance evaluation

    This paper surveys the theoretical developments in the field of stochastic process algebras, process algebras where action occurrences may be subject to a delay that is determined by a random variable. A broad class of resource-sharing systems – such as large-scale computers, client–server architectures, and networks – can be described accurately using such stochastic specification formalisms. The main emphasis of this paper is the treatment of operational semantics, notions of equivalence, and (sound and complete) axiomatisations of these equivalences for different types of Markovian process algebras, where delays are governed by exponential distributions. Starting from a simple actionless algebra for describing time-homogeneous continuous-time Markov chains, we consider the integration of actions and random delays both as a single entity (as in the well-known Markovian process algebras TIPP, PEPA and EMPA) and as separate entities (as in the timed process algebras timed CSP and TCCS). In total we consider four related calculi and investigate their relationship to existing Markovian process algebras. We also briefly indicate how one can profit from the separation of time and actions when incorporating more general, non-Markovian distributions.
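
    The operational intuition behind Markovian process algebras is the race between exponentially delayed actions: every enabled action samples a delay from its rate, the fastest one fires, and the result is a continuous-time Markov chain. A minimal sketch of that race, with an invented three-state transition structure (the states, actions and rates are placeholders, not taken from the paper):

```python
# Race-condition semantics of a Markovian process algebra, hand-rolled:
# each enabled action draws an exponential delay from its rate, the minimum
# wins, and the winner determines the next state of the underlying CTMC.
import numpy as np

rng = np.random.default_rng(1)

# state -> list of (action, rate, next_state); illustrative placeholders
transitions = {
    "idle": [("arrive", 2.0, "busy")],
    "busy": [("serve", 3.0, "idle"), ("fail", 0.1, "down")],
    "down": [("repair", 0.5, "idle")],
}

state, t = "idle", 0.0
for _ in range(10):
    enabled = transitions[state]
    delays = [rng.exponential(1.0 / rate) for _, rate, _ in enabled]
    winner = int(np.argmin(delays))          # the race: fastest action fires
    action, _, nxt = enabled[winner]
    t += delays[winner]
    print(f"t={t:6.3f}  {state} --{action}--> {nxt}")
    state = nxt
```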

    HMM based scenario generation for an investment optimisation problem

    Geometric Brownian motion (GBM) is a standard method for modelling financial time series. An important criticism of this method is that the parameters of the GBM are assumed to be constants; as a result, important features of the time series, such as extreme behaviour or volatility clustering, cannot be captured. We propose an approach in which the parameters of the GBM are able to switch between regimes; more precisely, they are governed by a hidden Markov chain. Thus, we model the financial time series via a hidden Markov model (HMM) with a GBM in each state. Using this approach, we generate scenarios for a financial portfolio optimisation problem in which the portfolio CVaR is minimised. Numerical results are presented. This study was funded by NET ACE at OptiRisk Systems.
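
    A minimal sketch of regime-switching GBM scenario generation, assuming a two-regime chain with an illustrative transition matrix and drift/volatility values (none of these numbers come from the paper): the hidden Markov chain picks which (mu, sigma) pair drives the GBM at each step, and the resulting paths could then feed a CVaR-minimising portfolio optimisation.

```python
# HMM-driven GBM: a hidden Markov chain selects the regime, and each regime
# has its own drift and volatility.  Parameters are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(42)

P = np.array([[0.95, 0.05],        # calm -> calm / turbulent
              [0.10, 0.90]])       # turbulent -> calm / turbulent
mu    = np.array([0.08, -0.02])    # annualised drift per regime
sigma = np.array([0.12, 0.35])     # annualised volatility per regime
dt, n_steps, n_scenarios, s0 = 1 / 252, 252, 1000, 100.0

paths = np.empty((n_scenarios, n_steps + 1))
paths[:, 0] = s0
for i in range(n_scenarios):
    regime = 0
    for t in range(n_steps):
        regime = rng.choice(2, p=P[regime])              # hidden Markov chain step
        drift = (mu[regime] - 0.5 * sigma[regime] ** 2) * dt
        shock = sigma[regime] * np.sqrt(dt) * rng.standard_normal()
        paths[i, t + 1] = paths[i, t] * np.exp(drift + shock)

# The generated paths are the scenarios used by the portfolio optimisation.
print("mean terminal price: %.2f" % paths[:, -1].mean())
```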

    Hybrid performance modelling of opportunistic networks

    We demonstrate the modelling of opportunistic networks using the process algebra stochastic HYPE. Network traffic is modelled as continuous flows, contact between nodes in the network is modelled stochastically, and instantaneous decisions are modelled as discrete events. Our model describes a network of stationary video sensors with a mobile ferry that collects data from the sensors and delivers it to the base station. We consider different mobility models and different buffer sizes for the ferry. This case study illustrates the flexibility and expressive power of stochastic HYPE. We also discuss the software that enables us to describe stochastic HYPE models and simulate them. (In Proceedings QAPL 2012, arXiv:1207.055)
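
    The three ingredients named above can be illustrated with a hand-rolled hybrid simulation (this is not stochastic HYPE syntax, and all rates and sizes are invented): a continuous flow fills a sensor buffer, ferry contacts arrive stochastically with exponential inter-contact times, and the hand-over of buffered data is an instantaneous discrete event.

```python
# Hand-rolled hybrid sketch: continuous buffer fill, stochastic ferry contacts,
# instantaneous data transfer on contact.  Illustrative parameters only.
import numpy as np

rng = np.random.default_rng(7)

fill_rate = 0.2           # MB/s of video data produced by the sensor
buffer_cap = 50.0         # MB buffer capacity in the sensor
mean_contact_gap = 120.0  # s, mean time between ferry visits

buffer, t, delivered = 0.0, 0.0, 0.0
next_contact = rng.exponential(mean_contact_gap)
dt = 1.0
while t < 3600.0:
    # continuous flow: the buffer fills until it saturates
    buffer = min(buffer + fill_rate * dt, buffer_cap)
    t += dt
    if t >= next_contact:
        delivered += buffer                  # discrete event: hand data to the ferry
        buffer = 0.0
        next_contact = t + rng.exponential(mean_contact_gap)

print(f"delivered {delivered:.1f} MB in one hour")
```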

    Efficient Parallel Statistical Model Checking of Biochemical Networks

    We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand that rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions from the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the proposed methodology uses a stochastic simulation algorithm to generate execution samples; however, three key aspects improve its efficiency. First, sample generation is driven by on-the-fly verification of P, which minimises the overall simulation time. Second, the confidence interval for the probability that P holds is estimated with an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel on an HPC architecture.
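
    A minimal sketch of the statistical model-checking loop with a standard Wilson score interval (the paper uses its own efficient variant): each sample is a Bernoulli outcome — does a simulated trace satisfy the property? — and the samples are summarised into a confidence interval. The trace/property check below is stubbed out with a coin flip; in a real tool it would run the simulator together with an on-the-fly LTL monitor.

```python
# Statistical model checking, schematically: sample Bernoulli outcomes and
# report a Wilson score confidence interval for P(property holds).
import numpy as np

rng = np.random.default_rng(3)

def trace_satisfies_property() -> bool:
    # placeholder for "simulate one trace and monitor the LTL property"
    return rng.random() < 0.3

def wilson_interval(successes: int, n: int, z: float = 1.96):
    p_hat = successes / n
    denom = 1 + z ** 2 / n
    centre = (p_hat + z ** 2 / (2 * n)) / denom
    half = z * np.sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

n = 2000
successes = sum(trace_satisfies_property() for _ in range(n))
lo, hi = wilson_interval(successes, n)
print(f"P(property) in [{lo:.3f}, {hi:.3f}] with ~95% confidence")
```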

    MOLNs: A cloud platform for interactive, reproducible and scalable spatial stochastic computational experiments in systems biology using PyURDME

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools, a complex software stack, and large, scalable compute and data-analysis resources, owing to the high computational cost of Monte Carlo workflows. The complexity of setting up and managing a large-scale distributed computing environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This creates a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for the development of sharable and reproducible distributed parallel computational experiments.
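
    To give a feel for the kind of per-realisation computation that such platforms distribute, here is a generic direct-method SSA for a reaction-diffusion model on a 1D voxel grid (a species A diffuses between voxels and degrades). This sketch does not use the PyURDME API; the grid size, rates and initial counts are illustrative.

```python
# Generic reaction-diffusion SSA (direct method) on a 1D voxel grid:
# per-molecule diffusion jumps between neighbouring voxels plus degradation.
import numpy as np

rng = np.random.default_rng(11)

n_vox = 10
A = np.full(n_vox, 100)   # molecules of species A in each voxel
d = 1.0                   # per-molecule diffusion jump rate
k = 0.05                  # per-molecule degradation rate

t, t_end = 0.0, 5.0
while t < t_end:
    left = d * A;  left[0] = 0.0      # jumps to the left (reflecting boundary)
    right = d * A; right[-1] = 0.0    # jumps to the right (reflecting boundary)
    deg = k * A                       # degradation in each voxel
    props = np.concatenate([left, right, deg])
    total = props.sum()
    if total == 0:
        break
    t += rng.exponential(1.0 / total)                  # time to next event
    event = rng.choice(props.size, p=props / total)    # which event fires
    voxel = event % n_vox
    A[voxel] -= 1
    if event < n_vox:
        A[voxel - 1] += 1             # molecule jumped left
    elif event < 2 * n_vox:
        A[voxel + 1] += 1             # molecule jumped right
    # otherwise the molecule degraded and simply disappears

print("molecules per voxel at t=%.2f:" % t, A)
```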

    Generating a Performance Stochastic Model from UML Specifications

    Since its initiation by Connie Smith, Software Performance Engineering (SPE) has received growing attention. The idea is to bring performance evaluation into the software design process, allowing software designers to assess the performance of the software during design. Several approaches have been proposed to provide such techniques. Some of them derive a performance model, such as a Stochastic Petri Net (SPN) or a Stochastic Process Algebra (SPA) model, from a UML (Unified Modeling Language) model. Our work belongs to the same category: we propose to derive a Stochastic Automata Network (SAN) from a UML model in order to obtain performance predictions. Our approach is more flexible owing to the modularity of SANs and their close resemblance to UML statechart diagrams.
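
    A minimal sketch of the modularity that makes SANs attractive: for independent automata, the generator of the global CTMC is the Kronecker sum of the local generators (synchronising events, which this sketch omits, add Kronecker-product terms). The two local generators below are hypothetical placeholders standing in for automata derived from two UML statecharts.

```python
# SAN modularity in one line of linear algebra: the product-space generator of
# independent local automata is the Kronecker sum of their local generators.
import numpy as np

def kron_sum(Q1, Q2):
    """Kronecker sum: Q1 (+) Q2 = Q1 (x) I + I (x) Q2."""
    return np.kron(Q1, np.eye(Q2.shape[0])) + np.kron(np.eye(Q1.shape[0]), Q2)

# local generator of a 2-state automaton (e.g. idle <-> busy)
Q_client = np.array([[-1.0,  1.0],
                     [ 2.0, -2.0]])
# local generator of a 3-state automaton (e.g. free -> loaded -> flushing -> free)
Q_server = np.array([[-0.5,  0.5,  0.0],
                     [ 0.0, -1.5,  1.5],
                     [ 3.0,  0.0, -3.0]])

Q = kron_sum(Q_client, Q_server)      # 6x6 generator of the product CTMC
print("global state space size:", Q.shape[0])
print("row sums (should be ~0):", Q.sum(axis=1))
```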