
    Minimizing Running Costs in Consumption Systems

    A standard approach to optimizing long-run running costs of discrete systems is based on minimizing the mean-payoff, i.e., the long-run average amount of resources ("energy") consumed per transition. However, this approach inherently assumes that the energy source has an unbounded capacity, which is not always realistic. For example, an autonomous robotic device has a battery of finite capacity that has to be recharged periodically, and the total amount of energy consumed between two successive charging cycles is bounded by the capacity. Hence, a controller minimizing the mean-payoff must obey this restriction. In this paper we study the controller synthesis problem for consumption systems with a finite battery capacity, where the task of the controller is to minimize the mean-payoff while preserving the functionality of the system encoded by a given linear-time property. We show that an optimal controller always exists, and it may either need only finite memory or require infinite memory (it is decidable in polynomial time which of the two cases holds). Further, we show how to compute an effective description of an optimal controller in polynomial time. Finally, we consider the limit values achievable by larger and larger battery capacity, show that these values are computable in polynomial time, and we also analyze the corresponding rate of convergence. To the best of our knowledge, these are the first results about optimizing the long-run running costs in systems with bounded energy stores.
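
    To make the setting concrete, below is a minimal Python sketch of a consumption system with a bounded battery: a fixed cyclic controller is simulated and its mean-payoff (average energy consumed per transition) is measured, with a check that the battery never runs dry between recharges. The states, costs and capacity are hypothetical, and this only evaluates one controller; it is not the paper's polynomial-time synthesis algorithm.

        CAPACITY = 10  # battery capacity (units of energy); hypothetical

        # controller[state] = (next_state, energy_cost): a deterministic choice of
        # one outgoing transition per state.  States and costs are hypothetical.
        controller = {
            "charge": ("work_a", 2),
            "work_a": ("work_b", 3),
            "work_b": ("charge", 4),
        }
        reload_states = {"charge"}  # entering these recharges the battery to CAPACITY

        def mean_payoff(start, steps=9_999):
            """Average energy consumed per transition along the controlled run,
            checking that the battery never runs dry between recharges."""
            state, battery, total = start, CAPACITY, 0
            for _ in range(steps):
                nxt, cost = controller[state]
                battery -= cost
                if battery < 0:
                    raise RuntimeError("controller violates the battery capacity")
                total += cost
                state = nxt
                if state in reload_states:
                    battery = CAPACITY
            return total / steps

        print(mean_payoff("charge"))  # 3.0 for this three-state cycle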

    Accelerating Universe from Extra Spatial Dimension

    We present a simple higher dimensional FRW type of model where the acceleration is apparently caused by the presence of the extra dimensions. Assuming an ansatz in the form of the deceleration parameter, we get a class of solutions, some of which show the desirable feature of dimensional reduction as well as reasonably good physical properties of matter. Interestingly, we do not have to invoke an extraneous scalar field or a cosmological constant to account for this acceleration. It is argued that the terms containing the higher dimensional metric coefficients produce an extra negative pressure that apparently drives the inflation of the 4D space with an accelerating phase. It is further found that, in line with the physical requirements, our model admits a decelerating phase in the early era along with an accelerating phase at present. Further, the model asymptotically mimics a steady state type of universe, although it starts from a big bang type of singularity. Correspondence to Wesson's induced matter theory is also briefly discussed, and in line with it, it is argued that the terms containing the higher dimensional metric coefficients apparently create a negative pressure which drives the inflation of the 3-space with an accelerating phase.
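
    As a point of reference for the ansatz mentioned above, the following is the standard FRW definition of the deceleration parameter and the power-law scale factor that a constant q implies (the paper's actual higher dimensional metric and ansatz are not reproduced here):

        q(t) \;=\; -\,\frac{\ddot a(t)\, a(t)}{\dot a(t)^{2}},
        \qquad
        q \equiv q_0 \neq -1
        \;\Longrightarrow\;
        a(t) \propto t^{\,1/(1+q_0)},

    with the origin of t chosen at the initial singularity. A constant q_0 with -1 < q_0 < 0 gives \ddot a > 0 (accelerated expansion), while q_0 > 0 gives deceleration, matching the early decelerating and present accelerating phases described above.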

    The Linear Model under Mixed Gaussian Inputs: Designing the Transfer Matrix

    Suppose a linear model y = Hx + n, where inputs x, n are independent Gaussian mixtures. The problem is to design the transfer matrix H so as to minimize the mean square error (MSE) when estimating x from y. This problem has important applications, but faces at least three hurdles. Firstly, even for a fixed H, the minimum MSE (MMSE) has no analytical form. Secondly, the MMSE is generally not convex in H. Thirdly, derivatives of the MMSE w.r.t. H are hard to obtain. This paper casts the problem as a stochastic program and invokes gradient methods. The study is motivated by two applications in signal processing. One concerns the choice of error-reducing precoders; the other deals with selection of pilot matrices for channel estimation. In either setting, our numerical results indicate improved estimation accuracy - markedly better than those obtained by optimal design based on standard linear estimators. Some implications of the non-convexities of the MMSE are noteworthy, yet, to our knowledge, not well known. For example, there are cases in which more pilot power is detrimental for channel estimation. This paper explains why.
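
    As an illustration of why the problem is cast as a stochastic program, the sketch below evaluates the MSE of the MMSE estimator for a fixed H by Monte Carlo, under the simplification that x is a two-component Gaussian mixture and n is a single Gaussian, so that E[x|y] has a closed form as a posterior-weighted combination of per-component linear estimates. All dimensions, mixture parameters and the use of NumPy/SciPy are illustrative assumptions, not the paper's code; a gradient method over H would treat this Monte Carlo MSE as its objective.

        # Hedged numerical sketch: Monte Carlo evaluation of the MMSE for a fixed H.
        # Simplification (mine): Gaussian-mixture x, single-Gaussian n.
        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(0)
        nx, ny = 2, 2
        weights = np.array([0.5, 0.5])
        means   = [np.array([2.0, 0.0]), np.array([-2.0, 0.0])]
        covs    = [np.eye(nx), np.eye(nx)]
        R = 0.5 * np.eye(ny)                      # noise covariance

        def mmse_estimate(y, H):
            """E[x | y] for a Gaussian-mixture x and Gaussian n."""
            post, cond = [], []
            for w, m, C in zip(weights, means, covs):
                S = H @ C @ H.T + R               # covariance of y given the component
                post.append(w * multivariate_normal.pdf(y, H @ m, S))
                cond.append(m + C @ H.T @ np.linalg.solve(S, y - H @ m))
            post = np.array(post) / np.sum(post)
            return sum(p * c for p, c in zip(post, cond))

        def mc_mse(H, n_samples=5_000):
            """Monte Carlo estimate of the MSE achieved by the MMSE estimator."""
            err = 0.0
            for _ in range(n_samples):
                k = rng.choice(len(weights), p=weights)
                x = rng.multivariate_normal(means[k], covs[k])
                y = H @ x + rng.multivariate_normal(np.zeros(ny), R)
                err += np.sum((mmse_estimate(y, H) - x) ** 2)
            return err / n_samples

        H0 = np.eye(ny, nx)
        print(mc_mse(H0))  # the objective a stochastic-gradient search over H would descend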

    Value Iteration for Long-run Average Reward in Markov Decision Processes

    Markov decision processes (MDPs) are standard models for probabilistic systems with non-deterministic behaviours. Long-run average rewards provide a mathematically elegant formalism for expressing long term performance. Value iteration (VI) is one of the simplest and most efficient algorithmic approaches to MDPs with other properties, such as reachability objectives. Unfortunately, a naive extension of VI does not work for MDPs with long-run average rewards, as there is no known stopping criterion. In this work our contributions are threefold. (1) We refute a conjecture related to stopping criteria for MDPs with long-run average rewards. (2) We present two practical algorithms for MDPs with long-run average rewards based on VI. First, we show that a combination of applying VI locally for each maximal end-component (MEC) and VI for reachability objectives can provide approximation guarantees. Second, extending the above approach with a simulation-guided on-demand variant of VI, we present an anytime algorithm that is able to deal with very large models. (3) Finally, we present experimental results showing that our methods significantly outperform the standard approaches on several benchmarks.
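
    For intuition, here is a toy illustration of "naive" value iteration for long-run average reward on a small, hypothetical two-state MDP, using the span of successive value differences as a stopping rule. On this unichain example the rule happens to converge; the paper's point is that no such naive criterion is sound in general, which motivates its MEC-decomposition and reachability-based algorithms.

        # Toy "naive" VI for long-run average reward; the span-based stop below is
        # a heuristic, not a sound criterion for general MDPs.  Model is hypothetical.
        # P[s][a] = list of (probability, next_state); R[s][a] = immediate reward.
        P = {
            0: {"stay": [(1.0, 0)], "go": [(0.9, 1), (0.1, 0)]},
            1: {"stay": [(1.0, 1)], "back": [(1.0, 0)]},
        }
        R = {
            0: {"stay": 1.0, "go": 0.0},
            1: {"stay": 2.0, "back": 0.0},
        }

        def average_reward_vi(eps=1e-6, max_iter=100_000):
            V = {s: 0.0 for s in P}
            for _ in range(max_iter):
                newV = {
                    s: max(R[s][a] + sum(p * V[t] for p, t in P[s][a]) for a in P[s])
                    for s in P
                }
                diff = [newV[s] - V[s] for s in P]
                V = newV
                if max(diff) - min(diff) < eps:      # span of differences: heuristic stop
                    return (max(diff) + min(diff)) / 2  # gain estimate
            raise RuntimeError("did not converge")

        print(average_reward_vi())  # ~2.0: move to state 1 and stay there forever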

    Decidability Results for Multi-objective Stochastic Games

    We study stochastic two-player turn-based games in which the objective of one player is to ensure several infinite-horizon total reward objectives, while the other player attempts to spoil at least one of the objectives. The games have previously been shown not to be determined, and an approximation algorithm for computing a Pareto curve has been given. The major drawback of the existing algorithm is that it needs to compute Pareto curves for finite horizon objectives (for increasing length of the horizon), and the size of these Pareto curves can grow unboundedly, even when the infinite-horizon Pareto curve is small. By adapting existing results, we first give an algorithm that computes the Pareto curve for determined games. Then, as the main result of the paper, we show that for the natural class of stopping games and when there are two reward objectives, the problem of deciding whether a player can ensure satisfaction of the objectives with given thresholds is decidable. The result relies on an intricate and novel proof which shows that the Pareto curves contain only finitely many points. As a consequence, we get that the two-objective discounted-reward problem for the unrestricted class of stochastic games is decidable.
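
    For concreteness, a standard way to state the threshold problem the abstract refers to (the notation here is mine, not taken from the paper): Player 1 achieves the threshold vector (v_1, v_2) for total reward functions rew_1, rew_2 iff

        \exists \sigma \;\; \forall \pi : \quad
        \mathbb{E}^{\sigma,\pi}\!\left[\mathrm{rew}_1\right] \ge v_1
        \;\;\wedge\;\;
        \mathbb{E}^{\sigma,\pi}\!\left[\mathrm{rew}_2\right] \ge v_2 ,

    where \sigma ranges over strategies of Player 1 and \pi over strategies of Player 2. The Pareto curve is the set of maximal achievable vectors; the key result above is that for stopping games this curve contains only finitely many points, which yields decidability of the threshold problem.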

    Quantitative multi-objective verification for probabilistic systems

    We present a verification framework for analysing multiple quantitative objectives of systems that exhibit both nondeterministic and stochastic behaviour. These systems are modelled as probabilistic automata, enriched with cost or reward structures that capture, for example, energy usage or performance metrics. Quantitative properties of these models are expressed in a specification language that incorporates probabilistic safety and liveness properties, expected total cost or reward, and supports multiple objectives of these types. We propose and implement an efficient verification framework for such properties and then present two distinct applications of it: firstly, controller synthesis subject to multiple quantitative objectives; and, secondly, quantitative compositional verification. The practical applicability of both approaches is illustrated with experimental results from several large case studies.
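
    As a small concrete instance of the kinds of objectives such a specification language combines, the sketch below checks a probabilistic reachability threshold and an expected total cost bound on a four-state Markov chain obtained by fixing one scheduler of a hypothetical probabilistic automaton. The model, rewards and thresholds are invented for illustration; the actual framework additionally handles nondeterminism, synthesis and compositional reasoning, which this sketch does not.

        # Toy multi-objective check on a fixed-scheduler Markov chain (hypothetical model):
        # is P(reach goal) >= 0.8 and is the expected total energy cost <= 10?
        import numpy as np

        # states: 0 = init, 1 = retry, 2 = goal (absorbing), 3 = fail (absorbing)
        P = np.array([
            [0.0, 0.3, 0.6, 0.1],
            [0.5, 0.0, 0.4, 0.1],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0],
        ])
        energy = np.array([2.0, 3.0, 0.0, 0.0])  # cost charged per visit to a state
        goal = 2

        def solve(transient, target):
            """Solve (I - P_TT) x = target restricted to the transient states."""
            A = np.eye(len(transient)) - P[np.ix_(transient, transient)]
            return np.linalg.solve(A, target)

        transient = [0, 1]
        p_reach = solve(transient, P[np.ix_(transient, [goal])].ravel())
        e_cost  = solve(transient, energy[transient])

        print(p_reach[0] >= 0.8, e_cost[0] <= 10.0)  # multi-objective query at state 0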