Realizable strategies in continuous-time Markov decision processes
For the Borel model of the continuous-time Markov decision process, we introduce a wide class of control strategies. In one special case, such strategies reduce to the standard relaxed strategies, which have been studied intensively over the last decade. In another special case, restricting to a particular subclass of the general strategies transforms the model into a semi-Markov decision process. Further, we show that the relaxed strategies are not realizable. For the constrained optimal control problem with total expected costs, we describe a sufficient class of realizable strategies, the so-called Poisson-related strategies. Finally, we show that, for solving the formulated optimal control problems, one can use all the tools developed earlier for classical discrete-time Markov decision processes.
Optimized Bacteria are Environmental Prediction Engines
Experimentalists have observed phenotypic variability in isogenic bacteria
populations. We explore the hypothesis that in fluctuating environments this
variability is tuned to maximize a bacterium's expected log growth rate,
potentially aided by epigenetic markers that store information about past
environments. We show that, in a complex, memoryful environment, the maximal
expected log growth rate is linear in the instantaneous predictive
information---the mutual information between a bacterium's epigenetic markers
and future environmental states. Hence, under resource constraints, optimal
epigenetic markers are causal states---the minimal sufficient statistics for
prediction. This is the minimal amount of information about the past needed to
predict the future as well as possible. We suggest new theoretical
investigations into and new experiments on bacteria phenotypic bet-hedging in
fluctuating complex environments.
Comment: 7 pages, 1 figure;
http://csc.ucdavis.edu/~cmg/compmech/pubs/obepe.ht
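The abstract's central claim links growth rate to the mutual information between epigenetic markers and environmental states. As a minimal sketch of that quantity (the joint distribution below is purely hypothetical, not taken from the paper), mutual information can be computed directly from a joint probability table:

```python
import numpy as np

# Hypothetical joint distribution p(marker, environment) over 2 epigenetic
# marker states and 2 environmental states; all numbers are illustrative.
p_joint = np.array([[0.4, 0.1],
                    [0.1, 0.4]])

p_marker = p_joint.sum(axis=1)   # marginal distribution over markers
p_env = p_joint.sum(axis=0)      # marginal distribution over environments

# Mutual information I(marker; environment) in bits.
mi = sum(
    p_joint[i, j] * np.log2(p_joint[i, j] / (p_marker[i] * p_env[j]))
    for i in range(2) for j in range(2)
    if p_joint[i, j] > 0
)
print(round(mi, 3))  # → 0.278
```

Here a marker that correlates strongly with the environment (the heavy diagonal) carries roughly 0.28 bits of predictive information; an independent joint table would give exactly zero.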
What's in a Name? The Matrix as an Introduction to Mathematics
In my classes on the nature of scientific thought, I have often used the movie The Matrix (1999) to illustrate how evidence shapes the reality we perceive (or think we perceive). As a mathematician and self-confessed science fiction fan, I usually field questions related to the movie whenever the subject of linear algebra arises, since this field is the study of matrices and their properties. So it is natural to ask, why does the movie title reference a mathematical object?
Of course, there are many possible explanations for this, each of which probably contributed a little to the naming decision. First off, it sounds cool and mysterious. That much is clear, and it may be that this reason is the most heavily weighted of them all. However, a quick look at the definitions of the word reveals deeper possibilities for the meaning of the movie’s title. Consider the following definitions related to different fields of study taken from Wikipedia on January 4, 2010:
• Matrix (mathematics), a mathematical object generally represented as an array of numbers.
• Matrix (biology), with numerous meanings, often referring to a biological material where specialized structures are formed or embedded.
• Matrix (archeology), the soil or sediment surrounding a dig site.
• Matrix (geology), the fine grains between larger grains in igneous or sedimentary rocks.
• Matrix (chemistry), a continuous solid phase in which particles (atoms, molecules, ions, etc.) are embedded.
All of these point to an essential commonality: a matrix is an underlying structure in which other objects are embedded. This is to be expected, I suppose, given that the word is derived from the Latin word referring to the womb — something in which all of us are embedded at the beginning of our existence. And so mathematicians, being the Latin scholars we are, have adapted the term: a mathematical matrix has quantities (usually numbers, but they could be almost anything) embedded in it. A biological matrix has cell components embedded in it. A geological matrix has grains of rock embedded in it. And so on. So a second reason for the cool name is that we are talking, in the movie, about a computer system generating a virtual reality in which human beings are embedded (literally, since they are lying down in pods). Thus, the computer program forms a literal matrix, one that bears an intentional likeness to a womb.
However, there are other ways to connect the idea of a matrix to the film's premise. These explanations operate on a higher level and are explicitly relevant to the mathematical definition of a matrix as well as to the events in the trilogy of Matrix movies. They are related to computer graphics, Markov chains, and network theory. This essay will explore each of these in turn, and discuss their application either to the events in the film's story-line or to the making of the movie itself.
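Of the connections the essay names, the computer-graphics one is the most concrete: a matrix encodes a geometric transformation applied to the points embedded in it. A minimal sketch (the angle and point are chosen for illustration) of a 2×2 rotation matrix acting on a point:

```python
import numpy as np

theta = np.pi / 2  # rotate by 90 degrees counter-clockwise
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

point = np.array([1.0, 0.0])   # a point on the x-axis
rotated = R @ point            # matrix-vector product applies the rotation
print(np.round(rotated, 6))    # → [0. 1.]
```

Graphics pipelines chain many such matrices (rotations, scalings, projections) into a single product, which is one reason the matrix is the workhorse object of rendered virtual worlds.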
Mean-Payoff Optimization in Continuous-Time Markov Chains with Parametric Alarms
Continuous-time Markov chains with alarms (ACTMCs) allow for alarm events
that can be non-exponentially distributed. Within parametric ACTMCs, the
parameters of alarm-event distributions are not given explicitly and can be
the subject of parameter synthesis. An algorithm solving the ε-optimal
parameter synthesis problem for parametric ACTMCs with long-run average
optimization objectives is presented. Our approach is based on reduction of the
problem to finding long-run average optimal strategies in semi-Markov decision
processes (semi-MDPs) and sufficient discretization of parameter (i.e., action)
space. Since the set of actions in the discretized semi-MDP can be very large,
a straightforward approach based on explicit action-space construction fails to
solve even simple instances of the problem. The presented algorithm uses an
enhanced policy iteration on symbolic representations of the action space. The
soundness of the algorithm is established for parametric ACTMCs with
alarm-event distributions satisfying four mild assumptions that are shown to
hold for uniform, Dirac and Weibull distributions in particular, but are
satisfied for many other distributions as well. An experimental implementation
shows that the symbolic technique substantially improves the efficiency of the
synthesis algorithm and allows us to solve instances of realistic size.
Comment: This article is a full version of a paper accepted to the Conference
on Quantitative Evaluation of SysTems (QEST) 201
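The paper's enhanced policy iteration over symbolic action-space representations is not reproduced here, but the underlying idea of policy iteration can be sketched on a tiny explicit MDP. The sketch below uses a discounted criterion for simplicity (the paper targets long-run average objectives), and all states, transitions, and rewards are hypothetical:

```python
import numpy as np

# Tiny hypothetical MDP: 2 states, 2 actions.
# P[a][s, s'] = transition probability; R[a, s] = expected one-step reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
gamma = 0.9
n_states = 2

policy = np.zeros(n_states, dtype=int)
while True:
    # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
    P_pi = np.array([P[policy[s], s] for s in range(n_states)])
    r_pi = np.array([R[policy[s], s] for s in range(n_states)])
    v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
    # Policy improvement: greedy one-step lookahead over all actions.
    q = R + gamma * (P @ v)          # q[a, s]
    new_policy = q.argmax(axis=0)
    if np.array_equal(new_policy, policy):
        break                        # policy is stable, hence optimal
    policy = new_policy
print(policy, np.round(v, 2))
```

For a finite MDP this loop terminates with a policy whose value function satisfies the Bellman optimality equation; the difficulty the paper addresses is that after discretizing a continuous parameter space, the explicit action set becomes far too large for this naive enumeration.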
On reducing a constrained gradual-impulsive control problem for a jump Markov model to a model with gradual control only
In this paper we consider a gradual-impulsive control problem for continuous-time Markov decision processes (CTMDPs) with total cost criteria and constraints. We develop a simple and useful method that reduces the problem under consideration to a standard CTMDP problem with gradual control only. This allows us to derive, straightforwardly and under a minimal set of conditions, the optimality results (sufficient classes of control policies, as well as the existence of stationary optimal policies) for the original constrained gradual-impulsive control problem.