
    Two adaptive rejection sampling schemes for probability density functions with log-convex tails

    Monte Carlo methods are often necessary for the implementation of optimal Bayesian estimators. A fundamental technique that can be used to generate samples from virtually any target probability distribution is the so-called rejection sampling method, which generates candidate samples from a proposal distribution and then accepts or rejects them by testing the ratio of the target and proposal densities. The class of adaptive rejection sampling (ARS) algorithms is particularly interesting because they can achieve high acceptance rates. However, the standard ARS method can only be used with log-concave target densities. For this reason, many generalizations have been proposed. In this work, we investigate two different adaptive schemes that can be used to draw exactly from a large family of univariate probability density functions (pdfs), not necessarily log-concave, possibly multimodal and with tails of arbitrary concavity. These techniques are adaptive in the sense that every time a candidate sample is rejected, the acceptance rate is improved. The two proposed algorithms can work properly when the target pdf is multimodal, with first and second derivatives analytically intractable, and when the tails are log-convex in an infinite domain. Therefore, they can be applied in a number of scenarios in which the other generalizations of the standard ARS fail. Two illustrative numerical examples are shown.
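The accept/reject test described in the abstract can be sketched in a few lines. This is a generic, illustrative rejection sampler, not either of the adaptive schemes proposed in the work; the Gaussian target, uniform proposal, and envelope constant `m` are assumptions chosen purely for the example:

```python
import math
import random

def rejection_sample(target, proposal_sample, proposal_pdf, m, n):
    """Draw n samples from `target` (an unnormalized pdf) by rejection
    sampling, assuming m * proposal_pdf(x) >= target(x) everywhere."""
    samples = []
    while len(samples) < n:
        x = proposal_sample()        # candidate from the proposal
        u = random.random()          # uniform(0, 1) test variate
        if u * m * proposal_pdf(x) <= target(x):
            samples.append(x)        # accept; otherwise draw again
    return samples

# Example: unnormalized standard normal target, uniform proposal on [-5, 5].
target = lambda x: math.exp(-0.5 * x * x)
proposal_pdf = lambda x: 0.1 if -5 <= x <= 5 else 0.0
proposal_sample = lambda: random.uniform(-5, 5)
m = 10.0  # target <= 1 and proposal_pdf = 0.1, so m * 0.1 >= target

random.seed(0)
xs = rejection_sample(target, proposal_sample, proposal_pdf, m, 1000)
mean = sum(xs) / len(xs)
```

The acceptance rate here is the ratio of the target's area to the envelope's area; the point of adaptive schemes is precisely to shrink that gap after every rejection.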

    On the Generalized Ratio of Uniforms as a Combination of Transformed Rejection and Extended Inverse of Density Sampling

    Document deposited in the arXiv.org repository. Version: arXiv:1205.0482v6 [stat.CO]. In this work we investigate the relationship among three classical sampling techniques: the inverse of density (Khintchine's theorem), the transformed rejection (TR) and the generalized ratio of uniforms (GRoU). Given a monotonic probability density function (PDF), we show that the transformed area obtained using the generalized ratio of uniforms method can be found equivalently by applying the transformed rejection sampling approach to the inverse function of the target density. Then we provide an extension of the classical inverse-of-density idea, showing that it is completely equivalent to the GRoU method for monotonic densities. Although we concentrate on monotonic PDFs, we also discuss how the results presented here can be extended to any non-monotonic PDF that can be decomposed into a collection of intervals where it is monotonically increasing or decreasing. In this general case, we show the connections of transformations of certain random variables and the generalized inverse PDF with the GRoU technique. Finally, we also introduce a GRoU technique to handle unbounded target densities.
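For readers unfamiliar with the ratio-of-uniforms idea the abstract builds on, the classical (non-generalized) form is easy to state: if (u, v) is uniform on the region 0 < u <= sqrt(p(v/u)), then x = v/u has density proportional to p. A minimal sketch, assuming a standard normal target and the standard bounding rectangle (this is the textbook RoU method, not the GRoU extension studied in the paper):

```python
import math
import random

def rou_normal(n, seed=0):
    """Classical ratio-of-uniforms sampler for a standard normal.
    With p(x) = exp(-x^2 / 2), draw (u, v) uniform on the rectangle
    0 < u <= 1, |v| <= sqrt(2/e), and accept when u^2 <= p(v/u);
    then x = v/u is a sample from p (up to normalization)."""
    rng = random.Random(seed)
    vmax = math.sqrt(2.0 / math.e)  # sup |x| * sqrt(p(x)) at x = sqrt(2)
    out = []
    while len(out) < n:
        u = rng.uniform(0.0, 1.0)
        v = rng.uniform(-vmax, vmax)
        if u > 0 and u * u <= math.exp(-0.5 * (v / u) ** 2):
            out.append(v / u)
    return out

xs = rou_normal(2000)
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
```

The bounding rectangle exists because both sqrt(p) and |x| sqrt(p(x)) are bounded for this target; the unbounded-density case is exactly what the GRoU variant in the paper addresses.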

    From phenomenological modelling of anomalous diffusion through continuous-time random walks and fractional calculus to correlation analysis of complex systems

    This document contains more than one topic, but they are all connected by either physical analogy, analytic/numerical resemblance, or because one is a building block of another. The topics are anomalous diffusion, modelling of stylised facts based on an empirical random-walker diffusion model, and null-hypothesis tests in time-series data analysis reusing the same diffusion model. These topics are interrupted by an introduction of new methods for the fast production of random numbers and matrices of certain types. This interruption constitutes the entire chapter on random numbers, which is purely algorithmic and was inspired by the need for fast random numbers of special types. The sequence of chapters is chronologically meaningful in the sense that fast random numbers are needed in the first topic, dealing with continuous-time random walks (CTRWs) and their connection to fractional diffusion. The contents of the last four chapters were indeed produced in this sequence, but with some temporal overlap. While the fast Monte Carlo solution of the time- and space-fractional diffusion equation is a nice application that sped up hugely with our new method, we were also interested in CTRWs as a model for certain stylised facts. Without knowing it, economists [80] reinvented what physicists had subconsciously used for decades already: the so-called stylised fact, for which another word can be empirical truth. A simple example: the diffusion equation gives a probability, at a certain time, of finding a certain diffusive particle in some position, or indicates the concentration of a dye. It is debatable whether probability is physical reality. Most importantly, it does not describe the physical system completely. Instead, the equation describes only a certain expectation value of interest, where it does not matter whether it is grains, prices or people which diffuse away. Reality is coded and "averaged" in the diffusion constant.
Interpreting a CTRW as an abstract microscopic particle-motion model, it can solve the time- and space-fractional diffusion equation. This type of diffusion equation mimics some types of anomalous diffusion, a name usually given to effects that cannot be explained by classic stochastic models, in particular not by the classic diffusion equation. It was recognised only recently, ca. in the mid 1990s, that the random walk model used here is the abstract particle-based counterpart of the macroscopic time- and space-fractional diffusion equation, just like the "classic" random walk with regular jumps ±∆x solves the classic diffusion equation. Both equations can be solved in a Monte Carlo fashion with many realisations of walks. Interpreting the CTRW as a time-series model, it can serve as a possible null-hypothesis scenario in applications with measurements that behave similarly. It may be necessary to simulate many null-hypothesis realisations of the system to give a (probabilistic) answer to what the "outcome" is under the assumption that the particles, stocks, etc. are not correlated. Another topic is (random) correlation matrices. These are partly built on the previously introduced continuous-time random walks and are important in null-hypothesis testing, data analysis and filtering. The main objects encountered in dealing with these matrices are eigenvalues and eigenvectors. The latter are carried over to the following topic of mode analysis and application in clustering. The presented properties of correlation matrices of correlated measurements seem to be wasted in contemporary methods of clustering with (dis-)similarity measures from time series. Most applications of spectral clustering ignore this information and are not able to distinguish between certain cases. The suggested procedure is supposed to identify and separate out clusters by using additional information coded in the eigenvectors.
In addition, random matrix theory can also serve to analyse microarray data for the extraction of functional genetic groups, and it also suggests an error model. Finally, the last topic on synchronisation analysis of electroencephalogram (EEG) data resurrects the eigenvalues and eigenvectors as well as the mode analysis, but this time of matrices made of synchronisation coefficients of neurological activity.
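The Monte Carlo view of a CTRW mentioned above is simple to sketch: each walker alternates heavy-tailed waiting times with jumps, and an ensemble of walkers traces out the solution of the corresponding (sub)diffusion equation. The following is an illustrative toy, not the thesis's fast method; the Pareto waiting-time exponent and Gaussian jumps are assumptions chosen for the example:

```python
import random

def ctrw_positions(n_walkers, t_final, alpha=0.7, seed=0):
    """Monte Carlo sketch of a continuous-time random walk (CTRW):
    each walker waits a heavy-tailed (Pareto, tail exponent alpha < 1)
    time between jumps, then makes a Gaussian jump.  Such walks are a
    particle-based counterpart of time-fractional (sub)diffusion."""
    rng = random.Random(seed)
    positions = []
    for _ in range(n_walkers):
        t, x = 0.0, 0.0
        while True:
            # Pareto waiting time: (1 - u)**(-1/alpha) has a power-law tail
            w = (1.0 - rng.random()) ** (-1.0 / alpha)
            if t + w > t_final:
                break              # walker is stuck waiting past t_final
            t += w
            x += rng.gauss(0.0, 1.0)
        positions.append(x)
    return positions

# Ensemble of walker positions at time t_final; a histogram of these
# approximates the propagator of the fractional diffusion equation.
pos = ctrw_positions(500, 200.0)
```

Because the mean waiting time diverges for alpha < 1, the number of jumps per walker grows sublinearly in time, which is what produces subdiffusive spreading.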

    A systems engineering analysis of energy economy options for the DDG-51 class of U.S. Naval ships

    The SECNAV has identified an ambitious set of goals for the Navy's energy programs. The authors addressed DoN energy surety, economy, and ecology goals; scoped the problem to focus on the economy aspect of the DoN's energy goal; and further bounded the analysis to the energy economy of the DDG-51 class of surface combatants, which appeared to be an area with potentially high return on investment. The team determined that if energy was conserved or better utilized, then the triad of SECNAV goals for energy surety, economy and ecology was positively addressed. This report documents a method to assess energy consumption that could be used to make trade-offs for current and future ships. Eight subsystems, along with fuel type, were researched for alternative solutions, with eight of nine subsystem alternatives resulting as "more cost effective." By implementing the optimal recommendations from our team findings and using the fully burdened cost of fuel, we estimate that the DDG-51 program could save $1.9M per year per ship. For a fleet of 50 ships, this translates to a savings of $950M over ten years. http://archive.org/details/asystemsengineer109456950 Approved for public release; distribution is unlimited.
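The fleet-level figure follows directly from the per-ship estimate; a one-line check of the arithmetic, using the report's numbers ($1.9M per ship per year, 50 ships, ten years):

```python
# Per-ship annual savings, fleet size, and horizon from the report.
per_ship_per_year = 1.9e6   # dollars
fleet_size = 50
years = 10

fleet_savings = per_ship_per_year * fleet_size * years  # $950M
```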

    Novel schemes for adaptive rejection sampling

    We address the problem of generating random samples from a target probability distribution, with density function pₒ, using accept/reject methods. An "accept/reject sampler" (or, simply, a "rejection sampler") is an algorithm that first draws a random variate from a proposal distribution with density π (where π ≠ pₒ, in general) and then performs a test to determine whether the variate can be accepted as a sample from the target distribution or not. If we apply the algorithm repeatedly until we accept n times, then we obtain a collection of n independent and identically distributed (i.i.d.) samples from the distribution with density pₒ. The goal of the present work is to devise and analyze adaptive rejection samplers that can be applied to generate i.i.d. random variates from the broadest possible class of probability distributions. Adaptive rejection sampling algorithms typically construct a sequence of proposal functions π₀, π₁, ..., πₜ, ..., such that (a) it is easy to draw i.i.d. samples from them and (b) they converge, in some way, to the density pₒ of the target probability distribution. When surveying the literature, it is simple to identify several such methods, but virtually all of them present severe limitations in the class of target densities pₒ for which they can be applied. The "standard" adaptive rejection sampler by Gilks and Wild, for instance, only works when pₒ is strictly log-concave. Through Chapters 3, 4 and 5 we introduce a new methodology for adaptive rejection sampling that can be used with a broad family of target probability densities (including, e.g., multimodal functions) and subsumes Gilks and Wild's method as a particular case. We discuss several variations of the main algorithm that enable, e.g., sampling from some particularly "difficult" distributions (for instance, cases where pₒ has log-convex tails and infinite support) or yield "automatic" software implementations using little analytical information about the target density pₒ.
Several numerical examples, including comparisons with some of the most relevant techniques in the literature, are also shown in Chapter 6.
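The defining property of an adaptive rejection sampler, as the abstract states, is that every rejection refines the proposal. The sketch below illustrates that mechanism with a piecewise-constant envelope on a bounded interval; it is a heuristic illustration, not the thesis's algorithm, and the safety-factor bound on each piece is an assumption standing in for the exact bounds a real ARS scheme derives:

```python
import bisect
import math
import random

def adaptive_rejection_sample(target, lo, hi, n, seed=0):
    """Adaptive accept/reject on [lo, hi]: a piecewise-constant
    envelope upper-bounds the target on each subinterval; every
    rejected candidate becomes a new knot, so the envelope (and
    hence the acceptance rate) improves as sampling proceeds."""
    rng = random.Random(seed)
    knots = [lo, hi]

    def piece_bound(a, b):
        # Heuristic bound: max of target at endpoints and midpoint,
        # inflated by a safety factor (assumption for this sketch).
        m = 0.5 * (a + b)
        return 1.5 * max(target(a), target(m), target(b))

    samples = []
    while len(samples) < n:
        bounds = [piece_bound(knots[i], knots[i + 1])
                  for i in range(len(knots) - 1)]
        weights = [b * (knots[i + 1] - knots[i])
                   for i, b in enumerate(bounds)]
        # choose a piece proportionally to its envelope area
        r = rng.random() * sum(weights)
        i = 0
        while i < len(weights) - 1 and r > weights[i]:
            r -= weights[i]
            i += 1
        x = rng.uniform(knots[i], knots[i + 1])
        if rng.random() * bounds[i] <= target(x):
            samples.append(x)
        else:
            bisect.insort(knots, x)   # refine the envelope where it failed
    return samples

# Bimodal, non-log-concave target (unnormalized) on [-4, 4] -- the kind
# of density the standard Gilks-and-Wild sampler cannot handle.
target = lambda x: math.exp(-0.5 * (x - 2) ** 2) + math.exp(-0.5 * (x + 2) ** 2)
xs = adaptive_rejection_sample(target, -4.0, 4.0, 500)
```

Note that this toy requires a bounded support; handling log-convex tails over an infinite domain is exactly the harder case the thesis's methods address.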

    Bactrian Gold: Challenges and Hope for Private-Sector Development in Afghanistan

    Based on interviews with Afghanistan's business and economic stakeholders about developing the country's private sector, this report outlines obstacles to business growth, including security, corruption, and infrastructure; their implications; and recommendations.