
    Confidence limits of evolutionary synthesis models. IV. Moving forward to a probabilistic formulation

    Synthesis models predict the integrated properties of stellar populations. Several problems exist in this field, mostly related to the fact that integrated properties are distributed. To date, this aspect has been either ignored (as in standard synthesis models, which are inherently deterministic) or interpreted phenomenologically (as in Monte Carlo simulations, which describe distributed properties rather than explain them). We approach population synthesis as a problem in probability theory, in which stellar luminosities are random variables extracted from the stellar luminosity distribution function (sLDF). We derive the population LDF (pLDF) for clusters of any size from the sLDF, obtaining the scale relations that link the sLDF to the pLDF. We recover the predictions of standard synthesis models, which are shown to compute the mean of the sLDF. We provide diagnostic diagrams and a simplified recipe for testing the statistical richness of observed clusters, thereby assessing whether standard synthesis models can be safely used or a statistical treatment is mandatory. We also recover the predictions of Monte Carlo simulations, with the additional bonus of being able to interpret them in mathematical and physical terms. We give examples of problems that can be addressed through our probabilistic formalism. Though still under development, ours is a powerful approach to population synthesis. In an era of resolved observations and pipelined analyses of large surveys, this paper is offered as a signpost in the field of stellar populations. (Comment: Accepted by A&A. Substantially modified with respect to the first draft. 26 pages, 14 figures.)
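    The scale relation between the sLDF and the pLDF can be illustrated with a small Monte Carlo sketch. This is a toy model, not the paper's formalism: it assumes a hypothetical power-law luminosity distribution with P(L > l) = (l_min/l)^2.5, whose mean is 5/3. The population luminosity of an N-star cluster is the sum of N independent draws from the sLDF, so its mean scales as N times the stellar mean while its relative dispersion shrinks as the cluster grows statistically richer:

```python
import random
import statistics

random.seed(42)

# Hypothetical power-law sLDF (a stand-in, not a real stellar model):
# P(L > l) = (l_min / l)^beta with beta = 2.5, so <l> = beta/(beta-1) = 5/3.
def sample_stellar_luminosity(beta=2.5, l_min=1.0):
    return l_min * (1.0 - random.random()) ** (-1.0 / beta)

# Population luminosity of an N-star cluster: the sum of N i.i.d.
# draws from the sLDF, i.e. one draw from the pLDF.
def cluster_luminosity(n_stars):
    return sum(sample_stellar_luminosity() for _ in range(n_stars))

mean_l = statistics.mean(sample_stellar_luminosity() for _ in range(200_000))

results = {}
for n in (10, 1000):
    clusters = [cluster_luminosity(n) for _ in range(2000)]
    mean_L = statistics.mean(clusters)
    rel_disp = statistics.stdev(clusters) / mean_L
    results[n] = (mean_L, rel_disp)
    print(f"N={n}: mean L = {mean_L:.1f} vs N*<l> = {n * mean_l:.1f}, "
          f"relative dispersion = {rel_disp:.3f}")
```

    The N=10 cluster shows a large relative dispersion around the deterministic prediction, which is the regime where a statistical treatment is mandatory; at N=1000 the standard (mean-based) model is already a good description.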

    A Weakest Pre-Expectation Semantics for Mixed-Sign Expectations

    We present a weakest-precondition-style calculus for reasoning about the expected values (pre-expectations) of mixed-sign unbounded random variables after execution of a probabilistic program. The semantics of a while-loop is well-defined as the limit of iteratively applying a functional to a zero element, just as in the traditional weakest pre-expectation calculus, even though a standard least-fixed-point argument is not applicable in this context. A striking feature of our semantics is that it is always well-defined, even if the expected values do not exist. We show that the calculus is sound and allows for compositional reasoning, and we present an invariant-based approach for reasoning about pre-expectations of loops.
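    The limit-of-iterates loop semantics can be sketched on a toy program (with a nonnegative post-expectation for simplicity; the paper's contribution is that the same construction stays well-defined for mixed-sign ones). For the program `while (c == 1) { x := x + 1; c := flip(1/2) }` with post-expectation f = x, the loop's pre-expectation is the limit of applying the characteristic functional to the zero element: X_0 = 0 and X_{k+1}(c, x) = [c = 0]·x + [c = 1]·(X_k(0, x+1) + X_k(1, x+1))/2. From (c = 1, x = 0) the body runs a Geometric(1/2) number of times, so the exact pre-expectation is 2:

```python
# k-fold application of the characteristic functional, starting
# from the zero element X_0 = 0.
def iterate_wp(k, c, x):
    if k == 0:
        return 0.0            # the zero element
    if c == 0:
        return float(x)       # loop exited: post-expectation f = x
    # Loop body: x := x + 1, then c := flip(1/2).
    return 0.5 * iterate_wp(k - 1, 0, x + 1) + 0.5 * iterate_wp(k - 1, 1, x + 1)

# The iterates converge to the loop's pre-expectation, which is 2.
for k in (2, 5, 10, 60):
    print(f"X_{k}(c=1, x=0) = {iterate_wp(k, 1, 0):.6f}")
```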

    Stochastic abstraction of programs: towards performance-driven development

    Distributed computer systems are becoming increasingly prevalent, thanks to modern technology, and this leads to significant challenges for the software developers of these systems. In particular, in order to provide a certain service level agreement with users, the performance characteristics of the system are critical. However, developers today typically consider performance only in the later stages of development, when it may be too late to make major changes to the design. In this thesis, we propose a performance-driven approach to development, based around tool support that allows developers to use performance modelling techniques while still working at the level of program code. There are two central themes to the thesis. The first is to automatically relate performance models to program code. We define the Simple Imperative Remote Invocation Language (SIRIL), and provide a probabilistic semantics that interprets a program as a Markov chain. To make such an interpretation both computable and efficient, we develop an abstract interpretation of the semantics, from which we can derive a Performance Evaluation Process Algebra (PEPA) model of the system. This is based around abstracting the domain of variables to truncated multivariate normal measures. The second theme of the thesis is to analyse large performance models by means of compositional abstraction. We use two abstraction techniques based on aggregation of states, abstract Markov chains and stochastic bounds, and apply both of them compositionally to PEPA models. This allows us to model-check properties in the three-valued Continuous Stochastic Logic (CSL) on abstracted models. We have implemented an extension to the Eclipse plug-in for PEPA, which provides a graphical interface for specifying which states in the model to aggregate, and for performing the model checking.
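    A minimal sketch of the two ingredients, program-as-Markov-chain and aggregation of states (illustrative only, not the SIRIL/PEPA semantics): a program's control flow is abstracted as a three-state discrete-time chain whose stationary distribution is found by power iteration, and the two non-idle states are then aggregated into a single "busy" macro-state:

```python
# Hypothetical control-flow abstraction of a small remote-invocation
# program as a discrete-time Markov chain over {idle, compute, remote}.
P = [
    [0.0, 0.9, 0.1],   # idle    -> compute (0.9) | remote (0.1)
    [0.5, 0.0, 0.5],   # compute -> idle (0.5)    | remote (0.5)
    [1.0, 0.0, 0.0],   # remote  -> idle
]

def step(dist, P):
    # One step of the chain: dist' = dist * P.
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Power iteration towards the stationary distribution.
dist = [1.0, 0.0, 0.0]
for _ in range(500):
    dist = step(dist, P)

# Aggregate {compute, remote} into one "busy" macro-state.
busy = dist[1] + dist[2]
print(f"stationary: idle = {dist[0]:.4f}, busy = {busy:.4f}")
```

    Aggregation here is exact for a steady-state utilisation query; the thesis's abstract Markov chains and stochastic bounds generalise this to cases where lumping loses information, yielding three-valued CSL answers instead.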

    Risk Analysis in Investment Appraisal

    The methodology and uses of the Monte Carlo simulation technique are presented as applied to the analysis and assessment of risk in the evaluation of investment projects. The importance of risk analysis in investment appraisal is highlighted and the stages in the process introduced. The results generated by a risk analysis application are interpreted, including the investment decision criteria and measures of risk based on the expected value concept. Conclusions are drawn regarding the usefulness and limitations of risk analysis in investment appraisal.

    Keywords: risk analysis; investment appraisal; Monte Carlo simulation; project evaluation; measures of risk; investment decision criteria
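    The core of the technique can be sketched in a few lines (illustrative figures only, not taken from the paper): uncertain yearly cash flows are sampled, each sample is discounted into a net present value, and the resulting NPV distribution yields both the expected value and a downside risk measure such as the probability of a loss:

```python
import random
import statistics

random.seed(7)

# Hypothetical project: an initial outlay of 100 and three years of
# uncertain net cash flows, each modelled as Normal(45, 15).
OUTLAY = 100.0
RATE = 0.10          # discount rate

def simulate_npv():
    npv = -OUTLAY
    for year in (1, 2, 3):
        cash = random.gauss(45.0, 15.0)        # sampled yearly cash flow
        npv += cash / (1.0 + RATE) ** year     # discount to present value
    return npv

npvs = [simulate_npv() for _ in range(50_000)]
expected_npv = statistics.mean(npvs)
risk_of_loss = sum(v < 0 for v in npvs) / len(npvs)
print(f"expected NPV = {expected_npv:.1f}, P(NPV < 0) = {risk_of_loss:.2%}")
```

    A deterministic appraisal would report only the expected NPV (about 12 here) and accept the project; the simulated distribution additionally shows a roughly 29% chance of a negative outcome, which is exactly the kind of risk measure the expected value alone conceals.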

    Template-Based Static Posterior Inference for Bayesian Probabilistic Programming

    In Bayesian probabilistic programming, a central problem is to estimate the normalised posterior distribution (NPD) of a probabilistic program with conditioning. Prominent approximate approaches to address this problem include Markov chain Monte Carlo and variational inference, but neither can generate guaranteed outcomes within limited time. Moreover, most existing formal approaches that perform exact inference for NPD are restricted to programs with closed-form solutions or bounded loops/recursion. A recent work (Beutner et al., PLDI 2022) derived guaranteed bounds for NPD over programs with unbounded recursion. However, as this approach requires recursion unrolling, it suffers from the path explosion problem. Furthermore, previous approaches do not consider score-recursive probabilistic programs that allow score statements inside loops, which is non-trivial and requires careful treatment to ensure the integrability of the normalising constant in NPD. In this work, we propose a novel automated approach to derive bounds for NPD via polynomial templates. Our approach can handle probabilistic programs with unbounded while loops and continuous distributions with infinite supports. The novelties in our approach are threefold: First, we use polynomial templates to circumvent the path explosion problem from recursion unrolling; Second, we derive a novel multiplicative variant of the Optional Stopping Theorem that addresses the integrability issue in score-recursive programs; Third, to increase the accuracy of the derived bounds via polynomial templates, we propose a novel technique of truncation that truncates a program into a bounded range of program values. Experiments over a wide range of benchmarks demonstrate that our approach is time-efficient and can derive bounds for NPD that are comparable with (or tighter than) those of the recursion-unrolling approach (Beutner et al., PLDI 2022).
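    The flavour of guaranteed bounds via truncation can be shown on a hand-computable stand-in (the paper's method uses polynomial templates, not numerical integration): for a model x ~ Exp(1) with score exp(-x), the normalising constant is Z = ∫₀^∞ e^(-x)·e^(-x) dx = 1/2. Truncating the unbounded support to [0, T] and bounding the tail mass analytically yields a certified interval for Z:

```python
import math

def z_bounds(T, n=100_000):
    # Guaranteed bounds on Z = ∫_0^∞ exp(-2x) dx via truncation to [0, T].
    h = T / n
    density = lambda x: math.exp(-2.0 * x)   # decreasing integrand
    # Right Riemann sums underestimate a decreasing integrand on [0, T];
    # left Riemann sums overestimate it.
    lower = sum(density((i + 1) * h) for i in range(n)) * h
    upper = sum(density(i * h) for i in range(n)) * h
    # Analytic bound on the truncated tail: ∫_T^∞ exp(-2x) dx = exp(-2T)/2.
    tail = math.exp(-2.0 * T) / 2.0
    return lower, upper + tail

lo, hi = z_bounds(T=10.0)
print(f"Z in [{lo:.6f}, {hi:.6f}]  (true value 0.5)")
```

    Unlike an MCMC estimate, both endpoints are certified: the true Z provably lies in the interval, and enlarging T or n tightens it at a predictable rate.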

    Risk Analysis in Investment Appraisal

    This paper was prepared for the purpose of presenting the methodology and uses of the Monte Carlo simulation technique as applied in the evaluation of investment projects to analyse and assess risk. The first part of the paper highlights the importance of risk analysis in investment appraisal. The second part presents the various stages in the application of the risk analysis process. The third part examines the interpretation of the results generated by a risk analysis application including investment decision criteria and various measures of risk based on the expected value concept. The final part draws some conclusions regarding the usefulness and limitations of risk analysis in investment appraisal.

    Keywords: risk analysis; Monte Carlo simulation; investment appraisal; project analysis; forecasting and simulation; business administration

    Assessing productive efficiency of banks using integrated Fuzzy-DEA and bootstrapping: a case of Mozambican banks

    Performance analysis has become a vital part of the management practices in the banking industry. There are numerous applications using DEA models to estimate efficiency in banking, and most of them assume that inputs and outputs are known with absolute precision. Here, we propose new Fuzzy-DEA α-level models to assess underlying uncertainty. Further, bootstrap truncated regressions with fixed factors are used to measure the impact of each model on the efficiency scores and to identify the most relevant contextual variables on efficiency. The proposed models have been demonstrated using an application in Mozambican banks to handle the underlying uncertainty. Findings reveal that fuzziness is predominant over randomness in interpreting the results. In addition, fuzziness can be used by decision-makers to identify missing variables to help in interpreting the results. Price of labor, price of capital, and market share were found to be the significant factors in measuring bank efficiency. Managerial implications are addressed.
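    The α-level idea can be sketched on a deliberately tiny example (a single-input, single-output ratio view of efficiency with hypothetical figures, not the paper's full Fuzzy-DEA linear programs): one bank's input is a triangular fuzzy number, each α-cut gives an interval of inputs, and evaluating the efficiency at the interval endpoints yields an efficiency interval that narrows as α rises towards 1:

```python
def alpha_cut(tri, alpha):
    # α-cut of a triangular fuzzy number (a, b, c): an interval that
    # shrinks from [a, c] at α = 0 to the point [b, b] at α = 1.
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

# Hypothetical (input, output) data; bank 0's input is fuzzy.
fuzzy_bank = ((8.0, 10.0, 13.0), 50.0)
crisp_peers = [(12.0, 48.0), (9.0, 36.0)]

def efficiency_interval(alpha):
    in_lo, in_hi = alpha_cut(fuzzy_bank[0], alpha)
    out0 = fuzzy_bank[1]
    # Frontier = best output/input ratio among all banks.
    peer_best = max(o / i for i, o in crisp_peers)
    best = (out0 / in_lo) / max(out0 / in_lo, peer_best)   # optimistic input
    worst = (out0 / in_hi) / max(out0 / in_hi, peer_best)  # pessimistic input
    return worst, best

for alpha in (0.0, 0.5, 1.0):
    w, b = efficiency_interval(alpha)
    print(f"alpha = {alpha}: efficiency in [{w:.3f}, {b:.3f}]")
```

    In this toy case the bank is efficient at its modal input but falls below the frontier at the pessimistic end of the α = 0 cut, which is the kind of spread the full α-level DEA models quantify across all inputs and outputs simultaneously.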

    Verifying Quantitative Reliability of Programs That Execute on Unreliable Hardware

    Emerging high-performance architectures are anticipated to contain unreliable components that may exhibit soft errors, which silently corrupt the results of computations. Full detection and recovery from soft errors is challenging, expensive, and, for some applications, unnecessary. For example, approximate computing applications (such as multimedia processing, machine learning, and big data analytics) can often naturally tolerate soft errors. In this paper we present Rely, a programming language that enables developers to reason about the quantitative reliability of an application, namely, the probability that it produces the correct result when executed on unreliable hardware. Rely allows developers to specify the reliability requirements for each value that a function produces. We present a static quantitative reliability analysis that verifies quantitative requirements on the reliability of an application, enabling a developer to perform sound and verified reliability engineering. The analysis takes a Rely program with a reliability specification and a hardware specification that characterizes the reliability of the underlying hardware components, and verifies that the program satisfies its reliability specification when executed on the underlying unreliable hardware platform. We demonstrate the application of quantitative reliability analysis on six computations implemented in Rely.

    This research was supported in part by the National Science Foundation (Grants CCF-0905244, CCF-1036241, CCF-1138967, and IIS-0835652), the United States Department of Energy (Grant DE-SC0008923), and DARPA (Grants FA8650-11-C-7192, FA8750-12-2-0110).
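    A back-of-envelope sketch of the quantitative-reliability idea (hypothetical failure rates, not Rely's actual static analysis): if each unreliable operation succeeds independently with a probability given by the hardware specification, a straight-line computation's reliability is the product of the per-operation probabilities, which is then checked against the developer's specification:

```python
# Hypothetical hardware specification: per-operation probability of
# producing a correct result on the unreliable substrate.
HW = {"add": 1 - 1e-7, "mul": 1 - 1e-7, "load": 1 - 1e-6}

def straight_line_reliability(ops):
    # Reliability of a straight-line computation: product of the
    # success probabilities of its individual operations.
    r = 1.0
    for op in ops:
        r *= HW[op]
    return r

# e.g. one dot-product step (2 loads, 1 multiply, 1 add), unrolled 1000x.
ops = ["load", "load", "mul", "add"] * 1000
r = straight_line_reliability(ops)
spec = 0.99   # developer's reliability requirement for the result
print(f"reliability = {r:.6f}; meets spec {spec}: {r >= spec}")
```

    Rely's static analysis derives such bounds symbolically from the program and verifies them against the per-value specifications, rather than enumerating operations at run time as this sketch does.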