
    Parameter dependent optimal thresholds, indifference levels and inverse optimal stopping problems

    Consider the classic infinite-horizon problem of stopping a one-dimensional diffusion to optimise between running and terminal rewards, and suppose we are given a parametrised family of such problems. We provide a general theory of parameter dependence in infinite-horizon stopping problems for which threshold strategies are optimal. The crux of the approach is a supermodularity condition which guarantees that the family of problems is indexable by a set-valued map which we call the indifference map. This map is a natural generalisation of the allocation (Gittins) index, a classical quantity in the theory of dynamic allocation. Importantly, the notion of indexability leads to a framework for inverse optimal stopping problems.
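    As a hedged illustration of the setting (the reward functions f and g, discount rate r, and threshold b(θ) below are our own notation, not necessarily the paper's), a parametrised infinite-horizon stopping problem with running and terminal rewards and an optimal threshold rule takes the form

```latex
V(x;\theta) \;=\; \sup_{\tau}\; \mathbb{E}_x\!\left[ \int_0^{\tau} e^{-rt}\, f(X_t;\theta)\,\mathrm{d}t \;+\; e^{-r\tau}\, g(X_\tau;\theta) \right],
\qquad
\tau^{*}(\theta) \;=\; \inf\{\, t \ge 0 : X_t \ge b(\theta) \,\}.
```

    Roughly, the supermodularity condition ensures that the optimal threshold varies monotonically with the parameter θ, which is what makes the family indexable by the indifference map.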

    General Stopping Behaviors of Naive and Non-Committed Sophisticated Agents, with Application to Probability Distortion

    We consider the problem of stopping a diffusion process with a payoff functional that renders the problem time-inconsistent. We study stopping decisions of naive agents who reoptimize continuously in time, as well as equilibrium strategies of sophisticated agents who anticipate but lack control over their future selves' behaviors. When the state process is one-dimensional and the payoff functional satisfies some regularity conditions, we prove that any equilibrium can be obtained as a fixed point of an operator. This operator represents strategic reasoning that takes the future selves' behaviors into account. We then apply the general results to the case when the agents distort probability and the diffusion process is a geometric Brownian motion. The problem is inherently time-inconsistent because the level of distortion of the same event changes over time. We show how the strategic reasoning may turn a naive agent into a sophisticated one. Moreover, we derive stopping strategies of the two types of agents for various parameter specifications of the problem, illustrating rich behaviors beyond the extreme ones such as "never-stopping" or "never-starting".
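    As a sketch of where the time-inconsistency comes from (a representative rank-dependent specification; the paper's exact functional may differ), an agent at time t who distorts probabilities with a weighting function w evaluates a stopping time τ for the geometric Brownian motion X via

```latex
J_t(\tau) \;=\; \int_0^{\infty} w\!\big(\mathbb{P}\big(U(X_\tau) > y \,\big|\, \mathcal{F}_t\big)\big)\,\mathrm{d}y ,
```

    where U is a utility function. Because w is nonlinear, the weight attached to the same event changes as the conditioning time t moves forward, so a plan that is optimal today need not remain optimal tomorrow.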

    Sequential Methods for Non-Parametric Hypothesis Testing

    In today’s world, many applications are characterized by the availability of large amounts of complex-structured data. It is not always possible to fit the data to predefined models or distributions. Model-dependent signal processing approaches are often susceptible to mismatches between the data and the assumed model. In cases where the data does not conform to the assumed model, providing sufficient performance guarantees becomes a challenging task. Therefore, it is important to devise methods that are model-independent, robust, provide sufficient performance guarantees for the task at hand and, at the same time, are simple to implement. The goal of this dissertation is to develop such algorithms for two-sided sequential binary hypothesis testing. In this dissertation, we propose two algorithms for sequential non-parametric hypothesis testing. The proposed algorithms are based on the random distortion testing (RDT) framework. The RDT framework addresses the problem of testing whether a random signal observed in additive noise deviates by more than a specified tolerance from a fixed model. The data-based approach is non-parametric in the sense that the underlying signal distributions under each hypothesis are assumed to be unknown. Importantly, we show that the proposed algorithms are not only robust but also provide performance guarantees in non-asymptotic regimes, in contrast to the popular non-parametric likelihood-ratio-based approaches, which provide only asymptotic performance guarantees.
    In the first part of the dissertation, we develop a sequential algorithm, SeqRDT. We first introduce a few mild assumptions required to control the error probabilities of the algorithm. We then analyze the asymptotic properties of the algorithm along with the behavior of the thresholds. Finally, we derive upper bounds on the probabilities of false alarm (PFA) and missed detection (PMD) and demonstrate how to choose the algorithm parameters such that PFA and PMD can be guaranteed to stay below pre-specified levels. Specifically, we present two ways to design the algorithm: we first introduce the notion of a buffer and show that, with the help of a few mild assumptions, we can choose an appropriate buffer size such that PFA and PMD can be controlled. Later, we eliminate the buffer by introducing additional parameters and show that, with the choice of appropriate parameters, we can still control the probabilities of error of the algorithm.
    In the second part of the dissertation, we propose a truncated (finite-horizon) algorithm, T-SeqRDT, for the two-sided binary hypothesis testing problem. We first present the optimal fixed-sample-size (FSS) test for the hypothesis testing problem and a few important preliminary results required to design the truncated algorithm. Similar to the non-truncated case, we first analyze the properties of the thresholds and then derive upper bounds on PFA and PMD. We then choose the thresholds such that the proposed algorithm not only guarantees the error probabilities to be below pre-specified levels but also makes a decision faster on average compared to its optimal FSS counterpart. We show that the truncated algorithm requires fewer assumptions on the signal model compared to the non-truncated case. We also derive bounds on the average stopping times of the algorithm. Importantly, we study the trade-off between the stopping time and the error probabilities of the algorithm and propose a method to choose the algorithm parameters.
    Finally, via numerical simulations, we compare the performance of T-SeqRDT and SeqRDT to the sequential probability ratio test (SPRT) and composite sequential probability ratio tests. We also show the robustness of the proposed approaches compared to standard likelihood-ratio-based approaches.
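    The Python sketch below shows the generic shape of such a truncated two-threshold sequential test. It is only a schematic stand-in: the running statistic, the fixed model, and the threshold sequences lam_low, lam_high, and lam_final are illustrative placeholders, not the actual RDT statistic or the dissertation's threshold design.

```python
import numpy as np

def truncated_sequential_test(samples, tol, lam_low, lam_high, lam_final, horizon):
    """Schematic truncated two-sided sequential test.

    Declares H1 ("deviation from the model exceeds the tolerance tol") when a
    running deviation estimate clears an upper threshold, H0 when it falls
    below a lower threshold, and forces a decision at the truncation horizon.
    """
    model = 0.0                                   # assumed fixed model (placeholder)
    running_sum = 0.0
    for n, y in enumerate(samples, start=1):
        running_sum += y
        deviation = abs(running_sum / n - model)  # crude deviation estimate
        if deviation >= tol + lam_high / np.sqrt(n):   # confident deviation -> H1
            return "H1", n
        if deviation <= tol - lam_low / np.sqrt(n):    # confident conformity -> H0
            return "H0", n
        if n >= horizon:                               # truncation: forced decision
            return ("H1" if deviation >= tol + lam_final / np.sqrt(n) else "H0"), n
    return "undecided", len(samples)
```

    The trade-off analyzed in the dissertation is visible even in this toy version: tighter thresholds reduce the error probabilities but lengthen the average stopping time, while the horizon caps the worst-case number of samples.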

    Reference-Dependent Preferences

    In this chapter, we present theories and applications of reference-dependent preferences. We provide some historical perspective, but also move quickly to the current research frontier, focusing on developments in reference dependence over the last 20 years. We present a number of worked examples to highlight the broad applicability of reference dependence. While our primary focus is gain–loss utility, we also provide a short treatment of probability weighting and its links to reference dependence.
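    For concreteness, a common parametrisation of gain–loss utility (in the spirit of Kőszegi–Rabin; the specific functional form below is illustrative rather than the chapter's) is

```latex
u(c \mid r) \;=\; m(c) \;+\; \mu\big(m(c) - m(r)\big),
\qquad
\mu(z) \;=\;
\begin{cases}
\eta\, z, & z \ge 0,\\[2pt]
\eta\, \lambda\, z, & z < 0,
\end{cases}
\qquad \lambda > 1,
```

    where m is consumption utility, r the reference point, η the weight on gain–loss utility, and λ > 1 captures loss aversion: losses relative to the reference point loom larger than equally sized gains.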

    Stochastic Analysis in Finance and Insurance

    [No abstract available]

    Sequential Sampling Equilibrium

    This paper introduces an equilibrium framework based on sequential sampling in which players face strategic uncertainty over their opponents' behavior and acquire informative signals to resolve it. Sequential sampling equilibrium delivers a disciplined model featuring an endogenous distribution of choices, beliefs, and decision times that not only rationalizes well-known deviations from Nash equilibrium but also makes novel predictions supported by existing data. It grounds a relationship between empirical learning and strategic sophistication, and generates stochastic choice through randomness inherent to sampling, without relying on indifference or choice mistakes. Further, it provides a rationale for Nash equilibrium when sampling costs vanish.
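    A minimal sketch of the sampling stage for a single player is given below, assuming Gaussian signals about the opponent's mixed action and a crude cost-based stopping rule; the function names, the payoff callback, and the stopping rule are all our own simplifications rather than the paper's equilibrium construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_then_respond(p_opponent, payoff, noise=1.0, cost=0.01, max_steps=10_000):
    """One player's sequential sampling stage: draw noisy signals about the
    probability that the opponent plays action "A", update a Gaussian belief,
    stop when further sampling no longer seems worth its cost, then best-respond."""
    mean, precision = 0.5, 1.0                    # prior belief about P(opponent plays "A")
    for t in range(1, max_steps + 1):
        signal = p_opponent + rng.normal(scale=noise)
        precision += 1.0 / noise**2               # conjugate Gaussian belief update
        mean += (signal - mean) / (noise**2 * precision)
        if 1.0 / np.sqrt(precision) < cost * t:   # ad hoc stopping rule
            break
    belief = float(np.clip(mean, 0.0, 1.0))
    action = max(("A", "B"), key=lambda a: payoff(a, belief))
    return action, belief, t                      # stochastic choice, belief, decision time
```

    Because the signals are random, repeated play of the same game yields a joint distribution over choices, beliefs, and decision times, which is the kind of prediction the equilibrium framework disciplines.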

    The autocorrelated Bayesian sampler: a rational process for probability judgments, estimates, confidence intervals, choices, confidence judgments, and response times

    Normative models of decision-making that optimally transform noisy (sensory) information into categorical decisions qualitatively mismatch human behavior. Indeed, leading computational models have only achieved high empirical corroboration by adding task-specific assumptions that deviate from normative principles. In response, we offer a Bayesian approach that implicitly produces a posterior distribution of possible answers (hypotheses) in response to sensory information. We assume, however, that the brain has no direct access to this posterior and can only sample hypotheses according to their posterior probabilities. Accordingly, we argue that the primary problem of normative concern in decision-making is integrating stochastic hypotheses, rather than stochastic sensory information, to make categorical decisions. This implies that human response variability arises mainly from posterior sampling rather than sensory noise. Because human hypothesis generation is serially correlated, hypothesis samples will be autocorrelated. Guided by this new problem formulation, we develop a new process, the Autocorrelated Bayesian Sampler (ABS), which grounds autocorrelated hypothesis generation in a sophisticated sampling algorithm. The ABS provides a single mechanism that qualitatively explains many empirical effects in probability judgments, estimates, confidence intervals, choices, confidence judgments, response times, and their relationships. Our analysis demonstrates the unifying power of a perspective shift in the exploration of normative models. It also exemplifies the proposal that the “Bayesian brain” operates using samples, not probabilities, and that variability in human behavior may primarily reflect computational rather than sensory noise.
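    A toy sketch of the core idea follows, assuming a one-dimensional hypothesis space and using a plain random-walk Metropolis chain as a stand-in for autocorrelated hypothesis generation; this is not the ABS algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(h, obs, sigma=1.0):
    # Illustrative Gaussian posterior over a scalar hypothesis h given one observation
    return -0.5 * ((h - obs) / sigma) ** 2

def autocorrelated_hypothesis_samples(obs, n_samples=50, step=0.3):
    """Random-walk Metropolis chain: consecutive hypothesis samples are serially
    correlated, mimicking autocorrelated hypothesis generation."""
    h = obs + rng.normal()                  # arbitrary starting hypothesis
    chain = []
    for _ in range(n_samples):
        proposal = h + rng.normal(scale=step)
        if np.log(rng.uniform()) < log_posterior(proposal, obs) - log_posterior(h, obs):
            h = proposal                    # accept; otherwise keep the current hypothesis
        chain.append(h)
    return np.array(chain)

# Decision, confidence, and a response-time proxy all derive from the same samples
samples = autocorrelated_hypothesis_samples(obs=0.4)
choice = samples.mean() > 0.0               # categorical decision by aggregating samples
confidence = float(np.mean(samples > 0.0))  # fraction of samples in favor of h > 0
response_time_proxy = len(samples)          # more samples imply a longer decision
```

    The point of the sketch is only that choice variability, confidence, and response times can all fall out of sampling the posterior, rather than out of sensory noise.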

    Techniques for automated parameter estimation in computational models of probabilistic systems

    The main contribution of this dissertation is the design of two new algorithms for automatically synthesizing values of numerical parameters of computational models of complex stochastic systems such that the resultant model meets user-specified behavioral specifications. These algorithms are designed to operate on probabilistic systems – systems that, in general, behave differently under identical conditions. The algorithms work using an approach that combines formal verification and mathematical optimization to explore a model's parameter space. The problem of determining whether a model instantiated with a given set of parameter values satisfies the desired specification is first defined using formal verification terminology, and then reformulated in terms of statistical hypothesis testing. Parameter space exploration involves determining the outcome of the hypothesis testing query for each parameter point and is guided using simulated annealing. The first algorithm uses the sequential probability ratio test (SPRT) to solve the hypothesis testing problems, whereas the second algorithm uses an approach based on Bayesian statistical model checking (BSMC). The SPRT-based parameter synthesis algorithm was used to validate that a given model of glucose-insulin metabolism has the capability of representing diabetic behavior by synthesizing values of three parameters that ensure that the glucose-insulin subsystem spends at least 20 minutes in a diabetic scenario. The BSMC-based algorithm was used to discover the values of parameters in a physiological model of the acute inflammatory response that guarantee a set of desired clinical outcomes. These two applications demonstrate how our algorithms use formal verification, statistical hypothesis testing, and mathematical optimization to automatically synthesize parameters of complex probabilistic models in order to meet user-specified behavioral properties.
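    A condensed Python sketch of the overall search loop is below, with the statistical check reduced to a plain Monte Carlo estimate; the dissertation uses the SPRT or Bayesian statistical model checking for that step, and simulate, spec_check, and perturb are user-supplied placeholders.

```python
import math
import random

def estimate_satisfaction(params, simulate, spec_check, n_sims=200):
    """Monte Carlo estimate of the probability that the model instantiated with
    `params` satisfies the behavioral specification (stand-in for SPRT / BSMC)."""
    hits = sum(bool(spec_check(simulate(params))) for _ in range(n_sims))
    return hits / n_sims

def synthesize(init_params, simulate, spec_check, perturb,
               target=0.95, temp=1.0, cooling=0.95, iters=500):
    """Simulated-annealing exploration of the parameter space, guided by the
    estimated probability of satisfying the specification."""
    params = init_params
    score = estimate_satisfaction(params, simulate, spec_check)
    for _ in range(iters):
        if score >= target:
            return params                           # specification met: synthesis done
        candidate = perturb(params)
        cand_score = estimate_satisfaction(candidate, simulate, spec_check)
        # Accept improvements outright; accept worse candidates with annealed probability
        if cand_score >= score or random.random() < math.exp((cand_score - score) / temp):
            params, score = candidate, cand_score
        temp *= cooling
    return None                                     # no satisfying parameters within budget
```

    In the glucose-insulin and inflammatory-response case studies, simulate would run the physiological model and spec_check would encode the clinical property of interest.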

    Bayesian Variational Regularisation for Dark Matter Reconstruction with Uncertainty Quantification

    Despite the great wealth of cosmological knowledge accumulated since the early 20th century, the nature of dark matter, which accounts for ~85% of the matter content of the universe, remains elusive. Unfortunately, though dark matter is scientifically interesting, with implications for our fundamental understanding of the Universe, it cannot be directly observed. Instead, dark matter may be inferred from, e.g., the optical distortion (lensing) of distant galaxies which, at linear order, manifests as a perturbation to the apparent magnitude (convergence) and ellipticity (shearing). Ensemble observations of the shear are collected and leveraged to construct estimates of the convergence, which can be directly related to the universal dark-matter distribution. Imminent stage IV surveys are forecast to accrue an unprecedented quantity of cosmological information, a discriminative partition of which is accessible through the convergence and is disproportionately concentrated at high angular resolutions, where the echoes of cosmological evolution under gravity are most apparent. Capitalising on advances in probability concentration theory, this thesis merges the paradigms of Bayesian inference and optimisation to develop hybrid convergence inference techniques which are scalable, statistically principled, and operate over the Euclidean plane, celestial sphere, and 3-dimensional ball. Such techniques can quantify the plausibility of inferences at one-millionth the computational overhead of competing sampling methods. These Bayesian techniques are applied to the hotly debated Abell 520 merging cluster, concluding that observational catalogues contain insufficient information to determine the existence of dark-matter self-interactions. Further, these techniques were applied to all public lensing catalogues, recovering the then-largest global dark-matter mass-map. The primary methodological contributions of this thesis depend only on posterior log-concavity, paving the way towards a potentially revolutionary, complete hybridisation with artificial intelligence techniques. These next-generation techniques are the first to operate over the full 3-dimensional ball, laying the foundations for statistically principled universal dark-matter cartography, and the cosmological insights such advances may provide.
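    As a hedged sketch of the hybrid "Bayesian inference plus optimisation" idea (all symbols below are our own): with a linear forward operator Φ relating the convergence κ to the observed shear γ, the maximum-a-posteriori convergence map solves a convex problem of the form

```latex
\hat{\kappa}_{\mathrm{MAP}}
\;=\;
\arg\min_{\kappa}\;
\frac{1}{2\sigma^{2}}\,\big\lVert \gamma - \Phi\,\kappa \big\rVert_{2}^{2}
\;+\;
\lambda\, R(\kappa),
```

    where R is a convex regulariser. Log-concavity of the corresponding posterior is what allows plausibility (uncertainty) statements to be attached to the optimisation output at a small fraction of the cost of posterior sampling.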