
    On the method of typical bounded differences

    Concentration inequalities are fundamental tools in probabilistic combinatorics and theoretical computer science for proving that random functions are near their means. Of particular importance is the case where f(X) is a function of independent random variables X=(X_1, ..., X_n). Here the well-known bounded differences inequality (also called McDiarmid's or Hoeffding-Azuma inequality) establishes sharp concentration if the function f does not depend too much on any of the variables. One attractive feature is that it relies on a very simple Lipschitz condition (L): it suffices to show that |f(X)-f(X')| \leq c_k whenever X,X' differ only in X_k. While this is easy to check, the main disadvantage is that it considers worst-case changes c_k, which often makes the resulting bounds too weak to be useful. In this paper we prove a variant of the bounded differences inequality which can be used to establish concentration of functions f(X) where (i) the typical changes are small although (ii) the worst-case changes might be very large. One key aspect of this inequality is that it relies on a simple condition that (a) is easy to check and (b) coincides with heuristic considerations of why concentration should hold. Indeed, given an event \Gamma that holds with very high probability, we essentially relax the Lipschitz condition (L) to situations where \Gamma occurs. The point is that the resulting typical changes c_k are often much smaller than the worst-case ones. To illustrate its application we consider the reverse H-free process, where H is 2-balanced. We prove that the final number of edges in this process is concentrated, and also determine its likely value up to constant factors. This answers a question of Bollobás and Erdős. Comment: 25 pages
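    For reference, the classical bounded differences inequality that this abstract relaxes can be stated in its standard form (notation follows the abstract; the constant in the exponent is McDiarmid's usual one and is not taken from the paper itself):

    ```latex
    % If |f(x) - f(x')| <= c_k whenever x and x' differ only in
    % coordinate k, then for all t > 0:
    \[
      \Pr\bigl( |f(X) - \mathbb{E} f(X)| \ge t \bigr)
      \;\le\;
      2 \exp\!\left( - \frac{2 t^2}{\sum_{k=1}^{n} c_k^2} \right).
    \]
    ```

    The paper's variant replaces the worst-case constants c_k by their typically much smaller counterparts on a high-probability event \Gamma.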

    Herbert Simon's decision-making approach: Investigation of cognitive processes in experts

    Herbert Simon's research endeavor aimed to understand the processes that participate in human decision making. However, despite his effort to investigate this question, his work did not have the impact in the "decision making" community that it had in other fields. His rejection of the assumption of perfect rationality, made in mainstream economics, led him to develop the concept of bounded rationality. Simon's approach also emphasized the limitations of the cognitive system, the change of processes due to expertise, and the direct empirical study of cognitive processes involved in decision making. In this article, we argue that his subsequent research program in problem solving and expertise offered critical tools for studying decision-making processes that took into account his original notion of bounded rationality. Unfortunately, these tools were ignored by the main research paradigms in decision making, such as Tversky and Kahneman's biased rationality approach (also known as the heuristics and biases approach) and the ecological approach advanced by Gigerenzer and others. We propose a way to integrate Simon's approach with the main current approaches to decision making. We argue that this would lead to better models of decision making that are more generalizable, have higher ecological validity, include specification of cognitive processes, and provide a better understanding of the interaction between the characteristics of the cognitive system and the contingencies of the environment.

    The Patchy Method for the Infinite Horizon Hamilton-Jacobi-Bellman Equation and Its Accuracy

    We introduce a modification to the patchy method of Navasca and Krener for solving the stationary Hamilton-Jacobi-Bellman equation. The numerical solution that we generate is a set of polynomials that approximate the optimal cost and optimal control on a partition of the state space. We derive an error bound for our numerical method under the assumption that the optimal cost is a smooth strict Lyapunov function. The error bound is valid when the number of subsets in the partition is not too large. Comment: 50 pages, 5 figures
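    For context, the stationary (infinite-horizon) HJB equation this abstract refers to can be written in a standard form; the symbols below (\ell for the running cost, f for the dynamics, V for the optimal cost, \kappa for the feedback) are assumed for illustration, not taken from the paper:

    ```latex
    % Stationary HJB equation and the associated optimal feedback law
    % (standard notation, assumed here for illustration):
    \[
      0 \;=\; \min_{u} \Bigl[ \ell(x,u) + \nabla V(x) \cdot f(x,u) \Bigr],
      \qquad
      \kappa(x) \;=\; \operatorname*{arg\,min}_{u}
        \Bigl[ \ell(x,u) + \nabla V(x) \cdot f(x,u) \Bigr].
    \]
    ```

    The patchy method approximates V and \kappa by polynomials on each subset of a state-space partition, which is why the error bound depends on the number of subsets.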

    Convergence of large deviation estimators

    We study the convergence of statistical estimators used in the estimation of large deviation functions describing the fluctuations of equilibrium, nonequilibrium, and man-made stochastic systems. We give conditions for the convergence of these estimators with sample size, based on the boundedness or unboundedness of the quantity sampled, and discuss how statistical errors should be defined in different parts of the convergence region. Our results shed light on previous reports of 'phase transitions' in the statistics of free energy estimators and establish a general framework for reliably estimating large deviation functions from simulation and experimental data and identifying parameter regions where this estimation converges. Comment: 13 pages, 6 figures. v2: corrections focusing the paper on large deviations; v3: minor corrections, close to published version
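    As a minimal sketch of the kind of estimator whose convergence such analyses address (the function name, the toy distribution, and the parameter choices below are my own, not the paper's), one can estimate the scaled cumulant generating function by a direct Monte Carlo average:

    ```python
    import numpy as np

    def scgf_estimate(s_samples, n, k):
        """Naive Monte Carlo estimator of the scaled cumulant generating
        function lambda(k) = (1/n) ln E[exp(k * S_n)], computed from i.i.d.
        realizations of the sum S_n.  Whether this converges with sample
        size depends on the tails of exp(k * S_n), which is the question
        the abstract above addresses."""
        return float(np.log(np.mean(np.exp(k * np.asarray(s_samples)))) / n)

    # Toy check: if S_n is a sum of n standard normals, then
    # lambda(k) = k**2 / 2 exactly.
    rng = np.random.default_rng(0)
    n = 50
    s = rng.normal(loc=0.0, scale=np.sqrt(n), size=200_000)  # S_n ~ N(0, n)
    lam_hat = scgf_estimate(s, n, k=0.2)  # true value: 0.02
    ```

    For larger k the empirical average is dominated by rare samples and the estimator stops converging at fixed sample size, which is the regime behind the reported 'phase transitions'.
    
    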

    Computing the entropy of user navigation in the web

    Navigation through the web, colloquially known as "surfing", is one of the main activities of users during web interaction. When users follow a navigation trail they often tend to get disoriented in terms of the goals of their original query, and thus the discovery of typical user trails could be useful in providing navigation assistance. Herein, we give a theoretical underpinning of user navigation in terms of the entropy of an underlying Markov chain modelling the web topology. We present a novel method for online incremental computation of the entropy and a large deviation result regarding the length of a trail needed to realize the said entropy. We provide an error analysis for our estimation of the entropy in terms of the divergence between the empirical and actual probabilities. We then indicate applications of our algorithm in the area of web data mining. Finally, we present an extension of our technique to higher-order Markov chains by a suitable reduction of a higher-order Markov chain model to a first-order one.
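    As context for this abstract, here is a minimal batch computation of the entropy rate of a first-order Markov chain (the function name and the zero-probability masking convention are my own; the paper's contribution is an online incremental computation of this quantity, which this sketch does not reproduce):

    ```python
    import numpy as np

    def entropy_rate(P):
        """Entropy rate H = -sum_i pi_i sum_j P[i,j] * log2(P[i,j]) of an
        ergodic first-order Markov chain with transition matrix P, where pi
        is the stationary distribution (the left eigenvector of P for
        eigenvalue 1, normalized to sum to 1)."""
        evals, evecs = np.linalg.eig(P.T)
        pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
        pi = pi / pi.sum()
        # Treat 0 * log(0) as 0 by masking zero transition probabilities.
        logP = np.where(P > 0, np.log2(np.where(P > 0, P, 1.0)), 0.0)
        return float(-np.sum(pi[:, None] * P * logP))

    # Uniform two-state chain: each step carries exactly one bit.
    H = entropy_rate(np.array([[0.5, 0.5], [0.5, 0.5]]))  # equals 1.0
    ```

    The higher-order extension mentioned in the abstract amounts to applying the same computation to the chain on tuples of states obtained from the reduction to first order.
    
    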