
    Julian Ernst Besag, 26 March 1945 -- 6 August 2010, a biographical memoir

    Julian Besag was an outstanding statistical scientist, distinguished for his pioneering work on the statistical theory and analysis of spatial processes, especially conditional lattice systems. His work has been seminal in statistical developments over the last several decades, ranging from image analysis to Markov chain Monte Carlo methods. He clarified the role of auto-logistic and auto-normal models as instances of Markov random fields and paved the way for their use in diverse applications. Later work included investigations into the efficacy of nearest-neighbour models for accommodating spatial dependence in the analysis of data from agricultural field trials, image restoration from noisy data, and texture generation using lattice models. Comment: 26 pages, 14 figures; minor revisions, omission of full bibliography.
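
    The auto-logistic model mentioned above specifies each site's conditional distribution, given its neighbours, as a logistic regression on the neighbouring values. The following is a minimal Gibbs-sampling sketch of such a field; the lattice size, neighbourhood structure and parameter values are illustrative assumptions, not taken from the memoir.

        import math, random

        def gibbs_autologistic(n=32, alpha=0.0, beta=0.8, sweeps=100, seed=0):
            """
            Gibbs sampler for an auto-logistic Markov random field on an n x n
            lattice with 4-nearest-neighbour interactions: each site x[i][j] in
            {0, 1} has conditional P(x = 1 | neighbours) = sigmoid(alpha + beta * s),
            where s is the sum of the neighbouring values. Parameter values here
            are purely illustrative.
            """
            rng = random.Random(seed)
            x = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]
            for _ in range(sweeps):
                for i in range(n):
                    for j in range(n):
                        s = sum(x[i2][j2]
                                for i2, j2 in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                                if 0 <= i2 < n and 0 <= j2 < n)
                        p1 = 1.0 / (1.0 + math.exp(-(alpha + beta * s)))
                        x[i][j] = 1 if rng.random() < p1 else 0
            return x

        field = gibbs_autologistic()
        print(sum(map(sum, field)) / (32 * 32))  # fraction of sites "on"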

    Generalized belief change with imprecise probabilities and graphical models

    We provide a theoretical investigation of probabilistic belief revision in complex frameworks, under extended conditions of uncertainty, inconsistency and imprecision. We motivate our kinematical approach by specializing our discussion to probabilistic reasoning with graphical models, whose modular representation allows for efficient inference. Most results in this direction are derived from the relevant work of Chan and Darwiche (2005), which first proved the inter-reducibility of virtual and probabilistic evidence. Such forms of information, deeply distinct in their meaning, are extended to the conditional and imprecise frameworks, allowing further generalizations, e.g. to experts' qualitative assessments. Belief aggregation and iterated revision of a rational agent's beliefs are also explored.
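
    To make the distinction between the two evidence types concrete, here is a minimal sketch on a toy two-variable distribution (the variables, numbers and helper names are illustrative, not from the paper): Pearl's virtual evidence reweights the prior by likelihood ratios, Jeffrey's rule imposes a new marginal, and the two updates coincide when the imposed marginal is the one induced by the reweighting, which is the inter-reducibility referred to above.

        # Toy joint distribution over (Disease, Test); values are illustrative.
        P = {
            ("sick", "pos"): 0.09, ("sick", "neg"): 0.01,
            ("well", "pos"): 0.18, ("well", "neg"): 0.72,
        }

        def marginal(P, idx):
            """Marginal of the idx-th coordinate of the joint table."""
            out = {}
            for key, p in P.items():
                out[key[idx]] = out.get(key[idx], 0.0) + p
            return out

        def jeffrey_update(P, q_test):
            """Jeffrey's rule: impose a new marginal q(t) on the Test variable."""
            p_test = marginal(P, 1)
            return {k: p * q_test[k[1]] / p_test[k[1]] for k, p in P.items()}

        def virtual_evidence_update(P, lam):
            """Pearl's virtual evidence: reweight by likelihood ratios lam(t)."""
            weighted = {k: p * lam[k[1]] for k, p in P.items()}
            Z = sum(weighted.values())
            return {k: w / Z for k, w in weighted.items()}

        # Virtual evidence: "the test looked positive, but the reading was noisy."
        lam = {"pos": 4.0, "neg": 1.0}
        P_virtual = virtual_evidence_update(P, lam)

        # The equivalent Jeffrey update uses the marginal induced by the
        # reweighting -- the inter-reducibility shown by Chan and Darwiche (2005).
        q_equiv = marginal(P_virtual, 1)
        P_jeffrey = jeffrey_update(P, q_equiv)

        assert all(abs(P_virtual[k] - P_jeffrey[k]) < 1e-12 for k in P)
        print(marginal(P_virtual, 0))  # updated belief about Disease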

    The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime

    We propose a novel technique for analyzing adaptive sampling called the Simulator. Our approach differs from existing methods by considering not how much information could be gathered by any fixed sampling strategy, but how difficult it is to distinguish a good sampling strategy from a bad one given the limited amount of data collected up to any given time. This change of perspective allows us to match the strength of both Fano and change-of-measure techniques without succumbing to the limitations of either method. For concreteness, we apply our techniques to a structured multi-armed bandit problem in the fixed-confidence pure-exploration setting, where we show that the constraints on the means imply a substantial gap between the moderate-confidence sample complexity and the asymptotic sample complexity as δ → 0 found in the literature. We also prove the first instance-based lower bounds for the top-k problem which incorporate the appropriate log factors. Moreover, our lower bounds zero in on the number of times each individual arm needs to be pulled, uncovering new phenomena which are drowned out in the aggregate sample complexity. Our new analysis inspires a simple and near-optimal algorithm for best-arm and top-k identification, the first practical algorithm of its kind for the latter problem which removes extraneous log factors and outperforms the state of the art in experiments.
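
    As a point of reference for the setting (not the paper's Simulator-based algorithm), here is a minimal successive-elimination sketch for fixed-confidence best-arm identification; the confidence radius and the arm means in the usage example are illustrative assumptions, and the log factors in the radius are exactly the kind of overhead the moderate-confidence analysis scrutinizes.

        import math, random

        def successive_elimination(pull, n_arms, delta, max_rounds=100000):
            """
            Successive elimination for fixed-confidence best-arm identification:
            keep pulling all surviving arms, and drop an arm once its upper
            confidence bound falls below the best lower confidence bound.
            """
            active = list(range(n_arms))
            counts = [0] * n_arms
            sums = [0.0] * n_arms
            for t in range(1, max_rounds + 1):
                for a in active:
                    sums[a] += pull(a)
                    counts[a] += 1
                # Anytime confidence radius (union bound over arms and rounds).
                rad = lambda a: math.sqrt(
                    math.log(4 * n_arms * t * t / delta) / (2 * counts[a]))
                means = {a: sums[a] / counts[a] for a in active}
                best = max(active, key=lambda a: means[a])
                active = [a for a in active
                          if means[a] + rad(a) >= means[best] - rad(best)]
                if len(active) == 1:
                    return active[0]
            return max(active, key=lambda a: sums[a] / counts[a])

        # Usage: Bernoulli arms with unknown means; arm 2 is the best arm.
        mus = [0.3, 0.5, 0.7, 0.45]
        print(successive_elimination(lambda a: float(random.random() < mus[a]),
                                     n_arms=len(mus), delta=0.05))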

    On the Hausdorff dimension of the Rauzy gasket

    In this paper, we prove that the Hausdorff dimension of the Rauzy gasket is less than 2. This result answers a question posed by Pierre Arnoux; the question is also a very particular case of a conjecture stated by S. P. Novikov and A. Ya. Maltsev in 2003. Comment: 23 pages, 5 figures.
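
    For reference, the dimension being bounded is the usual Hausdorff dimension: writing diam U for the diameter of a set U and taking s >= 0,

        \mathcal{H}^s_\delta(E) = \inf\Big\{ \sum_i (\operatorname{diam} U_i)^s \;:\; E \subseteq \bigcup_i U_i,\ \operatorname{diam} U_i \le \delta \Big\},
        \qquad
        \mathcal{H}^s(E) = \lim_{\delta \to 0} \mathcal{H}^s_\delta(E),
        \qquad
        \dim_H(E) = \inf\{\, s \ge 0 : \mathcal{H}^s(E) = 0 \,\}.

    Since the Rauzy gasket sits inside a two-dimensional simplex, the bound dim_H < 2 in particular implies that the gasket has zero two-dimensional Lebesgue measure.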

    Estimation of the Rate-Distortion Function

    Motivated by questions in lossy data compression and by theoretical considerations, we examine the problem of estimating the rate-distortion function of an unknown (not necessarily discrete-valued) source from empirical data. Our focus is the behavior of the so-called "plug-in" estimator, which is simply the rate-distortion function of the empirical distribution of the observed data. Sufficient conditions are given for its consistency, and examples are provided to demonstrate that in certain cases it fails to converge to the true rate-distortion function. The analysis of its performance is complicated by the fact that the rate-distortion function is not continuous in the source distribution; the underlying mathematical problem is closely related to the classical problem of establishing the consistency of maximum likelihood estimators. General consistency results are given for the plug-in estimator applied to a broad class of sources, including all stationary and ergodic ones. A more general class of estimation problems is also considered, arising in the context of lossy data compression when the allowed class of coding distributions is restricted; analogous results are developed for the plug-in estimator in that case. Finally, consistency theorems are formulated for modified (e.g., penalized) versions of the plug-in, and for estimating the optimal reproduction distribution. Comment: 18 pages, no figures. [v2: removed an example with an error; corrected typos; a shortened version will appear in IEEE Trans. Inform. Theory.]
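
    For intuition about what evaluating the plug-in estimator involves computationally, here is a minimal sketch for a finite alphabet: form the empirical distribution of the data and run the standard Blahut-Arimoto iteration on it to trace out its rate-distortion curve. The alphabet, Hamming distortion and Lagrange parameters below are illustrative assumptions, not choices made in the paper.

        import numpy as np

        def blahut_arimoto(p_x, dist, beta, n_iter=500):
            """
            Standard Blahut-Arimoto iteration for a discrete source p_x and
            distortion matrix dist[x, y], at Lagrange parameter beta >= 0.
            Returns one (distortion, rate) point on the rate-distortion curve.
            """
            n_x, n_y = dist.shape
            q_y = np.full(n_y, 1.0 / n_y)                # output marginal
            for _ in range(n_iter):
                # Optimal test channel for the current output marginal.
                w = q_y[None, :] * np.exp(-beta * dist)  # shape (n_x, n_y)
                Q = w / w.sum(axis=1, keepdims=True)     # Q[y | x]
                q_y = p_x @ Q
            D = float(np.sum(p_x[:, None] * Q * dist))
            R = float(np.sum(p_x[:, None] * Q * np.log(Q / q_y[None, :])))  # nats
            return D, R

        # Plug-in estimate: run Blahut-Arimoto on the *empirical* distribution.
        samples = np.random.default_rng(0).integers(0, 4, size=1000)  # toy data
        values, counts = np.unique(samples, return_counts=True)
        p_hat = counts / counts.sum()

        # Hamming distortion on the observed alphabet (an illustrative choice).
        dist = 1.0 - np.eye(len(values))
        for beta in (1.0, 2.0, 4.0):
            D, R = blahut_arimoto(p_hat, dist, beta)
            print(f"beta={beta}: D={D:.3f}, R={R:.3f} nats")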