26 research outputs found

    Maximum pseudo-likelihood estimator for nearest-neighbours Gibbs point processes

    This paper is devoted to the estimation of a vector parametrizing an energy function associated with a "nearest-neighbours" Gibbs point process, via the pseudo-likelihood method. We present convergence results for this estimator, namely strong consistency and asymptotic normality, when only a single realization is observed. Sufficient conditions are expressed in terms of the local energy function and are verified on several examples. Comment: 29 pages, 2 figures
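
    The pseudo-likelihood method the abstract refers to can be sketched in a few lines. The sketch below uses the simpler fixed-range Strauss energy rather than the paper's nearest-neighbour energy (which depends on the realization's neighbour graph), and the point pattern, interaction radius, and quadrature grid are illustrative assumptions: the log pseudo-likelihood is the sum of log conditional intensities at the data points minus a grid approximation of the integrated conditional intensity.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 1, size=(80, 2))  # stand-in for an observed pattern
    r = 0.05                               # assumed interaction radius

    def n_close(u, x):
        """Number of points of x (other than u itself) within distance r of u."""
        d = np.linalg.norm(x - u, axis=1)
        return np.sum((d < r) & (d > 0))

    # Conditional-intensity statistics at the data points and on a quadrature grid.
    t_data = np.array([n_close(u, pts) for u in pts])
    g = (np.arange(40) + 0.5) / 40
    grid = np.array([(a, b) for a in g for b in g])
    t_grid = np.array([n_close(u, pts) for u in grid])

    def neg_log_pl(theta):
        """Negative log pseudo-likelihood of a Strauss model with conditional
        intensity lambda(u; x) = beta * gamma**t(u, x): the integral of the
        conditional intensity over the unit square minus the sum of its logs
        at the data points."""
        log_beta, log_gamma = theta
        ll = np.sum(log_beta + t_data * log_gamma)
        integral = np.exp(log_beta) * np.mean(np.exp(t_grid * log_gamma))
        return integral - ll

    fit = minimize(neg_log_pl, x0=[np.log(len(pts)), 0.0])
    beta_hat, gamma_hat = np.exp(fit.x)
    print(f"beta ~ {beta_hat:.1f}, gamma ~ {gamma_hat:.2f}")
    ```

    Since the simulated pattern is Poisson, the fitted interaction parameter gamma should come out close to 1; the paper's contribution is the asymptotic theory for such estimators from a single realization, not the fitting recipe itself.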

    A cross-validation-based statistical theory for point processes

    Motivated by cross-validation’s general ability to reduce overfitting and mean square error, we develop a cross-validation-based statistical theory for general point processes. It is based on the combination of two novel concepts for general point processes: cross-validation and prediction errors. Our cross-validation approach uses thinning to split a point process/pattern into pairs of training and validation sets, while our prediction errors measure discrepancy between two point processes. The new statistical approach, which may be used to model different distributional characteristics, exploits the prediction errors to measure how well a given model predicts validation sets using associated training sets. Having indicated that our new framework generalizes many existing statistical approaches, we then establish different theoretical properties for it, including large sample properties. We further recognize that non-parametric intensity estimation is an instance of Papangelou conditional intensity estimation, which we exploit to apply our new statistical theory to kernel intensity estimation. Using independent thinning-based cross-validation, we numerically show that the new approach substantially outperforms the state of the art in bandwidth selection. Finally, we carry out intensity estimation for a dataset in forestry (Euclidean domain) and a dataset in neurology (linear network).
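
    The two ingredients, thinning-based splits and a prediction error, can be sketched as follows. This is a toy version under stated assumptions (uniform stand-in pattern, Gaussian kernel without edge correction, a Poisson-likelihood-style score), not the paper's actual estimators: independent p-thinning sends each point to the training set with probability p, the training-based kernel estimate is rescaled by (1-p)/p to predict the validation set, and the bandwidth maximizing the average prediction score is selected.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    pts = rng.uniform(0, 1, size=(200, 2))  # stand-in for an observed pattern
    p = 0.5                                 # retention probability of the thinning

    def thin(x):
        """Independent p-thinning: each point goes to the training set with
        probability p, otherwise to the validation set."""
        keep = rng.random(len(x)) < p
        return x[keep], x[~keep]

    def kernel_intensity(u, x, h):
        """Gaussian kernel intensity estimate at locations u from pattern x
        (edge correction omitted to keep the sketch short)."""
        d2 = ((u[:, None, :] - x[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * h * h)).sum(axis=1) / (2 * np.pi * h * h)

    g = (np.arange(32) + 0.5) / 32
    grid = np.array([(a, b) for a in g for b in g])  # integration grid

    def prediction_score(h, n_splits=20):
        """Poisson-likelihood-style score of how well the rescaled training
        estimate predicts the validation points, averaged over splits."""
        s = 0.0
        for _ in range(n_splits):
            tr, va = thin(pts)
            lam_va = (1 - p) / p * kernel_intensity(va, tr, h)
            lam_int = ((1 - p) / p * kernel_intensity(grid, tr, h)).mean()
            s += np.log(lam_va + 1e-12).sum() - lam_int
        return s / n_splits

    bandwidths = [0.02, 0.05, 0.1, 0.2]
    best = max(bandwidths, key=prediction_score)
    print("selected bandwidth:", best)
    ```

    The rescaling by (1-p)/p reflects that a p-thinned training set retains only a fraction p of the original intensity, so the validation set has intensity (1-p)/p times the training intensity.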

    A non-parametric measure of spatial interaction in point patterns


    Quermass-interaction processes: Conditions for stability

    We consider a class of random point and germ-grain processes, obtained using a rather natural weighting procedure. Given a Poisson point process, on each point one places a grain, a (possibly random) compact convex set. Let Ξ be the union of all grains. One can now construct new processes whose density is derived from an exponential of a linear combination of quermass functionals of Ξ. If only the area functional is used, then the area-interaction point process is recovered. New point processes arise if we include the perimeter length functional, or the Euler functional (number of components minus number of holes). The main question addressed by the paper is when the resulting point process is well-defined: geometric arguments are used to establish conditions for the point process to be stable in the sense of Ruelle.
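
    The simplest member of this family, the area-interaction process, can be sketched by evaluating its unnormalised density: β to the number of points times an exponential of the area of the union Ξ of discs. Everything below (deterministic disc grains of fixed radius, a grid approximation of the area, the sign convention in the exponent, the parameter values) is an illustrative assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def union_area(x, r, m=200):
        """Grid approximation of the area of Ξ, the union of discs of radius r
        centred at the points of x, within the unit square."""
        g = (np.arange(m) + 0.5) / m
        gx, gy = np.meshgrid(g, g)
        cells = np.stack([gx.ravel(), gy.ravel()], axis=1)
        d2 = ((cells[:, None, :] - x[None, :, :]) ** 2).sum(-1)
        covered = (d2 <= r * r).any(axis=1)
        return covered.mean()  # fraction of the unit square covered by Ξ

    def log_density_unnorm(x, beta, theta, r):
        """Unnormalised log density of an area-interaction process: the
        quermass model keeping only the area functional, here written as
        n(x) log(beta) + theta * area(Ξ); the sign of theta controls
        whether large or small covered area is favoured."""
        return len(x) * np.log(beta) + theta * union_area(x, r)

    x = rng.uniform(0, 1, size=(30, 2))
    area = union_area(x, 0.05)
    print("covered area:", area)
    print("log density (unnormalised):", log_density_unnorm(x, 50.0, 2.0, 0.05))
    ```

    The stability question the paper answers is precisely whether such an exponential of quermass functionals (here only the area, but in general also perimeter and Euler characteristic) yields an integrable, well-defined process.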

    Statistical procedures for spatial point pattern recognition

    Spatial structures in the form of point patterns arise in many different contexts, and in most of them the key goal concerns the detection and recognition of the underlying spatial pattern. Particularly interesting is the case of pattern analysis with replicated data in two or more experimental groups. This paper compares design-based and model-based approaches to the analysis of this kind of spatial data. Basic questions about pattern detection concern estimating the properties of the underlying spatial point process within each experimental group, and comparing the properties between groups. The paper discusses how either approach can be implemented in the specific context of a single-factor replicated experiment and uses simulations to show how the model-based approach can be more efficient when the underlying model assumptions hold, but potentially misleading otherwise.
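
    The design-based versus model-based contrast can be illustrated in the simplest replicated setting, homogeneous Poisson patterns in two groups. The simulation below is a toy under stated assumptions (unit-square windows, intensities 100 vs 130, eight replicates per group), not the paper's study: the design-based route treats per-pattern intensity estimates as replicate-level summaries and compares them with a t-test, while the model-based route fits pooled Poisson models with and without a group effect and uses a likelihood-ratio test.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    def simulate(lmbda, n_rep):
        """Replicated homogeneous Poisson patterns on the unit square."""
        return [rng.uniform(0, 1, size=(rng.poisson(lmbda), 2)) for _ in range(n_rep)]

    g1, g2 = simulate(100, 8), simulate(130, 8)

    # Design-based: per-pattern intensity estimates (window area is 1, so the
    # point count estimates the intensity), compared across groups by a t-test.
    i1 = np.array([len(pat) for pat in g1], float)
    i2 = np.array([len(pat) for pat in g2], float)
    t, pval = stats.ttest_ind(i1, i2)

    # Model-based: pooled Poisson MLE with and without a group effect; for the
    # homogeneous case the likelihood-ratio statistic has this closed form.
    n1, n2 = i1.sum(), i2.sum()
    a1, a2 = float(len(g1)), float(len(g2))  # total window area per group
    lr = 2 * (n1 * np.log(n1 / a1) + n2 * np.log(n2 / a2)
              - (n1 + n2) * np.log((n1 + n2) / (a1 + a2)))
    pval_lr = stats.chi2.sf(lr, df=1)
    print(f"t-test p = {pval:.3g}, LR-test p = {pval_lr:.3g}")
    ```

    The model-based test borrows strength by pooling counts under the Poisson assumption, which is exactly where its efficiency gain, and its fragility when the assumption fails, comes from.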

    Probabilistic Image Models and their Massively Parallel Architectures : A Seamless Simulation- and VLSI Design-Framework Approach

    Algorithmic robustness in real-world scenarios and real-time processing capability are the two essential, and at the same time contradictory, requirements that modern image-processing systems must fulfill to go significantly beyond the state of the art. Without image processing and analysis systems that satisfy both of these contradictory requirements, solutions and devices for the application scenarios of the next generation will not become reality, which would eventually lead to a serious restraint of innovation for various branches of industry. This thesis presents a coherent approach to this problem. It first describes a massively parallel architecture template and then a seamless simulation- and semiconductor-technology-independent design framework for a class of probabilistic image models formulated on a regular Markovian processing grid. The architecture template is composed of building blocks rigorously derived from Markov random field theory under the constraints of massively parallel processing and technology independence. This systematic derivation procedure brings several benefits: it decouples the architecture characteristics from the constraints of any one semiconductor technology; it guarantees that the derived massively parallel architecture conforms to the theory; and it guarantees that the derived architecture is suitable for VLSI implementation. The simulation framework addresses the hardware-relevant simulation needs unique to MRF-based processing architectures, and ensures a qualified representation of the image models and their massively parallel architectures by means of dedicated simulation modules.
    This allows for systematic studies combining numerical, architectural, timing and massively parallel processing constraints, disclosing novel insights into MRF models and their hardware architectures. The design framework rests upon a graph-theoretical approach that offers unique capabilities for meeting the VLSI demands of massively parallel MRF architectures: semiconductor-technology independence guarantees a technology-uncommitted architecture through several design steps without restricting the design space too early; design entry by means of behavioral descriptions allows for a functional representation without determining the architecture at the outset; and the topology synthesis simplifies and separates the data- and control-path synthesis. Detailed results discussed in the individual chapters, together with additional results collected in the appendix, further substantiate the claims made in this thesis.
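
    The kind of regular-grid MRF computation such architectures accelerate can be sketched in software. The toy below is an illustrative assumption, not the thesis's model: iterated conditional modes (ICM) on a binary Ising-type MRF for image restoration, with a checkerboard update schedule, the data-independent ordering that maps naturally onto a massively parallel grid of processing elements, and a zero-padded boundary for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Toy binary image and a noisy observation of it (20% of pixels flipped).
    truth = np.zeros((32, 32), int)
    truth[8:24, 8:24] = 1
    noisy = np.where(rng.random(truth.shape) < 0.2, 1 - truth, truth)

    def icm(obs, beta=1.5, gamma=2.0, sweeps=5):
        """Iterated conditional modes on an Ising-type MRF: each site takes
        the label minimising its local energy (data term gamma plus beta per
        disagreeing 4-neighbour).  Sites of one checkerboard colour update
        simultaneously, so every update within a half-sweep is independent."""
        x = obs.copy()
        ii, jj = np.indices(x.shape)
        for _ in range(sweeps):
            for parity in (0, 1):
                pad = np.pad(x, 1)  # zero padding: a crude boundary treatment
                n1 = pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]
                n0 = 4 - n1
                e1 = gamma * (obs != 1) + beta * n0  # local energy of label 1
                e0 = gamma * (obs != 0) + beta * n1  # local energy of label 0
                upd = (e1 < e0).astype(int)
                mask = (ii + jj) % 2 == parity
                x[mask] = upd[mask]
        return x

    restored = icm(noisy)
    print("pixel errors before:", (noisy != truth).sum(),
          "after:", (restored != truth).sum())
    ```

    The checkerboard split is what makes the schedule hardware-friendly: within each half-sweep no two updated sites are neighbours, so all of them can be computed in parallel by identical processing elements.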

    An Extension of a Classical Technique with Applications to Gibbs Point Process Statistics

    In statistics, Rao-Blackwellization is a well-known technique to improve estimators by removing ancillary information which does not help toward making inference on the parameter of interest. The present thesis reveals this concept as an inverse problem that is often ill-posed. That is, the Rao-Blackwellization generally fails to be continuous with respect to a semi-norm that measures the amount of some ancillary part of an estimator. However, if the underlying statistical model is misspecified, inference cannot go beyond that inaccuracy and hence requires a corresponding continuous surrogate for the Rao-Blackwellization. We therefore propose regularizations of the mentioned ill-posed Rao-Blackwell inverse problem and eventually introduce and analyze the concept of regularized Rao-Blackwellization. In classical examples, this new concept leads to new estimators and also to new interpretations of existing ones. For more complex statistical models, like several ones in Gibbs point process statistics, regularized Rao-Blackwellizations can be computed at least approximately. A simulation study where we consider the Lennard-Jones model demonstrates the computational feasibility and the benefit of these results, especially in constructing parametric bootstrap confidence regions on the basis of the maximum likelihood estimator.
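
    The classical technique being extended can be seen in a textbook example (not one from the thesis): for i.i.d. Poisson(lambda) observations, the crude unbiased estimator 1{X_1 = 0} of e^(-lambda) is conditioned on the sufficient statistic S = sum of the X_i, which gives E[1{X_1 = 0} | S] = (1 - 1/n)^S, an estimator with the same mean and strictly smaller variance. The parameter values below are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    lam, n, reps = 2.0, 10, 5000  # arbitrary illustration parameters

    naive, rb = [], []
    for _ in range(reps):
        x = rng.poisson(lam, n)
        naive.append(float(x[0] == 0))   # crude unbiased estimator of exp(-lam)
        rb.append((1 - 1 / n) ** x.sum())  # its Rao-Blackwellisation given S

    naive, rb = np.array(naive), np.array(rb)
    print("target exp(-lam):", np.exp(-lam))
    print("naive mean/var:", naive.mean(), naive.var())
    print("RB    mean/var:", rb.mean(), rb.var())
    ```

    Both estimators are unbiased (E[(1 - 1/n)^S] = exp(n * lam * (-1/n)) = e^(-lam)), but the conditional expectation discards the ancillary part of 1{X_1 = 0}, which is exactly the operation the thesis studies as an ill-posed inverse problem.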

    Malleja ja menetelmiä puiden tilajärjestyksen analysoimiseksi [Models and methods for analysing the spatial arrangement of trees].
