
    Quasi-variances in Xlisp-Stat and on the web

    The most common summary of a fitted statistical model, a list of parameter estimates and standard errors, does not give the precision of estimated combinations of the parameters, such as differences or ratios. For this, covariances are also needed; but space constraints typically mean that the full covariance matrix cannot routinely be reported. In the important case of parameters associated with the discrete levels of an experimental factor or with a categorical classifying variable, the identifiable parameter combinations are linear contrasts. The QV Calculator computes "quasi-variances" which may be used as an alternative summary of the precision of the estimated parameters. The summary based on quasi-variances is simple and permits good approximation of the standard error of any desired contrast. The idea of such a summary has been suggested by Ridout (1989) and, under the name "floating absolute risk", by Easton, Peto & Babiker (1991). It applies to a wide variety of statistical models, including linear and nonlinear regressions, generalized-linear and GEE models, Cox proportional-hazards models for survival data, generalized additive models, etc. The QV Calculator is written in Xlisp-Stat (Tierney, 1990) and can be used either directly by users who have access to Xlisp-Stat or through a web interface by those who do not. The user either supplies the covariance matrix for the effect parameters of interest or, if using Xlisp-Stat directly, can generate that matrix by interaction with a model object.
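The quasi-variance idea can be sketched numerically: given the covariance matrix of the effect parameters, choose values q_i such that q_i + q_j approximates the variance of each pairwise difference of parameters. The sketch below is a simplified least-squares version of that idea (the QV Calculator itself works on a log scale and in Xlisp-Stat); the function name and example covariance matrix are ours, not the tool's API.

```python
import numpy as np

def quasi_variances(V):
    """Least-squares quasi-variances: find q with q[i] + q[j] ~ Var(b_i - b_j).

    V is the covariance matrix of the effect parameters (e.g. factor-level
    effects). Illustrative sketch only, not the QV Calculator's algorithm.
    """
    k = V.shape[0]
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    A = np.zeros((len(pairs), k))
    d = np.zeros(len(pairs))
    for r, (i, j) in enumerate(pairs):
        A[r, i] = A[r, j] = 1.0
        d[r] = V[i, i] + V[j, j] - 2.0 * V[i, j]  # exact Var(b_i - b_j)
    q, *_ = np.linalg.lstsq(A, d, rcond=None)
    return q

# Hypothetical covariance matrix for three factor-level effects.
V = np.array([[0.40, 0.10, 0.05],
              [0.10, 0.30, 0.08],
              [0.05, 0.08, 0.50]])
q = quasi_variances(V)
# SE of the contrast b_0 - b_1, approximated from quasi-variances alone:
se_approx = np.sqrt(q[0] + q[1])
se_exact = np.sqrt(V[0, 0] + V[1, 1] - 2 * V[0, 1])
```

For three levels the system is exactly determined, so the approximation reproduces every pairwise contrast variance; with more levels the quasi-variances trade a small approximation error for a summary that needs only k numbers instead of k(k+1)/2.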

    Beyond co-localization: inferring spatial interactions between sub-cellular structures from microscopy images

    Background: Sub-cellular structures interact in numerous direct and indirect ways in order to fulfill cellular functions. While direct molecular interactions crucially depend on spatial proximity, other interactions typically result in spatial correlations between the interacting structures. Such correlations are the target of microscopy-based co-localization analysis, which can provide hints of potential interactions. Two complementary approaches to co-localization analysis can be distinguished: intensity correlation methods capitalize on pattern discovery, whereas object-based methods emphasize detection power.
    Results: We first reinvestigate the classical co-localization measure in the context of spatial point pattern analysis. This allows us to unravel the set of implicit assumptions inherent to this measure and to identify potential confounding factors that are commonly ignored. We generalize object-based co-localization analysis to a statistical framework involving spatial point processes. In this framework, interactions are understood as position co-dependencies in the observed localization patterns. The framework is based on a model of effective pairwise interaction potentials and the specification of a null hypothesis for the expected pattern in the absence of interaction. Inferred interaction potentials thus reflect all significant effects that are not explained by the null hypothesis. Our model enables the use of a wealth of well-known statistical methods for analyzing experimental data, as demonstrated on synthetic data and in a case study considering virus entry into live cells. We show that the classical co-localization measure typically under-exploits the information contained in the data.
    Conclusions: We establish a connection between co-localization and spatial interaction of sub-cellular structures by formulating the object-based interaction analysis problem in a spatial statistics framework based on nearest-neighbor distance distributions. We provide generic procedures for inferring interaction strengths and quantifying their relative statistical significance from sets of discrete objects as provided by image analysis methods. Within our framework, an interaction potential can refer to either a phenomenological or a mechanistic model of a physico-chemical interaction process. This increased flexibility in designing and testing different hypothetical interaction models can be used to quantify the parameters of a specific interaction model or may catalyze the discovery of functional relations.
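The nearest-neighbor-distance idea at the core of this framework can be sketched as a Monte Carlo test: compare the observed mean distance from objects of one type to their nearest neighbors of another type against the distribution of that statistic under complete spatial randomness. This is an illustrative simplification of the paper's interaction-potential inference; all names and the synthetic data are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_nn_distance(A, B):
    """Mean distance from each point in A to its nearest neighbour in B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).mean()

def interaction_test(A, B, window=1.0, n_sim=999):
    """Monte Carlo test against a complete-spatial-randomness null.

    A small p-value means A points sit closer to B points than expected
    under the null, i.e. evidence of positive spatial interaction.
    Sketch only: the paper infers full interaction potentials, not just
    a summary statistic.
    """
    obs = mean_nn_distance(A, B)
    sims = np.array([
        mean_nn_distance(A, rng.uniform(0, window, size=B.shape))
        for _ in range(n_sim)
    ])
    # One-sided rank p-value: fraction of null statistics <= observed.
    return obs, (1 + np.sum(sims <= obs)) / (n_sim + 1)

# Synthetic example: B objects clustered tightly around A objects (attraction).
A = rng.uniform(0, 1, size=(40, 2))
B = A + rng.normal(scale=0.01, size=A.shape)
obs, p = interaction_test(A, B)
```

Here the attraction is built into the synthetic data, so the observed statistic falls far below the null distribution and the test rejects randomness.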

    Learning object behaviour models

    The human visual system is capable of interpreting a remarkable variety of often subtle, learnt, characteristic behaviours. For instance, we can determine the gender of a distant walking figure from their gait, interpret a facial expression as one of surprise, or identify suspicious behaviour in the movements of an individual within a car-park. Machine vision systems wishing to exploit such behavioural knowledge have been limited by the inaccuracies inherent in hand-crafted models and the absence of a unified framework for the perception of powerful behaviour models. The research described in this thesis attempts to address these limitations, using a statistical modelling approach to provide a framework in which detailed behavioural knowledge is acquired from the observation of long image sequences. The core of the behaviour modelling framework is an optimised sample-set representation of the probability density in a behaviour space defined by a novel temporal pattern formation strategy. This representation of behaviour is both concise and accurate and facilitates the recognition of actions or events and the assessment of behaviour typicality. The inclusion of generative capabilities is achieved via the addition of a learnt stochastic process model, thus facilitating the generation of predictions and realistic sample behaviours. Experimental results demonstrate the acquisition of behaviour models and suggest a variety of possible applications, including automated visual surveillance, object tracking, gesture recognition, and the generation of realistic object behaviours within animations, virtual worlds, and computer-generated film sequences. The utility of the behaviour modelling framework is further extended through the modelling of object interaction.
    Two separate approaches are presented, and a technique is developed which, using learnt models of joint behaviour together with a stochastic tracking algorithm, can be used to equip a virtual object with the ability to interact in a natural way. Experimental results demonstrate the simulation of a plausible virtual partner during interaction between a user and the machine.
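A sample-set representation of a density supports typicality assessment directly: score an observed behaviour by kernel-smoothed density over the learnt samples. The sketch below is a minimal illustration of that idea, not the thesis's actual representation; the behaviour-space encoding, kernel width, and data are assumed for illustration.

```python
import numpy as np

def typicality(sample_set, query, sigma=1.0):
    """Behaviour typicality as a Gaussian-kernel density over a sample set.

    sample_set: learnt points in a behaviour space; query: an observed
    behaviour encoding. Higher values mean more typical behaviour.
    Minimal sketch under assumed names and parameters.
    """
    d2 = np.sum((sample_set - query) ** 2, axis=1)
    return np.mean(np.exp(-0.5 * d2 / sigma ** 2))

rng = np.random.default_rng(1)
samples = rng.normal(0, 1, size=(200, 2))      # stand-in for learnt samples
typical = typicality(samples, np.array([0.0, 0.0]))
atypical = typicality(samples, np.array([5.0, 5.0]))
```

A behaviour near the bulk of the learnt samples scores high; one far from every sample scores near zero, flagging it as atypical (e.g. suspicious movement in a surveillance setting).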

    Methodology for extensive evaluation of semiautomatic and interactive segmentation algorithms using simulated interaction models

    The performance of semiautomatic and interactive segmentation (SIS) algorithms is usually evaluated by employing a small number of human operators to segment the images. The human operators typically provide the approximate location of objects of interest and their boundaries in an interactive phase, which is followed by an automatic phase where the segmentation is performed under the constraints of the operator-provided guidance. The segmentation results produced from this small set of interactions do not represent the true capability and potential of the algorithm being evaluated. For example, due to inter-operator variability, human operators may make choices that yield either overestimated or underestimated results. Moreover, their choices may not be realistic compared to how the algorithm is used in the field, since interaction may be influenced by operator fatigue and lapses in judgement. Other drawbacks of using human operators to assess SIS algorithms include human error, the lack of available expert users, and the expense. A methodology for evaluating segmentation performance is proposed here which uses simulated interaction models to programmatically generate large numbers of interactions, ensuring the presence of interactions throughout the object region. These interactions are used to segment the objects of interest, and the resulting segmentations are then analysed using statistical methods. The large number of interactions generated by simulated interaction models captures the variability existing in the set of user interactions by considering every pixel inside the entire object region as a potential location for an interaction, with equal probability.
    Due to the practical limitation imposed by the enormous amount of computation required for the enormous number of possible interactions, uniform sampling of interactions at regular intervals is used to generate a subset of all possible interactions which can still represent the diverse pattern of the entire set. Categorizing interactions into different groups, based on the position of the interaction inside the object region and the texture properties of the image region where the interaction is located, provides the opportunity for fine-grained algorithm performance analysis based on these two criteria. The application of statistical hypothesis testing makes the analysis more accurate, scientific and reliable than conventional evaluation of semiautomatic segmentation algorithms. The proposed methodology has been demonstrated in two case studies through the implementation of seven different algorithms using three different types of interaction modes, making a total of nine segmentation applications used to assess the efficacy of the methodology. Application of this methodology has revealed in-depth, fine-grained details about the performance of the segmentation algorithms which currently existing methods could not achieve due to the absence of a large, unbiased set of interactions. Practical application of the methodology to a number of algorithms and diverse interaction modes has shown its feasibility and generality, establishing it as an appropriate methodology. Development of this methodology into an application for automatic evaluation of SIS algorithm performance looks very promising for users of image segmentation.
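The uniform-sampling step described above can be sketched concretely: treat every pixel inside a binary object mask as a candidate interaction location, then keep only those on a regular grid to make the interaction set computationally tractable. The function name and mask are illustrative, not the paper's implementation.

```python
import numpy as np

def grid_seed_interactions(mask, step):
    """Uniformly sample seed-point interactions inside an object mask.

    Every object pixel is a potential interaction location with equal
    probability; a regular grid with the given step approximates the
    full set at manageable cost. Returns (row, col) seed coordinates.
    Illustrative sketch; names are ours, not the paper's API.
    """
    rows, cols = np.nonzero(mask)
    return [(r, c) for r, c in zip(rows, cols)
            if r % step == 0 and c % step == 0]

# Hypothetical 20x20 mask containing a 10x10 square object.
mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 5:15] = True
seeds = grid_seed_interactions(mask, step=3)
```

Each seed would then drive one run of the segmentation algorithm under evaluation, and the resulting segmentations feed the statistical analysis; coarser steps trade coverage of the object region for fewer runs.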

    Probabilistic and geometric shape based segmentation methods.

    Image segmentation is one of the most important problems in image processing, object recognition, computer vision, medical imaging, etc. In general, the objective of segmentation is to partition the image into meaningful areas using the existing (low-level) information in the image and prior (high-level) information, which can be obtained from a number of features of an object. As stated in [1,2], the human vision system aims to extract and use as much information as possible from the image, including but not limited to the intensity, the possible motion of the object (in sequential images), and spatial relations (interaction) as the existing information, and the shape of the object, learnt from experience, as the prior information. The main objective of this dissertation is to couple the prior information with the existing information, since the machine vision system cannot predict the prior information unless it is given. To label the image into meaningful areas, the chosen information is modelled to fit progressively in each of the regions by an optimization process. The intensity and spatial interaction (as the existing information) and shape (as the prior information) are modelled to obtain the optimum segmentation in this study. The intensity information is modelled using the Gaussian distribution. Spatial interaction, which describes the relation between neighboring pixels/voxels, is modelled by assuming that the pixel intensity depends on the intensities of the neighboring pixels. The shape model is obtained using occurrence histograms of training shape pixels or voxels. The main objective is to capture the shape variation of the object of interest. Each pixel in the image then has three probabilities of belonging to the object or background class, based on the intensity, spatial interaction, and shape models. These probabilistic values guide the energy (cost) functionals in the optimization process.
    This dissertation proposes segmentation frameworks with the following properties: i) originality, solving some existing problems; ii) robustness under various segmentation challenges; and iii) speed sufficient for real applications. In this dissertation, the models are integrated into different methods to obtain the optimum segmentation: 1) variational (spatially continuous) and 2) statistical (spatially discrete) methods. The proposed segmentation frameworks start by obtaining an initial segmentation using the intensity/spatial interaction models. The shape model, which is obtained using the training shapes, is registered to the image domain. Finally, the optimal segmentation is obtained by optimizing the energy functionals. Experiments show that the use of the shape prior considerably improves the accuracy over alternative methods which use only the existing information in the image. The proposed methods are tested on synthetic and clinical images/shapes and are shown to be robust under various noise levels, occlusions, and missing object information. Vertebral bodies (VBs) in clinical computed tomography (CT) are segmented using the proposed methods to support bone mineral density measurements and fracture analysis. Experimental results show that the proposed solutions eliminate some of the existing problems in VB segmentation. One of the most important contributions of this study is a segmentation framework suitable for clinical work.
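The per-pixel probability combination described above can be sketched as follows: a Gaussian intensity model for each class is multiplied by a shape prior to give a posterior object probability per pixel. The spatial-interaction (neighborhood) term is omitted for brevity, and all parameter values are hypothetical, chosen only to illustrate the mechanism.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Gaussian density, used as the intensity model for each class."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def posterior_object_prob(image, mu_obj, sig_obj, mu_bg, sig_bg, shape_prior):
    """Combine Gaussian intensity models with a shape prior per pixel.

    shape_prior holds P(object) at each pixel, e.g. derived from occurrence
    histograms of registered training shapes. The spatial-interaction (MRF)
    term of the dissertation is omitted here; names are illustrative.
    """
    p_obj = gaussian_pdf(image, mu_obj, sig_obj) * shape_prior
    p_bg = gaussian_pdf(image, mu_bg, sig_bg) * (1.0 - shape_prior)
    return p_obj / (p_obj + p_bg)

# Toy 1D "image": bright object pixels on a dark background.
image = np.array([0.10, 0.15, 0.80, 0.85, 0.20])
shape_prior = np.array([0.1, 0.1, 0.9, 0.9, 0.1])
prob = posterior_object_prob(image, mu_obj=0.8, sig_obj=0.1,
                             mu_bg=0.15, sig_bg=0.1, shape_prior=shape_prior)
labels = prob > 0.5
```

In the actual frameworks these probabilities do not threshold labels directly; they weight the energy functionals that the variational or statistical optimization then minimizes.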

    mgm: Estimating Time-Varying Mixed Graphical Models in High-Dimensional Data

    We present the R package mgm for the estimation of k-order Mixed Graphical Models (MGMs) and mixed Vector Autoregressive (mVAR) models in high-dimensional data. These are useful extensions of graphical models for a single variable type, since data sets consisting of mixed types of variables (continuous, count, categorical) are ubiquitous. In addition, we allow users to relax the stationarity assumption of both models by introducing time-varying versions of MGMs and mVAR models based on a kernel weighting approach. Time-varying models offer a rich description of temporally evolving systems and allow one to identify external influences on the model structure, such as the impact of interventions. We provide the background of all implemented methods together with fully reproducible examples that illustrate how to use the package.
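The kernel weighting approach behind the time-varying models can be sketched generically: at each estimation time point, observations are weighted by a kernel on their temporal distance, and a weighted model is fitted locally. The sketch below illustrates this with a weighted least-squares fit in Python (mgm itself is an R package and fits regularized mixed models); the data and function names are ours.

```python
import numpy as np

def kernel_weights(t_est, times, bandwidth):
    """Gaussian kernel weights over observation times.

    Observations near the estimation point t_est get high weight, distant
    ones low weight; small bandwidths track fast changes, large bandwidths
    smooth them out. Illustrative of the kernel idea, not mgm's internals.
    """
    w = np.exp(-0.5 * ((times - t_est) / bandwidth) ** 2)
    return w / w.sum()

# 100 time points on [0, 1]; estimate local structure at t = 0.5.
times = np.linspace(0, 1, 100)
w = kernel_weights(0.5, times, bandwidth=0.1)

# Local weighted least squares of y on x near t_est (noise-free toy data).
x = np.sin(2 * np.pi * times)
y = 2.0 * x
X = np.column_stack([np.ones_like(x), x])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

Repeating the fit over a grid of estimation points yields a parameter trajectory over time, which is how time-varying structure (e.g. the effect of an intervention) becomes visible.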

    BayesX: Analysing Bayesian structured additive regression models

    There has been much recent interest in Bayesian inference for generalized additive and related models. The increasing popularity of Bayesian methods for these and other model classes is mainly due to the introduction of Markov chain Monte Carlo (MCMC) simulation techniques, which allow the estimation of very complex and realistic models. This paper describes the capabilities of the public-domain software BayesX for estimating complex regression models with structured additive predictors. The program extends the capabilities of existing software for semiparametric regression. Many model classes well known from the literature are special cases of the models supported by BayesX. Examples are Generalized Additive (Mixed) Models, Dynamic Models, Varying Coefficient Models, Geoadditive Models, Geographically Weighted Regression and models for space-time regression. BayesX supports the most common distributions for the response variable. For univariate responses these are Gaussian, binomial, Poisson, gamma and negative binomial. For multicategorical responses, both multinomial logit and probit models for unordered categories of the response as well as cumulative threshold models for ordered categories may be estimated. Moreover, BayesX allows the estimation of complex continuous-time survival and hazard rate models.

    Computational and Robotic Models of Early Language Development: A Review

    We review computational and robotic models of early language learning and development. We first explain why and how these models are used to better understand how children learn language. We argue that they provide concrete theories of language learning as a complex dynamic system, complementing traditional methods in psychology and linguistics. We review different modeling formalisms, grounded in techniques from machine learning and artificial intelligence such as Bayesian and neural network approaches. We then discuss their role in understanding several key mechanisms of language development: cross-situational statistical learning, embodiment, situated social interaction, intrinsically motivated learning, and cultural evolution. We conclude by discussing future challenges for research, including the modeling of large-scale empirical data about language acquisition in real-world environments.
    Keywords: early language learning, computational and robotic models, machine learning, development, embodiment, social interaction, intrinsic motivation, self-organization, dynamical systems, complexity.
    Comment: to appear in International Handbook on Language Development, ed. J. Horst and J. von Koss Torkildsen, Routledge.