
    A statistical approach for array CGH data analysis

    BACKGROUND: Microarray-CGH experiments are used to detect and map chromosomal imbalances by hybridizing targets of genomic DNA from a test and a reference sample to sequences immobilized on a slide. These probes are genomic DNA sequences (BACs) that are mapped on the genome. The signal has a spatial coherence that can be handled by specific statistical tools, and segmentation methods are a natural framework for this purpose. A CGH profile can be viewed as a succession of segments representing homogeneous regions of the genome whose BACs share, on average, the same relative copy number. We model a CGH profile by a random Gaussian process whose distribution parameters are affected by abrupt changes at unknown coordinates. Two major problems arise: determining which parameters are affected by the abrupt changes (the mean and the variance, or the mean only), and selecting the number of segments in the profile. RESULTS: We demonstrate that existing methods for estimating the number of segments are not well adapted to array CGH data, and we propose an adaptive criterion that detects previously mapped chromosomal aberrations. The performance of this method is assessed on simulations and publicly available data sets. We then discuss the choice of model for array CGH data and show that the model with a homogeneous variance is well suited to this context. CONCLUSIONS: Array CGH data analysis is an emerging field that needs appropriate statistical tools. Process segmentation and model selection provide a theoretical framework that allows precise biological interpretation. Adaptive methods for model selection give promising results for estimating the number of altered regions in the genome.
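    As an illustration of the segmentation framework described above, the sketch below fits a piecewise-constant Gaussian profile with homogeneous variance by dynamic programming and selects the number of segments with a fixed BIC-like penalty. The function names and the penalty constant are hypothetical; the paper's contribution is an adaptive criterion that calibrates this penalty from the data rather than fixing it.

```python
import numpy as np

def segment_costs(y, k_max):
    """Dynamic programming for least-squares segmentation of a 1-D profile.

    Returns the optimal residual sum of squares for fitting y with 1..k_max
    segments, each modeled by its own constant mean (homogeneous variance).
    """
    n = len(y)
    csum = np.concatenate(([0.0], np.cumsum(y)))
    csum2 = np.concatenate(([0.0], np.cumsum(y ** 2)))

    def rss(i, j):  # residual sum of squares of segment y[i:j] around its mean
        s, s2, m = csum[j] - csum[i], csum2[j] - csum2[i], j - i
        return s2 - s * s / m

    best = np.full((k_max + 1, n + 1), np.inf)
    best[0, 0] = 0.0
    for k in range(1, k_max + 1):
        for t in range(k, n + 1):
            best[k, t] = min(best[k - 1, s] + rss(s, t) for s in range(k - 1, t))
    return best[1:, n]

def choose_k(y, k_max=10, beta=2.0):
    """Pick the number of segments with a simple penalized criterion.

    The fixed penalty constant `beta` is illustrative only; an adaptive
    criterion would calibrate the penalty from the data instead.
    """
    n = len(y)
    rss_k = np.maximum(segment_costs(y, k_max), 1e-12)
    ks = np.arange(1, k_max + 1)
    crit = n * np.log(rss_k / n) + beta * ks * np.log(n)
    return int(ks[np.argmin(crit)])

# Toy profile with two altered regions on a flat background
rng = np.random.default_rng(0)
profile = np.concatenate([rng.normal(0, 0.3, 50), rng.normal(1, 0.3, 20),
                          rng.normal(0, 0.3, 50), rng.normal(-1, 0.3, 15)])
print(choose_k(profile))  # expected to be close to 4 segments
```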

    The predictors of glucose screening: the contribution of risk perception

    BACKGROUND: The prevention of type 2 diabetes is a challenge for health institutions. Periodic blood glucose screening in subjects at risk of developing diabetes may be necessary to implement preventive measures before the disease manifests and to diagnose diabetes efficiently. Not only medical aspects but also psychological and social factors, such as risk perception (an individual's judgment of the likelihood of experiencing an adverse event), influence healthy or preventive behaviors. It is still unknown whether risk perception affects health behaviors aimed at reducing the risk of diabetes (glucose screening). The objective of this study was to identify factors that influence glucose screening frequency. METHODS: Eight hundred randomized interviews, stratified by socioeconomic level, were performed in Mexico City. We evaluated the perception of risk of developing diabetes, family history, health status, and socioeconomic variables, and their association with glucose screening frequency. RESULTS: Of the study participants, 55.6% had not had their glucose levels measured in the last year, whereas 32.8% reported having monitored their glucose levels one to three times per year and 11.5% had their levels monitored four or more times per year. Risk perception was significantly associated with the frequency of blood glucose screening. Having a first-degree relative with diabetes, being older than 45 years, and belonging to a middle socioeconomic level increased the probability of subjects seeing a doctor for glucose screening. CONCLUSIONS: Glucose screening is a complex behavior that involves the subject's perception of threat, defined as feeling vulnerable to the development of diabetes, which is determined by the subject's environment and his or her previous experience with diabetes.

    Performance in population models for count data, part II: a new SAEM algorithm.

    Analysis of count data from clinical trials using mixed-effects models has recently become widespread. However, the algorithms available for parameter estimation, including LAPLACE and Gaussian quadrature (GQ), are associated with certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool for the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for application to count data. A new SAEM algorithm was implemented in MATLAB for estimation of both the parameters and the Fisher information matrix. Stochastic Monte Carlo simulations followed by re-estimation were performed according to the scenarios used in previous studies (part I) to investigate the properties of alternative algorithms (Plan et al., 2008, Abstr 1372 [http://www.page-meeting.org/?abstract=1372]). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% for fixed effects and 4.13% for random effects across all models studied, including those accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them being <1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm was extended to the analysis of count data. It provides accurate estimates of both parameters and standard errors, and estimation is significantly faster than with LAPLACE and GQ. The algorithm is implemented in Monolix 3.1 (beta version available in July 2009).
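    The sketch below illustrates the general SAEM scheme the abstract refers to, for the simplest count setting: a Poisson model with one random intercept. Each iteration draws the random effects with a Metropolis-Hastings move, updates stochastically approximated sufficient statistics, and maximizes over the population parameters in closed form. The model, function names, and step-size schedule are assumptions for illustration, not the Monolix implementation described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def saem_poisson(y, n_iter=400, n_burn=200):
    """Minimal SAEM sketch for y_ij ~ Poisson(exp(mu + b_i)), b_i ~ N(0, omega2).

    Simulation step: one Metropolis-Hastings move per subject on b_i.
    SA step: stochastic approximation of complete-data sufficient statistics.
    M step: closed-form updates of mu and omega2 from those statistics.
    """
    n_obs = np.array([len(yi) for yi in y], dtype=float)
    total_y = float(sum(yi.sum() for yi in y))

    mu, omega2 = 0.0, 1.0
    b = np.zeros(len(y))
    s_exp, s_bb = n_obs.sum(), 0.0   # approximations of sum_i n_i*exp(b_i) and mean_i b_i^2

    def complete_ll(bi, yi):
        return yi.sum() * (mu + bi) - len(yi) * np.exp(mu + bi) - 0.5 * bi ** 2 / omega2

    for it in range(n_iter):
        for i, yi in enumerate(y):                                # simulation step
            prop = b[i] + rng.normal(0.0, 0.3)
            if np.log(rng.uniform()) < complete_ll(prop, yi) - complete_ll(b[i], yi):
                b[i] = prop
        gamma = 1.0 if it < n_burn else 1.0 / (it - n_burn + 1)   # SA step
        s_exp += gamma * (np.dot(n_obs, np.exp(b)) - s_exp)
        s_bb += gamma * (np.mean(b ** 2) - s_bb)
        mu = np.log(total_y / s_exp)                              # M step (closed form)
        omega2 = max(s_bb, 1e-6)
    return mu, omega2

# Example on simulated data (hypothetical true values mu = 1.0, omega = 0.5)
subjects = [rng.poisson(np.exp(1.0 + rng.normal(0, 0.5)), size=10) for _ in range(100)]
print(saem_poisson([np.asarray(s) for s in subjects]))
```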

    Modeling of Large Pharmacokinetic Data Using Nonlinear Mixed-Effects: A Paradigm Shift in Veterinary Pharmacology. A Case Study With Robenacoxib in Cats

    The objective of this study was to model the pharmacokinetics (PKs) of robenacoxib in cats using a nonlinear mixed-effects (NLME) approach, leveraging all available information collected from cats receiving robenacoxib s.c. and/or i.v.: 47 densely sampled laboratory cats and 36 clinical cats sparsely sampled preoperatively. Data from both routes were modeled sequentially using Monolix 4.3.2. The influence of parameter correlations and of the available covariates (age, gender, bodyweight, and anesthesia) on population parameter estimates was evaluated using multiple samples from the posterior distribution of the random effects. A bicompartmental disposition model with simultaneous zero- and first-order absorption best described robenacoxib PKs in blood. Clearance was 0.502 L/kg/h and bioavailability was high (78%). The absorption rate constant point estimate (Ka = 0.68 h⁻¹) was lower than beta (median, 1.08 h⁻¹), revealing flip-flop kinetics. No dosing adjustment based on the available covariate information is advocated. This modeling work constitutes the first application of NLME to a large feline population.
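    The flip-flop behavior noted above (absorption slower than disposition, so the terminal slope reflects absorption rather than elimination) can be illustrated with a much simpler model than the bicompartmental one fitted in the study. The sketch below uses a one-compartment model with first-order absorption and purely hypothetical parameter values; it is a conceptual illustration, not a reproduction of the robenacoxib analysis.

```python
import numpy as np

def one_cmt_oral(t, dose, f, ka, cl, v):
    """Concentration for a one-compartment model with first-order absorption:
    C(t) = F*Dose*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)), with ke = CL/V."""
    ke = cl / v
    return f * dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0.5, 24.0, 100)                                       # hours after dosing
# Hypothetical parameters for illustration only (not the study estimates)
fast_abs = one_cmt_oral(t, dose=2.0, f=0.78, ka=8.0, cl=0.5, v=0.2)   # ka >> CL/V
flip_flop = one_cmt_oral(t, dose=2.0, f=0.78, ka=0.3, cl=0.5, v=0.2)  # ka << CL/V

# Terminal slope of log-concentration: approximately -CL/V in the first case,
# but approximately -ka when absorption is rate-limiting (flip-flop kinetics)
print(np.polyfit(t[-20:], np.log(fast_abs[-20:]), 1)[0])
print(np.polyfit(t[-20:], np.log(flip_flop[-20:]), 1)[0])
```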

    2011 space odyssey: spatialization as a mechanism to code order allows a close encounter between memory expertise and classic immediate memory studies

    In 2011, using an innovative working memory paradigm, van Dijk and Fias showed for the first time that to-be-remembered words presented sequentially at the center of a screen acquired a new spatial dimension: the first words of the sequence acquired a left spatial value while the last words acquired a right spatial value. In this article, we argue that this spatialization, which putatively underpins how order is coded in immediate memory, allows bridging the domain of memory expertise with classic immediate memory studies. After briefly reviewing the mechanisms for coding order in immediate memory and the recent studies pointing toward spatialization as an explanatory mechanism, we pinpoint similar mechanisms that are known to exist in memory expertise, particularly in the method of loci. We conclude by analyzing what these similarities can tell us about expertise.

    Between-Subject and Within-Subject Model Mixtures for Classifying HIV Treatment Response

    We present a method for using longitudinal data to classify individuals into clinically relevant population subgroups. This is achieved by treating "subgroup" as a categorical covariate whose value is unknown for each individual, and predicting its value using mixtures of models that represent "typical" longitudinal data from each subgroup. Under a nonlinear mixed-effects model framework, two types of model mixtures are presented, each with its own advantages. Following illustrative simulations, longitudinal viral load data for HIV-positive patients are used to predict whether they are responding (completely, partially, or not at all) to a new drug treatment.
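    A minimal sketch of the between-subject mixture idea: represent each response subgroup by a "typical" longitudinal trajectory, then give a new individual the posterior probability of each subgroup computed from the likelihood of that individual's observations under each trajectory. The trajectory shapes, residual error model, and prior proportions below are hypothetical, and the sketch omits the random effects that the full NLME framework would include.

```python
import numpy as np

def classify_subject(times, y, subgroup_curves, sigma, priors):
    """Posterior subgroup probabilities for one subject's longitudinal data,
    assuming Gaussian residual error (sd sigma) around each typical curve."""
    log_post = np.log(np.asarray(priors, dtype=float))
    for g, curve in enumerate(subgroup_curves):
        resid = y - curve(times)
        log_post[g] += -0.5 * np.sum((resid / sigma) ** 2) - len(y) * np.log(sigma)
    log_post -= log_post.max()                      # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Hypothetical log10 viral-load trajectories for three response subgroups
responder = lambda t: 4.5 - 0.03 * t                       # sustained decline
partial = lambda t: 4.5 - 0.03 * t + 0.0003 * t ** 2       # decline then rebound
non_responder = lambda t: 4.5 - 0.002 * t                  # essentially flat

times = np.array([0.0, 7.0, 14.0, 28.0, 56.0, 84.0])       # days on treatment
obs = 4.5 - 0.025 * times + np.random.default_rng(1).normal(0, 0.2, times.size)
print(classify_subject(times, obs, [responder, partial, non_responder],
                       sigma=0.3, priors=[1 / 3, 1 / 3, 1 / 3]))
```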

    Bayesian off-line detection of multiple change-points corrupted by multiplicative noise: application to SAR image edge detection

    This paper addresses the problem of Bayesian off-line change-point detection in synthetic aperture radar images. The minimum mean square error and maximum a posteriori estimators of the change-point positions are studied. Neither estimator can be computed directly because of optimization or integration problems, so a practical implementation using Markov chain Monte Carlo methods is proposed. This implementation requires a priori knowledge of the so-called hyperparameters; a hyperparameter estimation procedure is therefore proposed that alleviates the requirement of knowing their values. Simulation results on synthetic signals and synthetic aperture radar images are presented.
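    For intuition, the single change-point case can be handled without MCMC: with an exponential (single-look speckle) model and known mean levels, the posterior over the change position can be enumerated directly, which is enough to contrast the MAP and MMSE estimators mentioned above. The multiple change-point case with unknown hyperparameters is what motivates the MCMC implementation in the paper; the model and parameter values below are assumptions for illustration.

```python
import numpy as np

def single_changepoint_posterior(y, mu1, mu2):
    """Posterior over one change-point position under multiplicative (speckle) noise:
    y_t ~ Exponential(mean mu1) for t <= k, Exponential(mean mu2) for t > k,
    with a uniform prior on k and known mean levels."""
    n = len(y)
    cum = np.cumsum(y)
    ks = np.arange(1, n)                        # change occurs just after sample k
    loglik = (-ks * np.log(mu1) - cum[ks - 1] / mu1
              - (n - ks) * np.log(mu2) - (cum[-1] - cum[ks - 1]) / mu2)
    post = np.exp(loglik - loglik.max())
    post /= post.sum()
    k_map = int(ks[np.argmax(post)])            # maximum a posteriori estimate
    k_mmse = float(np.sum(ks * post))           # minimum mean square error estimate
    return k_map, k_mmse

# Simulated 1-D profile whose mean level jumps from 1.0 to 3.0 at t = 60
rng = np.random.default_rng(2)
y = np.concatenate([rng.exponential(1.0, 60), rng.exponential(3.0, 40)])
print(single_changepoint_posterior(y, mu1=1.0, mu2=3.0))
```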

    Robust curvature extrema detection based on new numerical derivation

    Curvature extrema are useful key points for different image analysis tasks. Indeed, polygonal approximation and arc decomposition methods often use these points to initialize or improve their algorithms, and several shape-based image retrieval methods also base their descriptors on key points. This paper focuses on the detection of curvature extrema within a raster-to-vector conversion framework. We propose an original adaptation of an approach used in nonlinear control for fault diagnosis and fault-tolerant control, based on algebraic differentiation, which is robust to noise. The experimental results are promising and show the robustness of the approach when the contours are corrupted by a high level of speckle noise.
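    The sketch below keeps the overall pipeline (noise-robust derivatives of the contour coordinates, curvature from those derivatives, then local extrema) but replaces the algebraic differentiation of the paper with a sliding local quadratic fit, a simpler and commonly used noise-robust derivative estimator; the window size, test contour, and noise level are illustrative assumptions.

```python
import numpy as np

def smooth_derivatives(x, window=11, dt=1.0):
    """Noise-robust first and second derivatives of one coordinate of a closed
    contour, from a sliding least-squares quadratic fit with periodic padding."""
    half = window // 2
    t = (np.arange(window) - half) * dt
    design = np.vstack([np.ones(window), t, t ** 2]).T
    pinv = np.linalg.pinv(design)                      # fits a + b*t + c*t^2
    xp = np.concatenate([x[-half:], x, x[:half]])      # wrap around the contour
    d1, d2 = np.empty(len(x)), np.empty(len(x))
    for i in range(len(x)):
        a, bcoef, ccoef = pinv @ xp[i:i + window]
        d1[i], d2[i] = bcoef, 2.0 * ccoef
    return d1, d2

def curvature_extrema(xs, ys, window=11):
    """Indices of local curvature extrema along the closed contour (xs, ys)."""
    dx, ddx = smooth_derivatives(xs, window)
    dy, ddy = smooth_derivatives(ys, window)
    kappa = (dx * ddy - dy * ddx) / np.power(dx ** 2 + dy ** 2, 1.5)
    prev_k, next_k = np.roll(kappa, 1), np.roll(kappa, -1)
    return np.where((kappa > np.maximum(prev_k, next_k)) |
                    (kappa < np.minimum(prev_k, next_k)))[0]

# Noisy square-shaped contour: extrema should concentrate near the four corners
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
r = 1.0 / np.maximum(np.abs(np.cos(theta)), np.abs(np.sin(theta)))
rng = np.random.default_rng(3)
xs = r * np.cos(theta) + rng.normal(0, 0.01, theta.size)
ys = r * np.sin(theta) + rng.normal(0, 0.01, theta.size)
print(curvature_extrema(xs, ys))
```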