
    Linear and nonlinear filtering in mathematical finance: a review

    Copyright © The Authors 2010. This paper presents a review of time series filtering and its applications in mathematical finance. A summary of results from recent empirical studies with market data is presented for yield curve modelling and stochastic volatility modelling. The paper also outlines different approaches to the filtering of nonlinear time series.
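As a hedged illustration of the linear-filtering techniques such a review covers, the sketch below runs a one-dimensional Kalman filter over noisy observations of a random-walk state (for instance, a latent yield-curve level). All parameter values and data here are invented for illustration and are not taken from the paper.

```python
# Minimal 1-D Kalman filter sketch for a random-walk latent state.
# process_var and obs_var are hypothetical noise variances, not values
# from the reviewed paper.

def kalman_filter(observations, process_var=1e-3, obs_var=0.1,
                  init_mean=0.0, init_var=1.0):
    """Return the sequence of posterior means for a random-walk state."""
    mean, var = init_mean, init_var
    means = []
    for y in observations:
        # Predict: the state follows a random walk, so only variance grows.
        var += process_var
        # Update: blend prediction and observation via the Kalman gain.
        gain = var / (var + obs_var)
        mean = mean + gain * (y - mean)
        var = (1.0 - gain) * var
        means.append(mean)
    return means

ys = [1.2, 0.9, 1.1, 1.0, 1.05]
est = kalman_filter(ys)
```

Nonlinear filters (extended/unscented Kalman, particle filters) generalise the same predict-update loop when the state dynamics or observation map are nonlinear.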

    Monitoring and evaluation of family interventions: information on families supported to March 2010 (Research report DFE-RR044)

    "This report updates and builds on the previous research by presenting and analysing FIIS [Family intervention Information system] data provided by family intervention staff up to and including 31 March 2010. The report is primarily based on simple descriptive statistics which provide a summary of the quantitative evidence. In addition statistical modelling (logistic regression) was used to look at the factors associated with successful and unsuccessful outcomes." - Page 14

    Supervised Blockmodelling

    Collective classification models attempt to improve classification performance by taking into account the class labels of related instances. However, they tend not to learn patterns of interactions between classes and/or make the assumption that instances of the same class link to each other (the assortativity assumption). Blockmodels provide a solution to these issues: they are capable of modelling assortative and disassortative interactions, and of learning the pattern of interactions in the form of a summary network. The Supervised Blockmodel provides good classification performance using link structure alone, whilst simultaneously providing an interpretable summary of network interactions to allow a better understanding of the data. This work explores three variants of supervised blockmodels of varying complexity and tests them on four structurally different real-world networks. Comment: Workshop on Collective Learning and Inference on Structured Data 201
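The "summary network" idea can be illustrated with a toy computation: given an edge list and node class labels, count links between each pair of classes. The data and function below are made up for illustration and are not the paper's model, which additionally learns the blocks in a supervised fashion.

```python
# Hypothetical sketch: summarise between-class link patterns as a block
# count table, the kind of interpretable summary a blockmodel produces.

from collections import defaultdict

def block_summary(edges, labels):
    """Count undirected edges between each (sorted) pair of classes."""
    counts = defaultdict(int)
    for u, v in edges:
        a, b = sorted((labels[u], labels[v]))
        counts[(a, b)] += 1
    return dict(counts)

# A 4-cycle where every edge links class A to class B: a disassortative
# pattern that an assortativity assumption would miss.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
labels = {0: "A", 1: "B", 2: "A", 3: "B"}
summary = block_summary(edges, labels)
```

A model restricted to assortative structure would expect the (A, A) and (B, B) counts to dominate; here all the mass sits on (A, B).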

    An Overview of Methods in the Analysis of Dependent Ordered Categorical Data: Assumptions and Implications

    Subjective assessments of pain, quality of life, ability, etc., measured by rating scales and questionnaires are common in clinical research. The resulting responses are categorical with an ordered structure, and the statistical methods must take account of this type of data structure. In this paper we give an overview of methods for the analysis of dependent ordered categorical data and a comparison of standard models and measures with the nonparametric augmented rank measures proposed by Svensson. We focus on the assumptions and issues behind model specifications and data, as well as the implications of the methods. First we summarise some fundamental models for categorical data and two main approaches for repeated ordinal data: marginal and cluster-specific models. We then describe models and measures for application in agreement studies, and finally give a summary of the approach of Svensson. The paper concludes with a summary of important aspects. Keywords: dependent ordinal data; GEE; GLMM; logit; modelling
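The paired-rating data structure behind agreement studies can be sketched in a few lines: cross-tabulate two raters' ordered ratings and compute the proportion of exact agreement. This is only the simplest possible agreement measure; Svensson's augmented rank approach is considerably more involved, and the data below are invented.

```python
# Basic sketch for paired ordinal assessments: cross-tabulate two raters'
# ordered ratings and compute exact agreement. Illustrative data only.

from collections import Counter

def agreement_table(rater1, rater2):
    """Return the cross-tabulation and the exact-agreement proportion."""
    table = Counter(zip(rater1, rater2))
    exact = sum(n for (a, b), n in table.items() if a == b) / len(rater1)
    return table, exact

r1 = [1, 2, 2, 3, 3, 1]
r2 = [1, 2, 3, 3, 2, 1]
table, exact = agreement_table(r1, r2)  # 4 of 6 pairs agree exactly
```

Measures that respect the ordering (weighted kappa, rank-based measures) additionally distinguish near-misses such as (2, 3) from larger discrepancies.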

    R Package wgaim: QTL Analysis in Bi-Parental Populations Using Linear Mixed Models

    The wgaim (whole genome average interval mapping) package developed in the R system for statistical computing (R Development Core Team 2011) builds on linear mixed modelling techniques by incorporating a whole genome approach to detecting significant quantitative trait loci (QTL) in bi-parental populations. Much of the sophistication is inherited through the well-established linear mixed modelling package ASReml-R (Butler et al. 2009). As wgaim uses an extension of interval mapping to incorporate the whole genome into the analysis, functions are provided which allow conversion of genetic data objects created with the qtl package of Broman and Wu (2010) available in R. Results of QTL analyses are available using summary and print methods, as well as diagnostic summaries of the selection method. In addition, the package features a flexible linkage map plotting function that can be easily manipulated to provide an aesthetically pleasing, viewable genetic map. As a visual summary, QTL obtained from one or more models can also be added to the linkage map.

    Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review

    Background: Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. Methods: We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. Results: For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. 
Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios omitting trials gave superior results. Conclusions: Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
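Two of the simplest estimator families discussed above can be sketched directly. The specific formulas below (SD approximated as range/4, mean as the average of the quartiles and median) are common rules of thumb of the kind the review compares; they are stated here as illustrative assumptions, not as the review's recommended estimators, and the input values are invented.

```python
# Hedged sketch of two simple summary-statistic-based approximations:
# SD from the range, and mean from the quartiles and median. The exact
# formulas evaluated in the review may differ from these rules of thumb.

def sd_from_range(minimum, maximum):
    """Approximate the SD as range/4 (a practical approximation)."""
    return (maximum - minimum) / 4.0

def mean_from_quartiles(q1, median, q3):
    """Approximate the mean as the average of q1, the median and q3."""
    return (q1 + median + q3) / 3.0

sd_est = sd_from_range(2.0, 10.0)               # → 2.0
mean_est = mean_from_quartiles(4.0, 5.0, 7.0)   # ≈ 5.33
```

More refined estimators weight these quantities by sample size, which matters when trials are small or distributions are skewed.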

    Population variability in animal health: Influence on dose-exposure-response relationships: Part II: Modelling and simulation

    During the 2017 Biennial meeting, the American Academy of Veterinary Pharmacology and Therapeutics hosted a 1‐day session on the influence of population variability on dose‐exposure‐response relationships. In Part I, we highlighted some of the sources of population variability. Part II provides a summary of discussions on modelling and simulation tools that utilize existing pharmacokinetic data; that can integrate drug physicochemical characteristics with species physiological characteristics and dosing information; or that combine observed, predicted and in vitro information to explore and describe sources of variability that may influence the safe and effective use of veterinary pharmaceuticals.
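The simplest building block of such pharmacokinetic simulation tools is a one-compartment model with first-order absorption and elimination. The sketch below uses invented parameter values purely for illustration; it is not a model from the session.

```python
import math

# Hypothetical one-compartment oral-dose PK sketch. All parameters
# (dose, ka, ke, vd, bioavailability f) are invented for illustration.

def concentration(t, dose, ka, ke, vd, f=1.0):
    """Plasma concentration at time t (first-order absorption/elimination).

    Standard Bateman-type solution, valid for ka != ke.
    """
    return (f * dose * ka) / (vd * (ka - ke)) * (
        math.exp(-ke * t) - math.exp(-ka * t)
    )

# Concentration 4 hours after a 100-unit oral dose.
c4 = concentration(t=4.0, dose=100.0, ka=1.0, ke=0.1, vd=10.0)
```

Population variability enters such models by drawing parameters like ka and ke from between-animal distributions and simulating the resulting spread of exposure.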