General Design Bayesian Generalized Linear Mixed Models
Linear mixed models are able to handle an extraordinary range of
complications in regression-type analyses. Their most common use is to account
for within-subject correlation in longitudinal data analysis. They are also the
standard vehicle for smoothing spatial count data. However, when treated in
full generality, mixed models can also handle spline-type smoothing and closely
approximate kriging. This allows for nonparametric regression models (e.g.,
additive models and varying coefficient models) to be handled within the mixed
model framework. The key is to allow the random effects design matrix to have
general structure; hence our label general design. For continuous response
data, particularly when Gaussianity of the response is reasonably assumed,
computation is now quite mature and supported by the R, SAS and S-PLUS
packages. Such is not the case for binary and count responses, where
generalized linear mixed models (GLMMs) are required, but are hindered by the
presence of intractable multivariate integrals. Software known to us supports
special cases of the GLMM (e.g., PROC NLMIXED in SAS or glmmML in R) or relies
on the sometimes crude Laplace-type approximation of integrals (e.g., the SAS
macro glimmix or glmmPQL in R). This paper describes the fitting of general
design generalized linear mixed models. A Bayesian approach is taken and Markov
chain Monte Carlo (MCMC) is used for estimation and inference. In this
generalized setting, MCMC requires sampling from nonstandard distributions. In
this article, we demonstrate that the MCMC package WinBUGS facilitates sound
fitting of general design Bayesian generalized linear mixed models in practice.
Comment: Published at http://dx.doi.org/10.1214/088342306000000015 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
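To make the computational point concrete, here is a minimal sketch, not the paper's WinBUGS implementation, of MCMC for a hypothetical Bayesian random-intercept logistic GLMM. It illustrates why sampling from nonstandard distributions is required: the full conditionals have no closed form, so each parameter block is updated with a random-walk Metropolis step. The model, priors, and simulated data are illustrative assumptions.

```python
# A minimal sketch (NOT the paper's WinBUGS implementation): random-walk
# Metropolis MCMC for a hypothetical random-intercept logistic GLMM,
#   y_ij ~ Bernoulli(logit^-1(beta0 + u_i)),  u_i ~ N(0, 1),
# with an N(0, 1) prior on beta0 and the random-effect variance fixed at 1
# for simplicity.
import numpy as np

rng = np.random.default_rng(0)

# Simulate grouped binary data: 20 subjects, 10 observations each.
n_subj, n_obs = 20, 10
beta0_true = -0.5
u_true = rng.normal(0.0, 1.0, n_subj)
y = rng.random((n_subj, n_obs)) < 1.0 / (1.0 + np.exp(-(beta0_true + u_true[:, None])))

def log_post(beta0, u):
    """Log posterior up to a constant: Bernoulli log likelihood + N(0,1) log priors."""
    eta = beta0 + u[:, None]                        # subject-level linear predictor
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
    return loglik - 0.5 * beta0**2 - 0.5 * np.sum(u**2)

beta0, u, draws = 0.0, np.zeros(n_subj), []
for it in range(5000):
    # Random-walk Metropolis update for the fixed intercept.
    beta0_prop = beta0 + 0.1 * rng.normal()
    if np.log(rng.random()) < log_post(beta0_prop, u) - log_post(beta0, u):
        beta0 = beta0_prop
    # Random-walk Metropolis update for the random-effect vector (one block).
    u_prop = u + 0.1 * rng.normal(size=n_subj)
    if np.log(rng.random()) < log_post(beta0, u_prop) - log_post(beta0, u):
        u = u_prop
    draws.append(beta0)

print("posterior mean of beta0:", np.mean(draws[1000:]))  # roughly recovers beta0_true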
Generalised linear mixed model analysis via sequential Monte Carlo sampling
We present a sequential Monte Carlo sampler algorithm for the Bayesian analysis of generalised linear mixed models (GLMMs). These models support a variety of interesting regression-type analyses, but performing inference is often extremely difficult, even when using the Bayesian approach combined with Markov chain Monte Carlo (MCMC). The sequential Monte Carlo (SMC) sampler is a new and general method for producing samples from posterior distributions. In this article we demonstrate the use of the SMC method for performing inference for GLMMs. We demonstrate the effectiveness of the method on both simulated and real data, and find that sequential Monte Carlo is a competitive alternative to the available MCMC techniques. © 2008, Institute of Mathematical Statistics. All rights reserved.
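A minimal sketch of the tempered SMC sampler pattern on a toy one-dimensional posterior (a Gaussian mean with a Gaussian prior), not the paper's GLMM application; the prior, likelihood, and temperature schedule are illustrative assumptions. The pattern is reweight, resample, then move with a Metropolis kernel.

```python
# Tempered SMC sampler sketch: particles start at the prior and are moved
# along a temperature schedule toward the posterior.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, 50)              # toy data with true mean 2

def loglik(theta):
    """Gaussian log likelihood of all data points given mean theta (sd = 1)."""
    return -0.5 * np.sum((data[None, :] - theta[:, None]) ** 2, axis=1)

def logprior(theta):
    """N(0, 5^2) prior, up to a constant."""
    return -0.5 * theta**2 / 25.0

n = 2000
theta = rng.normal(0.0, 5.0, n)              # particles drawn from the prior
logw = np.zeros(n)
temps = np.linspace(0.0, 1.0, 21)            # tempering schedule: prior -> posterior

for t0, t1 in zip(temps[:-1], temps[1:]):
    logw += (t1 - t0) * loglik(theta)        # incremental importance weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    theta = theta[rng.choice(n, size=n, p=w)]  # multinomial resampling
    logw[:] = 0.0                              # weights reset after resampling
    # One random-walk Metropolis pass targeting the tempered posterior.
    prop = theta + 0.3 * rng.normal(size=n)
    logr = (t1 * loglik(prop) + logprior(prop)) - (t1 * loglik(theta) + logprior(theta))
    theta = np.where(np.log(rng.random(n)) < logr, prop, theta)

print("SMC posterior mean estimate:", theta.mean())  # close to 2.0
```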
Quasi-Monte Carlo for Highly Structured Generalised Response Models
Highly structured generalised response models, such as generalised linear mixed models and generalised linear models for time series regression, have become indispensable vehicles for data analysis and inference in many areas of application. However, their use in practice is hindered by high-dimensional intractable integrals. Quasi-Monte Carlo (QMC) is a dynamic research area in the general problem of high-dimensional numerical integration, although its potential for statistical applications is yet to be fully explored. We survey recent research in QMC, particularly lattice rules, and report on its application to highly structured generalised response models. New challenges for QMC are identified and new methodologies are developed. QMC methods are seen to provide significant improvements compared with ordinary Monte Carlo methods.
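To make the QMC idea concrete, here is a minimal sketch comparing a randomly shifted rank-1 lattice rule with plain Monte Carlo on a smooth toy integrand; the generating vector is an arbitrary illustrative choice, not an optimized vector from the paper.

```python
# Randomly shifted rank-1 lattice rule (QMC) vs. plain Monte Carlo on a
# smooth toy integrand over [0,1]^8 whose true integral is 1.
import numpy as np

rng = np.random.default_rng(2)

d, n = 8, 4093                               # dimension; number of points (n prime)
z = np.array([1, 182, 408, 1023, 1577, 2034, 2399, 3100])  # illustrative generating vector

def f(x):
    """Product-form test integrand; each factor integrates to 1 on [0,1]."""
    return np.prod(1.0 + 0.5 * (x - 0.5), axis=1)

# Rank-1 lattice points x_k = frac(k*z/n + shift), with a uniform random shift.
k = np.arange(n)[:, None]
x_qmc = (k * z[None, :] / n + rng.random(d)) % 1.0
x_mc = rng.random((n, d))                    # ordinary Monte Carlo points

print("lattice (QMC) estimate:", f(x_qmc).mean())  # typically much closer to 1
print("plain MC estimate:    ", f(x_mc).mean())
```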
Citizen Science 2.0: Data Management Principles to Harness the Power of the Crowd
Citizen science refers to voluntary participation by the general public in scientific endeavors. Although citizen science has a long tradition, the rise of online communities and user-generated web content has the potential to greatly expand its scope and contributions. Citizens spread across a large area will collect more information than an individual researcher can. Because citizen scientists tend to make observations about areas they know well, data are likely to be very detailed. Although the potential for engaging citizen scientists is extensive, there are challenges as well. In this paper we consider one such challenge: creating an environment in which non-experts in a scientific domain can provide appropriate and accurate data regarding their observations. We describe the problem in the context of a research project that includes the development of a website to collect citizen-generated data on the distribution of plants and animals in a geographic region. We propose an approach that can improve the quantity and quality of data collected in such projects by organizing data using instance-based data structures. Potential implications of this approach are discussed and plans for future research to validate the design are described.
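One way to read the instance-based proposal is sketched below: each citizen report is stored as a self-contained observation instance whose attributes are whatever the observer can actually report, deferring expert classification. All field names are illustrative assumptions, not the project's schema.

```python
# Observation stored as an instance record rather than forced into an
# expert taxonomy at entry time; species identification happens later.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class ObservationInstance:
    observer_id: str
    latitude: float
    longitude: float
    observed_at: datetime
    # Free-form attribute/value pairs a non-expert can supply reliably.
    attributes: dict = field(default_factory=dict)
    expert_species_id: Optional[str] = None   # assigned later during review

obs = ObservationInstance(
    "user42", -31.95, 115.86, datetime(2010, 5, 1),
    {"flower_color": "red", "approx_height_cm": 30},
)
print(obs)
```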
Product-oriented modelling and interoperability issues
The consideration of product information and knowledge management, product traceability and genealogy, and product lifecycle management implies new strategies and approaches to manage the flows of information that relate to the flows of material managed at shop-floor level. Moreover, throughout the product lifecycle, coordination needs to be established between reality in the physical world (the physical view) and the virtual world handled by manufacturing information systems (the informational view). This paper presents a product-oriented modelling and product-oriented interoperability approach based on the use of the "Holon" modelling concept as a means of synchronising the physical and informational views. The Zachman framework is afterwards used as a guideline to establish product-oriented interoperability between enterprise systems.
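A minimal sketch of the Holon pairing as we read it: one object holding a physical view (shop-floor state) and an informational view (information-system record), with a synchronisation step that propagates physical-world updates into the informational side. Class and field names are illustrative assumptions, not the paper's model.

```python
# Illustrative Holon: keeps the physical and informational views of one
# product in step with each other.
from dataclasses import dataclass, field

@dataclass
class Holon:
    product_id: str
    physical_view: dict = field(default_factory=dict)       # e.g. sensed location, state
    informational_view: dict = field(default_factory=dict)  # e.g. ERP/MES record

    def synchronise(self, sensed: dict) -> None:
        """Keep the informational view consistent with the physical world."""
        self.physical_view.update(sensed)
        self.informational_view.update(sensed)

h = Holon("lot-0017")
h.synchronise({"location": "cell-3", "temperature_c": 22.5})
print(h.informational_view)
```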
A Quantitative Methodology for Identifying Evolvable Space Systems
1st AIAA Space Exploration Conference, January 2005, Orlando, FL.
With the growing emphasis on spiral development, a system's ability to evolve is becoming increasingly critical. This is especially true of systems designed for the exploration of space. While returning to the Moon is widely regarded as the next step in space exploration, our journey does not end there. Therefore, the technologies, vehicles, and systems created for near-term lunar missions should be selected and designed with the future in mind. Intelligently selecting evolvable systems requires a method for quantitatively measuring evolvability and a procedure for comparing these measurements. This paper provides a brief discussion of a quantitative methodology for evaluating space system evolvability and an in-depth application of this methodology to an example case study.
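The abstract does not spell out its metric, so the following is only a generic illustration of the measure-then-compare pattern it describes: score each candidate system on hypothetical evolvability attributes and rank by a weighted sum.

```python
# Generic illustration only; attributes, weights and scores are made up
# and are not the paper's methodology.
import numpy as np

attributes = ["modularity", "design_margin", "interface_openness"]  # hypothetical
weights = np.array([0.5, 0.3, 0.2])                                 # hypothetical

candidates = {                     # attribute scores in [0, 1], hypothetical
    "lander_A": np.array([0.8, 0.4, 0.6]),
    "lander_B": np.array([0.5, 0.9, 0.7]),
}

scores = {name: float(weights @ s) for name, s in candidates.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: evolvability score {score:.2f}")   # most evolvable first
```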
Information-driven evaluation of data hiding algorithms
Privacy is one of the most important properties an information system must satisfy. A relatively new trend shows that classical access control techniques are not sufficient to guarantee privacy when data mining techniques are used. Privacy Preserving Data Mining (PPDM) algorithms have recently been introduced with the aim of modifying the database in such a way as to prevent the discovery of sensitive information. Given the large number of techniques that can be used to achieve this goal, it is necessary to provide some standard evaluation metrics to determine the best algorithms for a specific application or context. Currently, however, there is no common set of parameters that can be used for this purpose. This paper explores the problem of PPDM algorithm evaluation, starting from the key goal of preserving data quality. To achieve this goal, we propose a formal definition of data quality specifically tailored for use in the context of PPDM algorithms, a set of evaluation parameters and an evaluation algorithm. The resulting evaluation core process is then presented as part of a more general three-step evaluation framework, also taking into account other aspects of algorithm evaluation such as efficiency, scalability and level of privacy.
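A minimal sketch of one facet of the evaluation problem described above: quantify the data-quality loss a sanitisation step causes by comparing simple per-attribute statistics before and after. The metrics below are illustrative assumptions, not the paper's formal data-quality definition.

```python
# Toy before/after comparison for a noise-addition sanitisation step.
import numpy as np

rng = np.random.default_rng(3)

original = rng.normal(50.0, 10.0, size=(1000, 4))            # toy numeric database
sanitised = original + rng.normal(0.0, 5.0, original.shape)  # e.g. noise addition

def quality_loss(before, after):
    """Per-attribute distortion of mean and variance (smaller = better quality)."""
    return (np.abs(before.mean(axis=0) - after.mean(axis=0)),
            np.abs(before.var(axis=0) - after.var(axis=0)))

d_mean, d_var = quality_loss(original, sanitised)
print("mean distortion per attribute:    ", np.round(d_mean, 3))
print("variance distortion per attribute:", np.round(d_var, 3))
```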
Development and psychometric properties of a knee-specific body-perception questionnaire in people with knee osteoarthritis: The Fremantle Knee Awareness Questionnaire
Background: Recent systematic reviews have demonstrated that pain associated with knee osteoarthritis (OA) is a complex phenomenon that involves various contributors. People with knee OA exhibit symptoms of impaired body perception, including reduced tactile acuity, impairments in limb laterality recognition, and degraded proprioceptive acuity. The Fremantle Back Awareness Questionnaire (FreBAQ) was developed to assess body perception specific to the back in people with chronic low back pain. The aim of this study was to develop and assess the psychometric properties of a knee-specific version of the FreBAQ-J (the FreKAQ-J), determine whether people with knee pain experience perceptual impairments, and investigate the relationship between disturbed self-perception and clinical status.
Methods: Sixty-five people with knee OA completed the FreKAQ-J. A subset of the participants completed the FreKAQ-J again two weeks later. Rasch analysis was used to assess item order, targeting, category ordering, unidimensionality, person fit, internal consistency, and differential item functioning. Validity was investigated by examining the relationship between the FreKAQ-J and clinical variables.
Results: The FreKAQ-J had acceptable internal consistency and unidimensionality, good test-retest reliability, and a functional category rating scale. The FreKAQ-J was significantly correlated with pain in motion, disability, pain-related catastrophizing, fear of movement, and anxiety symptomatology.
Conclusions: We developed the FreKAQ-J by modifying the FreBAQ-J. The FreKAQ-J fits the Rasch measurement model well and is suitable for use in people with knee OA. Altered body perception may be worth evaluating when managing people with knee OA.
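For readers unfamiliar with the model underlying this kind of analysis, here is a minimal sketch of the rating scale (polytomous Rasch) model's category probabilities; all numbers in the example are illustrative.

```python
# Rating scale (polytomous Rasch) model: the probability of endorsing
# category k depends on person ability theta, item difficulty delta, and
# a set of category thresholds tau shared across items.
import numpy as np

def rating_scale_probs(theta, delta, tau):
    """P(X = k) for k = 0..m under the rating scale model (m = len(tau))."""
    steps = theta - delta - np.asarray(tau, dtype=float)
    log_num = np.concatenate(([0.0], np.cumsum(steps)))  # empty sum = 0 for k = 0
    p = np.exp(log_num - log_num.max())                  # stabilised softmax
    return p / p.sum()

# Example: a 5-category item (k = 0..4) for a person slightly above item difficulty.
print(rating_scale_probs(theta=0.5, delta=0.0, tau=[-1.5, -0.5, 0.5, 1.5]))
```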