General Design Bayesian Generalized Linear Mixed Models
Linear mixed models are able to handle an extraordinary range of
complications in regression-type analyses. Their most common use is to account
for within-subject correlation in longitudinal data analysis. They are also the
standard vehicle for smoothing spatial count data. However, when treated in
full generality, mixed models can also handle spline-type smoothing and closely
approximate kriging. This allows for nonparametric regression models (e.g.,
additive models and varying coefficient models) to be handled within the mixed
model framework. The key is to allow the random effects design matrix to have
general structure; hence our label general design. For continuous response
data, particularly when Gaussianity of the response is reasonably assumed,
computation is now quite mature and supported by the R, SAS and S-PLUS
packages. Such is not the case for binary and count responses, where
generalized linear mixed models (GLMMs) are required, but are hindered by the
presence of intractable multivariate integrals. Software known to us supports
special cases of the GLMM (e.g., PROC NLMIXED in SAS or glmmML in R) or relies
on the sometimes crude Laplace-type approximation of integrals (e.g., the SAS
macro glimmix or glmmPQL in R). This paper describes the fitting of general
design generalized linear mixed models. A Bayesian approach is taken and Markov
chain Monte Carlo (MCMC) is used for estimation and inference. In this
generalized setting, MCMC requires sampling from nonstandard distributions. In
this article, we demonstrate that the MCMC package WinBUGS facilitates sound
fitting of general design Bayesian generalized linear mixed models in practice.
Comment: Published at http://dx.doi.org/10.1214/088342306000000015 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
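To make the setting concrete, the sketch below fits a small Bayesian random-intercept logistic GLMM by MCMC, the simplest special case of the general design the abstract describes (a general design would replace the random-intercept design matrix with, e.g., a spline basis). It is a minimal illustration only: PyMC stands in for the WinBUGS workflow of the paper, and the data, priors, and sampler settings are assumptions.

    # Minimal sketch: Bayesian logistic GLMM with random intercepts, fit by MCMC.
    # PyMC stands in for WinBUGS here; data, priors and settings are illustrative.
    import numpy as np
    import pymc as pm
    import arviz as az

    rng = np.random.default_rng(0)
    n_subjects, n_obs = 30, 10
    subject = np.repeat(np.arange(n_subjects), n_obs)       # grouping index
    x = rng.normal(size=n_subjects * n_obs)                 # one fixed covariate
    u_true = rng.normal(0.0, 1.0, size=n_subjects)          # true subject effects
    y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + x + u_true[subject]))))

    with pm.Model():
        beta0 = pm.Normal("beta0", 0.0, 10.0)               # fixed intercept
        beta1 = pm.Normal("beta1", 0.0, 10.0)               # fixed slope
        sigma_u = pm.HalfNormal("sigma_u", 1.0)             # random-effect SD
        u = pm.Normal("u", 0.0, sigma_u, shape=n_subjects)  # random intercepts
        pm.Bernoulli("y", logit_p=beta0 + beta1 * x + u[subject], observed=y)
        trace = pm.sample(1000, tune=1000, chains=2)        # MCMC

    print(az.summary(trace, var_names=["beta0", "beta1", "sigma_u"]))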
Prevalence, characteristics and management of headache experienced by people with schizophrenia and schizoaffective disorder: a cross-sectional cohort study
Objective: Headache is the most common type of pain reported by people with schizophrenia. This study aimed to establish the prevalence, characteristics and management of these headaches.
Method: One hundred participants with schizophrenia/schizoaffective disorder completed a reliable and valid headache questionnaire. Two clinicians independently classified each headache as migraine (MH), tension-type (TTH), cervicogenic (CGH) or other (OH).
Results: The twelve-month prevalence of headache (57%) was higher than in the general population (46%), with no evidence of a relationship between psychiatric clinical characteristics and the presence of headache. The prevalence of CGH (5%) and MH (18%) was comparable to the general population. TTH (16%) had a lower prevalence, and 19% of participants experienced OH. No-one with MH was prescribed migraine-specific medication, and no-one with CGH or TTH received best-practice treatment.
Conclusion: Headache is a common complaint in people with schizophrenia/schizoaffective disorder, with most headaches fitting recognised diagnostic criteria for which effective interventions are available. No-one in this sample was receiving best-practice care for their headache.
Physiotherapy students' perceptions and experiences of clinical prediction rules
Objectives: Clinical reasoning can be difficult to teach to pre-professional physiotherapy students due to their lack of clinical experience. It may be that tools such as clinical prediction rules (CPRs) could aid the process, but there has been little investigation into their use in physiotherapy clinical education. This study aimed to determine the perceptions and experiences of physiotherapy students regarding CPRs, and whether they are learning about CPRs on clinical placement.
Design: Cross-sectional survey using a paper-based questionnaire.
Participants: Final year pre-professional physiotherapy students (n=371, response rate 77%) from five universities across five states of Australia.
Results: Sixty percent of respondents had not heard of CPRs, and a further 19% had heard of CPRs but had not used them clinically. Only 21% reported using CPRs, and of these nearly three-quarters were rarely, if ever, learning about CPRs in the clinical setting. However, most of those who used CPRs (78%) believed CPRs assisted in the development of clinical reasoning skills, and none (0%) was opposed to the teaching of CPRs to students. The CPRs most commonly recognised and used by students were those for determining the need for an X-ray following injuries to the ankle and foot (67%), and for identifying deep venous thrombosis (63%).
Conclusions: The large majority of students in this sample knew little, if anything, about CPRs, and few had learned about, experienced or practised them on clinical placement. However, students who were aware of CPRs found them helpful for their clinical reasoning and were in favour of learning more about them.
Self-reported aggravating activities do not demonstrate a consistent directional pattern in chronic non-specific low back pain patients: An observational study
Question: Do the self-reported aggravating activities of chronic non-specific low back pain patients demonstrate a consistent directional pattern? Design: Cross-sectional observational study. Participants: 240 chronic non-specific low back pain patients. Outcome measure: We invited experienced clinicians to classify each of the three self-nominated aggravating activities from the Patient Specific Functional Scale by the direction of lumbar spine movement. Patients were described as demonstrating a directional pattern if all nominated activities moved the spine in the same direction. Analyses were undertaken to determine whether the proportion of patients demonstrating a directional pattern was greater than would be expected by chance. Results: In some patients, all tasks did move the spine in the same direction, but this proportion did not differ from chance (p = 0.328). There were no clinical or demographic differences between those who displayed a directional pattern and those who did not (all p > 0.05). Conclusion: Using patients' self-reported aggravating activities, we were unable to demonstrate the existence of a consistent pattern of adverse movement in patients with chronic non-specific low back pain.
A Paradox in Bland-Altman Analysis and a Bernoulli Approach
A reliable method of measurement is important in various scientific areas. When a new method of measurement is developed, it should be tested against a standard method that is currently in use. Bland and Altman proposed limits of agreement (LOA) to compare two methods of measurement under the normality assumption. Recently, a sample size formula has been proposed for hypothesis testing to compare two methods of measurement. In this hypothesis test, the null hypothesis states that the two methods do not satisfy a pre-specified acceptable degree of agreement. Carefully considering the interpretation of the LOA, we argue that there are cases of an acceptable degree of agreement inside the null parameter space. We refer to this subset as the paradoxical parameter space in this article. To address this paradox, we apply a Bernoulli approach to modify the null parameter space and to relax the normality assumption on the data. Using simulations, we demonstrate that the change in statistical power is not negligible when the true parameter values are inside or near the paradoxical parameter space. In addition, we demonstrate an application of the sequential probability ratio test to allow researchers to draw a conclusion with a smaller sample size and to reduce the study time.
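As a quick illustration of the quantities involved, the sketch below computes the classical 95% limits of agreement (mean difference ± 1.96 standard deviations of the differences, under normality) and the Bernoulli-style proportion of differences falling within a pre-specified acceptable difference. The simulated data and the acceptable difference delta = 4 are hypothetical choices, not values from the paper.

    # Minimal sketch: classical Bland-Altman limits of agreement (LOA), plus the
    # Bernoulli view in which each |difference| <= delta counts as a success.
    import numpy as np

    rng = np.random.default_rng(1)
    true = rng.normal(50, 10, size=100)                  # latent true values
    method_a = true + rng.normal(0, 2, size=100)
    method_b = true + 1.0 + rng.normal(0, 2, size=100)   # method B reads ~1 unit high

    d = method_a - method_b                              # paired differences
    bias, sd = d.mean(), d.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)           # 95% LOA under normality
    print(f"bias = {bias:.2f}, 95% LOA = [{loa[0]:.2f}, {loa[1]:.2f}]")

    delta = 4.0                                          # hypothetical acceptable difference
    print("proportion within delta:", np.mean(np.abs(d) <= delta))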
Sequential Testing in Reliability and Validity Studies With Repeated Measurements per Subject
In medical, health, and sports sciences, researchers desire a device with high reliability and validity. This article focuses on reliability and validity studies with n subjects and m ≥ 2 repeated measurements per subject. High statistical power can be achieved by increasing n or m, and increasing m is often easier than increasing n in practice, unless m is so large that it introduces systematic bias. The sequential probability ratio test (SPRT) is a useful statistical method which can decide between a null hypothesis H0 and an alternative hypothesis H1 with, on average, 50% of the sample size required by a non-sequential test. The traditional SPRT requires the likelihood function for each observed random variable, and evaluating the likelihood ratio after each observation of a subject can be a practical burden. Instead, the m observed random variables per subject can be transformed into a test statistic which has a known sampling distribution under H0 and under H1. This allows us to formulate an SPRT based on a sequence of test statistics. In this article, three types of study are considered: reliability of a device, reliability of a device relative to a criterion device, and validity of a device relative to a criterion device. Using the SPRT to test the reliability of a device results, for small m, in an average sample size of about 50% of the fixed sample size of a non-sequential test. For comparing a device to a criterion, the average sample size approaches approximately 60% as m increases. The SPRT tolerates violation of the normality assumption in the validity study, but not in the reliability study.
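The sketch below shows the basic mechanics of Wald's SPRT in the simplest Bernoulli case: accumulate the log-likelihood ratio after each observation and stop once it crosses a boundary. The paper's version instead feeds in a test statistic built from the m repeated measurements per subject; all numbers here are illustrative.

    # Minimal sketch: Wald's SPRT for a Bernoulli probability, H0: p = 0.5 vs
    # H1: p = 0.8. Sampling stops once the log-likelihood ratio crosses a boundary.
    import numpy as np

    alpha, beta = 0.05, 0.20                      # target type I / type II errors
    p0, p1 = 0.5, 0.8
    lower = np.log(beta / (1 - alpha))            # accept-H0 boundary
    upper = np.log((1 - beta) / alpha)            # accept-H1 boundary

    rng = np.random.default_rng(2)
    llr, n = 0.0, 0
    while lower < llr < upper:
        x = rng.binomial(1, 0.8)                  # truth is H1 in this simulation
        llr += np.log(p1 / p0) if x else np.log((1 - p1) / (1 - p0))
        n += 1

    print("accept H1" if llr >= upper else "accept H0", "after", n, "observations")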
A Tutorial of Bland-Altman Analysis in a Bayesian Framework
There are two schools of thought in statistical analysis, frequentist and Bayesian. Though the two approaches produce similar estimates and predictions in large-sample studies, their interpretations are different. Bland-Altman analysis is a statistical method that is widely used for comparing two methods of measurement. It was originally proposed under a frequentist framework, and it has not been used under a Bayesian framework despite the growing popularity of Bayesian analysis. It seems that mathematical and computational complexity narrows access to Bayesian Bland-Altman analysis. In this article, we provide a tutorial on Bayesian Bland-Altman analysis. One approach we suggest is to address the objective of Bland-Altman analysis via the posterior predictive distribution: we can estimate the probability of an acceptable degree of disagreement (fixed a priori) for the difference between two future measurements. To ease the mathematical and computational complexity, an interface applet is provided together with a guideline.
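A minimal sketch of the posterior-predictive idea, assuming normally distributed differences and using PyMC for sampling (the paper's applet is not reproduced here); the priors and the acceptable difference delta are hypothetical choices.

    # Minimal sketch: estimate Pr(|future difference| <= delta) from the posterior
    # predictive of a normal model for the paired differences. Simulated data.
    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(3)
    d = rng.normal(1.0, 2.0, size=60)             # observed method differences

    with pm.Model():
        mu = pm.Normal("mu", 0.0, 10.0)           # weakly informative priors
        sigma = pm.HalfNormal("sigma", 5.0)
        pm.Normal("d", mu, sigma, observed=d)
        d_new = pm.Normal("d_new", mu, sigma)     # one future difference
        trace = pm.sample(1000, tune=1000, chains=2)

    delta = 4.0                                   # hypothetical acceptable difference
    draws = trace.posterior["d_new"].values.ravel()
    print("Pr(|d_new| <= delta) =", np.mean(np.abs(draws) <= delta))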
Sequential Data-Adaptive Bandwidth Selection by Cross-Validation for Nonparametric Prediction
We consider the problem of bandwidth selection by cross-validation from a
sequential point of view in a nonparametric regression model. Having in mind
that in applications one often aims at estimation, prediction and change
detection simultaneously, we investigate that approach for sequential kernel
smoothers in order to base these tasks on a single statistic. We provide
uniform weak laws of large numbers and weak consistency results for the
cross-validated bandwidth. Extensions to weakly dependent error terms are
discussed as well. The errors may be α-mixing or L2-near-epoch dependent, which guarantees that the uniform convergence of the cross-validation sum and the consistency of the cross-validated bandwidth hold true for a large class of time series. The method is illustrated by analyzing photovoltaic data.
Comment: 26 pages.
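For readers unfamiliar with the non-sequential version of the criterion, the sketch below selects a kernel bandwidth by leave-one-out cross-validation for a Nadaraya-Watson smoother on simulated i.i.d. data; the paper's sequential, dependent-error setting is not reproduced, and the data and grid are illustrative.

    # Minimal sketch: leave-one-out cross-validation for the bandwidth of a
    # Nadaraya-Watson kernel smoother with a Gaussian kernel. Simulated data.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 200
    x = rng.uniform(0, 1, size=n)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=n)

    def cv_score(h):
        """Leave-one-out squared prediction error for bandwidth h."""
        w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        np.fill_diagonal(w, 0.0)                  # leave each point out
        yhat = w @ y / w.sum(axis=1)
        return np.mean((y - yhat) ** 2)

    grid = np.linspace(0.01, 0.3, 30)
    h_cv = grid[int(np.argmin([cv_score(h) for h in grid]))]
    print("cross-validated bandwidth:", round(h_cv, 3))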
alphaPDE: A New Multivariate Technique for Parameter Estimation
We present alphaPDE, a new multivariate analysis technique for parameter
estimation. The method is based on a direct construction of joint probability
densities of known variables and the parameters to be estimated. We show how
posterior densities and best-value estimates are then obtained for the
parameters of interest by a straightforward manipulation of these densities.
The method is essentially non-parametric and allows for an intuitive graphical
interpretation. We illustrate the method by outlining how it can be used to
estimate the mass of the top quark, and we explain how the method is applied to
an ensemble of events containing background.
Comment: 11 pages, published version.
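The core construction can be mimicked in a few lines: estimate the joint density of an observable and the parameter from a simulated ensemble, then condition on the observed value to obtain a posterior density. The toy smearing model and numbers below are hypothetical, loosely echoing the top-quark-mass illustration rather than reproducing it.

    # Minimal sketch of the alphaPDE idea: a kernel estimate of the joint density
    # p(x, theta) from an ensemble, conditioned on an observed x to give a
    # posterior for theta. The flat prior and Gaussian smearing are hypothetical.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(5)
    theta = rng.uniform(150, 200, size=5000)      # parameter values in the ensemble
    x = theta + rng.normal(0, 15, size=5000)      # smeared observable per event

    joint = gaussian_kde(np.vstack([x, theta]))   # estimated joint density p(x, theta)

    x_obs = 172.0                                 # hypothetical measured value
    grid = np.linspace(150, 200, 200)
    post = joint(np.vstack([np.full_like(grid, x_obs), grid]))
    post /= np.trapz(post, grid)                  # normalized slice: p(theta | x_obs)
    print("best-value estimate:", grid[int(np.argmax(post))])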