Web-based physiotherapy for people affected by multiple sclerosis: a single blind, randomized controlled feasibility study
Objective:
To examine the feasibility of a trial to evaluate web-based physiotherapy compared to a standard home exercise programme in people with multiple sclerosis.
Design:
Multi-centre, randomized controlled, feasibility study.
Setting:
Three multiple sclerosis out-patient centres.
Participants:
A total of 90 people with multiple sclerosis (Expanded Disability Status Scale 4–6.5).
Interventions:
Participants were randomized to a six-month individualized, home exercise programme delivered via web-based physiotherapy (n = 45; intervention) or a sheet of exercises (n = 45; active comparator).
Outcome measures:
Outcome measures (0, three, six and nine months) included adherence, two-minute walk test, 25-foot walk, Berg Balance Scale, physical activity and healthcare resource use. Interviews were undertaken with 24 participants and 3 physiotherapists.
Results:
Almost 25% of people approached agreed to take part. No intervention-related adverse events were recorded. Adherence was 40%–63% in the intervention group and 53%–71% in the comparator group. There was no difference in the two-minute walk test between groups at baseline (Intervention 80.4 (33.91) m, Comparator 70.6 (31.20) m) and no change over time (at six months: Intervention 81.6 (32.75) m, Comparator 74.8 (36.16) m). There were no significant changes over time in other outcome measures except the EuroQol-5 Dimension at six months, which decreased in the active comparator group. To detect a difference of 8 (17.4) m in the two-minute walk test between groups, 76 participants per group would be required (80% power, P < 0.05) for a future randomized controlled trial.
Conclusion:
No changes were found in the majority of outcome measures over time. Participants and physiotherapists found the study acceptable and feasible. An adequately powered study would need 160 participants.
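As a back-of-envelope check of the sample size reported above, a minimal sketch using the standard two-sample normal approximation (this formula is an assumption; the authors' exact method, rounding, and any dropout allowance are not stated in the abstract):

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, power=0.80, alpha=0.05):
    """Per-group sample size for comparing two means:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sd / delta)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2)

# Figures from the abstract: difference of 8 m, SD 17.4 m in the two-minute walk test.
print(n_per_group(delta=8, sd=17.4))  # ~75 per group; the abstract reports 76
```

The approximation lands within one participant of the reported 76 per group; the gap between 152 and the 160 quoted in the conclusion presumably reflects an allowance for attrition.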
Measuring diet in primary school children aged 8-11 years: validation of the Child and Diet Evaluation Tool (CADET) with an emphasis on fruit and vegetable intake.
Background/Objectives: The Child And Diet Evaluation Tool (CADET) is a 24-h food diary that measures the nutrition intake of children aged 3-7 years, with a focus on fruit and vegetable consumption. Until now CADET has not been used to measure nutrient intake of children aged 8-11 years. To ensure that newly assigned portion sizes for this older age group were valid, participants were asked to complete the CADET diary (the school and home food diary) concurrently with a 1-day weighed record. Subjects/Methods: A total of 67 children with a mean age of 9.3 years (s.d.: ± 1.4, 51% girls) participated in the study. Total fruit and vegetable intake in grams and other nutrients were extracted to compare the mean intakes from the CADET diary and weighed record using t-tests and Pearson's r correlations. Bland-Altman analysis was also conducted to assess the agreement between the two methods. Results: Correlations comparing the CADET diary to the weighed record were high for fruit, vegetables and combined fruit and vegetables (r = 0.7). The results from the Bland-Altman plots revealed a mean difference of 54 g (95% confidence interval: -88, 152) for combined fruit and vegetables intake. CADET is the only tool recommended by the National Obesity Observatory that has been validated in a UK population and provides nutrient-level data on children's diets. Conclusions: The results from this study indicate that CADET can provide high-quality nutrient data suitable for evaluating intervention studies for children aged 3-11 years, with a focus on fruit and vegetable intake.
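The Bland-Altman analysis used above computes the mean difference (bias) between two measurement methods and 95% limits of agreement. A minimal sketch with hypothetical paired intakes (the data below are illustrative, not the study's):

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Mean difference and 95% limits of agreement between two paired methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    d_bar, sd = mean(diffs), stdev(diffs)
    return d_bar, (d_bar - 1.96 * sd, d_bar + 1.96 * sd)

# Hypothetical paired fruit-and-vegetable intakes (grams): diary vs. weighed record.
diary   = [310, 255, 420, 180, 390, 270]
weighed = [280, 240, 400, 200, 350, 260]
bias, (lo, hi) = bland_altman(diary, weighed)
print(round(bias, 1), round(lo, 1), round(hi, 1))
```

If the limits of agreement straddle zero and are narrow relative to the scale of intake, the two methods can be treated as interchangeable for the purpose at hand.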
Cluster randomised trials in the medical literature: two bibliometric surveys
Background: Several reviews of published cluster randomised trials have reported that about half did not take clustering into account in the analysis, which was thus incorrect and potentially misleading. In this paper I ask whether cluster randomised trials are increasing in both number and quality of reporting. Methods: Computer search for papers on cluster randomised trials since 1980; hand search of trial reports published in selected volumes of the British Medical Journal over 20 years. Results: There has been a large increase in the numbers of methodological papers and of trial reports using the term 'cluster random' in recent years, with about equal numbers of each type of paper. The British Medical Journal contained more such reports than any other journal. In this journal there was a corresponding increase over time in the number of trials where subjects were randomised in clusters. In 2003 all reports showed awareness of the need to allow for clustering in the analysis. In 1993 and before, clustering was ignored in most such trials. Conclusion: Cluster trials are becoming more frequent and reporting is of higher quality. Perhaps statistician pressure works.
Sample size calculations for cluster randomised controlled trials with a fixed number of clusters
Background
Cluster randomised controlled trials (CRCTs) are frequently used in health service evaluation. Assuming an average cluster size, required sample sizes are readily computed for both binary and continuous outcomes, by estimating a design effect or inflation factor. However, where the number of clusters is fixed in advance, but it is possible to increase the number of individuals within each cluster, as is frequently the case in health service evaluation, sample size formulae have been less well studied.

Methods
We systematically outline sample size formulae (including required number of randomisation units, detectable difference and power) for CRCTs with a fixed number of clusters, to provide a concise summary for both binary and continuous outcomes. Extensions to the case of unequal cluster sizes are provided.

Results
For trials with a fixed number of equal-sized clusters (k), the trial will be feasible provided the number of clusters is greater than the product of the number of individuals required under individual randomisation (n) and the estimated intra-cluster correlation (ρ). So, a simple rule is that the number of clusters (k) will be sufficient provided:

k > n × ρ

Where this is not the case, investigators can determine the maximum available power to detect the pre-specified difference, or the minimum detectable difference under the pre-specified value for power.

Conclusions
Designing a CRCT with a fixed number of clusters might mean that the study will not be feasible, leading to the notion of a minimum detectable difference (or a maximum achievable power), irrespective of how many individuals are included within each cluster.
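The feasibility rule above follows from the design effect 1 + (m - 1)ρ. A minimal sketch (treating n and k as totals; whether they are per arm or overall depends on the convention adopted, which the abstract does not specify):

```python
import math

def cluster_size_fixed_k(n_individual, icc, k):
    """Required individuals per cluster when the number of clusters k is fixed.
    From k*m >= n*(1 + (m - 1)*icc), solving for m gives
    m >= n*(1 - icc) / (k - n*icc), which is feasible only if k > n*icc."""
    if k <= n_individual * icc:
        return None  # infeasible: no cluster size can recover the required power
    return math.ceil(n_individual * (1 - icc) / (k - n_individual * icc))

print(cluster_size_fixed_k(n_individual=100, icc=0.05, k=10))  # 19 per cluster
print(cluster_size_fixed_k(n_individual=100, icc=0.05, k=4))   # None: 4 <= 100*0.05
```

The `None` branch is the abstract's point: below k = nρ clusters, adding individuals within clusters cannot rescue the trial, and one must instead report a minimum detectable difference or maximum achievable power.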
Basic tasks of sentiment analysis
Subjectivity detection is the task of identifying objective and subjective
sentences. Objective sentences are those which do not exhibit any sentiment.
A sentiment analysis engine is therefore expected to identify and set aside
objective sentences, passing only subjective ones on to further analysis,
e.g., polarity detection. In
subjective sentences, opinions can often be expressed on one or multiple
topics. Aspect extraction is a subtask of sentiment analysis that consists of
identifying opinion targets in opinionated text, i.e., in detecting the
specific aspects of a product or service the opinion holder is either praising
or complaining about.
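The two tasks above can be illustrated with a toy lexicon-based sketch (the word lists are purely illustrative; real systems use learned models or curated lexicons, not hand-picked sets like these):

```python
# Illustrative lexicons only -- not from any real sentiment resource.
SUBJECTIVE = {"love", "hate", "great", "terrible", "disappointing", "amazing"}
POSITIVE = {"love", "great", "amazing"}
NEGATIVE = {"hate", "terrible", "disappointing"}

def is_subjective(sentence):
    """Subjectivity detection: does the sentence contain any opinion word?"""
    return bool(set(sentence.lower().split()) & SUBJECTIVE)

def polarity(sentence):
    """Polarity detection on a subjective sentence: positive minus negative hits."""
    words = set(sentence.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

print(is_subjective("The battery lasts ten hours"))   # False -> objective, filtered out
print(is_subjective("The battery life is terrible"))  # True -> passed to polarity
print(polarity("The battery life is terrible"))       # -1
```

The pipeline mirrors the text: objective sentences are filtered out first, and polarity is only computed on what remains.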
Sampling constrained probability distributions using Spherical Augmentation
Statistical models with constrained probability distributions are abundant in
machine learning. Some examples include regression models with norm constraints
(e.g., Lasso), probit, many copula models, and latent Dirichlet allocation
(LDA). Bayesian inference involving probability distributions confined to
constrained domains could be quite challenging for commonly used sampling
algorithms. In this paper, we propose a novel augmentation technique that
handles a wide range of constraints by mapping the constrained domain to a
sphere in the augmented space. By moving freely on the surface of this sphere,
sampling algorithms handle constraints implicitly and generate proposals that
remain within boundaries when mapped back to the original space. Our proposed
method, called {Spherical Augmentation}, provides a mathematically natural and
computationally efficient framework for sampling from constrained probability
distributions. We show the advantages of our method over state-of-the-art
sampling algorithms, such as exact Hamiltonian Monte Carlo, using several
examples including truncated Gaussian distributions, Bayesian Lasso, Bayesian
bridge regression, reconstruction of quantized stationary Gaussian process, and
LDA for topic modeling.
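The core augmentation idea can be sketched for the simplest case, a unit-ball constraint ||θ|| ≤ 1 (a simplified illustration of the paper's construction, not its full method): append one coordinate so the point lies on the unit sphere one dimension up, let the sampler move on the sphere, and drop the extra coordinate to map back inside the ball.

```python
import math, random

def to_sphere(theta):
    """Augment a point in the unit ball of R^d to a point on the sphere S^d."""
    r2 = sum(t * t for t in theta)
    assert r2 <= 1.0, "theta must lie in the unit ball"
    return theta + [math.sqrt(1.0 - r2)]

def from_sphere(x):
    """Project a point on S^d back into the unit ball by dropping the last coordinate."""
    return x[:-1]

# Any point on the sphere projects back inside the constraint region, so a
# sampler moving freely on the sphere never violates the ball constraint.
x = [random.gauss(0, 1) for _ in range(4)]
norm = math.sqrt(sum(v * v for v in x))
on_sphere = [v / norm for v in x]
theta = from_sphere(on_sphere)
print(sum(t * t for t in theta) <= 1.0)  # True: constraint holds implicitly
```

Because the last coordinate absorbs the slack 1 - ||θ||², the constraint is enforced by geometry rather than by rejection or reflection steps.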
Paradoxical effects of Worrisome Thoughts Suppression: the influence of depressive mood
Thought suppression increases the persistence of unwanted idiosyncratic worrisome
thoughts when individuals try to suppress them. The failure of suppression may
contribute to the development and maintenance of emotional disorders. Depressed
people seem particularly prone to engage in unsuccessful mental control strategies such
as thought suppression. Worry has been reported to be elevated in depressed individuals,
and a dysphoric mood may also contribute to the failure of suppression. However, no
studies have examined the suppression of worrisome thoughts in individuals with depressive
symptoms. To investigate the suppression effects of worrisome thoughts, 46
participants were selected according to the cut-off score of a depressive
symptomatology scale and divided into two groups (subclinical and nonclinical).
All individuals took part in an experimental thought-suppression paradigm.
The results of the mixed factorial analysis of variance revealed an
increased frequency of worrisome thoughts during the suppression phase, depending
on depressive symptoms. These findings confirm that depressive mood can reduce
the success of suppression.
Smc5/6: a link between DNA repair and unidirectional replication?
Of the three structural maintenance of chromosome (SMC) complexes, two directly regulate chromosome dynamics. The third, Smc5/6, functions mainly in homologous recombination and in completing DNA replication. The literature suggests that Smc5/6 coordinates DNA repair, in part through post-translational modification of uncharacterized target proteins that can dictate their subcellular localization, and that Smc5/6 also functions to establish DNA-damage-dependent cohesion. A nucleolar-specific Smc5/6 function has been proposed because Smc5/6 yeast mutants display penetrant phenotypes of ribosomal DNA (rDNA) instability. rDNA repeats are replicated unidirectionally. Here, we propose that unidirectional replication, combined with global Smc5/6 functions, can explain the apparent rDNA specificity.
Statistical methodologies to pool across multiple intervention studies
Combining and analyzing data from heterogeneous randomized controlled trials of complex multiple-component intervention studies, or discussing them in a systematic review, is not straightforward. The present article describes certain issues to be considered when combining data across studies, based on discussions in an NIH-sponsored workshop on pooling issues across studies in consortia (see Belle et al. in Psychol Aging, 18(3):396–405, 2003). Several statistical methodologies are described and their advantages and limitations are explored. Whether weighting the different studies' data differently or employing random effects, one must recognize that different pooling methodologies may yield different results. Pooling can be used for comprehensive exploratory analyses of data from RCTs and should not be viewed as replacing the standard analysis plan for each study. Pooling may help to identify intervention components that may be more effective, especially for subsets of participants with certain behavioral characteristics. Pooling, when supported by statistical tests, can allow exploratory investigation of potential hypotheses and inform the design of future interventions.
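One of the weighting schemes alluded to above, inverse-variance (fixed-effect) pooling, can be sketched in a few lines (the effect sizes and standard errors below are hypothetical, for illustration only):

```python
import math

def fixed_effect_pool(effects, std_errors):
    """Inverse-variance pooled estimate across studies: each study is
    weighted by 1/se^2, so more precise studies contribute more."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = 1.0 / math.sqrt(sum(weights))
    return pooled, pooled_se

# Hypothetical effect sizes and standard errors from three trials.
est, se = fixed_effect_pool([0.30, 0.10, 0.25], [0.10, 0.15, 0.12])
print(round(est, 3), round(se, 3))
```

A random-effects version would add a between-study variance term to each weight, which is one way the choice of pooling methodology changes the result, as the article cautions.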
Improving the normalization of complex interventions: measure development based on normalization process theory (NoMAD): study protocol
Background: Understanding implementation processes is key to ensuring that complex interventions in healthcare are taken up in practice and thus maximize intended benefits for service provision and (ultimately) care to patients. Normalization Process Theory (NPT) provides a framework for understanding how a new intervention becomes part of normal practice. This study aims to develop and validate simple generic tools derived from NPT, to be used to improve the implementation of complex healthcare interventions.
Objectives: The objectives of this study are to: develop a set of NPT-based measures and formatively evaluate their use for identifying implementation problems and monitoring progress; conduct preliminary evaluation of these measures across a range of interventions and contexts, and identify factors that affect this process; explore the utility of these measures for predicting outcomes; and develop an online users' manual for the measures.
Methods: A combination of qualitative (workshops, item development, user feedback, cognitive interviews) and quantitative (survey) methods will be used to develop NPT measures, and test the utility of the measures in six healthcare intervention settings.
Discussion: The measures developed in the study will be available for use by those involved in planning, implementing, and evaluating complex interventions in healthcare and have the potential to enhance the chances of their implementation, leading to sustained changes in working practices.
