5,734 research outputs found
Compliance with anthelmintic treatment in the neglected tropical diseases control programmes: a systematic review
PRISMA checklist. (PDF 223 kb)
The Detection of Damage and the Measurement of Strain within Composites by Means of Embedded Optical Fiber Sensors
Structurally integrated fiber optic sensors hold the promise of improved quality control of composites and “real-time, in-service” monitoring of the loads to which they are subjected and any damage they may sustain. This could reduce overdesign and increase confidence in their use by improving both safety and their economics, especially in terms of inspection and maintenance (Figure 1). This would be particularly relevant to the aerospace industry, where any weight saving has a multiplier effect. The technology of embedding arrays of optical fiber sensors within advanced composite material structures during their fabrication essentially provides materials with “optical nerves”. Improved quality control would be achieved by monitoring the internal state of composites during their manufacture. Also, since “in-service” monitoring of structural loads and structural integrity would permit weaknesses to be indicated before they became critical, longer periods could be allowed between costly inspections. When the system is taken out of service for such an inspection, a shorter downtime might be expected, since the built-in sensors would have already indicated sites of weakness and their rate of deterioration. A recent overview of fiber optic based “Smart Structures” has been prepared by the author [1].
The impact of low energy proton damage on the operational characteristics of EPIC-MOS CCDs
The University of Tübingen 3.5 MeV Van de Graaff accelerator facility was used to investigate the effect of low energy protons on the performance of the European Photon Imaging Camera (EPIC) metal–oxide–semiconductor (MOS) charge coupled devices (CCDs). Two CCDs were irradiated in different parts of their detecting areas using different proton spectra and dose rates. Iron-55 was the calibration source in all cases and was used to measure any increase in charge transfer inefficiency (CTI) and any degradation in the spectral resolution of the CCDs. Changes in the CCD bright pixel table and in the low X-ray energy response of the device were also examined.
The Monte Carlo code Stopping and Range of Ions in Matter (SRIM) was used to model the effect of a 10 MeV equivalent fluence of protons interacting with the CCD, since the non-ionising energy loss (NIEL) function could not be applied effectively at such low proton energies. From the 10 MeV values, the expected CTI degradation could be calculated and then compared with the measured CTI changes.
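For orientation, a "10 MeV equivalent fluence" is conventionally obtained by weighting the delivered proton spectrum with a displacement-damage function and normalising to its value at 10 MeV. The sketch below shows that bookkeeping only; the damage-function values, energy grid and fluences are placeholder assumptions, not figures from this study.

# Hedged sketch: convert a proton irradiation spectrum into a 10 MeV equivalent
# fluence using displacement-damage (NIEL-style) weighting. All numbers are
# illustrative placeholders, not values from the EPIC-MOS experiment.

damage_function = {      # relative displacement damage per proton, d(E) (assumed)
    1.0: 4.0,            # assumed value at 1 MeV
    3.5: 2.0,            # assumed value at 3.5 MeV
    10.0: 1.0,           # normalisation point: d(10 MeV) = 1 by definition
}

fluence = {              # protons per cm^2 delivered at each energy (assumed)
    1.0: 2.0e8,
    3.5: 5.0e8,
}

# 10 MeV equivalent fluence: sum over energies of fluence(E) * d(E) / d(10 MeV)
phi_eq_10mev = sum(phi * damage_function[E] / damage_function[10.0]
                   for E, phi in fluence.items())

print(f"10 MeV equivalent fluence ~ {phi_eq_10mev:.2e} p/cm^2")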
Epidemiological surveys of, and research on, soil-transmitted helminths in Southeast Asia: a systematic review
PRISMA checklist, full list of search terms and Supporting Figure 1. (DOCX 1462 kb)
MultiBUGS: A Parallel Implementation of the BUGS Modeling Framework for Faster Bayesian Inference
MultiBUGS is a new version of the general-purpose Bayesian modeling software BUGS that implements a generic algorithm for parallelizing Markov chain Monte Carlo (MCMC) algorithms to speed up posterior inference of Bayesian models. The algorithm parallelizes evaluation of the product-form likelihoods formed when a parameter has many children in the directed acyclic graph (DAG) representation, and parallelizes sampling of conditionally independent sets of parameters. A heuristic algorithm is used to decide which approach to use for each parameter and to apportion computation across computational cores. This enables MultiBUGS to automatically parallelize the broad range of statistical models that can be fitted using BUGS-language software, making the dramatic speed-ups of modern multi-core computing accessible to applied statisticians, without requiring any experience of parallel programming. We demonstrate the use of MultiBUGS on simulated data designed to mimic a hierarchical e-health linked-data study of methadone prescriptions including 425,112 observations and 20,426 random effects. Posterior inference for the e-health model takes several hours in existing software, but MultiBUGS can perform inference in only 28 minutes using 48 computational cores.
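To illustrate the first of the two strategies described in the abstract, the sketch below parallelizes a product-form log-likelihood over the "children" of a single parameter. It is a minimal illustration of the idea, not MultiBUGS code; the normal likelihood, the simulated data and the chunking scheme are assumptions.

# Hedged sketch: parallel evaluation of a product-form log-likelihood,
# sum_i log p(y_i | theta), split across worker processes. Illustrative only.

import math
import numpy as np
from multiprocessing import Pool

def partial_loglik(args):
    """Log-likelihood contribution of one chunk of observations (children)."""
    y_chunk, mu, sigma = args
    return float(np.sum(-0.5 * ((y_chunk - mu) / sigma) ** 2
                        - math.log(sigma) - 0.5 * math.log(2 * math.pi)))

def parallel_loglik(y, mu, sigma, n_workers=4):
    """Split the children across workers and sum their contributions."""
    chunks = np.array_split(y, n_workers)
    with Pool(n_workers) as pool:
        parts = pool.map(partial_loglik, [(c, mu, sigma) for c in chunks])
    return sum(parts)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.normal(loc=1.0, scale=2.0, size=100_000)  # simulated observations
    print(parallel_loglik(y, mu=1.0, sigma=2.0))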
What is required in terms of mass drug administration to interrupt the transmission of schistosome parasites in regions of endemic infection?
Supplementary information. (PDF 445 kb)
Label-invariant models for the analysis of meta-epidemiological data.
Rich meta-epidemiological data sets have been collected to explore associations between intervention effect estimates and study-level characteristics. Welton et al. proposed models for the analysis of meta-epidemiological data, but these models are restrictive because they force heterogeneity among studies with a particular characteristic to be at least as large as that among studies without the characteristic. In this paper we present alternative models that are invariant to the labels defining the two categories of studies. To exemplify the methods, we use a collection of meta-analyses in which the Cochrane Risk of Bias tool has been implemented. We first investigate the influence of small trial sample sizes (fewer than 100 participants), before investigating the influence of multiple methodological flaws (inadequate or unclear sequence generation, allocation concealment, and blinding). We fit both the Welton et al. model and our proposed label-invariant model and compare the results. Estimates of mean bias associated with the trial characteristics and of between-trial variances are not very sensitive to the choice of model. Results from fitting a univariable model show that heterogeneity variance is, on average, 88% greater among trials with fewer than 100 participants. On the basis of a multivariable model, heterogeneity variance is, on average, 25% greater among trials with inadequate/unclear sequence generation, 51% greater among trials with inadequate/unclear blinding, and 23% lower among trials with inadequate/unclear allocation concealment, although the 95% intervals for these ratios are very wide. Our proposed label-invariant models for meta-epidemiological data analysis facilitate investigations of between-study heterogeneity attributable to certain study characteristics.
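For orientation only, the restriction being relaxed can be written schematically as below; the notation (x_i, tau, kappa) is ours rather than the paper's, and this is a simplified sketch of a Welton et al.-style structure, not the exact model fitted.

% Schematic only. x_i = 1 if study i has the characteristic (e.g. a flaw), 0 otherwise.
% Welton et al.-style structure: the characteristic adds extra variability kappa^2, so
%   Var(theta_i | x_i = 1) = tau^2 + kappa^2  >=  Var(theta_i | x_i = 0) = tau^2.
\theta_i \sim \mathrm{N}\!\left(d + x_i b,\; \tau^2 + x_i \kappa^2\right)
% Label-invariant alternative (schematic): each category has its own heterogeneity
% variance, so swapping the labels leaves the model class unchanged and tau_1^2 may
% be either larger or smaller than tau_0^2.
\theta_i \sim \mathrm{N}\!\left(d + x_i b,\; \tau_{x_i}^2\right)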
Current practice in the diagnosis and management of sarcopenia and frailty – results from a UK-wide survey
Objectives: Despite a rising clinical and research profile, there is limited information about how frailty and sarcopenia are diagnosed and managed in clinical practice. Our objective was to build a picture of current practice by conducting a survey of UK healthcare professionals.
Methods: We surveyed healthcare professionals in NHS organisations, using a series of four questionnaires. These focussed on the diagnosis and management of sarcopenia, and the diagnosis and management of frailty in acute medical units, community settings and surgical units.
Results: Response rates ranged from 49/177 (28%) organisations for the sarcopenia questionnaire to 104/177 (59%) for the surgical unit questionnaire. Less than half of responding organisations identified sarcopenia; few made the diagnosis using a recognised algorithm or offered resistance training. The commonest tools used to identify frailty were the Rockwood Clinical Frailty Scale or presence of a frailty syndrome. Comprehensive Geriatric Assessment was offered by the majority of organisations, but this included exercise therapy in less than half of cases, and medication review in only one-third to two-thirds of cases.
Conclusions: Opportunities exist to improve the consistency of diagnosis and the delivery of evidence-based interventions for both sarcopenia and frailty.
The Impact of Study Size on Meta-analyses: Examination of Underpowered Studies in Cochrane Reviews
Background: Most meta-analyses include data from one or more small studies that, individually, do not have power to detect an intervention effect. The relative influence of adequately powered and underpowered studies in published meta-analyses has not previously been explored. We examine the distribution of power available in studies within meta-analyses published in Cochrane reviews, and investigate the impact of underpowered studies on meta-analysis results.
Methods and Findings: For 14,886 meta-analyses of binary outcomes from 1,991 Cochrane reviews, we calculated power per study within each meta-analysis. We defined adequate power as ≥50% power to detect a 30% relative risk reduction. In a subset of 1,107 meta-analyses including 5 or more studies with at least two adequately powered and at least one underpowered, results were compared with and without underpowered studies. In 10,492 (70%) of 14,886 meta-analyses, all included studies were underpowered; only 2,588 (17%) included at least two adequately powered studies. 34% of the meta-analyses themselves were adequately powered. The median of summary relative risks was 0.75 across all meta-analyses (inter-quartile range 0.55 to 0.89). In the subset examined, odds ratios in underpowered studies were 15% lower (95% CI 11% to 18%, P<0.0001) than in adequately powered studies, in meta-analyses of controlled pharmacological trials; and 12% lower (95% CI 7% to 17%, P<0.0001) in meta-analyses of controlled non-pharmacological trials. The standard error of the intervention effect increased by a median of 11% (inter-quartile range 21% to 35%) when underpowered studies were omitted; and between-study heterogeneity tended to decrease.
Conclusions: When at least two adequately powered studies are available in meta-analyses reported by Cochrane reviews, underpowered studies often contribute little information, and could be left out if a rapid review of the evidence is required. However, underpowered studies made up the entirety of the evidence in most Cochrane reviews.
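To make the power threshold above concrete, the sketch below computes the approximate power of a single two-arm trial to detect a 30% relative risk reduction, using a normal approximation to the two-proportion test; the assumed control-group risk, per-arm sample size and significance level are placeholders, not figures from the review.

# Hedged sketch: approximate power of a two-arm trial (equal arms) to detect a
# 30% relative risk reduction, via a normal approximation to the two-proportion
# z-test. Control risk, per-arm size and alpha are illustrative assumptions.

from math import sqrt
from statistics import NormalDist

def power_rrr(n_per_arm, control_risk, rrr=0.30, alpha=0.05):
    p1 = control_risk                      # event risk in the control arm
    p2 = control_risk * (1.0 - rrr)        # event risk under a 30% RRR
    se = sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    z_effect = abs(p1 - p2) / se
    return NormalDist().cdf(z_effect - z_crit)  # approximate two-sided power

# Example: 50 participants per arm with a 20% control risk (both assumed values)
print(f"power ~ {power_rrr(50, 0.20):.2f}")  # roughly 0.12, well below 50%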
- …