
    Does self-monitoring reduce blood pressure? Meta-analysis with meta-regression of randomized controlled trials

    Introduction. Self-monitoring of blood pressure (BP) is an increasingly common part of hypertension management. The objectives of this systematic review were to evaluate the systolic and diastolic BP reduction, and achievement of target BP, associated with self-monitoring. Methods. MEDLINE, Embase, the Cochrane Database of Systematic Reviews, the Database of Abstracts of Clinical Effectiveness, the Health Technology Assessment database, the NHS Economic Evaluation Database, and the TRIP database were searched for studies where the intervention included self-monitoring of BP and the outcome was change in office/ambulatory BP or proportion with controlled BP. Two reviewers independently extracted data. Meta-analysis using a random effects model was combined with meta-regression to investigate heterogeneity in effect sizes. Results. A total of 25 eligible randomized controlled trials (RCTs) (27 comparisons) were identified. Office systolic BP (20 RCTs, 21 comparisons, 5,898 patients) and diastolic BP (23 RCTs, 25 comparisons, 6,038 patients) were significantly reduced in those who self-monitored compared to usual care (weighted mean difference (WMD): systolic −3.82 mmHg (95% confidence interval −5.61 to −2.03), diastolic −1.45 mmHg (−1.95 to −0.94)). Self-monitoring increased the chance of meeting office BP targets (12 RCTs, 13 comparisons, 2,260 patients, relative risk = 1.09 (1.02 to 1.16)). There was significant heterogeneity between studies for all three comparisons, which could be partially accounted for by the use of additional co-interventions. Conclusion. Self-monitoring reduces blood pressure by a small but significant amount. Meta-regression could only account for part of the observed heterogeneity.
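
    A minimal sketch of the DerSimonian-Laird random-effects pooling described above, written in Python. The per-trial mean differences and standard errors below are invented placeholders for illustration, not data from the review.

        import math

        # (mean difference in mmHg, standard error) for each hypothetical trial
        trials = [(-4.2, 1.1), (-2.8, 0.9), (-5.0, 1.6), (-1.9, 0.7)]

        # Fixed-effect (inverse-variance) pooled estimate, needed for Cochran's Q
        w = [1 / se ** 2 for _, se in trials]
        md = [d for d, _ in trials]
        pooled_fe = sum(wi * di for wi, di in zip(w, md)) / sum(w)

        # Between-study variance tau^2 via the DerSimonian-Laird estimator
        q = sum(wi * (di - pooled_fe) ** 2 for wi, di in zip(w, md))
        df = len(trials) - 1
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)

        # Random-effects weights add tau^2 to each trial's variance
        w_re = [1 / (se ** 2 + tau2) for _, se in trials]
        pooled_re = sum(wi * di for wi, di in zip(w_re, md)) / sum(w_re)
        se_re = math.sqrt(1 / sum(w_re))
        lo, hi = pooled_re - 1.96 * se_re, pooled_re + 1.96 * se_re
        print(f"WMD = {pooled_re:.2f} mmHg (95% CI {lo:.2f} to {hi:.2f}), tau^2 = {tau2:.3f}")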

    Evaluating the impact of physical activity apps and wearables: interdisciplinary review

    Background: Although many smartphone apps and wearables have been designed to improve physical activity, their rapidly evolving nature and complexity present challenges for evaluating their impact. Traditional methodologies, such as randomized controlled trials (RCTs), can be slow. To keep pace with rapid technological development, evaluations of mobile health technologies must be efficient. Rapid alternative research designs have been proposed, and efficient in-app data collection methods, including in-device sensors and device-generated logs, are available. Along with effectiveness, it is important to measure engagement (ie, users’ interaction and usage behavior) and acceptability (ie, users’ subjective perceptions and experiences) to help explain how and why apps and wearables work. Objectives: This study aimed to (1) explore the extent to which evaluations of physical activity apps and wearables employ rapid research designs, assess engagement and acceptability as well as effectiveness, and use efficient data collection methods; and (2) describe which dimensions of engagement and acceptability are assessed. Methods: An interdisciplinary scoping review was conducted using 8 databases from health and computing sciences. Included studies measured physical activity and evaluated physical activity apps or wearables that provided sensor-based feedback. Results were analyzed using descriptive numerical summaries, chi-square testing, and qualitative thematic analysis. Results: A total of 1829 abstracts were screened, and 858 articles were read in full. Of 111 included studies, 61 (55.0%) were published between 2015 and 2017. Most (55.0%, 61/111) were RCTs, and only 2 studies (1.8%) used rapid research designs: 1 single-case design and 1 multiphase optimization strategy. Other research designs included 23 (22.5%) repeated measures designs, 11 (9.9%) nonrandomized group designs, 10 (9.0%) case studies, and 4 (3.6%) observational studies. Less than one-third of the studies (32.0%, 35/111) investigated effectiveness, engagement, and acceptability together. To measure physical activity, most studies (90.1%, 101/111) employed sensors (either in-device [67.6%, 75/111] or external [23.4%, 26/111]). RCTs were more likely to employ external sensors (accelerometers: P=.005). Studies that assessed engagement (52.3%, 58/111) mostly used device-generated logs (91%, 53/58) to measure the frequency, depth, and length of engagement. Studies that assessed acceptability (57.7%, 64/111) most often used questionnaires (64%, 42/64) and/or qualitative methods (53%, 34/64) to explore appreciation, perceived effectiveness and usefulness, satisfaction, intention to continue use, and social acceptability. Some studies (14.4%, 16/111) assessed dimensions more closely related to usability (ie, burden of sensor wear and use, interface complexity, and perceived technical performance). Conclusions: The rapid increase of research into the impact of physical activity apps and wearables means that evaluation guidelines are urgently needed to promote efficiency through the use of rapid research designs, in-device sensors, and user logs to assess effectiveness, engagement, and acceptability. Screening articles was time-consuming because reporting across health and computing sciences lacked standardization. Reporting guidelines are therefore needed to facilitate the synthesis of evidence across disciplines.
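
    The chi-square comparison mentioned above (RCTs being more likely to employ external sensors, P=.005) can be sketched as follows. The marginal totals (61 RCTs, 50 other designs, 26 external-sensor studies) come from the abstract, but the cell split is invented for illustration.

        from scipy.stats import chi2_contingency

        #                  external sensor, no external sensor
        table = [[22, 39],   # RCTs (hypothetical split of 61 studies)
                 [4, 46]]    # non-RCT designs (hypothetical split of 50 studies)

        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.4f}")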

    Telehealthcare for chronic obstructive pulmonary disease

    BACKGROUND: Chronic obstructive pulmonary disease (COPD) is a disease of irreversible airways obstruction in which patients often suffer exacerbations. Sometimes these exacerbations need hospital care: telehealthcare has the potential to reduce admission to hospital when used to administer care to the patient from within their own home. OBJECTIVES: To review the effectiveness of telehealthcare for COPD compared with usual face-to-face care. SEARCH METHODS: We searched the Cochrane Airways Group Specialised Register, which is derived from systematic searches of the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, CINAHL, AMED, and PsycINFO; last searched January 2010. SELECTION CRITERIA: We selected randomised controlled trials which assessed telehealthcare, defined as follows: healthcare at a distance, involving the communication of data from the patient to the health carer, usually a doctor or nurse, who then processes the information and responds with feedback regarding the management of the illness. The primary outcomes considered were: number of exacerbations, quality of life as recorded by the St George's Respiratory Questionnaire (SGRQ), hospitalisations, emergency department visits and deaths. DATA COLLECTION AND ANALYSIS: Two authors independently selected trials for inclusion and extracted data. We combined data into forest plots using fixed-effect modelling as heterogeneity was low (I² < 40%). MAIN RESULTS: Ten trials met the inclusion criteria. Telehealthcare was assessed as part of a complex intervention, including nurse case management and other interventions. Telehealthcare was associated with a clinically significant improvement in quality of life in two trials with 253 participants (mean difference −6.57 (95% CI −13.62 to 0.48); on the SGRQ a lower score indicates better quality of life, and the minimum clinically significant difference is a change of −4.0), but the confidence interval was wide. Telehealthcare showed a significant reduction in the number of patients with one or more emergency department attendances over 12 months; odds ratio (OR) 0.27 (95% CI 0.11 to 0.66) in three trials with 449 participants, and the OR of having one or more admissions to hospital over 12 months was 0.46 (95% CI 0.33 to 0.65) in six trials with 604 participants. There was no significant difference in the OR for deaths over 12 months for the telehealthcare group compared to the usual care group in three trials with 503 participants; OR 1.05 (95% CI 0.63 to 1.75). AUTHORS' CONCLUSIONS: Telehealthcare in COPD appears to have a possible impact on the quality of life of patients and on the number of times patients attend the emergency department and the hospital. However, further research is needed to clarify precisely its role, since the trials included telehealthcare as part of more complex packages.
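
    For readers unfamiliar with the fixed-effect pooling and the I² threshold used above, a minimal Python sketch follows. The trial odds ratios and confidence intervals are invented placeholders, not the review's data.

        import math

        # (OR, lower 95% CI, upper 95% CI) for each hypothetical trial
        trials = [(0.52, 0.30, 0.90), (0.41, 0.22, 0.77), (0.48, 0.27, 0.85)]

        log_or = [math.log(or_) for or_, _, _ in trials]
        # Recover the SE of log(OR) from the CI width: log(hi/lo) = 2 * 1.96 * SE
        se = [math.log(hi / lo) / (2 * 1.96) for _, lo, hi in trials]
        w = [1 / s ** 2 for s in se]   # inverse-variance (fixed-effect) weights

        pooled = sum(wi * l for wi, l in zip(w, log_or)) / sum(w)
        se_pooled = math.sqrt(1 / sum(w))
        q = sum(wi * (l - pooled) ** 2 for wi, l in zip(w, log_or))
        df = len(trials) - 1
        i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0   # I^2 heterogeneity

        lo, hi = math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled)
        print(f"Pooled OR = {math.exp(pooled):.2f} (95% CI {lo:.2f} to {hi:.2f}), I^2 = {i2:.0f}%")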

    Efficacy of interventions that use apps to improve diet, physical activity and sedentary behaviour: a systematic review

    Background: Health and fitness applications (apps) have gained popularity in interventions to improve diet, physical activity and sedentary behaviours, but their efficacy is unclear. This systematic review examined the efficacy of interventions that use apps to improve diet, physical activity and sedentary behaviour in children and adults. Methods: Systematic literature searches were conducted in five databases to identify papers published between 2006 and 2016. Studies were included if they used a smartphone app in an intervention to improve diet, physical activity and/or sedentary behaviour for prevention. Interventions could be stand-alone interventions using an app only, or multi-component interventions including an app as one of several intervention components. Outcomes measured were changes in the health behaviours and related health outcomes (i.e., fitness, body weight, blood pressure, glucose, cholesterol, quality of life). Study inclusion and methodological quality were independently assessed by two reviewers. Results: Twenty-seven studies were included, most of them randomised controlled trials (n = 19; 70%). Twenty-three studies targeted adults (17 showed significant health improvements) and four studies targeted children (two demonstrated significant health improvements). Twenty-one studies targeted physical activity (14 showed significant health improvements), 13 studies targeted diet (seven showed significant health improvements) and five studies targeted sedentary behaviour (two showed significant health improvements). Of the studies reporting significant effects, more (n = 12; 63%) detected between-group improvements in the health behaviour or related health outcomes, while fewer (n = 8; 42%) reported significant within-group improvements. A larger proportion of multi-component interventions (8 out of 13; 62%) showed significant between-group improvements compared to stand-alone app interventions (5 out of 14; 36%). Eleven studies reported app usage statistics, and three of them demonstrated that higher app usage was associated with improved health outcomes. Conclusions: This review provided modest evidence that app-based interventions to improve diet, physical activity and sedentary behaviours can be effective. Multi-component interventions appear to be more effective than stand-alone app interventions; however, this remains to be confirmed in controlled trials. Future research is needed on the optimal number and combination of app features, behaviour change techniques, and level of participant contact needed to maximise user engagement and intervention efficacy.
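
    The multi-component (8 of 13) versus stand-alone (5 of 14) comparison above is descriptive; one way to check whether such a difference could be due to chance is a Fisher exact test, sketched below. The counts come from the abstract, but the test itself is our illustration, not an analysis reported by the review.

        from scipy.stats import fisher_exact

        #            improved, not improved
        table = [[8, 5],   # multi-component interventions (8 of 13)
                 [5, 9]]   # stand-alone app interventions (5 of 14)

        odds_ratio, p = fisher_exact(table)
        print(f"OR = {odds_ratio:.2f}, P = {p:.3f}")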

    Are decision trees a feasible knowledge representation to guide extraction of critical information from randomized controlled trial reports?

    Background: This paper proposes the use of decision trees as the basis for automatically extracting information from published randomized controlled trial (RCT) reports. An exploratory analysis of RCT abstracts is undertaken to investigate the feasibility of using decision trees as a semantic structure. Quality-of-paper measures are also examined. Methods: A subset of 455 abstracts (randomly selected from a set of 7,620 retrieved from MEDLINE from 1998 to 2006) was examined for the quality of RCT reporting, the identifiability of RCTs from abstracts, and the completeness and complexity of RCT abstracts with respect to key decision tree elements. Abstracts were manually assigned to 6 sub-groups distinguishing primary RCTs from other design types. For primary RCT studies, we analyzed and annotated the reporting of intervention comparison, population assignment and outcome values. To measure completeness, the frequencies with which complete intervention, population and outcome information are reported in abstracts were measured. A qualitative examination of the reporting language was conducted. Results: Decision tree elements are manually identifiable in the majority of primary RCT abstracts. 73.8% of a random subset were primary studies with a single population assigned to two or more interventions. Of these primary RCT abstracts, 68% were structured, 63% contained pharmaceutical interventions, and 84% reported the total number of study subjects. In a subset of 21 abstracts examined, 71% reported numerical outcome values. Conclusion: The manual identifiability of decision tree elements in the abstract suggests that decision trees could be a suitable construct to guide machine summarisation of RCTs. The presence of decision tree elements could also act as an indicator of RCT report quality in terms of completeness and uniformity.
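
    A minimal sketch of the decision-tree representation the paper proposes: a population node (root) branching into intervention arms, each arm carrying its population assignment and outcome values. All class and field names, and the example trial, are hypothetical.

        from dataclasses import dataclass, field

        @dataclass
        class Outcome:
            name: str      # e.g. "systolic BP change"
            value: float
            unit: str

        @dataclass
        class InterventionArm:
            intervention: str                  # treatment given to this arm
            n_assigned: int                    # population assignment
            outcomes: list = field(default_factory=list)

        @dataclass
        class RCTDecisionTree:
            population: str                    # root node: the study population
            total_subjects: int
            arms: list = field(default_factory=list)   # the intervention comparison

        trial = RCTDecisionTree(
            population="adults with hypertension",
            total_subjects=200,
            arms=[
                InterventionArm("self-monitoring", 100,
                                [Outcome("systolic BP change", -3.8, "mmHg")]),
                InterventionArm("usual care", 100,
                                [Outcome("systolic BP change", -0.5, "mmHg")]),
            ],
        )
        print(trial.arms[0].outcomes[0])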

    Strategies designed to help healthcare professionals to recruit participants to research studies

    BACKGROUND: Identifying and approaching eligible participants for recruitment to research studies usually relies on healthcare professionals. This process is sometimes hampered by deliberate or inadvertent gatekeeping that can introduce bias into patient selection. OBJECTIVES: Our primary objective was to identify and assess the effect of strategies designed to help healthcare professionals to recruit participants to research studies. SEARCH METHODS: We performed searches on 5 January 2015 in the following electronic databases: Cochrane Methodology Register, CENTRAL, MEDLINE, EMBASE, CINAHL, British Nursing Index, PsycINFO, ASSIA and Web of Science (SSCI, SCI-EXPANDED) from 1985 onwards. We checked the reference lists of all included studies and relevant review articles and did citation tracking through Web of Science for all included studies. SELECTION CRITERIA: We selected all studies that evaluated a strategy to identify and recruit participants for research via healthcare professionals and provided pre-post comparison data on recruitment rates. DATA COLLECTION AND ANALYSIS: Two review authors independently screened search results for potential eligibility, read full papers, applied the selection criteria and extracted data. We calculated risk ratios for each study to indicate the effect of each strategy. MAIN RESULTS: Eleven studies met our eligibility criteria, and all were at medium or high risk of bias. Only five studies gave the total number of participants (totalling 7,372 participants). Three studies used a randomised design; the others used pre-post comparisons. Several different strategies were investigated. Four studies examined the impact of additional visits or information for the study site, with no increases in recruitment demonstrated. Increased recruitment rates were reported in two studies that used a dedicated clinical recruiter and in five studies that introduced an automated alert system for identifying eligible participants. The studies were embedded in trials evaluating care mainly in oncology, but also in emergency departments, diabetes and lower back pain. AUTHORS' CONCLUSIONS: There is no strong evidence for any single strategy to help healthcare professionals to recruit participants in research studies. Additional visits or information did not appear to increase recruitment by healthcare professionals. The most promising strategies appear to be those with a dedicated resource (e.g. a clinical recruiter or automated alert system) for identifying suitable participants that reduces the demand on healthcare professionals, but these were assessed in studies at high risk of bias.
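
    A sketch of the per-study risk ratio calculation the review used to express each strategy's effect on recruitment. The pre/post counts below are invented placeholders.

        import math

        def risk_ratio(events_a, total_a, events_b, total_b):
            """RR of recruitment with the strategy (a) versus before it (b),
            with a 95% CI from the standard log-scale approximation."""
            rr = (events_a / total_a) / (events_b / total_b)
            se_log = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
            lo = math.exp(math.log(rr) - 1.96 * se_log)
            hi = math.exp(math.log(rr) + 1.96 * se_log)
            return rr, lo, hi

        # Hypothetical: 60/200 eligible patients recruited after an automated
        # alert system was introduced, versus 35/210 recruited beforehand.
        rr, lo, hi = risk_ratio(60, 200, 35, 210)
        print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")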

    Analysis of platelet-rich plasma extraction variations in platelet and blood components between 4 common commercial kits

    Background: Platelet-rich plasma (PRP) has been extensively used as a treatment to promote tissue healing in tendinopathy, muscle injury, and osteoarthritis. However, extraction methods vary, and this produces different types of PRP. Purpose: To determine the composition of PRP obtained from 4 commercial separation kits, which would allow assessment of current classification systems used in cross-study comparisons. Study Design: Controlled laboratory study. Methods: Three normal adults each donated 181 mL of whole blood, some of which served as a control and the remainder of which was processed through 4 PRP separation kits: GPS III (Biomet Biologics), Smart-Prep2 (Harvest Terumo), Magellan (Arteriocyte Medical Systems), and ACP (Device Technologies). The resultant PRP was tested for platelet count, red blood cell count, and white blood cell count, including differential, in a commercial pathology laboratory. Glucose and pH measurements were obtained from a blood gas autoanalyzer. Results: The 3 kits sampling from the “buffy coat layer” produced greater concentrations of platelets (3-6 times baseline), while the 1 kit sampling from plasma produced a platelet concentration of only 1.5 times baseline. The same 3 kits also produced increased concentrations of white blood cells (3-6 times baseline), consisting of neutrophils, lymphocytes, and monocytes. A small drop in pH was thought to relate to the citrate used in sample preparation. An unexpected increase in glucose concentration, at 3 to 6 times baseline levels, was found in all samples. Conclusion: This study reveals the variation in blood components, including platelets, red blood cells, leukocytes, pH, and glucose, across PRP extractions. The high cell concentrations are important, as the white blood cell count in PRP samples has frequently been ignored or considered insignificant. The lack of standardization of PRP preparation for clinical use has contributed, at least in part, to the varying clinical efficacy of PRP. Clinical Relevance: The variation in platelet and other blood component concentrations between commercial PRP kits may affect clinical treatment outcomes. There is a need for standardization of PRP for clinical use.
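
    Fold change over whole-blood baseline is the quantity the kit comparison above turns on. A trivial sketch follows, with invented counts (platelets and white cells in 10^9/L) standing in for real donor measurements.

        # Hypothetical baseline and PRP counts; real values vary by donor and kit
        baseline = {"platelets": 250.0, "wbc": 6.0}
        samples = {
            "buffy-coat kit": {"platelets": 1100.0, "wbc": 24.0},
            "plasma-based kit": {"platelets": 380.0, "wbc": 1.2},
        }

        for kit, counts in samples.items():
            for component, value in counts.items():
                fold = value / baseline[component]
                print(f"{kit}: {component} at {fold:.1f}x baseline")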

    Computer-Assisted versus Oral-and-Written History Taking for the Prevention and Management of Cardiovascular Disease: a Systematic Review of the Literature

    Background and objectives: CVD is an important global healthcare issue; it is the leading cause of global mortality, with an increasing incidence identified in both developed and developing countries. It is also an extremely costly disease for healthcare systems unless managed effectively. In this review we aimed to: (1) assess the effect of computer-assisted versus oral-and-written history taking on the quality of collected information for the prevention and management of CVD; and (2) assess the effect of computer-assisted versus oral-and-written history taking on the prevention and management of CVD. Methods: We included randomised controlled trials with participants aged 16 years or older at the beginning of the study who were at risk of CVD (prevention) or had previously been diagnosed with CVD (management). We searched all major databases. We assessed risk of bias using the Cochrane Collaboration tool. Results: We identified two studies. One compared the two methods of history-taking for the prevention of cardiovascular disease (n = 75). It showed that patients in the experimental group generally underwent more laboratory procedures, had more biomarker readings recorded, and/or were given, or had reviewed, more dietary changes than the control group. The other study compared the two methods of history-taking for the management of cardiovascular disease (n = 479). It showed that the computerized decision aid appeared to increase the proportion of patients who responded to invitations to discuss CVD prevention with their doctor. The Computer-Assisted History Taking System (CAHTS) increased the proportion of patients who discussed CHD risk reduction with their doctor from 24% to 40%, and increased the proportion who had a specific plan to reduce their risk from 24% to 37%. Discussion: With only one study meeting the inclusion criteria for prevention of CVD and one study for management of CVD, we did not gather sufficient evidence to address all of the objectives of the review, and we were unable to report on most of the secondary patient outcomes in our protocol. Conclusions: We tentatively conclude that CAHTS can provide individually tailored information about CVD prevention. However, further primary studies are needed to confirm these findings, and we cannot draw conclusions in relation to any other clinical outcomes at this stage. There is a need to develop an evidence base to support the effective development and use of CAHTS in this area of practice. In the absence of evidence on effectiveness, the implementation of computer-assisted history taking may rely only on clinicians' tacit knowledge, published monographs and viewpoint articles.
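
    The 24% to 40% change reported above corresponds to an absolute risk difference of 16 percentage points, whose reciprocal gives a number needed to treat. A one-line check (the proportions come from the abstract; the NNT framing is our illustration, not the review's):

        p_control, p_cahts = 0.24, 0.40
        ard = p_cahts - p_control                    # absolute risk difference
        nnt = 1 / ard                                # patients per additional risk discussion
        print(f"ARD = {ard:.2f}, NNT = {nnt:.1f}")   # ARD = 0.16, NNT ~ 6.3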
