
    Assessment of the quality and variability of health information on chronic pain websites using the DISCERN instrument

    Abstract

    Background: The Internet is increasingly used by providers as a tool for disseminating pain-related health information and by patients as a resource about health conditions and treatment options. However, health information on the Internet remains unregulated and varies in quality, accuracy and readability. The objective of this study was to determine the quality of pain websites and to explain variability in quality and readability between pain websites.

    Methods: Five key terms (pain, chronic pain, back pain, arthritis, and fibromyalgia) were entered into the Google, Yahoo and MSN search engines. Websites were assessed using the DISCERN instrument as a quality index. Grade-level readability was assessed using the Flesch-Kincaid readability algorithm. Univariate (alpha = 0.20) and multivariable (alpha = 0.05) regression analyses were used to explain the variability in DISCERN scores and grade-level readability, using potential for commercial gain, health-related seals of approval, language(s) and multimedia features as independent variables.

    Results: A total of 300 websites were assessed; 21 were excluded per the exclusion criteria and 110 were duplicates, leaving 161 unique sites. About 6.8% (11/161) of the websites offered patients commercial products for their pain condition, 36.0% (58/161) had a health-related seal of approval, 75.8% (122/161) presented information in English only and 40.4% (65/161) offered an interactive multimedia experience. Across the unique websites, the mean (SD) DISCERN score was 55.9 (13.6) out of a maximum of 80, and the mean (SD) grade-level readability was 10.9 (3.9). The multivariable regressions indicated that health-related seals of approval (P = 0.015) and potential for commercial gain (P = 0.189) contributed to higher DISCERN scores, while seals of approval (P = 0.168) and interactive multimedia (P = 0.244) contributed to lower grade-level readability, as indicated by the estimated beta coefficients.

    Conclusion: The overall quality of pain websites is moderate, with some shortcomings. Websites that scored high on the DISCERN questionnaire carried health-related seals of approval and offered commercial solutions for pain-related conditions, while those with lower grade-level readability offered interactive multimedia options and were endorsed by health seals.
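    The regression analysis described in the Methods can be sketched as follows. This is a minimal illustration in Python, not the authors' code: the input file pain_websites.csv and the column names (discern_score, grade_level, commercial_gain, seal_of_approval, english_only, multimedia) are hypothetical stand-ins for the two outcomes and four independent variables named in the abstract.

```python
# Minimal sketch of the multivariable regressions described above
# (hypothetical file and column names, not the authors' code).
import pandas as pd
import statsmodels.formula.api as smf

sites = pd.read_csv("pain_websites.csv")  # one row per unique website

# Outcome 1: DISCERN quality score (0-80) on the four predictors.
quality_model = smf.ols(
    "discern_score ~ commercial_gain + seal_of_approval"
    " + english_only + multimedia",
    data=sites,
).fit()
print(quality_model.summary())

# Outcome 2: Flesch-Kincaid grade-level readability, same predictors.
readability_model = smf.ols(
    "grade_level ~ commercial_gain + seal_of_approval"
    " + english_only + multimedia",
    data=sites,
).fit()
print(readability_model.summary())
```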

    Survey of professional views on sharing interim results by the Data Safety Monitoring Board (DSMB): what to share, with whom and why

    Abstract

    Background: Sharing interim results by the Data Safety Monitoring Board (DSMB) with non-DSMB members is an issue that can affect trial integrity, and it is unclear what should be shared. This study assessed professionals' views on what interim information should be shared, with whom and why.

    Methods: We conducted an online survey of members of the Society for Clinical Trials (SCT) and the International Society for Clinical Biostatistics (ISCB) in 2015, asking their professional views on sharing interim results. The survey was advertised by email, which included a link to the online survey.

    Results: Approximately 3136 members (936 SCT + 2200 ISCB) were invited, for a response rate of 12% (371/3136). The majority reported that the Interim Control Event Rate (IControlER) (149/237; 62.9% [95% CI, 56.7–69.0%]), the Adaptive Conditional Power (ACP) (144/224; 64.3% [95% CI, 58.0–70.6%]) and the Unconditional Conditional Power (UCP) (126/208; 60.6% [95% CI, 53.9–67.2%]) should not be shared with non-DSMB members. The majority reported that the Interim Combined Event Rate (ICombinedER) (168/262; 64.1% [95% CI, 58.0–69.9%]) should be shared with non-DSMB members, particularly the steering committee (SC), because it does not unmask interim results and helps with monitoring trial progress, safety and design assumptions.

    Conclusion: The IControlER and the ACP are unmasking of interim results and should not be shared. The UCP is a technical measure that is potentially misleading and also should not be shared. The ICombinedER is usually known by the SC and sponsor, making it easy to determine group rates if the IControlER is known. Though most respondents thought the ICombinedER should be shared with the SC because it does not unmask relative effects between groups, we do not recommend sharing it: it is a flawed measure open to multiple interpretations, possibly suggesting that one group is performing better, worse or the same as a comparator group, and thus inviting guesses about how the groups are doing relative to one another.
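    The confidence intervals reported above are consistent with a normal-approximation (Wald) interval for a proportion. A quick check in Python (my own sketch, not from the paper):

```python
# Reproduce the reported 95% CIs with a Wald interval for a proportion.
from statsmodels.stats.proportion import proportion_confint

responses = {
    "IControlER should not be shared": (149, 237),
    "ACP should not be shared": (144, 224),
    "UCP should not be shared": (126, 208),
    "ICombinedER should be shared": (168, 262),
}
for view, (count, nobs) in responses.items():
    low, high = proportion_confint(count, nobs, alpha=0.05, method="normal")
    print(f"{view}: {count / nobs:.1%} [95% CI, {low:.1%}-{high:.1%}]")
```

    Running this reproduces the intervals in the abstract (e.g. 62.9% [56.7%–69.0%] for the IControlER item).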

    Sharing some interim data in trial monitoring can mislead or unmask trial investigators: A scenario-based survey of trial experts

    Background: Sharing masked interim results by the Data Safety Monitoring Board (DSMB) with non-DSMB members is an important issue that can affect trial integrity. The objective of our survey was to collect evidence on how seemingly masked interim results, or extrapolations of them, are interpreted, and to discuss whether these results should be shared at interim.

    Methods: We conducted a survey of six scenario-based questions asking trial experts how they interpreted three kinds of seemingly masked interim results or result-extrapolation measures: the interim combined event rate, the adaptive conditional power and the “unconditional” conditional power.

    Results: Thirty-one current affiliates of the Consolidated Standards of Reporting Trials (CONSORT) group were invited to participate (February 2015); the response rate was 71.0% (22/31). About half, 52.6% (10/19; 95% CI: 28.9%–74.0%), correctly indicated that the interim combined event rate, if shared at interim, can be interpreted in three ways (drug X doing better than placebo, worse than placebo, or the same). The majority, 72.2% (13/18; 95% CI: 46.5%–89.7%), correctly indicated that the adaptive conditional power suggests relative treatment-group effects. The majority, 53.3% (8/15; 95% CI: 26.6%–77.0%), incorrectly indicated that the “unconditional” conditional power suggests relative treatment-group effects.

    Discussion/Conclusion: Knowledge of these three results or result-extrapolation measures should not be shared outside the DSMB at interim, as they may mislead or unmask interim results, potentially introducing trial bias. For example, the interim combined event rate can be interpreted in one of three ways, potentially leading to mistaken guesswork about interim results, while knowledge of the adaptive conditional power by non-DSMB members is telling of relative treatment effects and thus unmasking of interim results.

    Keywords: Data Safety Monitoring Board (DSMB), Data Monitoring Committee (DMC), Interim result sharing, Focus group survey
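    The three-way ambiguity of the interim combined event rate follows from simple arithmetic: under 1:1 allocation, the combined rate is the average of the two masked arm rates, so very different arm splits are consistent with the same combined value. A small illustration with hypothetical numbers (not survey data):

```python
# Hypothetical numbers: the same 20% combined event rate is consistent
# with drug X doing better, worse, or the same as placebo under 1:1
# allocation, since the combined rate is the mean of the two arm rates.
scenarios = {
    "drug X better than placebo": (15, 25),  # (drug %, placebo %)
    "drug X worse than placebo": (25, 15),
    "drug X same as placebo": (20, 20),
}
for interpretation, (drug, placebo) in scenarios.items():
    combined = (drug + placebo) / 2
    assert combined == 20  # identical masked combined rate in every case
    print(f"{interpretation}: drug={drug}%, placebo={placebo}%,"
          f" combined={combined:.0f}%")
```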

    A look at the potential association between PICOT framing of a research question and the quality of reporting of analgesia RCTs

    Abstract

    Background: Methodologists have proposed that forming a good research question should initiate the process of developing a research protocol, which in turn guides the design, conduct and analysis of randomized controlled trials (RCTs) and helps improve the quality of reporting of such studies. The five constituents of a good research question under PICOT framing are: Population, Intervention, Comparator, Outcome, and Time-frame of outcome assessment. The aim of this study was to analyze whether the presence of a structured research question, in PICOT format, in RCTs used within a 2010 meta-analysis investigating the effectiveness of femoral nerve blocks after total knee arthroplasty is independently associated with improved quality of reporting.

    Methods: Twenty-three RCT reports were assessed for quality of reporting and then examined for the presence of the five constituents of a structured research question based on PICOT framing. We created a PICOT score (the predictor variable) ranging from 0 to 5, with one point for each constituent included. Our outcome variables were a 14-point overall quality of reporting score (OQRS) and a 3-point key methodological item score (KMIS) based on the proper reporting of allocation concealment, blinding and the numbers analysed according to the intention-to-treat principle. Both scores are based on the Consolidated Standards of Reporting Trials (CONSORT) statement. A multivariable regression analysis was conducted to determine whether the PICOT score was independently associated with OQRS and KMIS.

    Results: A completely structured PICOT question was found in 2 of the 23 RCTs evaluated. Although not statistically significant, a higher PICOT score was associated with a higher OQRS [incidence rate ratio (IRR): 1.267; 95% confidence interval (CI): 0.984, 1.630; p = 0.066] but not with KMIS [IRR: 1.061 (95% CI: 0.515, 2.188); p = 0.872]. These results are comparable to those of a similar study in terms of the direction and range of the IRR estimates. They need to be interpreted cautiously because of the small sample size.

    Conclusions: This study showed that PICOT framing of a research question in anesthesia-related RCTs is not often followed. Even though a statistically significant association with higher OQRS was not found, PICOT framing of a research question remains an important attribute of all RCTs.
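    The IRRs quoted above are the kind produced by a Poisson regression of a count-like score, where IRR = exp(beta) for each coefficient. Below is a minimal sketch, simplified to a single predictor and not the authors' code; rct_reports.csv and the column names are hypothetical.

```python
# Sketch of a Poisson regression yielding an incidence rate ratio (IRR)
# for the PICOT score; file and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("rct_reports.csv")  # one row per RCT report

# Outcome: 14-point OQRS count; predictor: 0-5 PICOT score.
model = smf.poisson("oqrs ~ picot_score", data=trials).fit()

irr = np.exp(model.params["picot_score"])
ci_low, ci_high = np.exp(model.conf_int().loc["picot_score"])
print(f"IRR per 1-point PICOT increase: "
      f"{irr:.3f} [95% CI {ci_low:.3f}, {ci_high:.3f}]")
```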

    The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement

    Abstract

    Background: Randomized controlled trials (RCTs) are routinely used in systematic reviews and meta-analyses that help inform healthcare and policy decision making. Proper reporting of RCTs is important because it acts as a proxy by which health care providers and researchers appraise the quality of the methodology, conduct and analysis of a trial. The aims of this study were to analyse the overall quality of reporting in 23 RCTs used in a meta-analysis by assessing three key methodological items, and to determine factors associated with high quality of reporting. We hypothesized that studies with larger sample sizes, with funding reported, published in journals with a higher impact factor, and published in journals that have adopted or endorsed the CONSORT statement would show better overall quality of reporting and better reporting of key methodological items.

    Methods: We systematically reviewed the RCTs used within an anesthesiology-related postoperative pain management meta-analysis. We included all 23 RCTs used, all of which were of parallel design and addressed the use of femoral nerve block to improve outcomes after total knee arthroplasty. Data abstraction was done independently by two reviewers. The two main outcomes were: 1) a 15-point overall quality of reporting score (OQRS) based on the Consolidated Standards of Reporting Trials (CONSORT) statement, and 2) a 3-point key methodological item score (KMIS) based on allocation concealment, blinding and intention-to-treat analysis.

    Results: Twenty-three RCTs were included. The median OQRS was 9.0 (interquartile range = 3). A multivariable regression analysis did not show any significant association between OQRS or KMIS and the four predictor variables hypothesized to improve reporting. The direction and magnitude of our results, when compared with similar studies, suggest that sample size and impact factor are associated with improved reporting of key methodological items.

    Conclusions: The quality of reporting of RCTs used within an anesthesia-related meta-analysis is poor to moderate. The information gained from this study should be used by journals to underscore the urgency for RCTs to be clear and transparent in reporting, to help make the literature accessible and comparable.
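    The two outcome scores can be pictured as simple checklist tallies. The sketch below uses hypothetical item names (the abstract does not list the 15 CONSORT items); only the three KMIS items, allocation concealment, blinding and intention-to-treat analysis, are taken from the abstract.

```python
# Tally a 15-point OQRS and a 3-point KMIS from a CONSORT-style
# checklist; item names are hypothetical stand-ins.
consort_items = {  # True = item adequately reported (1 point each)
    "title_identifies_rct": True,
    "structured_abstract": True,
    "background_rationale": True,
    "eligibility_criteria": True,
    "interventions_detailed": False,
    "outcomes_defined": True,
    "sample_size_justified": False,
    "randomization_sequence": True,
    "allocation_concealment": True,
    "blinding_described": False,
    "statistical_methods": True,
    "participant_flow": True,
    "baseline_data": True,
    "intention_to_treat_analysis": False,
    "harms_reported": False,
}
oqrs = sum(consort_items.values())  # overall score out of 15

kmis_items = ("allocation_concealment", "blinding_described",
              "intention_to_treat_analysis")
kmis = sum(consort_items[item] for item in kmis_items)  # out of 3

print(f"OQRS = {oqrs}/15, KMIS = {kmis}/3")
```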