
    Ohio First Steps for Healthy Babies: A Program Supporting Breastfeeding Practices in Ohio Birthing Hospitals

    Background: Ohio First Steps for Healthy Babies (First Steps) is a free, voluntary statewide designation program coadministered by the Ohio Department of Health and the Ohio Hospital Association that promotes breastfeeding-supportive maternity practices aligned with the Baby-Friendly Hospital Initiative (BFHI). Materials and Methods: We examined Ohio birthing hospitals’ participation in First Steps, and changes in breastfeeding rates at hospital discharge, over the first 12 quarters of the program (July 15, 2015, to July 14, 2018) for all 110 licensed Ohio birthing hospitals. The 81 (73.6%) that achieved at least 1 step over the study period (designated as First Steps hospitals) were compared to the 29 non-First Steps hospitals, and the 17 that began participation at First Steps startup (July 15, 2015) were identified for additional analysis. Changes in breastfeeding rates were examined using a mixed effects multivariate regression model. Results: Breastfeeding increased significantly over the program period from 73.8% to 76.7% (mean 0.19% per quarter, p = .0002), but without a significant difference in breastfeeding rates between First Steps and non-First Steps hospitals. However, in a pre- and post-program analysis for the 17 hospitals that began participation at First Steps startup (excluding an additional 6 hospitals with BFHI designation), number of quarters in the program, number of steps completed, and number of births in 2015 were significantly associated with breastfeeding rates. Hospitals that completed at least 2 steps every 5 quarters in the First Steps program increased breastfeeding when compared to those not participating in the program. Conclusion: These encouraging results provide a formal evaluation of a best practices BFHI-modelled statewide program.
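    As an illustration of the kind of analysis described above, the minimal sketch below fits a mixed-effects model with a random intercept per hospital using statsmodels. The input file and column names (breastfeeding_rate, quarter, first_steps, hospital_id) are hypothetical, and the model is a generic sketch, not the authors' exact specification or dataset.

```python
# Minimal sketch: mixed-effects regression of quarterly breastfeeding rates,
# with a random intercept per hospital. All names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hospital_quarters.csv")  # assumed: one row per hospital-quarter

# Fixed effects for time, program participation, and their interaction;
# groups= gives each hospital its own random intercept.
model = smf.mixedlm(
    "breastfeeding_rate ~ quarter + first_steps + quarter:first_steps",
    data=df,
    groups=df["hospital_id"],
)
result = model.fit()
print(result.summary())
```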

    Learning best-practices in journalology: Course description and attendee insights into the inaugural EQUATOR Canada Publication School

    Background and purpose: Dissemination of research results is a key component of the research continuum and is commonly achieved through publication in peer-reviewed academic journals. However, issues of poor-quality reporting in the research literature are well documented. A lack of formal training in journalology (i.e., publication science) may contribute to this problem. To help address this gap in training, the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Canada Publication School was developed and facilitated by internationally renowned faculty to train researchers and clinicians in reporting and publication best practices. This article describes the structure of the inaugural course and provides an overview of attendee evaluations and perspectives. Key highlights: Attendees perceived the content of this two-day intensive course as highly informative. They noted that the course helped them learn skills that were relevant to academic publishing (e.g., using reporting guidelines in all phases of the research process; using scholarly metrics beyond the journal impact factor; open-access publication models; and engaging patients in the research process). The course provided an opportunity for researchers to share their challenges faced during the publication process and to learn skills for improving reproducibility, completeness, transparency, and dissemination of research results. There was some suggestion that this type of course should be offered and integrated into formal training and course curricula. Implications: In light of the importance of academic publishing in the scientific process, there is a need to train and prepare researchers with skills in journalology. The EQUATOR Canada Publication School provides an example of a successful program that addressed the needs of researchers across career trajectories and provided them with resources to be successful in the publication process. This approach can be used, modified, and/or adapted by curriculum developers interested in designing similar programs, and could be incorporated into academic and clinical research training programs.

    Heterogeneity and gaps in reporting primary outcomes from neonatal trials

    OBJECTIVES: Clear outcome reporting in clinical trials facilitates accurate interpretation and application of findings and improves evidence-informed decision-making. Standardized core outcomes for reporting neonatal trials have been developed, but little is known about how primary outcomes are reported in neonatal trials. Our aim was to identify strengths and weaknesses of primary outcome reporting in recent neonatal trials. METHODS: Neonatal trials including ≥100 participants/arm published between 2015 and 2020 with at least 1 primary outcome from a neonatal core outcome set were eligible. Raters recruited from Cochrane Neonatal were trained to evaluate the trials’ primary outcome reporting completeness using relevant items from Consolidated Standards of Reporting Trials 2010 and Consolidated Standards of Reporting Trials-Outcomes 2022 pertaining to the reporting of the definition, selection, measurement, analysis, and interpretation of primary trial outcomes. All trial reports were assessed by 3 raters. Assessments and discrepancies between raters were analyzed. RESULTS: Outcome-reporting evaluations were completed for 36 included neonatal trials by 39 raters. Levels of outcome reporting completeness were highly variable. All trials fully reported the primary outcome measurement domain, statistical methods used to compare treatment groups, and participant flow. Yet, only 28% of trials fully reported on minimal important difference, 24% on outcome data missingness, 66% on blinding of the outcome assessor, and 42% on handling of outcome multiplicity. CONCLUSIONS: Primary outcome reporting in neonatal trials often lacks key information needed for interpretability of results, knowledge synthesis, and evidence-informed decision-making in neonatology. Use of existing outcome-reporting guidelines by trialists, journals, and peer reviewers will enhance transparent reporting of neonatal trials.
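    The sketch below shows one way assessments from three raters per trial could be aggregated into per-item completeness and discrepancy rates, as described in the methods. The file name, column layout, rating labels, and majority-vote rule are assumptions for illustration, not the review's published analysis.

```python
# Hypothetical aggregation of three raters' completeness assessments.
import pandas as pd

ratings = pd.read_csv("outcome_reporting_ratings.csv")  # assumed columns: trial_id, item, rater, rating
# rating is assumed to be one of: "full", "partial", "not_reported"

def consensus(group: pd.Series) -> str:
    """Majority vote across the three raters; three-way splits flagged as a discrepancy."""
    counts = group.value_counts()
    return counts.index[0] if counts.iloc[0] >= 2 else "discrepancy"

per_trial_item = ratings.groupby(["trial_id", "item"])["rating"].apply(consensus)

# Percent of trials with full reporting for each item, and the discrepancy rate.
summary = per_trial_item.groupby("item").agg(
    pct_full=lambda s: 100 * (s == "full").mean(),
    pct_discrepant=lambda s: 100 * (s == "discrepancy").mean(),
)
print(summary.round(1))
```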

    Improving outcome reporting in clinical trial reports and protocols: study protocol for the Instrument for reporting Planned Endpoints in Clinical Trials (InsPECT)

    Background: Inadequate and poor quality outcome reporting in clinical trials is a well-documented problem that impedes the ability of researchers to evaluate, replicate, synthesize, and build upon study findings and impacts evidence-based decision-making by patients, clinicians, and policy-makers. To facilitate harmonized and transparent reporting of outcomes in trial protocols and published reports, the Instrument for reporting Planned Endpoints in Clinical Trials (InsPECT) is being developed. The final product will provide unique InsPECT extensions to the SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) and CONSORT (Consolidated Standards of Reporting Trials) reporting guidelines. Methods: The InsPECT SPIRIT and CONSORT extensions will be developed in accordance with the methodological framework created by the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network for reporting guideline development. Development will consist of (1) the creation of an initial list of candidate outcome reporting items synthesized from expert consultations and a scoping review of existing guidance for reporting outcomes in trial protocols and reports; (2) a three-round international Delphi study to identify additional candidate items and assess candidate item importance on a 9-point Likert scale, completed by stakeholders such as trial report and protocol authors, systematic review authors, biostatisticians and epidemiologists, reporting guideline developers, clinicians, journal editors, and research ethics board representatives; and (3) an in-person expert consensus meeting to finalize the set of essential outcome reporting items for trial protocols and reports, respectively. The consensus meeting discussions will be independently facilitated and informed by the empirical evidence identified in the primary literature and through the opinions (aggregate rankings and comments) collected via the Delphi study. An integrated knowledge translation approach will be used throughout InsPECT development to facilitate implementation and dissemination, in addition to standard post-development activities. Discussion: InsPECT will provide evidence-informed and consensus-based standards focused on outcome reporting in clinical trials that can be applied across diverse disease areas, study populations, and outcomes. InsPECT will support the standardization of trial outcome reporting, which will maximize trial usability, reduce bias, foster trial replication, improve trial design and execution, and ultimately reduce research waste and help improve patient outcomes.
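    To make the Delphi rating step concrete, the sketch below summarizes one hypothetical round of 9-point importance ratings per candidate item. The consensus threshold used here (at least 70% of stakeholders rating an item 7 to 9) is a common Delphi convention chosen purely for illustration; it is not the criterion defined in the InsPECT protocol, and the file and column names are assumptions.

```python
# Hypothetical summary of one Delphi round of 9-point importance ratings.
import pandas as pd

votes = pd.read_csv("delphi_round1.csv")  # assumed columns: item, stakeholder_id, score (1-9)

def summarize(scores: pd.Series) -> pd.Series:
    critical = (scores >= 7).mean()  # share of stakeholders rating the item 7-9
    return pd.Series({
        "n_raters": scores.size,
        "median_score": scores.median(),
        "pct_rating_7_to_9": round(100 * critical, 1),
        "meets_consensus": critical >= 0.70,  # illustrative threshold, not InsPECT's rule
    })

print(votes.groupby("item")["score"].apply(summarize).unstack())
```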

    Systematic Review: The Measurement Properties of the Children's Depression Rating Scale-Revised in Adolescents With Major Depressive Disorder

    Objective: To systematically appraise existing evidence of the measurement properties of the Children's Depression Rating Scale-Revised (CDRS-R) in adolescents with major depressive disorder (MDD). The CDRS-R is the most commonly used scale in adolescent depression research, yet was originally designed for use in children 6 to 12 years old. Method: Seven databases were searched for studies that evaluated the measurement properties of the CDRS-R in adolescents (ages 12-18 years). Of 65 studies screened by full text, 6 were included. Measurement properties were appraised using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) guidelines. The COSMIN minimum requirements for recommending the use of an outcome measurement instrument are (1) evidence for sufficient content validity (any level of evidence), and (2) at least low-quality evidence for sufficient internal consistency. Results: Four studies assessed an English-language version of the CDRS-R; the other 2 assessed German and Korean versions, respectively. No study assessed content validity, cross-cultural validity/measurement invariance, or measurement error of the CDRS-R in adolescents with MDD. Low-quality evidence was found for sufficient construct validity (n = 4 studies) and responsiveness (n = 2 studies) assessed via comparator instruments. Very low-quality evidence was found for sufficient interrater reliability (n = 2 studies). The results for structural validity (n = 3 studies) and internal consistency (n = 5 studies) were inconclusive. Conclusion: It remains unclear whether the CDRS-R appropriately measures depressive symptom severity in adolescent MDD. Before use of the CDRS-R in adolescent MDD research can be recommended, evidence of sufficient psychometric properties in adolescents with MDD is needed.
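    For context on one of the properties appraised (internal consistency), the sketch below computes Cronbach's alpha from simulated item-level data shaped like the 17-item CDRS-R. It is a generic illustration of the statistic, not a re-analysis of the reviewed studies, and the simulated ratings are random noise, so the printed value will be near zero.

```python
# Generic Cronbach's alpha for internal consistency, on simulated item scores.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, shape (n_respondents, n_items)."""
    n_items = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return (n_items / (n_items - 1)) * (1 - sum_item_var / total_var)

rng = np.random.default_rng(0)
simulated = rng.integers(1, 8, size=(100, 17))  # 100 respondents x 17 items, ratings 1-7 for illustration
print(f"alpha = {cronbach_alpha(simulated):.2f}")
```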