22 research outputs found

    PSYX 348.01: Psychology of Family Violence

    PSYX 535.01: Child Interventions

    PSYX 537.01: Child Assessment

    Childhood Trauma-Related Nightmares: The Relationship Between Exposure, Relaxation, and Rescripting Therapy and Cognitive Functions

    Trauma experiences are, unfortunately, a common part of childhood in the United States and are connected to serious health-related concerns throughout childhood and adulthood. A primary symptom of trauma exposure and posttraumatic stress is re-experiencing, which often occurs in the form of nightmares. Though cognitive behavioral treatment (CBT) is currently the most well-supported treatment model for trauma exposure, it does not specifically address nightmares. Left untreated, trauma-related nightmares may become chronic, impairing the quality and quantity of sleep and exacerbating and perpetuating trauma symptoms. Quality sleep is a necessary element of healthy child development. Trauma experiences and inadequate sleep have been shown to negatively affect children’s cognitive functions, including memory, attention, and learning, as well as to increase behavioral problems and decrease academic performance. While PTSD treatment does not typically alleviate nightmares, both Imagery Rehearsal Therapy (IRT) and Exposure, Relaxation, and Rescripting Therapy (ERRT) have been shown to reduce nightmares, improve sleep quality, and relieve PTSD symptoms in adult samples. The present study found limited support that an ERRT adaptation for children aged 8 to 13 years (ERRT-C) was associated with improvement in some areas of cognitive functioning (e.g., attention, short-term memory, processing speed, and reading achievement and comprehension).

    The Society for Implementation Research Collaboration Instrument Review Project: A methodology to promote rigorous evaluation

    Background: Identification of psychometrically strong instruments for the field of implementation science is a high priority, underscored in a recent National Institutes of Health working meeting (October 2013). Existing instrument reviews are limited in scope, methods, and findings. The Society for Implementation Research Collaboration Instrument Review Project’s objectives address these limitations by identifying and applying a unique methodology to conduct a systematic and comprehensive review of quantitative instruments assessing constructs delineated in two of the field’s most widely used frameworks, adopt a systematic search process (using standard search strings), and engage an international team of experts to assess the full range of psychometric criteria (reliability, construct and criterion validity). Although this work focuses on implementation of psychosocial interventions in mental health and health-care settings, the methodology and results will likely be useful across a broad spectrum of settings. This effort has culminated in a centralized online open-access repository of instruments depicting graphical head-to-head comparisons of their psychometric properties. This article describes the methodology and preliminary outcomes. Methods: The seven stages of the review, synthesis, and evaluation methodology include (1) setting the scope for the review, (2) identifying frameworks to organize and complete the review, (3) generating a search protocol for the literature review of constructs, (4) conducting the literature review of specific instruments, (5) developing evidence-based assessment rating criteria, (6) data extraction and rating of instrument quality by a task force of implementation experts to inform knowledge synthesis, and (7) creating a website repository. Results: To date, this multi-faceted and collaborative search and synthesis methodology has identified over 420 instruments related to 34 constructs (48 total, including subconstructs) that are relevant to implementation science. Although numerous constructs have more than 20 available instruments, which implies saturation, preliminary results suggest that few instruments stem from gold-standard development procedures. We anticipate identifying few high-quality, psychometrically sound instruments once our evidence-based assessment rating criteria have been applied. Conclusions: The results of this methodology may enhance the rigor of implementation science evaluations by systematically facilitating access to psychometrically validated instruments and identifying where further instrument development is needed.
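
    The head-to-head comparison and quality-rating steps described in this abstract can be pictured with a small sketch. The snippet below aggregates per-criterion psychometric ratings and ranks instruments by their mean rating; the instrument names, the five criteria, and the 0-4 rating scale are illustrative assumptions, not the project's published rating system.

```python
# A minimal sketch, assuming hypothetical instruments, criteria, and a 0-4
# rating scale; it is not the SIRC project's actual rating system.
from statistics import mean

criteria = ["internal_consistency", "convergent_validity",
            "discriminant_validity", "structural_validity", "responsiveness"]

ratings = {
    "Instrument A": {"internal_consistency": 3, "convergent_validity": 2,
                     "discriminant_validity": 1, "structural_validity": 2,
                     "responsiveness": 0},
    "Instrument B": {"internal_consistency": 4, "convergent_validity": 3,
                     "discriminant_validity": 2, "structural_validity": 3,
                     "responsiveness": 1},
}

# Rank instruments by mean rating while keeping the per-criterion profile,
# which is what a head-to-head comparison would display.
for name, scores in sorted(ratings.items(),
                           key=lambda kv: mean(kv[1].values()),
                           reverse=True):
    profile = ", ".join(f"{c}={scores[c]}" for c in criteria)
    print(f"{name}: mean={mean(scores.values()):.2f} ({profile})")
```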

    Toward criteria for pragmatic measurement in implementation research and practice: a stakeholder-driven approach using concept mapping

    Background: Advancing implementation research and practice requires valid and reliable measures of implementation determinants, mechanisms, processes, strategies, and outcomes. However, researchers and implementation stakeholders are unlikely to use measures if they are not also pragmatic. The purpose of this study was to establish a stakeholder-driven conceptualization of the domains that comprise the pragmatic measure construct. It built upon a systematic review of the literature and semi-structured stakeholder interviews that generated 47 criteria for pragmatic measures, and aimed to further refine that set of criteria by identifying conceptually distinct categories of the pragmatic measure construct and providing quantitative ratings of the criteria’s clarity and importance. Methods: Twenty-four stakeholders with expertise in implementation practice completed a concept mapping activity wherein they organized the initial list of 47 criteria into conceptually distinct categories and rated their clarity and importance. Multidimensional scaling, hierarchical cluster analysis, and descriptive statistics were used to analyze the data. Findings: The 47 criteria were meaningfully grouped into four distinct categories: (1) acceptable, (2) compatible, (3) easy, and (4) useful. Average ratings of clarity and importance at the category and individual-criterion levels will be presented. Conclusions: This study advances the field of implementation science and practice by providing clear and conceptually distinct domains of the pragmatic measure construct. Next steps will include a Delphi process to develop consensus on the most important criteria and the development of quantifiable pragmatic rating criteria that can be used to assess measures.
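
    As a rough illustration of the analysis described above, the sketch below runs the standard concept-mapping computations (a statement-by-statement co-occurrence matrix from the sorting task, nonmetric multidimensional scaling, then hierarchical clustering) on simulated data; the random pile assignments and the fixed number of piles are assumptions, not the study's data or software.

```python
# A minimal sketch of a concept-mapping analysis on simulated sorting data;
# the data here are random and purely illustrative.
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

n_statements = 47   # criteria carried into the sorting task
n_sorters = 24      # stakeholders who completed the activity

rng = np.random.default_rng(0)
# Each sorter places every statement into one of several self-defined piles
# (here, an assumed fixed set of 6 piles for simplicity).
sorts = rng.integers(0, 6, size=(n_sorters, n_statements))

# Co-occurrence matrix: how often each pair of statements shared a pile.
co = np.zeros((n_statements, n_statements))
for piles in sorts:
    co += (piles[:, None] == piles[None, :]).astype(float)
dist = 1.0 - co / n_sorters  # convert similarity to dissimilarity

# Nonmetric multidimensional scaling places the statements on a 2-D point map.
coords = MDS(n_components=2, dissimilarity="precomputed",
             metric=False, random_state=0).fit_transform(dist)

# Ward hierarchical clustering of the map coordinates; cutting the tree at
# four clusters mirrors the four categories reported in the study.
clusters = fcluster(linkage(coords, method="ward"), t=4, criterion="maxclust")
print(clusters)
```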

    Operationalizing the ‘pragmatic’ measures construct using a stakeholder feedback and a multi-method approach

    Context: Implementation science measures are rarely used by stakeholders to inform and enhance clinical program change. Little is known about what makes implementation measures pragmatic (i.e., practical) for use in community settings; thus, the present study’s objective was to generate a clinical stakeholder-driven operationalization of the pragmatic measures construct. Evidence acquisition: The pragmatic measures construct was defined using (1) a systematic literature review of the PsycINFO and PubMed databases to identify dimensions of the construct, and (2) interviews with an international stakeholder panel (N = 7) who were asked about their perspectives on pragmatic measures. Evidence synthesis: Combined results from the systematic literature review and stakeholder interviews yielded a final list of 47 short statements (e.g., feasible, low cost, brief) describing pragmatic measures, which will allow for the development of a rigorous, stakeholder-driven conceptualization of the pragmatic measures construct. Conclusions: Results revealed significant overlap between terms related to the pragmatic construct in the existing literature and in the stakeholder interviews; however, a number of terms were unique to each methodology, underscoring the importance of understanding stakeholder perspectives on the criteria that define the pragmatic construct. These results will inform future phases of the project, in which stakeholders will determine the relative importance and clarity of each dimension of the pragmatic construct, as well as their priorities among the pragmatic dimensions. Taken together, these results will be incorporated into a pragmatic rating system for existing implementation science measures to support implementation science and practice.
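
    The synthesis step described above (pooling criteria from the literature review and the stakeholder interviews, then noting which terms overlap and which are unique to each source) amounts to a simple set comparison. The sketch below uses a handful of invented terms; they are illustrative assumptions, not the study's actual 47 statements.

```python
# A minimal sketch of merging candidate criteria from two evidence sources;
# the terms are invented for illustration only.
literature_terms = {"brief", "low cost", "feasible", "acceptable", "sensitive to change"}
interview_terms = {"brief", "feasible", "easy to interpret", "relevant to practice"}

shared = literature_terms & interview_terms          # overlap between the two methods
literature_only = literature_terms - interview_terms  # unique to the literature review
interview_only = interview_terms - literature_terms   # unique to stakeholder interviews
combined = sorted(literature_terms | interview_terms)  # pooled candidate criteria

print("shared:", sorted(shared))
print("literature only:", sorted(literature_only))
print("interviews only:", sorted(interview_only))
print(f"{len(combined)} combined criteria")
```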