Psychometric performance of the Mental Health Implementation Science Tools (mhIST) across six low- and middle-income countries
Background
Existing implementation measures developed in high-income countries may have limited appropriateness for use within low- and middle-income countries (LMIC). In response, researchers at Johns Hopkins University began developing the Mental Health Implementation Science Tools (mhIST) in 2013 to assess priority implementation determinants and outcomes across four key stakeholder groups—consumers, providers, organization leaders, and policy makers—with dedicated versions of scales for each group. These were field tested and refined in several contexts, and criterion validity was established in Ukraine. The Consumer and Provider mhIST have since grown in popularity in mental health research, outpacing psychometric evaluation. Our objective was to establish the cross-context psychometric properties of these versions and inform future revisions.
Methods
We compiled secondary data from seven studies across six LMIC—Colombia, Myanmar, Pakistan, Thailand, Ukraine, and Zambia—to evaluate the psychometric performance of the Consumer and Provider mhIST. We used exploratory factor analysis to identify dimensionality, factor structure, and item loadings for each scale within each stakeholder version. We also used alignment analysis (i.e., multi-group confirmatory factor analysis) to estimate measurement invariance and differential item functioning of the Consumer scales across the six countries.
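The abstract does not name the software or extraction method used for the exploratory factor analysis. As a rough, hypothetical illustration of how item loadings are derived from an item correlation matrix, the following minimal numpy sketch applies the principal-component extraction method (unrotated, synthetic data; this is not the authors' actual pipeline):

```python
import numpy as np

def pc_factor_loadings(X: np.ndarray, n_factors: int) -> np.ndarray:
    """Unrotated loadings via the principal-component extraction method:
    eigenvectors of the item correlation matrix, each scaled by the
    square root of its eigenvalue."""
    R = np.corrcoef(X, rowvar=False)        # item-by-item correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)    # eigh returns ascending order
    order = np.argsort(eigvals)[::-1][:n_factors]
    return eigvecs[:, order] * np.sqrt(eigvals[order])

# Synthetic example: 500 respondents, two latent factors,
# four items on factor 1 and two items on factor 2.
rng = np.random.default_rng(0)
f = rng.normal(size=(500, 2))
X = np.column_stack(
    [f[:, 0] + 0.3 * rng.normal(size=500) for _ in range(4)]
    + [f[:, 1] + 0.3 * rng.normal(size=500) for _ in range(2)])

L = pc_factor_loadings(X, n_factors=2)  # (6 items x 2 factors) loading matrix
```

In practice an EFA would also apply a rotation (e.g., oblimin or promax) to make the loading pattern interpretable; the "low or cross-factor loadings" flagged in the Results are read off a matrix like `L`.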
Results
All but one scale within the Provider and Consumer versions had a Cronbach’s alpha greater than 0.8. Exploratory factor analysis indicated most scales were multidimensional, with factors generally aligning with a priori subscales for the Provider version; the Consumer version has no predefined subscales. Alignment analysis of the Consumer mhIST indicated a range of measurement invariance for scales across settings (R² = 0.46 to 0.77). Several items were identified for potential revision due to participant nonresponse or low or cross-factor loadings. We found only one item, which asked consumers whether their intervention provider was available when needed, to have differential item functioning in both intercept and loading.
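For readers unfamiliar with the internal-consistency statistic reported above, Cronbach's alpha can be computed directly from a respondent-by-item score matrix. A minimal sketch on synthetic data (not the study data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic scale: 200 respondents, 5 noisy items measuring one trait.
rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
items = true_score + 0.5 * rng.normal(size=(200, 5))
alpha = cronbach_alpha(items)  # well above the 0.8 threshold cited above
```

The 0.8 cutoff used in the Results is a common rule of thumb for acceptable internal consistency of a multi-item scale.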
Conclusion
We provide evidence that the Consumer and Provider versions of the mhIST are internally valid and reliable across diverse contexts and stakeholder groups for mental health research in LMIC. We recommend that the instrument be revised based on these analyses and that future research examine instrument utility by linking measurement to other outcomes of interest.
Improving mental health and psychosocial wellbeing in humanitarian settings: reflections on research funded through R2HC
Major knowledge gaps remain concerning the most effective ways to address mental health and psychosocial needs of populations affected by humanitarian crises. The Research for Health in Humanitarian Crises (R2HC) program aims to strengthen humanitarian health practice and policy through research. As a significant portion of R2HC’s research has focused on mental health and psychosocial support interventions, the program has been interested in strengthening a community of practice in this field. Following a meeting among grantees, we set out to provide an overview of the R2HC portfolio and to draw lessons learned. In this paper, we discuss the mental health and psychosocial support-focused research projects funded by R2HC; review the implications of initial findings from this research portfolio; and highlight four remaining knowledge gaps in this field. Between 2014 and 2019, R2HC funded 18 academic-practitioner partnerships focused on mental health and psychosocial support, comprising 38% of the overall portfolio (18 of 48 projects) at a value of approximately 7.2 million GBP. All projects have focused on evaluating the impact of interventions. In line with consensus-based recommendations to consider a wide range of mental health and psychosocial needs in humanitarian settings, research projects have evaluated diverse interventions. Findings so far have both challenged and confirmed widely-held assumptions about the effectiveness of mental health and psychosocial interventions in humanitarian settings. They point to the importance of building effective, sustained, and diverse partnerships between scholars, humanitarian practitioners, and funders, to ensure long-term program improvements and appropriate evidence-informed decision making.
Further research needs to fill knowledge gaps regarding how to: scale up interventions that have been found to be effective (e.g., questions related to integration across sectors, adaptation of interventions across different contexts, and optimal care systems); address neglected mental health conditions and populations (e.g., elderly people, people with disabilities, sexual minorities, and people with severe, pre-existing mental disorders); build on available local resources and supports (e.g., how to build on traditional, religious healing and community-wide social support practices); and ensure equity, quality, fidelity, and sustainability for interventions in real-world contexts (e.g., answering questions about how interventions from controlled studies can be transferred to more representative humanitarian contexts).
Additional file 1 of Psychometric performance of the Mental Health Implementation Science Tools (mhIST) across six low- and middle-income countries
Additional file 1: Table S1. Mental Health Implementation Science Tools (mhIST), Consumer version. Table S2. Mental Health Implementation Science Tools (mhIST), Provider version. Table S3. Fit statistics for models selected in exploratory factor analysis. S4. Stata syntax for alignment analysis.
