
    Response Times as an Indicator of Data Quality: Associations with Interviewer, Respondent, and Question Characteristics in a Health Survey of Diverse Respondents

    Survey research remains one of the most important ways that researchers learn about key features of populations. Data obtained in the survey interview are a collaborative achievement accomplished through the interplay of the interviewer, respondent, and survey instrument, yet our field is still in the process of comprehensively documenting and examining whether, when, and how characteristics of interviewers, respondents, and questions combine to influence the quality of the data obtained. Researchers tend to treat longer response times as indicators of potential problems because they reflect longer processing or interaction by the respondent, the interviewer (where applicable), or both. Previous work demonstrates that response times are associated with various characteristics of interviewers (where applicable), respondents, and questions across web, telephone, and face-to-face interviews. However, these studies vary in the characteristics they consider, limited by those available in the study at hand. In addition, features of the survey interview situation can have a differential impact on responses from respondents in different racial, ethnic, or other socially defined cultural groups, potentially increasing systematic error and compromising researchers’ ability to make group comparisons. For example, certain question characteristics or interviewer characteristics may have differential effects across respondents from different racial or ethnic groups (Johnson, Shavitt, and Holbrook 2011; Warnecke et al. 1997). The purpose of the current study is to add to this body of work by examining how response times are associated with characteristics of interviewers, respondents, and questions, focusing on racially diverse respondents answering questions about trust in medical researchers, participation in medical research, and their health.
Data are provided by the 2013–2014 “Voices Heard” survey, a computer-assisted telephone survey designed to measure respondents’ perceptions of barriers to and facilitators of participating in medical research. Interviews (n = 410) were conducted with a quota sample of respondents distributed nearly equally across the following subgroups: white, black, Latino, and American Indian.

    Chapter 18: Response Times as an Indicator of Data Quality: Associations with Question, Interviewer, and Respondent Characteristics in a Health Survey of Diverse Respondents. Appendix 18

    Appendix 18A: Description of individual question characteristics and hypotheses for their relationship with RTs
    Appendix 18B: Description of established tools for evaluating questions and hypotheses for their relationship with RTs
    Appendix 18C: Sample Description (Table 18.C1: Number of completed interviews by respondents’ race/ethnicity and sample)
    Appendix 18D: Additional Tables
    Appendix 18E: References

    Questioning Identity: How a Diverse Set of Respondents Answer Standard Questions About Ethnicity and Race

    Ethnoracial identity refers to the racial and ethnic categories that people use to classify themselves and others. How it is measured in surveys has implications for understanding inequalities. Yet how people self-identify may not conform to the categories standardized survey questions use to measure ethnicity and race, leading to potential measurement error. In interviewer-administered surveys, answers to survey questions are achieved through interviewer–respondent interaction. An analysis of interviewer–respondent interaction can illuminate whether, when, how, and why respondents experience problems with questions. In this study, we examine how indicators of interviewer–respondent interactional problems vary across ethnoracial groups when respondents answer questions about ethnicity and race. Further, we explore how interviewers respond in the presence of these interactional problems. Data are provided by the 2013–2014 Voices Heard Survey, a computer-assisted telephone survey designed to measure perceptions of participating in medical research among an ethnoracially diverse sample of respondents.

    Attitudes Toward Advance Care Planning Among Persons with Dementia and their Caregivers

    Objectives: To examine factors that influence decision-making, preferences, and plans related to advance care planning (ACP) and end-of-life care among persons with dementia and their caregivers, and to examine how these may differ by race. Design: Cross-sectional survey. Setting: 13 geographically dispersed Alzheimer's Disease Centers across the United States. Participants: 431 racially diverse caregivers of persons with dementia. Measurements: Survey on "Care Planning for Individuals with Dementia." Results: The respondents were knowledgeable about dementia and hospice care, indicated the person with dementia would want comfort care at the end stage of illness, and reported high levels of both legal ACP (e.g., living will; 87%) and informal ACP discussions (79%) for the person with dementia. However, notable racial differences were present. Relative to white persons with dementia, African American persons with dementia were reported to have a lower preference for comfort care (81% vs. 58%) and lower rates of completion of legal ACP (89% vs. 73%). Racial differences in ACP and care preferences were also reflected in geographic differences. Additionally, African American study partners had a lower level of knowledge about dementia and reported a greater influence of religious/spiritual beliefs on the desired types of medical treatments. Notably, all respondents indicated that more information about the stages of dementia and end-of-life health care options would be helpful. Conclusions: Educational programs may be useful in reducing racial differences in attitudes toward ACP. These programs could focus on the clinical course of dementia and issues related to end-of-life care, including the importance of ACP.

    Ipsilesional Mu Rhythm Desynchronization and Changes in Motor Behavior Following Post Stroke BCI Intervention for Motor Rehabilitation

    Loss of motor function is a common deficit following stroke insult and often manifests as persistent upper extremity (UE) disability, which can affect a survivor’s ability to participate in activities of daily living. Recent research suggests the use of brain–computer interface (BCI) devices might improve UE function in stroke survivors at various times since stroke. This randomized crossover-controlled trial examines whether intervention with this BCI device design attenuates the effects of hemiparesis, encourages reorganization of motor-related brain signals (EEG-measured sensorimotor rhythm desynchronization), and improves movement, as measured by the Action Research Arm Test (ARAT). A sample of 21 stroke survivors, presenting with varied times since stroke and levels of UE impairment, received a maximum of 18–30 h of intervention with a novel electroencephalogram-based BCI-driven functional electrical stimulator (EEG-BCI-FES) device. Driven by spectral power recordings from contralateral EEG electrodes during cued attempted grasping of the hand, the user’s input to the EEG-BCI-FES device modulates horizontal movement of a virtual cursor and also facilitates concurrent stimulation of the impaired UE. Outcome measures of function and capacity were assessed at baseline, mid-therapy, and at completion of therapy, while EEG was recorded only during intervention sessions. A significant increase in r-squared values [reflecting Mu rhythm (8–12 Hz) desynchronization resulting from attempted movements of the impaired hand] was observed post-therapy compared to baseline. These findings suggest that the intervention corresponds with greater desynchronization of Mu rhythm in the ipsilesional hemisphere during attempted movements of the impaired hand and that this change is related to changes in behavior as a result of the intervention.
BCI intervention may be an effective way of addressing the recovery of a stroke-impaired UE and studying neuromechanical coupling with motor outputs. Clinical Trial Registration: ClinicalTrials.gov, identifier NCT02098265.
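
The Mu-band desynchronization described above can be illustrated with a minimal band-power calculation. This is a hypothetical sketch, not the study's actual signal-processing pipeline (the study reports r-squared values from contralateral electrodes); the function names and the simple periodogram estimate are assumptions for illustration only.

```python
import numpy as np

def bandpower(signal, fs, band=(8.0, 12.0)):
    # Mean power in a frequency band from a simple periodogram estimate.
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def mu_erd_percent(baseline, task, fs):
    # Event-related (de)synchronization in the Mu band, expressed as percent
    # change from baseline power; negative values indicate desynchronization.
    p_base = bandpower(baseline, fs)
    p_task = bandpower(task, fs)
    return 100.0 * (p_task - p_base) / p_base
```

With a weaker 10 Hz component during attempted movement than at rest, `mu_erd_percent` returns a negative value, i.e., desynchronization of the Mu rhythm.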

    Behavioral Outcomes Following Brain–Computer Interface Intervention for Upper Extremity Rehabilitation in Stroke: A Randomized Controlled Trial

    Stroke is a leading cause of persistent upper extremity (UE) motor disability in adults. Brain–computer interface (BCI) intervention has demonstrated potential as a motor rehabilitation strategy for stroke survivors. This sub-analysis of an ongoing clinical trial (NCT02098265) examines the rehabilitative efficacy of this BCI design and seeks to identify stroke participant characteristics associated with behavioral improvement. Stroke participants (n = 21) with UE impairment were assessed using the Action Research Arm Test (ARAT) and measures of function. Nine participants completed three assessments during the experimental BCI intervention period and at 1-month follow-up. Twelve other participants first completed three assessments over a parallel time-matched control period and then crossed over into the BCI intervention condition 1 month later. Participants who realized positive change (≥1 point) in total ARAT performance of the stroke-affected UE between the first and third assessments of the intervention period were dichotomized as “responders” (<1 point = “non-responders”) and analyzed accordingly. Of the 14 participants with room for ARAT improvement, 64% (9/14) showed some positive change at completion, and approximately 43% (6/14) had changes meeting the minimal detectable change (MDC = 3 points) or the minimal clinically important difference (MCID = 5.7 points). Participants with room for improvement in the primary outcome measure made significant mean gains in ARATtotal score at completion (ΔARATtotal = 2, p = 0.028) and 1-month follow-up (ΔARATtotal = 3.4, p = 0.0010), controlling for severity, gender, chronicity, and concordance. Secondary outcome measures, SISmobility, SISadl, SISstrength, and 9HPTaffected, also showed significant improvement over time during intervention.
Participants in intervention through follow-up showed a significantly increased improvement rate in SISstrength compared to controls (p = 0.0117), controlling for severity, chronicity, and gender, as well as the individual effects of time and intervention type. Participants who best responded to BCI intervention, as evaluated by ARAT score improvement, showed significantly increased outcome values through completion and follow-up for SISmobility (p = 0.0002, p = 0.002) and SISstrength (p = 0.04995, p = 0.0483). These findings may suggest secondary outcome measure patterns indicative of increased improvement resulting from this BCI intervention regimen, as well as demonstrating primary efficacy of this BCI design for treatment of UE impairment in stroke survivors. Clinical Trial Registration: ClinicalTrials.gov, NCT02098265.
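
The responder rule and change thresholds reported in this abstract can be sketched as a small classification helper. Only the ≥1-point responder cutoff and the MDC/MCID values (3 and 5.7 ARAT points) come from the abstract; the function and variable names are hypothetical.

```python
MDC_POINTS = 3.0    # minimal detectable change on the ARAT (as reported)
MCID_POINTS = 5.7   # minimal clinically important difference (as reported)

def classify_arat_change(arat_first, arat_third):
    # Dichotomize by change in total ARAT score between the first and third
    # assessments: >= 1 point is a "responder", < 1 a "non-responder".
    # Also report whether the change meets the MDC and MCID thresholds.
    delta = arat_third - arat_first
    label = "responder" if delta >= 1 else "non-responder"
    return label, delta >= MDC_POINTS, delta >= MCID_POINTS
```

For example, a gain of 6 ARAT points classifies as a responder whose change exceeds both the MDC and the MCID, whereas a 2-point gain is a responder below both thresholds.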


    Measuring Trust in Medical Researchers: Adding Insights from Cognitive Interviews to Examine Agree-Disagree and Construct-Specific Survey Questions

    While scales measuring subjective constructs have historically relied on agree-disagree (AD) questions, recent research demonstrates that construct-specific (CS) questions clarify underlying response dimensions that AD questions leave implicit, and CS questions often yield higher measures of data quality. Yet given the acknowledged issues with AD questions and certain established advantages of CS items, the evidence for the superiority of CS questions is more mixed than one might expect. We build on previous investigations by using cognitive interviewing to deepen understanding of AD and CS response processing and potential sources of measurement error. We randomized 64 participants to receive an AD or CS version of a scale measuring trust in medical researchers. We examine several indicators of data quality and cognitive response processing, including reliability, concurrent validity, recency, response latencies, and indicators of response processing difficulties (e.g., uncodable answers). Overall, results indicate that reliability is higher for the AD scale, that neither scale is more valid, and that the CS scale is more susceptible to recency effects for certain questions. Results for response latencies and behavioral indicators provide evidence that the CS questions promote deeper processing. Qualitative analysis reveals five sources of difficulties with response processing that shed light on under-examined reasons why AD and CS questions can produce different results, with CS questions not always yielding higher measures of data quality than AD questions.