
    Retrospective Pretest: A Practical Technique For Professional Development Evaluation

    The purpose of this study was to field test an instrument incorporating a retrospective pretest to determine whether it could reliably be used as an evaluation tool for a professional development conference. Based on a prominent evaluation taxonomy, the instrument provides a practical, low-cost approach to evaluating the quality of professional development interventions across a wide variety of disciplines. The instrument includes not only the questions typically associated with measuring participants’ reactions but also a set of questions to gauge whether and how much learning occurred. Results indicate that the data produced from the instrument were reliable.

    HRDQ Submissions of Quantitative Research Reports: Three Common Comments in Decision Letters and a Checklist

    The purpose of this work is to assist authors in preparing manuscripts for potential publication in HRDQ, as well as to provide a resource to authors who may receive a decision letter noting such an issue.

    MOWDOC: A Dataset of Documents From Taking the Measure of Work for Building a Latent Semantic Analysis Space

    Introduction: For organizational researchers employing surveys, understanding the semantic links between and among survey items and responses is key. Researchers like Schwarz (1999) have long understood, for example, that item order can affect survey responses. To account for “item wording similarity,” researchers may allow item error variances to correlate (cf. Rich et al., 2010, p. 625). Other researchers, such as Newman et al. (2010), have pointed to semantic similarity between items as support for the premise that work engagement is like old wine in a new bottle. Recently, organizational researchers (e.g., Arnulf et al., 2014, 2018) have been able to use latent semantic analysis (LSA) and semantic survey response theory (SSRT) to quantify the semantic similarity between and among scales, items, and survey responses.

    Latent semantic analysis is a computational model that assesses similarity in language, where the similarity of any “given word (or series of words) is given by the context where this word is usually found” (Arnulf et al., 2020, p. 4). LSA involves establishing a semantic space from a corpus of existing documents (e.g., journal articles, newspaper stories, item sets). The corpus is represented as a word-by-document matrix and then transformed into an LSA space through singular value decomposition. The reduced LSA space can be used to assess the semantic similarity of documents within the space as well as of new documents projected onto it. Patterns of semantic similarity resulting from LSA have accounted for a substantial amount of the variability in how individuals respond to survey items that purport to measure (a) transformational leadership, motivation, and self-reported work outcomes (60–86%; Arnulf et al., 2014), (b) employee engagement and job satisfaction (25–69%; Nimon et al., 2016), and (c) perceptions of a trainee program, intrinsic motivation, and work outcomes (31–55%; Arnulf et al., 2019). It also appears that personality, demographics, professional training, and interest in the subject matter affect the degree to which an individual's responses follow a semantically predictable pattern (Arnulf et al., 2018; Arnulf and Larsen, 2020; Arnulf et al., 2020).

    While being able to objectively assess the degree to which survey responses are affected by semantics is a great step forward in survey research, such research is often conducted with LSA spaces that are not open and therefore not customizable except by those who have access to the body of text on which the LSA space is built. In this era of open science, researchers need access not only to the LSA space on which semantic survey research may be based but also to the underlying corpus of text, so they can determine whether choices made in generating the LSA space affect the results found. Researchers may not be able to create their own LSA spaces for a number of reasons, including that it is sometimes difficult to collect a representative corpus of text (Quesada, 2011). However, building an LSA space allows researchers to customize it, including the application of weighting schemes and the choice of dimensionality. As shown by Arnulf et al. (2018), the dimensionality of the LSA space is a factor when using an LSA space to predict empirical correlations from scale item cosines.

    To help address this barrier to creating an LSA space for use in the analysis of scale items in organizational research, this report provides a dataset of documents drawn from the measures reviewed in Taking the Measure of Work, in which Fields provided the items for 324 scales and subscales covering job satisfaction, organizational commitment, job characteristics, job stress, job roles, organizational justice, work-family conflict, person-organization fit, work behaviors, and work values. The MOWDOC dataset presented in this manuscript provides the documents necessary to create a semantic space from the item sets in Fields's Taking the Measure of Work.
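    The pipeline described above can be sketched in a few lines of Python. This is an illustrative example only, assuming scikit-learn is available; the three-item corpus is an invented stand-in (not MOWDOC documents), and the tf-idf weighting and two-dimensional space are arbitrary instances of the customization choices (weighting scheme, dimensionality) the report discusses.

        # Illustrative LSA pipeline: word-by-document matrix -> SVD -> cosine similarity.
        # Assumes scikit-learn; the corpus items are invented, not MOWDOC documents.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.metrics.pairwise import cosine_similarity

        corpus = [
            "I am satisfied with my job",          # hypothetical scale items
            "My work gives me a sense of pride",
            "I feel committed to my organization",
        ]

        # Build the word-by-document matrix (tf-idf is one common weighting choice).
        vectorizer = TfidfVectorizer()
        X = vectorizer.fit_transform(corpus)

        # Singular value decomposition reduces the matrix to a k-dimensional LSA
        # space; the dimensionality k is one of the choices a space builder makes.
        lsa = TruncatedSVD(n_components=2, random_state=0)
        doc_vectors = lsa.fit_transform(X)

        # Semantic similarity between documents is the cosine of their LSA vectors.
        print(cosine_similarity(doc_vectors))

        # New items can be projected onto the existing space and compared.
        new_item = lsa.transform(vectorizer.transform(["I enjoy the work I do"]))
        print(cosine_similarity(new_item, doc_vectors))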

    Interpreting Multiple Linear Regression: A Guidebook of Variable Importance

    Multiple regression (MR) analyses are commonly employed in social science fields, yet interpretation of results typically reflects an overreliance on beta weights (cf. Courville & Thompson, 2001; Nimon, Roberts, & Gavrilova, 2010; Zientek, Capraro, & Capraro, 2008), often resulting in very limited interpretations of variable importance. Few researchers appear to employ other methods to obtain a fuller understanding of what independent variables contribute to a regression equation and how. This paper therefore presents a guidebook of variable importance measures that inform MR results, linking the measures to a theoretical framework that demonstrates the complementary roles they play when interpreting regression findings. We also provide a data-driven example of how to publish MR results that presents a more complete picture of the contributions variables make to a regression equation. We end with several recommendations for practice regarding how to integrate multiple variable importance measures into MR analyses.
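    As a rough illustration of why beta weights alone can mislead, the following Python sketch contrasts beta weights with structure coefficients, one of the complementary importance measures discussed in this literature (cf. Courville & Thompson, 2001). The simulated data and variable names are invented for illustration, not taken from the guidebook.

        # Beta weights vs. structure coefficients with correlated predictors.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        x1 = rng.normal(size=n)
        x2 = 0.7 * x1 + rng.normal(scale=0.7, size=n)   # x2 correlated with x1
        y = 0.5 * x1 + 0.3 * x2 + rng.normal(size=n)

        # Standardize predictors and outcome so the regression weights are betas.
        Z = np.column_stack([(v - v.mean()) / v.std() for v in (x1, x2)])
        zy = (y - y.mean()) / y.std()

        beta, *_ = np.linalg.lstsq(Z, zy, rcond=None)
        yhat = Z @ beta

        # Structure coefficients: correlation of each predictor with the
        # predicted scores, which is not partialed for the other predictors
        # the way a beta weight is.
        structure = [np.corrcoef(Z[:, j], yhat)[0, 1] for j in range(Z.shape[1])]

        print("beta weights:          ", np.round(beta, 3))
        print("structure coefficients:", np.round(structure, 3))

    With correlated predictors like these, a predictor can carry a small beta weight while its structure coefficient shows it is strongly related to the predicted scores, which is why reporting both supports a fuller interpretation.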

    The Assumption of a Reliable Instrument and Other Pitfalls to Avoid When Considering the Reliability of Data

    The purpose of this article is to help researchers avoid common pitfalls associated with reliability, including incorrectly assuming that (a) measurement error always attenuates observed score correlations, (b) different sources of measurement error originate from the same source, and (c) reliability is a function of instrumentation. To accomplish our purpose, we first describe what reliability is and why researchers should care about it, focusing on its impact on effect sizes. Second, we review how reliability is assessed and comment on the consequences of cumulative measurement error. Third, we consider how researchers can use reliability generalization as a prescriptive method when designing their research studies to form hypotheses about whether reliability estimates will be acceptable given their sample and testing conditions. Finally, we discuss options that researchers may consider when faced with analyzing unreliable data.
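    The first pitfall rests on the classical test theory relationship r_xy = r_true * sqrt(r_xx * r_yy), in which unreliability dampens the observed correlation. A minimal Python sketch of the standard correction for attenuation follows; the numeric values are illustrative, not drawn from the article.

        # Correction for attenuation under classical test theory.
        # Values are illustrative, not from the article.
        import math

        r_xy = 0.42   # observed correlation between scores on X and Y
        r_xx = 0.80   # reliability estimate for X scores (e.g., coefficient alpha)
        r_yy = 0.70   # reliability estimate for Y scores

        # Classical model: r_xy = r_true * sqrt(r_xx * r_yy); solve for r_true.
        r_true = r_xy / math.sqrt(r_xx * r_yy)
        print(f"estimated true-score correlation: {r_true:.3f}")  # ~0.561

    Note that the article's first pitfall is precisely that this attenuation cannot be assumed to hold in every situation, so the correction should be treated as a model-based estimate rather than a guarantee.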

    (Semi)automated approaches to data extraction for systematic reviews and meta-analyses in social sciences: A living review protocol

    Background: An abundance of rapidly accumulating scientific evidence presents novel opportunities for researchers and practitioners alike, yet such advantages are often overshadowed by the resource demands associated with finding and aggregating a continually expanding body of scientific information. Across social science disciplines, the use of automation technologies for timely and accurate knowledge synthesis can enhance research translation value, better inform key policy development, and expand the current understanding of human interactions, organizations, and systems. Ongoing developments surrounding automation are highly concentrated in research for evidence-based medicine, with limited evidence surrounding tools and techniques applied outside of the clinical research community. Our objective is to conduct a living systematic review of automated data extraction techniques supporting systematic reviews and meta-analyses in the social sciences. The aim of this study is to extend the automation knowledge base by synthesizing current trends in the application of extraction technologies to key data elements of interest for social scientists. Methods: The proposed study is a living systematic review employing a partial replication framework based on extant literature surrounding automation of data extraction for systematic reviews and meta-analyses. Protocol development, base review, and updates follow PRISMA standards for reporting systematic reviews. This protocol was preregistered with OSF on August 14, 2022, as “(Semi)Automated Approaches to Data Extraction for Systematic Reviews and Meta-Analyses in Social Sciences: A Living Review Protocol.” Conclusions: Anticipated outcomes of this study include (a) generating insights that support the transfer of existing reliable methods to social science research; (b) providing a foundation for protocol development that enhances comparability and benchmarking standards across disciplines; and (c) uncovering exigencies that spur continued value-adding innovation and interdisciplinary collaboration for the benefit of the collective systematic review community.

    The online student connectedness survey: Initial evidence of construct validity

    The Online Student Connectedness Survey (OSCS) was introduced to the academic community in 2012 as an instrument designed to measure feelings of connectedness between students participating in online degree and certification programs. The purpose of this study was to examine data from the instrument for initial evidence of validity and reliability and to establish a nomological network between the OSCS and similar instruments in the field, the Classroom Connectedness Survey (CCS) and the Community of Inquiry Survey (COI). Results provided evidence of factor validity and reliability. Additionally, statistically and practically significant correlations were demonstrated between factors contained in the OSCS and established instruments measuring factors related to student connectedness. These results indicate that, for the sample used in this study, the OSCS provides data that are valid and reliable for assessing feelings of connection between participants in online courses at institutions of higher learning.