    The Impeding Role of Initial Unrealistic Goal-Setting on Training Performance: Identifying Underpinning Self-Regulatory Processes and Solutions

    The importance of goal-setting within training for enhancing performance has been touted in the literature. Goal-setting theory suggests that setting a moderately difficult, specific goal produces a modest goal-performance discrepancy between the trainee's goal and actual performance level. Such a discrepancy has been found to enhance subsequent performance, as the trainee will exert greater effort and task engagement to resolve it. However, research has also shown that the motivating effect of a discrepancy reverses with repeated feedback indicating that one's performance has fallen short of the desired goal. To date, no prior research has examined the effect that a single large goal-performance discrepancy has on subsequent performance. This study sought to elucidate this relationship and its underlying mechanisms, as well as to provide a remedy for mitigating the hypothesized negative effect. Data collected from 206 undergraduate participants, who completed a videogame-based training program, were used to test the study hypotheses. As hypothesized, a large initial goal-performance discrepancy was found to have a negative impact on subsequent performance; however, the hypothesized role of self-regulation as the mechanism underlying this relationship was not supported. These results suggest that during the initial stages of training, performance was hindered for trainees who set unrealistically difficult goals and thus failed to reach their self-set goal. Unrealistic self-set goals may therefore negate the organization's monetary investment in training. Fortunately, this study also demonstrated a simple solution: providing goal-setting advisement helped trainees set lower, more accurate initial goals and therefore experience a smaller goal-performance discrepancy than trainees who did not receive the advisement. Additional practical implications and future directions are discussed.
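    The central construct above reduces to simple arithmetic: the goal-performance discrepancy is the trainee's self-set goal minus the score actually achieved. The minimal sketch below illustrates the construct with invented scores and an invented cutoff for what counts as a "large" discrepancy; it is not drawn from the study's materials.

```python
# Goal-performance discrepancy (GPD): self-set goal minus actual score.
# All values below (scores, the "large" cutoff) are hypothetical.
trainees = [
    {"id": 1, "initial_goal": 900, "trial1_score": 850},  # realistic goal
    {"id": 2, "initial_goal": 900, "trial1_score": 400},  # unrealistic goal
]

LARGE_GPD_CUTOFF = 300  # hypothetical threshold for a "large" discrepancy

for t in trainees:
    gpd = t["initial_goal"] - t["trial1_score"]
    label = "large" if gpd >= LARGE_GPD_CUTOFF else "modest"
    print(f"Trainee {t['id']}: GPD = {gpd} ({label})")
```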

    Crowdsourcing Job Satisfaction Data: Examining the Construct Validity of Glassdoor.com Ratings

    Researchers, practitioners, and job seekers now routinely use crowdsourced data about organizations for both decision-making and research purposes. Despite the popularity of such websites, empirical evidence regarding their validity is generally absent. In this study, we tackled this problem by combining two curated datasets: (a) the results of the 2017 Federal Employee Viewpoint Survey (FEVS), which contains facet-level job satisfaction ratings from 407,789 US federal employees and which we aggregated to the agency level, and (b) contemporaneous overall and facet-level job satisfaction ratings of the federal agencies contained within FEVS, scraped from the Glassdoor.com application programming interface (API) within a month of the FEVS survey’s administration. Using these data, we examined convergent validity, discriminant validity, and method effects for the measurement of both overall and facet-level job satisfaction by analyzing a multitrait-multimethod (MTMM) matrix. Most centrally, we provide evidence that overall Glassdoor ratings of satisfaction within US federal agencies correlate moderately with aggregated FEVS overall ratings (r = .516), supporting the validity of the overall Glassdoor rating as a measure of overall job satisfaction aggregated to the organizational level. In contrast, the validity of facet-level measurement was not well supported. Overall, given the varying strengths and weaknesses of both Glassdoor and survey data, we recommend the combined use of traditional and crowdsourced data on organizational characteristics for both research and practice.
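    As a rough illustration of the aggregation-and-convergence step described above, the sketch below averages hypothetical employee-level FEVS satisfaction ratings within each agency and correlates the result with agency-level Glassdoor ratings. All data and column names are invented; this is a sketch of the analysis logic, not the authors' pipeline.

```python
# Hypothetical convergent-validity check: aggregate employee-level FEVS
# satisfaction to the agency level, then correlate with agency-level
# Glassdoor overall ratings. Data and column names are invented.
import pandas as pd

# Employee-level FEVS responses: one row per respondent.
fevs = pd.DataFrame({
    "agency": ["A", "A", "B", "B", "C", "C"],
    "overall_satisfaction": [3.8, 4.1, 3.2, 3.0, 4.5, 4.3],
})

# Agency-level Glassdoor overall ratings (e.g., as scraped from the API).
glassdoor = pd.DataFrame({
    "agency": ["A", "B", "C"],
    "glassdoor_overall": [3.9, 3.1, 4.4],
})

# Aggregate FEVS to the agency level by averaging within agency.
fevs_agency = fevs.groupby("agency", as_index=False)["overall_satisfaction"].mean()

# Merge the two sources and compute the Pearson correlation, analogous to
# the convergent-validity coefficient reported in the study (r = .516).
merged = fevs_agency.merge(glassdoor, on="agency")
r = merged["overall_satisfaction"].corr(merged["glassdoor_overall"])
print(f"Agency-level convergent correlation: r = {r:.3f}")
```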