
    The common good balance sheet, an adequate tool to capture non-financials?

    In relation to organizational performance measurement, there is growing concern about the creation of value for people, society, and the environment. Traditional corporate reporting does not adequately satisfy stakeholders' information needs for assessing an organization's past and potential future performance. Practitioners and scholars have developed new non-financial reporting frameworks from a social and environmental perspective, giving birth to the field of Integrated Reporting (IR). The Economy for the Common Good (ECG) model and its tools for sustainability management and reporting can provide such a framework. The present study depicts the theoretical foundations in business administration research on which the ECG model relies. Moreover, this paper is the first to empirically validate its measurement scales by applying Exploratory Factor Analysis to a sample of 206 European firms. Results show that two out of five dimensions are appropriately defined, and guidelines are offered to refine the model. The study thus advances knowledge by assessing the measurement scales' statistical validity and reliability. However, as this is the first quantitative research on the ECG model, the authors' future research will seek to confirm the present results by means of Confirmatory Factor Analysis (CFA).
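    The Exploratory Factor Analysis step mentioned above can be illustrated with a minimal sketch. The data here are synthetic and hypothetical (not the authors' 206-firm sample), and only the factor-retention step is shown: eigenvalues of the item correlation matrix are computed, and the Kaiser criterion (eigenvalue > 1) suggests how many latent dimensions to retain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: two latent factors, three observed items each,
# plus measurement noise (206 "firms", matching the sample size only).
n = 206
f1, f2 = rng.normal(size=(2, n))
items = np.column_stack(
    [f1 + rng.normal(scale=0.5, size=n) for _ in range(3)]
    + [f2 + rng.normal(scale=0.5, size=n) for _ in range(3)]
)

# Eigendecomposition of the item correlation matrix; the Kaiser
# criterion (eigenvalue > 1) suggests the number of factors to retain.
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # descending order
n_factors = int(np.sum(eigenvalues > 1.0))
print(n_factors)  # the two simulated latent dimensions should be recovered
```

This shows only retention, not rotation or loading interpretation, which a full EFA would also include.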

    Assessing an organizational culture instrument based on the Competing Values Framework: Exploratory and confirmatory factor analyses

    BACKGROUND: The Competing Values Framework (CVF) has been widely used in health services research to assess organizational culture as a predictor of quality improvement implementation, employee and patient satisfaction, and team functioning, among other outcomes. CVF instruments generally are presented as well-validated with reliable aggregated subscales. However, only one study in the health sector has been conducted for the express purpose of validation, and that study population was limited to hospital managers from a single geographic locale. METHODS: We used exploratory and confirmatory factor analyses to examine the underlying structure of data from a CVF instrument. We analyzed cross-sectional data from a work environment survey conducted in the Veterans Health Administration (VHA). The study population comprised all staff in non-supervisory positions. The survey included 14 items adapted from a popular CVF instrument, which measures organizational culture according to four subscales: hierarchical, entrepreneurial, team, and rational. RESULTS: Data from 71,776 non-supervisory employees (approximate response rate 51%) from 168 VHA facilities were used in this analysis. Internal consistency of the subscales was moderate to strong (α = 0.68 to 0.85). However, the entrepreneurial, team, and rational subscales had higher correlations across subscales than within, indicating poor divergent properties. Exploratory factor analysis revealed two factors, comprising the ten items from the entrepreneurial, team, and rational subscales loading on the first factor, and two items from the hierarchical subscale loading on the second factor, along with one item from the rational subscale that cross-loaded on both factors. Results from confirmatory factor analysis suggested that the two-subscale solution provides a more parsimonious fit to the data as compared to the original four-subscale model. 
CONCLUSION: This study suggests that there may be problems applying conventional CVF subscales to non-supervisors, and underscores the importance of assessing the psychometric properties of instruments in each new context and population to which they are applied. It also further highlights the challenges management scholars face in assessing organizational culture in a reliable and comparable way. More research is needed to determine whether the emergent two-subscale solution is a valid or meaningful alternative and whether these findings generalize beyond the VHA.
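    The internal-consistency statistic reported above, Cronbach's α, has a simple closed form: α = k/(k−1) · (1 − Σ item variances / variance of the total score). A sketch on hypothetical correlated item data (not the VHA survey):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 4-item subscale: items share one latent trait plus
# noise, so internal consistency should be moderate to strong.
rng = np.random.default_rng(1)
latent = rng.normal(size=500)
scores = np.column_stack(
    [latent + rng.normal(scale=0.8, size=500) for _ in range(4)]
)
print(round(cronbach_alpha(scores), 2))
```

Note that, as the abstract's Results illustrate, a high α within subscales does not by itself establish divergent validity across subscales.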

    Organizational readiness to change assessment (ORCA): Development of an instrument based on the Promoting Action on Research in Health Services (PARIHS) framework

    BACKGROUND: The Promoting Action on Research Implementation in Health Services (PARIHS) framework is a theoretical framework widely promoted as a guide to implementing evidence-based clinical practices. However, it as yet has no pool of validated measurement instruments that operationalize the constructs defined in the framework. The present article introduces the Organizational Readiness to Change Assessment instrument (ORCA), organized according to the core elements and sub-elements of the PARIHS framework, and reports on its initial validation. METHODS: We conducted scale reliability and factor analyses on cross-sectional, secondary data from three quality improvement (QI) projects (n = 80) conducted in the Veterans Health Administration. In each project, identical 77-item ORCA instruments were administered to one or more staff from each facility involved in the QI projects. Items were organized into 19 subscales and three primary scales corresponding to the core elements of the PARIHS framework: (1) strength and extent of evidence for the clinical practice changes represented by the QI program, assessed with four subscales; (2) quality of the organizational context for the QI program, assessed with six subscales; and (3) capacity for internal facilitation of the QI program, assessed with nine subscales. RESULTS: Cronbach's alphas for scale reliability were 0.74, 0.85, and 0.95 for the evidence, context, and facilitation scales, respectively. The evidence scale and its three constituent subscales failed to meet the conventional threshold of 0.80 for reliability, and three individual items were eliminated from evidence subscales following reliability testing. In exploratory factor analysis, three factors were retained. Seven of the nine facilitation subscales loaded onto the first factor; five of the six context subscales loaded onto the second factor; and the three evidence subscales loaded onto the third factor. Two subscales failed to load significantly on any factor: one measured resources in general (from the context scale), and one the clinical champion role (from the facilitation scale). CONCLUSION: We find general support for the reliability and factor structure of the ORCA. However, reliability was poor among measures of evidence, and factor analysis results for measures of general resources and the clinical champion role did not conform to the PARIHS framework. Additional validation is needed, including criterion validation.

    A Customer Perspective on Product Eliminations: How the Removal of Products Affects Customers and Business Relationships

    Regardless of the apparent need for product eliminations, many managers hesitate to act, as they fear deleterious effects on customer satisfaction and loyalty. Other managers do carry out product eliminations but often fail to consider the consequences for customers and business relationships. Given the relevance and problems of product eliminations, research on this topic in general, and on the consequences for customers and business relationships in particular, is surprisingly scarce. Therefore, this empirical study explores how, and to what extent, the elimination of a product negatively affects customers and business relationships. Results indicate that eliminating a product may impose severe economic and psychological costs on customers, thereby seriously decreasing customer satisfaction and loyalty. This paper also shows that these costs are not exogenous in nature. Instead, depending on the characteristics of the eliminated product, these costs are more or less strongly driven by a company's behavior when implementing the elimination at the customer interface.

    On the Importance of Complaint Handling Design: A Multi-Level Analysis of the Impact in Specific Complaint Situations

    Given the large investments required for high-quality complaint handling design, managers need practical guidance in understanding its actual importance for their particular company. However, while prior research emphasizes the general relevance of complaint handling design, it fails to provide a more differentiated perspective on the issue. This study, based on an integrative multi-level framework and a dyadic dataset, addresses this gap. Results indicate that the impact of a company's complaint handling design varies significantly with the characteristics of the complaining customers the firm has to deal with. Further, this paper shows that, contingent on these characteristics, a company's complaint handling design can shape complainants' fairness perceptions either considerably or only slightly. Overall, the findings suggest that companies should adopt an adaptive approach to complaint handling to avoid misallocating attention, energy, and resources.

    Predicting implementation from organizational readiness for change: a study protocol

    BACKGROUND: There is widespread interest in measuring organizational readiness to implement evidence-based practices in clinical care. However, there are a number of challenges to validating organizational measures, including inferential bias arising from the halo effect and method bias, two threats to validity that, while well documented by organizational scholars, are often ignored in health services research. We describe a protocol to comprehensively assess the psychometric properties of a previously developed survey, the Organizational Readiness to Change Assessment (ORCA). OBJECTIVES: Our objective is to conduct a comprehensive assessment of the psychometric properties of the ORCA, incorporating methods that specifically address threats from the halo effect and method bias. METHODS AND DESIGN: We will conduct three sets of analyses using longitudinal, secondary data from four partner projects, each testing interventions to improve the implementation of an evidence-based clinical practice. Partner projects field the ORCA at baseline (n = 208 respondents; 53 facilities) and prospectively assess the degree to which the evidence-based practice is implemented. We will assess predictive and concurrent validity using hierarchical linear modeling and multivariate regression, respectively. For predictive validity, the outcome is the change from baseline to follow-up in the use of the evidence-based practice. We will use intra-class correlations derived from hierarchical linear models to assess inter-rater reliability. Two partner projects will also field measures of job satisfaction for convergent and discriminant validity analyses, and will field ORCA measures at follow-up for concurrent validity (n = 158 respondents; 33 facilities). Convergent and discriminant validity will be tested through associations between organizational readiness and different aspects of job satisfaction: satisfaction with leadership, which should be highly correlated with readiness, versus satisfaction with salary, which should be less correlated with readiness. Content validity will be assessed using an expert panel and a modified Delphi technique. DISCUSSION: We propose a comprehensive protocol for validating a survey instrument for assessing organizational readiness to change that specifically addresses key threats of bias related to the halo effect and method bias, and questions of construct validity that often go unexplored in research using measures of organizational constructs.
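    The inter-rater reliability step planned above can be sketched with a one-way ANOVA ICC(1) on hypothetical, equal-group-size facility data. The protocol itself derives intra-class correlations from hierarchical linear models; this is only a simplified analogue of the same quantity.

```python
import numpy as np

def icc1(groups):
    """One-way ANOVA ICC(1) for equal-sized rater groups.

    `groups` is a sequence of 1-D arrays, one per facility, holding
    that facility's individual-level readiness ratings.
    """
    j, k = len(groups), len(groups[0])
    grand_mean = np.concatenate(groups).mean()
    group_means = np.array([g.mean() for g in groups])
    msb = k * ((group_means - grand_mean) ** 2).sum() / (j - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (j * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical data: 30 facilities, 5 respondents each, with genuine
# between-facility differences, so ratings should aggregate reasonably.
rng = np.random.default_rng(2)
groups = [mu + rng.normal(scale=1.0, size=5)
          for mu in rng.normal(scale=1.0, size=30)]
print(round(icc1(groups), 2))
```

With equal between- and within-facility variance, as simulated here, the population ICC(1) is 0.5; the estimate will vary around that value.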

    Safety citizenship behavior (SCB) in the workplace: A stable construct? Analysis of psychometric invariance across four European countries

    Safety citizenship behaviors (SCBs) are important participative organizational behaviors that emerge in work groups. SCBs create a work environment that supports individual and team safety, encourages proactive management of workplace safety, and ultimately prevents accidents. Despite the importance of SCBs, little consensus exists on research issues such as the dimensionality of safety citizenship and whether any superordinate factor level of safety citizenship should be conceptualized, and thus measured. The present study addressed this issue by examining the dimensionality of SCBs as they relate to behaviors of helping, stewardship, civic virtue, whistleblowing, voice, and initiating change in current practices. Data on SCBs were collected from four industrial plants (N = 1,065) in four European countries (Italy, Russia, Switzerland, the United Kingdom). The results show that SCBs are structured around two superordinate second-order factors reflecting affiliation and challenge. Multi-group analyses supported the structural and metric invariance of the two-factor model across the four national subsamples.

    Organizational readiness for implementing change: a psychometric assessment of a new measure

    BACKGROUND: Organizational readiness for change in healthcare settings is an important factor in successful implementation of new policies, programs, and practices. However, research on the topic is hindered by the absence of a brief, reliable, and valid measure. Until such a measure is developed, we cannot advance scientific knowledge about readiness or provide evidence-based guidance to organizational leaders about how to increase readiness. This article presents results of a psychometric assessment of a new measure called Organizational Readiness for Implementing Change (ORIC), which we developed based on Weiner’s theory of organizational readiness for change. METHODS: We conducted four studies to assess the psychometric properties of ORIC. In study one, we assessed the content adequacy of the new measure using quantitative methods. In study two, we examined the measure’s factor structure and reliability in a laboratory simulation. In study three, we assessed the reliability and validity of an organization-level measure of readiness based on aggregated individual-level data from study two. In study four, we conducted a small field study utilizing the same analytic methods as in study three. RESULTS: Content adequacy assessment indicated that the items developed to measure change commitment and change efficacy reflected the theoretical content of these two facets of organizational readiness and distinguished the facets from hypothesized determinants of readiness. Exploratory and confirmatory factor analysis in the lab and field studies revealed two correlated factors, as expected, with good model fit and high item loadings. Reliability analysis in the lab and field studies showed high inter-item consistency for the resulting individual-level scales for change commitment and change efficacy. Inter-rater reliability and inter-rater agreement statistics supported the aggregation of individual level readiness perceptions to the organizational level of analysis. 
CONCLUSIONS: This article provides evidence in support of the ORIC measure. We believe this measure will enable testing of theories about the determinants and consequences of organizational readiness and, ultimately, help healthcare leaders reduce the number of health organization change efforts that do not achieve their desired benefits. Although the ORIC shows promise, further assessment is needed to test its convergent, discriminant, and predictive validity.
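    The within-group agreement statistics used to justify aggregating individual readiness perceptions to the organizational level are commonly computed as James et al.'s r_wg index, which compares observed rating variance to the variance expected under a uniform (no-agreement) null. A sketch on hypothetical 5-point Likert responses (not the ORIC data):

```python
import numpy as np

def rwg(ratings: np.ndarray, n_options: int = 5) -> float:
    """r_wg agreement index for one group's ratings on one item.

    Compares observed variance to the variance of a uniform null
    distribution over an `n_options`-point response scale.
    """
    expected_var = (n_options ** 2 - 1) / 12   # uniform-null variance
    observed_var = np.var(ratings, ddof=1)
    return 1 - observed_var / expected_var

# Hypothetical responses from one facility on a 5-point scale:
agree = rwg(np.array([4, 4, 5, 4, 4]))      # raters largely agree
disagree = rwg(np.array([1, 5, 2, 4, 3]))   # raters spread out
print(round(agree, 2), round(disagree, 2))
```

High r_wg (near 1) supports aggregation; values near or below 0 indicate the group disagrees as much as, or more than, chance would predict.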

    Many Labs 5: Testing pre-data-collection peer review as an intervention to increase replicability

    Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
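    As a rough arithmetic check on the shrinkage reported above: the paper's 78% figure averages per-study shrinkage, so a calculation from the two medians need not match it exactly, but it lands in the same range.

```python
# Median effect sizes reported in the abstract.
replication_r = 0.07   # cumulative replication estimate (median r)
original_r = 0.37      # original studies (median r)

# Proportional shrinkage of the replication estimate vs. the original.
shrinkage = 1 - replication_r / original_r
print(f"{shrinkage:.0%}")  # → 81%
```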