40 research outputs found

    The Impact of Person-Organization Fit and Psychological Ownership on Turnover in Open Source Software Projects

    Open source software (OSS) projects represent an alternate form of software production by relying primarily on voluntary contributions. Despite the immense success of several mainstream OSS projects such as Mozilla, Linux, and Apache, a vast majority of such projects fail to sustain their development due to high levels of developer turnover. While existing research in the area has offered a rich foundation, we know little about how developers’ perceptions of fit with the project environment may be moderated by the sense of ownership they have toward the project and how it may impact their turnover intentions. Using survey data from 574 GitHub developers, we tested a model to examine the impact of Person-Organization fit and psychological ownership on developers’ turnover intentions. Our results suggest that two relevant dimensions of fit, namely, value fit and demands-abilities fit, negatively impact turnover intentions and that developers’ sense of ownership moderates these effects.

    The Role of Ownership and Social Identity in Predicting Developer Turnover in Open Source Software Projects

    Open source software (OSS) is a development methodology that promises to produce reliable, flexible, and high-quality software code at minimal cost by harnessing the power of distributed peer review and transparency of process; it has become increasingly popular in the past few years. For-profit companies have increasingly adopted the OSS paradigm to produce quality software at low cost. A vast majority of OSS projects depend on voluntary contributions by developers to sustain their development. In this context, turnover of developers has been considered a critical issue hindering the success of projects. This dissertation develops two studies addressing the issue. The first study is a methodological pilot and lays the foundation of this research by modeling the turnover behavior of core open source contributors using a logistic hierarchical linear modeling approach. It argues that taking both developer- and project-level factors into account will lead to a richer understanding of the issue of turnover in open source projects. The second study provides a conceptual integration of developer- and project-level factors using the Ownership, Role Theory, and Social Identity literatures, and proposes testable hypotheses, methods, and findings. The implications of this research are likely to benefit OSS managers in understanding the developer- and project-level factors associated with developer turnover and the contexts in which they interact.

    In Which Model Do We Trust and When? Comparing the Explanatory and Predictive Abilities of Models of E-Government Trust

    With the growth in digital provision of government services (i.e., e-government), a substantial quantity of recent research has focused on models of satisfaction and trust with public sector services. Although few would deny the relevance of the satisfaction-trust relationship, there is little agreement about how to optimally model these relationships. In this paper, we compare an assortment of conceptual models of the e-government citizen satisfaction-trust relationship, drawn from the service-quality and expectancy-disconfirmation paradigms, for their ability to explain trust, their parsimony, and their in-sample and out-of-sample predictive abilities. We use survey data from the American Customer Satisfaction Index (ACSI) measuring citizen e-government experiences. Our findings suggest that while the expectancy-disconfirmation model does well for explanation, the service-quality model is better suited for prediction of citizen trust. Overall, the service-quality model also offers the best compromise between predictive accuracy and explanatory power. These findings offer new insights for researchers, government agencies, and practitioners.

    The Impact of Anonymous Peripheral Contributions on Open Source Software Development

    Online peer production communities such as open source software (OSS) projects attract both identified and anonymous peripheral contributions (e.g., defect reports, feature requests, or forum posts). While identified peripheral contributions (IPC) can be attributed to specific individuals and OSS projects need them to succeed, anonymous peripheral contributions (APC) cannot be traced back, and they can have both positive and negative ramifications for project development. Open platforms and managers face a challenging design choice in deciding whether to allow APC, and for which tasks or what type of projects. We examine the impact that the ratio between APC and IPC has on OSS project performance. Our results suggest that OSS projects perform best when they have a uniform anonymity level (i.e., they contain predominantly APC or predominantly IPC), and that they have lower performance when the ratio between APC and IPC nears one (i.e., they contain close to the same number of APC and IPC). Furthermore, our results suggest that these effects differ depending on the type of application that a project develops. Our study contributes to the ongoing debate about the implications of anonymity for online communities and informs managers about the effect that anonymous contributions have on their projects.

    The Impact of Person-Organization Fit on Turnover in Open Source Software Projects

    Participant turnover in open source software development is a critical problem. Using Schneider’s (1987) Attraction-Selection-Attrition framework and the notion of Person-Organization fit, we hypothesize about the relationship between a participant’s fit with an open source project and turnover. Specifically, we predict that value fit, needs-supplies fit, and demands-abilities fit between a participant and a particular project will have a negative association with participant turnover, and that the role of the participant in the project acts as a moderator. An empirical study is designed to examine the hypotheses using a combination of survey and archival data. Since for-profit companies are increasingly leveraging open source software development, the implications of our findings will be useful for project managers seeking to retain talented contributors in the absence of financial compensation.

    PLS-Based Model Selection: The Role of Alternative Explanations in Information Systems Research

    Exploring theoretically plausible alternative models for explaining the phenomenon under study is a crucial step in advancing scientific knowledge. This paper advocates model selection in information systems (IS) studies that use partial least squares path modeling (PLS) and suggests the use of model selection criteria derived from information theory for this purpose. These criteria allow researchers to compare alternative models and select a parsimonious yet well-fitting model. However, as our review of prior IS research practice shows, their use—while common in the econometrics field and in factor-based SEM—has not found its way into studies using PLS. Using a Monte Carlo study, we compare the performance of several model selection criteria in selecting the best model from a set of competing models under different model set-ups and various conditions of sample size, effect size, and loading patterns. Our results suggest that appropriate model selection cannot be achieved by relying on the PLS criteria (i.e., R2, Adjusted R2, GoF, and Q2), as is the current practice in academic research. Instead, model selection criteria—in particular, the Bayesian information criterion (BIC) and the Geweke-Meese criterion (GM)—should be used due to their high model selection accuracy and ease of use. To support researchers in the adoption of these criteria, we introduce a five-step procedure that delineates the roles of model selection and statistical inference and discuss misconceptions that may arise in their use.
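    The information-theoretic criteria the paper recommends can be illustrated outside the PLS setting. The sketch below is an assumption-laden stand-in: plain OLS regressions replace PLS path models, and BIC is computed in its common regression form n·ln(SSE/n) + k·ln(n); actual PLS implementations may count parameters differently.

```python
import numpy as np

def bic(y, y_hat, k):
    """BIC in its common regression form n*ln(SSE/n) + k*ln(n);
    k counts estimated parameters (an assumption for this sketch)."""
    n = len(y)
    sse = np.sum((y - y_hat) ** 2)
    return n * np.log(sse / n) + k * np.log(n)

rng = np.random.default_rng(0)
n = 200
x1, x2, x3 = rng.normal(size=(3, n))
y = 0.5 * x1 + 0.3 * x2 + rng.normal(scale=0.5, size=n)  # x3 is irrelevant

# Three competing models of increasing complexity
candidates = {
    "M1: x1":       np.column_stack([np.ones(n), x1]),
    "M2: x1+x2":    np.column_stack([np.ones(n), x1, x2]),
    "M3: x1+x2+x3": np.column_stack([np.ones(n), x1, x2, x3]),
}

scores = {}
for name, X in candidates.items():
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    scores[name] = bic(y, X @ beta, k=X.shape[1])

best = min(scores, key=scores.get)
print(best)  # BIC typically selects the parsimonious true model here
```

Unlike R2, which never decreases when predictors are added, the ln(n) penalty lets BIC reject the overfitted M3 even though it fits the sample slightly better.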

    Efficient ML Models for Practical Secure Inference

    ML-as-a-service continues to grow, and so does the need for very strong privacy guarantees. Secure inference has emerged as a potential solution, wherein cryptographic primitives allow inference without revealing users' inputs to the model provider or the model's weights to users. For instance, the model provider could be a diagnostics company that has trained a state-of-the-art DenseNet-121 model for interpreting chest X-rays, and the user could be a patient at a hospital. While secure inference is in principle feasible for this setting, no existing techniques make it practical at scale. The CrypTFlow2 framework provides a potential solution with its ability to automatically and correctly translate clear-text inference to secure inference for arbitrary models. However, the resulting secure inference from CrypTFlow2 is impractically expensive: almost 3 TB of communication is required to interpret a single X-ray on DenseNet-121. In this paper, we address this outstanding challenge of the inefficiency of secure inference with three contributions. First, we show that the primary bottlenecks in secure inference are large linear layers, which can be optimized with the choice of network backbone and the use of operators developed for efficient clear-text inference. This emphasis deviates from many recent works, which focus on optimizing non-linear activation layers when performing secure inference of smaller networks. Second, based on an analysis of a bottlenecked convolution layer, we design an X-operator that is a more efficient drop-in replacement. Third, we show that the fast Winograd convolution algorithm further improves the efficiency of secure inference. In combination, these three optimizations prove to be highly effective for the problem of X-ray interpretation trained on the CheXpert dataset.
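    The Winograd algorithm mentioned above reduces the number of multiplications in small convolutions, which matters in secure inference because multiplications dominate the cryptographic cost. Below is a minimal sketch of the classic 1-D F(2,3) minimal-filtering transform in its standard textbook form; it is not the paper's implementation, which applies the idea to full 2-D convolution layers.

```python
import numpy as np

def winograd_f23(d, g):
    """Winograd minimal filtering F(2,3): two outputs of a 3-tap 1-D
    cross-correlation using 4 multiplications instead of the naive 6.
    d holds 4 input samples, g holds the 3 filter taps."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 2.0])
# Naive 6-multiplication computation of the same two outputs
naive = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                  d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(winograd_f23(d, g), naive)  # the two should match
```

The filter-side combinations (g0+g1+g2)/2 and (g0-g1+g2)/2 depend only on the weights, so they can be precomputed once; only the 4 element-wise products per output tile incur per-inference cost.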

    Extraordinary Claims Require Extraordinary Evidence: A Comment on “Recent Developments in PLS”

    Evermann and Rönkkö (2023) review recent developments in partial least squares (PLS) with the aim of providing guidance to researchers. Indeed, the explosion of methodological advances in PLS in the last decade necessitates such overview articles. In so far as the goal is to provide an objective assessment of the technique, such articles are most welcome. Unfortunately, the authors’ extraordinary and questionable claims paint a misleading picture of PLS. Our goal in this short commentary is to address selected claims made by Evermann and Rönkkö (2023) using simulations and the latest research. Our objective is to bring a positive perspective to this debate and highlight the recent developments in PLS that make it an increasingly valuable technique in IS and management research in general.

    The shortcomings of equal weights estimation and the composite equivalence index in PLS-SEM

    Purpose: The purpose of this paper is to assess the appropriateness of equal weights estimation (sumscores) and the application of the composite equivalence index (CEI) vis-à-vis differentiated indicator weights produced by partial least squares structural equation modeling (PLS-SEM).
    Design/methodology/approach: The authors rely on prior literature as well as empirical illustrations and a simulation study to assess the efficacy of equal weights estimation and the CEI.
    Findings: The results show that the CEI lacks discriminatory power, and its use can produce major differences in structural model estimates, conceal measurement model issues, and almost always lead to inferior out-of-sample predictive accuracy compared to differentiated weights produced by PLS-SEM.
    Research limitations/implications: In light of its manifold conceptual and empirical limitations, the authors advise against the use of the CEI. Its adoption and the routine use of equal weights estimation could adversely affect the validity of measurement and structural model results and understate structural model predictive accuracy. Although this study shows that the CEI is an unsuitable metric to decide between equal weights and differentiated weights, it does not propose another means for such a comparison.
    Practical implications: The results suggest that researchers and practitioners should prefer differentiated indicator weights, such as those produced by PLS-SEM, over equal weights.
    Originality/value: To the best of the authors’ knowledge, this study is the first to provide a comprehensive assessment of the CEI’s usefulness. The results provide guidance for researchers considering using equal indicator weights instead of PLS-SEM-based weighted indicators.
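    The predictive gap between sumscores and differentiated weights can be sketched with synthetic data. The example below is not PLS-SEM: it uses simple correlation-based weights as a hypothetical stand-in for differentiated weights, to show why equal weights can understate out-of-sample predictive accuracy when indicators differ in reliability.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
latent = rng.normal(size=n)
# Three indicators of the same latent variable with very different reliabilities
x = np.column_stack([
    latent + rng.normal(scale=0.2, size=n),  # strong indicator
    latent + rng.normal(scale=1.0, size=n),  # moderate indicator
    latent + rng.normal(scale=3.0, size=n),  # weak indicator
])
y = 0.8 * latent + rng.normal(scale=0.5, size=n)  # outcome

train, test = slice(0, 700), slice(700, None)

def oos_r2(score):
    """Out-of-sample R^2 of a simple regression of y on the composite."""
    Xtr = np.column_stack([np.ones(score[train].size), score[train]])
    b, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
    pred = b[0] + b[1] * score[test]
    ss_res = np.sum((y[test] - pred) ** 2)
    ss_tot = np.sum((y[test] - y[test].mean()) ** 2)
    return 1 - ss_res / ss_tot

equal = x.mean(axis=1)  # sumscore composite: every indicator weighted equally
w = np.array([np.corrcoef(x[train, j], y[train])[0, 1] for j in range(3)])
weighted = x @ (w / w.sum())  # correlation-weighted composite

print(oos_r2(equal), oos_r2(weighted))
```

Because the equal-weights composite lets the noisy third indicator dilute the strong first one, its out-of-sample R2 falls well below that of the differentiated composite, mirroring the paper's finding in miniature.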

    Prediction: coveted, yet forsaken? Introducing a cross-validated predictive ability test in partial least squares path modeling

    Management researchers often develop theories and policies that are forward‐looking. The prospective outlook of predictive modeling, where a model predicts unseen or new data, can complement the retrospective nature of causal‐explanatory modeling that dominates the field. Partial least squares (PLS) path modeling is an excellent tool for building theories that offer both explanation and prediction. A limitation of PLS, however, is the lack of a statistical test to assess whether a proposed or alternative theoretical model offers significantly better out‐of‐sample predictive power than a benchmark or an established model. Such an assessment of predictive power is essential for theory development and validation, and for selecting a model on which to base managerial and policy decisions. We introduce the cross‐validated predictive ability test (CVPAT) to conduct a pairwise comparison of predictive power of competing models, and substantiate its performance via multiple Monte Carlo studies. We propose a stepwise predictive model comparison procedure to guide researchers, and demonstrate CVPAT's practical utility using the well‐known American Customer Satisfaction Index (ACSI) model.
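    CVPAT's core mechanics, pairing the cross-validated prediction errors of two models on the same observations and testing the mean loss difference, can be sketched as follows. This is a simplified illustration: plain OLS models stand in for PLS path models, and the paired t-statistic is an assumed form based on the pairwise-comparison idea, not the paper's exact test.

```python
import numpy as np

def cv_losses(X, y, k=5, seed=0):
    """K-fold cross-validated squared prediction errors of an OLS model,
    one loss per observation. Using the same seed for two models keeps
    the folds identical, so the losses are paired observation by observation."""
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)
    losses = np.empty(n)
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        Xt = np.column_stack([np.ones(len(train)), X[train]])
        b, *_ = np.linalg.lstsq(Xt, y[train], rcond=None)
        pred = np.column_stack([np.ones(len(fold)), X[fold]]) @ b
        losses[fold] = (y[fold] - pred) ** 2
    return losses

rng = np.random.default_rng(2)
n = 400
x1, x2 = rng.normal(size=(2, n))
y = 0.6 * x1 + 0.4 * x2 + rng.normal(scale=0.7, size=n)

# Benchmark model omits x2; the proposed model includes it
loss_benchmark = cv_losses(x1[:, None], y)
loss_proposed = cv_losses(np.column_stack([x1, x2]), y)

# Paired t-statistic on the per-observation loss differences
d = loss_benchmark - loss_proposed
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(n))
print(t_stat)  # a large positive t favors the proposed model's predictions
```

Pairing the losses on identical folds removes the between-observation variance that a naive comparison of average errors would have to overcome, which is what gives the test its power.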