
    Union Mediation and Adaptation to Reciprocal Loyalty Arrangements

    This study assesses the industrial-relations application of the "loyalty-exit-voice" proposition. The loyalty concept is linked to reciprocal employer-employee arrangements and examined as a job attribute in a vignette questionnaire distributed to low- and medium-skilled employees. The responses provided by employees in three European countries indicate that reciprocal loyalty arrangements, which involve the exchange of higher effort for job security, are among the most desirable job attributes. This attribute exerts a higher impact on the job evaluations provided by unionised workers than on those of their non-union counterparts. The pattern is robust to a number of methodological considerations and appears to be an outcome of adaptation to union-mediated cooperation. Overall, the evidence suggests that the loyalty-job evaluation profiles of unionised workers are receptive to repeated interaction and to negative shocks, such as unemployment experience; this is not the case for non-union workers. Finally, unionised workers appear to "voice" lower job satisfaction but exhibit lower "exit" intentions than non-unionised labour.
    The study forms part of EPICURUS, a project supported by the European Commission through the 5th Framework Programme "Improving Human Potential" (contract number: HPSE-CT-2002-00143).

    The Adversarial Attack and Detection under the Fisher Information Metric

    Many deep learning models are vulnerable to adversarial attacks: imperceptible but intentionally designed perturbations to the input can cause the network to produce incorrect output. In this paper, using information geometry, we provide a reasonable explanation for the vulnerability of deep learning models. By treating the data space as a non-linear space equipped with the Fisher information metric induced from a neural network, we first propose an adversarial attack algorithm termed the one-step spectral attack (OSSA). The method is described by a constrained quadratic form of the Fisher information matrix, where the optimal adversarial perturbation is given by the first eigenvector and the model's vulnerability is reflected by the eigenvalues: the larger an eigenvalue, the more vulnerable the model is to attacks along the corresponding eigenvector. Taking advantage of this property, we also propose an adversarial detection method that uses the eigenvalues as characteristics. Both our attack and detection algorithms are numerically optimized to work efficiently on large datasets. Our evaluations show superior performance compared with other methods, suggesting that the Fisher information is a promising tool for investigating adversarial attacks and defenses.
    Comment: Accepted as an AAAI-2019 oral paper.
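    To make the construction concrete, below is a minimal sketch (mine, not the authors' released code) of how a one-step spectral attack could be implemented in PyTorch. It assumes a classifier over flattened inputs; the Fisher information metric on the input space is estimated by Monte-Carlo sampling of labels from the model's predictive distribution, and power iteration recovers its top eigenvector, which is scaled to the perturbation budget. The function names and the sampling budget `n_samples` are illustrative assumptions.

```python
# Hedged sketch of a one-step spectral attack (OSSA-style), assuming a PyTorch
# classifier `model` over flattened (1-D) inputs and an L2 budget `eps`.
import torch
import torch.nn.functional as F

def fisher_matvec(model, x, v, n_samples=8):
    """Approximate G(x) @ v, where G = E_{y ~ p(y|x)} [ g g^T ] and
    g = grad_x log p(y|x) is the input-space score of the sampled label."""
    x = x.detach().requires_grad_(True)
    log_probs = F.log_softmax(model(x.unsqueeze(0)).squeeze(0), dim=-1)
    probs = log_probs.exp().detach()
    out = torch.zeros_like(x)
    # Monte-Carlo over labels drawn from the model's own predictive distribution.
    for y in torch.multinomial(probs, n_samples, replacement=True):
        g = torch.autograd.grad(log_probs[y], x, retain_graph=True)[0]
        out = out + g * torch.dot(g, v)
    return out / n_samples

def one_step_spectral_attack(model, x, eps=0.1, iters=20):
    """Power iteration for the top eigenvector of the Fisher metric at x;
    the adversarial perturbation is that eigenvector scaled to `eps`."""
    v = torch.randn_like(x)
    v = v / v.norm()
    for _ in range(iters):
        v = fisher_matvec(model, x, v)
        v = v / (v.norm() + 1e-12)
    # The Rayleigh quotient v^T G v (the top eigenvalue) can additionally serve
    # as a vulnerability score, mirroring the detection idea described above.
    return eps * v
```

    Note that the Fisher matrix is never formed explicitly here; only matrix-vector products are needed, which keeps the power iteration manageable for high-dimensional inputs.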

    Addressing Model Vulnerability to Distributional Shifts over Image Transformation Sets

    We are concerned with the vulnerability of computer vision models to distributional shifts. We formulate a combinatorial optimization problem that evaluates the regions of the image space where a given model is most vulnerable, in terms of image transformations applied to the input, and tackle it with standard search algorithms. We further embed this idea in a training procedure in which, over iterations, we define new data augmentation rules according to the image transformations that the current model is most vulnerable to. An empirical evaluation on classification and semantic segmentation problems suggests that the devised algorithm trains models that are more robust against content-preserving image manipulations and, more generally, against distributional shifts.
    Comment: ICCV 2019 (camera ready).
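    As a rough illustration of the search component, the sketch below (my assumption, not the paper's code) exhaustively scans a small grid of content-preserving transformations with torchvision and returns the combination the model is most vulnerable to; the found transformation can then be added to the augmentation rules. The grid values and function names are illustrative, and larger transformation sets would require a proper combinatorial search rather than enumeration.

```python
# Hedged sketch: find the image transformation a model is most vulnerable to,
# over a small discrete grid. `model`, the grid values, and names are assumptions.
import itertools
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

ANGLES = [-15.0, 0.0, 15.0]      # rotation, in degrees
BRIGHTNESS = [0.7, 1.0, 1.3]     # multiplicative brightness factors
CONTRAST = [0.7, 1.0, 1.3]       # multiplicative contrast factors

def apply_transform(images, angle, brightness, contrast):
    out = TF.rotate(images, angle)
    out = TF.adjust_brightness(out, brightness)
    return TF.adjust_contrast(out, contrast)

@torch.no_grad()
def worst_case_transform(model, images, labels):
    """Return the (angle, brightness, contrast) tuple with the highest loss,
    i.e. the region of the transformation set where the model is weakest."""
    worst, worst_loss = None, float("-inf")
    for angle, b, c in itertools.product(ANGLES, BRIGHTNESS, CONTRAST):
        loss = F.cross_entropy(model(apply_transform(images, angle, b, c)), labels)
        if loss.item() > worst_loss:
            worst, worst_loss = (angle, b, c), loss.item()
    return worst, worst_loss

# Training-time use (sketch): every few epochs, run worst_case_transform on a
# held-out batch and add the returned transformation to the augmentation pool.
```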

    On The Stability of Interpretable Models

    Interpretable classification models are built with the purpose of providing a comprehensible description of the decision logic to an external oversight agent. Considered in isolation, a decision tree, a set of classification rules, or a linear model is widely recognized as human-interpretable. However, such models are generated as part of a larger analytical process, and bias in data collection and preparation, or in the model's construction, may severely affect the accountability of the design process. We conduct an experimental study of the stability of interpretable models with respect to feature selection, instance selection, and model selection. Our conclusions should raise the scientific community's awareness of the need for a stability impact assessment of interpretable models.
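    The following is a small, hedged illustration (my own, not the authors' protocol) of one such stability measurement: how much the set of features actually used by a decision tree changes under instance selection, here simulated with bootstrap resampling and summarized by the average pairwise Jaccard similarity of the used-feature sets.

```python
# Hedged sketch: stability of an interpretable model (a decision tree) under
# instance selection. Dataset, depth, and the Jaccard summary are assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

def used_features(tree):
    """Indices of the features that actually appear in the fitted tree's splits."""
    return {int(f) for f in tree.tree_.feature if f >= 0}  # negative values mark leaves

def instance_selection_stability(X, y, n_runs=20, seed=0):
    rng = np.random.default_rng(seed)
    feature_sets = []
    for _ in range(n_runs):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
        clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X[idx], y[idx])
        feature_sets.append(used_features(clf))
    # Average pairwise Jaccard similarity: 1.0 means the explanation never changes.
    sims = [len(a & b) / len(a | b)
            for i, a in enumerate(feature_sets)
            for b in feature_sets[i + 1:]]
    return float(np.mean(sims))

X, y = load_breast_cancer(return_X_y=True)
print("instance-selection stability:", instance_selection_stability(X, y))
```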