
    The Commune Movement during the 1960s and the 1970s in Britain, Denmark and the United States

    The communal revival that began in the mid-1960s developed into a new mode of activism, ‘communal activism’ or the ‘commune movement’, forming its own politics, lifestyle and ideology. Communal activism spread and flourished until the mid-1970s in many parts of the world. To analyse this global phenomenon, this thesis explores the similarities and differences between the commune movements of Denmark, the UK and the US. By examining the motivations for the communal revival, links with 1960s radicalism, communes’ praxis and outward-facing activities, and the crisis within the commune movement and responses to it, this thesis places communal activism within the context of wider movements for social change. Challenging existing interpretations which have understood the communal revival as an alternative living experiment to the nuclear family, or as a smaller part of the counter-culture, this thesis argues that commune participants created varied and new experiments aimed at a total revolution against the prevailing social order and its dominant values and institutions, including the patriarchal family and capitalism. Communards embraced autonomy and solidarity based on individual communes’ situations and tended to reject charismatic leadership. Functioning as an independent entity, each commune engaged with its local community, designing various political and cultural projects. Communes interacted with other social movement groups through collective work for the women’s liberation and environmentalist movements. As a genuine grassroots social movement, communal activism became an essential part of Left politics, bridging the 1960s and 1970s.

    Resilient Linear Classification: An Approach to Deal with Attacks on Training Data

    Data-driven techniques are used in cyber-physical systems (CPS) for controlling autonomous vehicles, handling demand responses for energy management, and modeling human physiology for medical devices. These data-driven techniques extract models from training data, and their performance is often analyzed with respect to random errors in the training data. However, if the training data is maliciously altered by attackers, the effect of these attacks on the learning algorithms underpinning data-driven CPS has yet to be considered. In this paper, we analyze the resilience of classification algorithms to training data attacks. Specifically, we propose a generic metric tailored to measuring the resilience of classification algorithms with respect to worst-case tampering of the training data. Using this metric, we show that traditional linear classification algorithms are resilient only under restricted conditions. To overcome these limitations, we propose a linear classification algorithm with a majority constraint and prove that it is strictly more resilient than the traditional algorithms. Evaluations on both synthetic data and a real-world retrospective arrhythmia medical case study show that the traditional algorithms are vulnerable to tampered training data, whereas the proposed algorithm is more resilient (as measured by worst-case tampering).
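
    The following is a minimal sketch of the kind of training-data tampering experiment the abstract describes: train a linear classifier on clean labels and on labels partially flipped by an attacker, then compare test accuracy. The greedy flipping heuristic, function names, and dataset are illustrative assumptions, not the paper's resilience metric or its majority-constrained algorithm.

```python
# Hypothetical sketch: measure how much a linear classifier degrades when an
# attacker flips the labels of a fraction of the training set. Greedy flipping
# of the most confidently classified points approximates (but does not
# guarantee) worst-case tampering.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def accuracy_after_tampering(flip_fraction):
    """Flip the labels the clean model is most confident about, then retrain."""
    clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    n_flip = int(flip_fraction * len(y_tr))
    confidence = np.abs(clean.decision_function(X_tr))
    idx = np.argsort(-confidence)[:n_flip]          # most confident points
    y_tampered = y_tr.copy()
    y_tampered[idx] = 1 - y_tampered[idx]           # adversarial label flips
    tampered = LogisticRegression(max_iter=1000).fit(X_tr, y_tampered)
    return tampered.score(X_te, y_te)

for frac in (0.0, 0.05, 0.10, 0.20):
    print(f"flip {frac:.0%} of training labels -> "
          f"test accuracy {accuracy_after_tampering(frac):.3f}")
```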

    PAC Confidence Sets for Deep Neural Networks via Calibrated Prediction

    We propose an algorithm combining calibrated prediction and generalization bounds from learning theory to construct confidence sets for deep neural networks with PAC guarantees, i.e., the confidence set for a given input contains the true label with high probability. We demonstrate how our approach can be used to construct PAC confidence sets on ResNet for ImageNet, a visual object tracking model, and a dynamics model for the half-cheetah reinforcement learning problem.
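
    A minimal sketch of one common way to obtain such sets, assuming access to calibrated softmax probabilities: choose a probability threshold on a held-out calibration set so that, with a Hoeffding-style correction, the set of labels above the threshold misses the true label with probability at most epsilon. The function names, the specific bound, and the omission of a union bound over candidate thresholds are simplifying assumptions, not the paper's exact construction.

```python
# Sketch: pick a threshold tau so that {y : p_model(y | x) >= tau} covers the
# true label with probability >= 1 - epsilon, up to a Hoeffding correction on
# the held-out calibration set. `probs` has shape (n, n_classes); `labels`
# holds the true classes.
import numpy as np

def pick_threshold(probs, labels, epsilon=0.1, delta=0.05):
    n = len(labels)
    slack = np.sqrt(np.log(1.0 / delta) / (2.0 * n))   # Hoeffding correction
    true_label_prob = probs[np.arange(n), labels]
    best_tau = 0.0
    for tau in np.linspace(0.0, 1.0, 201):
        miss_rate = np.mean(true_label_prob < tau)      # empirical miss rate
        if miss_rate + slack <= epsilon:
            best_tau = tau                              # largest safe threshold
    return best_tau

def confidence_set(prob_vector, tau):
    """All labels whose (calibrated) probability is at least tau."""
    return np.where(prob_vector >= tau)[0]
```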

    Calibrated Prediction with Covariate Shift via Unsupervised Domain Adaptation

    Reliable uncertainty estimates are an important tool for helping autonomous agents or human decision makers understand and leverage predictive models. However, existing approaches to estimating uncertainty largely ignore the possibility of covariate shift, i.e., that the real-world data distribution may differ from the training distribution. As a consequence, existing algorithms can overestimate certainty, possibly yielding a false sense of confidence in the predictive model. We propose an algorithm for calibrating predictions that accounts for the possibility of covariate shift, given labeled examples from the training distribution and unlabeled examples from the real-world distribution. Our algorithm uses importance weighting to correct for the shift from the training to the real-world distribution. However, importance weighting relies on the training and real-world distributions being sufficiently close. Building on ideas from domain adaptation, we additionally learn a feature map that tries to equalize these two distributions. In an empirical evaluation, we show that our proposed approach outperforms existing approaches to calibrated prediction when there is covariate shift.
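
    A hedged sketch of the importance-weighting idea in the abstract: estimate density-ratio weights with a logistic "domain classifier" that separates source features from unlabeled target features, then fit a temperature by minimizing a weighted negative log-likelihood on the labeled source calibration set. The feature-map learning step is omitted, and the function names are illustrative rather than the paper's API.

```python
# Importance-weighted temperature scaling under covariate shift (illustrative).
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.linear_model import LogisticRegression

def importance_weights(feats_src, feats_tgt):
    """Estimate w(x) ~= p_target(x) / p_train(x) via a domain classifier."""
    X = np.vstack([feats_src, feats_tgt])
    d = np.concatenate([np.zeros(len(feats_src)), np.ones(len(feats_tgt))])
    clf = LogisticRegression(max_iter=1000).fit(X, d)
    p_tgt = clf.predict_proba(feats_src)[:, 1]          # P(target | x)
    odds = p_tgt / np.clip(1.0 - p_tgt, 1e-6, None)     # proportional to ratio
    return odds * (len(feats_src) / len(feats_tgt))     # correct for set sizes

def weighted_temperature(logits_src, labels_src, weights):
    """Fit a temperature by minimizing the importance-weighted NLL."""
    def wnll(T):
        z = logits_src / T
        z = z - z.max(axis=1, keepdims=True)
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        ll = log_probs[np.arange(len(labels_src)), labels_src]
        return -np.mean(weights * ll)
    return minimize_scalar(wnll, bounds=(0.05, 20.0), method="bounded").x
```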

    PAC Prediction Sets Under Label Shift

    Prediction sets capture uncertainty by predicting sets of labels rather than individual labels, enabling downstream decisions to conservatively account for all plausible outcomes. Conformal inference algorithms construct prediction sets guaranteed to contain the true label with high probability. These guarantees fail to hold in the face of distribution shift, which is precisely when reliable uncertainty quantification can be most useful. We propose a novel algorithm for constructing prediction sets with PAC guarantees in the label shift setting. This method estimates the predicted probabilities of the classes in a target domain, as well as the confusion matrix, and then propagates uncertainty in these estimates through a Gaussian elimination algorithm to compute confidence intervals for importance weights. Finally, it uses these intervals to construct prediction sets. We evaluate our approach on five datasets: the CIFAR-10, ChestX-Ray and Entity-13 image datasets, the tabular CDC Heart dataset, and the AGNews text dataset. Our algorithm satisfies the PAC guarantee while producing smaller, more informative prediction sets than several baselines.
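
    For context, a minimal sketch of the standard confusion-matrix approach to estimating label-shift importance weights (black-box shift estimation): solve C w = q, where C holds the source-domain joint frequencies of (predicted class, true class) and q holds the predicted-class frequencies on unlabeled target data. The paper's contribution of propagating the estimation error in C and q into confidence intervals on w is omitted here, and the function name is an assumption.

```python
# Point estimate of label-shift importance weights w[k] ~= p_tgt(y=k)/p_src(y=k)
# from a classifier's source confusion matrix and its target predictions.
import numpy as np

def label_shift_weights(preds_src, labels_src, preds_tgt, n_classes):
    C = np.zeros((n_classes, n_classes))
    for p, y in zip(preds_src, labels_src):
        C[p, y] += 1.0 / len(labels_src)                 # joint frequencies
    q = np.bincount(preds_tgt, minlength=n_classes) / len(preds_tgt)
    w = np.linalg.solve(C, q)                            # solve C w = q
    return np.clip(w, 0.0, None)                         # weights must be >= 0

# Hypothetical usage: rescale source class priors or reweight conformal scores
# before building prediction sets.
# w = label_shift_weights(preds_src, labels_src, preds_tgt, n_classes=10)
```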

    VisionGuard: Runtime Detection of Adversarial Inputs to Perception Systems

    Deep neural network (DNN) models have proven to be vulnerable to adversarial attacks. In this paper, we propose VisionGuard, a novel attack- and dataset-agnostic, computationally light defense mechanism against adversarial inputs to DNN-based perception systems. In particular, VisionGuard relies on the observation that adversarial images are sensitive to lossy compression transformations. Specifically, to determine whether an image is adversarial, VisionGuard checks whether the output of the target classifier on a given input image changes significantly when the classifier is fed a transformed version of that image. Moreover, we show that VisionGuard is computationally light both at runtime and at design time, which makes it suitable for real-time applications that may also involve large-scale image domains. To highlight this, we demonstrate the efficiency of VisionGuard on ImageNet, a task that is computationally challenging for the majority of relevant defenses. Finally, we include extensive comparative experiments on the MNIST, CIFAR-10, and ImageNet datasets which show that VisionGuard outperforms existing defenses in terms of scalability and detection performance.
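
    A hedged sketch of the core check described above: classify an image and a JPEG-compressed copy of it, and flag the input as adversarial when the two softmax outputs diverge by more than a threshold. The divergence measure (symmetric KL), the JPEG quality, and the threshold are illustrative choices and may differ from the ones used by VisionGuard.

```python
# Compare classifier outputs on an image and its lossily compressed copy.
import io
import numpy as np
from PIL import Image

def jpeg_compress(image: Image.Image, quality: int = 75) -> Image.Image:
    buf = io.BytesIO()
    image.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def is_adversarial(image, predict_probs, threshold=0.5, eps=1e-12):
    """predict_probs: callable mapping a PIL image to a softmax vector."""
    p = np.asarray(predict_probs(image)) + eps
    q = np.asarray(predict_probs(jpeg_compress(image))) + eps
    p, q = p / p.sum(), q / q.sum()
    sym_kl = np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))
    return sym_kl > threshold                      # large change -> suspicious
```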

    Application of Habitat Evaluation Procedure with Quantifying the Eco-Corridor in the Process of Environmental Impact Assessment

    In contrast to other fields, environmental protection (e.g., habitat protection) often fails to include quantitative evaluation as part of the existing environmental impact assessment (EIA) process, and the EIA is therefore often a poor forecasting tool, which makes selecting a reasonable plan of action difficult. In this study, we used the Habitat Evaluation Procedure (HEP) to quantify the long-term effects of a road construction project on an ecosystem. The water deer (Hydropotes inermis) was selected as the study species since it depends on optimum habitat; water deer habitat data were collected on vegetation cover, stream water density, geographic contour, land use class, and road networks. Habitat Suitability Index (HSI) and Cumulative Habitat Unit (CHU) values for the water deer were estimated based on the major land cover classes, the national river systems, and vegetation cover. Results showed that the road construction project would result in a net ecological loss value of 1211 in the project area without installation of an eco-corridor, which was reduced to 662 with an eco-corridor, providing a 55% increase in net value after 50 years of the mitigation plan. Among the 13 proposed ecological mitigation corridors, corridor #4 yielded the highest net increase (69.5) and was regarded as the most appropriate corridor for connecting water deer habitat. In sum, the study derived the net increase in quantitative habitat value over time for different mitigation methods for a road construction project; this procedure can be effectively utilized in the future to select the locations of ecological corridors while considering the costs of constructing them.
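
    A minimal sketch of the standard HEP bookkeeping behind such numbers, under the usual definition of habitat units as HSI multiplied by habitat area and accumulated over the analysis period: a mitigation option is scored by the difference in cumulative habitat units between the with- and without-mitigation scenarios. The HSI trajectories, area, and horizon below are placeholders, not the study's data.

```python
# Cumulative habitat units (CHU) for two hypothetical mitigation scenarios.
def cumulative_habitat_units(hsi_by_year, area_ha):
    return sum(hsi * area_ha for hsi in hsi_by_year)   # HU = HSI x area, per year

years = 50
no_corridor = [0.40] * years      # hypothetical HSI trajectory, no eco-corridor
with_corridor = [0.55] * years    # hypothetical HSI trajectory, with corridor
area_ha = 100.0                   # hypothetical affected habitat area (ha)

chu_without = cumulative_habitat_units(no_corridor, area_ha)
chu_with = cumulative_habitat_units(with_corridor, area_ha)
print(f"net gain from mitigation: {chu_with - chu_without:.1f} habitat units")
```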