
    RAB: Provable Robustness Against Backdoor Attacks

    Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks, including evasion and backdoor (poisoning) attacks. On the defense side, there have been intensive efforts on improving both empirical and provable robustness against evasion attacks; however, provable robustness against backdoor attacks still remains largely unexplored. In this paper, we focus on certifying the machine learning model robustness against general threat models, especially backdoor attacks. We first provide a unified framework via randomized smoothing techniques and show how it can be instantiated to certify the robustness against both evasion and backdoor attacks. We then propose the first robust training process, RAB, to smooth the trained model and certify its robustness against backdoor attacks. We derive the robustness bound for machine learning models trained with RAB, and prove that our robustness bound is tight. In addition, we show that it is possible to train the robust smoothed models efficiently for simple models such as K-nearest neighbor classifiers, and we propose an exact smooth-training algorithm which eliminates the need to sample from a noise distribution for such models. Empirically, we conduct comprehensive experiments for different machine learning (ML) models such as DNNs, differentially private DNNs, and K-NN models on the MNIST, CIFAR-10, and ImageNet datasets, and provide the first benchmark for certified robustness against backdoor attacks. In addition, we evaluate K-NN models on the spambase tabular dataset to demonstrate the advantages of the proposed exact algorithm. Both the theoretical analysis and the comprehensive evaluation on diverse ML models and datasets shed light on further robust learning strategies against general training-time attacks. Comment: 31 pages, 5 figures, 7 tables
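
    The key mechanism described above can be illustrated with a toy version of training-set randomized smoothing: train many classifiers on independently noised copies of the (possibly poisoned) training data and aggregate their votes. The sketch below is only a minimal illustration in this spirit, not the RAB implementation; the K-NN base model, noise level, and number of sampled models are assumptions, and no certified radius is computed.

        # Toy training-set randomized smoothing (illustrative only, not RAB itself).
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def smoothed_predict(X_train, y_train, x_test, sigma=0.5, n_models=100, seed=0):
            """Majority vote over classifiers trained on Gaussian-noised copies of the training set."""
            rng = np.random.default_rng(seed)
            n_classes = int(y_train.max()) + 1          # assumes integer labels 0..C-1
            votes = np.zeros(n_classes, dtype=int)
            for _ in range(n_models):
                X_noisy = X_train + rng.normal(0.0, sigma, size=X_train.shape)
                clf = KNeighborsClassifier(n_neighbors=5).fit(X_noisy, y_train)
                votes[int(clf.predict(x_test[None, :])[0])] += 1
            return int(votes.argmax()), votes / n_models

    A certification procedure such as the one derived in the paper would then turn the vote margins into a bound on how much training-set perturbation the smoothed prediction can tolerate.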

    Certifying Out-of-Domain Generalization for Blackbox Functions

    Certifying the robustness of model performance under bounded data distribution drifts has recently attracted intensive interest under the umbrella of distributional robustness. However, existing techniques either make strong assumptions on the model class and loss functions that can be certified, such as smoothness expressed via Lipschitz continuity of gradients, or require solving complex optimization problems. As a result, the wider application of these techniques is currently limited by their scalability and flexibility -- these techniques often do not scale to large-scale datasets with modern deep neural networks or cannot handle loss functions that may be non-smooth, such as the 0-1 loss. In this paper, we focus on the problem of certifying distributional robustness for blackbox models and bounded loss functions, and propose a novel certification framework based on the Hellinger distance. Our certification technique scales to ImageNet-scale datasets, complex models, and a diverse set of loss functions. We then focus on one specific application enabled by such scalability and flexibility, i.e., certifying out-of-domain generalization for large neural networks and loss functions such as accuracy and AUC. We experimentally validate our certification method on a number of datasets, ranging from ImageNet, where we provide the first non-vacuous certified out-of-domain generalization, to smaller classification tasks where we are able to compare with the state of the art and show that our method performs considerably better. Comment: 39th International Conference on Machine Learning (ICML), 2022
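
    For intuition, a deliberately loose version of such a certificate can be written down for any loss bounded in [0, 1] using the standard relation |E_Q[loss] - E_P[loss]| <= TV(P, Q) <= sqrt(2) * H(P, Q) between total variation and Hellinger distance. The sketch below implements only this crude bound; it is not the paper's (tighter) Hellinger-based certificate, and the function name and example numbers are illustrative assumptions.

        # Crude out-of-domain bound for a loss in [0, 1] under a Hellinger-distance budget.
        import math

        def loose_out_of_domain_bound(in_domain_loss, hellinger_radius):
            """Upper-bound the expected loss under any Q with H(P, Q) <= hellinger_radius."""
            assert 0.0 <= in_domain_loss <= 1.0
            return min(1.0, in_domain_loss + math.sqrt(2.0) * hellinger_radius)

        # e.g. 10% in-domain 0-1 loss and a Hellinger radius of 0.05
        print(loose_out_of_domain_bound(0.10, 0.05))   # ~0.171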

    The art of the pivot: How new ventures manage identification relationships with stakeholders as they change direction

    Many new ventures have to pivot – radically transform what they are about – because their original approach has failed. However, pivoting risks disrupting relationships with key stakeholders, such as user communities, who identify with ventures. Stakeholders may respond by withdrawing support and starving ventures of the resources needed to thrive. This can pose an existential threat to ventures, yet it is unclear how they can manage this problem. To explore this important phenomenon, we conduct a qualitative process study of The Impossible Project, a photography venture that encountered significant resistance from its user community as it pivoted from an analog focus to an analog-digital positioning. We develop a process model of stakeholder identification management that reveals how ventures can use identification reset work to defuse tensions with stakeholders whose identification with the venture is threatened. A core finding is that ventures can remove the affective hostility of stakeholders and rebuild connections with many of them by exposing their struggles, thus creating a bond focused around these shared experiences. We offer contributions to scholarship on identification management, user community identification, and pivoting.

    Alcohol based surgical prep solution and the risk of fire in the operating room: a case report

    A few cases of fire in the operating room have been reported in the literature. The factors that may initiate these fires are many and include alcohol-based surgical prep solutions, electrosurgical equipment, and flammable drapes. We report a case of fire in the operating room while operating on a patient with a burst fracture of the C6 vertebra with quadriplegia. The fire was caused by incomplete drying of an alcohol-based surgical prep solution on the covering drapes. This paper discusses potential preventive measures to minimize the incidence of fire in the operating room.

    TSS: Transformation-Specific Smoothing for Robustness Certification

    As machine learning (ML) systems become pervasive, safeguarding their security is critical. However, it has recently been demonstrated that motivated adversaries are able to mislead ML systems by perturbing test data using semantic transformations. While there exists a rich body of research providing provable robustness guarantees for ML models against ℓ_p norm bounded adversarial perturbations, guarantees against semantic perturbations remain largely underexplored. In this paper, we provide TSS -- a unified framework for certifying ML robustness against general adversarial semantic transformations. First, depending on the properties of each transformation, we divide common transformations into two categories, namely resolvable (e.g., Gaussian blur) and differentially resolvable (e.g., rotation) transformations. For the former, we propose transformation-specific randomized smoothing strategies and obtain strong robustness certification. The latter category covers transformations that involve interpolation errors, and we propose a novel approach based on stratified sampling to certify the robustness. Our framework TSS leverages these certification strategies and combines them with consistency-enhanced training to provide rigorous certification of robustness. We conduct extensive experiments on over ten types of challenging semantic transformations and show that TSS significantly outperforms the state of the art. Moreover, to the best of our knowledge, TSS is the first approach that achieves nontrivial certified robustness on the large-scale ImageNet dataset. For instance, our framework achieves 30.4% certified robust accuracy against rotation attacks (within ±30°) on ImageNet. Moreover, to consider a broader range of transformations, we show that TSS is also robust against adaptive attacks and unforeseen image corruptions such as those in CIFAR-10-C and ImageNet-C. Comment: 2021 ACM SIGSAC Conference on Computer and Communications Security (CCS '21)
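
    The smoothing ingredient for a parametric transformation such as rotation can be sketched as follows: classify many copies of the input transformed with randomly sampled rotation angles and take the majority vote. This is only an illustrative sketch, not the TSS certified pipeline (no certification radius and no consistency-enhanced training); the classifier callable, angle noise scale, and sample count are assumptions.

        # Toy transformation smoothing over rotation angles (illustrative only).
        import numpy as np
        from scipy.ndimage import rotate

        def rotation_smoothed_predict(classifier, image, sigma_deg=10.0, n_samples=200, seed=0):
            """Majority vote of classifier predictions over Gaussian-perturbed rotation angles."""
            rng = np.random.default_rng(seed)
            counts = {}
            for _ in range(n_samples):
                angle = rng.normal(0.0, sigma_deg)
                rotated = rotate(image, angle, reshape=False, mode="nearest")
                label = int(classifier(rotated))        # classifier: array -> class label
                counts[label] = counts.get(label, 0) + 1
            return max(counts, key=counts.get)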

    Toward reliability in the NISQ era: robust interval guarantee for quantum measurements on approximate states

    Near-term quantum computation holds potential across multiple application domains. However, imperfect preparation and evolution of states due to algorithmic and experimental shortcomings, characteristic of near-term implementations, would typically result in measurement outcomes deviating from the ideal setting. It is thus crucial for any near-term application to quantify and bound these output errors. We address this need by deriving robustness intervals which are guaranteed to contain the output in the ideal setting. The first type of interval is based on formulating robustness bounds as semidefinite programs, and uses only the first moment and the fidelity to the ideal state. Furthermore, we consider higher statistical moments of the observable and generalize bounds for pure states based on the non-negativity of Gram matrices to mixed states, thus enabling their applicability in the NISQ era where noisy scenarios are prevalent. Finally, we demonstrate our results in the context of the variational quantum eigensolver (VQE) on noisy and noiseless simulations.
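
    For comparison, a much cruder interval than the SDP- and Gram-matrix-based bounds described above can be obtained from a fidelity estimate alone, via Hoelder's inequality and the Fuchs-van de Graaf inequality: |<A>_rho - <A>_sigma| <= ||A||_inf * ||rho - sigma||_1 <= 2 * ||A||_inf * sqrt(1 - F^2), where F is the Uhlmann fidelity between the noisy and ideal states. The sketch below implements only this loose bound; the observable and numbers in the example are illustrative assumptions.

        # Crude fidelity-based robustness interval for an expectation value (not the paper's bounds).
        import numpy as np

        def crude_robustness_interval(observable, noisy_expectation, fidelity_lower_bound):
            """Interval guaranteed to contain the ideal expectation value, given a fidelity lower bound."""
            op_norm = np.max(np.abs(np.linalg.eigvalsh(observable)))   # operator norm of Hermitian A
            eps = 2.0 * op_norm * np.sqrt(max(0.0, 1.0 - fidelity_lower_bound ** 2))
            return noisy_expectation - eps, noisy_expectation + eps

        # e.g. a Pauli-Z expectation of 0.8 measured on a state with fidelity >= 0.95 to the ideal one
        Z = np.diag([1.0, -1.0])
        print(crude_robustness_interval(Z, 0.8, 0.95))   # roughly (0.18, 1.42)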

    Estimates of linkage disequilibrium and effective population size in rainbow trout

    Background: The use of molecular genetic technologies for broodstock management and selective breeding of aquaculture species is becoming increasingly more common with the continued development of genome tools and reagents. Several laboratories have produced genetic maps for rainbow trout to aid in the identification of loci affecting phenotypes of interest. These maps have resulted in the identification of many quantitative/qualitative trait loci affecting phenotypic variation in traits associated with albinism, disease resistance, temperature tolerance, sex determination, embryonic development rate, spawning date, condition factor, and growth. Unfortunately, the elucidation of the precise allelic variation and/or genes underlying phenotypic diversity has yet to be achieved in this species, which has low marker densities and lacks a whole-genome reference sequence. Experimental designs which integrate segregation analyses with linkage disequilibrium (LD) approaches facilitate the discovery of genes affecting important traits. To date the extent of LD has been characterized for humans and several agriculturally important livestock species but not for rainbow trout.
    Results: We observed that the level of LD between syntenic loci decayed rapidly at distances greater than 2 cM, which is similar to observations of LD in other agriculturally important species including cattle, sheep, pigs, and chickens. However, in some cases significant LD was also observed up to 50 cM. Our estimate of effective population size based on genome-wide estimates of LD for the NCCCWA broodstock population was 145, indicating that this population will respond well to high selection intensity. However, the range of effective population size based on individual chromosomes was 75.51-203.35, possibly indicating that suites of genes on each chromosome are disproportionately under selection pressures.
    Conclusions: Our results indicate that large numbers of markers, more than are currently available for this species, will be required to enable the use of genome-wide integrated mapping approaches aimed at identifying genes of interest in rainbow trout.
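
    LD-based estimates of effective population size commonly rest on Sved's (1971) approximation E[r^2] ≈ 1/(1 + 4*Ne*c) for loci separated by recombination fraction c. The sketch below simply inverts that relation for a single distance class; it omits refinements such as the per-chromosome estimation discussed above, and the example values are illustrative assumptions rather than figures from the study.

        # Invert Sved's (1971) approximation E[r^2] = 1 / (1 + 4*Ne*c) to estimate Ne.
        def effective_population_size(mean_r2, c_morgans):
            """Effective population size implied by mean r^2 between loci c Morgans apart."""
            return (1.0 / mean_r2 - 1.0) / (4.0 * c_morgans)

        # e.g. mean r^2 of 0.05 between markers ~0.02 Morgans (2 cM) apart
        print(effective_population_size(0.05, 0.02))   # ~237.5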

    The Potential of Physical Exercise to Mitigate Radiation Damage—A Systematic Review

    There is a need to investigate new countermeasures against the detrimental effects of ionizing radiation as deep space exploration missions are on the horizon. Objective: In this systematic review, the effects of physical exercise upon ionizing radiation-induced damage were evaluated. Methods: Systematic searches were performed in Medline, Embase, the Cochrane Library, and the databases from space agencies. Of 2,798 publications that were screened, 22 studies contained relevant data that were further extracted and analyzed. Risk of bias of the included studies was assessed. Due to the high level of heterogeneity, meta-analysis was not performed. Five outcome groups were assessed by calculating Hedges' g effect sizes and visualized using effect size plots. Results: Exercise decreased radiation-induced DNA damage, oxidative stress, and inflammation, while increasing antioxidant activity. Although the results were highly heterogeneous, there was evidence for a beneficial effect of exercise in cellular, clinical, and functional outcomes. Conclusions: Out of 72 outcomes, 68 showed a beneficial effect of physical training when exposed to ionizing radiation. As the first systematic review to investigate a potential protective mechanism of physical exercise against radiation effects, the current findings may help inform medical capabilities of human spaceflight and may also be relevant for terrestrial clinical care such as radiation oncology.
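
    For reference, Hedges' g is the standardized mean difference between two groups with a small-sample bias correction; the sketch below shows the standard formula as a generic illustration (it is not the review's extraction or plotting code, and the example numbers are hypothetical).

        # Standard Hedges' g for two independent groups, with the small-sample correction J.
        import math

        def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
            """Bias-corrected standardized mean difference between two groups."""
            df = n1 + n2 - 2
            pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
            cohens_d = (mean1 - mean2) / pooled_sd
            correction = 1.0 - 3.0 / (4.0 * df - 1.0)   # Hedges-Olkin small-sample correction
            return correction * cohens_d

        # e.g. exercised vs. sedentary irradiated groups with hypothetical summary statistics
        print(hedges_g(2.1, 0.8, 12, 3.0, 0.9, 12))   # ~ -1.02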

    Children and older adults exhibit distinct sub-optimal cost-benefit functions when preparing to move their eyes and hands

    "© 2015 Gonzalez et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited"Numerous activities require an individual to respond quickly to the correct stimulus. The provision of advance information allows response priming but heightened responses can cause errors (responding too early or reacting to the wrong stimulus). Thus, a balance is required between the online cognitive mechanisms (inhibitory and anticipatory) used to prepare and execute a motor response at the appropriate time. We investigated the use of advance information in 71 participants across four different age groups: (i) children, (ii) young adults, (iii) middle-aged adults, and (iv) older adults. We implemented 'cued' and 'non-cued' conditions to assess age-related changes in saccadic and touch responses to targets in three movement conditions: (a) Eyes only; (b) Hands only; (c) Eyes and Hand. Children made less saccade errors compared to young adults, but they also exhibited longer response times in cued versus non-cued conditions. In contrast, older adults showed faster responses in cued conditions but exhibited more errors. The results indicate that young adults (18 -25 years) achieve an optimal balance between anticipation and execution. In contrast, children show benefits (few errors) and costs (slow responses) of good inhibition when preparing a motor response based on advance information; whilst older adults show the benefits and costs associated with a prospective response strategy (i.e., good anticipation)

    Sociocognitive perspectives in strategic management

    How a firm is perceived has implications for strategy formulation, strategy implementation, and firm outcomes. However, strategic management researchers have traditionally devoted less attention to theories that address these perceptual implications. This special topic forum (STF) includes six articles that use a sociocognitive lens to help expand our theoretical understanding of strategy and strategic management. A sociocognitive perspective encompasses how observers perceive, interpret, and make sense of an organization’s strategic processes, actions, and related outcomes. The goal of this STF is therefore to advance theory in an integral domain of management scholarship while also augmenting well-known frameworks for teaching and practice. Specifically, the articles not only reflect the work that has taken place over the past three decades but also generate important theoretical and practical advances. We introduce each article, explain the key strategic questions it addresses, and offer suggestions for future research.