    Social enrichment reverses the isolation-induced deficits of neuronal plasticity in the hippocampus of male rats

    Environmental enrichment is known to improve brain plasticity and protect synaptic function from negative insults. In the present study, we used exposure to social enrichment to ameliorate the deficits observed in post-weaning isolated male rats, in which neurotrophic factors, neurogenesis, and neuronal dendritic trees and spines are markedly altered in the hippocampus. After 4 weeks of post-weaning social isolation followed by 4 weeks of reunion, neuronal growth markers and neuronal morphology were evaluated using different experimental approaches. Social enrichment restored the expression of the BDNF, NGF, and Arc genes, which had been reduced in the whole hippocampus of socially isolated rats. This effect was paralleled by an increase in the density of dendritic spines and improvements in their morphology, as well as in the dendritic arborisation of granule cells of the dentate gyrus. These changes were associated with a marked increase in neuronal proliferation and neurogenesis in the same hippocampal subregion, both of which had been reduced by social isolation stress. These results further suggest that exposure to social enrichment, by abolishing the negative effects of social isolation stress on hippocampal plasticity, may improve neuronal resilience, with a beneficial effect on cognitive function.

    General bounds on non-standard neutrino interactions

    We derive model-independent bounds on production and detection non-standard neutrino interactions (NSI). We find that the constraints on the NSI parameters range from O(10^{-2}) to O(10^{-1}). Furthermore, we review and update the constraints on matter NSI. We conclude that the bounds on production and detection NSI are generally one order of magnitude stronger than their matter counterparts.
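
    As background on what the bounded quantities are: the NSI literature commonly parameterizes these interactions through dimensionless couplings multiplying effective four-fermion operators. The form below is the matter-NSI operator in a widely used convention, quoted here only as context, since the abstract itself does not display its conventions:

```latex
% Effective operator commonly used to parameterize matter NSI; the
% dimensionless couplings \varepsilon_{\alpha\beta}^{fP} are the
% quantities on which the bounds are placed.
\mathcal{L}_{\mathrm{NSI}}
  = -2\sqrt{2}\, G_F\, \varepsilon_{\alpha\beta}^{fP}\,
    \left(\bar{\nu}_\alpha \gamma^\mu P_L \nu_\beta\right)
    \left(\bar{f}\, \gamma_\mu P f\right),
\qquad f \in \{e, u, d\}, \quad P \in \{P_L, P_R\}
```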

    Security Evaluation of Support Vector Machines in Adversarial Environments

    Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering. However, if SVMs are to be incorporated in real-world security systems, they must be able to cope with attack patterns that can either mislead the learning algorithm (poisoning), evade detection (evasion), or gain information about the classifier's internal parameters (privacy breaches). The main contributions of this chapter are twofold. First, we introduce a formal general framework for the empirical evaluation of the security of machine-learning systems. Second, according to our framework, we demonstrate the feasibility of evasion, poisoning, and privacy attacks against SVMs in real-world security problems. For each attack technique, we evaluate its impact and discuss whether (and how) it can be countered through an adversary-aware design of SVMs. Our experiments are easily reproducible thanks to open-source code that we have made available, together with all the employed datasets, on a public repository.
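
    To make the evasion setting concrete, here is a minimal sketch of a gradient-style evasion attack on a linear SVM. It is an illustrative toy under our own assumptions (synthetic data, a fixed step size, a simple stopping rule), not the chapter's code; the authors' open-source implementation covers the full attack suite.

```python
# Sketch: shift a malicious sample against the weight vector of a linear
# SVM until its decision score turns negative (i.e., it evades detection).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
clf = LinearSVC(C=1.0, dual=False).fit(X, y)

def evade(x, clf, step=0.1, max_iter=100):
    """Move x along -w until the SVM classifies it as benign."""
    w = clf.coef_.ravel()
    direction = w / np.linalg.norm(w)
    x = x.astype(float).copy()
    for _ in range(max_iter):
        if clf.decision_function(x.reshape(1, -1))[0] < 0:
            break                     # crossed the boundary: evasion done
        x -= step * direction         # steepest descent on f(x) = w.x + b
    return x

x_mal = X[y == 1][0]                  # a sample from the "malicious" class
x_adv = evade(x_mal, clf)
print("score before:", clf.decision_function(x_mal.reshape(1, -1))[0])
print("score after: ", clf.decision_function(x_adv.reshape(1, -1))[0])
```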

    Feature-Guided Black-Box Safety Testing of Deep Neural Networks

    Despite the improved accuracy of deep neural networks, the discovery of adversarial examples has raised serious safety concerns. Most existing approaches for crafting adversarial examples require some knowledge (architecture, parameters, etc.) of the network at hand. In this paper, we focus on image classifiers and propose a feature-guided black-box approach to test the safety of deep neural networks that requires no such knowledge. Our algorithm employs object detection techniques such as SIFT (Scale-Invariant Feature Transform) to extract features from an image. These features are converted into a mutable saliency distribution, where high probability is assigned to pixels that affect the composition of the image with respect to the human visual system. We formulate the crafting of adversarial examples as a two-player turn-based stochastic game, where the first player's objective is to minimise the distance to an adversarial example by manipulating the features, and the second player can be cooperative, adversarial, or random. We show that, theoretically, the two-player game can converge to the optimal strategy, and that the optimal strategy represents a globally minimal adversarial image. For Lipschitz networks, we also identify conditions under which it can be guaranteed that no adversarial examples exist. Using Monte Carlo tree search, we gradually explore the game state space to search for adversarial examples. Our experiments show that, despite the black-box setting, manipulations guided by a perception-based saliency distribution are competitive with state-of-the-art methods that rely on white-box saliency matrices or sophisticated optimization procedures. Finally, we show how our method can be used to evaluate the robustness of neural networks in safety-critical applications such as traffic sign recognition in self-driving cars.
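
    As a concrete illustration of the first step, the sketch below builds a pixel-level saliency distribution from SIFT keypoints. It is a rough approximation under stated assumptions (OpenCV >= 4.4 with SIFT in the main module, keypoint response used as the saliency weight, a hypothetical input file), not the paper's implementation:

```python
# Sketch: turn SIFT keypoints into a probability distribution over pixels,
# so that perturbations can be sampled where the image's features are.
import cv2
import numpy as np

def saliency_distribution(gray):
    sift = cv2.SIFT_create()
    keypoints = sift.detect(gray, None)
    sal = np.zeros(gray.shape, dtype=np.float64)
    for kp in keypoints:
        col = min(int(round(kp.pt[0])), gray.shape[1] - 1)
        row = min(int(round(kp.pt[1])), gray.shape[0] - 1)
        sal[row, col] += kp.response        # stronger feature -> more mass
    if sal.sum() == 0:                      # no keypoints: fall back to uniform
        return np.full(gray.shape, 1.0 / sal.size)
    return sal / sal.sum()

gray = cv2.imread("stop_sign.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
dist = saliency_distribution(gray)

# One "move" of the first player: sample a pixel to manipulate in
# proportion to its saliency.
rng = np.random.default_rng(0)
flat = rng.choice(dist.size, p=dist.ravel())
row, col = np.unravel_index(flat, dist.shape)
print("perturb pixel at", (row, col))
```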

    Is Feature Selection Secure against Training Data Poisoning?

    Learning in adversarial settings is becoming an important task for application domains where attackers may inject malicious data into the training set to subvert the normal operation of data-driven technologies. Feature selection has been widely used in machine learning for security applications to improve generalization and computational efficiency, although it is not clear whether its use may be beneficial or even counterproductive when training data are poisoned by intelligent attackers. In this work, we shed light on this issue by providing a framework to investigate the robustness of popular feature selection methods, including LASSO, ridge regression, and the elastic net. Our results on malware detection show that feature selection methods can be significantly compromised under attack (we can reduce LASSO to an almost random choice of feature sets by carefully inserting fewer than 5% poisoned training samples), highlighting the need for specific countermeasures.
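
    The instability the paper reports can be reproduced in miniature. The sketch below injects 5% random points with flipped targets, an assumption made to keep the example short; the paper's poisoning samples are carefully optimized and thus far more damaging:

```python
# Toy illustration: a small fraction of poisoned points changes the
# feature set selected by LASSO on a synthetic regression task.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       noise=1.0, random_state=0)
clean = set(np.flatnonzero(Lasso(alpha=1.0).fit(X, y).coef_))

rng = np.random.default_rng(0)
n_poison = int(0.05 * len(X))                 # 5% of the training set
X_p = rng.normal(scale=X.std(), size=(n_poison, X.shape[1]))
y_p = -rng.choice(y, size=n_poison)           # adversarially flipped targets
X2, y2 = np.vstack([X, X_p]), np.concatenate([y, y_p])
poisoned = set(np.flatnonzero(Lasso(alpha=1.0).fit(X2, y2).coef_))

jaccard = len(clean & poisoned) / max(len(clean | poisoned), 1)
print("features before:", sorted(clean))
print("features after: ", sorted(poisoned))
print("selection stability (Jaccard):", round(jaccard, 2))
```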

    Cyber peacekeeping operations and the regulation of the use of lethal force

    Peacekeeping is an essential tool at the disposal of the United Nations for the maintenance of international peace and security. The growing relevance of cyber technologies presents an opportunity to adapt peacekeeping to the challenges of a rapidly evolving security landscape. This article introduces the notion of "cyber-peacekeeping," defined as the incorporation and use of cyber capabilities by peacekeepers. It discusses the legal basis for cyber-peacekeeping and the foundational principles of consent, impartiality, and the use of defensive force. The article examines the use of lethal force by cyber-peacekeepers under the law of armed conflict paradigm. It considers the circumstances under which cyber-peacekeepers become a party to an international or non-international armed conflict, whether they become combatants, and under what circumstances they directly participate in hostilities. The article also considers the use of lethal force under the law enforcement paradigm. In this respect, it discusses the applicability of International Human Rights Law to cyber-peacekeeping, as well as its extraterritorial application, by focusing on the right to life. It then examines the application of the requirements of necessity, proportionality, and precautions to the use of lethal force by cyber-peacekeepers. The article is one of the first systematic expositions of how international law regulates the use of lethal force in the course of cyber-peacekeeping, but its findings can also apply to traditional peacekeeping.

    DeltaPhish: Detecting Phishing Webpages in Compromised Websites

    The large-scale deployment of modern phishing attacks relies on the automatic exploitation of vulnerable websites in the wild, to maximize profit while hindering attack traceability, detection, and blacklisting. To the best of our knowledge, this is the first work that specifically leverages this adversarial behavior for detection purposes. We show that phishing webpages can be accurately detected by highlighting HTML code and visual differences with respect to other (legitimate) pages hosted within a compromised website. Our system, named DeltaPhish, can be installed as part of a web application firewall to detect the presence of anomalous content on a website after compromise, and eventually prevent access to it. DeltaPhish is also robust against adversarial attempts in which the HTML code of the phishing page is carefully manipulated to evade detection. We empirically evaluate it on more than 5,500 webpages collected in the wild from compromised websites, showing that it is capable of detecting more than 99% of phishing webpages while misclassifying less than 1% of legitimate pages. We further show that the detection rate remains higher than 70% even under very sophisticated attacks carefully designed to evade our system.
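
    The core intuition can be sketched in a few lines: compare a candidate page's HTML structure against the legitimate pages hosted on the same site and flag large deviations. The tag-frequency profile and the 0.5 threshold below are assumptions for illustration only; DeltaPhish itself combines richer HTML and visual features with a trained classifier:

```python
# Sketch: flag a page as anomalous when its tag-frequency profile
# diverges from the other pages on the same (compromised) website.
from collections import Counter
from html.parser import HTMLParser

class TagCounter(HTMLParser):
    """Count opening tags to build a coarse structural profile of a page."""
    def __init__(self):
        super().__init__()
        self.tags = Counter()
    def handle_starttag(self, tag, attrs):
        self.tags[tag] += 1

def tag_profile(html):
    parser = TagCounter()
    parser.feed(html)
    total = sum(parser.tags.values()) or 1
    return {t: c / total for t, c in parser.tags.items()}

def delta(a, b):
    """L1 distance between two tag-frequency profiles."""
    return sum(abs(a.get(k, 0.0) - b.get(k, 0.0)) for k in set(a) | set(b))

legit_pages = ["<html><body><h1>News</h1><p>text</p></body></html>"]
candidate = "<html><body><form><input><input></form></body></html>"

profiles = [tag_profile(p) for p in legit_pages]
score = min(delta(tag_profile(candidate), p) for p in profiles)
print("phishing?", score > 0.5, "(delta =", round(score, 2), ")")
```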

    Empirical assessment of generating adversarial configurations for software product lines

    Software product line (SPL) engineering allows the derivation of products tailored to stakeholders' needs through the setting of a large number of configuration options. Unfortunately, options and their interactions create a huge configuration space which is either intractable or too costly to explore exhaustively. Instead of covering all products, machine learning (ML) approximates the set of acceptable products (e.g., successful builds, passing tests) out of a training set (a sample of configurations). However, ML techniques can make prediction errors, yielding non-acceptable products and wasting time, energy, and other resources. We apply adversarial machine learning techniques to the world of SPLs and craft new configurations that appear acceptable but are not, and vice versa. This makes it possible to diagnose prediction errors and take appropriate action. We develop two adversarial configuration generators, built on top of state-of-the-art attack algorithms, that are capable of synthesizing configurations that are both adversarial and conform to logical constraints. We empirically assess our generators within two case studies: an industrial video synthesizer (MOTIV) and an industry-strength, open-source Web-app configurator (JHipster). For the two cases, our attacks yield (up to) a 100% misclassification rate without sacrificing the logical validity of adversarial configurations. This work lays the foundations of a quality assurance framework for ML-based SPLs.
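
    A stripped-down version of the idea, under our own assumptions (a toy feature-model constraint, a stand-in classifier, and random search instead of the gradient-based attack algorithms the paper builds on), looks like this:

```python
# Sketch: mutate a Boolean configuration until the model's verdict flips,
# while keeping the configuration valid under the feature-model constraints.
import random

def is_valid(cfg):
    # Toy constraint (assumption): enabling "cache" requires "db".
    return (not cfg["cache"]) or cfg["db"]

def predict_acceptable(cfg):
    # Stand-in for an ML model trained on build/test outcomes (assumption).
    return cfg["db"] and not cfg["debug"]

def adversarial_config(cfg, max_flips=100, seed=0):
    rng = random.Random(seed)
    target = not predict_acceptable(cfg)    # we want the verdict to flip
    cfg = dict(cfg)
    for _ in range(max_flips):
        opt = rng.choice(sorted(cfg))
        cfg[opt] = not cfg[opt]
        if not is_valid(cfg):
            cfg[opt] = not cfg[opt]         # revert: violates constraints
        elif predict_acceptable(cfg) == target:
            return cfg                      # valid, yet prediction flipped
    return None

start = {"db": True, "cache": True, "debug": False}
print(adversarial_config(start))
```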

    Societal issues in machine learning: When learning from data is not enough

    It has been argued that Artificial Intelligence (AI) is experiencing a fast process of commodification. Such a characterization is in the interest of big IT companies, but it also correctly reflects the current industrialization of AI. This phenomenon means that AI systems and products are reaching society at large and, therefore, that societal issues related to the use of AI and Machine Learning (ML) can no longer be ignored. Designing ML models from this human-centered perspective means incorporating human-relevant requirements such as safety, fairness, privacy, and interpretability, but also considering broad societal issues such as ethics and legislation. These are essential aspects for fostering the acceptance of ML-based technologies, as well as for ensuring compliance with an evolving body of legislation concerning the impact of digital technologies on ethically and privacy-sensitive matters. The ESANN special session for which this tutorial acts as an introduction aims to showcase the state of the art on these increasingly relevant topics among ML theoreticians and practitioners. For this purpose, we welcomed both solid contributions and promising preliminary results showing the potential, limitations, and challenges of new ideas, as well as refinements and hybridizations among different fields of research, in applying ML and related approaches to real-world problems involving societal issues.