133,402 research outputs found

    Reconceptualising adaptation to climate change as part of pathways of change and response

    The need to adapt to climate change is now widely recognised, as evidence of its impacts on social and natural systems grows and greenhouse gas emissions continue unabated. Yet efforts to adapt to climate change, as reported in the literature over the last decade and in selected case studies, have not led to substantial rates of implementation of adaptation actions, despite considerable investments in adaptation science. Moreover, implemented actions have been mostly incremental and focused on proximate causes; there are far fewer reports of more systemic or transformative actions. We found that the nature and effectiveness of responses were strongly influenced by framing. Recent decision-oriented approaches that aim to overcome this situation are framed within a "pathways" metaphor to emphasise the need for robust decision making within adaptive processes in the face of uncertainty and inter-temporal complexity. However, to date, such "adaptation pathways" approaches have mostly focused on contexts with clearly identified decision-makers and unambiguous goals; as a result, they generally assume that prevailing governance regimes are conducive to adaptation and hence constrain responses to proximate causes of vulnerability. In this paper, we explore a broader conceptualisation of "adaptation pathways" that draws on "pathways thinking" in the sustainable development domain to consider the implications of path dependency, interactions between adaptation plans, vested interests and global change, and situations where values, interests, or institutions constrain societal responses to change. This reconceptualisation of adaptation pathways aims to inform decision makers about integrating incremental actions on proximate causes with the transformative aspects of societal change. Case studies illustrate what this might entail. The paper ends with a call for further exploration of theory, methods and procedures to operationalise this broader conceptualisation of adaptation.

    Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks

    Malware still constitutes a major threat in the cybersecurity landscape, due in part to the widespread use of infection vectors such as documents. These infection vectors hide embedded malicious code from victim users, facilitating the use of social engineering techniques to infect their machines. Research has shown that machine-learning algorithms provide effective detection mechanisms against such threats, but the existence of an arms race in adversarial settings has recently challenged such systems. In this work, we focus on malware embedded in PDF files as a representative case of this arms race. We start by providing a comprehensive taxonomy of the different approaches used to generate PDF malware and of the corresponding learning-based detection systems. We then categorize threats specifically targeted against learning-based PDF malware detectors, using a well-established framework in the field of adversarial machine learning. This framework allows us to categorize known vulnerabilities of learning-based PDF malware detectors and to identify novel attacks that may threaten such systems, along with the potential defense mechanisms that can mitigate the impact of such threats. We conclude the paper by discussing how these findings highlight promising research directions towards tackling the more general challenge of designing robust malware detectors in adversarial settings.
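To make the idea of a learning-based PDF malware detector concrete, here is a minimal sketch. The keyword features, weights, and threshold are invented for illustration, not taken from any system the abstract cites; real detectors extract far richer structural features and learn the weights from data.

```python
# Toy learning-based PDF malware detector: count structural keywords that
# are commonly abused to embed malicious code, then apply a linear score.
# Feature set, weights, and threshold are hypothetical.

SUSPICIOUS_KEYWORDS = ["/JavaScript", "/JS", "/OpenAction", "/Launch", "/EmbeddedFile"]

def extract_features(pdf_bytes):
    """Count occurrences of each suspicious keyword in the raw file."""
    text = pdf_bytes.decode("latin-1", errors="ignore")
    return {kw: text.count(kw) for kw in SUSPICIOUS_KEYWORDS}

# A fixed linear scoring rule standing in for a trained classifier.
WEIGHTS = {"/JavaScript": 2.0, "/JS": 1.5, "/OpenAction": 1.0,
           "/Launch": 2.5, "/EmbeddedFile": 1.0}
THRESHOLD = 2.0

def is_malicious(pdf_bytes):
    feats = extract_features(pdf_bytes)
    score = sum(WEIGHTS[k] * v for k, v in feats.items())
    return score >= THRESHOLD

benign = b"%PDF-1.7 ... /Pages 2 0 R ..."
dropper = b"%PDF-1.7 ... /OpenAction << /S /JavaScript /JS (payload()) >>"
print(is_malicious(benign), is_malicious(dropper))
```

The arms race described above arises precisely because such feature counts are easy for an attacker to manipulate without changing the file's malicious behaviour.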

    Machine-assisted Cyber Threat Analysis using Conceptual Knowledge Discovery

    Over the last few years, computer networks have evolved into highly dynamic and interconnected environments, involving multiple heterogeneous devices and providing a myriad of services on top of them. This complex landscape has made it extremely difficult for security administrators to remain accurate and effective in protecting their systems against cyber threats. In this paper, we describe our vision and scientific posture on how artificial intelligence techniques and a smart use of security knowledge may assist system administrators in better defending their networks. To that end, we put forward a research roadmap involving three complementary axes, namely, (I) the use of FCA-based mechanisms for managing configuration vulnerabilities, (II) the exploitation of knowledge representation techniques for automated security reasoning, and (III) the design of a cyber threat intelligence mechanism as a CKDD process. Then, we describe a machine-assisted process for cyber threat analysis which provides a holistic perspective of how these three research axes are integrated.
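The "FCA-based mechanisms" in axis (I) refer to Formal Concept Analysis: from a binary context (objects vs. attributes), FCA derives formal concepts — maximal object/attribute pairs closed under a Galois connection. A minimal sketch follows; the host names and vulnerability identifiers are invented, not taken from the paper.

```python
# Minimal Formal Concept Analysis (FCA) sketch over a hypothetical
# host/vulnerability context: each formal concept is a pair (A, B) where
# A is exactly the set of hosts exposing every vulnerability in B, and
# B is exactly the set of vulnerabilities shared by every host in A.

from itertools import combinations

context = {
    "web01": {"CVE-A", "CVE-B"},
    "web02": {"CVE-A", "CVE-B", "CVE-C"},
    "db01":  {"CVE-B", "CVE-C"},
}

def extent(attrs):
    """All hosts exposing every vulnerability in attrs."""
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):
    """All vulnerabilities shared by every host in objs."""
    sets = [context[o] for o in objs]
    return set.intersection(*sets) if sets else set.union(*context.values())

# Enumerate concepts by closing every subset of hosts (fine at toy scale;
# dedicated algorithms such as NextClosure are used in practice).
concepts = set()
for r in range(len(context) + 1):
    for combo in combinations(context, r):
        b = intent(set(combo))   # shared vulnerabilities
        a = extent(b)            # maximal host set having all of them
        concepts.add((frozenset(a), frozenset(b)))

for a, b in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(a), "<->", sorted(b))
```

Grouping hosts by shared vulnerabilities this way is one plausible route from raw configuration data to the structured security knowledge the roadmap envisions.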

    Adversarial Detection of Flash Malware: Limitations and Open Issues

    During the past four years, Flash malware has become one of the most insidious threats to detect, with almost 600 critical vulnerabilities targeting Adobe Flash disclosed in the wild. Research has shown that machine learning can be successfully used to detect Flash malware by leveraging static analysis to extract information from the structure of the file or its bytecode. However, the robustness of Flash malware detectors against well-crafted evasion attempts - also known as adversarial examples - has never been investigated. In this paper, we propose a security evaluation of a novel, representative Flash detector that embeds a combination of the prominent static features employed by state-of-the-art tools. In particular, we discuss how to craft adversarial Flash malware examples, showing that it suffices to manipulate the corresponding source malware samples only slightly to evade detection. We then empirically demonstrate that popular defense techniques proposed to mitigate evasion attempts, including re-training on adversarial examples, may not always be sufficient to ensure robustness. We argue that this occurs when the feature vectors extracted from adversarial examples become indistinguishable from those of benign data, meaning that the given feature representation is intrinsically vulnerable. In this respect, we are the first to formally define and quantitatively characterize this vulnerability, highlighting when an attack can be countered by solely improving the security of the learning algorithm, or when it also requires considering additional features. We conclude the paper by suggesting alternative research directions to improve the security of learning-based Flash malware detectors.
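The claim that slight manipulations suffice to evade detection can be illustrated with a greedy feature-space evasion against a linear scorer: remove (e.g. by obfuscating) the highest-weight malicious features until the score drops below the threshold, counting how few edits are needed. The features, weights, and threshold below are invented for illustration, not the paper's detector.

```python
# Gradient-free evasion sketch against a hypothetical linear malware
# scorer: greedily obfuscate the highest-weight malicious features until
# the sample scores below the detection threshold.

weights = {"loadbytes": 3.0, "eval_call": 2.5, "long_string": 1.0,
           "doc_title": -0.5, "has_metadata": -0.5}
threshold = 2.0

def score(features):
    return sum(weights[f] * v for f, v in features.items())

def evade(features):
    """Greedy minimal-change evasion; returns modified sample and #edits."""
    x = dict(features)
    edits = 0
    # attack manipulable malicious features in order of decreasing weight
    for f in sorted((f for f in x if weights[f] > 0),
                    key=lambda f: -weights[f]):
        while x[f] > 0 and score(x) >= threshold:
            x[f] -= 1      # e.g. obfuscate one occurrence of the construct
            edits += 1
        if score(x) < threshold:
            break
    return x, edits

malware = {"loadbytes": 1, "eval_call": 1, "long_string": 2,
           "doc_title": 0, "has_metadata": 1}
print(score(malware))            # well above threshold before the attack
adv, n_edits = evade(malware)
print(score(adv), n_edits)       # evades after only a couple of edits
```

This is also why re-training on adversarial examples can fail: once the attacker's edited feature vector looks benign, no classifier over the same features can separate it.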

    MASTER’S PROJECT: CHALLENGING STRUCTURAL RACISM IN PHILANTHROPY THROUGH CREATIVE EXPRESSION AND DEEP LISTENING

    This capstone project is an account of a personal transformation journey that began in March 2017. It follows my deep and personal exploration of challenging systemic racism as I spoke with many leaders in the philanthropic and artistic communities. In addition, I created artwork to help incorporate and synthesize my emotions around white supremacy and to process what I was learning. The qualitative information gathered was abundant, and the supporting art-journaling technique was useful in translating it.

    Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning

    Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, has been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from earlier pioneering work on the security of non-deep learning algorithms up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms.
    Comment: Accepted for publication in Pattern Recognition, 201
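The simplest instance of the test-time attacks this survey covers is evasion of a linear classifier under a bounded perturbation. For f(x) = w·x + b, the worst-case L-infinity perturbation of size eps moves each feature against the sign of its weight, lowering the score by eps·Σ|wᵢ|. The weights and input below are illustrative numbers, not drawn from the survey.

```python
# Test-time evasion ("adversarial example") against a linear classifier.
# For f(x) = w.x + b, the optimal bounded perturbation is
# x' = x - eps * sign(w), which lowers the score by eps * sum(|w_i|).
# All numbers are illustrative.

w = [1.0, -2.0, 0.5]
b = -0.2

def f(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return (v > 0) - (v < 0)

def perturb(x, eps):
    """Shift each feature by eps against the sign of its weight."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

x = [0.6, 0.1, 0.4]             # classified positive: f(x) > 0
x_adv = perturb(x, eps=0.15)    # small, per-feature bounded change
print(f(x), f(x_adv))           # the perturbation flips the sign
```

The same sign-of-gradient step, applied to a deep network's loss gradient instead of fixed linear weights, gives the fast gradient sign method discussed in this line of work.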

    Scaling Success: Lessons from Adaptation Pilots in the Rainfed Regions of India

    "Scaling Success" examines how agricultural communities are adapting to the challenges posed by climate change through the lens of India's rainfed agriculture regions. Rainfed agriculture currently occupies 58 percent of India's cultivated land and accounts for up to 40 percent of its total food production. However, these regions face potential production losses of more than US$200 billion in rice, wheat, and maize by 2050 due to the effects of climate change. Unless action is taken soon at a large scale, farmers will see sharp decreases in revenue and yields. Rainfed regions across the globe have been an important focus for the first generation of adaptation projects, but to date, few have achieved a scale that can be truly transformational. Drawing on lessons learnt from 21 case studies of rainfed agriculture interventions, the report provides guidance on how to design, fund, and support adaptation projects that can achieve scale.

    Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS - a collection of Technical Notes Part 1

    This report provides an introduction and overview of the Technical Topic Notes (TTNs) produced in the Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS (Tigars) project. These notes aim to support the development and evaluation of autonomous vehicles. Part 1 addresses: Assurance (overview and issues), Resilience and Safety Requirements, Open Systems Perspective, and Formal Verification and Static Analysis of ML Systems. Part 2 addresses: Simulation and Dynamic Testing, Defence in Depth and Diversity, Security-Informed Safety Analysis, and Standards and Guidelines.