
    Refining the PoinTER “human firewall” pentesting framework

    Purpose: Penetration tests have become a valuable tool in the cyber security defence strategy, in terms of detecting vulnerabilities. Although penetration testing has traditionally focused on technical aspects, the field has started to realise the importance of the human in the organisation, and the need to ensure that humans are resistant to cyber-attacks. To achieve this, some organisations "pentest" their employees, testing their resilience and ability to detect and repel human-targeted attacks. In a previous paper we reported on PoinTER (Prepare TEst Remediate), a human pentesting framework tailored to the needs of SMEs. In this paper, we propose improvements to refine our framework. The improvements are based on a derived set of ethical principles that have been subjected to ethical scrutiny.

    Methodology: We conducted a systematic literature review of academic research, a review of actual hacker techniques, industry recommendations and official body advice related to social engineering techniques. To meet our requirement for an ethical human pentesting framework, we compiled a list of ethical principles from the research literature, which we used to filter out techniques deemed unethical.

    Findings: Drawing on social engineering techniques from academic research, the hacker community, industry recommendations and official body advice, and subjecting each technique to ethical inspection using a comprehensive list of ethical principles, we propose the refined, GDPR-compliant and privacy-respecting PoinTER Framework. The list of ethical principles, we suggest, could also inform ethical technical pentests.

    Originality: Previous work has considered penetration testing humans, but few have produced a comprehensive framework such as PoinTER. PoinTER has been rigorously derived from multiple sources and ethically scrutinised through inspection, using a comprehensive list of ethical principles derived from the research literature.
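    The methodology above filters candidate social-engineering techniques through a list of ethical principles, discarding any technique that violates one. A minimal sketch of that filtering idea, with entirely illustrative principles and technique attributes (none of these names or fields come from the paper), might look like:

```python
# Illustrative sketch: filter candidate human-pentest techniques against a
# set of ethical principles, keeping only techniques that satisfy all of
# them. The principles and technique fields below are hypothetical examples.

# Each principle is a predicate over a technique's (assumed) attributes.
PRINCIPLES = {
    "informed_org_consent": lambda t: t.get("org_consent", False),
    "no_lasting_harm": lambda t: not t.get("causes_distress", False),
    "privacy_respecting": lambda t: not t.get("collects_personal_data", False),
}

def ethically_permitted(technique: dict) -> bool:
    """A technique is kept only if every principle holds for it."""
    return all(check(technique) for check in PRINCIPLES.values())

candidates = [
    {"name": "simulated phishing email", "org_consent": True},
    {"name": "covert shoulder surfing", "org_consent": True,
     "collects_personal_data": True},
]

approved = [t["name"] for t in candidates if ethically_permitted(t)]
print(approved)  # → ['simulated phishing email']
```

    The same pattern (principles as predicates, conjunction over all of them) could, as the abstract suggests, also screen techniques for a technical pentest.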

    Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder

    Deep neural networks are vulnerable to backdoor attacks, where an adversary maliciously manipulates the model behavior by overlaying images with special triggers. Existing backdoor defense methods often require access to validation data and model parameters, which is impractical in many real-world applications, e.g., when the model is provided as a cloud service. In this paper, we address the practical task of blind backdoor defense at test time, in particular for black-box models. The true label of every test image needs to be recovered on the fly from the hard-label predictions of a suspicious model. Heuristic trigger search in image space, however, does not scale to complex triggers or high image resolutions. We circumvent this barrier by leveraging generic image generation models, and propose a framework of Blind Defense with Masked AutoEncoder (BDMAE). It uses the image structural similarity and label consistency between the test image and MAE restorations to detect possible triggers. The detection result is refined by considering the topology of triggers. We obtain a purified test image from the restorations for making predictions. Our approach is blind to the model architecture, trigger patterns and image benignity. Extensive experiments on multiple datasets with different backdoor attacks validate its effectiveness and generalizability. Code is available at https://github.com/tsun/BDMAE.
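    The core test-time idea (compare a suspicious image against a restoration, flag regions where they disagree, and purify the image by replacing those regions) can be sketched as follows. This is not the paper's BDMAE implementation: the restoration stub below is a trivial stand-in for a real masked autoencoder, and the patch-difference score is a crude proxy for the structural-similarity measure the abstract mentions.

```python
import numpy as np

def mae_restore(image: np.ndarray) -> np.ndarray:
    # Stub: a real system would run a pretrained masked autoencoder here.
    # We simulate "restoration" with the global mean, which cannot
    # reproduce a high-contrast, localized trigger patch.
    return np.full_like(image, image.mean())

def trigger_mask(image, restored, patch=4, thresh=0.2):
    """Mark patches whose mean absolute difference from the restoration
    exceeds a threshold -- a crude proxy for low structural similarity."""
    h, w = image.shape
    mask = np.zeros_like(image, dtype=bool)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            block = image[i:i + patch, j:j + patch]
            rblock = restored[i:i + patch, j:j + patch]
            if np.abs(block - rblock).mean() > thresh:
                mask[i:i + patch, j:j + patch] = True
    return mask

def purify(image, restored, mask):
    """Replace suspected trigger pixels with the restoration."""
    return np.where(mask, restored, image)

# Toy input: uniform background with a bright 4x4 "trigger" in one corner.
img = np.full((16, 16), 0.5)
img[:4, :4] = 1.0
restored = mae_restore(img)
mask = trigger_mask(img, restored)
clean = purify(img, restored, mask)
print(mask[:4, :4].all(), mask[8:, 8:].any())  # → True False
```

    The purified image would then be fed to the suspicious black-box model to recover a prediction, using only its hard-label outputs, as the abstract describes.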

    Towards a Framework for Managing Inconsistencies in Systems of Systems

    The growth in the complexity of software systems has led to a proliferation of systems that have been created independently to provide specific functions, such as activity tracking, household energy management or personal nutrition assistance. The runtime composition of these individual systems into Systems of Systems (SoSs) enables support for more sophisticated functionality that cannot be provided by individual constituent systems on their own. However, in order to realize the benefits of these functionalities it is necessary to address a number of challenges associated with SoSs, including, but not limited to, operational and managerial independence, geographic distribution of participating systems, evolutionary development, and emergent conflicting behavior that can occur due to interactions between the requirements of the participating systems. In this paper, we present a framework for conflict management in SoSs. The management of conflicting requirements involves four steps, namely (a) overlap detection, (b) conflict identification, (c) conflict diagnosis, and (d) conflict resolution based on the use of a utility function. The framework uses a Monitor-Analyze-Plan-Execute-Knowledge (MAPE-K) architectural pattern. In order to illustrate the work, we use an example SoS ecosystem designed to support food security at different levels of granularity.
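    The four-step conflict-management process the abstract lists can be sketched as a small pipeline, framed as the Analyze/Plan phases of a MAPE-K cycle. Every system name, requirement attribute and the utility function here are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of (a) overlap detection, (b) conflict identification,
# (c) conflict diagnosis, and (d) utility-based conflict resolution.

def overlaps(req_a, req_b):
    """(a) Overlap detection: both requirements target the same resource."""
    return req_a["resource"] == req_b["resource"]

def conflicting(req_a, req_b):
    """(b) Conflict identification: overlapping requirements demand
    incompatible actions on the shared resource."""
    return overlaps(req_a, req_b) and req_a["action"] != req_b["action"]

def diagnose(req_a, req_b):
    """(c) Conflict diagnosis: record the contested resource and parties."""
    return {"resource": req_a["resource"],
            "parties": [req_a["system"], req_b["system"]]}

def resolve(req_a, req_b, utility):
    """(d) Conflict resolution: keep the requirement with higher utility."""
    return max((req_a, req_b), key=utility)

# Example loosely inspired by the food-security scenario: an energy-manager
# and a food-storage system contesting a freezer's power budget.
r1 = {"system": "energy_manager", "resource": "freezer_power",
      "action": "reduce", "priority": 1}
r2 = {"system": "food_security", "resource": "freezer_power",
      "action": "maintain", "priority": 3}

if conflicting(r1, r2):
    diagnosis = diagnose(r1, r2)
    winner = resolve(r1, r2, utility=lambda r: r["priority"])
    print(diagnosis["resource"], winner["system"])
    # → freezer_power food_security
```

    In a full MAPE-K loop, the Monitor phase would feed observed requirements into this pipeline, and the Execute phase would enact the winning requirement, with the shared Knowledge component holding the utility function.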