
    The Impact of Procedural Security Countermeasures on Employee Security Behaviour: A Qualitative Study

    The growing number of information security breaches in organisations presents a serious risk to the confidentiality of personal and commercially sensitive data. Current research indicates that humans are the weakest link in the information security chain and the root cause of numerous security incidents in organisations. Based on gaps in the literature, this study investigates how procedural security countermeasures affect employee security behaviour. Data for this study were collected in organisations located in the United States and Ireland. Results suggest that procedural security countermeasures tend to promote security-cautious behaviour in organisations, while their absence tends to lead to non-compliant behaviour.

    Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371)

    The study of learning in adversarial environments is an emerging discipline at the juncture between machine learning and computer security that raises new questions within both fields. The interest in learning-based methods for security and system design applications stems from the high degree of complexity of the phenomena underlying the security and reliability of computer systems. As it becomes increasingly difficult to achieve the desired properties by design alone, learning methods are being used to obtain a better understanding of the varied data collected from these complex systems. However, learning approaches can be co-opted or evaded by adversaries, who adapt their behaviour to counter them. To date, there has been limited research into learning techniques that are resilient to attacks with provable robustness guarantees, making the design of secure learning-based systems a rich open research area with many challenges. The Perspectives Workshop, "Machine Learning Methods for Computer Security", was convened to bring together interested researchers from both the computer security and machine learning communities to discuss techniques, challenges, and future research directions for secure learning and learning-based security applications. This workshop featured twenty-two invited talks from leading researchers within the secure learning community, covering topics in adversarial learning, game-theoretic learning, collective classification, privacy-preserving learning, security evaluation metrics, digital forensics, authorship identification, adversarial advertisement detection, learning for offensive security, and data sanitization. The workshop also featured workgroup sessions organized into three topics: machine learning for computer security, secure learning, and future applications of secure learning.