12,433 research outputs found

    Classification of logical vulnerability based on group attacking method

    New advancements in e-commerce software technology have brought many benefits, yet the development process still faces a variety of problems from the design phase through the implementation phase. Software faults and defects increase reliability and security issues, which is why a solution is required to address them. The paper addresses the lack of a clear classification of logical vulnerabilities for component-based web applications by identifying the attack group method and categorizing two different types of vulnerabilities in component-based web applications. A new classification scheme for the logical group attack method is proposed and developed using an a posteriori empirical methodology.
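
    The abstract does not spell out the concrete structure of the classification scheme; the sketch below is only a minimal, hypothetical illustration in Python of how logical vulnerabilities in a component-based web application might be bucketed by attack-group method. All class, field, and category names are assumptions for illustration, not the paper's actual taxonomy.

```python
from dataclasses import dataclass
from enum import Enum, auto


class AttackGroup(Enum):
    """Hypothetical attack-group categories (illustrative only)."""
    AUTHENTICATION_BYPASS = auto()
    WORKFLOW_VIOLATION = auto()


@dataclass
class LogicalVulnerability:
    component: str      # web-application component where the flaw lives
    description: str    # short human-readable summary of the flaw
    group: AttackGroup  # attack-group method the flaw is classified under


def classify_by_group(vulns):
    """Bucket vulnerabilities by their attack-group method."""
    buckets = {}
    for v in vulns:
        buckets.setdefault(v.group, []).append(v)
    return buckets


if __name__ == "__main__":
    catalog = [
        LogicalVulnerability("checkout", "price manipulated after cart validation",
                             AttackGroup.WORKFLOW_VIOLATION),
        LogicalVulnerability("login", "password-reset token accepted twice",
                             AttackGroup.AUTHENTICATION_BYPASS),
    ]
    for group, items in classify_by_group(catalog).items():
        print(group.name, [v.component for v in items])
```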

    Model the System from Adversary Viewpoint: Threats Identification and Modeling

    Security attacks are hard to understand and are often described with unfriendly and limited details, making it difficult for security experts and analysts to create intelligible security specifications: for instance, to explain why (the attack objective), what (the targeted system assets, goals, etc.), and how (the attack method) the adversary achieved the attack goals. In this paper we introduce a security attack meta-model for our SysML-Sec framework, developed to improve threat identification and modeling through the explicit representation of security concerns with knowledge representation techniques. The proposed meta-model enables the specification of these concerns through ontological concepts that define the semantics of the security artifacts introduced using SysML-Sec diagrams. It also enables representing the relationships that tie several such concepts together. This representation is then used for reasoning about the knowledge introduced by system designers as well as security experts through the graphical environment of the SysML-Sec framework. Comment: In Proceedings AIDP 2014, arXiv:1410.322
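
    The actual meta-model is defined by the paper's ontology and SysML-Sec diagrams; purely as an illustration of the why/what/how triad described above, the sketch below encodes those three concerns as plain Python dataclasses. The concept and field names are assumptions, not the framework's artifacts.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Asset:
    """'What': a system asset or goal the adversary targets."""
    name: str


@dataclass
class AttackMethod:
    """'How': the technique used to reach the objective."""
    name: str
    steps: List[str] = field(default_factory=list)


@dataclass
class Attack:
    """'Why': the adversary's objective, linked to targeted assets and methods."""
    objective: str
    targets: List[Asset]
    methods: List[AttackMethod]

    def summary(self) -> str:
        return (f"Objective: {self.objective}; "
                f"targets: {[a.name for a in self.targets]}; "
                f"methods: {[m.name for m in self.methods]}")


if __name__ == "__main__":
    attack = Attack(
        objective="exfiltrate customer records",
        targets=[Asset("customer database")],
        methods=[AttackMethod("SQL injection", ["probe inputs", "extract rows"])],
    )
    print(attack.summary())
```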

    Towards quantum enhanced adversarial robustness in machine learning

    Machine learning algorithms are powerful tools for data-driven tasks such as image classification and feature detection; however, their vulnerability to adversarial examples - input samples manipulated to fool the algorithm - remains a serious challenge. The integration of machine learning with quantum computing has the potential to yield tools offering not only better accuracy and computational efficiency but also superior robustness against adversarial attacks. Indeed, recent work has employed quantum mechanical phenomena to defend against adversarial attacks, spurring the rapid development of the field of quantum adversarial machine learning (QAML) and potentially yielding a new source of quantum advantage. Despite promising early results, challenges remain in building robust real-world QAML tools. In this review we discuss recent progress in QAML and identify key challenges. We also suggest future research directions which could determine the route to practicality for QAML approaches as quantum computing hardware scales up and noise levels are reduced. Comment: 10 pages, 4 figures
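
    As background for the notion of adversarial examples mentioned above (and not drawn from the QAML review itself), the sketch below applies the classical fast gradient sign method to a toy logistic-regression classifier: the input is nudged in the direction that increases the loss, which tends to flip the prediction. Model, data, and epsilon are illustrative assumptions.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def fgsm_perturb(x, y, w, b, eps):
    """Fast gradient sign method for a logistic-regression classifier.

    Takes one signed gradient step of size eps in the direction that
    increases the cross-entropy loss at (x, y).
    """
    p = sigmoid(np.dot(w, x) + b)     # predicted probability of class 1
    grad_x = (p - y) * w              # d(loss)/dx for cross-entropy loss
    return x + eps * np.sign(grad_x)  # adversarially perturbed input


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=8), 0.0    # toy model parameters
    x, y = rng.normal(size=8), 1      # a clean sample with label 1
    x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
    clean = sigmoid(np.dot(w, x) + b)
    adv = sigmoid(np.dot(w, x_adv) + b)
    print(f"clean p(y=1)={clean:.3f}  adversarial p(y=1)={adv:.3f}")
```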