
    Explanation and trust: what to tell the user in security and AI?

    There is a common problem in artificial intelligence (AI) and information security. In AI, an expert system needs to be able to justify and explain a decision to the user. In information security, experts need to be able to explain to the public why a system is secure. In both cases, the goal of explanation is to acquire or maintain the users' trust. In this paper, we investigate the relation between explanation and trust in the context of computing science. The analysis draws on a literature study and concept analysis, using elements from system theory as well as actor-network theory. We apply the conceptual framework to both AI and information security, and show its benefit for both fields by means of examples. The main focus is on expert systems (AI) and electronic voting systems (security). Finally, we discuss the consequences of our analysis for ethics in terms of (un)informed consent and dissent, and the associated division of responsibilities.

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual, temporally distributed events within a multiple-data-stream environment is explored, and a range of techniques is examined, covering model-based approaches, 'programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other, increasing detection rates and lowering false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation and adaptation are more readily facilitated.
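    The report's distinction between programmed (rule-based) screening and learned models of normal behaviour lends itself to a simple illustration. The Python sketch below is not taken from the report; the event fields, rules and thresholds are invented purely to show how a hybrid detector might correlate a stream of events against both a knowledge base of known misuses and a learned profile of normal behaviour:

        from dataclasses import dataclass
        from statistics import mean, stdev

        @dataclass
        class Event:
            source: str
            kind: str        # e.g. "login_failure", "call_setup" (hypothetical event types)
            value: float     # e.g. failed-attempt count or call duration in seconds

        # Knowledge-based component: rules encode known misuse patterns.
        # Prone to false negatives for misuses it was never programmed with.
        MISUSE_RULES = [
            lambda e: e.kind == "login_failure" and e.value >= 5,   # brute-force guessing
            lambda e: e.kind == "call_setup" and e.value > 7200,    # abnormally long call
        ]

        class AnomalyModel:
            """Learned model of normal behaviour: flags large deviations.
            Prone to false positives on innocent but previously unseen behaviour."""
            def __init__(self, history):
                self.mu, self.sigma = mean(history), stdev(history)

            def is_anomalous(self, value, k=3.0):
                return abs(value - self.mu) > k * self.sigma

        def hybrid_detect(events, model):
            alerts = []
            for e in events:
                if any(rule(e) for rule in MISUSE_RULES):
                    alerts.append((e, "known misuse"))
                elif model.is_anomalous(e.value):
                    alerts.append((e, "deviation from normal profile"))
            return alerts

        # Example: train the normal-behaviour model on benign call durations,
        # then screen a small stream of events with both components.
        model = AnomalyModel(history=[60, 120, 95, 80, 150, 110, 70, 130])
        stream = [Event("subscriber-17", "call_setup", 9000),
                  Event("subscriber-17", "login_failure", 6),
                  Event("subscriber-42", "call_setup", 100)]
        print(hybrid_detect(stream, model))

    In a real system the two components would also feed back into each other, as the report notes, with confirmed detections used to refine both the rule base and the normal profile.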

    Risk and Business Goal Based Security Requirement and Countermeasure Prioritization

    Companies are under pressure to be in control of their assets but at the same time must operate as efficiently as possible. This means that they aim to implement “good-enough security” but need to be able to justify their security investment plans. Currently, companies achieve this by means of checklist-based security assessments, but these methods achieve consensus without being able to justify countermeasures in terms of business goals, and such justifications are needed to operate securely and effectively in networked businesses. In this paper, we first compare a Risk-Based Requirements Prioritization method (RiskREP) with requirements engineering and risk assessment methods, based on their requirements elicitation and prioritization properties. RiskREP extends misuse-case-based requirements engineering methods with IT-architecture-based risk assessment and countermeasure definition and prioritization. Then, we present how RiskREP prioritizes countermeasures by linking business goals to countermeasure specifications. Prioritizing countermeasures based on business goals is especially important for providing stakeholders with structured arguments for choosing a set of countermeasures to implement. We illustrate RiskREP and how it prioritizes the countermeasures it elicits by applying it to an action case.
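    RiskREP itself is not reproduced here, but the general idea of prioritising countermeasures against business goals can be shown with a toy scoring sketch in Python; the goals, weights, risk-reduction figures and costs below are entirely hypothetical and are not the method's actual metrics:

        # Hypothetical business goals with relative weights, and countermeasures
        # annotated with the goals they support, estimated risk reduction and cost.
        business_goals = {"availability": 0.5, "confidentiality": 0.3, "compliance": 0.2}

        countermeasures = [
            {"name": "two-factor authentication", "goals": ["confidentiality", "compliance"],
             "risk_reduction": 0.7, "cost": 40},
            {"name": "redundant database cluster", "goals": ["availability"],
             "risk_reduction": 0.6, "cost": 120},
            {"name": "security awareness training", "goals": ["confidentiality"],
             "risk_reduction": 0.3, "cost": 15},
        ]

        def priority(cm):
            # Weight the risk reduction by the business goals the countermeasure serves,
            # then normalise by cost so cheap, goal-aligned measures rank higher.
            goal_weight = sum(business_goals[g] for g in cm["goals"])
            return cm["risk_reduction"] * goal_weight / cm["cost"]

        for cm in sorted(countermeasures, key=priority, reverse=True):
            print(f"{cm['name']:32s} priority={priority(cm):.4f}")

    The point of such a ranking is the traceability it gives stakeholders: each countermeasure's position can be argued from the business goals it protects rather than from a checklist.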

    The Value of User-Visible Internet Cryptography

    Cryptographic mechanisms are used in a wide range of applications, including email clients, web browsers, and document and asset management systems, where typical users are not cryptography experts. A number of empirical studies have demonstrated that explicit, user-visible cryptographic mechanisms are not widely used by non-expert users, and as a result it has been argued that cryptographic mechanisms need to be better hidden or embedded in end-user processes and tools. Other mechanisms, such as HTTPS, have cryptography built in and only become visible to the user when a dialogue appears because of a (potential) problem. This paper surveys deployed and potential technologies, examines the social and legal context of broad classes of users, and from there assesses the value of user-visible cryptography and the issues it raises for those users.

    Transparent authentication: Utilising heart rate for user authentication

    There has been exponential growth in the use of wearable technologies in the last decade, with smart watches taking a large share of the market. Smart watches were primarily used for health and fitness purposes, but recent years have seen a rise in their deployment in other areas. Recent smart watches are fitted with sensors with enhanced functionality and capabilities; for example, some function as standalone devices with the ability to create activity logs and transmit data to a secondary device. This capability has contributed to their increased usage in recent years, with researchers focusing on their potential. This paper explores the ability to extract physiological data from smart watch technology to achieve user authentication. The approach is suitable not only because of the capacity for data capture but also because of easy connectivity with other devices, principally the smartphone. For the purpose of this study, heart rate data was captured continually from 30 subjects over an hour. While security is the ultimate goal, usability should also be a key consideration. Most bioelectrical signals, like heart rate, are non-stationary time-dependent signals; therefore the Discrete Wavelet Transform (DWT) is employed. DWT decomposes the bioelectrical signal into n levels of detail-coefficient and approximation-coefficient sub-bands. A biorthogonal wavelet (bior4.4) is applied to extract features from the four levels of detail coefficients, and ten statistical features are extracted from each level's coefficient sub-band. Classification of each sub-band level is done using a Feedforward Neural Network (FF-NN). The 1st, 2nd, 3rd and 4th levels had an Equal Error Rate (EER) of 17.20%, 18.17%, 20.93% and 21.83% respectively. To improve the EER, fusion of the four sub-band levels is applied at the feature level. The proposed fusion showed an improved result over the initial results, with an EER of 11.25%. While an 11% EER is not ideal for a one-off authentication decision, its use on a continuous basis makes the approach more than feasible in practice.
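    The feature pipeline described in the abstract (bior4.4 DWT decomposition, ten statistical features per detail sub-band, feature-level fusion, feedforward network) can be sketched in Python with PyWavelets and scikit-learn. The particular statistical features, window size and network shape below are illustrative guesses rather than the paper's exact configuration, and the data is synthetic:

        import numpy as np
        import pywt
        from scipy import stats
        from sklearn.neural_network import MLPClassifier

        def subband_features(signal, wavelet="bior4.4", levels=4):
            """Decompose a heart-rate window and return per-level feature vectors.
            wavedec returns [cA_n, cD_n, ..., cD_1]; only the detail coefficients are kept."""
            coeffs = pywt.wavedec(signal, wavelet, level=levels)
            details = coeffs[1:]                      # cD_4 ... cD_1
            feats = []
            for d in details:
                feats.append([np.mean(d), np.std(d), np.min(d), np.max(d),
                              np.median(d), np.var(d), stats.skew(d),
                              stats.kurtosis(d), np.percentile(d, 25),
                              np.percentile(d, 75)])  # ten statistical features per sub-band
            return feats                              # shape: levels x 10

        def fused_features(signal):
            """Feature-level fusion: concatenate the four sub-band feature vectors."""
            return np.concatenate(subband_features(signal))

        # Hypothetical usage: synthetic heart-rate windows for a genuine user and impostors.
        rng = np.random.default_rng(0)
        windows = rng.normal(70, 5, size=(60, 256))   # 60 windows, 256 samples each
        labels = np.repeat([0, 1], 30)                # 0 = genuine user, 1 = impostor

        X = np.array([fused_features(w) for w in windows])
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
        clf.fit(X, labels)
        print("training accuracy:", clf.score(X, labels))

    Computing an EER as reported in the paper would additionally require sweeping a decision threshold over the classifier's output scores on held-out data, which is omitted here for brevity.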