35 research outputs found

    Privacy-certification standards for extended-reality devices and services

    Get PDF
    In this position paper, we discuss the need for, and potential requirements of, privacy-certification standards for extended-reality devices and related services. We begin by presenting motivations before discussing related efforts. We then frame certification as a research problem and identify key requirements. Finally, we outline key recommendations for how these might feed into a broader roadmap for privacy and security research.

    Dataset Construction and Analysis of Screenshot Malware

    Get PDF
    Among the various types of spyware, screenloggers are distinguished by their ability to capture screenshots. This gives them considerable capacity for harm, enabling theft of sensitive data or, at the least, serious invasions of users' privacy. Several attacks relying on this screen-capture feature have been documented in recent years. However, empirical and experimental evidence on this topic remains insufficient: to the best of our knowledge, no dataset dedicated to screenshot-taking malware exists to date. The lack of datasets or common testbed platforms makes it difficult to analyse and study the behaviour of screenloggers in order to develop effective countermeasures. The screenshot feature does not usually activate automatically once the malware has infected the machine; its activation mechanisms are often more complex. Consequently, a dataset dedicated entirely to screenloggers would make it possible to better understand the subtleties of screenshot triggering, and even to learn to distinguish screenloggers from the legitimate screenshot-taking applications widely present on devices. The main purpose of this paper is to build such a dataset and analyse the behaviour of screenloggers.
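    The abstract above notes that screenshot capture is typically triggered rather than continuous. As a rough illustration (not taken from the paper) of the kind of analysis such a dataset would enable, the sketch below labels the trigger pattern of a logged API-call trace; the call name "BitBlt" and the trigger categories are our illustrative assumptions.

    ```python
    # Hypothetical sketch: classify how a screenlogger triggers its captures,
    # given a trace of (api_call, timestamp) pairs from a sandbox run.
    # The call set and thresholds are illustrative assumptions.

    SCREENSHOT_CALLS = {"BitBlt"}  # Windows GDI call commonly used for capture

    def trigger_type(trace):
        """Return a rough trigger label for a logged API-call trace."""
        shots = [t for call, t in trace if call in SCREENSHOT_CALLS]
        if not shots:
            return "none"
        if len(shots) >= 3:
            gaps = [b - a for a, b in zip(shots, shots[1:])]
            if max(gaps) - min(gaps) < 0.1:  # near-constant interval
                return "timer"               # screenshots taken on a schedule
        return "event"                       # capture tied to some trigger event

    trace = [("GetDC", 0.0), ("BitBlt", 0.1), ("BitBlt", 5.1), ("BitBlt", 10.1)]
    print(trigger_type(trace))  # regularly spaced captures look timer-driven
    ```

    Distinguishing such patterns from legitimate screenshot applications is exactly the classification task the proposed dataset is meant to support.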

    Cyber security threats and challenges in collaborative mixed-reality

    Get PDF
    Collaborative Mixed-Reality (CMR) applications are gaining interest in a wide range of areas including games, social interaction, design and healthcare. To date, the vast majority of published work has focused on display technology advancements, software, collaboration architectures and applications. However, the potential security concerns that affect collaborative platforms have received limited research attention. In this position paper, we investigate the challenges posed by cyber-security threats to CMR systems. We focus on how typical network architectures facilitate CMR, how their vulnerabilities can be exploited by attackers, and the degree of social, monetary, psychological and other harm that may result from such exploits. The main purpose of this paper is to provoke a discussion on CMR security concerns. We highlight insights from a cyber-security threat-modelling perspective and propose potential directions for research and development toward better mitigation strategies. We present a simple, systematic approach to understanding a CMR attack surface through an abstraction-based reasoning framework that identifies potential attack vectors. Using this framework, security analysts, engineers, designers and users alike (stakeholders) can identify potential Indicators of Exposure (IoE) and Indicators of Compromise (IoC). Our framework allows stakeholders to reduce their CMR attack surface as well as to understand how Intrusion Detection System (IDS) approaches can be adopted for CMR systems. To demonstrate the validity of our framework, we illustrate several CMR attack surfaces through a set of use cases. Finally, we present a discussion of future directions this line of research should take.
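    The abstraction-based enumeration described above can be pictured as decomposing a CMR system into layers and crossing each layer's assets with a threat list. The toy sketch below shows the idea; the layer names, assets and threats are our illustrative assumptions, not the paper's taxonomy.

    ```python
    # Toy sketch of abstraction-based attack-surface enumeration for a CMR
    # system: every (layer, asset, threat) triple is a candidate attack vector
    # for an analyst to assess. All names here are illustrative assumptions.

    LAYERS = {
        "network":  ["session traffic", "voice channel"],
        "platform": ["tracking data", "rendering pipeline"],
        "user":     ["avatar identity", "shared scene state"],
    }

    THREATS = ["eavesdrop", "tamper", "spoof"]

    def attack_surface():
        """Enumerate candidate attack vectors as (layer, asset, threat) triples."""
        return [(layer, asset, threat)
                for layer, assets in LAYERS.items()
                for asset in assets
                for threat in THREATS]

    for vector in attack_surface():
        print(vector)  # each triple is a prompt for IoE/IoC identification
    ```

    Even this crude cross-product makes the "reduce the attack surface" framing concrete: removing an asset or mitigating a threat class deletes a whole row of candidate vectors.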

    Addressing Uncertainty using Hypothesis-Uncertainty Graphs

    No full text
    Virtual archaeology can be used to investigate heritage sites using physically-based simulations. However, a high degree of photo-, physical- and functional realism in the rendition can be misleading insofar as it can be seen to imply a high degree of certainty about the displayed scene, which is frequently not the case when investigating the past.

    On properties of cyberattacks and their nuances

    No full text
    Several attack models attempt to describe attacker behaviour in order to understand and combat it better. However, all models are to some degree incomplete: they may lack insight into minor variations of attacks that are observed in the real world but not described in the model. This may lead to similar but distinct attacks being classified as the same one. The ideal solution would be to modify the attack model (to handle that particular case) or replace it entirely. However, doing so may be undesirable: the model may work well for most cases, and time and resource constraints may factor in as well. This paper investigates the use of descriptions of minor variations in attacks, as well as how and when it may (and may not) be appropriate to communicate those differences within existing attack models. We propose that such nuances be appended as annotations to existing attack models. We investigate commonalities across a range of existing models and identify where and how annotations may be helpful. Used appropriately, annotations should enable analysts and researchers to express subtle but important variations in attacks that may not fit the model currently in use. The value of this paper is in demonstrating how annotations may help analysts communicate and ask better questions during the identification of unknown aspects of attacks, e.g. as a means of storing mental notes in a structured manner, especially when facing zero-day attacks where information is incomplete.
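    The core proposal above, annotating an existing model rather than modifying it, can be sketched very simply: keep the model fixed and attach free-form analyst notes to its elements. The kill-chain stages and note texts below are our illustrative assumptions, not from the paper.

    ```python
    # Minimal sketch of appending annotations to an existing attack model
    # without changing the model itself. Stage names follow the widely used
    # kill-chain structure; the notes are illustrative assumptions.

    KILL_CHAIN = ["recon", "weaponise", "deliver", "exploit", "install",
                  "command-and-control", "act"]

    annotations = {}  # stage -> list of free-form analyst notes

    def annotate(stage, note):
        """Attach a nuance observed in the wild to a fixed model stage."""
        if stage not in KILL_CHAIN:
            raise ValueError(f"unknown stage: {stage}")
        annotations.setdefault(stage, []).append(note)

    annotate("deliver", "variant uses HTML smuggling rather than an attachment")
    annotate("deliver", "observed only against a small subset of targets")
    ```

    The model stays authoritative for the common case, while the annotations capture the variations that would otherwise be lost or force a premature model change.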

    Classification of malware families based on runtime behaviour

    No full text
    This paper distinguishes malware families within a specific category (ransomware) via dynamic analysis. We collect samples from four ransomware families and use the Cuckoo sandbox environment to observe their runtime behaviour. This study aims to provide new insight into malware-family classification by comparing possible runtime features and applying different extraction and selection techniques to them. As well as trying several extraction models on call traces, such as bag-of-words, n-gram sequences and wildcard patterns, we also examine other behavioural features such as file, registry and mutex artefacts. Wildcard patterns on call traces are designed to overcome advanced evasion strategies such as the insertion of junk API calls (which causes n-gram searches to fail); for the models generating too many features, we adopt new feature-selection techniques in a classwise fashion to avoid unfair representation of families in the feature set, which leads to poor detection performance. To our knowledge, no previous paper has applied a classwise approach to multi-class malware-family identification. With a 96.05% correct-classification ratio for four families, this study outperforms most studies applying similar techniques.
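    The three call-trace feature models compared above differ in how they handle junk-call insertion. The sketch below shows bag-of-words and n-gram extraction, plus a simplified wildcard match in which the named calls must occur in order while junk calls may intervene; call names are illustrative, and this is our reading of the idea, not the paper's implementation.

    ```python
    # Sketch of call-trace feature models: bag-of-words counts, n-gram
    # sequences, and a simplified wildcard pattern match. API names are
    # illustrative assumptions.
    from collections import Counter

    def bag_of_words(trace):
        """Count each API call, ignoring order."""
        return Counter(trace)

    def ngrams(trace, n=2):
        """Count contiguous call n-grams; junk-call insertion breaks these."""
        return Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))

    def matches_wildcard(trace, pattern):
        """True if pattern's named calls occur in order in the trace.
        '*' marks where junk calls are expected; this simplified version
        tolerates intervening calls at every position."""
        i = 0
        for p in pattern:
            if p == "*":
                continue
            while i < len(trace) and trace[i] != p:
                i += 1
            if i == len(trace):
                return False
            i += 1
        return True

    trace = ["CreateFile", "WriteFile", "CreateFile", "WriteFile", "DeleteFile"]
    ```

    A junk call inserted between "CreateFile" and "WriteFile" removes that bigram from the n-gram counts, yet the wildcard pattern still matches, which is the robustness property motivating the paper's use of wildcard patterns.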

    Insider-threat detection using Gaussian mixture models and sensitivity profiles

    No full text
    The insider threat is one of the most challenging problems to detect due to its complex nature and significant impact on organisations. Insiders pose a great threat to organisations because of their knowledge of the organisation and its security protocols, their authorised access to the organisation's resources, and the difficulty of discerning the behaviour of an insider threat from that of a normal employee [1]. As a result, the insider-threat field faces the challenge of developing detection solutions that can detect threats without generating a great number of false positives, and that take into consideration the non-technical aspects of the problem. This paper introduces a novel automated anomaly-detection method that uses Gaussian Mixture Models to model the normal behaviour of employees and detect anomalous behaviour that may be malicious. The paper also introduces a novel approach to insider-threat detection that capitalises on the knowledge of security experts during analysis through visual analytics and sensitivity profiles, a novel means of re-contextualising detection output by considering outside, qualitative, non-technical factors that analysts may be privy to but the detection method is not. A feasibility study with experts in threat detection was conducted to evaluate the detection performance of the proposed solution and its usability. The results demonstrate the success of designing a solution that builds on the knowledge of security experts during analysis and reduces the number of false positives generated by automated anomaly detection. The work presented in the paper also demonstrates the potential of introducing further methods for capitalising on the knowledge of security experts to improve the false-negative rate, and the potential of designing sensitivity profiles.
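    The detection idea above, fit a GMM to an employee's normal activity and flag low-likelihood behaviour, can be sketched as follows, assuming scikit-learn is available; the two activity features, the synthetic data and the 1% likelihood threshold are our illustrative assumptions, not the paper's setup.

    ```python
    # Sketch of GMM-based anomaly detection for insider-threat data: model
    # normal daily activity, then flag days whose likelihood under the model
    # is unusually low. Features and threshold are illustrative assumptions.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # synthetic normal behaviour: (logons per day, MB copied to removable media)
    normal = rng.normal(loc=[10, 5], scale=[2, 1], size=(200, 2))

    gmm = GaussianMixture(n_components=2, random_state=0).fit(normal)

    # flag anything less likely than the 1% least likely normal day
    threshold = np.percentile(gmm.score_samples(normal), 1)

    def is_anomalous(day):
        """True if a day's features fall below the likelihood threshold."""
        return gmm.score_samples(np.asarray(day).reshape(1, -1))[0] < threshold

    print(is_anomalous([11, 5]))    # a typical day
    print(is_anomalous([60, 300]))  # a mass-copy, exfiltration-like day
    ```

    The paper's sensitivity profiles would then act on top of such scores, letting an analyst's qualitative context shift what counts as anomalous rather than relying on the statistical threshold alone.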
