
    In Search for the Right Measure: Assessing Types of Developed Knowledge While Using a Gamified Web Toolkit

    Game-based learning has been used to teach topics in diverse domains, but it remains hard to determine when such approaches are an efficient learning technique. In this paper we focus on one open challenge: the community's limited understanding of the types of knowledge these games help to develop. Using a taxonomy that distinguishes between declarative, procedural and conditional knowledge, we evaluate a game-based toolkit for analysing and solving an information security problem within a holistic crime prevention framework. Twenty-eight participants used the toolkit. We designed a portfolio of learning assessment measures to capture learning of the different types of knowledge: two theoretical open-answer questions to explore participants' understanding, three problem-specific open-answer questions to test their ability to apply the framework, and nine multiple-choice questions to test their ability to transfer what was learned to other contexts. The assessment measures were administered before and after use of the toolkit. The application questions were analysed by classifying the ideas participants suggested, the theoretical questions were qualitatively analysed using a set of analytical techniques, and the transferability questions were statistically analysed using t-tests. Our results show that participants' answers to the application questions improved in quality after use of the toolkit, and in their answers to the theoretical questions most participants could explain the key features of the toolkit. Statistical analysis of the multiple-choice questions testing transferability, however, failed to demonstrate significant improvement. Whilst our participants understood the CCO framework and learned how to use the toolkit, they did not demonstrate transfer of knowledge to other situations in information security. We discuss our results, the limitations of the study design, and possible lessons to be learned from these findings.
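    As an illustration of the pre/post comparison described above, the sketch below runs a paired t-test of the kind used for the transferability questions; the scores, sample size, and 0.05 threshold are invented placeholders, not values from the study.

```python
# A minimal sketch, assuming hypothetical pre/post multiple-choice scores.
from scipy import stats

pre_scores = [4, 5, 3, 6, 5, 4, 7, 5]    # per-participant scores before the toolkit
post_scores = [5, 5, 4, 6, 6, 5, 7, 6]   # the same participants afterwards

# Paired (dependent-samples) t-test: each participant is measured twice,
# so the two score lists are not independent samples.
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
print("significant at 0.05" if p_value < 0.05 else "no significant improvement")
```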

    An Insider Misuse Threat Detection and Prediction Language

    Numerous studies indicate that, amongst the various types of security threats, the problem of insider misuse of IT systems can have serious consequences for the health of computing infrastructures. Although incidents of external origin are also dangerous, the insider IT misuse problem is difficult to address for a number of reasons. A fundamental reason that makes mitigation difficult is the level of trust legitimate users possess inside the organization: the trust factor makes it difficult to detect threats originating from the actions and credentials of individual users. An equally important difficulty is the variability of the problem, as the nature of insider IT misuse varies amongst organizations. Hence, expressing what constitutes a threat, as well as detecting and predicting it, are non-trivial tasks that add to the multi-factorial nature of insider IT misuse. This thesis is concerned with systematizing the specification of insider threats, focusing on their system-level detection and prediction. The design of suitable user audit mechanisms and of accompanying semantics forms a Domain Specific Language to detect and predict insider misuse incidents. The thesis proposes in detail ways to construct standardized descriptions (signatures) of insider threat incidents, as a means of aiding researchers and IT system experts in mitigating the problem of insider IT misuse. The produced audit engine (LUARM – Logging User Actions in Relational Mode) and the Insider Threat Prediction and Specification Language (ITPSL) are two utilities that can be added to the insider IT misuse mitigation arsenal. LUARM is a novel audit engine designed specifically to address the needs of monitoring insider actions, needs that cannot be met by traditional open source audit utilities. ITPSL is an XML-based markup that can standardize the description of incidents and threats and thus make use of the LUARM audit data. Its novelty lies in the fact that it can be used to detect as well as predict instances of threats, a task that has not been achieved to date by a domain specific language addressing threats. The research project evaluated the produced language using a cyber-misuse experiment approach derived from real-world misuse incident data. The results of the experiment showed that ITPSL and its associated audit engine LUARM provide a good foundation for insider threat specification and prediction. Some language deficiencies relate to the fact that the insider threat specification process requires a good knowledge of the software applications used in a computer system; as the language is easily expandable, future developments to improve it in this direction are suggested.
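    The abstract does not reproduce ITPSL's grammar, so the sketch below is purely hypothetical: every tag and attribute name is invented to suggest what an XML-based insider-threat signature might look like, parsed here with Python's standard library.

```python
# Hypothetical illustration only; this is NOT the actual ITPSL schema.
import xml.etree.ElementTree as ET

signature_xml = """\
<signature id="bulk-copy-to-removable-media" severity="high">
  <description>Many files copied to removable media out of hours</description>
  <event source="filesystem" action="copy" destination="removable"/>
  <threshold count="100" window="PT10M"/>
  <context time="outside-business-hours"/>
</signature>
"""

root = ET.fromstring(signature_xml)
print(root.get("id"), "->", root.find("description").text)
for event in root.findall("event"):
    print("matches on:", event.attrib)
```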

    Toward Online Linguistic Surveillance of Threatening Messages

    Threats are communicative acts, but it is not always obvious what they communicate or when they communicate imminent, credible, and serious risk. This paper proposes a research- and theory-based set of over 20 potential linguistic risk indicators that may discriminate credible from non-credible threats within online threat message corpora. Two prongs are proposed: (1) using expert and layperson ratings to validate subjective scales in relation to annotated known-risk messages, and (2) using the resulting annotated corpora for automated machine learning with computational linguistic analyses to classify non-threats, false threats, and credible threats. Rating scales are proposed, existing threat corpora are identified, and some prospective computational linguistic procedures are outlined. Implications for ongoing threat surveillance and its applications are explored.
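    A minimal sketch of the second prong, under stated assumptions: the corpus and labels below are toy placeholders, and plain TF-IDF features stand in for the paper's proposed linguistic risk indicators.

```python
# Train a toy classifier on labelled threat messages (illustrative data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "I will hurt you tomorrow, I know where you live",
    "I'm going to destroy you in the game tonight",
    "You'll regret this, watch your back",
    "this exam is going to kill me",
]
labels = ["credible", "non-threat", "credible", "non-threat"]  # toy annotations

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)
print(model.predict(["watch your back, I know where you live"]))
```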

    Human Errors in Data Breaches: An Exploratory Configurational Analysis

    Information Systems (IS) are critical for employee productivity and organizational success. Data breaches are on the rise: thousands of data breaches account for billions of breached records, and annual global cybersecurity costs are projected to reach $10.5 trillion by 2025. A data breach is the unauthorized disclosure of sensitive information and can occur intentionally or unintentionally. Significant causes of data breaches are hacking and human error; in some estimates, human error accounted for about a quarter of all data breaches in 2018. Furthermore, the significance of human error in data breaches is largely underrepresented, as hackers often capitalize on organizational users' human errors to compromise systems or information. The research problem that this study addressed is that organizational data breaches caused by human error are both costly and have the most significant impact on Personally Identifiable Information (PII) breaches. Human error types can be classified into three categories tied to the associated levels of human performance: Skill-Based Error (SBE), Rule-Based Mistakes (RBM), and Knowledge-Based Mistakes (KBM). The circumstantial and contextual factors that influence human performance to cause or contribute to human error are called Performance Influencing Factors (PIFs). These PIFs have been examined in the safety literature, most notably in Human Reliability Analysis (HRA) applications. The list of PIFs is context-specific and had yet to be comprehensively established in the cybersecurity literature, a significant research gap. The main goal of this research study was to employ configurational analysis, specifically Fuzzy-Set Qualitative Comparative Analysis (fsQCA), to empirically assess the conjunctural causal relationship of internal (individual) and external (organizational and contextual) Cybersecurity Performance Influencing Factors (CS-PIFs) leading to Cybersecurity Human Error (CS-HE: SBE, RBM, and KBM) that resulted in the largest data breaches across multiple organization types from 2007 to 2019 in the United States (US). Feedback was solicited from 31 Cybersecurity Subject Matter Experts (SMEs), who identified first-order CS-PIFs and validated the following second-order CS-PIFs: organizational cybersecurity; cybersecurity policies and procedures; cybersecurity education, training, and awareness; ergonomics; cybersecurity knowledge, skills, and abilities; and employee cybersecurity fitness for duty. Utilizing data collected from 102 data breach cases, this research found that multiple combinations, or causal recipes, of CS-PIFs led to certain CS-HEs that resulted in data breaches. Specifically, seven of the 36 fsQCA models had solution consistencies that exceeded the minimum threshold of 0.80, thereby supporting the argument for the contextual nature of CS-PIFs, CS-HE, and data breaches. Two additional findings emerged: five sufficient configurations were present in two models, and the absence of strong cybersecurity knowledge, skills, and abilities was a necessary condition for all cybersecurity human error outcomes in the observed cases.
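    For readers unfamiliar with fsQCA, the sketch below computes the standard sufficiency-consistency measure behind the 0.80 threshold mentioned above; the membership scores are invented for illustration.

```python
# Fuzzy-set consistency of "recipe X is sufficient for outcome Y".
def consistency(x, y):
    """Standard fsQCA measure: sum(min(xi, yi)) / sum(xi)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

# Fuzzy membership of five hypothetical breach cases in a causal recipe
# (a CS-PIF combination) and in the outcome (a cybersecurity human error).
recipe = [0.8, 0.6, 0.9, 0.2, 0.7]
outcome = [0.9, 0.7, 0.8, 0.4, 0.6]

c = consistency(recipe, outcome)
print(f"consistency = {c:.2f} ->", "passes" if c >= 0.80 else "fails", "the 0.80 cut-off")
```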

    Employing Variation in the Object of Learning for the Design-based Development of Serious Games that Support Learning of Conditional Knowledge

    Learning how to cope with tasks that do not have optimal solutions is a life-long challenge. In particular, when such education and training needs to be scalable, technologies are needed to support teachers and facilitators in providing the feedback and discussion necessary for quality learning. In this thesis, I conduct design-based research by following a typical game development cycle to develop a serious game, and I propose a framework that derives learning and motivational principles and includes them in the design of serious games. My exploration starts with project management as a learning domain and, for practical reasons, shifts towards information security. The first (concept) phase of the development includes an in-depth study: a simulation game of negotiation (Study 1: class study, n=60). In the second (design) phase I used rapid prototyping to develop a gamified web toolkit embodying the CCO framework from crime prevention, making five small-scale formative evaluations (Study 2, n=17) and a final lab evaluation (Study 3, n=28). In the final (production) stage the toolkit was used in two class studies (Study 4, n=34, and Study 5, n=20), exploring its adoption in a real-world environment. This thesis makes three main contributions. The first is the adaptation of the iterative method of the phenomenographic learning study to studying the efficiency of serious games; this employs open questioning, analysed with three different means of analysis to demonstrate four distinct types of evidence of deep learning. The second is partial evidence for the positive effects of introducing variation on engagement and learning. The third is a set of four design-based research principles: i) the importance of being agile; ii) feedback from interpretation of the theory; iii) particular needs for facilitation; and iv) reusing user-generated content.

    A Framework for an Adaptive Early Warning and Response System for Insider Privacy Breaches

    Organisations such as governments and healthcare bodies are increasingly responsible for managing large amounts of personal information, and the increasing complexity of modern information systems is causing growing concern about the protection of these assets from insider threats. Insider threats are very difficult to handle, because insiders have direct access to information and are trusted by their organisations. The nature of insider privacy breaches varies with the organisation's acceptable usage policy and the attributes of an insider, while the level of risk that insiders pose depends on breach scenarios, including their access patterns and contextual information such as the timing of access. Protection from insider threats is a newly emerging research area, and thus only a few approaches are available that systemise the continuous monitoring of dynamic insider usage characteristics and adapt depending on the level of risk. The aim of this research is to develop a formal framework for an adaptive early warning and response system for insider privacy breaches within dynamic software systems. The framework allows the specification of multiple policies at different risk levels, depending on event patterns and timing constraints, and the enforcement of adaptive response actions to interrupt insider activity. Our framework is based on Usage Control (UCON), a comprehensive model that controls previous, ongoing, and subsequent resource usage. We extend UCON to include interrupt policy decisions, in which multiple policy decisions can be expressed at different risk levels; in particular, interrupt policy decisions can be dynamically adapted upon the occurrence of an event or over time. We propose a computational model that represents the concurrent behaviour of an adaptive early warning and response system in the form of a statechart. In addition, we propose a Privacy Breach Specification Language (PBSL), based on this computational model, in which event patterns, timing constraints, and the triggered early warning level are expressed in the form of policy rules. The main features of PBSL are its expressiveness, simplicity, practicality, and formal semantics. The formal semantics of PBSL, together with a model of the mechanisms enforcing the policies, is given in an operational style. Enforcement mechanisms, which are defined by the outcomes of the policy rules, influence the system state through the mutual interaction of the policy rules and the system behaviour. We demonstrate the use of PBSL with a case study from the e-government domain that includes some real-world insider breach scenarios. The formal framework utilises a tool that supports the animation of the enforcement and policy models; this tool also supports the model checking used to formally verify the safety and progress properties of the system over the policy and enforcement specifications.
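    A minimal sketch of the underlying idea, not the thesis's PBSL: event patterns accumulate a risk score, and crossing a threshold triggers an escalating early-warning response. All event names, weights, and thresholds below are invented for illustration.

```python
# Toy risk-level escalation over observed insider events (all values invented).
RISK_WEIGHTS = {
    "after_hours_access": 2,
    "bulk_record_download": 3,
    "access_outside_role": 4,
}
# Ordered from highest risk level to lowest.
RESPONSES = [(6, "revoke access"), (4, "alert security officer"), (2, "warn user")]

def respond(events):
    """Map a window of observed insider events to an early-warning action."""
    risk = sum(RISK_WEIGHTS.get(e, 0) for e in events)
    for threshold, action in RESPONSES:
        if risk >= threshold:
            return risk, action
    return risk, "continue monitoring"

print(respond(["after_hours_access", "bulk_record_download"]))  # (5, 'alert security officer')
```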

    Detection of Threat Propagation and Data Exfiltration in Enterprise Networks

    Modern corporations nowadays face multiple threats within their networks. In an era where companies depend tightly on information, these threats can seriously compromise the safety and integrity of sensitive data. Unauthorized access and illicit programs are ways of penetrating corporate networks, capable of traversing and propagating to other terminals across the private network in search of confidential data and business secrets. The efficiency of traditional security defenses is being questioned given the number of data breaches occurring nowadays, making it essential to develop new active monitoring systems with artificial intelligence capable of near-perfect detection in very short time frames. However, network monitoring and the storage of network activity records are restricted and limited by law and by privacy strategies, such as encryption, that aim to protect the confidentiality of private parties. This dissertation proposes methodologies to infer behavior patterns and disclose anomalies from network traffic analysis, detecting slight variations compared with the normal profile. Bounded by network OSI layers 1 to 4, raw data are modeled into features representing network observations and subsequently processed by machine learning algorithms to classify network activity. Assuming the inevitability of a network terminal being compromised, this work comprises two scenarios: a self-spreading threat that propagates over the internal network, and a data exfiltration payload that dispatches confidential information to the public network. Although the features and modeling processes have been tested for these two cases, the operation is generic and can be used in more complex scenarios as well as in different domains. The last chapter describes the proof-of-concept scenario and how the data were generated, along with some evaluation metrics to assess the model's performance. The tests showed promising results, ranging from 96% to 99% for the propagation case and from 86% to 97% for data exfiltration.
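    A sketch of the general approach under stated assumptions: per-window flow features fed to an anomaly detector. The feature set, the values, and the choice of IsolationForest are illustrative, not the dissertation's actual model.

```python
# Toy anomaly detection over layer 1-4 flow features (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

# Features per time window: [packets/s, bytes/s, distinct internal peers,
# upload/download byte ratio].
rng = np.random.default_rng(0)
normal = rng.normal([50, 4e4, 3, 0.3], [10, 5e3, 1, 0.05], size=(200, 4))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

propagation = [[300, 6e4, 40, 0.4]]    # many new internal peers: lateral spread
exfiltration = [[60, 9e5, 3, 5.0]]     # huge outbound volume: data theft
print(detector.predict(propagation), detector.predict(exfiltration))  # -1 = anomalous
```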