
    Quantitative analysis of the release order of defensive mechanisms

    PhD Thesis. Dependency on information technology (IT) and on computer and information security (CIS) has become a critical concern for many organizations. This concern centres on protecting the secrecy, confidentiality, integrity and availability of information. To address it, defensive mechanisms encompassing a variety of services and protections have been proposed to protect system resources from misuse. Most of these defensive mechanisms, such as CAPTCHAs and spam filters, rely in the first instance on a single algorithm as the defensive mechanism. Attackers eventually break each such mechanism, at which point the algorithm becomes useless and the system is no longer protected. Although a broken algorithm is then replaced by a new one, little attention has been paid to treating a set of algorithms, taken together, as the defensive mechanism. This thesis looks at a set of algorithms as a holistic defensive mechanism. Our hypothesis is that the order in which a set of defensive algorithms is released has a significant impact on the time taken by attackers to break the combined set. The rationale is that attackers learn from their attempts, and that the release schedule of defensive mechanisms can be adjusted so as to impair this learning process. To test the hypothesis, an experimental study involving forty participants was conducted to evaluate the effect of the algorithms' release order on the time taken to break them, and to explore how the attackers' learning process could be observed. The results showed that the order in which algorithms are released has a statistically significant impact on the time attackers take to break all of them. Based on these results, a model was constructed using Stochastic Petri Nets, which facilitates theoretical analysis of the release-order approach.
Moreover, a tailored optimization algorithm based on a Markov Decision Process model is proposed to efficiently obtain the optimal release strategy for any given model by maximizing the time taken to break the set of algorithms. As the hypothesis rests on attackers' ability to learn while interacting with the system, the Attacker Learning Curve (ALC) concept is developed. Based on empirical results of the ALC, an attack strategy detection approach is introduced and evaluated, achieving a detection success rate above 70%. The empirical findings of this detection approach provide a new understanding not only of how to detect the attack strategy in use, but also of how to track it through the classification probabilities, which may in turn help optimise the release order of defensive mechanisms.
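
The optimization idea — pick the release order that maximizes the attackers' total time-to-break, given that breaking one algorithm speeds up breaking related ones — can be sketched as a small dynamic program over sets of released algorithms. All numbers below are invented for illustration; the thesis's actual Stochastic Petri Net / MDP model is far richer:

```python
from functools import lru_cache

# Invented toy instance: base[i] is the time to break algorithm i from scratch;
# transfer[i][j] is the fraction of base[j] an attacker saves after breaking i.
base = [10.0, 8.0, 6.0]
transfer = [
    [0.0, 0.4, 0.1],
    [0.2, 0.0, 0.5],
    [0.1, 0.3, 0.0],
]
N = len(base)

def break_time(j, broken):
    """Time to break algorithm j, given a bitmask of already-broken algorithms."""
    t = base[j]
    for i in range(N):
        if broken & (1 << i):
            t *= 1.0 - transfer[i][j]  # attacker learning shortens later breaks
    return t

@lru_cache(maxsize=None)
def best_release(broken):
    """Max total break time achievable over the remaining algorithms,
    returned together with the release order that achieves it."""
    if broken == (1 << N) - 1:
        return 0.0, ()
    return max(
        (break_time(j, broken) + best_release(broken | (1 << j))[0],
         (j,) + best_release(broken | (1 << j))[1])
        for j in range(N) if not broken & (1 << j)
    )

total, order = best_release(0)
```

For this toy instance the optimum releases algorithm 2 first and algorithm 0 last, for a total break time of 18.8; the memoised subset recursion is the same idea that makes an exact MDP solution tractable for small models.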

    Continuous trust management frameworks: concept, design and characteristics

    PhD Thesis. A Trust Management Framework is a collection of technical components and governing rules and contracts for establishing secure, confidential, and Trustworthy transactions among the Trust Stakeholders, whether they are Users, Service Providers, or Legal Authorities. Despite the many existing Trust Framework projects, none yet presents a mature Framework that can be Trusted by all its Stakeholders. In particular, most current research focuses on Security aspects that may satisfy some Stakeholders but ignores other vital Trust Properties such as Privacy, Legal Authority Enforcement, Practicality, and Customizability. This thesis is about understanding and utilising the state of the art in Trust Management to arrive at a Trust Management Framework that can be Trusted by all its Stakeholders by providing Continuous Data Control, in which exchanged data is handled in a Trustworthy manner both before and after its release from one party to another; we therefore call it the Continuous Trust Management Framework. We present a literature survey illustrating the main categories of current research as well as the main Trust Stakeholders, Trust Challenges, and Trust Requirements. We picked a few samples representing each of the main categories in the literature of Trust Management Frameworks for detailed comparison, to understand the strengths and weaknesses of each category. Having shown that current Trust Management Frameworks fulfil most of the Trust Attributes needed by the Trust Stakeholders except for the Continuous Data Control Attribute, we argue for the necessity of our proposed generic design of the Continuous Trust Management Framework. To demonstrate the design's practicality, we present a prototype implementing its basic Stakeholders (Users, Service Providers, Identity Provider, and Auditor) on top of the OpenID Connect protocol.
The sample use-case of our prototype is to protect the Users' email addresses. That is, Users ask for their emails not to be shared with third parties, but some Providers act maliciously and share these emails with third parties who, in turn, send spam emails to the victim Users. While the prototype Auditor is able to protect and track data before their release to the Service Providers, it cannot enforce the data access policy after release. We later generalise our sample use-case to cover various Mass Active Attacks on Users' Credentials, for example using stolen credit cards or illegally impersonating a third-party identity. To protect the Users' Credentials after release, we introduce a set of theories and building blocks to aid our Continuous Trust Framework's Auditor, which acts as the Trust Enforcement point. These theories rely primarily on analysing the data logs recorded by our prototype prior to releasing the data. To test our theories, we present a Simulation Model of the Auditor to optimise its parameters. During some of our Simulation Stages, we assumed the availability of a Data Governance Unit, DGU, that provides hardware roots of Trust. This DGU is to be installed on the Service Providers' server side to govern how they handle the Users' data. The final simulation results include a set of different Defensive Strategies' Flavours that can be utilized by the Auditor depending on the environment in which it operates. This thesis concludes that utilising Hard Trust Measures such as the DGU without effective Defensive Strategies may not provide the ultimate Trust solution, especially at the bootstrapping phase, where Service Providers would be reluctant to adopt a restrictive technology like our proposed DGU.
Nevertheless, even in the absence of the DGU technology today, deploying the developed Defensive Strategies' Flavours that do not rely on the DGU would still provide significant improvements in enforcing Trust even after data release, compared to the currently widely deployed strategy: doing nothing! (Public Authority for Applied Education and Training in Kuwait, PAAET)
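
The thesis's Auditor works from pre-release data logs; as a complementary, purely illustrative sketch (not the thesis's mechanism), a per-Provider keyed alias makes a post-release e-mail leak attributable without any DGU hardware. All names and the domain are hypothetical:

```python
import hashlib

def provider_alias(mailbox, provider_id, audit_key):
    """Derive a per-Provider e-mail alias (sub-addressing style) so any address
    that later receives spam points back at the Provider that leaked it."""
    tag = hashlib.sha256(f"{audit_key}:{mailbox}:{provider_id}".encode()).hexdigest()[:10]
    return f"{mailbox}+{tag}@example.org"

def attribute_leak(spammed_address, mailbox, provider_ids, audit_key):
    """Return the Provider(s) whose issued alias matches the spammed address."""
    return [p for p in provider_ids
            if provider_alias(mailbox, p, audit_key) == spammed_address]
```

If spam arrives at the alias issued to one Provider, `attribute_leak` names that Provider, giving an Auditor post-release evidence of the kind the Defensive Strategies' Flavours could act on.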

    Automated Human Screening for Detecting Concealed Knowledge

    Screening individuals for concealed knowledge has traditionally been the purview of professional interrogators investigating a crime, but the ability to detect when a person is hiding important information would be of high value to many other fields and functions. This dissertation proposes design principles for, and reports an implementation and empirical evaluation of, a non-invasive, automated system for human screening. The screening system design (termed an automated screening kiosk, or ASK) is patterned after a standard interviewing method called the Concealed Information Test (CIT), which is built on theories explaining the psychophysiological and behavioral effects of human orienting and defensive responses. As part of testing the ASK proof of concept, I propose and empirically examine alternative indicators of concealed knowledge in a CIT. Specifically, I propose kinesic rigidity as a viable cue, propose and instantiate an automated method for capturing rigidity, and test its viability using a traditional CIT experiment. I also examine oculomotor behavior in a mock security screening experiment using an ASK system design. Participants in this second experiment packed a fake improvised explosive device (IED) in a bag and were screened by an ASK system. Results indicate that the ASK design, if implemented within a highly controlled framework such as the CIT, has the potential to overcome barriers to more widespread application of concealed knowledge testing in government and business settings.
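
The kinesic-rigidity cue can be illustrated with a minimal score over one tracked body point: less frame-to-frame movement yields a higher rigidity score, consistent with a "freezing" defensive response. This is an invented toy measure, not the dissertation's actual instrumentation:

```python
import math

def rigidity_score(track):
    """track: list of (x, y) positions of one tracked point, one per video frame.
    Returns a score in (0, 1]; 1.0 means the point never moved at all."""
    steps = [math.hypot(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(track, track[1:])]
    return 1.0 / (1.0 + sum(steps) / len(steps))

rigid   = [(0.0, 0.0), (0.1, 0.0), (0.1, 0.1), (0.2, 0.1)]   # barely moves
fidgety = [(0.0, 0.0), (5.0, 2.0), (1.0, 6.0), (7.0, 1.0)]   # large motion
```

A CIT-style screening could then compare rigidity on probe items against irrelevant items for the same examinee, rather than using any absolute threshold.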

    Security and privacy problems in voice assistant applications: A survey

    Voice assistant applications (e.g., Google Home, Amazon Alexa, Siri) have become ubiquitous. The two models providing their most important functions are Automatic Speech Recognition (ASR) models and Speaker Identification (SI) models. According to recent studies, security and privacy threats have emerged alongside the rapid development of the Internet of Things (IoT). The security issues studied include attack techniques against machine learning models and other hardware components widely used in voice assistant applications; the privacy issues include technical information stealing and policy-level privacy breaches. Voice assistant applications take a steadily growing market share every year, yet their privacy and security issues continue to cause substantial economic losses and to endanger users' sensitive personal information. It is therefore important to have a comprehensive survey outlining and categorizing current research on the security and privacy problems of voice assistant applications. This paper categorizes and assesses five kinds of security attacks and three types of privacy threats covered in papers published at top-tier conferences in the cyber security and voice domains.

    A Multi Agent System for Flow-Based Intrusion Detection

    The detection and elimination of threats to cyber security is essential for system functionality, protection of valuable information, and prevention of costly destruction of assets. This thesis presents a Mobile Multi-Agent Flow-Based IDS called MFIREv3 that provides network anomaly detection of intrusions and automated defense. This version of the MFIRE system includes the development and testing of a Multi-Objective Evolutionary Algorithm (MOEA) for feature selection that provides agents with the optimal set of features for classifying the state of the network. Feature selection provides separable data points for the selected attacks: Worm, Distributed Denial of Service, Man-in-the-Middle, Scan, and Trojan. This investigation develops three techniques of self-organization for multiple distributed agents in an intrusion detection system: Reputation, Stochastic, and Maximum Cover. These three movement models are tested for effectiveness in locating good agent vantage points within the network from which to classify its state. MFIREv3 also introduces the design of defensive measures to limit the effects of network attacks; the measures included in this research are rate-limiting and elimination of infected nodes. The results provide an optimistic outlook for flow-based multi-agent systems for cyber security, illustrating how feature selection in cooperation with movement models for multi-agent systems provides excellent attack detection and classification.
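
Evolutionary feature selection of the kind MFIREv3 applies can be sketched with a plain genetic algorithm. Here the thesis's multi-objective search is collapsed into a single weighted fitness (detection power minus a feature-count penalty), and the "informative" flow features are invented for illustration:

```python
import random

random.seed(7)  # deterministic toy run

N_FEATURES = 8
INFORMATIVE = {1, 4, 6}   # hypothetical features that actually separate attack traffic

def fitness(mask):
    """Reward selecting informative features; penalise large feature subsets."""
    hits = sum(1 for f in INFORMATIVE if mask[f])
    return hits - 0.1 * sum(mask)

def evolve(pop_size=20, generations=40, mut=0.1):
    # Random initial population, seeded with the two extreme subsets.
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size - 2)]
    pop += [[0] * N_FEATURES, [1] * N_FEATURES]
    best = max(pop, key=fitness)
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - 1:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < mut) for g in child]  # bit-flip mutation
            children.append(child)
        pop = children + [best]                       # elitism keeps the champion
        best = max(pop, key=fitness)
    return best

best_mask = evolve()
```

Elitism makes the best-so-far fitness monotone, so the run always recovers the informative features in this toy instance; the real MOEA instead keeps a Pareto front over detection accuracy and feature count.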

    Poisoning Attacks on Learning-Based Keystroke Authentication and a Residue Feature Based Defense

    Behavioral biometrics, such as keystroke dynamics, are characterized by relatively large variation in the input samples compared to physiological biometrics such as fingerprints and iris. Recent advances in machine learning have produced behavior-based pattern learning methods that offset the effects of this variation by mapping variable behavior patterns to a unique identity with high accuracy. However, this has also exposed the learning systems to attacks that abuse the updating mechanisms of learning by injecting impostor samples to deliberately drift the data toward impostors' patterns. Using the principles of adversarial drift, we develop a class of poisoning attacks, named Frog-Boiling attacks, in which the update samples are crafted with slow changes and random perturbations so that they bypass the classifier's detection. Taking the case of keystroke dynamics, which involves motoric and neurological learning, we demonstrate the success of our attack mechanism. We also present a detection mechanism for the frog-boiling attack that uses correlation between successive training samples to detect spurious input patterns. To measure the effect of adversarial drift in the frog-boiling attack and the effectiveness of the proposed defense mechanism, we use traditional error rates such as FAR, FRR, and EER, as well as a metric based on shifts in the biometric menagerie.
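
The frog-boiling idea and a correlation-based defense can be sketched numerically: each poisoned update moves the stored keystroke pattern only slightly (so a naive per-step size check passes), but correlating candidate updates against the enrollment template exposes the cumulative drift. The hold times, impostor pattern, and thresholds below are invented, and the check is a simplification of the paper's successive-sample correlation defense:

```python
def pearson(x, y):
    """Pearson correlation of two equal-length vectors (0.0 if either is constant)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

# Hypothetical key hold-time template (seconds) and a target impostor pattern:
template = [0.10, 0.15, 0.20, 0.25, 0.30, 0.35]
impostor = [0.40, 0.20, 0.35, 0.15, 0.30, 0.10]

K = 10  # attack length: each update nudges the stored pattern a little further
attack = [[t + k / K * (i - t) for t, i in zip(template, impostor)]
          for k in range(1, K + 1)]

# Each single step is tiny, so a naive "reject big jumps" filter passes it ...
step_ok = all(
    max(abs(b - a) for a, b in zip(s0, s1)) < 0.05
    for s0, s1 in zip([template] + attack, attack)
)
# ... but correlating against the *enrollment* template exposes the drift.
drift_detected = pearson(attack[-1], template) < 0.5
```

By the final update the sample is anti-correlated with the genuine template even though no single step ever looked suspicious, which is exactly the gap a drift-aware defense has to close.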

    Improved Detection for Advanced Polymorphic Malware

    Malicious software (malware) attacks across the internet are increasing at an alarming rate, and cyber-attacks have become increasingly sophisticated and targeted. These targeted attacks aim to compromise networks, steal personal financial information, remove sensitive data, or disrupt operations. Current malware detection approaches work well for previously known signatures, but malware developers use techniques to mutate and change software properties (signatures) to evade detection. Polymorphic malware is practically undetectable with signature-based defensive technologies; today's effective detection rates for it range from 68.75% to 81.25%. New techniques are needed to improve malware detection rates, and improved detection of polymorphic malware can only be accomplished by extracting features beyond the signature realm. Targeted detection of polymorphic malware must rely on extracting key features and characteristics for advanced analysis. Traditionally, malware researchers have relied on limited-dimensional features such as behavior (dynamic analysis) or source/execution code (static analysis). This study's focus was to extract and evaluate a limited set of multidimensional topological data in order to improve detection of polymorphic malware, using multidimensional analysis (file properties, static and dynamic analysis) with machine learning algorithms. The research demonstrated that improved polymorphic malware detection can be achieved with machine learning. The study conducted a number of experiments using a standard experimental testing protocol and three advanced algorithms (Metabagging (MB), Instance Based k-Means (IBk) and a Deep Learning Multi-Layer Perceptron) with a limited set of multidimensional data. Experiments delivered detection rates above 99.43% with near-zero false positives.
The study's approach was based on single-case experimental design, a well-accepted protocol for progressive testing. The study constructed a prototype to automate feature extraction, assemble files for analysis, and analyze results through multiple clustering algorithms, and it evaluated large malware sample datasets to understand effectiveness across a wide range of malware. The developed framework automated feature extraction for multidimensional analysis and consisted of four modules: 1) a pre-process module that extracts and generates topological features based on static analysis of machine code and file characteristics, 2) a behavioral analysis module that extracts behavioral characteristics based on file execution (dynamic analysis), 3) an input file construction and submission module, and 4) a machine learning module that employs various advanced algorithms. As in most studies, careful attention was paid to false positive and false negative rates, which reduce overall detection accuracy and effectiveness. This study provides a novel approach to expand the malware body of knowledge and improve detection of polymorphic malware targeting Microsoft operating systems.
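
The payoff of multidimensional features can be shown with a toy nearest-centroid classifier: a polymorphic sample whose mutated code fools static features alone is still caught once dynamic (behavioral) features are appended. All feature values are invented, and nearest-centroid stands in for the study's more advanced learners:

```python
def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def classify(sample, centroids):
    """Label of the nearest centroid by squared Euclidean distance."""
    return min(centroids,
               key=lambda label: sum((a - b) ** 2
                                     for a, b in zip(sample, centroids[label])))

# Invented training vectors: [static_1, static_2] and [dynamic_1, dynamic_2].
benign_static  = [[0.0, 0.1], [0.1, 0.0]]
malware_static = [[1.0, 0.9], [0.9, 1.0]]
benign_dyn  = [[0.0, 0.1], [0.1, 0.0]]
malware_dyn = [[1.0, 0.9], [0.9, 1.0]]

# A polymorphic sample: mutated code disguises the static signature,
# but the run-time behavior still looks like malware.
poly_static, poly_dyn = [0.10, 0.05], [0.90, 1.00]

static_only = {"benign": centroid(benign_static),
               "malware": centroid(malware_static)}
combined = {
    "benign":  centroid([s + d for s, d in zip(benign_static, benign_dyn)]),
    "malware": centroid([s + d for s, d in zip(malware_static, malware_dyn)]),
}
```

Static features alone file the polymorphic sample under "benign"; concatenating the dynamic features flips the decision, which is the multidimensional effect the study exploits at much larger scale.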

    Designing, Building, and Modeling Maneuverable Applications within Shared Computing Resources

    Extending the military principle of maneuver into the war-fighting domain of cyberspace, academic and military researchers have produced many theoretical and strategic works, though few have focused on actual applications and systems that apply this principle. We present our research in designing, building and modeling maneuverable applications in order to gain the system advantages of resource provisioning, application optimization, and cybersecurity improvement. We have coined the phrase “Maneuverable Applications” for distributed and parallel applications that take advantage of the modification, relocation, addition or removal of computing resources, giving the perception of movement. Our work with maneuverable applications has been within shared computing resources, such as the Clemson University Palmetto cluster, where multiple users share access and time on a collection of inter-networked computers and servers. In this dissertation, we describe our implementation and analytic modeling of environments and systems that maneuver computational nodes, network capabilities, and security enhancements to overcome challenges to a cyberspace platform. Specifically, we describe our work to create a system that provisions a big-data computational resource within academic environments. We also present a computing testbed built to allow researchers to study network optimizations of data centers. We discuss our Petri Net model of an adaptable system, which increases its cybersecurity posture in the face of varying levels of threat from malicious actors. Lastly, we present work on integrating these technologies into a prototype resource manager for maneuverable applications and on validating our model using this implementation.

    Real-time fusion and projection of network intrusion activity

    Intrusion Detection Systems (IDS) warn of suspicious or malicious network activity and are a fundamental, yet passive, defense-in-depth layer for modern networks. Prior research has applied information fusion techniques to correlate the alerts of multiple IDSs and group those belonging to the same multi-stage attack into attack tracks. Projecting the next likely step in these tracks potentially enhances an analyst’s situational awareness; however, the reliance on attack plans, complicated algorithms, or expert knowledge of the respective network is prohibitive and prone to obsolescence with the continual deployment of new technology and evolution of hacker tradecraft. This thesis presents a real-time continually learning system capable of projecting attack tracks that does not require a priori knowledge about network architecture or rely on static attack templates. Prediction correctness over time and other metrics are used to assess the system’s performance. The system demonstrates the successful real-time adaptation of the model, including enhancements such as the prediction that a never before observed event is about to occur. The intrusion projection system is framed as part of a larger information fusion and impact assessment architecture for cyber security
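
The template-free projection idea can be sketched as an online first-order transition model over the alert stream, with a reserved probability mass for a never-before-observed next event. Alert names and the smoothing constant are illustrative, not the thesis's actual model:

```python
from collections import defaultdict

class AttackTrackProjector:
    """Learns alert-to-alert transitions on the fly; no a priori attack plans
    or knowledge of the network architecture required."""
    NOVEL = "<novel>"

    def __init__(self, novel_mass=0.1):
        self.counts = defaultdict(lambda: defaultdict(float))
        self.novel_mass = novel_mass  # probability reserved for unseen successors
        self.prev = None

    def observe(self, alert):
        """Fold one IDS alert into the transition model as it arrives."""
        if self.prev is not None:
            self.counts[self.prev][alert] += 1.0
        self.prev = alert

    def project(self):
        """Predict the next alert (and its probability) for the current track."""
        succ = self.counts.get(self.prev, {})
        total = sum(succ.values())
        if total == 0:
            return self.NOVEL, 1.0   # no history: a novel event is all we can say
        dist = {a: (1.0 - self.novel_mass) * c / total for a, c in succ.items()}
        dist[self.NOVEL] = dist.get(self.NOVEL, 0.0) + self.novel_mass
        best = max(dist, key=dist.get)
        return best, dist[best]
```

After seeing the track scan → exploit → exfil repeat, the projector predicts "exploit" following the next "scan"; in an unfamiliar context it instead flags that a never-before-observed event is about to occur, mirroring the adaptation the thesis demonstrates.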