
    DECEPTION BASED TECHNIQUES AGAINST RANSOMWARES: A SYSTEMATIC REVIEW

    Ransomware is currently the most prevalent emerging business risk and seriously affects business continuity and operations. According to the Deloitte Cyber Security Landscape 2022, up to 4,000 ransomware attacks occur daily, while the average time an organization takes to identify a breach is 191 days. Sophisticated cyber-attacks such as ransomware typically pass through multiple consecutive phases (initial foothold, network propagation, and action on objectives) before accomplishing their final objective. This study analyzed decoy-based solutions as an approach (detection, prevention, or mitigation) to counter ransomware. A systematic literature review was conducted, and the results show that deception-based techniques achieve effective and significant performance against ransomware with minimal resources. The review also found that, contrary to the common view of deception as a purely passive approach (i.e., prevention and detection), these techniques possess active capabilities such as ransomware traceback and obstruction (thwarting), file decryption, and decryption key recovery. Based on the literature review, several evaluation methods are also analyzed to measure the effectiveness of these deception-based techniques during implementation.
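    As a concrete illustration of the decoy idea surveyed above, the sketch below plants canary files and raises an alert as soon as any of them is modified or deleted. It is an illustrative example of one possible approach, not a system from the reviewed literature; the directory and file names are hypothetical.

```python
# Minimal decoy-file monitor (illustrative sketch; paths and names are hypothetical).
# Plants canary files, then polls their hashes: any change is treated as a
# possible ransomware encryption event and reported immediately.
import hashlib
import os
import time

DECOY_DIR = "/tmp/decoys"          # hypothetical location for planted canary files
DECOY_NAMES = ["passwords.xlsx", "invoice_2022.docx", "backup_keys.txt"]
POLL_SECONDS = 2

def plant_decoys() -> dict:
    """Create decoy files and remember their content hashes."""
    os.makedirs(DECOY_DIR, exist_ok=True)
    baseline = {}
    for name in DECOY_NAMES:
        path = os.path.join(DECOY_DIR, name)
        with open(path, "wb") as f:
            f.write(os.urandom(4096))   # filler content; real decoys would look authentic
        with open(path, "rb") as f:
            baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def watch(baseline: dict) -> None:
    """Poll decoys; any modification or deletion triggers an alert."""
    while True:
        for path, original_hash in baseline.items():
            try:
                with open(path, "rb") as f:
                    current = hashlib.sha256(f.read()).hexdigest()
            except FileNotFoundError:
                print(f"ALERT: decoy deleted: {path}")
                continue
            if current != original_hash:
                print(f"ALERT: decoy modified (possible encryption): {path}")
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    watch(plant_decoys())
```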

    Markov modeling of moving target defense games

    We introduce a Markov-model-based framework for Moving Target Defense (MTD) analysis. The framework allows modeling of a broad range of MTD strategies, provides general theorems relating the probability that an adversary defeats an MTD strategy to the amount of time and cost the adversary spends, and shows how a multi-level composition of MTD strategies can be analyzed by straightforwardly combining the analyses of the individual strategies. Within the proposed framework we define the concept of security capacity, which measures the strength or effectiveness of an MTD strategy; the security capacity depends on MTD-specific parameters and on more general system parameters. We apply our framework to two concrete MTD strategies.
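    To make the Markov framing concrete, the following sketch uses a simplified two-state model of my own (not the paper's framework or its security-capacity definition) to relate the adversary's success probability to the time spent attacking and to the defender's reconfiguration rate.

```python
# Illustrative two-state Markov model of an MTD game (a simplified construction
# for intuition only).  State 0 = "system safe", state 1 = "adversary succeeded"
# (absorbing).  Each time step the adversary's probe lands with probability p,
# but only if the defender did not reconfigure that step (probability r of
# reconfiguring invalidates the adversary's accumulated knowledge).
import numpy as np

def success_probability(p: float, r: float, steps: int) -> float:
    """P(adversary has defeated the defense within `steps` time steps)."""
    q = p * (1.0 - r)                       # per-step chance an attack lands
    P = np.array([[1.0 - q, q],             # safe -> safe / safe -> compromised
                  [0.0,     1.0]])          # compromised is absorbing
    dist = np.array([1.0, 0.0])             # start in the safe state
    dist = dist @ np.linalg.matrix_power(P, steps)
    return float(dist[1])

# Example: a faster-moving defense (larger r) lowers the adversary's success
# probability for the same amount of time spent attacking.
for r in (0.0, 0.2, 0.5):
    print(f"r={r:.1f}  P(success within 50 steps) = {success_probability(0.05, r, 50):.3f}")
```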

    Investigating Key Techniques to Leverage the Functionality of Ground/Wall Penetrating Radar

    Ground penetrating radar (GPR) has been extensively utilized as a highly efficient and non-destructive testing method for infrastructure evaluation, such as highway rebar detection, bridge deck inspection, asphalt pavement monitoring, underground pipe leakage detection, and railroad ballast assessment. The focus of this dissertation is to investigate key techniques for GPR signal processing from three perspectives: (1) removing or suppressing the radar clutter signal; (2) detecting the underground target or region of interest (RoI) in the GPR image; (3) imaging the underground target to eliminate or alleviate feature distortion and reconstructing the shape of the target with good fidelity. In the first part of this dissertation, a low-rank and sparse representation based approach is designed to remove the clutter produced by rough ground surface reflection for impulse radar. In the second part, Hilbert Transform and 2-D Renyi entropy based statistical analysis is explored to improve RoI detection efficiency and to reduce the computational cost of more sophisticated data post-processing. In the third part, a back-projection imaging algorithm is designed for both ground-coupled and air-coupled multistatic GPR configurations. Because the refraction at the air-ground interface is considered and the spatial offsets between the transceiver antennas are compensated in this algorithm, the data collected by the receiver antennas in the time domain can be accurately mapped back to the spatial domain and the targets can be imaged in the scene space under test. Experimental results validate that the proposed three-stage cascade signal processing methodology improves the performance of the GPR system.
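    The clutter-removal stage lends itself to a short illustration. The sketch below uses a plain truncated-SVD subtraction as a simplified stand-in for the low-rank and sparse representation approach described in the dissertation; the synthetic B-scan is purely illustrative.

```python
# Simplified low-rank clutter suppression for a GPR B-scan (a stand-in for the
# dissertation's low-rank and sparse representation method, which is more elaborate).
# Idea: the ground-surface reflection is nearly identical across adjacent traces,
# so it dominates the leading singular components of the B-scan matrix; removing
# those components leaves the (sparse) target responses.
import numpy as np

def remove_clutter(bscan: np.ndarray, rank: int = 1) -> np.ndarray:
    """bscan: 2-D array (time samples x traces). Returns the clutter-suppressed B-scan."""
    U, s, Vt = np.linalg.svd(bscan, full_matrices=False)
    s_clutter = s.copy()
    s_clutter[rank:] = 0.0                 # keep only the leading component(s)
    clutter = (U * s_clutter) @ Vt         # low-rank estimate of the ground reflection
    return bscan - clutter                 # residual contains the target signatures

# Synthetic demo: a flat ground reflection plus one hyperbola-like target response.
t = np.arange(256)[:, None]
traces = np.arange(64)[None, :]
ground = np.exp(-((t - 40) ** 2) / 20.0) * np.ones_like(traces, dtype=float)
target = np.exp(-((t - (120 + 0.05 * (traces - 32) ** 2)) ** 2) / 10.0)
bscan = ground + 0.3 * target + 0.01 * np.random.randn(256, 64)
cleaned = remove_clutter(bscan, rank=1)
print("energy before/after clutter removal:", np.sum(bscan**2), np.sum(cleaned**2))
```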

    The Responsibility Quantification (ResQu) Model of Human Interaction with Automation

    Intelligent systems and advanced automation are involved in information collection and evaluation, in decision-making, and in the implementation of chosen actions. In such systems, human responsibility becomes equivocal. Understanding human causal responsibility is particularly important when intelligent autonomous systems can harm people, as with autonomous vehicles or, most notably, autonomous weapon systems (AWS). Using information theory, we develop a responsibility quantification (ResQu) model of human involvement in intelligent automated systems and demonstrate its application to decisions regarding AWS. The analysis reveals that the human's comparative responsibility for outcomes is often low, even when major functions are allocated to the human. Thus, broadly stated policies of keeping humans in the loop and having meaningful human control are misleading and cannot truly direct decisions on how to involve humans in intelligent systems and advanced automation. The current model is an initial step toward the complex goal of creating a comprehensive responsibility model that will enable quantification of human causal responsibility. It assumes stationarity and full knowledge of the characteristics of the human and the automation, and it ignores temporal aspects. Despite these limitations, it can aid in the analysis of system design alternatives and policy decisions regarding human responsibility in intelligent systems and advanced automation.
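    For intuition only, the following sketch computes a toy information-theoretic share of responsibility from a joint distribution over human action, automation output, and outcome. It is my own simplified proxy and should not be read as the ResQu measure itself, which is defined differently and more carefully in the paper.

```python
# Toy information-theoretic proxy for comparative human responsibility
# (an illustrative construction, NOT the paper's ResQu measure).
# Responsibility share here is the information the human action H contributes
# about the outcome Y beyond the automation output A:
#     share = I(H ; Y | A) / I(H, A ; Y)
import numpy as np

def entropy(p: np.ndarray) -> float:
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def human_information_share(joint: np.ndarray) -> float:
    """joint[h, a, y]: joint distribution over human action, automation output, outcome."""
    p_hay = joint / joint.sum()
    p_ay = p_hay.sum(axis=0)                 # P(A, Y)
    p_ha = p_hay.sum(axis=2)                 # P(H, A)
    p_a = p_hay.sum(axis=(0, 2))             # P(A)
    p_y = p_hay.sum(axis=(0, 1))             # P(Y)
    # I(H ; Y | A) = H(H|A) + H(Y|A) - H(H,Y|A)
    i_h_y_given_a = ((entropy(p_ha) - entropy(p_a))
                     + (entropy(p_ay) - entropy(p_a))
                     - (entropy(p_hay) - entropy(p_a)))
    # I(H, A ; Y) = H(Y) - H(Y | H, A)
    i_ha_y = entropy(p_y) - (entropy(p_hay) - entropy(p_ha))
    return i_h_y_given_a / i_ha_y if i_ha_y > 0 else 0.0

# Example: the outcome mostly follows the automation and the human only rarely
# overrides it, so the human's information share comes out small.
joint = np.zeros((2, 2, 2))
joint[0, 0, 0] = 0.42; joint[1, 1, 1] = 0.42    # human agrees, outcome follows automation
joint[1, 0, 1] = 0.04; joint[0, 1, 0] = 0.04    # human disagrees and prevails
joint[1, 0, 0] = 0.04; joint[0, 1, 1] = 0.04    # human disagrees, automation prevails
print(f"human information share ~ {human_information_share(joint):.3f}")
```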

    Real-time fusion and projection of network intrusion activity

    Intrusion Detection Systems (IDSs) warn of suspicious or malicious network activity and are a fundamental, yet passive, defense-in-depth layer for modern networks. Prior research has applied information fusion techniques to correlate the alerts of multiple IDSs and to group those belonging to the same multi-stage attack into attack tracks. Projecting the next likely step in these tracks can enhance an analyst's situational awareness; however, reliance on attack plans, complicated algorithms, or expert knowledge of the respective network is prohibitive and prone to obsolescence as new technology is continually deployed and hacker tradecraft evolves. This thesis presents a real-time, continually learning system capable of projecting attack tracks that requires neither a priori knowledge of the network architecture nor static attack templates. Prediction correctness over time and other metrics are used to assess the system's performance. The system demonstrates successful real-time adaptation of the model, including enhancements such as predicting that a never-before-observed event is about to occur. The intrusion projection system is framed as part of a larger information fusion and impact assessment architecture for cyber security.
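    The projection idea can be illustrated with a minimal online model. The sketch below is a toy construction, not the thesis's system: it keeps per-event transition counts that are updated as alerts arrive and reserves a small probability mass for a never-before-observed next event.

```python
# Toy continually-learning projector for the next event in an attack track
# (an illustrative sketch of the general idea, not the thesis's actual system).
from collections import defaultdict

class TrackProjector:
    def __init__(self, novelty_mass: float = 0.1):
        self.counts = defaultdict(lambda: defaultdict(int))   # counts[prev][next]
        self.novelty_mass = novelty_mass                       # reserved for unseen events

    def observe(self, prev_event: str, next_event: str) -> None:
        """Update the model online with one observed transition."""
        self.counts[prev_event][next_event] += 1

    def project(self, current_event: str) -> dict:
        """Return a distribution over the next event, including a '<novel>' outcome."""
        nexts = self.counts[current_event]
        total = sum(nexts.values())
        if total == 0:
            return {"<novel>": 1.0}
        dist = {e: (1.0 - self.novelty_mass) * c / total for e, c in nexts.items()}
        dist["<novel>"] = self.novelty_mass
        return dist

# Example usage on a hypothetical multi-stage attack track.
proj = TrackProjector()
track = ["scan", "exploit", "privilege-escalation", "exfiltration"]
for prev, nxt in zip(track, track[1:]):
    proj.observe(prev, nxt)
print(proj.project("exploit"))   # {'privilege-escalation': 0.9, '<novel>': 0.1}
```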

    The History of the Quantitative Methods in Finance Conference Series. 1992-2007

    This report charts the history of the Quantitative Methods in Finance (QMF) conference series from its beginning in 1993 to the 15th conference in 2007. It lists, alphabetically, the 1,037 speakers who presented across the 15 conferences and the titles of their papers.

    Automating Cyber Analytics

    Model-based security metrics are a growing area of cyber security research concerned with measuring the risk exposure of an information system. These metrics are typically studied in isolation, with the formulation of the test itself being the primary finding in publications. As a result, there is a flood of metric specifications available in the literature but a corresponding dearth of analyses verifying results for a given metric calculation under different conditions or comparing the efficacy of one measurement technique against another. The motivation of this thesis is to create a systematic methodology for model-based security metric development, analysis, integration, and validation, and in doing so to fill a critical gap in the way we view and improve a system's security. In order to understand the security posture of a system before it is rolled out and as it evolves, we present in this dissertation an end-to-end solution for the automated measurement of security metrics needed to identify risk early and accurately. To our knowledge this is a novel capability in design-time security analysis, and it provides the foundation for ongoing research into predictive cyber security analytics. Modern development environments contain a wealth of information in infrastructure-as-code repositories, continuous build systems, and container descriptions that could inform security models, but risk evaluation based on these sources is ad hoc at best and often simply left until deployment. Our goal in this work is to lay the groundwork for security measurement to be a practical part of the system design, development, and integration lifecycle.

    In this thesis we provide a framework for the systematic validation of the existing security metrics body of knowledge. In doing so we endeavour not only to survey the current state of the art, but also to create a common platform on which future research in the area can be conducted. We then demonstrate the utility of our framework through the evaluation of leading security metrics against a reference set of system models we have created. We investigate how to calibrate security metrics for different use cases and establish a new methodology for security metric benchmarking.

    We further explore the research avenues unlocked by automation through our concept of an API-driven S-MaaS (Security Metrics-as-a-Service) offering. We review our design considerations in packaging security metrics for programmatic access, and discuss how various client access patterns are anticipated in our implementation strategy. Using existing metric processing pipelines as a reference, we show how the simple, modular interfaces in S-MaaS support dynamic composition and orchestration.

    Next we review aspects of our framework that can benefit from optimization and further automation through machine learning. First we create a dataset of network models labeled with the corresponding security metrics. By training classifiers to predict security values based only on network inputs, we can avoid the computationally expensive attack graph generation steps. We use our findings from this simple experiment to motivate our current lines of research into supervised and unsupervised techniques such as network embeddings, interaction rule synthesis, and reinforcement learning environments.

    Finally, we examine the results of our case studies. We summarize our security analysis of a large-scale network migration and list the friction points along the way that are remediated by this work. We relate how our research for a large-scale performance benchmarking project has influenced our vision for the future of security metrics collection and analysis through DevOps automation. We then describe how we applied our framework to measure the incremental security impact of running a distributed stream processing system inside a hardware trusted execution environment.
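    The metric-prediction experiment mentioned above can be sketched as follows; the features, labels, and data are synthetic stand-ins of my own, intended only to show the shape of predicting a security value directly from network inputs instead of generating an attack graph.

```python
# Minimal sketch of the "predict the metric, skip the attack graph" idea
# (synthetic data and hypothetical feature names; not the thesis's dataset).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
# Hypothetical per-network features: host count, mean vulnerabilities per host,
# internet-exposed service count, and number of subnets.
X = np.column_stack([
    rng.integers(5, 500, n),      # hosts
    rng.uniform(0, 10, n),        # mean CVEs per host
    rng.integers(0, 50, n),       # internet-exposed services
    rng.integers(1, 20, n),       # subnets
])
# Stand-in label: "high risk" when exposure and vulnerability density are both large;
# in practice this label would come from an expensive attack-graph-based metric.
y = ((X[:, 1] * X[:, 2]) > 100).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```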