
    A Novel Authentication and Validation Mechanism for Analyzing Syslogs Forensically

    This research proposes a novel technique for authenticating and validating syslogs for forensic analysis. The technique uses a modification of the Needham-Schroeder protocol, which relies on nonces (numbers used only once) and public keys. Syslogs, which were developed from an event-logging perspective and not from an evidence-sustaining one, are system treasure maps that chart out and pinpoint attacks and attack attempts. Over the past few years, research on securing syslogs has yielded enhanced syslog protocols that focus on tamper prevention and detection. However, many of these protocols, though efficient from a security perspective, are inadequate when forensics comes into play. From a legal perspective, any kind of evidence found at a crime scene needs to be validated. In addition, any digital forensic evidence presented in court needs to be admissible, authentic, believable, and reliable. Currently, a patchy log on the server side and client side cannot be considered formal authentication of a wrongdoer. This work presents a method that ties together, authenticates, and validates all the entities involved in the crime scene: the user using the application, the system that is being used, and the application being used on the system by the user. This means that instead of merely transmitting the header and the message, which is the standard syslog protocol format, the syslog entry is transmitted to the logging server along with the user fingerprint, application fingerprint, and system fingerprint. The assignment of digital fingerprints and the addition of a challenge-response mechanism to the underlying syslogging mechanism aim to validate generated syslogs forensically.
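    A minimal sketch of the kind of augmented syslog entry the abstract describes, assuming SHA-256 digests stand in for the user, application, and system fingerprints and an HMAC stands in for the public-key nonce exchange of the modified Needham-Schroeder protocol; all names, fields, and keys below are illustrative, not the paper's actual format.

```python
import hashlib
import hmac
import json
import os


def fingerprint(material: bytes) -> str:
    """Illustrative fingerprint: a SHA-256 digest of identifying material."""
    return hashlib.sha256(material).hexdigest()


def build_augmented_entry(header: str, message: str,
                          user_id: bytes, app_binary: bytes, system_id: bytes) -> dict:
    """Attach user, application, and system fingerprints to a syslog entry,
    instead of sending only the standard header and message."""
    return {
        "header": header,
        "message": message,
        "user_fp": fingerprint(user_id),
        "app_fp": fingerprint(app_binary),
        "system_fp": fingerprint(system_id),
    }


def answer_challenge(entry: dict, nonce: bytes, shared_key: bytes) -> str:
    """Respond to the logging server's nonce challenge by binding the nonce to
    the fingerprinted entry (HMAC is a stand-in for the public-key step)."""
    payload = json.dumps(entry, sort_keys=True).encode() + nonce
    return hmac.new(shared_key, payload, hashlib.sha256).hexdigest()


if __name__ == "__main__":
    entry = build_augmented_entry(
        "<34>1 2024-01-01T00:00:00Z host app - -",
        "login failure for user alice",
        user_id=b"alice", app_binary=b"app-binary-bytes", system_id=b"host-serial-0001",
    )
    nonce = os.urandom(16)                      # number used only once
    print(answer_challenge(entry, nonce, shared_key=b"demo-key"))
```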

    Understanding Security Threats in Cloud

    As cloud computing has become a trend in the computing world, understanding its security concerns becomes essential for improving service quality and expanding business scale. This dissertation studies the security issues in a public cloud from three aspects. First, we investigate a new threat called the power attack in the cloud. Second, we perform a systematic measurement of the public cloud to understand how cloud vendors react to existing security threats. Finally, we propose a novel technique to perform data reduction on audit data to improve system capacity and hence help enhance security in the cloud. For the power attack, we exploit various attack vectors in platform-as-a-service (PaaS), infrastructure-as-a-service (IaaS), and software-as-a-service (SaaS) cloud environments. To demonstrate the feasibility of launching a power attack, we conduct a series of testbed-based experiments and data-center-level simulations. Moreover, we give a detailed analysis of how different power management methods could affect a power attack and how to mitigate such an attack. Our experimental results and analysis show that power attacks pose a serious threat to modern data centers and should be taken into account when deploying new high-density servers and power management techniques. In the measurement study, we mainly investigate how cloud vendors have reacted to the co-residence threat inside the cloud, in terms of Virtual Machine (VM) placement, network management, and Virtual Private Cloud (VPC). Specifically, through intensive measurement probing, we first profile the dynamic environment of cloud instances inside the cloud. Then, using real experiments, we quantify the impacts of VM placement and network management on co-residence, respectively. Moreover, we explore VPC, a defensive service of Amazon EC2 for security enhancement, from the routing perspective. Because the Advanced Persistent Threat (APT) is a serious cyber threat, cloud vendors are seeking solutions to "connect the suspicious dots" across multiple activities. This requires ubiquitous system auditing over long periods of time, which in turn produces an overwhelmingly large amount of system audit logs. We propose a new approach that exploits the dependency among system events to reduce the number of log entries while still supporting high-quality forensic analysis. In particular, we first propose an aggregation algorithm that preserves event dependency during data reduction to ensure high-quality forensic analysis. We then propose an aggressive reduction algorithm and exploit domain knowledge for further data reduction. We conduct a comprehensive evaluation on real-world auditing systems using more than one month of log traces to validate the efficacy of our approach.
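    A toy illustration of dependency-preserving log reduction in the spirit described above, assuming each audit record is a (subject, operation, object) triple; only adjacent duplicates of the same triple are merged into a counted entry, so the relative order of distinct events, and hence the information-flow dependencies they induce, is kept. The dissertation's actual aggregation and aggressive-reduction algorithms are considerably more involved.

```python
from typing import List, Tuple

# An audit event: (subject process, operation, object such as a file or socket).
Event = Tuple[str, str, str]


def aggregate(events: List[Event]) -> List[Tuple[Event, int]]:
    """Collapse runs of identical (subject, op, object) events into one entry
    with a repetition count; distinct events keep their original order."""
    reduced: List[Tuple[Event, int]] = []
    for ev in events:
        if reduced and reduced[-1][0] == ev:
            reduced[-1] = (ev, reduced[-1][1] + 1)
        else:
            reduced.append((ev, 1))
    return reduced


if __name__ == "__main__":
    trace = [
        ("nginx", "read", "/var/www/index.html"),
        ("nginx", "read", "/var/www/index.html"),
        ("nginx", "write", "socket:10.0.0.5:443"),
        ("nginx", "read", "/var/www/index.html"),
    ]
    for event, count in aggregate(trace):
        print(count, event)
```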

    POIROT: Aligning Attack Behavior with Kernel Audit Records for Cyber Threat Hunting

    Cyber threat intelligence (CTI) is being used to search for indicators of attacks that might have compromised an enterprise network for a long time without being discovered. To make this analysis more effective, CTI open standards have incorporated descriptive relationships showing how the indicators or observables are related to each other. However, these relationships are either completely overlooked in information gathering or not used for threat hunting. In this paper, we propose a system, called POIROT, which uses these correlations to uncover the steps of a successful attack campaign. We use kernel audits as a reliable source that covers all causal relations and information flows among system entities, and we model threat hunting as an inexact graph pattern matching problem. Our technical approach is based on a novel similarity metric that assesses an alignment between a query graph constructed from CTI correlations and a provenance graph constructed from kernel audit log records. We evaluate POIROT on publicly released real-world incident reports as well as reports of an adversarial engagement designed by DARPA, including ten distinct attack campaigns against different OS platforms such as Linux, FreeBSD, and Windows. Our evaluation results show that POIROT is capable of searching inside graphs containing millions of nodes and pinpointing the attacks in a few minutes, and they illustrate that CTI correlations can be used as robust and reliable artifacts for threat hunting. Comment: The final version of this paper will appear in the ACM SIGSAC Conference on Computer and Communications Security (CCS'19), November 11-15, 2019, London, United Kingdom.
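    A much-simplified sketch of scoring a candidate alignment between a CTI query graph and a provenance graph, assuming plain adjacency sets and a score equal to the fraction of query flows that the alignment can also realize as flows in the provenance graph; POIROT's actual influence-weighted similarity metric and its search procedure are not reproduced here, and all node names are hypothetical.

```python
from typing import Dict, Set

Graph = Dict[str, Set[str]]   # adjacency: node -> set of successor nodes


def reachable(graph: Graph, src: str, dst: str) -> bool:
    """Depth-first check that an information flow exists from src to dst."""
    stack, seen = [src], {src}
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False


def alignment_score(query: Graph, prov: Graph, align: Dict[str, str]) -> float:
    """Fraction of query-graph flows realized in the provenance graph under a
    candidate node alignment (a stand-in for the paper's similarity metric)."""
    flows = [(u, v) for u, succs in query.items() for v in succs]
    if not flows:
        return 0.0
    matched = sum(
        1 for u, v in flows
        if u in align and v in align and reachable(prov, align[u], align[v])
    )
    return matched / len(flows)


if __name__ == "__main__":
    query = {"dropper": {"payload"}, "payload": {"exfil"}}
    prov = {"firefox": {"tmp.exe"}, "tmp.exe": {"cmd.exe"}, "cmd.exe": {"10.0.0.9:443"}}
    align = {"dropper": "firefox", "payload": "tmp.exe", "exfil": "10.0.0.9:443"}
    print(alignment_score(query, prov, align))   # 1.0 for this toy alignment
```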

    Intelligent and Improved Self-Adaptive Anomaly based Intrusion Detection System for Networks

    With the advent of digital technology, computer networks have developed rapidly at an unprecedented pace, contributing tremendously to social and economic development. They have become the backbone of all critical sectors and of the top multinational companies. Unfortunately, security threats to computer networks have increased dramatically over the last decade, becoming much more brazen and bold. Intrusions or attacks on computers and networks are activities or attempts to jeopardize the main system security objectives, namely confidentiality, integrity, and availability. They mostly lead to great financial losses and massive sensitive data leaks, thereby decreasing the efficiency and quality of productivity of an organization. There is a great need for an effective Network Intrusion Detection System (NIDS), a security tool designed to interpret intrusion attempts in incoming network traffic and thereby provide a solid line of protection against inside and outside intruders. In this work, we propose to optimize a very popular soft computing tool prevalently used for intrusion detection, namely the Back Propagation Neural Network (BPNN), using a novel machine learning framework called “ISAGASAA”, based on an Improved Self-Adaptive Genetic Algorithm (ISAGA) and a Simulated Annealing Algorithm (SAA). ISAGA is our variant of the standard Genetic Algorithm (GA), developed by improving GA with an Adaptive Mutation Algorithm (AMA) and optimization strategies. The optimization strategies employed are Parallel Processing (PP) and Fitness Value Hashing (FVH), which reduce execution time and convergence time and save processing power. SAA was incorporated into ISAGA to optimize its heuristic search. Experimental results based on the Kyoto University benchmark dataset (version 2015) demonstrate that our optimized BPNN-based NIDS, called “ANID BPNN-ISAGASAA”, outperforms several state-of-the-art approaches in terms of detection rate and false positive rate. Moreover, improving GA through FVH and PP saves processing power and execution time. Thus, our model is well suited for network anomaly detection.
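    A minimal sketch of the Fitness Value Hashing idea mentioned above, assuming fitness evaluation (here a dummy stand-in for training and scoring a BPNN configuration) is the expensive step: each chromosome is hashed and repeated evaluations are served from a cache. The paper's actual GA operators, adaptive mutation, and simulated annealing step are not shown.

```python
import hashlib
from typing import Callable, Dict, Tuple

Chromosome = Tuple[float, ...]


def make_cached_fitness(fitness: Callable[[Chromosome], float]):
    """Wrap an expensive fitness function with Fitness Value Hashing: hash each
    chromosome and reuse previously computed fitness values."""
    cache: Dict[str, float] = {}

    def cached(chromosome: Chromosome) -> float:
        key = hashlib.md5(repr(chromosome).encode()).hexdigest()
        if key not in cache:
            cache[key] = fitness(chromosome)   # expensive evaluation happens only once
        return cache[key]

    return cached


if __name__ == "__main__":
    # Hypothetical stand-in for evaluating a BPNN configuration on a dataset.
    def dummy_fitness(c: Chromosome) -> float:
        return -sum(x * x for x in c)

    fit = make_cached_fitness(dummy_fitness)
    print(fit((0.1, 0.2, 0.3)))   # computed once...
    print(fit((0.1, 0.2, 0.3)))   # ...then served from the hash table
```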

    Towards an efficient vulnerability analysis methodology for better security risk management

    Risk management is a process that allows IT managers to balance the cost of protective measures against gains in mission capability. A system administrator has to make a decision and choose an appropriate security plan that maximizes resource utilization. However, making that decision is not a trivial task. Most organizations have tight budgets for IT security; therefore, the chosen plan must be reviewed as thoroughly as other management decisions. Unfortunately, even the best-practice security risk management frameworks do not provide adequate information for effective risk management. Vulnerability scanning and penetration testing, which form the core of traditional risk management, identify only the set of system vulnerabilities. Given the complexity of today's network infrastructure, it is not enough to consider the presence or absence of vulnerabilities in isolation. Materializing a threat typically requires combining multiple attacks that exploit different vulnerabilities. Such a requirement is far beyond the capabilities of current-day vulnerability scanners. Consequently, assessing the cost of an attack or the cost of implementing appropriate security controls is possible only in a piecemeal manner. In this work, we develop and formalize a new network vulnerability analysis model. The model concisely encodes the contributions of different security conditions that lead to system compromise. We extend the model with a systematic risk assessment methodology that supports reasoning under uncertainty in order to evaluate the vulnerability exploitation probability. We develop a cost model to quantify the potential loss and gain that can occur in a system if certain conditions are met (or protected), and we quantify the security control cost incurred to implement a set of security hardening measures. We propose solutions for the system administrator's decision problems covering risk analysis and risk mitigation analysis. Finally, we extend the vulnerability assessment model to the areas of intrusion detection and forensic investigation.
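    A small illustration of the kind of reasoning under uncertainty the abstract refers to, assuming a simplified AND/OR attack graph with independent exploit success probabilities: a condition is reached if any one exploit succeeds (noisy-OR) and an exploit succeeds only if all of its preconditions hold (AND). The dissertation's actual model, cost quantification, and decision analysis go well beyond this sketch, and the conditions named below are hypothetical.

```python
from typing import Dict, List, Optional, Tuple

# Each derived condition is reachable via one or more exploits (OR); each exploit
# has a success probability and requires all of its preconditions (AND).
AttackGraph = Dict[str, List[Tuple[float, List[str]]]]


def compromise_probability(graph: AttackGraph, condition: str,
                           memo: Optional[Dict[str, float]] = None) -> float:
    """Probability that `condition` can be attained, assuming independence."""
    memo = {} if memo is None else memo
    if condition in memo:
        return memo[condition]
    exploits = graph.get(condition)
    if not exploits:                 # leaf: a condition the attacker starts with
        memo[condition] = 1.0
        return 1.0
    p_fail_all = 1.0
    for p_exploit, preconditions in exploits:
        p_path = p_exploit
        for pre in preconditions:
            p_path *= compromise_probability(graph, pre, memo)
        p_fail_all *= (1.0 - p_path)
    memo[condition] = 1.0 - p_fail_all
    return memo[condition]


if __name__ == "__main__":
    g: AttackGraph = {
        "root_on_db": [(0.6, ["user_on_web"]), (0.3, ["vpn_access"])],
        "user_on_web": [(0.8, [])],
        "vpn_access": [(0.5, [])],
    }
    print(round(compromise_probability(g, "root_on_db"), 3))   # 0.558
```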

    Doctor of Philosophy

    A modern software system is a composition of parts that are themselves highly complex: operating systems, middleware, libraries, servers, and so on. In principle, compositionality of interfaces means that we can understand any given module independently of the internal workings of other parts. In practice, however, abstractions are leaky, and with every generation, modern software systems grow in complexity. Traditional ways of understanding failures, explaining anomalous executions, and analyzing performance are reaching their limits in the face of emergent behavior, unrepeatability, cross-component execution, software aging, and adversarial changes to the system at run time. Deterministic systems analysis has the potential to change the way we analyze and debug software systems. Recorded once, the execution of the system becomes an independent artifact that can be analyzed offline. The availability of the complete system state, the guaranteed behavior of re-execution, and the absence of limitations on the run-time complexity of analysis collectively enable deep, iterative, and automatic exploration of the dynamic properties of the system. This work creates a foundation for making deterministic replay a ubiquitous system analysis tool. It defines design and engineering principles for building fast and practical replay machines capable of capturing the complete execution of the entire operating system with an overhead of a few percent on a realistic workload and with minimal installation costs. To provide an intuitive interface for constructing replay analysis tools, this work implements a powerful virtual machine introspection layer that allows an analysis algorithm to be programmed against the state of the recorded system through familiar source-level variable and type names. To support performance analysis, the replay engine provides a faithful performance model of the original execution during replay.
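    A toy illustration of the record/replay principle at the level of a single process rather than a whole operating system: nondeterministic inputs are logged during recording and fed back during replay, so the re-executed run repeats the original exactly. The class and its interface are purely illustrative and bear no relation to the dissertation's actual replay machine or introspection layer.

```python
import random
import time
from typing import List, Optional


class ReplayClock:
    """Record/replay wrapper around two nondeterministic inputs. In 'record'
    mode every value handed to the program is appended to a log; in 'replay'
    mode the logged values are returned instead."""

    def __init__(self, mode: str, log: Optional[List[float]] = None):
        self.mode = mode
        self.log = [] if log is None else log
        self._pos = 0

    def _next(self, produce):
        if self.mode == "record":
            value = produce()
            self.log.append(value)
            return value
        value = self.log[self._pos]
        self._pos += 1
        return value

    def now(self) -> float:
        return self._next(time.time)

    def rand(self) -> float:
        return self._next(random.random)


if __name__ == "__main__":
    rec = ReplayClock("record")
    first = [rec.now(), rec.rand(), rec.rand()]
    rep = ReplayClock("replay", log=list(rec.log))
    second = [rep.now(), rep.rand(), rep.rand()]
    assert first == second      # the replayed run reproduces the recorded one
```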

    An Interactive Relaxation Approach for Anomaly Detection and Preventive Measures in Computer Networks

    It is proposed to develop a framework for detecting and analyzing small and widespread changes in specific dynamic characteristics of several nodes. The characteristics are measured locally at each node in a large network of computers and analyzed using a computational paradigm known as the relaxation technique. The goal is to be able to detect the onset of a worm or virus as it originates, spreads out, attacks, and disables the entire network. Currently, selective disabling of one or more features across an entire subnet, e.g. via firewalls, provides only limited security and keeps us from designing high-performance net-centric systems. The most desirable response is to surgically disable one or more nodes, or to isolate one or more subnets. The proposed research seeks to model virus/worm propagation as a spatio-temporal process. Such models have been successfully applied in heat flow and in evidence- or gestalt-driven perception of images, among others. In particular, we develop an iterative technique driven by the self-assessed dynamic status of each node in a network. The status of each node is updated incrementally in concurrence with its connected neighbors to enable timely identification of compromised nodes and subnets. Several key insights from image analysis of line diagrams, obtained through an iterative and relaxation-driven node labeling method, are explored to help develop this new framework.
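    A minimal sketch of relaxation-style label updating of the kind described above, assuming each node starts from a locally measured anomaly score and repeatedly blends it with the average score of its neighbors until the assignment stabilizes; the weight, update rule, and node names are illustrative stand-ins for the proposed framework.

```python
from typing import Dict, List

Graph = Dict[str, List[str]]          # node -> list of neighboring nodes


def relax(graph: Graph, local_score: Dict[str, float],
          weight: float = 0.5, iters: int = 50, tol: float = 1e-6) -> Dict[str, float]:
    """Iteratively blend each node's locally measured anomaly score with the
    mean score of its neighbors until the labeling stops changing."""
    score = dict(local_score)
    for _ in range(iters):
        new, delta = {}, 0.0
        for node, neighbors in graph.items():
            neigh = sum(score[n] for n in neighbors) / len(neighbors) if neighbors else 0.0
            new[node] = (1 - weight) * local_score[node] + weight * neigh
            delta = max(delta, abs(new[node] - score[node]))
        score = new
        if delta < tol:
            break
    return score


if __name__ == "__main__":
    net: Graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
    local = {"a": 0.9, "b": 0.8, "c": 0.1, "d": 0.05}   # self-assessed anomaly scores
    final = relax(net, local)
    print({n: round(s, 3) for n, s in final.items()})   # "a" and "b" remain suspicious
```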