33 research outputs found

    Lateral Movement in Windows Systems and Detecting the Undetected ShadowMove

    Get PDF
    Lateral movement is a pervasive threat that exists because modern networked systems that provide access to multiple users are far more efficient than their non-networked counterparts. It is a well-known attack methodology, and extensive research has been completed on preventing lateral movement in enterprise systems. However, attackers are using more sophisticated methods to move laterally that bypass typical detection systems. This research comprehensively reviews the problems in lateral movement detection and outlines common defenses to protect modern systems from lateral movement attacks. A literature review is conducted, outlining new techniques for the automatic detection of malicious lateral movement, explaining common attack methods utilized by advanced persistent threats, and describing components built into the Windows operating system that can assist with discovering malicious lateral movement. Finally, a novel method for moving laterally is introduced and studied, and an original method for detecting this form of lateral movement is proposed.

    A Novel Method for Moving Laterally and Discovering Malicious Lateral Movements in Windows Operating Systems: A Case Study

    Get PDF
    Lateral movement is a pervasive threat because modern networked systems that provide access to multiple users are far more efficient than their non-networked counterparts. It is a well-known attack methodology with extensive research investigating the prevention of lateral movement in enterprise systems. However, attackers use increasingly sophisticated methods to move laterally that bypass typical detection systems. This research comprehensively reviews the problems in lateral movement detection and outlines common defenses to protect modern systems from lateral movement attacks. A literature review outlines techniques for the automatic detection of malicious lateral movement, explaining common attack methods utilized by advanced persistent threats and components built into the Windows operating system that can assist with discovering malicious lateral movement. Finally, a novel approach for moving laterally designed by other security researchers is reviewed and studied, an original process for detecting this method of lateral movement is proposed, and the application of the detection methodology is expanded.
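    Both theses above lean on records that Windows already keeps, such as Security event logs, to surface lateral movement. As a minimal illustration of that general idea (not the authors' detection method), the sketch below scans exported logon events for network logons (Event ID 4624, logon type 3) and flags source/destination/user combinations never seen during a benign baseline window; the CSV field names and file paths are assumptions.

```python
# Minimal sketch (not the theses' detector): flag first-seen remote logons
# from exported Windows Security events. Event ID 4624 with LogonType 3
# corresponds to a network logon; the CSV column names are assumed here.
import csv
from collections import defaultdict

def load_logon_edges(path):
    """Yield (source_ip, target_host, user) for each network logon in the export."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("EventID") == "4624" and row.get("LogonType") == "3":
                yield (row["IpAddress"], row["Computer"], row["TargetUserName"])

def build_baseline(path):
    """Record which remote-logon edges were observed during a presumed-benign window."""
    return set(load_logon_edges(path))

def flag_new_edges(path, baseline):
    """Report logon edges never seen in the baseline -- candidate lateral movement."""
    alerts = defaultdict(int)
    for edge in load_logon_edges(path):
        if edge not in baseline:
            alerts[edge] += 1
    return alerts

if __name__ == "__main__":
    baseline = build_baseline("logons_baseline.csv")   # hypothetical export
    for (src, dst, user), count in flag_new_edges("logons_today.csv", baseline).items():
        print(f"new remote logon: {user} from {src} to {dst} ({count}x)")
```

    A first-seen heuristic like this is noisy on its own and would normally be combined with further behavioral context before alerting.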

    A Survey on Enterprise Network Security: Asset Behavioral Monitoring and Distributed Attack Detection

    Full text link
    Enterprise networks that host valuable assets and services are popular and frequent targets of distributed network attacks. To cope with the ever-increasing threats, industry and research communities develop systems and methods to monitor the behaviors of their assets and protect them from critical attacks. In this paper, we systematically survey related research articles and industrial systems to highlight the current status of this arms race in enterprise network security. First, we discuss the taxonomy of distributed network attacks on enterprise assets, including distributed denial-of-service (DDoS) and reconnaissance attacks. Second, we review existing methods for monitoring and classifying the network behavior of enterprise hosts to verify their benign activities and isolate potential anomalies. Third, state-of-the-art methods for detecting distributed network attacks launched by external attackers are elaborated upon, highlighting their merits and bottlenecks. Fourth, as programmable networks and machine learning (ML) techniques are increasingly being adopted by the community, their current applications in network security are discussed. Finally, we highlight several research gaps in enterprise network security to inspire future research.

    A Machine Learning Approach for RDP-based Lateral Movement Detection

    Get PDF
    Detecting cyber threats has been an ongoing research endeavor. In this era, advanced persistent threats (APTs) can incur significant costs for organizations and businesses. The ultimate goal of cybersecurity is to thwart attackers from achieving their malicious intent, whether it is credential stealing, infrastructure takeover, or program sabotage. Every cyberattack goes through several stages before its termination, and lateral movement (LM) is one stage of particular importance. The Remote Desktop Protocol (RDP) is commonly used during LM to authenticate to hosts the attacker is not authorized to access, leaving footprints in both host and network logs. In this thesis, we propose to detect evidence of LM using an anomaly-based approach that leverages Windows RDP event logs. We explore different feature sets extracted from these logs and evaluate various supervised and unsupervised machine learning (ML) techniques for classifying RDP sessions with high precision and recall. We also compare the performance of our proposed approach to a state-of-the-art approach and demonstrate that our ML model outperforms it in classifying RDP sessions in Windows event logs. In addition, we demonstrate that our model is robust against certain types of adversarial attacks.
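    As a rough sketch of the anomaly-based direction described above (not the thesis' actual feature set or model), the snippet below derives a few session-level features from RDP logon records and scores them with scikit-learn's IsolationForest; the column names and feature choices are assumptions.

```python
# Hedged sketch of anomaly scoring for RDP sessions (not the thesis' model).
# Assumes a pandas DataFrame of RDP logon records with hypothetical columns:
# timestamp, source_host, target_host, duration_seconds.
import pandas as pd
from sklearn.ensemble import IsolationForest

def featurize(sessions: pd.DataFrame) -> pd.DataFrame:
    """Turn raw RDP session records into simple numeric features."""
    ts = pd.to_datetime(sessions["timestamp"])
    return pd.DataFrame({
        "hour": ts.dt.hour,                       # off-hours logons tend to stand out
        "duration": sessions["duration_seconds"],
        "src_rarity": 1.0 / sessions.groupby("source_host")["source_host"].transform("count"),
        "dst_rarity": 1.0 / sessions.groupby("target_host")["target_host"].transform("count"),
    })

def score_sessions(train: pd.DataFrame, test: pd.DataFrame) -> pd.Series:
    """Fit on presumed-benign sessions; return anomaly scores (lower = more anomalous)."""
    model = IsolationForest(n_estimators=200, contamination="auto", random_state=0)
    model.fit(featurize(train))
    return pd.Series(model.score_samples(featurize(test)), index=test.index)
```

    In a real deployment the lowest-scoring sessions would be ranked for analyst review rather than alerted on directly.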

    Using Context to Improve Network-based Exploit Kit Detection

    Get PDF
    Today, our computers are routinely compromised while performing seemingly innocuous activities like reading articles on trusted websites (e.g., the NY Times). These compromises are perpetrated via complex interactions involving the advertising networks that monetize these sites. Web-based compromises such as exploit kits are similar to any other scam -- the attacker wants to lure an unsuspecting client into a trap to steal private information or resources -- generating tens of millions of dollars annually. Exploit kits are web-based services specifically designed to capitalize on vulnerabilities in unsuspecting client computers in order to install malware without a user's knowledge. Sadly, it only takes a single successful infection to ruin a user's financial life or lead to corporate breaches that result in millions of dollars of expense and loss of customer trust. Exploit kits use a myriad of techniques to obfuscate each attack instance, making current network-based defenses such as signature-based network intrusion detection systems far less effective than in years past. Dynamic analysis or honeyclient analysis on these exploits plays a key role in identifying new attacks for signature generation, but provides no means of inspecting end-user traffic on the network to identify attacks in real time. As a result, defenses designed to stop such malfeasance often arrive too late or not at all, resulting in high false positive and false negative (error) rates. In order to deal with these drawbacks, three new detection approaches are presented. To deal with the issue of a high number of errors, a new technique for detecting exploit kit interactions on a network is proposed. The technique capitalizes on the fact that an exploit kit leads its potential victim through a process of exploitation by forcing the browser to download multiple web resources from malicious servers. This process has an inherent structure that can be captured in HTTP traffic and used to significantly reduce error rates. The approach organizes HTTP traffic into tree-like data structures, and, using a scalable index of exploit kit traces as samples, models the detection process as a subtree similarity search problem. The technique is evaluated on 3,800 hours of web traffic on a large enterprise network, and results show that it reduces false positive rates by four orders of magnitude over current state-of-the-art approaches. While utilizing structure can vastly improve detection rates over current approaches, it does not go far enough in helping defenders detect new, previously unseen attacks. As a result, a new framework that applies dynamic honeyclient analysis directly on network traffic at scale is proposed. The framework captures and stores a configurable window of reassembled HTTP objects network-wide, uses lightweight content rendering to establish the chain of requests leading up to a suspicious event, then serves the initial response content back to the honeyclient in an isolated network. The framework is evaluated on a diverse collection of exploit kits as they evolve over a one-year period. The empirical evaluation suggests that the approach offers significant operational value, and a single honeyclient can support a campus deployment of thousands of users. While the above approaches attempt to detect exploit kits before they have a chance to infect the client, they cannot protect a client that has already been infected.
The final technique detects signs of post-infection behavior by intrusions that abuse the domain name system (DNS) to make contact with an attacker. Contemporary detection approaches rely on the structure of a domain name and require hundreds of DNS messages to detect such malware; as a result, they cannot detect malware in a timely manner and are susceptible to high error rates. The technique proposed here, based on sequential hypothesis testing, uses the DNS message patterns of a subset of DNS traffic to detect malware in as few as four DNS messages, and with an orders-of-magnitude reduction in error rates. The results of this work can make a significant operational impact on network security analysis and open several exciting future directions for network security research.
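    The last contribution in this abstract rests on sequential hypothesis testing over DNS message patterns. Below is a generic Wald sequential probability ratio test (SPRT) written out as an illustration; the per-message likelihoods and error-rate targets are placeholder values, not figures from the dissertation.

```python
# Generic Wald SPRT sketch for classifying a host as infected or benign from a
# stream of DNS observations (illustrative only; the probabilities are made up).
import math

# Hypothetical per-message likelihoods of a "suspicious" DNS lookup
P_SUSPICIOUS_GIVEN_INFECTED = 0.8
P_SUSPICIOUS_GIVEN_BENIGN = 0.1

def sprt(observations, alpha=0.01, beta=0.01):
    """Return 'infected', 'benign', or 'undecided' after consuming observations.

    observations: iterable of booleans, True if the DNS message looked suspicious.
    alpha: tolerated false-positive rate; beta: tolerated false-negative rate.
    """
    upper = math.log((1 - beta) / alpha)      # crossing this accepts H1 (infected)
    lower = math.log(beta / (1 - alpha))      # crossing this accepts H0 (benign)
    llr = 0.0                                 # running log-likelihood ratio
    for suspicious in observations:
        if suspicious:
            llr += math.log(P_SUSPICIOUS_GIVEN_INFECTED / P_SUSPICIOUS_GIVEN_BENIGN)
        else:
            llr += math.log((1 - P_SUSPICIOUS_GIVEN_INFECTED) / (1 - P_SUSPICIOUS_GIVEN_BENIGN))
        if llr >= upper:
            return "infected"
        if llr <= lower:
            return "benign"
    return "undecided"

# With these placeholder likelihoods, a handful of suspicious lookups in a row
# is enough to cross the upper threshold.
print(sprt([True, True, True, True]))
```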

    Interactive visualization of event logs for cybersecurity

    Get PDF
    Hidden cyber threats revealed with new visualization software Eventpad.

    A static analysis framework for security properties in mobile and cryptographic systems

    Get PDF
    We introduce a static analysis framework for detecting instances of security breaches in infinite mobile and cryptographic systems specified using the languages of the π-calculus and its cryptographic extension, the spi calculus. The framework is composed of three components. First, standard denotational semantics of the π-calculus and the spi calculus are constructed based on domain theory. The resulting model is sound and adequate with respect to transitions in the operational semantics. The standard semantics is then extended correctly to non-uniformly capture the property of term substitution, which occurs as a result of communications and successful cryptographic operations. Finally, the non-standard semantics is abstracted to operate over finite domains so as to ensure the termination of the static analysis. The safety of the abstract semantics is proven with respect to the non-standard semantics. The results of the abstract interpretation are then used to capture breaches of the secrecy and authenticity properties in the analysed systems. Two initial prototype implementations of the security analysis for the π-calculus and the spi calculus are also included in the thesis. The main contributions of this thesis are summarised by the following. In the area of denotational semantics, the thesis introduces a domain-theoretic model for the spi calculus that is sound and adequate with respect to transitions in the structural operational semantics. In the area of static program analysis, the thesis utilises the denotational approach as the basis for the construction of abstract interpretations for infinite systems modelled by the π-calculus and the spi calculus. This facilitates the use of computationally significant mathematical concepts like least fixed points and results in an analysis that is fully compositional. Also, the thesis demonstrates that the choice of the term-substitution property in mobile and cryptographic programs is rich enough to capture breaches of security properties, like process secrecy and authenticity. These properties are used to analyse a number of mobile and cryptographic protocols, like the file transfer protocol and the Needham-Schroeder, SPLICE/AS, Otway-Rees, Kerberos, Yahalom and Woo Lam authentication protocols.
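    The framework itself is domain-theoretic and well beyond a few lines of code, but the least-fixed-point flavour of the secrecy analysis can be hinted at with a deliberately crude sketch: propagate which abstract values each channel may carry until nothing changes, then report a breach if a secret can reach an attacker-observable channel. Everything below (the fact encoding, the channel names, the protocol fragment) is an illustrative assumption, not the thesis' semantics.

```python
# Very simplified fixed-point "secrecy" analysis sketch (illustrative only).
# Processes are abstracted to facts: (channel, value) means the value may be
# output on that channel; (src, dst) in `forwards` means anything received on
# src may be re-sent on dst.

def least_fixed_point(outputs, forwards):
    """Iterate the flow relation to its least fixed point and return it."""
    flows = set(outputs)
    changed = True
    while changed:
        changed = False
        for src, dst in forwards:
            for chan, val in list(flows):
                if chan == src and (dst, val) not in flows:
                    flows.add((dst, val))
                    changed = True
    return flows

def secrecy_breached(flows, secret_values, public_channels):
    """A breach: some secret value may flow on an attacker-observable channel."""
    return any(chan in public_channels and val in secret_values
               for chan, val in flows)

# Hypothetical protocol fragment: a secret key is sent on an internal channel,
# which a careless process forwards to the public network.
flows = least_fixed_point(outputs={("internal", "Kab")},
                          forwards={("internal", "net")})
print(secrecy_breached(flows, secret_values={"Kab"}, public_channels={"net"}))  # True
```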

    Traffic microstructures and network anomaly detection

    Get PDF
    Much hope has been put in the modelling of network traffic with machine learning methods to detect previously unseen attacks. Many methods rely on features at a microscopic level, such as packet sizes or interarrival times, to identify recurring patterns and detect deviations from them. However, the success of these methods depends both on the quality of corresponding training and evaluation data and on an understanding of the structures that the methods learn. Currently, the academic community is lacking both, with widely used synthetic datasets facing serious problems and the disconnect between methods and data being named the "semantic gap". This thesis provides extensive examinations of the requirements on traffic generation and microscopic traffic structures necessary to enable the effective training and improvement of anomaly detection models. We first present and examine DetGen, a container-based traffic generation paradigm that enables precise control over, and ground-truth information about, the factors that shape traffic microstructures. The goal of DetGen is to provide researchers with extensive ground-truth information and enable the generation of customisable datasets that provide realistic structural diversity. DetGen was designed according to four specific traffic requirements that dataset generation needs to fulfil to enable machine-learning models to learn accurate and generalisable traffic representations. Current network intrusion datasets fail to meet these requirements, which we believe is one of the reasons for the limited success of anomaly-based detection methods. We demonstrate the significance of these requirements experimentally by examining how model performance decreases when they are not met. We then focus on the control over and information about traffic microstructures that DetGen provides, and the corresponding benefits when examining and improving model failures during model development. We use three metrics to demonstrate that DetGen is able to provide more control and isolation over the generated traffic. The ground-truth information DetGen provides enables us to probe two state-of-the-art traffic classifiers for failures on certain traffic structures, and the corresponding fixes in the model design almost halve the number of misclassifications. Drawing on these results, we propose CBAM, an anomaly detection model that detects network access attacks through deviations from recurring flow sequence patterns. CBAM is inspired by the design of self-supervised language models and improves the AUC of the current state of the art by up to 140%. By understanding why several flow sequence structures present difficulties to our model, we make targeted design decisions that address these difficulties and ultimately boost the performance of our model. Lastly, we examine how the control and adversarial perturbation of traffic microstructures can be used by an attacker to evade detection. We show that in a stepping-stone attack, an attacker can evade every current detection model by mimicking the patterns observed in streaming services.
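    CBAM itself is a self-supervised neural model, which this listing does not reproduce. As a loose stand-in for the underlying idea of scoring flow sequences by how well they match previously seen patterns, the sketch below fits a smoothed bigram model over discretized flow tokens and reports the average negative log-likelihood of a session; the tokenisation and flow field layout are assumptions.

```python
# Loose illustration of flow-sequence anomaly scoring (not CBAM itself): a
# smoothed bigram model over flow tokens; unfamiliar sequences score high.
import math
from collections import Counter, defaultdict

def tokenize(flow):
    """Discretize a flow into a coarse token; fields (port, byte count) are assumed."""
    port, size = flow
    return f"{port}:{'small' if size < 1000 else 'large'}"

class BigramScorer:
    def __init__(self, smoothing=1.0):
        self.smoothing = smoothing
        self.bigrams = defaultdict(Counter)
        self.vocab = set()

    def fit(self, sequences):
        """Count bigram transitions over presumed-benign training sequences of flows."""
        for seq in sequences:
            toks = ["<s>"] + [tokenize(f) for f in seq]
            self.vocab.update(toks)
            for prev, cur in zip(toks, toks[1:]):
                self.bigrams[prev][cur] += 1

    def score(self, seq):
        """Average negative log-likelihood of a sequence; higher means more anomalous."""
        toks = ["<s>"] + [tokenize(f) for f in seq]
        v = max(len(self.vocab), 1)
        nll = 0.0
        for prev, cur in zip(toks, toks[1:]):
            counts = self.bigrams[prev]
            p = (counts[cur] + self.smoothing) / (sum(counts.values()) + self.smoothing * v)
            nll -= math.log(p)
        return nll / max(len(toks) - 1, 1)

# Usage: fit on benign sessions, then flag test sessions with unusually high scores.
scorer = BigramScorer()
scorer.fit([[(443, 1500), (443, 1500), (53, 80)]])
print(scorer.score([(22, 60), (22, 60)]))   # an unfamiliar pattern scores high
```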

    On the Effective Use of Data Dependency for Reliable Cloud Service Monitoring

    Get PDF
    Cloud computing is a large-scale distributed computing paradigm that aims at providing powerful computing and storage capability by dynamically sharing a pool of system resources (e.g., network bandwidth, storage space, or virtualized devices) in a multi-tenant environment. With the support of this computing technology, a plethora of cloud services have been developed to meet the different requirements of cloud service customers (CSCs). While cloud services have many attractive advantages (e.g., rapid service deployment, reliable service availability, elastic service reconfiguration, or economic service billing), the security assurance of cloud services is a key concern for service customers. Cloud monitoring is an essential mechanism for managing the security assurance of cloud services. Over the last few years, a large number of monitoring mechanisms have been proposed. These mechanisms are developed for monitoring varied security problems in the cloud under the common assumption that all the monitoring information is directly available. They can achieve satisfactory monitoring performance only if that assumption is satisfied (e.g., protecting cloud services from distributed denial-of-service (DDoS) attacks by checking the traffic information collected from network monitors). However, the existing mechanisms are unfortunately incapable of dealing with security threats that are subtly crafted by malicious attackers without producing evident attack traces. Because the useful information related to such attacks is difficult to collect, the attacks can silently bypass the existing monitoring mechanisms and secretly undermine the victim services. As a result, developing an effective monitoring mechanism for securing cloud services becomes a compelling demand. To motivate the issue, this thesis first investigates a typical cloud security attack that can gradually drain system resources in a target cloud without triggering any alarms, highlighting the realistic need for effective security monitoring in cloud systems. To combat the attack, a pragmatic security countermeasure is proposed for securing the cloud. To meet the demand, the thesis focuses on achieving effective security assurance management of cloud services by addressing the common shortcoming of existing monitoring mechanisms. Using the data relations (i.e., data dependencies) that exist in the collected monitoring data sets, the thesis demonstrates the possibility of leveraging the available information to indirectly monitor cloud security problems with a novel inference-based security mechanism. Furthermore, the thesis demonstrates the feasibility of taking advantage of data dependency to obtain the input information for running the inference mechanism by developing a practical data-ascertaining technique. Finally, the thesis addresses potential data errors that can undermine the reliability of the proposed monitoring mechanism: a reliability assessment mechanism is developed to select suitable data for the proposed mechanism to generate reliable monitoring results.
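    A minimal sketch of the inference idea described above (not the thesis' mechanism): learn how one monitored metric normally depends on another during a benign baseline, then alert when the observed value drifts from the inferred one, as a slow resource-draining attack would cause. The metric names, the linear model, and the threshold below are all assumptions.

```python
# Minimal sketch of dependency-based inference monitoring (illustrative only):
# learn how metric B normally depends on metric A, then alert when the observed
# B deviates from the value inferred from A -- e.g. memory use creeping upward
# while request rate stays flat, as in a slow resource-draining attack.
import numpy as np

def fit_dependency(a, b):
    """Least-squares fit b ~ slope*a + intercept over a benign baseline window."""
    slope, intercept = np.polyfit(a, b, deg=1)
    residual_std = float(np.std(b - (slope * np.asarray(a) + intercept)))
    return slope, intercept, max(residual_std, 1e-9)

def alerts(a, b, model, k=3.0):
    """Indices where observed b exceeds the inferred value by more than k std devs."""
    slope, intercept, std = model
    inferred = slope * np.asarray(a) + intercept
    return np.where(np.asarray(b) - inferred > k * std)[0]

# Baseline window: memory (MB) roughly tracks request rate (req/s).
req = np.array([100, 120, 150, 130, 110], dtype=float)
mem = np.array([510, 540, 590, 555, 525], dtype=float)
model = fit_dependency(req, mem)

# Later window: similar load, but memory has crept up well beyond the inferred value.
print(alerts([115, 118, 120], [530, 620, 700], model))   # flags the drained samples
```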