    Signature Base Method Dataset Feature Reduction of Opcode Using Pre-Processing Approach

    Malware can be defined as any type of malicious code that has the potential to harm a computer or network. To detect unknown malware families, the frequency of appearance of Opcode (Operation Code) sequences is used through dynamic analysis. Opcode n-gram analysis is used to extract features from the inspected files, and the resulting n-grams serve as features during the classification process with the aim of identifying unknown malicious code. A support vector machine (SVM) is used to create a reference model, against which two methods of feature reduction, area of intersect and subspace analysis, are evaluated. The SVM is configured to traverse the dataset searching for Opcodes that have a positive impact on the classification of benign and malicious software. The dataset is constructed by representing each executable file as a set of Opcode density histograms; the classification task involves separating the dataset into training and test data, with the training sets labelled as benign or malicious software. In the area-of-intersect method, the characteristics of benign and malicious Opcodes are plotted as normal distributions, producing a pair of density curves for each Opcode; the key feature is the overlapping area of the two density curves. In subspace analysis, the importance of individual Opcodes is investigated through the eigenvalues and eigenvectors of the subspace, with PCA used for data compression and mapping. The Opcodes retained by the eigenvector filter coincide with those chosen by the SVM.
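
    To make the area-of-intersect idea concrete, the following is a minimal, illustrative sketch rather than the paper's implementation: each Opcode's density in benign and in malicious files is modelled as a normal distribution, the overlapping area of the two curves is integrated numerically, and Opcodes with the smallest overlap are ranked as most discriminative. All names and the example data are assumptions made for illustration.

        # Illustrative sketch of area-of-intersect feature reduction over Opcode densities.
        import numpy as np
        from scipy.stats import norm

        def overlap_area(mu_b, sd_b, mu_m, sd_m, grid_points=2000):
            """Numerically integrate the overlapping area of two normal density curves."""
            lo = min(mu_b - 4 * sd_b, mu_m - 4 * sd_m)
            hi = max(mu_b + 4 * sd_b, mu_m + 4 * sd_m)
            x = np.linspace(lo, hi, grid_points)
            dx = x[1] - x[0]
            return float(np.minimum(norm.pdf(x, mu_b, sd_b), norm.pdf(x, mu_m, sd_m)).sum() * dx)

        def rank_opcodes(benign_density, malicious_density):
            """Both arguments map opcode -> array of per-file densities.
            Returns opcodes sorted so the least-overlapping (most useful) come first."""
            scores = {}
            for op in benign_density:
                b = np.asarray(benign_density[op])
                m = np.asarray(malicious_density[op])
                scores[op] = overlap_area(b.mean(), b.std() + 1e-9, m.mean(), m.std() + 1e-9)
            return sorted(scores, key=scores.get)

        # Example with made-up Opcode density histograms:
        rng = np.random.default_rng(0)
        benign = {"mov": rng.normal(0.30, 0.02, 200), "xor": rng.normal(0.05, 0.01, 200)}
        malicious = {"mov": rng.normal(0.31, 0.02, 200), "xor": rng.normal(0.12, 0.01, 200)}
        print(rank_opcodes(benign, malicious))  # "xor" overlaps least, so it ranks first

    An Opcode whose benign and malicious curves barely overlap carries most of the class-separating information, which is why a low overlap score is treated as a reason to keep that feature.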

    Unsupervised Anomaly-based Malware Detection using Hardware Features

    Recent works have shown promise in using microarchitectural execution patterns to detect malware programs. These detectors belong to a class known as signature-based detectors, as they catch malware by comparing a program's execution pattern (signature) to the execution patterns of known malware programs. In this work, we propose a new class of detectors - anomaly-based hardware malware detectors - that do not require signatures for malware detection and can thus catch a wider range of malware, including potentially novel ones. We use unsupervised machine learning to build profiles of normal program execution based on data from performance counters, and use these profiles to detect significant deviations in program behavior that occur as a result of malware exploitation. We show that real-world exploitation of popular programs such as IE and Adobe PDF Reader on a Windows/x86 platform can be detected with nearly perfect certainty. We also examine the limits and challenges of implementing this approach in the face of a sophisticated adversary attempting to evade anomaly-based detection. The proposed detector is complementary to previously proposed signature-based detectors, and the two can be used together to improve security. Comment: 1 page, LaTeX; added description for feature selection in Section 4, results unchanged
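
    As a rough illustration of the profiling step described above, the sketch below builds a model of normal execution from performance-counter samples and flags samples that deviate strongly from it. The Mahalanobis-distance model, the counter set, and the 99.5th-percentile threshold are assumptions made for this sketch; the paper's unsupervised models are not reproduced here.

        # Illustrative anomaly detector over hardware performance-counter samples.
        import numpy as np

        class CounterProfile:
            def fit(self, clean_samples):
                """clean_samples: (n_samples, n_counters) array collected from benign runs."""
                self.mean = clean_samples.mean(axis=0)
                cov = np.cov(clean_samples, rowvar=False)
                self.inv_cov = np.linalg.pinv(cov)  # pseudo-inverse tolerates correlated counters
                self.threshold = np.percentile(self._distance(clean_samples), 99.5)
                return self

            def _distance(self, x):
                delta = np.atleast_2d(x) - self.mean
                return np.sqrt(np.einsum("ij,jk,ik->i", delta, self.inv_cov, delta))

            def is_anomalous(self, samples):
                """True where a counter sample deviates beyond the normal-execution profile."""
                return self._distance(samples) > self.threshold

        # Usage with synthetic counters (e.g. branch misses, cache misses, returns):
        clean = np.random.default_rng(1).normal([1e4, 5e3, 2e2], [500, 300, 20], (1000, 3))
        profile = CounterProfile().fit(clean)
        print(profile.is_anomalous([1.0e4, 5.0e3, 2.0e2]))  # benign-looking sample -> [False]
        print(profile.is_anomalous([3.0e4, 9.0e3, 1.0e3]))  # large deviation -> [True]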

    An Immune Inspired Approach to Anomaly Detection

    The immune system provides a rich metaphor for computer security: anomaly detection that works in nature should work for machines. However, early artificial immune system approaches to computer security had only limited success. Arguably, this was because those artificial systems were based on too simplistic a view of the immune system. We present here a second-generation artificial immune system for process anomaly detection. It improves on earlier systems by having different artificial cell types that process information. Following detailed information about how to build such second-generation systems, we find that communication between cell types is key to performance. Through realistic testing and validation we show that second-generation artificial immune systems are capable of anomaly detection beyond generic system policies. The paper concludes with a discussion and an outline of the next steps in this exciting area of computer security. Comment: 19 pages, 4 tables, 2 figures, Handbook of Research on Information Security and Assurance
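
    The sketch below is a heavily simplified, assumed reading of the "different cell types that communicate" idea, not the paper's algorithm: one artificial cell type accumulates per-process danger and safe signals, and a second cell type fuses the reports of many such cells into a verdict. The signal names, thresholds, and quorum rule are all illustrative.

        # Illustrative two-cell-type anomaly detector; signals and quorum are assumptions.
        from collections import defaultdict

        class DetectorCell:
            """First cell type: accumulates danger vs. safe signals for sampled processes."""
            def __init__(self):
                self.danger = defaultdict(float)
                self.safe = defaultdict(float)

            def observe(self, pid, danger_signal, safe_signal):
                self.danger[pid] += danger_signal
                self.safe[pid] += safe_signal

            def report(self):
                # Communicate a per-process verdict to the aggregating cell type.
                return {pid: self.danger[pid] > self.safe[pid] for pid in self.danger}

        class AggregatorCell:
            """Second cell type: fuses reports from many detector cells."""
            def classify(self, reports, quorum=0.5):
                votes = defaultdict(list)
                for report in reports:
                    for pid, anomalous in report.items():
                        votes[pid].append(anomalous)
                return {pid: sum(v) / len(v) >= quorum for pid, v in votes.items()}

        # Usage: two detector cells watching the same processes disagree about pid 7.
        a, b = DetectorCell(), DetectorCell()
        a.observe(7, danger_signal=0.9, safe_signal=0.1); b.observe(7, 0.2, 0.8)
        a.observe(3, 0.1, 0.9);                           b.observe(3, 0.0, 1.0)
        print(AggregatorCell().classify([a.report(), b.report()]))  # {7: True, 3: False}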

    Robust and secure monitoring and attribution of malicious behaviors

    Worldwide, computer systems continue to execute malicious software that degrades the systems' performance and consumes network capacity by generating high volumes of unwanted traffic. Network-based detectors can effectively identify machines participating in ongoing attacks by monitoring the traffic to and from those systems. But network detection alone is not enough; it does not improve the operation of the Internet or the health of other machines connected to the network. We must identify the malicious code running on infected systems that participates in global attack networks. This dissertation describes a robust and secure approach that identifies malware present on infected systems based on its undesirable use of the network. Our approach, using virtualization, attributes malicious traffic to the host-level processes responsible for that traffic. The attribution identifies on-host processes, but malware instances often exhibit parasitic behaviors to subvert the execution of benign processes. We therefore augment the attribution software with a host-level monitor that detects parasitic behaviors occurring at the user and kernel level. User-level parasitic attack detection happens via the system-call interface because it is a non-bypassable interface for user-level processes. Because no such interface exists inside the kernel for drivers, we create a new driver-monitoring interface inside the kernel to detect parasitic attacks occurring through it. Our attribution software relies on a guest kernel's data to identify on-host processes. To allow secure attribution, we prevent illegal modifications of critical kernel data by kernel-level malware. Together, our contributions produce a unified research outcome: an improved malicious code identification system for user- and kernel-level malware. Ph.D. Committee Chair: Giffin, Jonathon; Committee Member: Ahamad, Mustaque; Committee Member: Blough, Douglas; Committee Member: Lee, Wenke; Committee Member: Traynor, Patrick
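
    The sketch below approximates, at user level and for illustration only, the attribution step described above: given a remote endpoint flagged by a network detector, it names the local processes that currently hold connections to it. The dissertation performs attribution from outside the guest via virtualization; the psutil-based version here is an assumed, simplified stand-in, and the flagged address is hypothetical.

        # Illustrative traffic-to-process attribution using psutil (pip install psutil).
        import psutil

        def attribute_remote_endpoint(remote_ip, remote_port=None):
            """Return {pid: process name} for processes connected to the given endpoint."""
            owners = {}
            for conn in psutil.net_connections(kind="inet"):
                if not conn.raddr or conn.pid is None:
                    continue
                if conn.raddr.ip == remote_ip and (remote_port is None or conn.raddr.port == remote_port):
                    try:
                        owners[conn.pid] = psutil.Process(conn.pid).name()
                    except psutil.NoSuchProcess:
                        pass  # the process exited between enumeration and lookup
            return owners

        # Example: which local processes are talking to a (hypothetical) flagged address?
        print(attribute_remote_endpoint("203.0.113.10", 443))

    A host-only monitor like this is exactly what parasitic malware can subvert, which is why the dissertation moves the monitor outside the guest and protects the kernel data it depends on.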

    Fault-injection through model checking via naive assumptions about state machine synchrony semantics

    Software behavior can be defined as the action or reaction of software to external and/or internal conditions, and it is an important characteristic in determining software quality. Fault-injection is a method for assessing software quality through its behavior. Our research combines a fault-injection process with model checking. We introduce the concept of naive assumptions, which exploits assumptions about execution order, synchrony, and fairness. Naive assumptions are applied to inject faults into our models, and we use linear temporal logic to examine the models for anomalous behaviors. This method demonstrates the benefits of combining fault-injection with model checking, as well as the value of the counter-examples generated by model checkers. We illustrate this technique on a fuel-injection Sensor Failure Detection system and discuss the anomalies in detail.
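
    As an illustrative, non-authoritative companion to the description above, the sketch below runs a toy Sensor Failure Detection model once under the intended synchronous input order and once with a fault injected from a naive synchrony assumption (the detector re-samples a stale value because producer and consumer are not actually synchronized), then checks a simple globally-style property over each trace. The paper itself uses a model checker and linear temporal logic; the model, property, and fault here are assumptions made for the example.

        # Illustrative fault-injection of a naive synchrony assumption into a toy model.
        def failure_detector(observed, max_bad=2):
            """Toy detector: declares FAILED after max_bad consecutive bad observed readings."""
            streak, state, trace = 0, "OK", []
            for r in observed:
                streak = streak + 1 if r == "bad" else 0
                if streak >= max_bad:
                    state = "FAILED"
                trace.append(state)
            return trace

        def inject_stale_sample(readings, at):
            """Injected fault: at step `at` the detector re-reads the previous step's value."""
            faulty = list(readings)
            faulty[at] = faulty[at - 1]
            return faulty

        def check_property(env_readings, trace, max_bad=2):
            """Globally: max_bad consecutive bad environment readings imply state FAILED."""
            streak = 0
            for i, r in enumerate(env_readings):
                streak = streak + 1 if r == "bad" else 0
                if streak >= max_bad and trace[i] != "FAILED":
                    return False, i  # index of the counter-example step
            return True, None

        env = ["good", "bad", "bad", "good"]
        print(check_property(env, failure_detector(env)))        # (True, None)
        observed = inject_stale_sample(env, at=1)                 # synchrony assumption violated
        print(check_property(env, failure_detector(observed)))   # (False, 2): counter-example

    The second run shows the kind of anomalous behavior the counter-examples expose: because the detector silently re-used a stale reading, it never declares the failure that the environment actually produced.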