    Evolution and Detection of Polymorphic and Metamorphic Malwares: A Survey

    Malware is a major threat to the digital world and is evolving with increasing complexity. It can penetrate networks, steal confidential information from computers, bring down servers, and cripple infrastructure. To combat threats and attacks from malware, anti-malware tools have been developed. Existing anti-malware tools are mostly based on the assumption that a malware's structure does not change appreciably, but recent advances in second-generation malware allow it to create variants of itself, which poses a challenge to anti-malware developers. To combat attacks from second-generation malware with a low false-alarm rate, we present a survey of malware and its detection techniques.
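
    The survey's core premise is that signature-based scanners assume a stable byte pattern, which polymorphic and metamorphic variants break. The toy Python sketch below is not taken from the paper; the payload bytes and XOR key are invented for illustration. It shows how a hash-based signature that matches the original body fails on a re-encoded variant of the same code.

    ```python
    # Toy illustration: a fixed hash "signature" misses a polymorphic variant.
    # The payload bytes and XOR key below are hypothetical, chosen only for the demo.
    import hashlib

    def signature(payload: bytes) -> str:
        """A naive 'signature': the SHA-256 digest of the payload bytes."""
        return hashlib.sha256(payload).hexdigest()

    def xor_encode(payload: bytes, key: int) -> bytes:
        """A toy polymorphic mutation: re-encode the body with a new XOR key."""
        return bytes(b ^ key for b in payload)

    original = b"\x90\x90\xcc\x31\xc0"          # hypothetical malicious body
    variant = xor_encode(original, key=0x5A)    # same behaviour, different bytes

    known_signatures = {signature(original)}
    print(signature(original) in known_signatures)  # True  -> original is detected
    print(signature(variant) in known_signatures)   # False -> variant evades the signature
    ```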

    An Innovative Signature Detection System for Polymorphic and Monomorphic Internet Worms Detection and Containment

    Most current anti-worm systems and intrusion-detection systems use signature-based rather than anomaly-based technology. Signature-based technology can only detect known attacks with identified signatures. Existing anti-worm systems cannot detect unknown Internet scanning worms automatically because they rely not on worm behaviour but on the worm's signature. Most detection algorithms used in current detection systems target only monomorphic worm payloads and offer no defence against polymorphic worms, which change their payload dynamically. Anomaly detection systems can detect unknown worms but usually suffer from a high false-alarm rate. Detecting unknown worms is challenging, and the worm defence must be automated because worms spread quickly and can flood the Internet in a short time. This research proposes an accurate, robust and fast technique to detect and contain Internet worms (monomorphic and polymorphic). The detection technique uses specific failed-connection statuses on protocols such as UDP, TCP and ICMP, together with TCP slow scanning and stealth scanning, as characteristics of the worms, whereas the containment uses the flags and labels of the segment header and the source and destination ports to generate the traffic signature of the worms. Experiments with eight different worms (monomorphic and polymorphic) in a testbed environment were conducted to verify the performance of the proposed technique. The results showed that the proposed technique could detect stealth scanning up to 30 times faster than a previously proposed technique and produced no false-positive alarms in any of the scanning detection cases. The experiments also showed that the proposed technique was capable of containing the worms because of the uniqueness of the traffic signature.
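
    As an illustration of the detection idea described above, and not the authors' actual system, the Python sketch below counts failed connection attempts per source and protocol and raises an alert once a threshold is crossed; the event format, field names and threshold are assumptions made for the example.

    ```python
    # Minimal sketch: flag a host whose failed connections on a protocol exceed
    # an assumed threshold, the intuition behind using failure statuses to spot
    # scanning worms. Event schema and threshold are hypothetical.
    from collections import defaultdict
    from typing import Optional

    FAILURE_THRESHOLD = 20            # assumed failures per (source, protocol) before alerting
    failed_counts = defaultdict(int)

    def observe(event: dict) -> Optional[str]:
        """event: {'src': ip, 'proto': 'tcp'|'udp'|'icmp', 'status': 'ok'|'fail'}"""
        if event["status"] == "fail":
            key = (event["src"], event["proto"])
            failed_counts[key] += 1
            if failed_counts[key] > FAILURE_THRESHOLD:
                return f"possible scanning worm: {event['src']} over {event['proto']}"
        return None

    # A host producing many failed TCP connections eventually trips the detector.
    alert = None
    for _ in range(25):
        alert = observe({"src": "10.0.0.7", "proto": "tcp", "status": "fail"}) or alert
    print(alert)
    ```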

    Behaviour-based Virus Analysis and Detection

    Every day, the growing number of viruses causes major damage to the computer systems that many antivirus products have been developed to protect. Regrettably, existing antivirus products do not provide a full solution to the problems associated with viruses. One of the main reasons for this is that these products typically use signature-based detection, so the rapid growth in the number of viruses means that many signatures have to be added to their signature databases each day. These signatures then have to be stored in the computer system, where they consume ever more memory space. Moreover, the large database also affects the speed of searching for signatures and, hence, the performance of the system. As the number of viruses continues to grow, even more space will be needed in the future. There is thus an urgent need for a novel and robust detection technique. One of the most encouraging recent developments in virus research is the use of formulae, which provides alternatives to classic virus detection methods. The proposed research uses temporal logic and behaviour-based detection to detect viruses. Interval Temporal Logic (ITL) will be used to generate virus specifications, properties and formulae based on an analysis of the behaviour of computer viruses, in order to detect them. Tempura, the executable subset of ITL, will be used to check whether good or bad behaviour occurs, with the help of the ITL description and system traces. The process will also use AnaTempura, an integrated workbench tool for ITL that supports our system specifications. AnaTempura will offer validation and verification of the ITL specifications and provide runtime testing of these specifications.
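
    The sketch below is not ITL or Tempura syntax; it is a minimal Python stand-in for the idea of checking a temporal behaviour property against a recorded system trace. The trace format and the "read an executable, later write a different executable" property are assumptions chosen only to illustrate behaviour-based checking.

    ```python
    # Minimal stand-in for temporal behaviour checking over a trace (not Tempura).
    # Property (assumed for illustration): some read of an .exe is eventually
    # followed by a write to a different .exe, a toy self-replication pattern.
    from typing import Iterable, Tuple

    Event = Tuple[str, str]  # (operation, target), e.g. ("read", "a.exe")

    def infects(trace: Iterable[Event]) -> bool:
        """True if a read of an .exe is later followed by a write to another .exe."""
        read_targets = set()
        for op, target in trace:
            if op == "read" and target.endswith(".exe"):
                read_targets.add(target)
            elif op == "write" and target.endswith(".exe") and read_targets - {target}:
                return True
        return False

    benign = [("read", "a.exe"), ("write", "log.txt")]
    suspicious = [("read", "virus.exe"), ("write", "host.exe")]
    print(infects(benign))      # False -> trace satisfies the "good behaviour" spec
    print(infects(suspicious))  # True  -> trace matches the "bad behaviour" property
    ```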

    Detection of unknown computer worms based on behavioral classification of the host

    Machine learning techniques are widely used in many fields. One application of machine learning in information security is the classification of a computer's behaviour into malicious and benign. Antivirus tools based on signature methods are helpless against new (unknown) computer worms. This paper focuses on the feasibility of accurately detecting unknown worm activity in individual computers while minimizing the set of features collected from the monitored computer. A comprehensive experiment for testing the feasibility of detecting unknown computer worms, employing several computer configurations, background applications, and user activities, was performed. During the experiments, 323 computer features were monitored by a purpose-built agent. Four feature selection methods were used to reduce the number of features, and four learning algorithms were applied to the resulting feature subsets. The evaluation results suggest that with classification algorithms applied to only 20 features, the mean detection accuracy exceeded 90%, and for specific unknown worms it exceeded 99%, while maintaining a low false-positive rate.
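
    A hedged sketch of the general workflow the abstract describes, feature selection followed by classification, is given below using scikit-learn on synthetic data. The 323 and 20 feature counts mirror the abstract, but the concrete selection methods and learning algorithms used in the paper may differ, and the data here is random.

    ```python
    # Workflow sketch only: select 20 of 323 host features, then classify.
    # Synthetic data; the paper's actual feature selectors and learners may differ.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 323))          # 323 monitored host features (synthetic)
    y = rng.integers(0, 2, size=500)         # 0 = benign activity, 1 = worm activity

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    selector = SelectKBest(f_classif, k=20)  # keep only 20 features, as in the abstract
    X_train_sel = selector.fit_transform(X_train, y_train)
    X_test_sel = selector.transform(X_test)

    clf = RandomForestClassifier(random_state=0).fit(X_train_sel, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test_sel)))
    ```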

    Detection and Classification of Malicious Processes Using System Call Analysis

    Despite efforts to mitigate the malware threat, the proliferation of malware continues, with record-setting numbers of malware samples being discovered each quarter. Malware is any intentionally malicious software, including software designed for extortion, sabotage, and espionage. Traditional malware defenses are primarily signature-based and heuristic-based and include firewalls, intrusion detection systems, and antivirus software. Such defenses are reactive, performing well against known threats but struggling against new malware variants and zero-day threats. Together, the reactive nature of traditional defenses and the continuing spread of malware motivate the development of new techniques to detect such threats. One promising set of techniques uses features extracted from system call traces to infer malicious behaviors. This thesis studies the problem of detecting and classifying malicious processes using system call trace analysis. The goal of this study is to identify techniques that are 'lightweight' enough, and that exhibit a low enough false positive rate, to be deployed in production environments. The major contributions of this work are (1) a study of the effects of feature extraction strategy on malware detection performance; (2) a comparison of signature-based and statistical analysis techniques for malware detection and classification; (3) the use of sequential detection techniques to identify malicious behaviors as quickly as possible; (4) a study of malware detection performance at very low false positive rates; and (5) an extensive empirical evaluation, in which the performance of the malware detection and classification systems is evaluated against data collected from production hosts and from the execution of recently discovered malware samples. The outcome of this study is a proof-of-concept system that detects the execution of malicious processes in production environments and classifies them according to their similarity to known malware. Ph.D., Electrical Engineering -- Drexel University, 201
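
    One common feature-extraction strategy for system call traces is n-gram counting; the short Python sketch below illustrates that idea on invented traces. It is not the thesis's implementation, and the n-gram size, trace contents and downstream classifier are assumptions.

    ```python
    # Illustrative sketch of one feature-extraction strategy: sliding n-gram
    # counts over a system call trace. Traces and n are hypothetical.
    from collections import Counter
    from typing import List

    def ngram_features(trace: List[str], n: int = 3) -> Counter:
        """Count sliding n-grams of system call names in a trace."""
        return Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))

    benign_trace = ["open", "read", "write", "close", "open", "read", "close"]
    malicious_trace = ["open", "read", "write", "write", "exec", "connect", "send"]

    print(ngram_features(benign_trace).most_common(3))
    print(ngram_features(malicious_trace).most_common(3))
    # In practice these count vectors would be aligned over a shared n-gram
    # vocabulary and passed to a signature-based or statistical detector.
    ```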

    A Framework for Ensemble Predictive Modeling

    Ensemble systems have been successfully applied in many fields, such as finance, bioinformatics, medicine, cheminformatics, manufacturing, geography, information security, information retrieval, image retrieval, and recommender systems. The ultimate objective of an ensemble system is to produce better predictions by combining the approximations of different classifiers/models. However, ensemble performance depends on three main design features. Firstly, the diversity/independence of the base models/classifiers: if all models produce similar or correlated predictions, then combining those predictions will not provide any improvement, so diversity is considered a key design feature of any successful ensemble system. Secondly, the fusion topology, namely the selection of a representative topology. Thirdly, the fusion function, namely the selection of a suitable function. Accordingly, building an effective ensemble system is a complex and challenging process, which requires intuition, deep knowledge of the problem context, and a well-defined predictive modeling process. Although several taxonomies have been reported in the literature that aim to categorize ensemble systems from the system designer's point of view, there are still important research gaps that need to be addressed. First, a comprehensive framework for developing ensemble systems is not yet available. Second, several strategies have been proposed to inject model diversity into the ensemble, but there is a shortage of empirical studies comparing the effectiveness of these strategies. Third, most ensemble systems research has concentrated on simple problems and relatively small, low-dimensional data sets; further experimental research is required to investigate the application of ensemble systems to large and/or high-dimensional data sets with a variety of data types. This research attempts to fill these gaps. First, the thesis proposes a framework for ensemble predictive modeling, coining the term "ensemble predictive modeling" to refer to the process of developing ensemble systems. Second, the thesis empirically compares several diversity injection strategies. Third, the thesis validates the proposed framework using two real-world, large/high-dimensional regression and classification case studies. The empirical results indicate the effectiveness of the proposed framework.
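
    The following Python sketch, built on synthetic data, illustrates the three design features the abstract lists: base-model diversity (here injected through bootstrap resampling), a flat fusion topology, and a majority-vote fusion function. It is an illustration of the concepts, not the framework proposed in the thesis.

    ```python
    # Concept sketch of an ensemble: diversity via bootstrap samples, a flat
    # topology of 11 trees, and majority-vote fusion. Data is synthetic.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=400, n_features=20, random_state=0)
    rng = np.random.default_rng(0)

    # Diversity: train each base model on a different bootstrap sample.
    models = []
    for _ in range(11):
        idx = rng.integers(0, len(X), size=len(X))
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

    # Fusion function: simple majority vote over the base models' predictions.
    votes = np.stack([m.predict(X) for m in models])       # shape (11, n_samples)
    ensemble_pred = (votes.mean(axis=0) >= 0.5).astype(int)

    print("training accuracy of the vote:", (ensemble_pred == y).mean())
    ```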