33 research outputs found

    DETECTION AND PREVENTION OF MISUSE OF SOFTWARE COMPONENTS

    Doctor of Philosophy (Ph.D.) thesis

    Distributed detection of anomalous internet sessions

    Financial service providers are moving many services online, reducing their costs and facilitating customers' interaction. Unfortunately, criminals have quickly found several ways to avoid most security measures applied to browsers and banking sites. The use of highly dangerous malware has become the most significant threat, and traditional signature-detection methods are nowadays easily circumvented due to the volume of new samples and the use of sophisticated evasion techniques. Antivirus vendors and malware experts are pushed to seek new methodologies to improve the identification and understanding of malicious applications' behavior and their targets. Financial institutions are now playing an important role by deploying their own detection tools against malware that specifically affects their customers. However, most detection approaches tend to be based on byte sequences in order to create new signatures. This thesis is based on new sources of information: the web logs generated from each banking session, normal browser execution, and customers' mobile phone behavior. The thesis can be divided into four parts. The first part introduces the thesis, presents the problems addressed, and describes the methodology used to perform the experimentation. The second part describes our contributions to the research, which fall into two areas. *Server side: web-log analysis. We first focus on the real-time detection of anomalies through the analysis of web logs and the challenges introduced by the amount of information generated daily. We propose different techniques to detect multiple threats by deploying per-user and global models in a graph-based environment that increases performance over a set of highly related data. *Customer side: browser analysis. We deal with the detection of malicious behaviors from the other side of a banking session: the browser.
Malware samples must interact with the browser in order to retrieve or add information, and this interaction interferes with the normal behavior of the browser. We propose to develop models capable of detecting unusual patterns of function calls in order to determine whether a given sample is targeting a specific financial entity. In the third part, we propose to adapt our approaches to mobile phones and critical-infrastructure environments. The latest online banking attack techniques circumvent protection schemes such as password verification codes sent via SMS. Man-in-the-Mobile attacks are capable of compromising mobile devices and gaining access to SMS traffic. Once the Transaction Authentication Number is obtained, criminals are free to make fraudulent transfers. We propose to model the behavior of the applications related to messaging services to automatically detect suspicious actions. Real-time detection of unwanted SMS forwarding can improve the effectiveness of second-channel authentication and build on the detection techniques applied to browsers and web servers. Finally, we describe possible adaptations of our techniques to another area outside the scope of online banking: critical infrastructures, an environment with similar features, since the applications involved can also be profiled. Like financial entities, critical infrastructures are experiencing an increase in the number of cyber attacks, and the sophistication of the malware samples used calls for new detection approaches. The aim of this last proposal is to demonstrate the validity of our approach in different scenarios. Finally, we conclude with a summary of our findings and directions for future work.
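The per-user modelling idea described above can be sketched in miniature. The log schema and scoring rule below are illustrative assumptions, not the thesis's actual models; the idea is simply to profile each customer's usual pages and score a new session by how much of it falls outside that profile.

```python
from collections import Counter, defaultdict

def build_profiles(log_entries):
    """Build a per-user frequency profile of requested pages.

    log_entries: iterable of (user_id, page) pairs -- a hypothetical,
    simplified web-log schema.
    """
    profiles = defaultdict(Counter)
    for user, page in log_entries:
        profiles[user][page] += 1
    return profiles

def anomaly_score(profile, session_pages):
    """Fraction of a session's requests never seen before for this user
    (0.0 = entirely familiar, 1.0 = entirely new)."""
    if not session_pages:
        return 0.0
    unseen = sum(1 for page in session_pages if profile[page] == 0)
    return unseen / len(session_pages)
```

A real deployment would combine such per-user models with the global, graph-based models the thesis proposes; this sketch only illustrates the per-user half.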

    Cost-effective Detection of Drive-by-Download Attacks with Hybrid Client Honeypots

    No full text
    With the increasing connectivity of and reliance on computers and networks, important aspects of computer systems are under constant threat. In particular, drive-by-download attacks have emerged as a new threat to the integrity of computer systems. Drive-by-download attacks are client-side attacks that originate from web servers visited by web browsers. As a vulnerable web browser retrieves a malicious web page, the malicious web server can push malware to a user's machine that can be executed without their notice or consent. The detection of the malicious web pages that exist on the Internet is prohibitively expensive. It is estimated that approximately 150 million malicious web pages that launch drive-by-download attacks exist today. So-called high-interaction client honeypots are devices that are able to detect these malicious web pages, but they are slow and known to miss attacks. Detection of malicious web pages in these quantities with client honeypots would cost millions of US dollars. Therefore, we have designed a more scalable system called a hybrid client honeypot. It consists of lightweight client honeypots, the so-called low-interaction client honeypots, and traditional high-interaction client honeypots. The lightweight low-interaction client honeypots inspect web pages at high speed and forward only likely malicious web pages to the high-interaction client honeypot for a final classification. For the comparison of client honeypots and the evaluation of the hybrid client honeypot system, we have chosen a cost-based evaluation method: the true positive cost curve (TPCC). It allows us to evaluate client honeypots against their primary purpose, the identification of malicious web pages. We show that the costs of identifying malicious web pages with the developed hybrid client honeypot system are reduced by a factor of nine compared to traditional high-interaction client honeypots.
The five main contributions of our work are: High-Interaction Client Honeypot - The first main contribution of our work is the design and implementation of a high-interaction client honeypot, Capture-HPC. It is an open-source, publicly available client honeypot research platform, which allows researchers and security professionals to conduct research on malicious web pages and client honeypots. Based on our client honeypot implementation and analysis of existing client honeypots, we developed a component model of client honeypots. This model allows researchers to agree on the object of study, allows for focus on specific areas within the object of study, and provides a framework for communication of research around client honeypots. True Positive Cost Curve - As mentioned above, we have chosen a cost-based evaluation method to compare and evaluate client honeypots against their primary purpose of identifying malicious web pages: the true positive cost curve. It takes into account the unique characteristics of client honeypots (speed, detection accuracy, and resource cost) and provides a simple, cost-based mechanism to evaluate and compare client honeypots in an operating environment. As such, the TPCC provides a foundation for improving client honeypot technology. The TPCC is the second main contribution of our work. Mitigation of Risks to the Experimental Design with HAZOP - Mitigation of risks to the internal and external validity of the experimental design using a hazard and operability (HAZOP) study is the third main contribution. This methodology addresses risks to intent (internal validity) as well as the generalizability of results beyond the experimental setting (external validity) in a systematic and thorough manner. Low-Interaction Client Honeypots - Malicious web pages are usually part of a malware distribution network that consists of several servers involved in the drive-by-download attack.
The development and evaluation of classification methods that assess whether a web page is part of a malware distribution network is the fourth main contribution. Hybrid Client Honeypot System - The fifth main contribution is the hybrid client honeypot system. It incorporates the mentioned classification methods, in the form of a low-interaction client honeypot, together with a high-interaction client honeypot into a hybrid client honeypot system that is capable of identifying malicious web pages in a cost-effective way on a large scale. The hybrid client honeypot system outperforms a high-interaction client honeypot with identical resources and an identical false positive rate.
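The cost-based evaluation above turns on the cost of each true positive. A minimal sketch of that calculation follows; all parameter values and the simplified formula are illustrative assumptions, not the TPCC as defined in the thesis.

```python
def cost_per_true_positive(pages_per_hour, true_positive_rate,
                           cost_per_hour, malicious_fraction):
    """Operating cost attributable to each malicious page actually found.

    All arguments (inspection throughput, detection rate, hourly resource
    cost, prevalence of malicious pages) are illustrative assumptions.
    """
    found_per_hour = pages_per_hour * malicious_fraction * true_positive_rate
    if found_per_hour == 0:
        return float("inf")
    return cost_per_hour / found_per_hour

# A slow but accurate high-interaction honeypot...
high_interaction = cost_per_true_positive(50, 0.95, 1.0, 0.001)
# ...versus a hybrid fronted by a fast low-interaction filter.
hybrid = cost_per_true_positive(1000, 0.80, 1.0, 0.001)
```

The comparison shows the trade-off the hybrid design exploits: even with a somewhat lower detection rate, far higher throughput drives the cost per detected malicious page down.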

    Survey on representation techniques for malware detection system

    Malicious programs are malignant software designed by hackers or cyber offenders with a harmful intent to disrupt computer operation. Across various studies, we found that balancing the design of an accurate architecture that can detect malware against tracking the several advanced techniques that malware creators apply to generate variants is always difficult. Hence the study of malware detection techniques has become more important and challenging within the security field. This review paper provides a detailed discussion and full reviews of the various types of malware, malware detection techniques and the research on them, malware analysis methods, and the different dynamic programming-based tools that could be used to represent the malware samples. We have provided a comprehensive bibliography on malware detection, its techniques and analysis methods for malware researchers.

    Operating system auditing and monitoring

    Doctor of Philosophy (Ph.D.) thesis

    Novel Techniques of Using Diversity in Software Security and Information Hiding

    Diversity is an important and valuable concept that has been adopted in many fields to reduce correlated risks and to increase survivability. In information security, diversity also helps to increase both the defense capability and the fault tolerance of information systems and communication networks, where diversity can be adopted from many different perspectives. This dissertation, in particular, focuses mainly on two aspects of diversity: application software diversity and diversity in data interpretation. Software diversity has many advantages over mono-culture in improving system security. A number of previous research efforts focused on utilizing existing off-the-shelf diverse software for network protection and intrusion detection, many of which depend on an important assumption: that the diverse software utilized in the system is vulnerable only to different exploits. In the first work of this dissertation, we perform a systematic analysis of more than 6,000 vulnerabilities published in 2007 to evaluate the extent to which this assumption is valid. Our results show that the majority of the vulnerable application software products either do not share the same vulnerability or cannot be compromised with the same exploit code. Following this work, we then propose an intrusion detection scheme which builds on two diverse programs to detect sophisticated attacks on security-critical data. Our model learns the underlying semantic correlation of the argument values in these programs and consequently gains more accurate context information compared to existing schemes. Through experiments, we show that such context information is effective in detecting attacks which manipulate erratic arguments, with comparable false-positive rates. Software diversity does not only exist on desktop and mainframe computers; it also exists on mobile platforms such as smartphone operating systems.
In the third work of this dissertation, we propose to investigate applications that run on diverse mobile platforms (e.g., Android and iOS) and to use them as the baseline for comparing the platforms' security architectures. Assuming that such applications need the same types of privileges to provide the same functionality on different mobile platforms, our analysis of more than 2,000 applications shows that those executing on iOS consistently ask for more permissions than their counterparts running on Android. We additionally analyze the underlying reasons and find that part of the difference in permission usage is caused by third-party libraries used in these applications. Unlike the work on software diversity, the fourth work in this dissertation focuses on diversity in data interpretation, which helps to defend against coercion attacks. We propose the Dummy-Relocatable Steganographic file system (DRSteg) to provide deniability in multi-user environments where the adversary may have multiple snapshots of the disk content. The diverse ways of interpreting data in the storage allow a data owner to surrender only some data and attribute the unexplained changes across snapshots to dummy data consisting of random bits. The level of deniability offered by our file system is configurable by the users, to balance against the resulting performance overhead. Additionally, our design guarantees the integrity of the protected data, except where users voluntarily overwrite data under duress. This dissertation makes valuable contributions to utilizing diversity in software security and information hiding. The systematic evaluation results obtained for mobile and desktop diverse software are important and useful to both the research literature and industrial organizations. The proposed intrusion detection system and steganographic file system have been implemented as prototypes, which are effective in protecting valuable user data against adversaries in various threat scenarios.
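The cross-platform permission comparison can be illustrated with simple set arithmetic. The app name and permission labels below are hypothetical; in practice, permissions must first be normalised across Android's and iOS's very different permission vocabularies before such a comparison is meaningful.

```python
def permission_gap(app_pairs):
    """For each app available on both platforms, report which (normalised)
    permissions are requested on only one of them.

    app_pairs: dict mapping app name -> {"ios": set, "android": set},
    a hypothetical representation of per-platform permission requests.
    """
    gaps = {}
    for name, platforms in app_pairs.items():
        ios, android = platforms["ios"], platforms["android"]
        gaps[name] = {
            "ios_only": ios - android,
            "android_only": android - ios,
        }
    return gaps
```

Aggregating such gaps over many app pairs is what would let an analysis like the dissertation's attribute permission differences to, for example, third-party libraries.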

    Application Adaptive Bandwidth Management Using Real-Time Network Monitoring.

    Application adaptive bandwidth management is a strategy for ensuring secure and reliable network operation in the presence of undesirable applications competing for a network's crucial bandwidth, covert channels of communication via non-standard traffic on well-known ports, and coordinated Denial of Service attacks. The study undertaken here explored the classification, analysis and management of network traffic on the basis of the ports and protocols used, the type of application, traffic direction and flow rates on East Tennessee State University's campus-wide network. Bandwidth measurements over a nine-month period indicated bandwidth abuse of less than 0.0001% of total network bandwidth. The conclusion suggests the use of the defense-in-depth approach in conjunction with the KHYATI (Knowledge, Host hardening, Yauld monitoring, Analysis, Tools and Implementation) paradigm to ensure effective information assurance.
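A toy illustration of port-based classification with a per-application bandwidth limit follows; the port map and thresholds are illustrative, not the study's actual policy.

```python
# Illustrative mapping of well-known destination ports to applications.
KNOWN_PORTS = {22: "ssh", 25: "smtp", 80: "http", 443: "https"}

def classify_flow(dst_port, bytes_per_sec, limit_bps):
    """Label a flow by destination port; flag it when the port is unknown
    (a possible covert channel) or the flow exceeds its bandwidth limit."""
    app = KNOWN_PORTS.get(dst_port, "unknown")
    flagged = app == "unknown" or bytes_per_sec > limit_bps
    return app, flagged
```

Real classifiers also inspect traffic beyond port numbers, since, as the abstract notes, covert channels can hide non-standard traffic on well-known ports.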

    E-commerce security enhancement and anomaly intrusion detection using machine learning techniques

    With the fast growth of the Internet and the World Wide Web, security has become a major concern of many organizations, enterprises and users. Criminal attacks and intrusions into computer and information systems are spreading quickly, and they can come from anywhere on the globe. Intrusion prevention measures, such as user authentication, firewalls and cryptography, have been used as the first line of defence to protect computer and information systems from intrusions. As intrusion prevention alone may not be sufficient in a highly dynamic environment such as the Internet, intrusion detection has been used as the second line of defence against intrusions. However, existing cryptography-based intrusion prevention measures implemented in software have problems with the protection of long-term private keys and the degradation of system performance. Moreover, the security of these software-based intrusion prevention measures depends on the security of the underlying operating system, and they are therefore vulnerable to threats caused by security flaws in the underlying operating system. On the other hand, existing anomaly intrusion detection approaches usually produce excessive false alarms. They also lack efficiency due to high construction and maintenance costs. In our approach, we employ the "defence in depth" principle to develop a solution to these problems. Our solution consists of two lines of defence: preventing intrusions at the first line, and detecting intrusions at the second line if the prevention measures of the first line have been penetrated. At the first line of defence, our goal is to develop an encryption model that enhances communication and end-system security and improves the performance of web-based E-commerce systems. We have developed a hardware-based RSA encryption model to address the above-mentioned problems of existing software-based intrusion prevention measures.
The proposed hardware-based encryption model is based on the integration of an existing web-based client/server model with embedded hardware-based RSA encryption modules. DSP embedded hardware was selected to develop the proposed encryption model because of its advanced security features and high processing capability. The experimental results showed that, at large RSA encryption keys, the proposed DSP hardware-based RSA encryption model outperformed a software-based RSA implementation running on Pentium 4 machines with almost double the DSP's clock speed. At the second line of defence, our goal is to develop an anomaly intrusion detection model that improves the detection accuracy, efficiency and adaptability of existing anomaly detection approaches. Existing anomaly detection systems are not effective, as they usually produce excessive false alarms. In addition, several anomaly detection approaches suffer from a serious efficiency problem due to the high construction costs of the detection profiles. High construction costs will eventually reduce the applicability of these approaches in practice. Furthermore, existing anomaly detection systems lack adaptability because no mechanisms are provided to update their detection profiles dynamically in order to adapt to changes in the behaviour of the monitored objects. We have developed a model for program anomaly intrusion detection to address these problems. The proposed detection model uses a hidden Markov model (HMM) to characterize normal program behaviour using system calls. In order to increase the detection rate and to reduce the false alarm rate, we propose two detection schemes: a two-layer detection scheme and a fuzzy-based detection scheme. The two-layer detection scheme aims at reducing false alarms by applying a double-layer test on each sequence of test traces of system calls.
The fuzzy-based detection scheme, on the other hand, focuses on further improving the detection rate as well as reducing false alarms. It employs fuzzy inference to combine multiple sequence information to correctly determine the sequence status. The experimental results showed that the proposed detection schemes reduced false alarms by approximately 48% compared to the normal database scheme. In addition, our detection schemes generated strong anomaly signals for all tested traces, which in turn improves the detection rate. We propose an HMM incremental training scheme with optimal initialization to address the efficiency problem by reducing the construction costs in terms of model training time and storage demand. Unlike the HMM batch training scheme, which updates the HMM model using the complete training set, our HMM incremental training scheme incrementally updates the HMM model using one training subset at a time, until convergence. The experimental results showed that the proposed HMM incremental training scheme reduced training time four-fold compared to HMM batch training based on the well-known Baum-Welch algorithm. The proposed training scheme also reduced storage demand substantially, as the size of each training subset is significantly smaller than the size of the complete training set. We also describe our complete model for program anomaly detection using system calls in chapter 8. The complete model consists of two stages: a training stage and a testing stage. In the training stage, an HMM model and a normal database are constructed to represent normal program behaviour. In addition, fuzzy sets and rules are defined to represent the space and combined conditions of the sequence parameters. In the testing stage, the HMM model and the normal database are used to generate the sequence parameters, which are used as the input for the fuzzy inference engine to evaluate each sequence of system calls for anomalies and possible intrusions.
The proposed detection model also provides a mechanism to update its detection profile (the HMM model and the normal database) using online training data. This keeps the proposed detection model up to date and therefore maintains its detection accuracy.
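The HMM-based detection idea can be sketched with a scaled forward algorithm over short windows of system-call symbols. The model parameters, window size, and threshold below are illustrative; the thesis's actual models are trained from normal program traces and combined with a normal database and fuzzy inference.

```python
import math

def forward_loglik(obs, start_p, trans_p, emit_p):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm to avoid underflow."""
    n = len(start_p)
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in range(n)]
    scale = sum(alpha)
    loglik = math.log(scale)
    alpha = [a / scale for a in alpha]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans_p[p][s] for p in range(n)) * emit_p[s][o]
                 for s in range(n)]
        scale = sum(alpha)
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]
    return loglik

def flag_windows(trace, hmm, window=4, threshold=-1.0):
    """Flag each sliding window of the trace whose per-symbol
    log-likelihood drops below the (illustrative) threshold."""
    start_p, trans_p, emit_p = hmm
    return [forward_loglik(trace[i:i + window], start_p, trans_p, emit_p)
            / window < threshold
            for i in range(len(trace) - window + 1)]

# A toy 2-state, 2-symbol model: each state strongly prefers one symbol
# and tends to stay put, so rapidly alternating call patterns are unlikely.
hmm = ([0.5, 0.5],
       [[0.9, 0.1], [0.1, 0.9]],
       [[0.9, 0.1], [0.1, 0.9]])
```

Under this toy model, a steady trace like `[0, 0, 0, 0]` scores a much higher likelihood than an alternating one like `[0, 1, 0, 1]`, which is the signal a threshold test turns into an alarm.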