15 research outputs found

    Automated Instruction-Set Randomization for Web Applications in Diversified Redundant Systems

    The use of diversity and redundancy in the security domain is an interesting approach to preventing or detecting intrusions. Many researchers have proposed architectures based on these concepts in which diversity is either natural or artificial. These architectures follow the model of N-version programming and have often been instantiated for web servers without taking into account the web application(s) running on them. In this article, we present a solution to protect the web applications running on this kind of architecture in order to detect and tolerate code-injection intrusions. Our solution creates diversity in the web application scripts by randomizing the language understood by the interpreter, so that injected code cannot be executed by all the servers. We also discuss the issues related to automating our solution and present some ways to tackle them.
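    The randomization idea can be illustrated with a minimal sketch in the spirit of SQL keyword randomization (as in SQLrand-style schemes). All names and the tagging scheme below are hypothetical, not the paper's implementation: each server derives a secret per-server tag, trusted application code is rewritten to use tagged keywords, and the modified interpreter rejects any bare keyword, which is what an injected payload would contain.

    ```python
    import random
    import re

    SQL_KEYWORDS = ["SELECT", "FROM", "WHERE", "UNION", "INSERT", "DROP"]

    def make_randomizer(seed):
        """Derive a per-server secret tag appended to every SQL keyword."""
        rng = random.Random(seed)
        tag = str(rng.randint(100, 999))
        encode = {kw: kw + tag for kw in SQL_KEYWORDS}
        decode = {v: k for k, v in encode.items()}
        return encode, decode

    def randomize_query(query, encode):
        """Rewrite trusted application code so its keywords carry the tag."""
        for kw, rkw in encode.items():
            query = re.sub(r"\b%s\b" % kw, rkw, query)
        return query

    def execute(query, decode):
        """The modified interpreter: bare (untagged) keywords -- e.g. from an
        injected payload appended at runtime -- are rejected; tagged keywords
        are de-randomized before reaching the real SQL engine."""
        for token in query.split():
            if token.strip("';").upper() in SQL_KEYWORDS:
                raise ValueError("untagged keyword, possible injection: " + token)
        for rkw, kw in decode.items():
            query = query.replace(rkw, kw)
        return query
    ```

    In a diversified redundant deployment, each replica would use a different seed, so a payload that happens to execute on one server still fails on the others, making the injection detectable by vote comparison.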

    A Text Mining-Based Anomaly Detection Model in Network Security

    Anomaly detection systems are extensively used security tools for detecting cyber-threats and attack activity in computer systems and networks. In this paper, we present the Text Mining-Based Anomaly Detection (TMAD) model. We discuss n-gram text categorization and focus on TF-IDF (term frequency, inverse document frequency), a commonly used term-weighting scheme in which the weights reflect the importance of a word in a specific document of the considered collection. A Mahalanobis Distances Map (MDM) and a Support Vector Machine (SVM) are used to discover hidden correlations between the features and among the packet payloads. Experiments were carried out to estimate the performance of TMAD on the ISCX 2012 intrusion detection evaluation dataset. The results show that TMAD has good accuracy.
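    The n-gram TF-IDF weighting the abstract refers to can be sketched in a few lines of plain Python. This is a generic illustration of the weighting scheme, not the paper's code; the example payloads are invented:

    ```python
    import math
    from collections import Counter

    def ngrams(payload, n=2):
        """Character n-grams of a packet payload (the text-mining features)."""
        return [payload[i:i + n] for i in range(len(payload) - n + 1)]

    def tf_idf(payloads, n=2):
        """Weight each n-gram by term frequency * inverse document frequency,
        so grams common to every payload contribute little to the vector."""
        docs = [Counter(ngrams(p, n)) for p in payloads]
        df = Counter()  # document frequency of each gram
        for d in docs:
            df.update(d.keys())
        N = len(docs)
        vectors = []
        for d in docs:
            total = sum(d.values())
            vectors.append({g: (c / total) * math.log(N / df[g])
                            for g, c in d.items()})
        return vectors
    ```

    The resulting sparse vectors are what a downstream distance map or SVM would consume; grams that appear in every payload get weight zero, while grams unique to a suspicious payload stand out.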

    Abusing locality in shared web hosting


    AIIDA-SQL: An Adaptive Intelligent Intrusion Detector Agent for detecting SQL Injection attacks

    SQL Injection attacks on web applications have become one of the most important information security concerns over the past few years. This paper presents a hybrid approach based on the Adaptive Intelligent Intrusion Detector Agent (AIIDA-SQL) for the detection of such attacks. The AIIDA-SQL agent incorporates a Case-Based Reasoning (CBR) engine equipped with learning and adaptation capabilities for the classification of SQL queries and the detection of malicious user requests. To carry out the tasks of attack classification and detection, the agent incorporates advanced algorithms in the stages of the reasoning cycle. Concretely, an innovative classification model based on a mixture of an Artificial Neural Network and a Support Vector Machine is applied in the reuse stage of the CBR cycle. This strategy makes it possible to classify the received SQL queries reliably. Finally, a projection neural technique is incorporated, which notably eases the revision stage carried out by human experts in the case of suspicious queries. The experimental results obtained on a real-traffic case study show that AIIDA-SQL performs remarkably well in practice.
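    Classifiers like the one described need SQL queries turned into feature vectors first. The sketch below shows that front end only, with a trivial weighted-sum scorer standing in for the paper's ANN/SVM mixture; the feature list, weights and threshold are invented for illustration:

    ```python
    import re

    # Hypothetical lexical features a SQL-query classifier could consume.
    FEATURES = [
        ("tautology",    re.compile(r"\bOR\b\s+['\"]?\d+['\"]?\s*=\s*['\"]?\d+", re.I)),
        ("union_select", re.compile(r"\bUNION\b.+\bSELECT\b", re.I)),
        ("comment",      re.compile(r"(--|#|/\*)")),
        ("stacked",      re.compile(r";\s*(DROP|DELETE|UPDATE|INSERT)\b", re.I)),
    ]

    def featurize(query):
        """Map a SQL query to a binary feature vector."""
        return [1 if pat.search(query) else 0 for _, pat in FEATURES]

    def classify(query, weights=(0.5, 0.5, 0.3, 0.6), threshold=0.4):
        """Stand-in for the trained ANN/SVM mixture: a fixed weighted sum."""
        score = sum(w * f for w, f in zip(weights, featurize(query)))
        return "malicious" if score >= threshold else "benign"
    ```

    In the paper's design, the learned model replaces the fixed weights, and the CBR cycle retains classified queries as cases so the agent adapts over time.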

    Automatic Input Rectification

    We present a novel technique, automatic input rectification, and a prototype implementation called SOAP. SOAP learns a set of constraints characterizing typical inputs that an application is highly likely to process correctly. When given an atypical input that does not satisfy these constraints, SOAP automatically rectifies the input (i.e., changes the input so that it satisfies the learned constraints). The goal is to automatically convert potentially dangerous inputs into typical inputs that the program is highly likely to process correctly. Our experimental results show that, for a set of benchmark applications (namely, Google Picasa, ImageMagick, VLC, Swfdec, and Dillo), this approach effectively converts malicious inputs (which successfully exploit vulnerabilities in the application) into benign inputs that the application processes correctly. Moreover, a manual code analysis shows that, if an input does satisfy the learned constraints, it is incapable of exploiting these vulnerabilities. We also present the results of a user study designed to evaluate the subjective perceptual quality of outputs from benign but atypical inputs that have been automatically rectified by SOAP to conform to the learned constraints. Specifically, we obtained benign inputs that violate learned constraints, used our input rectifier to obtain rectified inputs, then paid Amazon Mechanical Turk users to provide their subjective qualitative perception of the difference between the outputs from the original and rectified inputs. The results indicate that rectification can often preserve much, and in many cases all, of the desirable data in the original input.
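    The learn-then-rectify loop can be sketched abstractly. SOAP itself works on binary input formats with length and value constraints; the toy version below only learns per-field numeric bounds from typical inputs and clamps out-of-range values, so field names and the clamping policy are illustrative assumptions:

    ```python
    def learn_constraints(typical_inputs):
        """Learn per-field (min, max) bounds from inputs the application
        is known to process correctly."""
        bounds = {}
        for inp in typical_inputs:
            for field, value in inp.items():
                lo, hi = bounds.get(field, (value, value))
                bounds[field] = (min(lo, value), max(hi, value))
        return bounds

    def rectify(inp, bounds):
        """Clamp each field of an atypical input into its learned range,
        turning a potentially dangerous value into a typical one."""
        fixed = {}
        for field, value in inp.items():
            if field in bounds:
                lo, hi = bounds[field]
                value = max(lo, min(hi, value))
            fixed[field] = value
        return fixed
    ```

    An oversized value (say, a width field set to 2^31 - 1 to trigger an integer overflow) is pulled back into the observed range, which is the sense in which rectification trades a little input fidelity for safety.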

    A method for intrusion detection with neural networks based on feature reduction

    The application of Artificial Intelligence techniques to intrusion detection systems (IDS), mainly artificial neural networks (ANN), is proving to be a very suitable approach to alleviating many of the open problems in this area. However, the large volume of information required every day to train these systems, together with the exponentially growing time needed to assimilate it, greatly hinders their deployment in real scenarios. This work proposes a method based on the application of a feature-reduction technique called Principal Component Analysis (PCA). PCA yields a model for reducing the size of the input vectors fed to the ANN while ensuring minimal information loss, thereby decreasing the complexity of the neural classifier and keeping training times stable. To validate the proposal, a test scenario was designed using an ANN-based IDS. The results obtained from the tests demonstrate the validity of the proposal and support the future lines of work. This work was funded by the Spanish Ministry of Education and Science under research project TIN2006-04081 and by the Generalitat Valenciana under research project GV/2007175.
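    The PCA step can be shown in miniature. The sketch below, written in plain Python under the simplifying assumption of two-feature samples (so the 2x2 covariance eigenproblem is solvable analytically), computes the first principal component and projects each sample onto it, which is exactly the dimensionality reduction applied to the ANN input vectors:

    ```python
    import math

    def principal_component(data):
        """First principal component of 2-feature samples, via the
        analytic eigendecomposition of the 2x2 covariance matrix."""
        n = len(data)
        mx = sum(x for x, _ in data) / n
        my = sum(y for _, y in data) / n
        sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
        syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
        sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)
        # largest eigenvalue of [[sxx, sxy], [sxy, syy]]
        tr, det = sxx + syy, sxx * syy - sxy * sxy
        lam = tr / 2 + math.sqrt(tr * tr / 4 - det)
        # corresponding eigenvector, normalized
        vx, vy = (sxy, lam - sxx) if abs(sxy) > 1e-12 else (1.0, 0.0)
        norm = math.hypot(vx, vy)
        return vx / norm, vy / norm

    def project(data, component):
        """Reduce each centred 2-D sample to a single coordinate."""
        n = len(data)
        mx = sum(x for x, _ in data) / n
        my = sum(y for _, y in data) / n
        cx, cy = component
        return [(x - mx) * cx + (y - my) * cy for x, y in data]
    ```

    In the real setting the input vectors have dozens of features and one keeps the top k components, but the principle is the same: project onto the directions of greatest variance so little information is lost.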

    Automatic input rectification

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 51-55). We present a novel technique, automatic input rectification, and a prototype implementation, SOAP. SOAP learns a set of constraints characterizing typical inputs that an application is highly likely to process correctly. When given an atypical input that does not satisfy these constraints, SOAP automatically rectifies the input (i.e., changes the input so that it satisfies the learned constraints). The goal is to automatically convert potentially dangerous inputs into typical inputs that the program is highly likely to process correctly. Our experimental results show that, for a set of benchmark applications (Google Picasa, ImageMagick, VLC, Swfdec, and Dillo), this approach effectively converts malicious inputs (which successfully exploit vulnerabilities in the application) into benign inputs that the application processes correctly. Moreover, a manual code analysis shows that, if an input does satisfy the learned constraints, it is incapable of exploiting these vulnerabilities. We also present the results of a user study designed to evaluate the subjective perceptual quality of outputs from benign but atypical inputs that have been automatically rectified by SOAP to conform to the learned constraints. Specifically, we obtained benign inputs that violate learned constraints, used our input rectifier to obtain rectified inputs, then paid Amazon Mechanical Turk users to provide their subjective qualitative perception of the difference between the outputs from the original and rectified inputs. The results indicate that rectification can often preserve much, and in many cases all, of the desirable data in the original input. by Fan Long. S.M.

    A closer look at Intrusion Detection System for web applications

    An Intrusion Detection System (IDS) is one of the security measures used as an additional defence mechanism to prevent security breaches on the web. It is a well-known methodology for detecting network-based attacks but is still immature in the domain of securing web applications. The objective of this paper is to thoroughly understand the design methodology of detection systems with respect to web applications. In this paper, we discuss in detail several specific aspects of a web application that make it challenging for a developer to build an efficient web IDS. The paper also provides a comprehensive overview of the existing detection systems designed exclusively to observe web traffic. Furthermore, we identify various dimensions for comparing IDSs from different perspectives based on their design and functionality. We also provide a conceptual framework of an IDS with a prevention mechanism, to offer systematic guidance for implementing a system specific to web applications. We compare its features with five existing detection systems, namely AppSensor, PHPIDS, ModSecurity, Shadow Daemon and AQTRONIX WebKnight. The paper will greatly help interested groups by laying out the strengths and weaknesses of current web IDSs and by providing a firm foundation for developing an intelligent and efficient system.

    A Framework for Hybrid Intrusion Detection Systems

    Web application security is a definite threat to the world's information technology infrastructure. The Open Web Application Security Project (OWASP) generally defines a web application security violation as unauthorized or unintentional exposure, disclosure, or loss of personal information. These breaches occur without the company's knowledge, and it often takes a while before the web application attack is revealed to the public, in particular because the security violations are fixed first. Out of the need to protect their reputation, organizations have begun researching solutions to these problems. The most widely accepted solution is the use of an Intrusion Detection System (IDS). Such systems currently rely on either signatures of the attack used for the data breach or changes in the behavior patterns of the system to identify an intruder. These systems, whether signature-based or anomaly-based, are readily understood by attackers. Issues arise when an existing IDS fails to notice an attack because the attack does not fit the pre-defined attack signatures the IDS is implemented to discover. Despite current IDSs' capabilities, little research has identified a method to detect all potential attacks on a system. This thesis intends to address this problem. A particular emphasis is placed on detecting advanced attacks, such as those that take place at the application layer. These types of attacks can bypass existing IDSs, increasing the potential for a web application security breach to occur and go undetected. In particular, the attacks under study are all web application layer attacks: SQL injection, cross-site scripting, directory traversal and remote file inclusion. This work identifies common existing data breach detection methods as well as the necessary improvements for IDS models.
    Ultimately, the proposed approach combines an anomaly detection technique measured by cross entropy and a signature-based attack detection framework utilizing a genetic algorithm. The proposed hybrid model for data breach detection benefits organizations by increasing security measures and allowing attacks to be identified in less time and more efficiently.
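    The cross-entropy measure used on the anomaly side of such a hybrid can be sketched directly. This is a generic illustration, not the thesis's model: a benign baseline character distribution is learned from normal traffic, and a request is scored by the cross entropy between its own character distribution and that baseline (the example requests and the smoothing constant are invented):

    ```python
    import math
    from collections import Counter

    def char_distribution(text):
        """Empirical character distribution of a request payload."""
        counts = Counter(text)
        total = len(text)
        return {c: n / total for c, n in counts.items()}

    def cross_entropy(p, q, eps=1e-6):
        """H(p, q) = -sum_c p(c) * log q(c); it grows when the request's
        character mix p diverges from the benign baseline q. Unseen
        characters get a small smoothing probability eps."""
        return -sum(pc * math.log(q.get(c, eps)) for c, pc in p.items())

    def anomaly_score(request, baseline):
        return cross_entropy(char_distribution(request), baseline)
    ```

    Requests whose score exceeds a threshold chosen from training data are flagged as anomalous; the signature side (evolved here by a genetic algorithm) then covers the attacks whose payloads look statistically normal.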