126 research outputs found

    Denial of Service in Web-Domains: Building Defenses Against Next-Generation Attack Behavior

    The existing state-of-the-art in application-layer Distributed Denial of Service (DDoS) protection is generally designed, and thus effective, only for static web domains. To the best of our knowledge, our work is the first to study the problem of application-layer DDoS defence in web domains of dynamic content and organization, and for next-generation bot behaviour. In the first part of this thesis, we focus on the following research tasks: 1) we identify the main weaknesses of the existing application-layer anti-DDoS solutions proposed in the research literature and in industry, 2) we obtain a comprehensive picture of current-day as well as next-generation application-layer attack behaviour, and 3) we propose novel techniques, based on a multidisciplinary approach that combines offline machine learning algorithms and statistical analysis, for the detection of suspicious web visitors in static web domains. In the second part of the thesis, we propose and evaluate a novel anti-DDoS system that detects a broad range of application-layer DDoS attacks, in both static and dynamic web domains, through the use of advanced data-mining techniques. The key advantage of our system over other systems that resort to challenge-response tests (such as CAPTCHAs) in combating malicious bots is that it minimizes the number of such tests presented to valid human visitors while still preventing most malicious attackers from accessing the web site. The results of the experimental evaluation of the proposed system demonstrate effective detection of current and future variants of application-layer DDoS attacks.
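    The challenge-minimization idea described above can be sketched as a simple gating step: present a CAPTCHA only to visitors whose detector score marks them as suspicious. All names, the score, and the 0.8 threshold below are illustrative assumptions, not details from the thesis.

```python
def should_challenge(suspicion_score, threshold=0.8):
    """Decide whether to serve a challenge-response test (e.g. a CAPTCHA).

    `suspicion_score` in [0, 1] stands in for the output of the thesis's
    data-mining models; the 0.8 threshold is a hypothetical placeholder.
    Gating on the score keeps most legitimate humans uninterrupted while
    still stopping visitors the detector flags as likely bots.
    """
    return suspicion_score >= threshold

def handle_request(suspicion_score):
    # Only suspicious visitors pay the cost of the challenge.
    if should_challenge(suspicion_score):
        return "present_captcha"
    return "serve_content"
```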

    Behavioral complexity analysis of networked systems to identify malware attacks

    Fall 2020. Includes bibliographical references.
    Internet of Things (IoT) environments are often composed of a diverse set of devices that span a broad range of functionality, making them a challenge to secure. This diversity of function leads to a commensurate diversity in network traffic: some devices have simple network footprints while others have complex ones. This complexity in a device's traffic provides a differentiator the network can use to distinguish which devices are most effectively managed autonomously and which are not. This study proposes an informed autonomous learning method that quantifies the complexity of a device from its historic traffic and applies this complexity metric to build a probabilistic model of the device's normal behavior using a Gaussian Mixture Model (GMM). The method yields an anomaly-detection classifier with inlier probability thresholds customized to the complexity of each device, without requiring labeled data. Model efficacy is then evaluated using seven common types of real malware traffic and across four device datasets of network traffic: one residential, two from labs, and one consisting of commercial automation devices. The results of the analysis, covering over 100 devices and 800 experiments, show that the model leads to highly accurate representations of the devices and a strong correlation between the measured complexity of a device and the accuracy with which its network behavior can be modeled.
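    A minimal sketch of the per-device modeling idea, simplified to a single Gaussian in place of the study's full Gaussian Mixture Model. The function names, the toy log-variance "complexity" metric, and the thresholding scheme are illustrative assumptions, not the study's actual method.

```python
import numpy as np

def fit_device_model(X):
    """Fit a single-Gaussian model of a device's normal traffic features.

    X: (n_samples, n_features) array of historic traffic features.
    A stand-in for the study's full GMM; regularization keeps the
    covariance invertible.
    """
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return mu, cov

def log_likelihood(x, mu, cov):
    """Gaussian log-density of one feature vector under the device model."""
    d = x - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ inv @ d + logdet + len(x) * np.log(2 * np.pi))

def complexity(X):
    """Toy complexity proxy: mean log-variance of the device's features."""
    return float(np.mean(np.log(X.var(axis=0) + 1e-9)))

def is_anomalous(x, mu, cov, threshold):
    """Flag a sample whose likelihood falls below the (complexity-tuned)
    inlier threshold -- no labeled data needed."""
    return log_likelihood(x, mu, cov) < threshold
```

In the study's setting the threshold would be adjusted per device using the complexity metric; here it is simply a parameter.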

    Cryptography and Its Applications in Information Security

    Nowadays, mankind is living in a cyber world. Modern technologies involve fast communication links between potentially billions of devices through complex networks (satellite, mobile phone, Internet, Internet of Things (IoT), etc.). The main concern posed by these entangled complex networks is their protection against passive and active attacks that could compromise public security (sabotage, espionage, cyber-terrorism) and privacy. This Special Issue, "Cryptography and Its Applications in Information Security", addresses the range of problems related to the security of information in networks and multimedia communications, and aims to bring together researchers, practitioners, and industry professionals interested in such questions. It consists of eight peer-reviewed yet accessible papers that cover a range of subjects and applications related to information security.

    Features extraction using random matrix theory.

    Representing complex data in a concise and accurate way is a distinct stage in the data mining methodology. Redundant and noisy data weakens the generalization power of any classification algorithm, undermines the results of any clustering algorithm, and encumbers the monitoring of large dynamic systems. This work provides several efficient approaches to all the aforementioned sides of the analysis. We established that a notable difference can be made if results from the theory of ensembles of random matrices are employed. A particularly important result of our study is a discovered family of methods based on projecting the data set onto different subsets of the correlation spectrum. Generally, we start with the traditional correlation matrix of a given data set. We perform singular value decomposition and establish boundaries between essential and unimportant eigen-components of the spectrum. Then, depending on the nature of the problem at hand, we use either the former or the latter part for projection purposes. Projecting a spectrum of interest is a common technique in linear and non-linear spectral methods such as Principal Component Analysis, Independent Component Analysis, and Kernel Principal Component Analysis. Usually the part of the spectrum to project is defined by the amount of variance of the overall data, or of the feature space in the non-linear case. The applicability of these spectral methods is limited by the assumption that larger variance carries the important dynamics, i.e. that the data has a high signal-to-noise ratio. When this holds, projection of principal components targets two problems in data mining: reduction in the number of features and selection of the more important features. Our methodology does not assume a high signal-to-noise ratio; instead, using the rigorous instruments of Random Matrix Theory (RMT), it identifies the presence of noise and establishes its boundaries.
    Knowledge of the structure of the spectrum allows us to make more insightful projections. For instance, in the application to router network traffic, the reconstruction-error procedure for anomaly detection is based on projecting the noisy part of the spectrum, whereas in the bioinformatics application of clustering different types of leukemia, implicit denoising of the correlation matrix is achieved by decomposing the spectrum into random and non-random parts. For temporal high-dimensional data, the spectrum and eigenvectors of the correlation matrix are another representation of the data. Thus eigenvalues, components of the eigenvectors, the inverse participation ratio of eigenvector components, and other operators of eigen-analysis are spectral features of a dynamic system. In our work we propose to extract spectral features using RMT. We demonstrate that with the extracted spectral features we can monitor the changing dynamics of network traffic. Experimenting with delayed correlation matrices of network traffic and extracting their spectral features, we visualized the delayed processes in the system. We demonstrate that a broad range of feature-extraction applications can benefit from this novel RMT-based approach to the spectral representation of data.
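    The noise/signal split described above can be sketched with the Marchenko-Pastur law, a standard RMT result that bounds the eigenvalues of the correlation matrix of pure noise. This is a simplified illustration; the function names and the unit-variance assumption are ours, not the author's.

```python
import numpy as np

def mp_bounds(n_samples, n_features, sigma2=1.0):
    """Marchenko-Pastur eigenvalue bounds for the correlation matrix of
    i.i.d. noise with variance sigma2. Eigenvalues above lam_max are
    unlikely to be noise and are treated as 'signal'."""
    q = n_features / n_samples
    lam_min = sigma2 * (1 - np.sqrt(q)) ** 2
    lam_max = sigma2 * (1 + np.sqrt(q)) ** 2
    return lam_min, lam_max

def split_spectrum(X):
    """Eigendecompose the correlation matrix of X and mark which
    eigenvalues exceed the Marchenko-Pastur noise band."""
    n, p = X.shape
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize columns
    C = (Xs.T @ Xs) / n                          # correlation matrix
    vals, vecs = np.linalg.eigh(C)               # ascending eigenvalues
    _, lam_max = mp_bounds(n, p)
    signal = vals > lam_max                      # boolean mask: non-noise part
    return vals, vecs, signal
```

Projecting onto the `signal` (or, for reconstruction-error anomaly detection, the noise) eigenvectors then gives the two kinds of projection the abstract mentions.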

    Contributions to the Detection of Application-Layer Distributed Denial of Service (DDoS) Attacks

    Six aspects of DDoS attack detection were analysed: techniques, variables, tools, deployment location, point in time, and detection accuracy. This analysis made a useful contribution to the design of a suitable strategy for neutralizing these attacks. In recent years, these attacks have shifted towards the application layer, a trend driven mainly by the large number of tools available for generating this type of attack. This work therefore also proposes a detection alternative based on the dynamism of the web user, evaluating user-dynamism features extracted from mouse and keyboard activity. Finally, this work proposes a low-cost detection approach consisting of two steps: first, user features are extracted in real time while the web application is being browsed; second, each extracted feature is used by an O(1) algorithm to differentiate a real user from a DDoS attack. Test results with the LOIC, OWASP and GoldenEye attack tools show that the proposed method achieves 100% detection effectiveness and that web-user dynamism features make it possible to differentiate between a real user and a bot.
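    A toy sketch of the two-step idea above: features gathered while the user browses, then a constant-time check per feature. Real browsers driven by humans produce mouse and keyboard events between requests, while flood tools such as LOIC or GoldenEye do not. The event counts and the decision rule here are illustrative assumptions, not the thesis's actual algorithm.

```python
def classify_visitor(mouse_moves, key_presses, requests):
    """Constant-time (O(1)) check on user-dynamism features.

    mouse_moves, key_presses: input events observed during the session.
    requests: HTTP requests issued in the same window.
    A session with zero input events per request is treated as a bot.
    """
    if requests == 0:
        return "human"  # nothing to judge yet
    events_per_request = (mouse_moves + key_presses) / requests
    return "human" if events_per_request > 0 else "bot"
```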

    Countering internet packet classifiers to improve user online privacy

    Internet traffic classification, or packet classification, is the act of classifying packets using statistical data extracted from the packets transmitted on a computer network. It is an essential tool for Internet service providers to manage network traffic, provide users with the intended quality of service (QoS), and perform surveillance. QoS measures prioritize one type of a network's traffic over others based on preset criteria; for instance, they give higher priority or bandwidth to video traffic over website-browsing traffic. Internet packet classification methods are also used for automated intrusion detection: they analyze incoming traffic patterns and identify malicious packets used for denial of service (DoS) or similar attacks. Traffic classification may also be used for website-fingerprinting attacks, in which an intruder analyzes a user's encrypted traffic to find behavior or usage patterns and infer the user's online activities. Protecting users' online privacy against traffic classification attacks is the primary motivation of this work. This dissertation shows the effectiveness of machine learning algorithms in identifying user traffic by comparing 11 state-of-the-art classifiers, and proposes three anonymization methods for masking generated user network traffic to counter Internet packet classifiers. These methods are equalized packet length, equalized packet count, and equalized inter-arrival times of TCP packets. This work compares the results of these anonymization methods to show their effectiveness in reducing the performance of machine learning algorithms for traffic classification. The results are validated using newly generated user traffic. Additionally, a novel model based on a generative adversarial network (GAN) is introduced to automate countering adversarial traffic classifiers.
    This model, called the GAN tunnel, generates pseudo traffic patterns imitating the distributions of real traffic generated by actual applications and encapsulates the actual network packets into the generated traffic packets. The GAN tunnel's performance is tested against random forest and extreme gradient boosting (XGBoost) traffic classifiers, which are shown to be unable to detect the actual source application of data exchanged through the GAN tunnel in the scenarios tested in this thesis.
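    The first of the three countermeasures, equalized packet length, can be sketched as payload padding: once every packet has the same size, per-packet length carries no information for a classifier. This simplified illustration ignores MTU limits and real packet framing, and the function name is ours.

```python
def equalize_lengths(packets, target=None):
    """Pad every packet payload to a common length.

    packets: list of bytes payloads.
    target: common length; defaults to the longest packet in the batch.
    Padding with zero bytes removes the per-packet size feature that
    traffic classifiers exploit (at the cost of extra bandwidth).
    """
    if target is None:
        target = max(len(p) for p in packets)
    return [p + b"\x00" * (target - len(p)) for p in packets]
```

The analogous tricks for the other two methods would batch packets to a fixed count and schedule transmissions at fixed inter-arrival times.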

    Securing the Internet of Things: A Study on Machine Learning-Based Solutions for IoT Security and Privacy Challenges

    The Internet of Things (IoT) is a rapidly growing technology that connects and integrates billions of smart devices, generating vast volumes of data and impacting various aspects of daily life and industrial systems. However, the inherent characteristics of IoT devices, including limited battery life, universal connectivity, resource-constrained design, and mobility, make them highly vulnerable to cybersecurity attacks, which are increasing at an alarming rate. As a result, IoT security and privacy have gained significant research attention, with a particular focus on developing anomaly detection systems. In recent years, machine learning (ML) has made remarkable progress, evolving from a lab novelty to a powerful tool in critical applications, and has been proposed as a promising solution for addressing IoT security and privacy challenges. In this article, we study the existing security and privacy challenges in the IoT environment. Subsequently, we present the latest ML-based models and solutions to address these challenges, summarizing them in a table that highlights the key parameters of each proposed model. Additionally, we thoroughly survey the available datasets related to IoT technology. Through this article, readers will gain a detailed understanding of IoT architecture, security attacks, and ML-based countermeasures built on the available datasets. We also discuss future research directions for ML-based IoT security and privacy. Our aim is to provide valuable insights into the current state of research in this field and contribute to the advancement of IoT security and privacy.

    A reputation framework for behavioural history: developing and sharing reputations from behavioural history of network clients

    The open architecture of the Internet has enabled its massive growth and success by facilitating easy connectivity between hosts. At the same time, the Internet has also opened itself up to abuse, e.g. unsolicited communication, both intentional and unintentional. It remains an open question how servers can best protect themselves from malicious clients while offering good service to innocent clients. There has been research on behavioural profiling and reputation of clients, mostly at the network level and also for email as an application, to detect malicious clients; however, this area continues to pose open research challenges. This thesis is motivated by the need for a generalised framework capable of aiding efficient detection of malicious clients while rewarding clients whose behaviour profiles conform to acceptable-use and other relevant policies. The main contribution of this thesis is a novel, generalised, context-aware, policy-independent, privacy-preserving framework for developing and sharing client reputation based on behavioural history. The framework, which augments existing protocols, allows policies to be fitted in at various stages, keeping it open and flexible for implementation. Locally recorded behavioural history of clients with known identities is translated into client reputations, which are then shared globally. The reputations preserve clients' privacy by not exposing the details of their behaviour during interactions with servers. The local and globally shared reputations help servers select service levels, including restricting access for malicious clients. We present results and analyses of simulations, with synthetic data and some proposed example policies, of client-server interactions and of attacks on our model. Suggestions for possible future extensions are drawn from our experiences with the simulations.
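    One plausible way to turn a locally recorded behaviour history into a single shareable reputation score is an exponentially decayed average, which summarizes past interactions without exposing their details. This is an illustrative sketch under our own assumptions, not the framework's actual policy-dependent translation.

```python
def reputation(history, decay=0.9):
    """Exponentially decayed reputation in [-1, 1].

    history: list of interaction outcomes, oldest first;
             +1 = policy-conforming interaction, -1 = violation.
    decay: weight multiplier per step back in time, so recent behaviour
           dominates. Sharing only this scalar (not the history itself)
           preserves the client's privacy.
    """
    score, total_weight, w = 0.0, 0.0, 1.0
    for event in reversed(history):  # newest event gets weight 1.0
        score += w * event
        total_weight += w
        w *= decay
    return score / total_weight if total_weight else 0.0
```

A server could then map score bands to service levels, e.g. full service above some threshold and restricted access below it.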

    Method of Information Security Risk Analysis for Virtualized System

    The growth of Information Technology (IT) usage in the daily operations of enterprises has brought the value, and the vulnerability, of information to the peak of interest. Moreover, distributed computing has revolutionized the outsourcing of computing functions, allowing flexible IT solutions. Since the concept of information goes beyond traditional text documents, reaching into manufacturing, machine control, and, to a certain extent, reasoning, maintaining appropriate information security is a great responsibility. Information Security (IS) risk analysis and maintenance require extensive knowledge about the possessed assets as well as the technologies behind them, in order to recognize the threats and vulnerabilities the infrastructure is facing. A formal description of the infrastructure, the Enterprise Architecture (EA), offers a multi-perspective view of the whole enterprise, linking together business processes as well as the infrastructure. Several IS risk analysis solutions based on the EA exist; however, the lack of IS risk analysis methods for virtualization technologies complicates the procedure, reducing the availability of such analysis. The dissertation consists of an introduction, three main chapters, and general conclusions. The first chapter introduces the problem of information security risk analysis and its automation, and discusses state-of-the-art methodologies and their implementations for automated information security risk analysis. The second chapter proposes a novel method for risk analysis of virtualization components based on the most recent data, including threat classification and specification, control means, and impact metrics. The third chapter presents an experimental evaluation of the proposed method, implementing it in the Cyber Security Modeling Language (CySeMoL) and comparing the analysis results to well-calibrated expert knowledge.
    It was concluded that the automation of virtualization-solution risk analysis provides sufficient data for the adjustment and implementation of security controls to maintain an optimal security level.