
    On the subspace learning for network attack detection

    Get PDF
    Doctoral thesis (Tese de doutorado), Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2019. The cost of all types of cyberattacks is increasing for global organizations. The White House estimates that malicious cyber activity cost the U.S. economy between US$57 billion and US$109 billion in 2016. Recently, there has been an increase in the number of Denial of Service (DoS), botnet, malicious insider, and ransomware attacks. Accenture argues that 89% of survey respondents believe breakthrough technologies, like artificial intelligence, machine learning, and user behavior analytics, are essential for securing their organizations. To face adversarial models, novel network attacks, and the countermeasures attackers use to avoid detection, it is possible to adopt unsupervised or semi-supervised approaches for network anomaly detection by means of behavioral analysis, in which known anomalies are not required for training models. Signal processing schemes have been applied to detect malicious traffic in computer networks through unsupervised approaches, showing advances in network traffic analysis, network attack detection, and network intrusion detection systems. Anomalies can be hard to identify and separate from normal data because anomalies occur rarely in comparison to normal events. Imbalanced data can compromise the performance of most standard learning algorithms, creating a bias toward the majority class and reducing the capacity to detect anomalies, which are characterized by the minority class. Therefore, anomaly detection algorithms have to be highly discriminating, robust to corruption, and able to deal with the imbalanced data problem. Some widely adopted algorithms for anomaly detection assume Gaussian-distributed data for legitimate observations; however, this assumption may not hold for network traffic, which is usually characterized by skewed and heavy-tailed distributions. As a first contribution, we propose Eigensimilarity, an approach based on signal processing concepts applied to the detection of malicious traffic in computer networks. We evaluate the accuracy and performance of the proposed framework on a simulated scenario and on the DARPA 1998 data set. The experiments show that synflood, fraggle, and port scan attacks can be detected accurately and in great detail by Eigensimilarity, in an automatic and blind fashion, i.e. in an unsupervised approach. Considering that accounting for skewness can improve anomaly detection in imbalanced and skewed data, such as network traffic, we propose the Moment-based Robust Principal Component Analysis (m-RPCA) for network attack detection. The m-RPCA is a framework based on distances between contaminated observations and moments computed from a robust subspace learned by Robust Principal Component Analysis (RPCA), in order to detect anomalies in skewed data and network traffic. We evaluate the accuracy of the m-RPCA for anomaly detection on simulated data sets with skewed and heavy-tailed distributions, as well as on the CTU-13 data set. The experimental evaluation compares our proposal to widely adopted algorithms for anomaly detection and shows that the distance between robust estimates and contaminated observations can improve anomaly detection on skewed data and network attack detection. Moreover, we propose an architecture and approach to evaluate a proof of concept of Eigensimilarity for malicious behavior detection in mobile applications, in order to detect possible threats in offline corporate mobile clients. We propose scenarios, features, and approaches for threat analysis by means of Eigensimilarity, and evaluate the processing time required to execute Eigensimilarity on mobile devices.
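    To make the distance-to-robust-moments idea concrete, here is a minimal sketch in Python. It is not the thesis implementation: plain PCA stands in for the robust subspace learned by RPCA, the median of the projected legitimate traffic stands in for the robust moment, and the lognormal toy data only mimics skewed network features.

```python
# Minimal sketch of a moment-based subspace anomaly score.
# Assumptions: plain PCA stands in for the robust subspace learned by RPCA,
# and the median of the projected training data stands in for the robust moment.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Skewed "legitimate" traffic features (lognormal) plus a few injected anomalies.
normal = rng.lognormal(mean=0.0, sigma=1.0, size=(1000, 10))
anomalies = normal[:20] + rng.normal(loc=15.0, scale=3.0, size=(20, 10))
data = np.vstack([normal, anomalies])

# Learn a low-dimensional subspace from (mostly) legitimate observations.
pca = PCA(n_components=3).fit(normal)
projected_train = pca.transform(normal)

# Robust first moment of the projected legitimate data.
robust_center = np.median(projected_train, axis=0)

# Score every observation by its distance to the robust center in the subspace.
scores = np.linalg.norm(pca.transform(data) - robust_center, axis=1)

# Flag observations whose score exceeds a high quantile of the training scores.
train_scores = np.linalg.norm(projected_train - robust_center, axis=1)
threshold = np.quantile(train_scores, 0.99)
print("flagged:", np.sum(scores > threshold), "of", data.shape[0])
```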

    A Study on Techniques/Algorithms used for Detection and Prevention of Security Attacks in Cognitive Radio Networks

    Get PDF
    This paper presents a detailed survey of the taxonomy of security issues, advances in security threats and countermeasures, cross-layer attacks, and the security status and challenges of Cognitive Radio Networks. It also surveys several algorithms/techniques used to detect and prevent the SSDF (Spectrum Sensing Data Falsification) attack, a type of DoS (Denial of Service) attack, and several other network-layer attacks in Cognitive Radio Networks and Cognitive Radio Wireless Sensor Node Networks (WSNNs), analyzing the advantages and disadvantages of the existing algorithms/techniques.

    A model for multi-attack classification to improve intrusion detection performance using deep learning approaches

    Full text link
    This work introduces novel deep learning methodologies with the objective of creating a reliable intrusion detection mechanism to help identify malicious attacks. A deep-learning-based solution framework consisting of three approaches is developed. The first approach is a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) evaluated with seven optimizers: Adamax, SGD, Adagrad, Adam, RMSprop, Nadam, and Adadelta. The model is evaluated on the NSL-KDD dataset for multi-attack classification and performs best with the Adamax optimizer in terms of accuracy, detection rate, and low false alarm rate. The results of the LSTM-RNN with the Adamax optimizer are compared with existing shallow machine learning and deep learning models in terms of accuracy, detection rate, and false alarm rate. The multi-model methodology consists of a Recurrent Neural Network (RNN), a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN), and a Deep Neural Network (DNN). The multi-model approach is evaluated on benchmark datasets such as KDD99, NSL-KDD, and UNSW-NB15. The models learn the features themselves and classify the attack classes in a multi-attack setting. The RNN and LSTM-RNN models provide considerable performance compared to other existing methods on the KDD99 and NSL-KDD datasets.
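    As a rough illustration of the first approach, the sketch below assembles a small LSTM network for multi-class attack classification and compiles it with the Adamax optimizer in Keras. The layer sizes, the length-1 sequence shape, and the five-class setup are placeholder assumptions, not the configuration reported in the paper.

```python
# Minimal sketch: LSTM-based multi-attack classifier compiled with Adamax.
# Input shape and layer sizes are illustrative placeholders, not the paper's settings.
import numpy as np
import tensorflow as tf

num_features = 41      # e.g. NSL-KDD records have 41 features
num_classes = 5        # normal plus four attack categories (DoS, Probe, R2L, U2R)

model = tf.keras.Sequential([
    # Treat each record as a length-1 sequence of feature vectors.
    tf.keras.layers.Input(shape=(1, num_features)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

model.compile(optimizer="adamax",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data just to show the training call; real use would load NSL-KDD records.
X = np.random.rand(256, 1, num_features).astype("float32")
y = np.random.randint(0, num_classes, size=(256,))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```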

    Botnet Detection Using Graph Based Feature Clustering

    Get PDF
    Detecting botnets in a network is crucial because bot activities impact numerous areas such as security, finance, health care, and law enforcement. Most existing rule-based and flow-based detection methods may not be capable of detecting bot activities efficiently. Hence, designing a robust botnet-detection method is of high significance. In this study, we propose a botnet-detection methodology based on graph-based features. A Self-Organizing Map is applied to establish clusters of nodes in the network based on these features. Our method is capable of isolating bots in small clusters while containing most normal nodes in the big clusters. A filtering procedure is also developed to further enhance the algorithm's efficiency by removing inactive nodes from bot detection. The methodology is verified using the real-world CTU-13 and ISCX botnet datasets and benchmarked against classification-based detection methods. The results show that our proposed method can efficiently detect the bots despite their varying behaviors.
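    A hedged sketch of this pipeline is shown below: graph-based node features are computed with networkx and clustered with a Self-Organizing Map from the MiniSom library. The chosen features, the 3x3 SOM grid, and the toy flow graph are illustrative assumptions rather than the exact setup of the study.

```python
# Sketch: graph-based node features clustered with a Self-Organizing Map.
# Feature choice, SOM grid size, and the toy edge list are illustrative assumptions.
import numpy as np
import networkx as nx
from minisom import MiniSom

# Toy communication graph of (src, dst) flows; real input would come from CTU-13 flows.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("d", "a"),
         ("e", "a"), ("e", "b"), ("e", "c"), ("e", "d")]
G = nx.DiGraph(edges)

# Filtering step: drop inactive nodes (here, nodes with very low total degree).
active = [n for n in G.nodes if G.in_degree(n) + G.out_degree(n) >= 2]

# Graph-based features per node: in-degree, out-degree, clustering coefficient.
clust = nx.clustering(G.to_undirected())
features = np.array([[G.in_degree(n), G.out_degree(n), clust[n]] for n in active],
                    dtype=float)

# Normalize and train a small SOM; nodes mapped to the same cell form a cluster.
features = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-9)
som = MiniSom(3, 3, features.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(features, 500)

for node, vec in zip(active, features):
    print(node, "->", som.winner(vec))
```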

    Fantômas: Understanding Face Anonymization Reversibility

    Full text link
    Face images are a rich source of information that can be used to identify individuals and infer private information about them. To mitigate this privacy risk, anonymizations apply transformations to clear images to obfuscate sensitive information while retaining some utility. Although published with impressive claims, they are sometimes not evaluated with convincing methodology. Reversing anonymized images so that they resemble their real input, and can even be identified by face recognition approaches, represents the strongest indicator of flawed anonymization. Some recent results indicate that this is indeed possible for some approaches. It is, however, not well understood which approaches are reversible, and why. In this paper, we provide an exhaustive investigation into the phenomenon of face anonymization reversibility. Among other things, we find that 11 out of 15 tested face anonymizations are at least partially reversible and highlight how both reconstruction and inversion are the underlying processes that make reversal possible.

    Features extraction using random matrix theory.

    Get PDF
    Representing complex data in a concise and accurate way is a key stage in data mining methodology. Redundant and noisy data affect the generalization power of any classification algorithm, undermine the results of any clustering algorithm, and ultimately encumber the monitoring of large dynamic systems. This work provides several efficient approaches to all the aforementioned sides of the analysis. We established that a notable difference can be made if results from the theory of ensembles of random matrices are employed. A particularly important result of our study is a family of methods based on projecting the data set onto different subsets of the correlation spectrum. Generally, we start with the traditional correlation matrix of a given data set. We perform singular value decomposition and establish boundaries between essential and unimportant eigen-components of the spectrum. Then, depending on the nature of the problem at hand, we use either the former or the latter part for the projection. Projecting a spectrum of interest is a common technique in linear and non-linear spectral methods such as Principal Component Analysis, Independent Component Analysis, and Kernel Principal Component Analysis. Usually the part of the spectrum to project is defined by the amount of variance of the overall data, or of the feature space in the non-linear case. The applicability of these spectral methods is limited by the assumption that the larger variance carries the important dynamics, i.e. that the data has a high signal-to-noise ratio. If this holds, projection onto principal components targets two problems in data mining: reduction in the number of features and selection of the more important features. Our methodology does not assume a high signal-to-noise ratio; instead, using the rigorous instruments of Random Matrix Theory (RMT), it identifies the presence of noise and establishes its boundaries. Knowledge of the structure of the spectrum gives us the possibility to make more insightful projections. For instance, in the application to router network traffic, the reconstruction-error procedure for anomaly detection is based on the projection onto the noisy part of the spectrum, whereas in the bioinformatics application of clustering different types of leukemia, implicit denoising of the correlation matrix is achieved by decomposing the spectrum into random and non-random parts. For temporal high-dimensional data, the spectrum and eigenvectors of its correlation matrix are another representation of the data. Thus, eigenvalues, components of the eigenvectors, the inverse participation ratio of eigenvector components, and other operators of eigen-analysis are spectral features of a dynamic system. In our work we proposed to extract spectral features using RMT. We demonstrated that with the extracted spectral features we can monitor the changing dynamics of network traffic. Experimenting with delayed correlation matrices of network traffic and extracting their spectral features, we visualized the delayed processes in the system. We demonstrated that a broad range of applications in feature extraction can benefit from the novel RMT-based approach to the spectral representation of the data.
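    The spectrum-splitting step described above can be sketched with a few lines of numpy: estimate the Marchenko-Pastur bounds expected for a pure-noise correlation matrix, treat eigen-components inside the bounds as noise and those outside as signal, and project the data onto either part. This is a generic RMT recipe under simplifying assumptions, not the author's code.

```python
# Sketch: splitting a correlation spectrum into noise and signal parts
# using the Marchenko-Pastur bounds from Random Matrix Theory.
import numpy as np

rng = np.random.default_rng(0)
T, N = 2000, 50                           # T observations of N variables
X = rng.standard_normal((T, N))
X[:, :5] += rng.standard_normal((T, 1))   # inject a common "signal" factor

C = np.corrcoef(X, rowvar=False)          # N x N correlation matrix
eigvals, eigvecs = np.linalg.eigh(C)      # ascending eigenvalues

# Marchenko-Pastur bounds for a random correlation matrix (unit variance, q = N/T).
q = N / T
lam_max = (1 + np.sqrt(q)) ** 2

signal = eigvals > lam_max                # eigen-components carrying structure
noise = ~signal
print("signal eigenvalues:", eigvals[signal])

# Project the data on the noisy part of the spectrum, e.g. for reconstruction error.
P_noise = eigvecs[:, noise] @ eigvecs[:, noise].T
residual = X @ P_noise                    # noise-subspace projection per observation
print("mean noise-projection norm:", np.linalg.norm(residual, axis=1).mean())

# Inverse participation ratio of each eigenvector (a spectral feature).
ipr = (eigvecs ** 4).sum(axis=0)
print("IPR of top eigenvector:", ipr[-1])
```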

    A New Approach for Analyzing Financial Markets Using Correlation Networks and Population Analysis

    Get PDF
    With the current availability of massive data sets associated with stock markets, we now have opportunities to apply newly developed big data techniques and data-driven methodologies to analyze these complicated markets. As stock market data continue to grow, analyzing the behavior of companies listed on the market becomes a massive task, even for high-performance computing systems. Hence, new big data techniques like network models are much needed. We conducted this study on data collected from CRSP for the years 2000-2021, inclusive. We proposed a novel population analysis by constructing a correlation network model based on the monthly data of different companies' excess returns; additionally, we employed the Louvain clustering algorithm to generate individual clusters/communities. After constructing correlation networks from the input data, hidden knowledge was extracted from the network by detecting communities and measuring network centralities. The Louvain algorithm was applied to the network as a data analysis shortcut and grouped companies with highly correlated or similar financial behavior over the period of study. In each community, different centralities were measured: closeness, betweenness, and eigenvector centrality. The empirical results showed that centrality in stock market network analysis carries a different meaning than in social network analysis. In most networks, highly central entities are the most important entities; in this study, however, we learned that high centrality is not what researchers should look for when building a low-risk portfolio. We discovered that nodes with lower centrality lead to a more diverse portfolio with lower risk, in line with Modern Portfolio Theory. Since this approach was applied to the years 2000-2021, the study also revealed behavioral patterns in stock movements around events such as the 9/11 attacks, the 2008 financial crash, and the Covid-19 pandemic. As a result of this study, we suggest a weighted-portfolio-based system to support the selection of portfolios that can outperform the benchmark during normal circumstances or crises.
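    A compact sketch of the correlation-network step follows: build a graph whose edges connect companies with strongly correlated excess returns, detect communities with the Louvain algorithm, and measure closeness, betweenness, and eigenvector centrality. The synthetic returns, the group factors, and the 0.5 correlation threshold are assumptions made for illustration only.

```python
# Sketch: correlation network of excess returns, Louvain communities, centralities.
# Toy returns, group factors, and the 0.5 correlation threshold are assumptions.
import numpy as np
import networkx as nx
from networkx.algorithms import community as nx_comm

rng = np.random.default_rng(0)
n_companies, n_months = 20, 120
returns = rng.standard_normal((n_months, n_companies))
returns[:, :8] += 1.5 * rng.standard_normal((n_months, 1))    # strongly coupled group
returns[:, 8:16] += 1.2 * rng.standard_normal((n_months, 1))  # second coupled group

corr = np.corrcoef(returns, rowvar=False)

# Connect two companies when their excess returns are strongly correlated;
# companies without any strong correlation simply stay out of the network.
G = nx.Graph()
for i in range(n_companies):
    for j in range(i + 1, n_companies):
        if corr[i, j] > 0.5:
            G.add_edge(i, j, weight=corr[i, j])

# Louvain community detection groups companies with similar financial behavior.
communities = nx_comm.louvain_communities(G, weight="weight", seed=0)

closeness = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G)
eigen = nx.eigenvector_centrality(G, weight="weight", max_iter=2000)

for k, comm in enumerate(communities):
    least_central = min(comm, key=lambda n: eigen[n])
    print(f"community {k}: size={len(comm)}, least central node={least_central}")
```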

    Multi-Model Network Intrusion Detection System Using Distributed Feature Extraction and Supervised Learning

    Get PDF
    Intrusion Detection Systems (IDSs) monitor network traffic and system activities to identify any unauthorized or malicious behaviors. These systems usually leverage the principles of data science and machine learning to detect deviations from normalcy by learning from data associated with normal and abnormal patterns. IDSs continue to suffer from issues like distributed high-dimensional data, inadequate robustness, slow detection, and high false-positive rates (FPRs). We investigate these challenges, determine suitable strategies, and propose relevant solutions based on the appropriate mathematical and computational concepts. To handle high-dimensional data in a distributed network, we optimize the feature space in a distributed manner using a PCA-based feature extraction method. The experimental results show that classifiers built upon the features extracted this way perform well, giving a level of accuracy similar to classifiers that use centrally extracted features. This method also significantly reduces the cumulative time needed for extraction. Utilizing the extracted features, we construct a distributed probabilistic classifier based on Naïve Bayes. Each node counts the local frequencies and passes them to the central coordinator. The central coordinator accumulates the local frequencies and computes the global frequencies, which the nodes use to compute the prior probabilities required to perform classification. Each node, being evenly trained, is capable of detecting intrusions individually, which improves the overall robustness of the system. We also propose a similarity measure-based classification (SMC) technique that computes the cosine similarities between the class-specific frequential weights of the values in an observed instance and the average frequency-based data centroid. An instance is classified into the class whose weights for its values share the highest level of similarity with the centroid. SMC contributes alongside Naïve Bayes in a multi-model classification approach, which we introduce to reduce the FPR and improve detection accuracy. This approach utilizes the similarities associated with each class label determined by SMC and the probabilities associated with each class label determined by Naïve Bayes. The similarities and probabilities are aggregated, separately, to form new features that are used to train and validate a tertiary classifier. We demonstrate that such a multi-model approach can attain a higher level of accuracy than single-model classification techniques. The contributions made by this dissertation enhance scalability, robustness, and accuracy, and can help improve the efficacy of IDSs.
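    The distributed frequency-counting idea behind the Naïve Bayes component can be sketched as follows: each node counts class and value frequencies on its local partition, a coordinator accumulates them into global counts, and any node can then classify new records from the global counts. The toy records, feature names, and Laplace smoothing constant are illustrative assumptions, not the dissertation's actual data or parameters.

```python
# Sketch: distributed frequency counting for a categorical Naive Bayes classifier.
# The partitioning, feature values, and smoothing constant are illustrative assumptions.
from collections import Counter, defaultdict
import math

# Each "node" holds a local partition of labelled records: (feature_dict, class_label).
node_partitions = [
    [({"proto": "tcp", "flag": "SF"}, "normal"),
     ({"proto": "udp", "flag": "SF"}, "normal"),
     ({"proto": "tcp", "flag": "S0"}, "attack")],
    [({"proto": "tcp", "flag": "S0"}, "attack"),
     ({"proto": "icmp", "flag": "SF"}, "attack"),
     ({"proto": "tcp", "flag": "SF"}, "normal")],
]

# Step 1: each node counts class and (class, feature, value) frequencies locally.
def local_counts(partition):
    class_counts, value_counts = Counter(), Counter()
    for features, label in partition:
        class_counts[label] += 1
        for feat, val in features.items():
            value_counts[(label, feat, val)] += 1
    return class_counts, value_counts

# Step 2: the coordinator accumulates local counts into global frequencies.
global_class, global_value = Counter(), Counter()
feature_values = defaultdict(set)           # distinct values seen per feature
for part in node_partitions:
    c, v = local_counts(part)
    global_class.update(c)
    global_value.update(v)
    for features, _ in part:
        for feat, val in features.items():
            feature_values[feat].add(val)

# Step 3: any node can now classify new records using the global counts.
def classify(features, alpha=1.0):
    total = sum(global_class.values())
    best_label, best_score = None, -math.inf
    for label, count in global_class.items():
        score = math.log(count / total)                       # class prior
        for feat, val in features.items():
            num = global_value[(label, feat, val)] + alpha    # Laplace-smoothed count
            den = count + alpha * len(feature_values[feat])
            score += math.log(num / den)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify({"proto": "tcp", "flag": "S0"}))   # expected: attack
```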