SENATUS: An Approach to Joint Traffic Anomaly Detection and Root Cause Analysis
In this paper, we propose a novel approach, called SENATUS, for joint traffic anomaly detection and root-cause analysis. Inspired by the concept of a senate, the key idea of the proposed approach is divided into three stages: election, voting and decision. At the election stage, a small number of traffic flow sets (termed senator flows) are chosen to approximately represent the total (usually huge) set of traffic flows. In the voting stage, anomaly detection is applied to the senator flows and the detected anomalies are correlated to identify the most likely anomalous time bins. Finally, in the decision stage, a machine learning technique is applied to the senator flows of each anomalous time bin to find the root cause of the anomalies. We evaluate SENATUS using traffic traces collected from the pan-European network GEANT, and compare against another approach which detects anomalies using lossless compression of traffic histograms. We show the effectiveness of SENATUS in diagnosing anomaly types: network scans and DoS/DDoS attacks
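The election/voting pipeline described above can be sketched in a few lines. This is only an illustrative toy, not the paper's algorithm: the selection criterion (total volume), the z-score detector, and all function names are assumptions for the sake of the example.

```python
import numpy as np

def senatus_sketch(flows, n_senators=5, z_thresh=3.0, min_votes=2):
    """Toy illustration of the election and voting stages.

    flows: (n_flows, n_bins) matrix of per-flow traffic volumes.
    Election: pick the n_senators highest-volume flows as "senators"
              (a crude stand-in for the paper's representative flow sets).
    Voting:   each senator flags bins deviating more than z_thresh
              standard deviations from its mean; bins collecting at
              least min_votes votes are declared anomalous.
    The decision stage (root-cause classification) would then apply a
    learned classifier to the senator flows of each flagged bin.
    """
    # Election: choose representative flows by total volume.
    totals = flows.sum(axis=1)
    senators = flows[np.argsort(totals)[-n_senators:]]

    # Voting: per-senator z-score anomaly detection, then correlation
    # across senators by simple vote counting.
    mu = senators.mean(axis=1, keepdims=True)
    sd = senators.std(axis=1, keepdims=True) + 1e-9
    votes = (np.abs(senators - mu) / sd > z_thresh).sum(axis=0)
    return np.where(votes >= min_votes)[0]  # anomalous time bins
```

For example, injecting a large synchronized spike into one time bin of an otherwise Poisson-like traffic matrix makes that bin collect votes from every senator.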
New Methods for Network Traffic Anomaly Detection
In this thesis we examine the efficacy of applying outlier detection techniques to understand the behaviour of anomalies in communication network traffic. We have identified several shortcomings. Our key finding is that known techniques focus on characterizing either the spatial or the temporal behaviour of traffic, but rarely both. For example, DoS attacks are anomalies which violate temporal patterns, while port scans violate the spatial equilibrium of network traffic. To address this weakness we have designed a new method for outlier detection based on spectral decomposition of the Hankel matrix. The Hankel matrix is a spatio-temporal correlation matrix and has been used in many other domains, including climate data analysis and econometrics. Using our approach we can seamlessly integrate the discovery of both spatial and temporal anomalies. Comparison with other state-of-the-art methods in the networks community confirms that our approach can discover both DoS and port scan attacks. The spectral decomposition of the Hankel matrix is closely tied to the problem of inference in Linear Dynamical Systems (LDS). We introduce a new problem, the Online Selective Anomaly Detection (OSAD) problem, to model the situation where the objective is to report new anomalies in the system and suppress known faults. For example, in the network setting an operator may be interested in triggering an alarm for malicious attacks but not for faults caused by equipment failure. To solve OSAD we combine techniques from machine learning and control theory in a unique fashion: machine learning ideas are used to learn the parameters of an underlying data-generating system, and control theory techniques are used to model the feedback and modify the residual generated by the data-generating state model. Experiments on synthetic and real data sets confirm that the OSAD problem captures a general scenario and tightly integrates machine learning and control theory to solve a practical problem
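The general idea behind Hankel-matrix spectral decomposition can be illustrated briefly: embed a traffic time series into a Hankel (trajectory) matrix, approximate it at low rank via SVD, and score each time bin by its reconstruction residual. This is a generic sketch of the technique, not the thesis's exact method; the window size and rank are illustrative choices.

```python
import numpy as np

def hankel_anomaly_scores(x, window=8, rank=2):
    """Score each time bin of series x by its residual after a
    low-rank SVD approximation of the Hankel (trajectory) matrix.
    Bins that break the dominant temporal pattern score high.
    """
    n = len(x)
    k = n - window + 1
    # Hankel matrix: column j holds the window starting at bin j.
    H = np.column_stack([x[j:j + window] for j in range(k)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    R = np.abs(H - H_low)
    # Average the residual over all Hankel entries covering each bin.
    scores = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        scores[j:j + window] += R[:, j]
        counts[j:j + window] += 1
    return scores / counts
```

A periodic series has a low-rank trajectory matrix (a pure sinusoid's is rank 2), so a spike superimposed on it concentrates in the residual and stands out in the per-bin scores.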
Anomaly Detection Using Robust Principal Component Analysis
In this Major Qualifying Project (MQP), we focus on the development of a visualization-enabled anomaly detection system. We examine the 2011 VAST dataset challenge to efficiently generate meaningful features and apply Robust Principal Component Analysis (RPCA) to detect any data points estimated to be anomalous. This is done through an infrastructure that closes the loop from feature generation to anomaly detection through RPCA. We enable our user to choose subsets of the data through a web application and learn through visualization systems where problems are within their chosen local data slice. In this report, we explore feature engineering techniques along with optimizations of RPCA, which ultimately lead to a generalized approach for detecting anomalies within a defined network architecture
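RPCA-based anomaly detection rests on decomposing a data matrix M into a low-rank part L (normal structure) plus a sparse part S (anomalies). A standard way to compute this is principal component pursuit solved by an augmented-Lagrangian (ADMM) iteration; the sketch below is a textbook version under default parameter choices, not the report's implementation.

```python
import numpy as np

def soft_threshold(X, tau):
    """Entrywise shrinkage toward zero by tau."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, lam=None, mu=None, n_iter=200):
    """Robust PCA via principal component pursuit (ADMM sketch):
    M ~ L + S with L low-rank and S sparse; large entries of S
    mark candidate anomalies.
    """
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))      # standard PCP weight
    if mu is None:
        mu = m * n / (4.0 * np.abs(M).sum())
    Y = np.zeros_like(M)                     # Lagrange multipliers
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # L update: singular value thresholding of M - S + Y/mu.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S update: entrywise soft threshold.
        S = soft_threshold(M - L + Y / mu, lam / mu)
        # Dual update on the constraint M = L + S.
        Y = Y + mu * (M - L - S)
    return L, S
```

On a rank-one matrix corrupted by a single large entry, the sparse component S recovers the location of the corruption, which is exactly how anomalous data points are flagged.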
On the subspace learning for network attack detection
Doctoral thesis—Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2019.
The cost of all types of cyberattacks is increasing for global organizations. The White House of the
U.S. government estimates that malicious cyber activity cost the U.S. economy between US$57 billion and US$109 billion in 2016. Recently, it is possible to observe an increase in the number of
Denial of Service (DoS), botnet, malicious insider and ransomware attacks.
Accenture consulting argues that 89% of survey respondents believe breakthrough technologies,
like artificial intelligence, machine learning and user behavior analytics, are essential for securing
their organizations. To face adversarial models, novel network attacks and the countermeasures
attackers use to avoid detection, it is possible to adopt unsupervised or semi-supervised
approaches for network anomaly detection, by means of behavioral analysis, where known
anomalies are not necessary for training models.
Signal processing schemes have been applied to detect malicious traffic in computer networks
through unsupervised approaches, showing advances in network traffic analysis, in network
attack detection, and in network intrusion detection systems.
Anomalies can be hard to identify and separate from normal data due to the rare occurrence of
anomalies in comparison to normal events. Imbalanced data can compromise the performance of
most standard learning algorithms, creating a bias toward the majority class and reducing the
capacity to detect anomalies, which are characterized by the minority class. Therefore, anomaly
detection algorithms have to be highly discriminating, robust to corruption and able to deal with
the imbalanced data problem.
Some widely adopted algorithms for anomaly detection assume Gaussian-distributed data for
legitimate observations; however, this assumption may not hold for network traffic, which is
usually characterized by skewed and heavy-tailed distributions.
As a first important contribution, we propose Eigensimilarity, an approach based on signal
processing concepts applied to the detection of malicious traffic in computer networks. We
evaluate the accuracy and performance of the proposed framework on a simulated scenario and
on the DARPA 1998 data set. The performed experiments show that synflood, fraggle and port
scan attacks can be detected accurately by Eigensimilarity and with great detail, in an automatic
and blind fashion, i.e. in an unsupervised approach.
Considering that exploiting skewness can improve anomaly detection in imbalanced and skewed
data, such as network traffic, we propose the Moment-based Robust Principal Component
Analysis (m-RPCA) for network attack detection. The m-RPCA is a framework based on distances
between contaminated observations and moments computed from a robust subspace learned by
Robust Principal Component Analysis (RPCA), in order to detect anomalies in skewed data and
network traffic. We evaluate the accuracy of m-RPCA for anomaly detection on simulated data
sets, with skewed and heavy-tailed distributions, and on the CTU-13 data set. The experimental
evaluation compares our proposal to widely adopted algorithms for anomaly detection and shows
that the distance between robust estimates and contaminated observations can improve anomaly
detection on skewed data and network attack detection.
Moreover, we propose an architecture and approach to evaluate a proof of concept of
Eigensimilarity for malicious behavior detection in mobile applications, in order to detect possible
threats in offline corporate mobile clients. We propose scenarios, features and approaches for
threat analysis by means of Eigensimilarity, and evaluate the processing time required for
Eigensimilarity execution on mobile devices
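The core idea shared by subspace methods like those above can be shown compactly: learn a low-dimensional subspace describing legitimate traffic and score each observation by its distance to that subspace. The sketch below uses plain PCA on median-centered data as a stand-in for the robust subspace; it does not reproduce the thesis's moment-based distances, and all names are illustrative.

```python
import numpy as np

def subspace_anomaly_scores(X, rank=1):
    """Score each observation (row of X) by its residual distance to
    a learned low-dimensional subspace. Observations far from the
    subspace of normal behavior are candidate anomalies.
    """
    center = np.median(X, axis=0)            # robust location estimate
    Z = X - center
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    basis = Vt[:rank]                         # principal subspace
    proj = Z @ basis.T @ basis                # projection onto subspace
    return np.linalg.norm(Z - proj, axis=1)   # residual distance
```

With most points lying near a line and one point placed well off it, the off-subspace point receives the largest score, mirroring how contaminated observations are separated from the learned subspace.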
Application of a high-throughput process analytical technology metabolomics pipeline to Port wine forced ageing process
Metabolomics aims at gathering the maximum amount of metabolic information for a total interpretation of biological systems. A process analytical technology pipeline, combining gas chromatography–mass spectrometry data preprocessing with multivariate analysis, was applied to a Port wine “forced ageing” process under different oxygen saturation regimes at 60 °C.
It was found that extreme "forced ageing" conditions promote the occurrence of undesirable chemical reactions through the production of dioxane and dioxolane isomers, furfural and 5-hydroxymethylfurfural, which affect the quality of the final product through the degradation of the wine's aromatic profile, colour and taste. High kinetic correlations were also found between these key metabolites and benzaldehyde, sotolon, and many other metabolites that contribute to the final aromatic profile of the Port wine. The use of kinetic correlations in time-dependent processes such as wine ageing can further contribute to the monitoring of biological or chemical systems, the discovery of new biomarkers and metabolic network investigations.
The author Castro C.C. (SFRH/BD/46737/2008) gratefully acknowledges her doctoral grant from the Fundação para a Ciência e a Tecnologia (FCT). This research was funded by the projects PTDC/BIO/69310/2006, PTDC/AGR-ALI/121062/2010 and FCOMP-01-0124-008775 through FCT, and partially supported by CBMA, IBB and ESB/UCP plurianual funds through the POS-Conhecimento Program, which includes FEDER funds through the program COMPETE (Programa Operacional Factores de Competitividade)
Design and implementation of a compliant robot with force feedback and strategy planning software
Force-feedback robotics techniques are being developed for automated precision assembly and servicing of NASA space flight equipment. Design and implementation of a prototype robot which provides compliance and monitors forces is in progress. Computer software to specify assembly steps and make force-feedback adjustments during assembly has been coded and tested for three generically different precision mating problems. A model program demonstrates that a suitably autonomous robot can plan its own strategy