A cell outage management framework for dense heterogeneous networks
In this paper, we present a novel cell outage management (COM) framework for heterogeneous networks with split control and data planes, a candidate architecture for meeting future capacity, quality-of-service, and energy-efficiency demands. In such an architecture, the control and data functionalities are not necessarily handled by the same node. The control base stations (BSs) manage the transmission of control information and user equipment (UE) mobility, whereas the data BSs handle UE data. An implication of this split architecture is that an outage of a BS in one plane has to be compensated for by other BSs in the same plane. Our COM framework addresses this challenge by incorporating two distinct cell outage detection (COD) algorithms to cope with the idiosyncrasies of the data and control planes. The COD algorithm for control cells leverages the relatively large number of UEs in a control cell to gather large-scale minimization-of-drive-test report data and detects an outage by applying machine learning and anomaly detection techniques. To improve outage detection accuracy, we also investigate and compare the performance of two anomaly-detection algorithms, i.e., k-nearest-neighbor- and local-outlier-factor-based anomaly detectors, within the control COD. For data cell COD, on the other hand, we propose a heuristic Grey-prediction-based approach that can work with the small number of UEs in a data cell, exploiting the fact that the control BS manages UE-data BS connectivity and receives periodic updates of the reference signal received power (RSRP) statistics between the UEs and the data BSs in its coverage. The detection accuracy of the heuristic data COD algorithm is further improved by exploiting the Fourier series of the residual error that is inherent to a Grey prediction model. Our COM framework integrates these two COD algorithms with a cell outage compensation (COC) algorithm that can be applied to both planes.
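The Grey prediction underlying the data-cell COD can be illustrated with the classical GM(1,1) model, which forecasts the next value of a short series from very few samples. This is a minimal sketch, without the Fourier-series residual correction described above; the input series is an invented stand-in for RSRP-style reports, not the paper's data.

```python
import numpy as np

def gm11_forecast(x0):
    """One-step-ahead forecast with a GM(1,1) Grey model.

    x0 : 1-D sequence of recent measurements, length >= 4.
    Returns the predicted next value of the series.
    """
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                     # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])          # mean sequence of x1
    # Least-squares fit of the Grey equation x0(k) = -a*z1(k) + b, k = 2..n
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    # Time-response function of the whitened equation, then inverse AGO
    x1_hat = lambda k: (x0[0] - b / a) * np.exp(-a * k) + b / a
    return x1_hat(n) - x1_hat(n - 1)

# A geometric toy series: GM(1,1) fits near-exponential trends well
series = [10 * 1.2**k for k in range(5)]
print(gm11_forecast(series))   # close to the true next value, 10 * 1.2**5
```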
Our COC solution utilizes an actor-critic-based reinforcement learning algorithm, which optimizes the capacity and coverage of the identified outage zone in a plane by adjusting the antenna gain and transmission power of the surrounding BSs in that plane. The simulation results show that the proposed framework can detect both data and control cell outages and compensate for the detected outages in a reliable manner.
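The actor-critic idea behind the COC step can be sketched on a one-dimensional toy problem: a Gaussian policy (actor) adjusts a single hypothetical power offset of a neighboring BS, and a scalar value estimate (critic) serves as the baseline. The quadratic reward and all parameters are illustrative inventions, not the paper's objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(action):
    # Toy stand-in for the outage-zone coverage/capacity objective:
    # best when the (hypothetical) power offset is +3 dB
    return -(action - 3.0) ** 2

mu, sigma = 0.0, 0.5          # Gaussian policy: a ~ N(mu, sigma^2)
value = 0.0                   # scalar critic / baseline estimate
alpha, beta = 0.005, 0.05     # actor and critic learning rates

for _ in range(4000):
    a = rng.normal(mu, sigma)            # sample a power adjustment
    r = reward(a)
    td_error = r - value                 # one-step advantage estimate
    value += beta * td_error             # critic update
    mu += alpha * td_error * (a - mu) / sigma**2   # policy-gradient step

print(mu)   # learned offset, converging near the optimum of 3.0
```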
Characteristics and Temporal Behavior of Internet Backbone Traffic
With the rapidly increasing demand for data usage, the Internet has become more complex and harder to analyze. Characterizing Internet traffic can reveal information that is important for network operators to formulate policy decisions, develop techniques to detect network anomalies, better provision network resources (capacity, buffers), and use workload characteristics for simulations (typical packet sizes, flow durations, common protocols).
In this paper, using passive monitoring and measurements, we analyze data traffic collected at Internet backbone routers. First, we present the main observations on the patterns and characteristics of this dataset, including packet sizes, inter- and intra-domain traffic volume, and protocol composition. Second, we investigate the independence structure of packet-size arrivals using both visual and computational statistics. Finally, we show the temporal behavior of the most active destination IP addresses and ports.
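The kinds of summaries described, protocol composition, packet-size characteristics, and the most active destinations, reduce to simple aggregations over packet records. A small sketch with invented packet tuples standing in for captured backbone traffic:

```python
from collections import Counter
from statistics import mean, median

# Synthetic packet records: (protocol, size_bytes, dst_ip, dst_port)
packets = [
    ("TCP", 1500, "203.0.113.7",  443),
    ("TCP",   64, "203.0.113.7",  443),
    ("UDP",  512, "198.51.100.2",  53),
    ("TCP", 1500, "203.0.113.7",   80),
    ("UDP", 1200, "198.51.100.9", 443),
    ("ICMP",  84, "192.0.2.1",      0),
]

# Protocol composition by packet count
proto_mix = Counter(p[0] for p in packets)

# Packet-size characteristics
sizes = [p[1] for p in packets]
size_summary = {"mean": mean(sizes), "median": median(sizes),
                "min": min(sizes), "max": max(sizes)}

# Most active destination IPs and ports (temporal behavior would come
# from repeating this ranking per time bin)
top_ips = Counter(p[2] for p in packets).most_common(2)
top_ports = Counter(p[3] for p in packets).most_common(2)

print(proto_mix)    # Counter({'TCP': 3, 'UDP': 2, 'ICMP': 1})
print(top_ports)    # [(443, 3), (53, 1)]
```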
Anomaly detection in SCADA systems: a network based approach
Supervisory Control and Data Acquisition (SCADA) networks are commonly deployed to aid the operation of large industrial facilities, such as water treatment facilities. Historically, these networks were composed of special-purpose embedded devices communicating through proprietary protocols. However, modern deployments commonly make use of commercial off-the-shelf devices and standard communication protocols, such as TCP/IP. Furthermore, these networks are becoming increasingly interconnected, allowing communication with corporate networks and even the Internet. As a result, SCADA networks have become vulnerable to cyber attacks, being exposed to the same threats that plague traditional IT systems.

In our view, measurements play an essential role in validating results in network research; therefore, our first objective is to understand how SCADA networks are utilized in practice. To this end, we provide the first comprehensive analysis of real-world SCADA traffic. We analyze five network packet traces collected at four different critical infrastructures: two water treatment facilities, one gas utility, and one electricity and gas utility. We show, for instance, that existing network traffic models developed for traditional IT networks cannot be directly applied to SCADA network traffic.

We also confirm two SCADA traffic characteristics, the stable connection matrix and the traffic periodicity, and propose two intrusion detection approaches that exploit them. To exploit the stable connection matrix, we investigate the use of whitelists at the flow level. We show that flow whitelists have a manageable size, considering the number of hosts in the network, and that it is possible to overcome the main sources of instability in the whitelists. To exploit the traffic periodicity, we focus our attention on connections used to retrieve data from devices in the field network. We propose PeriodAnalyzer, an approach that uses deep packet inspection to automatically identify the different messages and the frequency at which they are issued. Once such normal behavior is learned, PeriodAnalyzer can be used to detect data injection and denial-of-service attacks.
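The flow-whitelist idea exploits the stable connection matrix directly: learn the set of flow tuples seen during an attack-free training window, then flag any flow outside it. A minimal sketch with invented addresses (the Modbus/TCP port 502 polling pattern is typical of such field networks):

```python
# Attack-free training window: (src_ip, dst_ip, protocol, dst_port)
training_flows = [
    ("10.0.0.5", "10.0.0.10", "TCP", 502),   # master polling an outstation
    ("10.0.0.5", "10.0.0.11", "TCP", 502),
    ("10.0.0.5", "10.0.0.10", "TCP", 502),   # repeats collapse in the set
]

# The stable connection matrix keeps this set small and manageable
whitelist = set(training_flows)

def is_whitelisted(flow):
    """Return True if the observed flow matches the learned whitelist."""
    return flow in whitelist

print(is_whitelisted(("10.0.0.5", "10.0.0.10", "TCP", 502)))    # True
print(is_whitelisted(("192.0.2.99", "10.0.0.10", "TCP", 502)))  # False: unknown source
```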
Detection of DoS Attacks Using ARFIMA Modeling of GOOSE Communication in IEC 61850 Substations
Integration of Information and Communication Technology (ICT) in modern smart grids (SGs) offers many advantages, including the use of renewables and an effective way to protect, control, and monitor energy transmission and distribution. To reach optimal operation of future energy systems, the availability, integrity, and confidentiality of data should be guaranteed. Research on the cyber-physical security of electrical substations based on IEC 61850 is still at an early stage. In the present work, we first model the network traffic data in electrical substations; then, we present a statistical Anomaly Detection (AD) method to detect Denial of Service (DoS) attacks against the Generic Object Oriented Substation Event (GOOSE) network communication. Based on interpretations of the self-similarity and the Long-Range Dependency (LRD) of the data, an Auto-Regressive Fractionally Integrated Moving Average (ARFIMA) model was shown to describe the GOOSE communication in the substation process network well. Based on this ARFIMA model and in view of cyber-physical security, an effective model-based AD method is developed and analyzed. Two variants of the statistical AD, considering statistical hypothesis testing based on the Generalized Likelihood Ratio Test (GLRT) and the cumulative sum (CUSUM), are presented to detect flooding attacks that might affect the availability of the data. Our work presents a novel AD method, with two different variants, tailored to the specific features of GOOSE traffic in IEC 61850 substations. The statistical AD is capable of detecting anomalies at unknown change times under the realistic assumption of unknown model parameters. The performance of both variants of the AD method is validated and assessed using data collected from a simulation case study. We perform several Monte Carlo simulations under different noise variances.
The detection delay, reported for each detector, represents the number of discrete time samples after which an anomaly is detected. Our statistical AD method, in both variants (CUSUM and GLRT), has around half the false positive rate and a smaller detection delay compared with two of the closest works in the literature. Our AD approach based on the GLRT detector has the smallest false positive rate among all considered approaches, whereas our AD approach based on the CUSUM test has the lowest false negative rate and thus the best detection rate. Depending on the requirements as well as the costs of false alarms or missed anomalies, either variant of our statistical detection method can be used; both are further analyzed using composite detection metrics.
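The CUSUM side of such a detector can be sketched on synthetic model residuals: in-control noise followed by a DoS-like mean shift, with the alarm raised once accumulated evidence exceeds a threshold. The drift, threshold, and change point below are illustrative, not the paper's tuned values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic residuals: N(0,1) while in control, then a flooding-attack-like
# mean shift of +2 from sample 200 onward
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(2, 1, 100)])

def cusum(residuals, drift=1.0, threshold=8.0):
    """One-sided CUSUM: return the alarm index, or None if none fires."""
    g = 0.0
    for i, r in enumerate(residuals):
        g = max(0.0, g + r - drift)   # accumulate evidence above the drift
        if g > threshold:
            return i
    return None

alarm = cusum(x)
print(alarm)   # shortly after the true change point at sample 200
```

The detection delay reported above corresponds to `alarm - 200` here: the number of samples between the change point and the alarm.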
Detection of side-channel attacks at the physical layer
Today, with the advent of the IoT and the resulting fragmentation of wireless technologies, these technologies bring not only benefits but also concerns. Every day, individuals communicate with each other using various communication methods and a variety of devices for innocuous day-to-day activities; however, there are some malicious individuals (dishonest agents) whose aim is to cause harm, with the exfiltration of information being one of the biggest concerns. Since the security of Wi-Fi communications is one of the areas of greatest investment and research in Internet security, dishonest agents make use of side channels, namely Bluetooth, to exfiltrate information. Most current solutions for network anomaly detection are based on analyzing frames or packets, which can inadvertently reveal user behavior patterns that users consider private. In addition, solutions that inspect physical-layer data typically use the received signal strength (RSSI) as a distance metric and detect anomalies based on the relative position of the network nodes, or feed spectrum values directly into classification models without prior data processing.
This dissertation proposes mechanisms to detect anomalies while ensuring the privacy of the network's nodes. They are based on the analysis of radio activity at the physical layer, measuring the behavior of the network through the number of active and inactive frequencies and the duration of periods of silence and activity. After extracting properties that characterize these metrics, an exploration and study of the data is carried out, and the result is then used to train One-Class Classification models.
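As a stand-in for the one-class models described above, a minimal sketch: assuming two illustrative features per observation window (active-frequency count and mean silence-run duration, with invented values), a simple z-score envelope learned from normal traffic flags windows that deviate from the profile. The thesis trains proper one-class classifiers; this envelope only illustrates the feature-based, payload-free detection idea.

```python
import numpy as np

# One feature vector per observation window:
# [active frequency count, mean silence-run duration in slots]
train = np.array([
    [3, 12.0], [4, 11.5], [3, 12.5], [4, 12.2],
    [3, 11.8], [4, 12.1], [3, 12.4], [4, 11.9],
])  # normal Wi-Fi-only activity (illustrative values)

mu = train.mean(axis=0)
sd = train.std(axis=0) + 1e-9   # avoid division by zero

def is_anomalous(x, k=3.0):
    """Flag windows whose features deviate more than k standard
    deviations from the normal-traffic profile."""
    z = np.abs((np.asarray(x, dtype=float) - mu) / sd)
    return bool((z > k).any())

print(is_anomalous([4, 12.0]))   # False: consistent with normal activity
print(is_anomalous([9, 3.0]))    # True: extra active band, silence collapses
```

A Bluetooth exfiltration channel would show up exactly as the second case: additional active frequencies and much shorter silence periods, with no frame or packet contents ever inspected.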
The models are trained with data taken from a series of interactions between a computer, an AP, and a mobile phone in a low-noise environment, in an attempt to simulate a simplified home-automation scenario. The models were then tested with similar data containing a compromised node, which periodically sent a file to a local machine over a Bluetooth connection. The data show that, in both situations, it was possible to achieve detection accuracy rates on the order of 75% and 99%.
This work ends with some ideas for future work, namely changes at the pre-processing level, ideas for new tests, and ways to reduce the percentage of false negatives.
Mestrado em Engenharia de Computadores e Telemática