1,003 research outputs found

    A framework for modelling mobile radio access networks for intelligent fault management


    Spectrum Sensing and Security Challenges and Solutions: Contemporary Affirmation of the Recent Literature

    Cognitive radio (CR) has recently been proposed as a promising technology to improve spectrum utilization by enabling secondary access to unused licensed bands. A prerequisite for this secondary access is causing no interference to the primary system, which makes spectrum sensing a key function in cognitive radio systems. Among common spectrum sensing techniques, energy detection is an attractive method due to its simplicity and efficiency. Its major disadvantage, however, is the hidden node problem, in which the sensing node cannot distinguish between an idle band and a deeply faded or shadowed one. Cooperative spectrum sensing (CSS), which uses a distributed detection model, has been considered to overcome that problem. On the other hand, the distributed nature of cooperative spectrum sensing makes it vulnerable to sensing data falsification attacks. Since the goal of such an attack is to cause an incorrect decision on the presence or absence of a primary user (PU) signal, malicious or compromised secondary users (SUs) may intentionally distort the measured received signal strengths (RSSs) and share them with other SUs, so that the effect of the erroneous sensing results propagates through the entire cognitive radio network (CRN). These attacks are easy to launch, since the openness of programmable software defined radio (SDR) devices gives malicious or compromised SUs easy access to low-layer protocol stacks such as PHY and MAC. Detecting them, however, is challenging due to the lack of coordination between PUs and SUs and the unpredictability of wireless signal propagation, calling for efficient mechanisms to protect CRNs. In this paper we present a contemporary affirmation of the recent literature on benchmarking strategies that enable trusted and secure cooperative spectrum sensing among cognitive radios.
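
    To make the energy-detection idea concrete, below is a minimal sketch of a threshold-based energy detector. The sample data, the assumption of a known noise variance, and the threshold factor are illustrative choices, not details from the surveyed literature.

```python
import numpy as np

def energy_detect(samples: np.ndarray, noise_var: float,
                  threshold_factor: float = 1.5) -> bool:
    """Declare the band busy when the average signal energy exceeds
    a multiple of the estimated noise variance.

    samples          -- complex baseband samples from the sensed band
    noise_var        -- estimated noise variance (assumed known here)
    threshold_factor -- illustrative margin above the noise floor
    """
    test_statistic = np.mean(np.abs(samples) ** 2)  # average energy per sample
    return test_statistic > threshold_factor * noise_var

# Illustrative use: pure noise vs. noise plus a weak primary-user tone.
rng = np.random.default_rng(0)
noise = (rng.normal(size=1024) + 1j * rng.normal(size=1024)) / np.sqrt(2)
tone = 0.8 * np.exp(2j * np.pi * 0.1 * np.arange(1024))

print(energy_detect(noise, noise_var=1.0))         # expected: False (band idle)
print(energy_detect(noise + tone, noise_var=1.0))  # expected: True  (band busy)
```

    The hidden node problem discussed above is visible in this setup: if the tone is deeply faded, its added energy drops below the threshold margin and the detector reports an idle band, which is what motivates the cooperative approach.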

    A case study: Failure prediction in a real LTE network

    Mobile traffic and the number of connected devices have been increasing exponentially, and customer expectations of mobile operators in terms of quality and reliability are ever higher. This places pressure on operators to invest in, as well as to operate, their growing infrastructures, so telecom network management becomes an essential problem. To reduce cost and maintain network performance, operators need to bring more automation and intelligence into their management systems. Self-Organizing Networks (SON) is an automation technology aiming to maximize performance in mobile networks by bringing autonomous adaptability and reducing human intervention in network management and operations. The three main areas of SON are self-configuration (automatic configuration when new elements enter the network), self-optimization (optimization of network parameters during operation) and self-healing (maintenance). The main purpose of the thesis is to illustrate how anomaly detection methods can be applied to SON functions, in particular self-healing functions such as fault detection and cell outage management. The thesis is illustrated by a case study in which the anomalies, in this case failure alarms, are predicted in advance using performance measurement (PM) data collected from a real LTE network within a certain timeframe. Failure prediction, or anomaly detection, can help reduce cost and maintenance time at mobile network base stations. The author aims to answer two research questions: which anomaly detection models can detect the anomalies in advance, and which types of anomalies are well detected by those models. Using cross-validation, the thesis shows that the random forest method is the best-performing model among those chosen, with F1-scores of 0.58, 0.96 and 0.52 for the anomalies Failure in Optical Interface, Temperature alarm, and VSWR minor alarm respectively; these are also the anomalies that the model detects well.
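
    As a rough illustration of the evaluation style described above (not the thesis's actual pipeline or data), the following sketch scores a random forest with cross-validated F1 on a synthetic stand-in for PM data; the feature matrix and alarm label are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder PM-data feature matrix: rows are (cell, time-window) samples,
# columns are performance counters; y marks whether an alarm fired later.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                               random_state=0)

# F1 under cross-validation, mirroring the evaluation metric used above.
scores = cross_val_score(model, X, y, cv=5, scoring="f1")
print(f"mean F1 across folds: {scores.mean():.2f}")
```

    F1 is an apt choice here because alarm events are rare relative to normal operation, so plain accuracy would reward a model that never predicts a failure.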

    Performance Evaluation of Network Anomaly Detection Systems

    Nowadays there is a huge and growing concern about security in information and communication technology (ICT) among the scientific community, because any attack or anomaly in the network can greatly affect many domains, such as national security, private data storage, social welfare and economic issues. The anomaly detection domain is therefore a broad research area, and many different techniques and approaches for this purpose have emerged over the years. Attacks, problems and internal failures, when not detected early, may badly harm an entire network system. This thesis presents an autonomous profile-based anomaly detection system based on the statistical method Principal Component Analysis (PCADS-AD). The approach creates a network profile called Digital Signature of Network Segment using Flow Analysis (DSNSF) that denotes the predicted normal behavior of network traffic activity, obtained through historical data analysis. That digital signature is used as a threshold for volume anomaly detection, to detect disparities from the normal traffic trend. The proposed system uses seven traffic flow attributes: bits, packets and number of flows to detect problems, and source and destination IP addresses and ports to provide the network administrator with the information necessary to solve them. Through evaluation techniques, the addition of a different anomaly detection approach, and comparisons with other methods using real network traffic data, the results show good traffic prediction by the DSNSF and encouraging false alarm generation and detection accuracy. These results seek to advance the state of the art in anomaly detection methods and strategies, addressing challenges that emerge from the constant growth in complexity, speed and size of today's large-scale networks; the low complexity and agility of the proposed system also make it applicable to real-time detection.
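
    The following is a minimal sketch of the general profile-plus-threshold idea behind PCA-based volume anomaly detection, not the PCADS-AD implementation itself: fit PCA on historical per-interval traffic vectors, treat the reconstruction error as the deviation from the learned signature, and flag intervals whose residual exceeds a threshold. The synthetic data and the mean-plus-three-sigma threshold rule are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic "historical" traffic: one row per time interval, columns are the
# flow attributes bits, packets and flow count, which normally move together.
rng = np.random.default_rng(1)
flows = rng.normal(50, 5, size=288)
packets = flows * 20 + rng.normal(0, 10, size=288)
bits = packets * 1000 + rng.normal(0, 1e4, size=288)
history = np.column_stack([bits, packets, flows])

pca = PCA(n_components=1).fit(history)  # normal traffic is nearly 1-D here

def residual(x: np.ndarray) -> float:
    """Distance between an interval and its PCA reconstruction."""
    recon = pca.inverse_transform(pca.transform(x.reshape(1, -1)))[0]
    return float(np.linalg.norm(x - recon))

# Assumed threshold rule: mean + 3 std of historical residuals.
res = np.array([residual(row) for row in history])
threshold = res.mean() + 3 * res.std()

suspect = np.array([1e6, 1000.0, 500.0])  # flow-count spike, volume unchanged
print("anomaly" if residual(suspect) > threshold else "normal")
```

    The suspect interval breaks the usual bits/packets/flows correlation (many flows with no matching volume, a pattern typical of scans), so its reconstruction error stands far above the historical residuals even though each attribute alone might look plausible.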

    Empirical Evaluation Of Parallelizing Correlation Algorithms For Sequential Telecommunication Devices Data

    Context: Connected devices within the IoT are a source of big data. The data measured from devices consists of a large number of features, from hundreds to thousands, and analyzing these features is both data- and compute-intensive. Distributed and parallel processing frameworks such as Apache Spark provide in-memory processing technologies for designing feature analytics workflows. However, algorithms for discovering patterns and trends in time series are not necessarily ready to cope with issues such as data partitioning and data shuffling that arise from distribution and parallelism. Aim: This thesis explores the relation between algorithm characteristics and parallelism, as well as the effects on clustering results and system performance. Method: System-level techniques were developed to address in particular the data partitioning, load-balancing and data-shuffling issues, and these techniques were applied to adapt clustering algorithms to distributed parallel computing frameworks. In the evaluation, two workflows were built, each consisting of a clustering algorithm and a corresponding metric for measuring the distance between any two time series. Result: These system-level techniques improve the overall performance and execution of the workflows. Conclusion: The distributed and parallel workflows address both algorithmic and parallelism factors to improve the accuracy and performance of processing big time series data from connected devices.
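
    As one illustration of the system-level concerns mentioned above (partitioning, load balancing and avoiding unnecessary shuffling), here is a minimal PySpark sketch of a centroid-assignment step over device time series; the device IDs, Euclidean metric, partition count and fixed centroids are illustrative assumptions, not the thesis's workflows.

```python
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ts-clustering-sketch").getOrCreate()
sc = spark.sparkContext

# Synthetic device time series: (device_id, measurements). In the thesis the
# data would come from connected devices; these values are placeholders.
rng = np.random.default_rng(0)
series = [(f"dev-{i}", rng.normal(size=96).tolist()) for i in range(1000)]

# Explicitly choose the number of partitions so the distance computation is
# spread evenly across workers -- one of the load-balancing concerns above.
rdd = sc.parallelize(series, numSlices=8)

# Broadcast fixed centroids (an assumed assignment step of a k-means-style
# workflow) instead of shipping them inside every task closure separately.
centroids = sc.broadcast([rng.normal(size=96) for _ in range(4)])

def nearest_centroid(record):
    """Assign one series to its nearest centroid by Euclidean distance."""
    device_id, values = record
    x = np.asarray(values)
    dists = [float(np.linalg.norm(x - c)) for c in centroids.value]
    return device_id, int(np.argmin(dists))

# A pure map: no shuffle is triggered, since each assignment is independent.
assignments = rdd.map(nearest_centroid)
print(assignments.take(5))
spark.stop()
```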

    Machine Learning applied to fault correlation

    Master's dissertation in Informatics Engineering. Over the last years, one of the areas that has most evolved and extended its application to a multitude of possibilities is Artificial Intelligence (AI). With the increasing complexity of the problems to be solved, human resolution becomes impossible, as the amount of information and the number of patterns a person can detect are limited, while AI thrives on the dimension of the problem under analysis. Furthermore, as more and more traditional devices are computerized, an increasing number of elements are producing data with many potential applications. Consequently, we find ourselves at the height of Big Data, where huge volumes of data are generated and it is entirely unfeasible to process and analyze them manually. Additionally, with the increasing complexity of network topologies, it is necessary to ensure the correct functioning of all equipment, avoiding cascading failures among devices, which can have catastrophic consequences depending on their use. Thus, Root Cause Analysis (RCA) tools become fundamental, since they are developed to discover automatically, through rules established by their users, the underlying causes when some equipment malfunctions. However, with growing network complexity, the definition of rules becomes exponentially more complicated as the possible points of failure scale drastically. In this context, framed by the Altice Labs RCA and network environment use case, the main objective of this research project is defined: to use Machine Learning (ML) techniques to extrapolate the relationships between different types of equipment alarms, gathered by the Alarm Manager tool, in order to better understand the impact of a failure on the entire system, thus easing and supporting the manual implementation of RCA rules. As this tool manages millions of alarms daily, processing them manually is unfeasible, making the application of ML essential. Furthermore, ML algorithms have a tremendous capacity to detect patterns that humans cannot, ideally exposing which specific failure causes a series of malfunctions and thus allowing system administrators to focus their attention on the source problem instead of its multiple consequences. The ML approach proposed in this project is based on the causality among alarms, rather than their features, and uses the cartesian product of a specific problem, the involved technology, and the manufacturer to extrapolate the correlations among faults. The results achieved reveal the tremendous potential of this approach and open the road to automating the definition of RCA rules, which represents a new vision of how to manage network failures efficiently.
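
    To illustrate the flavor of alarm correlation described above, here is a minimal sketch that counts alarm-type pairs co-occurring within a short time window, keyed by the (problem type, technology, manufacturer) combination the abstract mentions. The record schema, field names and window length are assumptions, not the Alarm Manager's actual data model, and simple co-occurrence counting is only a crude stand-in for the learned causality.

```python
from collections import Counter
from datetime import datetime, timedelta
from itertools import combinations

# Illustrative alarm records; the real Alarm Manager schema is not public,
# so the field names here are assumptions.
alarms = [
    {"ts": datetime(2023, 5, 1, 10, 0), "type": "LINK_DOWN", "tech": "GPON", "vendor": "A"},
    {"ts": datetime(2023, 5, 1, 10, 1), "type": "LOS",       "tech": "GPON", "vendor": "A"},
    {"ts": datetime(2023, 5, 1, 10, 2), "type": "HIGH_TEMP", "tech": "GPON", "vendor": "A"},
    {"ts": datetime(2023, 5, 1, 12, 0), "type": "LINK_DOWN", "tech": "GPON", "vendor": "A"},
    {"ts": datetime(2023, 5, 1, 12, 1), "type": "LOS",       "tech": "GPON", "vendor": "A"},
]

WINDOW = timedelta(minutes=5)  # assumed correlation window

# Count co-occurring alarm pairs within the window, keyed by the
# (problem type, technology, manufacturer) combination.
pair_counts = Counter()
for a, b in combinations(sorted(alarms, key=lambda r: r["ts"]), 2):
    if b["ts"] - a["ts"] <= WINDOW:
        key_a = (a["type"], a["tech"], a["vendor"])
        key_b = (b["type"], b["tech"], b["vendor"])
        pair_counts[(key_a, key_b)] += 1

# Pairs seen most often are candidate RCA rules for a human to review.
for pair, count in pair_counts.most_common(3):
    print(count, pair)
```

    On this toy input the (LINK_DOWN, LOS) pair surfaces twice while the cross-window pairs are excluded, which is the kind of recurring co-occurrence a human operator could then promote into an RCA rule.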