Latin and Romance in the Reichenau Glosses
In today’s complex networks, timely identification and resolution of performance problems is extremely challenging. Current diagnostic practices for identifying the root causes of such problems rely primarily on human intervention and investigation. Fully automated, scalable systems capable of identifying complex problems are needed to provide rapid and accurate diagnosis. The study presented in this thesis creates the necessary scientific basis for the automatic diagnosis of network performance faults using novel intelligent inference techniques based on machine learning. We propose three new techniques for characterising network soft failures and, using them, create the Intelligent Automated Network Diagnostic (IAND) system.
First, we propose Transmission Control Protocol (TCP) trace characterisation techniques that use aggregated TCP statistics. Faulty network components embed unique artefacts in TCP packet streams by altering the normal protocol behaviour. Our technique captures such artefacts and generates a set of unique fault signatures. We first introduce Normalised Statistical Signatures (NSSs) with 460 features, a novel representation of network soft failures that provides the basis for diagnosis. Since not all 460 features contribute equally to the identification of a particular fault, we then introduce improved forms of NSSs, called EigenNSS and FisherNSS, with reduced complexity and greater class separability. Evaluations show that these signatures achieve dimensionality reduction of over 95% and detection accuracies of up to 95%, with microsecond diagnosis times.
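The abstract does not detail the EigenNSS computation; as an illustrative sketch only, the eigen-decomposition idea behind reducing 460-feature signatures can be expressed as a standard PCA projection (the data, feature count and `k` below are toy values, not from the thesis):

```python
import numpy as np

def eigen_signatures(nss_matrix, k=20):
    """Project high-dimensional NSS vectors onto their top-k principal
    components (the idea behind EigenNSS); nss_matrix is (n_samples, n_features)."""
    mean = nss_matrix.mean(axis=0)
    centred = nss_matrix - mean
    # Eigen-decomposition of the feature covariance matrix.
    cov = np.cov(centred, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns eigenvalues in ascending order; keep the k
    # largest-variance directions.
    top = eigvecs[:, ::-1][:, :k]
    return centred @ top, top

# Toy example: 100 signatures with 460 features reduced to 20 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 460))
reduced, basis = eigen_signatures(X, k=20)
```

The FisherNSS variant would instead optimise a Fisher (LDA-style) criterion, which also takes class separability into account rather than variance alone.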
Second, given that NSSs have features that depend on link properties, we introduce a technique called Link Adaptive Signature Estimation (LASE) that uses regression-based predictors to artificially generate NSSs for a large number of link parameter combinations. Using LASE, the system can be trained to suit the exact networking environment, however dynamic, with a minimal set of sample data. For extensive performance evaluation, we collected 1.2 million sample traces for 17 types of device failures on 8 TCP variants over various types of networks, using a combination of fault injection and link emulation techniques.
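As a hedged illustration of the regression idea behind LASE (the linear model, the parameter names `rtt_ms`/`loss_pct` and the toy data are assumptions, not the thesis's actual predictors):

```python
import numpy as np

def fit_feature_predictors(link_params, signatures):
    """Least-squares linear predictors mapping link parameters (e.g. RTT,
    loss rate) to each signature feature -- a simplified stand-in for the
    regression-based estimators LASE describes.
    link_params: (n_samples, n_params); signatures: (n_samples, n_features)."""
    # Add a bias column and solve one least-squares problem per feature.
    X = np.hstack([link_params, np.ones((link_params.shape[0], 1))])
    coeffs, *_ = np.linalg.lstsq(X, signatures, rcond=None)
    return coeffs

def estimate_signature(coeffs, new_params):
    """Synthesize a signature for an unseen link-parameter combination."""
    x = np.append(new_params, 1.0)
    return x @ coeffs

# Toy data: 8-feature signatures that depend linearly on (rtt_ms, loss_pct).
rng = np.random.default_rng(1)
params = rng.uniform([10, 0.0], [200, 5.0], size=(50, 2))
true_w = rng.normal(size=(2, 8))
sigs = params @ true_w + 3.0
coeffs = fit_feature_predictors(params, sigs)
est = estimate_signature(coeffs, np.array([120.0, 2.5]))
```

With predictors like these, a small measured sample can be expanded into training signatures for many link-parameter combinations that were never measured directly.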
Third, to automate fault identification, we propose a modular inference technique that learns from the patterns embedded in the signatures, and create Fault Classifier Modules (FCMs). FCMs use support vector machines to uniquely identify individual faults and are designed with soft class boundaries to provide generalised fault detection capability. The use of a modular design and a generic algorithm that can be trained and tuned for specific faults offers scalability and is a key differentiator from existing systems, which use a specific algorithm to detect each fault. Experimental evaluations show that FCMs achieve detection accuracies of between 90% and 98%.
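A minimal sketch of the modular FCM design under stated assumptions: one module per fault, trained one-vs-rest, with diagnosis picking the highest-scoring module. The thesis uses support vector machines; a simple perceptron stands in here to keep the example dependency-free, and the fault names are hypothetical:

```python
import numpy as np

class FaultClassifierModule:
    """One module per fault type, trained one-vs-rest, mirroring the modular
    FCM idea (a perceptron stands in for the SVM used in the thesis)."""
    def __init__(self, fault_name, n_features, lr=0.1, epochs=50):
        self.fault_name = fault_name
        self.w = np.zeros(n_features + 1)  # weights + bias term
        self.lr, self.epochs = lr, epochs

    def fit(self, X, y):  # y in {0, 1}: 1 = this module's fault
        Xb = np.hstack([X, np.ones((len(X), 1))])
        t = np.where(y == 1, 1.0, -1.0)
        for _ in range(self.epochs):
            for xi, ti in zip(Xb, t):
                if ti * (xi @ self.w) <= 0:  # misclassified -> update
                    self.w += self.lr * ti * xi
        return self

    def score(self, x):
        return np.append(x, 1.0) @ self.w

def diagnose(modules, signature):
    """Diagnosis picks the module with the highest score for a signature."""
    return max(modules, key=lambda m: m.score(signature)).fault_name

# Toy, linearly separable signatures for two hypothetical fault classes.
X = np.array([[2.0, 0.0], [2.1, 0.2], [1.9, -0.1],
              [0.0, 2.0], [0.2, 2.1], [-0.1, 1.9]])
modules = [
    FaultClassifierModule("link-fault", 2).fit(X, np.array([1, 1, 1, 0, 0, 0])),
    FaultClassifierModule("device-fault", 2).fit(X, np.array([0, 0, 0, 1, 1, 1])),
]
print(diagnose(modules, np.array([2.0, 0.1])))  # → link-fault
```

Adding a new fault type only requires training one more module, which is the scalability property the modular design is claimed to provide.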
The signatures and classifiers are used as the building blocks of the IAND system and its two main sub-systems: IAND-k and IAND-h. IAND-k is a modular diagnostic system for automatic detection of previously known problems using FCMs. The IAND-k system is applied to accurately detect faulty links and diagnose problems in end-user devices across a wide range of network types (IAND-kUD, IAND-kCC). Extensive evaluation of the systems demonstrated high overall detection accuracies of up to 96.6% with low false positives, and over 90% accuracy even in the most difficult scenarios. Here, the FCMs use supervised machine learning methods and can only detect previously known problems. To extend the diagnostic capability to previously unknown problems, we propose IAND-h, a hybrid classifier system that combines unsupervised machine learning-based clustering with supervised machine learning-based classification. Evaluation of the system shows that previously unknown faults can be detected with over 92% accuracy. The IAND-h system also offers real-time detection, with diagnosis times between 4 μs and 66 μs. The techniques and systems proposed during this research contribute to the state of the art of network diagnostics, focusing on scalability, automation and modularity, with evaluation results demonstrating a high degree of accuracy.
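The IAND-h combination of clustering and classification might be sketched as follows; the centroid-plus-threshold logic, the fault names and the threshold value are illustrative assumptions, not the system's actual algorithm:

```python
import numpy as np

def hybrid_diagnose(signature, known_centroids, threshold):
    """Hybrid scheme in the spirit of IAND-h: a signature close to a known
    fault's cluster centroid is labelled with that fault; one far from every
    centroid is flagged as a previously unknown problem.  The centroids would
    come from clustering training signatures; the threshold is assumed."""
    dists = {name: np.linalg.norm(signature - c)
             for name, c in known_centroids.items()}
    name, d = min(dists.items(), key=lambda kv: kv[1])
    return name if d <= threshold else "unknown-fault"

# Hypothetical 2-D centroids for two known fault classes.
centroids = {"duplex-mismatch": np.array([1.0, 0.0]),
             "faulty-nic": np.array([0.0, 1.0])}
print(hybrid_diagnose(np.array([0.9, 0.1]), centroids, threshold=0.5))
# → duplex-mismatch
print(hybrid_diagnose(np.array([5.0, 5.0]), centroids, threshold=0.5))
# → unknown-fault
```

The distance computation is a handful of arithmetic operations per known fault, which is consistent with the microsecond-scale diagnosis times the abstract reports.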
A Survey on Data Plane Programming with P4: Fundamentals, Advances, and Applied Research
With traditional networking, users can configure control plane protocols to
match the specific network configuration, but without the ability to
fundamentally change the underlying algorithms. With SDN, users may provide
their own control plane, which can control network devices through their data
plane APIs. Programmable data planes allow users to define their own data plane
algorithms for network devices including appropriate data plane APIs which may
be leveraged by user-defined SDN control. Thus, programmable data planes and
SDN offer great flexibility for network customization, be it for specialized,
commercial appliances, e.g., in 5G or data center networks, or for rapid
prototyping in industrial and academic research. Programming
protocol-independent packet processors (P4) has emerged as the currently most
widespread abstraction, programming language, and concept for data plane
programming. It is developed and standardized by an open community and it is
supported by various software and hardware platforms. In this paper, we survey
the literature from 2015 to 2020 on data plane programming with P4. Our survey
covers 497 references of which 367 are scientific publications. We organize our
work into two parts. In the first part, we give an overview of data plane
programming models, the programming language, architectures, compilers,
targets, and data plane APIs. We also consider research efforts to advance P4
technology. In the second part, we analyze a large body of literature
considering P4-based applied research. We categorize 241 research papers into
different application domains, summarize their contributions, and extract
prototypes, target platforms, and source code availability.

Comment: Submitted to IEEE Communications Surveys and Tutorials (COMS) on 2021-01-2
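P4 itself is a dedicated language; as a language-agnostic illustration of the match-action abstraction it is built around, consider this Python sketch (the table contents, action names and exact-match lookup are assumptions; real P4 tables support longest-prefix match and execute on the target's pipeline):

```python
# A data plane program defines how packets are parsed and which
# match-action tables they traverse.  This sketch models a table as a
# dict from match key to action, with a default (miss) action.
def drop(pkt):
    pkt["dropped"] = True

def forward(port):
    def action(pkt):
        pkt["egress_port"] = port
    return action

ipv4_lpm = {  # exact-match stand-in for P4's longest-prefix-match table
    "10.0.0.1": forward(1),
    "10.0.0.2": forward(2),
}

def apply(pkt):
    # Table lookup: run the matched action, or the default action on a miss.
    ipv4_lpm.get(pkt["dst_addr"], drop)(pkt)

pkt = {"dst_addr": "10.0.0.2"}
apply(pkt)
print(pkt)  # egress_port set by the matched action
```

In SDN deployments, a controller populates such tables at run time through the data plane API, while the P4 program fixes the parsing logic and the table structure itself.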
Performance Evaluation of Network Anomaly Detection Systems
Nowadays, there is a huge and growing concern about security in information and communication
technology (ICT) among the scientific community because any attack or anomaly in
the network can greatly affect many domains such as national security, private data storage,
social welfare, economic issues, and so on. Therefore, the anomaly detection domain is a broad
research area, and many different techniques and approaches for this purpose have emerged
through the years.
Attacks, problems, and internal failures, when not detected early, may severely harm an
entire network system. Thus, this thesis presents an autonomous profile-based anomaly detection
system based on the statistical method Principal Component Analysis (PCADS-AD). This
approach creates a network profile called Digital Signature of Network Segment using Flow Analysis
(DSNSF) that denotes the predicted normal behavior of a network traffic activity through
historical data analysis. That digital signature is used as a threshold for volume anomaly detection
to detect disparities in the normal traffic trend. The proposed system uses seven traffic flow
attributes: bits, packets, and number of flows to detect problems, plus source and destination IP
addresses and ports to provide the network administrator with the information needed to solve them.
Through evaluation metrics, the addition of a different anomaly detection approach, and
comparisons with other methods performed in this thesis using real network traffic data, the
results showed good traffic prediction by the DSNSF and encouraging false-alarm generation and
detection accuracy for the detection scheme.
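The DSNSF-as-threshold test described above can be sketched as follows; the deviation metric and the tolerance value are illustrative assumptions, not the thesis's actual PCA-based formulation:

```python
def detect_volume_anomalies(observed, dsnsf, tolerance=0.3):
    """Flag intervals where observed traffic deviates from the DSNSF
    (the predicted normal profile) by more than `tolerance` as a fraction
    of the predicted value; the tolerance here is illustrative."""
    anomalies = []
    for i, (obs, norm) in enumerate(zip(observed, dsnsf)):
        if norm > 0 and abs(obs - norm) / norm > tolerance:
            anomalies.append(i)
    return anomalies

# Hourly bit counts: interval 2 spikes well above the predicted profile.
dsnsf    = [100, 120, 110, 130]
observed = [105, 118, 300, 128]
print(detect_volume_anomalies(observed, dsnsf))  # [2]
```

The flagged interval indices would then be cross-referenced with the IP address and port attributes to tell the administrator where the anomalous volume originated.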
The observed results seek to contribute to the advance of the state of the art in methods
and strategies for anomaly detection, aiming to surpass some of the challenges that emerge from
the constant growth in complexity, speed and size of today’s large-scale networks, while also
providing high-value results for better detection in real time.

Currently, there is enormous and growing concern about security in information and
communication technology (ICT) among the scientific community, because any attack or anomaly
in the network can affect quality, interoperability, availability and integrity in many domains,
such as national security, private data storage, social welfare, economic issues, and so on.
Anomaly detection is therefore a broad research area, and many different techniques and
approaches for this purpose have emerged over the years.

Attacks, problems and internal failures, when not detected early, can severely harm an
entire network system. This thesis therefore presents an autonomous profile-based anomaly
detection system using the statistical method Principal Component Analysis (PCADS-AD). This
approach creates a network profile, called the Digital Signature of Network Segment using Flow
Analysis (DSNSF), that denotes the predicted normal behaviour of network traffic activity
through historical data analysis. That digital signature is used as a threshold for volume
anomaly detection, identifying disparities in the normal traffic trend. The proposed system uses
seven traffic flow attributes: bits, packets and number of flows to detect problems, plus source
and destination IP addresses and ports to provide the network administrator with the information
needed to resolve them.

Through evaluation metrics, the addition of a detection approach distinct from the main
proposal, and comparisons with other methods performed in this thesis using real network
traffic data, the results showed good traffic prediction by the DSNSF and encouraging results
for false-alarm generation and detection accuracy.

With the results observed in this thesis, this doctoral work seeks to contribute to the
advance of the state of the art in anomaly detection methods and strategies, aiming to overcome
some of the challenges that emerge from the constant growth in complexity, speed and size of
today’s large-scale networks, while also providing high performance. Moreover, the low
complexity and agility of the proposed system allow it to be applied to real-time detection.
Multimedia delivery in the future internet
The term “Networked Media” implies that all kinds of media including text, image, 3D graphics, audio
and video are produced, distributed, shared, managed and consumed on-line through various networks,
like the Internet, Fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted
with a bewildering range of media, services and applications and of technological innovations concerning
media formats, wireless networks, terminal types and capabilities. And there is little evidence that the pace
of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more
than 100 million users have downloaded at least one (multi)media file, and over 47 million of them do so
regularly, searching in more than 160 exabytes of content. In the near future these numbers are expected
to rise exponentially. Internet content is expected to increase by at least a factor of 6, rising
to more than 990 exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged
that in a near- to mid-term future, the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalized
way, improving citizens’ quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer
in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content as well as
community networks and the use of peer-to-peer (P2P) overlays are expected to generate new models of
interaction and cooperation, and be able to support enhanced perceived quality-of-experience (PQoE) and
innovative applications “on the move”, like virtual collaboration environments, personalised services/
media, virtual sport groups, on-line gaming, edutainment. In this context, the interaction with content
combined with interactive/multimedia search capabilities across distributed repositories, opportunistic P2P
networks and the dynamic adaptation to the characteristics of diverse mobile terminals are expected to
contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects, in Framework Program 6 (FP6)
and Framework Program 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed in this white paper aiming to describe the status, the state-of-the art, the challenges and the way
ahead in the area of Content Aware media delivery platforms.
AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments
This report considers the application of Artificial Intelligence (AI) techniques to
the problem of misuse detection and misuse localisation within telecommunications
environments. A broad survey of techniques is provided, covering, inter alia,
rule-based systems, model-based systems, case-based reasoning, pattern matching,
clustering and feature extraction, artificial neural networks, genetic algorithms,
artificial immune systems, agent-based systems, data mining and a variety of hybrid
approaches. The report then considers the central issue of event correlation, which
is at the heart of many misuse detection and localisation systems. The notion of
being able to infer misuse by the correlation of individual temporally distributed
events within a multiple data stream environment is explored, and a range of
techniques is surveyed, covering model-based approaches, ‘programmed’ AI and
machine-learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches,
but that these suffer from a number of drawbacks, such as the difficulty of
developing and maintaining an appropriate knowledge base, and the lack of ability
to generalise from known misuses to new unseen misuses. Two distinct approaches
are evident. One attempts to encode knowledge of known misuses, typically within
rules, and use this to screen events. This approach cannot generally detect misuses
for which it has not been programmed, i.e. it is prone to issuing false negatives.
The other attempts to ‘learn’ the features of event patterns that constitute normal
behaviour, and, by observing patterns that do not match expected behaviour, detect
when a misuse has occurred. This approach is prone to issuing false positives,
i.e. inferring misuse from innocent patterns of behaviour that the system was not
trained to recognise. Contemporary approaches are seen to favour hybridisation,
often combining detection or localisation mechanisms for both abnormal and normal
behaviour, the former to capture known cases of misuse, the latter to capture
unknown cases. In some systems, these mechanisms even work together to update
each other to increase detection rates and lower false positive rates. It is concluded
that hybridisation offers the most promising future direction, but that a rule or state
based component is likely to remain, being the most natural approach to the correlation
of complex events. The challenge, then, is to mitigate the weaknesses of
canonical programmed systems such that learning, generalisation and adaptation
are more readily facilitated.
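A minimal sketch of the rule-based temporal correlation the report favours, under stated assumptions (the event tuple format, the pattern and the window size are hypothetical):

```python
from collections import deque

def correlate(events, pattern, window):
    """Report a misuse whenever the event types in `pattern` all occur,
    in order, within `window` time units -- the style of rule the report
    identifies as the most natural approach to correlating temporally
    distributed events.  Events are (time, type) pairs."""
    matches = []
    recent = deque()
    for t, etype in sorted(events):
        recent.append((t, etype))
        # Evict events that have fallen out of the correlation window.
        while recent and t - recent[0][0] > window:
            recent.popleft()
        # Greedy in-order match of the pattern against the window.
        idx = 0
        for _, rtype in recent:
            if idx < len(pattern) and rtype == pattern[idx]:
                idx += 1
        if idx == len(pattern):
            matches.append(t)
            recent.clear()  # avoid re-reporting the same episode
    return matches

events = [(1, "login-fail"), (2, "login-fail"), (3, "login-ok"), (9, "priv-esc")]
print(correlate(events, ["login-fail", "login-ok", "priv-esc"], window=10))
# → [9]
```

The knowledge-base maintenance drawback the report notes is visible even here: each new misuse requires hand-writing another pattern, and unseen misuses match no rule at all.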
Anomaly detection in file sharing in enterprise environments
File sharing is the activity of making files (documents, videos, photos) available to other users. Enterprises use file sharing to make files available to their employees or clients. These files can be made available through an internal network, a cloud service (external) or even peer-to-peer (P2P). Most of the time, the files within the file sharing service contain sensitive information that cannot be disclosed. The Equifax data breach exploited a zero-day vulnerability that allowed arbitrary code execution, leading to a huge breach in which the information of over 143 million users was presumed compromised. Ransomware is a type of malware that encrypts computer data (documents, media, ...), making it inaccessible to the user and demanding a ransom for the decryption of the data. This type of malware has been a serious threat to enterprises.
WannaCry and NotPetya are some examples of ransomware that had a huge impact on enterprises, with large sums paid in ransoms; WannaCry, for example, collected more than 142,361.51 in ransoms. In this work, we propose a system that can detect anomalies in file sharing, such as ransomware (WannaCry, NotPetya) and data theft (the Equifax data breach), as well as their propagation. The solution consists of monitoring the enterprise network, creating profiles for each user/machine, a machine learning algorithm to analyse the data, and a mechanism that blocks the affected machine when an anomaly is detected.
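A per-user/machine baseline of the kind the proposal describes might be sketched as follows; the metric (file modifications per minute) and the factor-of-10 threshold are illustrative assumptions, not the system's actual profile or algorithm:

```python
class UserProfile:
    """Per-user/machine baseline: a machine whose file-modification rate
    jumps far above its own historical average (as during ransomware
    encryption) is flagged, and the blocking mechanism would isolate it.
    The threshold factor is an assumption for illustration."""
    def __init__(self, user):
        self.user = user
        self.samples = []

    def observe(self, mods_per_min):
        self.samples.append(mods_per_min)

    def is_anomalous(self, mods_per_min, factor=10):
        if not self.samples:
            return False  # no baseline yet
        baseline = sum(self.samples) / len(self.samples)
        return mods_per_min > factor * baseline

p = UserProfile("alice-laptop")
for rate in [3, 5, 4, 6]:       # normal editing activity
    p.observe(rate)
print(p.is_anomalous(4))        # False
print(p.is_anomalous(400))      # True: burst typical of mass encryption
```

Because the baseline is learned per machine, a rate that is normal for a build server would not trigger alarms there, while the same rate on a laptop would.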
Resilience Strategies for Network Challenge Detection, Identification and Remediation
The enormous growth of the Internet and its use in everyday life make it an attractive target for malicious users. As the network becomes more complex and sophisticated it becomes more vulnerable to attack. There is a pressing need for the future internet to be resilient, manageable and secure. Our research is on distributed challenge detection and is part of the EU ResumeNet project (Resilience and Survivability for Future Networking: Framework, Mechanisms and Experimental Evaluation). It aims to make networks more resilient to a wide range of challenges, including malicious attacks, misconfiguration, faults, and operational overloads. Resilience means the ability of the network to provide an acceptable level of service in the face of significant challenges; it is a superset of commonly used definitions for survivability, dependability, and fault tolerance. Our proposed resilience strategy detects a challenge situation by identifying its occurrence and impact in real time, then initiating appropriate remedial action. Action is taken autonomously to continue operations as far as possible and to mitigate the damage, allowing an acceptable level of service to be maintained. The contribution of our work is the ability to mitigate a challenge as early as possible and rapidly detect its root cause. Our proposed multi-stage, policy-based challenge detection system also identifies both existing and unforeseen challenges; this has been studied and demonstrated with an unknown worm attack. Our multi-stage approach reduces computational complexity compared with the traditional single stage, where one managed object is responsible for all the functions. The approach we propose in this thesis has the flexibility, scalability, adaptability, reproducibility and extensibility needed to assist in the identification and remediation of many future network challenges.
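The multi-stage idea can be sketched as a two-stage pipeline; the stage predicates and flow attributes below are hypothetical, chosen only to show why staging reduces computation:

```python
def multi_stage_detect(flows, stage1, stage2):
    """Two-stage pipeline in the spirit of the thesis's multi-stage,
    policy-based design: a cheap first stage screens every flow, and the
    costlier second stage classifies only the flows the first stage flags,
    instead of one managed object doing all the work on all traffic."""
    results = {}
    for fid, flow in flows.items():
        if stage1(flow):                 # cheap screen over all flows
            results[fid] = stage2(flow)  # expensive analysis, flagged only
    return results

# Hypothetical stages: flag high packet rates, then classify by port spread
# (a wide destination-port spread is a crude worm-scan indicator).
stage1 = lambda f: f["pkts_per_s"] > 1000
stage2 = lambda f: "worm-scan" if f["dst_ports"] > 100 else "overload"

flows = {"f1": {"pkts_per_s": 40,   "dst_ports": 3},
         "f2": {"pkts_per_s": 5000, "dst_ports": 900},
         "f3": {"pkts_per_s": 2000, "dst_ports": 2}}
print(multi_stage_detect(flows, stage1, stage2))
# → {'f2': 'worm-scan', 'f3': 'overload'}
```

The second stage could equally be a policy lookup or a learned classifier; the saving comes from it never seeing the unflagged majority of traffic.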
Mobile Ad-Hoc Networks
Being infrastructure-less and without central administration control, wireless ad-hoc networking is playing an increasingly important role in extending the coverage of traditional wireless infrastructure (cellular networks, wireless LAN, etc.). This book presents state-of-the-art techniques and solutions for wireless ad-hoc networks, focusing on quality-of-service and video communication, routing protocols, and cross-layer design. Some interesting problems in security and delay-tolerant networks are also discussed. The book aims to provide network engineers and researchers with design guidelines for large-scale wireless ad-hoc networks.