Scalability of Information Centric Networking Using Mediated Topology Management
Information centric networking is a new concept that places emphasis on the information items themselves rather than on where the information items are stored. Consequently, routing decisions can be made based on the information items rather than simply on destination addresses. There are a number of models proposed for information centric networking, and it is important that these models are investigated for their scalability if we are to move from early prototypes towards proposing that these models be used for networks operating at the scale of the current Internet. This paper investigates the scalability of an ICN system that uses mediation between information providers and information consumers via a publish/subscribe delivery mechanism. The scalability is investigated by extrapolating current IP traffic models for a typical national-scale network provider in the UK to estimate mediation workload. The investigation demonstrates that the mediation workload for route determination is on a scale that is comparable to, or less than, that of current IP routing, while using a forwarding mechanism with considerably smaller tables than current IP routing tables. Additionally, the work shows that this can be achieved using a security mechanism that mitigates maliciously injected packets, thus preventing attacks such as denial of service that are common in the current IP infrastructure.
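The mediation idea described above can be illustrated with a minimal sketch: a mediator matches subscriptions to published information items by name rather than by host address, producing provider-to-consumer routes. The `Mediator` class, method names, and item names here are illustrative assumptions, not the paper's actual design.

```python
# Minimal sketch of mediated publish/subscribe route determination.
# The mediator keys all state on information item names, not addresses.
class Mediator:
    def __init__(self):
        self.subscribers = {}   # item name -> set of consumer ids
        self.publishers = {}    # item name -> provider id

    def subscribe(self, consumer, item):
        self.subscribers.setdefault(item, set()).add(consumer)
        return self.match(item)

    def publish(self, provider, item):
        self.publishers[item] = provider
        return self.match(item)

    def match(self, item):
        # Route determination: pair the item's provider with its consumers.
        if item in self.publishers and self.subscribers.get(item):
            return [(self.publishers[item], c)
                    for c in sorted(self.subscribers[item])]
        return []

m = Mediator()
m.subscribe("consumerA", "weather/uk")
routes = m.publish("providerX", "weather/uk")
print(routes)  # [('providerX', 'consumerA')]
```

Because forwarding state is created only for matched items, the mediator's tables grow with active subscriptions rather than with the full address space, which is the intuition behind the smaller forwarding tables claimed in the abstract.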
A Cyber Threat Intelligence Sharing Scheme based on Federated Learning for Network Intrusion Detection
The use of Machine Learning (ML) for the detection of network attacks has been effective when designed and evaluated within a single organisation. However, it has been very challenging to design an ML-based detection system by utilising heterogeneous network data samples originating from several sources. This is mainly due to privacy concerns and the lack of a universal format of datasets. In this paper, we propose a collaborative federated learning scheme to address these issues. The proposed framework allows multiple organisations to join forces in the design, training, and evaluation of a robust ML-based network intrusion detection system. The threat intelligence scheme relies on two critical aspects for its application: first, the availability of network data traffic in a common format to allow for the extraction of meaningful patterns across data sources; second, the adoption of a federated learning mechanism to avoid the necessity of sharing sensitive users' information between organisations. As a result, each organisation benefits from other organisations' cyber threat intelligence while maintaining the privacy of its data internally. The model is trained locally and only the updated weights are shared with the remaining participants in the federated averaging process. The framework has been designed and evaluated in this paper using two key datasets in a NetFlow format, known as NF-UNSW-NB15-v2 and NF-BoT-IoT-v2. Two other common scenarios are considered in the evaluation process: a centralised training method, where the local data samples are shared with other organisations, and a localised training method, where no threat intelligence is shared. The results demonstrate the efficiency and effectiveness of the proposed framework by designing a universal ML model that effectively classifies benign and intrusive traffic originating from multiple organisations without the need for local data exchange.
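The core mechanism in the abstract above, training locally and sharing only updated weights for federated averaging, can be sketched as follows. This is a generic federated averaging loop over a simple logistic regression model, not the paper's actual architecture; the organisations' data, the learning rate, and the round count are illustrative assumptions.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One organisation trains on its private data (logistic regression,
    plain gradient descent). Raw data never leaves this function."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))      # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)     # gradient of the log-loss
        w -= lr * grad
    return w

def federated_average(local_weights, sizes):
    """Aggregator computes a weighted mean of the locally updated weights,
    weighting each participant by its number of samples."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, sizes))

# Two hypothetical organisations with private datasets (never exchanged)
rng = np.random.default_rng(0)
X1, y1 = rng.normal(size=(100, 3)), rng.integers(0, 2, 100)
X2, y2 = rng.normal(size=(50, 3)), rng.integers(0, 2, 50)

global_w = np.zeros(3)
for _ in range(10):  # communication rounds
    w1 = local_update(global_w, X1, y1)
    w2 = local_update(global_w, X2, y2)
    global_w = federated_average([w1, w2], [len(y1), len(y2)])
```

Only `w1` and `w2` cross organisational boundaries in each round, which is what preserves data privacy while still letting both participants shape the shared model.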
Protocols for Improving the Performance of Web Applications
The HTTP protocol has been revised a couple of times, most recently more than a decade ago. However, the characteristics of today's Web, the requirements of its users, and the scale of its adoption have pushed the resources provided through the protocol to a point where some limitations inherent to its original design have become apparent. For this reason the IETF (Internet Engineering Task Force), within the HTTPbis Working Group, is analysing fundamental modifications, adjustments, and/or improvements with a view towards a future HTTP 2.0 standard.
A recent proposal in this direction is the SPDY protocol, whose primary objective is to improve the performance of the Web service; today it has become the basis on which that WG is working. This research project proposes to study the aforementioned deficiencies of HTTP and the available improvements, among which is SPDY, and to produce an academic study of some of the conditions that optimise its deployment, both from a technical and general point of view and also involving an analysis of geographic conditions. Finally, software tools will be developed that facilitate these analyses and serve as support tools for an eventual migration to SPDY or HTTP/2.0. Topic: Architecture, Networks and Operating Systems. Red de Universidades con Carreras en Informática (RedUNCI)
Measurement and Analysis of HTTP Traffic
The usage of the Internet is rapidly increasing, and a large part of Internet traffic is generated by the World Wide Web (WWW) and its associated protocol, the HyperText Transfer Protocol (HTTP). Several important parameters that affect the performance of the WWW are bandwidth, scalability and latency. To tackle these parameters and to improve the overall performance of the system, it is important to understand and characterise the application-level characteristics. This article reports on the measurement and analysis of HTTP traffic collected on the student access network at the Blekinge Institute of Technology in Karlskrona, Sweden. The analysis is done on various HTTP traffic parameters, e.g., inter-session timings, inter-arrival timings, request message sizes, response codes and number of transactions. The reported results can be useful for building synthetic workloads for simulation and benchmarking purposes.
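The traffic parameters named in the abstract above, such as inter-arrival timings, response codes and message sizes, can be derived from captured request records in a few lines. This is a generic sketch over hypothetical log records, not the study's actual measurement pipeline; the timestamps, codes and sizes shown are invented for illustration.

```python
from collections import Counter

# Hypothetical captured HTTP requests: (timestamp_seconds, response_code, size_bytes)
requests = [
    (0.00, 200, 5120),
    (0.35, 200, 812),
    (0.41, 304, 0),
    (2.10, 404, 430),
    (2.12, 200, 15300),
]

# Inter-arrival timings: gaps between consecutive request timestamps
timestamps = [t for t, _, _ in requests]
inter_arrivals = [b - a for a, b in zip(timestamps, timestamps[1:])]

# Response code distribution and mean message size
code_counts = Counter(code for _, code, _ in requests)
mean_size = sum(size for _, _, size in requests) / len(requests)

print([round(d, 2) for d in inter_arrivals])  # [0.35, 0.06, 1.69, 0.02]
print(code_counts[200])                       # 3
```

Fitting distributions to quantities like `inter_arrivals` and `mean_size` is the usual route from such measurements to the synthetic workloads for simulation and benchmarking that the abstract mentions.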