Characterization and Analysis of Internet Traffic by Application Type
Past and present network measurement projects have shown that the characteristics and models of Internet traffic are far from traditional assumptions, as they exhibit increasingly complex properties such as self-similarity and long-range dependence (LRD). These properties are harmful to the regularity of the Internet traffic profile as well as to the network's Quality of Service (QoS). These projects have also shown that LRD is caused by the transmission of long flows (called "elephants") that use the TCP protocol. Consequently, many proposals have been made to differentiate the transmission of short flows (called "mice") from that of elephant flows. However, this mice/elephants decomposition of traffic does not provide clear results, since many distinct behaviors are mixed within each of these two classes. This paper therefore proposes an evolution of the mice/elephants decomposition based on the different applications (P2P, streaming, Web, etc.) that generate most elephant flows but probably do not all follow the same traffic model. In addition, a decomposition based on the volume of data generated by these applications is also proposed, in order to characterize the traffic properties of elephant flows more precisely. This analysis provides information for isolating the application classes that have a negative impact on LRD and QoS. The results of this traffic decomposition method will thus offer guidance for better managing these application flows and transferring them more efficiently through the network
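The mice/elephants decomposition described in this abstract can be sketched in a few lines. The 100 kB threshold, the flow records and the application labels below are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch: split flows into "mice" and "elephants" by a byte-volume
# threshold, then group the elephants by application class, as the proposed
# decomposition suggests. All numbers and labels here are invented.

MOUSE_THRESHOLD = 100_000  # bytes; real studies tune this threshold empirically

def decompose(flows):
    """Return (mice, elephants_by_app) from (app, bytes) flow records."""
    mice = []
    elephants_by_app = {}
    for app, size in flows:
        if size < MOUSE_THRESHOLD:
            mice.append((app, size))
        else:
            elephants_by_app.setdefault(app, []).append(size)
    return mice, elephants_by_app

flows = [("web", 12_000), ("p2p", 5_000_000), ("streaming", 2_500_000),
         ("web", 800), ("p2p", 7_200_000)]
mice, elephants = decompose(flows)
print(len(mice))          # 2 short flows
print(sorted(elephants))  # ['p2p', 'streaming']
```

The per-application grouping is what lets the volume-based analysis described above be run separately on each elephant class.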
ENDEAVOUR: A Scalable SDN Architecture For Real-World IXPs.
Innovation in interdomain routing has remained stagnant for over a decade. Recently, IXPs have emerged as economically advantageous interconnection points for reducing path latencies and exchanging ever-increasing traffic volumes among, possibly, hundreds of networks. Given their far-reaching implications on interdomain routing, IXPs are the ideal place to foster network innovation and extend the benefits of SDN to the interdomain level.
In this paper, we present, evaluate, and demonstrate ENDEAVOUR, an SDN platform for IXPs. ENDEAVOUR can be deployed on a multi-hop IXP fabric, supports a large number of use cases, and is highly scalable while avoiding broadcast storms. Our evaluation with real data from one of the largest IXPs demonstrates the benefits and scalability of our solution: ENDEAVOUR requires around 70% fewer rules than alternative SDN solutions thanks to our rule partitioning mechanism. In addition, by providing an open source solution, we invite everyone from the community to experiment with (and improve) our implementation as well as adapt it to new use cases. This work was supported by the European Union's Horizon 2020 research and innovation programme under the ENDEAVOUR project (grant agreement 644960)
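As a toy illustration of why partitioning policy rules across an IXP fabric shrinks per-switch flow tables (this is not the actual ENDEAVOUR mechanism, and the member, rule and switch counts are invented for the sketch):

```python
# Naive deployment: every edge switch carries every member's policy rules.
# Partitioned deployment: each member's rules live only on its own edge
# switch, and the core just forwards. Counts below are illustrative only.

def rules_replicated(members, rules_per_member, edge_switches):
    """Total rules when every edge switch holds all members' rules."""
    return edge_switches * members * rules_per_member

def rules_partitioned(members, rules_per_member, edge_switches):
    """Total rules when each member's rules sit only on its access switch."""
    return members * rules_per_member

total_naive = rules_replicated(100, 50, 8)
total_part = rules_partitioned(100, 50, 8)
print(total_naive, total_part)  # 40000 5000
```

With these made-up numbers the partitioned layout needs 87.5% fewer rules overall; the abstract's measured ~70% figure comes from real IXP data, not from this toy model.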
Investigating adversarial attacks against Random Forest-based network attack detection systems
A significant research effort in cybersecurity currently deals with Machine Learning-based attack detection. It aims at providing autonomous attack detection systems that require fewer human expert resources and are therefore less expensive in time and money. Indeed, such systems are able to autonomously learn about benign and malicious traffic, and to classify further traffic samples accordingly. In this context, hackers have started designing adversarial learning approaches in order to craft new attacks able to evade Machine Learning-based detection systems. The work presented in this paper aims at showing how easy it is to modify existing attacks to make them evade Machine Learning-based attack detectors. The Random Forest algorithm has been selected for this work as it is widely regarded as one of the best Machine Learning algorithms for cybersecurity, and it provides information on how a decision is made. Indeed, the analysis of the related Random Forest trees helps explain the limits of this Machine Learning algorithm, and gives some information that could be helpful for making attack detection somewhat explainable. Several other Machine Learning algorithms such as SVM, kNN and LSTM have also been selected for evaluating their ability to detect the adversarial attack presented in this paper
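The white-box evasion idea described in this abstract can be illustrated with a hand-rolled "forest" of threshold tests: an adversary who can inspect the decision thresholds nudges the attack's features just under them. The feature names, thresholds and voting rule below are invented for the sketch; the paper works with real Random Forest models:

```python
# Toy "forest": each (feature, threshold) pair votes "attack" if the sample's
# feature value exceeds the threshold; majority vote decides. This stands in
# for a trained Random Forest purely for illustration.

FOREST = [("pkts_per_sec", 1000), ("syn_ratio", 0.8), ("bytes_per_pkt", 60)]

def predict(sample):
    votes = sum(1 for feat, thr in FOREST if sample[feat] > thr)
    return "attack" if votes >= 2 else "benign"  # majority of 3 "trees"

def evade(sample):
    """White-box perturbation: lower each flagged feature to its threshold."""
    crafted = dict(sample)
    for feat, thr in FOREST:
        if crafted[feat] > thr:
            crafted[feat] = thr  # sit exactly on the boundary -> no vote
    return crafted

flood = {"pkts_per_sec": 5000, "syn_ratio": 0.95, "bytes_per_pkt": 40}
print(predict(flood))         # attack
print(predict(evade(flood)))  # benign
```

The catch, as the abstract notes for real attacks, is that the perturbed attack must remain functional; this sketch only shows the classifier-side evasion.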
A Near Real-Time Algorithm for Autonomous Identification and Characterization of Honeypot Attacks
Monitoring communication networks and their traffic is of essential importance for estimating the risk in the Internet, and therefore for designing suitable protection systems for computer networks. Network and traffic analysis can be done thanks to measurement devices or honeypots. However, analyzing the huge amount of gathered data and characterizing the anomalies and attacks contained in these traces remain complex and time-consuming tasks, done by network and security experts using poorly automated tools, and are consequently slow and costly. In this paper, we present an unsupervised algorithm, called UNADA (Unsupervised Network Anomaly Detection Algorithm), for the identification and characterization of security-related anomalies and attacks occurring in honeypots. This automated method does not need any attack signature database, learning phase, or labeled traffic. This corresponds to a major step towards autonomous security systems. This paper also shows how it is possible, from the anomaly characterization results, to infer filtering rules that could serve to automatically configure network routers, switches or firewalls. The performance of UNADA in terms of attack identification accuracy is evaluated using honeypot traffic traces gathered on the honeypot network of the University of Maryland. The time latency for producing such accurate results is also presented, especially showing how the parallelization capabilities of the algorithm help reduce this latency
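A minimal sketch of the pipeline this abstract describes: score traffic sources without any labels, flag the outlier, and derive a candidate filtering rule from its features. The distance-to-median scoring below stands in for UNADA's actual sub-space clustering, and all IPs, features and thresholds are illustrative assumptions:

```python
# Unsupervised outlier scoring over per-source traffic features, followed by
# rule inference. No signatures, training phase, or labels are used.
from statistics import median

def outlier_scores(points):
    """Score each point by its L1 distance to the coordinate-wise median."""
    med = [median(dim) for dim in zip(*points)]
    return [sum(abs(x - m) for x, m in zip(p, med)) for p in points]

# per-source features: (flows/sec, fraction of SYN packets) -- made up
sources = {"10.0.0.1": (3.0, 0.10), "10.0.0.2": (2.5, 0.20),
           "10.0.0.3": (450.0, 0.98), "10.0.0.4": (3.2, 0.15)}

ips = list(sources)
scores = outlier_scores([sources[ip] for ip in ips])
worst = ips[scores.index(max(scores))]
print(worst)                                   # 10.0.0.3
print(f"drop src {worst} if syn_ratio > 0.9")  # candidate firewall rule
```

The last line shows the step the abstract highlights: turning a characterized anomaly into a rule a router or firewall could apply automatically.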
Unsupervised Classification and Characterization of Honeypot Attacks
Monitoring communication networks and their traffic is of essential importance for estimating the risk in the Internet, and therefore for designing suitable protection systems for computer networks. Network and traffic analysis can be done thanks to measurement devices or honeypots. However, analyzing the huge amount of gathered data and characterizing the anomalies and attacks contained in these traces remain complex and time-consuming tasks, done by network and security experts using poorly automated tools, and are consequently slow and costly. In this paper, we present an unsupervised method for the classification and characterization of security-related anomalies and attacks occurring in honeypots. This method, automated as far as possible, does not need any attack signature database, learning phase, or labeled traffic. This corresponds to a major step towards autonomous security systems. This paper also shows how it is possible, from the anomaly characterization results, to infer filtering rules that could serve to automatically configure network routers, switches or firewalls
Design and Formalization of a Cooperative Videoconferencing Application. Application and Extension to Tele-Training
Recent progress in computer science and networking makes it possible to support cooperative distributed multimedia applications. The design of such applications raises several issues. First, multimedia data is characterized by its quality of service in terms of reliability, throughput, temporal synchronization, etc. Multimedia applications have to ensure the quality of service of each of the media, the key point being to enforce intra- and inter-stream synchronization constraints. Also, the communication system has to adapt itself to the constraints of the transported media and to provide a suitable service in terms of throughput, reliability and end-to-end delay. Finally, users increasingly need to share applications jointly, so cooperation mechanisms have to be introduced. In this thesis, mechanisms have been proposed to address these issues and have been used in a videoconferencing application. Multimedia synchronization is performed by a synchronization engine that uses advanced operating system mechanisms and follows a scenario modeled by a time-stream Petri net. Communication uses a partial-order transport that adapts itself to the application constraints in terms of throughput and reliability, and allows the application's performance to be improved. This videoconferencing application has then been extended to take workgroups into account and provides, in particular, control over participants joining and leaving the group and over interactions within it. A general architecture ensuring these temporal and cooperative constraints has been proposed and implemented. Finally, these mechanisms are shown to be applicable to a professional tele-training application in aeronautics
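The inter-stream synchronization constraint mentioned in this abstract (e.g. lip-sync between audio and video) can be illustrated as a bounded-skew check over presentation timestamps. The 80 ms bound and the timestamps below are assumptions for the sketch, not values from the thesis:

```python
# Check that paired audio/video presentation units stay within a skew bound,
# a simplified stand-in for the inter-stream constraints a time-stream Petri
# net scenario would enforce. All timestamps are in milliseconds.

MAX_SKEW_MS = 80  # assumed lip-sync tolerance for this sketch

def in_sync(audio_ts, video_ts):
    """True when every presented audio/video pair respects the skew bound."""
    return all(abs(a - v) <= MAX_SKEW_MS for a, v in zip(audio_ts, video_ts))

audio = [0, 40, 80, 120]
video = [5, 45, 170, 160]  # third video frame arrives 90 ms late
print(in_sync(audio, video))             # False
print(in_sync(audio, [0, 40, 80, 120]))  # True
```

In the architecture described above, a violation like the third frame here is what the synchronization engine would react to, e.g. by dropping or delaying units to restore the scenario's timing.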
Contribution of Internet Measurement to Network Engineering
The growth and evolution of the Internet over the last two decades have significantly increased the complexity of the communication techniques used in this network, as well as the characteristics of its traffic. Thus, while the Internet is expected to provide services with guaranteed quality, poor knowledge of its characteristics and its traffic has led to the failure of all proposals made for this purpose. My work over the last six years has proposed using Internet measurement to solve this issue, and shows how this "science of measurement", new to the Internet, provides essential information for the technical evolution of networks. This work thus essentially deals with the characterization and analysis of the Internet and its traffic, whose results led to the design of a new Internet architecture based on a global monitoring system that significantly improves the Internet's performance and quality of service. Note also that the traffic characterization results make it possible to classify traffic anomalies into legitimate (such as flash crowds) and illegitimate (attacks) ones, and thus open new directions for the security of communication networks
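The legitimate/illegitimate anomaly split mentioned above is often approached through source diversity: a flash crowd comes from many distinct, lightly-loaded sources, while many attacks concentrate traffic on few sources. The heuristic, feature choice and thresholds below are illustrative assumptions, not the classifier from this work:

```python
# Toy heuristic: many distinct sources each contributing few flows suggests a
# legitimate flash crowd; few sources carrying a huge volume suggests an
# attack. Thresholds (1000 sources, 10 flows/source) are invented.

def classify_anomaly(src_ips, total_flows):
    distinct = len(set(src_ips))
    flows_per_src = total_flows / max(distinct, 1)
    return "flash crowd" if distinct > 1000 and flows_per_src < 10 else "attack"

print(classify_anomaly(range(50_000), 200_000))  # flash crowd
print(classify_anomaly([1, 2, 3], 200_000))      # attack
```

Real classifiers of this kind combine several such features, since spoofed attacks can imitate source diversity; this sketch only conveys the intuition.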